* Moah selftests
@ 2017-02-02  9:08 Chris Wilson
  2017-02-02  9:08 ` [PATCH 01/46] drm: Provide a driver hook for drm_dev_release() Chris Wilson
                   ` (47 more replies)
  0 siblings, 48 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Fewer wide-ranging review comments, lots more r-b; we seem to be settling
on a compromise...
-Chris


* [PATCH 01/46] drm: Provide a driver hook for drm_dev_release()
  2017-02-02  9:08 Moah selftests Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:24   ` Laurent Pinchart
  2017-02-02  9:36   ` [PATCH v6] " Chris Wilson
  2017-02-02  9:08 ` [PATCH 02/46] drm/i915: Split device release from unload Chris Wilson
                   ` (46 subsequent siblings)
  47 siblings, 2 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Laurent Pinchart

Some state is coupled to the device lifetime outside of the
load/unload timeframe and requires teardown during the final unreference
from drm_dev_release(). For example, dmabufs hold both a device and
module reference and may live longer than expected (i.e. beyond the
current pattern of the driver tearing down its state and then dropping
its reference to the drm device), yet still touch driver-private state
when destroyed.
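
For illustration only, a minimal sketch of how a driver could embed the
drm_device at an arbitrary offset and use the new hook (the foo_* names
are hypothetical; patch 02 does the equivalent for i915):

	struct foo_device {
		int hw_state;              /* driver state may come first... */
		struct drm_device drm;     /* ...so drm_device sits at a non-zero offset */
	};

	static void foo_release(struct drm_device *drm)
	{
		struct foo_device *foo = container_of(drm, struct foo_device, drm);

		/* tear down driver state that must outlive unload (e.g. for dmabuf) */
		drm_dev_fini(drm);         /* converse of drm_dev_init() */
		kfree(foo);                /* the driver, not the core, owns the allocation */
	}

	static struct drm_driver foo_driver = {
		.release = foo_release,
		/* ... */
	};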

v2: Export drm_dev_fini() and move the responsibility for finalizing the
drm_device and freeing it to the release callback. (If no callback is
provided, the core will call drm_dev_fini() and kfree(dev) as before.)
v3: Remember to add drm_dev_fini() to drm_drv.h
v4: Tidy language for kerneldoc
v5: Cross reference from drm_dev_init() to note that driver->release()
allows for arbitrary embedding.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_drv.c | 65 ++++++++++++++++++++++++++++++++---------------
 include/drm/drm_drv.h     | 13 ++++++++++
 2 files changed, 58 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index a8ce3179c07c..fe611d601916 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -465,7 +465,10 @@ static void drm_fs_inode_free(struct inode *inode)
  * that do embed &struct drm_device it must be placed first in the overall
  * structure, and the overall structure must be allocated using kmalloc(): The
  * drm core's release function unconditionally calls kfree() on the @dev pointer
- * when the final reference is released.
+ * when the final reference is released. To override this behaviour, and so
+ * allow embedding of the drm_device inside the driver's device struct at an
+ * arbitrary offset, you must supply a driver->release() callback and control
+ * the finalization explicitly.
  *
  * RETURNS:
  * 0 on success, or error code on failure.
@@ -553,6 +556,41 @@ int drm_dev_init(struct drm_device *dev,
 EXPORT_SYMBOL(drm_dev_init);
 
 /**
+ * drm_dev_fini - Finalize a dead DRM device
+ * @dev: DRM device
+ *
+ * Finalize a dead DRM device. This is the converse to drm_dev_init() and
+ * frees up all state allocated by it. All driver state should be finalized
+ * first. Note that this function does not free the @dev, that is left to the
+ * caller.
+ *
+ * The ref-count of @dev must be zero, and drm_dev_fini() should only be called
+ * from a drm_driver->release() callback.
+ */
+void drm_dev_fini(struct drm_device *dev)
+{
+	drm_vblank_cleanup(dev);
+
+	if (drm_core_check_feature(dev, DRIVER_GEM))
+		drm_gem_destroy(dev);
+
+	drm_legacy_ctxbitmap_cleanup(dev);
+	drm_ht_remove(&dev->map_hash);
+	drm_fs_inode_free(dev->anon_inode);
+
+	drm_minor_free(dev, DRM_MINOR_PRIMARY);
+	drm_minor_free(dev, DRM_MINOR_RENDER);
+	drm_minor_free(dev, DRM_MINOR_CONTROL);
+
+	mutex_destroy(&dev->master_mutex);
+	mutex_destroy(&dev->ctxlist_mutex);
+	mutex_destroy(&dev->filelist_mutex);
+	mutex_destroy(&dev->struct_mutex);
+	kfree(dev->unique);
+}
+EXPORT_SYMBOL(drm_dev_fini);
+
+/**
  * drm_dev_alloc - Allocate new DRM device
  * @driver: DRM driver to allocate device for
  * @parent: Parent device object
@@ -598,25 +636,12 @@ static void drm_dev_release(struct kref *ref)
 {
 	struct drm_device *dev = container_of(ref, struct drm_device, ref);
 
-	drm_vblank_cleanup(dev);
-
-	if (drm_core_check_feature(dev, DRIVER_GEM))
-		drm_gem_destroy(dev);
-
-	drm_legacy_ctxbitmap_cleanup(dev);
-	drm_ht_remove(&dev->map_hash);
-	drm_fs_inode_free(dev->anon_inode);
-
-	drm_minor_free(dev, DRM_MINOR_PRIMARY);
-	drm_minor_free(dev, DRM_MINOR_RENDER);
-	drm_minor_free(dev, DRM_MINOR_CONTROL);
-
-	mutex_destroy(&dev->master_mutex);
-	mutex_destroy(&dev->ctxlist_mutex);
-	mutex_destroy(&dev->filelist_mutex);
-	mutex_destroy(&dev->struct_mutex);
-	kfree(dev->unique);
-	kfree(dev);
+	if (dev->driver->release) {
+		dev->driver->release(dev);
+	} else {
+		drm_dev_fini(dev);
+		kfree(dev);
+	}
 }
 
 /**
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 732e85652d1e..d0d2fa83d06c 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -102,6 +102,17 @@ struct drm_driver {
 	 *
 	 */
 	void (*unload) (struct drm_device *);
+
+	/**
+	 * @release:
+	 *
+	 * Optional callback for destroying device state after the final
+	 * reference is released, i.e. the device is being destroyed. Drivers
+	 * using this callback are responsible for calling drm_dev_fini()
+	 * to finalize the device and then freeing the struct themselves.
+	 */
+	void (*release) (struct drm_device *);
+
 	int (*set_busid)(struct drm_device *dev, struct drm_master *master);
 
 	/**
@@ -437,6 +448,8 @@ extern unsigned int drm_debug;
 int drm_dev_init(struct drm_device *dev,
 		 struct drm_driver *driver,
 		 struct device *parent);
+void drm_dev_fini(struct drm_device *dev);
+
 struct drm_device *drm_dev_alloc(struct drm_driver *driver,
 				 struct device *parent);
 int drm_dev_register(struct drm_device *dev, unsigned long flags);
-- 
2.11.0


* [PATCH 02/46] drm/i915: Split device release from unload
  2017-02-02  9:08 Moah selftests Chris Wilson
  2017-02-02  9:08 ` [PATCH 01/46] drm: Provide a driver hook for drm_dev_release() Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-08 13:41   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 03/46] drm/i915: Unbind any residual objects/vma from the Global GTT on shutdown Chris Wilson
                   ` (45 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

We may need to keep our memory management alive after we have unloaded
the physical pci device. For example, if we have exported an object via
dmabuf, the dmabuf keeps the drm device around, but the pci device may
be removed before the dmabuf itself is released. At that point use of
the pci hardware is revoked, yet the memory and object management must
persist to service the dmabuf.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 267d5f8c49e1..8bba6c4eb4ed 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1299,7 +1299,8 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
 	pci_disable_device(pdev);
 out_free_priv:
 	i915_load_error(dev_priv, "Device initialization failed (%d)\n", ret);
-	drm_dev_unref(&dev_priv->drm);
+	drm_dev_fini(&dev_priv->drm);
+	kfree(dev_priv);
 	return ret;
 }
 
@@ -1358,8 +1359,16 @@ void i915_driver_unload(struct drm_device *dev)
 	i915_driver_cleanup_mmio(dev_priv);
 
 	intel_display_power_put(dev_priv, POWER_DOMAIN_INIT);
+}
+
+static void i915_driver_release(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
 
 	i915_driver_cleanup_early(dev_priv);
+	drm_dev_fini(&dev_priv->drm);
+
+	kfree(dev_priv);
 }
 
 static int i915_driver_open(struct drm_device *dev, struct drm_file *file)
@@ -2601,6 +2610,7 @@ static struct drm_driver driver = {
 	.driver_features =
 	    DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_GEM | DRIVER_PRIME |
 	    DRIVER_RENDER | DRIVER_MODESET,
+	.release = i915_driver_release,
 	.open = i915_driver_open,
 	.lastclose = i915_driver_lastclose,
 	.preclose = i915_driver_preclose,
-- 
2.11.0


* [PATCH 03/46] drm/i915: Unbind any residual objects/vma from the Global GTT on shutdown
  2017-02-02  9:08 Moah selftests Chris Wilson
  2017-02-02  9:08 ` [PATCH 01/46] drm: Provide a driver hook for drm_dev_release() Chris Wilson
  2017-02-02  9:08 ` [PATCH 02/46] drm/i915: Split device release from unload Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-08 13:36   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 04/46] drm/i915: Flush the freed object queue on device release Chris Wilson
                   ` (44 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

We may unload the PCI device before all users (such as dma-buf) are
completely shut down. This may leave VMAs in the global GTT which we
want to revoke, whilst keeping the objects themselves around to service
the dma-buf.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index b7bcb1e62ce4..65425e71f3a5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2812,6 +2812,15 @@ int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
 void i915_ggtt_cleanup_hw(struct drm_i915_private *dev_priv)
 {
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
+	struct i915_vma *vma, *vn;
+
+	ggtt->base.closed = true;
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	WARN_ON(!list_empty(&ggtt->base.active_list));
+	list_for_each_entry_safe(vma, vn, &ggtt->base.inactive_list, vm_link)
+		WARN_ON(i915_vma_unbind(vma));
+	mutex_unlock(&dev_priv->drm.struct_mutex);
 
 	if (dev_priv->mm.aliasing_ppgtt) {
 		struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
-- 
2.11.0


* [PATCH 04/46] drm/i915: Flush the freed object queue on device release
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (2 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 03/46] drm/i915: Unbind any residual objects/vma from the Global GTT on shutdown Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-08 13:38   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 05/46] drm/i915: Provide a hook for selftests Chris Wilson
                   ` (43 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

As dmabufs may live beyond the PCI device removal, we need to flush the
freed object worker on device release, and include a warning in case
there is a leak.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6c7a83bbd068..88065fd55147 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4625,7 +4625,9 @@ i915_gem_load_init(struct drm_i915_private *dev_priv)
 
 void i915_gem_load_cleanup(struct drm_i915_private *dev_priv)
 {
+	i915_gem_drain_freed_objects(dev_priv);
 	WARN_ON(!llist_empty(&dev_priv->mm.free_list));
+	WARN_ON(dev_priv->mm.object_count);
 
 	mutex_lock(&dev_priv->drm.struct_mutex);
 	i915_gem_timeline_fini(&dev_priv->gt.global_timeline);
-- 
2.11.0


* [PATCH 05/46] drm/i915: Provide a hook for selftests
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (3 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 04/46] drm/i915: Flush the freed object queue on device release Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:11   ` Chris Wilson
  2017-02-10 10:19   ` Tvrtko Ursulin
  2017-02-02  9:08 ` [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
                   ` (42 subsequent siblings)
  47 siblings, 2 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Some pieces of code are independent of hardware but are very tricky to
exercise through the normal userspace ABI or via debugfs hooks. Being
able to create mock unit tests and execute them through CI is vital.
Start by adding a central point where we can execute unit tests and
a parameter to enable them. This is disabled by default as the
expectation is that these tests will occasionally explode.

To facilitate integration with igt, any parameter beginning with
i915.igt__ is interpreted as a subtest that can be executed
independently via igt/drv_selftest.

Two classes of selftests are recognised: mock unit tests and live
integration tests. Mock unit tests are run as soon as the module is
loaded, before the device is probed. At that point no driver is
instantiated and all hw interactions must be "mocked". This is very
useful for writing universal tests to exercise code that is not
typically run on a broad range of architectures. Alternatively, you can
hook into the live selftests, which run once the device has been
instantiated - hw interactions are real.
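
As a sketch of the intended workflow (the "example" test below is
hypothetical and not part of this series), a new mock test is declared
in selftests/i915_mock_selftests.h and defined alongside the code it
exercises:

	/* selftests/i915_mock_selftests.h */
	selftest(example, i915_example_mock_selftests)

	/* compiled only when CONFIG_DRM_I915_SELFTEST=y */
	static int igt_example(void *unused)
	{
		pr_info("example subtest ran\n");
		return 0; /* zero on success; any error aborts the run */
	}

	int i915_example_mock_selftests(void)
	{
		static const struct i915_subtest tests[] = {
			SUBTEST(igt_example),
		};

		return i915_subtests(tests, NULL);
	}

Each such entry also gains an i915.igt__* module parameter, which is how
igt/drv_selftest selects individual subtests.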

v2: Add a macro for compiling conditional code for mock objects inside
real objects.
v3: Differentiate between mock unit tests and late integration test.
v4: List the tests in natural order, use igt to sort after modparam.
v5: s/late/live/
v6: s/unsigned long/unsigned int/
v7: Use igt_ prefixes for long helpers.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1
---
 drivers/gpu/drm/i915/Kconfig.debug                 |  16 ++
 drivers/gpu/drm/i915/Makefile                      |   3 +
 drivers/gpu/drm/i915/i915_pci.c                    |  31 ++-
 drivers/gpu/drm/i915/i915_selftest.h               | 102 +++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  11 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  11 +
 drivers/gpu/drm/i915/selftests/i915_random.c       |  63 ++++++
 drivers/gpu/drm/i915/selftests/i915_random.h       |  50 +++++
 drivers/gpu/drm/i915/selftests/i915_selftest.c     | 250 +++++++++++++++++++++
 tools/testing/selftests/drivers/gpu/i915.sh        |   1 +
 10 files changed, 531 insertions(+), 7 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_selftest.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_live_selftests.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_random.c
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_random.h
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_selftest.c

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 598551dbf62c..a4d8cfd77c3c 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -26,6 +26,7 @@ config DRM_I915_DEBUG
         select DRM_DEBUG_MM if DRM=y
 	select DRM_DEBUG_MM_SELFTEST
 	select DRM_I915_SW_FENCE_DEBUG_OBJECTS
+	select DRM_I915_SELFTEST
         default n
         help
           Choose this option to turn on extra driver debugging that may affect
@@ -59,3 +60,18 @@ config DRM_I915_SW_FENCE_DEBUG_OBJECTS
           Recommended for driver developers only.
 
           If in doubt, say "N".
+
+config DRM_I915_SELFTEST
+	bool "Enable selftests upon driver load"
+	depends on DRM_I915
+	default n
+	select PRIME_NUMBERS
+	help
+	  Choose this option to allow the driver to perform selftests upon
+	  loading; also requires the i915.selftest=1 module parameter. To
+	  exit the module after running the selftests (i.e. to prevent normal
+	  module initialisation afterwards) use i915.selftest=-1.
+
+	  Recommended for driver developers only.
+
+	  If in doubt, say "N".
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index c62ab45683c0..bac62fd5b438 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -116,6 +116,9 @@ i915-y += dvo_ch7017.o \
 
 # Post-mortem debug and GPU hang state capture
 i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
+i915-$(CONFIG_DRM_I915_SELFTEST) += \
+	selftests/i915_random.o \
+	selftests/i915_selftest.o
 
 # virtual gpu code
 i915-y += i915_vgpu.o
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index df2051b41fa1..732101ed57fb 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -27,6 +27,7 @@
 #include <linux/vga_switcheroo.h>
 
 #include "i915_drv.h"
+#include "i915_selftest.h"
 
 #define GEN_DEFAULT_PIPEOFFSETS \
 	.pipe_offsets = { PIPE_A_OFFSET, PIPE_B_OFFSET, \
@@ -473,10 +474,19 @@ static const struct pci_device_id pciidlist[] = {
 };
 MODULE_DEVICE_TABLE(pci, pciidlist);
 
+static void i915_pci_remove(struct pci_dev *pdev)
+{
+	struct drm_device *dev = pci_get_drvdata(pdev);
+
+	i915_driver_unload(dev);
+	drm_dev_unref(dev);
+}
+
 static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct intel_device_info *intel_info =
 		(struct intel_device_info *) ent->driver_data;
+	int err;
 
 	if (IS_ALPHA_SUPPORT(intel_info) && !i915.alpha_support) {
 		DRM_INFO("The driver support for your hardware in this kernel version is alpha quality\n"
@@ -500,15 +510,17 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (vga_switcheroo_client_probe_defer(pdev))
 		return -EPROBE_DEFER;
 
-	return i915_driver_load(pdev, ent);
-}
+	err = i915_driver_load(pdev, ent);
+	if (err)
+		return err;
 
-static void i915_pci_remove(struct pci_dev *pdev)
-{
-	struct drm_device *dev = pci_get_drvdata(pdev);
+	err = i915_live_selftests(pdev);
+	if (err) {
+		i915_pci_remove(pdev);
+		return err > 0 ? -ENOTTY : err;
+	}
 
-	i915_driver_unload(dev);
-	drm_dev_unref(dev);
+	return 0;
 }
 
 static struct pci_driver i915_pci_driver = {
@@ -522,6 +534,11 @@ static struct pci_driver i915_pci_driver = {
 static int __init i915_init(void)
 {
 	bool use_kms = true;
+	int err;
+
+	err = i915_mock_selftests();
+	if (err)
+		return err > 0 ? 0 : err;
 
 	/*
 	 * Enable KMS by default, unless explicitly overriden by
diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
new file mode 100644
index 000000000000..8b5994caa301
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_selftest.h
@@ -0,0 +1,102 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef __I915_SELFTEST_H__
+#define __I915_SELFTEST_H__
+
+struct pci_dev;
+struct drm_i915_private;
+
+struct i915_selftest {
+	unsigned long timeout_jiffies;
+	unsigned int timeout_ms;
+	unsigned int random_seed;
+	int mock;
+	int live;
+};
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+extern struct i915_selftest i915_selftest;
+
+int i915_mock_selftests(void);
+int i915_live_selftests(struct pci_dev *pdev);
+
+/* We extract the function declarations from i915_mock_selftests.h and
+ * i915_live_selftests.h Add your unit test declarations there!
+ *
+ * Mock unit tests are run very early upon module load, before the driver
+ * is probed. All hardware interactions, as well as other subsystems, must
+ * be "mocked".
+ *
+ * Live unit tests are run after the driver is loaded - all hardware
+ * interactions are real.
+ */
+#define selftest(name, func) int func(void);
+#include "selftests/i915_mock_selftests.h"
+#undef selftest
+#define selftest(name, func) int func(struct drm_i915_private *i915);
+#include "selftests/i915_live_selftests.h"
+#undef selftest
+
+struct i915_subtest {
+	int (*func)(void *data);
+	const char *name;
+};
+
+int __i915_subtests(const char *caller,
+		    const struct i915_subtest *st,
+		    unsigned int count,
+		    void *data);
+#define i915_subtests(T, data) \
+	__i915_subtests(__func__, T, ARRAY_SIZE(T), data)
+
+#define SUBTEST(x) { x, #x }
+
+#define I915_SELFTEST_DECLARE(x) x
+#define I915_SELFTEST_ONLY(x) unlikely(x)
+
+#else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
+
+static inline int i915_mock_selftests(void) { return 0; }
+static inline int i915_live_selftests(struct pci_dev *pdev) { return 0; }
+
+#define I915_SELFTEST_DECLARE(x)
+#define I915_SELFTEST_ONLY(x) 0
+
+#endif
+
+/* Using the i915_selftest_ prefix becomes a little unwieldy with the helpers.
+ * Instead we use the igt_ shorthand, in reference to the intel-gpu-tools
+ * suite of uabi test cases (which includes a test runner for our selftests).
+ */
+
+#define IGT_TIMEOUT(name__) \
+	unsigned long name__ = jiffies + i915_selftest.timeout_jiffies
+
+__printf(2, 3)
+bool __igt_timeout(unsigned long timeout, const char *fmt, ...);
+
+#define igt_timeout(t, fmt, ...) \
+	__igt_timeout((t), KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
+
+#endif /* !__I915_SELFTEST_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
new file mode 100644
index 000000000000..f3e17cb10e05
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -0,0 +1,11 @@
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as subtest__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * The function should be of type int function(void). It may be conditionally
+ * compiled using #if IS_ENABLED(DRM_I915_SELFTEST).
+ *
+ * Tests are executed in order by igt/drv_selftest
+ */
+selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
new file mode 100644
index 000000000000..69e97a2ba4a6
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -0,0 +1,11 @@
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as subtest__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * The function should be of type int function(void). It may be conditionally
+ * compiled using #if IS_ENABLED(DRM_I915_SELFTEST).
+ *
+ * Tests are executed in order by igt/drv_selftest
+ */
+selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
diff --git a/drivers/gpu/drm/i915/selftests/i915_random.c b/drivers/gpu/drm/i915/selftests/i915_random.c
new file mode 100644
index 000000000000..606a237fed17
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_random.c
@@ -0,0 +1,63 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "i915_random.h"
+
+static inline u32 i915_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+{
+	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
+}
+
+void i915_random_reorder(unsigned int *order, unsigned int count,
+			 struct rnd_state *state)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < count; ++i) {
+		BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
+		j = i915_prandom_u32_max_state(count, state);
+		swap(order[i], order[j]);
+	}
+}
+
+unsigned int *i915_random_order(unsigned int count, struct rnd_state *state)
+{
+	unsigned int *order, i;
+
+	order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
+	if (!order)
+		return order;
+
+	for (i = 0; i < count; i++)
+		order[i] = i;
+
+	i915_random_reorder(order, count, state);
+	return order;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_random.h b/drivers/gpu/drm/i915/selftests/i915_random.h
new file mode 100644
index 000000000000..b9c334ce6cd9
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_random.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_SELFTESTS_RANDOM_H__
+#define __I915_SELFTESTS_RANDOM_H__
+
+#include <linux/random.h>
+
+#include "../i915_selftest.h"
+
+#define I915_RND_STATE_INITIALIZER(x) ({				\
+	struct rnd_state state__;					\
+	prandom_seed_state(&state__, (x));				\
+	state__;							\
+})
+
+#define I915_RND_STATE(name__) \
+	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(i915_selftest.random_seed)
+
+#define I915_RND_SUBSTATE(name__, parent__) \
+	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(prandom_u32_state(&(parent__)))
+
+unsigned int *i915_random_order(unsigned int count,
+				struct rnd_state *state);
+void i915_random_reorder(unsigned int *order,
+			 unsigned int count,
+			 struct rnd_state *state);
+
+#endif /* !__I915_SELFTESTS_RANDOM_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/i915_selftest.c b/drivers/gpu/drm/i915/selftests/i915_selftest.c
new file mode 100644
index 000000000000..6ba3abb10c6f
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c
@@ -0,0 +1,250 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/random.h>
+
+#include "../i915_drv.h"
+#include "../i915_selftest.h"
+
+struct i915_selftest i915_selftest __read_mostly = {
+	.timeout_ms = 1000,
+};
+
+int i915_mock_sanitycheck(void)
+{
+	pr_info(DRIVER_NAME ": %s() - ok!\n", __func__);
+	return 0;
+}
+
+int i915_live_sanitycheck(struct drm_i915_private *i915)
+{
+	pr_info("%s: %s() - ok!\n", i915->drm.driver->name, __func__);
+	return 0;
+}
+
+enum {
+#define selftest(name, func) mock_##name,
+#include "i915_mock_selftests.h"
+#undef selftest
+};
+
+enum {
+#define selftest(name, func) live_##name,
+#include "i915_live_selftests.h"
+#undef selftest
+};
+
+struct selftest {
+	bool enabled;
+	const char *name;
+	union {
+		int (*mock)(void);
+		int (*live)(struct drm_i915_private *);
+	};
+};
+
+#define selftest(n, f) [mock_##n] = { .name = #n, .mock = f },
+static struct selftest mock_selftests[] = {
+#include "i915_mock_selftests.h"
+};
+#undef selftest
+
+#define selftest(n, f) [live_##n] = { .name = #n, .live = f },
+static struct selftest live_selftests[] = {
+#include "i915_live_selftests.h"
+};
+#undef selftest
+
+/* Embed the line number into the parameter name so that we can order tests */
+#define selftest(n, func) selftest_0(n, func, param(n))
+#define param(n) __PASTE(igt__, __PASTE(__LINE__, __mock_##n))
+#define selftest_0(n, func, id) \
+module_param_named(id, mock_selftests[mock_##n].enabled, bool, 0400);
+#include "i915_mock_selftests.h"
+#undef selftest_0
+#undef param
+
+#define param(n) __PASTE(igt__, __PASTE(__LINE__, __live_##n))
+#define selftest_0(n, func, id) \
+module_param_named(id, live_selftests[live_##n].enabled, bool, 0400);
+#include "i915_live_selftests.h"
+#undef selftest_0
+#undef param
+#undef selftest
+
+static void set_default_test_all(struct selftest *st, unsigned int count)
+{
+	unsigned int i;
+
+	for (i = 0; i < count; i++)
+		if (st[i].enabled)
+			return;
+
+	for (i = 0; i < count; i++)
+		st[i].enabled = true;
+}
+
+static int __run_selftests(const char *name,
+			   struct selftest *st,
+			   unsigned int count,
+			   void *data)
+{
+	int err = 0;
+
+	while (!i915_selftest.random_seed)
+		i915_selftest.random_seed = get_random_int();
+
+	i915_selftest.timeout_jiffies =
+		i915_selftest.timeout_ms ?
+		msecs_to_jiffies_timeout(i915_selftest.timeout_ms) :
+		MAX_SCHEDULE_TIMEOUT;
+
+	set_default_test_all(st, count);
+
+	pr_info(DRIVER_NAME ": Performing %s selftests with st_random_seed=0x%x st_timeout=%u\n",
+		name, i915_selftest.random_seed, i915_selftest.timeout_ms);
+
+	/* Tests are listed in order in i915_*_selftests.h */
+	for (; count--; st++) {
+		if (!st->enabled)
+			continue;
+
+		cond_resched();
+		if (signal_pending(current))
+			return -EINTR;
+
+		pr_debug(DRIVER_NAME ": Running %s\n", st->name);
+		if (data)
+			err = st->live(data);
+		else
+			err = st->mock();
+		if (err == -EINTR && !signal_pending(current))
+			err = 0;
+		if (err)
+			break;
+	}
+
+	if (WARN(err > 0 || err == -ENOTTY,
+		 "%s returned %d, conflicting with selftest's magic values!\n",
+		 st->name, err))
+		err = -1;
+
+	return err;
+}
+
+#define run_selftests(x, data) \
+	__run_selftests(#x, x##_selftests, ARRAY_SIZE(x##_selftests), data)
+
+int i915_mock_selftests(void)
+{
+	int err;
+
+	if (!i915_selftest.mock)
+		return 0;
+
+	err = run_selftests(mock, NULL);
+	if (err) {
+		i915_selftest.mock = err;
+		return err;
+	}
+
+	if (i915_selftest.mock < 0) {
+		i915_selftest.mock = -ENOTTY;
+		return 1;
+	}
+
+	return 0;
+}
+
+int i915_live_selftests(struct pci_dev *pdev)
+{
+	int err;
+
+	if (!i915_selftest.live)
+		return 0;
+
+	err = run_selftests(live, to_i915(pci_get_drvdata(pdev)));
+	if (err) {
+		i915_selftest.live = err;
+		return err;
+	}
+
+	if (i915_selftest.live < 0) {
+		i915_selftest.live = -ENOTTY;
+		return 1;
+	}
+
+	return 0;
+}
+
+int __i915_subtests(const char *caller,
+		    const struct i915_subtest *st,
+		    unsigned int count,
+		    void *data)
+{
+	int err;
+
+	for (; count--; st++) {
+		cond_resched();
+		if (signal_pending(current))
+			return -EINTR;
+
+		pr_debug(DRIVER_NAME ": Running %s/%s\n", caller, st->name);
+		err = st->func(data);
+		if (err && err != -EINTR) {
+			pr_err(DRIVER_NAME "/%s: %s failed with error %d\n",
+			       caller, st->name, err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+bool __igt_timeout(unsigned long timeout, const char *fmt, ...)
+{
+	va_list va;
+
+	if (!signal_pending(current)) {
+		cond_resched();
+		if (time_before(jiffies, timeout))
+			return false;
+	}
+
+	if (fmt) {
+		va_start(va, fmt);
+		vprintk(fmt, va);
+		va_end(va);
+	}
+
+	return true;
+}
+
+module_param_named(st_random_seed, i915_selftest.random_seed, uint, 0400);
+module_param_named(st_timeout, i915_selftest.timeout_ms, uint, 0400);
+
+module_param_named_unsafe(mock_selftests, i915_selftest.mock, int, 0400);
+MODULE_PARM_DESC(mock_selftests, "Run selftests before loading, using mock hardware (0:disabled [default], 1:run tests then load driver, -1:run tests then exit module)");
+
+module_param_named_unsafe(live_selftests, i915_selftest.live, int, 0400);
+MODULE_PARM_DESC(live_selftests, "Run selftests after driver initialisation on the live system (0:disabled [default], 1:run tests then continue, -1:run tests then exit module)");
diff --git a/tools/testing/selftests/drivers/gpu/i915.sh b/tools/testing/selftests/drivers/gpu/i915.sh
index d407f0fa1e3a..c06d6e8a8dcc 100755
--- a/tools/testing/selftests/drivers/gpu/i915.sh
+++ b/tools/testing/selftests/drivers/gpu/i915.sh
@@ -7,6 +7,7 @@ if ! /sbin/modprobe -q -r i915; then
 fi
 
 if /sbin/modprobe -q i915 mock_selftests=-1; then
+	/sbin/modprobe -q -r i915
 	echo "drivers/gpu/i915: ok"
 else
 	echo "drivers/gpu/i915: [FAIL]"
-- 
2.11.0


* [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (4 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 05/46] drm/i915: Provide a hook for selftests Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-10 10:24   ` Tvrtko Ursulin
  2017-02-02  9:08 ` [PATCH 07/46] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove Chris Wilson
                   ` (41 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Start exercising the scattergather lists, especially looking at
iteration after coalescing.
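
For reference, a minimal sketch of the two iteration granularities the
tests compare (coalesced sg entries versus individual pages); this is
plain scatterlist API usage rather than code from this patch:

	static void walk_sg_table(struct sg_table *st)
	{
		struct sg_page_iter piter;
		struct scatterlist *sg;
		unsigned int i;

		/* one step per (possibly multi-page) coalesced entry */
		for_each_sg(st->sgl, sg, st->nents, i)
			pr_debug("entry %u: pfn %lu, %u bytes\n",
				 i, page_to_pfn(sg_page(sg)), sg->length);

		/* one step per page, regardless of how entries were coalesced */
		for_each_sg_page(st->sgl, &piter, st->nents, 0)
			pr_debug("page pfn %lu\n",
				 page_to_pfn(sg_page_iter_page(&piter)));
	}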

v2: Comment on the peculiarity of table construction (i.e. why this
sg_table might be interesting).
v3: Added one __func__ to identify expect_pfn_sg()

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                    |  11 +-
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/scatterlist.c       | 331 +++++++++++++++++++++
 3 files changed, 340 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/scatterlist.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 88065fd55147..fc54a8eb3fe5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2216,17 +2216,17 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
 	mutex_unlock(&obj->mm.lock);
 }
 
-static void i915_sg_trim(struct sg_table *orig_st)
+static bool i915_sg_trim(struct sg_table *orig_st)
 {
 	struct sg_table new_st;
 	struct scatterlist *sg, *new_sg;
 	unsigned int i;
 
 	if (orig_st->nents == orig_st->orig_nents)
-		return;
+		return false;
 
 	if (sg_alloc_table(&new_st, orig_st->nents, GFP_KERNEL | __GFP_NOWARN))
-		return;
+		return false;
 
 	new_sg = new_st.sgl;
 	for_each_sg(orig_st->sgl, sg, orig_st->nents, i) {
@@ -2239,6 +2239,7 @@ static void i915_sg_trim(struct sg_table *orig_st)
 	sg_free_table(orig_st);
 
 	*orig_st = new_st;
+	return true;
 }
 
 static struct sg_table *
@@ -4967,3 +4968,7 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 	sg = i915_gem_object_get_sg(obj, n, &offset);
 	return sg_dma_address(sg) + (offset << PAGE_SHIFT);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/scatterlist.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 69e97a2ba4a6..5f0bdda42ed8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -9,3 +9,4 @@
  * Tests are executed in order by igt/drv_selftest
  */
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
+selftest(scatterlist, scatterlist_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
new file mode 100644
index 000000000000..fa5bd09c863f
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
@@ -0,0 +1,331 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/prime_numbers.h>
+#include <linux/random.h>
+
+#include "../i915_selftest.h"
+
+#define PFN_BIAS (1 << 10)
+
+struct pfn_table {
+	struct sg_table st;
+	unsigned long start, end;
+};
+
+typedef unsigned int (*npages_fn_t)(unsigned long n,
+				    unsigned long count,
+				    struct rnd_state *rnd);
+
+static noinline int expect_pfn_sg(struct pfn_table *pt,
+				  npages_fn_t npages_fn,
+				  struct rnd_state *rnd,
+				  const char *who,
+				  unsigned long timeout)
+{
+	struct scatterlist *sg;
+	unsigned long pfn, n;
+
+	pfn = pt->start;
+	for_each_sg(pt->st.sgl, sg, pt->st.nents, n) {
+		struct page *page = sg_page(sg);
+		unsigned int npages = npages_fn(n, pt->st.nents, rnd);
+
+		if (page_to_pfn(page) != pfn) {
+			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg)\n",
+			       __func__, who, pfn, page_to_pfn(page));
+			return -EINVAL;
+		}
+
+		if (sg->length != npages * PAGE_SIZE) {
+			pr_err("%s: %s copied wrong sg length, expected size %lu, found %u (using for_each_sg)\n",
+			       __func__, who, npages * PAGE_SIZE, sg->length);
+			return -EINVAL;
+		}
+
+		if (igt_timeout(timeout, "%s timed out\n", who))
+			return -EINTR;
+
+		pfn += npages;
+	}
+	if (pfn != pt->end) {
+		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
+		       __func__, who, pt->end, pfn);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static noinline int expect_pfn_sg_page_iter(struct pfn_table *pt,
+					    const char *who,
+					    unsigned long timeout)
+{
+	struct sg_page_iter sgiter;
+	unsigned long pfn;
+
+	pfn = pt->start;
+	for_each_sg_page(pt->st.sgl, &sgiter, pt->st.nents, 0) {
+		struct page *page = sg_page_iter_page(&sgiter);
+
+		if (page != pfn_to_page(pfn)) {
+			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg_page)\n",
+			       __func__, who, pfn, page_to_pfn(page));
+			return -EINVAL;
+		}
+
+		if (igt_timeout(timeout, "%s timed out\n", who))
+			return -EINTR;
+
+		pfn++;
+	}
+	if (pfn != pt->end) {
+		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
+		       __func__, who, pt->end, pfn);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static noinline int expect_pfn_sgtiter(struct pfn_table *pt,
+				       const char *who,
+				       unsigned long timeout)
+{
+	struct sgt_iter sgt;
+	struct page *page;
+	unsigned long pfn;
+
+	pfn = pt->start;
+	for_each_sgt_page(page, sgt, &pt->st) {
+		if (page != pfn_to_page(pfn)) {
+			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sgt_page)\n",
+			       __func__, who, pfn, page_to_pfn(page));
+			return -EINVAL;
+		}
+
+		if (igt_timeout(timeout, "%s timed out\n", who))
+			return -EINTR;
+
+		pfn++;
+	}
+	if (pfn != pt->end) {
+		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
+		       __func__, who, pt->end, pfn);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int expect_pfn_sgtable(struct pfn_table *pt,
+			      npages_fn_t npages_fn,
+			      struct rnd_state *rnd,
+			      const char *who,
+			      unsigned long timeout)
+{
+	int err;
+
+	err = expect_pfn_sg(pt, npages_fn, rnd, who, timeout);
+	if (err)
+		return err;
+
+	err = expect_pfn_sg_page_iter(pt, who, timeout);
+	if (err)
+		return err;
+
+	err = expect_pfn_sgtiter(pt, who, timeout);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static unsigned int one(unsigned long n,
+			unsigned long count,
+			struct rnd_state *rnd)
+{
+	return 1;
+}
+
+static unsigned int grow(unsigned long n,
+			 unsigned long count,
+			 struct rnd_state *rnd)
+{
+	return n + 1;
+}
+
+static unsigned int shrink(unsigned long n,
+			   unsigned long count,
+			   struct rnd_state *rnd)
+{
+	return count - n;
+}
+
+static unsigned int random(unsigned long n,
+			   unsigned long count,
+			   struct rnd_state *rnd)
+{
+	return 1 + (prandom_u32_state(rnd) % 1024);
+}
+
+static bool alloc_table(struct pfn_table *pt,
+			unsigned long count, unsigned long max,
+			npages_fn_t npages_fn,
+			struct rnd_state *rnd)
+{
+	struct scatterlist *sg;
+	unsigned long n, pfn;
+
+	if (sg_alloc_table(&pt->st, max,
+			   GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN))
+		return false;
+
+	/* count should be less than 20 to prevent overflowing sg->length */
+	GEM_BUG_ON(overflows_type(count * PAGE_SIZE, sg->length));
+
+	/* Construct a table where each scatterlist contains different number
+	 * of entries. The idea is to check that we can iterate the individual
+	 * pages from inside the coalesced lists.
+	 */
+	pt->start = PFN_BIAS;
+	pfn = pt->start;
+	sg = pt->st.sgl;
+	for (n = 0; n < count; n++) {
+		unsigned long npages = npages_fn(n, count, rnd);
+
+		if (n)
+			sg = sg_next(sg);
+		sg_set_page(sg, pfn_to_page(pfn), npages * PAGE_SIZE, 0);
+
+		GEM_BUG_ON(page_to_pfn(sg_page(sg)) != pfn);
+		GEM_BUG_ON(sg->length != npages * PAGE_SIZE);
+		GEM_BUG_ON(sg->offset != 0);
+
+		pfn += npages;
+	}
+	sg_mark_end(sg);
+	pt->st.nents = n;
+	pt->end = pfn;
+
+	return true;
+}
+
+static const npages_fn_t npages_funcs[] = {
+	one,
+	grow,
+	shrink,
+	random,
+	NULL,
+};
+
+static int igt_sg_alloc(void *ignored)
+{
+	IGT_TIMEOUT(end_time);
+	const unsigned long max_order = 20; /* approximating a 4GiB object */
+	struct rnd_state prng;
+	unsigned long prime;
+
+	for_each_prime_number(prime, max_order) {
+		unsigned long size = BIT(prime);
+		int offset;
+
+		for (offset = -1; offset <= 1; offset++) {
+			unsigned long sz = size + offset;
+			const npages_fn_t *npages;
+			struct pfn_table pt;
+			int err;
+
+			for (npages = npages_funcs; *npages; npages++) {
+				prandom_seed_state(&prng,
+						   i915_selftest.random_seed);
+				if (!alloc_table(&pt, sz, sz, *npages, &prng))
+					return 0; /* out of memory, give up */
+
+				prandom_seed_state(&prng,
+						   i915_selftest.random_seed);
+				err = expect_pfn_sgtable(&pt, *npages, &prng,
+							 "sg_alloc_table",
+							 end_time);
+				sg_free_table(&pt.st);
+				if (err)
+					return err;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int igt_sg_trim(void *ignored)
+{
+	IGT_TIMEOUT(end_time);
+	const unsigned long max = PAGE_SIZE; /* not prime! */
+	struct pfn_table pt;
+	unsigned long prime;
+
+	for_each_prime_number(prime, max) {
+		const npages_fn_t *npages;
+		int err;
+
+		for (npages = npages_funcs; *npages; npages++) {
+			struct rnd_state prng;
+
+			prandom_seed_state(&prng, i915_selftest.random_seed);
+			if (!alloc_table(&pt, prime, max, *npages, &prng))
+				return 0; /* out of memory, give up */
+
+			err = 0;
+			if (i915_sg_trim(&pt.st)) {
+				if (pt.st.orig_nents != prime ||
+				    pt.st.nents != prime) {
+					pr_err("i915_sg_trim failed (nents %u, orig_nents %u), expected %lu\n",
+					       pt.st.nents, pt.st.orig_nents, prime);
+					err = -EINVAL;
+				} else {
+					prandom_seed_state(&prng,
+							   i915_selftest.random_seed);
+					err = expect_pfn_sgtable(&pt,
+								 *npages, &prng,
+								 "i915_sg_trim",
+								 end_time);
+				}
+			}
+			sg_free_table(&pt.st);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
+int scatterlist_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_sg_alloc),
+		SUBTEST(igt_sg_trim),
+	};
+
+	return i915_subtests(tests, NULL);
+}
-- 
2.11.0


* [PATCH 07/46] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (5 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 08/46] drm/i915: Add unit tests for the breadcrumb rbtree, completion Chris Wilson
                   ` (40 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

First retroactive test: make sure that the waiters are in global seqno
order after random inserts and removals.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/intel_breadcrumbs.c           |  21 +++
 drivers/gpu/drm/i915/intel_engine_cs.c             |   4 +
 drivers/gpu/drm/i915/intel_ringbuffer.h            |   2 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 172 +++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_engine.c       |  55 +++++++
 drivers/gpu/drm/i915/selftests/mock_engine.h       |  32 ++++
 7 files changed, 287 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_engine.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_engine.h

diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 9fd002bcebb6..1f36756f8759 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -107,6 +107,18 @@ static void __intel_breadcrumbs_enable_irq(struct intel_breadcrumbs *b)
 	if (b->rpm_wakelock)
 		return;
 
+	if (I915_SELFTEST_ONLY(b->mock)) {
+		/* For our mock objects we want to avoid interaction
+		 * with the real hardware (which is not set up). So
+		 * we simply pretend we have enabled the powerwell
+		 * and the irq, and leave it up to the mock
+		 * implementation to call intel_engine_wakeup()
+		 * itself when it wants to simulate a user interrupt,
+		 */
+		b->rpm_wakelock = true;
+		return;
+	}
+
 	/* Since we are waiting on a request, the GPU should be busy
 	 * and should have its own rpm reference. For completeness,
 	 * record an rpm reference for ourselves to cover the
@@ -142,6 +154,11 @@ static void __intel_breadcrumbs_disable_irq(struct intel_breadcrumbs *b)
 	if (!b->rpm_wakelock)
 		return;
 
+	if (I915_SELFTEST_ONLY(b->mock)) {
+		b->rpm_wakelock = false;
+		return;
+	}
+
 	if (b->irq_enabled) {
 		irq_disable(engine);
 		b->irq_enabled = false;
@@ -661,3 +678,7 @@ unsigned int intel_breadcrumbs_busy(struct drm_i915_private *i915)
 
 	return mask;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_breadcrumbs.c"
+#endif
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 69a6416d1223..538d845d7251 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -524,3 +524,7 @@ void intel_engine_get_instdone(struct intel_engine_cs *engine,
 		break;
 	}
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_engine.c"
+#endif
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index b9c15cd40fbf..c2f0ecf612b9 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -5,6 +5,7 @@
 #include "i915_gem_batch_pool.h"
 #include "i915_gem_request.h"
 #include "i915_gem_timeline.h"
+#include "i915_selftest.h"
 
 #define I915_CMD_HASH_ORDER 9
 
@@ -247,6 +248,7 @@ struct intel_engine_cs {
 
 		bool irq_enabled : 1;
 		bool rpm_wakelock : 1;
+		I915_SELFTEST_DECLARE(bool mock : 1);
 	} breadcrumbs;
 
 	/*
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 5f0bdda42ed8..80458e2a2b04 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -10,3 +10,4 @@
  */
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
+selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
new file mode 100644
index 000000000000..6b5acf9de65b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -0,0 +1,172 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+#include "i915_random.h"
+
+#include "mock_engine.h"
+
+static int check_rbtree(struct intel_engine_cs *engine,
+			const unsigned long *bitmap,
+			const struct intel_wait *waiters,
+			const int count)
+{
+	struct intel_breadcrumbs *b = &engine->breadcrumbs;
+	struct rb_node *rb;
+	int n;
+
+	if (&b->first_wait->node != rb_first(&b->waiters)) {
+		pr_err("First waiter does not match first element of wait-tree\n");
+		return -EINVAL;
+	}
+
+	n = find_first_bit(bitmap, count);
+	for (rb = rb_first(&b->waiters); rb; rb = rb_next(rb)) {
+		struct intel_wait *w = container_of(rb, typeof(*w), node);
+		int idx = w - waiters;
+
+		if (!test_bit(idx, bitmap)) {
+			pr_err("waiter[%d, seqno=%d] removed but still in wait-tree\n",
+			       idx, w->seqno);
+			return -EINVAL;
+		}
+
+		if (n != idx) {
+			pr_err("waiter[%d, seqno=%d] does not match expected next element in tree [%d]\n",
+			       idx, w->seqno, n);
+			return -EINVAL;
+		}
+
+		n = find_next_bit(bitmap, count, n + 1);
+	}
+
+	return 0;
+}
+
+static int check_rbtree_empty(struct intel_engine_cs *engine)
+{
+	struct intel_breadcrumbs *b = &engine->breadcrumbs;
+
+	if (b->first_wait) {
+		pr_err("Empty breadcrumbs still has a waiter\n");
+		return -EINVAL;
+	}
+
+	if (!RB_EMPTY_ROOT(&b->waiters)) {
+		pr_err("Empty breadcrumbs, but wait-tree not empty\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int igt_random_insert_remove(void *arg)
+{
+	const u32 seqno_bias = 0x1000;
+	I915_RND_STATE(prng);
+	struct intel_engine_cs *engine = arg;
+	struct intel_wait *waiters;
+	const int count = 4096;
+	unsigned int *order;
+	unsigned long *bitmap;
+	int err = -ENOMEM;
+	int n;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	bitmap = kcalloc(DIV_ROUND_UP(count, BITS_PER_LONG), sizeof(*bitmap),
+			 GFP_TEMPORARY);
+	if (!bitmap)
+		goto out_waiters;
+
+	order = i915_random_order(count, &prng);
+	if (!order)
+		goto out_bitmap;
+
+	for (n = 0; n < count; n++)
+		intel_wait_init(&waiters[n], seqno_bias + n);
+
+	err = check_rbtree(engine, bitmap, waiters, count);
+	if (err)
+		goto out_order;
+
+	/* Add waiters to and remove them from the rbtree in random order.
+	 * At each step, we verify that the rbtree is correctly ordered.
+	 */
+	for (n = 0; n < count; n++) {
+		int i = order[n];
+
+		intel_engine_add_wait(engine, &waiters[i]);
+		__set_bit(i, bitmap);
+
+		err = check_rbtree(engine, bitmap, waiters, count);
+		if (err)
+			goto out_order;
+	}
+
+	i915_random_reorder(order, count, &prng);
+	for (n = 0; n < count; n++) {
+		int i = order[n];
+
+		intel_engine_remove_wait(engine, &waiters[i]);
+		__clear_bit(i, bitmap);
+
+		err = check_rbtree(engine, bitmap, waiters, count);
+		if (err)
+			goto out_order;
+	}
+
+	err = check_rbtree_empty(engine);
+out_order:
+	kfree(order);
+out_bitmap:
+	kfree(bitmap);
+out_waiters:
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
+int intel_breadcrumbs_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_random_insert_remove),
+	};
+	struct intel_engine_cs *engine;
+	int err;
+
+	engine = mock_engine("mock");
+	if (!engine)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, engine);
+	kfree(engine);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.c b/drivers/gpu/drm/i915/selftests/mock_engine.c
new file mode 100644
index 000000000000..4a090bbe807b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.c
@@ -0,0 +1,55 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_engine.h"
+
+struct intel_engine_cs *mock_engine(const char *name)
+{
+	struct intel_engine_cs *engine;
+	static int id;
+
+	engine = kzalloc(sizeof(*engine) + PAGE_SIZE, GFP_KERNEL);
+	if (!engine)
+		return NULL;
+
+	/* minimal engine setup for seqno */
+	engine->name = name;
+	engine->id = id++;
+	engine->status_page.page_addr = (void *)(engine + 1);
+
+	/* minimal breadcrumbs init */
+	spin_lock_init(&engine->breadcrumbs.lock);
+	engine->breadcrumbs.mock = true;
+
+	return engine;
+}
+
+void mock_engine_flush(struct intel_engine_cs *engine)
+{
+}
+
+void mock_engine_reset(struct intel_engine_cs *engine)
+{
+	intel_write_status_page(engine, I915_GEM_HWS_INDEX, 0);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.h b/drivers/gpu/drm/i915/selftests/mock_engine.h
new file mode 100644
index 000000000000..0ae9a94aaa1e
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_ENGINE_H__
+#define __MOCK_ENGINE_H__
+
+struct intel_engine_cs *mock_engine(const char *name);
+void mock_engine_flush(struct intel_engine_cs *engine);
+void mock_engine_reset(struct intel_engine_cs *engine);
+
+#endif /* !__MOCK_ENGINE_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 08/46] drm/i915: Add unit tests for the breadcrumb rbtree, completion
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (6 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 07/46] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
                   ` (39 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Second retroactive test: make sure that waiters are removed from the
global wait-tree once their seqno completes.
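
For illustration, the notion of a waiter being "complete" reduces to a
wrapping seqno comparison; a minimal sketch of the idea, with a
hypothetical helper name mirroring what i915_seqno_passed() does:

	/*
	 * Seqnos wrap, so "has this seqno been reached" is decided with
	 * signed arithmetic on the difference rather than a plain compare.
	 */
	static inline bool example_seqno_passed(u32 hw_seqno, u32 wait_seqno)
	{
		return (s32)(hw_seqno - wait_seqno) >= 0;
	}

Once the breadcrumb seqno passes a waiter's seqno, that waiter must be
woken and dropped from the wait-tree, which is what the test checks
after each advance.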

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 107 +++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_engine.h       |   6 ++
 2 files changed, 113 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index 6b5acf9de65b..32a27e56c353 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -64,6 +64,27 @@ static int check_rbtree(struct intel_engine_cs *engine,
 	return 0;
 }
 
+static int check_completion(struct intel_engine_cs *engine,
+			    const unsigned long *bitmap,
+			    const struct intel_wait *waiters,
+			    const int count)
+{
+	int n;
+
+	for (n = 0; n < count; n++) {
+		if (intel_wait_complete(&waiters[n]) != !!test_bit(n, bitmap))
+			continue;
+
+		pr_err("waiter[%d, seqno=%d] is %s, but expected %s\n",
+		       n, waiters[n].seqno,
+		       intel_wait_complete(&waiters[n]) ? "complete" : "active",
+		       test_bit(n, bitmap) ? "active" : "complete");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int check_rbtree_empty(struct intel_engine_cs *engine)
 {
 	struct intel_breadcrumbs *b = &engine->breadcrumbs;
@@ -153,10 +174,96 @@ static int igt_random_insert_remove(void *arg)
 	return err;
 }
 
+static int igt_insert_complete(void *arg)
+{
+	const u32 seqno_bias = 0x1000;
+	struct intel_engine_cs *engine = arg;
+	struct intel_wait *waiters;
+	const int count = 4096;
+	unsigned long *bitmap;
+	int err = -ENOMEM;
+	int n, m;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	bitmap = kcalloc(DIV_ROUND_UP(count, BITS_PER_LONG), sizeof(*bitmap),
+			 GFP_TEMPORARY);
+	if (!bitmap)
+		goto out_waiters;
+
+	for (n = 0; n < count; n++) {
+		intel_wait_init(&waiters[n], n + seqno_bias);
+		intel_engine_add_wait(engine, &waiters[n]);
+		__set_bit(n, bitmap);
+	}
+	err = check_rbtree(engine, bitmap, waiters, count);
+	if (err)
+		goto out_bitmap;
+
+	/* On each step, we advance the seqno so that several waiters are then
+	 * complete (we increase the seqno by increasingly larger values to
+	 * retire more and more waiters at once). All retired waiters should
+	 * be woken and removed from the rbtree, and that is what we check.
+	 */
+	for (n = 0; n < count; n = m) {
+		int seqno = 2 * n;
+
+		GEM_BUG_ON(find_first_bit(bitmap, count) != n);
+
+		if (intel_wait_complete(&waiters[n])) {
+			pr_err("waiter[%d, seqno=%d] completed too early\n",
+			       n, waiters[n].seqno);
+			err = -EINVAL;
+			goto out_bitmap;
+		}
+
+		/* complete the following waiters */
+		mock_seqno_advance(engine, seqno + seqno_bias);
+		for (m = n; m <= seqno; m++) {
+			if (m == count)
+				break;
+
+			GEM_BUG_ON(!test_bit(m, bitmap));
+			__clear_bit(m, bitmap);
+		}
+
+		intel_engine_remove_wait(engine, &waiters[n]);
+		RB_CLEAR_NODE(&waiters[n].node);
+
+		err = check_rbtree(engine, bitmap, waiters, count);
+		if (err) {
+			pr_err("rbtree corrupt after seqno advance to %d\n",
+			       seqno + seqno_bias);
+			goto out_bitmap;
+		}
+
+		err = check_completion(engine, bitmap, waiters, count);
+		if (err) {
+			pr_err("completions after seqno advance to %d failed\n",
+			       seqno + seqno_bias);
+			goto out_bitmap;
+		}
+	}
+
+	err = check_rbtree_empty(engine);
+out_bitmap:
+	kfree(bitmap);
+out_waiters:
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
 int intel_breadcrumbs_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_random_insert_remove),
+		SUBTEST(igt_insert_complete),
 	};
 	struct intel_engine_cs *engine;
 	int err;
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.h b/drivers/gpu/drm/i915/selftests/mock_engine.h
index 0ae9a94aaa1e..9cfe9671f860 100644
--- a/drivers/gpu/drm/i915/selftests/mock_engine.h
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.h
@@ -29,4 +29,10 @@ struct intel_engine_cs *mock_engine(const char *name);
 void mock_engine_flush(struct intel_engine_cs *engine);
 void mock_engine_reset(struct intel_engine_cs *engine);
 
+static inline void mock_seqno_advance(struct intel_engine_cs *engine, u32 seqno)
+{
+	intel_write_status_page(engine, I915_GEM_HWS_INDEX, seqno);
+	intel_engine_wakeup(engine);
+}
+
 #endif /* !__MOCK_ENGINE_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (7 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 08/46] drm/i915: Add unit tests for the breadcrumb rbtree, completion Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02 12:49   ` Tvrtko Ursulin
  2017-02-02  9:08 ` [PATCH 10/46] drm/i915: Mock the GEM device for self-testing Chris Wilson
                   ` (38 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Third retroactive test: make sure that the seqno waiters are woken as
the engine's seqno advances past them.

v2: Smattering of comments, rearrange code
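
The test spins up thousands of kthreads and repeatedly parks and
releases them, so the rendezvous is built on atomic counters. A minimal
sketch of the pattern with illustrative names (wait_on_atomic_t() is
the interface available at the time of this series):

	/* Each worker decrements the counter; the last one wakes the waiter. */
	static void example_worker_done(atomic_t *counter)
	{
		if (atomic_dec_and_test(counter))
			wake_up_atomic_t(counter);
	}

	/* Callback invoked by wait_on_atomic_t() while the counter is non-zero. */
	static int example_sleep(atomic_t *counter)
	{
		schedule();
		return 0;
	}

	/* Sleep until every worker has checked in. */
	static void example_wait_for_workers(atomic_t *counter)
	{
		wait_on_atomic_t(counter, example_sleep, TASK_UNINTERRUPTIBLE);
	}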

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 201 +++++++++++++++++++++
 1 file changed, 201 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index 32a27e56c353..fb368eb37660 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -259,11 +259,212 @@ static int igt_insert_complete(void *arg)
 	return err;
 }
 
+struct igt_wakeup {
+	struct task_struct *tsk;
+	atomic_t *ready, *set, *done;
+	struct intel_engine_cs *engine;
+	unsigned long flags;
+#define STOP 0
+#define IDLE 1
+	wait_queue_head_t *wq;
+	u32 seqno;
+};
+
+static int wait_atomic(atomic_t *p)
+{
+	schedule();
+	return 0;
+}
+
+static int wait_atomic_timeout(atomic_t *p)
+{
+	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
+}
+
+static bool wait_for_ready(struct igt_wakeup *w)
+{
+	DEFINE_WAIT(ready);
+
+	if (atomic_dec_and_test(w->done))
+		wake_up_atomic_t(w->done);
+
+	if (test_bit(STOP, &w->flags))
+		goto out;
+
+	set_bit(IDLE, &w->flags);
+	for (;;) {
+		prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
+		if (atomic_read(w->ready) == 0)
+			break;
+
+		schedule();
+	}
+	finish_wait(w->wq, &ready);
+	clear_bit(IDLE, &w->flags);
+
+out:
+	if (atomic_dec_and_test(w->set))
+		wake_up_atomic_t(w->set);
+
+	return !test_bit(STOP, &w->flags);
+}
+
+static int igt_wakeup_thread(void *arg)
+{
+	struct igt_wakeup *w = arg;
+	struct intel_wait wait;
+
+	while (wait_for_ready(w)) {
+		GEM_BUG_ON(kthread_should_stop());
+
+		intel_wait_init(&wait, w->seqno);
+		intel_engine_add_wait(w->engine, &wait);
+		for (;;) {
+			set_current_state(TASK_UNINTERRUPTIBLE);
+			if (i915_seqno_passed(intel_engine_get_seqno(w->engine),
+					      w->seqno))
+				break;
+
+			if (test_bit(STOP, &w->flags)) /* emergency escape */
+				break;
+
+			schedule();
+		}
+		intel_engine_remove_wait(w->engine, &wait);
+		__set_current_state(TASK_RUNNING);
+	}
+
+	return 0;
+}
+
+static void igt_wake_all_sync(atomic_t *ready,
+			      atomic_t *set,
+			      atomic_t *done,
+			      wait_queue_head_t *wq,
+			      int count)
+{
+	atomic_set(set, count);
+	atomic_set(ready, 0);
+	wake_up_all(wq);
+
+	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
+	atomic_set(ready, count);
+	atomic_set(done, count);
+}
+
+static int igt_wakeup(void *arg)
+{
+	I915_RND_STATE(prng);
+	const int state = TASK_UNINTERRUPTIBLE;
+	struct intel_engine_cs *engine = arg;
+	struct igt_wakeup *waiters;
+	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	const int count = 4096;
+	const u32 max_seqno = count / 4;
+	atomic_t ready, set, done;
+	int err = -ENOMEM;
+	int n, step;
+
+	mock_engine_reset(engine);
+
+	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
+	if (!waiters)
+		goto out_engines;
+
+	/* Create a large number of threads, each waiting on a random seqno.
+	 * Multiple waiters will be waiting for the same seqno.
+	 */
+	atomic_set(&ready, count);
+	for (n = 0; n < count; n++) {
+		waiters[n].wq = &wq;
+		waiters[n].ready = &ready;
+		waiters[n].set = &set;
+		waiters[n].done = &done;
+		waiters[n].engine = engine;
+		waiters[n].flags = BIT(IDLE);
+
+		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
+					     "i915/igt:%d", n);
+		if (IS_ERR(waiters[n].tsk))
+			goto out_waiters;
+
+		get_task_struct(waiters[n].tsk);
+	}
+
+	for (step = 1; step <= max_seqno; step <<= 1) {
+		u32 seqno;
+
+		/* The waiter threads start paused as we assign them a random
+		 * seqno and reset the engine. Once the engine is reset,
+		 * we signal that the threads may begin their wait upon their
+		 * seqno.
+		 */
+		for (n = 0; n < count; n++) {
+			GEM_BUG_ON(!test_bit(IDLE, &waiters[n].flags));
+			waiters[n].seqno =
+				1 + prandom_u32_state(&prng) % max_seqno;
+		}
+		mock_seqno_advance(engine, 0);
+		igt_wake_all_sync(&ready, &set, &done, &wq, count);
+
+		/* Simulate the GPU doing chunks of work, with one or more
+		 * seqno appearing to finish at the same time. A random number
+		 * of threads will be waiting upon the update and hopefully be
+		 * woken.
+		 */
+		for (seqno = 1; seqno <= max_seqno + step; seqno += step) {
+			usleep_range(50, 500);
+			mock_seqno_advance(engine, seqno);
+		}
+		GEM_BUG_ON(intel_engine_get_seqno(engine) < 1 + max_seqno);
+
+		/* With the seqno now beyond any of the waiting threads, they
+		 * should all be woken, see that they are complete and signal
+		 * that they are ready for the next test. We wait until all
+		 * threads are complete and waiting for us (i.e. not a seqno).
+		 */
+		err = wait_on_atomic_t(&done, wait_atomic_timeout, state);
+		if (err) {
+			pr_err("Timed out waiting for %d remaining waiters\n",
+			       atomic_read(&done));
+			break;
+		}
+
+		err = check_rbtree_empty(engine);
+		if (err)
+			break;
+	}
+
+out_waiters:
+	for (n = 0; n < count; n++) {
+		if (IS_ERR(waiters[n].tsk))
+			break;
+
+		set_bit(STOP, &waiters[n].flags);
+	}
+	mock_seqno_advance(engine, INT_MAX); /* wakeup any broken waiters */
+	igt_wake_all_sync(&ready, &set, &done, &wq, n);
+
+	for (n = 0; n < count; n++) {
+		if (IS_ERR(waiters[n].tsk))
+			break;
+
+		kthread_stop(waiters[n].tsk);
+		put_task_struct(waiters[n].tsk);
+	}
+
+	drm_free_large(waiters);
+out_engines:
+	mock_engine_flush(engine);
+	return err;
+}
+
 int intel_breadcrumbs_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_random_insert_remove),
 		SUBTEST(igt_insert_complete),
+		SUBTEST(igt_wakeup),
 	};
 	struct intel_engine_cs *engine;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 10/46] drm/i915: Mock the GEM device for self-testing
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (8 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 11/46] drm/i915: Mock a GGTT " Chris Wilson
                   ` (37 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

A simulacrum of drm_i915_private that lets us simulate interactions with
the device.

v2: Tidy init error paths
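
For illustration, a mock-only selftest is expected to use the fake
device roughly as follows (hypothetical test body, error handling
trimmed):

	static int example_mock_selftest(void)
	{
		struct drm_i915_private *i915;
		int err = 0;

		i915 = mock_gem_device();
		if (!i915)
			return -ENOMEM;

		/* ... poke at i915 internals without touching real hardware ... */

		/* the final unref invokes mock_device_release() */
		drm_dev_unref(&i915->drm);
		return err;
	}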

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c                  |   4 +
 drivers/gpu/drm/i915/i915_gem.c                  |   1 +
 drivers/gpu/drm/i915/selftests/mock_drm.c        |  54 ++++++++++++
 drivers/gpu/drm/i915/selftests/mock_drm.h        |  31 +++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 104 +++++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.h |   8 ++
 drivers/gpu/drm/i915/selftests/mock_gem_object.h |   8 ++
 7 files changed, 210 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_drm.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_drm.h
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gem_device.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gem_device.h
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gem_object.h

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 8bba6c4eb4ed..13307f641e5a 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2639,3 +2639,7 @@ static struct drm_driver driver = {
 	.minor = DRIVER_MINOR,
 	.patchlevel = DRIVER_PATCHLEVEL,
 };
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_drm.c"
+#endif
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fc54a8eb3fe5..778a659a7836 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4971,4 +4971,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/scatterlist.c"
+#include "selftests/mock_gem_device.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_drm.c b/drivers/gpu/drm/i915/selftests/mock_drm.c
new file mode 100644
index 000000000000..113dec05c7dc
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_drm.c
@@ -0,0 +1,54 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_drm.h"
+
+static inline struct inode fake_inode(struct drm_i915_private *i915)
+{
+	return (struct inode){ .i_rdev = i915->drm.primary->index };
+}
+
+struct drm_file *mock_file(struct drm_i915_private *i915)
+{
+	struct inode inode = fake_inode(i915);
+	struct file filp = {};
+	struct drm_file *file;
+	int err;
+
+	err = drm_open(&inode, &filp);
+	if (unlikely(err))
+		return ERR_PTR(err);
+
+	file = filp.private_data;
+	file->authenticated = true;
+	return file;
+}
+
+void mock_file_free(struct drm_i915_private *i915, struct drm_file *file)
+{
+	struct inode inode = fake_inode(i915);
+	struct file filp = { .private_data = file };
+
+	drm_release(&inode, &filp);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_drm.h b/drivers/gpu/drm/i915/selftests/mock_drm.h
new file mode 100644
index 000000000000..b39beee9f8f6
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_drm.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_DRM_H
+#define __MOCK_DRM_H
+
+struct drm_file *mock_file(struct drm_i915_private *i915);
+void mock_file_free(struct drm_i915_private *i915, struct drm_file *file);
+
+#endif /* !__MOCK_DRM_H */
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
new file mode 100644
index 000000000000..15d0a7ccc9d1
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -0,0 +1,104 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/pm_runtime.h>
+
+#include "mock_gem_device.h"
+#include "mock_gem_object.h"
+
+static void mock_device_release(struct drm_device *dev)
+{
+	struct drm_i915_private *i915 = to_i915(dev);
+
+	i915_gem_drain_freed_objects(i915);
+
+	kmem_cache_destroy(i915->objects);
+
+	drm_dev_fini(&i915->drm);
+	put_device(&i915->drm.pdev->dev);
+}
+
+static struct drm_driver mock_driver = {
+	.name = "mock",
+	.driver_features = DRIVER_GEM,
+	.release = mock_device_release,
+
+	.gem_close_object = i915_gem_close_object,
+	.gem_free_object_unlocked = i915_gem_free_object,
+};
+
+static void release_dev(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	kfree(pdev);
+}
+
+struct drm_i915_private *mock_gem_device(void)
+{
+	struct drm_i915_private *i915;
+	struct pci_dev *pdev;
+	int err;
+
+	pdev = kzalloc(sizeof(*pdev) + sizeof(*i915), GFP_KERNEL);
+	if (!pdev)
+		goto err;
+
+	device_initialize(&pdev->dev);
+	pdev->dev.release = release_dev;
+	dev_set_name(&pdev->dev, "mock");
+	dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+
+	i915 = (struct drm_i915_private *)(pdev + 1);
+	pci_set_drvdata(pdev, i915);
+
+	err = drm_dev_init(&i915->drm, &mock_driver, &pdev->dev);
+	if (err) {
+		pr_err("Failed to initialise mock GEM device: err=%d\n", err);
+		goto put_device;
+	}
+	i915->drm.pdev = pdev;
+	i915->drm.dev_private = i915;
+
+	mkwrite_device_info(i915)->gen = -1;
+
+	spin_lock_init(&i915->mm.object_stat_lock);
+
+	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
+	init_llist_head(&i915->mm.free_list);
+
+	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
+	if (!i915->objects)
+		goto put_device;
+
+	return i915;
+
+put_device:
+	put_device(&pdev->dev);
+err:
+	return NULL;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.h b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
new file mode 100644
index 000000000000..c557e33c3953
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
@@ -0,0 +1,8 @@
+#ifndef __MOCK_GEM_DEVICE_H__
+#define __MOCK_GEM_DEVICE_H__
+
+struct drm_i915_private;
+
+struct drm_i915_private *mock_gem_device(void);
+
+#endif /* !__MOCK_GEM_DEVICE_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_object.h b/drivers/gpu/drm/i915/selftests/mock_gem_object.h
new file mode 100644
index 000000000000..9fbf67321662
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_object.h
@@ -0,0 +1,8 @@
+#ifndef __MOCK_GEM_OBJECT_H__
+#define __MOCK_GEM_OBJECT_H__
+
+struct mock_object {
+	struct drm_i915_gem_object base;
+};
+
+#endif /* !__MOCK_GEM_OBJECT_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 11/46] drm/i915: Mock a GGTT for self-testing
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (9 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 10/46] drm/i915: Mock the GEM device for self-testing Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 12/46] drm/i915: Mock infrastructure for request emission Chris Wilson
                   ` (36 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

A very simple mockery: just a range manager and a timeline. Useful for
inserting objects and ordering retirement, and not much else.

v2: mock_fini_ggtt() to complement mock_init_ggtt().
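
At its core this is nothing more than a drm_mm range manager over a
small fake aperture, with no-op bind/unbind. A minimal sketch of the
idea (hypothetical function, sizes mirroring mock_init_ggtt()):

	/*
	 * Address-space bookkeeping only, no hardware: reservations succeed
	 * or fail purely on range-manager state.
	 */
	static int example_fake_gtt(void)
	{
		struct drm_mm mm;
		struct drm_mm_node node = { .start = 0, .size = PAGE_SIZE };
		int err;

		drm_mm_init(&mm, 0, 4096 * PAGE_SIZE);
		err = drm_mm_reserve_node(&mm, &node);
		if (err == 0)
			drm_mm_remove_node(&node);
		drm_mm_takedown(&mm);
		return err;
	}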

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c              |   4 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c |  31 +++++
 drivers/gpu/drm/i915/selftests/mock_gtt.c        | 138 +++++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gtt.h        |  35 ++++++
 4 files changed, 208 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gtt.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_gtt.h

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 65425e71f3a5..afdb2859be05 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3751,3 +3751,7 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
 					   size, alignment, color,
 					   start, end, DRM_MM_INSERT_EVICT);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_gtt.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 15d0a7ccc9d1..dbd32b125d15 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -26,6 +26,7 @@
 
 #include "mock_gem_device.h"
 #include "mock_gem_object.h"
+#include "mock_gtt.h"
 
 static void mock_device_release(struct drm_device *dev)
 {
@@ -33,6 +34,12 @@ static void mock_device_release(struct drm_device *dev)
 
 	i915_gem_drain_freed_objects(i915);
 
+	mutex_lock(&i915->drm.struct_mutex);
+	mock_fini_ggtt(i915);
+	i915_gem_timeline_fini(&i915->gt.global_timeline);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	kmem_cache_destroy(i915->vmas);
 	kmem_cache_destroy(i915->objects);
 
 	drm_dev_fini(&i915->drm);
@@ -84,19 +91,43 @@ struct drm_i915_private *mock_gem_device(void)
 	i915->drm.pdev = pdev;
 	i915->drm.dev_private = i915;
 
+	/* Using the global GTT may ask questions about KMS users, so prepare */
+	drm_mode_config_init(&i915->drm);
+
 	mkwrite_device_info(i915)->gen = -1;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 
 	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
 	init_llist_head(&i915->mm.free_list);
+	INIT_LIST_HEAD(&i915->mm.unbound_list);
+	INIT_LIST_HEAD(&i915->mm.bound_list);
 
 	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
 	if (!i915->objects)
 		goto put_device;
 
+	i915->vmas = KMEM_CACHE(i915_vma, SLAB_HWCACHE_ALIGN);
+	if (!i915->vmas)
+		goto err_objects;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	INIT_LIST_HEAD(&i915->gt.timelines);
+	err = i915_gem_timeline_init__global(i915);
+	if (err) {
+		mutex_unlock(&i915->drm.struct_mutex);
+		goto err_vmas;
+	}
+
+	mock_init_ggtt(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
 	return i915;
 
+err_vmas:
+	kmem_cache_destroy(i915->vmas);
+err_objects:
+	kmem_cache_destroy(i915->objects);
 put_device:
 	put_device(&pdev->dev);
 err:
diff --git a/drivers/gpu/drm/i915/selftests/mock_gtt.c b/drivers/gpu/drm/i915/selftests/mock_gtt.c
new file mode 100644
index 000000000000..a61309c7cb3e
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gtt.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_gtt.h"
+
+static void mock_insert_page(struct i915_address_space *vm,
+			     dma_addr_t addr,
+			     u64 offset,
+			     enum i915_cache_level level,
+			     u32 flags)
+{
+}
+
+static void mock_insert_entries(struct i915_address_space *vm,
+				struct sg_table *st,
+				u64 start,
+				enum i915_cache_level level, u32 flags)
+{
+}
+
+static int mock_bind_ppgtt(struct i915_vma *vma,
+			   enum i915_cache_level cache_level,
+			   u32 flags)
+{
+	GEM_BUG_ON(flags & I915_VMA_GLOBAL_BIND);
+	vma->pages = vma->obj->mm.pages;
+	vma->flags |= I915_VMA_LOCAL_BIND;
+	return 0;
+}
+
+static void mock_unbind_ppgtt(struct i915_vma *vma)
+{
+}
+
+static void mock_cleanup(struct i915_address_space *vm)
+{
+}
+
+struct i915_hw_ppgtt *
+mock_ppgtt(struct drm_i915_private *i915,
+	   const char *name)
+{
+	struct i915_hw_ppgtt *ppgtt;
+
+	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+	if (!ppgtt)
+		return NULL;
+
+	kref_init(&ppgtt->ref);
+	ppgtt->base.i915 = i915;
+	ppgtt->base.total = round_down(U64_MAX, PAGE_SIZE);
+	ppgtt->base.file = ERR_PTR(-ENODEV);
+
+	INIT_LIST_HEAD(&ppgtt->base.active_list);
+	INIT_LIST_HEAD(&ppgtt->base.inactive_list);
+	INIT_LIST_HEAD(&ppgtt->base.unbound_list);
+
+	INIT_LIST_HEAD(&ppgtt->base.global_link);
+	drm_mm_init(&ppgtt->base.mm, 0, ppgtt->base.total);
+	i915_gem_timeline_init(i915, &ppgtt->base.timeline, name);
+
+	ppgtt->base.clear_range = nop_clear_range;
+	ppgtt->base.insert_page = mock_insert_page;
+	ppgtt->base.insert_entries = mock_insert_entries;
+	ppgtt->base.bind_vma = mock_bind_ppgtt;
+	ppgtt->base.unbind_vma = mock_unbind_ppgtt;
+	ppgtt->base.cleanup = mock_cleanup;
+
+	return ppgtt;
+}
+
+static int mock_bind_ggtt(struct i915_vma *vma,
+			  enum i915_cache_level cache_level,
+			  u32 flags)
+{
+	int err;
+
+	err = i915_get_ggtt_vma_pages(vma);
+	if (err)
+		return err;
+
+	vma->flags |= I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
+	return 0;
+}
+
+static void mock_unbind_ggtt(struct i915_vma *vma)
+{
+}
+
+void mock_init_ggtt(struct drm_i915_private *i915)
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	INIT_LIST_HEAD(&i915->vm_list);
+
+	ggtt->base.i915 = i915;
+
+	ggtt->mappable_base = 0;
+	ggtt->mappable_end = 2048 * PAGE_SIZE;
+	ggtt->base.total = 4096 * PAGE_SIZE;
+
+	ggtt->base.clear_range = nop_clear_range;
+	ggtt->base.insert_page = mock_insert_page;
+	ggtt->base.insert_entries = mock_insert_entries;
+	ggtt->base.bind_vma = mock_bind_ggtt;
+	ggtt->base.unbind_vma = mock_unbind_ggtt;
+	ggtt->base.cleanup = mock_cleanup;
+
+	i915_address_space_init(&ggtt->base, i915, "global");
+}
+
+void mock_fini_ggtt(struct drm_i915_private *i915)
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	i915_address_space_fini(&ggtt->base);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_gtt.h b/drivers/gpu/drm/i915/selftests/mock_gtt.h
new file mode 100644
index 000000000000..9a0a833bb545
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_gtt.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_GTT_H
+#define __MOCK_GTT_H
+
+void mock_init_ggtt(struct drm_i915_private *i915);
+void mock_fini_ggtt(struct drm_i915_private *i915);
+
+struct i915_hw_ppgtt *
+mock_ppgtt(struct drm_i915_private *i915,
+	   const char *name);
+
+#endif /* !__MOCK_GTT_H */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 12/46] drm/i915: Mock infrastructure for request emission
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (10 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 11/46] drm/i915: Mock a GGTT " Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 13/46] drm/i915: Create a fake object for testing huge allocations Chris Wilson
                   ` (35 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Create a fake engine that runs requests using a timer to simulate hw.

v2: Prevent leaks of ctx->name along error paths
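
For illustration, a request-level selftest is expected to drive the
mock engine roughly like this (hypothetical test body; the delay is in
jiffies and the engine's timer completes the request once it expires):

	static int example_submit_mock_request(struct drm_i915_private *i915)
	{
		struct drm_i915_gem_request *rq;
		int err = 0;

		mutex_lock(&i915->drm.struct_mutex);

		rq = mock_request(i915->engine[RCS], i915->kernel_context, HZ / 10);
		if (!rq) {
			err = -ENOMEM;
			goto out_unlock;
		}

		/* queued on the fake hw queue; the timer retires it later */
		i915_add_request(rq);

	out_unlock:
		mutex_unlock(&i915->drm.struct_mutex);
		return err;
	}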

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_context.c            |   4 +
 drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c |  11 +-
 drivers/gpu/drm/i915/selftests/mock_context.c      |  78 ++++++++++
 drivers/gpu/drm/i915/selftests/mock_context.h      |  34 ++++
 drivers/gpu/drm/i915/selftests/mock_engine.c       | 172 +++++++++++++++++++--
 drivers/gpu/drm/i915/selftests/mock_engine.h       |  18 ++-
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |  95 +++++++++++-
 drivers/gpu/drm/i915/selftests/mock_gem_device.h   |   1 +
 drivers/gpu/drm/i915/selftests/mock_request.c      |  44 ++++++
 drivers/gpu/drm/i915/selftests/mock_request.h      |  44 ++++++
 10 files changed, 483 insertions(+), 18 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_context.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_context.h
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_request.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_request.h

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 680105421bb9..e6208e361356 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -1188,3 +1188,7 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
 
 	return 0;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_context.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
index fb368eb37660..55795cab483c 100644
--- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
@@ -25,6 +25,7 @@
 #include "../i915_selftest.h"
 #include "i915_random.h"
 
+#include "mock_gem_device.h"
 #include "mock_engine.h"
 
 static int check_rbtree(struct intel_engine_cs *engine,
@@ -466,15 +467,15 @@ int intel_breadcrumbs_mock_selftests(void)
 		SUBTEST(igt_insert_complete),
 		SUBTEST(igt_wakeup),
 	};
-	struct intel_engine_cs *engine;
+	struct drm_i915_private *i915;
 	int err;
 
-	engine = mock_engine("mock");
-	if (!engine)
+	i915 = mock_gem_device();
+	if (!i915)
 		return -ENOMEM;
 
-	err = i915_subtests(tests, engine);
-	kfree(engine);
+	err = i915_subtests(tests, i915->engine[RCS]);
+	drm_dev_unref(&i915->drm);
 
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/selftests/mock_context.c b/drivers/gpu/drm/i915/selftests/mock_context.c
new file mode 100644
index 000000000000..8d3a90c3f8ac
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_context.c
@@ -0,0 +1,78 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_context.h"
+#include "mock_gtt.h"
+
+struct i915_gem_context *
+mock_context(struct drm_i915_private *i915,
+	     const char *name)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return NULL;
+
+	kref_init(&ctx->ref);
+	INIT_LIST_HEAD(&ctx->link);
+	ctx->i915 = i915;
+
+	ret = ida_simple_get(&i915->context_hw_ida,
+			     0, MAX_CONTEXT_HW_ID, GFP_KERNEL);
+	if (ret < 0)
+		goto err_free;
+	ctx->hw_id = ret;
+
+	if (name) {
+		ctx->name = kstrdup(name, GFP_KERNEL);
+		if (!ctx->name)
+			goto err_put;
+
+		ctx->ppgtt = mock_ppgtt(i915, name);
+		if (!ctx->ppgtt)
+			goto err_put;
+	}
+
+	return ctx;
+
+err_free:
+	kfree(ctx);
+	return NULL;
+
+err_put:
+	i915_gem_context_set_closed(ctx);
+	i915_gem_context_put(ctx);
+	return NULL;
+}
+
+void mock_context_close(struct i915_gem_context *ctx)
+{
+	i915_gem_context_set_closed(ctx);
+
+	i915_ppgtt_close(&ctx->ppgtt->base);
+
+	i915_gem_context_put(ctx);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_context.h b/drivers/gpu/drm/i915/selftests/mock_context.h
new file mode 100644
index 000000000000..2427e5c0916a
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_context.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_CONTEXT_H
+#define __MOCK_CONTEXT_H
+
+struct i915_gem_context *
+mock_context(struct drm_i915_private *i915,
+	     const char *name);
+
+void mock_context_close(struct i915_gem_context *ctx);
+
+#endif /* !__MOCK_CONTEXT_H */
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.c b/drivers/gpu/drm/i915/selftests/mock_engine.c
index 4a090bbe807b..8d5ba037064c 100644
--- a/drivers/gpu/drm/i915/selftests/mock_engine.c
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.c
@@ -23,33 +23,185 @@
  */
 
 #include "mock_engine.h"
+#include "mock_request.h"
 
-struct intel_engine_cs *mock_engine(const char *name)
+static struct mock_request *first_request(struct mock_engine *engine)
 {
-	struct intel_engine_cs *engine;
+	return list_first_entry_or_null(&engine->hw_queue,
+					struct mock_request,
+					link);
+}
+
+static void hw_delay_complete(unsigned long data)
+{
+	struct mock_engine *engine = (typeof(engine))data;
+	struct mock_request *request;
+
+	spin_lock(&engine->hw_lock);
+
+	request = first_request(engine);
+	if (request) {
+		list_del_init(&request->link);
+		mock_seqno_advance(&engine->base, request->base.global_seqno);
+	}
+
+	request = first_request(engine);
+	if (request)
+		mod_timer(&engine->hw_delay, jiffies + request->delay);
+
+	spin_unlock(&engine->hw_lock);
+}
+
+static int mock_context_pin(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx)
+{
+	i915_gem_context_get(ctx);
+	return 0;
+}
+
+static void mock_context_unpin(struct intel_engine_cs *engine,
+			       struct i915_gem_context *ctx)
+{
+	i915_gem_context_put(ctx);
+}
+
+static int mock_request_alloc(struct drm_i915_gem_request *request)
+{
+	struct mock_request *mock = container_of(request, typeof(*mock), base);
+
+	INIT_LIST_HEAD(&mock->link);
+	mock->delay = 0;
+
+	request->ring = request->engine->buffer;
+	return 0;
+}
+
+static int mock_emit_flush(struct drm_i915_gem_request *request,
+			   unsigned int flags)
+{
+	return 0;
+}
+
+static void mock_emit_breadcrumb(struct drm_i915_gem_request *request,
+				 u32 *flags)
+{
+}
+
+static void mock_submit_request(struct drm_i915_gem_request *request)
+{
+	struct mock_request *mock = container_of(request, typeof(*mock), base);
+	struct mock_engine *engine =
+		container_of(request->engine, typeof(*engine), base);
+
+	i915_gem_request_submit(request);
+	GEM_BUG_ON(!request->global_seqno);
+
+	spin_lock_irq(&engine->hw_lock);
+	list_add_tail(&mock->link, &engine->hw_queue);
+	if (mock->link.prev == &engine->hw_queue)
+		mod_timer(&engine->hw_delay, jiffies + mock->delay);
+	spin_unlock_irq(&engine->hw_lock);
+}
+
+static struct intel_ring *mock_ring(struct intel_engine_cs *engine)
+{
+	const unsigned long sz = roundup_pow_of_two(sizeof(struct intel_ring));
+	struct intel_ring *ring;
+
+	ring = kzalloc(sizeof(*ring) + sz, GFP_KERNEL);
+	if (!ring)
+		return NULL;
+
+	ring->engine = engine;
+	ring->size = sz;
+	ring->effective_size = sz;
+	ring->vaddr = (void *)(ring + 1);
+
+	INIT_LIST_HEAD(&ring->request_list);
+	ring->last_retired_head = -1;
+	intel_ring_update_space(ring);
+
+	return ring;
+}
+
+struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
+				    const char *name)
+{
+	struct mock_engine *engine;
 	static int id;
 
 	engine = kzalloc(sizeof(*engine) + PAGE_SIZE, GFP_KERNEL);
 	if (!engine)
 		return NULL;
 
-	/* minimal engine setup for seqno */
-	engine->name = name;
-	engine->id = id++;
-	engine->status_page.page_addr = (void *)(engine + 1);
+	engine->base.buffer = mock_ring(&engine->base);
+	if (!engine->base.buffer) {
+		kfree(engine);
+		return NULL;
+	}
 
-	/* minimal breadcrumbs init */
-	spin_lock_init(&engine->breadcrumbs.lock);
-	engine->breadcrumbs.mock = true;
+	/* minimal engine setup for requests */
+	engine->base.i915 = i915;
+	engine->base.name = name;
+	engine->base.id = id++;
+	engine->base.status_page.page_addr = (void *)(engine + 1);
 
-	return engine;
+	engine->base.context_pin = mock_context_pin;
+	engine->base.context_unpin = mock_context_unpin;
+	engine->base.request_alloc = mock_request_alloc;
+	engine->base.emit_flush = mock_emit_flush;
+	engine->base.emit_breadcrumb = mock_emit_breadcrumb;
+	engine->base.submit_request = mock_submit_request;
+
+	engine->base.timeline =
+		&i915->gt.global_timeline.engine[engine->base.id];
+
+	intel_engine_init_breadcrumbs(&engine->base);
+	engine->base.breadcrumbs.mock = true; /* prevent touching HW for irqs */
+
+	/* fake hw queue */
+	spin_lock_init(&engine->hw_lock);
+	setup_timer(&engine->hw_delay,
+		    hw_delay_complete,
+		    (unsigned long)engine);
+	INIT_LIST_HEAD(&engine->hw_queue);
+
+	return &engine->base;
 }
 
 void mock_engine_flush(struct intel_engine_cs *engine)
 {
+	struct mock_engine *mock =
+		container_of(engine, typeof(*mock), base);
+	struct mock_request *request, *rn;
+
+	del_timer_sync(&mock->hw_delay);
+
+	spin_lock_irq(&mock->hw_lock);
+	list_for_each_entry_safe(request, rn, &mock->hw_queue, link) {
+		list_del_init(&request->link);
+		mock_seqno_advance(&mock->base, request->base.global_seqno);
+	}
+	spin_unlock_irq(&mock->hw_lock);
 }
 
 void mock_engine_reset(struct intel_engine_cs *engine)
 {
 	intel_write_status_page(engine, I915_GEM_HWS_INDEX, 0);
 }
+
+void mock_engine_free(struct intel_engine_cs *engine)
+{
+	struct mock_engine *mock =
+		container_of(engine, typeof(*mock), base);
+
+	GEM_BUG_ON(timer_pending(&mock->hw_delay));
+
+	if (engine->last_retired_context)
+		engine->context_unpin(engine, engine->last_retired_context);
+
+	intel_engine_fini_breadcrumbs(engine);
+
+	kfree(engine->buffer);
+	kfree(engine);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_engine.h b/drivers/gpu/drm/i915/selftests/mock_engine.h
index 9cfe9671f860..e5e240216ba3 100644
--- a/drivers/gpu/drm/i915/selftests/mock_engine.h
+++ b/drivers/gpu/drm/i915/selftests/mock_engine.h
@@ -25,9 +25,25 @@
 #ifndef __MOCK_ENGINE_H__
 #define __MOCK_ENGINE_H__
 
-struct intel_engine_cs *mock_engine(const char *name);
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+
+#include "../intel_ringbuffer.h"
+
+struct mock_engine {
+	struct intel_engine_cs base;
+
+	spinlock_t hw_lock;
+	struct list_head hw_queue;
+	struct timer_list hw_delay;
+};
+
+struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
+				    const char *name);
 void mock_engine_flush(struct intel_engine_cs *engine);
 void mock_engine_reset(struct intel_engine_cs *engine);
+void mock_engine_free(struct intel_engine_cs *engine);
 
 static inline void mock_seqno_advance(struct intel_engine_cs *engine, u32 seqno)
 {
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index dbd32b125d15..6a8258eacdcb 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -24,14 +24,46 @@
 
 #include <linux/pm_runtime.h>
 
+#include "mock_engine.h"
+#include "mock_context.h"
+#include "mock_request.h"
 #include "mock_gem_device.h"
 #include "mock_gem_object.h"
 #include "mock_gtt.h"
 
+void mock_device_flush(struct drm_i915_private *i915)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	lockdep_assert_held(&i915->drm.struct_mutex);
+
+	for_each_engine(engine, i915, id)
+		mock_engine_flush(engine);
+
+	i915_gem_retire_requests(i915);
+}
+
 static void mock_device_release(struct drm_device *dev)
 {
 	struct drm_i915_private *i915 = to_i915(dev);
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	mock_device_flush(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
 
+	cancel_delayed_work_sync(&i915->gt.retire_work);
+	cancel_delayed_work_sync(&i915->gt.idle_work);
+
+	mutex_lock(&i915->drm.struct_mutex);
+	for_each_engine(engine, i915, id)
+		mock_engine_free(engine);
+	i915_gem_context_fini(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drain_workqueue(i915->wq);
 	i915_gem_drain_freed_objects(i915);
 
 	mutex_lock(&i915->drm.struct_mutex);
@@ -39,6 +71,10 @@ static void mock_device_release(struct drm_device *dev)
 	i915_gem_timeline_fini(&i915->gt.global_timeline);
 	mutex_unlock(&i915->drm.struct_mutex);
 
+	destroy_workqueue(i915->wq);
+
+	kmem_cache_destroy(i915->dependencies);
+	kmem_cache_destroy(i915->requests);
 	kmem_cache_destroy(i915->vmas);
 	kmem_cache_destroy(i915->objects);
 
@@ -62,9 +98,19 @@ static void release_dev(struct device *dev)
 	kfree(pdev);
 }
 
+static void mock_retire_work_handler(struct work_struct *work)
+{
+}
+
+static void mock_idle_work_handler(struct work_struct *work)
+{
+}
+
 struct drm_i915_private *mock_gem_device(void)
 {
 	struct drm_i915_private *i915;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
 	struct pci_dev *pdev;
 	int err;
 
@@ -98,36 +144,81 @@ struct drm_i915_private *mock_gem_device(void)
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 
+	init_waitqueue_head(&i915->gpu_error.wait_queue);
+	init_waitqueue_head(&i915->gpu_error.reset_queue);
+
+	i915->wq = alloc_ordered_workqueue("mock", 0);
+	if (!i915->wq)
+		goto put_device;
+
 	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
 	init_llist_head(&i915->mm.free_list);
 	INIT_LIST_HEAD(&i915->mm.unbound_list);
 	INIT_LIST_HEAD(&i915->mm.bound_list);
 
+	ida_init(&i915->context_hw_ida);
+
+	INIT_DELAYED_WORK(&i915->gt.retire_work, mock_retire_work_handler);
+	INIT_DELAYED_WORK(&i915->gt.idle_work, mock_idle_work_handler);
+
+	i915->gt.awake = true;
+
 	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
 	if (!i915->objects)
-		goto put_device;
+		goto err_wq;
 
 	i915->vmas = KMEM_CACHE(i915_vma, SLAB_HWCACHE_ALIGN);
 	if (!i915->vmas)
 		goto err_objects;
 
+	i915->requests = KMEM_CACHE(mock_request,
+				    SLAB_HWCACHE_ALIGN |
+				    SLAB_RECLAIM_ACCOUNT |
+				    SLAB_DESTROY_BY_RCU);
+	if (!i915->requests)
+		goto err_vmas;
+
+	i915->dependencies = KMEM_CACHE(i915_dependency,
+					SLAB_HWCACHE_ALIGN |
+					SLAB_RECLAIM_ACCOUNT);
+	if (!i915->dependencies)
+		goto err_requests;
+
 	mutex_lock(&i915->drm.struct_mutex);
 	INIT_LIST_HEAD(&i915->gt.timelines);
 	err = i915_gem_timeline_init__global(i915);
 	if (err) {
 		mutex_unlock(&i915->drm.struct_mutex);
-		goto err_vmas;
+		goto err_dependencies;
 	}
 
 	mock_init_ggtt(i915);
 	mutex_unlock(&i915->drm.struct_mutex);
 
+	mkwrite_device_info(i915)->ring_mask = BIT(0);
+	i915->engine[RCS] = mock_engine(i915, "mock");
+	if (!i915->engine[RCS])
+		goto err_dependencies;
+
+	i915->kernel_context = mock_context(i915, NULL);
+	if (!i915->kernel_context)
+		goto err_engine;
+
 	return i915;
 
+err_engine:
+	for_each_engine(engine, i915, id)
+		mock_engine_free(engine);
+err_dependencies:
+	kmem_cache_destroy(i915->dependencies);
+err_requests:
+	kmem_cache_destroy(i915->requests);
 err_vmas:
 	kmem_cache_destroy(i915->vmas);
 err_objects:
 	kmem_cache_destroy(i915->objects);
+err_wq:
+	destroy_workqueue(i915->wq);
 put_device:
 	put_device(&pdev->dev);
 err:
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.h b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
index c557e33c3953..4cca4d57f52c 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.h
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.h
@@ -4,5 +4,6 @@
 struct drm_i915_private;
 
 struct drm_i915_private *mock_gem_device(void);
+void mock_device_flush(struct drm_i915_private *i915);
 
 #endif /* !__MOCK_GEM_DEVICE_H__ */
diff --git a/drivers/gpu/drm/i915/selftests/mock_request.c b/drivers/gpu/drm/i915/selftests/mock_request.c
new file mode 100644
index 000000000000..e23242d1b88a
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_request.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_request.h"
+
+struct drm_i915_gem_request *
+mock_request(struct intel_engine_cs *engine,
+	     struct i915_gem_context *context,
+	     unsigned long delay)
+{
+	struct drm_i915_gem_request *request;
+	struct mock_request *mock;
+
+	/* NB the i915->requests slab cache is enlarged to fit mock_request */
+	request = i915_gem_request_alloc(engine, context);
+	if (IS_ERR(request))
+		return NULL;
+
+	mock = container_of(request, typeof(*mock), base);
+	mock->delay = delay;
+
+	return &mock->base;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_request.h b/drivers/gpu/drm/i915/selftests/mock_request.h
new file mode 100644
index 000000000000..cc76d4f4eb4e
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_request.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_REQUEST__
+#define __MOCK_REQUEST__
+
+#include <linux/list.h>
+
+#include "../i915_gem_request.h"
+
+struct mock_request {
+	struct drm_i915_gem_request base;
+
+	struct list_head link;
+	unsigned long delay;
+};
+
+struct drm_i915_gem_request *
+mock_request(struct intel_engine_cs *engine,
+	     struct i915_gem_context *context,
+	     unsigned long delay);
+
+#endif /* !__MOCK_REQUEST__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 13/46] drm/i915: Create a fake object for testing huge allocations
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (11 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 12/46] drm/i915: Mock infrastructure for request emission Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 14/46] drm/i915: Add selftests for i915_gem_request Chris Wilson
                   ` (34 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

We would like to be able to exercise huge allocations even on memory
constrained devices. To do this we create an object that allocates only
a few pages and remaps them across its whole range - each page is reused
multiple times. We can therefore pretend we are rendering into a much
larger object.
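
As a minimal sketch (not part of the patch): with nreal backing pages,
page n of the fake object aliases real page n % nreal, so lookups into
its sg table simply wrap around. A later patch in the series checks
exactly this invariant:

	for (n = 0; n < obj->base.size / PAGE_SIZE; n++)
		GEM_BUG_ON(i915_gem_object_get_page(obj, n) !=
			   i915_gem_object_get_page(obj, n % nreal));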

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                  |   1 +
 drivers/gpu/drm/i915/i915_gem_object.h           |  20 ++--
 drivers/gpu/drm/i915/selftests/huge_gem_object.c | 135 +++++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/huge_gem_object.h |  45 ++++++++
 4 files changed, 193 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_gem_object.c
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_gem_object.h

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 778a659a7836..f35fda5d0abc 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4972,4 +4972,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/scatterlist.c"
 #include "selftests/mock_gem_device.c"
+#include "selftests/huge_gem_object.c"
 #endif
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 33a7d031e749..0da69546970b 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -167,14 +167,18 @@ struct drm_i915_gem_object {
 	/** Record of address bit 17 of each page at last unbind. */
 	unsigned long *bit_17;
 
-	struct i915_gem_userptr {
-		uintptr_t ptr;
-		unsigned read_only :1;
-
-		struct i915_mm_struct *mm;
-		struct i915_mmu_object *mmu_object;
-		struct work_struct *work;
-	} userptr;
+	union {
+		struct i915_gem_userptr {
+			uintptr_t ptr;
+			unsigned read_only :1;
+
+			struct i915_mm_struct *mm;
+			struct i915_mmu_object *mmu_object;
+			struct work_struct *work;
+		} userptr;
+
+		unsigned long scratch;
+	};
 
 	/** for phys allocated objects */
 	struct drm_dma_handle *phys_handle;
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
new file mode 100644
index 000000000000..4e681fc13be4
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
@@ -0,0 +1,135 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "huge_gem_object.h"
+
+static void huge_free_pages(struct drm_i915_gem_object *obj,
+			    struct sg_table *pages)
+{
+	unsigned long nreal = obj->scratch / PAGE_SIZE;
+	struct scatterlist *sg;
+
+	for (sg = pages->sgl; sg && nreal--; sg = __sg_next(sg))
+		__free_page(sg_page(sg));
+
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+static struct sg_table *
+huge_get_pages(struct drm_i915_gem_object *obj)
+{
+#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
+	const unsigned long nreal = obj->scratch / PAGE_SIZE;
+	const unsigned long npages = obj->base.size / PAGE_SIZE;
+	struct scatterlist *sg, *src, *end;
+	struct sg_table *pages;
+	unsigned long n;
+
+	pages = kmalloc(sizeof(*pages), GFP);
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	if (sg_alloc_table(pages, npages, GFP)) {
+		kfree(pages);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	sg = pages->sgl;
+	for (n = 0; n < nreal; n++) {
+		struct page *page;
+
+		page = alloc_page(GFP | __GFP_HIGHMEM);
+		if (!page) {
+			sg_mark_end(sg);
+			goto err;
+		}
+
+		sg_set_page(sg, page, PAGE_SIZE, 0);
+		sg = __sg_next(sg);
+	}
+	if (nreal < npages) {
+		for (end = sg, src = pages->sgl; sg; sg = __sg_next(sg)) {
+			sg_set_page(sg, sg_page(src), PAGE_SIZE, 0);
+			src = __sg_next(src);
+			if (src == end)
+				src = pages->sgl;
+		}
+	}
+
+	if (i915_gem_gtt_prepare_pages(obj, pages))
+		goto err;
+
+	return pages;
+
+err:
+	huge_free_pages(obj, pages);
+	return ERR_PTR(-ENOMEM);
+#undef GFP
+}
+
+static void huge_put_pages(struct drm_i915_gem_object *obj,
+			   struct sg_table *pages)
+{
+	i915_gem_gtt_finish_pages(obj, pages);
+	huge_free_pages(obj, pages);
+
+	obj->mm.dirty = false;
+}
+
+static const struct drm_i915_gem_object_ops huge_ops = {
+	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
+		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.get_pages = huge_get_pages,
+	.put_pages = huge_put_pages,
+};
+
+struct drm_i915_gem_object *
+huge_gem_object(struct drm_i915_private *i915,
+		phys_addr_t phys_size,
+		dma_addr_t dma_size)
+{
+	struct drm_i915_gem_object *obj;
+
+	GEM_BUG_ON(!phys_size || phys_size > dma_size);
+	GEM_BUG_ON(!IS_ALIGNED(phys_size, PAGE_SIZE));
+	GEM_BUG_ON(!IS_ALIGNED(dma_size, I915_GTT_PAGE_SIZE));
+
+	if (overflows_type(dma_size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc(i915);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, dma_size);
+	i915_gem_object_init(obj, &huge_ops);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+	obj->cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
+	obj->scratch = phys_size;
+
+	return obj;
+}
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.h b/drivers/gpu/drm/i915/selftests/huge_gem_object.h
new file mode 100644
index 000000000000..a6133a9e8029
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.h
@@ -0,0 +1,45 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HUGE_GEM_OBJECT_H
+#define __HUGE_GEM_OBJECT_H
+
+struct drm_i915_gem_object *
+huge_gem_object(struct drm_i915_private *i915,
+		phys_addr_t phys_size,
+		dma_addr_t dma_size);
+
+static inline phys_addr_t
+huge_gem_object_phys_size(struct drm_i915_gem_object *obj)
+{
+	return obj->scratch;
+}
+
+static inline dma_addr_t
+huge_gem_object_dma_size(struct drm_i915_gem_object *obj)
+{
+	return obj->base.size;
+}
+
+#endif /* !__HUGE_GEM_OBJECT_H */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 14/46] drm/i915: Add selftests for i915_gem_request
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (12 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 13/46] drm/i915: Create a fake object for testing huge allocations Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 15/46] drm/i915: Add a simple request selftest for waiting Chris Wilson
                   ` (33 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

A simple starting point for adding selftests for i915_gem_request: first
mock a device (with engines and contexts) that allows us to construct
and execute a request, along with waiting for the request to complete.
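
As a usage note (not part of the patch): the suite is registered in
i915_mock_selftests.h below and, assuming the module parameter plumbing
introduced earlier in the series, can be run standalone at module load,
e.g. "modprobe i915 mock_selftests=-1"; igt/drv_selftest drives the same
entry points.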

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_request.c            |  5 ++
 drivers/gpu/drm/i915/selftests/i915_gem_request.c  | 68 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  1 +
 3 files changed, 74 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_request.c

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 72b7f7d9461d..bd2aeb290cad 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -1193,3 +1193,8 @@ void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
 	for_each_engine(engine, dev_priv, id)
 		engine_retire_requests(engine);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_request.c"
+#include "selftests/i915_gem_request.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
new file mode 100644
index 000000000000..9921d1c317c0
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -0,0 +1,68 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include "mock_gem_device.h"
+
+static int igt_add_request(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request;
+	int err = -ENOMEM;
+
+	/* Basic preliminary test to create a request and let it loose! */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	request = mock_request(i915->engine[RCS],
+			       i915->kernel_context,
+			       HZ / 10);
+	if (!request)
+		goto out_unlock;
+
+	i915_add_request(request);
+
+	err = 0;
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
+int i915_gem_request_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_add_request),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, i915);
+	drm_dev_unref(&i915->drm);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 80458e2a2b04..bda982404ad3 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -11,3 +11,4 @@
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
+selftest(requests, i915_gem_request_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 15/46] drm/i915: Add a simple request selftest for waiting
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (13 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 14/46] drm/i915: Add selftests for i915_gem_request Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 16/46] drm/i915: Add a simple fence selftest to i915_gem_request Chris Wilson
                   ` (32 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

A trivial selftest to submit a request and wait upon it.
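
For context (hedged, based on the mock infrastructure added earlier in
the series): the mock request only signals once the delay passed to
mock_request() has elapsed, driven by a timer in the mock engine, so a
wait with a timeout of T/2 is expected to return -ETIME while a wait of
a full T should observe completion.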

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 46 +++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index 9921d1c317c0..14584b67c4a0 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -49,10 +49,56 @@ static int igt_add_request(void *arg)
 	return err;
 }
 
+static int igt_wait_request(void *arg)
+{
+	const long T = HZ / 4;
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request;
+	int err = -EINVAL;
+
+	/* Submit a request, then wait upon it */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	request = mock_request(i915->engine[RCS], i915->kernel_context, T);
+	if (!request) {
+		err = -ENOMEM;
+		goto out_unlock;
+	}
+
+	i915_add_request(request);
+
+	if (i915_gem_request_completed(request)) {
+		pr_err("request completed immediately!\n");
+		goto out_unlock;
+	}
+
+	if (i915_wait_request(request, I915_WAIT_LOCKED, T / 2) != -ETIME) {
+		pr_err("request wait succeeded (expected timeout!)\n");
+		goto out_unlock;
+	}
+
+	if (i915_wait_request(request, I915_WAIT_LOCKED, T) == -ETIME) {
+		pr_err("request wait timed out!\n");
+		goto out_unlock;
+	}
+
+	if (!i915_gem_request_completed(request)) {
+		pr_err("request not complete after waiting!\n");
+		goto out_unlock;
+	}
+
+	err = 0;
+out_unlock:
+	mock_device_flush(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_add_request),
+		SUBTEST(igt_wait_request),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 16/46] drm/i915: Add a simple fence selftest to i915_gem_request
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (14 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 15/46] drm/i915: Add a simple request selftest for waiting Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 17/46] drm/i915: Simple selftest to exercise live requests Chris Wilson
                   ` (31 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Do a quick selftest on the interoperability of dma_fence_wait with an
i915_gem_request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 49 +++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index 14584b67c4a0..bc6f5618f22b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -94,11 +94,60 @@ static int igt_wait_request(void *arg)
 	return err;
 }
 
+static int igt_fence_wait(void *arg)
+{
+	const long T = HZ / 4;
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request;
+	int err = -EINVAL;
+
+	/* Submit a request, treat it as a fence and wait upon it */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	request = mock_request(i915->engine[RCS], i915->kernel_context, T);
+	if (!request) {
+		err = -ENOMEM;
+		goto out_locked;
+	}
+
+	i915_add_request(request);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	if (dma_fence_is_signaled(&request->fence)) {
+		pr_err("fence signaled immediately!\n");
+		goto out_device;
+	}
+
+	if (dma_fence_wait_timeout(&request->fence, false, T / 2) != -ETIME) {
+		pr_err("fence wait success after submit (expected timeout)!\n");
+		goto out_device;
+	}
+
+	if (dma_fence_wait_timeout(&request->fence, false, T) <= 0) {
+		pr_err("fence wait timed out (expected success)!\n");
+		goto out_device;
+	}
+
+	if (!dma_fence_is_signaled(&request->fence)) {
+		pr_err("fence unsignaled after waiting!\n");
+		goto out_device;
+	}
+
+	err = 0;
+out_device:
+	mutex_lock(&i915->drm.struct_mutex);
+out_locked:
+	mock_device_flush(i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_add_request),
 		SUBTEST(igt_wait_request),
+		SUBTEST(igt_fence_wait),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 17/46] drm/i915: Simple selftest to exercise live requests
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (15 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 16/46] drm/i915: Add a simple fence selftest to i915_gem_request Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 18/46] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
                   ` (30 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Just create several batches of requests and expect it to not fall over!
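
As a worked example with hypothetical numbers: if a prime-sized batch of
4093 empty requests completes in 8,186,000ns, the test reports
div64_u64(8186000, 4093) = 2000ns of per-request overhead, alongside the
single-request (prime == 1) baseline captured in times[0].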

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c  | 147 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 2 files changed, 148 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index bc6f5618f22b..b5c7cd6633f0 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -22,6 +22,8 @@
  *
  */
 
+#include <linux/prime_numbers.h>
+
 #include "../i915_selftest.h"
 
 #include "mock_gem_device.h"
@@ -161,3 +163,148 @@ int i915_gem_request_mock_selftests(void)
 
 	return err;
 }
+
+struct live_test {
+	struct drm_i915_private *i915;
+	const char *func;
+	const char *name;
+
+	unsigned int reset_count;
+};
+
+static int begin_live_test(struct live_test *t,
+			   struct drm_i915_private *i915,
+			   const char *func,
+			   const char *name)
+{
+	int err;
+
+	t->i915 = i915;
+	t->func = func;
+	t->name = name;
+
+	err = i915_gem_wait_for_idle(i915, I915_WAIT_LOCKED);
+	if (err) {
+		pr_err("%s(%s): failed to idle before, with err=%d!\n",
+		       func, name, err);
+		return err;
+	}
+
+	i915_gem_retire_requests(i915);
+
+	i915->gpu_error.missed_irq_rings = 0;
+	t->reset_count = i915_reset_count(&i915->gpu_error);
+
+	return 0;
+}
+
+static int end_live_test(struct live_test *t)
+{
+	struct drm_i915_private *i915 = t->i915;
+
+	if (wait_for(intel_execlists_idle(i915), 1)) {
+		pr_err("%s(%s): GPU not idle\n", t->func, t->name);
+		return -EIO;
+	}
+
+	if (t->reset_count != i915_reset_count(&i915->gpu_error)) {
+		pr_err("%s(%s): GPU was reset %d times!\n",
+		       t->func, t->name,
+		       i915_reset_count(&i915->gpu_error) - t->reset_count);
+		return -EIO;
+	}
+
+	if (i915->gpu_error.missed_irq_rings) {
+		pr_err("%s(%s): Missed interrupts on engines %lx\n",
+		       t->func, t->name, i915->gpu_error.missed_irq_rings);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int live_nop_request(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_engine_cs *engine;
+	struct live_test t;
+	unsigned int id;
+	int err;
+
+	/* Submit various sized batches of empty requests, to each engine
+	 * (individually), and wait for the batch to complete. We can check
+	 * the overhead of submitting requests to the hardware.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	for_each_engine(engine, i915, id) {
+		IGT_TIMEOUT(end_time);
+		struct drm_i915_gem_request *request;
+		unsigned long n, prime;
+		ktime_t times[2] = {};
+
+		err = begin_live_test(&t, i915, __func__, engine->name);
+		if (err)
+			goto out_unlock;
+
+		for_each_prime_number_from(prime, 1, 8192) {
+			times[1] = ktime_get_raw();
+
+			for (n = 0; n < prime; n++) {
+				request = i915_gem_request_alloc(engine,
+								 i915->kernel_context);
+				if (IS_ERR(request)) {
+					err = PTR_ERR(request);
+					goto out_unlock;
+				}
+
+				/* This space is left intentionally blank.
+				 *
+				 * We do not actually want to perform any
+				 * action with this request, we just want
+				 * to measure the latency in allocation
+				 * and submission of our breadcrumbs -
+				 * ensuring that the bare request is sufficient
+				 * for the system to work (i.e. proper HEAD
+				 * tracking of the rings, interrupt handling,
+				 * etc). It also gives us the lowest bounds
+				 * for latency.
+				 */
+
+				i915_add_request(request);
+			}
+			i915_wait_request(request,
+					  I915_WAIT_LOCKED,
+					  MAX_SCHEDULE_TIMEOUT);
+
+			times[1] = ktime_sub(ktime_get_raw(), times[1]);
+			if (prime == 1)
+				times[0] = times[1];
+
+			if (__igt_timeout(end_time, NULL))
+				break;
+		}
+
+		err = end_live_test(&t);
+		if (err)
+			goto out_unlock;
+
+		pr_info("Request latencies on %s: 1 = %lluns, %lu = %lluns\n",
+			engine->name,
+			ktime_to_ns(times[0]),
+			prime, div64_u64(ktime_to_ns(times[1]), prime));
+	}
+
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
+int i915_gem_request_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(live_nop_request),
+	};
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index f3e17cb10e05..09bf538826df 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -9,3 +9,4 @@
  * Tests are executed in order by igt/drv_selftest
  */
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
+selftest(requests, i915_gem_request_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 18/46] drm/i915: Test simultaneously submitting requests to all engines
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (16 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 17/46] drm/i915: Simple selftest to exercise live requests Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 19/46] drm/i915: Test request ordering between engines Chris Wilson
                   ` (29 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Use a recursive batch to busy-spin on each engine, checking that they
are all running simultaneously.
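
For clarity, the spin works as sketched below (taken from the patch):
each batch's first dword is an MI_BATCH_BUFFER_START pointing back at
the batch itself, so the GPU loops until the test rewrites that dword
from the CPU:

	*cmd = MI_BATCH_BUFFER_END;
	wmb(); /* flush the CPU write before the GPU re-reads the batch */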

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 173 ++++++++++++++++++++++
 1 file changed, 173 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index b5c7cd6633f0..570cfa196ca7 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -301,10 +301,183 @@ static int live_nop_request(void *arg)
 	return err;
 }
 
+static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
+{
+	struct i915_gem_context *ctx = i915->kernel_context;
+	struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct drm_i915_gem_object *obj;
+	const int gen = INTEL_GEN(i915);
+	struct i915_vma *vma;
+	u32 *cmd;
+	int err;
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto err;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		goto err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		goto err;
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		goto err;
+	}
+
+	if (gen >= 8) {
+		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
+		*cmd++ = lower_32_bits(vma->node.start);
+		*cmd++ = upper_32_bits(vma->node.start);
+	} else if (gen >= 6) {
+		*cmd++ = MI_BATCH_BUFFER_START | 1 << 8;
+		*cmd++ = lower_32_bits(vma->node.start);
+	} else if (gen >= 4) {
+		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT;
+		*cmd++ = lower_32_bits(vma->node.start);
+	} else {
+		*cmd++ = MI_BATCH_BUFFER_START | MI_BATCH_GTT | 1;
+		*cmd++ = lower_32_bits(vma->node.start);
+	}
+	*cmd++ = MI_BATCH_BUFFER_END; /* terminate early in case of error */
+
+	wmb();
+	i915_gem_object_unpin_map(obj);
+
+	return vma;
+
+err:
+	i915_gem_object_put(obj);
+	return ERR_PTR(err);
+}
+
+static int live_all_engines(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_engine_cs *engine;
+	struct drm_i915_gem_request *request[I915_NUM_ENGINES];
+	struct i915_vma *batch;
+	struct live_test t;
+	unsigned int id;
+	u32 *cmd;
+	int err;
+
+	/* Check we can submit requests to all engines simultaneously. We
+	 * send a recursive batch to each engine - checking that we don't
+	 * block doing so, and that they don't complete too soon.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	err = begin_live_test(&t, i915, __func__, "");
+	if (err)
+		goto out_unlock;
+
+	batch = recursive_batch(i915);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		pr_err("%s: Unable to create batch, err=%d\n", __func__, err);
+		goto out_unlock;
+	}
+
+	for_each_engine(engine, i915, id) {
+		request[id] = i915_gem_request_alloc(engine,
+						     i915->kernel_context);
+		if (IS_ERR(request[id])) {
+			err = PTR_ERR(request[id]);
+			pr_err("%s: Request allocation failed with err=%d\n",
+			       __func__, err);
+			goto out_request;
+		}
+
+		err = engine->emit_flush(request[id], EMIT_INVALIDATE);
+		GEM_BUG_ON(err);
+
+		err = i915_switch_context(request[id]);
+		GEM_BUG_ON(err);
+
+		err = engine->emit_bb_start(request[id],
+					    batch->node.start,
+					    batch->node.size,
+					    0);
+		GEM_BUG_ON(err);
+		request[id]->batch = batch;
+
+		if (!i915_gem_object_has_active_reference(batch->obj)) {
+			i915_gem_object_get(batch->obj);
+			i915_gem_object_set_active_reference(batch->obj);
+		}
+
+		i915_vma_move_to_active(batch, request[id], 0);
+		i915_gem_request_get(request[id]);
+		i915_add_request(request[id]);
+	}
+
+	for_each_engine(engine, i915, id) {
+		if (i915_gem_request_completed(request[id])) {
+			pr_err("%s(%s): request completed too early!\n",
+			       __func__, engine->name);
+			err = -EINVAL;
+			goto out_request;
+		}
+	}
+
+	cmd = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		pr_err("%s: failed to WC map batch, err=%d\n", __func__, err);
+		goto out_request;
+	}
+	*cmd = MI_BATCH_BUFFER_END;
+	wmb();
+	i915_gem_object_unpin_map(batch->obj);
+
+	for_each_engine(engine, i915, id) {
+		long timeout;
+
+		timeout = i915_wait_request(request[id],
+					    I915_WAIT_LOCKED,
+					    MAX_SCHEDULE_TIMEOUT);
+		if (timeout < 0) {
+			err = timeout;
+			pr_err("%s: error waiting for request on %s, err=%d\n",
+			       __func__, engine->name, err);
+			goto out_request;
+		}
+
+		GEM_BUG_ON(!i915_gem_request_completed(request[id]));
+		i915_gem_request_put(request[id]);
+		request[id] = NULL;
+	}
+
+	err = end_live_test(&t);
+
+out_request:
+	for_each_engine(engine, i915, id)
+		if (request[id])
+			i915_gem_request_put(request[id]);
+	i915_vma_unpin(batch);
+	i915_vma_put(batch);
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(live_nop_request),
+		SUBTEST(live_all_engines),
 	};
 	return i915_subtests(tests, i915);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 19/46] drm/i915: Test request ordering between engines
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (17 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 18/46] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-09 10:20   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 20/46] drm/i915: Live testing of empty requests Chris Wilson
                   ` (28 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

A request on one engine with a dependency on a request on another engine
must wait for completion of the first request before starting.
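
The ordering is enforced purely through fence dependencies rather than
submission order; as a sketch of the chain built below, each per-engine
request awaits the fence of the request submitted just before it:

	err = i915_gem_request_await_dma_fence(request[id], &prev->fence);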

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 137 ++++++++++++++++++++++
 1 file changed, 137 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index 570cfa196ca7..f9c171d1a05b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -473,11 +473,148 @@ static int live_all_engines(void *arg)
 	return err;
 }
 
+static int live_sequential_engines(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *request[I915_NUM_ENGINES] = {};
+	struct drm_i915_gem_request *prev = NULL;
+	struct intel_engine_cs *engine;
+	struct live_test t;
+	unsigned int id;
+	int err;
+
+	/* Check we can submit requests to all engines sequentially, such
+	 * that each successive request waits for the earlier ones. This
+	 * tests that we don't execute requests out of order, even though
+	 * they are running on independent engines.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	err = begin_live_test(&t, i915, __func__, "");
+	if (err)
+		goto out_unlock;
+
+	for_each_engine(engine, i915, id) {
+		struct i915_vma *batch;
+
+		batch = recursive_batch(i915);
+		if (IS_ERR(batch)) {
+			err = PTR_ERR(batch);
+			pr_err("%s: Unable to create batch for %s, err=%d\n",
+			       __func__, engine->name, err);
+			goto out_unlock;
+		}
+
+		request[id] = i915_gem_request_alloc(engine,
+						     i915->kernel_context);
+		if (IS_ERR(request[id])) {
+			err = PTR_ERR(request[id]);
+			pr_err("%s: Request allocation failed for %s with err=%d\n",
+			       __func__, engine->name, err);
+			goto out_request;
+		}
+
+		if (prev) {
+			err = i915_gem_request_await_dma_fence(request[id],
+							       &prev->fence);
+			if (err) {
+				i915_add_request(request[id]);
+				pr_err("%s: Request await failed for %s with err=%d\n",
+				       __func__, engine->name, err);
+				goto out_request;
+			}
+		}
+
+		err = engine->emit_flush(request[id], EMIT_INVALIDATE);
+		GEM_BUG_ON(err);
+
+		err = i915_switch_context(request[id]);
+		GEM_BUG_ON(err);
+
+		err = engine->emit_bb_start(request[id],
+					    batch->node.start,
+					    batch->node.size,
+					    0);
+		GEM_BUG_ON(err);
+		request[id]->batch = batch;
+
+		i915_vma_move_to_active(batch, request[id], 0);
+		i915_gem_object_set_active_reference(batch->obj);
+		i915_vma_get(batch);
+
+		i915_gem_request_get(request[id]);
+		i915_add_request(request[id]);
+
+		prev = request[id];
+	}
+
+	for_each_engine(engine, i915, id) {
+		long timeout;
+		u32 *cmd;
+
+		if (i915_gem_request_completed(request[id])) {
+			pr_err("%s(%s): request completed too early!\n",
+			       __func__, engine->name);
+			err = -EINVAL;
+			goto out_request;
+		}
+
+		cmd = i915_gem_object_pin_map(request[id]->batch->obj,
+					      I915_MAP_WC);
+		if (IS_ERR(cmd)) {
+			err = PTR_ERR(cmd);
+			pr_err("%s: failed to WC map batch, err=%d\n", __func__, err);
+			goto out_request;
+		}
+		*cmd = MI_BATCH_BUFFER_END;
+		wmb();
+		i915_gem_object_unpin_map(request[id]->batch->obj);
+
+		timeout = i915_wait_request(request[id],
+					    I915_WAIT_LOCKED,
+					    MAX_SCHEDULE_TIMEOUT);
+		if (timeout < 0) {
+			err = timeout;
+			pr_err("%s: error waiting for request on %s, err=%d\n",
+			       __func__, engine->name, err);
+			goto out_request;
+		}
+
+		GEM_BUG_ON(!i915_gem_request_completed(request[id]));
+	}
+
+	err = end_live_test(&t);
+
+out_request:
+	for_each_engine(engine, i915, id) {
+		u32 *cmd;
+
+		if (!request[id])
+			break;
+
+		cmd = i915_gem_object_pin_map(request[id]->batch->obj,
+					      I915_MAP_WC);
+		if (!IS_ERR(cmd)) {
+			*cmd = MI_BATCH_BUFFER_END;
+			wmb();
+			i915_gem_object_unpin_map(request[id]->batch->obj);
+		}
+
+		i915_vma_put(request[id]->batch);
+		i915_gem_request_put(request[id]);
+	}
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 int i915_gem_request_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(live_nop_request),
 		SUBTEST(live_all_engines),
+		SUBTEST(live_sequential_engines),
 	};
 	return i915_subtests(tests, i915);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 20/46] drm/i915: Live testing of empty requests
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (18 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 19/46] drm/i915: Test request ordering between engines Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-09 10:30   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 21/46] drm/i915: Add selftests for object allocation, phys Chris Wilson
                   ` (27 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Primarily to emphasize the difference between just advancing the
breadcrumb using a bare request and the overhead of dispatching an
execbuffer.
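
The interesting figure is the delta against live_nop_request: if,
hypothetically, a bare request costs ~2000ns and an empty execbuffer
~3500ns, the ~1500ns difference is what emit_flush(),
i915_switch_context() and emit_bb_start() add per submission.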

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_request.c | 155 ++++++++++++++++++++++
 1 file changed, 155 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
index f9c171d1a05b..92fa55bd68c8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
@@ -301,6 +301,160 @@ static int live_nop_request(void *arg)
 	return err;
 }
 
+static struct i915_vma *empty_batch(struct drm_i915_private *i915)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	u32 *cmd;
+	int err;
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		goto err;
+	}
+	*cmd = MI_BATCH_BUFFER_END;
+	i915_gem_object_unpin_map(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		goto err;
+
+	vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto err;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_GLOBAL);
+	if (err)
+		goto err;
+
+	return vma;
+
+err:
+	i915_gem_object_put(obj);
+	return ERR_PTR(err);
+}
+
+static struct drm_i915_gem_request *
+empty_request(struct intel_engine_cs *engine,
+	      struct i915_vma *batch)
+{
+	struct drm_i915_gem_request *request;
+	int err;
+
+	request = i915_gem_request_alloc(engine,
+					 engine->i915->kernel_context);
+	if (IS_ERR(request))
+		return request;
+
+	err = engine->emit_flush(request, EMIT_INVALIDATE);
+	if (err)
+		goto out_request;
+
+	err = i915_switch_context(request);
+	if (err)
+		goto out_request;
+
+	err = engine->emit_bb_start(request,
+				    batch->node.start,
+				    batch->node.size,
+				    I915_DISPATCH_SECURE);
+	if (err)
+		goto out_request;
+
+out_request:
+	__i915_add_request(request, err == 0);
+	return err ? ERR_PTR(err) : request;
+}
+
+static int live_empty_request(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_engine_cs *engine;
+	struct live_test t;
+	struct i915_vma *batch;
+	unsigned int id;
+	int err = 0;
+
+	/* Submit various sized batches of empty requests, to each engine
+	 * (individually), and wait for the batch to complete. We can check
+	 * the overhead of submitting requests to the hardware.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	batch = empty_batch(i915);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		goto out_unlock;
+	}
+
+	for_each_engine(engine, i915, id) {
+		IGT_TIMEOUT(end_time);
+		struct drm_i915_gem_request *request;
+		unsigned long n, prime;
+		ktime_t times[2] = {};
+
+		err = begin_live_test(&t, i915, __func__, engine->name);
+		if (err)
+			goto out_batch;
+
+		/* Warmup / preload */
+		request = empty_request(engine, batch);
+		if (IS_ERR(request)) {
+			err = PTR_ERR(request);
+			goto out_batch;
+		}
+		i915_wait_request(request,
+				  I915_WAIT_LOCKED,
+				  MAX_SCHEDULE_TIMEOUT);
+
+		for_each_prime_number_from(prime, 1, 8192) {
+			times[1] = ktime_get_raw();
+
+			for (n = 0; n < prime; n++) {
+				request = empty_request(engine, batch);
+				if (IS_ERR(request)) {
+					err = PTR_ERR(request);
+					goto out_batch;
+				}
+			}
+			i915_wait_request(request,
+					  I915_WAIT_LOCKED,
+					  MAX_SCHEDULE_TIMEOUT);
+
+			times[1] = ktime_sub(ktime_get_raw(), times[1]);
+			if (prime == 1)
+				times[0] = times[1];
+
+			if (__igt_timeout(end_time, NULL))
+				break;
+		}
+
+		err = end_live_test(&t);
+		if (err)
+			goto out_batch;
+
+		pr_info("Batch latencies on %s: 1 = %lluns, %lu = %lluns\n",
+			engine->name,
+			ktime_to_ns(times[0]),
+			prime, div64_u64(ktime_to_ns(times[1]), prime));
+	}
+
+out_batch:
+	i915_vma_unpin(batch);
+	i915_vma_put(batch);
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
 {
 	struct i915_gem_context *ctx = i915->kernel_context;
@@ -615,6 +769,7 @@ int i915_gem_request_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_nop_request),
 		SUBTEST(live_all_engines),
 		SUBTEST(live_sequential_engines),
+		SUBTEST(live_empty_request),
 	};
 	return i915_subtests(tests, i915);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 21/46] drm/i915: Add selftests for object allocation, phys
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (19 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 20/46] drm/i915: Live testing of empty requests Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02 13:10   ` Matthew Auld
  2017-02-02  9:08 ` [PATCH 22/46] drm/i915: Add a live seftest for GEM objects Chris Wilson
                   ` (26 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

The phys object is rarely used (only very old machines require a chunk
of physically contiguous pages for a few hardware interactions). As
such, it is not exercised by CI, and to compensate we add a test that
exercises the phys object on all platforms.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                    |   1 +
 drivers/gpu/drm/i915/selftests/i915_gem_object.c   | 120 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 3 files changed, 122 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_object.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f35fda5d0abc..429c5e4350f7 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4973,4 +4973,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #include "selftests/scatterlist.c"
 #include "selftests/mock_gem_device.c"
 #include "selftests/huge_gem_object.c"
+#include "selftests/i915_gem_object.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
new file mode 100644
index 000000000000..db8f631e4993
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -0,0 +1,120 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include "mock_gem_device.h"
+
+static int igt_gem_object(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	int err = -ENOMEM;
+
+	/* Basic test to ensure we can create an object */
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		pr_err("i915_gem_object_create failed, err=%d\n", err);
+		goto out;
+	}
+
+	err = 0;
+	i915_gem_object_put(obj);
+out:
+	return err;
+}
+
+static int igt_phys_object(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	int err = -ENOMEM;
+
+	/* Create an object and bind it to a contiguous set of physical pages,
+	 * i.e. exercise the i915_gem_object_phys API.
+	 */
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		pr_err("i915_gem_object_create failed, err=%d\n", err);
+		goto out;
+	}
+
+	err = -EINVAL;
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
+	mutex_unlock(&i915->drm.struct_mutex);
+	if (err) {
+		pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
+		goto out_obj;
+	}
+
+	if (obj->ops != &i915_gem_phys_ops) {
+		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
+		goto out_obj;
+	}
+
+	if (!atomic_read(&obj->mm.pages_pin_count)) {
+		pr_err("i915_gem_object_attach_phys did not pin its phys pages\n");
+		goto out_obj;
+	}
+
+	/* Make the object dirty so that put_pages must do copy back the data */
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	mutex_unlock(&i915->drm.struct_mutex);
+	if (err) {
+		pr_err("i915_gem_object_set_to_gtt_domain failed with err=%d\n",
+		       err);
+		goto out_obj;
+	}
+
+	err = 0;
+out_obj:
+	i915_gem_object_put(obj);
+out:
+	return err;
+}
+
+int i915_gem_object_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gem_object),
+		SUBTEST(igt_phys_object),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, i915);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index bda982404ad3..2ed94e3a71b7 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -12,3 +12,4 @@ selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
+selftest(objects, i915_gem_object_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 22/46] drm/i915: Add a live selftest for GEM objects
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (20 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 21/46] drm/i915: Add selftests for object allocation, phys Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 23/46] drm/i915: Test partial mappings Chris Wilson
                   ` (25 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Start with a placeholder test just to reassure ourselves that we can
create a test object.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_object.c   | 49 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 2 files changed, 50 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index db8f631e4993..d7330db70063 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -100,6 +100,46 @@ static int igt_phys_object(void *arg)
 	return err;
 }
 
+static int igt_gem_huge(void *arg)
+{
+	const unsigned int nreal = 509; /* just to be awkward */
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	unsigned int n;
+	int err;
+
+	/* Basic sanitycheck of our huge fake object allocation */
+
+	obj = huge_gem_object(i915,
+			      nreal * PAGE_SIZE,
+			      i915->ggtt.base.total + PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err) {
+		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
+		       nreal, obj->base.size / PAGE_SIZE, err);
+		goto out;
+	}
+
+	for (n = 0; n < obj->base.size / PAGE_SIZE; n++) {
+		if (i915_gem_object_get_page(obj, n) !=
+		    i915_gem_object_get_page(obj, n % nreal)) {
+			pr_err("Page lookup mismatch at index %u [%u]\n",
+			       n, n % nreal);
+			err = -EINVAL;
+			goto out_unpin;
+		}
+	}
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int i915_gem_object_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
@@ -118,3 +158,12 @@ int i915_gem_object_mock_selftests(void)
 	drm_dev_unref(&i915->drm);
 	return err;
 }
+
+int i915_gem_object_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gem_huge),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 09bf538826df..1822ac99d577 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -10,3 +10,4 @@
  */
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
 selftest(requests, i915_gem_request_live_selftests)
+selftest(object, i915_gem_object_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 23/46] drm/i915: Test partial mappings
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (21 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 22/46] drm/i915: Add a live seftest for GEM objects Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 24/46] drm/i915: Test exhaustion of the mmap space Chris Wilson
                   ` (24 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Create partial mappings to cover a large object, investigating tiling
(fenced regions) and VMA reuse.
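
A worked example of the swizzle check below (derived from the patch):
swizzle_bit(b, v) moves bit b of the offset down to bit 6, so the XOR in
tiled_offset() flips the 64-byte granule whenever that higher address
bit is set. With I915_BIT_6_SWIZZLE_9, for instance:

	/* v ^= (v & BIT(9)) >> 3, so offset 0x200 is expected at 0x240 */

The test writes a page index through the fenced GTT mapping, applies the
same address translation the fence performs in hardware, and reads the
value back through kmap() to verify it landed in the expected page of
the object.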

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_object.c | 293 +++++++++++++++++++++++
 1 file changed, 293 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index d7330db70063..140bae2c8ad2 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -140,6 +140,298 @@ static int igt_gem_huge(void *arg)
 	return err;
 }
 
+struct tile {
+	unsigned int width;
+	unsigned int height;
+	unsigned int stride;
+	unsigned int size;
+	unsigned int tiling;
+	unsigned int swizzle;
+};
+
+static u64 swizzle_bit(unsigned int bit, u64 offset)
+{
+	return (offset & BIT_ULL(bit)) >> (bit - 6);
+}
+
+static u64 tiled_offset(const struct tile *tile, u64 v)
+{
+	u64 x, y;
+
+	if (tile->tiling == I915_TILING_NONE)
+		return v;
+
+	y = div64_u64_rem(v, tile->stride, &x);
+	v = div64_u64_rem(y, tile->height, &y) * tile->stride * tile->height;
+
+	if (tile->tiling == I915_TILING_X) {
+		v += y * tile->width;
+		v += div64_u64_rem(x, tile->width, &x) << tile->size;
+		v += x;
+	} else {
+		const unsigned int ytile_span = 16;
+		const unsigned int ytile_height = 32 * ytile_span;
+
+		v += y * ytile_span;
+		v += div64_u64_rem(x, ytile_span, &x) * ytile_height;
+		v += x;
+	}
+
+	switch (tile->swizzle) {
+	case I915_BIT_6_SWIZZLE_9:
+		v ^= swizzle_bit(9, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_10:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(10, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_11:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(11, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_10_11:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(10, v) ^ swizzle_bit(11, v);
+		break;
+	}
+
+	return v;
+}
+
+static int check_partial_mapping(struct drm_i915_gem_object *obj,
+				 const struct tile *tile,
+				 unsigned long end_time)
+{
+	const unsigned int nreal = obj->scratch / PAGE_SIZE;
+	const unsigned long npages = obj->base.size / PAGE_SIZE;
+	struct i915_vma *vma;
+	unsigned long page;
+	int err;
+
+	if (igt_timeout(end_time,
+			"%s: timed out before tiling=%d stride=%d\n",
+			__func__, tile->tiling, tile->stride))
+		return -EINTR;
+
+	err = i915_gem_object_set_tiling(obj, tile->tiling, tile->stride);
+	if (err)
+		return err;
+
+	GEM_BUG_ON(i915_gem_object_get_tiling(obj) != tile->tiling);
+	GEM_BUG_ON(i915_gem_object_get_stride(obj) != tile->stride);
+
+	for_each_prime_number_from(page, 1, npages) {
+		struct i915_ggtt_view view =
+			compute_partial_view(obj, page, MIN_CHUNK_PAGES);
+		u32 __iomem *io;
+		struct page *p;
+		unsigned int n;
+		u64 offset;
+		u32 *cpu;
+
+		GEM_BUG_ON(view.partial.size > nreal);
+
+		err = i915_gem_object_set_to_gtt_domain(obj, true);
+		if (err)
+			return err;
+
+		vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE);
+		if (IS_ERR(vma)) {
+			pr_err("Failed to pin partial view: offset=%lu\n",
+			       page);
+			return PTR_ERR(vma);
+		}
+
+		n = page - view.partial.offset;
+		GEM_BUG_ON(n >= view.partial.size);
+
+		io = i915_vma_pin_iomap(vma);
+		i915_vma_unpin(vma);
+		if (IS_ERR(io)) {
+			pr_err("Failed to iomap partial view: offset=%lu\n",
+			       page);
+			return PTR_ERR(io);
+		}
+
+		err = i915_vma_get_fence(vma);
+		if (err) {
+			pr_err("Failed to get fence for partial view: offset=%lu\n",
+			       page);
+			i915_vma_unpin_iomap(vma);
+			return err;
+		}
+
+		iowrite32(page, io + n * PAGE_SIZE/sizeof(*io));
+		i915_vma_unpin_iomap(vma);
+
+		offset = tiled_offset(tile, page << PAGE_SHIFT);
+		if (offset >= obj->base.size)
+			continue;
+
+		i915_gem_object_flush_gtt_write_domain(obj);
+
+		p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
+		cpu = kmap(p) + offset_in_page(offset);
+		drm_clflush_virt_range(cpu, sizeof(*cpu));
+		if (*cpu != (u32)page) {
+			pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
+			       page, n,
+			       view.partial.offset,
+			       view.partial.size,
+			       vma->size >> PAGE_SHIFT,
+			       tile_row_pages(obj),
+			       vma->fence ? vma->fence->id : -1, tile->tiling, tile->stride,
+			       offset >> PAGE_SHIFT,
+			       (unsigned int)offset_in_page(offset),
+			       offset,
+			       (u32)page, *cpu);
+			err = -EINVAL;
+		}
+		*cpu = 0;
+		drm_clflush_virt_range(cpu, sizeof(*cpu));
+		kunmap(p);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int igt_partial_tiling(void *arg)
+{
+	const unsigned int nreal = 1 << 12; /* largest tile row x2 */
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	int tiling;
+	int err;
+
+	/* We want to check the page mapping and fencing of a large object
+	 * mmapped through the GTT. The object we create is larger than can
+	 * possibly be mmaped as a whole, and so we must use partial GGTT vma.
+	 * We then check that a write through each partial GGTT vma ends up
+	 * in the right set of pages within the object, and with the expected
+	 * tiling, which we verify by manual swizzling.
+	 */
+
+	obj = huge_gem_object(i915,
+			      nreal << PAGE_SHIFT,
+			      (1 + next_prime_number(i915->ggtt.base.total >> PAGE_SHIFT)) << PAGE_SHIFT);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err) {
+		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
+		       nreal, obj->base.size / PAGE_SIZE, err);
+		goto out;
+	}
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	if (1) {
+		IGT_TIMEOUT(end);
+		struct tile tile;
+
+		tile.height = 1;
+		tile.width = 1;
+		tile.size = 0;
+		tile.stride = 0;
+		tile.swizzle = I915_BIT_6_SWIZZLE_NONE;
+		tile.tiling = I915_TILING_NONE;
+
+		err = check_partial_mapping(obj, &tile, end);
+		if (err && err != -EINTR)
+			goto out_unlock;
+	}
+
+	for (tiling = I915_TILING_X; tiling <= I915_TILING_Y; tiling++) {
+		IGT_TIMEOUT(end);
+		unsigned int max_pitch;
+		unsigned int pitch;
+		struct tile tile;
+
+		tile.tiling = tiling;
+		switch (tiling) {
+		case I915_TILING_X:
+			tile.swizzle = i915->mm.bit_6_swizzle_x;
+			break;
+		case I915_TILING_Y:
+			tile.swizzle = i915->mm.bit_6_swizzle_y;
+			break;
+		}
+
+		if (tile.swizzle == I915_BIT_6_SWIZZLE_UNKNOWN ||
+		    tile.swizzle == I915_BIT_6_SWIZZLE_9_10_17)
+			continue;
+
+		if (INTEL_GEN(i915) <= 2) {
+			tile.height = 16;
+			tile.width = 128;
+			tile.size = 11;
+		} else if (tile.tiling == I915_TILING_Y &&
+			   HAS_128_BYTE_Y_TILING(i915)) {
+			tile.height = 32;
+			tile.width = 128;
+			tile.size = 12;
+		} else {
+			tile.height = 8;
+			tile.width = 512;
+			tile.size = 12;
+		}
+
+		if (INTEL_GEN(i915) < 4)
+			max_pitch = 8192 / tile.width;
+		else if (INTEL_GEN(i915) < 7)
+			max_pitch = 128 * I965_FENCE_MAX_PITCH_VAL / tile.width;
+		else
+			max_pitch = 128 * GEN7_FENCE_MAX_PITCH_VAL / tile.width;
+
+		for (pitch = max_pitch; pitch; pitch >>= 1) {
+			tile.stride = tile.width * pitch;
+			err = check_partial_mapping(obj, &tile, end);
+			if (err == -EINTR)
+				goto next_tiling;
+			if (err)
+				goto out_unlock;
+
+			if (pitch > 2 && INTEL_GEN(i915) >= 4) {
+				tile.stride = tile.width * (pitch - 1);
+				err = check_partial_mapping(obj, &tile, end);
+				if (err == -EINTR)
+					goto next_tiling;
+				if (err)
+					goto out_unlock;
+			}
+
+			if (pitch < max_pitch && INTEL_GEN(i915) >= 4) {
+				tile.stride = tile.width * (pitch + 1);
+				err = check_partial_mapping(obj, &tile, end);
+				if (err == -EINTR)
+					goto next_tiling;
+				if (err)
+					goto out_unlock;
+			}
+		}
+
+		if (INTEL_GEN(i915) >= 4) {
+			for_each_prime_number(pitch, max_pitch) {
+				tile.stride = tile.width * pitch;
+				err = check_partial_mapping(obj, &tile, end);
+				if (err == -EINTR)
+					goto next_tiling;
+				if (err)
+					goto out_unlock;
+			}
+		}
+
+next_tiling: ;
+	}
+
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	i915_gem_object_unpin_pages(obj);
+out:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int i915_gem_object_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
@@ -163,6 +455,7 @@ int i915_gem_object_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_gem_huge),
+		SUBTEST(igt_partial_tiling),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 24/46] drm/i915: Test exhaustion of the mmap space
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (22 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 23/46] drm/i915: Test partial mappings Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 25/46] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
                   ` (23 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Exercise exhaustion of the mmap offset space, an unlikely error
condition that we can simulate by stealing most of the range before
trying to insert new objects.
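
The "stealing" is done by reserving almost all of the first hole in the
vma offset manager, leaving only a single page free. A sketch of that
step (the node must be zeroed by the caller; the full test below also
removes the node again on exit):

	static int trim_mmap_space(struct drm_mm *mm, struct drm_mm_node *resv)
	{
		u64 hole_start, hole_end;
		struct drm_mm_node *hole;

		drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
			resv->start = hole_start;
			resv->size = hole_end - hole_start - 1; /* leave one unit */
			return drm_mm_reserve_node(mm, resv);
		}

		return -ENOSPC; /* no hole found (illustrative fallback) */
	}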

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_object.c | 138 +++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index 140bae2c8ad2..3cdeb83f8742 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -25,6 +25,7 @@
 #include "../i915_selftest.h"
 
 #include "mock_gem_device.h"
+#include "huge_gem_object.h"
 
 static int igt_gem_object(void *arg)
 {
@@ -432,6 +433,142 @@ next_tiling: ;
 	return err;
 }
 
+static int make_obj_busy(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *vma;
+	int err;
+
+	vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		return err;
+
+	rq = i915_gem_request_alloc(i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		i915_vma_unpin(vma);
+		return PTR_ERR(rq);
+	}
+
+	i915_vma_move_to_active(vma, rq, 0);
+	i915_add_request(rq);
+
+	i915_gem_object_set_active_reference(obj);
+	i915_vma_unpin(vma);
+	return 0;
+}
+
+static bool assert_mmap_offset(struct drm_i915_private *i915,
+			       unsigned long size,
+			       int expected)
+{
+	struct drm_i915_gem_object *obj;
+	int err;
+
+	obj = i915_gem_object_create_internal(i915, size);
+	if (IS_ERR(obj))
+		return false; /* object creation failed, nothing to assert */
+
+	err = i915_gem_object_create_mmap_offset(obj);
+	i915_gem_object_put(obj);
+
+	return err == expected;
+}
+
+static int igt_mmap_offset_exhaustion(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_mm *mm = &i915->drm.vma_offset_manager->vm_addr_space_mm;
+	struct drm_i915_gem_object *obj;
+	struct drm_mm_node resv, *hole;
+	u64 hole_start, hole_end;
+	int loop, err;
+
+	/* Trim the device mmap space to only a page */
+	memset(&resv, 0, sizeof(resv));
+	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
+		resv.start = hole_start;
+		resv.size = hole_end - hole_start - 1; /* PAGE_SIZE units */
+		err = drm_mm_reserve_node(mm, &resv);
+		if (err) {
+			pr_err("Failed to trim VMA manager, err=%d\n", err);
+			return err;
+		}
+		break;
+	}
+
+	/* Just fits! */
+	if (!assert_mmap_offset(i915, PAGE_SIZE, 0)) {
+		pr_err("Unable to insert object into single page hole\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	/* Too large */
+	if (!assert_mmap_offset(i915, 2*PAGE_SIZE, -ENOSPC)) {
+		pr_err("Unexpectedly succeeded in inserting too large object into single page hole\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	/* Fill the hole, further allocation attempts should then fail */
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto out;
+	}
+
+	err = i915_gem_object_create_mmap_offset(obj);
+	if (err) {
+		pr_err("Unable to insert object into reclaimed hole\n");
+		goto err_obj;
+	}
+
+	if (!assert_mmap_offset(i915, PAGE_SIZE, -ENOSPC)) {
+		pr_err("Unexpectedly succeeded in inserting object into no holes!\n");
+		err = -EINVAL;
+		goto err_obj;
+	}
+
+	i915_gem_object_put(obj);
+
+	/* Now fill with busy dead objects that we expect to reap */
+	for (loop = 0; loop < 3; loop++) {
+		obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto out;
+		}
+
+		mutex_lock(&i915->drm.struct_mutex);
+		err = make_obj_busy(obj);
+		mutex_unlock(&i915->drm.struct_mutex);
+		if (err) {
+			pr_err("[loop %d] Failed to busy the object\n", loop);
+			goto err_obj;
+		}
+
+		GEM_BUG_ON(!i915_gem_object_is_active(obj));
+		err = i915_gem_object_create_mmap_offset(obj);
+		if (err) {
+			pr_err("[loop %d] i915_gem_object_create_mmap_offset failed with err=%d\n",
+			       loop, err);
+			goto out;
+		}
+	}
+
+out:
+	drm_mm_remove_node(&resv);
+	return err;
+err_obj:
+	i915_gem_object_put(obj);
+	goto out;
+}
+
 int i915_gem_object_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
@@ -456,6 +593,7 @@ int i915_gem_object_live_selftests(struct drm_i915_private *i915)
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_gem_huge),
 		SUBTEST(igt_partial_tiling),
+		SUBTEST(igt_mmap_offset_exhaustion),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 25/46] drm/i915: Test coherency of and barriers between cache domains
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (23 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 24/46] drm/i915: Test exhaustion of the mmap space Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 26/46] drm/i915: Move uncore selfchecks to live selftest infrastructure Chris Wilson
                   ` (22 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Write into an object using WB, WC, GTT, and GPU paths and make sure that
our internal API is sufficient to ensure coherent reads and writes.

v2: Avoid invalid free upon allocation error
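
Each access path is expressed as a (set, get) pair so that every
write/overwrite/read combination can be iterated mechanically. A rough
sketch of one such triple, using the same callback shape as the
igt_coherency_mode table added below (check_one() is an illustrative
helper, not part of the patch):

	static int check_one(struct drm_i915_gem_object *obj,
			     unsigned long offset, u32 value,
			     const struct igt_coherency_mode *over,
			     const struct igt_coherency_mode *write,
			     const struct igt_coherency_mode *read)
	{
		u32 found;
		int err;

		err = over->set(obj, offset, ~value);	/* stale data first */
		if (err)
			return err;

		err = write->set(obj, offset, value);	/* then the real value */
		if (err)
			return err;

		err = read->get(obj, offset, &found);	/* read via a third path */
		if (err)
			return err;

		return found == value ? 0 : -EINVAL;
	}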

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c                    |   1 +
 .../gpu/drm/i915/selftests/i915_gem_coherency.c    | 364 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 3 files changed, 366 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_coherency.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 429c5e4350f7..2749c64a35a3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4974,4 +4974,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #include "selftests/mock_gem_device.c"
 #include "selftests/huge_gem_object.c"
 #include "selftests/i915_gem_object.c"
+#include "selftests/i915_gem_coherency.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
new file mode 100644
index 000000000000..b5de1828221d
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
@@ -0,0 +1,364 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/prime_numbers.h>
+
+#include "../i915_selftest.h"
+#include "i915_random.h"
+
+static int cpu_set(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 v)
+{
+	unsigned int needs_clflush;
+	struct page *page;
+	typeof(v) *map;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_write(obj, &needs_clflush);
+	if (err)
+		return err;
+
+	page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
+	map = kmap_atomic(page);
+	if (needs_clflush & CLFLUSH_BEFORE)
+		clflush(map+offset_in_page(offset) / sizeof(*map));
+	map[offset_in_page(offset) / sizeof(*map)] = v;
+	if (needs_clflush & CLFLUSH_AFTER)
+		clflush(map+offset_in_page(offset) / sizeof(*map));
+	kunmap_atomic(map);
+
+	i915_gem_obj_finish_shmem_access(obj);
+	return 0;
+}
+
+static int cpu_get(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 *v)
+{
+	unsigned int needs_clflush;
+	struct page *page;
+	typeof(v) map;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_read(obj, &needs_clflush);
+	if (err)
+		return err;
+
+	page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
+	map = kmap_atomic(page);
+	if (needs_clflush & CLFLUSH_BEFORE)
+		clflush(map+offset_in_page(offset) / sizeof(*map));
+	*v = map[offset_in_page(offset) / sizeof(*map)];
+	kunmap_atomic(map);
+
+	i915_gem_obj_finish_shmem_access(obj);
+	return 0;
+}
+
+static int gtt_set(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 v)
+{
+	struct i915_vma *vma;
+	typeof(v) *map;
+	int err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		return err;
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	map = i915_vma_pin_iomap(vma);
+	i915_vma_unpin(vma);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	map[offset / sizeof(*map)] = v;
+	i915_vma_unpin_iomap(vma);
+
+	return 0;
+}
+
+static int gtt_get(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 *v)
+{
+	struct i915_vma *vma;
+	typeof(v) map;
+	int err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		return err;
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	map = i915_vma_pin_iomap(vma);
+	i915_vma_unpin(vma);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	*v = map[offset / sizeof(*map)];
+	i915_vma_unpin_iomap(vma);
+
+	return 0;
+}
+
+static int wc_set(struct drm_i915_gem_object *obj,
+		  unsigned long offset,
+		  u32 v)
+{
+	typeof(v) *map;
+	int err;
+
+	/* XXX GTT write followed by WC write go missing */
+	i915_gem_object_flush_gtt_write_domain(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		return err;
+
+	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	map[offset / sizeof(*map)] = v;
+	i915_gem_object_unpin_map(obj);
+
+	return 0;
+}
+
+static int wc_get(struct drm_i915_gem_object *obj,
+		  unsigned long offset,
+		  u32 *v)
+{
+	typeof(v) map;
+	int err;
+
+	/* XXX WC write followed by GTT write go missing */
+	i915_gem_object_flush_gtt_write_domain(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		return err;
+
+	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	*v = map[offset / sizeof(*map)];
+	i915_gem_object_unpin_map(obj);
+
+	return 0;
+}
+
+static int gpu_set(struct drm_i915_gem_object *obj,
+		   unsigned long offset,
+		   u32 v)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *vma;
+	int err;
+
+	err = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (err)
+		return err;
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	rq = i915_gem_request_alloc(i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		i915_vma_unpin(vma);
+		return PTR_ERR(rq);
+	}
+
+	err = intel_ring_begin(rq, 4);
+	if (err) {
+		__i915_add_request(rq, false);
+		i915_vma_unpin(vma);
+		return err;
+	}
+
+	if (INTEL_GEN(i915) >= 8) {
+		intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM_GEN4 | 1 << 22);
+		intel_ring_emit(rq->ring, lower_32_bits(i915_ggtt_offset(vma) + offset));
+		intel_ring_emit(rq->ring, upper_32_bits(i915_ggtt_offset(vma) + offset));
+		intel_ring_emit(rq->ring, v);
+	} else if (INTEL_GEN(i915) >= 4) {
+		intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM_GEN4 | 1 << 22);
+		intel_ring_emit(rq->ring, 0);
+		intel_ring_emit(rq->ring, i915_ggtt_offset(vma) + offset);
+		intel_ring_emit(rq->ring, v);
+	} else {
+		intel_ring_emit(rq->ring, MI_STORE_DWORD_IMM | 1 << 22);
+		intel_ring_emit(rq->ring, i915_ggtt_offset(vma) + offset);
+		intel_ring_emit(rq->ring, v);
+		intel_ring_emit(rq->ring, MI_NOOP);
+	}
+	intel_ring_advance(rq->ring);
+
+	i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
+	i915_vma_unpin(vma);
+
+	reservation_object_lock(obj->resv, NULL);
+	reservation_object_add_excl_fence(obj->resv, &rq->fence);
+	reservation_object_unlock(obj->resv);
+
+	__i915_add_request(rq, true);
+
+	return 0;
+}
+
+static const struct igt_coherency_mode {
+	const char *name;
+	int (*set)(struct drm_i915_gem_object *, unsigned long offset, u32 v);
+	int (*get)(struct drm_i915_gem_object *, unsigned long offset, u32 *v);
+} igt_coherency_mode[] = {
+	{ "cpu", cpu_set, cpu_get },
+	{ "gtt", gtt_set, gtt_get },
+	{ "wc", wc_set, wc_get },
+	{ "gpu", gpu_set, NULL },
+	{ },
+};
+
+static int igt_gem_coherency(void *arg)
+{
+	const unsigned int ncachelines = PAGE_SIZE/64;
+	I915_RND_STATE(prng);
+	struct drm_i915_private *i915 = arg;
+	const struct igt_coherency_mode *read, *write, *over;
+	struct drm_i915_gem_object *obj;
+	unsigned long count, n;
+	u32 *offsets, *values;
+	int err;
+
+	/* We repeatedly write, overwrite and read from a sequence of
+	 * cachelines in order to try and detect incoherency (unflushed writes
+	 * from either the CPU or GPU). Each setter/getter uses our cache
+	 * domain API which should prevent incoherency.
+	 */
+
+	offsets = kmalloc_array(ncachelines, 2*sizeof(u32), GFP_KERNEL);
+	if (!offsets)
+		return -ENOMEM;
+	for (count = 0; count < ncachelines; count++)
+		offsets[count] = count * 64 + 4 * (count % 16);
+
+	values = offsets + ncachelines;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	for (over = igt_coherency_mode; over->name; over++) {
+		if (!over->set)
+			continue;
+
+		for (write = igt_coherency_mode; write->name; write++) {
+			if (!write->set)
+				continue;
+
+			for (read = igt_coherency_mode; read->name; read++) {
+				if (!read->get)
+					continue;
+
+				for_each_prime_number_from(count, 1, ncachelines) {
+					obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+					if (IS_ERR(obj)) {
+						err = PTR_ERR(obj);
+						goto unlock;
+					}
+
+					i915_random_reorder(offsets, ncachelines, &prng);
+					for (n = 0; n < count; n++)
+						values[n] = prandom_u32_state(&prng);
+
+					for (n = 0; n < count; n++) {
+						err = over->set(obj, offsets[n], ~values[n]);
+						if (err) {
+							pr_err("Failed to set stale value[%ld/%ld] in object using %s, err=%d\n",
+							       n, count, over->name, err);
+							goto put_object;
+						}
+					}
+
+					for (n = 0; n < count; n++) {
+						err = write->set(obj, offsets[n], values[n]);
+						if (err) {
+							pr_err("Failed to set value[%ld/%ld] in object using %s, err=%d\n",
+							       n, count, write->name, err);
+							goto put_object;
+						}
+					}
+
+					for (n = 0; n < count; n++) {
+						u32 found;
+
+						err = read->get(obj, offsets[n], &found);
+						if (err) {
+							pr_err("Failed to get value[%ld/%ld] in object using %s, err=%d\n",
+							       n, count, read->name, err);
+							goto put_object;
+						}
+
+						if (found != values[n]) {
+							pr_err("Value[%ld/%ld] mismatch, (overwrite with %s) wrote [%s] %x read [%s] %x (inverse %x), at offset %x\n",
+							       n, count, over->name,
+							       write->name, values[n],
+							       read->name, found,
+							       ~values[n], offsets[n]);
+							err = -EINVAL;
+							goto put_object;
+						}
+					}
+
+					__i915_gem_object_release_unless_active(obj);
+				}
+			}
+		}
+	}
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	kfree(offsets);
+	return err;
+
+put_object:
+	__i915_gem_object_release_unless_active(obj);
+	goto unlock;
+}
+
+int i915_gem_coherency_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gem_coherency),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 1822ac99d577..fde9ef22cfe8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -11,3 +11,4 @@
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
 selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
+selftest(coherency, i915_gem_coherency_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 26/46] drm/i915: Move uncore selfchecks to live selftest infrastructure
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (24 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 25/46] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 27/46] drm/i915: Test all fw tables during mock selftests Chris Wilson
                   ` (21 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Now that the kselftest infrastructure exists, put it to use and add to
it the existing consistency checks on the fw register lookup tables.

v2: s/tabke/table/

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/intel_uncore.c                | 52 +-----------
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 drivers/gpu/drm/i915/selftests/intel_uncore.c      | 99 ++++++++++++++++++++++
 3 files changed, 104 insertions(+), 48 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_uncore.c

diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index 3d243fefe09b..9bc103a00b72 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -642,33 +642,6 @@ find_fw_domain(struct drm_i915_private *dev_priv, u32 offset)
 	return entry->domains;
 }
 
-static void
-intel_fw_table_check(struct drm_i915_private *dev_priv)
-{
-	const struct intel_forcewake_range *ranges;
-	unsigned int num_ranges;
-	s32 prev;
-	unsigned int i;
-
-	if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG))
-		return;
-
-	ranges = dev_priv->uncore.fw_domains_table;
-	if (!ranges)
-		return;
-
-	num_ranges = dev_priv->uncore.fw_domains_table_entries;
-
-	for (i = 0, prev = -1; i < num_ranges; i++, ranges++) {
-		WARN_ON_ONCE(IS_GEN9(dev_priv) &&
-			     (prev + 1) != (s32)ranges->start);
-		WARN_ON_ONCE(prev >= (s32)ranges->start);
-		prev = ranges->start;
-		WARN_ON_ONCE(prev >= (s32)ranges->end);
-		prev = ranges->end;
-	}
-}
-
 #define GEN_FW_RANGE(s, e, d) \
 	{ .start = (s), .end = (e), .domains = (d) }
 
@@ -707,23 +680,6 @@ static const i915_reg_t gen8_shadowed_regs[] = {
 	/* TODO: Other registers are not yet used */
 };
 
-static void intel_shadow_table_check(void)
-{
-	const i915_reg_t *reg = gen8_shadowed_regs;
-	s32 prev;
-	u32 offset;
-	unsigned int i;
-
-	if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG))
-		return;
-
-	for (i = 0, prev = -1; i < ARRAY_SIZE(gen8_shadowed_regs); i++, reg++) {
-		offset = i915_mmio_reg_offset(*reg);
-		WARN_ON_ONCE(prev >= (s32)offset);
-		prev = offset;
-	}
-}
-
 static int mmio_reg_cmp(u32 key, const i915_reg_t *reg)
 {
 	u32 offset = i915_mmio_reg_offset(*reg);
@@ -1404,10 +1360,6 @@ void intel_uncore_init(struct drm_i915_private *dev_priv)
 		break;
 	}
 
-	intel_fw_table_check(dev_priv);
-	if (INTEL_GEN(dev_priv) >= 8)
-		intel_shadow_table_check();
-
 	i915_check_and_clear_faults(dev_priv);
 }
 #undef ASSIGN_WRITE_MMIO_VFUNCS
@@ -1925,3 +1877,7 @@ intel_uncore_forcewake_for_reg(struct drm_i915_private *dev_priv,
 
 	return fw_domains;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_uncore.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index fde9ef22cfe8..c060bf24928e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -9,6 +9,7 @@
  * Tests are executed in order by igt/drv_selftest
  */
 selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
+selftest(uncore, intel_uncore_live_selftests)
 selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
new file mode 100644
index 000000000000..6b27dca78f69
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -0,0 +1,99 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+static int intel_fw_table_check(struct drm_i915_private *i915)
+{
+	const struct intel_forcewake_range *ranges;
+	unsigned int num_ranges, i;
+	s32 prev;
+
+	ranges = i915->uncore.fw_domains_table;
+	if (!ranges)
+		return 0;
+
+	num_ranges = i915->uncore.fw_domains_table_entries;
+	for (i = 0, prev = -1; i < num_ranges; i++, ranges++) {
+		/* Check that the table is watertight */
+		if (IS_GEN9(i915) && (prev + 1) != (s32)ranges->start) {
+			pr_err("%s: entry[%d]:(%x, %x) is not watertight to previous (%x)\n",
+			       __func__, i, ranges->start, ranges->end, prev);
+			return -EINVAL;
+		}
+
+		/* Check that the table never goes backwards */
+		if (prev >= (s32)ranges->start) {
+			pr_err("%s: entry[%d]:(%x, %x) is less than the previous (%x)\n",
+			       __func__, i, ranges->start, ranges->end, prev);
+			return -EINVAL;
+		}
+
+		/* Check that the entry is valid */
+		if (ranges->start >= ranges->end) {
+			pr_err("%s: entry[%d]:(%x, %x) has negative length\n",
+			       __func__, i, ranges->start, ranges->end);
+			return -EINVAL;
+		}
+
+		prev = ranges->end;
+	}
+
+	return 0;
+}
+
+static int intel_shadow_table_check(void)
+{
+	const i915_reg_t *reg = gen8_shadowed_regs;
+	unsigned int i;
+	s32 prev;
+
+	for (i = 0, prev = -1; i < ARRAY_SIZE(gen8_shadowed_regs); i++, reg++) {
+		u32 offset = i915_mmio_reg_offset(*reg);
+		if (prev >= (s32)offset) {
+			pr_err("%s: entry[%d]:(%x) is before previous (%x)\n",
+			       __func__, i, offset, prev);
+			return -EINVAL;
+		}
+
+		prev = offset;
+	}
+
+	return 0;
+}
+
+int intel_uncore_live_selftests(struct drm_i915_private *i915)
+{
+	int err;
+
+	err = intel_fw_table_check(i915);
+	if (err)
+		return err;
+
+	err = intel_shadow_table_check();
+	if (err)
+		return err;
+
+	return 0;
+}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 27/46] drm/i915: Test all fw tables during mock selftests
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (25 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 26/46] drm/i915: Move uncore selfchecks to live selftest infrastructure Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 28/46] drm/i915: Sanity check all registers for matching fw domains Chris Wilson
                   ` (20 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

In addition to testing the fw table we actually load, the initial mock
testing can validate every table, so coverage is not limited to the
platforms that happen to load that particular table.
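
The invariants being checked reduce to: each table is sorted, every
range is non-empty, and a "watertight" table (gen9 style) has no gaps
between consecutive entries. A condensed sketch of that check (the
patch's intel_fw_table_check() remains the authoritative version):

	static bool fw_ranges_valid(const struct intel_forcewake_range *r,
				    unsigned int count, bool watertight)
	{
		s32 prev = -1;
		unsigned int i;

		for (i = 0; i < count; i++, r++) {
			if (watertight && prev + 1 != (s32)r->start)
				return false;	/* gap after previous entry */
			if (prev >= (s32)r->start || r->start >= r->end)
				return false;	/* out of order, or empty */
			prev = r->end;
		}

		return true;
	}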

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  1 +
 drivers/gpu/drm/i915/selftests/intel_uncore.c      | 49 ++++++++++++++++------
 2 files changed, 37 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 2ed94e3a71b7..c61e08de7913 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -10,6 +10,7 @@
  */
 selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
 selftest(scatterlist, scatterlist_mock_selftests)
+selftest(uncore, intel_uncore_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
index 6b27dca78f69..c563962eaad7 100644
--- a/drivers/gpu/drm/i915/selftests/intel_uncore.c
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -24,20 +24,16 @@
 
 #include "../i915_selftest.h"
 
-static int intel_fw_table_check(struct drm_i915_private *i915)
+static int intel_fw_table_check(const struct intel_forcewake_range *ranges,
+				unsigned int num_ranges,
+				bool is_watertight)
 {
-	const struct intel_forcewake_range *ranges;
-	unsigned int num_ranges, i;
+	unsigned int i;
 	s32 prev;
 
-	ranges = i915->uncore.fw_domains_table;
-	if (!ranges)
-		return 0;
-
-	num_ranges = i915->uncore.fw_domains_table_entries;
 	for (i = 0, prev = -1; i < num_ranges; i++, ranges++) {
 		/* Check that the table is watertight */
-		if (IS_GEN9(i915) && (prev + 1) != (s32)ranges->start) {
+		if (is_watertight && (prev + 1) != (s32)ranges->start) {
 			pr_err("%s: entry[%d]:(%x, %x) is not watertight to previous (%x)\n",
 			       __func__, i, ranges->start, ranges->end, prev);
 			return -EINVAL;
@@ -83,15 +79,42 @@ static int intel_shadow_table_check(void)
 	return 0;
 }
 
-int intel_uncore_live_selftests(struct drm_i915_private *i915)
+int intel_uncore_mock_selftests(void)
 {
-	int err;
+	struct {
+		const struct intel_forcewake_range *ranges;
+		unsigned int num_ranges;
+		bool is_watertight;
+	} fw[] = {
+		{ __vlv_fw_ranges, ARRAY_SIZE(__vlv_fw_ranges), false },
+		{ __chv_fw_ranges, ARRAY_SIZE(__chv_fw_ranges), false },
+		{ __gen9_fw_ranges, ARRAY_SIZE(__gen9_fw_ranges), true },
+	};
+	int err, i;
+
+	for (i = 0; i < ARRAY_SIZE(fw); i++) {
+		err = intel_fw_table_check(fw[i].ranges,
+					   fw[i].num_ranges,
+					   fw[i].is_watertight);
+		if (err)
+			return err;
+	}
 
-	err = intel_fw_table_check(i915);
+	err = intel_shadow_table_check();
 	if (err)
 		return err;
 
-	err = intel_shadow_table_check();
+	return 0;
+}
+
+int intel_uncore_live_selftests(struct drm_i915_private *i915)
+{
+	int err;
+
+	/* Confirm the table we load is still valid */
+	err = intel_fw_table_check(i915->uncore.fw_domains_table,
+				   i915->uncore.fw_domains_table_entries,
+				   INTEL_GEN(i915) >= 9);
 	if (err)
 		return err;
 
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 28/46] drm/i915: Sanity check all registers for matching fw domains
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (26 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 27/46] drm/i915: Test all fw tables during mock selftests Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 29/46] drm/i915: Add some mock tests for dmabuf interop Chris Wilson
                   ` (19 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Add a late selftest that walks over all forcewake registers (those
below 0x40000) and uses the mmio debug register to check whether any
reads are reported as unclaimed. An access ends up unclaimed if we
failed to wake the appropriate powerwell for that register.
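
The heart of the second pass is small: clear the debug flag, read the
register through the ordinary forcewake-aware accessor, and see whether
the hardware reports the access as unclaimed. Roughly (a sketch using
the same helpers as the test below):

	static bool read_is_claimed(struct drm_i915_private *dev_priv, u32 offset)
	{
		i915_reg_t reg = { offset };

		intel_uncore_forcewake_reset(dev_priv, false);
		check_for_unclaimed_mmio(dev_priv);	/* clear any stale flag */

		(void)I915_READ(reg);		/* wakes fw domains as needed */
		return !check_for_unclaimed_mmio(dev_priv);
	}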

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/intel_uncore.c | 53 +++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
index c563962eaad7..2fb8122944b2 100644
--- a/drivers/gpu/drm/i915/selftests/intel_uncore.c
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -107,6 +107,55 @@ int intel_uncore_mock_selftests(void)
 	return 0;
 }
 
+static int intel_uncore_check_forcewake_domains(struct drm_i915_private *dev_priv)
+{
+#define FW_RANGE 0x40000
+	unsigned long *valid;
+	u32 offset;
+	int err;
+
+	if (!HAS_FPGA_DBG_UNCLAIMED(dev_priv) &&
+	    !IS_VALLEYVIEW(dev_priv) &&
+	    !IS_CHERRYVIEW(dev_priv))
+		return 0;
+
+	valid = kzalloc(BITS_TO_LONGS(FW_RANGE) * sizeof(*valid),
+			GFP_TEMPORARY);
+	if (!valid)
+		return -ENOMEM;
+
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
+	check_for_unclaimed_mmio(dev_priv);
+	for (offset = 0; offset < FW_RANGE; offset += 4) {
+		i915_reg_t reg = { offset };
+
+		(void)I915_READ_FW(reg);
+		if (!check_for_unclaimed_mmio(dev_priv))
+			set_bit(offset, valid);
+	}
+
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
+
+	err = 0;
+	for_each_set_bit(offset, valid, FW_RANGE) {
+		i915_reg_t reg = { offset };
+
+		intel_uncore_forcewake_reset(dev_priv, false);
+		check_for_unclaimed_mmio(dev_priv);
+
+		(void)I915_READ(reg);
+		if (check_for_unclaimed_mmio(dev_priv)) {
+			pr_err("Unclaimed mmio read to register 0x%04x\n",
+			       offset);
+			err = -EINVAL;
+		}
+	}
+
+	kfree(valid);
+	return err;
+}
+
 int intel_uncore_live_selftests(struct drm_i915_private *i915)
 {
 	int err;
@@ -118,5 +167,9 @@ int intel_uncore_live_selftests(struct drm_i915_private *i915)
 	if (err)
 		return err;
 
+	err = intel_uncore_check_forcewake_domains(i915);
+	if (err)
+		return err;
+
 	return 0;
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 29/46] drm/i915: Add some mock tests for dmabuf interop
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (27 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 28/46] drm/i915: Sanity check all registers for matching fw domains Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 30/46] drm/i915: Add a live dmabuf selftest Chris Wilson
                   ` (18 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Check that we can both export our objects as a dmabuf and create
objects from an imported dmabuf.

v2: Cleanups, correct include, fix unpin on dead path and prevent
explosion on dmabuf init failure
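
One property worth calling out: importing a dma-buf that we ourselves
exported must hand back the original GEM object rather than wrapping it
in a new one. A minimal sketch of that round trip (check_self_import()
is an illustrative condensation of igt_dmabuf_import_self() below):

	static int check_self_import(struct drm_i915_private *i915,
				     struct drm_i915_gem_object *obj)
	{
		struct drm_gem_object *import;
		struct dma_buf *dmabuf;
		int err = 0;

		dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
		if (IS_ERR(dmabuf))
			return PTR_ERR(dmabuf);

		import = i915_gem_prime_import(&i915->drm, dmabuf);
		if (IS_ERR(import)) {
			err = PTR_ERR(import);
		} else {
			if (import != &obj->base)
				err = -EINVAL;	/* importer made a duplicate */
			i915_gem_object_put(to_intel_bo(import));
		}

		dma_buf_put(dmabuf);
		return err;
	}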

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   5 +
 drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c   | 294 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/mock_dmabuf.c       | 176 ++++++++++++
 drivers/gpu/drm/i915/selftests/mock_dmabuf.h       |  41 +++
 5 files changed, 517 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_dmabuf.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_dmabuf.h

diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index d037adcda6f2..3e276eee0450 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -307,3 +307,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 
 	return ERR_PTR(ret);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/mock_dmabuf.c"
+#include "selftests/i915_gem_dmabuf.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
new file mode 100644
index 000000000000..a2393fcf9fa8
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
@@ -0,0 +1,294 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include "mock_gem_device.h"
+#include "mock_dmabuf.h"
+
+static int igt_dmabuf_export(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
+	i915_gem_object_put(obj);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		return PTR_ERR(dmabuf);
+	}
+
+	dma_buf_put(dmabuf);
+	return 0;
+}
+
+static int igt_dmabuf_import_self(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct drm_gem_object *import;
+	struct dma_buf *dmabuf;
+	int err;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		err = PTR_ERR(dmabuf);
+		goto out;
+	}
+
+	import = i915_gem_prime_import(&i915->drm, dmabuf);
+	if (IS_ERR(import)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(import));
+		err = PTR_ERR(import);
+		goto out_dmabuf;
+	}
+
+	if (import != &obj->base) {
+		pr_err("i915_gem_prime_import created a new object!\n");
+		err = -EINVAL;
+		goto out_import;
+	}
+
+	err = 0;
+out_import:
+	i915_gem_object_put(to_intel_bo(import));
+out_dmabuf:
+	dma_buf_put(dmabuf);
+out:
+	i915_gem_object_put(obj);
+	return err;
+}
+
+static int igt_dmabuf_import(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	void *obj_map, *dma_map;
+	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
+	int err, i;
+
+	dmabuf = mock_dmabuf(1);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
+	if (IS_ERR(obj)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(obj));
+		err = PTR_ERR(obj);
+		goto out_dmabuf;
+	}
+
+	if (obj->base.dev != &i915->drm) {
+		pr_err("i915_gem_prime_import created a non-i915 object!\n");
+		err = -EINVAL;
+		goto out_obj;
+	}
+
+	if (obj->base.size != PAGE_SIZE) {
+		pr_err("i915_gem_prime_import is wrong size found %lld, expected %ld\n",
+		       (long long)obj->base.size, PAGE_SIZE);
+		err = -EINVAL;
+		goto out_obj;
+	}
+
+	dma_map = dma_buf_vmap(dmabuf);
+	if (!dma_map) {
+		pr_err("dma_buf_vmap failed\n");
+		err = -ENOMEM;
+		goto out_obj;
+	}
+
+	if (0) { /* Can not yet map dmabuf */
+		obj_map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		if (IS_ERR(obj_map)) {
+			err = PTR_ERR(obj_map);
+			pr_err("i915_gem_object_pin_map failed with err=%d\n", err);
+			goto out_dma_map;
+		}
+
+		for (i = 0; i < ARRAY_SIZE(pattern); i++) {
+			memset(dma_map, pattern[i], PAGE_SIZE);
+			if (memchr_inv(obj_map, pattern[i], PAGE_SIZE)) {
+				err = -EINVAL;
+				pr_err("imported vmap not all set to %x!\n", pattern[i]);
+				i915_gem_object_unpin_map(obj);
+				goto out_dma_map;
+			}
+		}
+
+		for (i = 0; i < ARRAY_SIZE(pattern); i++) {
+			memset(obj_map, pattern[i], PAGE_SIZE);
+			if (memchr_inv(dma_map, pattern[i], PAGE_SIZE)) {
+				err = -EINVAL;
+				pr_err("exported vmap not all set to %x!\n", pattern[i]);
+				i915_gem_object_unpin_map(obj);
+				goto out_dma_map;
+			}
+		}
+
+		i915_gem_object_unpin_map(obj);
+	}
+
+	err = 0;
+out_dma_map:
+	dma_buf_vunmap(dmabuf, dma_map);
+out_obj:
+	i915_gem_object_put(obj);
+out_dmabuf:
+	dma_buf_put(dmabuf);
+	return err;
+}
+
+static int igt_dmabuf_import_ownership(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	void *ptr;
+	int err;
+
+	dmabuf = mock_dmabuf(1);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	ptr = dma_buf_vmap(dmabuf);
+	if (!ptr) {
+		pr_err("dma_buf_vmap failed\n");
+		err = -ENOMEM;
+		goto err_dmabuf;
+	}
+
+	memset(ptr, 0xc5, PAGE_SIZE);
+	dma_buf_vunmap(dmabuf, ptr);
+
+	obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
+	if (IS_ERR(obj)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(obj));
+		err = PTR_ERR(obj);
+		goto err_dmabuf;
+	}
+
+	dma_buf_put(dmabuf);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err) {
+		pr_err("i915_gem_object_pin_pages failed with err=%d\n", err);
+		goto out_obj;
+	}
+
+	err = 0;
+	i915_gem_object_unpin_pages(obj);
+out_obj:
+	i915_gem_object_put(obj);
+	return err;
+
+err_dmabuf:
+	dma_buf_put(dmabuf);
+	return err;
+}
+
+static int igt_dmabuf_export_vmap(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	void *ptr;
+	int err;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	dmabuf = i915_gem_prime_export(&i915->drm, &obj->base, 0);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		err = PTR_ERR(dmabuf);
+		goto err_obj;
+	}
+	i915_gem_object_put(obj);
+
+	ptr = dma_buf_vmap(dmabuf);
+	if (!ptr) { /* dma_buf_vmap() returns NULL on failure */
+		pr_err("dma_buf_vmap failed\n");
+		err = -ENOMEM;
+		goto out;
+	}
+
+	if (memchr_inv(ptr, 0, dmabuf->size)) {
+		pr_err("Exported object not initialiased to zero!\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	memset(ptr, 0xc5, dmabuf->size);
+
+	err = 0;
+	dma_buf_vunmap(dmabuf, ptr);
+out:
+	dma_buf_put(dmabuf);
+	return err;
+
+err_obj:
+	i915_gem_object_put(obj);
+	return err;
+}
+
+int i915_gem_dmabuf_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_dmabuf_export),
+		SUBTEST(igt_dmabuf_import_self),
+		SUBTEST(igt_dmabuf_import),
+		SUBTEST(igt_dmabuf_import_ownership),
+		SUBTEST(igt_dmabuf_export_vmap),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	err = i915_subtests(tests, i915);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index c61e08de7913..955a4d6ccdaf 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -14,3 +14,4 @@ selftest(uncore, intel_uncore_mock_selftests)
 selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
+selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/mock_dmabuf.c b/drivers/gpu/drm/i915/selftests/mock_dmabuf.c
new file mode 100644
index 000000000000..99da8f4ef497
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_dmabuf.c
@@ -0,0 +1,176 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "mock_dmabuf.h"
+
+static struct sg_table *mock_map_dma_buf(struct dma_buf_attachment *attachment,
+					 enum dma_data_direction dir)
+{
+	struct mock_dmabuf *mock = to_mock(attachment->dmabuf);
+	struct sg_table *st;
+	struct scatterlist *sg;
+	int i, err;
+
+	st = kmalloc(sizeof(*st), GFP_KERNEL);
+	if (!st)
+		return ERR_PTR(-ENOMEM);
+
+	err = sg_alloc_table(st, mock->npages, GFP_KERNEL);
+	if (err)
+		goto err_free;
+
+	sg = st->sgl;
+	for (i = 0; i < mock->npages; i++) {
+		sg_set_page(sg, mock->pages[i], PAGE_SIZE, 0);
+		sg = sg_next(sg);
+	}
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+		err = -ENOMEM;
+		goto err_st;
+	}
+
+	return st;
+
+err_st:
+	sg_free_table(st);
+err_free:
+	kfree(st);
+	return ERR_PTR(err);
+}
+
+static void mock_unmap_dma_buf(struct dma_buf_attachment *attachment,
+			       struct sg_table *st,
+			       enum dma_data_direction dir)
+{
+	dma_unmap_sg(attachment->dev, st->sgl, st->nents, dir);
+	sg_free_table(st);
+	kfree(st);
+}
+
+static void mock_dmabuf_release(struct dma_buf *dma_buf)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+	int i;
+
+	for (i = 0; i < mock->npages; i++)
+		put_page(mock->pages[i]);
+
+	kfree(mock);
+}
+
+static void *mock_dmabuf_vmap(struct dma_buf *dma_buf)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return vm_map_ram(mock->pages, mock->npages, 0, PAGE_KERNEL);
+}
+
+static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	vm_unmap_ram(vaddr, mock->npages);
+}
+
+static void *mock_dmabuf_kmap_atomic(struct dma_buf *dma_buf, unsigned long page_num)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return kmap_atomic(mock->pages[page_num]);
+}
+
+static void mock_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, unsigned long page_num, void *addr)
+{
+	kunmap_atomic(addr);
+}
+
+static void *mock_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return kmap(mock->pages[page_num]);
+}
+
+static void mock_dmabuf_kunmap(struct dma_buf *dma_buf, unsigned long page_num, void *addr)
+{
+	struct mock_dmabuf *mock = to_mock(dma_buf);
+
+	return kunmap(mock->pages[page_num]);
+}
+
+static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
+{
+	return -ENODEV;
+}
+
+static const struct dma_buf_ops mock_dmabuf_ops =  {
+	.map_dma_buf = mock_map_dma_buf,
+	.unmap_dma_buf = mock_unmap_dma_buf,
+	.release = mock_dmabuf_release,
+	.kmap = mock_dmabuf_kmap,
+	.kmap_atomic = mock_dmabuf_kmap_atomic,
+	.kunmap = mock_dmabuf_kunmap,
+	.kunmap_atomic = mock_dmabuf_kunmap_atomic,
+	.mmap = mock_dmabuf_mmap,
+	.vmap = mock_dmabuf_vmap,
+	.vunmap = mock_dmabuf_vunmap,
+};
+
+static struct dma_buf *mock_dmabuf(int npages)
+{
+	struct mock_dmabuf *mock;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int i;
+
+	mock = kmalloc(sizeof(*mock) + npages * sizeof(struct page *),
+		       GFP_KERNEL);
+	if (!mock)
+		return ERR_PTR(-ENOMEM);
+
+	mock->npages = npages;
+	for (i = 0; i < npages; i++) {
+		mock->pages[i] = alloc_page(GFP_KERNEL);
+		if (!mock->pages[i])
+			goto err;
+	}
+
+	exp_info.ops = &mock_dmabuf_ops;
+	exp_info.size = npages * PAGE_SIZE;
+	exp_info.flags = O_CLOEXEC;
+	exp_info.priv = mock;
+
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf))
+		goto err;
+
+	return dmabuf;
+
+err:
+	while (i--)
+		put_page(mock->pages[i]);
+	kfree(mock);
+	return ERR_PTR(-ENOMEM);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_dmabuf.h b/drivers/gpu/drm/i915/selftests/mock_dmabuf.h
new file mode 100644
index 000000000000..ec80613159b9
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_dmabuf.h
@@ -0,0 +1,41 @@
+
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MOCK_DMABUF_H__
+#define __MOCK_DMABUF_H__
+
+#include <linux/dma-buf.h>
+
+struct mock_dmabuf {
+	int npages;
+	struct page *pages[];
+};
+
+static struct mock_dmabuf *to_mock(struct dma_buf *buf)
+{
+	return buf->priv;
+}
+
+#endif /* !__MOCK_DMABUF_H__ */
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 30/46] drm/i915: Add a live dmabuf selftest
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (28 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 29/46] drm/i915: Add some mock tests for dmabuf interop Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-09 10:59   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 31/46] drm/i915: Add initial selftests for i915_gem_gtt Chris Wilson
                   ` (17 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Though we have good coverage of our dmabuf interface through the mock
tests, we also want to check the heavy module unload paths of the live
i915 driver.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c     | 9 +++++++++
 drivers/gpu/drm/i915/selftests/i915_live_selftests.h | 1 +
 2 files changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
index a2393fcf9fa8..817bef74bbcb 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
@@ -292,3 +292,12 @@ int i915_gem_dmabuf_mock_selftests(void)
 	drm_dev_unref(&i915->drm);
 	return err;
 }
+
+int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_dmabuf_export),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index c060bf24928e..1d26ca1f8bc8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -12,4 +12,5 @@ selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
 selftest(uncore, intel_uncore_live_selftests)
 selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
+selftest(dmabuf, i915_gem_dmabuf_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 31/46] drm/i915: Add initial selftests for i915_gem_gtt
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (29 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 30/46] drm/i915: Add a live dmabuf selftest Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 32/46] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
                   ` (16 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

A simple starting point for adding selftests for i915_gem_gtt: first,
try creating a ppGTT and filling it.
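
Note that the allocation loop below grows the requested range with
"size <<= 2", i.e. by a factor of four per step, so even a full 48b ppGTT is
covered in under twenty iterations. A standalone sketch of that progression
(the 48b total is purely an illustrative assumption):

#include <stdio.h>

/* Illustrate the size progression used by igt_ppgtt_alloc():
 * start at 4KiB and quadruple each step until the full range is reached.
 */
int main(void)
{
	const unsigned long long total = 1ULL << 48; /* assumed ppGTT size */
	unsigned long long size;
	int steps = 0;

	for (size = 4096; size <= total; size <<= 2)
		steps++;

	printf("steps from 4KiB to full range: %d\n", steps);
	return 0;
}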

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  1 +
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      | 97 ++++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |  1 +
 3 files changed, 99 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index afdb2859be05..ec360ab939b8 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3754,4 +3754,5 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_gtt.c"
+#include "selftests/i915_gem_gtt.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
new file mode 100644
index 000000000000..5c09dc920cb8
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -0,0 +1,97 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+static int igt_ppgtt_alloc(void *arg)
+{
+	struct drm_i915_private *dev_priv = arg;
+	struct i915_hw_ppgtt *ppgtt;
+	u64 size, last;
+	int err;
+
+	/* Allocate a ppgtt and try to fill the entire range */
+
+	if (!USES_PPGTT(dev_priv))
+		return 0;
+
+	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+	if (!ppgtt)
+		return -ENOMEM;
+
+	err = __hw_ppgtt_init(ppgtt, dev_priv);
+	if (err)
+		goto err_ppgtt;
+
+	if (!ppgtt->base.allocate_va_range)
+		goto err_ppgtt_cleanup;
+
+	/* Check we can allocate the entire range */
+	for (size = 4096;
+	     size <= ppgtt->base.total;
+	     size <<= 2) {
+		err = ppgtt->base.allocate_va_range(&ppgtt->base, 0, size);
+		if (err) {
+			if (err == -ENOMEM) {
+				pr_info("[1] Ran out of memory for va_range [0 + %llx] [bit %d]\n",
+					size, ilog2(size));
+				err = 0; /* virtual space too large! */
+			}
+			goto err_ppgtt_cleanup;
+		}
+
+		ppgtt->base.clear_range(&ppgtt->base, 0, size);
+	}
+
+	/* Check we can incrementally allocate the entire range */
+	for (last = 0, size = 4096;
+	     size <= ppgtt->base.total;
+	     last = size, size <<= 2) {
+		err = ppgtt->base.allocate_va_range(&ppgtt->base,
+						    last, size - last);
+		if (err) {
+			if (err == -ENOMEM) {
+				pr_info("[2] Ran out of memory for va_range [%llx + %llx] [bit %d]\n",
+					last, size - last, ilog2(size));
+				err = 0; /* virtual space too large! */
+			}
+			goto err_ppgtt_cleanup;
+		}
+	}
+
+err_ppgtt_cleanup:
+	ppgtt->base.cleanup(&ppgtt->base);
+err_ppgtt:
+	kfree(ppgtt);
+	return err;
+}
+
+int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_ppgtt_alloc),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 1d26ca1f8bc8..16d6dde29fca 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -14,3 +14,4 @@ selftest(requests, i915_gem_request_live_selftests)
 selftest(object, i915_gem_object_live_selftests)
 selftest(dmabuf, i915_gem_dmabuf_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
+selftest(gtt, i915_gem_gtt_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 32/46] drm/i915: Exercise filling the top/bottom portions of the ppgtt
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (30 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 31/46] drm/i915: Add initial selftests for i915_gem_gtt Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-09 10:49   ` Joonas Lahtinen
  2017-02-02  9:08 ` [PATCH 33/46] drm/i915: Exercise filling the top/bottom portions of the global GTT Chris Wilson
                   ` (15 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Allocate objects with a varying number of pages (which should hopefully
consist of a mixture of contiguous page chunks, and hence coalesced sg
lists) and check that the sg walkers in insert_pages cope.

v2: Check both small <-> large
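
To see why stepping the object sizes by primes is useful here, a small
userspace sketch of the size pattern fill_hole() will use (the kernel code
uses for_each_prime_number_from(); the trial-division helper below is purely
for illustration):

#include <stdbool.h>
#include <stdio.h>

static bool is_prime(unsigned long n)
{
	unsigned long d;

	for (d = 2; d * d <= n; d++)
		if (n % d == 0)
			return false;
	return n >= 2;
}

int main(void)
{
	const unsigned long max_pages = 1024;
	unsigned long prime, npages;

	/* Sizes grow as npages *= prime, so successive objects rarely share
	 * alignment - exactly what stresses the sg coalescing/walking paths.
	 */
	for (prime = 2; prime <= 13; prime++) {
		if (!is_prime(prime))
			continue;
		printf("prime %lu:", prime);
		for (npages = 1; npages <= max_pages; npages *= prime)
			printf(" %lu", npages);
		printf("\n");
	}
	return 0;
}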

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 362 ++++++++++++++++++++++++++
 1 file changed, 362 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 5c09dc920cb8..4cd55fc0820a 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -22,8 +22,101 @@
  *
  */
 
+#include <linux/prime_numbers.h>
+
 #include "../i915_selftest.h"
 
+#include "mock_drm.h"
+
+static void fake_free_pages(struct drm_i915_gem_object *obj,
+			    struct sg_table *pages)
+{
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+static struct sg_table *
+fake_get_pages(struct drm_i915_gem_object *obj)
+{
+#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
+#define PFN_BIAS 0x1000
+	struct sg_table *pages;
+	struct scatterlist *sg;
+	typeof(obj->base.size) rem;
+
+	pages = kmalloc(sizeof(*pages), GFP);
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	rem = round_up(obj->base.size, BIT(31)) >> 31;
+	if (sg_alloc_table(pages, rem, GFP)) {
+		kfree(pages);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	rem = obj->base.size;
+	for (sg = pages->sgl; sg; sg = sg_next(sg)) {
+		unsigned long len = min_t(typeof(rem), rem, BIT(31));
+
+		sg_set_page(sg, pfn_to_page(PFN_BIAS), len, 0);
+		sg_dma_address(sg) = page_to_phys(sg_page(sg));
+		sg_dma_len(sg) = len;
+
+		rem -= len;
+	}
+
+	return pages;
+#undef GFP
+}
+
+static void fake_put_pages(struct drm_i915_gem_object *obj,
+			   struct sg_table *pages)
+{
+	fake_free_pages(obj, pages);
+	obj->mm.dirty = false;
+}
+
+static void fake_release(struct drm_i915_gem_object *obj)
+{
+	__i915_gem_object_unpin_pages(obj);
+}
+
+static const struct drm_i915_gem_object_ops fake_ops = {
+	.get_pages = fake_get_pages,
+	.put_pages = fake_put_pages,
+	.release = fake_release
+};
+
+static struct drm_i915_gem_object *
+fake_dma_object(struct drm_i915_private *i915, u64 size)
+{
+	struct drm_i915_gem_object *obj;
+
+	GEM_BUG_ON(!size);
+	GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE));
+
+	if (overflows_type(size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc(i915);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, size);
+	i915_gem_object_init(obj, &fake_ops);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+	obj->cache_level = I915_CACHE_NONE;
+
+	if (i915_gem_object_pin_pages(obj)) {
+		i915_gem_object_free(obj);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return obj;
+}
+
 static int igt_ppgtt_alloc(void *arg)
 {
 	struct drm_i915_private *dev_priv = arg;
@@ -87,10 +180,279 @@ static int igt_ppgtt_alloc(void *arg)
 	return err;
 }
 
+static void close_object_list(struct list_head *objects,
+			      struct i915_address_space *vm)
+{
+	struct drm_i915_gem_object *obj, *on;
+
+	list_for_each_entry_safe(obj, on, objects, batch_pool_link) {
+		struct i915_vma *vma;
+
+		vma = i915_vma_instance(obj, vm, NULL);
+		if (!IS_ERR(vma))
+			i915_vma_close(vma);
+
+		list_del(&obj->batch_pool_link);
+		i915_gem_object_put(obj);
+	}
+}
+
+static int fill_hole(struct drm_i915_private *i915,
+		     struct i915_address_space *vm,
+		     u64 hole_start, u64 hole_end,
+		     unsigned long end_time)
+{
+#define FLAGS (PIN_USER | PIN_OFFSET_FIXED)
+	const u64 hole_size = hole_end - hole_start;
+	struct drm_i915_gem_object *obj;
+	const unsigned long max_pages =
+		min_t(u64, ULONG_MAX - 1, hole_size/2 >> PAGE_SHIFT);
+	const unsigned long max_step = max(int_sqrt(max_pages), 2UL);
+	unsigned long npages, prime;
+	struct i915_vma *vma;
+	LIST_HEAD(objects);
+	int err;
+
+	/* Try binding many VMA working inwards from either edge */
+
+	for_each_prime_number_from(prime, 2, max_step) {
+		for (npages = 1; npages <= max_pages; npages *= prime) {
+			const u64 full_size = npages << PAGE_SHIFT;
+			const struct {
+				const char *name;
+				u64 offset;
+				int step;
+			} phases[] = {
+				{ "top-down", hole_end, -1, },
+				{ "bottom-up", hole_start, 1, },
+				{ }
+			}, *p;
+
+			obj = fake_dma_object(i915, full_size);
+			if (IS_ERR(obj))
+				break;
+
+			list_add(&obj->batch_pool_link, &objects);
+
+			/* Align differing sized objects against the edges, and
+			 * check we don't walk off into the void when binding
+			 * them into the GTT.
+			 */
+			for (p = phases; p->name; p++) {
+				u64 offset;
+
+				offset = p->offset;
+				list_for_each_entry(obj, &objects, batch_pool_link) {
+					vma = i915_vma_instance(obj, vm, NULL);
+					if (IS_ERR(vma))
+						continue;
+
+					if (p->step < 0) {
+						if (offset < hole_start + obj->base.size)
+							break;
+						offset -= obj->base.size;
+					}
+
+					err = i915_vma_pin(vma, 0, 0, offset | FLAGS);
+					if (err) {
+						pr_err("%s(%s) pin (forward) failed with err=%d on size=%lu pages (prime=%lu), offset=%llx\n",
+						       __func__, p->name, err, npages, prime, offset);
+						goto err;
+					}
+
+					if (!drm_mm_node_allocated(&vma->node) ||
+					    i915_vma_misplaced(vma, 0, 0, offset | FLAGS)) {
+						pr_err("%s(%s) (forward) insert failed: vma.node=%llx + %llx [allocated? %d], expected offset %llx\n",
+						       __func__, p->name, vma->node.start, vma->node.size, drm_mm_node_allocated(&vma->node),
+						       offset);
+						err = -EINVAL;
+						goto err;
+					}
+
+					i915_vma_unpin(vma);
+
+					if (p->step > 0) {
+						if (offset + obj->base.size > hole_end)
+							break;
+						offset += obj->base.size;
+					}
+				}
+
+				offset = p->offset;
+				list_for_each_entry(obj, &objects, batch_pool_link) {
+					vma = i915_vma_instance(obj, vm, NULL);
+					if (IS_ERR(vma))
+						continue;
+
+					if (p->step < 0) {
+						if (offset < hole_start + obj->base.size)
+							break;
+						offset -= obj->base.size;
+					}
+
+					if (!drm_mm_node_allocated(&vma->node) ||
+					    i915_vma_misplaced(vma, 0, 0, offset | FLAGS)) {
+						pr_err("%s(%s) (forward) moved vma.node=%llx + %llx, expected offset %llx\n",
+						       __func__, p->name, vma->node.start, vma->node.size,
+						       offset);
+						err = -EINVAL;
+						goto err;
+					}
+
+					err = i915_vma_unbind(vma);
+					if (err) {
+						pr_err("%s(%s) (forward) unbind of vma.node=%llx + %llx failed with err=%d\n",
+						       __func__, p->name, vma->node.start, vma->node.size,
+						       err);
+						goto err;
+					}
+
+					if (p->step > 0) {
+						if (offset + obj->base.size > hole_end)
+							break;
+						offset += obj->base.size;
+					}
+				}
+
+				offset = p->offset;
+				list_for_each_entry_reverse(obj, &objects, batch_pool_link) {
+					vma = i915_vma_instance(obj, vm, NULL);
+					if (IS_ERR(vma))
+						continue;
+
+					if (p->step < 0) {
+						if (offset < hole_start + obj->base.size)
+							break;
+						offset -= obj->base.size;
+					}
+
+					err = i915_vma_pin(vma, 0, 0, offset | FLAGS);
+					if (err) {
+						pr_err("%s(%s) pin (backward) failed with err=%d on size=%lu pages (prime=%lu), offset=%llx\n",
+						       __func__, p->name, err, npages, prime, offset);
+						goto err;
+					}
+
+					if (!drm_mm_node_allocated(&vma->node) ||
+					    i915_vma_misplaced(vma, 0, 0, offset | FLAGS)) {
+						pr_err("%s(%s) (backward) insert failed: vma.node=%llx + %llx [allocated? %d], expected offset %llx\n",
+						       __func__, p->name, vma->node.start, vma->node.size, drm_mm_node_allocated(&vma->node),
+						       offset);
+						err = -EINVAL;
+						goto err;
+					}
+
+					i915_vma_unpin(vma);
+
+					if (p->step > 0) {
+						if (offset + obj->base.size > hole_end)
+							break;
+						offset += obj->base.size;
+					}
+				}
+
+				offset = p->offset;
+				list_for_each_entry_reverse(obj, &objects, batch_pool_link) {
+					vma = i915_vma_instance(obj, vm, NULL);
+					if (IS_ERR(vma))
+						continue;
+
+					if (p->step < 0) {
+						if (offset < hole_start + obj->base.size)
+							break;
+						offset -= obj->base.size;
+					}
+
+					if (!drm_mm_node_allocated(&vma->node) ||
+					    i915_vma_misplaced(vma, 0, 0, offset | FLAGS)) {
+						pr_err("%s(%s) (backward) moved vma.node=%llx + %llx [allocated? %d], expected offset %llx\n",
+						       __func__, p->name, vma->node.start, vma->node.size, drm_mm_node_allocated(&vma->node),
+						       offset);
+						err = -EINVAL;
+						goto err;
+					}
+
+					err = i915_vma_unbind(vma);
+					if (err) {
+						pr_err("%s(%s) (backward) unbind of vma.node=%llx + %llx failed with err=%d\n",
+						       __func__, p->name, vma->node.start, vma->node.size,
+						       err);
+						goto err;
+					}
+
+					if (p->step > 0) {
+						if (offset + obj->base.size > hole_end)
+							break;
+						offset += obj->base.size;
+					}
+				}
+			}
+
+			if (igt_timeout(end_time, "%s timed out (npages=%lu, prime=%lu)\n",
+					__func__, npages, prime)) {
+				err = -EINTR;
+				goto err;
+			}
+		}
+
+		close_object_list(&objects, vm);
+	}
+
+	return 0;
+
+err:
+	close_object_list(&objects, vm);
+	return err;
+#undef FLAGS
+}
+
+static int exercise_ppgtt(struct drm_i915_private *dev_priv,
+			  int (*func)(struct drm_i915_private *i915,
+				      struct i915_address_space *vm,
+				      u64 hole_start, u64 hole_end,
+				      unsigned long end_time))
+{
+	struct drm_file *file;
+	struct i915_hw_ppgtt *ppgtt;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	if (!USES_FULL_PPGTT(dev_priv))
+		return 0;
+
+	file = mock_file(dev_priv);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, file->driver_priv, "mock");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto out_unlock;
+	}
+	GEM_BUG_ON(offset_in_page(ppgtt->base.total));
+
+	err = func(dev_priv, &ppgtt->base, 0, ppgtt->base.total, end_time);
+
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+out_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	mock_file_free(dev_priv, file);
+	return err;
+}
+
+static int igt_ppgtt_fill(void *arg)
+{
+	return exercise_ppgtt(arg, fill_hole);
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
+		SUBTEST(igt_ppgtt_fill),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 33/46] drm/i915: Exercise filling the top/bottom portions of the global GTT
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (31 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 32/46] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 34/46] drm/i915: Fill different pages of the GTT Chris Wilson
                   ` (14 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

The same test as previously applied to the per-process GTT, now applied
to the global GTT.
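
One subtlety worth calling out: because func() mutates the drm_mm, the hole
walk below cannot trust its iterator across calls. It sorts the holes,
remembers how far it got ("last") and restarts the scan each time. A
standalone sketch of that restart-with-watermark idea (simplified; the real
code sorts the hole list in place rather than searching for the minimum):

#include <stdio.h>

int main(void)
{
	int holes[] = { 10, 40, 20, 30 }; /* hole starts, deliberately unsorted */
	const int count = sizeof(holes) / sizeof(holes[0]);
	int last = 0;

	for (;;) {
		int best = -1;
		int i;

		/* find the lowest hole we have not yet processed */
		for (i = 0; i < count; i++) {
			if (holes[i] < last)
				continue;
			if (best < 0 || holes[i] < holes[best])
				best = i;
		}
		if (best < 0)
			break;

		printf("processing hole starting at %d\n", holes[best]);

		/* processing may have reshuffled the container, so record the
		 * watermark and rescan from the top next time round
		 */
		last = holes[best] + 1;
	}
	return 0;
}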

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 61 ++++++++++++++++++++++++++-
 1 file changed, 60 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 4cd55fc0820a..e1121d157d76 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -22,6 +22,7 @@
  *
  */
 
+#include <linux/list_sort.h>
 #include <linux/prime_numbers.h>
 
 #include "../i915_selftest.h"
@@ -189,7 +190,8 @@ static void close_object_list(struct list_head *objects,
 		struct i915_vma *vma;
 
 		vma = i915_vma_instance(obj, vm, NULL);
-		if (!IS_ERR(vma))
+		/* Only ppgtt vma may be closed before the object is freed */
+		if (!IS_ERR(vma) && !i915_vma_is_ggtt(vma))
 			i915_vma_close(vma);
 
 		list_del(&obj->batch_pool_link);
@@ -448,12 +450,69 @@ static int igt_ppgtt_fill(void *arg)
 	return exercise_ppgtt(arg, fill_hole);
 }
 
+static int sort_holes(void *priv, struct list_head *A, struct list_head *B)
+{
+	struct drm_mm_node *a = list_entry(A, typeof(*a), hole_stack);
+	struct drm_mm_node *b = list_entry(B, typeof(*b), hole_stack);
+
+	if (a->start < b->start)
+		return -1;
+	else
+		return 1;
+}
+
+static int exercise_ggtt(struct drm_i915_private *i915,
+			 int (*func)(struct drm_i915_private *i915,
+				     struct i915_address_space *vm,
+				     u64 hole_start, u64 hole_end,
+				     unsigned long end_time))
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	u64 hole_start, hole_end, last = 0;
+	struct drm_mm_node *node;
+	IGT_TIMEOUT(end_time);
+	int err = 0;
+
+	mutex_lock(&i915->drm.struct_mutex);
+restart:
+	list_sort(NULL, &ggtt->base.mm.hole_stack, sort_holes);
+	drm_mm_for_each_hole(node, &ggtt->base.mm, hole_start, hole_end) {
+		if (hole_start < last)
+			continue;
+
+		if (ggtt->base.mm.color_adjust)
+			ggtt->base.mm.color_adjust(node, 0,
+						   &hole_start, &hole_end);
+		if (hole_start >= hole_end)
+			continue;
+
+		err = func(i915, &ggtt->base, hole_start, hole_end, end_time);
+		if (err)
+			break;
+
+		/* As we have manipulated the drm_mm, the list may be corrupt */
+		last = hole_end;
+		goto restart;
+	}
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return err;
+}
+
+static int igt_ggtt_fill(void *arg)
+{
+	return exercise_ggtt(arg, fill_hole);
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
 		SUBTEST(igt_ppgtt_fill),
+		SUBTEST(igt_ggtt_fill),
 	};
 
+	GEM_BUG_ON(offset_in_page(i915->ggtt.base.total));
+
 	return i915_subtests(tests, i915);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 34/46] drm/i915: Fill different pages of the GTT
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (32 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 33/46] drm/i915: Exercise filling the top/bottom portions of the global GTT Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 35/46] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
                   ` (13 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Exercise filling different pages of the GTT

v2: Walk all holes until we time out

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 89 +++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index e1121d157d76..7d695cdcd20a 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -408,6 +408,83 @@ static int fill_hole(struct drm_i915_private *i915,
 #undef FLAGS
 }
 
+static int walk_hole(struct drm_i915_private *i915,
+		     struct i915_address_space *vm,
+		     u64 hole_start, u64 hole_end,
+		     unsigned long end_time)
+{
+	const u64 hole_size = hole_end - hole_start;
+	const unsigned long max_pages =
+		min_t(u64, ULONG_MAX - 1, hole_size >> PAGE_SHIFT);
+	u64 size;
+
+	/* Try binding a single VMA in different positions within the hole */
+
+	for_each_prime_number_from(size, 1, max_pages) {
+		struct drm_i915_gem_object *obj;
+		struct i915_vma *vma;
+		u64 addr;
+		int err = 0;
+
+		obj = fake_dma_object(i915, size << PAGE_SHIFT);
+		if (IS_ERR(obj))
+			break;
+
+		vma = i915_vma_instance(obj, vm, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto err;
+		}
+
+		for (addr = hole_start;
+		     addr + obj->base.size < hole_end;
+		     addr += obj->base.size) {
+			err = i915_vma_pin(vma, 0, 0,
+					   addr | PIN_OFFSET_FIXED | PIN_USER);
+			if (err) {
+				pr_err("%s bind failed at %llx + %llx [hole %llx- %llx] with err=%d\n",
+				       __func__, addr, vma->size,
+				       hole_start, hole_end, err);
+				goto err;
+			}
+			i915_vma_unpin(vma);
+
+			if (!drm_mm_node_allocated(&vma->node) ||
+			    i915_vma_misplaced(vma, 0, 0, addr | PIN_OFFSET_FIXED)) {
+				pr_err("%s incorrect at %llx + %llx\n",
+				       __func__, addr, vma->size);
+				err = -EINVAL;
+				goto err;
+			}
+
+			err = i915_vma_unbind(vma);
+			if (err) {
+				pr_err("%s unbind failed at %llx + %llx  with err=%d\n",
+				       __func__, addr, vma->size, err);
+				goto err;
+			}
+
+			GEM_BUG_ON(drm_mm_node_allocated(&vma->node));
+
+			if (igt_timeout(end_time,
+					"%s timed out at %llx\n",
+					__func__, addr)) {
+				err = -EINTR;
+				goto err;
+			}
+		}
+
+err:
+		if (!i915_vma_is_ggtt(vma))
+			i915_vma_close(vma);
+		i915_gem_object_put(obj);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 static int exercise_ppgtt(struct drm_i915_private *dev_priv,
 			  int (*func)(struct drm_i915_private *i915,
 				      struct i915_address_space *vm,
@@ -450,6 +527,11 @@ static int igt_ppgtt_fill(void *arg)
 	return exercise_ppgtt(arg, fill_hole);
 }
 
+static int igt_ppgtt_walk(void *arg)
+{
+	return exercise_ppgtt(arg, walk_hole);
+}
+
 static int sort_holes(void *priv, struct list_head *A, struct list_head *B)
 {
 	struct drm_mm_node *a = list_entry(A, typeof(*a), hole_stack);
@@ -504,11 +586,18 @@ static int igt_ggtt_fill(void *arg)
 	return exercise_ggtt(arg, fill_hole);
 }
 
+static int igt_ggtt_walk(void *arg)
+{
+	return exercise_ggtt(arg, walk_hole);
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
+		SUBTEST(igt_ppgtt_walk),
 		SUBTEST(igt_ppgtt_fill),
+		SUBTEST(igt_ggtt_walk),
 		SUBTEST(igt_ggtt_fill),
 	};
 
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 35/46] drm/i915: Exercise filling and removing random ranges from the live GTT
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (33 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 34/46] drm/i915: Fill different pages of the GTT Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 36/46] drm/i915: Test creation of VMA Chris Wilson
                   ` (12 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Test the low-level i915_address_space interfaces to sanity check the
live insertion/removal of address ranges.
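
The test relies on i915_random_order() (from the selftest infrastructure
added earlier in the series) handing back a random permutation of [0, count),
so that ranges are inserted and later cleared in unrelated orders. A
userspace sketch of such a helper, in the spirit of the real one rather than
a copy of it:

#include <stdio.h>
#include <stdlib.h>

/* Hand back a random permutation of [0, count); plain Fisher-Yates with
 * libc rand() for brevity.
 */
static unsigned int *random_order(unsigned int count, unsigned int seed)
{
	unsigned int *order;
	unsigned int i;

	if (!count)
		return NULL;

	order = malloc(count * sizeof(*order));
	if (!order)
		return NULL;

	srand(seed);
	for (i = 0; i < count; i++)
		order[i] = i;
	for (i = count - 1; i > 0; i--) {
		unsigned int j = rand() % (i + 1);
		unsigned int tmp = order[i];

		order[i] = order[j];
		order[j] = tmp;
	}
	return order;
}

int main(void)
{
	unsigned int *order = random_order(8, 42);
	unsigned int i;

	if (!order)
		return 1;
	for (i = 0; i < 8; i++)
		printf("%u ", order[i]);
	printf("\n");
	free(order);
	return 0;
}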

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 93 +++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 7d695cdcd20a..abde71d857e0 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -26,6 +26,7 @@
 #include <linux/prime_numbers.h>
 
 #include "../i915_selftest.h"
+#include "i915_random.h"
 
 #include "mock_drm.h"
 
@@ -485,6 +486,86 @@ static int walk_hole(struct drm_i915_private *i915,
 	return 0;
 }
 
+static int drunk_hole(struct drm_i915_private *i915,
+		      struct i915_address_space *vm,
+		      u64 hole_start, u64 hole_end,
+		      unsigned long end_time)
+{
+	I915_RND_STATE(seed_prng);
+	unsigned int size;
+
+	/* Keep creating larger objects until one cannot fit into the hole */
+	for (size = 12; (hole_end - hole_start) >> size; size++) {
+		I915_RND_SUBSTATE(prng, seed_prng);
+		struct drm_i915_gem_object *obj;
+		unsigned int *order, count, n;
+		u64 hole_size;
+
+		hole_size = (hole_end - hole_start) >> size;
+		if (hole_size > KMALLOC_MAX_SIZE / sizeof(u32))
+			hole_size = KMALLOC_MAX_SIZE / sizeof(u32);
+		count = hole_size;
+		do {
+			count >>= 1;
+			order = i915_random_order(count, &prng);
+		} while (!order && count);
+		if (!order)
+			break;
+
+		/* Ignore allocation failures (i.e. don't report them as
+		 * a test failure) as we are purposefully allocating very
+		 * large objects without checking that we have sufficient
+		 * memory. We expect to hit -ENOMEM.
+		 */
+
+		obj = fake_dma_object(i915, BIT_ULL(size));
+		if (IS_ERR(obj)) {
+			kfree(order);
+			break;
+		}
+
+		GEM_BUG_ON(obj->base.size != BIT_ULL(size));
+
+		if (i915_gem_object_pin_pages(obj)) {
+			i915_gem_object_put(obj);
+			kfree(order);
+			break;
+		}
+
+		for (n = 0; n < count; n++) {
+			if (vm->allocate_va_range &&
+			    vm->allocate_va_range(vm,
+						  order[n] * BIT_ULL(size),
+						  BIT_ULL(size)))
+				break;
+
+			vm->insert_entries(vm, obj->mm.pages,
+					   order[n] * BIT_ULL(size),
+					   I915_CACHE_NONE, 0);
+			if (igt_timeout(end_time,
+					"%s timed out after %d/%d\n",
+					__func__, n, count)) {
+				hole_start = hole_end; /* quit */
+				break;
+			}
+		}
+		count = n;
+
+		i915_random_reorder(order, count, &prng);
+		for (n = 0; n < count; n++)
+			vm->clear_range(vm,
+					order[n] * BIT_ULL(size),
+					BIT_ULL(size));
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+
+		kfree(order);
+	}
+
+	return 0;
+}
+
 static int exercise_ppgtt(struct drm_i915_private *dev_priv,
 			  int (*func)(struct drm_i915_private *i915,
 				      struct i915_address_space *vm,
@@ -532,6 +613,11 @@ static int igt_ppgtt_walk(void *arg)
 	return exercise_ppgtt(arg, walk_hole);
 }
 
+static int igt_ppgtt_drunk(void *arg)
+{
+	return exercise_ppgtt(arg, drunk_hole);
+}
+
 static int sort_holes(void *priv, struct list_head *A, struct list_head *B)
 {
 	struct drm_mm_node *a = list_entry(A, typeof(*a), hole_stack);
@@ -591,12 +677,19 @@ static int igt_ggtt_walk(void *arg)
 	return exercise_ggtt(arg, walk_hole);
 }
 
+static int igt_ggtt_drunk(void *arg)
+{
+	return exercise_ggtt(arg, drunk_hole);
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_ppgtt_alloc),
+		SUBTEST(igt_ppgtt_drunk),
 		SUBTEST(igt_ppgtt_walk),
 		SUBTEST(igt_ppgtt_fill),
+		SUBTEST(igt_ggtt_drunk),
 		SUBTEST(igt_ggtt_walk),
 		SUBTEST(igt_ggtt_fill),
 	};
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 36/46] drm/i915: Test creation of VMA
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (34 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 35/46] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 37/46] drm/i915: Exercise i915_vma_pin/i915_vma_insert Chris Wilson
                   ` (11 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Simple test to exercise creation and lookup of VMA within an object.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_vma.c                    |   3 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/i915_vma.c          | 224 +++++++++++++++++++++
 3 files changed, 228 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_vma.c

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 155906e84812..5c32d12b2d8d 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -687,3 +687,6 @@ int i915_vma_unbind(struct i915_vma *vma)
 	return 0;
 }
 
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/i915_vma.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index 955a4d6ccdaf..b450eab7e6e1 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -15,3 +15,4 @@ selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
 selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
 selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
+selftest(vma, i915_vma_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
new file mode 100644
index 000000000000..e60f3a962f56
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -0,0 +1,224 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/prime_numbers.h>
+
+#include "../i915_selftest.h"
+
+#include "mock_gem_device.h"
+#include "mock_context.h"
+
+static bool assert_vma(struct i915_vma *vma,
+		       struct drm_i915_gem_object *obj,
+		       struct i915_gem_context *ctx)
+{
+	bool ok = true;
+
+	if (vma->vm != &ctx->ppgtt->base) {
+		pr_err("VMA created with wrong VM\n");
+		ok = false;
+	}
+
+	if (vma->size != obj->base.size) {
+		pr_err("VMA created with wrong size, found %llu, expected %zu\n",
+		       vma->size, obj->base.size);
+		ok = false;
+	}
+
+	if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
+		pr_err("VMA created with wrong type [%d]\n",
+		       vma->ggtt_view.type);
+		ok = false;
+	}
+
+	return ok;
+}
+
+static struct i915_vma *
+checked_vma_instance(struct drm_i915_gem_object *obj,
+		     struct i915_address_space *vm,
+		     struct i915_ggtt_view *view)
+{
+	struct i915_vma *vma;
+	bool ok = true;
+
+	vma = i915_vma_instance(obj, vm, view);
+	if (IS_ERR(vma))
+		return vma;
+
+	/* Manual checks, will be reinforced by i915_vma_compare! */
+	if (vma->vm != vm) {
+		pr_err("VMA's vm [%p] does not match request [%p]\n",
+		       vma->vm, vm);
+		ok = false;
+	}
+
+	if (i915_is_ggtt(vm) != i915_vma_is_ggtt(vma)) {
+		pr_err("VMA ggtt status [%d] does not match parent [%d]\n",
+		       i915_vma_is_ggtt(vma), i915_is_ggtt(vm));
+		ok = false;
+	}
+
+	if (i915_vma_compare(vma, vm, view)) {
+		pr_err("i915_vma_compare failed with create parmaters!\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (i915_vma_compare(vma, vma->vm,
+			     i915_vma_is_ggtt(vma) ? &vma->ggtt_view : NULL)) {
+		pr_err("i915_vma_compare failed with itself\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (!ok) {
+		pr_err("i915_vma_compare failed to detect the difference!\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	return vma;
+}
+
+static int create_vmas(struct drm_i915_private *i915,
+		       struct list_head *objects,
+		       struct list_head *contexts)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_context *ctx;
+	int pinned;
+
+	list_for_each_entry(obj, objects, batch_pool_link) {
+		for (pinned = 0; pinned <= 1; pinned++) {
+			list_for_each_entry(ctx, contexts, link) {
+				struct i915_address_space *vm =
+					&ctx->ppgtt->base;
+				struct i915_vma *vma;
+				int err;
+
+				vma = checked_vma_instance(obj, vm, NULL);
+				if (IS_ERR(vma))
+					return PTR_ERR(vma);
+
+				if (!assert_vma(vma, obj, ctx)) {
+					pr_err("VMA lookup/create failed\n");
+					return -EINVAL;
+				}
+
+				if (!pinned) {
+					err = i915_vma_pin(vma, 0, 0, PIN_USER);
+					if (err) {
+						pr_err("Failed to pin VMA\n");
+						return err;
+					}
+				} else {
+					i915_vma_unpin(vma);
+				}
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int igt_vma_create(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *on;
+	struct i915_gem_context *ctx, *cn;
+	unsigned long num_obj, num_ctx;
+	unsigned long no, nc;
+	IGT_TIMEOUT(end_time);
+	LIST_HEAD(contexts);
+	LIST_HEAD(objects);
+	int err = -ENOMEM;
+
+	/* Exercise creating many vma amongst many objects, checking the
+	 * vma creation and lookup routines.
+	 */
+
+	no = 0;
+	for_each_prime_number(num_obj, ULONG_MAX - 1) {
+		for (; no < num_obj; no++) {
+			obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+			if (IS_ERR(obj))
+				goto out;
+
+			list_add(&obj->batch_pool_link, &objects);
+		}
+
+		nc = 0;
+		for_each_prime_number(num_ctx, MAX_CONTEXT_HW_ID) {
+			for (; nc < num_ctx; nc++) {
+				ctx = mock_context(i915, "mock");
+				if (!ctx)
+					goto out;
+
+				list_move(&ctx->link, &contexts);
+			}
+
+			err = create_vmas(i915, &objects, &contexts);
+			if (err)
+				goto out;
+
+			if (igt_timeout(end_time,
+					"%s timed out: after %lu objects in %lu contexts\n",
+					__func__, no, nc))
+				goto end;
+		}
+
+		list_for_each_entry_safe(ctx, cn, &contexts, link)
+			mock_context_close(ctx);
+	}
+
+end:
+	/* Final pass to lookup all created contexts */
+	err = create_vmas(i915, &objects, &contexts);
+out:
+	list_for_each_entry_safe(ctx, cn, &contexts, link)
+		mock_context_close(ctx);
+
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link)
+		i915_gem_object_put(obj);
+	return err;
+}
+
+int i915_vma_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_vma_create),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_subtests(tests, i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
+
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 37/46] drm/i915: Exercise i915_vma_pin/i915_vma_insert
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (35 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 36/46] drm/i915: Test creation of VMA Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 38/46] drm/i915: Verify page layout for rotated VMA Chris Wilson
                   ` (10 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

High-level testing of the struct drm_mm by verifying our handling of
weird requests to i915_vma_pin.
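
The test below is table-driven: each entry pairs a pin request with an assert
callback that knows which outcome (success, -EINVAL, -E2BIG, -ENOSPC) is
expected, so valid and deliberately bogus requests share one loop. A minimal
standalone sketch of that pattern (the stand-in do_request() is obviously not
i915_vma_pin()):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct mode {
	int request;
	bool (*assert_result)(int result);
	const char *string;
};

static bool assert_valid(int result) { return result == 0; }
static bool assert_einval(int result) { return result == -EINVAL; }

/* stand-in for the call under test */
static int do_request(int request)
{
	return request >= 0 ? 0 : -EINVAL;
}

int main(void)
{
	const struct mode modes[] = {
		{ 4096, assert_valid, "valid" },
		{ -1, assert_einval, "invalid (EINVAL)" },
		{ },
	}, *m;

	for (m = modes; m->assert_result; m++) {
		int err = do_request(m->request);

		if (!m->assert_result(err)) {
			printf("mode [%s]: unexpected result %d\n",
			       m->string, err);
			return 1;
		}
	}
	printf("all modes behaved as expected\n");
	return 0;
}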

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_vma.c           |   4 +-
 drivers/gpu/drm/i915/i915_vma.h           |   4 +-
 drivers/gpu/drm/i915/selftests/i915_vma.c | 151 ++++++++++++++++++++++++++++++
 3 files changed, 155 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 5c32d12b2d8d..341c3f82ec1f 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -324,8 +324,8 @@ void i915_vma_unpin_and_release(struct i915_vma **p_vma)
 	__i915_gem_object_release_unless_active(obj);
 }
 
-bool
-i915_vma_misplaced(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
+bool i915_vma_misplaced(const struct i915_vma *vma,
+			u64 size, u64 alignment, u64 flags)
 {
 	if (!drm_mm_node_allocated(&vma->node))
 		return false;
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e39d922cfb6f..2e03f81dddbe 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -228,8 +228,8 @@ i915_vma_compare(struct i915_vma *vma,
 int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 		  u32 flags);
 bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long cache_level);
-bool
-i915_vma_misplaced(struct i915_vma *vma, u64 size, u64 alignment, u64 flags);
+bool i915_vma_misplaced(const struct i915_vma *vma,
+			u64 size, u64 alignment, u64 flags);
 void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
 int __must_check i915_vma_unbind(struct i915_vma *vma);
 void i915_vma_close(struct i915_vma *vma);
diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index e60f3a962f56..095d8348f5f0 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -202,10 +202,161 @@ static int igt_vma_create(void *arg)
 	return err;
 }
 
+struct pin_mode {
+	u64 size;
+	u64 flags;
+	bool (*assert)(const struct i915_vma *,
+		       const struct pin_mode *mode,
+		       int result);
+	const char *string;
+};
+
+static bool assert_pin_valid(const struct i915_vma *vma,
+			     const struct pin_mode *mode,
+			     int result)
+{
+	if (result)
+		return false;
+
+	if (i915_vma_misplaced(vma, mode->size, 0, mode->flags))
+		return false;
+
+	return true;
+}
+
+__maybe_unused
+static bool assert_pin_e2big(const struct i915_vma *vma,
+			     const struct pin_mode *mode,
+			     int result)
+{
+	return result == -E2BIG;
+}
+
+__maybe_unused
+static bool assert_pin_enospc(const struct i915_vma *vma,
+			      const struct pin_mode *mode,
+			      int result)
+{
+	return result == -ENOSPC;
+}
+
+__maybe_unused
+static bool assert_pin_einval(const struct i915_vma *vma,
+			      const struct pin_mode *mode,
+			      int result)
+{
+	return result == -EINVAL;
+}
+
+static int igt_vma_pin1(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	const struct pin_mode modes[] = {
+#define VALID(sz, fl) { .size = (sz), .flags = (fl), .assert = assert_pin_valid, .string = #sz ", " #fl ", (valid) " }
+#define __INVALID(sz, fl, check, eval) { .size = (sz), .flags = (fl), .assert = (check), .string = #sz ", " #fl ", (invalid " #eval ")" }
+#define INVALID(sz, fl) __INVALID(sz, fl, assert_pin_einval, EINVAL)
+#define TOOBIG(sz, fl) __INVALID(sz, fl, assert_pin_e2big, E2BIG)
+#define NOSPACE(sz, fl) __INVALID(sz, fl, assert_pin_enospc, ENOSPC)
+		VALID(0, PIN_GLOBAL),
+		VALID(0, PIN_GLOBAL | PIN_MAPPABLE),
+
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | 4096),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | 8192),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+		VALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.base.total - 4096)),
+
+		VALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | (i915->ggtt.mappable_end - 4096)),
+		INVALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | i915->ggtt.mappable_end),
+		VALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.base.total - 4096)),
+		INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | i915->ggtt.base.total),
+		INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | round_down(U64_MAX, PAGE_SIZE)),
+
+		VALID(4096, PIN_GLOBAL),
+		VALID(8192, PIN_GLOBAL),
+		VALID(i915->ggtt.mappable_end - 4096, PIN_GLOBAL | PIN_MAPPABLE),
+		VALID(i915->ggtt.mappable_end, PIN_GLOBAL | PIN_MAPPABLE),
+		TOOBIG(i915->ggtt.mappable_end + 4096, PIN_GLOBAL | PIN_MAPPABLE),
+		VALID(i915->ggtt.base.total - 4096, PIN_GLOBAL),
+		VALID(i915->ggtt.base.total, PIN_GLOBAL),
+		TOOBIG(i915->ggtt.base.total + 4096, PIN_GLOBAL),
+		TOOBIG(round_down(U64_MAX, PAGE_SIZE), PIN_GLOBAL),
+		INVALID(8192, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | (i915->ggtt.mappable_end - 4096)),
+		INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.base.total - 4096)),
+		INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (round_down(U64_MAX, PAGE_SIZE) - 4096)),
+
+		VALID(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+
+#if !IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
+		/* Misusing BIAS is a programming error (it is not controllable
+		 * from userspace) so when debugging is enabled, it explodes.
+		 * However, the tests are still quite interesting for checking
+		 * variable start, end and size.
+		 */
+		NOSPACE(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | i915->ggtt.mappable_end),
+		NOSPACE(0, PIN_GLOBAL | PIN_OFFSET_BIAS | i915->ggtt.base.total),
+		NOSPACE(8192, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
+		NOSPACE(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.base.total - 4096)),
+#endif
+		{ },
+#undef NOSPACE
+#undef TOOBIG
+#undef INVALID
+#undef __INVALID
+#undef VALID
+	}, *m;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int err = -EINVAL;
+
+	/* Exercise all the weird and wonderful i915_vma_pin requests,
+	 * focusing on error handling of boundary conditions.
+	 */
+
+	GEM_BUG_ON(!drm_mm_clean(&i915->ggtt.base.mm));
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = checked_vma_instance(obj, &i915->ggtt.base, NULL);
+	if (IS_ERR(vma))
+		goto out;
+
+	for (m = modes; m->assert; m++) {
+		err = i915_vma_pin(vma, m->size, 0, m->flags);
+		if (!m->assert(vma, m, err)) {
+			pr_err("%s to pin single page into GGTT with mode[%d:%s]: size=%llx flags=%llx, err=%d\n",
+			       m->assert == assert_pin_valid ? "Failed" : "Unexpectedly succeeded",
+			       (int)(m - modes), m->string, m->size, m->flags,
+			       err);
+			if (!err)
+				i915_vma_unpin(vma);
+			err = -EINVAL;
+			goto out;
+		}
+
+		if (!err) {
+			i915_vma_unpin(vma);
+			err = i915_vma_unbind(vma);
+			if (err) {
+				pr_err("Failed to unbind single page from GGTT, err=%d\n", err);
+				goto out;
+			}
+		}
+	}
+
+	err = 0;
+out:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int i915_vma_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_vma_create),
+		SUBTEST(igt_vma_pin1),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 38/46] drm/i915: Verify page layout for rotated VMA
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (36 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 37/46] drm/i915: Exercise i915_vma_pin/i915_vma_insert Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02 13:01   ` Tvrtko Ursulin
  2017-02-02  9:08 ` [PATCH 39/46] drm/i915: Test creation of partial VMA Chris Wilson
                   ` (9 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Exercise creating rotated VMA and checking the page order within.

v2: Be more creative in rotated params
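
The expected page order is given by rotated_index() in the patch:
stride * (height - y - 1) + offset + x, i.e. each column of the rotated view
walks the source rows in reverse. A userspace sketch that prints the mapping
for a small 3x2 plane:

#include <stdio.h>

/* Mirror of the rotated_index() mapping checked by the test below. */
static unsigned long rotated_index(unsigned int stride, unsigned int height,
				   unsigned int offset,
				   unsigned int x, unsigned int y)
{
	return (unsigned long)stride * (height - y - 1) + offset + x;
}

int main(void)
{
	const unsigned int width = 3, height = 2, stride = 3, offset = 0;
	unsigned int x, y;

	for (x = 0; x < width; x++)
		for (y = 0; y < height; y++)
			printf("rotated (%u, %u) -> source page %lu\n",
			       x, y,
			       rotated_index(stride, height, offset, x, y));
	return 0;
}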

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_vma.c | 179 ++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index 095d8348f5f0..4a737a670199 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -352,11 +352,190 @@ static int igt_vma_pin1(void *arg)
 	return err;
 }
 
+static unsigned long rotated_index(const struct intel_rotation_info *r,
+				   unsigned int n,
+				   unsigned int x,
+				   unsigned int y)
+{
+	return (r->plane[n].stride * (r->plane[n].height - y - 1) +
+		r->plane[n].offset + x);
+}
+
+static struct scatterlist *
+assert_rotated(struct drm_i915_gem_object *obj,
+	       const struct intel_rotation_info *r, unsigned int n,
+	       struct scatterlist *sg)
+{
+	unsigned int x, y;
+
+	for (x = 0; x < r->plane[n].width; x++) {
+		for (y = 0; y < r->plane[n].height; y++) {
+			unsigned long src_idx;
+			dma_addr_t src;
+
+			if (!sg) {
+				pr_err("Invalid sg table: too short at plane %d, (%d, %d)!\n",
+				       n, x, y);
+				return ERR_PTR(-EINVAL);
+			}
+
+			src_idx = rotated_index(r, n, x, y);
+			src = i915_gem_object_get_dma_address(obj, src_idx);
+
+			if (sg_dma_len(sg) != PAGE_SIZE) {
+				pr_err("Invalid sg.length, found %d, expected %lu for rotated page (%d, %d) [src index %lu]\n",
+				       sg_dma_len(sg), PAGE_SIZE,
+				       x, y, src_idx);
+				return ERR_PTR(-EINVAL);
+			}
+
+			if (sg_dma_address(sg) != src) {
+				pr_err("Invalid address for rotated page (%d, %d) [src index %lu]\n",
+				       x, y, src_idx);
+				return ERR_PTR(-EINVAL);
+			}
+
+			sg = sg_next(sg);
+		}
+	}
+
+	return sg;
+}
+
+static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
+				 const struct intel_rotation_plane_info *b)
+{
+	return a->width * a->height + b->width * b->height;
+}
+
+static int igt_vma_rotate(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_address_space *vm = &i915->ggtt.base;
+	struct drm_i915_gem_object *obj;
+	const struct intel_rotation_plane_info planes[] = {
+		{ .width = 1, .height = 1, .stride = 1 },
+		{ .width = 2, .height = 2, .stride = 2 },
+		{ .width = 4, .height = 4, .stride = 4 },
+		{ .width = 8, .height = 8, .stride = 8 },
+
+		{ .width = 3, .height = 5, .stride = 3 },
+		{ .width = 3, .height = 5, .stride = 4 },
+		{ .width = 3, .height = 5, .stride = 5 },
+
+		{ .width = 5, .height = 3, .stride = 5 },
+		{ .width = 5, .height = 3, .stride = 7 },
+		{ .width = 5, .height = 3, .stride = 9 },
+
+		{ .width = 4, .height = 6, .stride = 6 },
+		{ .width = 6, .height = 4, .stride = 6 },
+		{ }
+	}, *a, *b;
+	const unsigned int max_pages = 64;
+	int err = -ENOMEM;
+
+	/* Create VMA for many different combinations of planes and check
+	 * that the page layout within the rotated VMA match our expectations.
+	 */
+
+	obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
+	if (IS_ERR(obj))
+		goto out;
+
+	for (a = planes; a->width; a++) {
+		for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
+			struct i915_ggtt_view view;
+			unsigned int n, max_offset;
+
+			max_offset = max(a->stride * a->height,
+					 b->stride * b->height);
+			GEM_BUG_ON(max_offset > max_pages);
+			max_offset = max_pages - max_offset;
+
+			view.type = I915_GGTT_VIEW_ROTATED;
+			view.rotated.plane[0] = *a;
+			view.rotated.plane[1] = *b;
+
+			for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
+				for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {
+					struct scatterlist *sg;
+					struct i915_vma *vma;
+
+					vma = checked_vma_instance(obj, vm, &view);
+					if (IS_ERR(vma)) {
+						err = PTR_ERR(vma);
+						goto out_object;
+					}
+
+					err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+					if (err) {
+						pr_err("Failed to pin VMA, err=%d\n", err);
+						goto out_object;
+					}
+
+					if (vma->size != rotated_size(a, b) * PAGE_SIZE) {
+						pr_err("VMA is wrong size, expected %lu, found %llu\n",
+						       PAGE_SIZE * rotated_size(a, b), vma->size);
+						err = -EINVAL;
+						goto out_object;
+					}
+
+					if (vma->pages->nents != rotated_size(a, b)) {
+						pr_err("sg table is wrong sizeo, expected %u, found %u nents\n",
+						       rotated_size(a, b), vma->pages->nents);
+						err = -EINVAL;
+						goto out_object;
+					}
+
+					if (vma->node.size < vma->size) {
+						pr_err("VMA binding too small, expected %llu, found %llu\n",
+						       vma->size, vma->node.size);
+						err = -EINVAL;
+						goto out_object;
+					}
+
+					if (vma->pages == obj->mm.pages) {
+						pr_err("VMA using unrotated object pages!\n");
+						err = -EINVAL;
+						goto out_object;
+					}
+
+					sg = vma->pages->sgl;
+					for (n = 0; n < ARRAY_SIZE(view.rotated.plane); n++) {
+						sg = assert_rotated(obj, &view.rotated, n, sg);
+						if (IS_ERR(sg)) {
+							pr_err("Inconsistent VMA pages for plane %d: [(%d, %d, %d, %d), (%d, %d, %d, %d)]\n", n,
+							       view.rotated.plane[0].width,
+							       view.rotated.plane[0].height,
+							       view.rotated.plane[0].stride,
+							       view.rotated.plane[0].offset,
+							       view.rotated.plane[1].width,
+							       view.rotated.plane[1].height,
+							       view.rotated.plane[1].stride,
+							       view.rotated.plane[1].offset);
+							err = -EINVAL;
+							goto out_object;
+						}
+					}
+
+					i915_vma_unpin(vma);
+				}
+			}
+		}
+	}
+
+out_object:
+	i915_gem_object_put(obj);
+out:
+	return err;
+}
+
 int i915_vma_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_vma_create),
 		SUBTEST(igt_vma_pin1),
+		SUBTEST(igt_vma_rotate),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 39/46] drm/i915: Test creation of partial VMA
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (37 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 38/46] drm/i915: Verify page layout for rotated VMA Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:08 ` [PATCH 40/46] drm/i915: Live testing for context execution Chris Wilson
                   ` (8 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Mock testing to ensure we can create and look up partial VMAs.

v2: Named phases

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_vma.c | 192 ++++++++++++++++++++++++++++++
 1 file changed, 192 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index 4a737a670199..bb42b6191c7a 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -530,12 +530,204 @@ static int igt_vma_rotate(void *arg)
 	return err;
 }
 
+static bool assert_partial(struct drm_i915_gem_object *obj,
+			   struct i915_vma *vma,
+			   unsigned long offset,
+			   unsigned long size)
+{
+	struct sgt_iter sgt;
+	dma_addr_t dma;
+
+	for_each_sgt_dma(dma, sgt, vma->pages) {
+		dma_addr_t src;
+
+		if (!size) {
+			pr_err("Partial scattergather list too long\n");
+			return false;
+		}
+
+		src = i915_gem_object_get_dma_address(obj, offset);
+		if (src != dma) {
+			pr_err("DMA mismatch for partial page offset %lu\n",
+			       offset);
+			return false;
+		}
+
+		offset++;
+		size--;
+	}
+
+	return true;
+}
+
+static bool assert_pin(struct i915_vma *vma,
+		       struct i915_ggtt_view *view,
+		       u64 size,
+		       const char *name)
+{
+	bool ok = true;
+
+	if (vma->size != size) {
+		pr_err("(%s) VMA is wrong size, expected %llu, found %llu\n",
+		       name, size, vma->size);
+		ok = false;
+	}
+
+	if (vma->node.size < vma->size) {
+		pr_err("(%s) VMA binding too small, expected %llu, found %llu\n",
+		       name, vma->size, vma->node.size);
+		ok = false;
+	}
+
+	if (view && view->type != I915_GGTT_VIEW_NORMAL) {
+		if (memcmp(&vma->ggtt_view, view, sizeof(*view))) {
+			pr_err("(%s) VMA mismatch upon creation!\n",
+			       name);
+			ok = false;
+		}
+
+		if (vma->pages == vma->obj->mm.pages) {
+			pr_err("(%s) VMA using original object pages!\n",
+			       name);
+			ok = false;
+		}
+	} else {
+		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
+			pr_err("Not the normal ggtt view! Found %d\n",
+			       vma->ggtt_view.type);
+			ok = false;
+		}
+
+		if (vma->pages != vma->obj->mm.pages) {
+			pr_err("VMA not using object pages!\n");
+			ok = false;
+		}
+	}
+
+	return ok;
+}
+
+static int igt_vma_partial(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_address_space *vm = &i915->ggtt.base;
+	const unsigned int npages = 1021; /* prime! */
+	struct drm_i915_gem_object *obj;
+	const struct phase {
+		const char *name;
+	} phases[] = {
+		{ "create" },
+		{ "lookup" },
+		{ },
+	}, *p;
+	unsigned int sz, offset;
+	struct i915_vma *vma;
+	int err = -ENOMEM;
+
+	/* Create lots of different VMA for the object and check that
+	 * we are returned the same VMA when we later request the same range.
+	 */
+
+	obj = i915_gem_object_create_internal(i915, npages*PAGE_SIZE);
+	if (IS_ERR(obj))
+		goto out;
+
+	for (p = phases; p->name; p++) { /* exercise both create/lookup */
+		unsigned int count, nvma;
+
+		nvma = 0;
+		for_each_prime_number_from(sz, 1, npages) {
+			for_each_prime_number_from(offset, 0, npages - sz) {
+				struct i915_ggtt_view view;
+
+				view.type = I915_GGTT_VIEW_PARTIAL;
+				view.partial.offset = offset;
+				view.partial.size = sz;
+
+				if (sz == npages)
+					view.type = I915_GGTT_VIEW_NORMAL;
+
+				vma = checked_vma_instance(obj, vm, &view);
+				if (IS_ERR(vma)) {
+					err = PTR_ERR(vma);
+					goto out_object;
+				}
+
+				err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+				if (err)
+					goto out_object;
+
+				if (!assert_pin(vma, &view, sz*PAGE_SIZE, p->name)) {
+					pr_err("(%s) Inconsistent partial pinning for (offset=%d, size=%d)\n",
+					       p->name, offset, sz);
+					err = -EINVAL;
+					goto out_object;
+				}
+
+				if (!assert_partial(obj, vma, offset, sz)) {
+					pr_err("(%s) Inconsistent partial pages for (offset=%d, size=%d)\n",
+					       p->name, offset, sz);
+					err = -EINVAL;
+					goto out_object;
+				}
+
+				i915_vma_unpin(vma);
+				nvma++;
+			}
+		}
+
+		count = 0;
+		list_for_each_entry(vma, &obj->vma_list, obj_link)
+			count++;
+		if (count != nvma) {
+			pr_err("(%s) All partial vma were not recorded on the obj->vma_list: found %u, expected %u\n",
+			       p->name, count, nvma);
+			err = -EINVAL;
+			goto out_object;
+		}
+
+		/* Check that we did create the whole object mapping */
+		vma = checked_vma_instance(obj, vm, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_object;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+		if (err)
+			goto out_object;
+
+		if (!assert_pin(vma, NULL, obj->base.size, p->name)) {
+			pr_err("(%s) inconsistent full pin\n", p->name);
+			err = -EINVAL;
+			goto out_object;
+		}
+
+		i915_vma_unpin(vma);
+
+		count = 0;
+		list_for_each_entry(vma, &obj->vma_list, obj_link)
+			count++;
+		if (count != nvma) {
+			pr_err("(%s) allocated an extra full vma!\n", p->name);
+			err = -EINVAL;
+			goto out_object;
+		}
+	}
+
+out_object:
+	i915_gem_object_put(obj);
+out:
+	return err;
+}
+
 int i915_vma_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_vma_create),
 		SUBTEST(igt_vma_pin1),
 		SUBTEST(igt_vma_rotate),
+		SUBTEST(igt_vma_partial),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 40/46] drm/i915: Live testing for context execution
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (38 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 39/46] drm/i915: Test creation of partial VMA Chris Wilson
@ 2017-02-02  9:08 ` Chris Wilson
  2017-02-02  9:09 ` [PATCH 41/46] drm/i915: Initial selftests for exercising eviction Chris Wilson
                   ` (7 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:08 UTC (permalink / raw)
  To: intel-gfx

Check that we can create contexts and execute within them.
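
Roughly, the heart of igt_ctx_exec() below looks like this (per new context,
per engine, error handling elided), with cpu_check() later verifying that
each dword landed in the expected page of the object:

        ctx = i915_gem_create_context(i915, file->driver_priv);
        for_each_engine(engine, i915, id) {
                err = gpu_fill(obj, ctx, engine, dw % max_dwords(obj));
                if ((++dw % max_dwords(obj)) == 0)
                        obj = create_test_object(i915, file, &objects);
        }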

v2: Write one set of dwords through each context/engine to exercise more
contexts within the same time period.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_context.c            |   1 +
 drivers/gpu/drm/i915/selftests/i915_gem_context.c  | 400 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 3 files changed, 402 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_context.c

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index e6208e361356..baec4bbdffa6 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -1191,4 +1191,5 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_context.c"
+#include "selftests/i915_gem_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/selftests/i915_gem_context.c
new file mode 100644
index 000000000000..0e6ad1b3d3aa
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_context.c
@@ -0,0 +1,400 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include "mock_drm.h"
+#include "huge_gem_object.h"
+
+#define DW_PER_PAGE (PAGE_SIZE / sizeof(u32))
+
+static struct i915_vma *
+gpu_fill_dw(struct i915_vma *vma, u64 offset, unsigned long count, u32 value)
+{
+	struct drm_i915_gem_object *obj;
+	const int gen = INTEL_GEN(vma->vm->i915);
+	unsigned long n;
+	u32 *cmd;
+	int err;
+
+	n = (4*count + 1)*sizeof(u32);
+	obj = i915_gem_object_create_internal(vma->vm->i915,
+					      round_up(n, PAGE_SIZE));
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		goto err;
+	}
+
+	GEM_BUG_ON(offset + (count - 1) * PAGE_SIZE > vma->node.size);
+	offset += vma->node.start;
+
+	for (n = 0; n < count; n++) {
+		if (gen >= 8) {
+			*cmd++ = MI_STORE_DWORD_IMM_GEN4;
+			*cmd++ = lower_32_bits(offset);
+			*cmd++ = upper_32_bits(offset);
+			*cmd++ = value;
+		} else if (gen >= 4) {
+			*cmd++ = MI_STORE_DWORD_IMM_GEN4 |
+				(gen < 6 ? 1 << 22 : 0);
+			*cmd++ = 0;
+			*cmd++ = offset;
+			*cmd++ = value;
+		} else {
+			*cmd++ = MI_STORE_DWORD_IMM | 1 << 22;
+			*cmd++ = offset;
+			*cmd++ = value;
+		}
+		offset += PAGE_SIZE;
+	}
+	*cmd = MI_BATCH_BUFFER_END;
+	wmb();
+	i915_gem_object_unpin_map(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		goto err;
+
+	vma = i915_vma_instance(obj, vma->vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto err;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		goto err;
+
+	return vma;
+
+err:
+	i915_gem_object_put(obj);
+	return ERR_PTR(err);
+}
+
+static unsigned long real_page_count(struct drm_i915_gem_object *obj)
+{
+	return huge_gem_object_phys_size(obj) >> PAGE_SHIFT;
+}
+
+static unsigned long fake_page_count(struct drm_i915_gem_object *obj)
+{
+	return huge_gem_object_dma_size(obj) >> PAGE_SHIFT;
+}
+
+static int gpu_fill(struct drm_i915_gem_object *obj,
+		    struct i915_gem_context *ctx,
+		    struct intel_engine_cs *engine,
+		    unsigned int dw)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_address_space *vm =
+		ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *vma;
+	struct i915_vma *batch;
+	unsigned int flags;
+	int err;
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		return err;
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		return err;
+
+	/* Within the GTT the huge object maps every page onto
+	 * its 1024 real pages (using phys_pfn = dma_pfn % 1024).
+	 * We set the nth dword within the page using the nth
+	 * mapping via the GTT - this should exercise the GTT mapping
+	 * whilst checking that each context provides a unique view
+	 * into the object.
+	 */
+	batch = gpu_fill_dw(vma,
+			    (dw * real_page_count(obj)) << PAGE_SHIFT |
+			    (dw * sizeof(u32)),
+			    real_page_count(obj),
+			    dw);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		goto err_vma;
+	}
+
+	rq = i915_gem_request_alloc(engine, ctx);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto err_batch;
+	}
+
+	err = engine->emit_flush(rq, EMIT_INVALIDATE);
+	if (err)
+		goto err_request;
+
+	err = i915_switch_context(rq);
+	if (err)
+		goto err_request;
+
+	flags = 0;
+	if (INTEL_GEN(vm->i915) <= 5)
+		flags |= I915_DISPATCH_SECURE;
+
+	err = engine->emit_bb_start(rq,
+				    batch->node.start, batch->node.size,
+				    flags);
+	if (err)
+		goto err_request;
+
+	i915_vma_move_to_active(batch, rq, 0);
+	i915_gem_object_set_active_reference(batch->obj);
+	i915_vma_unpin(batch);
+	i915_vma_close(batch);
+
+	i915_vma_move_to_active(vma, rq, 0);
+	i915_vma_unpin(vma);
+
+	reservation_object_lock(obj->resv, NULL);
+	reservation_object_add_excl_fence(obj->resv, &rq->fence);
+	reservation_object_unlock(obj->resv);
+
+	__i915_add_request(rq, true);
+
+	return 0;
+
+err_request:
+	__i915_add_request(rq, false);
+err_batch:
+	i915_vma_unpin(batch);
+err_vma:
+	i915_vma_unpin(vma);
+	return err;
+}
+
+static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
+{
+	const bool has_llc = HAS_LLC(to_i915(obj->base.dev));
+	unsigned int n, m, need_flush;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_write(obj, &need_flush);
+	if (err)
+		return err;
+
+	for (n = 0; n < real_page_count(obj); n++) {
+		u32 *map;
+
+		map = kmap_atomic(i915_gem_object_get_page(obj, n));
+		for (m = 0; m < DW_PER_PAGE; m++)
+			map[m] = value;
+		if (!has_llc)
+			drm_clflush_virt_range(map, PAGE_SIZE);
+		kunmap_atomic(map);
+	}
+
+	i915_gem_obj_finish_shmem_access(obj);
+	obj->base.read_domains = I915_GEM_DOMAIN_GTT | I915_GEM_DOMAIN_CPU;
+	obj->base.write_domain = 0;
+	return 0;
+}
+
+static int cpu_check(struct drm_i915_gem_object *obj, unsigned int max)
+{
+	unsigned int n, m, needs_flush;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_read(obj, &needs_flush);
+	if (err)
+		return err;
+
+	for (n = 0; !err && n < real_page_count(obj); n++) {
+		u32 *map;
+
+		map = kmap_atomic(i915_gem_object_get_page(obj, n));
+		if (needs_flush & CLFLUSH_BEFORE)
+			drm_clflush_virt_range(map, PAGE_SIZE);
+
+		for (m = 0; !err && m < max; m++) {
+			if (map[m] != m) {
+				pr_err("Invalid value at page %d, offset %d: found %x expected %x\n",
+				       n, m, map[m], m);
+				err = -EINVAL;
+			}
+		}
+
+		for (; !err && m < DW_PER_PAGE; m++) {
+			if (map[m] != 0xdeadbeef) {
+				pr_err("Invalid value at page %d, offset %d: found %x expected %x\n",
+				       n, m, map[m], 0xdeadbeef);
+				err = -EINVAL;
+			}
+		}
+
+		kunmap_atomic(map);
+	}
+
+	i915_gem_obj_finish_shmem_access(obj);
+	return err;
+}
+
+static struct drm_i915_gem_object *
+create_test_object(struct drm_i915_private *i915,
+		   struct drm_file *file,
+		   struct list_head *objects)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_context *ctx;
+	struct i915_address_space *vm;
+	u64 npages;
+	u32 handle;
+	int err;
+
+	ctx = i915_gem_create_context(i915, file->driver_priv);
+	if (IS_ERR(ctx))
+		return ERR_CAST(ctx);
+
+	vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	npages = min(vm->total / 2,
+		     1024ull * DW_PER_PAGE * PAGE_SIZE);
+	npages = round_down(npages, DW_PER_PAGE * PAGE_SIZE);
+
+	obj = huge_gem_object(i915, DW_PER_PAGE * PAGE_SIZE, npages);
+	if (IS_ERR(obj))
+		return obj;
+
+	/* tie the handle to the drm_file for easy reaping */
+	err = drm_gem_handle_create(file, &obj->base, &handle);
+	i915_gem_object_put(obj);
+	if (err)
+		return ERR_PTR(err);
+
+	err = cpu_fill(obj, 0xdeadbeef);
+	if (err) {
+		pr_err("Failed to fill object with cpu, err=%d\n",
+		       err);
+		return ERR_PTR(err);
+	}
+
+	list_add_tail(&obj->batch_pool_link, objects);
+	return obj;
+}
+
+static unsigned long max_dwords(struct drm_i915_gem_object *obj)
+{
+	unsigned long npages = fake_page_count(obj);
+
+	GEM_BUG_ON(!IS_ALIGNED(npages, DW_PER_PAGE));
+	return npages / DW_PER_PAGE;
+}
+
+static int igt_ctx_exec(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_file *file = mock_file(i915);
+	struct drm_i915_gem_object *obj;
+	IGT_TIMEOUT(end_time);
+	LIST_HEAD(objects);
+	unsigned int count, dw;
+	int err;
+
+	/* Create a few different contexts (with different mm) and write
+	 * through each ctx/mm using the GPU making sure those writes end
+	 * up in the expected pages of our obj.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	obj = create_test_object(i915, file, &objects);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto out_unlock;
+	}
+
+	count = 0;
+	dw = 0;
+	while (!time_after(jiffies, end_time)) {
+		struct intel_engine_cs *engine;
+		struct i915_gem_context *ctx;
+		unsigned int id;
+
+		ctx = i915_gem_create_context(i915, file->driver_priv);
+		if (IS_ERR(ctx)) {
+			err = PTR_ERR(ctx);
+			goto out_unlock;
+		}
+
+		for_each_engine(engine, i915, id) {
+			err = gpu_fill(obj, ctx, engine, dw % max_dwords(obj));
+			if (err) {
+				pr_err("Failed to fill dword %u with gpu (%s), err=%d\n",
+				       dw, engine->name, err);
+				goto out_unlock;
+			}
+
+			if ((++dw % max_dwords(obj)) == 0) {
+				obj = create_test_object(i915, file, &objects);
+				if (IS_ERR(obj)) {
+					err = PTR_ERR(obj);
+					goto out_unlock;
+				}
+			}
+		}
+		count++;
+	}
+	pr_info("Submitted %d contexts (across %u engines), filling %u dwords\n",
+		count, INTEL_INFO(i915)->num_rings, dw);
+
+	count = 0;
+	list_for_each_entry(obj, &objects, batch_pool_link) {
+		unsigned int rem =
+			min_t(unsigned int, dw - count, max_dwords(obj));
+
+		err = cpu_check(obj, rem);
+		if (err)
+			break;
+
+		count += max_dwords(obj);
+	}
+
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	mock_file_free(i915, file);
+	return err;
+}
+
+int i915_gem_context_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_ctx_exec),
+	};
+
+	return i915_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 16d6dde29fca..15fb4e0dd503 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -15,3 +15,4 @@ selftest(object, i915_gem_object_live_selftests)
 selftest(dmabuf, i915_gem_dmabuf_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
 selftest(gtt, i915_gem_gtt_live_selftests)
+selftest(context, i915_gem_context_live_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 41/46] drm/i915: Initial selftests for exercising eviction
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (39 preceding siblings ...)
  2017-02-02  9:08 ` [PATCH 40/46] drm/i915: Live testing for context execution Chris Wilson
@ 2017-02-02  9:09 ` Chris Wilson
  2017-02-02  9:09 ` [PATCH 42/46] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
                   ` (6 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:09 UTC (permalink / raw)
  To: intel-gfx

Very simple tests that just ask eviction to find some free space, both in a
full GTT and in one with some space already available.
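
Each subtest follows the same pattern, sketched here from igt_evict_something()
below:

        err = populate_ggtt(i915);      /* fill the GGTT with pinned objects */

        err = i915_gem_evict_something(&ggtt->base,
                                       I915_GTT_PAGE_SIZE, 0, 0,
                                       0, U64_MAX,
                                       0);      /* expect -ENOSPC: all pinned */

        unpin_ggtt(i915);

        err = i915_gem_evict_something(&ggtt->base,
                                       I915_GTT_PAGE_SIZE, 0, 0,
                                       0, U64_MAX,
                                       0);      /* now expected to succeed */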

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_evict.c              |   4 +
 drivers/gpu/drm/i915/selftests/i915_gem_evict.c    | 260 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 3 files changed, 265 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_evict.c

diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index c181b1bb3d2c..609a8fcb48ca 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -387,3 +387,7 @@ int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
 
 	return 0;
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/i915_gem_evict.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
new file mode 100644
index 000000000000..97af353db218
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
@@ -0,0 +1,260 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include "mock_gem_device.h"
+
+static int populate_ggtt(struct drm_i915_private *i915)
+{
+	struct drm_i915_gem_object *obj;
+	u64 size;
+
+	for (size = 0;
+	     size + I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     size += I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
+		if (IS_ERR(vma))
+			return PTR_ERR(vma);
+	}
+
+	if (!list_empty(&i915->mm.unbound_list)) {
+		size = 0;
+		list_for_each_entry(obj, &i915->mm.unbound_list, global_link)
+			size++;
+
+		pr_err("Found %lld objects unbound!\n", size);
+		return -EINVAL;
+	}
+
+	if (list_empty(&i915->ggtt.base.inactive_list)) {
+		pr_err("No objects on the GGTT inactive list!\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void unpin_ggtt(struct drm_i915_private *i915)
+{
+	struct i915_vma *vma;
+
+	list_for_each_entry(vma, &i915->ggtt.base.inactive_list, vm_link)
+		i915_vma_unpin(vma);
+}
+
+static void cleanup_objects(struct drm_i915_private *i915)
+{
+	struct drm_i915_gem_object *obj, *on;
+
+	list_for_each_entry_safe(obj, on, &i915->mm.unbound_list, global_link)
+		i915_gem_object_put(obj);
+
+	list_for_each_entry_safe(obj, on, &i915->mm.bound_list, global_link)
+		i915_gem_object_put(obj);
+
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	i915_gem_drain_freed_objects(i915);
+
+	mutex_lock(&i915->drm.struct_mutex);
+}
+
+static int igt_evict_something(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	int err;
+
+	/* Fill the GGTT with pinned objects and try to evict one. */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	/* Everything is pinned, nothing should happen */
+	err = i915_gem_evict_something(&ggtt->base,
+				       I915_GTT_PAGE_SIZE, 0, 0,
+				       0, U64_MAX,
+				       0);
+	if (err != -ENOSPC) {
+		pr_err("i915_gem_evict_something failed on a full GGTT with err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+	unpin_ggtt(i915);
+
+	/* Everything is unpinned, we should be able to evict something */
+	err = i915_gem_evict_something(&ggtt->base,
+				       I915_GTT_PAGE_SIZE, 0, 0,
+				       0, U64_MAX,
+				       0);
+	if (err) {
+		pr_err("i915_gem_evict_something failed on a full GGTT with err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+static int igt_overcommit(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int err;
+
+	/* Fill the GGTT with pinned objects and then try to pin one more.
+	 * We expect it to fail.
+	 */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto cleanup;
+	}
+
+	list_move(&obj->global_link, &i915->mm.unbound_list);
+
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
+	if (!IS_ERR(vma) || PTR_ERR(vma) != -ENOSPC) {
+		pr_err("Failed to evict+insert, i915_gem_object_ggtt_pin returned err=%d\n", (int)PTR_ERR(vma));
+		err = -EINVAL;
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+static int igt_evict_for_vma(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_mm_node target = {
+		.start = 0,
+		.size = 4096,
+	};
+	int err;
+
+	/* Fill the GGTT with pinned objects and try to evict a range. */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	/* Everything is pinned, nothing should happen */
+	err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
+	if (err != -ENOSPC) {
+		pr_err("i915_gem_evict_for_node on a full GGTT returned err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+	unpin_ggtt(i915);
+
+	/* Everything is unpinned, we should be able to evict the node */
+	err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
+	if (err) {
+		pr_err("i915_gem_evict_for_node returned err=%d\n",
+		       err);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+static int igt_evict_vm(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	int err;
+
+	/* Fill the GGTT with pinned objects and try to evict everything. */
+
+	err = populate_ggtt(i915);
+	if (err)
+		goto cleanup;
+
+	/* Everything is pinned, nothing should happen */
+	err = i915_gem_evict_vm(&ggtt->base, false);
+	if (err) {
+		pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
+		       err);
+		goto cleanup;
+	}
+
+	unpin_ggtt(i915);
+
+	err = i915_gem_evict_vm(&ggtt->base, false);
+	if (err) {
+		pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
+		       err);
+		goto cleanup;
+	}
+
+cleanup:
+	cleanup_objects(i915);
+	return err;
+}
+
+int i915_gem_evict_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_evict_something),
+		SUBTEST(igt_evict_for_vma),
+		SUBTEST(igt_evict_vm),
+		SUBTEST(igt_overcommit),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_subtests(tests, i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index b450eab7e6e1..cfbd3f5486ae 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -16,3 +16,4 @@ selftest(requests, i915_gem_request_mock_selftests)
 selftest(objects, i915_gem_object_mock_selftests)
 selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
 selftest(vma, i915_vma_mock_selftests)
+selftest(evict, i915_gem_evict_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 42/46] drm/i915: Add mock exercise for i915_gem_gtt_reserve
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (40 preceding siblings ...)
  2017-02-02  9:09 ` [PATCH 41/46] drm/i915: Initial selftests for exercising eviction Chris Wilson
@ 2017-02-02  9:09 ` Chris Wilson
  2017-02-02  9:09 ` [PATCH 43/46] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
                   ` (5 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:09 UTC (permalink / raw)
  To: intel-gfx

i915_gem_gtt_reserve should place the node at exactly the requested offset
in the GTT, evicting as required.
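
For reference, the call as exercised by the three passes below; the test then
asserts that vma->node.start/size match what was asked for:

        err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
                                   obj->base.size,      /* size */
                                   total,               /* exact GTT offset */
                                   obj->cache_level,
                                   0);                  /* flags */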

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      | 195 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 2 files changed, 196 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index abde71d857e0..f4b627eb63f6 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -29,6 +29,7 @@
 #include "i915_random.h"
 
 #include "mock_drm.h"
+#include "mock_gem_device.h"
 
 static void fake_free_pages(struct drm_i915_gem_object *obj,
 			    struct sg_table *pages)
@@ -682,6 +683,200 @@ static int igt_ggtt_drunk(void *arg)
 	return exercise_ggtt(arg, drunk_hole);
 }
 
+static void track_vma_bind(struct i915_vma *vma)
+{
+	struct drm_i915_gem_object *obj = vma->obj;
+
+	obj->bind_count++; /* track for eviction later */
+	__i915_gem_object_pin_pages(obj);
+
+	vma->pages = obj->mm.pages;
+	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
+}
+
+static int igt_gtt_reserve(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *on;
+	LIST_HEAD(objects);
+	u64 total;
+	int err;
+
+	/* i915_gem_gtt_reserve() tries to reserve the precise range
+	 * for the node, and evicts if it has to. So our test checks that
+	 * it can give us the requested space and prevent overlaps.
+	 */
+
+	/* Start by filling the GGTT */
+	for (total = 0;
+	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += 2*I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, 2*PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto out;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto out;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
+					   obj->base.size,
+					   total,
+					   obj->cache_level,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_reserve (pass 1) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto out;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != total ||
+		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
+			pr_err("i915_gem_gtt_reserve (pass 1) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			       vma->node.start, vma->node.size,
+			       total, 2*I915_GTT_PAGE_SIZE);
+			err = -EINVAL;
+			goto out;
+		}
+	}
+
+	/* Now we start forcing evictions */
+	for (total = I915_GTT_PAGE_SIZE;
+	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += 2*I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, 2*PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto out;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto out;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
+					   obj->base.size,
+					   total,
+					   obj->cache_level,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_reserve (pass 2) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto out;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != total ||
+		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
+			pr_err("i915_gem_gtt_reserve (pass 2) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			       vma->node.start, vma->node.size,
+			       total, 2*I915_GTT_PAGE_SIZE);
+			err = -EINVAL;
+			goto out;
+		}
+	}
+
+	/* And then try at random */
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		struct i915_vma *vma;
+		u64 offset;
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		err = i915_vma_unbind(vma);
+		if (err) {
+			pr_err("i915_vma_unbind failed with err=%d!\n", err);
+			goto out;
+		}
+
+		offset = random_offset(0, i915->ggtt.base.total,
+				       2*I915_GTT_PAGE_SIZE,
+				       I915_GTT_MIN_ALIGNMENT);
+
+		err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
+					   obj->base.size,
+					   offset,
+					   obj->cache_level,
+					   0);
+		if (err) {
+			pr_err("i915_gem_gtt_reserve (pass 3) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto out;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != offset ||
+		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
+			pr_err("i915_gem_gtt_reserve (pass 3) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			       vma->node.start, vma->node.size,
+			       offset, 2*I915_GTT_PAGE_SIZE);
+			err = -EINVAL;
+			goto out;
+		}
+	}
+
+out:
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+	return err;
+}
+
+int i915_gem_gtt_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gtt_reserve),
+	};
+	struct drm_i915_private *i915;
+	int err;
+
+	i915 = mock_gem_device();
+	if (!i915)
+		return -ENOMEM;
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = i915_subtests(tests, i915);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	drm_dev_unref(&i915->drm);
+	return err;
+}
+
 int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index cfbd3f5486ae..be9a9ebf5692 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -17,3 +17,4 @@ selftest(objects, i915_gem_object_mock_selftests)
 selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
 selftest(vma, i915_vma_mock_selftests)
 selftest(evict, i915_gem_evict_mock_selftests)
+selftest(gtt, i915_gem_gtt_mock_selftests)
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 43/46] drm/i915: Add mock exercise for i915_gem_gtt_insert
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (41 preceding siblings ...)
  2017-02-02  9:09 ` [PATCH 42/46] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
@ 2017-02-02  9:09 ` Chris Wilson
  2017-02-02  9:09 ` [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling Chris Wilson
                   ` (4 subsequent siblings)
  47 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:09 UTC (permalink / raw)
  To: intel-gfx

i915_gem_gtt_insert should allocate from the available free space in the
GTT, evicting as necessary to create space.
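
The call under test, as used by the passes below; unlike i915_gem_gtt_reserve()
it is given a search range rather than an exact offset:

        err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
                                  obj->base.size, 0, obj->cache_level,
                                  0, i915->ggtt.base.total,
                                  0);
        /* -ENOSPC once the GGTT is full (pass 1); otherwise any free hole */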

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 208 ++++++++++++++++++++++++++
 1 file changed, 208 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index f4b627eb63f6..c1c7a8837ffd 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -857,10 +857,218 @@ static int igt_gtt_reserve(void *arg)
 	return err;
 }
 
+static int igt_gtt_insert(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *on;
+	struct drm_mm_node tmp = {};
+	const struct invalid_insert {
+		u64 size;
+		u64 alignment;
+		u64 start, end;
+	} invalid_insert[] = {
+		{
+			i915->ggtt.base.total + I915_GTT_PAGE_SIZE, 0,
+			0, i915->ggtt.base.total,
+		},
+		{
+			2*I915_GTT_PAGE_SIZE, 0,
+			0, I915_GTT_PAGE_SIZE,
+		},
+		{
+			-(u64)I915_GTT_PAGE_SIZE, 0,
+			0, 4*I915_GTT_PAGE_SIZE,
+		},
+		{
+			-(u64)2*I915_GTT_PAGE_SIZE, 2*I915_GTT_PAGE_SIZE,
+			0, 4*I915_GTT_PAGE_SIZE,
+		},
+		{
+			I915_GTT_PAGE_SIZE, I915_GTT_MIN_ALIGNMENT << 1,
+			I915_GTT_MIN_ALIGNMENT, I915_GTT_MIN_ALIGNMENT << 1,
+		},
+		{}
+	}, *ii;
+	LIST_HEAD(objects);
+	u64 total;
+	int err;
+
+	/* i915_gem_gtt_insert() tries to allocate some free space in the GTT
+	 * to the node, evicting if required.
+	 */
+
+	/* Check a couple of obviously invalid requests */
+	for (ii = invalid_insert; ii->size; ii++) {
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &tmp,
+					  ii->size, ii->alignment,
+					  I915_COLOR_UNEVICTABLE,
+					  ii->start, ii->end,
+					  0);
+		if (err != -ENOSPC) {
+			pr_err("Invalid i915_gem_gtt_insert(.size=%llx, .alignment=%llx, .start=%llx, .end=%llx) succeeded (err=%d)\n",
+			       ii->size, ii->alignment, ii->start, ii->end,
+			       err);
+			return -EINVAL;
+		}
+	}
+
+	/* Start by filling the GGTT */
+	for (total = 0;
+	     total + I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto out;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto out;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
+					  obj->base.size, 0, obj->cache_level,
+					  0, i915->ggtt.base.total,
+					  0);
+		if (err == -ENOSPC) {
+			/* maxed out the GGTT space */
+			i915_gem_object_put(obj);
+			break;
+		}
+		if (err) {
+			pr_err("i915_gem_gtt_insert (pass 1) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto out;
+		}
+		track_vma_bind(vma);
+		__i915_vma_pin(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+	}
+
+	list_for_each_entry(obj, &objects, batch_pool_link) {
+		struct i915_vma *vma;
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		if (!drm_mm_node_allocated(&vma->node)) {
+			pr_err("VMA was unexpectedly evicted!\n");
+			err = -EINVAL;
+			goto out;
+		}
+
+		__i915_vma_unpin(vma);
+	}
+
+	/* If we then reinsert, we should find the same hole */
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		struct i915_vma *vma;
+		u64 offset;
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		offset = vma->node.start;
+
+		err = i915_vma_unbind(vma);
+		if (err) {
+			pr_err("i915_vma_unbind failed with err=%d!\n", err);
+			goto out;
+		}
+
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
+					  obj->base.size, 0, obj->cache_level,
+					  0, i915->ggtt.base.total,
+					  0);
+		if (err) {
+			pr_err("i915_gem_gtt_insert (pass 2) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto out;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+		if (vma->node.start != offset) {
+			pr_err("i915_gem_gtt_insert did not return node to its previous location (the only hole), expected address %llx, found %llx\n",
+			       offset, vma->node.start);
+			err = -EINVAL;
+			goto out;
+		}
+	}
+
+	/* And then force evictions */
+	for (total = 0;
+	     total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
+	     total += 2*I915_GTT_PAGE_SIZE) {
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create_internal(i915, 2*I915_GTT_PAGE_SIZE);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto out;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			goto out;
+		}
+
+		list_add(&obj->batch_pool_link, &objects);
+
+		vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out;
+		}
+
+		err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
+					  obj->base.size, 0, obj->cache_level,
+					  0, i915->ggtt.base.total,
+					  0);
+		if (err) {
+			pr_err("i915_gem_gtt_insert (pass 3) failed at %llu/%llu with err=%d\n",
+			       total, i915->ggtt.base.total, err);
+			goto out;
+		}
+		track_vma_bind(vma);
+
+		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+	}
+
+out:
+	list_for_each_entry_safe(obj, on, &objects, batch_pool_link) {
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+	return err;
+}
+
 int i915_gem_gtt_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_gtt_reserve),
+		SUBTEST(igt_gtt_insert),
 	};
 	struct drm_i915_private *i915;
 	int err;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (42 preceding siblings ...)
  2017-02-02  9:09 ` [PATCH 43/46] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
@ 2017-02-02  9:09 ` Chris Wilson
  2017-02-08 12:12   ` Matthew Auld
  2017-02-09 10:53   ` Joonas Lahtinen
  2017-02-02  9:09 ` [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT Chris Wilson
                   ` (3 subsequent siblings)
  47 siblings, 2 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:09 UTC (permalink / raw)
  To: intel-gfx

Use the live tests against the mock ppgtt for quick testing of the VMA
layer on all platforms.
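
The glue is small: exercise_mock() below simply points the existing hole
exercisers (fill_hole, walk_hole, drunk_hole) at a mock ppgtt:

        ctx = mock_context(i915, "mock");
        ppgtt = ctx->ppgtt;
        err = func(i915, &ppgtt->base, 0, ppgtt->base.total, end_time);
        mock_context_close(ctx);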

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 43 +++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index c1c7a8837ffd..7ec6fb2208a6 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -28,6 +28,7 @@
 #include "../i915_selftest.h"
 #include "i915_random.h"
 
+#include "mock_context.h"
 #include "mock_drm.h"
 #include "mock_gem_device.h"
 
@@ -694,6 +695,45 @@ static void track_vma_bind(struct i915_vma *vma)
 	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
 }
 
+static int exercise_mock(struct drm_i915_private *i915,
+			 int (*func)(struct drm_i915_private *i915,
+				     struct i915_address_space *vm,
+				     u64 hole_start, u64 hole_end,
+				     unsigned long end_time))
+{
+	struct i915_gem_context *ctx;
+	struct i915_hw_ppgtt *ppgtt;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	ctx = mock_context(i915, "mock");
+	if (!ctx)
+		return -ENOMEM;
+
+	ppgtt = ctx->ppgtt;
+	GEM_BUG_ON(!ppgtt);
+
+	err = func(i915, &ppgtt->base, 0, ppgtt->base.total, end_time);
+
+	mock_context_close(ctx);
+	return err;
+}
+
+static int igt_mock_fill(void *arg)
+{
+	return exercise_mock(arg, fill_hole);
+}
+
+static int igt_mock_walk(void *arg)
+{
+	return exercise_mock(arg, walk_hole);
+}
+
+static int igt_mock_drunk(void *arg)
+{
+	return exercise_mock(arg, drunk_hole);
+}
+
 static int igt_gtt_reserve(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -1067,6 +1107,9 @@ static int igt_gtt_insert(void *arg)
 int i915_gem_gtt_mock_selftests(void)
 {
 	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_mock_drunk),
+		SUBTEST(igt_mock_walk),
+		SUBTEST(igt_mock_fill),
 		SUBTEST(igt_gtt_reserve),
 		SUBTEST(igt_gtt_insert),
 	};
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (43 preceding siblings ...)
  2017-02-02  9:09 ` [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling Chris Wilson
@ 2017-02-02  9:09 ` Chris Wilson
  2017-02-08 12:25   ` Matthew Auld
  2017-02-02  9:09 ` [PATCH 46/46] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
                   ` (2 subsequent siblings)
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:09 UTC (permalink / raw)
  To: intel-gfx

Move a single page of an object around within the GGTT and check
coherency of writes and reads.
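
The write half of the per-offset check in igt_ggtt_page() below; the read-back
pass mirrors it with ioread32() after reshuffling the random order:

        ggtt->base.insert_page(&ggtt->base,
                               i915_gem_object_get_dma_address(obj, 0),
                               offset, I915_CACHE_NONE, 0);

        vaddr = io_mapping_map_atomic_wc(&ggtt->mappable, offset);
        iowrite32(n, vaddr + n);
        io_mapping_unmap_atomic(vaddr);

        ggtt->base.clear_range(&ggtt->base, offset, PAGE_SIZE);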

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 91 +++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 7ec6fb2208a6..27e380a3bae5 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -684,6 +684,96 @@ static int igt_ggtt_drunk(void *arg)
 	return exercise_ggtt(arg, drunk_hole);
 }
 
+static int igt_ggtt_page(void *arg)
+{
+	const unsigned int count = PAGE_SIZE/sizeof(u32);
+	I915_RND_STATE(prng);
+	struct drm_i915_private *i915 = arg;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_i915_gem_object *obj;
+	struct drm_mm_node tmp;
+	unsigned int *order, n;
+	int err;
+
+	mutex_lock(&i915->drm.struct_mutex);
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto out_unlock;
+	}
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err)
+		goto out_free;
+
+	memset(&tmp, 0, sizeof(tmp));
+	err = drm_mm_insert_node_in_range(&ggtt->base.mm, &tmp,
+					  1024 * PAGE_SIZE, 0,
+					  I915_COLOR_UNEVICTABLE,
+					  0, ggtt->mappable_end,
+					  DRM_MM_INSERT_LOW);
+	if (err)
+		goto out_unpin;
+
+	order = i915_random_order(count, &prng);
+	if (!order) {
+		err = -ENOMEM;
+		goto out_remove;
+	}
+
+	for (n = 0; n < count; n++) {
+		u64 offset = tmp.start + order[n] * PAGE_SIZE;
+		u32 __iomem *vaddr;
+
+		ggtt->base.insert_page(&ggtt->base,
+				       i915_gem_object_get_dma_address(obj, 0),
+				       offset, I915_CACHE_NONE, 0);
+
+		vaddr = io_mapping_map_atomic_wc(&ggtt->mappable, offset);
+		iowrite32(n, vaddr + n);
+		io_mapping_unmap_atomic(vaddr);
+
+		wmb();
+		ggtt->base.clear_range(&ggtt->base, offset, PAGE_SIZE);
+	}
+
+	i915_random_reorder(order, count, &prng);
+	for (n = 0; n < count; n++) {
+		u64 offset = tmp.start + order[n] * PAGE_SIZE;
+		u32 __iomem *vaddr;
+		u32 val;
+
+		ggtt->base.insert_page(&ggtt->base,
+				       i915_gem_object_get_dma_address(obj, 0),
+				       offset, I915_CACHE_NONE, 0);
+
+		vaddr = io_mapping_map_atomic_wc(&ggtt->mappable, offset);
+		val = ioread32(vaddr + n);
+		io_mapping_unmap_atomic(vaddr);
+
+		ggtt->base.clear_range(&ggtt->base, offset, PAGE_SIZE);
+
+		if (val != n) {
+			pr_err("insert page failed: found %d, expected %d\n",
+			       val, n);
+			err = -EINVAL;
+			break;
+		}
+	}
+
+	kfree(order);
+out_remove:
+	drm_mm_remove_node(&tmp);
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_free:
+	i915_gem_object_put(obj);
+out_unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
 static void track_vma_bind(struct i915_vma *vma)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
@@ -1138,6 +1228,7 @@ int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(igt_ggtt_drunk),
 		SUBTEST(igt_ggtt_walk),
 		SUBTEST(igt_ggtt_fill),
+		SUBTEST(igt_ggtt_page),
 	};
 
 	GEM_BUG_ON(offset_in_page(i915->ggtt.base.total));
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* [PATCH 46/46] drm/i915: Add initial selftests for hang detection and resets
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (44 preceding siblings ...)
  2017-02-02  9:09 ` [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT Chris Wilson
@ 2017-02-02  9:09 ` Chris Wilson
  2017-02-02 13:28   ` Mika Kuoppala
  2017-02-02  9:18 ` [PATCH igt] intel-ci: Add all driver selftests to BAT Chris Wilson
  2017-02-02 11:32 ` ✗ Fi.CI.BAT: failure for series starting with [v6] drm: Provide a driver hook for drm_dev_release() (rev2) Patchwork
  47 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:09 UTC (permalink / raw)
  To: intel-gfx

Check that we can reset the GPU and continue executing from the next
request.
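
For orientation, the shape of igt_wait_reset() below (error handling elided):
submit a hanging batch, pretend hangcheck declared it stuck, then check that
the waiter is kicked and a reset recorded:

        rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
        __i915_add_request(rq, true);
        /* wait_for_hang() confirms the batch has started spinning */

        reset_count = fake_hangcheck(rq);       /* mark the engine stalled */
        timeout = i915_wait_request(rq, I915_WAIT_LOCKED, 10);

        if (i915_reset_count(&i915->gpu_error) == reset_count)
                err = -EINVAL;                  /* no reset was recorded */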

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h                    |   4 +-
 drivers/gpu/drm/i915/intel_hangcheck.c             |   4 +
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/intel_hangcheck.c   | 531 +++++++++++++++++++++
 4 files changed, 538 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_hangcheck.c

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 4a7e4b10c0a9..f82c59768f65 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3356,8 +3356,8 @@ int __must_check i915_gem_init(struct drm_i915_private *dev_priv);
 int __must_check i915_gem_init_hw(struct drm_i915_private *dev_priv);
 void i915_gem_init_swizzling(struct drm_i915_private *dev_priv);
 void i915_gem_cleanup_engines(struct drm_i915_private *dev_priv);
-int __must_check i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
-					unsigned int flags);
+int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
+			   unsigned int flags);
 int __must_check i915_gem_suspend(struct drm_i915_private *dev_priv);
 void i915_gem_resume(struct drm_i915_private *dev_priv);
 int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
index f05971f5586f..dce742243ba6 100644
--- a/drivers/gpu/drm/i915/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/intel_hangcheck.c
@@ -480,3 +480,7 @@ void intel_hangcheck_init(struct drm_i915_private *i915)
 	INIT_DELAYED_WORK(&i915->gpu_error.hangcheck_work,
 			  i915_hangcheck_elapsed);
 }
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_hangcheck.c"
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 15fb4e0dd503..d0d4f4bcd837 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -16,3 +16,4 @@ selftest(dmabuf, i915_gem_dmabuf_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
 selftest(gtt, i915_gem_gtt_live_selftests)
 selftest(context, i915_gem_context_live_selftests)
+selftest(hangcheck, intel_hangcheck_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
new file mode 100644
index 000000000000..2131d8707dfd
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
@@ -0,0 +1,531 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+struct hang {
+	struct drm_i915_private *i915;
+	struct drm_i915_gem_object *hws;
+	struct drm_i915_gem_object *obj;
+	u32 *seqno;
+	u32 *batch;
+};
+
+static int hang_init(struct hang *h, struct drm_i915_private *i915)
+{
+	void *vaddr;
+	int err;
+
+	memset(h, 0, sizeof(*h));
+	h->i915 = i915;
+
+	h->hws = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(h->hws))
+		return PTR_ERR(h->hws);
+
+	h->obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(h->obj)) {
+		err = PTR_ERR(h->obj);
+		goto err_hws;
+	}
+
+	i915_gem_object_set_cache_level(h->hws, I915_CACHE_LLC);
+	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto err_obj;
+	}
+	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
+
+	vaddr = i915_gem_object_pin_map(h->obj,
+					HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto err_unpin_hws;
+	}
+	h->batch = vaddr;
+
+	return 0;
+
+err_unpin_hws:
+	i915_gem_object_unpin_map(h->hws);
+err_obj:
+	i915_gem_object_put(h->obj);
+err_hws:
+	i915_gem_object_put(h->hws);
+	return err;
+}
+
+static u64 hws_address(const struct i915_vma *hws,
+		       const struct drm_i915_gem_request *rq)
+{
+	return hws->node.start + offset_in_page(sizeof(u32)*rq->fence.context);
+}
+
+static int emit_recurse_batch(struct hang *h,
+			      struct drm_i915_gem_request *rq)
+{
+	struct drm_i915_private *i915 = h->i915;
+	struct i915_address_space *vm = rq->ctx->ppgtt ? &rq->ctx->ppgtt->base : &i915->ggtt.base;
+	struct i915_vma *hws, *vma;
+	unsigned int flags;
+	u32 *batch;
+	int err;
+
+	vma = i915_vma_instance(h->obj, vm, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	hws = i915_vma_instance(h->hws, vm, NULL);
+	if (IS_ERR(hws))
+		return PTR_ERR(hws);
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		return err;
+
+	err = i915_vma_pin(hws, 0, 0, PIN_USER);
+	if (err)
+		goto unpin_vma;
+
+	err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
+	if (err)
+		goto unpin_hws;
+
+	err = i915_switch_context(rq);
+	if (err)
+		goto unpin_hws;
+
+	i915_vma_move_to_active(vma, rq, 0);
+	if (!i915_gem_object_has_active_reference(vma->obj)) {
+		i915_gem_object_get(vma->obj);
+		i915_gem_object_set_active_reference(vma->obj);
+	}
+
+	i915_vma_move_to_active(hws, rq, 0);
+	if (!i915_gem_object_has_active_reference(hws->obj)) {
+		i915_gem_object_get(hws->obj);
+		i915_gem_object_set_active_reference(hws->obj);
+	}
+
+	batch = h->batch;
+	if (INTEL_GEN(i915) >= 8) {
+		*batch++ = MI_STORE_DWORD_IMM_GEN4;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = upper_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
+		*batch++ = lower_32_bits(vma->node.start);
+		*batch++ = upper_32_bits(vma->node.start);
+	} else if (INTEL_GEN(i915) >= 6) {
+		*batch++ = MI_STORE_DWORD_IMM_GEN4;
+		*batch++ = 0;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
+		*batch++ = lower_32_bits(vma->node.start);
+	} else if (INTEL_GEN(i915) >= 4) {
+		*batch++ = MI_STORE_DWORD_IMM_GEN4 | 1 << 22;
+		*batch++ = 0;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
+		*batch++ = lower_32_bits(vma->node.start);
+	} else {
+		*batch++ = MI_STORE_DWORD_IMM;
+		*batch++ = lower_32_bits(hws_address(hws, rq));
+		*batch++ = rq->fence.seqno;
+		*batch++ = MI_BATCH_BUFFER_START | 2 << 6 | 1;
+		*batch++ = lower_32_bits(vma->node.start);
+	}
+	*batch++ = MI_BATCH_BUFFER_END; /* not reached */
+
+	flags = 0;
+	if (INTEL_GEN(vm->i915) <= 5)
+		flags |= I915_DISPATCH_SECURE;
+
+	err = rq->engine->emit_bb_start(rq, vma->node.start, PAGE_SIZE, flags);
+
+unpin_hws:
+	i915_vma_unpin(hws);
+unpin_vma:
+	i915_vma_unpin(vma);
+	return err;
+}
+
+static struct drm_i915_gem_request *
+hang_create_request(struct hang *h,
+		    struct intel_engine_cs *engine,
+		    struct i915_gem_context *ctx)
+{
+	struct drm_i915_gem_request *rq;
+	int err;
+
+	if (i915_gem_object_is_active(h->obj)) {
+		struct drm_i915_gem_object *obj;
+		void *vaddr;
+
+		obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
+		if (IS_ERR(obj))
+			return ERR_CAST(obj);
+
+		vaddr = i915_gem_object_pin_map(obj,
+						HAS_LLC(h->i915) ? I915_MAP_WB : I915_MAP_WC);
+		if (IS_ERR(vaddr)) {
+			i915_gem_object_put(obj);
+			return ERR_CAST(vaddr);
+		}
+
+		i915_gem_object_unpin_map(h->obj);
+		i915_gem_object_put(h->obj);
+
+		h->obj = obj;
+		h->batch = vaddr;
+	}
+
+	rq = i915_gem_request_alloc(engine, ctx);
+	if (IS_ERR(rq))
+		return rq;
+
+	err = emit_recurse_batch(h, rq);
+	if (err) {
+		__i915_add_request(rq, false);
+		return ERR_PTR(err);
+	}
+
+	return rq;
+}
+
+static u32 hws_seqno(const struct hang *h,
+		     const struct drm_i915_gem_request *rq)
+{
+	return READ_ONCE(h->seqno[rq->fence.context % (PAGE_SIZE/sizeof(u32))]);
+}
+
+static void hang_fini(struct hang *h)
+{
+	*h->batch = MI_BATCH_BUFFER_END;
+	wmb();
+
+	i915_gem_object_unpin_map(h->obj);
+	i915_gem_object_put(h->obj);
+
+	i915_gem_object_unpin_map(h->hws);
+	i915_gem_object_put(h->hws);
+
+	i915_gem_wait_for_idle(h->i915, I915_WAIT_LOCKED);
+	i915_gem_retire_requests(h->i915);
+}
+
+static int igt_hang_sanitycheck(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *rq;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	struct hang h;
+	int err;
+
+	/* Basic check that we can execute our hanging batch */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = hang_init(&h, i915);
+	if (err)
+		goto unlock;
+
+	for_each_engine(engine, i915, id) {
+		long timeout;
+
+		rq = hang_create_request(&h, engine, i915->kernel_context);
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			pr_err("Failed to create request for %s, err=%d\n",
+			       engine->name, err);
+			goto fini;
+		}
+
+		i915_gem_request_get(rq);
+
+		*h.batch = MI_BATCH_BUFFER_END;
+		__i915_add_request(rq, true);
+
+		timeout = i915_wait_request(rq,
+					    I915_WAIT_LOCKED,
+					    MAX_SCHEDULE_TIMEOUT);
+		i915_gem_request_put(rq);
+
+		if (timeout < 0) {
+			err = timeout;
+			pr_err("Wait for request failed on %s, err=%d\n",
+			       engine->name, err);
+			goto fini;
+		}
+	}
+
+fini:
+	hang_fini(&h);
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+	return err;
+}
+
+static int igt_global_reset(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	unsigned int reset_count;
+	int err = 0;
+
+	/* Check that we can issue a global GPU reset */
+
+	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
+
+	mutex_lock(&i915->drm.struct_mutex);
+	reset_count = i915_reset_count(&i915->gpu_error);
+
+	i915_reset(i915);
+
+	if (i915_reset_count(&i915->gpu_error) == reset_count) {
+		pr_err("No GPU reset recorded!\n");
+		err = -EINVAL;
+	}
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
+	if (i915_terminally_wedged(&i915->gpu_error))
+		err = -EIO;
+
+	return err;
+}
+
+static u32 fake_hangcheck(struct drm_i915_gem_request *rq)
+{
+	u32 reset_count;
+
+	rq->engine->hangcheck.stalled = true;
+	rq->engine->hangcheck.seqno = intel_engine_get_seqno(rq->engine);
+
+	reset_count = i915_reset_count(&rq->i915->gpu_error);
+
+	set_bit(I915_RESET_IN_PROGRESS, &rq->i915->gpu_error.flags);
+	wake_up_all(&rq->i915->gpu_error.wait_queue);
+
+	return reset_count;
+}
+
+static bool wait_for_hang(struct hang *h, struct drm_i915_gem_request *rq)
+{
+	return !(wait_for_us(i915_seqno_passed(hws_seqno(h, rq),
+					       rq->fence.seqno),
+			     10) &&
+		 wait_for(i915_seqno_passed(hws_seqno(h, rq),
+					    rq->fence.seqno),
+			  1000));
+}
+
+static int igt_wait_reset(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_request *rq;
+	unsigned int reset_count;
+	struct hang h;
+	long timeout;
+	int err;
+
+	/* Check that we detect a stuck waiter and issue a reset */
+
+	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = hang_init(&h, i915);
+	if (err)
+		goto unlock;
+
+	rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto fini;
+	}
+
+	i915_gem_request_get(rq);
+	__i915_add_request(rq, true);
+
+	if (!wait_for_hang(&h, rq)) {
+		pr_err("Failed to start request %x\n", rq->fence.seqno);
+		err = -EIO;
+		goto out_rq;
+	}
+
+	reset_count = fake_hangcheck(rq);
+
+	timeout = i915_wait_request(rq, I915_WAIT_LOCKED, 10);
+	if (timeout < 0) {
+		pr_err("i915_wait_request failed on a stuck request: err=%ld\n",
+		       timeout);
+		err = timeout;
+		goto out_rq;
+	}
+	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
+
+	if (i915_reset_count(&i915->gpu_error) == reset_count) {
+		pr_err("No GPU reset recorded!\n");
+		err = -EINVAL;
+		goto out_rq;
+	}
+
+out_rq:
+	i915_gem_request_put(rq);
+fini:
+	hang_fini(&h);
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	if (i915_terminally_wedged(&i915->gpu_error))
+		return -EIO;
+
+	return err;
+}
+
+static int igt_reset_queue(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	struct hang h;
+	int err;
+
+	/* Check that we replay pending requests following a hang */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	err = hang_init(&h, i915);
+	if (err)
+		goto unlock;
+
+	for_each_engine(engine, i915, id) {
+		struct drm_i915_gem_request *prev;
+		IGT_TIMEOUT(end_time);
+		unsigned int count;
+
+		prev = hang_create_request(&h, engine, i915->kernel_context);
+		if (IS_ERR(prev)) {
+			err = PTR_ERR(prev);
+			goto fini;
+		}
+
+		i915_gem_request_get(prev);
+		__i915_add_request(prev, true);
+
+		count = 0;
+		do {
+			struct drm_i915_gem_request *rq;
+			unsigned int reset_count;
+
+			rq = hang_create_request(&h,
+						 engine,
+						 i915->kernel_context);
+			if (IS_ERR(rq)) {
+				err = PTR_ERR(rq);
+				goto fini;
+			}
+
+			i915_gem_request_get(rq);
+			__i915_add_request(rq, true);
+
+			if (!wait_for_hang(&h, prev)) {
+				pr_err("Failed to start request %x\n",
+				       prev->fence.seqno);
+				i915_gem_request_put(rq);
+				i915_gem_request_put(prev);
+				err = -EIO;
+				goto fini;
+			}
+
+			reset_count = fake_hangcheck(prev);
+
+			i915_reset(i915);
+
+			GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS,
+					    &i915->gpu_error.flags));
+			if (prev->fence.error != -EIO) {
+				pr_err("GPU reset not recorded on hanging request [fence.error=%d]!\n",
+				       prev->fence.error);
+				i915_gem_request_put(rq);
+				i915_gem_request_put(prev);
+				err = -EINVAL;
+				goto fini;
+			}
+
+			if (rq->fence.error) {
+				pr_err("Fence error status not zero [%d] after unrelated reset\n",
+				       rq->fence.error);
+				i915_gem_request_put(rq);
+				i915_gem_request_put(prev);
+				err = -EINVAL;
+				goto fini;
+			}
+
+			if (i915_reset_count(&i915->gpu_error) == reset_count) {
+				pr_err("No GPU reset recorded!\n");
+				i915_gem_request_put(rq);
+				i915_gem_request_put(prev);
+				err = -EINVAL;
+				goto fini;
+			}
+
+			i915_gem_request_put(prev);
+			prev = rq;
+			count++;
+		} while (time_before(jiffies, end_time));
+		pr_info("%s: Completed %d resets\n", engine->name, count);
+
+		*h.batch = MI_BATCH_BUFFER_END;
+		wmb();
+
+		i915_gem_request_put(prev);
+	}
+
+fini:
+	hang_fini(&h);
+unlock:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	if (i915_terminally_wedged(&i915->gpu_error))
+		return -EIO;
+
+	return err;
+}
+
+int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_hang_sanitycheck),
+		SUBTEST(igt_global_reset),
+		SUBTEST(igt_wait_reset),
+		SUBTEST(igt_reset_queue),
+	};
+
+	if (!intel_has_gpu_reset(i915))
+		return 0;
+
+	return i915_subtests(tests, i915);
+}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* Re: [PATCH 05/46] drm/i915: Provide a hook for selftests
  2017-02-02  9:08 ` [PATCH 05/46] drm/i915: Provide a hook for selftests Chris Wilson
@ 2017-02-02  9:11   ` Chris Wilson
  2017-02-10 10:19   ` Tvrtko Ursulin
  1 sibling, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:11 UTC (permalink / raw)
  To: intel-gfx

On Thu, Feb 02, 2017 at 09:08:24AM +0000, Chris Wilson wrote:
> Some pieces of code are independent of hardware but are very tricky to
> exercise through the normal userspace ABI or via debugfs hooks. Being
> able to create mock unit tests and execute them through CI is vital.
> Start by adding a central point where we can execute unit tests and
> a parameter to enable them. This is disabled by default as the
> expectation is that these tests will occasionally explode.
> 
> To facilitate integration with igt, any parameter beginning with
> i915.igt__ is interpreted as a subtest executable independently via
> igt/drv_selftest.
> 
> Two classes of selftests are recognised: mock unit tests and integration
> tests. Mock unit tests are run as soon as the module is loaded, before
> the device is probed. At that point there is no driver instantiated and
> all hw interactions must be "mocked". This is very useful for writing
> universal tests to exercise code not typically run on a broad range of
> architectures. Alternatively, you can hook into the live selftests and
> run when the device has been instantiated - hw interactions are real.
> 
> v2: Add a macro for compiling conditional code for mock objects inside
> real objects.
> v3: Differentiate between mock unit tests and late integration test.
> v4: List the tests in natural order, use igt to sort after modparam.
> v5: s/late/live/
> v6: s/unsigned long/unsigned int/
> v7: Use igt_ prefixes for long helpers.
v8: Deobfuscate macros overriding functions, stop using -I$(src)
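
For reference, a rough sketch of how the pieces are intended to fit together
(illustrative only; the plumbing below is paraphrased, not lifted from the
patch): each subtest is declared exactly once in a header, and that single
list is expanded both into an i915.igt__<name> module parameter and into the
table of subtests handed to i915_subtests():

	/* selftests/i915_mock_selftests.h (entries as used later in the series) */
	selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
	selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
	selftest(requests, i915_gem_request_mock_selftests)

	/* illustrative expansion only -- the exact macros are an assumption */
	#define selftest(name, func) int igt__##name;
	#include "selftests/i915_mock_selftests.h"
	#undef selftest

	#define selftest(name, func) \
		module_param_named(igt__##name, igt__##name, int, 0400);
	#include "selftests/i915_mock_selftests.h"
	#undef selftest

igt/drv_selftest then only has to enumerate the i915.igt__* parameters to
build its list of subtests.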

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (45 preceding siblings ...)
  2017-02-02  9:09 ` [PATCH 46/46] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
@ 2017-02-02  9:18 ` Chris Wilson
  2017-02-02 13:30   ` Maarten Lankhorst
  2017-02-17 11:50   ` Petri Latvala
  2017-02-02 11:32 ` ✗ Fi.CI.BAT: failure for series starting with [v6] drm: Provide a driver hook for drm_dev_release() (rev2) Patchwork
  47 siblings, 2 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:18 UTC (permalink / raw)
  To: intel-gfx

These are meant to be fast and sensitive to new (and old) bugs...

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
---
 tests/intel-ci/fast-feedback.testlist | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
index 828bd3ff..a0c3f848 100644
--- a/tests/intel-ci/fast-feedback.testlist
+++ b/tests/intel-ci/fast-feedback.testlist
@@ -249,4 +249,23 @@ igt@drv_module_reload@basic-reload
 igt@drv_module_reload@basic-no-display
 igt@drv_module_reload@basic-reload-inject
 igt@drv_module_reload@basic-reload-final
+igt@drv_selftest@mock_sanitycheck
+igt@drv_selftest@mock_scatterlist
+igt@drv_selftest@mock_uncore
+igt@drv_selftest@mock_breadcrumbs
+igt@drv_selftest@mock_requests
+igt@drv_selftest@mock_objects
+igt@drv_selftest@mock_dmabuf
+igt@drv_selftest@mock_vma
+igt@drv_selftest@mock_evict
+igt@drv_selftest@mock_gtt
+igt@drv_selftest@live_sanitycheck
+igt@drv_selftest@live_uncore
+igt@drv_selftest@live_requests
+igt@drv_selftest@live_object
+igt@drv_selftest@live_dmabuf
+igt@drv_selftest@live_coherency
+igt@drv_selftest@live_gtt
+igt@drv_selftest@live_context
+igt@drv_selftest@live_hangcheck
 igt@gvt_basic@invalid-placeholder-test
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* Re: [PATCH 01/46] drm: Provide a driver hook for drm_dev_release()
  2017-02-02  9:08 ` [PATCH 01/46] drm: Provide a driver hook for drm_dev_release() Chris Wilson
@ 2017-02-02  9:24   ` Laurent Pinchart
  2017-02-02  9:36   ` [PATCH v6] " Chris Wilson
  1 sibling, 0 replies; 81+ messages in thread
From: Laurent Pinchart @ 2017-02-02  9:24 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Daniel Vetter, intel-gfx

Hi Chris,

Thank you for the patch.

On Thursday 02 Feb 2017 09:08:20 Chris Wilson wrote:
> Some state is coupled into the device lifetime outside of the
> load/unload timeframe and requires teardown during final unreference
> from drm_dev_release(). For example, dmabufs hold both a device and
> module reference and may live longer than expected (i.e. the current
> pattern of the driver tearing down its state and then releasing a
> reference to the drm device) and yet touch driver private state when
> destroyed.
> 
> v2: Export drm_dev_fini() and move the responsibility for finalizing the
> drm_device and freeing it to the release callback. (If no callback is
> provided, the core will call drm_dev_fini() and kfree(dev) as before.)
> v3: Remember to add drm_dev_fini() to drm_drv.h
> v4: Tidy language for kerneldoc
> v5: Cross reference from drm_dev_init() to note that driver->release()
> allows for arbitrary embedding.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  drivers/gpu/drm/drm_drv.c | 65 ++++++++++++++++++++++++++++++--------------
>  include/drm/drm_drv.h     | 13 ++++++++++
>  2 files changed, 58 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index a8ce3179c07c..fe611d601916 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -465,7 +465,10 @@ static void drm_fs_inode_free(struct inode *inode)
>   * that do embed &struct drm_device it must be placed first in the overall
>   * structure, and the overall structure must be allocated using kmalloc(): The
>   * drm core's release function unconditionally calls kfree() on the @dev pointer
> - * when the final reference is released.
> + * when the final reference is released. To override this behaviour, and so
> + * allow embedding of the drm_device inside the driver's device struct at an
> + * arbitrary offset, you must supply a driver->release() callback and control
> + * the finalization explicitly.
>   *
>   * RETURNS:
>   * 0 on success, or error code on failure.
> @@ -553,6 +556,41 @@ int drm_dev_init(struct drm_device *dev,
>  EXPORT_SYMBOL(drm_dev_init);
> 
>  /**
> + * drm_dev_fini - Finalize a dead DRM device
> + * @dev: DRM device
> + *
> + * Finalize a dead DRM device. This is the converse to drm_dev_init() and
> + * frees up all state allocated by it. All driver state should be finalized
> + * first. Note that this function does not free the @dev, that is left to the
> + * caller.
> + *
> + * The ref-count of @dev must be zero, and drm_dev_fini() should only be called
> + * from a drm_driver->release() callback.
> + */
> +void drm_dev_fini(struct drm_device *dev)
> +{
> +	drm_vblank_cleanup(dev);
> +
> +	if (drm_core_check_feature(dev, DRIVER_GEM))
> +		drm_gem_destroy(dev);
> +
> +	drm_legacy_ctxbitmap_cleanup(dev);
> +	drm_ht_remove(&dev->map_hash);
> +	drm_fs_inode_free(dev->anon_inode);
> +
> +	drm_minor_free(dev, DRM_MINOR_PRIMARY);
> +	drm_minor_free(dev, DRM_MINOR_RENDER);
> +	drm_minor_free(dev, DRM_MINOR_CONTROL);
> +
> +	mutex_destroy(&dev->master_mutex);
> +	mutex_destroy(&dev->ctxlist_mutex);
> +	mutex_destroy(&dev->filelist_mutex);
> +	mutex_destroy(&dev->struct_mutex);
> +	kfree(dev->unique);
> +}
> +EXPORT_SYMBOL(drm_dev_fini);
> +
> +/**
>   * drm_dev_alloc - Allocate new DRM device
>   * @driver: DRM driver to allocate device for
>   * @parent: Parent device object
> @@ -598,25 +636,12 @@ static void drm_dev_release(struct kref *ref)
>  {
>  	struct drm_device *dev = container_of(ref, struct drm_device, ref);
> 
> -	drm_vblank_cleanup(dev);
> -
> -	if (drm_core_check_feature(dev, DRIVER_GEM))
> -		drm_gem_destroy(dev);
> -
> -	drm_legacy_ctxbitmap_cleanup(dev);
> -	drm_ht_remove(&dev->map_hash);
> -	drm_fs_inode_free(dev->anon_inode);
> -
> -	drm_minor_free(dev, DRM_MINOR_PRIMARY);
> -	drm_minor_free(dev, DRM_MINOR_RENDER);
> -	drm_minor_free(dev, DRM_MINOR_CONTROL);
> -
> -	mutex_destroy(&dev->master_mutex);
> -	mutex_destroy(&dev->ctxlist_mutex);
> -	mutex_destroy(&dev->filelist_mutex);
> -	mutex_destroy(&dev->struct_mutex);
> -	kfree(dev->unique);
> -	kfree(dev);
> +	if (dev->driver->release) {
> +		dev->driver->release(dev);
> +	} else {
> +		drm_dev_fini(dev);
> +		kfree(dev);
> +	}
>  }
> 
>  /**
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 732e85652d1e..d0d2fa83d06c 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -102,6 +102,17 @@ struct drm_driver {
>  	 *
>  	 */
>  	void (*unload) (struct drm_device *);
> +
> +	/**
> +	 * @release:
> +	 *
> +	 * Optional callback for destroying device state after the final

Nitpicking, I'd talk about "device data" or "device memory" instead of "device 
state", as the latter could be confused with the device atomic state. Apart 
from that,

Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> +	 * reference is released, i.e. the device is being destroyed. Drivers
> +	 * using this callback are responsible for calling drm_dev_fini()
> +	 * to finalize the device and then freeing the struct themselves.
> +	 */
> +	void (*release) (struct drm_device *);
> +
>  	int (*set_busid)(struct drm_device *dev, struct drm_master *master);
> 
>  	/**
> @@ -437,6 +448,8 @@ extern unsigned int drm_debug;
>  int drm_dev_init(struct drm_device *dev,
>  		 struct drm_driver *driver,
>  		 struct device *parent);
> +void drm_dev_fini(struct drm_device *dev);
> +
>  struct drm_device *drm_dev_alloc(struct drm_driver *driver,
>  				 struct device *parent);
>  int drm_dev_register(struct drm_device *dev, unsigned long flags);

-- 
Regards,

Laurent Pinchart

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* [PATCH v6] drm: Provide a driver hook for drm_dev_release()
  2017-02-02  9:08 ` [PATCH 01/46] drm: Provide a driver hook for drm_dev_release() Chris Wilson
  2017-02-02  9:24   ` Laurent Pinchart
@ 2017-02-02  9:36   ` Chris Wilson
  2017-02-02  9:44     ` Daniel Vetter
  1 sibling, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-02  9:36 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Laurent Pinchart, dri-devel

Some state is coupled into the device lifetime outside of the
load/unload timeframe and requires teardown during final unreference
from drm_dev_release(). For example, dmabufs hold both a device and
module reference and may live longer than expected (i.e. the current
pattern of the driver tearing down its state and then releasing a
reference to the drm device) and yet touch driver private state when
destroyed.
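
As a purely illustrative sketch (a hypothetical driver, not part of this
patch), the intended usage is along these lines:

	struct my_device {
		struct my_hw hw;
		struct drm_device drm;	/* no longer has to come first */
	};

	static void my_release(struct drm_device *drm)
	{
		struct my_device *my = container_of(drm, struct my_device, drm);

		/* tear down driver-private data that dmabufs may still use */
		my_cleanup(my);

		drm_dev_fini(drm);
		kfree(my);
	}

with my_device, my_hw and my_cleanup() being placeholder names.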

v2: Export drm_dev_fini() and move the responsibility for finalizing the
drm_device and freeing it to the release callback. (If no callback is
provided, the core will call drm_dev_fini() and kfree(dev) as before.)
v3: Remember to add drm_dev_fini() to drm_drv.h
v4: Tidy language for kerneldoc
v5: Cross reference from drm_dev_init() to note that driver->release()
allows for arbitrary embedding.
v6: Refer to driver data rather than driver state, as state is now
becoming associated with the struct drm_atomic_state and friends.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
 drivers/gpu/drm/drm_drv.c | 65 ++++++++++++++++++++++++++++++++---------------
 include/drm/drm_drv.h     | 13 ++++++++++
 2 files changed, 58 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index a8ce3179c07c..e122e4d022f8 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -465,7 +465,10 @@ static void drm_fs_inode_free(struct inode *inode)
  * that do embed &struct drm_device it must be placed first in the overall
  * structure, and the overall structure must be allocated using kmalloc(): The
  * drm core's release function unconditionally calls kfree() on the @dev pointer
- * when the final reference is released.
+ * when the final reference is released. To override this behaviour, and so
+ * allow embedding of the drm_device inside the driver's device struct at an
+ * arbitrary offset, you must supply a driver->release() callback and control
+ * the finalization explicitly.
  *
  * RETURNS:
  * 0 on success, or error code on failure.
@@ -553,6 +556,41 @@ int drm_dev_init(struct drm_device *dev,
 EXPORT_SYMBOL(drm_dev_init);
 
 /**
+ * drm_dev_fini - Finalize a dead DRM device
+ * @dev: DRM device
+ *
+ * Finalize a dead DRM device. This is the converse to drm_dev_init() and
+ * frees up all data allocated by it. All driver private data should be
+ * finalized first. Note that this function does not free the @dev, that is
+ * left to the caller.
+ *
+ * The ref-count of @dev must be zero, and drm_dev_fini() should only be called
+ * from a drm_driver->release() callback.
+ */
+void drm_dev_fini(struct drm_device *dev)
+{
+	drm_vblank_cleanup(dev);
+
+	if (drm_core_check_feature(dev, DRIVER_GEM))
+		drm_gem_destroy(dev);
+
+	drm_legacy_ctxbitmap_cleanup(dev);
+	drm_ht_remove(&dev->map_hash);
+	drm_fs_inode_free(dev->anon_inode);
+
+	drm_minor_free(dev, DRM_MINOR_PRIMARY);
+	drm_minor_free(dev, DRM_MINOR_RENDER);
+	drm_minor_free(dev, DRM_MINOR_CONTROL);
+
+	mutex_destroy(&dev->master_mutex);
+	mutex_destroy(&dev->ctxlist_mutex);
+	mutex_destroy(&dev->filelist_mutex);
+	mutex_destroy(&dev->struct_mutex);
+	kfree(dev->unique);
+}
+EXPORT_SYMBOL(drm_dev_fini);
+
+/**
  * drm_dev_alloc - Allocate new DRM device
  * @driver: DRM driver to allocate device for
  * @parent: Parent device object
@@ -598,25 +636,12 @@ static void drm_dev_release(struct kref *ref)
 {
 	struct drm_device *dev = container_of(ref, struct drm_device, ref);
 
-	drm_vblank_cleanup(dev);
-
-	if (drm_core_check_feature(dev, DRIVER_GEM))
-		drm_gem_destroy(dev);
-
-	drm_legacy_ctxbitmap_cleanup(dev);
-	drm_ht_remove(&dev->map_hash);
-	drm_fs_inode_free(dev->anon_inode);
-
-	drm_minor_free(dev, DRM_MINOR_PRIMARY);
-	drm_minor_free(dev, DRM_MINOR_RENDER);
-	drm_minor_free(dev, DRM_MINOR_CONTROL);
-
-	mutex_destroy(&dev->master_mutex);
-	mutex_destroy(&dev->ctxlist_mutex);
-	mutex_destroy(&dev->filelist_mutex);
-	mutex_destroy(&dev->struct_mutex);
-	kfree(dev->unique);
-	kfree(dev);
+	if (dev->driver->release) {
+		dev->driver->release(dev);
+	} else {
+		drm_dev_fini(dev);
+		kfree(dev);
+	}
 }
 
 /**
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 732e85652d1e..5699f42195fe 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -102,6 +102,17 @@ struct drm_driver {
 	 *
 	 */
 	void (*unload) (struct drm_device *);
+
+	/**
+	 * @release:
+	 *
+	 * Optional callback for destroying device data after the final
+	 * reference is released, i.e. the device is being destroyed. Drivers
+	 * using this callback are responsible for calling drm_dev_fini()
+	 * to finalize the device and then freeing the struct themselves.
+	 */
+	void (*release) (struct drm_device *);
+
 	int (*set_busid)(struct drm_device *dev, struct drm_master *master);
 
 	/**
@@ -437,6 +448,8 @@ extern unsigned int drm_debug;
 int drm_dev_init(struct drm_device *dev,
 		 struct drm_driver *driver,
 		 struct device *parent);
+void drm_dev_fini(struct drm_device *dev);
+
 struct drm_device *drm_dev_alloc(struct drm_driver *driver,
 				 struct device *parent);
 int drm_dev_register(struct drm_device *dev, unsigned long flags);
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 81+ messages in thread

* Re: [PATCH v6] drm: Provide a driver hook for drm_dev_release()
  2017-02-02  9:36   ` [PATCH v6] " Chris Wilson
@ 2017-02-02  9:44     ` Daniel Vetter
  0 siblings, 0 replies; 81+ messages in thread
From: Daniel Vetter @ 2017-02-02  9:44 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Daniel Vetter, intel-gfx, Laurent Pinchart, dri-devel

On Thu, Feb 02, 2017 at 09:36:32AM +0000, Chris Wilson wrote:
> Some state is coupled into the device lifetime outside of the
> load/unload timeframe and requires teardown during final unreference
> from drm_dev_release(). For example, dmabufs hold both a device and
> module reference and may live longer than expected (i.e. the current
> pattern of the driver tearing down its state and then releasing a
> reference to the drm device) and yet touch driver private state when
> destroyed.
> 
> v2: Export drm_dev_fini() and move the responsibility for finalizing the
> drm_device and freeing it to the release callback. (If no callback is
> provided, the core will call drm_dev_fini() and kfree(dev) as before.)
> v3: Remember to add drm_dev_fini() to drm_drv.h
> v4: Tidy language for kerneldoc
> v5: Cross reference from drm_dev_init() to note that driver->release()
> allows for arbitrary embedding.
> v6: Refer to driver data rather than driver state, as state is now
> becoming associated with the struct drm_atomic_state and friends.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> ---
>  drivers/gpu/drm/drm_drv.c | 65 ++++++++++++++++++++++++++++++++---------------
>  include/drm/drm_drv.h     | 13 ++++++++++
>  2 files changed, 58 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index a8ce3179c07c..e122e4d022f8 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -465,7 +465,10 @@ static void drm_fs_inode_free(struct inode *inode)
>   * that do embed &struct drm_device it must be placed first in the overall
>   * structure, and the overall structure must be allocated using kmalloc(): The
>   * drm core's release function unconditionally calls kfree() on the @dev pointer
> - * when the final reference is released.
> + * when the final reference is released. To override this behaviour, and so
> + * allow embedding of the drm_device inside the driver's device struct at an
> + * arbitrary offset, you must supply a driver->release() callback and control

The new official way to reference a struct member is &foo.bar. I've
applied this polish to both places and applied your patch, thanks a lot.
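
(Concretely that presumably reads as, e.g., &drm_driver.release in place of
driver->release() in those two comments, which kerneldoc then renders as a
cross-reference.)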
-Daniel

> + * the finalization explicitly.
>   *
>   * RETURNS:
>   * 0 on success, or error code on failure.
> @@ -553,6 +556,41 @@ int drm_dev_init(struct drm_device *dev,
>  EXPORT_SYMBOL(drm_dev_init);
>  
>  /**
> + * drm_dev_fini - Finalize a dead DRM device
> + * @dev: DRM device
> + *
> + * Finalize a dead DRM device. This is the converse to drm_dev_init() and
> + * frees up all data allocated by it. All driver private data should be
> + * finalized first. Note that this function does not free the @dev, that is
> + * left to the caller.
> + *
> + * The ref-count of @dev must be zero, and drm_dev_fini() should only be called
> + * from a drm_driver->release() callback.
> + */
> +void drm_dev_fini(struct drm_device *dev)
> +{
> +	drm_vblank_cleanup(dev);
> +
> +	if (drm_core_check_feature(dev, DRIVER_GEM))
> +		drm_gem_destroy(dev);
> +
> +	drm_legacy_ctxbitmap_cleanup(dev);
> +	drm_ht_remove(&dev->map_hash);
> +	drm_fs_inode_free(dev->anon_inode);
> +
> +	drm_minor_free(dev, DRM_MINOR_PRIMARY);
> +	drm_minor_free(dev, DRM_MINOR_RENDER);
> +	drm_minor_free(dev, DRM_MINOR_CONTROL);
> +
> +	mutex_destroy(&dev->master_mutex);
> +	mutex_destroy(&dev->ctxlist_mutex);
> +	mutex_destroy(&dev->filelist_mutex);
> +	mutex_destroy(&dev->struct_mutex);
> +	kfree(dev->unique);
> +}
> +EXPORT_SYMBOL(drm_dev_fini);
> +
> +/**
>   * drm_dev_alloc - Allocate new DRM device
>   * @driver: DRM driver to allocate device for
>   * @parent: Parent device object
> @@ -598,25 +636,12 @@ static void drm_dev_release(struct kref *ref)
>  {
>  	struct drm_device *dev = container_of(ref, struct drm_device, ref);
>  
> -	drm_vblank_cleanup(dev);
> -
> -	if (drm_core_check_feature(dev, DRIVER_GEM))
> -		drm_gem_destroy(dev);
> -
> -	drm_legacy_ctxbitmap_cleanup(dev);
> -	drm_ht_remove(&dev->map_hash);
> -	drm_fs_inode_free(dev->anon_inode);
> -
> -	drm_minor_free(dev, DRM_MINOR_PRIMARY);
> -	drm_minor_free(dev, DRM_MINOR_RENDER);
> -	drm_minor_free(dev, DRM_MINOR_CONTROL);
> -
> -	mutex_destroy(&dev->master_mutex);
> -	mutex_destroy(&dev->ctxlist_mutex);
> -	mutex_destroy(&dev->filelist_mutex);
> -	mutex_destroy(&dev->struct_mutex);
> -	kfree(dev->unique);
> -	kfree(dev);
> +	if (dev->driver->release) {
> +		dev->driver->release(dev);
> +	} else {
> +		drm_dev_fini(dev);
> +		kfree(dev);
> +	}
>  }
>  
>  /**
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 732e85652d1e..5699f42195fe 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -102,6 +102,17 @@ struct drm_driver {
>  	 *
>  	 */
>  	void (*unload) (struct drm_device *);
> +
> +	/**
> +	 * @release:
> +	 *
> +	 * Optional callback for destroying device data after the final
> +	 * reference is released, i.e. the device is being destroyed. Drivers
> +	 * using this callback are responsible for calling drm_dev_fini()
> +	 * to finalize the device and then freeing the struct themselves.
> +	 */
> +	void (*release) (struct drm_device *);
> +
>  	int (*set_busid)(struct drm_device *dev, struct drm_master *master);
>  
>  	/**
> @@ -437,6 +448,8 @@ extern unsigned int drm_debug;
>  int drm_dev_init(struct drm_device *dev,
>  		 struct drm_driver *driver,
>  		 struct device *parent);
> +void drm_dev_fini(struct drm_device *dev);
> +
>  struct drm_device *drm_dev_alloc(struct drm_driver *driver,
>  				 struct device *parent);
>  int drm_dev_register(struct drm_device *dev, unsigned long flags);
> -- 
> 2.11.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* ✗ Fi.CI.BAT: failure for series starting with [v6] drm: Provide a driver hook for drm_dev_release() (rev2)
  2017-02-02  9:08 Moah selftests Chris Wilson
                   ` (46 preceding siblings ...)
  2017-02-02  9:18 ` [PATCH igt] intel-ci: Add all driver selftests to BAT Chris Wilson
@ 2017-02-02 11:32 ` Patchwork
  47 siblings, 0 replies; 81+ messages in thread
From: Patchwork @ 2017-02-02 11:32 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v6] drm: Provide a driver hook for drm_dev_release() (rev2)
URL   : https://patchwork.freedesktop.org/series/18974/
State : failure

== Summary ==

  CC [M]  drivers/gpu/drm/i915/gvt/firmware.o
  CC [M]  drivers/gpu/drm/i915/gvt/trace_points.o
  CC [M]  drivers/gpu/drm/i915/gvt/vgpu.o
  CC [M]  drivers/gpu/drm/i915/gvt/interrupt.o
  CC [M]  drivers/gpu/drm/i915/gvt/display.o
  CC [M]  drivers/gpu/drm/i915/gvt/mmio.o
  CC [M]  drivers/gpu/drm/i915/gvt/cfg_space.o
  CC [M]  drivers/gpu/drm/i915/gvt/edid.o
  CC [M]  drivers/gpu/drm/i915/gvt/opregion.o
  CC [M]  drivers/gpu/drm/i915/gvt/scheduler.o
  CC [M]  drivers/gpu/drm/i915/gvt/execlist.o
  CC [M]  drivers/gpu/drm/i915/gvt/gtt.o
  CC [M]  drivers/gpu/drm/i915/gvt/sched_policy.o
  CC [M]  drivers/gpu/drm/i915/gvt/render.o
  CC [M]  drivers/gpu/drm/i915/intel_lpe_audio.o
  CC [M]  drivers/gpu/drm/i915/gvt/cmd_parser.o
  LD      drivers/tty/serial/8250/8250.o
  LD [M]  drivers/mmc/core/mmc_block.o
  LD      drivers/usb/storage/usb-storage.o
  LD      drivers/mmc/built-in.o
  LD      drivers/usb/storage/built-in.o
  LD      lib/raid6/raid6_pq.o
  LD [M]  drivers/gpu/drm/vgem/vgem.o
  LD      lib/raid6/built-in.o
  LD      drivers/scsi/scsi_mod.o
  LD      drivers/acpi/acpica/acpi.o
  LD      drivers/video/console/built-in.o
  LD      drivers/video/built-in.o
  LD      drivers/spi/built-in.o
  LD      drivers/acpi/acpica/built-in.o
  LD [M]  drivers/net/ethernet/intel/e1000/e1000.o
  LD      drivers/usb/gadget/libcomposite.o
  LD      net/ipv6/ipv6.o
  LD      drivers/acpi/built-in.o
  LD      net/ipv6/built-in.o
  LD      drivers/usb/gadget/udc/udc-core.o
  LD      drivers/usb/gadget/udc/built-in.o
  LD      drivers/usb/gadget/built-in.o
In file included from drivers/gpu/drm/i915/i915_gem_gtt.c:3764:0:
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c: In function ‘igt_ggtt_page’:
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c:715:8: error: ‘DRM_MM_INSERT_LOW’ undeclared (first use in this function)
        DRM_MM_INSERT_LOW);
        ^
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c:715:8: note: each undeclared identifier is reported only once for each function it appears in
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c:711:8: error: too many arguments to function ‘drm_mm_insert_node_in_range’
  err = drm_mm_insert_node_in_range(&ggtt->base.mm, &tmp,
        ^
In file included from ./include/drm/drmP.h:75:0,
                 from drivers/gpu/drm/i915/i915_gem_gtt.c:31:
./include/drm/drm_mm.h:356:19: note: declared here
 static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
                   ^
  LD      drivers/gpu/drm/drm.o
scripts/Makefile.build:293: recipe for target 'drivers/gpu/drm/i915/i915_gem_gtt.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_gem_gtt.o] Error 1
make[4]: *** Waiting for unfinished jobs....
  LD      drivers/scsi/sd_mod.o
  LD [M]  sound/pci/hda/snd-hda-codec-generic.o
  LD      drivers/scsi/built-in.o
  LD      sound/pci/built-in.o
  LD      sound/built-in.o
  LD      drivers/tty/serial/8250/8250_base.o
  LD      drivers/tty/serial/8250/built-in.o
  LD      drivers/tty/serial/built-in.o
  LD      net/ipv4/built-in.o
  LD [M]  drivers/net/ethernet/intel/igb/igb.o
  LD      drivers/md/md-mod.o
  LD      drivers/md/built-in.o
  AR      lib/lib.a
  LD      drivers/usb/core/usbcore.o
  EXPORTS lib/lib-ksyms.o
  LD      drivers/usb/core/built-in.o
  CC      arch/x86/kernel/cpu/capflags.o
  LD      arch/x86/kernel/cpu/built-in.o
  LD      arch/x86/kernel/built-in.o
  LD      drivers/tty/vt/built-in.o
  LD      drivers/tty/built-in.o
  LD      lib/built-in.o
  LD      fs/btrfs/btrfs.o
  LD      arch/x86/built-in.o
  LD      fs/btrfs/built-in.o
  LD      drivers/usb/host/xhci-hcd.o
  LD      fs/ext4/ext4.o
  LD      fs/ext4/built-in.o
  LD      fs/built-in.o
  LD      drivers/usb/host/built-in.o
  LD      drivers/usb/built-in.o
  LD [M]  drivers/net/ethernet/intel/e1000e/e1000e.o
  LD      net/core/built-in.o
  LD      net/built-in.o
  LD      drivers/net/ethernet/built-in.o
  LD      drivers/net/built-in.o
scripts/Makefile.build:551: recipe for target 'drivers/gpu/drm/i915' failed
make[3]: *** [drivers/gpu/drm/i915] Error 2
scripts/Makefile.build:551: recipe for target 'drivers/gpu/drm' failed
make[2]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:551: recipe for target 'drivers/gpu' failed
make[1]: *** [drivers/gpu] Error 2
Makefile:988: recipe for target 'drivers' failed
make: *** [drivers] Error 2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-02  9:08 ` [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
@ 2017-02-02 12:49   ` Tvrtko Ursulin
  2017-02-02 13:02     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Tvrtko Ursulin @ 2017-02-02 12:49 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 02/02/2017 09:08, Chris Wilson wrote:
> Third retroactive test, make sure that the seqno waiters are woken.
>
> v2: Smattering of comments, rearrange code

v3: Fix assert.

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c | 201 +++++++++++++++++++++
>  1 file changed, 201 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> index 32a27e56c353..fb368eb37660 100644
> --- a/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> +++ b/drivers/gpu/drm/i915/selftests/intel_breadcrumbs.c
> @@ -259,11 +259,212 @@ static int igt_insert_complete(void *arg)
>  	return err;
>  }
>
> +struct igt_wakeup {
> +	struct task_struct *tsk;
> +	atomic_t *ready, *set, *done;
> +	struct intel_engine_cs *engine;
> +	unsigned long flags;
> +#define STOP 0
> +#define IDLE 1
> +	wait_queue_head_t *wq;
> +	u32 seqno;
> +};
> +
> +static int wait_atomic(atomic_t *p)
> +{
> +	schedule();
> +	return 0;
> +}
> +
> +static int wait_atomic_timeout(atomic_t *p)
> +{
> +	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
> +}
> +
> +static bool wait_for_ready(struct igt_wakeup *w)
> +{
> +	DEFINE_WAIT(ready);
> +
> +	if (atomic_dec_and_test(w->done))
> +		wake_up_atomic_t(w->done);
> +
> +	if (test_bit(STOP, &w->flags))
> +		goto out;
> +
> +	set_bit(IDLE, &w->flags);

I think this needs to be before atomic_dec_and_test(w->done), to avoid 
that same assert racing with the threads. Because immediately after the 
wake_up_atomic above the main loop starts asserting the IDLE bit which 
is not guaranteed to be set yet.
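
I.e. something along these lines (sketch only):

	set_bit(IDLE, &w->flags);

	if (atomic_dec_and_test(w->done))
		wake_up_atomic_t(w->done);

	if (test_bit(STOP, &w->flags))
		goto out;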

Regards,

Tvrtko

> +	for (;;) {
> +		prepare_to_wait(w->wq, &ready, TASK_INTERRUPTIBLE);
> +		if (atomic_read(w->ready) == 0)
> +			break;
> +
> +		schedule();
> +	}
> +	finish_wait(w->wq, &ready);
> +	clear_bit(IDLE, &w->flags);
> +
> +out:
> +	if (atomic_dec_and_test(w->set))
> +		wake_up_atomic_t(w->set);
> +
> +	return !test_bit(STOP, &w->flags);
> +}
> +
> +static int igt_wakeup_thread(void *arg)
> +{
> +	struct igt_wakeup *w = arg;
> +	struct intel_wait wait;
> +
> +	while (wait_for_ready(w)) {
> +		GEM_BUG_ON(kthread_should_stop());
> +
> +		intel_wait_init(&wait, w->seqno);
> +		intel_engine_add_wait(w->engine, &wait);
> +		for (;;) {
> +			set_current_state(TASK_UNINTERRUPTIBLE);
> +			if (i915_seqno_passed(intel_engine_get_seqno(w->engine),
> +					      w->seqno))
> +				break;
> +
> +			if (test_bit(STOP, &w->flags)) /* emergency escape */
> +				break;
> +
> +			schedule();
> +		}
> +		intel_engine_remove_wait(w->engine, &wait);
> +		__set_current_state(TASK_RUNNING);
> +	}
> +
> +	return 0;
> +}
> +
> +static void igt_wake_all_sync(atomic_t *ready,
> +			      atomic_t *set,
> +			      atomic_t *done,
> +			      wait_queue_head_t *wq,
> +			      int count)
> +{
> +	atomic_set(set, count);
> +	atomic_set(ready, 0);
> +	wake_up_all(wq);
> +
> +	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
> +	atomic_set(ready, count);
> +	atomic_set(done, count);
> +}
> +
> +static int igt_wakeup(void *arg)
> +{
> +	I915_RND_STATE(prng);
> +	const int state = TASK_UNINTERRUPTIBLE;
> +	struct intel_engine_cs *engine = arg;
> +	struct igt_wakeup *waiters;
> +	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
> +	const int count = 4096;
> +	const u32 max_seqno = count / 4;
> +	atomic_t ready, set, done;
> +	int err = -ENOMEM;
> +	int n, step;
> +
> +	mock_engine_reset(engine);
> +
> +	waiters = drm_malloc_gfp(count, sizeof(*waiters), GFP_TEMPORARY);
> +	if (!waiters)
> +		goto out_engines;
> +
> +	/* Create a large number of threads, each waiting on a random seqno.
> +	 * Multiple waiters will be waiting for the same seqno.
> +	 */
> +	atomic_set(&ready, count);
> +	for (n = 0; n < count; n++) {
> +		waiters[n].wq = &wq;
> +		waiters[n].ready = &ready;
> +		waiters[n].set = &set;
> +		waiters[n].done = &done;
> +		waiters[n].engine = engine;
> +		waiters[n].flags = BIT(IDLE);
> +
> +		waiters[n].tsk = kthread_run(igt_wakeup_thread, &waiters[n],
> +					     "i915/igt:%d", n);
> +		if (IS_ERR(waiters[n].tsk))
> +			goto out_waiters;
> +
> +		get_task_struct(waiters[n].tsk);
> +	}
> +
> +	for (step = 1; step <= max_seqno; step <<= 1) {
> +		u32 seqno;
> +
> +		/* The waiter threads start paused as we assign them a random
> +		 * seqno and reset the engine. Once the engine is reset,
> +		 * we signal that the threads may begin their wait upon their
> +		 * seqno.
> +		 */
> +		for (n = 0; n < count; n++) {
> +			GEM_BUG_ON(!test_bit(IDLE, &waiters[n].flags));
> +			waiters[n].seqno =
> +				1 + prandom_u32_state(&prng) % max_seqno;
> +		}
> +		mock_seqno_advance(engine, 0);
> +		igt_wake_all_sync(&ready, &set, &done, &wq, count);
> +
> +		/* Simulate the GPU doing chunks of work, with one or more
> +		 * seqno appearing to finish at the same time. A random number
> +		 * of threads will be waiting upon the update and hopefully be
> +		 * woken.
> +		 */
> +		for (seqno = 1; seqno <= max_seqno + step; seqno += step) {
> +			usleep_range(50, 500);
> +			mock_seqno_advance(engine, seqno);
> +		}
> +		GEM_BUG_ON(intel_engine_get_seqno(engine) < 1 + max_seqno);
> +
> +		/* With the seqno now beyond any of the waiting threads, they
> +		 * should all be woken, see that they are complete and signal
> +		 * that they are ready for the next test. We wait until all
> +		 * threads are complete and waiting for us (i.e. not a seqno).
> +		 */
> +		err = wait_on_atomic_t(&done, wait_atomic_timeout, state);
> +		if (err) {
> +			pr_err("Timed out waiting for %d remaining waiters\n",
> +			       atomic_read(&done));
> +			break;
> +		}
> +
> +		err = check_rbtree_empty(engine);
> +		if (err)
> +			break;
> +	}
> +
> +out_waiters:
> +	for (n = 0; n < count; n++) {
> +		if (IS_ERR(waiters[n].tsk))
> +			break;
> +
> +		set_bit(STOP, &waiters[n].flags);
> +	}
> +	mock_seqno_advance(engine, INT_MAX); /* wakeup any broken waiters */
> +	igt_wake_all_sync(&ready, &set, &done, &wq, n);
> +
> +	for (n = 0; n < count; n++) {
> +		if (IS_ERR(waiters[n].tsk))
> +			break;
> +
> +		kthread_stop(waiters[n].tsk);
> +		put_task_struct(waiters[n].tsk);
> +	}
> +
> +	drm_free_large(waiters);
> +out_engines:
> +	mock_engine_flush(engine);
> +	return err;
> +}
> +
>  int intel_breadcrumbs_mock_selftests(void)
>  {
>  	static const struct i915_subtest tests[] = {
>  		SUBTEST(igt_random_insert_remove),
>  		SUBTEST(igt_insert_complete),
> +		SUBTEST(igt_wakeup),
>  	};
>  	struct intel_engine_cs *engine;
>  	int err;
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 38/46] drm/i915: Verify page layout for rotated VMA
  2017-02-02  9:08 ` [PATCH 38/46] drm/i915: Verify page layout for rotated VMA Chris Wilson
@ 2017-02-02 13:01   ` Tvrtko Ursulin
  0 siblings, 0 replies; 81+ messages in thread
From: Tvrtko Ursulin @ 2017-02-02 13:01 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 02/02/2017 09:08, Chris Wilson wrote:
> Exercise creating rotated VMA and checking the page order within.
>
> v2: Be more creative in rotated params

v3: ...

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/i915_vma.c | 179 ++++++++++++++++++++++++++++++
>  1 file changed, 179 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
> index 095d8348f5f0..4a737a670199 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_vma.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
> @@ -352,11 +352,190 @@ static int igt_vma_pin1(void *arg)
>  	return err;
>  }
>
> +static unsigned long rotated_index(const struct intel_rotation_info *r,
> +				   unsigned int n,
> +				   unsigned int x,
> +				   unsigned int y)
> +{
> +	return (r->plane[n].stride * (r->plane[n].height - y - 1) +
> +		r->plane[n].offset + x);
> +}
> +
> +static struct scatterlist *
> +assert_rotated(struct drm_i915_gem_object *obj,
> +	       const struct intel_rotation_info *r, unsigned int n,
> +	       struct scatterlist *sg)
> +{
> +	unsigned int x, y;
> +
> +	for (x = 0; x < r->plane[n].width; x++) {
> +		for (y = 0; y < r->plane[n].height; y++) {
> +			unsigned long src_idx;
> +			dma_addr_t src;
> +
> +			if (!sg) {
> +				pr_err("Invalid sg table: too short at plane %d, (%d, %d)!\n",
> +				       n, x, y);
> +				return ERR_PTR(-EINVAL);
> +			}
> +
> +			src_idx = rotated_index(r, n, x, y);
> +			src = i915_gem_object_get_dma_address(obj, src_idx);
> +
> +			if (sg_dma_len(sg) != PAGE_SIZE) {
> +				pr_err("Invalid sg.length, found %d, expected %lu for rotated page (%d, %d) [src index %lu]\n",
> +				       sg_dma_len(sg), PAGE_SIZE,
> +				       x, y, src_idx);
> +				return ERR_PTR(-EINVAL);
> +			}
> +
> +			if (sg_dma_address(sg) != src) {
> +				pr_err("Invalid address for rotated page (%d, %d) [src index %lu]\n",
> +				       x, y, src_idx);
> +				return ERR_PTR(-EINVAL);
> +			}
> +
> +			sg = sg_next(sg);
> +		}
> +	}
> +
> +	return sg;
> +}
> +
> +static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
> +				 const struct intel_rotation_plane_info *b)
> +{
> +	return a->width * a->height + b->width * b->height;
> +}
> +
> +static int igt_vma_rotate(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct i915_address_space *vm = &i915->ggtt.base;
> +	struct drm_i915_gem_object *obj;
> +	const struct intel_rotation_plane_info planes[] = {
> +		{ .width = 1, .height = 1, .stride = 1 },
> +		{ .width = 2, .height = 2, .stride = 2 },
> +		{ .width = 4, .height = 4, .stride = 4 },
> +		{ .width = 8, .height = 8, .stride = 8 },
> +
> +		{ .width = 3, .height = 5, .stride = 3 },
> +		{ .width = 3, .height = 5, .stride = 4 },
> +		{ .width = 3, .height = 5, .stride = 5 },
> +
> +		{ .width = 5, .height = 3, .stride = 5 },
> +		{ .width = 5, .height = 3, .stride = 7 },
> +		{ .width = 5, .height = 3, .stride = 9 },
> +
> +		{ .width = 4, .height = 6, .stride = 6 },
> +		{ .width = 6, .height = 4, .stride = 6 },
> +		{ }
> +	}, *a, *b;
> +	const unsigned int max_pages = 64;
> +	int err = -ENOMEM;
> +
> +	/* Create VMA for many different combinations of planes and check
> +	 * that the page layout within the rotated VMA match our expectations.
> +	 */
> +
> +	obj = i915_gem_object_create_internal(i915, max_pages * PAGE_SIZE);
> +	if (IS_ERR(obj))
> +		goto out;
> +
> +	for (a = planes; a->width; a++) {
> +		for (b = planes + ARRAY_SIZE(planes); b-- != planes; ) {
> +			struct i915_ggtt_view view;
> +			unsigned int n, max_offset;
> +
> +			max_offset = max(a->stride * a->height,
> +					 b->stride * b->height);
> +			GEM_BUG_ON(max_offset > max_pages);
> +			max_offset = max_pages - max_offset;
> +
> +			view.type = I915_GGTT_VIEW_ROTATED;
> +			view.rotated.plane[0] = *a;
> +			view.rotated.plane[1] = *b;
> +
> +			for_each_prime_number_from(view.rotated.plane[0].offset, 0, max_offset) {
> +				for_each_prime_number_from(view.rotated.plane[1].offset, 0, max_offset) {
> +					struct scatterlist *sg;
> +					struct i915_vma *vma;
> +
> +					vma = checked_vma_instance(obj, vm, &view);
> +					if (IS_ERR(vma)) {
> +						err = PTR_ERR(vma);
> +						goto out_object;
> +					}
> +
> +					err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
> +					if (err) {
> +						pr_err("Failed to pin VMA, err=%d\n", err);
> +						goto out_object;
> +					}
> +
> +					if (vma->size != rotated_size(a, b) * PAGE_SIZE) {
> +						pr_err("VMA is wrong size, expected %lu, found %llu\n",
> +						       PAGE_SIZE * rotated_size(a, b), vma->size);
> +						err = -EINVAL;
> +						goto out_object;
> +					}
> +
> +					if (vma->pages->nents != rotated_size(a, b)) {
> +						pr_err("sg table is wrong sizeo, expected %u, found %u nents\n",

typo in size.

> +						       rotated_size(a, b), vma->pages->nents);
> +						err = -EINVAL;
> +						goto out_object;
> +					}
> +
> +					if (vma->node.size < vma->size) {
> +						pr_err("VMA binding too small, expected %llu, found %llu\n",
> +						       vma->size, vma->node.size);
> +						err = -EINVAL;
> +						goto out_object;
> +					}
> +
> +					if (vma->pages == obj->mm.pages) {
> +						pr_err("VMA using unrotated object pages!\n");
> +						err = -EINVAL;
> +						goto out_object;
> +					}
> +
> +					sg = vma->pages->sgl;
> +					for (n = 0; n < ARRAY_SIZE(view.rotated.plane); n++) {
> +						sg = assert_rotated(obj, &view.rotated, n, sg);
> +						if (IS_ERR(sg)) {
> +							pr_err("Inconsistent VMA pages for plane %d: [(%d, %d, %d, %d), (%d, %d, %d, %d)]\n", n,
> +							       view.rotated.plane[0].width,
> +							       view.rotated.plane[0].height,
> +							       view.rotated.plane[0].stride,
> +							       view.rotated.plane[0].offset,
> +							       view.rotated.plane[1].width,
> +							       view.rotated.plane[1].height,
> +							       view.rotated.plane[1].stride,
> +							       view.rotated.plane[1].offset);
> +							err = -EINVAL;
> +							goto out_object;
> +						}
> +					}
> +
> +					i915_vma_unpin(vma);
> +				}
> +			}
> +		}
> +	}
> +
> +out_object:
> +	i915_gem_object_put(obj);
> +out:
> +	return err;
> +}
> +
>  int i915_vma_mock_selftests(void)
>  {
>  	static const struct i915_subtest tests[] = {
>  		SUBTEST(igt_vma_create),
>  		SUBTEST(igt_vma_pin1),
> +		SUBTEST(igt_vma_rotate),
>  	};
>  	struct drm_i915_private *i915;
>  	int err;
>

With the changelog,

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups
  2017-02-02 12:49   ` Tvrtko Ursulin
@ 2017-02-02 13:02     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02 13:02 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Thu, Feb 02, 2017 at 12:49:58PM +0000, Tvrtko Ursulin wrote:
> 
> On 02/02/2017 09:08, Chris Wilson wrote:
> >+static bool wait_for_ready(struct igt_wakeup *w)
> >+{
> >+	DEFINE_WAIT(ready);
> >+
> >+	if (atomic_dec_and_test(w->done))
> >+		wake_up_atomic_t(w->done);
> >+
> >+	if (test_bit(STOP, &w->flags))
> >+		goto out;
> >+
> >+	set_bit(IDLE, &w->flags);
> 
> I think this needs to be before atomic_dec_and_test(w->done), to
> avoid that same assert racing with the threads. Because immediately
> after the wake_up_atomic above the main loop starts asserting the
> IDLE bit which is not guaranteed to be set yet.

Before wake_up_atomic_t which is the same thing, yup.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 21/46] drm/i915: Add selftests for object allocation, phys
  2017-02-02  9:08 ` [PATCH 21/46] drm/i915: Add selftests for object allocation, phys Chris Wilson
@ 2017-02-02 13:10   ` Matthew Auld
  2017-02-02 13:20     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Matthew Auld @ 2017-02-02 13:10 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 2 February 2017 at 09:08, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> The phys object is a rarely used device (only very old machines require
> a chunk of physically contiguous pages for a few hardware interactions).
> As such, it is not exercised by CI and to combat that we want to add a
> test that exercises the phys object on all platforms.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c                    |   1 +
>  drivers/gpu/drm/i915/selftests/i915_gem_object.c   | 120 +++++++++++++++++++++
>  .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
>  3 files changed, 122 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_object.c
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index f35fda5d0abc..429c5e4350f7 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4973,4 +4973,5 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
>  #include "selftests/scatterlist.c"
>  #include "selftests/mock_gem_device.c"
>  #include "selftests/huge_gem_object.c"
> +#include "selftests/i915_gem_object.c"
>  #endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
> new file mode 100644
> index 000000000000..db8f631e4993
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
> @@ -0,0 +1,120 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include "../i915_selftest.h"
> +
> +#include "mock_gem_device.h"
> +
> +static int igt_gem_object(void *arg)
> +{
> +       struct drm_i915_private *i915 = arg;
> +       struct drm_i915_gem_object *obj;
> +       int err = -ENOMEM;
> +
> +       /* Basic test to ensure we can create an object */
> +
> +       obj = i915_gem_object_create(i915, PAGE_SIZE);
> +       if (IS_ERR(obj)) {
> +               err = PTR_ERR(obj);
> +               pr_err("i915_gem_object_create failed, err=%d\n", err);
> +               goto out;
> +       }
> +
> +       err = 0;
> +       i915_gem_object_put(obj);
> +out:
> +       return err;
> +}
> +
> +static int igt_phys_object(void *arg)
> +{
> +       struct drm_i915_private *i915 = arg;
> +       struct drm_i915_gem_object *obj;
> +       int err = -ENOMEM;
> +
> +       /* Create an object and bind it to a contiguous set of physical pages,
> +        * i.e. exercise the i915_gem_object_phys API.
> +        */
> +
> +       obj = i915_gem_object_create(i915, PAGE_SIZE);
> +       if (IS_ERR(obj)) {
> +               err = PTR_ERR(obj);
> +               pr_err("i915_gem_object_create failed, err=%d\n", err);
> +               goto out;
> +       }
> +
> +       err = -EINVAL;
> +       mutex_lock(&i915->drm.struct_mutex);
> +       err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
> +       mutex_unlock(&i915->drm.struct_mutex);
> +       if (err) {
> +               pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
> +               goto out_obj;
> +       }
> +
> +       if (obj->ops != &i915_gem_phys_ops) {
> +               pr_err("i915_gem_object_attach_phys did not create a phys object\n");
> +               goto out_obj;
I'm guessing that you meant to return an error value here; see below also.

> +       }
> +
> +       if (!atomic_read(&obj->mm.pages_pin_count)) {
> +               pr_err("i915_gem_object_attach_phys did not pin its phys pages\n");
> +               goto out_obj;
> +       }
> +
> +       /* Make the object dirty so that put_pages must do copy back the data */
> +       mutex_lock(&i915->drm.struct_mutex);
> +       err = i915_gem_object_set_to_gtt_domain(obj, true);
> +       mutex_unlock(&i915->drm.struct_mutex);
> +       if (err) {
> +               pr_err("i915_gem_object_set_to_gtt_domain failed with err=%d\n",
> +                      err);
> +               goto out_obj;
> +       }
> +
> +       err = 0;
> +out_obj:
> +       i915_gem_object_put(obj);
> +out:
> +       return err;
> +}
> +
> +int i915_gem_object_mock_selftests(void)
> +{
> +       static const struct i915_subtest tests[] = {
> +               SUBTEST(igt_gem_object),
> +               SUBTEST(igt_phys_object),
> +       };
> +       struct drm_i915_private *i915;
> +       int err;
> +
> +       i915 = mock_gem_device();
> +       if (!i915)
> +               return -ENOMEM;
> +
> +       err = i915_subtests(tests, i915);
> +
> +       drm_dev_unref(&i915->drm);
> +       return err;
> +}
> diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> index bda982404ad3..2ed94e3a71b7 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> @@ -12,3 +12,4 @@ selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
>  selftest(scatterlist, scatterlist_mock_selftests)
>  selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
>  selftest(requests, i915_gem_request_mock_selftests)
> +selftest(objects, i915_gem_object_mock_selftests)
> --
> 2.11.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 21/46] drm/i915: Add selftests for object allocation, phys
  2017-02-02 13:10   ` Matthew Auld
@ 2017-02-02 13:20     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02 13:20 UTC (permalink / raw)
  To: Matthew Auld; +Cc: Intel Graphics Development

On Thu, Feb 02, 2017 at 01:10:57PM +0000, Matthew Auld wrote:
> On 2 February 2017 at 09:08, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > +       err = -EINVAL;
> > +       mutex_lock(&i915->drm.struct_mutex);
> > +       err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
> > +       mutex_unlock(&i915->drm.struct_mutex);
> > +       if (err) {
> > +               pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
> > +               goto out_obj;
> > +       }
> > +
> > +       if (obj->ops != &i915_gem_phys_ops) {
> > +               pr_err("i915_gem_object_attach_phys did not create a phys object\n");
> > +               goto out_obj;
> I'm guessing that you meant to return an error value, see below also.

Looks like I still thought I had the err = -EINVAL set. Good thing the
purpose of the test was to trigger the oops!
-Chris
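
For reference, a minimal sketch of the missing assignment (illustrative
only, not the actual follow-up patch) would set err in each failure
branch:

	if (obj->ops != &i915_gem_phys_ops) {
		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
		err = -EINVAL; /* sketch: report the failure via the goto */
		goto out_obj;
	}

	if (!atomic_read(&obj->mm.pages_pin_count)) {
		pr_err("i915_gem_object_attach_phys did not pin its phys pages\n");
		err = -EINVAL; /* sketch: likewise for the pin check */
		goto out_obj;
	}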

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 46/46] drm/i915: Add initial selftests for hang detection and resets
  2017-02-02  9:09 ` [PATCH 46/46] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
@ 2017-02-02 13:28   ` Mika Kuoppala
  0 siblings, 0 replies; 81+ messages in thread
From: Mika Kuoppala @ 2017-02-02 13:28 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Check that we can reset the GPU and continue executing from the next
> request.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_drv.h                    |   4 +-
>  drivers/gpu/drm/i915/intel_hangcheck.c             |   4 +
>  .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
>  drivers/gpu/drm/i915/selftests/intel_hangcheck.c   | 531 +++++++++++++++++++++
>  4 files changed, 538 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/selftests/intel_hangcheck.c
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 4a7e4b10c0a9..f82c59768f65 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3356,8 +3356,8 @@ int __must_check i915_gem_init(struct drm_i915_private *dev_priv);
>  int __must_check i915_gem_init_hw(struct drm_i915_private *dev_priv);
>  void i915_gem_init_swizzling(struct drm_i915_private *dev_priv);
>  void i915_gem_cleanup_engines(struct drm_i915_private *dev_priv);
> -int __must_check i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
> -					unsigned int flags);
> +int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
> +			   unsigned int flags);
>  int __must_check i915_gem_suspend(struct drm_i915_private *dev_priv);
>  void i915_gem_resume(struct drm_i915_private *dev_priv);
>  int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
> diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
> index f05971f5586f..dce742243ba6 100644
> --- a/drivers/gpu/drm/i915/intel_hangcheck.c
> +++ b/drivers/gpu/drm/i915/intel_hangcheck.c
> @@ -480,3 +480,7 @@ void intel_hangcheck_init(struct drm_i915_private *i915)
>  	INIT_DELAYED_WORK(&i915->gpu_error.hangcheck_work,
>  			  i915_hangcheck_elapsed);
>  }
> +
> +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> +#include "selftests/intel_hangcheck.c"
> +#endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> index 15fb4e0dd503..d0d4f4bcd837 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> @@ -16,3 +16,4 @@ selftest(dmabuf, i915_gem_dmabuf_live_selftests)
>  selftest(coherency, i915_gem_coherency_live_selftests)
>  selftest(gtt, i915_gem_gtt_live_selftests)
>  selftest(context, i915_gem_context_live_selftests)
> +selftest(hangcheck, intel_hangcheck_live_selftests)
> diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
> new file mode 100644
> index 000000000000..2131d8707dfd
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
> @@ -0,0 +1,531 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include "../i915_selftest.h"
> +
> +struct hang {
> +	struct drm_i915_private *i915;
> +	struct drm_i915_gem_object *hws;
> +	struct drm_i915_gem_object *obj;
> +	u32 *seqno;
> +	u32 *batch;
> +};
> +
> +static int hang_init(struct hang *h, struct drm_i915_private *i915)
> +{
> +	void *vaddr;
> +	int err;
> +
> +	memset(h, 0, sizeof(*h));
> +	h->i915 = i915;
> +
> +	h->hws = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +	if (IS_ERR(h->hws))
> +		return PTR_ERR(h->hws);
> +
> +	h->obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +	if (IS_ERR(h->obj)) {
> +		err = PTR_ERR(h->obj);
> +		goto err_hws;

Personally I like a verb added to the goto label, e.g. goto free_hws,
put_hws, or if you prefer, err_free_hws. The upside is that comparing
this code to what happens above, and reviewing the actual error
cleanups at the end of the function, can be done in separate passes,
without jumping back and forth in the editor (in most cases).
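
As a sketch of that naming (hypothetical labels, not what the patch
uses), each label describes the first cleanup step it performs:

	h->obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
	if (IS_ERR(h->obj)) {
		err = PTR_ERR(h->obj);
		goto put_hws; /* the label says what happens next */
	}

	/* ... */

unpin_hws:
	i915_gem_object_unpin_map(h->hws);
put_obj:
	i915_gem_object_put(h->obj);
put_hws:
	i915_gem_object_put(h->hws);
	return err;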

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>

> +	}
> +
> +	i915_gem_object_set_cache_level(h->hws, I915_CACHE_LLC);
> +	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
> +	if (IS_ERR(vaddr)) {
> +		err = PTR_ERR(vaddr);
> +		goto err_obj;
> +	}
> +	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
> +
> +	vaddr = i915_gem_object_pin_map(h->obj,
> +					HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC);
> +	if (IS_ERR(vaddr)) {
> +		err = PTR_ERR(vaddr);
> +		goto err_unpin_hws;
> +	}
> +	h->batch = vaddr;
> +
> +	return 0;
> +
> +err_unpin_hws:
> +	i915_gem_object_unpin_map(h->hws);
> +err_obj:
> +	i915_gem_object_put(h->obj);
> +err_hws:
> +	i915_gem_object_put(h->hws);
> +	return err;
> +}
> +
> +static u64 hws_address(const struct i915_vma *hws,
> +		       const struct drm_i915_gem_request *rq)
> +{
> +	return hws->node.start + offset_in_page(sizeof(u32)*rq->fence.context);
> +}
> +
> +static int emit_recurse_batch(struct hang *h,
> +			      struct drm_i915_gem_request *rq)
> +{
> +	struct drm_i915_private *i915 = h->i915;
> +	struct i915_address_space *vm = rq->ctx->ppgtt ? &rq->ctx->ppgtt->base : &i915->ggtt.base;
> +	struct i915_vma *hws, *vma;
> +	unsigned int flags;
> +	u32 *batch;
> +	int err;
> +
> +	vma = i915_vma_instance(h->obj, vm, NULL);
> +	if (IS_ERR(vma))
> +		return PTR_ERR(vma);
> +
> +	hws = i915_vma_instance(h->hws, vm, NULL);
> +	if (IS_ERR(hws))
> +		return PTR_ERR(hws);
> +
> +	err = i915_vma_pin(vma, 0, 0, PIN_USER);
> +	if (err)
> +		return err;
> +
> +	err = i915_vma_pin(hws, 0, 0, PIN_USER);
> +	if (err)
> +		goto unpin_vma;
> +
> +	err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
> +	if (err)
> +		goto unpin_hws;
> +
> +	err = i915_switch_context(rq);
> +	if (err)
> +		goto unpin_hws;
> +
> +	i915_vma_move_to_active(vma, rq, 0);
> +	if (!i915_gem_object_has_active_reference(vma->obj)) {
> +		i915_gem_object_get(vma->obj);
> +		i915_gem_object_set_active_reference(vma->obj);
> +	}
> +
> +	i915_vma_move_to_active(hws, rq, 0);
> +	if (!i915_gem_object_has_active_reference(hws->obj)) {
> +		i915_gem_object_get(hws->obj);
> +		i915_gem_object_set_active_reference(hws->obj);
> +	}
> +
> +	batch = h->batch;
> +	if (INTEL_GEN(i915) >= 8) {
> +		*batch++ = MI_STORE_DWORD_IMM_GEN4;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = upper_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
> +		*batch++ = lower_32_bits(vma->node.start);
> +		*batch++ = upper_32_bits(vma->node.start);
> +	} else if (INTEL_GEN(i915) >= 6) {
> +		*batch++ = MI_STORE_DWORD_IMM_GEN4;
> +		*batch++ = 0;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
> +		*batch++ = lower_32_bits(vma->node.start);
> +	} else if (INTEL_GEN(i915) >= 4) {
> +		*batch++ = MI_STORE_DWORD_IMM_GEN4 | 1 << 22;
> +		*batch++ = 0;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
> +		*batch++ = lower_32_bits(vma->node.start);
> +	} else {
> +		*batch++ = MI_STORE_DWORD_IMM;
> +		*batch++ = lower_32_bits(hws_address(hws, rq));
> +		*batch++ = rq->fence.seqno;
> +		*batch++ = MI_BATCH_BUFFER_START | 2 << 6 | 1;
> +		*batch++ = lower_32_bits(vma->node.start);
> +	}
> +	*batch++ = MI_BATCH_BUFFER_END; /* not reached */
> +
> +	flags = 0;
> +	if (INTEL_GEN(vm->i915) <= 5)
> +		flags |= I915_DISPATCH_SECURE;
> +
> +	err = rq->engine->emit_bb_start(rq, vma->node.start, PAGE_SIZE, flags);
> +
> +unpin_hws:
> +	i915_vma_unpin(hws);
> +unpin_vma:
> +	i915_vma_unpin(vma);
> +	return err;
> +}
> +
> +static struct drm_i915_gem_request *
> +hang_create_request(struct hang *h,
> +		    struct intel_engine_cs *engine,
> +		    struct i915_gem_context *ctx)
> +{
> +	struct drm_i915_gem_request *rq;
> +	int err;
> +
> +	if (i915_gem_object_is_active(h->obj)) {
> +		struct drm_i915_gem_object *obj;
> +		void *vaddr;
> +
> +		obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
> +		if (IS_ERR(obj))
> +			return ERR_CAST(obj);
> +
> +		vaddr = i915_gem_object_pin_map(obj,
> +						HAS_LLC(h->i915) ? I915_MAP_WB : I915_MAP_WC);
> +		if (IS_ERR(vaddr)) {
> +			i915_gem_object_put(obj);
> +			return ERR_CAST(vaddr);
> +		}
> +
> +		i915_gem_object_unpin_map(h->obj);
> +		i915_gem_object_put(h->obj);
> +
> +		h->obj = obj;
> +		h->batch = vaddr;
> +	}
> +
> +	rq = i915_gem_request_alloc(engine, ctx);
> +	if (IS_ERR(rq))
> +		return rq;
> +
> +	err = emit_recurse_batch(h, rq);
> +	if (err) {
> +		__i915_add_request(rq, false);
> +		return ERR_PTR(err);
> +	}
> +
> +	return rq;
> +}
> +
> +static u32 hws_seqno(const struct hang *h,
> +		     const struct drm_i915_gem_request *rq)
> +{
> +	return READ_ONCE(h->seqno[rq->fence.context % (PAGE_SIZE/sizeof(u32))]);
> +}
> +
> +static void hang_fini(struct hang *h)
> +{
> +	*h->batch = MI_BATCH_BUFFER_END;
> +	wmb();
> +
> +	i915_gem_object_unpin_map(h->obj);
> +	i915_gem_object_put(h->obj);
> +
> +	i915_gem_object_unpin_map(h->hws);
> +	i915_gem_object_put(h->hws);
> +
> +	i915_gem_wait_for_idle(h->i915, I915_WAIT_LOCKED);
> +	i915_gem_retire_requests(h->i915);
> +}
> +
> +static int igt_hang_sanitycheck(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct drm_i915_gem_request *rq;
> +	struct intel_engine_cs *engine;
> +	enum intel_engine_id id;
> +	struct hang h;
> +	int err;
> +
> +	/* Basic check that we can execute our hanging batch */
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	err = hang_init(&h, i915);
> +	if (err)
> +		goto unlock;
> +
> +	for_each_engine(engine, i915, id) {
> +		long timeout;
> +
> +		rq = hang_create_request(&h, engine, i915->kernel_context);
> +		if (IS_ERR(rq)) {
> +			err = PTR_ERR(rq);
> +			pr_err("Failed to create request for %s, err=%d\n",
> +			       engine->name, err);
> +			goto fini;
> +		}
> +
> +		i915_gem_request_get(rq);
> +
> +		*h.batch = MI_BATCH_BUFFER_END;
> +		__i915_add_request(rq, true);
> +
> +		timeout = i915_wait_request(rq,
> +					    I915_WAIT_LOCKED,
> +					    MAX_SCHEDULE_TIMEOUT);
> +		i915_gem_request_put(rq);
> +
> +		if (timeout < 0) {
> +			err = timeout;
> +			pr_err("Wait for request failed on %s, err=%d\n",
> +			       engine->name, err);
> +			goto fini;
> +		}
> +	}
> +
> +fini:
> +	hang_fini(&h);
> +unlock:
> +	mutex_unlock(&i915->drm.struct_mutex);
> +	return err;
> +}
> +
> +static int igt_global_reset(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	unsigned int reset_count;
> +	int err = 0;
> +
> +	/* Check that we can issue a global GPU reset */
> +
> +	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	reset_count = i915_reset_count(&i915->gpu_error);
> +
> +	i915_reset(i915);
> +
> +	if (i915_reset_count(&i915->gpu_error) == reset_count) {
> +		pr_err("No GPU reset recorded!\n");
> +		err = -EINVAL;
> +	}
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
> +	if (i915_terminally_wedged(&i915->gpu_error))
> +		err = -EIO;
> +
> +	return err;
> +}
> +
> +static u32 fake_hangcheck(struct drm_i915_gem_request *rq)
> +{
> +	u32 reset_count;
> +
> +	rq->engine->hangcheck.stalled = true;
> +	rq->engine->hangcheck.seqno = intel_engine_get_seqno(rq->engine);
> +
> +	reset_count = i915_reset_count(&rq->i915->gpu_error);
> +
> +	set_bit(I915_RESET_IN_PROGRESS, &rq->i915->gpu_error.flags);
> +	wake_up_all(&rq->i915->gpu_error.wait_queue);
> +
> +	return reset_count;
> +}
> +
> +static bool wait_for_hang(struct hang *h, struct drm_i915_gem_request *rq)
> +{
> +	return !(wait_for_us(i915_seqno_passed(hws_seqno(h, rq),
> +					       rq->fence.seqno),
> +			     10) &&
> +		 wait_for(i915_seqno_passed(hws_seqno(h, rq),
> +					    rq->fence.seqno),
> +			  1000));
> +}
> +
> +static int igt_wait_reset(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct drm_i915_gem_request *rq;
> +	unsigned int reset_count;
> +	struct hang h;
> +	long timeout;
> +	int err;
> +
> +	/* Check that we detect a stuck waiter and issue a reset */
> +
> +	set_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags);
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	err = hang_init(&h, i915);
> +	if (err)
> +		goto unlock;
> +
> +	rq = hang_create_request(&h, i915->engine[RCS], i915->kernel_context);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto fini;
> +	}
> +
> +	i915_gem_request_get(rq);
> +	__i915_add_request(rq, true);
> +
> +	if (!wait_for_hang(&h, rq)) {
> +		pr_err("Failed to start request %x\n", rq->fence.seqno);
> +		err = -EIO;
> +		goto out_rq;
> +	}
> +
> +	reset_count = fake_hangcheck(rq);
> +
> +	timeout = i915_wait_request(rq, I915_WAIT_LOCKED, 10);
> +	if (timeout < 0) {
> +		pr_err("i915_wait_request failed on a stuck request: err=%ld\n",
> +		       timeout);
> +		err = timeout;
> +		goto out_rq;
> +	}
> +	GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS, &i915->gpu_error.flags));
> +
> +	if (i915_reset_count(&i915->gpu_error) == reset_count) {
> +		pr_err("No GPU reset recorded!\n");
> +		err = -EINVAL;
> +		goto out_rq;
> +	}
> +
> +out_rq:
> +	i915_gem_request_put(rq);
> +fini:
> +	hang_fini(&h);
> +unlock:
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	if (i915_terminally_wedged(&i915->gpu_error))
> +		return -EIO;
> +
> +	return err;
> +}
> +
> +static int igt_reset_queue(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct intel_engine_cs *engine;
> +	enum intel_engine_id id;
> +	struct hang h;
> +	int err;
> +
> +	/* Check that we replay pending requests following a hang */
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	err = hang_init(&h, i915);
> +	if (err)
> +		goto unlock;
> +
> +	for_each_engine(engine, i915, id) {
> +		struct drm_i915_gem_request *prev;
> +		IGT_TIMEOUT(end_time);
> +		unsigned int count;
> +
> +		prev = hang_create_request(&h, engine, i915->kernel_context);
> +		if (IS_ERR(prev)) {
> +			err = PTR_ERR(prev);
> +			goto fini;
> +		}
> +
> +		i915_gem_request_get(prev);
> +		__i915_add_request(prev, true);
> +
> +		count = 0;
> +		do {
> +			struct drm_i915_gem_request *rq;
> +			unsigned int reset_count;
> +
> +			rq = hang_create_request(&h,
> +						 engine,
> +						 i915->kernel_context);
> +			if (IS_ERR(rq)) {
> +				err = PTR_ERR(rq);
> +				goto fini;
> +			}
> +
> +			i915_gem_request_get(rq);
> +			__i915_add_request(rq, true);
> +
> +			if (!wait_for_hang(&h, prev)) {
> +				pr_err("Failed to start request %x\n",
> +				       prev->fence.seqno);
> +				i915_gem_request_put(rq);
> +				i915_gem_request_put(prev);
> +				err = -EIO;
> +				goto fini;
> +			}
> +
> +			reset_count = fake_hangcheck(prev);
> +
> +			i915_reset(i915);
> +
> +			GEM_BUG_ON(test_bit(I915_RESET_IN_PROGRESS,
> +					    &i915->gpu_error.flags));
> +			if (prev->fence.error != -EIO) {
> +				pr_err("GPU reset not recorded on hanging request [fence.error=%d]!\n",
> +				       prev->fence.error);
> +				i915_gem_request_put(rq);
> +				i915_gem_request_put(prev);
> +				err = -EINVAL;
> +				goto fini;
> +			}
> +
> +			if (rq->fence.error) {
> +				pr_err("Fence error status not zero [%d] after unrelated reset\n",
> +				       rq->fence.error);
> +				i915_gem_request_put(rq);
> +				i915_gem_request_put(prev);
> +				err = -EINVAL;
> +				goto fini;
> +			}
> +
> +			if (i915_reset_count(&i915->gpu_error) == reset_count) {
> +				pr_err("No GPU reset recorded!\n");
> +				i915_gem_request_put(rq);
> +				i915_gem_request_put(prev);
> +				err = -EINVAL;
> +				goto fini;
> +			}
> +
> +			i915_gem_request_put(prev);
> +			prev = rq;
> +			count++;
> +		} while (time_before(jiffies, end_time));
> +		pr_info("%s: Completed %d resets\n", engine->name, count);
> +
> +		*h.batch = MI_BATCH_BUFFER_END;
> +		wmb();
> +
> +		i915_gem_request_put(prev);
> +	}
> +
> +fini:
> +	hang_fini(&h);
> +unlock:
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	if (i915_terminally_wedged(&i915->gpu_error))
> +		return -EIO;
> +
> +	return err;
> +}
> +
> +int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
> +{
> +	static const struct i915_subtest tests[] = {
> +		SUBTEST(igt_hang_sanitycheck),
> +		SUBTEST(igt_global_reset),
> +		SUBTEST(igt_wait_reset),
> +		SUBTEST(igt_reset_queue),
> +	};
> +
> +	if (!intel_has_gpu_reset(i915))
> +		return 0;
> +
> +	return i915_subtests(tests, i915);
> +}
> -- 
> 2.11.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-02  9:18 ` [PATCH igt] intel-ci: Add all driver selftests to BAT Chris Wilson
@ 2017-02-02 13:30   ` Maarten Lankhorst
  2017-02-02 13:44     ` Chris Wilson
  2017-02-17 11:50   ` Petri Latvala
  1 sibling, 1 reply; 81+ messages in thread
From: Maarten Lankhorst @ 2017-02-02 13:30 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On 02-02-17 at 10:18, Chris Wilson wrote:
> These are meant to be fast and sensitive to new (and old) bugs...
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Petri Latvala <petri.latvala@intel.com>
> ---
>  tests/intel-ci/fast-feedback.testlist | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
> index 828bd3ff..a0c3f848 100644
> --- a/tests/intel-ci/fast-feedback.testlist
> +++ b/tests/intel-ci/fast-feedback.testlist
> @@ -249,4 +249,23 @@ igt@drv_module_reload@basic-reload
>  igt@drv_module_reload@basic-no-display
>  igt@drv_module_reload@basic-reload-inject
>  igt@drv_module_reload@basic-reload-final
> +igt@drv_selftest@mock_sanitycheck
> +igt@drv_selftest@mock_scatterlist
> +igt@drv_selftest@mock_uncore
> +igt@drv_selftest@mock_breadcrumbs
> +igt@drv_selftest@mock_requests
> +igt@drv_selftest@mock_objects
> +igt@drv_selftest@mock_dmabuf
> +igt@drv_selftest@mock_vma
> +igt@drv_selftest@mock_evict
> +igt@drv_selftest@mock_gtt
> +igt@drv_selftest@live_sanitycheck
> +igt@drv_selftest@live_uncore
> +igt@drv_selftest@live_requests
> +igt@drv_selftest@live_object
> +igt@drv_selftest@live_dmabuf
> +igt@drv_selftest@live_coherency
> +igt@drv_selftest@live_gtt
> +igt@drv_selftest@live_context
> +igt@drv_selftest@live_hangcheck
>  igt@gvt_basic@invalid-placeholder-test

Add basic somewhere in the test names?

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-02 13:30   ` Maarten Lankhorst
@ 2017-02-02 13:44     ` Chris Wilson
  2017-02-02 14:11       ` Maarten Lankhorst
  2017-02-02 15:42       ` Saarinen, Jani
  0 siblings, 2 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-02 13:44 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

On Thu, Feb 02, 2017 at 02:30:19PM +0100, Maarten Lankhorst wrote:
> On 02-02-17 at 10:18, Chris Wilson wrote:
> > These are meant to be fast and sensitive to new (and old) bugs...
> >
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Petri Latvala <petri.latvala@intel.com>
> > ---
> >  tests/intel-ci/fast-feedback.testlist | 19 +++++++++++++++++++
> >  1 file changed, 19 insertions(+)
> >
> > diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
> > index 828bd3ff..a0c3f848 100644
> > --- a/tests/intel-ci/fast-feedback.testlist
> > +++ b/tests/intel-ci/fast-feedback.testlist
> > @@ -249,4 +249,23 @@ igt@drv_module_reload@basic-reload
> >  igt@drv_module_reload@basic-no-display
> >  igt@drv_module_reload@basic-reload-inject
> >  igt@drv_module_reload@basic-reload-final
> > +igt@drv_selftest@mock_sanitycheck
> > +igt@drv_selftest@mock_scatterlist
> > +igt@drv_selftest@mock_uncore
> > +igt@drv_selftest@mock_breadcrumbs
> > +igt@drv_selftest@mock_requests
> > +igt@drv_selftest@mock_objects
> > +igt@drv_selftest@mock_dmabuf
> > +igt@drv_selftest@mock_vma
> > +igt@drv_selftest@mock_evict
> > +igt@drv_selftest@mock_gtt
> > +igt@drv_selftest@live_sanitycheck
> > +igt@drv_selftest@live_uncore
> > +igt@drv_selftest@live_requests
> > +igt@drv_selftest@live_object
> > +igt@drv_selftest@live_dmabuf
> > +igt@drv_selftest@live_coherency
> > +igt@drv_selftest@live_gtt
> > +igt@drv_selftest@live_context
> > +igt@drv_selftest@live_hangcheck
> >  igt@gvt_basic@invalid-placeholder-test
> 
> Add basic somewhere in the test names?

Why? Does something still parse basic in the test name and add it to a
test set? Shouldn't that now be pulling from these lists instead?
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-02 13:44     ` Chris Wilson
@ 2017-02-02 14:11       ` Maarten Lankhorst
  2017-02-02 15:42       ` Saarinen, Jani
  1 sibling, 0 replies; 81+ messages in thread
From: Maarten Lankhorst @ 2017-02-02 14:11 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On 02-02-17 at 14:44, Chris Wilson wrote:
> On Thu, Feb 02, 2017 at 02:30:19PM +0100, Maarten Lankhorst wrote:
>> On 02-02-17 at 10:18, Chris Wilson wrote:
>>> These are meant to be fast and sensitive to new (and old) bugs...
>>>
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Petri Latvala <petri.latvala@intel.com>
>>> ---
>>>  tests/intel-ci/fast-feedback.testlist | 19 +++++++++++++++++++
>>>  1 file changed, 19 insertions(+)
>>>
>>> diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
>>> index 828bd3ff..a0c3f848 100644
>>> --- a/tests/intel-ci/fast-feedback.testlist
>>> +++ b/tests/intel-ci/fast-feedback.testlist
>>> @@ -249,4 +249,23 @@ igt@drv_module_reload@basic-reload
>>>  igt@drv_module_reload@basic-no-display
>>>  igt@drv_module_reload@basic-reload-inject
>>>  igt@drv_module_reload@basic-reload-final
>>> +igt@drv_selftest@mock_sanitycheck
>>> +igt@drv_selftest@mock_scatterlist
>>> +igt@drv_selftest@mock_uncore
>>> +igt@drv_selftest@mock_breadcrumbs
>>> +igt@drv_selftest@mock_requests
>>> +igt@drv_selftest@mock_objects
>>> +igt@drv_selftest@mock_dmabuf
>>> +igt@drv_selftest@mock_vma
>>> +igt@drv_selftest@mock_evict
>>> +igt@drv_selftest@mock_gtt
>>> +igt@drv_selftest@live_sanitycheck
>>> +igt@drv_selftest@live_uncore
>>> +igt@drv_selftest@live_requests
>>> +igt@drv_selftest@live_object
>>> +igt@drv_selftest@live_dmabuf
>>> +igt@drv_selftest@live_coherency
>>> +igt@drv_selftest@live_gtt
>>> +igt@drv_selftest@live_context
>>> +igt@drv_selftest@live_hangcheck
>>>  igt@gvt_basic@invalid-placeholder-test
>> Add basic somewhere in the test names?
> Why? Does something still parse basic in the test name and add it to a
> test set? Shouldn't that now be pulling from these lists instead?
The fast-feedback list is a subset of all the basic tests.

scripts/run-tests.sh -t basic is supposed to run at least those.

~Maarten
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-02 13:44     ` Chris Wilson
  2017-02-02 14:11       ` Maarten Lankhorst
@ 2017-02-02 15:42       ` Saarinen, Jani
  1 sibling, 0 replies; 81+ messages in thread
From: Saarinen, Jani @ 2017-02-02 15:42 UTC (permalink / raw)
  To: Chris Wilson, Maarten Lankhorst, Latvala, Petri; +Cc: intel-gfx

Hi, 

> >
> > Add basic somewhere in the test names?
> 
> Why? Does something still parse basic in the test name and add it to a test
No 
> set? Shouldn't that now be pulling from these lists instead?
You are right, yes; we control only the static list, and additions to it go through reviews. Petri, anything more to add?
So we are not using -t basic anymore. 

> -Chris
> 
> --
> Chris Wilson, Intel Open Source Technology Centre


Jani Saarinen
Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling
  2017-02-02  9:09 ` [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling Chris Wilson
@ 2017-02-08 12:12   ` Matthew Auld
  2017-02-09 10:53   ` Joonas Lahtinen
  1 sibling, 0 replies; 81+ messages in thread
From: Matthew Auld @ 2017-02-08 12:12 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 2 February 2017 at 09:09, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Use the live tests against the mock ppgtt for quick testing on all
> platforms of the VMA layer.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT
  2017-02-02  9:09 ` [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT Chris Wilson
@ 2017-02-08 12:25   ` Matthew Auld
  2017-02-08 12:33     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Matthew Auld @ 2017-02-08 12:25 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development

On 2 February 2017 at 09:09, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Move a single page of an object around within the GGTT and check
> coherency of writes and reads.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 91 +++++++++++++++++++++++++++
>  1 file changed, 91 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> index 7ec6fb2208a6..27e380a3bae5 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> @@ -684,6 +684,96 @@ static int igt_ggtt_drunk(void *arg)
>         return exercise_ggtt(arg, drunk_hole);
>  }
>
> +static int igt_ggtt_page(void *arg)
> +{
> +       const unsigned int count = PAGE_SIZE/sizeof(u32);
> +       I915_RND_STATE(prng);
> +       struct drm_i915_private *i915 = arg;
> +       struct i915_ggtt *ggtt = &i915->ggtt;
> +       struct drm_i915_gem_object *obj;
> +       struct drm_mm_node tmp;
> +       unsigned int *order, n;
> +       int err;
> +
> +       mutex_lock(&i915->drm.struct_mutex);
> +
> +       obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> +       if (IS_ERR(obj)) {
> +               err = PTR_ERR(obj);
> +               goto out_unlock;
> +       }
> +
> +       err = i915_gem_object_pin_pages(obj);
> +       if (err)
> +               goto out_free;
> +
> +       memset(&tmp, 0, sizeof(tmp));
> +       err = drm_mm_insert_node_in_range(&ggtt->base.mm, &tmp,
> +                                         1024 * PAGE_SIZE, 0,
> +                                         I915_COLOR_UNEVICTABLE,
> +                                         0, ggtt->mappable_end,
> +                                         DRM_MM_INSERT_LOW);
> +       if (err)
> +               goto out_unpin;
> +
> +       order = i915_random_order(count, &prng);
> +       if (!order) {
> +               err = -ENOMEM;
> +               goto out_remove;
> +       }
> +
> +       for (n = 0; n < count; n++) {
> +               u64 offset = tmp.start + order[n] * PAGE_SIZE;
> +               u32 __iomem *vaddr;
> +
> +               ggtt->base.insert_page(&ggtt->base,
> +                                      i915_gem_object_get_dma_address(obj, 0),
> +                                      offset, I915_CACHE_NONE, 0);
Forgive my ignorance but don't we need a write barrier here also?

Reviewed-by: Matthew Auld <matthew.auld@intel.com>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT
  2017-02-08 12:25   ` Matthew Auld
@ 2017-02-08 12:33     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-08 12:33 UTC (permalink / raw)
  To: Matthew Auld; +Cc: Intel Graphics Development

On Wed, Feb 08, 2017 at 12:25:55PM +0000, Matthew Auld wrote:
> On 2 February 2017 at 09:09, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > Move a single page of an object around within the GGTT and check
> > coherency of writes and reads.
> >
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 91 +++++++++++++++++++++++++++
> >  1 file changed, 91 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> > index 7ec6fb2208a6..27e380a3bae5 100644
> > --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> > +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
> > @@ -684,6 +684,96 @@ static int igt_ggtt_drunk(void *arg)
> >         return exercise_ggtt(arg, drunk_hole);
> >  }
> >
> > +static int igt_ggtt_page(void *arg)
> > +{
> > +       const unsigned int count = PAGE_SIZE/sizeof(u32);
> > +       I915_RND_STATE(prng);
> > +       struct drm_i915_private *i915 = arg;
> > +       struct i915_ggtt *ggtt = &i915->ggtt;
> > +       struct drm_i915_gem_object *obj;
> > +       struct drm_mm_node tmp;
> > +       unsigned int *order, n;
> > +       int err;
> > +
> > +       mutex_lock(&i915->drm.struct_mutex);
> > +
> > +       obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
> > +       if (IS_ERR(obj)) {
> > +               err = PTR_ERR(obj);
> > +               goto out_unlock;
> > +       }
> > +
> > +       err = i915_gem_object_pin_pages(obj);
> > +       if (err)
> > +               goto out_free;
> > +
> > +       memset(&tmp, 0, sizeof(tmp));
> > +       err = drm_mm_insert_node_in_range(&ggtt->base.mm, &tmp,
> > +                                         1024 * PAGE_SIZE, 0,
> > +                                         I915_COLOR_UNEVICTABLE,
> > +                                         0, ggtt->mappable_end,
> > +                                         DRM_MM_INSERT_LOW);
> > +       if (err)
> > +               goto out_unpin;
> > +
> > +       order = i915_random_order(count, &prng);
> > +       if (!order) {
> > +               err = -ENOMEM;
> > +               goto out_remove;
> > +       }
> > +
> > +       for (n = 0; n < count; n++) {
> > +               u64 offset = tmp.start + order[n] * PAGE_SIZE;
> > +               u32 __iomem *vaddr;
> > +
> > +               ggtt->base.insert_page(&ggtt->base,
> > +                                      i915_gem_object_get_dma_address(obj, 0),
> > +                                      offset, I915_CACHE_NONE, 0);
> Forgive my ignorance but don't we need a write barrier here also?

After the insert, no. That's provided by insert_page() itself, so writes
cannot overtake the setup of the PTE. But we don't put a barrier before
insert_pages(), which is left to the caller to provide. It feels
inconsistent and we probably should just play safe and put all the
barriers inside the PTE manipulation routines. Mistakes are very hard to
detect - the easiest seem to be around relocation, where a wrong value
seen by the GPU can lead to a GPU hang.
-Chris
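
A rough sketch of that idea (hypothetical helper names, not the current
i915 code) folds both orderings into the PTE routine itself, so callers
never need to remember them:

static void insert_page(struct i915_address_space *vm,
			dma_addr_t addr, u64 offset,
			enum i915_cache_level level, u32 flags)
{
	/* order prior CPU writes to the backing page before the PTE
	 * becomes visible to the GPU
	 */
	wmb();

	/* __write_pte()/__pte_encode() stand in for the per-gen helpers */
	__write_pte(vm, offset, __pte_encode(addr, level, flags));

	/* and order the PTE update before any later access through the
	 * new mapping
	 */
	wmb();
}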

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 03/46] drm/i915: Unbind any residual objects/vma from the Global GTT on shutdown
  2017-02-02  9:08 ` [PATCH 03/46] drm/i915: Unbind any residual objects/vma from the Global GTT on shutdown Chris Wilson
@ 2017-02-08 13:36   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-08 13:36 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> We may unload the PCI device before all users (such as dma-buf) are
> completely shutdown. This may leave VMA in the global GTT which we want
> to revoke, whilst keeping the objects themselves around to service the
> dma-buf.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 04/46] drm/i915: Flush the freed object queue on device release
  2017-02-02  9:08 ` [PATCH 04/46] drm/i915: Flush the freed object queue on device release Chris Wilson
@ 2017-02-08 13:38   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-08 13:38 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> As dmabufs may live beyond the PCI device removal, we need to flush the
> freed object worker on device release, and include a warning in case
> there is a leak.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 02/46] drm/i915: Split device release from unload
  2017-02-02  9:08 ` [PATCH 02/46] drm/i915: Split device release from unload Chris Wilson
@ 2017-02-08 13:41   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-08 13:41 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> We may need to keep our memory management alive after we have unloaded
> the physical pci device. For example, if we have exported an object via
> dmabuf, that will keep the device around but the pci device may be
> removed before the dmabuf itself is released, use of the pci hardware
> will be revoked, but the memory and object management needs to persist
> for the dmabuf.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -1299,7 +1299,8 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
>  	pci_disable_device(pdev);
>  out_free_priv:
>  	i915_load_error(dev_priv, "Device initialization failed (%d)\n", ret);
> -	drm_dev_unref(&dev_priv->drm);
> +	drm_dev_fini(&dev_priv->drm);
> +	kfree(dev_priv);
>  	return ret;
>  }

This function could use goto err; as there's kfree up there too.
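
A possible shape for that (labels are hypothetical, not from the
series):

out_fini:
	i915_load_error(dev_priv, "Device initialization failed (%d)\n", ret);
	drm_dev_fini(&dev_priv->drm);
out_free:
	kfree(dev_priv);
	return ret;

with the earlier failure path (the kfree mentioned above) jumping
straight to out_free.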

With that teardown path;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 19/46] drm/i915: Test request ordering between engines
  2017-02-02  9:08 ` [PATCH 19/46] drm/i915: Test request ordering between engines Chris Wilson
@ 2017-02-09 10:20   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-09 10:20 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> A request on one engine with a dependency on a request on another engine
> must wait for completion of the first request before starting.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +static int live_sequential_engines(void *arg)
> +{

<SNIP>

> +	for_each_engine(engine, i915, id) {
> +		long timeout;
> +		u32 *cmd;
> +
> +		if (i915_gem_request_completed(request[id])) {
> +			pr_err("%s(%s): request completed too early!\n",
> +			       __func__, engine->name);
> +			err = -EINVAL;
> +			goto out_request;
> +		}

I was kind of anticipating that you would capture prev and always
release the previous batch. It doesn't necessarily add that much value,
but you could make sure, before releasing the batch, that the current
one has not even started yet. That's what you mention in the commit
message.

> +
> +		cmd = i915_gem_object_pin_map(request[id]->batch->obj,
> +					      I915_MAP_WC);
> +		if (IS_ERR(cmd)) {
> +			err = PTR_ERR(cmd);
> +			pr_err("%s: failed to WC map batch, err=%d\n", __func__, err);
> +			goto out_request;
> +		}
> +		*cmd = MI_BATCH_BUFFER_END;
> +		wmb();
> +		i915_gem_object_unpin_map(request[id]->batch->obj);

recursive_batch_release()

With the helper;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 20/46] drm/i915: Live testing of empty requests
  2017-02-02  9:08 ` [PATCH 20/46] drm/i915: Live testing of empty requests Chris Wilson
@ 2017-02-09 10:30   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-09 10:30 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> Primarily to emphasize the difference between just advancing the
> breadcrumb using a bare request and the overhead of dispatching an
> execbuffer.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 32/46] drm/i915: Exercise filling the top/bottom portions of the ppgtt
  2017-02-02  9:08 ` [PATCH 32/46] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
@ 2017-02-09 10:49   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-09 10:49 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> Allocate objects with varying number of pages (which should hopefully
> consist of a mixture of contiguous page chunks and so coalesced sg
> lists) and check that the sg walkers in insert_pages cope.
> 
> v2: Check both small <-> large
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Code is more understandable now;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling
  2017-02-02  9:09 ` [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling Chris Wilson
  2017-02-08 12:12   ` Matthew Auld
@ 2017-02-09 10:53   ` Joonas Lahtinen
  1 sibling, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-09 10:53 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:09 +0000, Chris Wilson wrote:
> Use the live tests against the mock ppgtt for quick testing on all
> platforms of the VMA layer.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

> +static int exercise_mock(struct drm_i915_private *i915,
> +			 int (*func)(struct drm_i915_private *i915,
> +				     struct i915_address_space *vm,
> +				     u64 hole_start, u64 hole_end,
> +				     unsigned long end_time))

typedef might not hurt.
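
Something like this, as a sketch (the typedef name is made up):

typedef int (*igt_mock_hole_fn)(struct drm_i915_private *i915,
				struct i915_address_space *vm,
				u64 hole_start, u64 hole_end,
				unsigned long end_time);

static int exercise_mock(struct drm_i915_private *i915,
			 igt_mock_hole_fn func);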

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 30/46] drm/i915: Add a live dmabuf selftest
  2017-02-02  9:08 ` [PATCH 30/46] drm/i915: Add a live dmabuf selftest Chris Wilson
@ 2017-02-09 10:59   ` Joonas Lahtinen
  0 siblings, 0 replies; 81+ messages in thread
From: Joonas Lahtinen @ 2017-02-09 10:59 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Thu, 2017-02-02 at 09:08 +0000, Chris Wilson wrote:
> Though we have good coverage of our dmabuf interface through the mock
> tests, we also want to check the heavy module unload paths of the live
> i915 driver.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: [PATCH 05/46] drm/i915: Provide a hook for selftests
  2017-02-02  9:08 ` [PATCH 05/46] drm/i915: Provide a hook for selftests Chris Wilson
  2017-02-02  9:11   ` Chris Wilson
@ 2017-02-10 10:19   ` Tvrtko Ursulin
  2017-02-10 10:36     ` Chris Wilson
  1 sibling, 1 reply; 81+ messages in thread
From: Tvrtko Ursulin @ 2017-02-10 10:19 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 02/02/2017 09:08, Chris Wilson wrote:
> Some pieces of code are independent of hardware but are very tricky to
> exercise through the normal userspace ABI or via debugfs hooks. Being
> able to create mock unit tests and execute them through CI is vital.
> Start by adding a central point where we can execute unit tests and
> a parameter to enable them. This is disabled by default as the
> expectation is that these tests will occasionally explode.
>
> To facilitate integration with igt, any parameter beginning with
> i915.igt__ is interpreted as a subtest executable independently via
> igt/drv_selftest.
>
> Two classes of selftests are recognised: mock unit tests and integration
> tests. Mock unit tests are run as soon as the module is loaded, before
> the device is probed. At that point there is no driver instantiated and
> all hw interactions must be "mocked". This is very useful for writing
> universal tests to exercise code not typically run on a broad range of
> architectures. Alternatively, you can hook into the live selftests and
> run when the device has been instantiated - hw interactions are real.
>
> v2: Add a macro for compiling conditional code for mock objects inside
> real objects.
> v3: Differentiate between mock unit tests and late integration test.
> v4: List the tests in natural order, use igt to sort after modparam.
> v5: s/late/live/
> v6: s/unsigned long/unsigned int/
> v7: Use igt_ prefixes for long helpers.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1
> ---
>  drivers/gpu/drm/i915/Kconfig.debug                 |  16 ++
>  drivers/gpu/drm/i915/Makefile                      |   3 +
>  drivers/gpu/drm/i915/i915_pci.c                    |  31 ++-
>  drivers/gpu/drm/i915/i915_selftest.h               | 102 +++++++++
>  .../gpu/drm/i915/selftests/i915_live_selftests.h   |  11 +
>  .../gpu/drm/i915/selftests/i915_mock_selftests.h   |  11 +
>  drivers/gpu/drm/i915/selftests/i915_random.c       |  63 ++++++
>  drivers/gpu/drm/i915/selftests/i915_random.h       |  50 +++++
>  drivers/gpu/drm/i915/selftests/i915_selftest.c     | 250 +++++++++++++++++++++
>  tools/testing/selftests/drivers/gpu/i915.sh        |   1 +
>  10 files changed, 531 insertions(+), 7 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/i915_selftest.h
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_live_selftests.h
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_random.c
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_random.h
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_selftest.c
>
> diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
> index 598551dbf62c..a4d8cfd77c3c 100644
> --- a/drivers/gpu/drm/i915/Kconfig.debug
> +++ b/drivers/gpu/drm/i915/Kconfig.debug
> @@ -26,6 +26,7 @@ config DRM_I915_DEBUG
>          select DRM_DEBUG_MM if DRM=y
>  	select DRM_DEBUG_MM_SELFTEST
>  	select DRM_I915_SW_FENCE_DEBUG_OBJECTS
> +	select DRM_I915_SELFTEST
>          default n
>          help
>            Choose this option to turn on extra driver debugging that may affect
> @@ -59,3 +60,18 @@ config DRM_I915_SW_FENCE_DEBUG_OBJECTS
>            Recommended for driver developers only.
>
>            If in doubt, say "N".
> +
> +config DRM_I915_SELFTEST
> +	bool "Enable selftests upon driver load"
> +	depends on DRM_I915
> +	default n
> +	select PRIME_NUMBERS
> +	help
> +	  Choose this option to allow the driver to perform selftests upon
> +	  loading; also requires the i915.selftest=1 module parameter. To
> +	  exit the module after running the selftests (i.e. to prevent normal
> +	  module initialisation afterwards) use i915.selftest=-1.
> +
> +	  Recommended for driver developers only.
> +
> +	  If in doubt, say "N".
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index c62ab45683c0..bac62fd5b438 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -116,6 +116,9 @@ i915-y += dvo_ch7017.o \
>
>  # Post-mortem debug and GPU hang state capture
>  i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
> +i915-$(CONFIG_DRM_I915_SELFTEST) += \
> +	selftests/i915_random.o \
> +	selftests/i915_selftest.o
>
>  # virtual gpu code
>  i915-y += i915_vgpu.o
> diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
> index df2051b41fa1..732101ed57fb 100644
> --- a/drivers/gpu/drm/i915/i915_pci.c
> +++ b/drivers/gpu/drm/i915/i915_pci.c
> @@ -27,6 +27,7 @@
>  #include <linux/vga_switcheroo.h>
>
>  #include "i915_drv.h"
> +#include "i915_selftest.h"
>
>  #define GEN_DEFAULT_PIPEOFFSETS \
>  	.pipe_offsets = { PIPE_A_OFFSET, PIPE_B_OFFSET, \
> @@ -473,10 +474,19 @@ static const struct pci_device_id pciidlist[] = {
>  };
>  MODULE_DEVICE_TABLE(pci, pciidlist);
>
> +static void i915_pci_remove(struct pci_dev *pdev)
> +{
> +	struct drm_device *dev = pci_get_drvdata(pdev);
> +
> +	i915_driver_unload(dev);
> +	drm_dev_unref(dev);
> +}
> +
>  static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  {
>  	struct intel_device_info *intel_info =
>  		(struct intel_device_info *) ent->driver_data;
> +	int err;
>
>  	if (IS_ALPHA_SUPPORT(intel_info) && !i915.alpha_support) {
>  		DRM_INFO("The driver support for your hardware in this kernel version is alpha quality\n"
> @@ -500,15 +510,17 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  	if (vga_switcheroo_client_probe_defer(pdev))
>  		return -EPROBE_DEFER;
>
> -	return i915_driver_load(pdev, ent);
> -}
> +	err = i915_driver_load(pdev, ent);
> +	if (err)
> +		return err;
>
> -static void i915_pci_remove(struct pci_dev *pdev)
> -{
> -	struct drm_device *dev = pci_get_drvdata(pdev);
> +	err = i915_live_selftests(pdev);
> +	if (err) {
> +		i915_pci_remove(pdev);
> +		return err > 0 ? -ENOTTY : err;
> +	}
>
> -	i915_driver_unload(dev);
> -	drm_dev_unref(dev);
> +	return 0;
>  }
>
>  static struct pci_driver i915_pci_driver = {
> @@ -522,6 +534,11 @@ static struct pci_driver i915_pci_driver = {
>  static int __init i915_init(void)
>  {
>  	bool use_kms = true;
> +	int err;
> +
> +	err = i915_mock_selftests();
> +	if (err)
> +		return err > 0 ? 0 : err;
>
>  	/*
>  	 * Enable KMS by default, unless explicitly overriden by
> diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
> new file mode 100644
> index 000000000000..8b5994caa301
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/i915_selftest.h
> @@ -0,0 +1,102 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#ifndef __I915_SELFTEST_H__
> +#define __I915_SELFTEST_H__
> +
> +struct pci_dev;
> +struct drm_i915_private;
> +
> +struct i915_selftest {
> +	unsigned long timeout_jiffies;
> +	unsigned int timeout_ms;
> +	unsigned int random_seed;
> +	int mock;
> +	int live;
> +};
> +
> +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> +extern struct i915_selftest i915_selftest;
> +
> +int i915_mock_selftests(void);
> +int i915_live_selftests(struct pci_dev *pdev);
> +
> +/* We extract the function declarations from i915_mock_selftests.h and
> + * i915_live_selftests.h Add your unit test declarations there!
> + *
> + * Mock unit tests are run very early upon module load, before the driver
> + * is probed. All hardware interactions, as well as other subsystems, must
> + * be "mocked".
> + *
> + * Live unit tests are run after the driver is loaded - all hardware
> + * interactions are real.
> + */
> +#define selftest(name, func) int func(void);
> +#include "selftests/i915_mock_selftests.h"
> +#undef selftest
> +#define selftest(name, func) int func(struct drm_i915_private *i915);
> +#include "selftests/i915_live_selftests.h"
> +#undef selftest
> +
> +struct i915_subtest {
> +	int (*func)(void *data);
> +	const char *name;
> +};
> +
> +int __i915_subtests(const char *caller,
> +		    const struct i915_subtest *st,
> +		    unsigned int count,
> +		    void *data);
> +#define i915_subtests(T, data) \
> +	__i915_subtests(__func__, T, ARRAY_SIZE(T), data)
> +
> +#define SUBTEST(x) { x, #x }
> +
> +#define I915_SELFTEST_DECLARE(x) x
> +#define I915_SELFTEST_ONLY(x) unlikely(x)
> +
> +#else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
> +
> +static inline int i915_mock_selftests(void) { return 0; }
> +static inline int i915_live_selftests(struct pci_dev *pdev) { return 0; }
> +
> +#define I915_SELFTEST_DECLARE(x)
> +#define I915_SELFTEST_ONLY(x) 0
> +
> +#endif
> +
> +/* Using the i915_selftest_ prefix becomes a little unwieldy with the helpers.
> + * Instead we use the igt_ shorthand, in reference to the intel-gpu-tools
> + * suite of uabi test cases (which includes a test runner for our selftests).
> + */
> +
> +#define IGT_TIMEOUT(name__) \
> +	unsigned long name__ = jiffies + i915_selftest.timeout_jiffies
> +
> +__printf(2, 3)
> +bool __igt_timeout(unsigned long timeout, const char *fmt, ...);
> +
> +#define igt_timeout(t, fmt, ...) \
> +	__igt_timeout((t), KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
> +
> +#endif /* !__I915_SELFTEST_H__ */
> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> new file mode 100644
> index 000000000000..f3e17cb10e05
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> @@ -0,0 +1,11 @@
> +/* List each unit test as selftest(name, function)
> + *
> + * The name is used as both an enum and expanded as subtest__name to create
> + * a module parameter. It must be unique and legal for a C identifier.
> + *
> + * The function should be of type int function(struct drm_i915_private *i915).
> + * It may be conditionally compiled using #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST).
> + *
> + * Tests are executed in order by igt/drv_selftest
> + */
> +selftest(sanitycheck, i915_live_sanitycheck) /* keep first (igt selfcheck) */
> diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> new file mode 100644
> index 000000000000..69e97a2ba4a6
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> @@ -0,0 +1,11 @@
> +/* List each unit test as selftest(name, function)
> + *
> + * The name is used as both an enum and expanded as subtest__name to create
> + * a module parameter. It must be unique and legal for a C identifier.
> + *
> + * The function should be of type int function(void). It may be conditionally
> + * compiled using #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST).
> + *
> + * Tests are executed in order by igt/drv_selftest
> + */
> +selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
> diff --git a/drivers/gpu/drm/i915/selftests/i915_random.c b/drivers/gpu/drm/i915/selftests/i915_random.c
> new file mode 100644
> index 000000000000..606a237fed17
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_random.c
> @@ -0,0 +1,63 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include <linux/bitops.h>
> +#include <linux/kernel.h>
> +#include <linux/random.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include "i915_random.h"
> +
> +static inline u32 i915_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
> +{
> +	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);

What is ep_ro?

> +}
> +
> +void i915_random_reorder(unsigned int *order, unsigned int count,
> +			 struct rnd_state *state)
> +{
> +	unsigned int i, j;
> +
> +	for (i = 0; i < count; ++i) {
> +		BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));

? :)

> +		j = i915_prandom_u32_max_state(count, state);
> +		swap(order[i], order[j]);
> +	}
> +}
> +
> +unsigned int *i915_random_order(unsigned int count, struct rnd_state *state)
> +{
> +	unsigned int *order, i;
> +
> +	order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
> +	if (!order)
> +		return order;
> +
> +	for (i = 0; i < count; i++)
> +		order[i] = i;
> +
> +	i915_random_reorder(order, count, state);
> +	return order;
> +}
> diff --git a/drivers/gpu/drm/i915/selftests/i915_random.h b/drivers/gpu/drm/i915/selftests/i915_random.h
> new file mode 100644
> index 000000000000..b9c334ce6cd9
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_random.h
> @@ -0,0 +1,50 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __I915_SELFTESTS_RANDOM_H__
> +#define __I915_SELFTESTS_RANDOM_H__
> +
> +#include <linux/random.h>
> +
> +#include "../i915_selftest.h"
> +
> +#define I915_RND_STATE_INITIALIZER(x) ({				\
> +	struct rnd_state state__;					\
> +	prandom_seed_state(&state__, (x));				\
> +	state__;							\
> +})
> +
> +#define I915_RND_STATE(name__) \
> +	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(i915_selftest.random_seed)
> +
> +#define I915_RND_SUBSTATE(name__, parent__) \
> +	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(prandom_u32_state(&(parent__)))
> +
> +unsigned int *i915_random_order(unsigned int count,
> +				struct rnd_state *state);
> +void i915_random_reorder(unsigned int *order,
> +			 unsigned int count,
> +			 struct rnd_state *state);
> +
> +#endif /* !__I915_SELFTESTS_RANDOM_H__ */
> diff --git a/drivers/gpu/drm/i915/selftests/i915_selftest.c b/drivers/gpu/drm/i915/selftests/i915_selftest.c
> new file mode 100644
> index 000000000000..6ba3abb10c6f
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c
> @@ -0,0 +1,250 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/random.h>
> +
> +#include "../i915_drv.h"
> +#include "../i915_selftest.h"
> +
> +struct i915_selftest i915_selftest __read_mostly = {
> +	.timeout_ms = 1000,
> +};
> +
> +int i915_mock_sanitycheck(void)
> +{
> +	pr_info(DRIVER_NAME ": %s() - ok!\n", __func__);
> +	return 0;
> +}
> +
> +int i915_live_sanitycheck(struct drm_i915_private *i915)
> +{
> +	pr_info("%s: %s() - ok!\n", i915->drm.driver->name, __func__);
> +	return 0;
> +}
> +
> +enum {
> +#define selftest(name, func) mock_##name,
> +#include "i915_mock_selftests.h"
> +#undef selftest
> +};
> +
> +enum {
> +#define selftest(name, func) live_##name,
> +#include "i915_live_selftests.h"
> +#undef selftest
> +};
> +
> +struct selftest {
> +	bool enabled;
> +	const char *name;
> +	union {
> +		int (*mock)(void);
> +		int (*live)(struct drm_i915_private *);
> +	};
> +};
> +
> +#define selftest(n, f) [mock_##n] = { .name = #n, .mock = f },
> +static struct selftest mock_selftests[] = {
> +#include "i915_mock_selftests.h"
> +};
> +#undef selftest
> +
> +#define selftest(n, f) [live_##n] = { .name = #n, .live = f },
> +static struct selftest live_selftests[] = {
> +#include "i915_live_selftests.h"
> +};
> +#undef selftest
> +
> +/* Embed the line number into the parameter name so that we can order tests */
> +#define selftest(n, func) selftest_0(n, func, param(n))
> +#define param(n) __PASTE(igt__, __PASTE(__LINE__, __mock_##n))
> +#define selftest_0(n, func, id) \
> +module_param_named(id, mock_selftests[mock_##n].enabled, bool, 0400);
> +#include "i915_mock_selftests.h"
> +#undef selftest_0
> +#undef param
> +
> +#define param(n) __PASTE(igt__, __PASTE(__LINE__, __live_##n))
> +#define selftest_0(n, func, id) \
> +module_param_named(id, live_selftests[live_##n].enabled, bool, 0400);
> +#include "i915_live_selftests.h"
> +#undef selftest_0
> +#undef param
> +#undef selftest
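
To make the token pasting concrete: assuming the sanitycheck entry sits on
line 11 of i915_mock_selftests.h (the real line number will differ), the
selftest() above expands to roughly

	/* illustrative expansion only; the line number is an assumption */
	module_param_named(igt__11__mock_sanitycheck,
			   mock_selftests[mock_sanitycheck].enabled, bool, 0400);

so each test ends up with an i915.igt__<line>__mock_<name> switch whose
numeric prefix preserves the listing order.
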
> +
> +static void set_default_test_all(struct selftest *st, unsigned int count)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < count; i++)
> +		if (st[i].enabled)
> +			return;
> +
> +	for (i = 0; i < count; i++)
> +		st[i].enabled = true;
> +}
> +
> +static int __run_selftests(const char *name,
> +			   struct selftest *st,
> +			   unsigned int count,
> +			   void *data)
> +{
> +	int err = 0;
> +
> +	while (!i915_selftest.random_seed)
> +		i915_selftest.random_seed = get_random_int();
> +
> +	i915_selftest.timeout_jiffies =
> +		i915_selftest.timeout_ms ?
> +		msecs_to_jiffies_timeout(i915_selftest.timeout_ms) :
> +		MAX_SCHEDULE_TIMEOUT;
> +
> +	set_default_test_all(st, count);
> +
> +	pr_info(DRIVER_NAME ": Performing %s selftests with st_random_seed=0x%x st_timeout=%u\n",
> +		name, i915_selftest.random_seed, i915_selftest.timeout_ms);
> +
> +	/* Tests are listed in order in i915_*_selftests.h */
> +	for (; count--; st++) {
> +		if (!st->enabled)
> +			continue;
> +
> +		cond_resched();
> +		if (signal_pending(current))
> +			return -EINTR;
> +
> +		pr_debug(DRIVER_NAME ": Running %s\n", st->name);
> +		if (data)
> +			err = st->live(data);
> +		else
> +			err = st->mock();
> +		if (err == -EINTR && !signal_pending(current))

What is the expected use for this?

> +			err = 0;
> +		if (err)
> +			break;
> +	}
> +
> +	if (WARN(err > 0 || err == -ENOTTY,
> +		 "%s returned %d, conflicting with selftest's magic values!\n",
> +		 st->name, err))
> +		err = -1;
> +
> +	return err;
> +}
> +
> +#define run_selftests(x, data) \
> +	__run_selftests(#x, x##_selftests, ARRAY_SIZE(x##_selftests), data)
> +
> +int i915_mock_selftests(void)
> +{
> +	int err;
> +
> +	if (!i915_selftest.mock)
> +		return 0;
> +
> +	err = run_selftests(mock, NULL);
> +	if (err) {
> +		i915_selftest.mock = err;
> +		return err;
> +	}
> +
> +	if (i915_selftest.mock < 0) {
> +		i915_selftest.mock = -ENOTTY;
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +
> +int i915_live_selftests(struct pci_dev *pdev)
> +{
> +	int err;
> +
> +	if (!i915_selftest.live)
> +		return 0;
> +
> +	err = run_selftests(live, to_i915(pci_get_drvdata(pdev)));
> +	if (err) {
> +		i915_selftest.live = err;
> +		return err;
> +	}
> +
> +	if (i915_selftest.live < 0) {
> +		i915_selftest.live = -ENOTTY;
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +
> +int __i915_subtests(const char *caller,
> +		    const struct i915_subtest *st,
> +		    unsigned int count,
> +		    void *data)
> +{
> +	int err;
> +
> +	for (; count--; st++) {
> +		cond_resched();
> +		if (signal_pending(current))
> +			return -EINTR;
> +
> +		pr_debug(DRIVER_NAME ": Running %s/%s\n", caller, st->name);
> +		err = st->func(data);
> +		if (err && err != -EINTR) {
> +			pr_err(DRIVER_NAME "/%s: %s failed with error %d\n",
> +			       caller, st->name, err);
> +			return err;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +bool __igt_timeout(unsigned long timeout, const char *fmt, ...)
> +{
> +	va_list va;
> +
> +	if (!signal_pending(current)) {
> +		cond_resched();
> +		if (time_before(jiffies, timeout))
> +			return false;
> +	}
> +
> +	if (fmt) {
> +		va_start(va, fmt);
> +		vprintk(fmt, va);
> +		va_end(va);
> +	}
> +
> +	return true;
> +}
> +
> +module_param_named(st_random_seed, i915_selftest.random_seed, uint, 0400);
> +module_param_named(st_timeout, i915_selftest.timeout_ms, uint, 0400);

I think selftest_ prefix would be better to match the ones below and 
just in general.

> +
> +module_param_named_unsafe(mock_selftests, i915_selftest.mock, int, 0400);
> +MODULE_PARM_DESC(mock_selftests, "Run selftests before loading, using mock hardware (0:disabled [default], 1:run tests then load driver, -1:run tests then exit module)");
> +
> +module_param_named_unsafe(live_selftests, i915_selftest.live, int, 0400);
> +MODULE_PARM_DESC(live_selftests, "Run selftests after driver initialisation on the live system (0:disabled [default], 1:run tests then continue, -1:run tests then exit module)");
> diff --git a/tools/testing/selftests/drivers/gpu/i915.sh b/tools/testing/selftests/drivers/gpu/i915.sh
> index d407f0fa1e3a..c06d6e8a8dcc 100755
> --- a/tools/testing/selftests/drivers/gpu/i915.sh
> +++ b/tools/testing/selftests/drivers/gpu/i915.sh
> @@ -7,6 +7,7 @@ if ! /sbin/modprobe -q -r i915; then
>  fi
>
>  if /sbin/modprobe -q i915 mock_selftests=-1; then
> +	/sbin/modprobe -q -r i915
>  	echo "drivers/gpu/i915: ok"
>  else
>  	echo "drivers/gpu/i915: [FAIL]"
>

No serious complaints. Bikeshed or not (but preferred for modparams):

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation
  2017-02-02  9:08 ` [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
@ 2017-02-10 10:24   ` Tvrtko Ursulin
  2017-02-10 10:43     ` Chris Wilson
  0 siblings, 1 reply; 81+ messages in thread
From: Tvrtko Ursulin @ 2017-02-10 10:24 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 02/02/2017 09:08, Chris Wilson wrote:
> Start exercising the scattergather lists, especially looking at
> iteration after coalescing.
>
> v2: Comment on the peculiarity of table construction (i.e. why this
> sg_table might be interesting).
> v3: Added one __func__ to identify expect_pfn_sg()
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c                    |  11 +-
>  .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
>  drivers/gpu/drm/i915/selftests/scatterlist.c       | 331 +++++++++++++++++++++
>  3 files changed, 340 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/selftests/scatterlist.c
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 88065fd55147..fc54a8eb3fe5 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2216,17 +2216,17 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>  	mutex_unlock(&obj->mm.lock);
>  }
>
> -static void i915_sg_trim(struct sg_table *orig_st)
> +static bool i915_sg_trim(struct sg_table *orig_st)
>  {
>  	struct sg_table new_st;
>  	struct scatterlist *sg, *new_sg;
>  	unsigned int i;
>
>  	if (orig_st->nents == orig_st->orig_nents)
> -		return;
> +		return false;
>
>  	if (sg_alloc_table(&new_st, orig_st->nents, GFP_KERNEL | __GFP_NOWARN))
> -		return;
> +		return false;
>
>  	new_sg = new_st.sgl;
>  	for_each_sg(orig_st->sgl, sg, orig_st->nents, i) {
> @@ -2239,6 +2239,7 @@ static void i915_sg_trim(struct sg_table *orig_st)
>  	sg_free_table(orig_st);
>
>  	*orig_st = new_st;
> +	return true;
>  }
>
>  static struct sg_table *
> @@ -4967,3 +4968,7 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
>  	sg = i915_gem_object_get_sg(obj, n, &offset);
>  	return sg_dma_address(sg) + (offset << PAGE_SHIFT);
>  }
> +
> +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> +#include "selftests/scatterlist.c"
> +#endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> index 69e97a2ba4a6..5f0bdda42ed8 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> @@ -9,3 +9,4 @@
>   * Tests are executed in order by igt/drv_selftest
>   */
>  selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
> +selftest(scatterlist, scatterlist_mock_selftests)
> diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
> new file mode 100644
> index 000000000000..fa5bd09c863f
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
> @@ -0,0 +1,331 @@
> +/*
> + * Copyright © 2016 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/prime_numbers.h>
> +#include <linux/random.h>
> +
> +#include "../i915_selftest.h"
> +
> +#define PFN_BIAS (1 << 10)
> +
> +struct pfn_table {
> +	struct sg_table st;
> +	unsigned long start, end;
> +};
> +
> +typedef unsigned int (*npages_fn_t)(unsigned long n,
> +				    unsigned long count,
> +				    struct rnd_state *rnd);
> +
> +static noinline int expect_pfn_sg(struct pfn_table *pt,
> +				  npages_fn_t npages_fn,
> +				  struct rnd_state *rnd,
> +				  const char *who,
> +				  unsigned long timeout)
> +{
> +	struct scatterlist *sg;
> +	unsigned long pfn, n;
> +
> +	pfn = pt->start;
> +	for_each_sg(pt->st.sgl, sg, pt->st.nents, n) {
> +		struct page *page = sg_page(sg);
> +		unsigned int npages = npages_fn(n, pt->st.nents, rnd);
> +
> +		if (page_to_pfn(page) != pfn) {
> +			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg)\n",
> +			       __func__, who, pfn, page_to_pfn(page));
> +			return -EINVAL;
> +		}
> +
> +		if (sg->length != npages * PAGE_SIZE) {
> +			pr_err("%s: %s copied wrong sg length, expected size %lu, found %u (using for_each_sg)\n",
> +			       __func__, who, npages * PAGE_SIZE, sg->length);
> +			return -EINVAL;
> +		}
> +
> +		if (igt_timeout(timeout, "%s timed out\n", who))
> +			return -EINTR;
> +
> +		pfn += npages;
> +	}
> +	if (pfn != pt->end) {
> +		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
> +		       __func__, who, pt->end, pfn);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static noinline int expect_pfn_sg_page_iter(struct pfn_table *pt,
> +					    const char *who,
> +					    unsigned long timeout)
> +{
> +	struct sg_page_iter sgiter;
> +	unsigned long pfn;
> +
> +	pfn = pt->start;
> +	for_each_sg_page(pt->st.sgl, &sgiter, pt->st.nents, 0) {
> +		struct page *page = sg_page_iter_page(&sgiter);
> +
> +		if (page != pfn_to_page(pfn)) {
> +			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sg_page)\n",
> +			       __func__, who, pfn, page_to_pfn(page));
> +			return -EINVAL;
> +		}
> +
> +		if (igt_timeout(timeout, "%s timed out\n", who))
> +			return -EINTR;
> +
> +		pfn++;
> +	}
> +	if (pfn != pt->end) {
> +		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
> +		       __func__, who, pt->end, pfn);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static noinline int expect_pfn_sgtiter(struct pfn_table *pt,
> +				       const char *who,
> +				       unsigned long timeout)
> +{
> +	struct sgt_iter sgt;
> +	struct page *page;
> +	unsigned long pfn;
> +
> +	pfn = pt->start;
> +	for_each_sgt_page(page, sgt, &pt->st) {
> +		if (page != pfn_to_page(pfn)) {
> +			pr_err("%s: %s left pages out of order, expected pfn %lu, found pfn %lu (using for_each_sgt_page)\n",
> +			       __func__, who, pfn, page_to_pfn(page));
> +			return -EINVAL;
> +		}
> +
> +		if (igt_timeout(timeout, "%s timed out\n", who))
> +			return -EINTR;
> +
> +		pfn++;
> +	}
> +	if (pfn != pt->end) {
> +		pr_err("%s: %s finished on wrong pfn, expected %lu, found %lu\n",
> +		       __func__, who, pt->end, pfn);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int expect_pfn_sgtable(struct pfn_table *pt,
> +			      npages_fn_t npages_fn,
> +			      struct rnd_state *rnd,
> +			      const char *who,
> +			      unsigned long timeout)
> +{
> +	int err;
> +
> +	err = expect_pfn_sg(pt, npages_fn, rnd, who, timeout);
> +	if (err)
> +		return err;
> +
> +	err = expect_pfn_sg_page_iter(pt, who, timeout);
> +	if (err)
> +		return err;
> +
> +	err = expect_pfn_sgtiter(pt, who, timeout);
> +	if (err)
> +		return err;
> +
> +	return 0;
> +}
> +
> +static unsigned int one(unsigned long n,
> +			unsigned long count,
> +			struct rnd_state *rnd)
> +{
> +	return 1;
> +}
> +
> +static unsigned int grow(unsigned long n,
> +			 unsigned long count,
> +			 struct rnd_state *rnd)
> +{
> +	return n + 1;
> +}
> +
> +static unsigned int shrink(unsigned long n,
> +			   unsigned long count,
> +			   struct rnd_state *rnd)
> +{
> +	return count - n;
> +}
> +
> +static unsigned int random(unsigned long n,
> +			   unsigned long count,
> +			   struct rnd_state *rnd)
> +{
> +	return 1 + (prandom_u32_state(rnd) % 1024);
> +}
> +
> +static bool alloc_table(struct pfn_table *pt,
> +			unsigned long count, unsigned long max,
> +			npages_fn_t npages_fn,
> +			struct rnd_state *rnd)
> +{
> +	struct scatterlist *sg;
> +	unsigned long n, pfn;
> +
> +	if (sg_alloc_table(&pt->st, max,
> +			   GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN))
> +		return false;
> +
> +	/* count should be less than 20 to prevent overflowing sg->length */
> +	GEM_BUG_ON(overflows_type(count * PAGE_SIZE, sg->length));
> +
> +	/* Construct a table where each scatterlist contains different number
> +	 * of entries. The idea is to check that we can iterate the individual
> +	 * pages from inside the coalesced lists.
> +	 */
> +	pt->start = PFN_BIAS;
> +	pfn = pt->start;
> +	sg = pt->st.sgl;
> +	for (n = 0; n < count; n++) {
> +		unsigned long npages = npages_fn(n, count, rnd);
> +
> +		if (n)
> +			sg = sg_next(sg);
> +		sg_set_page(sg, pfn_to_page(pfn), npages * PAGE_SIZE, 0);
> +
> +		GEM_BUG_ON(page_to_pfn(sg_page(sg)) != pfn);
> +		GEM_BUG_ON(sg->length != npages * PAGE_SIZE);
> +		GEM_BUG_ON(sg->offset != 0);
> +
> +		pfn += npages;
> +	}
> +	sg_mark_end(sg);
> +	pt->st.nents = n;
> +	pt->end = pfn;
> +
> +	return true;
> +}
> +
> +static const npages_fn_t npages_funcs[] = {
> +	one,
> +	grow,
> +	shrink,
> +	random,
> +	NULL,
> +};
> +
> +static int igt_sg_alloc(void *ignored)
> +{
> +	IGT_TIMEOUT(end_time);
> +	const unsigned long max_order = 20; /* approximating a 4GiB object */
> +	struct rnd_state prng;
> +	unsigned long prime;
> +
> +	for_each_prime_number(prime, max_order) {
> +		unsigned long size = BIT(prime);
> +		int offset;
> +
> +		for (offset = -1; offset <= 1; offset++) {
> +			unsigned long sz = size + offset;
> +			const npages_fn_t *npages;
> +			struct pfn_table pt;
> +			int err;
> +
> +			for (npages = npages_funcs; *npages; npages++) {
> +				prandom_seed_state(&prng,
> +						   i915_selftest.random_seed);
> +				if (!alloc_table(&pt, sz, sz, *npages, &prng))
> +					return 0; /* out of memory, give up */

We need to define at least some amount of testing which must pass 
otherwise it is just too weak in my opinion.

	return prime < TBD ? -Esomething : 0;

?

Regards,

Tvrtko

> +
> +				prandom_seed_state(&prng,
> +						   i915_selftest.random_seed);
> +				err = expect_pfn_sgtable(&pt, *npages, &prng,
> +							 "sg_alloc_table",
> +							 end_time);
> +				sg_free_table(&pt.st);
> +				if (err)
> +					return err;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int igt_sg_trim(void *ignored)
> +{
> +	IGT_TIMEOUT(end_time);
> +	const unsigned long max = PAGE_SIZE; /* not prime! */
> +	struct pfn_table pt;
> +	unsigned long prime;
> +
> +	for_each_prime_number(prime, max) {
> +		const npages_fn_t *npages;
> +		int err;
> +
> +		for (npages = npages_funcs; *npages; npages++) {
> +			struct rnd_state prng;
> +
> +			prandom_seed_state(&prng, i915_selftest.random_seed);
> +			if (!alloc_table(&pt, prime, max, *npages, &prng))
> +				return 0; /* out of memory, give up */
> +
> +			err = 0;
> +			if (i915_sg_trim(&pt.st)) {
> +				if (pt.st.orig_nents != prime ||
> +				    pt.st.nents != prime) {
> +					pr_err("i915_sg_trim failed (nents %u, orig_nents %u), expected %lu\n",
> +					       pt.st.nents, pt.st.orig_nents, prime);
> +					err = -EINVAL;
> +				} else {
> +					prandom_seed_state(&prng,
> +							   i915_selftest.random_seed);
> +					err = expect_pfn_sgtable(&pt,
> +								 *npages, &prng,
> +								 "i915_sg_trim",
> +								 end_time);
> +				}
> +			}
> +			sg_free_table(&pt.st);
> +			if (err)
> +				return err;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +int scatterlist_mock_selftests(void)
> +{
> +	static const struct i915_subtest tests[] = {
> +		SUBTEST(igt_sg_alloc),
> +		SUBTEST(igt_sg_trim),
> +	};
> +
> +	return i915_subtests(tests, NULL);
> +}
>

* Re: [PATCH 05/46] drm/i915: Provide a hook for selftests
  2017-02-10 10:19   ` Tvrtko Ursulin
@ 2017-02-10 10:36     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-10 10:36 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Fri, Feb 10, 2017 at 10:19:01AM +0000, Tvrtko Ursulin wrote:
> 
> On 02/02/2017 09:08, Chris Wilson wrote:
> >+static inline u32 i915_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
> >+{
> >+	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
> 
> What is ep_ro?

"right open interval endpoint"

And now I wish I didn't.
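
In other words the helper maps the full 32-bit prng output into the
half-open interval [0, ep_ro) by keeping the upper half of a 64-bit
product, essentially the same multiply-and-shift reduction as
reciprocal_scale(). A minimal sketch, assuming a seeded struct rnd_state
*state and ep_ro == 100:

	u32 r = prandom_u32_state(state);    /* uniform over 0 .. U32_MAX */
	u32 v = upper_32_bits((u64)r * 100); /* lands in 0 .. 99, never 100 */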

> >+void i915_random_reorder(unsigned int *order, unsigned int count,
> >+			 struct rnd_state *state)
> >+{
> >+	unsigned int i, j;
> >+
> >+	for (i = 0; i < count; ++i) {
> >+		BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
> 
> ? :)

Future proofing? A statement of the assumption that the prng can return
a value in the right range.

> >+static int __run_selftests(const char *name,
> >+			   struct selftest *st,
> >+			   unsigned int count,
> >+			   void *data)
> >+{
> >+	int err = 0;
> >+
> >+	while (!i915_selftest.random_seed)
> >+		i915_selftest.random_seed = get_random_int();
> >+
> >+	i915_selftest.timeout_jiffies =
> >+		i915_selftest.timeout_ms ?
> >+		msecs_to_jiffies_timeout(i915_selftest.timeout_ms) :
> >+		MAX_SCHEDULE_TIMEOUT;
> >+
> >+	set_default_test_all(st, count);
> >+
> >+	pr_info(DRIVER_NAME ": Performing %s selftests with st_random_seed=0x%x st_timeout=%u\n",
> >+		name, i915_selftest.random_seed, i915_selftest.timeout_ms);
> >+
> >+	/* Tests are listed in order in i915_*_selftests.h */
> >+	for (; count--; st++) {
> >+		if (!st->enabled)
> >+			continue;
> >+
> >+		cond_resched();
> >+		if (signal_pending(current))
> >+			return -EINTR;
> >+
> >+		pr_debug(DRIVER_NAME ": Running %s\n", st->name);
> >+		if (data)
> >+			err = st->live(data);
> >+		else
> >+			err = st->mock();
> >+		if (err == -EINTR && !signal_pending(current))
> 
> What is the expected use for this?

ctrl-c escaping of tests is self-explanatory. However, I adopted the
convention of using EINTR to mean both escape due to timeout and escape
due to user interruption. That gives the convenience of only having to
check one errno inside the tests; but here at the user interface, I did
want to distinguish between a test running to its timeout (which is
normal) and the user interrupting, which is reported back as EINTR.
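
Spelled out, the convention looks something like this (a sketch of the
intent, not patch code):

	/* inside a subtest that declared IGT_TIMEOUT(end_time): both
	 * "ran out of time" and "user hit ctrl-c" bail out with -EINTR,
	 * so the test body only has one magic errno to handle
	 */
	if (igt_timeout(end_time, "%s timed out\n", __func__))
		return -EINTR;

	/* ...whereas __run_selftests() only propagates -EINTR when a
	 * signal is actually pending; a plain timeout is folded back
	 * into success
	 */
	if (err == -EINTR && !signal_pending(current))
		err = 0;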

> >+			err = 0;
> >+		if (err)
> >+			break;
> >+	}
> >+module_param_named(st_random_seed, i915_selftest.random_seed, uint, 0400);
> >+module_param_named(st_timeout, i915_selftest.timeout_ms, uint, 0400);
> 
> I think selftest_ prefix would be better to match the ones below and
> just in general.
> 
> >+
> >+module_param_named_unsafe(mock_selftests, i915_selftest.mock, int, 0400);
> >+MODULE_PARM_DESC(mock_selftests, "Run selftests before loading, using mock hardware (0:disabled [default], 1:run tests then load driver, -1:run tests then exit module)");
> >+
> >+module_param_named_unsafe(live_selftests, i915_selftest.live, int, 0400);
> >+MODULE_PARM_DESC(live_selftests, "Run selftests after driver initialisation on the live system (0:disabled [default], 1:run tests then continue, -1:run tests then exit module)");
> >diff --git a/tools/testing/selftests/drivers/gpu/i915.sh b/tools/testing/selftests/drivers/gpu/i915.sh
> >index d407f0fa1e3a..c06d6e8a8dcc 100755
> >--- a/tools/testing/selftests/drivers/gpu/i915.sh
> >+++ b/tools/testing/selftests/drivers/gpu/i915.sh
> >@@ -7,6 +7,7 @@ if ! /sbin/modprobe -q -r i915; then
> > fi
> >
> > if /sbin/modprobe -q i915 mock_selftests=-1; then
> >+	/sbin/modprobe -q -r i915
> > 	echo "drivers/gpu/i915: ok"
> > else
> > 	echo "drivers/gpu/i915: [FAIL]"
> >
> 
> No serious complaints. Bikeshed or not (but preferred for modparams):

Hmm. They get very long, very quickly. I print them out on each run (so
that we can just copy'n'paste if we need to replay an exact set of
parameters) - they just look ugly. :|

I'll throw it to the bikeshed committee, if people are in favour I'll
switch.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation
  2017-02-10 10:24   ` Tvrtko Ursulin
@ 2017-02-10 10:43     ` Chris Wilson
  2017-02-10 12:01       ` Tvrtko Ursulin
  0 siblings, 1 reply; 81+ messages in thread
From: Chris Wilson @ 2017-02-10 10:43 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Fri, Feb 10, 2017 at 10:24:41AM +0000, Tvrtko Ursulin wrote:
> >+static int igt_sg_alloc(void *ignored)
> >+{
> >+	IGT_TIMEOUT(end_time);
> >+	const unsigned long max_order = 20; /* approximating a 4GiB object */
> >+	struct rnd_state prng;
> >+	unsigned long prime;
> >+
> >+	for_each_prime_number(prime, max_order) {
> >+		unsigned long size = BIT(prime);
> >+		int offset;
> >+
> >+		for (offset = -1; offset <= 1; offset++) {
> >+			unsigned long sz = size + offset;
> >+			const npages_fn_t *npages;
> >+			struct pfn_table pt;
> >+			int err;
> >+
> >+			for (npages = npages_funcs; *npages; npages++) {
> >+				prandom_seed_state(&prng,
> >+						   i915_selftest.random_seed);
> >+				if (!alloc_table(&pt, sz, sz, *npages, &prng))
> >+					return 0; /* out of memory, give up */
> 
> We need to define at least some amount of testing which must pass
> otherwise it is just too weak in my opinion.
> 
> 	return prime < TBD ? -Esomething : 0;
> 
> ?

Following our last discussion, it already requires a minimum of one prime (2)
before an allocation failure is tolerated.

static int igt_sg_alloc(void *ignored)
{
        IGT_TIMEOUT(end_time);
        const unsigned long max_order = 20; /* approximating a 4GiB object */
        struct rnd_state prng;
        unsigned long prime;
        int alloc_error = -ENOMEM;

        for_each_prime_number(prime, max_order) {
                unsigned long size = BIT(prime);
                int offset;

                for (offset = -1; offset <= 1; offset++) {
                        unsigned long sz = size + offset;
                        const npages_fn_t *npages;
                        struct pfn_table pt;
                        int err;

                        for (npages = npages_funcs; *npages; npages++) {
                                prandom_seed_state(&prng,
                                                   i915_selftest.random_seed);
                                if (!alloc_table(&pt, sz, sz, *npages, &prng))
                                        return alloc_error;

                                prandom_seed_state(&prng,
                                                   i915_selftest.random_seed);
                                err = expect_pfn_sgtable(&pt, *npages, &prng,
                                                         "sg_alloc_table",
                                                         end_time);
                                sg_free_table(&pt.st);
                                if (err)
                                        return err;
                        }
                }

                alloc_error = 0;
        }

        return 0;
}

Something like

/* Make sure we test at least one continuation before accepting oom */
if (size > MAX_SG_PER_PAGE) /* can't remember what the define is! */
	alloc_error = 0;

?

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation
  2017-02-10 10:43     ` Chris Wilson
@ 2017-02-10 12:01       ` Tvrtko Ursulin
  0 siblings, 0 replies; 81+ messages in thread
From: Tvrtko Ursulin @ 2017-02-10 12:01 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 10/02/2017 10:43, Chris Wilson wrote:
> On Fri, Feb 10, 2017 at 10:24:41AM +0000, Tvrtko Ursulin wrote:
>>> +static int igt_sg_alloc(void *ignored)
>>> +{
>>> +	IGT_TIMEOUT(end_time);
>>> +	const unsigned long max_order = 20; /* approximating a 4GiB object */
>>> +	struct rnd_state prng;
>>> +	unsigned long prime;
>>> +
>>> +	for_each_prime_number(prime, max_order) {
>>> +		unsigned long size = BIT(prime);
>>> +		int offset;
>>> +
>>> +		for (offset = -1; offset <= 1; offset++) {
>>> +			unsigned long sz = size + offset;
>>> +			const npages_fn_t *npages;
>>> +			struct pfn_table pt;
>>> +			int err;
>>> +
>>> +			for (npages = npages_funcs; *npages; npages++) {
>>> +				prandom_seed_state(&prng,
>>> +						   i915_selftest.random_seed);
>>> +				if (!alloc_table(&pt, sz, sz, *npages, &prng))
>>> +					return 0; /* out of memory, give up */
>>
>> We need to define at least some amount of testing which must pass
>> otherwise it is just too weak in my opinion.
>>
>> 	return prime < TBD ? -Esomething : 0;
>>
>> ?
>
> Following our last discussion, it does a minimum of one prime [2].
>
> static int igt_sg_alloc(void *ignored)
> {
>         IGT_TIMEOUT(end_time);
>         const unsigned long max_order = 20; /* approximating a 4GiB object */
>         struct rnd_state prng;
>         unsigned long prime;
>         int alloc_error = -ENOMEM;
>
>         for_each_prime_number(prime, max_order) {
>                 unsigned long size = BIT(prime);
>                 int offset;
>
>                 for (offset = -1; offset <= 1; offset++) {
>                         unsigned long sz = size + offset;
>                         const npages_fn_t *npages;
>                         struct pfn_table pt;
>                         int err;
>
>                         for (npages = npages_funcs; *npages; npages++) {
>                                 prandom_seed_state(&prng,
>                                                    i915_selftest.random_seed);
>                                 if (!alloc_table(&pt, sz, sz, *npages, &prng))
>                                         return alloc_error;
>
>                                 prandom_seed_state(&prng,
>                                                    i915_selftest.random_seed);
>                                 err = expect_pfn_sgtable(&pt, *npages, &prng,
>                                                          "sg_alloc_table",
>                                                          end_time);
>                                 sg_free_table(&pt.st);
>                                 if (err)
>                                         return err;
>                         }
>                 }
>
>                 alloc_error = 0;
>         }
>
>         return 0;
> }
>
> Something like
>
> /* Make sure we test at least one continuation before accepting oom */
> if (size > MAX_SG_PER_PAGE) /* can't remember what the define is! */
> 	alloc_error = 0;
>
> ?

SG_MAX_SINGLE_ALLOC. Sounds good. r-b on that.
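
For reference, the agreed check would then read roughly as below (a sketch
stitching the two messages together, exact placement in the loop left to
the patch):

	/* Make sure we test at least one continuation before accepting oom */
	if (size > SG_MAX_SINGLE_ALLOC)
		alloc_error = 0;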

Regards,

Tvrtko


* Re: [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-02  9:18 ` [PATCH igt] intel-ci: Add all driver selftests to BAT Chris Wilson
  2017-02-02 13:30   ` Maarten Lankhorst
@ 2017-02-17 11:50   ` Petri Latvala
  2017-02-17 11:57     ` Chris Wilson
  1 sibling, 1 reply; 81+ messages in thread
From: Petri Latvala @ 2017-02-17 11:50 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

NAK on these with the current drv_selftest code.

The subtest enumeration on IGT's side needs to contain these subtests
even if the running kernel doesn't have selftests, or these particular
subtests.


-- 
Petri Latvala


On Thu, Feb 02, 2017 at 09:18:00AM +0000, Chris Wilson wrote:
> These are meant to be fast and sensitive to new (and old) bugs...
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Petri Latvala <petri.latvala@intel.com>
> ---
>  tests/intel-ci/fast-feedback.testlist | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
> index 828bd3ff..a0c3f848 100644
> --- a/tests/intel-ci/fast-feedback.testlist
> +++ b/tests/intel-ci/fast-feedback.testlist
> @@ -249,4 +249,23 @@ igt@drv_module_reload@basic-reload
>  igt@drv_module_reload@basic-no-display
>  igt@drv_module_reload@basic-reload-inject
>  igt@drv_module_reload@basic-reload-final
> +igt@drv_selftest@mock_sanitycheck
> +igt@drv_selftest@mock_scatterlist
> +igt@drv_selftest@mock_uncore
> +igt@drv_selftest@mock_breadcrumbs
> +igt@drv_selftest@mock_requests
> +igt@drv_selftest@mock_objects
> +igt@drv_selftest@mock_dmabuf
> +igt@drv_selftest@mock_vma
> +igt@drv_selftest@mock_evict
> +igt@drv_selftest@mock_gtt
> +igt@drv_selftest@live_sanitycheck
> +igt@drv_selftest@live_uncore
> +igt@drv_selftest@live_requests
> +igt@drv_selftest@live_object
> +igt@drv_selftest@live_dmabuf
> +igt@drv_selftest@live_coherency
> +igt@drv_selftest@live_gtt
> +igt@drv_selftest@live_context
> +igt@drv_selftest@live_hangcheck
>  igt@gvt_basic@invalid-placeholder-test
> -- 
> 2.11.0
> 

* Re: [PATCH igt] intel-ci: Add all driver selftests to BAT
  2017-02-17 11:50   ` Petri Latvala
@ 2017-02-17 11:57     ` Chris Wilson
  0 siblings, 0 replies; 81+ messages in thread
From: Chris Wilson @ 2017-02-17 11:57 UTC (permalink / raw)
  To: Petri Latvala; +Cc: intel-gfx

On Fri, Feb 17, 2017 at 01:50:06PM +0200, Petri Latvala wrote:
> NAK on these with the current drv_selftest code.

They are just slightly out of date.
 
> The subtest enumeration on IGT's side needs to contain these subtests
> even if the running kernel doesn't have selftests, or these particular
> subtests.

I thought you would have said, no way can we have these in BAT as the
kernel likes to explode! They catch some rather critical bugs...
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

Thread overview: 81+ messages
2017-02-02  9:08 Moah selftests Chris Wilson
2017-02-02  9:08 ` [PATCH 01/46] drm: Provide a driver hook for drm_dev_release() Chris Wilson
2017-02-02  9:24   ` Laurent Pinchart
2017-02-02  9:36   ` [PATCH v6] " Chris Wilson
2017-02-02  9:44     ` Daniel Vetter
2017-02-02  9:08 ` [PATCH 02/46] drm/i915: Split device release from unload Chris Wilson
2017-02-08 13:41   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 03/46] drm/i915: Unbind any residual objects/vma from the Global GTT on shutdown Chris Wilson
2017-02-08 13:36   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 04/46] drm/i915: Flush the freed object queue on device release Chris Wilson
2017-02-08 13:38   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 05/46] drm/i915: Provide a hook for selftests Chris Wilson
2017-02-02  9:11   ` Chris Wilson
2017-02-10 10:19   ` Tvrtko Ursulin
2017-02-10 10:36     ` Chris Wilson
2017-02-02  9:08 ` [PATCH 06/46] drm/i915: Add some selftests for sg_table manipulation Chris Wilson
2017-02-10 10:24   ` Tvrtko Ursulin
2017-02-10 10:43     ` Chris Wilson
2017-02-10 12:01       ` Tvrtko Ursulin
2017-02-02  9:08 ` [PATCH 07/46] drm/i915: Add unit tests for the breadcrumb rbtree, insert/remove Chris Wilson
2017-02-02  9:08 ` [PATCH 08/46] drm/i915: Add unit tests for the breadcrumb rbtree, completion Chris Wilson
2017-02-02  9:08 ` [PATCH 09/46] drm/i915: Add unit tests for the breadcrumb rbtree, wakeups Chris Wilson
2017-02-02 12:49   ` Tvrtko Ursulin
2017-02-02 13:02     ` Chris Wilson
2017-02-02  9:08 ` [PATCH 10/46] drm/i915: Mock the GEM device for self-testing Chris Wilson
2017-02-02  9:08 ` [PATCH 11/46] drm/i915: Mock a GGTT " Chris Wilson
2017-02-02  9:08 ` [PATCH 12/46] drm/i915: Mock infrastructure for request emission Chris Wilson
2017-02-02  9:08 ` [PATCH 13/46] drm/i915: Create a fake object for testing huge allocations Chris Wilson
2017-02-02  9:08 ` [PATCH 14/46] drm/i915: Add selftests for i915_gem_request Chris Wilson
2017-02-02  9:08 ` [PATCH 15/46] drm/i915: Add a simple request selftest for waiting Chris Wilson
2017-02-02  9:08 ` [PATCH 16/46] drm/i915: Add a simple fence selftest to i915_gem_request Chris Wilson
2017-02-02  9:08 ` [PATCH 17/46] drm/i915: Simple selftest to exercise live requests Chris Wilson
2017-02-02  9:08 ` [PATCH 18/46] drm/i915: Test simultaneously submitting requests to all engines Chris Wilson
2017-02-02  9:08 ` [PATCH 19/46] drm/i915: Test request ordering between engines Chris Wilson
2017-02-09 10:20   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 20/46] drm/i915: Live testing of empty requests Chris Wilson
2017-02-09 10:30   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 21/46] drm/i915: Add selftests for object allocation, phys Chris Wilson
2017-02-02 13:10   ` Matthew Auld
2017-02-02 13:20     ` Chris Wilson
2017-02-02  9:08 ` [PATCH 22/46] drm/i915: Add a live seftest for GEM objects Chris Wilson
2017-02-02  9:08 ` [PATCH 23/46] drm/i915: Test partial mappings Chris Wilson
2017-02-02  9:08 ` [PATCH 24/46] drm/i915: Test exhaustion of the mmap space Chris Wilson
2017-02-02  9:08 ` [PATCH 25/46] drm/i915: Test coherency of and barriers between cache domains Chris Wilson
2017-02-02  9:08 ` [PATCH 26/46] drm/i915: Move uncore selfchecks to live selftest infrastructure Chris Wilson
2017-02-02  9:08 ` [PATCH 27/46] drm/i915: Test all fw tables during mock selftests Chris Wilson
2017-02-02  9:08 ` [PATCH 28/46] drm/i915: Sanity check all registers for matching fw domains Chris Wilson
2017-02-02  9:08 ` [PATCH 29/46] drm/i915: Add some mock tests for dmabuf interop Chris Wilson
2017-02-02  9:08 ` [PATCH 30/46] drm/i915: Add a live dmabuf selftest Chris Wilson
2017-02-09 10:59   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 31/46] drm/i915: Add initial selftests for i915_gem_gtt Chris Wilson
2017-02-02  9:08 ` [PATCH 32/46] drm/i915: Exercise filling the top/bottom portions of the ppgtt Chris Wilson
2017-02-09 10:49   ` Joonas Lahtinen
2017-02-02  9:08 ` [PATCH 33/46] drm/i915: Exercise filling the top/bottom portions of the global GTT Chris Wilson
2017-02-02  9:08 ` [PATCH 34/46] drm/i915: Fill different pages of the GTT Chris Wilson
2017-02-02  9:08 ` [PATCH 35/46] drm/i915: Exercise filling and removing random ranges from the live GTT Chris Wilson
2017-02-02  9:08 ` [PATCH 36/46] drm/i915: Test creation of VMA Chris Wilson
2017-02-02  9:08 ` [PATCH 37/46] drm/i915: Exercise i915_vma_pin/i915_vma_insert Chris Wilson
2017-02-02  9:08 ` [PATCH 38/46] drm/i915: Verify page layout for rotated VMA Chris Wilson
2017-02-02 13:01   ` Tvrtko Ursulin
2017-02-02  9:08 ` [PATCH 39/46] drm/i915: Test creation of partial VMA Chris Wilson
2017-02-02  9:08 ` [PATCH 40/46] drm/i915: Live testing for context execution Chris Wilson
2017-02-02  9:09 ` [PATCH 41/46] drm/i915: Initial selftests for exercising eviction Chris Wilson
2017-02-02  9:09 ` [PATCH 42/46] drm/i915: Add mock exercise for i915_gem_gtt_reserve Chris Wilson
2017-02-02  9:09 ` [PATCH 43/46] drm/i915: Add mock exercise for i915_gem_gtt_insert Chris Wilson
2017-02-02  9:09 ` [PATCH 44/46] drm/i915: Add mock tests for GTT/VMA handling Chris Wilson
2017-02-08 12:12   ` Matthew Auld
2017-02-09 10:53   ` Joonas Lahtinen
2017-02-02  9:09 ` [PATCH 45/46] drm/i915: Exercise manipulate of single pages in the GGTT Chris Wilson
2017-02-08 12:25   ` Matthew Auld
2017-02-08 12:33     ` Chris Wilson
2017-02-02  9:09 ` [PATCH 46/46] drm/i915: Add initial selftests for hang detection and resets Chris Wilson
2017-02-02 13:28   ` Mika Kuoppala
2017-02-02  9:18 ` [PATCH igt] intel-ci: Add all driver selftests to BAT Chris Wilson
2017-02-02 13:30   ` Maarten Lankhorst
2017-02-02 13:44     ` Chris Wilson
2017-02-02 14:11       ` Maarten Lankhorst
2017-02-02 15:42       ` Saarinen, Jani
2017-02-17 11:50   ` Petri Latvala
2017-02-17 11:57     ` Chris Wilson
2017-02-02 11:32 ` ✗ Fi.CI.BAT: failure for series starting with [v6] drm: Provide a driver hook for drm_dev_release() (rev2) Patchwork
