* [PATCH i-g-t v4 0/3] Unify slow/combinatorial test handling
@ 2016-02-11 11:09 David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 1/3] tests/gem_concurrent_blit: rename gem_concurrent_all David Weinehall
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: David Weinehall @ 2016-02-11 11:09 UTC (permalink / raw)
To: intel-gfx
Until now we've had no unified way to handle slow/combinatorial tests.
Most of the time we don't want to run them, so skipping them should
remain the default; but when we do want to run such tests, each test
has so far handled this in its own way.
This patch adds an --all command line option to igt_core, changes
gem_concurrent_blit and kms_frontbuffer_tracking to use this instead of
their own methods, and removes gem_concurrent_all in the process, since
it's now unnecessary.
Test cases that have subtests that should not be run by default should
use the igt_subtest_flags() / igt_subtest_flags_f() functions and
pass the subtest types as part of the flags parameter.
v2: Incorporate various suggestions from reviewers.
v3: Rewrite to provide a generic mechanism for categorising
the subtests
v4: Refreshed against a more recent version of i-g-t
David Weinehall (3):
tests/gem_concurrent_blit: rename gem_concurrent_all
lib/igt_core: Unify handling of slow/combinatorial tests
tests/gem_concurrent_all: Remove gem_concurrent_all.c
lib/igt_core.c | 43 +-
lib/igt_core.h | 42 ++
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1540 -------------------------------------
tests/gem_concurrent_blit.c | 1548 +++++++++++++++++++++++++++++++++++++-
tests/kms_frontbuffer_tracking.c | 186 +++--
6 files changed, 1725 insertions(+), 1635 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
--
2.7.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [PATCH i-g-t v4 1/3] tests/gem_concurrent_blit: rename gem_concurrent_all
2016-02-11 11:09 [PATCH i-g-t v4 0/3] Unify slow/combinatorial test handling David Weinehall
@ 2016-02-11 11:09 ` David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 3/3] tests/gem_concurrent_all: Remove gem_concurrent_all.c David Weinehall
2 siblings, 0 replies; 8+ messages in thread
From: David Weinehall @ 2016-02-11 11:09 UTC (permalink / raw)
To: intel-gfx
In this changeset we both rename gem_concurrent_all over
gem_concurrent_blit and change gem_concurrent_blit. To make
this easier to follow, we do the rename first.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
tests/gem_concurrent_blit.c | 1548 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 1540 insertions(+), 8 deletions(-)
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 513de4a1b719..9b7ef8700e31 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -1,8 +1,1540 @@
-/* This test is just a duplicate of gem_concurrent_all. */
-/* However the executeable will be gem_concurrent_blit. */
-/* The main function examines argv[0] and, in the case */
-/* of gem_concurent_blit runs only a subset of the */
-/* available subtests. This avoids the use of */
-/* non-standard command line parameters which can cause */
-/* problems for automated testing */
-#include "gem_concurrent_all.c"
+/*
+ * Copyright © 2009,2012,2013 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Eric Anholt <eric@anholt.net>
+ * Chris Wilson <chris@chris-wilson.co.uk>
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ */
+
+/** @file gem_concurrent.c
+ *
+ * This is a test of pread/pwrite/mmap behavior when writing to active
+ * buffers.
+ *
+ * Based on gem_gtt_concurrent_blt.
+ */
+
+#include "igt.h"
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/stat.h>
+#include <sys/time.h>
+#include <sys/wait.h>
+
+#include <drm.h>
+
+#include "intel_bufmgr.h"
+
+IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
+ " buffers.");
+
+#define LOCAL_I915_GEM_USERPTR 0x33
+#define LOCAL_IOCTL_I915_GEM_USERPTR DRM_IOWR (DRM_COMMAND_BASE + LOCAL_I915_GEM_USERPTR, struct local_i915_gem_userptr)
+struct local_i915_gem_userptr {
+ uint64_t user_ptr;
+ uint64_t user_size;
+ uint32_t flags;
+ uint32_t handle;
+};
+
+int fd, devid, gen;
+struct intel_batchbuffer *batch;
+int all;
+int pass;
+
+struct buffers {
+ const struct access_mode *mode;
+ drm_intel_bufmgr *bufmgr;
+ drm_intel_bo **src, **dst;
+ drm_intel_bo *snoop, *spare;
+ uint32_t *tmp;
+ int width, height, size;
+ int count;
+};
+
+#define MIN_BUFFERS 3
+
+static void blt_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src);
+
+static void
+nop_release_bo(drm_intel_bo *bo)
+{
+ drm_intel_bo_unreference(bo);
+}
+
+static void
+prw_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ for (int i = 0; i < b->size; i++)
+ b->tmp[i] = val;
+ drm_intel_bo_subdata(bo, 0, 4*b->size, b->tmp);
+}
+
+static void
+prw_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ uint32_t *vaddr;
+
+ vaddr = b->tmp;
+ do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*b->size, vaddr));
+ for (int i = 0; i < b->size; i++)
+ igt_assert_eq_u32(vaddr[i], val);
+}
+
+#define pixel(y, width) ((y)*(width) + (((y) + pass)%(width)))
+
+static void
+partial_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ for (int y = 0; y < b->height; y++)
+ do_or_die(drm_intel_bo_subdata(bo, 4*pixel(y, b->width), 4, &val));
+}
+
+static void
+partial_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ for (int y = 0; y < b->height; y++) {
+ uint32_t buf;
+ do_or_die(drm_intel_bo_get_subdata(bo, 4*pixel(y, b->width), 4, &buf));
+ igt_assert_eq_u32(buf, val);
+ }
+}
+
+static drm_intel_bo *
+create_normal_bo(drm_intel_bufmgr *bufmgr, uint64_t size)
+{
+ drm_intel_bo *bo;
+
+ bo = drm_intel_bo_alloc(bufmgr, "bo", size, 0);
+ igt_assert(bo);
+
+ return bo;
+}
+
+static bool can_create_normal(void)
+{
+ return true;
+}
+
+static drm_intel_bo *
+create_private_bo(drm_intel_bufmgr *bufmgr, uint64_t size)
+{
+ drm_intel_bo *bo;
+ uint32_t handle;
+
+ /* XXX gem_create_with_flags(fd, size, I915_CREATE_PRIVATE); */
+
+ handle = gem_create(fd, size);
+ bo = gem_handle_to_libdrm_bo(bufmgr, fd, "stolen", handle);
+ gem_close(fd, handle);
+
+ return bo;
+}
+
+static bool can_create_private(void)
+{
+ return false;
+}
+
+static drm_intel_bo *
+create_stolen_bo(drm_intel_bufmgr *bufmgr, uint64_t size)
+{
+ drm_intel_bo *bo;
+ uint32_t handle;
+
+ /* XXX gem_create_with_flags(fd, size, I915_CREATE_STOLEN); */
+
+ handle = gem_create(fd, size);
+ bo = gem_handle_to_libdrm_bo(bufmgr, fd, "stolen", handle);
+ gem_close(fd, handle);
+
+ return bo;
+}
+
+static bool can_create_stolen(void)
+{
+ /* XXX check num_buffers against available stolen */
+ return false;
+}
+
+static drm_intel_bo *
+(*create_func)(drm_intel_bufmgr *bufmgr, uint64_t size);
+
+static bool create_cpu_require(void)
+{
+ return create_func != create_stolen_bo;
+}
+
+static drm_intel_bo *
+unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return create_func(bufmgr, (uint64_t)4*width*height);
+}
+
+static bool create_snoop_require(void)
+{
+ if (!create_cpu_require())
+ return false;
+
+ return !gem_has_llc(fd);
+}
+
+static drm_intel_bo *
+snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
+ drm_intel_bo_disable_reuse(bo);
+
+ return bo;
+}
+
+static bool create_userptr_require(void)
+{
+ static int found = -1;
+ if (found < 0) {
+ struct drm_i915_gem_userptr arg;
+
+ found = 0;
+
+ memset(&arg, 0, sizeof(arg));
+ arg.user_ptr = -4096ULL;
+ arg.user_size = 8192;
+ errno = 0;
+ drmIoctl(fd, LOCAL_IOCTL_I915_GEM_USERPTR, &arg);
+ if (errno == EFAULT) {
+ igt_assert(posix_memalign((void **)&arg.user_ptr,
+ 4096, arg.user_size) == 0);
+ found = drmIoctl(fd,
+ LOCAL_IOCTL_I915_GEM_USERPTR,
+ &arg) == 0;
+ free((void *)(uintptr_t)arg.user_ptr);
+ }
+
+ }
+ return found;
+}
+
+static drm_intel_bo *
+userptr_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ struct local_i915_gem_userptr userptr;
+ drm_intel_bo *bo;
+ void *ptr;
+
+ memset(&userptr, 0, sizeof(userptr));
+ userptr.user_size = width * height * 4;
+ userptr.user_size = (userptr.user_size + 4095) & -4096;
+
+ ptr = mmap(NULL, userptr.user_size,
+ PROT_READ | PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0);
+ igt_assert(ptr != (void *)-1);
+ userptr.user_ptr = (uintptr_t)ptr;
+
+ do_or_die(drmIoctl(fd, LOCAL_IOCTL_I915_GEM_USERPTR, &userptr));
+ bo = gem_handle_to_libdrm_bo(bufmgr, fd, "userptr", userptr.handle);
+ bo->virtual = (void *)(uintptr_t)userptr.user_ptr;
+ gem_close(fd, userptr.handle);
+
+ return bo;
+}
+
+static void
+userptr_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ int size = b->size;
+ uint32_t *vaddr = bo->virtual;
+
+ gem_set_domain(fd, bo->handle,
+ I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
+ while (size--)
+ *vaddr++ = val;
+}
+
+static void
+userptr_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ int size = b->size;
+ uint32_t *vaddr = bo->virtual;
+
+ gem_set_domain(fd, bo->handle,
+ I915_GEM_DOMAIN_CPU, 0);
+ while (size--)
+ igt_assert_eq_u32(*vaddr++, val);
+}
+
+static void
+userptr_release_bo(drm_intel_bo *bo)
+{
+ munmap(bo->virtual, bo->size);
+ bo->virtual = NULL;
+
+ drm_intel_bo_unreference(bo);
+}
+
+static void
+gtt_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ uint32_t *vaddr = bo->virtual;
+ int size = b->size;
+
+ drm_intel_gem_bo_start_gtt_access(bo, true);
+ while (size--)
+ *vaddr++ = val;
+}
+
+static void
+gtt_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ uint32_t *vaddr = bo->virtual;
+
+ /* GTT access is slow. So we just compare a few points */
+ drm_intel_gem_bo_start_gtt_access(bo, false);
+ for (int y = 0; y < b->height; y++)
+ igt_assert_eq_u32(vaddr[pixel(y, b->width)], val);
+}
+
+static drm_intel_bo *
+map_bo(drm_intel_bo *bo)
+{
+ /* gtt map doesn't have a write parameter, so just keep the mapping
+ * around (to avoid the set_domain with the gtt write domain set) and
+ * manually tell the kernel when we start access the gtt. */
+ do_or_die(drm_intel_gem_bo_map_gtt(bo));
+
+ return bo;
+}
+
+static drm_intel_bo *
+tile_bo(drm_intel_bo *bo, int width)
+{
+ uint32_t tiling = I915_TILING_X;
+ uint32_t stride = width * 4;
+
+ do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
+
+ return bo;
+}
+
+static drm_intel_bo *
+gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return map_bo(unmapped_create_bo(bufmgr, width, height));
+}
+
+static drm_intel_bo *
+gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gtt_create_bo(bufmgr, width, height), width);
+}
+
+static drm_intel_bo *
+wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ gem_require_mmap_wc(fd);
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
+ return bo;
+}
+
+static void
+wc_release_bo(drm_intel_bo *bo)
+{
+ munmap(bo->virtual, bo->size);
+ bo->virtual = NULL;
+
+ nop_release_bo(bo);
+}
+
+static drm_intel_bo *
+gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return unmapped_create_bo(bufmgr, width, height);
+}
+
+static drm_intel_bo *
+gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gpu_create_bo(bufmgr, width, height), width);
+}
+
+static void
+cpu_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ int size = b->size;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, true));
+ vaddr = bo->virtual;
+ while (size--)
+ *vaddr++ = val;
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+cpu_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ int size = b->size;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, false));
+ vaddr = bo->virtual;
+ while (size--)
+ igt_assert_eq_u32(*vaddr++, val);
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+gpu_set_bo(struct buffers *buffers, drm_intel_bo *bo, uint32_t val)
+{
+ struct drm_i915_gem_relocation_entry reloc[1];
+ struct drm_i915_gem_exec_object2 gem_exec[2];
+ struct drm_i915_gem_execbuffer2 execbuf;
+ uint32_t buf[10], *b;
+ uint32_t tiling, swizzle;
+
+ drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
+
+ memset(reloc, 0, sizeof(reloc));
+ memset(gem_exec, 0, sizeof(gem_exec));
+ memset(&execbuf, 0, sizeof(execbuf));
+
+ b = buf;
+ *b++ = XY_COLOR_BLT_CMD_NOLEN |
+ ((gen >= 8) ? 5 : 4) |
+ COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
+ if (gen >= 4 && tiling) {
+ b[-1] |= XY_COLOR_BLT_TILED;
+ *b = buffers->width;
+ } else
+ *b = buffers->width << 2;
+ *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
+ *b++ = 0;
+ *b++ = buffers->height << 16 | buffers->width;
+ reloc[0].offset = (b - buf) * sizeof(uint32_t);
+ reloc[0].target_handle = bo->handle;
+ reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
+ reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
+ *b++ = 0;
+ if (gen >= 8)
+ *b++ = 0;
+ *b++ = val;
+ *b++ = MI_BATCH_BUFFER_END;
+ if ((b - buf) & 1)
+ *b++ = 0;
+
+ gem_exec[0].handle = bo->handle;
+ gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
+
+ gem_exec[1].handle = gem_create(fd, 4096);
+ gem_exec[1].relocation_count = 1;
+ gem_exec[1].relocs_ptr = (uintptr_t)reloc;
+
+ execbuf.buffers_ptr = (uintptr_t)gem_exec;
+ execbuf.buffer_count = 2;
+ execbuf.batch_len = (b - buf) * sizeof(buf[0]);
+ if (gen >= 6)
+ execbuf.flags = I915_EXEC_BLT;
+
+ gem_write(fd, gem_exec[1].handle, 0, buf, execbuf.batch_len);
+ gem_execbuf(fd, &execbuf);
+
+ gem_close(fd, gem_exec[1].handle);
+}
+
+static void
+gpu_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
+{
+ blt_copy_bo(b, b->snoop, bo);
+ cpu_cmp_bo(b, b->snoop, val);
+}
+
+const struct access_mode {
+ const char *name;
+ bool (*require)(void);
+ void (*set_bo)(struct buffers *b, drm_intel_bo *bo, uint32_t val);
+ void (*cmp_bo)(struct buffers *b, drm_intel_bo *bo, uint32_t val);
+ drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
+ void (*release_bo)(drm_intel_bo *bo);
+} access_modes[] = {
+ {
+ .name = "prw",
+ .set_bo = prw_set_bo,
+ .cmp_bo = prw_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "partial",
+ .set_bo = partial_set_bo,
+ .cmp_bo = partial_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "cpu",
+ .require = create_cpu_require,
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "snoop",
+ .require = create_snoop_require,
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = snoop_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "userptr",
+ .require = create_userptr_require,
+ .set_bo = userptr_set_bo,
+ .cmp_bo = userptr_cmp_bo,
+ .create_bo = userptr_create_bo,
+ .release_bo = userptr_release_bo,
+ },
+ {
+ .name = "gtt",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gtt_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gttX",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gttX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "wc",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = wc_create_bo,
+ .release_bo = wc_release_bo,
+ },
+ {
+ .name = "gpu",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpu_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gpuX",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpuX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+};
+
+int num_buffers;
+igt_render_copyfunc_t rendercopy;
+
+static void *buffers_init(struct buffers *data,
+ const struct access_mode *mode,
+ int width, int height,
+ int _fd, int enable_reuse)
+{
+ data->mode = mode;
+ data->count = 0;
+
+ data->width = width;
+ data->height = height;
+ data->size = width * height;
+ data->tmp = malloc(4*data->size);
+ igt_assert(data->tmp);
+
+ data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
+ igt_assert(data->bufmgr);
+
+ data->src = malloc(2*sizeof(drm_intel_bo *)*num_buffers);
+ igt_assert(data->src);
+ data->dst = data->src + num_buffers;
+
+ if (enable_reuse)
+ drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
+ return intel_batchbuffer_alloc(data->bufmgr, devid);
+}
+
+static void buffers_destroy(struct buffers *data)
+{
+ if (data->count == 0)
+ return;
+
+ for (int i = 0; i < data->count; i++) {
+ data->mode->release_bo(data->src[i]);
+ data->mode->release_bo(data->dst[i]);
+ }
+ data->mode->release_bo(data->snoop);
+ data->mode->release_bo(data->spare);
+ data->count = 0;
+}
+
+static void buffers_create(struct buffers *data,
+ int count)
+{
+ int width = data->width, height = data->height;
+ igt_assert(data->bufmgr);
+
+ buffers_destroy(data);
+
+ for (int i = 0; i < count; i++) {
+ data->src[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ data->dst[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ }
+ data->spare = data->mode->create_bo(data->bufmgr, width, height);
+ data->snoop = snoop_create_bo(data->bufmgr, width, height);
+ data->count = count;
+}
+
+static void buffers_fini(struct buffers *data)
+{
+ if (data->bufmgr == NULL)
+ return;
+
+ buffers_destroy(data);
+
+ free(data->tmp);
+ free(data->src);
+ data->src = NULL;
+ data->dst = NULL;
+
+ intel_batchbuffer_free(batch);
+ drm_intel_bufmgr_destroy(data->bufmgr);
+ data->bufmgr = NULL;
+}
+
+typedef void (*do_copy)(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src);
+typedef struct igt_hang_ring (*do_hang)(void);
+
+static void render_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
+{
+ struct igt_buf d = {
+ .bo = dst,
+ .size = b->size * 4,
+ .num_tiles = b->size * 4,
+ .stride = b->width * 4,
+ }, s = {
+ .bo = src,
+ .size = b->size * 4,
+ .num_tiles = b->size * 4,
+ .stride = b->width * 4,
+ };
+ uint32_t swizzle;
+
+ drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
+ drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
+
+ rendercopy(batch, NULL,
+ &s, 0, 0,
+ b->width, b->height,
+ &d, 0, 0);
+}
+
+static void blt_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
+{
+ intel_blt_copy(batch,
+ src, 0, 0, 4*b->width,
+ dst, 0, 0, 4*b->width,
+ b->width, b->height, 32);
+}
+
+static void cpu_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = b->size * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
+ s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void gtt_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = b->size * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
+ d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void wc_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = b->width * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static struct igt_hang_ring no_hang(void)
+{
+ return (struct igt_hang_ring){0, 0};
+}
+
+static struct igt_hang_ring bcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_BLT);
+}
+
+static struct igt_hang_ring rcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_RENDER);
+}
+
+static void do_basic0(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ gem_quiescent_gpu(fd);
+
+ buffers->mode->set_bo(buffers, buffers->src[0], 0xdeadbeef);
+ for (int i = 0; i < buffers->count; i++) {
+ struct igt_hang_ring hang = do_hang_func();
+
+ do_copy_func(buffers, buffers->dst[i], buffers->src[0]);
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef);
+
+ igt_post_hang_ring(fd, hang);
+ }
+}
+
+static void do_basic1(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ gem_quiescent_gpu(fd);
+
+ for (int i = 0; i < buffers->count; i++) {
+ struct igt_hang_ring hang = do_hang_func();
+
+ buffers->mode->set_bo(buffers, buffers->src[i], i);
+ buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
+
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ usleep(0); /* let someone else claim the mutex */
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
+
+ igt_post_hang_ring(fd, hang);
+ }
+}
+
+static void do_basicN(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+
+ gem_quiescent_gpu(fd);
+
+ for (int i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers, buffers->src[i], i);
+ buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
+ }
+
+ hang = do_hang_func();
+
+ for (int i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ usleep(0); /* let someone else claim the mutex */
+ }
+
+ for (int i = 0; i < buffers->count; i++)
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
+
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers, buffers->src[i], i);
+ buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < half; i++) {
+ buffers->mode->set_bo(buffers, buffers->src[i], i);
+ buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
+ buffers->mode->set_bo(buffers, buffers->dst[i+half], ~i);
+ }
+ for (i = 0; i < half; i++) {
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ if (do_rcs)
+ render_copy_bo(buffers, buffers->dst[i+half], buffers->src[i]);
+ else
+ blt_copy_bo(buffers, buffers->dst[i+half], buffers->src[i]);
+ }
+ hang = do_hang_func();
+ for (i = half; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
+ for (i = 0; i < half; i++) {
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
+ buffers->mode->cmp_bo(buffers, buffers->dst[i+half], i);
+ }
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_overwrite_source_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_overwrite_source__rev(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers, buffers->src[i], i);
+ buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source__one(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+
+ gem_quiescent_gpu(fd);
+ buffers->mode->set_bo(buffers, buffers->src[0], 0);
+ buffers->mode->set_bo(buffers, buffers->dst[0], ~0);
+ do_copy_func(buffers, buffers->dst[0], buffers->src[0]);
+ hang = do_hang_func();
+ buffers->mode->set_bo(buffers, buffers->src[0], 0xdeadbeef);
+ buffers->mode->cmp_bo(buffers, buffers->dst[0], 0);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef^~i);
+ buffers->mode->set_bo(buffers, buffers->dst[i], i);
+ }
+ for (i = 0; i < half; i++) {
+ if (do_rcs == 1 || (do_rcs == -1 && i & 1))
+ render_copy_bo(buffers, buffers->dst[i], buffers->src[i]);
+ else
+ blt_copy_bo(buffers, buffers->dst[i], buffers->src[i]);
+
+ do_copy_func(buffers, buffers->dst[i+half], buffers->src[i]);
+
+ if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
+ render_copy_bo(buffers, buffers->dst[i], buffers->dst[i+half]);
+ else
+ blt_copy_bo(buffers, buffers->dst[i], buffers->dst[i+half]);
+
+ do_copy_func(buffers, buffers->dst[i+half], buffers->src[i+half]);
+ }
+ hang = do_hang_func();
+ for (i = 0; i < 2*half; i++)
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef^~i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_intermix_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_intermix_both(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, -1);
+}
+
+static void do_early_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ blt_copy_bo(buffers, buffers->spare, buffers->src[i]);
+ }
+ buffers->mode->cmp_bo(buffers, buffers->spare, 0xdeadbeef^(buffers->count-1));
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_write_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
+ for (i = 0; i < buffers->count; i++) {
+ blt_copy_bo(buffers, buffers->spare, buffers->src[i]);
+ do_copy_func(buffers, buffers->dst[i], buffers->spare);
+ }
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ render_copy_bo(buffers, buffers->spare, buffers->src[i]);
+ }
+ buffers->mode->cmp_bo(buffers, buffers->spare, 0xdeadbeef^(buffers->count-1));
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_write_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
+ for (i = 0; i < buffers->count; i++) {
+ render_copy_bo(buffers, buffers->spare, buffers->src[i]);
+ do_copy_func(buffers, buffers->dst[i], buffers->spare);
+ }
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_gpu_read_after_write(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers, buffers->src[i], 0xabcdabcd);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
+ for (i = buffers->count; i--; )
+ do_copy_func(buffers, buffers->spare, buffers->dst[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xabcdabcd);
+ igt_post_hang_ring(fd, hang);
+}
+
+typedef void (*do_test)(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+typedef void (*run_wrap)(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+static void run_single(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_test_func(buffers, do_copy_func, do_hang_func);
+ igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+}
+
+static void run_interruptible(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ for (pass = 0; pass < 10; pass++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+ pass = 0;
+ igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+}
+
+static void run_child(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+
+{
+ /* We inherit the buffers from the parent, but the bufmgr/batch
+ * needs to be local as the cache of reusable objects itself will be COWed,
+ * leading to the child closing an object without the parent knowing.
+ */
+ igt_fork(child, 1)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+ igt_waitchildren();
+ igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+}
+
+static void __run_forked(struct buffers *buffers,
+ int num_children, int loops,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+
+{
+ const int old_num_buffers = num_buffers;
+
+ num_buffers /= num_children;
+ num_buffers += MIN_BUFFERS;
+
+ igt_fork(child, num_children) {
+ /* recreate process local variables */
+ buffers->count = 0;
+ fd = drm_open_driver(DRIVER_INTEL);
+
+ batch = buffers_init(buffers, buffers->mode,
+ buffers->width, buffers->height,
+ fd, true);
+
+ buffers_create(buffers, num_buffers);
+ for (pass = 0; pass < loops; pass++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+ pass = 0;
+
+ buffers_fini(buffers);
+ }
+
+ igt_waitchildren();
+ igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
+
+ num_buffers = old_num_buffers;
+}
+
+static void run_forked(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ __run_forked(buffers, sysconf(_SC_NPROCESSORS_ONLN), 10,
+ do_test_func, do_copy_func, do_hang_func);
+}
+
+static void run_bomb(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ __run_forked(buffers, 8*sysconf(_SC_NPROCESSORS_ONLN), 10,
+ do_test_func, do_copy_func, do_hang_func);
+}
+
+static void bit17_require(void)
+{
+ struct drm_i915_gem_get_tiling2 {
+ uint32_t handle;
+ uint32_t tiling_mode;
+ uint32_t swizzle_mode;
+ uint32_t phys_swizzle_mode;
+ } arg;
+#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
+
+ memset(&arg, 0, sizeof(arg));
+ arg.handle = gem_create(fd, 4096);
+ gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
+
+ do_ioctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg);
+ gem_close(fd, arg.handle);
+ igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
+}
+
+static void cpu_require(void)
+{
+ bit17_require();
+}
+
+static void gtt_require(void)
+{
+}
+
+static void wc_require(void)
+{
+ bit17_require();
+ gem_require_mmap_wc(fd);
+}
+
+static void bcs_require(void)
+{
+}
+
+static void rcs_require(void)
+{
+ igt_require(rendercopy);
+}
+
+static void
+run_basic_modes(const char *prefix,
+ const struct access_mode *mode,
+ const char *suffix,
+ run_wrap run_wrap_func)
+{
+ const struct {
+ const char *prefix;
+ do_copy copy;
+ void (*require)(void);
+ } pipelines[] = {
+ { "cpu", cpu_copy_bo, cpu_require },
+ { "gtt", gtt_copy_bo, gtt_require },
+ { "wc", wc_copy_bo, wc_require },
+ { "blt", blt_copy_bo, bcs_require },
+ { "render", render_copy_bo, rcs_require },
+ { NULL, NULL }
+ }, *pskip = pipelines + 3, *p;
+ const struct {
+ const char *suffix;
+ do_hang hang;
+ } hangs[] = {
+ { "", no_hang },
+ { "-hang-blt", bcs_hang },
+ { "-hang-render", rcs_hang },
+ { NULL, NULL },
+ }, *h;
+
+ for (h = hangs; h->suffix; h++) {
+ if (!all && *h->suffix)
+ continue;
+
+ for (p = all ? pipelines : pskip; p->prefix; p++) {
+ struct buffers buffers;
+
+ igt_fixture
+ batch = buffers_init(&buffers, mode,
+ 512, 512, fd,
+ run_wrap_func != run_child);
+
+ igt_subtest_f("%s-%s-%s-sanitycheck0%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers, do_basic0,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-%s-sanitycheck1%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers, do_basic1,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-%s-sanitycheckN%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers, do_basicN,
+ p->copy, h->hang);
+ }
+
+ /* try to overwrite the source values */
+ igt_subtest_f("%s-%s-%s-overwrite-source-one%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__one,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-%s-overwrite-source%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-%s-overwrite-source-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_bcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-%s-overwrite-source-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_rcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-%s-overwrite-source-rev%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__rev,
+ p->copy, h->hang);
+ }
+
+ /* try to intermix copies with GPU copies*/
+ igt_subtest_f("%s-%s-%s-intermix-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_rcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-%s-intermix-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-%s-intermix-both%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_both,
+ p->copy, h->hang);
+ }
+
+ /* try to read the results before the copy completes */
+ igt_subtest_f("%s-%s-%s-early-read%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_early_read,
+ p->copy, h->hang);
+ }
+
+ /* concurrent reads */
+ igt_subtest_f("%s-%s-%s-read-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-%s-read-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_rcs,
+ p->copy, h->hang);
+ }
+
+ /* split copying between rings */
+ igt_subtest_f("%s-%s-%s-write-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_write_read_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-%s-write-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_write_read_rcs,
+ p->copy, h->hang);
+ }
+
+ /* and finally try to trick the kernel into loosing the pending write */
+ igt_subtest_f("%s-%s-%s-gpu-read-after-write%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_gpu_read_after_write,
+ p->copy, h->hang);
+ }
+
+ igt_fixture
+ buffers_fini(&buffers);
+ }
+ }
+}
+
+static void
+run_modes(const char *style, const struct access_mode *mode, unsigned allow_mem)
+{
+ if (mode->require && !mode->require())
+ return;
+
+ igt_debug("%s: using 2x%d buffers, each 1MiB\n",
+ style, num_buffers);
+ if (!__intel_check_memory(2*num_buffers, 1024*1024, allow_mem,
+ NULL, NULL))
+ return;
+
+ run_basic_modes(style, mode, "", run_single);
+ run_basic_modes(style, mode, "-child", run_child);
+ run_basic_modes(style, mode, "-forked", run_forked);
+
+ igt_fork_signal_helper();
+ run_basic_modes(style, mode, "-interruptible", run_interruptible);
+ run_basic_modes(style, mode, "-bomb", run_bomb);
+ igt_stop_signal_helper();
+}
+
+igt_main
+{
+ const struct {
+ const char *name;
+ drm_intel_bo *(*create)(drm_intel_bufmgr *, uint64_t size);
+ bool (*require)(void);
+ } create[] = {
+ { "", create_normal_bo, can_create_normal},
+ { "private-", create_private_bo, can_create_private },
+ { "stolen-", create_stolen_bo, can_create_stolen },
+ { NULL, NULL }
+ }, *c;
+ uint64_t pin_sz = 0;
+ void *pinned = NULL;
+ int i;
+
+ igt_skip_on_simulation();
+
+ if (strstr(igt_test_name(), "all"))
+ all = true;
+
+ igt_fixture {
+ fd = drm_open_driver(DRIVER_INTEL);
+ intel_detect_and_clear_missed_interrupts(fd);
+ devid = intel_get_drm_devid(fd);
+ gen = intel_gen(devid);
+ rendercopy = igt_get_render_copyfunc(devid);
+ }
+
+ for (c = create; c->name; c++) {
+ char name[80];
+
+ create_func = c->create;
+
+ num_buffers = MIN_BUFFERS;
+ if (c->require()) {
+ snprintf(name, sizeof(name), "%s%s", c->name, "tiny");
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(name, &access_modes[i], CHECK_RAM);
+ }
+
+ igt_fixture {
+ num_buffers = gem_mappable_aperture_size() / (1024 * 1024) / 4;
+ }
+
+ if (c->require()) {
+ snprintf(name, sizeof(name), "%s%s", c->name, "small");
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(name, &access_modes[i], CHECK_RAM);
+ }
+
+ igt_fixture {
+ num_buffers = gem_mappable_aperture_size() / (1024 * 1024);
+ }
+
+ if (c->require()) {
+ snprintf(name, sizeof(name), "%s%s", c->name, "thrash");
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(name, &access_modes[i], CHECK_RAM);
+ }
+
+ igt_fixture {
+ num_buffers = gem_aperture_size(fd) / (1024 * 1024);
+ }
+
+ if (c->require()) {
+ snprintf(name, sizeof(name), "%s%s", c->name, "full");
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(name, &access_modes[i], CHECK_RAM);
+ }
+
+ igt_fixture {
+ num_buffers = gem_mappable_aperture_size() / (1024 * 1024);
+ pin_sz = intel_get_avail_ram_mb() - num_buffers;
+
+ igt_debug("Pinning %ld MiB\n", (long)pin_sz);
+ pin_sz *= 1024 * 1024;
+
+ if (posix_memalign(&pinned, 4096, pin_sz) ||
+ mlock(pinned, pin_sz) ||
+ madvise(pinned, pin_sz, MADV_DONTFORK)) {
+ free(pinned);
+ pinned = NULL;
+ }
+ igt_require(pinned);
+ }
+
+ if (c->require()) {
+ snprintf(name, sizeof(name), "%s%s", c->name, "swap");
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(name, &access_modes[i], CHECK_RAM | CHECK_SWAP);
+ }
+
+ igt_fixture {
+ if (pinned) {
+ munlock(pinned, pin_sz);
+ free(pinned);
+ pinned = NULL;
+ }
+ }
+ }
+}
--
2.7.0
* [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests
2016-02-11 11:09 [PATCH i-g-t v4 0/3] Unify slow/combinatorial test handling David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 1/3] tests/gem_concurrent_blit: rename gem_concurrent_all David Weinehall
@ 2016-02-11 11:09 ` David Weinehall
2016-02-11 13:04 ` Chris Wilson
2016-02-16 15:45 ` Daniel Vetter
2016-02-11 11:09 ` [PATCH i-g-t v4 3/3] tests/gem_concurrent_all: Remove gem_concurrent_all.c David Weinehall
2 siblings, 2 replies; 8+ messages in thread
From: David Weinehall @ 2016-02-11 11:09 UTC (permalink / raw)
To: intel-gfx
Some subtests are not run by default, for various reasons: because
they're only for debugging, because they're slow, or because they're
not of high enough quality.
This patch introduces a common mechanism for categorising subtests
and adds a flag (--all) that runs/lists all subtests instead of just
the default set.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
lib/igt_core.c | 43 +++++++--
lib/igt_core.h | 42 +++++++++
tests/gem_concurrent_blit.c | 50 +++++------
tests/kms_frontbuffer_tracking.c | 186 ++++++++++++++++++++++-----------------
4 files changed, 210 insertions(+), 111 deletions(-)
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 6b69bb780700..e6e6949ed65a 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -216,6 +216,7 @@ const char *igt_interactive_debug;
/* subtests helpers */
static bool list_subtests = false;
+static unsigned int subtest_types_mask = SUBTEST_TYPE_NORMAL;
static char *run_single_subtest = NULL;
static bool run_single_subtest_found = false;
static const char *in_subtest = NULL;
@@ -237,12 +238,13 @@ int test_children_sz;
bool test_child;
enum {
- OPT_LIST_SUBTESTS,
- OPT_RUN_SUBTEST,
- OPT_DESCRIPTION,
- OPT_DEBUG,
- OPT_INTERACTIVE_DEBUG,
- OPT_HELP = 'h'
+ OPT_LIST_SUBTESTS,
+ OPT_WITH_ALL_SUBTESTS,
+ OPT_RUN_SUBTEST,
+ OPT_DESCRIPTION,
+ OPT_DEBUG,
+ OPT_INTERACTIVE_DEBUG,
+ OPT_HELP = 'h'
};
static int igt_exitcode = IGT_EXIT_SUCCESS;
@@ -516,6 +518,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
fprintf(f, " --list-subtests\n"
+ " --all\n"
" --run-subtest <pattern>\n"
" --debug[=log-domain]\n"
" --interactive-debug[=domain]\n"
@@ -548,6 +551,7 @@ static int common_init(int *argc, char **argv,
int c, option_index = 0, i, x;
static struct option long_options[] = {
{"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
+ {"all", 0, 0, OPT_WITH_ALL_SUBTESTS},
{"run-subtest", 1, 0, OPT_RUN_SUBTEST},
{"help-description", 0, 0, OPT_DESCRIPTION},
{"debug", optional_argument, 0, OPT_DEBUG},
@@ -659,6 +663,10 @@ static int common_init(int *argc, char **argv,
if (!run_single_subtest)
list_subtests = true;
break;
+ case OPT_WITH_ALL_SUBTESTS:
+ if (!run_single_subtest)
+ subtest_types_mask = SUBTEST_TYPE_ALL;
+ break;
case OPT_RUN_SUBTEST:
if (!list_subtests)
run_single_subtest = strdup(optarg);
@@ -1667,6 +1675,29 @@ void igt_skip_on_simulation(void)
igt_require(!igt_run_in_simulation());
}
+/**
+ * igt_match_subtest_flags:
+ *
+ * This function checks whether the attributes of the subtest
+ * make it a candidate for inclusion in the test run; this is used to
+ * categorise tests, for instance to exclude tests that are purely for
+ * debug purposes, tests that are specific to certain environments,
+ * or tests that are very slow.
+ *
+ * Note that a test has to have all its flags met to be run; for instance
+ * a subtest with the flags SUBTEST_TYPE_SLOW | SUBTEST_TYPE_DEBUG requires
+ * "--subtest-types=slow,debug" or "--all" to be executed.
+ *
+ * @subtest_flags: the subtest flags to check for
+ *
+ * Returns: true if the subtest should be run,
+ * false if the subtest should be skipped
+ */
+bool igt_match_subtest_flags(unsigned long subtest_flags)
+{
+ return ((subtest_flags & subtest_types_mask) == subtest_flags);
+}
+
/* structured logging */
/**
diff --git a/lib/igt_core.h b/lib/igt_core.h
index 8f297e06a068..bf83de609bfa 100644
--- a/lib/igt_core.h
+++ b/lib/igt_core.h
@@ -193,6 +193,48 @@ bool __igt_run_subtest(const char *subtest_name);
#define igt_subtest_f(f...) \
__igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
+enum {
+ /* The set of tests run if nothing else is specified */
+ SUBTEST_TYPE_NORMAL = 1 << 0,
+ /* Basic Acceptance Testing set */
+ SUBTEST_TYPE_BASIC = 1 << 1,
+ /* Tests that are very slow */
+ SUBTEST_TYPE_SLOW = 1 << 2,
+ /* Tests that are mainly intended for debugging */
+ SUBTEST_TYPE_DEBUG = 1 << 3,
+ SUBTEST_TYPE_ALL = ~0
+} subtest_types;
+
+bool igt_match_subtest_flags(unsigned long subtest_flags);
+
+/**
+ * igt_subtest_flags:
+ * @name: name of the subtest
+ * @__subtest_flags: the categories the subtest belongs to
+ *
+ * This is a wrapper around igt_subtest that will only execute the
+ * testcase if all of the flags passed to this function match those
+ * specified by the list of subtest categories passed from the
+ * command line; the default category is SUBTEST_TYPE_NORMAL.
+ */
+#define igt_subtest_flags(name, __subtest_flags) \
+ if (igt_match_subtest_flags(__subtest_flags)) \
+ igt_subtest(name)
+
+/**
+ * igt_subtest_flags_f:
+ * @__subtest_flags: the categories the subtest belongs to
+ * @...: format string and optional arguments
+ *
+ * This is a wrapper around igt_subtest_f that will only execute the
+ * testcase if all of the flags passed to this function match those
+ * specified by the list of subtest categories passed from the
+ * command line; the default category is SUBTEST_TYPE_NORMAL.
+ */
+#define igt_subtest_flags_f(__subtest_flags, f...) \
+ if (igt_match_subtest_flags(__subtest_flags)) \
+ __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
+
const char *igt_subtest_name(void);
bool igt_only_list_subtests(void);
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 9b7ef8700e31..3c23a95c4631 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -64,7 +64,6 @@ struct local_i915_gem_userptr {
int fd, devid, gen;
struct intel_batchbuffer *batch;
-int all;
int pass;
struct buffers {
@@ -1256,10 +1255,14 @@ run_basic_modes(const char *prefix,
}, *h;
for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
+ unsigned int subtest_flags;
- for (p = all ? pipelines : pskip; p->prefix; p++) {
+ if (*h->suffix)
+ subtest_flags = SUBTEST_TYPE_SLOW;
+ else
+ subtest_flags = SUBTEST_TYPE_NORMAL;
+
+ for (p = igt_match_subtest_flags(SUBTEST_TYPE_SLOW) ? pipelines : pskip; p->prefix; p++) {
struct buffers buffers;
igt_fixture
@@ -1267,21 +1270,21 @@ run_basic_modes(const char *prefix,
512, 512, fd,
run_wrap_func != run_child);
- igt_subtest_f("%s-%s-%s-sanitycheck0%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-sanitycheck0%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers, do_basic0,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-sanitycheck1%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-sanitycheck1%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers, do_basic1,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-sanitycheckN%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-sanitycheckN%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers, do_basicN,
@@ -1289,7 +1292,7 @@ run_basic_modes(const char *prefix,
}
/* try to overwrite the source values */
- igt_subtest_f("%s-%s-%s-overwrite-source-one%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-one%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
@@ -1297,7 +1300,7 @@ run_basic_modes(const char *prefix,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-overwrite-source%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
@@ -1305,7 +1308,7 @@ run_basic_modes(const char *prefix,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-overwrite-source-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
@@ -1313,7 +1316,7 @@ run_basic_modes(const char *prefix,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-overwrite-source-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
igt_require(rendercopy);
buffers_create(&buffers, num_buffers);
@@ -1322,7 +1325,7 @@ run_basic_modes(const char *prefix,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-overwrite-source-rev%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-rev%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
@@ -1331,7 +1334,7 @@ run_basic_modes(const char *prefix,
}
/* try to intermix copies with GPU copies*/
- igt_subtest_f("%s-%s-%s-intermix-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-intermix-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
igt_require(rendercopy);
buffers_create(&buffers, num_buffers);
@@ -1339,7 +1342,7 @@ run_basic_modes(const char *prefix,
do_intermix_rcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-intermix-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-intermix-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
igt_require(rendercopy);
buffers_create(&buffers, num_buffers);
@@ -1347,7 +1350,7 @@ run_basic_modes(const char *prefix,
do_intermix_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-intermix-both%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-intermix-both%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
igt_require(rendercopy);
buffers_create(&buffers, num_buffers);
@@ -1357,7 +1360,7 @@ run_basic_modes(const char *prefix,
}
/* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-%s-early-read%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-early-read%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
@@ -1366,14 +1369,14 @@ run_basic_modes(const char *prefix,
}
/* concurrent reads */
- igt_subtest_f("%s-%s-%s-read-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-read-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
do_read_read_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-read-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-read-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
igt_require(rendercopy);
buffers_create(&buffers, num_buffers);
@@ -1383,14 +1386,14 @@ run_basic_modes(const char *prefix,
}
/* split copying between rings */
- igt_subtest_f("%s-%s-%s-write-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-write-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
do_write_read_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-%s-write-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-write-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
igt_require(rendercopy);
buffers_create(&buffers, num_buffers);
@@ -1399,8 +1402,8 @@ run_basic_modes(const char *prefix,
p->copy, h->hang);
}
- /* and finally try to trick the kernel into loosing the pending write */
- igt_subtest_f("%s-%s-%s-gpu-read-after-write%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
+ /* and finally try to trick the kernel into losing the pending write */
+ igt_subtest_flags_f(subtest_flags, "%s-%s-%s-gpu-read-after-write%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
p->require();
buffers_create(&buffers, num_buffers);
run_wrap_func(&buffers,
@@ -1454,9 +1457,6 @@ igt_main
igt_skip_on_simulation();
- if (strstr(igt_test_name(), "all"))
- all = true;
-
igt_fixture {
fd = drm_open_driver(DRIVER_INTEL);
intel_detect_and_clear_missed_interrupts(fd);
diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
index 64f880c667a3..651303e9d392 100644
--- a/tests/kms_frontbuffer_tracking.c
+++ b/tests/kms_frontbuffer_tracking.c
@@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
* combinations that are somewhat redundant and don't add much value to the
* test. For example, since we already do the offscreen testing with a single
* pipe enabled, there's not much value in doing it again with dual pipes. If you
- * still want to try these redundant tests, you need to use the --show-hidden
- * option.
+ * still want to try these redundant tests, you need to use the --all option.
*
* The most important hidden thing is the FEATURE_NONE set of tests. Whenever
* you get a failure on any test, it is important to check whether the same test
@@ -126,6 +125,9 @@ struct test_mode {
} flip;
enum igt_draw_method method;
+
+ /* Specifies the subtest categories this subtest belongs to */
+ unsigned long subtest_flags;
};
enum color {
@@ -241,7 +243,6 @@ struct {
bool fbc_check_last_action;
bool no_edp;
bool small_modes;
- bool show_hidden;
int step;
int only_pipes;
int shared_fb_x_offset;
@@ -253,7 +254,6 @@ struct {
.fbc_check_last_action = true,
.no_edp = false,
.small_modes = false,
- .show_hidden= false,
.step = 0,
.only_pipes = PIPE_COUNT,
.shared_fb_x_offset = 500,
@@ -3075,9 +3075,6 @@ static int opt_handler(int option, int option_index, void *data)
case 'm':
opt.small_modes = true;
break;
- case 'i':
- opt.show_hidden = true;
- break;
case 't':
opt.step++;
break;
@@ -3113,7 +3110,6 @@ const char *help_str =
" --no-fbc-action-check Don't check for the FBC last action\n"
" --no-edp Don't use eDP monitors\n"
" --use-small-modes Use smaller resolutions for the modes\n"
-" --show-hidden Show hidden subtests\n"
" --step Stop on each step so you can check the screen\n"
" --shared-fb-x offset Use 'offset' as the X offset for the shared FB\n"
" --shared-fb-y offset Use 'offset' as the Y offset for the shared FB\n"
@@ -3227,18 +3223,19 @@ static const char *flip_str(enum flip_type flip)
for (t.plane = 0; t.plane < PLANE_COUNT; t.plane++) { \
for (t.fbs = 0; t.fbs < FBS_COUNT; t.fbs++) { \
for (t.method = 0; t.method < IGT_DRAW_METHOD_COUNT; t.method++) { \
+ t.subtest_flags = SUBTEST_TYPE_NORMAL; \
if (t.pipes == PIPE_SINGLE && t.screen == SCREEN_SCND) \
continue; \
if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
continue; \
- if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
+ if (t.pipes == PIPE_DUAL && \
t.screen == SCREEN_OFFSCREEN) \
- continue; \
- if (!opt.show_hidden && t.feature == FEATURE_NONE) \
- continue; \
- if (!opt.show_hidden && t.fbs == FBS_SHARED && \
+ t.subtest_flags = SUBTEST_TYPE_SLOW; \
+ if (t.feature == FEATURE_NONE) \
+ t.subtest_flags = SUBTEST_TYPE_SLOW; \
+ if (t.fbs == FBS_SHARED && \
(t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
- continue;
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
#define TEST_MODE_ITER_END } } } } } }
@@ -3253,7 +3250,6 @@ int main(int argc, char *argv[])
{ "no-fbc-action-check", 0, 0, 'a'},
{ "no-edp", 0, 0, 'e'},
{ "use-small-modes", 0, 0, 'm'},
- { "show-hidden", 0, 0, 'i'},
{ "step", 0, 0, 't'},
{ "shared-fb-x", 1, 0, 'x'},
{ "shared-fb-y", 1, 0, 'y'},
@@ -3269,8 +3265,9 @@ int main(int argc, char *argv[])
setup_environment();
for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
- if (!opt.show_hidden && t.feature == FEATURE_NONE)
- continue;
+ t.subtest_flags = SUBTEST_TYPE_NORMAL;
+ if (t.feature == FEATURE_NONE)
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
t.screen = SCREEN_PRIM;
t.plane = PLANE_PRI;
@@ -3280,38 +3277,43 @@ int main(int argc, char *argv[])
t.flip = -1;
t.method = -1;
- igt_subtest_f("%s-%s-rte",
- feature_str(t.feature),
- pipes_str(t.pipes))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-rte",
+ feature_str(t.feature),
+ pipes_str(t.pipes))
rte_subtest(&t);
}
}
TEST_MODE_ITER_BEGIN(t)
- igt_subtest_f("%s-%s-%s-%s-%s-draw-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-draw-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
draw_subtest(&t);
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.plane != PLANE_PRI ||
- t.screen == SCREEN_OFFSCREEN ||
- (!opt.show_hidden && t.method != IGT_DRAW_BLT))
+ t.screen == SCREEN_OFFSCREEN)
continue;
+ if (t.method != IGT_DRAW_BLT)
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
+
for (t.flip = 0; t.flip < FLIP_COUNT; t.flip++)
- igt_subtest_f("%s-%s-%s-%s-%sflip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- flip_str(t.flip),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%sflip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ flip_str(t.flip),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t);
TEST_MODE_ITER_END
@@ -3322,10 +3324,11 @@ int main(int argc, char *argv[])
(t.feature & FEATURE_FBC) == 0)
continue;
- igt_subtest_f("%s-%s-%s-fliptrack",
- feature_str(t.feature),
- pipes_str(t.pipes),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-fliptrack",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ fbs_str(t.fbs))
fliptrack_subtest(&t, FLIP_PAGEFLIP);
TEST_MODE_ITER_END
@@ -3335,20 +3338,22 @@ int main(int argc, char *argv[])
t.plane == PLANE_PRI)
continue;
- igt_subtest_f("%s-%s-%s-%s-%s-move",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-move",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
move_subtest(&t);
- igt_subtest_f("%s-%s-%s-%s-%s-onoff",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-onoff",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
onoff_subtest(&t);
TEST_MODE_ITER_END
@@ -3358,27 +3363,31 @@ int main(int argc, char *argv[])
t.plane != PLANE_SPR)
continue;
- igt_subtest_f("%s-%s-%s-%s-%s-fullscreen",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-fullscreen",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
fullscreen_plane_subtest(&t);
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.screen != SCREEN_PRIM ||
- t.method != IGT_DRAW_BLT ||
- (!opt.show_hidden && t.plane != PLANE_PRI) ||
- (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
+ t.method != IGT_DRAW_BLT)
continue;
- igt_subtest_f("%s-%s-%s-%s-multidraw",
- feature_str(t.feature),
- pipes_str(t.pipes),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ if (t.plane != PLANE_PRI ||
+ t.fbs != FBS_INDIVIDUAL)
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
+
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-multidraw",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
multidraw_subtest(&t);
TEST_MODE_ITER_END
@@ -3390,7 +3399,9 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_GTT)
continue;
- igt_subtest_f("%s-farfromfence", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-farfromfence",
+ feature_str(t.feature))
farfromfence_subtest(&t);
TEST_MODE_ITER_END
@@ -3406,10 +3417,11 @@ int main(int argc, char *argv[])
if (t.format == FORMAT_DEFAULT)
continue;
- igt_subtest_f("%s-%s-draw-%s",
- feature_str(t.feature),
- format_str(t.format),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-draw-%s",
+ feature_str(t.feature),
+ format_str(t.format),
+ igt_draw_get_method_name(t.method))
format_draw_subtest(&t);
}
TEST_MODE_ITER_END
@@ -3420,9 +3432,11 @@ int main(int argc, char *argv[])
t.plane != PLANE_PRI ||
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-%s-scaledprimary",
- feature_str(t.feature),
- fbs_str(t.fbs))
+
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-scaledprimary",
+ feature_str(t.feature),
+ fbs_str(t.fbs))
scaledprimary_subtest(&t);
TEST_MODE_ITER_END
@@ -3434,25 +3448,37 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-modesetfrombusy",
+ feature_str(t.feature))
modesetfrombusy_subtest(&t);
if (t.feature & FEATURE_FBC) {
- igt_subtest_f("%s-badstride", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-badstride",
+ feature_str(t.feature))
badstride_subtest(&t);
- igt_subtest_f("%s-stridechange", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-stridechange",
+ feature_str(t.feature))
stridechange_subtest(&t);
- igt_subtest_f("%s-tilingchange", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-tilingchange",
+ feature_str(t.feature))
tilingchange_subtest(&t);
}
if (t.feature & FEATURE_PSR)
- igt_subtest_f("%s-slowdraw", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-slowdraw",
+ feature_str(t.feature))
slow_draw_subtest(&t);
- igt_subtest_f("%s-suspend", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-suspend",
+ feature_str(t.feature))
suspend_subtest(&t);
TEST_MODE_ITER_END
--
2.7.0
* [PATCH i-g-t v4 3/3] tests/gem_concurrent_all: Remove gem_concurrent_all.c
2016-02-11 11:09 [PATCH i-g-t v4 0/3] Unify slow/combinatorial test handling David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 1/3] tests/gem_concurrent_blit: rename gem_concurrent_all David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests David Weinehall
@ 2016-02-11 11:09 ` David Weinehall
2 siblings, 0 replies; 8+ messages in thread
From: David Weinehall @ 2016-02-11 11:09 UTC (permalink / raw)
To: intel-gfx
When gem_concurrent_blit was converted to use the new common framework
for choosing whether or not to include slow/combinatorial tests,
gem_concurrent_all became superfluous. This patch removes it.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1540 --------------------------------------------
2 files changed, 1541 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
diff --git a/tests/Makefile.sources b/tests/Makefile.sources
index df92586a56fc..05ca603aa768 100644
--- a/tests/Makefile.sources
+++ b/tests/Makefile.sources
@@ -21,7 +21,6 @@ TESTS_progs_M = \
gem_caching \
gem_close_race \
gem_concurrent_blit \
- gem_concurrent_all \
gem_create \
gem_cs_tlb \
gem_ctx_param_basic \
diff --git a/tests/gem_concurrent_all.c b/tests/gem_concurrent_all.c
deleted file mode 100644
index 9b7ef8700e31..000000000000
--- a/tests/gem_concurrent_all.c
+++ /dev/null
@@ -1,1540 +0,0 @@
-/*
- * Copyright © 2009,2012,2013 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- * Eric Anholt <eric@anholt.net>
- * Chris Wilson <chris@chris-wilson.co.uk>
- * Daniel Vetter <daniel.vetter@ffwll.ch>
- *
- */
-
-/** @file gem_concurrent.c
- *
- * This is a test of pread/pwrite/mmap behavior when writing to active
- * buffers.
- *
- * Based on gem_gtt_concurrent_blt.
- */
-
-#include "igt.h"
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-#include <fcntl.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/stat.h>
-#include <sys/time.h>
-#include <sys/wait.h>
-
-#include <drm.h>
-
-#include "intel_bufmgr.h"
-
-IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
- " buffers.");
-
-#define LOCAL_I915_GEM_USERPTR 0x33
-#define LOCAL_IOCTL_I915_GEM_USERPTR DRM_IOWR (DRM_COMMAND_BASE + LOCAL_I915_GEM_USERPTR, struct local_i915_gem_userptr)
-struct local_i915_gem_userptr {
- uint64_t user_ptr;
- uint64_t user_size;
- uint32_t flags;
- uint32_t handle;
-};
-
-int fd, devid, gen;
-struct intel_batchbuffer *batch;
-int all;
-int pass;
-
-struct buffers {
- const struct access_mode *mode;
- drm_intel_bufmgr *bufmgr;
- drm_intel_bo **src, **dst;
- drm_intel_bo *snoop, *spare;
- uint32_t *tmp;
- int width, height, size;
- int count;
-};
-
-#define MIN_BUFFERS 3
-
-static void blt_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src);
-
-static void
-nop_release_bo(drm_intel_bo *bo)
-{
- drm_intel_bo_unreference(bo);
-}
-
-static void
-prw_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- for (int i = 0; i < b->size; i++)
- b->tmp[i] = val;
- drm_intel_bo_subdata(bo, 0, 4*b->size, b->tmp);
-}
-
-static void
-prw_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- uint32_t *vaddr;
-
- vaddr = b->tmp;
- do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*b->size, vaddr));
- for (int i = 0; i < b->size; i++)
- igt_assert_eq_u32(vaddr[i], val);
-}
-
-#define pixel(y, width) ((y)*(width) + (((y) + pass)%(width)))
-
-static void
-partial_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- for (int y = 0; y < b->height; y++)
- do_or_die(drm_intel_bo_subdata(bo, 4*pixel(y, b->width), 4, &val));
-}
-
-static void
-partial_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- for (int y = 0; y < b->height; y++) {
- uint32_t buf;
- do_or_die(drm_intel_bo_get_subdata(bo, 4*pixel(y, b->width), 4, &buf));
- igt_assert_eq_u32(buf, val);
- }
-}
-
-static drm_intel_bo *
-create_normal_bo(drm_intel_bufmgr *bufmgr, uint64_t size)
-{
- drm_intel_bo *bo;
-
- bo = drm_intel_bo_alloc(bufmgr, "bo", size, 0);
- igt_assert(bo);
-
- return bo;
-}
-
-static bool can_create_normal(void)
-{
- return true;
-}
-
-static drm_intel_bo *
-create_private_bo(drm_intel_bufmgr *bufmgr, uint64_t size)
-{
- drm_intel_bo *bo;
- uint32_t handle;
-
- /* XXX gem_create_with_flags(fd, size, I915_CREATE_PRIVATE); */
-
- handle = gem_create(fd, size);
- bo = gem_handle_to_libdrm_bo(bufmgr, fd, "stolen", handle);
- gem_close(fd, handle);
-
- return bo;
-}
-
-static bool can_create_private(void)
-{
- return false;
-}
-
-static drm_intel_bo *
-create_stolen_bo(drm_intel_bufmgr *bufmgr, uint64_t size)
-{
- drm_intel_bo *bo;
- uint32_t handle;
-
- /* XXX gem_create_with_flags(fd, size, I915_CREATE_STOLEN); */
-
- handle = gem_create(fd, size);
- bo = gem_handle_to_libdrm_bo(bufmgr, fd, "stolen", handle);
- gem_close(fd, handle);
-
- return bo;
-}
-
-static bool can_create_stolen(void)
-{
- /* XXX check num_buffers against available stolen */
- return false;
-}
-
-static drm_intel_bo *
-(*create_func)(drm_intel_bufmgr *bufmgr, uint64_t size);
-
-static bool create_cpu_require(void)
-{
- return create_func != create_stolen_bo;
-}
-
-static drm_intel_bo *
-unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return create_func(bufmgr, (uint64_t)4*width*height);
-}
-
-static bool create_snoop_require(void)
-{
- if (!create_cpu_require())
- return false;
-
- return !gem_has_llc(fd);
-}
-
-static drm_intel_bo *
-snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- bo = unmapped_create_bo(bufmgr, width, height);
- gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
- drm_intel_bo_disable_reuse(bo);
-
- return bo;
-}
-
-static bool create_userptr_require(void)
-{
- static int found = -1;
- if (found < 0) {
- struct drm_i915_gem_userptr arg;
-
- found = 0;
-
- memset(&arg, 0, sizeof(arg));
- arg.user_ptr = -4096ULL;
- arg.user_size = 8192;
- errno = 0;
- drmIoctl(fd, LOCAL_IOCTL_I915_GEM_USERPTR, &arg);
- if (errno == EFAULT) {
- igt_assert(posix_memalign((void **)&arg.user_ptr,
- 4096, arg.user_size) == 0);
- found = drmIoctl(fd,
- LOCAL_IOCTL_I915_GEM_USERPTR,
- &arg) == 0;
- free((void *)(uintptr_t)arg.user_ptr);
- }
-
- }
- return found;
-}
-
-static drm_intel_bo *
-userptr_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- struct local_i915_gem_userptr userptr;
- drm_intel_bo *bo;
- void *ptr;
-
- memset(&userptr, 0, sizeof(userptr));
- userptr.user_size = width * height * 4;
- userptr.user_size = (userptr.user_size + 4095) & -4096;
-
- ptr = mmap(NULL, userptr.user_size,
- PROT_READ | PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0);
- igt_assert(ptr != (void *)-1);
- userptr.user_ptr = (uintptr_t)ptr;
-
- do_or_die(drmIoctl(fd, LOCAL_IOCTL_I915_GEM_USERPTR, &userptr));
- bo = gem_handle_to_libdrm_bo(bufmgr, fd, "userptr", userptr.handle);
- bo->virtual = (void *)(uintptr_t)userptr.user_ptr;
- gem_close(fd, userptr.handle);
-
- return bo;
-}
-
-static void
-userptr_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- int size = b->size;
- uint32_t *vaddr = bo->virtual;
-
- gem_set_domain(fd, bo->handle,
- I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
- while (size--)
- *vaddr++ = val;
-}
-
-static void
-userptr_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- int size = b->size;
- uint32_t *vaddr = bo->virtual;
-
- gem_set_domain(fd, bo->handle,
- I915_GEM_DOMAIN_CPU, 0);
- while (size--)
- igt_assert_eq_u32(*vaddr++, val);
-}
-
-static void
-userptr_release_bo(drm_intel_bo *bo)
-{
- munmap(bo->virtual, bo->size);
- bo->virtual = NULL;
-
- drm_intel_bo_unreference(bo);
-}
-
-static void
-gtt_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- uint32_t *vaddr = bo->virtual;
- int size = b->size;
-
- drm_intel_gem_bo_start_gtt_access(bo, true);
- while (size--)
- *vaddr++ = val;
-}
-
-static void
-gtt_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- uint32_t *vaddr = bo->virtual;
-
- /* GTT access is slow. So we just compare a few points */
- drm_intel_gem_bo_start_gtt_access(bo, false);
- for (int y = 0; y < b->height; y++)
- igt_assert_eq_u32(vaddr[pixel(y, b->width)], val);
-}
-
-static drm_intel_bo *
-map_bo(drm_intel_bo *bo)
-{
- /* gtt map doesn't have a write parameter, so just keep the mapping
- * around (to avoid the set_domain with the gtt write domain set) and
- * manually tell the kernel when we start access the gtt. */
- do_or_die(drm_intel_gem_bo_map_gtt(bo));
-
- return bo;
-}
-
-static drm_intel_bo *
-tile_bo(drm_intel_bo *bo, int width)
-{
- uint32_t tiling = I915_TILING_X;
- uint32_t stride = width * 4;
-
- do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
-
- return bo;
-}
-
-static drm_intel_bo *
-gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return map_bo(unmapped_create_bo(bufmgr, width, height));
-}
-
-static drm_intel_bo *
-gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gtt_create_bo(bufmgr, width, height), width);
-}
-
-static drm_intel_bo *
-wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- gem_require_mmap_wc(fd);
-
- bo = unmapped_create_bo(bufmgr, width, height);
- bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
- return bo;
-}
-
-static void
-wc_release_bo(drm_intel_bo *bo)
-{
- munmap(bo->virtual, bo->size);
- bo->virtual = NULL;
-
- nop_release_bo(bo);
-}
-
-static drm_intel_bo *
-gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return unmapped_create_bo(bufmgr, width, height);
-}
-
-static drm_intel_bo *
-gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gpu_create_bo(bufmgr, width, height), width);
-}
-
-static void
-cpu_set_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- int size = b->size;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, true));
- vaddr = bo->virtual;
- while (size--)
- *vaddr++ = val;
- drm_intel_bo_unmap(bo);
-}
-
-static void
-cpu_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- int size = b->size;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, false));
- vaddr = bo->virtual;
- while (size--)
- igt_assert_eq_u32(*vaddr++, val);
- drm_intel_bo_unmap(bo);
-}
-
-static void
-gpu_set_bo(struct buffers *buffers, drm_intel_bo *bo, uint32_t val)
-{
- struct drm_i915_gem_relocation_entry reloc[1];
- struct drm_i915_gem_exec_object2 gem_exec[2];
- struct drm_i915_gem_execbuffer2 execbuf;
- uint32_t buf[10], *b;
- uint32_t tiling, swizzle;
-
- drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
-
- memset(reloc, 0, sizeof(reloc));
- memset(gem_exec, 0, sizeof(gem_exec));
- memset(&execbuf, 0, sizeof(execbuf));
-
- b = buf;
- *b++ = XY_COLOR_BLT_CMD_NOLEN |
- ((gen >= 8) ? 5 : 4) |
- COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
- if (gen >= 4 && tiling) {
- b[-1] |= XY_COLOR_BLT_TILED;
- *b = buffers->width;
- } else
- *b = buffers->width << 2;
- *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
- *b++ = 0;
- *b++ = buffers->height << 16 | buffers->width;
- reloc[0].offset = (b - buf) * sizeof(uint32_t);
- reloc[0].target_handle = bo->handle;
- reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
- reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
- *b++ = 0;
- if (gen >= 8)
- *b++ = 0;
- *b++ = val;
- *b++ = MI_BATCH_BUFFER_END;
- if ((b - buf) & 1)
- *b++ = 0;
-
- gem_exec[0].handle = bo->handle;
- gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
-
- gem_exec[1].handle = gem_create(fd, 4096);
- gem_exec[1].relocation_count = 1;
- gem_exec[1].relocs_ptr = (uintptr_t)reloc;
-
- execbuf.buffers_ptr = (uintptr_t)gem_exec;
- execbuf.buffer_count = 2;
- execbuf.batch_len = (b - buf) * sizeof(buf[0]);
- if (gen >= 6)
- execbuf.flags = I915_EXEC_BLT;
-
- gem_write(fd, gem_exec[1].handle, 0, buf, execbuf.batch_len);
- gem_execbuf(fd, &execbuf);
-
- gem_close(fd, gem_exec[1].handle);
-}
-
-static void
-gpu_cmp_bo(struct buffers *b, drm_intel_bo *bo, uint32_t val)
-{
- blt_copy_bo(b, b->snoop, bo);
- cpu_cmp_bo(b, b->snoop, val);
-}
-
-const struct access_mode {
- const char *name;
- bool (*require)(void);
- void (*set_bo)(struct buffers *b, drm_intel_bo *bo, uint32_t val);
- void (*cmp_bo)(struct buffers *b, drm_intel_bo *bo, uint32_t val);
- drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
- void (*release_bo)(drm_intel_bo *bo);
-} access_modes[] = {
- {
- .name = "prw",
- .set_bo = prw_set_bo,
- .cmp_bo = prw_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "partial",
- .set_bo = partial_set_bo,
- .cmp_bo = partial_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "cpu",
- .require = create_cpu_require,
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "snoop",
- .require = create_snoop_require,
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = snoop_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "userptr",
- .require = create_userptr_require,
- .set_bo = userptr_set_bo,
- .cmp_bo = userptr_cmp_bo,
- .create_bo = userptr_create_bo,
- .release_bo = userptr_release_bo,
- },
- {
- .name = "gtt",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gtt_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gttX",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gttX_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "wc",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = wc_create_bo,
- .release_bo = wc_release_bo,
- },
- {
- .name = "gpu",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpu_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gpuX",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpuX_create_bo,
- .release_bo = nop_release_bo,
- },
-};
-
-int num_buffers;
-igt_render_copyfunc_t rendercopy;
-
-static void *buffers_init(struct buffers *data,
- const struct access_mode *mode,
- int width, int height,
- int _fd, int enable_reuse)
-{
- data->mode = mode;
- data->count = 0;
-
- data->width = width;
- data->height = height;
- data->size = width * height;
- data->tmp = malloc(4*data->size);
- igt_assert(data->tmp);
-
- data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
- igt_assert(data->bufmgr);
-
- data->src = malloc(2*sizeof(drm_intel_bo *)*num_buffers);
- igt_assert(data->src);
- data->dst = data->src + num_buffers;
-
- if (enable_reuse)
- drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
- return intel_batchbuffer_alloc(data->bufmgr, devid);
-}
-
-static void buffers_destroy(struct buffers *data)
-{
- if (data->count == 0)
- return;
-
- for (int i = 0; i < data->count; i++) {
- data->mode->release_bo(data->src[i]);
- data->mode->release_bo(data->dst[i]);
- }
- data->mode->release_bo(data->snoop);
- data->mode->release_bo(data->spare);
- data->count = 0;
-}
-
-static void buffers_create(struct buffers *data,
- int count)
-{
- int width = data->width, height = data->height;
- igt_assert(data->bufmgr);
-
- buffers_destroy(data);
-
- for (int i = 0; i < count; i++) {
- data->src[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- data->dst[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- }
- data->spare = data->mode->create_bo(data->bufmgr, width, height);
- data->snoop = snoop_create_bo(data->bufmgr, width, height);
- data->count = count;
-}
-
-static void buffers_fini(struct buffers *data)
-{
- if (data->bufmgr == NULL)
- return;
-
- buffers_destroy(data);
-
- free(data->tmp);
- free(data->src);
- data->src = NULL;
- data->dst = NULL;
-
- intel_batchbuffer_free(batch);
- drm_intel_bufmgr_destroy(data->bufmgr);
- data->bufmgr = NULL;
-}
-
-typedef void (*do_copy)(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src);
-typedef struct igt_hang_ring (*do_hang)(void);
-
-static void render_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
-{
- struct igt_buf d = {
- .bo = dst,
- .size = b->size * 4,
- .num_tiles = b->size * 4,
- .stride = b->width * 4,
- }, s = {
- .bo = src,
- .size = b->size * 4,
- .num_tiles = b->size * 4,
- .stride = b->width * 4,
- };
- uint32_t swizzle;
-
- drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
- drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
-
- rendercopy(batch, NULL,
- &s, 0, 0,
- b->width, b->height,
- &d, 0, 0);
-}
-
-static void blt_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
-{
- intel_blt_copy(batch,
- src, 0, 0, 4*b->width,
- dst, 0, 0, 4*b->width,
- b->width, b->height, 32);
-}
-
-static void cpu_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = b->size * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
- s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void gtt_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = b->size * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
- d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void wc_copy_bo(struct buffers *b, drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = b->width * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static struct igt_hang_ring no_hang(void)
-{
- return (struct igt_hang_ring){0, 0};
-}
-
-static struct igt_hang_ring bcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_BLT);
-}
-
-static struct igt_hang_ring rcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_RENDER);
-}
-
-static void do_basic0(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- gem_quiescent_gpu(fd);
-
- buffers->mode->set_bo(buffers, buffers->src[0], 0xdeadbeef);
- for (int i = 0; i < buffers->count; i++) {
- struct igt_hang_ring hang = do_hang_func();
-
- do_copy_func(buffers, buffers->dst[i], buffers->src[0]);
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef);
-
- igt_post_hang_ring(fd, hang);
- }
-}
-
-static void do_basic1(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- gem_quiescent_gpu(fd);
-
- for (int i = 0; i < buffers->count; i++) {
- struct igt_hang_ring hang = do_hang_func();
-
- buffers->mode->set_bo(buffers, buffers->src[i], i);
- buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
-
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- usleep(0); /* let someone else claim the mutex */
- buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
-
- igt_post_hang_ring(fd, hang);
- }
-}
-
-static void do_basicN(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
-
- gem_quiescent_gpu(fd);
-
- for (int i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers, buffers->src[i], i);
- buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
- }
-
- hang = do_hang_func();
-
- for (int i = 0; i < buffers->count; i++) {
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- usleep(0); /* let someone else claim the mutex */
- }
-
- for (int i = 0; i < buffers->count; i++)
- buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
-
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers, buffers->src[i], i);
- buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
- for (i = 0; i < buffers->count; i++)
- buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < half; i++) {
- buffers->mode->set_bo(buffers, buffers->src[i], i);
- buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
- buffers->mode->set_bo(buffers, buffers->dst[i+half], ~i);
- }
- for (i = 0; i < half; i++) {
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- if (do_rcs)
- render_copy_bo(buffers, buffers->dst[i+half], buffers->src[i]);
- else
- blt_copy_bo(buffers, buffers->dst[i+half], buffers->src[i]);
- }
- hang = do_hang_func();
- for (i = half; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
- for (i = 0; i < half; i++) {
- buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
- buffers->mode->cmp_bo(buffers, buffers->dst[i+half], i);
- }
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_overwrite_source_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_overwrite_source__rev(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers, buffers->src[i], i);
- buffers->mode->set_bo(buffers, buffers->dst[i], ~i);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = 0; i < buffers->count; i++)
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source__one(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
-
- gem_quiescent_gpu(fd);
- buffers->mode->set_bo(buffers, buffers->src[0], 0);
- buffers->mode->set_bo(buffers, buffers->dst[0], ~0);
- do_copy_func(buffers, buffers->dst[0], buffers->src[0]);
- hang = do_hang_func();
- buffers->mode->set_bo(buffers, buffers->src[0], 0xdeadbeef);
- buffers->mode->cmp_bo(buffers, buffers->dst[0], 0);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef^~i);
- buffers->mode->set_bo(buffers, buffers->dst[i], i);
- }
- for (i = 0; i < half; i++) {
- if (do_rcs == 1 || (do_rcs == -1 && i & 1))
- render_copy_bo(buffers, buffers->dst[i], buffers->src[i]);
- else
- blt_copy_bo(buffers, buffers->dst[i], buffers->src[i]);
-
- do_copy_func(buffers, buffers->dst[i+half], buffers->src[i]);
-
- if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
- render_copy_bo(buffers, buffers->dst[i], buffers->dst[i+half]);
- else
- blt_copy_bo(buffers, buffers->dst[i], buffers->dst[i+half]);
-
- do_copy_func(buffers, buffers->dst[i+half], buffers->src[i+half]);
- }
- hang = do_hang_func();
- for (i = 0; i < 2*half; i++)
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef^~i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_intermix_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_intermix_both(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, -1);
-}
-
-static void do_early_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- blt_copy_bo(buffers, buffers->spare, buffers->src[i]);
- }
- buffers->mode->cmp_bo(buffers, buffers->spare, 0xdeadbeef^(buffers->count-1));
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_write_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
- for (i = 0; i < buffers->count; i++) {
- blt_copy_bo(buffers, buffers->spare, buffers->src[i]);
- do_copy_func(buffers, buffers->dst[i], buffers->spare);
- }
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- render_copy_bo(buffers, buffers->spare, buffers->src[i]);
- }
- buffers->mode->cmp_bo(buffers, buffers->spare, 0xdeadbeef^(buffers->count-1));
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_write_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xdeadbeef ^ i);
- for (i = 0; i < buffers->count; i++) {
- render_copy_bo(buffers, buffers->spare, buffers->src[i]);
- do_copy_func(buffers, buffers->dst[i], buffers->spare);
- }
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xdeadbeef ^ i);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_gpu_read_after_write(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers, buffers->src[i], 0xabcdabcd);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers, buffers->dst[i], buffers->src[i]);
- for (i = buffers->count; i--; )
- do_copy_func(buffers, buffers->spare, buffers->dst[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers, buffers->dst[i], 0xabcdabcd);
- igt_post_hang_ring(fd, hang);
-}
-
-typedef void (*do_test)(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-typedef void (*run_wrap)(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-static void run_single(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_test_func(buffers, do_copy_func, do_hang_func);
- igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
-}
-
-static void run_interruptible(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- for (pass = 0; pass < 10; pass++)
- do_test_func(buffers, do_copy_func, do_hang_func);
- pass = 0;
- igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
-}
-
-static void run_child(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-
-{
- /* We inherit the buffers from the parent, but the bufmgr/batch
- * needs to be local as the cache of reusable itself will be COWed,
- * leading to the child closing an object without the parent knowing.
- */
- igt_fork(child, 1)
- do_test_func(buffers, do_copy_func, do_hang_func);
- igt_waitchildren();
- igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
-}
-
-static void __run_forked(struct buffers *buffers,
- int num_children, int loops,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-
-{
- const int old_num_buffers = num_buffers;
-
- num_buffers /= num_children;
- num_buffers += MIN_BUFFERS;
-
- igt_fork(child, num_children) {
- /* recreate process local variables */
- buffers->count = 0;
- fd = drm_open_driver(DRIVER_INTEL);
-
- batch = buffers_init(buffers, buffers->mode,
- buffers->width, buffers->height,
- fd, true);
-
- buffers_create(buffers, num_buffers);
- for (pass = 0; pass < loops; pass++)
- do_test_func(buffers, do_copy_func, do_hang_func);
- pass = 0;
-
- buffers_fini(buffers);
- }
-
- igt_waitchildren();
- igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
-
- num_buffers = old_num_buffers;
-}
-
-static void run_forked(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- __run_forked(buffers, sysconf(_SC_NPROCESSORS_ONLN), 10,
- do_test_func, do_copy_func, do_hang_func);
-}
-
-static void run_bomb(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- __run_forked(buffers, 8*sysconf(_SC_NPROCESSORS_ONLN), 10,
- do_test_func, do_copy_func, do_hang_func);
-}
-
-static void bit17_require(void)
-{
- struct drm_i915_gem_get_tiling2 {
- uint32_t handle;
- uint32_t tiling_mode;
- uint32_t swizzle_mode;
- uint32_t phys_swizzle_mode;
- } arg;
-#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
-
- memset(&arg, 0, sizeof(arg));
- arg.handle = gem_create(fd, 4096);
- gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
-
- do_ioctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg);
- gem_close(fd, arg.handle);
- igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
-}
-
-static void cpu_require(void)
-{
- bit17_require();
-}
-
-static void gtt_require(void)
-{
-}
-
-static void wc_require(void)
-{
- bit17_require();
- gem_require_mmap_wc(fd);
-}
-
-static void bcs_require(void)
-{
-}
-
-static void rcs_require(void)
-{
- igt_require(rendercopy);
-}
-
-static void
-run_basic_modes(const char *prefix,
- const struct access_mode *mode,
- const char *suffix,
- run_wrap run_wrap_func)
-{
- const struct {
- const char *prefix;
- do_copy copy;
- void (*require)(void);
- } pipelines[] = {
- { "cpu", cpu_copy_bo, cpu_require },
- { "gtt", gtt_copy_bo, gtt_require },
- { "wc", wc_copy_bo, wc_require },
- { "blt", blt_copy_bo, bcs_require },
- { "render", render_copy_bo, rcs_require },
- { NULL, NULL }
- }, *pskip = pipelines + 3, *p;
- const struct {
- const char *suffix;
- do_hang hang;
- } hangs[] = {
- { "", no_hang },
- { "-hang-blt", bcs_hang },
- { "-hang-render", rcs_hang },
- { NULL, NULL },
- }, *h;
-
- for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
-
- for (p = all ? pipelines : pskip; p->prefix; p++) {
- struct buffers buffers;
-
- igt_fixture
- batch = buffers_init(&buffers, mode,
- 512, 512, fd,
- run_wrap_func != run_child);
-
- igt_subtest_f("%s-%s-%s-sanitycheck0%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers, do_basic0,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-%s-sanitycheck1%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers, do_basic1,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-%s-sanitycheckN%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers, do_basicN,
- p->copy, h->hang);
- }
-
- /* try to overwrite the source values */
- igt_subtest_f("%s-%s-%s-overwrite-source-one%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__one,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-%s-overwrite-source%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-%s-overwrite-source-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_bcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-%s-overwrite-source-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_rcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-%s-overwrite-source-rev%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__rev,
- p->copy, h->hang);
- }
-
- /* try to intermix copies with GPU copies*/
- igt_subtest_f("%s-%s-%s-intermix-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_rcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-%s-intermix-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-%s-intermix-both%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_both,
- p->copy, h->hang);
- }
-
- /* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-%s-early-read%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_early_read,
- p->copy, h->hang);
- }
-
- /* concurrent reads */
- igt_subtest_f("%s-%s-%s-read-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-%s-read-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_rcs,
- p->copy, h->hang);
- }
-
- /* split copying between rings */
- igt_subtest_f("%s-%s-%s-write-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_write_read_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-%s-write-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_write_read_rcs,
- p->copy, h->hang);
- }
-
- /* and finally try to trick the kernel into loosing the pending write */
- igt_subtest_f("%s-%s-%s-gpu-read-after-write%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_gpu_read_after_write,
- p->copy, h->hang);
- }
-
- igt_fixture
- buffers_fini(&buffers);
- }
- }
-}
-
-static void
-run_modes(const char *style, const struct access_mode *mode, unsigned allow_mem)
-{
- if (mode->require && !mode->require())
- return;
-
- igt_debug("%s: using 2x%d buffers, each 1MiB\n",
- style, num_buffers);
- if (!__intel_check_memory(2*num_buffers, 1024*1024, allow_mem,
- NULL, NULL))
- return;
-
- run_basic_modes(style, mode, "", run_single);
- run_basic_modes(style, mode, "-child", run_child);
- run_basic_modes(style, mode, "-forked", run_forked);
-
- igt_fork_signal_helper();
- run_basic_modes(style, mode, "-interruptible", run_interruptible);
- run_basic_modes(style, mode, "-bomb", run_bomb);
- igt_stop_signal_helper();
-}
-
-igt_main
-{
- const struct {
- const char *name;
- drm_intel_bo *(*create)(drm_intel_bufmgr *, uint64_t size);
- bool (*require)(void);
- } create[] = {
- { "", create_normal_bo, can_create_normal},
- { "private-", create_private_bo, can_create_private },
- { "stolen-", create_stolen_bo, can_create_stolen },
- { NULL, NULL }
- }, *c;
- uint64_t pin_sz = 0;
- void *pinned = NULL;
- int i;
-
- igt_skip_on_simulation();
-
- if (strstr(igt_test_name(), "all"))
- all = true;
-
- igt_fixture {
- fd = drm_open_driver(DRIVER_INTEL);
- intel_detect_and_clear_missed_interrupts(fd);
- devid = intel_get_drm_devid(fd);
- gen = intel_gen(devid);
- rendercopy = igt_get_render_copyfunc(devid);
- }
-
- for (c = create; c->name; c++) {
- char name[80];
-
- create_func = c->create;
-
- num_buffers = MIN_BUFFERS;
- if (c->require()) {
- snprintf(name, sizeof(name), "%s%s", c->name, "tiny");
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(name, &access_modes[i], CHECK_RAM);
- }
-
- igt_fixture {
- num_buffers = gem_mappable_aperture_size() / (1024 * 1024) / 4;
- }
-
- if (c->require()) {
- snprintf(name, sizeof(name), "%s%s", c->name, "small");
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(name, &access_modes[i], CHECK_RAM);
- }
-
- igt_fixture {
- num_buffers = gem_mappable_aperture_size() / (1024 * 1024);
- }
-
- if (c->require()) {
- snprintf(name, sizeof(name), "%s%s", c->name, "thrash");
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(name, &access_modes[i], CHECK_RAM);
- }
-
- igt_fixture {
- num_buffers = gem_aperture_size(fd) / (1024 * 1024);
- }
-
- if (c->require()) {
- snprintf(name, sizeof(name), "%s%s", c->name, "full");
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(name, &access_modes[i], CHECK_RAM);
- }
-
- igt_fixture {
- num_buffers = gem_mappable_aperture_size() / (1024 * 1024);
- pin_sz = intel_get_avail_ram_mb() - num_buffers;
-
- igt_debug("Pinning %ld MiB\n", pin_sz);
- pin_sz *= 1024 * 1024;
-
- if (posix_memalign(&pinned, 4096, pin_sz) ||
- mlock(pinned, pin_sz) ||
- madvise(pinned, pin_sz, MADV_DONTFORK)) {
- free(pinned);
- pinned = NULL;
- }
- igt_require(pinned);
- }
-
- if (c->require()) {
- snprintf(name, sizeof(name), "%s%s", c->name, "swap");
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(name, &access_modes[i], CHECK_RAM | CHECK_SWAP);
- }
-
- igt_fixture {
- if (pinned) {
- munlock(pinned, pin_sz);
- free(pinned);
- pinned = NULL;
- }
- }
- }
-}
--
2.7.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests
2016-02-11 11:09 ` [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests David Weinehall
@ 2016-02-11 13:04 ` Chris Wilson
2016-02-11 13:40 ` David Weinehall
2016-02-16 15:45 ` Daniel Vetter
1 sibling, 1 reply; 8+ messages in thread
From: Chris Wilson @ 2016-02-11 13:04 UTC (permalink / raw)
To: David Weinehall; +Cc: intel-gfx
On Thu, Feb 11, 2016 at 01:09:33PM +0200, David Weinehall wrote:
> +enum {
> + /* The set of tests run if nothing else is specified */
> + SUBTEST_TYPE_NORMAL = 1 << 0,
> + /* Basic Acceptance Testing set */
> + SUBTEST_TYPE_BASIC = 1 << 1,
> + /* Tests that are very slow */
> + SUBTEST_TYPE_SLOW = 1 << 2,
I still feel that slow isn't a useful discriminant for tests. Off the
top of my head, I would like
HANG,
SWAP,
STRESS (though this too is somewhat undefined)
that should cover the majority of the GEM cases. Looking at kms_flip
should also offer a few categories.
As for gem_concurrent_blit, I basically expect to
have h->subtest_flags | p->subtest_flags | subtest_flags.
I also expect QA to run the full gem_concurrent_blit anyway since it is
showing up regressions from over a year ago.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests
2016-02-11 13:04 ` Chris Wilson
@ 2016-02-11 13:40 ` David Weinehall
0 siblings, 0 replies; 8+ messages in thread
From: David Weinehall @ 2016-02-11 13:40 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
On Thu, Feb 11, 2016 at 01:04:14PM +0000, Chris Wilson wrote:
> On Thu, Feb 11, 2016 at 01:09:33PM +0200, David Weinehall wrote:
> > +enum {
> > + /* The set of tests run if nothing else is specified */
> > + SUBTEST_TYPE_NORMAL = 1 << 0,
> > + /* Basic Acceptance Testing set */
> > + SUBTEST_TYPE_BASIC = 1 << 1,
> > + /* Tests that are very slow */
> > + SUBTEST_TYPE_SLOW = 1 << 2,
>
> I still feel that slow isn't a useful discriminant for tests. Off the
> top of my head, I would like
>
> HANG,
> SWAP,
> STRESS (though this too is somewhat undefined)
>
> that should cover the majority of the GEM cases. Looking at kms_flip
> should also offer a few categories.
As I said the first time I posted this patch, I'm open to both
alternative names and help with categorising the subtests.
> As for gem_concurrent_blit, I basically expect to
> have h->subtest_flags | p->subtest_flags | subtest_flags.
>
> I also expect QA to run the full gem_concurrent_blit anyway since it is
> showing up regressions from over a year ago.
Yeah, I did a partial run of the full set yesterday and spotted about
150 failed tests within an hour or so. After that I had to abort the
test to continue with other work on that machine.
But it's fairly obvious that the limited set is a lot better tested.
Kind regards, David
* Re: [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests
2016-02-11 11:09 ` [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests David Weinehall
2016-02-11 13:04 ` Chris Wilson
@ 2016-02-16 15:45 ` Daniel Vetter
2016-02-18 8:42 ` David Weinehall
1 sibling, 1 reply; 8+ messages in thread
From: Daniel Vetter @ 2016-02-16 15:45 UTC (permalink / raw)
To: David Weinehall; +Cc: intel-gfx
On Thu, Feb 11, 2016 at 01:09:33PM +0200, David Weinehall wrote:
> Some subtests are not run by default, for various reasons;
> be it because they're only for debugging, because they're slow,
> or because they are not of high enough quality.
>
> This patch aims to introduce a common mechanism for categorising
> the subtests and introduces a flag (--all) that runs/lists all
> subtests instead of just the default set.
Is the idea to also add a --test-flags interface? How is this meant to
integrate with the current BAT runtime? Also we need to figure out
how/whether we need to change the spec for how testrunners are supposed to
run igt tests. Plus obviously patch igt.py in piglit.
I think I like where this is going, but I guess it needs a bit more
polish still. Integrating this into how we run/select BAT would be a great
showcase of your concept I think.
Cheers, Daniel
>
> Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
> ---
> lib/igt_core.c | 43 +++++++--
> lib/igt_core.h | 42 +++++++++
> tests/gem_concurrent_blit.c | 50 +++++------
> tests/kms_frontbuffer_tracking.c | 186 ++++++++++++++++++++++-----------------
> 4 files changed, 210 insertions(+), 111 deletions(-)
>
> diff --git a/lib/igt_core.c b/lib/igt_core.c
> index 6b69bb780700..e6e6949ed65a 100644
> --- a/lib/igt_core.c
> +++ b/lib/igt_core.c
> @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>
> /* subtests helpers */
> static bool list_subtests = false;
> +static unsigned int subtest_types_mask = SUBTEST_TYPE_NORMAL;
> static char *run_single_subtest = NULL;
> static bool run_single_subtest_found = false;
> static const char *in_subtest = NULL;
> @@ -237,12 +238,13 @@ int test_children_sz;
> bool test_child;
>
> enum {
> - OPT_LIST_SUBTESTS,
> - OPT_RUN_SUBTEST,
> - OPT_DESCRIPTION,
> - OPT_DEBUG,
> - OPT_INTERACTIVE_DEBUG,
> - OPT_HELP = 'h'
> + OPT_LIST_SUBTESTS,
> + OPT_WITH_ALL_SUBTESTS,
> + OPT_RUN_SUBTEST,
> + OPT_DESCRIPTION,
> + OPT_DEBUG,
> + OPT_INTERACTIVE_DEBUG,
> + OPT_HELP = 'h'
> };
>
> static int igt_exitcode = IGT_EXIT_SUCCESS;
> @@ -516,6 +518,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>
> fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> fprintf(f, " --list-subtests\n"
> + " --all\n"
> " --run-subtest <pattern>\n"
> " --debug[=log-domain]\n"
> " --interactive-debug[=domain]\n"
> @@ -548,6 +551,7 @@ static int common_init(int *argc, char **argv,
> int c, option_index = 0, i, x;
> static struct option long_options[] = {
> {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> + {"all", 0, 0, OPT_WITH_ALL_SUBTESTS},
> {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> {"help-description", 0, 0, OPT_DESCRIPTION},
> {"debug", optional_argument, 0, OPT_DEBUG},
> @@ -659,6 +663,10 @@ static int common_init(int *argc, char **argv,
> if (!run_single_subtest)
> list_subtests = true;
> break;
> + case OPT_WITH_ALL_SUBTESTS:
> + if (!run_single_subtest)
> + subtest_types_mask = SUBTEST_TYPE_ALL;
> + break;
> case OPT_RUN_SUBTEST:
> if (!list_subtests)
> run_single_subtest = strdup(optarg);
> @@ -1667,6 +1675,29 @@ void igt_skip_on_simulation(void)
> igt_require(!igt_run_in_simulation());
> }
>
> +/**
> + * igt_match_subtest_flags:
> + *
> + * This function is used to check whether the attributes of the subtest
> + * makes it a candidate for inclusion in the test run; this is used to
> + * categorise tests, for instance to exclude tests that are purely for
> + * debug purposes, tests that are specific to certain environments,
> + * or tests that are very slow.
> + *
> + * Note that a test has to have all its flags met to be run; for instance
> + * a subtest with the flags SUBTEST_TYPE_SLOW | SUBTEST_TYPE_DEBUG requires
> + * "--subtest-types=slow,debug" or "--all" to be executed
> + *
> + * @subtest_flags: The subtests to check for
> + *
> + * Returns: true if the subtest test should be run,
> + * false if the subtest should be skipped
> + */
> +bool igt_match_subtest_flags(unsigned long subtest_flags)
> +{
> + return ((subtest_flags & subtest_types_mask) == subtest_flags);
> +}
> +
> /* structured logging */
>
> /**
> diff --git a/lib/igt_core.h b/lib/igt_core.h
> index 8f297e06a068..bf83de609bfa 100644
> --- a/lib/igt_core.h
> +++ b/lib/igt_core.h
> @@ -193,6 +193,48 @@ bool __igt_run_subtest(const char *subtest_name);
> #define igt_subtest_f(f...) \
> __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
>
> +enum {
> + /* The set of tests run if nothing else is specified */
> + SUBTEST_TYPE_NORMAL = 1 << 0,
> + /* Basic Acceptance Testing set */
> + SUBTEST_TYPE_BASIC = 1 << 1,
> + /* Tests that are very slow */
> + SUBTEST_TYPE_SLOW = 1 << 2,
> + /* Tests that mainly intended for debugging */
> + SUBTEST_TYPE_DEBUG = 1 << 3,
> + SUBTEST_TYPE_ALL = ~0
> +} subtest_types;
> +
> +bool igt_match_subtest_flags(unsigned long subtest_flags);
> +
> +/**
> + * igt_subtest_flags:
> + * @name: name of the subtest
> + * @__subtest_flags: the categories the subtest belongs to
> + *
> + * This is a wrapper around igt_subtest that will only execute the
> + * testcase if all of the flags passed to this function match those
> + * specified by the list of subtest categories passed from the
> + * command line; the default category is SUBTEST_TYPE_NORMAL.
> + */
> +#define igt_subtest_flags(name, __subtest_flags) \
> + if (igt_match_subtest_flags(__subtest_flags)) \
> + igt_subtest(name)
> +
> +/**
> + * igt_subtest_flags_f:
> + * @...: format string and optional arguments
> + * @__subtest_flags: the categories the subtest belongs to
> + *
> + * This is a wrapper around igt_subtest_f that will only execute the
> + * testcase if all of the flags passed to this function match those
> + * specified by the list of subtest categories passed from the
> + * command line; the default category is SUBTEST_TYPE_NORMAL.
> + */
> +#define igt_subtest_flags_f(__subtest_flags, f...) \
> + if (igt_match_subtest_flags(__subtest_flags)) \
> + __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
> +
> const char *igt_subtest_name(void);
> bool igt_only_list_subtests(void);
>
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 9b7ef8700e31..3c23a95c4631 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -64,7 +64,6 @@ struct local_i915_gem_userptr {
>
> int fd, devid, gen;
> struct intel_batchbuffer *batch;
> -int all;
> int pass;
>
> struct buffers {
> @@ -1256,10 +1255,14 @@ run_basic_modes(const char *prefix,
> }, *h;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> + unsigned int subtest_flags;
>
> - for (p = all ? pipelines : pskip; p->prefix; p++) {
> + if (*h->suffix)
> + subtest_flags = SUBTEST_TYPE_SLOW;
> + else
> + subtest_flags = SUBTEST_TYPE_NORMAL;
> +
> + for (p = igt_match_subtest_flags(SUBTEST_TYPE_SLOW) ? pipelines : pskip; p->prefix; p++) {
> struct buffers buffers;
>
> igt_fixture
> @@ -1267,21 +1270,21 @@ run_basic_modes(const char *prefix,
> 512, 512, fd,
> run_wrap_func != run_child);
>
> - igt_subtest_f("%s-%s-%s-sanitycheck0%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-sanitycheck0%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers, do_basic0,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-%s-sanitycheck1%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-sanitycheck1%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers, do_basic1,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-%s-sanitycheckN%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-sanitycheckN%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers, do_basicN,
> @@ -1289,7 +1292,7 @@ run_basic_modes(const char *prefix,
> }
>
> /* try to overwrite the source values */
> - igt_subtest_f("%s-%s-%s-overwrite-source-one%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-one%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> @@ -1297,7 +1300,7 @@ run_basic_modes(const char *prefix,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-%s-overwrite-source%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> @@ -1305,7 +1308,7 @@ run_basic_modes(const char *prefix,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-%s-overwrite-source-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> @@ -1313,7 +1316,7 @@ run_basic_modes(const char *prefix,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-%s-overwrite-source-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> igt_require(rendercopy);
> buffers_create(&buffers, num_buffers);
> @@ -1322,7 +1325,7 @@ run_basic_modes(const char *prefix,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-%s-overwrite-source-rev%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-overwrite-source-rev%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> @@ -1331,7 +1334,7 @@ run_basic_modes(const char *prefix,
> }
>
> /* try to intermix copies with GPU copies*/
> - igt_subtest_f("%s-%s-%s-intermix-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-intermix-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> igt_require(rendercopy);
> buffers_create(&buffers, num_buffers);
> @@ -1339,7 +1342,7 @@ run_basic_modes(const char *prefix,
> do_intermix_rcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-%s-intermix-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-intermix-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> igt_require(rendercopy);
> buffers_create(&buffers, num_buffers);
> @@ -1347,7 +1350,7 @@ run_basic_modes(const char *prefix,
> do_intermix_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-%s-intermix-both%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-intermix-both%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> igt_require(rendercopy);
> buffers_create(&buffers, num_buffers);
> @@ -1357,7 +1360,7 @@ run_basic_modes(const char *prefix,
> }
>
> /* try to read the results before the copy completes */
> - igt_subtest_f("%s-%s-%s-early-read%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-early-read%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> @@ -1366,14 +1369,14 @@ run_basic_modes(const char *prefix,
> }
>
> /* concurrent reads */
> - igt_subtest_f("%s-%s-%s-read-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-read-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> do_read_read_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-%s-read-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-read-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> igt_require(rendercopy);
> buffers_create(&buffers, num_buffers);
> @@ -1383,14 +1386,14 @@ run_basic_modes(const char *prefix,
> }
>
> /* split copying between rings */
> - igt_subtest_f("%s-%s-%s-write-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-write-read-bcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> do_write_read_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-%s-write-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-write-read-rcs%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> igt_require(rendercopy);
> buffers_create(&buffers, num_buffers);
> @@ -1399,8 +1402,8 @@ run_basic_modes(const char *prefix,
> p->copy, h->hang);
> }
>
> - /* and finally try to trick the kernel into loosing the pending write */
> - igt_subtest_f("%s-%s-%s-gpu-read-after-write%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> + /* and finally try to trick the kernel into losing the pending write */
> + igt_subtest_flags_f(subtest_flags, "%s-%s-%s-gpu-read-after-write%s%s", prefix, mode->name, p->prefix, suffix, h->suffix) {
> p->require();
> buffers_create(&buffers, num_buffers);
> run_wrap_func(&buffers,
> @@ -1454,9 +1457,6 @@ igt_main
>
> igt_skip_on_simulation();
>
> - if (strstr(igt_test_name(), "all"))
> - all = true;
> -
> igt_fixture {
> fd = drm_open_driver(DRIVER_INTEL);
> intel_detect_and_clear_missed_interrupts(fd);
> diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
> index 64f880c667a3..651303e9d392 100644
> --- a/tests/kms_frontbuffer_tracking.c
> +++ b/tests/kms_frontbuffer_tracking.c
> @@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
> * combinations that are somewhat redundant and don't add much value to the
> * test. For example, since we already do the offscreen testing with a single
> * pipe enabled, there's no much value in doing it again with dual pipes. If you
> - * still want to try these redundant tests, you need to use the --show-hidden
> - * option.
> + * still want to try these redundant tests, you need to use the --all option.
> *
> * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
> * you get a failure on any test, it is important to check whether the same test
> @@ -126,6 +125,9 @@ struct test_mode {
> } flip;
>
> enum igt_draw_method method;
> +
> + /* Specifies the subtest categories this subtest belongs to */
> + unsigned long subtest_flags;
> };
>
> enum color {
> @@ -241,7 +243,6 @@ struct {
> bool fbc_check_last_action;
> bool no_edp;
> bool small_modes;
> - bool show_hidden;
> int step;
> int only_pipes;
> int shared_fb_x_offset;
> @@ -253,7 +254,6 @@ struct {
> .fbc_check_last_action = true,
> .no_edp = false,
> .small_modes = false,
> - .show_hidden= false,
> .step = 0,
> .only_pipes = PIPE_COUNT,
> .shared_fb_x_offset = 500,
> @@ -3075,9 +3075,6 @@ static int opt_handler(int option, int option_index, void *data)
> case 'm':
> opt.small_modes = true;
> break;
> - case 'i':
> - opt.show_hidden = true;
> - break;
> case 't':
> opt.step++;
> break;
> @@ -3113,7 +3110,6 @@ const char *help_str =
> " --no-fbc-action-check Don't check for the FBC last action\n"
> " --no-edp Don't use eDP monitors\n"
> " --use-small-modes Use smaller resolutions for the modes\n"
> -" --show-hidden Show hidden subtests\n"
> " --step Stop on each step so you can check the screen\n"
> " --shared-fb-x offset Use 'offset' as the X offset for the shared FB\n"
> " --shared-fb-y offset Use 'offset' as the Y offset for the shared FB\n"
> @@ -3227,18 +3223,19 @@ static const char *flip_str(enum flip_type flip)
> for (t.plane = 0; t.plane < PLANE_COUNT; t.plane++) { \
> for (t.fbs = 0; t.fbs < FBS_COUNT; t.fbs++) { \
> for (t.method = 0; t.method < IGT_DRAW_METHOD_COUNT; t.method++) { \
> + t.subtest_flags = SUBTEST_TYPE_NORMAL; \
> if (t.pipes == PIPE_SINGLE && t.screen == SCREEN_SCND) \
> continue; \
> if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
> continue; \
> - if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
> + if (t.pipes == PIPE_DUAL && \
> t.screen == SCREEN_OFFSCREEN) \
> - continue; \
> - if (!opt.show_hidden && t.feature == FEATURE_NONE) \
> - continue; \
> - if (!opt.show_hidden && t.fbs == FBS_SHARED && \
> + t.subtest_flags = SUBTEST_TYPE_SLOW; \
> + if (t.feature == FEATURE_NONE) \
> + t.subtest_flags = SUBTEST_TYPE_SLOW; \
> + if (t.fbs == FBS_SHARED && \
> (t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
> - continue;
> + t.subtest_flags = SUBTEST_TYPE_SLOW;
>
>
> #define TEST_MODE_ITER_END } } } } } }
> @@ -3253,7 +3250,6 @@ int main(int argc, char *argv[])
> { "no-fbc-action-check", 0, 0, 'a'},
> { "no-edp", 0, 0, 'e'},
> { "use-small-modes", 0, 0, 'm'},
> - { "show-hidden", 0, 0, 'i'},
> { "step", 0, 0, 't'},
> { "shared-fb-x", 1, 0, 'x'},
> { "shared-fb-y", 1, 0, 'y'},
> @@ -3269,8 +3265,9 @@ int main(int argc, char *argv[])
> setup_environment();
>
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
> - if (!opt.show_hidden && t.feature == FEATURE_NONE)
> - continue;
> + t.subtest_flags = SUBTEST_TYPE_NORMAL;
> + if (t.feature == FEATURE_NONE)
> + t.subtest_flags = SUBTEST_TYPE_SLOW;
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
> t.screen = SCREEN_PRIM;
> t.plane = PLANE_PRI;
> @@ -3280,38 +3277,43 @@ int main(int argc, char *argv[])
> t.flip = -1;
> t.method = -1;
>
> - igt_subtest_f("%s-%s-rte",
> - feature_str(t.feature),
> - pipes_str(t.pipes))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-rte",
> + feature_str(t.feature),
> + pipes_str(t.pipes))
> rte_subtest(&t);
> }
> }
>
> TEST_MODE_ITER_BEGIN(t)
> - igt_subtest_f("%s-%s-%s-%s-%s-draw-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-%s-%s-draw-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> draw_subtest(&t);
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.plane != PLANE_PRI ||
> - t.screen == SCREEN_OFFSCREEN ||
> - (!opt.show_hidden && t.method != IGT_DRAW_BLT))
> + t.screen == SCREEN_OFFSCREEN)
> continue;
>
> + if (t.method != IGT_DRAW_BLT)
> + t.subtest_flags = SUBTEST_TYPE_SLOW;
> +
> for (t.flip = 0; t.flip < FLIP_COUNT; t.flip++)
> - igt_subtest_f("%s-%s-%s-%s-%sflip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - flip_str(t.flip),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-%s-%sflip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + flip_str(t.flip),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3322,10 +3324,11 @@ int main(int argc, char *argv[])
> (t.feature & FEATURE_FBC) == 0)
> continue;
>
> - igt_subtest_f("%s-%s-%s-fliptrack",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - fbs_str(t.fbs))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-fliptrack",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + fbs_str(t.fbs))
> fliptrack_subtest(&t, FLIP_PAGEFLIP);
> TEST_MODE_ITER_END
>
> @@ -3335,20 +3338,22 @@ int main(int argc, char *argv[])
> t.plane == PLANE_PRI)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-%s-move",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-%s-%s-move",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> move_subtest(&t);
>
> - igt_subtest_f("%s-%s-%s-%s-%s-onoff",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-%s-%s-onoff",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> onoff_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3358,27 +3363,31 @@ int main(int argc, char *argv[])
> t.plane != PLANE_SPR)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-%s-fullscreen",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-%s-%s-fullscreen",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> fullscreen_plane_subtest(&t);
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.screen != SCREEN_PRIM ||
> - t.method != IGT_DRAW_BLT ||
> - (!opt.show_hidden && t.plane != PLANE_PRI) ||
> - (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
> + t.method != IGT_DRAW_BLT)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-multidraw",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + if (t.plane != PLANE_PRI ||
> + t.fbs != FBS_INDIVIDUAL)
> + t.subtest_flags = SUBTEST_TYPE_SLOW;
> +
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-%s-%s-multidraw",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> multidraw_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3390,7 +3399,9 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_GTT)
> continue;
>
> - igt_subtest_f("%s-farfromfence", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-farfromfence",
> + feature_str(t.feature))
> farfromfence_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3406,10 +3417,11 @@ int main(int argc, char *argv[])
> if (t.format == FORMAT_DEFAULT)
> continue;
>
> - igt_subtest_f("%s-%s-draw-%s",
> - feature_str(t.feature),
> - format_str(t.format),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-draw-%s",
> + feature_str(t.feature),
> + format_str(t.format),
> + igt_draw_get_method_name(t.method))
> format_draw_subtest(&t);
> }
> TEST_MODE_ITER_END
> @@ -3420,9 +3432,11 @@ int main(int argc, char *argv[])
> t.plane != PLANE_PRI ||
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
> - igt_subtest_f("%s-%s-scaledprimary",
> - feature_str(t.feature),
> - fbs_str(t.fbs))
> +
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-%s-scaledprimary",
> + feature_str(t.feature),
> + fbs_str(t.fbs))
> scaledprimary_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3434,25 +3448,37 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
>
> - igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-modesetfrombusy",
> + feature_str(t.feature))
> modesetfrombusy_subtest(&t);
>
> if (t.feature & FEATURE_FBC) {
> - igt_subtest_f("%s-badstride", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-badstride",
> + feature_str(t.feature))
> badstride_subtest(&t);
>
> - igt_subtest_f("%s-stridechange", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-stridechange",
> + feature_str(t.feature))
> stridechange_subtest(&t);
>
> - igt_subtest_f("%s-tilingchange", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-tilingchange",
> + feature_str(t.feature))
> tilingchange_subtest(&t);
> }
>
> if (t.feature & FEATURE_PSR)
> - igt_subtest_f("%s-slowdraw", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-slowdraw",
> + feature_str(t.feature))
> slow_draw_subtest(&t);
>
> - igt_subtest_f("%s-suspend", feature_str(t.feature))
> + igt_subtest_flags_f(t.subtest_flags,
> + "%s-suspend",
> + feature_str(t.feature))
> suspend_subtest(&t);
> TEST_MODE_ITER_END
>
> --
> 2.7.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests
2016-02-16 15:45 ` Daniel Vetter
@ 2016-02-18 8:42 ` David Weinehall
0 siblings, 0 replies; 8+ messages in thread
From: David Weinehall @ 2016-02-18 8:42 UTC (permalink / raw)
To: Daniel Vetter; +Cc: intel-gfx
On Tue, Feb 16, 2016 at 04:45:26PM +0100, Daniel Vetter wrote:
> On Thu, Feb 11, 2016 at 01:09:33PM +0200, David Weinehall wrote:
> > Some subtests are not run by default, for various reasons;
> > be it because they're only for debugging, because they're slow,
> > or because they are not of high enough quality.
> >
> > This patch aims to introduce a common mechanism for categorising
> > the subtests and introduces a flag (--all) that runs/lists all
> > subtests instead of just the default set.
>
> Is the idea to also add a --test-flags interface? How is this meant to
> integrate with the current BAT runtime? Also we need to figure out
> how/whether we need to change the spec for how testrunners are supposed to
> run igt tests. Plus obviously patch igt.py in piglit.
>
> I think I like where this is going, but I guess needs a bit more polish
> still. Integrating this into how we run/select BAT would be a great
> showcase of your concept I think.
The idea is to also add a --test-flags interface, yes, but I deemed it
outside the scope of this patchset.
The origin of this patch is a suggestion -- which I believe came from
you -- to handle combinatorial tests in one unified way instead of with
various different methods (--all vs. different test invocation names).
Since the first patch got feedback that not all subtests were excluded
because they were slow, I introduced the flags concept (though the patch
still classifies everything non-default as slow, since I'd rather leave
finer-grained classification to the test authors).
Since neither BAT nor piglit at the moment uses gem_concurrent_blit[1] or
kms_frontbuffer_tracking --all, I don't think there's any need to patch
either of those at this point.
Until there's some agreement on a reasonable subset of
gem_concurrent_blit subtests that can be run within at most 30 seconds,
there's no possibility to have it included in BAT or piglit -- currently
even the default set of subtests (without passing --all) will take several
hours to complete.
There are a bunch of subtests that won't complete within 30 seconds even
when run on their own[2], which obviously complicates things.
FWIW, both Skylake and Broadwell hard-hang when running
gem_concurrent_blit --all (and up until that point spew a mighty amount
of FAIL).
Kind regards, David
[1] gem_concurrent_blit is currently not run at all due to it being too
slow. At the very least it would be nice to have a dedicated machine
that ran it on a daily basis. Having weekly runs with --all should
be a longer-term goal, but with the current hangs and massive amount
of FAILs that is not feasible yet.
[2] Notably the "swap-*" subtests.
Thread overview: 8+ messages
2016-02-11 11:09 [PATCH i-g-t v4 0/3] Unify slow/combinatorial test handling David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 1/3] tests/gem_concurrent_blit: rename gem_concurrent_all David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 2/3] lib/igt_core: Unify handling of slow/combinatorial tests David Weinehall
2016-02-11 13:04 ` Chris Wilson
2016-02-11 13:40 ` David Weinehall
2016-02-16 15:45 ` Daniel Vetter
2016-02-18 8:42 ` David Weinehall
2016-02-11 11:09 ` [PATCH i-g-t v4 3/3] tests/gem_concurrent_all: Remove gem_concurrent_all.c David Weinehall