* [igt-dev] [RFC 00/30] Stop cloning contexts
@ 2021-04-01  2:12 Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 01/30] lib/i915/gem_engine_topology: Expose the __query_engines helper Jason Ekstrand
                   ` (31 more replies)
  0 siblings, 32 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

I'm trying to clean up some of our uAPI technical debt in i915.  One of the
biggest areas we have right now is context mutability.  There's no good
reason why things like the set of engines or the VM should be able to be
changed on the fly and no "real" userspace actually relies on this
functionality.  It does, however, provide plenty of fodder for tests and bug
reports, as things like swapping out the set of engines under load break
randomly.  The solution here is to stop allowing that behavior and simplify
the i915 internals.

In particular, we'd like to remove the following from the i915 API:

 1. I915_CONTEXT_CLONE_*.  These are only used by IGT and have never been
    used by any "real" userspace.

 2. Changing the VM or set of engines via SETPARAM after they've been
    "used" by an execbuf or similar.  This would effectively make those
    parameters create params rather than mutable state.  We can't drop
    setparam entirely for those because media does use it, but we can
    enforce some rules.  (A rough sketch of the create-time flow follows
    this list.)

 3. Unused (by non-IGT userspace) GETPARAM for things like engines.
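
For illustration only, here is roughly what "engines as a create param"
looks like at the ioctl level, using structs and flags that already exist
in i915_drm.h (the same ones patch 2 below wraps).  This is a sketch, not
new uAPI; "engines" is assumed to be an I915_DEFINE_CONTEXT_PARAM_ENGINES
variable filled in by the caller:

    struct drm_i915_gem_context_create_ext_setparam p_engines = {
        .base = { .name = I915_CONTEXT_CREATE_EXT_SETPARAM },
        .param = {
            .param = I915_CONTEXT_PARAM_ENGINES,
            .size = sizeof(engines),
            .value = to_user_pointer(&engines),
        },
    };
    struct drm_i915_gem_context_create_ext create = {
        .flags = I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS,
        .extensions = to_user_pointer(&p_engines),
    };

    /* The engine set is baked in at creation; no SETPARAM needed later. */
    ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create);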

As much as we'd love to do that, we have a bit of a problem in IGT.  The
way we handle multi-engine testing today relies heavily on this soon-to-be-
deprecated functionality.  In particular, the standard flow is usually
something like this:

    static void run_test1(int fd, uint32_t engine)
    {
        igt_spin_t *spin;
        uint32_t ctx;

        ctx = gem_context_clone_with_engines(fd, 0);
        spin = __igt_spin_new(fd, .ctx = ctx, .engine = engine);

        /* do some testing with ctx */

        igt_spin_free(fd, spin);
        gem_context_destroy(fd, ctx);
    }

    igt_main
    {
        struct intel_execution_engine2 *e;

        /* Usual fixture code */

        __for_each_physical_engine(fd, e)
            run_test1(fd, e->flags);

        __for_each_physical_engine(fd, e)
            run_test2(fd, e->flags);
    }

Let's walk through what this does:

 1. __for_each_physical_engine calls intel_init_engine_list() which resets
    the set of engines on ctx0 to the full set of engines available as per
    the engine query.  On older kernels/hardware where we don't have the
    engines query, it leaves the set alone.

 2. intel_init_engine_list() also returns a set of engines for iteration
    and __for_each_physical_engine() sets up a for loop to walk the set.

 3. gem_context_clone_with_engines() creates a new context using
    I915_CONTEXT_CLONE_ENGINES (not used by anything other than IGT) to
    ask that the newly created context have the same set of engines as
    ctx0.  Remember we changed that at the start of the loop iteration!

 4. When the context is passed to __igt_spin_new(), it calls
    gem_context_lookup_engine(), which does a GETPARAM to introspect the set
    of engines on the context and figure out the engine class.

If you've been keeping track, this trivial and extremely common example
uses every single one of these soon-to-be-deprecated APIs even though the
test author may be completely oblivious to it.  It also means that getting
rid of IGT's use of them is going to require some fairly deep surgery.

The approach proposed and partially implemented here is to add a new
wrapper struct intel_ctx_t which wraps a GEM context handle as well as the
full set of parameters used to create it, represented by intel_ctx_cfg_t.
We can then use the context anywhere we would regularly use a context, we
just have to use ctx->id.  If we want to clone it, we can do so by re-using
the create parameters: intel_ctx_create(fd, &old_ctx->cfg).
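
To make that concrete, here is roughly what the example above turns into
once the series has landed.  This is a sketch using the helpers introduced
in the patches below (igt_spin's .ctx member arrives in patch 6), not
literal code from any one conversion:

    static void run_test1(int fd, const intel_ctx_t *base, uint32_t engine)
    {
        igt_spin_t *spin;
        intel_ctx_t *ctx;

        /* "Clone" by re-using the parent's create parameters */
        ctx = intel_ctx_create(fd, &base->cfg);
        spin = __igt_spin_new(fd, .ctx = ctx, .engine = engine);

        /* do some testing with ctx->id */

        igt_spin_free(fd, spin);
        intel_ctx_destroy(fd, ctx);
    }

    igt_main
    {
        const struct intel_execution_engine2 *e;
        intel_ctx_t *ctx;

        /* Usual fixture code, plus: */
        ctx = intel_ctx_create_all_physical(fd);

        for_each_ctx_engine(fd, ctx, e)
            run_test1(fd, ctx, e->flags);
    }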

So far, I'm pretty happy with this solution.  I've converted around 25 test
programs and it's working quite well.  The only real sore point so far is
around dealing with platforms that don't support contexts.  We could
special case ctx0 a bit more but, right now, I'm just adding an if
statement and leaking the intel_ctx_t.  I'm happy to take suggestions
there.
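
For reference, the if statement in question (lifted from the gem_exec_basic
conversion in patch 4) looks like this; the intel_ctx_t returned by
intel_ctx_create_all_physical() simply never gets an intel_ctx_destroy():

    igt_fixture {
        fd = drm_open_driver(DRIVER_INTEL);

        if (gem_has_contexts(fd))
            ctx = intel_ctx_create_all_physical(fd);
        else
            ctx = intel_ctx_0(fd);
    }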

--Jason


Jason Ekstrand (30):
  lib/i915/gem_engine_topology: Expose the __query_engines helper
  lib: Add an intel_ctx wrapper struct and helpers
  lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t
  tests/i915/gem_exec_basic: Convert to intel_ctx_t
  lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id
  lib/igt_spin: Support intel_ctx_t
  tests/i915/gem_exec_fence: Convert to intel_ctx_t
  tests/i915/gem_exec_schedule: Convert to intel_ctx_t
  tests/i915/perf_pmu: Convert to intel_ctx_t
  tests/i915/gem_exec_nop: Convert to intel_ctx_t
  tests/i915/gem_exec_reloc: Convert to intel_ctx_t
  tests/i915/gem_busy: Convert to intel_ctx_t
  tests/i915/gem_ctx_isolation: Convert to intel_ctx_t
  tests/i915/gem_exec_async: Convert to intel_ctx_t
  tests/i915/sysfs_clients: Convert to intel_ctx_t
  tests/i915/gem_exec_fair: Convert to intel_ctx_t
  tests/i915/gem_spin_batch: Convert to intel_ctx_t
  tests/i915/gem_exec_store: Convert to intel_ctx_t
  tests/amdgpu/amd_prime: Convert to intel_ctx_t
  tests/i915/i915_hangman: Convert to intel_ctx_t
  tests/i915/gem_ringfill: Convert to intel_ctx_t
  tests/prime_busy: Convert to intel_ctx_t
  tests/prime_vgem: Convert to intel_ctx_t
  tests/gem_exec_whisper: Convert to intel_ctx_t
  tests/i915/gem_ctx_exec: Convert to intel_ctx_t
  tests/i915/gem_exec_suspend: Convert to intel_ctx_t
  tests/i915/gem_sync: Convert to intel_ctx_t
  tests/i915/gem_userptr_blits: Convert to intel_ctx_t
  tests/i915/gem_wait: Convert to intel_ctx_t
  tests/i915/gem_request_retire: Convert to intel_ctx_t

 lib/i915/gem_context.c          |  34 ++
 lib/i915/gem_context.h          |   2 +
 lib/i915/gem_engine_topology.c  |  61 ++-
 lib/i915/gem_engine_topology.h  |  16 +-
 lib/igt_dummyload.c             |  30 +-
 lib/igt_dummyload.h             |   6 +-
 lib/igt_gt.c                    |   2 +-
 lib/intel_ctx.c                 | 159 ++++++
 lib/intel_ctx.h                 | 110 ++++
 lib/meson.build                 |   1 +
 tests/amdgpu/amd_prime.c        |  10 +-
 tests/i915/gem_busy.c           |  80 +--
 tests/i915/gem_ctx_engines.c    |   6 +-
 tests/i915/gem_ctx_exec.c       |  14 +-
 tests/i915/gem_ctx_isolation.c  | 111 ++--
 tests/i915/gem_ctx_shared.c     |  16 +-
 tests/i915/gem_eio.c            |   2 +-
 tests/i915/gem_exec_async.c     |  31 +-
 tests/i915/gem_exec_balancer.c  |  26 +-
 tests/i915/gem_exec_basic.c     |  10 +-
 tests/i915/gem_exec_fair.c      |  99 ++--
 tests/i915/gem_exec_fence.c     | 189 ++++---
 tests/i915/gem_exec_latency.c   |   2 +-
 tests/i915/gem_exec_nop.c       | 158 +++---
 tests/i915/gem_exec_reloc.c     | 102 ++--
 tests/i915/gem_exec_schedule.c  | 875 +++++++++++++++++---------------
 tests/i915/gem_exec_store.c     |  38 +-
 tests/i915/gem_exec_suspend.c   |  56 +-
 tests/i915/gem_exec_whisper.c   |  86 ++--
 tests/i915/gem_request_retire.c |  17 +-
 tests/i915/gem_ringfill.c       |  47 +-
 tests/i915/gem_spin_batch.c     |  83 +--
 tests/i915/gem_sync.c           | 162 +++---
 tests/i915/gem_userptr_blits.c  |  31 +-
 tests/i915/gem_vm_create.c      |   4 +-
 tests/i915/gem_wait.c           |  23 +-
 tests/i915/gem_workarounds.c    |   2 +-
 tests/i915/i915_hangman.c       |  35 +-
 tests/i915/perf_pmu.c           | 225 ++++----
 tests/i915/sysfs_clients.c      |  87 ++--
 tests/prime_busy.c              |  19 +-
 tests/prime_vgem.c              |  38 +-
 42 files changed, 1927 insertions(+), 1178 deletions(-)
 create mode 100644 lib/intel_ctx.c
 create mode 100644 lib/intel_ctx.h

-- 
2.29.2


* [igt-dev] [RFC 01/30] lib/i915/gem_engine_topology: Expose the __query_engines helper
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-08 18:50   ` Daniel Vetter
  2021-04-01  2:12 ` [igt-dev] [RFC 02/30] lib: Add an intel_ctx wrapper struct and helpers Jason Ekstrand
                   ` (30 subsequent siblings)
  31 siblings, 1 reply; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 lib/i915/gem_engine_topology.c | 20 +++++++++++---------
 lib/i915/gem_engine_topology.h |  4 ++++
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/lib/i915/gem_engine_topology.c b/lib/i915/gem_engine_topology.c
index c12cd920..5d196f59 100644
--- a/lib/i915/gem_engine_topology.c
+++ b/lib/i915/gem_engine_topology.c
@@ -62,14 +62,9 @@ static int __gem_query(int fd, struct drm_i915_query *q)
 	return err;
 }
 
-static void gem_query(int fd, struct drm_i915_query *q)
-{
-	igt_assert_eq(__gem_query(fd, q), 0);
-}
-
-static void query_engines(int fd,
-			  struct drm_i915_query_engine_info *query_engines,
-			  int length)
+int __gem_query_engines(int fd,
+			struct drm_i915_query_engine_info *query_engines,
+			int length)
 {
 	struct drm_i915_query_item item = { };
 	struct drm_i915_query query = { };
@@ -81,7 +76,14 @@ static void query_engines(int fd,
 
 	item.data_ptr = to_user_pointer(query_engines);
 
-	gem_query(fd, &query);
+	return __gem_query(fd, &query);
+}
+
+static void query_engines(int fd,
+			  struct drm_i915_query_engine_info *query_engines,
+			  int length)
+{
+	igt_assert_eq(__gem_query_engines(fd, query_engines, length), 0);
 }
 
 static void ctx_map_engines(int fd, struct intel_engine_data *ed,
diff --git a/lib/i915/gem_engine_topology.h b/lib/i915/gem_engine_topology.h
index f5edcb5d..76b7cd4d 100644
--- a/lib/i915/gem_engine_topology.h
+++ b/lib/i915/gem_engine_topology.h
@@ -29,6 +29,10 @@
 
 #define GEM_MAX_ENGINES		I915_EXEC_RING_MASK + 1
 
+int __gem_query_engines(int fd,
+			struct drm_i915_query_engine_info *query_engines,
+			int length);
+
 struct intel_engine_data {
 	uint32_t nengines;
 	uint32_t n;
-- 
2.29.2


* [igt-dev] [RFC 02/30] lib: Add an intel_ctx wrapper struct and helpers
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 01/30] lib/i915/gem_engine_topology: Expose the __query_engines helper Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-08 18:58   ` Daniel Vetter
  2021-04-01  2:12 ` [igt-dev] [RFC 03/30] lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t Jason Ekstrand
                   ` (29 subsequent siblings)
  31 siblings, 1 reply; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

We're trying to clean up some of our technical debt in the i915 API, in
particular context mutability and unnecessary getparam().  There's
quite a bit of introspection machinery that's not used by any userspace
other than IGT.  Most drivers don't care about fetching the set of
engines, for instance, because they don't forget what set of engines
they asked for in the first place.

Unfortunately, IGT relies heavily on context introspection for just
about everything when it comes to multi-engine testing.  It also likes
to use ctx0 as temporary storage for whatever the current test config
is.  While effective at keeping IGT simple in some ways, this means
we're making heavy use of context mutability.  Also, passing data around
within tests isn't really what contexts are for.

This patch adds a new intel_ctx_t struct which wraps a context handle and
remembers the full context configuration.  This will provide similar
ease-of-use without having to use ctx0 as temporary storage.
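
A rough usage sketch (not part of this patch, execbuf setup elided):
create a context from a config, point execbuf at it via ctx->id, and
"clone" it by re-using the stored config:

    intel_ctx_cfg_t cfg = intel_ctx_cfg_all_physical(fd);
    intel_ctx_t *ctx = intel_ctx_create(fd, &cfg);
    intel_ctx_t *clone;

    execbuf.rsvd1 = ctx->id;
    gem_execbuf(fd, &execbuf);

    /* "Clone" by re-using the config remembered in the wrapper */
    clone = intel_ctx_create(fd, &ctx->cfg);

    intel_ctx_destroy(fd, clone);
    intel_ctx_destroy(fd, ctx);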
---
 lib/i915/gem_context.c |  34 +++++++++
 lib/i915/gem_context.h |   2 +
 lib/intel_ctx.c        | 159 +++++++++++++++++++++++++++++++++++++++++
 lib/intel_ctx.h        | 110 ++++++++++++++++++++++++++++
 lib/meson.build        |   1 +
 5 files changed, 306 insertions(+)
 create mode 100644 lib/intel_ctx.c
 create mode 100644 lib/intel_ctx.h

diff --git a/lib/i915/gem_context.c b/lib/i915/gem_context.c
index 79411e10..0df42d02 100644
--- a/lib/i915/gem_context.c
+++ b/lib/i915/gem_context.c
@@ -107,6 +107,40 @@ int __gem_context_create(int fd, uint32_t *ctx_id)
        return err;
 }
 
+/**
+ * __gem_context_create_ext:
+ * @fd: open i915 drm file descriptor
+ * @flags: context create flags
+ * @extensions: first extension struct, or 0 for no extensions
+ * @ctx_id: on success, the context ID is written here
+ *
+ * Creates a new GEM context with flags and extensions.  If no flags or
+ * extensions are required, it's the same as __gem_context_create and works
+ * on older kernels.
+ */
+int __gem_context_create_ext(int fd, uint32_t flags, uint64_t extensions,
+			     uint32_t *ctx_id)
+{
+	struct drm_i915_gem_context_create_ext ctx_create;
+	int err = 0;
+
+	if (!flags && !extensions)
+		return __gem_context_create(fd, ctx_id);
+
+	memset(&ctx_create, 0, sizeof(ctx_create));
+	ctx_create.flags = flags;
+	if (extensions) {
+		ctx_create.flags |= I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS;
+		ctx_create.extensions = extensions;
+	}
+
+	err = create_ext_ioctl(fd, &ctx_create);
+	if (!err)
+		*ctx_id = ctx_create.ctx_id;
+
+	return err;
+}
+
 /**
  * gem_context_create:
  * @fd: open i915 drm file descriptor
diff --git a/lib/i915/gem_context.h b/lib/i915/gem_context.h
index c2c2b827..9748953c 100644
--- a/lib/i915/gem_context.h
+++ b/lib/i915/gem_context.h
@@ -31,6 +31,8 @@ struct drm_i915_gem_context_param;
 
 uint32_t gem_context_create(int fd);
 int __gem_context_create(int fd, uint32_t *ctx_id);
+int __gem_context_create_ext(int fd, uint32_t flags, uint64_t extensions,
+			     uint32_t *ctx_id);
 void gem_context_destroy(int fd, uint32_t ctx_id);
 int __gem_context_destroy(int fd, uint32_t ctx_id);
 
diff --git a/lib/intel_ctx.c b/lib/intel_ctx.c
new file mode 100644
index 00000000..406e85cb
--- /dev/null
+++ b/lib/intel_ctx.c
@@ -0,0 +1,159 @@
+/*
+ * Copyright © 2021 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <stddef.h>
+
+#include "intel_ctx.h"
+#include "ioctl_wrappers.h"
+#include "i915/gem_engine_topology.h"
+
+static void
+add_user_ext(uint64_t *root_ext_u64, struct i915_user_extension *ext)
+{
+	ext->next_extension = *root_ext_u64;
+	*root_ext_u64 = to_user_pointer(ext);
+}
+
+static size_t sizeof_param_engines(int count)
+{
+	return offsetof(struct i915_context_param_engines, engines[count]);
+}
+
+#define SIZEOF_QUERY		offsetof(struct drm_i915_query_engine_info, \
+					 engines[GEM_MAX_ENGINES])
+
+intel_ctx_cfg_t intel_ctx_cfg_all_physical(int fd)
+{
+	uint8_t buff[SIZEOF_QUERY] = { };
+	struct drm_i915_query_engine_info *qei = (void *) buff;
+	intel_ctx_cfg_t cfg = {};
+	int i;
+
+	if (__gem_query_engines(fd, qei, SIZEOF_QUERY) == 0) {
+		cfg.num_engines = qei->num_engines;
+		for (i = 0; i < qei->num_engines; i++)
+			cfg.engines[i] = qei->engines[i].engine;
+	}
+
+	return cfg;
+}
+
+static int
+__context_create_cfg(int fd, const intel_ctx_cfg_t *cfg, uint32_t *ctx_id)
+{
+	uint64_t ext_root = 0;
+	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, GEM_MAX_ENGINES);
+	struct drm_i915_gem_context_create_ext_setparam engines_param, vm_param;
+	uint32_t i;
+
+	if (cfg->vm) {
+		vm_param = (struct drm_i915_gem_context_create_ext_setparam) {
+			.base = {
+				.name = I915_CONTEXT_CREATE_EXT_SETPARAM,
+			},
+			.param = {
+				.param = I915_CONTEXT_PARAM_VM,
+				.value = cfg->vm,
+			},
+		};
+		add_user_ext(&ext_root, &vm_param.base);
+	}
+
+	if (cfg->num_engines) {
+		memset(&engines, 0, sizeof(engines));
+		for (i = 0; i < cfg->num_engines; i++)
+			engines.engines[i] = cfg->engines[i];
+
+		engines_param = (struct drm_i915_gem_context_create_ext_setparam) {
+			.base = {
+				.name = I915_CONTEXT_CREATE_EXT_SETPARAM,
+			},
+			.param = {
+				.param = I915_CONTEXT_PARAM_ENGINES,
+				.size = sizeof_param_engines(cfg->num_engines),
+				.value = to_user_pointer(&engines),
+			},
+		};
+		add_user_ext(&ext_root, &engines_param.base);
+	}
+
+	return __gem_context_create_ext(fd, cfg->flags, ext_root, ctx_id);
+}
+
+int __intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg,
+		       intel_ctx_t **out_ctx)
+{
+	uint32_t ctx_id;
+	intel_ctx_t *ctx;
+	int err;
+
+	if (cfg)
+		err = __context_create_cfg(fd, cfg, &ctx_id);
+	else
+		err = __gem_context_create(fd, &ctx_id);
+	if (err)
+		return err;
+
+	ctx = calloc(1, sizeof(*ctx));
+	igt_assert(ctx);
+
+	ctx->id = ctx_id;
+	if (cfg)
+		ctx->cfg = *cfg;
+
+	*out_ctx = ctx;
+	return 0;
+}
+
+intel_ctx_t *intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg)
+{
+	intel_ctx_t *ctx;
+	int err;
+
+	err = __intel_ctx_create(fd, cfg, &ctx);
+	igt_assert_eq(err, 0);
+
+	return ctx;
+}
+
+static const intel_ctx_t __intel_ctx_0 = {};
+
+const intel_ctx_t *intel_ctx_0(int fd)
+{
+	(void)fd;
+	return &__intel_ctx_0;
+}
+
+intel_ctx_t *intel_ctx_create_all_physical(int fd)
+{
+	intel_ctx_cfg_t cfg = intel_ctx_cfg_all_physical(fd);
+	return intel_ctx_create(fd, &cfg);
+}
+
+void intel_ctx_destroy(int fd, intel_ctx_t *ctx)
+{
+	if (!ctx)
+		return;
+
+	gem_context_destroy(fd, ctx->id);
+	free(ctx);
+}
diff --git a/lib/intel_ctx.h b/lib/intel_ctx.h
new file mode 100644
index 00000000..94bd667a
--- /dev/null
+++ b/lib/intel_ctx.h
@@ -0,0 +1,110 @@
+/*
+ * Copyright © 2021 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef INTEL_CTX_H
+#define INTEL_CTX_H
+
+#include "igt_core.h"
+
+#include "i915_drm.h"
+
+#define GEM_MAX_ENGINES		I915_EXEC_RING_MASK + 1
+
+/**
+ * intel_ctx_cfg_t:
+ * @flags: Context create flags
+ * @vm: VM to inherit or 0 for using a per-context VM
+ * @num_engines: Number of client-specified engines or 0 for legacy mode
+ * @engines: Client-specified engines
+ *
+ * Represents the full configuration of an intel_ctx.
+ */
+typedef struct intel_ctx_cfg {
+	uint32_t flags;
+	uint32_t vm;
+	unsigned int num_engines;
+	struct i915_engine_class_instance engines[GEM_MAX_ENGINES];
+} intel_ctx_cfg_t;
+
+intel_ctx_cfg_t intel_ctx_cfg_all_physical(int fd);
+
+/**
+ * intel_ctx_t:
+ * @id: the context id/handle
+ * @cfg: the config used to create this context
+ *
+ * Represents a GEM context along with the config used to create it.
+ */
+typedef struct intel_ctx {
+	uint32_t id;
+	intel_ctx_cfg_t cfg;
+} intel_ctx_t;
+
+/**
+ * __intel_ctx_create:
+ * @fd: open i915 drm file descriptor
+ * @cfg: configuration for the created context
+ * @out_ctx: on success, the new intel_ctx_t pointer is written here
+ *
+ * Like intel_ctx_create but returns an error instead of asserting.
+ */
+int __intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg,
+		       intel_ctx_t **out_ctx);
+
+/**
+ * intel_ctx_create:
+ * @fd: open i915 drm file descriptor
+ * @cfg: configuration for the created context
+ *
+ * Creates a new intel_ctx_t with the given config
+ */
+intel_ctx_t *intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg);
+
+/**
+ * intel_ctx_0:
+ * @fd: open i915 drm file descriptor
+ *
+ * Returns an intel_ctx_t representing the default context.
+ */
+const intel_ctx_t *intel_ctx_0(int fd);
+
+/**
+ * intel_ctx_create_all_physical:
+ * @fd: open i915 drm file descriptor
+ *
+ * Creates an intel_ctx_t containing all physical engines.  On kernels
+ * without the engines API, the created context will be the same as
+ * intel_ctx_0() except that it will be a new GEM context.
+ */
+intel_ctx_t *intel_ctx_create_all_physical(int fd);
+
+/**
+ * intel_ctx_destroy:
+ * @fd: open i915 drm file descriptor
+ * @ctx: context to destroy, or NULL
+ *
+ * Destroys an intel_ctx_t.
+ */
+void intel_ctx_destroy(int fd, intel_ctx_t *ctx);
+
+#endif
diff --git a/lib/meson.build b/lib/meson.build
index 672b4206..871c7795 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -37,6 +37,7 @@ lib_sources = [
 	'intel_batchbuffer.c',
 	'intel_bufops.c',
 	'intel_chipset.c',
+	'intel_ctx.c',
 	'intel_device_info.c',
 	'intel_os.c',
 	'intel_mmio.c',
-- 
2.29.2


* [igt-dev] [RFC 03/30] lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 01/30] lib/i915/gem_engine_topology: Expose the __query_engines helper Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 02/30] lib: Add an intel_ctx wrapper struct and helpers Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 04/30] tests/i915/gem_exec_basic: Convert to intel_ctx_t Jason Ekstrand
                   ` (28 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 lib/i915/gem_engine_topology.c | 41 ++++++++++++++++++++++++++++++++++
 lib/i915/gem_engine_topology.h | 14 ++++++++++--
 2 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/lib/i915/gem_engine_topology.c b/lib/i915/gem_engine_topology.c
index 5d196f59..528053a0 100644
--- a/lib/i915/gem_engine_topology.c
+++ b/lib/i915/gem_engine_topology.c
@@ -194,6 +194,47 @@ intel_get_current_physical_engine(struct intel_engine_data *ed)
 	return e;
 }
 
+struct intel_engine_data
+intel_engine_list_for_ctx_cfg(int fd, const intel_ctx_cfg_t *cfg)
+{
+	struct intel_engine_data engine_data = { };
+	int i;
+
+	if (cfg->num_engines) {
+		engine_data.nengines = cfg->num_engines;
+		for (i = 0; i < cfg->num_engines; i++)
+			init_engine(&engine_data.engines[i],
+				    cfg->engines[i].engine_class,
+				    cfg->engines[i].engine_instance,
+				    i);
+	} else {
+		/* This is a legacy context */
+		const struct intel_execution_engine2 *e2;
+
+		igt_debug("using pre-allocated engine list\n");
+
+		__for_each_static_engine(e2) {
+			if (igt_only_list_subtests() ||
+			    (fd < 0) ||
+			    gem_has_ring(fd, e2->flags)) {
+				struct intel_execution_engine2 *__e2 =
+					&engine_data.engines[
+					engine_data.nengines];
+
+				strcpy(__e2->name, e2->name);
+				__e2->instance   = e2->instance;
+				__e2->class      = e2->class;
+				__e2->flags      = e2->flags;
+				__e2->is_virtual = false;
+
+				engine_data.nengines++;
+			}
+		}
+	}
+
+	return engine_data;
+}
+
 static int gem_topology_get_param(int fd,
 				  struct drm_i915_gem_context_param *p)
 {
diff --git a/lib/i915/gem_engine_topology.h b/lib/i915/gem_engine_topology.h
index 76b7cd4d..38027608 100644
--- a/lib/i915/gem_engine_topology.h
+++ b/lib/i915/gem_engine_topology.h
@@ -26,8 +26,7 @@
 
 #include "igt_gt.h"
 #include "i915_drm.h"
-
-#define GEM_MAX_ENGINES		I915_EXEC_RING_MASK + 1
+#include "intel_ctx.h"
 
 int __gem_query_engines(int fd,
 			struct drm_i915_query_engine_info *query_engines,
@@ -41,6 +40,8 @@ struct intel_engine_data {
 };
 
 bool gem_has_engine_topology(int fd);
+struct intel_engine_data
+intel_engine_list_for_ctx_cfg(int fd, const intel_ctx_cfg_t *cfg);
 struct intel_engine_data intel_init_engine_list(int fd, uint32_t ctx_id);
 
 /* iteration functions */
@@ -65,6 +66,15 @@ struct intel_execution_engine2 gem_eb_flags_to_engine(unsigned int flags);
 #define __for_each_static_engine(e__) \
 	for ((e__) = intel_execution_engines2; (e__)->name[0]; (e__)++)
 
+#define for_each_ctx_cfg_engine(fd__, ctx_cfg__, e__) \
+	for (struct intel_engine_data i__##e__ = \
+			intel_engine_list_for_ctx_cfg(fd__, ctx_cfg__); \
+	     ((e__) = intel_get_current_engine(&i__##e__)); \
+	     intel_next_engine(&i__##e__))
+
+#define for_each_ctx_engine(fd__, ctx__, e__) \
+	for_each_ctx_cfg_engine(fd__, &(ctx__)->cfg, e__)
+
 #define for_each_context_engine(fd__, ctx__, e__) \
 	for (struct intel_engine_data i__ = intel_init_engine_list(fd__, ctx__); \
 	     ((e__) = intel_get_current_engine(&i__)); \
-- 
2.29.2


* [igt-dev] [RFC 04/30] tests/i915/gem_exec_basic: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (2 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 03/30] lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-08 20:06   ` Daniel Vetter
  2021-04-01  2:12 ` [igt-dev] [RFC 05/30] lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id Jason Ekstrand
                   ` (27 subsequent siblings)
  31 siblings, 1 reply; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

This acts as a template for the rest of this patch series.  The rough
idea is that we create a new context if the HW supports contexts and
otherwise we use intel_ctx_0().  Once we have an intel_ctx_t, we can
iterate over all of the engines in it in a consistent way.
---
 tests/i915/gem_exec_basic.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_exec_basic.c b/tests/i915/gem_exec_basic.c
index 31f6a234..f50e4c3b 100644
--- a/tests/i915/gem_exec_basic.c
+++ b/tests/i915/gem_exec_basic.c
@@ -41,10 +41,17 @@ static uint32_t batch_create(int fd)
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx = NULL;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_INTEL);
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
+
 		/* igt_require_gem(fd); // test is mandatory */
 		igt_fork_hang_detector(fd);
 	}
@@ -54,12 +61,13 @@ igt_main
 			.handle = batch_create(fd),
 		};
 
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			igt_dynamic_f("%s", e->name) {
 				struct drm_i915_gem_execbuffer2 execbuf = {
 					.buffers_ptr = to_user_pointer(&exec),
 					.buffer_count = 1,
 					.flags = e->flags,
+					.rsvd1 = ctx->id,
 				};
 
 				gem_execbuf(fd, &execbuf);
-- 
2.29.2


* [igt-dev] [RFC 05/30] lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (3 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 04/30] tests/i915/gem_exec_basic: Convert to intel_ctx_t Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 06/30] lib/igt_spin: Support intel_ctx_t Jason Ekstrand
                   ` (26 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 lib/igt_dummyload.c            |  6 +++---
 lib/igt_dummyload.h            |  2 +-
 lib/igt_gt.c                   |  2 +-
 tests/i915/gem_ctx_engines.c   |  6 +++---
 tests/i915/gem_ctx_exec.c      |  2 +-
 tests/i915/gem_ctx_isolation.c | 18 +++++++++---------
 tests/i915/gem_ctx_shared.c    | 16 ++++++++--------
 tests/i915/gem_eio.c           |  2 +-
 tests/i915/gem_exec_balancer.c | 26 +++++++++++++-------------
 tests/i915/gem_exec_latency.c  |  2 +-
 tests/i915/gem_exec_nop.c      |  2 +-
 tests/i915/gem_exec_schedule.c | 26 +++++++++++++-------------
 tests/i915/gem_spin_batch.c    |  2 +-
 tests/i915/gem_sync.c          |  2 +-
 tests/i915/gem_vm_create.c     |  4 ++--
 tests/i915/gem_workarounds.c   |  2 +-
 tests/i915/perf_pmu.c          |  4 ++--
 17 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 34ad9221..5a11ec4e 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -127,7 +127,7 @@ emit_recursive_batch(igt_spin_t *spin,
 	if (opts->engine == ALL_ENGINES) {
 		struct intel_execution_engine2 *engine;
 
-		for_each_context_engine(fd, opts->ctx, engine) {
+		for_each_context_engine(fd, opts->ctx_id, engine) {
 			if (opts->flags & IGT_SPIN_POLL_RUN &&
 			    !gem_class_can_store_dword(fd, engine->class))
 				continue;
@@ -325,7 +325,7 @@ emit_recursive_batch(igt_spin_t *spin,
 
 	execbuf->buffers_ptr =
 	       	to_user_pointer(obj + (2 - execbuf->buffer_count));
-	execbuf->rsvd1 = opts->ctx;
+	execbuf->rsvd1 = opts->ctx_id;
 
 	if (opts->flags & IGT_SPIN_FENCE_OUT)
 		execbuf->flags |= I915_EXEC_FENCE_OUT;
@@ -423,7 +423,7 @@ igt_spin_factory(int fd, const struct igt_spin_factory *opts)
 		int class;
 
 		if (!gem_context_lookup_engine(fd, opts->engine,
-					       opts->ctx, &e)) {
+					       opts->ctx_id, &e)) {
 			class = e.class;
 		} else {
 			gem_require_ring(fd, opts->engine);
diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
index a75fcdeb..aee72da8 100644
--- a/lib/igt_dummyload.h
+++ b/lib/igt_dummyload.h
@@ -60,7 +60,7 @@ typedef struct igt_spin {
 } igt_spin_t;
 
 struct igt_spin_factory {
-	uint32_t ctx;
+	uint32_t ctx_id;
 	uint32_t dependency;
 	unsigned int engine;
 	unsigned int flags;
diff --git a/lib/igt_gt.c b/lib/igt_gt.c
index f601d726..4981ef26 100644
--- a/lib/igt_gt.c
+++ b/lib/igt_gt.c
@@ -298,7 +298,7 @@ igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags)
 		context_set_ban(fd, ctx, 0);
 
 	spin = __igt_spin_new(fd,
-			      .ctx = ctx,
+			      .ctx_id = ctx,
 			      .engine = ring,
 			      .flags = IGT_SPIN_NO_PREEMPTION);
 
diff --git a/tests/i915/gem_ctx_engines.c b/tests/i915/gem_ctx_engines.c
index 643a0b2f..058b2cc2 100644
--- a/tests/i915/gem_ctx_engines.c
+++ b/tests/i915/gem_ctx_engines.c
@@ -336,7 +336,7 @@ static void execute_one(int i915)
 	igt_spin_t *spin;
 
 	/* Prewarm the spinner */
-	spin = igt_spin_new(i915, .ctx = param.ctx_id,
+	spin = igt_spin_new(i915, .ctx_id = param.ctx_id,
 			    .flags = (IGT_SPIN_NO_PREEMPTION |
 				      IGT_SPIN_POLL_RUN));
 
@@ -439,7 +439,7 @@ static void execute_oneforall(int i915)
 			igt_spin_t *spin;
 
 			spin = __igt_spin_new(i915,
-					      .ctx = param.ctx_id,
+					      .ctx_id = param.ctx_id,
 					      .engine = i);
 
 			busy.handle = spin->handle;
@@ -480,7 +480,7 @@ static void execute_allforone(int i915)
 		igt_spin_t *spin;
 
 		spin = __igt_spin_new(i915,
-				      .ctx = param.ctx_id,
+				      .ctx_id = param.ctx_id,
 				      .engine = i++);
 
 		busy.handle = spin->handle;
diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ctx_exec.c
index 89776185..03c66bf7 100644
--- a/tests/i915/gem_ctx_exec.c
+++ b/tests/i915/gem_ctx_exec.c
@@ -184,7 +184,7 @@ static void norecovery(int i915)
 		igt_assert_eq(param.value, pass);
 
 		spin = __igt_spin_new(i915,
-				      .ctx = param.ctx_id,
+				      .ctx_id = param.ctx_id,
 				      .flags = IGT_SPIN_POLL_RUN);
 		igt_spin_busywait_until_started(spin);
 
diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c
index 4f174268..a57a6637 100644
--- a/tests/i915/gem_ctx_isolation.c
+++ b/tests/i915/gem_ctx_isolation.c
@@ -629,7 +629,7 @@ static void nonpriv(int fd,
 
 		tmpl_regs(fd, ctx, e, tmpl, values[v]);
 
-		spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
+		spin = igt_spin_new(fd, .ctx_id = ctx, .engine = e->flags);
 
 		igt_debug("%s[%d]: Setting all registers to 0x%08x\n",
 			  __func__, v, values[v]);
@@ -641,12 +641,12 @@ static void nonpriv(int fd,
 
 			/* Explicit sync to keep the switch between write/read */
 			syncpt = igt_spin_new(fd,
-					      .ctx = ctx,
+					      .ctx_id = ctx,
 					      .engine = e->flags,
 					      .flags = IGT_SPIN_FENCE_OUT);
 
 			dirt = igt_spin_new(fd,
-					    .ctx = sw,
+					    .ctx_id = sw,
 					    .engine = e->flags,
 					    .fence = syncpt->out_fence,
 					    .flags = (IGT_SPIN_FENCE_IN |
@@ -654,7 +654,7 @@ static void nonpriv(int fd,
 			igt_spin_free(fd, syncpt);
 
 			syncpt = igt_spin_new(fd,
-					      .ctx = ctx,
+					      .ctx_id = ctx,
 					      .engine = e->flags,
 					      .fence = dirt->out_fence,
 					      .flags = IGT_SPIN_FENCE_IN);
@@ -708,7 +708,7 @@ static void isolation(int fd,
 		ctx[0] = gem_context_clone_with_engines(fd, 0);
 		regs[0] = read_regs(fd, ctx[0], e, flags);
 
-		spin = igt_spin_new(fd, .ctx = ctx[0], .engine = e->flags);
+		spin = igt_spin_new(fd, .ctx_id = ctx[0], .engine = e->flags);
 
 		if (flags & DIRTY1) {
 			igt_debug("%s[%d]: Setting all registers of ctx 0 to 0x%08x\n",
@@ -776,7 +776,7 @@ static uint32_t create_reset_context(int i915)
 static void inject_reset_context(int fd, const struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
-		.ctx = create_reset_context(fd),
+		.ctx_id = create_reset_context(fd),
 		.engine = e->flags,
 		.flags = IGT_SPIN_FAST,
 	};
@@ -801,7 +801,7 @@ static void inject_reset_context(int fd, const struct intel_execution_engine2 *e
 	igt_force_gpu_reset(fd);
 
 	igt_spin_free(fd, spin);
-	gem_context_destroy(fd, opts.ctx);
+	gem_context_destroy(fd, opts.ctx_id);
 }
 
 static void preservation(int fd,
@@ -825,7 +825,7 @@ static void preservation(int fd,
 	gem_quiescent_gpu(fd);
 
 	ctx[num_values] = gem_context_clone_with_engines(fd, 0);
-	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = e->flags);
+	spin = igt_spin_new(fd, .ctx_id = ctx[num_values], .engine = e->flags);
 	regs[num_values][0] = read_regs(fd, ctx[num_values], e, flags);
 	for (int v = 0; v < num_values; v++) {
 		ctx[v] = gem_context_clone_with_engines(fd, 0);
@@ -865,7 +865,7 @@ static void preservation(int fd,
 		break;
 	}
 
-	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = e->flags);
+	spin = igt_spin_new(fd, .ctx_id = ctx[num_values], .engine = e->flags);
 	for (int v = 0; v < num_values; v++)
 		regs[v][1] = read_regs(fd, ctx[v], e, flags);
 	regs[num_values][1] = read_regs(fd, ctx[num_values], e, flags);
diff --git a/tests/i915/gem_ctx_shared.c b/tests/i915/gem_ctx_shared.c
index 6b21994d..e3e8a9be 100644
--- a/tests/i915/gem_ctx_shared.c
+++ b/tests/i915/gem_ctx_shared.c
@@ -124,8 +124,8 @@ static void disjoint_timelines(int i915)
 	child = gem_context_clone(i915, 0, I915_CONTEXT_CLONE_VM, 0);
 	plug = igt_cork_plug(&cork, i915);
 
-	spin[0] = __igt_spin_new(i915, .ctx = 0, .dependency = plug);
-	spin[1] = __igt_spin_new(i915, .ctx = child);
+	spin[0] = __igt_spin_new(i915, .ctx_id = 0, .dependency = plug);
+	spin[1] = __igt_spin_new(i915, .ctx_id = child);
 
 	/* Wait for the second spinner, will hang if stuck behind the first */
 	igt_spin_end(spin[1]);
@@ -388,7 +388,7 @@ static void exec_single_timeline(int i915, unsigned int engine)
 			continue;
 
 		if (spin == NULL) {
-			spin = __igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+			spin = __igt_spin_new(i915, .ctx_id = ctx, .engine = e->flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 execbuf = {
 				.buffers_ptr = spin->execbuf.buffers_ptr,
@@ -416,7 +416,7 @@ static void exec_single_timeline(int i915, unsigned int engine)
 			continue;
 
 		if (spin == NULL) {
-			spin = __igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+			spin = __igt_spin_new(i915, .ctx_id = ctx, .engine = e->flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 execbuf = {
 				.buffers_ptr = spin->execbuf.buffers_ptr,
@@ -510,11 +510,11 @@ static void unplug_show_queue(int i915, struct igt_cork *c, unsigned int engine)
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		const struct igt_spin_factory opts = {
-			.ctx = create_highest_priority(i915),
+			.ctx_id = create_highest_priority(i915),
 			.engine = engine,
 		};
 		spin[n] = __igt_spin_factory(i915, &opts);
-		gem_context_destroy(i915, opts.ctx);
+		gem_context_destroy(i915, opts.ctx_id);
 	}
 
 	igt_cork_unplug(c); /* batches will now be queued on the engine */
@@ -592,11 +592,11 @@ static void independent(int i915,
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		const struct igt_spin_factory opts = {
-			.ctx = create_highest_priority(i915),
+			.ctx_id = create_highest_priority(i915),
 			.engine = e->flags,
 		};
 		spin[n] = __igt_spin_factory(i915, &opts);
-		gem_context_destroy(i915, opts.ctx);
+		gem_context_destroy(i915, opts.ctx_id);
 	}
 
 	fence = igt_cork_plug(&cork, i915);
diff --git a/tests/i915/gem_eio.c b/tests/i915/gem_eio.c
index d86ccf2b..0036c4aa 100644
--- a/tests/i915/gem_eio.c
+++ b/tests/i915/gem_eio.c
@@ -176,7 +176,7 @@ static int __gem_wait(int fd, uint32_t handle, int64_t timeout)
 static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
 {
 	struct igt_spin_factory opts = {
-		.ctx = ctx,
+		.ctx_id = ctx,
 		.engine = flags,
 		.flags = IGT_SPIN_NO_PREEMPTION | IGT_SPIN_FENCE_OUT,
 	};
diff --git a/tests/i915/gem_exec_balancer.c b/tests/i915/gem_exec_balancer.c
index 01db0e11..225fe95f 100644
--- a/tests/i915/gem_exec_balancer.c
+++ b/tests/i915/gem_exec_balancer.c
@@ -530,7 +530,7 @@ static void check_individual_engine(int i915,
 			     I915_PMU_ENGINE_BUSY(ci[idx].engine_class,
 						  ci[idx].engine_instance));
 
-	spin = igt_spin_new(i915, .ctx = ctx, .engine = idx + 1);
+	spin = igt_spin_new(i915, .ctx_id = ctx, .engine = idx + 1);
 	load = measure_load(pmu, 10000);
 	igt_spin_free(i915, spin);
 
@@ -659,13 +659,13 @@ static void bonded(int i915, unsigned int flags)
 			plug = NULL;
 			if (flags & CORK) {
 				plug = __igt_spin_new(i915,
-						      .ctx = master,
+						      .ctx_id = master,
 						      .engine = bond,
 						      .dependency = igt_cork_plug(&cork, i915));
 			}
 
 			spin = __igt_spin_new(i915,
-					      .ctx = master,
+					      .ctx_id = master,
 					      .engine = bond,
 					      .flags = IGT_SPIN_FENCE_OUT);
 
@@ -806,7 +806,7 @@ static void bonded_slice(int i915)
 		set_load_balancer(i915, ctx, siblings, count, NULL);
 
 		spin = __igt_spin_new(i915,
-				      .ctx = ctx,
+				      .ctx_id = ctx,
 				      .flags = (IGT_SPIN_NO_PREEMPTION |
 						IGT_SPIN_POLL_RUN));
 		igt_spin_end(spin); /* we just want its address for later */
@@ -831,7 +831,7 @@ static void bonded_slice(int i915)
 
 			while (!READ_ONCE(*stop)) {
 				spin = igt_spin_new(i915,
-						    .ctx = ctx,
+						    .ctx_id = ctx,
 						    .engine = (1 + rand() % count),
 						    .flags = IGT_SPIN_POLL_RUN);
 				igt_spin_busywait_until_started(spin);
@@ -892,7 +892,7 @@ static void __bonded_chain(int i915, uint32_t ctx,
 		if (priorities[i] < 0)
 			gem_context_set_priority(i915, ctx, priorities[i]);
 		spin = igt_spin_new(i915,
-				    .ctx = ctx,
+				    .ctx_id = ctx,
 				    .engine = 1,
 				    .flags = (IGT_SPIN_POLL_RUN |
 					      IGT_SPIN_FENCE_OUT));
@@ -971,7 +971,7 @@ static void __bonded_chain_inv(int i915, uint32_t ctx,
 		if (priorities[i] < 0)
 			gem_context_set_priority(i915, ctx, priorities[i]);
 		spin = igt_spin_new(i915,
-				    .ctx = ctx,
+				    .ctx_id = ctx,
 				    .engine = 1,
 				    .flags = (IGT_SPIN_POLL_RUN |
 					      IGT_SPIN_FENCE_OUT));
@@ -1838,7 +1838,7 @@ static void __bonded_early(int i915, uint32_t ctx,
 
 	/* A: spin forever on engine 1 */
 	spin = igt_spin_new(i915,
-			    .ctx = ctx,
+			    .ctx_id = ctx,
 			    .engine = (flags & VIRTUAL_ENGINE) ? 0 : 1,
 			    .flags = IGT_SPIN_NO_PREEMPTION);
 
@@ -1953,10 +1953,10 @@ static void busy(int i915)
 		free(ci);
 
 		spin[0] = __igt_spin_new(i915,
-					 .ctx = ctx,
+					 .ctx_id = ctx,
 					 .flags = IGT_SPIN_POLL_RUN);
 		spin[1] = __igt_spin_new(i915,
-					 .ctx = ctx,
+					 .ctx_id = ctx,
 					 .dependency = scratch);
 
 		igt_spin_busywait_until_started(spin[0]);
@@ -2055,7 +2055,7 @@ static void full(int i915, unsigned int flags)
 			ctx = load_balancer_create(i915, ci, count);
 
 			if (spin == NULL) {
-				spin = __igt_spin_new(i915, .ctx = ctx);
+				spin = __igt_spin_new(i915, .ctx_id = ctx);
 			} else {
 				struct drm_i915_gem_execbuffer2 eb = {
 					.buffers_ptr = spin->execbuf.buffers_ptr,
@@ -2680,7 +2680,7 @@ static void semaphore(int i915)
 		for (int i = 0; i < count; i++) {
 			set_load_balancer(i915, block[i], ci, count, NULL);
 			spin[i] = __igt_spin_new(i915,
-						 .ctx = block[i],
+						 .ctx_id = block[i],
 						 .dependency = scratch);
 		}
 
@@ -2997,7 +2997,7 @@ static void __fairslice(int i915,
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
 		ctx[i] = load_balancer_create(i915, ci, count);
 		if (spin == NULL) {
-			spin = __igt_spin_new(i915, .ctx = ctx[i]);
+			spin = __igt_spin_new(i915, .ctx_id = ctx[i]);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
 				.buffer_count = 1,
diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c
index 8ba924b8..e6466965 100644
--- a/tests/i915/gem_exec_latency.c
+++ b/tests/i915/gem_exec_latency.c
@@ -318,7 +318,7 @@ static void latency_from_ring(int fd,
 
 		if (flags & PREEMPT)
 			spin = __igt_spin_new(fd,
-					      .ctx = ctx[0],
+					      .ctx_id = ctx[0],
 					      .engine = e->flags);
 
 		if (flags & CORK) {
diff --git a/tests/i915/gem_exec_nop.c b/tests/i915/gem_exec_nop.c
index 62554ecb..f24ff88f 100644
--- a/tests/i915/gem_exec_nop.c
+++ b/tests/i915/gem_exec_nop.c
@@ -917,7 +917,7 @@ static void preempt(int fd, uint32_t handle,
 	intel_detect_and_clear_missed_interrupts(fd);
 
 	count = 0;
-	spin = __igt_spin_new(fd, .ctx = ctx[0], .engine = e->flags);
+	spin = __igt_spin_new(fd, .ctx_id = ctx[0], .engine = e->flags);
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	do {
 		gem_execbuf(fd, &execbuf);
diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 9585059d..f84967c7 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -205,11 +205,11 @@ static void unplug_show_queue(int fd, struct igt_cork *c, unsigned int engine)
 
 	for (int n = 0; n < max; n++) {
 		const struct igt_spin_factory opts = {
-			.ctx = create_highest_priority(fd),
+			.ctx_id = create_highest_priority(fd),
 			.engine = engine,
 		};
 		spin[n] = __igt_spin_factory(fd, &opts);
-		gem_context_destroy(fd, opts.ctx);
+		gem_context_destroy(fd, opts.ctx_id);
 	}
 
 	igt_cork_unplug(c); /* batches will now be queued on the engine */
@@ -639,7 +639,7 @@ static void lateslice(int i915, unsigned int engine, unsigned long flags)
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
 	ctx = gem_context_create(i915);
-	spin[0] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	spin[0] = igt_spin_new(i915, .ctx_id = ctx, .engine = engine,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT |
 					 flags));
@@ -648,7 +648,7 @@ static void lateslice(int i915, unsigned int engine, unsigned long flags)
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx = gem_context_create(i915);
-	spin[1] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	spin[1] = igt_spin_new(i915, .ctx_id = ctx, .engine = engine,
 			       .fence = spin[0]->out_fence,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_IN |
@@ -665,7 +665,7 @@ static void lateslice(int i915, unsigned int engine, unsigned long flags)
 	 */
 
 	ctx = gem_context_create(i915);
-	spin[2] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	spin[2] = igt_spin_new(i915, .ctx_id = ctx, .engine = engine,
 			       .flags = IGT_SPIN_POLL_RUN | flags);
 	gem_context_destroy(i915, ctx);
 
@@ -765,7 +765,7 @@ static void submit_slice(int i915,
 		engines.engines[0].engine_class = e->class;
 		engines.engines[0].engine_instance = e->instance;
 		gem_context_set_param(i915, &param);
-		spin = igt_spin_new(i915, .ctx = param.ctx_id,
+		spin = igt_spin_new(i915, .ctx_id = param.ctx_id,
 				    .fence = fence,
 				    .flags =
 				    IGT_SPIN_POLL_RUN |
@@ -909,7 +909,7 @@ static void semaphore_codependency(int i915, unsigned long flags)
 
 		task[i].xcs =
 			__igt_spin_new(i915,
-				       .ctx = ctx,
+				       .ctx_id = ctx,
 				       .engine = e->flags,
 				       .flags = IGT_SPIN_POLL_RUN | flags);
 		igt_spin_busywait_until_started(task[i].xcs);
@@ -917,7 +917,7 @@ static void semaphore_codependency(int i915, unsigned long flags)
 		/* Common rcs tasks will be queued in FIFO */
 		task[i].rcs =
 			__igt_spin_new(i915,
-				       .ctx = ctx,
+				       .ctx_id = ctx,
 				       .engine = 0,
 				       .dependency = task[i].xcs->handle);
 
@@ -1403,7 +1403,7 @@ static void preempt(int fd, const struct intel_execution_engine2 *e, unsigned fl
 			gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
 		}
 		spin[n] = __igt_spin_new(fd,
-					 .ctx = ctx[LO],
+					 .ctx_id = ctx[LO],
 					 .engine = e->flags,
 					 .flags = flags & USERPTR ? IGT_SPIN_USERPTR : 0);
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
@@ -1439,7 +1439,7 @@ static igt_spin_t *__noise(int fd, uint32_t ctx, int prio, igt_spin_t *spin)
 	__for_each_physical_engine(fd, e) {
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
-					      .ctx = ctx,
+					      .ctx_id = ctx,
 					      .engine = e->flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
@@ -1718,7 +1718,7 @@ static void preempt_self(int fd, unsigned ring)
 	gem_context_set_priority(fd, ctx[HI], MIN_PRIO);
 	__for_each_physical_engine(fd, e) {
 		spin[n] = __igt_spin_new(fd,
-					 .ctx = ctx[NOISE],
+					 .ctx_id = ctx[NOISE],
 					 .engine = e->flags);
 		store_dword(fd, ctx[HI], e->flags,
 			    result, (n + 1)*sizeof(uint32_t), n + 1,
@@ -1766,7 +1766,7 @@ static void preemptive_hang(int fd, const struct intel_execution_engine2 *e)
 		gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
 
 		spin[n] = __igt_spin_new(fd,
-					 .ctx = ctx[LO],
+					 .ctx_id = ctx[LO],
 					 .engine = e->flags);
 
 		gem_context_destroy(fd, ctx[LO]);
@@ -2751,7 +2751,7 @@ static void fairslice(int i915,
 		ctx[i] = gem_context_clone_with_engines(i915, 0);
 		if (spin == NULL) {
 			spin = __igt_spin_new(i915,
-					      .ctx = ctx[i],
+					      .ctx_id = ctx[i],
 					      .engine = e->flags,
 					      .flags = flags);
 		} else {
diff --git a/tests/i915/gem_spin_batch.c b/tests/i915/gem_spin_batch.c
index 2ed601a6..db0af018 100644
--- a/tests/i915/gem_spin_batch.c
+++ b/tests/i915/gem_spin_batch.c
@@ -145,7 +145,7 @@ static void spin_all(int i915, unsigned int flags)
 
 		/* Prevent preemption so only one is allowed on each engine */
 		spin = igt_spin_new(i915,
-				    .ctx = ctx,
+				    .ctx_id = ctx,
 				    .engine = e->flags,
 				    .flags = (IGT_SPIN_POLL_RUN |
 					      IGT_SPIN_NO_PREEMPTION));
diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c
index 6ad31517..58781a5e 100644
--- a/tests/i915/gem_sync.c
+++ b/tests/i915/gem_sync.c
@@ -1109,7 +1109,7 @@ preempt(int fd, unsigned ring, int num_children, int timeout)
 		do {
 			igt_spin_t *spin =
 				__igt_spin_new(fd,
-					       .ctx = ctx[0],
+					       .ctx_id = ctx[0],
 					       .engine = execbuf.flags);
 
 			do {
diff --git a/tests/i915/gem_vm_create.c b/tests/i915/gem_vm_create.c
index 6d93c98a..954ba13e 100644
--- a/tests/i915/gem_vm_create.c
+++ b/tests/i915/gem_vm_create.c
@@ -363,7 +363,7 @@ static void async_destroy(int i915)
 	int err;
 
 	spin[0] = igt_spin_new(i915,
-			       .ctx = arg.ctx_id,
+			       .ctx_id = arg.ctx_id,
 			       .flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin[0]);
 
@@ -372,7 +372,7 @@ static void async_destroy(int i915)
 		err = 0;
 	igt_assert_eq(err, 0);
 
-	spin[1] = __igt_spin_new(i915, .ctx = arg.ctx_id);
+	spin[1] = __igt_spin_new(i915, .ctx_id = arg.ctx_id);
 
 	igt_spin_end(spin[0]);
 	gem_sync(i915, spin[0]->handle);
diff --git a/tests/i915/gem_workarounds.c b/tests/i915/gem_workarounds.c
index 00b475c2..0daf0cd6 100644
--- a/tests/i915/gem_workarounds.c
+++ b/tests/i915/gem_workarounds.c
@@ -135,7 +135,7 @@ static int workaround_fail_count(int i915, uint32_t ctx)
 
 	gem_set_domain(i915, obj[0].handle, I915_GEM_DOMAIN_CPU, 0);
 
-	spin = igt_spin_new(i915, .ctx = ctx, .flags = IGT_SPIN_POLL_RUN);
+	spin = igt_spin_new(i915, .ctx_id = ctx, .flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin);
 
 	fw = igt_open_forcewake_handle(i915);
diff --git a/tests/i915/perf_pmu.c b/tests/i915/perf_pmu.c
index 50b5c82b..c7ddac85 100644
--- a/tests/i915/perf_pmu.c
+++ b/tests/i915/perf_pmu.c
@@ -175,7 +175,7 @@ static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
 				const struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
-		.ctx = ctx,
+		.ctx_id = ctx,
 		.engine = e->flags,
 	};
 
@@ -381,7 +381,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	spin[0] = __spin_sync(gem_fd, 0, e);
 	usleep(500e3);
 	spin[1] = __igt_spin_new(gem_fd,
-				 .ctx = ctx,
+				 .ctx_id = ctx,
 				 .engine = e->flags);
 
 	/*
-- 
2.29.2


* [igt-dev] [RFC 06/30] lib/igt_spin: Support intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (4 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 05/30] lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-08 20:08   ` Daniel Vetter
  2021-04-01  2:12 ` [igt-dev] [RFC 07/30] tests/i915/gem_exec_fence: Convert to intel_ctx_t Jason Ekstrand
                   ` (25 subsequent siblings)
  31 siblings, 1 reply; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 lib/igt_dummyload.c | 30 ++++++++++++++++++++++--------
 lib/igt_dummyload.h |  4 ++++
 2 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 5a11ec4e..ac83b331 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -123,16 +123,28 @@ emit_recursive_batch(igt_spin_t *spin,
 	addr += random() % addr / 2;
 	addr &= -4096;
 
+	assert(!(opts->ctx && opts->ctx_id));
+
 	nengine = 0;
 	if (opts->engine == ALL_ENGINES) {
 		struct intel_execution_engine2 *engine;
 
-		for_each_context_engine(fd, opts->ctx_id, engine) {
-			if (opts->flags & IGT_SPIN_POLL_RUN &&
-			    !gem_class_can_store_dword(fd, engine->class))
-				continue;
+		if (opts->ctx) {
+			for_each_ctx_engine(fd, opts->ctx, engine) {
+				if (opts->flags & IGT_SPIN_POLL_RUN &&
+				    !gem_class_can_store_dword(fd, engine->class))
+					continue;
 
-			flags[nengine++] = engine->flags;
+				flags[nengine++] = engine->flags;
+			}
+		} else {
+			for_each_context_engine(fd, opts->ctx_id, engine) {
+				if (opts->flags & IGT_SPIN_POLL_RUN &&
+				    !gem_class_can_store_dword(fd, engine->class))
+					continue;
+
+				flags[nengine++] = engine->flags;
+			}
 		}
 	} else {
 		flags[nengine++] = opts->engine;
@@ -325,7 +337,7 @@ emit_recursive_batch(igt_spin_t *spin,
 
 	execbuf->buffers_ptr =
 	       	to_user_pointer(obj + (2 - execbuf->buffer_count));
-	execbuf->rsvd1 = opts->ctx_id;
+	execbuf->rsvd1 = opts->ctx ? opts->ctx->id : opts->ctx_id;
 
 	if (opts->flags & IGT_SPIN_FENCE_OUT)
 		execbuf->flags |= I915_EXEC_FENCE_OUT;
@@ -422,8 +434,10 @@ igt_spin_factory(int fd, const struct igt_spin_factory *opts)
 		struct intel_execution_engine2 e;
 		int class;
 
-		if (!gem_context_lookup_engine(fd, opts->engine,
-					       opts->ctx_id, &e)) {
+		if (opts->ctx) {
+			class = opts->ctx->cfg.engines[opts->engine].engine_class;
+		} else if (!gem_context_lookup_engine(fd, opts->engine,
+						      opts->ctx_id, &e)) {
 			class = e.class;
 		} else {
 			gem_require_ring(fd, opts->engine);
diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
index aee72da8..b26a7b7d 100644
--- a/lib/igt_dummyload.h
+++ b/lib/igt_dummyload.h
@@ -32,9 +32,12 @@
 #include "igt_list.h"
 #include "i915_drm.h"
 
+struct intel_ctx;
+
 typedef struct igt_spin {
 	struct igt_list_head link;
 
+
 	uint32_t handle;
 	uint32_t poll_handle;
 
@@ -61,6 +64,7 @@ typedef struct igt_spin {
 
 struct igt_spin_factory {
 	uint32_t ctx_id;
+	const struct intel_ctx *ctx;
 	uint32_t dependency;
 	unsigned int engine;
 	unsigned int flags;
-- 
2.29.2


* [igt-dev] [RFC 07/30] tests/i915/gem_exec_fence: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (5 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 06/30] lib/igt_spin: Support intel_ctx_t Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 08/30] tests/i915/gem_exec_schedule: " Jason Ekstrand
                   ` (24 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_exec_fence.c | 189 +++++++++++++++++++++---------------
 1 file changed, 113 insertions(+), 76 deletions(-)

diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index b7b3f8e3..84311d49 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -30,6 +30,7 @@
 #include "igt_syncobj.h"
 #include "igt_sysfs.h"
 #include "igt_vgem.h"
+#include "intel_ctx.h"
 #include "sw_sync.h"
 
 IGT_TEST_DESCRIPTION("Check that execbuf waits for explicit fences");
@@ -55,7 +56,8 @@ struct sync_merge_data {
 #define   MI_SEMAPHORE_SAD_EQ_SDD       (4 << 12)
 #define   MI_SEMAPHORE_SAD_NEQ_SDD      (5 << 12)
 
-static void store(int fd, const struct intel_execution_engine2 *e,
+static void store(int fd, const intel_ctx_t *ctx,
+		  const struct intel_execution_engine2 *e,
 		  int fence, uint32_t target, unsigned offset_value)
 {
 	const int SCRATCH = 0;
@@ -71,6 +73,7 @@ static void store(int fd, const struct intel_execution_engine2 *e,
 	execbuf.buffers_ptr = to_user_pointer(obj);
 	execbuf.buffer_count = 2;
 	execbuf.flags = e->flags | I915_EXEC_FENCE_IN;
+	execbuf.rsvd1 = ctx->id;
 	execbuf.rsvd2 = fence;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
@@ -118,7 +121,8 @@ static bool fence_busy(int fence)
 #define NONBLOCK 0x2
 #define WAIT 0x4
 
-static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
+static void test_fence_busy(int fd, const intel_ctx_t *ctx,
+			    const struct intel_execution_engine2 *e,
 			    unsigned flags)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -138,6 +142,7 @@ static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
 	execbuf.buffers_ptr = to_user_pointer(&obj);
 	execbuf.buffer_count = 1;
 	execbuf.flags = e->flags | I915_EXEC_FENCE_OUT;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = gem_create(fd, 4096);
@@ -214,7 +219,7 @@ static void test_fence_busy(int fd, const struct intel_execution_engine2 *e,
 	gem_quiescent_gpu(fd);
 }
 
-static void test_fence_busy_all(int fd, unsigned flags)
+static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 {
 	const struct intel_execution_engine2 *e;
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -272,7 +277,7 @@ static void test_fence_busy_all(int fd, unsigned flags)
 	i++;
 
 	all = -1;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		int fence, new;
 
 		if ((flags & HANG) == 0 &&
@@ -280,6 +285,7 @@ static void test_fence_busy_all(int fd, unsigned flags)
 			continue;
 
 		execbuf.flags = e->flags | I915_EXEC_FENCE_OUT;
+		execbuf.rsvd1 = ctx->id;
 		execbuf.rsvd2 = -1;
 		gem_execbuf_wr(fd, &execbuf);
 		fence = execbuf.rsvd2 >> 32;
@@ -336,7 +342,8 @@ static unsigned int spin_hang(unsigned int flags)
 	return IGT_SPIN_NO_PREEMPTION | IGT_SPIN_INVALID_CS;
 }
 
-static void test_fence_await(int fd, const struct intel_execution_engine2 *e,
+static void test_fence_await(int fd, const intel_ctx_t *ctx,
+			     const struct intel_execution_engine2 *e,
 			     unsigned flags)
 {
 	const struct intel_execution_engine2 *e2;
@@ -350,20 +357,21 @@ static void test_fence_await(int fd, const struct intel_execution_engine2 *e,
 			I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 
 	spin = igt_spin_new(fd,
+			    .ctx = ctx,
 			    .engine = e->flags,
 			    .flags = IGT_SPIN_FENCE_OUT | spin_hang(flags));
 	igt_assert(spin->out_fence != -1);
 
 	i = 0;
-	__for_each_physical_engine(fd, e2) {
+	for_each_ctx_engine(fd, ctx, e2) {
 		if (!gem_class_can_store_dword(fd, e->class))
 			continue;
 
 		if (flags & NONBLOCK) {
-			store(fd, e2, spin->out_fence, scratch, i);
+			store(fd, ctx, e2, spin->out_fence, scratch, i);
 		} else {
 			igt_fork(child, 1)
-				store(fd, e2, spin->out_fence, scratch, i);
+				store(fd, ctx, e2, spin->out_fence, scratch, i);
 		}
 
 		i++;
@@ -439,9 +447,10 @@ static uint32_t timeslicing_batches(int i915, uint32_t *offset)
         return handle;
 }
 
-static void test_submit_fence(int i915, unsigned int engine)
+static void test_submit_fence(int i915, const intel_ctx_t *ctx,
+			      const struct intel_execution_engine2 *e)
 {
-	const struct intel_execution_engine2 *e;
+	const struct intel_execution_engine2 *e2;
 
 	/*
 	 * Create a pair of interlocking batches, that ping pong
@@ -450,8 +459,9 @@ static void test_submit_fence(int i915, unsigned int engine)
 	 * switch to the other batch in order to advance.
 	 */
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e2) {
 		unsigned int offset = 24 << 20;
+		intel_ctx_t *tmp_ctx;
 		struct drm_i915_gem_exec_object2 obj = {
 			.offset = offset,
 			.flags = EXEC_OBJECT_PINNED,
@@ -467,17 +477,19 @@ static void test_submit_fence(int i915, unsigned int engine)
 		result = gem_mmap__device_coherent(i915, obj.handle,
 						   0, 4096, PROT_READ);
 
-		execbuf.flags = engine | I915_EXEC_FENCE_OUT;
+		execbuf.flags = e->flags | I915_EXEC_FENCE_OUT;
 		execbuf.batch_start_offset = 0;
+		execbuf.rsvd1 = ctx->id;
 		gem_execbuf_wr(i915, &execbuf);
 
-		execbuf.rsvd1 = gem_context_clone_with_engines(i915, 0);
+		tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
+		execbuf.rsvd1 = tmp_ctx->id;
 		execbuf.rsvd2 >>= 32;
-		execbuf.flags = e->flags;
+		execbuf.flags = e2->flags;
 		execbuf.flags |= I915_EXEC_FENCE_SUBMIT | I915_EXEC_FENCE_OUT;
 		execbuf.batch_start_offset = offset;
 		gem_execbuf_wr(i915, &execbuf);
-		gem_context_destroy(i915, execbuf.rsvd1);
+		intel_ctx_destroy(i915, tmp_ctx);
 
 		gem_sync(i915, obj.handle);
 		gem_close(i915, obj.handle);
@@ -532,7 +544,9 @@ static uint32_t submitN_batches(int i915, uint32_t offset, int count)
         return handle;
 }
 
-static void test_submitN(int i915, unsigned int engine, int count)
+static void test_submitN(int i915, const intel_ctx_t *ctx,
+			 const struct intel_execution_engine2 *e,
+			 int count)
 {
 	unsigned int offset = 24 << 20;
 	unsigned int sz = ALIGN((count + 1) * 1024, 4096);
@@ -544,7 +558,8 @@ static void test_submitN(int i915, unsigned int engine, int count)
 	struct drm_i915_gem_execbuffer2 execbuf  = {
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
-		.flags = engine | I915_EXEC_FENCE_OUT,
+		.flags = e->flags | I915_EXEC_FENCE_OUT,
+		.rsvd1 = ctx->id,
 	};
 	uint32_t *result =
 		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
@@ -555,10 +570,11 @@ static void test_submitN(int i915, unsigned int engine, int count)
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
 	for (int i = 0; i < count; i++) {
-		execbuf.rsvd1 = gem_context_clone_with_engines(i915, 0);
+		intel_ctx_t *tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
+		execbuf.rsvd1 = tmp_ctx->id;
 		execbuf.batch_start_offset = (i + 1) * 1024;
 		gem_execbuf_wr(i915, &execbuf);
-		gem_context_destroy(i915, execbuf.rsvd1);
+		intel_ctx_destroy(i915, tmp_ctx);
 
 		execbuf.flags |= I915_EXEC_FENCE_SUBMIT;
 		execbuf.rsvd2 >>= 32;
@@ -594,7 +610,8 @@ static int __execbuf(int fd, struct drm_i915_gem_execbuffer2 *execbuf)
 	return err;
 }
 
-static void test_parallel(int i915, const struct intel_execution_engine2 *e)
+static void test_parallel(int i915, const intel_ctx_t *ctx,
+			  const struct intel_execution_engine2 *e)
 {
 	const struct intel_execution_engine2 *e2;
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
@@ -608,6 +625,7 @@ static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 
 	fence = igt_cork_plug(&cork, i915),
 	spin = igt_spin_new(i915,
+			    .ctx = ctx,
 			    .engine = e->flags,
 			    .fence = fence,
 			    .flags = (IGT_SPIN_FENCE_OUT |
@@ -615,7 +633,7 @@ static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 	close(fence);
 
 	/* Queue all secondaries */
-	__for_each_physical_engine(i915, e2) {
+	for_each_ctx_engine(i915, ctx, e2) {
 		struct drm_i915_gem_relocation_entry reloc = {
 			.target_handle = scratch,
 			.offset = sizeof(uint32_t),
@@ -632,6 +650,7 @@ static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 			.buffers_ptr = to_user_pointer(obj),
 			.buffer_count = ARRAY_SIZE(obj),
 			.flags = e2->flags | I915_EXEC_FENCE_SUBMIT,
+			.rsvd1 = ctx->id,
 			.rsvd2 = spin->out_fence,
 		};
 		uint32_t batch[16];
@@ -701,7 +720,8 @@ static void test_parallel(int i915, const struct intel_execution_engine2 *e)
 	igt_spin_free(i915, spin);
 }
 
-static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
+static void test_concurrent(int i915, const intel_ctx_t *ctx,
+			    const struct intel_execution_engine2 *e)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_relocation_entry reloc = {
@@ -721,10 +741,12 @@ static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = ARRAY_SIZE(obj),
 		.flags = e->flags | I915_EXEC_FENCE_SUBMIT,
+		.rsvd1 = ctx->id,
 	};
 	IGT_CORK_FENCE(cork);
 	uint32_t batch[16];
 	igt_spin_t *spin;
+	intel_ctx_t *tmp_ctx;
 	uint32_t result;
 	int fence;
 	int i;
@@ -737,6 +759,7 @@ static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
 
 	fence = igt_cork_plug(&cork, i915),
 	      spin = igt_spin_new(i915,
+				  .ctx = ctx,
 				  .engine = e->flags,
 				  .fence = fence,
 				  .flags = (IGT_SPIN_FENCE_OUT |
@@ -760,13 +783,14 @@ static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
 	batch[++i] = MI_BATCH_BUFFER_END;
 	gem_write(i915, obj[1].handle, 0, batch, sizeof(batch));
 
-	execbuf.rsvd1 = gem_context_clone_with_engines(i915, 0);
+	tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
+	execbuf.rsvd1 = tmp_ctx->id;
 	execbuf.rsvd2 = spin->out_fence;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
 
 	gem_execbuf(i915, &execbuf);
-	gem_context_destroy(i915, execbuf.rsvd1);
+	intel_ctx_destroy(i915, tmp_ctx);
 	gem_close(i915, obj[1].handle);
 
 	/*
@@ -795,7 +819,7 @@ static void test_concurrent(int i915, const struct intel_execution_engine2 *e)
 	igt_spin_free(i915, spin);
 }
 
-static void test_submit_chain(int i915)
+static void test_submit_chain(int i915, const intel_ctx_t *ctx)
 {
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin, *sn;
@@ -806,8 +830,9 @@ static void test_submit_chain(int i915)
 	/* Check that we can simultaneously launch spinners on each engine */
 
 	fence = igt_cork_plug(&cork, i915);
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		spin = igt_spin_new(i915,
+				    .ctx = ctx,
 				    .engine = e->flags,
 				    .fence = fence,
 				    .flags = (IGT_SPIN_POLL_RUN |
@@ -847,7 +872,8 @@ static uint32_t batch_create(int fd)
 	return handle;
 }
 
-static void test_keep_in_fence(int fd, const struct intel_execution_engine2 *e)
+static void test_keep_in_fence(int fd, const intel_ctx_t *ctx,
+			       const struct intel_execution_engine2 *e)
 {
 	struct sigaction sa = { .sa_handler = alarm_handler };
 	struct drm_i915_gem_exec_object2 obj = {
@@ -857,13 +883,14 @@ static void test_keep_in_fence(int fd, const struct intel_execution_engine2 *e)
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
 		.flags = e->flags | I915_EXEC_FENCE_OUT,
+		.rsvd1 = ctx->id,
 	};
 	unsigned long count, last;
 	struct itimerval itv;
 	igt_spin_t *spin;
 	int fence;
 
-	spin = igt_spin_new(fd, .engine = e->flags);
+	spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
 
 	gem_execbuf_wr(fd, &execbuf);
 	fence = upper_32_bits(execbuf.rsvd2);
@@ -915,7 +942,8 @@ static void test_keep_in_fence(int fd, const struct intel_execution_engine2 *e)
 }
 
 #define EXPIRED 0x10000
-static void test_long_history(int fd, long ring_size, unsigned flags)
+static void test_long_history(int fd, const intel_ctx_t *ctx,
+			      long ring_size, unsigned flags)
 {
 	const uint32_t sz = 1 << 20;
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
@@ -932,7 +960,7 @@ static void test_long_history(int fd, long ring_size, unsigned flags)
 		limit = ring_size / 3;
 
 	nengine = 0;
-	__for_each_physical_engine(fd, e)
+	for_each_ctx_engine(fd, ctx, e)
 		engines[nengine++] = e->flags;
 	igt_require(nengine);
 
@@ -946,6 +974,7 @@ static void test_long_history(int fd, long ring_size, unsigned flags)
 	execbuf.buffers_ptr = to_user_pointer(&obj[1]);
 	execbuf.buffer_count = 1;
 	execbuf.flags = I915_EXEC_FENCE_OUT;
+	execbuf.rsvd1 = ctx->id;
 
 	gem_execbuf_wr(fd, &execbuf);
 	all_fences = execbuf.rsvd2 >> 32;
@@ -956,7 +985,8 @@ static void test_long_history(int fd, long ring_size, unsigned flags)
 	obj[0].handle = igt_cork_plug(&c, fd);
 
 	igt_until_timeout(5) {
-		execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
+		intel_ctx_t *tmp_ctx = intel_ctx_create(fd, &ctx->cfg);
+		execbuf.rsvd1 = tmp_ctx->id;
 
 		for (n = 0; n < nengine; n++) {
 			struct sync_merge_data merge;
@@ -977,7 +1007,7 @@ static void test_long_history(int fd, long ring_size, unsigned flags)
 			all_fences = merge.fence;
 		}
 
-		gem_context_destroy(fd, execbuf.rsvd1);
+		intel_ctx_destroy(fd, tmp_ctx);
 		if (!--limit)
 			break;
 	}
@@ -991,7 +1021,7 @@ static void test_long_history(int fd, long ring_size, unsigned flags)
 	execbuf.buffers_ptr = to_user_pointer(&obj[1]);
 	execbuf.buffer_count = 1;
 	execbuf.rsvd2 = all_fences;
-	execbuf.rsvd1 = 0;
+	execbuf.rsvd1 = ctx->id;
 
 	for (s = 0; s < ring_size; s++) {
 		for (n = 0; n < nengine; n++) {
@@ -1257,7 +1287,7 @@ static void test_syncobj_signal(int fd)
 	syncobj_destroy(fd, fence.handle);
 }
 
-static void test_syncobj_wait(int fd)
+static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj;
@@ -1299,12 +1329,13 @@ static void test_syncobj_wait(int fd)
 	gem_write(fd, obj.handle, 0, &bbe, sizeof(bbe));
 
 	n = 0;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		obj.handle = gem_create(fd, 4096);
 		gem_write(fd, obj.handle, 0, &bbe, sizeof(bbe));
 
 		/* Now wait upon the blocked engine */
 		execbuf.flags = I915_EXEC_FENCE_ARRAY | e->flags;
+		execbuf.rsvd1 = ctx->id;
 		execbuf.cliprects_ptr = to_user_pointer(&fence);
 		execbuf.num_cliprects = 1;
 		fence.flags = I915_EXEC_FENCE_WAIT;
@@ -1997,7 +2028,7 @@ static void test_syncobj_timeline_signal(int fd)
 static const char *test_syncobj_timeline_wait_desc =
 	"Verifies that waiting on a timeline syncobj point between engines"
 	" works";
-static void test_syncobj_timeline_wait(int fd)
+static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 {
 	const uint32_t bbe[2] = {
 		MI_BATCH_BUFFER_END,
@@ -2046,12 +2077,13 @@ static void test_syncobj_timeline_wait(int fd)
 	gem_close(fd, obj.handle);
 
 	n = 0;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		obj.handle = gem_create(fd, 4096);
 		gem_write(fd, obj.handle, 0, bbe, sizeof(bbe));
 
 		/* Now wait upon the blocked engine */
 		execbuf.flags = I915_EXEC_USE_EXTENSIONS | e->flags;
+		execbuf.rsvd1 = ctx->id;
 		execbuf.cliprects_ptr = to_user_pointer(&timeline_fences);
 		execbuf.num_cliprects = 0;
 		fence.flags = I915_EXEC_FENCE_WAIT;
@@ -2918,6 +2950,7 @@ static void test_syncobj_backward_timeline_chain_engines(int fd, struct intel_en
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx;
 	int i915 = -1;
 
 	igt_fixture {
@@ -2925,6 +2958,10 @@ igt_main
 		igt_require_gem(i915);
 		igt_require(gem_has_exec_fence(i915));
 		gem_require_mmap_wc(i915);
+		if (gem_has_contexts(i915))
+			ctx = intel_ctx_create_all_physical(i915);
+		else
+			ctx = intel_ctx_0(i915);
 
 		gem_submission_print_method(i915);
 	}
@@ -2937,9 +2974,9 @@ igt_main
 		}
 
 		igt_subtest("basic-busy-all")
-			test_fence_busy_all(i915, 0);
+			test_fence_busy_all(i915, ctx, 0);
 		igt_subtest("basic-wait-all")
-			test_fence_busy_all(i915, WAIT);
+			test_fence_busy_all(i915, ctx, WAIT);
 
 		igt_fixture {
 			igt_stop_hang_detector();
@@ -2947,9 +2984,9 @@ igt_main
 		}
 
 		igt_subtest("busy-hang-all")
-			test_fence_busy_all(i915, HANG);
+			test_fence_busy_all(i915, ctx, HANG);
 		igt_subtest("wait-hang-all")
-			test_fence_busy_all(i915, WAIT | HANG);
+			test_fence_busy_all(i915, ctx, WAIT | HANG);
 
 		igt_fixture {
 			igt_disallow_hang(i915, hang);
@@ -2957,7 +2994,7 @@ igt_main
 	}
 
 	igt_subtest_group {
-		__for_each_physical_engine(i915, e) {
+		for_each_ctx_engine(i915, ctx, e) {
 			igt_fixture {
 				igt_require(gem_class_can_store_dword(i915, e->class));
 			}
@@ -2968,42 +3005,42 @@ igt_main
 			}
 
 			igt_subtest_with_dynamic("basic-busy") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_busy(i915, e, 0);
+						test_fence_busy(i915, ctx, e, 0);
 				}
 			}
 			igt_subtest_with_dynamic("basic-wait") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_busy(i915, e, WAIT);
+						test_fence_busy(i915, ctx, e, WAIT);
 				}
 			}
 			igt_subtest_with_dynamic("basic-await") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_await(i915, e, 0);
+						test_fence_await(i915, ctx, e, 0);
 				}
 			}
 			igt_subtest_with_dynamic("nb-await") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_await(i915,
-								 e, NONBLOCK);
+						test_fence_await(i915, ctx, e,
+								 NONBLOCK);
 				}
 			}
 			igt_subtest_with_dynamic("keep-in-fence") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_keep_in_fence(i915, e);
+						test_keep_in_fence(i915, ctx, e);
 				}
 			}
 			igt_subtest_with_dynamic("parallel") {
 				igt_require(has_submit_fence(i915));
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name) {
 						igt_until_timeout(2)
-							test_parallel(i915, e);
+							test_parallel(i915, ctx, e);
 					}
 				}
 			}
@@ -3012,9 +3049,9 @@ igt_main
 				igt_require(has_submit_fence(i915));
 				igt_require(gem_scheduler_has_semaphores(i915));
 				igt_require(gem_scheduler_has_preemption(i915));
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_concurrent(i915, e);
+						test_concurrent(i915, ctx, e);
 				}
 			}
 
@@ -3023,9 +3060,9 @@ igt_main
 				igt_require(gem_scheduler_has_preemption(i915));
 				igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_submit_fence(i915, e->flags);
+						test_submit_fence(i915, ctx, e);
 				}
 			}
 
@@ -3034,9 +3071,9 @@ igt_main
 				igt_require(gem_scheduler_has_preemption(i915));
 				igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_submitN(i915, e->flags, 3);
+						test_submitN(i915, ctx, e, 3);
 				}
 			}
 
@@ -3045,15 +3082,15 @@ igt_main
 				igt_require(gem_scheduler_has_preemption(i915));
 				igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_submitN(i915, e->flags, 67);
+						test_submitN(i915, ctx, e, 67);
 				}
 			}
 
 			igt_subtest("submit-chain") {
 				igt_require(has_submit_fence(i915));
-				test_submit_chain(i915);
+				test_submit_chain(i915, ctx);
 			}
 
 			igt_fixture {
@@ -3069,27 +3106,27 @@ igt_main
 			}
 
 			igt_subtest_with_dynamic("busy-hang") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_busy(i915, e, HANG);
+						test_fence_busy(i915, ctx, e, HANG);
 				}
 			}
 			igt_subtest_with_dynamic("wait-hang") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_busy(i915, e, HANG | WAIT);
+						test_fence_busy(i915, ctx, e, HANG | WAIT);
 				}
 			}
 			igt_subtest_with_dynamic("await-hang") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_await(i915, e, HANG);
+						test_fence_await(i915, ctx, e, HANG);
 				}
 			}
 			igt_subtest_with_dynamic("nb-await-hang") {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_engine(i915, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						test_fence_await(i915, e, NONBLOCK | HANG);
+						test_fence_await(i915, ctx, e, NONBLOCK | HANG);
 				}
 			}
 			igt_fixture {
@@ -3110,10 +3147,10 @@ igt_main
 		}
 
 		igt_subtest("long-history")
-			test_long_history(i915, ring_size, 0);
+			test_long_history(i915, ctx, ring_size, 0);
 
 		igt_subtest("expired-history")
-			test_long_history(i915, ring_size, EXPIRED);
+			test_long_history(i915, ctx, ring_size, EXPIRED);
 	}
 
 	igt_subtest_group { /* syncobj */
@@ -3139,7 +3176,7 @@ igt_main
 			test_syncobj_signal(i915);
 
 		igt_subtest("syncobj-wait")
-			test_syncobj_wait(i915);
+			test_syncobj_wait(i915, ctx);
 
 		igt_subtest("syncobj-export")
 			test_syncobj_export(i915);
@@ -3187,7 +3224,7 @@ igt_main
 
 		igt_describe(test_syncobj_timeline_wait_desc);
 		igt_subtest("syncobj-timeline-wait")
-			test_syncobj_timeline_wait(i915);
+			test_syncobj_timeline_wait(i915, ctx);
 
 		igt_describe(test_syncobj_timeline_export_desc);
 		igt_subtest("syncobj-timeline-export")
-- 
2.29.2


* [igt-dev] [RFC 08/30] tests/i915/gem_exec_schedule: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (6 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 07/30] tests/i915/gem_exec_fence: Convert to intel_ctx_t Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 09/30] tests/i915/perf_pmu: " Jason Ekstrand
                   ` (23 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev
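
Beyond the per-test ctx parameter, helpers here that mint their own contexts
now take a const intel_ctx_cfg_t *cfg and call intel_ctx_create(fd, cfg) for
each one, and the lone user of I915_CONTEXT_CLONE_VM (noreorder) instead puts
an explicit gem_vm_create() handle into the cfg so its contexts share a VM.
A rough sketch of that shape (priority_pair is a hypothetical helper;
MIN_PRIO/MAX_PRIO are this test's existing macros):

    static void priority_pair(int i915, const intel_ctx_cfg_t *cfg)
    {
        intel_ctx_cfg_t vm_cfg = *cfg;
        intel_ctx_t *lo, *hi;

        /* contexts built from the same cfg share the engine layout;
         * an explicit VM in the cfg replaces I915_CONTEXT_CLONE_VM */
        vm_cfg.vm = gem_vm_create(i915);

        lo = intel_ctx_create(i915, &vm_cfg);
        gem_context_set_priority(i915, lo->id, MIN_PRIO);

        hi = intel_ctx_create(i915, &vm_cfg);
        gem_context_set_priority(i915, hi->id, MAX_PRIO);

        /* ... queue low-priority work on lo->id, then hi->id ... */

        intel_ctx_destroy(i915, lo);
        intel_ctx_destroy(i915, hi);
        gem_vm_destroy(i915, vm_cfg.vm);
    }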

---
 tests/i915/gem_exec_schedule.c | 891 +++++++++++++++++----------------
 1 file changed, 470 insertions(+), 421 deletions(-)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index f84967c7..710c6606 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -35,11 +35,13 @@
 #include <unistd.h>
 
 #include "i915/gem.h"
+#include "i915/gem_vm.h"
 #include "igt.h"
 #include "igt_rand.h"
 #include "igt_rapl.h"
 #include "igt_sysfs.h"
 #include "igt_vgem.h"
+#include "intel_ctx.h"
 #include "sw_sync.h"
 
 #define LO 0
@@ -51,7 +53,6 @@
 
 #define MAX_CONTEXTS 1024
 #define MAX_ELSP_QLEN 16
-#define MAX_ENGINES (I915_EXEC_RING_MASK + 1)
 
 #define MI_SEMAPHORE_WAIT		(0x1c << 23)
 #define   MI_SEMAPHORE_POLL             (1 << 15)
@@ -89,7 +90,7 @@ void __sync_read_u32_count(int fd, uint32_t handle, uint32_t *dst, uint64_t size
 	gem_read(fd, handle, 0, dst, size);
 }
 
-static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
+static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 			      uint32_t target, uint32_t offset, uint32_t value,
 			      uint32_t cork, int fence, unsigned write_domain)
 {
@@ -106,7 +107,7 @@ static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
 	execbuf.flags = ring;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
-	execbuf.rsvd1 = ctx;
+	execbuf.rsvd1 = ctx->id;
 
 	if (fence != -1) {
 		execbuf.flags |= I915_EXEC_FENCE_IN;
@@ -153,7 +154,7 @@ static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
 	return obj[2].handle;
 }
 
-static void store_dword(int fd, uint32_t ctx, unsigned ring,
+static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value,
 			unsigned write_domain)
 {
@@ -162,7 +163,7 @@ static void store_dword(int fd, uint32_t ctx, unsigned ring,
 				    0, -1, write_domain));
 }
 
-static void store_dword_plug(int fd, uint32_t ctx, unsigned ring,
+static void store_dword_plug(int fd, const intel_ctx_t *ctx, unsigned ring,
 			     uint32_t target, uint32_t offset, uint32_t value,
 			     uint32_t cork, unsigned write_domain)
 {
@@ -171,7 +172,7 @@ static void store_dword_plug(int fd, uint32_t ctx, unsigned ring,
 				    cork, -1, write_domain));
 }
 
-static void store_dword_fenced(int fd, uint32_t ctx, unsigned ring,
+static void store_dword_fenced(int fd, const intel_ctx_t *ctx, unsigned ring,
 			       uint32_t target, uint32_t offset, uint32_t value,
 			       int fence, unsigned write_domain)
 {
@@ -180,21 +181,24 @@ static void store_dword_fenced(int fd, uint32_t ctx, unsigned ring,
 				    0, fence, write_domain));
 }
 
-static uint32_t create_highest_priority(int fd)
+static intel_ctx_t *
+create_highest_priority(int fd, const intel_ctx_cfg_t *cfg)
 {
-	uint32_t ctx = gem_context_clone_with_engines(fd, 0);
+	intel_ctx_t *ctx = intel_ctx_create(fd, cfg);
 
 	/*
 	 * If there is no priority support, all contexts will have equal
 	 * priority (and therefore the max user priority), so no context
 	 * can overtake us, and we effectively can form a plug.
 	 */
-	__gem_context_set_priority(fd, ctx, MAX_PRIO);
+	__gem_context_set_priority(fd, ctx->id, MAX_PRIO);
 
 	return ctx;
 }
 
-static void unplug_show_queue(int fd, struct igt_cork *c, unsigned int engine)
+static void unplug_show_queue(int fd, struct igt_cork *c,
+			      const intel_ctx_cfg_t *cfg,
+			      unsigned int engine)
 {
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	int max = MAX_ELSP_QLEN;
@@ -204,12 +208,9 @@ static void unplug_show_queue(int fd, struct igt_cork *c, unsigned int engine)
 		max = 1;
 
 	for (int n = 0; n < max; n++) {
-		const struct igt_spin_factory opts = {
-			.ctx_id = create_highest_priority(fd),
-			.engine = engine,
-		};
-		spin[n] = __igt_spin_factory(fd, &opts);
-		gem_context_destroy(fd, opts.ctx_id);
+		intel_ctx_t *ctx = create_highest_priority(fd, cfg);
+		spin[n] = __igt_spin_new(fd, .ctx = ctx, .engine = engine);
+		intel_ctx_destroy(fd, ctx);
 	}
 
 	igt_cork_unplug(c); /* batches will now be queued on the engine */
@@ -220,7 +221,7 @@ static void unplug_show_queue(int fd, struct igt_cork *c, unsigned int engine)
 
 }
 
-static void fifo(int fd, unsigned ring)
+static void fifo(int fd, const intel_ctx_t *ctx, unsigned ring)
 {
 	IGT_CORK_FENCE(cork);
 	uint32_t scratch;
@@ -232,10 +233,10 @@ static void fifo(int fd, unsigned ring)
 	fence = igt_cork_plug(&cork, fd);
 
 	/* Same priority, same timeline, final result will be the second eb */
-	store_dword_fenced(fd, 0, ring, scratch, 0, 1, fence, 0);
-	store_dword_fenced(fd, 0, ring, scratch, 0, 2, fence, 0);
+	store_dword_fenced(fd, ctx, ring, scratch, 0, 1, fence, 0);
+	store_dword_fenced(fd, ctx, ring, scratch, 0, 2, fence, 0);
 
-	unplug_show_queue(fd, &cork, ring);
+	unplug_show_queue(fd, &cork, &ctx->cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(fd, scratch, 0);
@@ -249,7 +250,8 @@ enum implicit_dir {
 	WRITE_READ = 0x2,
 };
 
-static void implicit_rw(int i915, unsigned ring, enum implicit_dir dir)
+static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
+			enum implicit_dir dir)
 {
 	const struct intel_execution_engine2 *e;
 	IGT_CORK_FENCE(cork);
@@ -259,7 +261,7 @@ static void implicit_rw(int i915, unsigned ring, enum implicit_dir dir)
 	int fence;
 
 	count = 0;
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (e->flags == ring)
 			continue;
 
@@ -274,28 +276,28 @@ static void implicit_rw(int i915, unsigned ring, enum implicit_dir dir)
 	fence = igt_cork_plug(&cork, i915);
 
 	if (dir & WRITE_READ)
-		store_dword_fenced(i915, 0,
+		store_dword_fenced(i915, ctx,
 				   ring, scratch, 0, ~ring,
 				   fence, I915_GEM_DOMAIN_RENDER);
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (e->flags == ring)
 			continue;
 
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		store_dword_fenced(i915, 0,
+		store_dword_fenced(i915, ctx,
 				   e->flags, scratch, 0, e->flags,
 				   fence, 0);
 	}
 
 	if (dir & READ_WRITE)
-		store_dword_fenced(i915, 0,
+		store_dword_fenced(i915, ctx,
 				   ring, scratch, 0, ring,
 				   fence, I915_GEM_DOMAIN_RENDER);
 
-	unplug_show_queue(i915, &cork, ring);
+	unplug_show_queue(i915, &cork, &ctx->cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(i915, scratch, 0);
@@ -307,7 +309,8 @@ static void implicit_rw(int i915, unsigned ring, enum implicit_dir dir)
 		igt_assert_eq_u32(result, ring);
 }
 
-static void independent(int fd, unsigned int engine, unsigned long flags)
+static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
+			unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	IGT_CORK_FENCE(cork);
@@ -323,7 +326,7 @@ static void independent(int fd, unsigned int engine, unsigned long flags)
 	fence = igt_cork_plug(&cork, fd);
 
 	/* Check that we can submit to engine while all others are blocked */
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		if (e->flags == engine)
 			continue;
 
@@ -332,6 +335,7 @@ static void independent(int fd, unsigned int engine, unsigned long flags)
 
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
+					      .ctx = ctx,
 					      .engine = e->flags,
 					      .flags = flags);
 		} else {
@@ -343,14 +347,14 @@ static void independent(int fd, unsigned int engine, unsigned long flags)
 			gem_execbuf(fd, &eb);
 		}
 
-		store_dword_fenced(fd, 0, e->flags, scratch, 0, e->flags, fence, 0);
+		store_dword_fenced(fd, ctx, e->flags, scratch, 0, e->flags, fence, 0);
 	}
 	igt_require(spin);
 
 	/* Same priority, but different timeline (as different engine) */
-	batch = __store_dword(fd, 0, engine, scratch, 0, engine, 0, fence, 0);
+	batch = __store_dword(fd, ctx, engine, scratch, 0, engine, 0, fence, 0);
 
-	unplug_show_queue(fd, &cork, engine);
+	unplug_show_queue(fd, &cork, &ctx->cfg, engine);
 	close(fence);
 
 	gem_sync(fd, batch);
@@ -373,11 +377,12 @@ static void independent(int fd, unsigned int engine, unsigned long flags)
 	gem_close(fd, scratch);
 }
 
-static void smoketest(int fd, unsigned ring, unsigned timeout)
+static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
+		      unsigned ring, unsigned timeout)
 {
 	const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
 	const struct intel_execution_engine2 *e;
-	unsigned engines[MAX_ENGINES];
+	unsigned engines[GEM_MAX_ENGINES];
 	unsigned nengine;
 	unsigned engine;
 	uint32_t scratch;
@@ -385,7 +390,7 @@ static void smoketest(int fd, unsigned ring, unsigned timeout)
 
 	nengine = 0;
 	if (ring == ALL_ENGINES) {
-		__for_each_physical_engine(fd, e)
+		for_each_ctx_cfg_engine(fd, cfg, e)
 			if (gem_class_can_store_dword(fd, e->class))
 				engines[nengine++] = e->flags;
 	} else {
@@ -396,16 +401,16 @@ static void smoketest(int fd, unsigned ring, unsigned timeout)
 	scratch = gem_create(fd, 4096);
 	igt_fork(child, ncpus) {
 		unsigned long count = 0;
-		uint32_t ctx;
+		intel_ctx_t *ctx;
 
 		hars_petruska_f54_1_random_perturb(child);
 
-		ctx = gem_context_clone_with_engines(fd, 0);
+		ctx = intel_ctx_create(fd, cfg);
 		igt_until_timeout(timeout) {
 			int prio;
 
 			prio = hars_petruska_f54_1_random_unsafe_max(MAX_PRIO - MIN_PRIO) + MIN_PRIO;
-			gem_context_set_priority(fd, ctx, prio);
+			gem_context_set_priority(fd, ctx->id, prio);
 
 			engine = engines[hars_petruska_f54_1_random_unsafe_max(nengine)];
 			store_dword(fd, ctx, engine, scratch,
@@ -416,7 +421,7 @@ static void smoketest(int fd, unsigned ring, unsigned timeout)
 					    8*child + 4, count++,
 					    0);
 		}
-		gem_context_destroy(fd, ctx);
+		intel_ctx_destroy(fd, ctx);
 	}
 	igt_waitchildren();
 
@@ -483,7 +488,8 @@ static uint32_t timeslicing_batches(int i915, uint32_t *offset)
         return handle;
 }
 
-static void timeslice(int i915, unsigned int engine)
+static void timeslice(int i915, const intel_ctx_cfg_t *cfg,
+		      unsigned int engine)
 {
 	unsigned int offset = 24 << 20;
 	struct drm_i915_gem_exec_object2 obj = {
@@ -494,6 +500,7 @@ static void timeslice(int i915, unsigned int engine)
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
 	};
+	intel_ctx_t *ctx;
 	uint32_t *result;
 	int out;
 
@@ -517,12 +524,13 @@ static void timeslice(int i915, unsigned int engine)
 
 	/* No coupling between requests; free to timeslice */
 
-	execbuf.rsvd1 = gem_context_clone_with_engines(i915, 0);
+	ctx = intel_ctx_create(i915, cfg);
+	execbuf.rsvd1 = ctx->id;
 	execbuf.rsvd2 >>= 32;
 	execbuf.flags = engine | I915_EXEC_FENCE_OUT;
 	execbuf.batch_start_offset = offset;
 	gem_execbuf_wr(i915, &execbuf);
-	gem_context_destroy(i915, execbuf.rsvd1);
+	intel_ctx_destroy(i915, ctx);
 
 	gem_sync(i915, obj.handle);
 	gem_close(i915, obj.handle);
@@ -576,7 +584,8 @@ static uint32_t timesliceN_batches(int i915, uint32_t offset, int count)
         return handle;
 }
 
-static void timesliceN(int i915, unsigned int engine, int count)
+static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
+		       unsigned int engine, int count)
 {
 	const unsigned int sz = ALIGN((count + 1) * 1024, 4096);
 	unsigned int offset = 24 << 20;
@@ -592,6 +601,7 @@ static void timesliceN(int i915, unsigned int engine, int count)
 	};
 	uint32_t *result =
 		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
+	intel_ctx_t *ctx;
 	int fence[count];
 
 	/*
@@ -608,10 +618,11 @@ static void timesliceN(int i915, unsigned int engine, int count)
 	/* No coupling between requests; free to timeslice */
 
 	for (int i = 0; i < count; i++) {
-		execbuf.rsvd1 = gem_context_clone_with_engines(i915, 0);
+		ctx = intel_ctx_create(i915, cfg);
+		execbuf.rsvd1 = ctx->id;
 		execbuf.batch_start_offset = (i + 1) * 1024;;
 		gem_execbuf_wr(i915, &execbuf);
-		gem_context_destroy(i915, execbuf.rsvd1);
+		intel_ctx_destroy(i915, ctx);
 
 		fence[i] = execbuf.rsvd2 >> 32;
 	}
@@ -629,31 +640,32 @@ static void timesliceN(int i915, unsigned int engine, int count)
 	munmap(result, sz);
 }
 
-static void lateslice(int i915, unsigned int engine, unsigned long flags)
+static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
+		      unsigned int engine, unsigned long flags)
 {
+	intel_ctx_t *ctx;
 	igt_spin_t *spin[3];
-	uint32_t ctx;
 
 	igt_require(gem_scheduler_has_semaphores(i915));
 	igt_require(gem_scheduler_has_preemption(i915));
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
-	ctx = gem_context_create(i915);
-	spin[0] = igt_spin_new(i915, .ctx_id = ctx, .engine = engine,
+	ctx = intel_ctx_create(i915, cfg);
+	spin[0] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT |
 					 flags));
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 
 	igt_spin_busywait_until_started(spin[0]);
 
-	ctx = gem_context_create(i915);
-	spin[1] = igt_spin_new(i915, .ctx_id = ctx, .engine = engine,
+	ctx = intel_ctx_create(i915, cfg);
+	spin[1] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
 			       .fence = spin[0]->out_fence,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_IN |
 					 flags));
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 
 	usleep(5000); /* give some time for the new spinner to be scheduled */
 
@@ -664,10 +676,10 @@ static void lateslice(int i915, unsigned int engine, unsigned long flags)
 	 * third spinner we then expect timeslicing to be real enabled.
 	 */
 
-	ctx = gem_context_create(i915);
-	spin[2] = igt_spin_new(i915, .ctx_id = ctx, .engine = engine,
+	ctx = intel_ctx_create(i915, cfg);
+	spin[2] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
 			       .flags = IGT_SPIN_POLL_RUN | flags);
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 
 	igt_spin_busywait_until_started(spin[2]);
 
@@ -689,7 +701,7 @@ static void lateslice(int i915, unsigned int engine, unsigned long flags)
 }
 
 static void cancel_spinner(int i915,
-			   uint32_t ctx, unsigned int engine,
+			   intel_ctx_t *ctx, unsigned int engine,
 			   igt_spin_t *spin)
 {
 	struct drm_i915_gem_exec_object2 obj = {
@@ -699,7 +711,7 @@ static void cancel_spinner(int i915,
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
 		.flags = engine | I915_EXEC_FENCE_SUBMIT,
-		.rsvd1 = ctx, /* same vm */
+		.rsvd1 = ctx->id, /* same vm */
 		.rsvd2 = spin->out_fence,
 	};
 	uint32_t *map, *cs;
@@ -720,21 +732,18 @@ static void cancel_spinner(int i915,
 	gem_close(i915, obj.handle);
 }
 
-static void submit_slice(int i915,
+static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 			 const struct intel_execution_engine2 *e,
 			 unsigned int flags)
 #define EARLY_SUBMIT 0x1
 #define LATE_SUBMIT 0x2
 #define USERPTR 0x4
 {
-	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines , 1) = {};
 	const struct intel_execution_engine2 *cancel;
-	struct drm_i915_gem_context_param param = {
-		.ctx_id = gem_context_create(i915),
-		.param = I915_CONTEXT_PARAM_ENGINES,
-		.value = to_user_pointer(&engines),
-		.size = sizeof(engines),
+	intel_ctx_cfg_t engine_cfg = {
+		.num_engines = 1,
 	};
+	intel_ctx_t *ctx;
 
 	/*
 	 * When using a submit fence, we do not want to block concurrent work,
@@ -745,7 +754,7 @@ static void submit_slice(int i915,
 	igt_require(gem_scheduler_has_preemption(i915));
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
-	__for_each_physical_engine(i915, cancel) {
+	for_each_ctx_cfg_engine(i915, cfg, cancel) {
 		igt_spin_t *bg, *spin;
 		int timeline = -1;
 		int fence = -1;
@@ -762,10 +771,10 @@ static void submit_slice(int i915,
 			fence = sw_sync_timeline_create_fence(timeline, 1);
 		}
 
-		engines.engines[0].engine_class = e->class;
-		engines.engines[0].engine_instance = e->instance;
-		gem_context_set_param(i915, &param);
-		spin = igt_spin_new(i915, .ctx_id = param.ctx_id,
+		engine_cfg.engines[0].engine_class = e->class;
+		engine_cfg.engines[0].engine_instance = e->instance;
+		ctx = intel_ctx_create(i915, &engine_cfg);
+		spin = igt_spin_new(i915, .ctx = ctx,
 				    .fence = fence,
 				    .flags =
 				    IGT_SPIN_POLL_RUN |
@@ -778,10 +787,13 @@ static void submit_slice(int i915,
 		if (flags & EARLY_SUBMIT)
 			igt_spin_busywait_until_started(spin);
 
-		engines.engines[0].engine_class = cancel->class;
-		engines.engines[0].engine_instance = cancel->instance;
-		gem_context_set_param(i915, &param);
-		cancel_spinner(i915, param.ctx_id, 0, spin);
+		intel_ctx_destroy(i915, ctx);
+
+		engine_cfg.engines[0].engine_class = cancel->class;
+		engine_cfg.engines[0].engine_instance = cancel->instance;
+		ctx = intel_ctx_create(i915, &engine_cfg);
+
+		cancel_spinner(i915, ctx, 0, spin);
 
 		if (timeline != -1)
 			close(timeline);
@@ -789,9 +801,9 @@ static void submit_slice(int i915,
 		gem_sync(i915, spin->handle);
 		igt_spin_free(i915, spin);
 		igt_spin_free(i915, bg);
-	}
 
-	gem_context_destroy(i915, param.ctx_id);
+		intel_ctx_destroy(i915, ctx);
+	}
 }
 
 static uint32_t __batch_create(int i915, uint32_t offset)
@@ -810,7 +822,8 @@ static uint32_t batch_create(int i915)
 	return __batch_create(i915, 0);
 }
 
-static void semaphore_userlock(int i915, unsigned long flags)
+static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
+			       unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_exec_object2 obj = {
@@ -818,6 +831,7 @@ static void semaphore_userlock(int i915, unsigned long flags)
 	};
 	igt_spin_t *spin = NULL;
 	uint32_t scratch;
+	intel_ctx_t *tmp_ctx;
 
 	igt_require(gem_scheduler_has_semaphores(i915));
 
@@ -829,9 +843,10 @@ static void semaphore_userlock(int i915, unsigned long flags)
 	 */
 
 	scratch = gem_create(i915, 4096);
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (!spin) {
 			spin = igt_spin_new(i915,
+					    .ctx = ctx,
 					    .dependency = scratch,
 					    .engine = e->flags,
 					    .flags = flags);
@@ -854,13 +869,13 @@ static void semaphore_userlock(int i915, unsigned long flags)
 	 * on a HW semaphore) but it should not prevent any real work from
 	 * taking precedence.
 	 */
-	scratch = gem_context_clone_with_engines(i915, 0);
-	__for_each_physical_engine(i915, e) {
+	tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
+	for_each_ctx_engine(i915, ctx, e) {
 		struct drm_i915_gem_execbuffer2 execbuf = {
 			.buffers_ptr = to_user_pointer(&obj),
 			.buffer_count = 1,
 			.flags = e->flags,
-			.rsvd1 = scratch,
+			.rsvd1 = tmp_ctx->id,
 		};
 
 		if (e->flags == (spin->execbuf.flags & I915_EXEC_RING_MASK))
@@ -868,14 +883,15 @@ static void semaphore_userlock(int i915, unsigned long flags)
 
 		gem_execbuf(i915, &execbuf);
 	}
-	gem_context_destroy(i915, scratch);
+	intel_ctx_destroy(i915, tmp_ctx);
 	gem_sync(i915, obj.handle); /* to hang unless we can preempt */
 	gem_close(i915, obj.handle);
 
 	igt_spin_free(i915, spin);
 }
 
-static void semaphore_codependency(int i915, unsigned long flags)
+static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
+				   unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	struct {
@@ -894,8 +910,8 @@ static void semaphore_codependency(int i915, unsigned long flags)
 	 */
 
 	i = 0;
-	__for_each_physical_engine(i915, e) {
-		uint32_t ctx;
+	for_each_ctx_engine(i915, ctx, e) {
+		intel_ctx_t *tmp_ctx;
 
 		if (!e->flags) {
 			igt_require(gem_class_can_store_dword(i915, e->class));
@@ -905,11 +921,11 @@ static void semaphore_codependency(int i915, unsigned long flags)
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		ctx = gem_context_clone_with_engines(i915, 0);
+		tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
 
 		task[i].xcs =
 			__igt_spin_new(i915,
-				       .ctx_id = ctx,
+				       .ctx = tmp_ctx,
 				       .engine = e->flags,
 				       .flags = IGT_SPIN_POLL_RUN | flags);
 		igt_spin_busywait_until_started(task[i].xcs);
@@ -917,11 +933,11 @@ static void semaphore_codependency(int i915, unsigned long flags)
 		/* Common rcs tasks will be queued in FIFO */
 		task[i].rcs =
 			__igt_spin_new(i915,
-				       .ctx_id = ctx,
+				       .ctx = tmp_ctx,
 				       .engine = 0,
 				       .dependency = task[i].xcs->handle);
 
-		gem_context_destroy(i915, ctx);
+		intel_ctx_destroy(i915, tmp_ctx);
 
 		if (++i == ARRAY_SIZE(task))
 			break;
@@ -944,11 +960,13 @@ static void semaphore_codependency(int i915, unsigned long flags)
 	}
 }
 
-static void semaphore_resolve(int i915, unsigned long flags)
+static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
+			      unsigned long flags)
 {
 	const struct intel_execution_engine2 *e;
 	const uint32_t SEMAPHORE_ADDR = 64 << 10;
-	uint32_t semaphore, outer, inner, *sema;
+	uint32_t semaphore, *sema;
+	intel_ctx_t *outer, *inner;
 
 	/*
 	 * Userspace may submit batches that wait upon unresolved
@@ -962,13 +980,13 @@ static void semaphore_resolve(int i915, unsigned long flags)
 	igt_require(gem_scheduler_has_preemption(i915));
 	igt_require(intel_get_drm_devid(i915) >= 8); /* for MI_SEMAPHORE_WAIT */
 
-	outer = gem_context_clone_with_engines(i915, 0);
-	inner = gem_context_clone_with_engines(i915, 0);
+	outer = intel_ctx_create(i915, cfg);
+	inner = intel_ctx_create(i915, cfg);
 
 	semaphore = gem_create(i915, 4096);
 	sema = gem_mmap__wc(i915, semaphore, 0, 4096, PROT_WRITE);
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_cfg_engine(i915, cfg, e) {
 		struct drm_i915_gem_exec_object2 obj[3];
 		struct drm_i915_gem_execbuffer2 eb;
 		uint32_t handle, cancel;
@@ -1023,7 +1041,7 @@ static void semaphore_resolve(int i915, unsigned long flags)
 		obj[2].handle = handle;
 		eb.buffer_count = 3;
 		eb.buffers_ptr = to_user_pointer(obj);
-		eb.rsvd1 = outer;
+		eb.rsvd1 = outer->id;
 		gem_execbuf(i915, &eb);
 
 		/* Then add the GPU hang intermediatory */
@@ -1054,7 +1072,7 @@ static void semaphore_resolve(int i915, unsigned long flags)
 		obj[0].flags = EXEC_OBJECT_PINNED;
 		obj[1].handle = cancel;
 		eb.buffer_count = 2;
-		eb.rsvd1 = inner;
+		eb.rsvd1 = inner->id;
 		gem_execbuf(i915, &eb);
 		gem_wait(i915, cancel, &poke); /* match sync's WAIT_PRIORITY */
 		gem_close(i915, cancel);
@@ -1069,22 +1087,23 @@ static void semaphore_resolve(int i915, unsigned long flags)
 	munmap(sema, 4096);
 	gem_close(i915, semaphore);
 
-	gem_context_destroy(i915, inner);
-	gem_context_destroy(i915, outer);
+	intel_ctx_destroy(i915, inner);
+	intel_ctx_destroy(i915, outer);
 }
 
-static void semaphore_noskip(int i915, unsigned long flags)
+static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
+			     unsigned long flags)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *outer, *inner;
-	uint32_t ctx;
+	intel_ctx_t *ctx;
 
 	igt_require(gen >= 6); /* MI_STORE_DWORD_IMM convenience */
 
-	ctx = gem_context_clone_with_engines(i915, 0);
+	ctx = intel_ctx_create(i915, cfg);
 
-	__for_each_physical_engine(i915, outer) {
-	__for_each_physical_engine(i915, inner) {
+	for_each_ctx_engine(i915, ctx, outer) {
+	for_each_ctx_engine(i915, ctx, inner) {
 		struct drm_i915_gem_exec_object2 obj[3];
 		struct drm_i915_gem_execbuffer2 eb;
 		uint32_t handle, *cs, *map;
@@ -1094,9 +1113,11 @@ static void semaphore_noskip(int i915, unsigned long flags)
 		    !gem_class_can_store_dword(i915, inner->class))
 			continue;
 
-		chain = __igt_spin_new(i915, .engine = outer->flags, .flags = flags);
+		chain = __igt_spin_new(i915, .ctx = ctx,
+				       .engine = outer->flags, .flags = flags);
 
-		spin = __igt_spin_new(i915, .engine = inner->flags, .flags = flags);
+		spin = __igt_spin_new(i915, .ctx = ctx,
+				      .engine = inner->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
 		igt_spin_reset(spin);
@@ -1129,7 +1150,7 @@ static void semaphore_noskip(int i915, unsigned long flags)
 		memset(&eb, 0, sizeof(eb));
 		eb.buffer_count = 3;
 		eb.buffers_ptr = to_user_pointer(obj);
-		eb.rsvd1 = ctx;
+		eb.rsvd1 = ctx->id;
 		eb.flags = inner->flags;
 		gem_execbuf(i915, &eb);
 
@@ -1153,11 +1174,12 @@ static void semaphore_noskip(int i915, unsigned long flags)
 	}
 	}
 
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 }
 
 static void
-noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
+noreorder(int i915, const intel_ctx_cfg_t *cfg,
+	  unsigned int engine, int prio, unsigned int flags)
 #define CORKED 0x1
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
@@ -1169,24 +1191,24 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
 		.flags = engine,
-		.rsvd1 = gem_context_clone_with_engines(i915, 0),
 	};
+	intel_ctx_cfg_t vm_cfg = *cfg;
+	intel_ctx_t *ctx;
 	IGT_CORK_FENCE(cork);
 	uint32_t *map, *cs;
 	igt_spin_t *slice;
 	igt_spin_t *spin;
 	int fence = -1;
 	uint64_t addr;
-	uint32_t ctx;
 
 	if (flags & CORKED)
 		fence = igt_cork_plug(&cork, i915);
 
-	ctx = gem_context_clone(i915, execbuf.rsvd1,
-			      I915_CONTEXT_CLONE_ENGINES |
-			      I915_CONTEXT_CLONE_VM,
-			      0);
-	spin = igt_spin_new(i915, ctx,
+	vm_cfg.vm = gem_vm_create(i915);
+
+	ctx = intel_ctx_create(i915, &vm_cfg);
+
+	spin = igt_spin_new(i915, .ctx = ctx,
 			    .engine = engine,
 			    .fence = fence,
 			    .flags = IGT_SPIN_FENCE_OUT | IGT_SPIN_FENCE_IN);
@@ -1195,7 +1217,7 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 	/* Loop around the engines, creating a chain of fences */
 	spin->execbuf.rsvd2 = (uint64_t)dup(spin->out_fence) << 32;
 	spin->execbuf.rsvd2 |= 0xffffffff;
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (e->flags == engine)
 			continue;
 
@@ -1208,7 +1230,7 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 	}
 	close(spin->execbuf.rsvd2);
 	spin->execbuf.rsvd2 >>= 32;
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 
 	/*
 	 * Wait upon the fence chain, and try to terminate the spinner.
@@ -1241,11 +1263,13 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 	execbuf.rsvd2 = spin->execbuf.rsvd2;
 	execbuf.flags |= I915_EXEC_FENCE_IN;
 
-	gem_context_set_priority(i915, execbuf.rsvd1, prio);
+	ctx = intel_ctx_create(i915, &vm_cfg);
+	gem_context_set_priority(i915, ctx->id, prio);
+	execbuf.rsvd1 = ctx->id;
 
 	gem_execbuf(i915, &execbuf);
 	gem_close(i915, obj.handle);
-	gem_context_destroy(i915, execbuf.rsvd1);
+	intel_ctx_destroy(i915, ctx);
 	if (cork.fd != -1)
 		igt_cork_unplug(&cork);
 
@@ -1258,7 +1282,9 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 	 *
 	 * Without timeslices, fallback to waiting a second.
 	 */
+	ctx = intel_ctx_create(i915, &vm_cfg);
 	slice = igt_spin_new(i915,
+			    .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_POLL_RUN);
 	igt_until_timeout(1) {
@@ -1266,6 +1292,7 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 			break;
 	}
 	igt_spin_free(i915, slice);
+	intel_ctx_destroy(i915, ctx);
 
 	/* Check the store did not run before the spinner */
 	igt_assert_eq(sync_fence_status(spin->out_fence), 0);
@@ -1273,20 +1300,21 @@ noreorder(int i915, unsigned int engine, int prio, unsigned int flags)
 	gem_quiescent_gpu(i915);
 }
 
-static void reorder(int fd, unsigned ring, unsigned flags)
+static void reorder(int fd, const intel_ctx_cfg_t *cfg,
+		    unsigned ring, unsigned flags)
 #define EQUAL 1
 {
 	IGT_CORK_FENCE(cork);
 	uint32_t scratch;
 	uint32_t result;
-	uint32_t ctx[2];
+	intel_ctx_t *ctx[2];
 	int fence;
 
-	ctx[LO] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
+	ctx[LO] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
 
-	ctx[HI] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[HI], flags & EQUAL ? MIN_PRIO : 0);
+	ctx[HI] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[HI]->id, flags & EQUAL ? MIN_PRIO : 0);
 
 	scratch = gem_create(fd, 4096);
 	fence = igt_cork_plug(&cork, fd);
@@ -1294,40 +1322,40 @@ static void reorder(int fd, unsigned ring, unsigned flags)
 	/* We expect the high priority context to be executed first, and
 	 * so the final result will be value from the low priority context.
 	 */
-	store_dword_fenced(fd, ctx[LO], ring, scratch, 0, ctx[LO], fence, 0);
-	store_dword_fenced(fd, ctx[HI], ring, scratch, 0, ctx[HI], fence, 0);
+	store_dword_fenced(fd, ctx[LO], ring, scratch, 0, ctx[LO]->id, fence, 0);
+	store_dword_fenced(fd, ctx[HI], ring, scratch, 0, ctx[HI]->id, fence, 0);
 
-	unplug_show_queue(fd, &cork, ring);
+	unplug_show_queue(fd, &cork, cfg, ring);
 	close(fence);
 
-	gem_context_destroy(fd, ctx[LO]);
-	gem_context_destroy(fd, ctx[HI]);
-
 	result =  __sync_read_u32(fd, scratch, 0);
 	gem_close(fd, scratch);
 
 	if (flags & EQUAL) /* equal priority, result will be fifo */
-		igt_assert_eq_u32(result, ctx[HI]);
+		igt_assert_eq_u32(result, ctx[HI]->id);
 	else
-		igt_assert_eq_u32(result, ctx[LO]);
+		igt_assert_eq_u32(result, ctx[LO]->id);
+
+	intel_ctx_destroy(fd, ctx[LO]);
+	intel_ctx_destroy(fd, ctx[HI]);
 }
 
-static void promotion(int fd, unsigned ring)
+static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 {
 	IGT_CORK_FENCE(cork);
 	uint32_t result, dep;
 	uint32_t result_read, dep_read;
-	uint32_t ctx[3];
+	intel_ctx_t *ctx[3];
 	int fence;
 
-	ctx[LO] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
+	ctx[LO] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
 
-	ctx[HI] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[HI], 0);
+	ctx[HI] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[HI]->id, 0);
 
-	ctx[NOISE] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[NOISE], MIN_PRIO/2);
+	ctx[NOISE] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[NOISE]->id, MIN_PRIO/2);
 
 	result = gem_create(fd, 4096);
 	dep = gem_create(fd, 4096);
@@ -1339,30 +1367,30 @@ static void promotion(int fd, unsigned ring)
 	 * fifo would be NOISE, LO, HI.
 	 * strict priority would be  HI, NOISE, LO
 	 */
-	store_dword_fenced(fd, ctx[NOISE], ring, result, 0, ctx[NOISE], fence, 0);
-	store_dword_fenced(fd, ctx[LO], ring, result, 0, ctx[LO], fence, 0);
+	store_dword_fenced(fd, ctx[NOISE], ring, result, 0, ctx[NOISE]->id, fence, 0);
+	store_dword_fenced(fd, ctx[LO], ring, result, 0, ctx[LO]->id, fence, 0);
 
 	/* link LO <-> HI via a dependency on another buffer */
-	store_dword(fd, ctx[LO], ring, dep, 0, ctx[LO], I915_GEM_DOMAIN_INSTRUCTION);
-	store_dword(fd, ctx[HI], ring, dep, 0, ctx[HI], 0);
+	store_dword(fd, ctx[LO], ring, dep, 0, ctx[LO]->id, I915_GEM_DOMAIN_INSTRUCTION);
+	store_dword(fd, ctx[HI], ring, dep, 0, ctx[HI]->id, 0);
 
-	store_dword(fd, ctx[HI], ring, result, 0, ctx[HI], 0);
+	store_dword(fd, ctx[HI], ring, result, 0, ctx[HI]->id, 0);
 
-	unplug_show_queue(fd, &cork, ring);
+	unplug_show_queue(fd, &cork, cfg, ring);
 	close(fence);
 
-	gem_context_destroy(fd, ctx[NOISE]);
-	gem_context_destroy(fd, ctx[LO]);
-	gem_context_destroy(fd, ctx[HI]);
-
 	dep_read = __sync_read_u32(fd, dep, 0);
 	gem_close(fd, dep);
 
 	result_read = __sync_read_u32(fd, result, 0);
 	gem_close(fd, result);
 
-	igt_assert_eq_u32(dep_read, ctx[HI]);
-	igt_assert_eq_u32(result_read, ctx[NOISE]);
+	igt_assert_eq_u32(dep_read, ctx[HI]->id);
+	igt_assert_eq_u32(result_read, ctx[NOISE]->id);
+
+	intel_ctx_destroy(fd, ctx[NOISE]);
+	intel_ctx_destroy(fd, ctx[LO]);
+	intel_ctx_destroy(fd, ctx[HI]);
 }
 
 static bool set_preempt_timeout(int i915,
@@ -1376,34 +1404,35 @@ static bool set_preempt_timeout(int i915,
 
 #define NEW_CTX (0x1 << 0)
 #define HANG_LP (0x1 << 1)
-static void preempt(int fd, const struct intel_execution_engine2 *e, unsigned flags)
+static void preempt(int fd, const intel_ctx_cfg_t *cfg,
+		    const struct intel_execution_engine2 *e, unsigned flags)
 {
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read;
 	igt_spin_t *spin[MAX_ELSP_QLEN];
-	uint32_t ctx[2];
+	intel_ctx_t *ctx[2];
 	igt_hang_t hang;
 
 	/* Set a fast timeout to speed the test up (if available) */
 	set_preempt_timeout(fd, e, 150);
 
-	ctx[LO] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
+	ctx[LO] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
 
-	ctx[HI] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[HI], MAX_PRIO);
+	ctx[HI] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
 
 	if (flags & HANG_LP)
-		hang = igt_hang_ctx(fd, ctx[LO], e->flags, 0);
+		hang = igt_hang_ctx(fd, ctx[LO]->id, e->flags, 0);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		if (flags & NEW_CTX) {
-			gem_context_destroy(fd, ctx[LO]);
-			ctx[LO] = gem_context_clone_with_engines(fd, 0);
-			gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
+			intel_ctx_destroy(fd, ctx[LO]);
+			ctx[LO] = intel_ctx_create(fd, cfg);
+			gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
 		}
 		spin[n] = __igt_spin_new(fd,
-					 .ctx_id = ctx[LO],
+					 .ctx = ctx[LO],
 					 .engine = e->flags,
 					 .flags = flags & USERPTR ? IGT_SPIN_USERPTR : 0);
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
@@ -1421,8 +1450,8 @@ static void preempt(int fd, const struct intel_execution_engine2 *e, unsigned fl
 	if (flags & HANG_LP)
 		igt_post_hang_ring(fd, hang);
 
-	gem_context_destroy(fd, ctx[LO]);
-	gem_context_destroy(fd, ctx[HI]);
+	intel_ctx_destroy(fd, ctx[LO]);
+	intel_ctx_destroy(fd, ctx[HI]);
 
 	gem_close(fd, result);
 }
@@ -1430,22 +1459,23 @@ static void preempt(int fd, const struct intel_execution_engine2 *e, unsigned fl
 #define CHAIN 0x1
 #define CONTEXTS 0x2
 
-static igt_spin_t *__noise(int fd, uint32_t ctx, int prio, igt_spin_t *spin)
+static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
+			   int prio, igt_spin_t *spin)
 {
 	const struct intel_execution_engine2 *e;
 
-	gem_context_set_priority(fd, ctx, prio);
+	gem_context_set_priority(fd, ctx->id, prio);
 
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
-					      .ctx_id = ctx,
+					      .ctx = ctx,
 					      .engine = e->flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
 				.buffer_count = 1,
 				.buffers_ptr = to_user_pointer(&spin->obj[IGT_SPIN_BATCH]),
-				.rsvd1 = ctx,
+				.rsvd1 = ctx->id,
 				.flags = e->flags,
 			};
 			gem_execbuf(fd, &eb);
@@ -1456,7 +1486,7 @@ static igt_spin_t *__noise(int fd, uint32_t ctx, int prio, igt_spin_t *spin)
 }
 
 static void __preempt_other(int fd,
-			    uint32_t *ctx,
+			    intel_ctx_t **ctx,
 			    unsigned int target, unsigned int primary,
 			    unsigned flags)
 {
@@ -1472,7 +1502,7 @@ static void __preempt_other(int fd,
 	n++;
 
 	if (flags & CHAIN) {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx[LO], e) {
 			store_dword(fd, ctx[LO], e->flags,
 				    result, (n + 1)*sizeof(uint32_t), n + 1,
 				    I915_GEM_DOMAIN_RENDER);
@@ -1496,11 +1526,12 @@ static void __preempt_other(int fd,
 	gem_close(fd, result);
 }
 
-static void preempt_other(int fd, unsigned ring, unsigned int flags)
+static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
+			  unsigned ring, unsigned int flags)
 {
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin = NULL;
-	uint32_t ctx[3];
+	intel_ctx_t *ctx[3];
 
 	/* On each engine, insert
 	 * [NOISE] spinner,
@@ -1512,16 +1543,16 @@ static void preempt_other(int fd, unsigned ring, unsigned int flags)
 	 * can cross engines.
 	 */
 
-	ctx[LO] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
+	ctx[LO] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
 
-	ctx[NOISE] = gem_context_clone_with_engines(fd, 0);
+	ctx[NOISE] = intel_ctx_create(fd, cfg);
 	spin = __noise(fd, ctx[NOISE], 0, NULL);
 
-	ctx[HI] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[HI], MAX_PRIO);
+	ctx[HI] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
 
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_cfg_engine(fd, cfg, e) {
 		igt_debug("Primary engine: %s\n", e->name);
 		__preempt_other(fd, ctx, ring, e->flags, flags);
 
@@ -1530,12 +1561,12 @@ static void preempt_other(int fd, unsigned ring, unsigned int flags)
 	igt_assert(gem_bo_busy(fd, spin->handle));
 	igt_spin_free(fd, spin);
 
-	gem_context_destroy(fd, ctx[LO]);
-	gem_context_destroy(fd, ctx[NOISE]);
-	gem_context_destroy(fd, ctx[HI]);
+	intel_ctx_destroy(fd, ctx[LO]);
+	intel_ctx_destroy(fd, ctx[NOISE]);
+	intel_ctx_destroy(fd, ctx[HI]);
 }
 
-static void __preempt_queue(int fd,
+static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 			    unsigned target, unsigned primary,
 			    unsigned depth, unsigned flags)
 {
@@ -1543,33 +1574,33 @@ static void __preempt_queue(int fd,
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read[4096 / sizeof(uint32_t)];
 	igt_spin_t *above = NULL, *below = NULL;
-	uint32_t ctx[3] = {
-		gem_context_clone_with_engines(fd, 0),
-		gem_context_clone_with_engines(fd, 0),
-		gem_context_clone_with_engines(fd, 0),
+	intel_ctx_t *ctx[3] = {
+		intel_ctx_create(fd, cfg),
+		intel_ctx_create(fd, cfg),
+		intel_ctx_create(fd, cfg),
 	};
 	int prio = MAX_PRIO;
 	unsigned int n, i;
 
 	for (n = 0; n < depth; n++) {
 		if (flags & CONTEXTS) {
-			gem_context_destroy(fd, ctx[NOISE]);
-			ctx[NOISE] = gem_context_clone_with_engines(fd, 0);
+			intel_ctx_destroy(fd, ctx[NOISE]);
+			ctx[NOISE] = intel_ctx_create(fd, cfg);
 		}
 		above = __noise(fd, ctx[NOISE], prio--, above);
 	}
 
-	gem_context_set_priority(fd, ctx[HI], prio--);
+	gem_context_set_priority(fd, ctx[HI]->id, prio--);
 
 	for (; n < MAX_ELSP_QLEN; n++) {
 		if (flags & CONTEXTS) {
-			gem_context_destroy(fd, ctx[NOISE]);
-			ctx[NOISE] = gem_context_clone_with_engines(fd, 0);
+			intel_ctx_destroy(fd, ctx[NOISE]);
+			ctx[NOISE] = intel_ctx_create(fd, cfg);
 		}
 		below = __noise(fd, ctx[NOISE], prio--, below);
 	}
 
-	gem_context_set_priority(fd, ctx[LO], prio--);
+	gem_context_set_priority(fd, ctx[LO]->id, prio--);
 
 	n = 0;
 	store_dword(fd, ctx[LO], primary,
@@ -1578,7 +1609,7 @@ static void __preempt_queue(int fd,
 	n++;
 
 	if (flags & CHAIN) {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx[LO], e) {
 			store_dword(fd, ctx[LO], e->flags,
 				    result, (n + 1)*sizeof(uint32_t), n + 1,
 				    I915_GEM_DOMAIN_RENDER);
@@ -1610,25 +1641,26 @@ static void __preempt_queue(int fd,
 		igt_spin_free(fd, below);
 	}
 
-	gem_context_destroy(fd, ctx[LO]);
-	gem_context_destroy(fd, ctx[NOISE]);
-	gem_context_destroy(fd, ctx[HI]);
+	intel_ctx_destroy(fd, ctx[LO]);
+	intel_ctx_destroy(fd, ctx[NOISE]);
+	intel_ctx_destroy(fd, ctx[HI]);
 
 	gem_close(fd, result);
 }
 
-static void preempt_queue(int fd, unsigned ring, unsigned int flags)
+static void preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
+			  unsigned ring, unsigned int flags)
 {
 	const struct intel_execution_engine2 *e;
 
 	for (unsigned depth = 1; depth <= MAX_ELSP_QLEN; depth *= 4)
-		__preempt_queue(fd, ring, ring, depth, flags);
+		__preempt_queue(fd, cfg, ring, ring, depth, flags);
 
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_cfg_engine(fd, cfg, e) {
 		if (ring == e->flags)
 			continue;
 
-		__preempt_queue(fd, ring, e->flags, MAX_ELSP_QLEN, flags);
+		__preempt_queue(fd, cfg, ring, e->flags, MAX_ELSP_QLEN, flags);
 	}
 }
 
@@ -1645,19 +1677,16 @@ static void preempt_engines(int i915,
 			    const struct intel_execution_engine2 *e,
 			    unsigned int flags)
 {
-	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines , I915_EXEC_RING_MASK + 1);
-	struct drm_i915_gem_context_param param = {
-		.ctx_id = gem_context_create(i915),
-		.param = I915_CONTEXT_PARAM_ENGINES,
-		.value = to_user_pointer(&engines),
-		.size = sizeof(engines),
-	};
 	struct pnode {
 		struct igt_list_head spinners;
 		struct igt_list_head link;
-	} pnode[I915_EXEC_RING_MASK + 1], *p;
+	} pnode[GEM_MAX_ENGINES], *p;
+	struct intel_ctx_cfg cfg = {
+		.num_engines = GEM_MAX_ENGINES,
+	};
 	IGT_LIST_HEAD(plist);
 	igt_spin_t *spin, *sn;
+	intel_ctx_t *ctx;
 
 	/*
 	 * A quick test that each engine within a context is an independent
@@ -1666,19 +1695,19 @@ static void preempt_engines(int i915,
 
 	igt_require(has_context_engines(i915));
 
-	for (int n = 0; n <= I915_EXEC_RING_MASK; n++) {
-		engines.engines[n].engine_class = e->class;
-		engines.engines[n].engine_instance = e->instance;
+	for (int n = 0; n < GEM_MAX_ENGINES; n++) {
+		cfg.engines[n].engine_class = e->class;
+		cfg.engines[n].engine_instance = e->instance;
 		IGT_INIT_LIST_HEAD(&pnode[n].spinners);
 		igt_list_add(&pnode[n].link, &plist);
 	}
-	gem_context_set_param(i915, &param);
+	ctx = intel_ctx_create(i915, &cfg);
 
-	for (int n = -I915_EXEC_RING_MASK; n <= I915_EXEC_RING_MASK; n++) {
+	for (int n = -(GEM_MAX_ENGINES - 1); n < GEM_MAX_ENGINES; n++) {
 		unsigned int engine = n & I915_EXEC_RING_MASK;
 
-		gem_context_set_priority(i915, param.ctx_id, n);
-		spin = igt_spin_new(i915, param.ctx_id, .engine = engine);
+		gem_context_set_priority(i915, ctx->id, n);
+		spin = igt_spin_new(i915, .ctx = ctx, .engine = engine);
 
 		igt_list_move_tail(&spin->link, &pnode[engine].spinners);
 		igt_list_move(&pnode[engine].link, &plist);
@@ -1691,17 +1720,18 @@ static void preempt_engines(int i915,
 			igt_spin_free(i915, spin);
 		}
 	}
-	gem_context_destroy(i915, param.ctx_id);
+	intel_ctx_destroy(i915, ctx);
 }
 
-static void preempt_self(int fd, unsigned ring)
+static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
+			 unsigned ring)
 {
 	const struct intel_execution_engine2 *e;
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read[4096 / sizeof(uint32_t)];
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	unsigned int n, i;
-	uint32_t ctx[3];
+	intel_ctx_t *ctx[3];
 
 	/* On each engine, insert
 	 * [NOISE] spinner,
@@ -1711,21 +1741,21 @@ static void preempt_self(int fd, unsigned ring)
 	 * preempt its own lower priority task on any engine.
 	 */
 
-	ctx[NOISE] = gem_context_clone_with_engines(fd, 0);
-	ctx[HI] = gem_context_clone_with_engines(fd, 0);
+	ctx[NOISE] = intel_ctx_create(fd, cfg);
+	ctx[HI] = intel_ctx_create(fd, cfg);
 
 	n = 0;
-	gem_context_set_priority(fd, ctx[HI], MIN_PRIO);
-	__for_each_physical_engine(fd, e) {
+	gem_context_set_priority(fd, ctx[HI]->id, MIN_PRIO);
+	for_each_ctx_cfg_engine(fd, cfg, e) {
 		spin[n] = __igt_spin_new(fd,
-					 .ctx_id = ctx[NOISE],
+					 .ctx = ctx[NOISE],
 					 .engine = e->flags);
 		store_dword(fd, ctx[HI], e->flags,
 			    result, (n + 1)*sizeof(uint32_t), n + 1,
 			    I915_GEM_DOMAIN_RENDER);
 		n++;
 	}
-	gem_context_set_priority(fd, ctx[HI], MAX_PRIO);
+	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
 	store_dword(fd, ctx[HI], ring,
 		    result, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
@@ -1743,36 +1773,37 @@ static void preempt_self(int fd, unsigned ring)
 	for (i = 0; i <= n; i++)
 		igt_assert_eq_u32(result_read[i], i);
 
-	gem_context_destroy(fd, ctx[NOISE]);
-	gem_context_destroy(fd, ctx[HI]);
+	intel_ctx_destroy(fd, ctx[NOISE]);
+	intel_ctx_destroy(fd, ctx[HI]);
 
 	gem_close(fd, result);
 }
 
-static void preemptive_hang(int fd, const struct intel_execution_engine2 *e)
+static void preemptive_hang(int fd, const intel_ctx_cfg_t *cfg,
+			    const struct intel_execution_engine2 *e)
 {
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	igt_hang_t hang;
-	uint32_t ctx[2];
+	intel_ctx_t *ctx[2];
 
 	/* Set a fast timeout to speed the test up (if available) */
 	set_preempt_timeout(fd, e, 150);
 
-	ctx[HI] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[HI], MAX_PRIO);
+	ctx[HI] = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
-		ctx[LO] = gem_context_clone_with_engines(fd, 0);
-		gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
+		ctx[LO] = intel_ctx_create(fd, cfg);
+		gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
 
 		spin[n] = __igt_spin_new(fd,
-					 .ctx_id = ctx[LO],
+					 .ctx = ctx[LO],
 					 .engine = e->flags);
 
-		gem_context_destroy(fd, ctx[LO]);
+		intel_ctx_destroy(fd, ctx[LO]);
 	}
 
-	hang = igt_hang_ctx(fd, ctx[HI], e->flags, 0);
+	hang = igt_hang_ctx(fd, ctx[HI]->id, e->flags, 0);
 	igt_post_hang_ring(fd, hang);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
@@ -1784,10 +1815,11 @@ static void preemptive_hang(int fd, const struct intel_execution_engine2 *e)
 		igt_spin_free(fd, spin[n]);
 	}
 
-	gem_context_destroy(fd, ctx[HI]);
+	intel_ctx_destroy(fd, ctx[HI]);
 }
 
-static void deep(int fd, unsigned ring)
+static void deep(int fd, const intel_ctx_cfg_t *cfg,
+		 unsigned ring)
 {
 #define XS 8
 	const unsigned int max_req = MAX_PRIO - MIN_PRIO;
@@ -1799,13 +1831,13 @@ static void deep(int fd, unsigned ring)
 	uint32_t result, dep[XS];
 	uint32_t read_buf[size / sizeof(uint32_t)];
 	uint32_t expected = 0;
-	uint32_t *ctx;
+	intel_ctx_t **ctx;
 	int dep_nreq;
 	int n;
 
 	ctx = malloc(sizeof(*ctx) * MAX_CONTEXTS);
 	for (n = 0; n < MAX_CONTEXTS; n++) {
-		ctx[n] = gem_context_clone_with_engines(fd, 0);
+		ctx[n] = intel_ctx_create(fd, cfg);
 	}
 
 	nreq = gem_submission_measure(fd, ring) / (3 * XS) * MAX_CONTEXTS;
@@ -1835,7 +1867,7 @@ static void deep(int fd, unsigned ring)
 		execbuf.buffer_count = XS + 2;
 		execbuf.flags = ring;
 		for (n = 0; n < MAX_CONTEXTS; n++) {
-			execbuf.rsvd1 = ctx[n];
+			execbuf.rsvd1 = ctx[n]->id;
 			gem_execbuf(fd, &execbuf);
 		}
 		gem_close(fd, obj[XS+1].handle);
@@ -1853,7 +1885,7 @@ static void deep(int fd, unsigned ring)
 			.buffers_ptr = to_user_pointer(obj),
 			.buffer_count = 3,
 			.flags = ring | (gen < 6 ? I915_EXEC_SECURE : 0),
-			.rsvd1 = ctx[n % MAX_CONTEXTS],
+			.rsvd1 = ctx[n % MAX_CONTEXTS]->id,
 		};
 		uint32_t batch[16];
 		int i;
@@ -1901,33 +1933,33 @@ static void deep(int fd, unsigned ring)
 	dep_nreq = n;
 
 	for (n = 0; n < nreq && igt_seconds_elapsed(&tv) < 4; n++) {
-		uint32_t context = ctx[n % MAX_CONTEXTS];
-		gem_context_set_priority(fd, context, MAX_PRIO - nreq + n);
+		intel_ctx_t *context = ctx[n % MAX_CONTEXTS];
+		gem_context_set_priority(fd, context->id, MAX_PRIO - nreq + n);
 
+		expected = context->id;
 		for (int m = 0; m < XS; m++) {
-			store_dword_plug(fd, context, ring, result, 4*n, context, dep[m], 0);
-			store_dword(fd, context, ring, result, 4*m, context, I915_GEM_DOMAIN_INSTRUCTION);
+			store_dword_plug(fd, context, ring, result, 4*n, expected, dep[m], 0);
+			store_dword(fd, context, ring, result, 4*m, expected, I915_GEM_DOMAIN_INSTRUCTION);
 		}
-		expected = context;
 	}
 	igt_info("Second deptree: %d requests [%.3fs]\n",
 		 n * XS, 1e-9*igt_nsec_elapsed(&tv));
 
-	unplug_show_queue(fd, &cork, ring);
+	unplug_show_queue(fd, &cork, cfg, ring);
 	gem_close(fd, plug);
 	igt_require(expected); /* too slow */
 
-	for (n = 0; n < MAX_CONTEXTS; n++)
-		gem_context_destroy(fd, ctx[n]);
-
 	for (int m = 0; m < XS; m++) {
 		__sync_read_u32_count(fd, dep[m], read_buf, sizeof(read_buf));
 		gem_close(fd, dep[m]);
 
 		for (n = 0; n < dep_nreq; n++)
-			igt_assert_eq_u32(read_buf[n], ctx[n % MAX_CONTEXTS]);
+			igt_assert_eq_u32(read_buf[n], ctx[n % MAX_CONTEXTS]->id);
 	}
 
+	for (n = 0; n < MAX_CONTEXTS; n++)
+		intel_ctx_destroy(fd, ctx[n]);
+
 	__sync_read_u32_count(fd, result, read_buf, sizeof(read_buf));
 	gem_close(fd, result);
 
@@ -1951,20 +1983,20 @@ static int __execbuf(int fd, struct drm_i915_gem_execbuffer2 *execbuf)
 	return err;
 }
 
-static void wide(int fd, unsigned ring)
+static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 {
 	const unsigned int ring_size = gem_submission_measure(fd, ring);
 	struct timespec tv = {};
 	IGT_CORK_FENCE(cork);
 	uint32_t result;
 	uint32_t result_read[MAX_CONTEXTS];
-	uint32_t *ctx;
+	intel_ctx_t **ctx;
 	unsigned int count;
 	int fence;
 
 	ctx = malloc(sizeof(*ctx)*MAX_CONTEXTS);
 	for (int n = 0; n < MAX_CONTEXTS; n++)
-		ctx[n] = gem_context_clone_with_engines(fd, 0);
+		ctx[n] = intel_ctx_create(fd, cfg);
 
 	result = gem_create(fd, 4*MAX_CONTEXTS);
 
@@ -1975,28 +2007,28 @@ static void wide(int fd, unsigned ring)
 	     igt_seconds_elapsed(&tv) < 5 && count < ring_size;
 	     count++) {
 		for (int n = 0; n < MAX_CONTEXTS; n++) {
-			store_dword_fenced(fd, ctx[n], ring, result, 4*n, ctx[n],
+			store_dword_fenced(fd, ctx[n], ring, result, 4*n, ctx[n]->id,
 					   fence, I915_GEM_DOMAIN_INSTRUCTION);
 		}
 	}
 	igt_info("Submitted %d requests over %d contexts in %.1fms\n",
 		 count, MAX_CONTEXTS, igt_nsec_elapsed(&tv) * 1e-6);
 
-	unplug_show_queue(fd, &cork, ring);
+	unplug_show_queue(fd, &cork, cfg, ring);
 	close(fence);
 
+	__sync_read_u32_count(fd, result, result_read, sizeof(result_read));
 	for (int n = 0; n < MAX_CONTEXTS; n++)
-		gem_context_destroy(fd, ctx[n]);
+		igt_assert_eq_u32(result_read[n], ctx[n]->id);
 
-	__sync_read_u32_count(fd, result, result_read, sizeof(result_read));
 	for (int n = 0; n < MAX_CONTEXTS; n++)
-		igt_assert_eq_u32(result_read[n], ctx[n]);
+		intel_ctx_destroy(fd, ctx[n]);
 
 	gem_close(fd, result);
 	free(ctx);
 }
 
-static void reorder_wide(int fd, unsigned ring)
+static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 {
 	const unsigned int ring_size = gem_submission_measure(fd, ring);
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -2040,9 +2072,11 @@ static void reorder_wide(int fd, unsigned ring)
 	for (int n = 0, x = 1; n < ARRAY_SIZE(priorities); n++, x++) {
 		unsigned int sz = ALIGN(ring_size * 64, 4096);
 		uint32_t *batch;
+		intel_ctx_t *tmp_ctx;
 
-		execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
-		gem_context_set_priority(fd, execbuf.rsvd1, priorities[n]);
+		tmp_ctx = intel_ctx_create(fd, cfg);
+		gem_context_set_priority(fd, tmp_ctx->id, priorities[n]);
+		execbuf.rsvd1 = tmp_ctx->id;
 
 		obj[1].handle = gem_create(fd, sz);
 		batch = gem_mmap__device_coherent(fd, obj[1].handle, 0, sz, PROT_WRITE);
@@ -2082,10 +2116,10 @@ static void reorder_wide(int fd, unsigned ring)
 
 		munmap(batch, sz);
 		gem_close(fd, obj[1].handle);
-		gem_context_destroy(fd, execbuf.rsvd1);
+		intel_ctx_destroy(fd, tmp_ctx);
 	}
 
-	unplug_show_queue(fd, &cork, ring);
+	unplug_show_queue(fd, &cork, cfg, ring);
 	close(fence);
 
 	__sync_read_u32_count(fd, result, result_read, sizeof(result_read));
@@ -2111,17 +2145,18 @@ static void bind_to_cpu(int cpu)
 	igt_assert(sched_setaffinity(getpid(), sizeof(cpu_set_t), &allowed) == 0);
 }
 
-static void test_pi_ringfull(int fd, unsigned int engine, unsigned int flags)
+static void test_pi_ringfull(int fd, const intel_ctx_cfg_t *cfg,
+			     unsigned int engine, unsigned int flags)
 #define SHARED BIT(0)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct sigaction sa = { .sa_handler = alarm_handler };
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[2];
+	intel_ctx_t *ctx, *vip;
 	unsigned int last, count;
 	struct itimerval itv;
 	IGT_CORK_HANDLE(c);
-	uint32_t vip;
 	bool *result;
 
 	/*
@@ -2153,17 +2188,18 @@ static void test_pi_ringfull(int fd, unsigned int engine, unsigned int flags)
 
 	execbuf.buffers_ptr = to_user_pointer(&obj[1]);
 	execbuf.buffer_count = 1;
-	execbuf.flags = engine;
 
 	/* Warm up both (hi/lo) contexts */
-	execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, execbuf.rsvd1, MAX_PRIO);
+	ctx = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx->id, MAX_PRIO);
+	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_sync(fd, obj[1].handle);
-	vip = execbuf.rsvd1;
+	vip = ctx;
 
-	execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, execbuf.rsvd1, MIN_PRIO);
+	ctx = intel_ctx_create(fd, cfg);
+	gem_context_set_priority(fd, ctx->id, MIN_PRIO);
+	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_sync(fd, obj[1].handle);
 
@@ -2213,7 +2249,7 @@ static void test_pi_ringfull(int fd, unsigned int engine, unsigned int flags)
 			gem_write(fd, obj[1].handle, 0, &bbe, sizeof(bbe));
 		}
 
-		result[0] = vip != execbuf.rsvd1;
+		result[0] = vip->id != execbuf.rsvd1;
 
 		igt_debug("Waking parent\n");
 		kill(getppid(), SIGALRM);
@@ -2230,7 +2266,7 @@ static void test_pi_ringfull(int fd, unsigned int engine, unsigned int flags)
 		 * able to add ourselves to *our* ring without interruption.
 		 */
 		igt_debug("HP child executing\n");
-		execbuf.rsvd1 = vip;
+		execbuf.rsvd1 = vip->id;
 		err = __execbuf(fd, &execbuf);
 		igt_debug("HP execbuf returned %d\n", err);
 
@@ -2261,8 +2297,8 @@ static void test_pi_ringfull(int fd, unsigned int engine, unsigned int flags)
 	igt_cork_unplug(&c);
 	igt_waitchildren();
 
-	gem_context_destroy(fd, execbuf.rsvd1);
-	gem_context_destroy(fd, vip);
+	intel_ctx_destroy(fd, ctx);
+	intel_ctx_destroy(fd, vip);
 	gem_close(fd, obj[1].handle);
 	gem_close(fd, obj[0].handle);
 	munmap(result, 4096);
@@ -2277,8 +2313,8 @@ struct ufd_thread {
 	uint32_t batch;
 	uint32_t scratch;
 	uint32_t *page;
+	const intel_ctx_cfg_t *cfg;
 	unsigned int engine;
-	unsigned int flags;
 	int i915;
 
 	pthread_mutex_t mutex;
@@ -2301,11 +2337,12 @@ static void *ufd_thread(void *arg)
 		{ .handle = create_userptr(t->i915, t->page) },
 		{ .handle = t->batch },
 	};
+	intel_ctx_t *ctx = intel_ctx_create(t->i915, t->cfg);
 	struct drm_i915_gem_execbuffer2 eb = {
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = ARRAY_SIZE(obj),
 		.flags = t->engine,
-		.rsvd1 = gem_context_clone_with_engines(t->i915, 0),
+		.rsvd1 = ctx->id,
 	};
 	gem_context_set_priority(t->i915, eb.rsvd1, MIN_PRIO);
 
@@ -2314,13 +2351,15 @@ static void *ufd_thread(void *arg)
 	gem_sync(t->i915, obj[0].handle);
 	gem_close(t->i915, obj[0].handle);
 
-	gem_context_destroy(t->i915, eb.rsvd1);
+	intel_ctx_destroy(t->i915, ctx);
 
 	t->i915 = -1;
 	return NULL;
 }
 
-static void test_pi_userfault(int i915, unsigned int engine)
+static void test_pi_userfault(int i915,
+			      const intel_ctx_cfg_t *cfg,
+			      unsigned int engine)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct uffdio_api api = { .api = UFFD_API };
@@ -2353,6 +2392,7 @@ static void test_pi_userfault(int i915, unsigned int engine)
 		      "userfaultfd API v%lld:%lld\n", UFFD_API, api.api);
 
 	t.i915 = i915;
+	t.cfg = cfg;
 	t.engine = engine;
 
 	t.page = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, 0, 0);
@@ -2383,11 +2423,12 @@ static void test_pi_userfault(int i915, unsigned int engine)
 			.handle = gem_create(i915, 4096),
 		};
 		struct pollfd pfd;
+		intel_ctx_t *ctx = intel_ctx_create(i915, cfg);
 		struct drm_i915_gem_execbuffer2 eb = {
 			.buffers_ptr = to_user_pointer(&obj),
 			.buffer_count = 1,
 			.flags = engine | I915_EXEC_FENCE_OUT,
-			.rsvd1 = gem_context_clone_with_engines(i915, 0),
+			.rsvd1 = ctx->id,
 		};
 		gem_context_set_priority(i915, eb.rsvd1, MAX_PRIO);
 		gem_write(i915, obj.handle, 0, &bbe, sizeof(bbe));
@@ -2401,7 +2442,7 @@ static void test_pi_userfault(int i915, unsigned int engine)
 		igt_assert_eq(sync_fence_status(pfd.fd), 1);
 		close(pfd.fd);
 
-		gem_context_destroy(i915, eb.rsvd1);
+		intel_ctx_destroy(i915, ctx);
 	}
 
 	/* Confirm the low priority context is still waiting */
@@ -2425,15 +2466,10 @@ static void test_pi_userfault(int i915, unsigned int engine)
 
 static void *iova_thread(struct ufd_thread *t, int prio)
 {
-	unsigned int clone;
-	uint32_t ctx;
-
-	clone = I915_CONTEXT_CLONE_ENGINES;
-	if (t->flags & SHARED)
-		clone |= I915_CONTEXT_CLONE_VM;
+	intel_ctx_t *ctx;
 
-	ctx = gem_context_clone(t->i915, 0, clone, 0);
-	gem_context_set_priority(t->i915, ctx, prio);
+	ctx = intel_ctx_create(t->i915, t->cfg);
+	gem_context_set_priority(t->i915, ctx->id, prio);
 
 	store_dword_plug(t->i915, ctx, t->engine,
 			 t->scratch, 0, prio,
@@ -2444,7 +2480,7 @@ static void *iova_thread(struct ufd_thread *t, int prio)
 		pthread_cond_signal(&t->cond);
 	pthread_mutex_unlock(&t->mutex);
 
-	gem_context_destroy(t->i915, ctx);
+	intel_ctx_destroy(t->i915, ctx);
 	return NULL;
 }
 
@@ -2458,8 +2494,10 @@ static void *iova_high(void *arg)
 	return iova_thread(arg, MAX_PRIO);
 }
 
-static void test_pi_iova(int i915, unsigned int engine, unsigned int flags)
+static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
+			 unsigned int engine, unsigned int flags)
 {
+	intel_ctx_cfg_t ufd_cfg = *cfg;
 	struct uffdio_api api = { .api = UFFD_API };
 	struct uffdio_register reg;
 	struct uffdio_copy copy;
@@ -2493,9 +2531,12 @@ static void test_pi_iova(int i915, unsigned int engine, unsigned int flags)
 	igt_require_f(ioctl(ufd, UFFDIO_API, &api) == 0 && api.api == UFFD_API,
 		      "userfaultfd API v%lld:%lld\n", UFFD_API, api.api);
 
+	if (flags & SHARED)
+		ufd_cfg.vm = gem_vm_create(i915);
+
 	t.i915 = i915;
+	t.cfg = &ufd_cfg;
 	t.engine = engine;
-	t.flags = flags;
 
 	t.count = 2;
 	pthread_cond_init(&t.cond, NULL);
@@ -2534,9 +2575,10 @@ static void test_pi_iova(int i915, unsigned int engine, unsigned int flags)
 	 */
 	spin = igt_spin_new(i915, .engine = engine);
 	for (int i = 0; i < MAX_ELSP_QLEN; i++) {
-		spin->execbuf.rsvd1 = create_highest_priority(i915);
+		intel_ctx_t *ctx = create_highest_priority(i915, cfg);
+		spin->execbuf.rsvd1 = ctx->id;
 		gem_execbuf(i915, &spin->execbuf);
-		gem_context_destroy(i915, spin->execbuf.rsvd1);
+		intel_ctx_destroy(i915, ctx);
 	}
 
 	/* Kick off the submission threads */
@@ -2573,10 +2615,14 @@ static void test_pi_iova(int i915, unsigned int engine, unsigned int flags)
 	gem_close(i915, t.scratch);
 
 	munmap(t.page, 4096);
+
+	if (flags & SHARED)
+		gem_vm_destroy(i915, ufd_cfg.vm);
+
 	close(ufd);
 }
 
-static void measure_semaphore_power(int i915)
+static void measure_semaphore_power(int i915, const intel_ctx_t *ctx)
 {
 	const struct intel_execution_engine2 *signaler, *e;
 	struct rapl gpu, pkg;
@@ -2584,7 +2630,7 @@ static void measure_semaphore_power(int i915)
 	igt_require(gpu_power_open(&gpu) == 0);
 	pkg_power_open(&pkg);
 
-	__for_each_physical_engine(i915, signaler) {
+	for_each_ctx_engine(i915, ctx, signaler) {
 		struct {
 			struct power_sample pkg, gpu;
 		} s_spin[2], s_sema[2];
@@ -2596,6 +2642,7 @@ static void measure_semaphore_power(int i915)
 			continue;
 
 		spin = __igt_spin_new(i915,
+				      .ctx = ctx,
 				      .engine = signaler->flags,
 				      .flags = IGT_SPIN_POLL_RUN);
 		gem_wait(i915, spin->handle, &jiffie); /* waitboost */
@@ -2608,13 +2655,14 @@ static void measure_semaphore_power(int i915)
 		rapl_read(&pkg, &s_spin[1].pkg);
 
 		/* Add a waiter to each engine */
-		__for_each_physical_engine(i915, e) {
+		for_each_ctx_engine(i915, ctx, e) {
 			igt_spin_t *sema;
 
 			if (e->flags == signaler->flags)
 				continue;
 
 			sema = __igt_spin_new(i915,
+					      .ctx = ctx,
 					      .engine = e->flags,
 					      .dependency = spin->handle);
 
@@ -2686,8 +2734,7 @@ static int cmp_u32(const void *A, const void *B)
 		return 0;
 }
 
-static uint32_t read_ctx_timestamp(int i915,
-				   uint32_t ctx,
+static uint32_t read_ctx_timestamp(int i915, intel_ctx_t *ctx,
 				   const struct intel_execution_engine2 *e)
 {
 	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
@@ -2703,7 +2750,7 @@ static uint32_t read_ctx_timestamp(int i915,
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
 		.flags = e->flags,
-		.rsvd1 = ctx,
+		.rsvd1 = ctx->id,
 	};
 #define RUNTIME (base + 0x3a8)
 	uint32_t *map, *cs;
@@ -2736,7 +2783,7 @@ static uint32_t read_ctx_timestamp(int i915,
 	return ts;
 }
 
-static void fairslice(int i915,
+static void fairslice(int i915, const intel_ctx_cfg_t *cfg,
 		      const struct intel_execution_engine2 *e,
 		      unsigned long flags,
 		      int duration)
@@ -2744,14 +2791,14 @@ static void fairslice(int i915,
 	const double timeslice_duration_ns = 1e6;
 	igt_spin_t *spin = NULL;
 	double threshold;
-	uint32_t ctx[3];
+	intel_ctx_t *ctx[3];
 	uint32_t ts[3];
 
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
-		ctx[i] = gem_context_clone_with_engines(i915, 0);
+		ctx[i] = intel_ctx_create(i915, cfg);
 		if (spin == NULL) {
 			spin = __igt_spin_new(i915,
-					      .ctx_id = ctx[i],
+					      .ctx = ctx[i],
 					      .engine = e->flags,
 					      .flags = flags);
 		} else {
@@ -2759,7 +2806,7 @@ static void fairslice(int i915,
 				.buffer_count = 1,
 				.buffers_ptr = to_user_pointer(&spin->obj[IGT_SPIN_BATCH]),
 				.flags = e->flags,
-				.rsvd1 = ctx[i],
+				.rsvd1 = ctx[i]->id,
 			};
 			gem_execbuf(i915, &eb);
 		}
@@ -2773,7 +2820,7 @@ static void fairslice(int i915,
 		ts[i] = read_ctx_timestamp(i915, ctx[i], e);
 
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
-		gem_context_destroy(i915, ctx[i]);
+		intel_ctx_destroy(i915, ctx[i]);
 	igt_spin_free(i915, spin);
 
 	/*
@@ -2800,18 +2847,19 @@ static void fairslice(int i915,
 		     1e-6 * threshold * 2);
 }
 
-#define test_each_engine(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		igt_dynamic_f("%s", e->name)
 
-#define test_each_engine_store(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine_store(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		for_each_if(gem_class_can_store_dword(fd, e->class)) \
 		igt_dynamic_f("%s", e->name)
 
 igt_main
 {
 	int fd = -1;
+	const intel_ctx_t *ctx = NULL;
 
 	igt_fixture {
 		igt_require_sw_sync();
@@ -2823,6 +2871,7 @@ igt_main
 		igt_require_gem(fd);
 		gem_require_mmap_wc(fd);
 		gem_require_contexts(fd);
+		ctx = intel_ctx_create_all_physical(fd);
 
 		igt_fork_hang_detector(fd);
 	}
@@ -2830,22 +2879,22 @@ igt_main
 	igt_subtest_group {
 		const struct intel_execution_engine2 *e;
 
-		test_each_engine_store("fifo", fd, e)
-			fifo(fd, e->flags);
+		test_each_engine_store("fifo", fd, ctx, e)
+			fifo(fd, ctx, e->flags);
 
-		test_each_engine_store("implicit-read-write", fd, e)
-			implicit_rw(fd, e->flags, READ_WRITE);
+		test_each_engine_store("implicit-read-write", fd, ctx, e)
+			implicit_rw(fd, ctx, e->flags, READ_WRITE);
 
-		test_each_engine_store("implicit-write-read", fd, e)
-			implicit_rw(fd, e->flags, WRITE_READ);
+		test_each_engine_store("implicit-write-read", fd, ctx, e)
+			implicit_rw(fd, ctx, e->flags, WRITE_READ);
 
-		test_each_engine_store("implicit-boths", fd, e)
-			implicit_rw(fd, e->flags, READ_WRITE | WRITE_READ);
+		test_each_engine_store("implicit-boths", fd, ctx, e)
+			implicit_rw(fd, ctx, e->flags, READ_WRITE | WRITE_READ);
 
-		test_each_engine_store("independent", fd, e)
-			independent(fd, e->flags, 0);
-		test_each_engine_store("u-independent", fd, e)
-			independent(fd, e->flags, IGT_SPIN_USERPTR);
+		test_each_engine_store("independent", fd, ctx, e)
+			independent(fd, ctx, e->flags, 0);
+		test_each_engine_store("u-independent", fd, ctx, e)
+			independent(fd, ctx, e->flags, IGT_SPIN_USERPTR);
 	}
 
 	igt_subtest_group {
@@ -2856,19 +2905,19 @@ igt_main
 			igt_require(gem_scheduler_has_ctx_priority(fd));
 		}
 
-		test_each_engine("timeslicing", fd, e)
-			timeslice(fd, e->flags);
+		test_each_engine("timeslicing", fd, ctx, e)
+			timeslice(fd, &ctx->cfg, e->flags);
 
-		test_each_engine("thriceslice", fd, e)
-			timesliceN(fd, e->flags, 3);
+		test_each_engine("thriceslice", fd, ctx, e)
+			timesliceN(fd, &ctx->cfg, e->flags, 3);
 
-		test_each_engine("manyslice", fd, e)
-			timesliceN(fd, e->flags, 67);
+		test_each_engine("manyslice", fd, ctx, e)
+			timesliceN(fd, &ctx->cfg, e->flags, 67);
 
-		test_each_engine("lateslice", fd, e)
-			lateslice(fd, e->flags, 0);
-		test_each_engine("u-lateslice", fd, e)
-			lateslice(fd, e->flags, IGT_SPIN_USERPTR);
+		test_each_engine("lateslice", fd, ctx, e)
+			lateslice(fd, &ctx->cfg, e->flags, 0);
+		test_each_engine("u-lateslice", fd, ctx, e)
+			lateslice(fd, &ctx->cfg, e->flags, IGT_SPIN_USERPTR);
 
 		igt_subtest_group {
 			igt_fixture {
@@ -2877,23 +2926,23 @@ igt_main
 				igt_require(intel_gen(intel_get_drm_devid(fd)) >= 8);
 			}
 
-			test_each_engine("fairslice", fd, e)
-				fairslice(fd, e, 0, 2);
+			test_each_engine("fairslice", fd, ctx, e)
+				fairslice(fd, &ctx->cfg, e, 0, 2);
 
-			test_each_engine("u-fairslice", fd, e)
-				fairslice(fd, e, IGT_SPIN_USERPTR, 2);
+			test_each_engine("u-fairslice", fd, ctx, e)
+				fairslice(fd, &ctx->cfg, e, IGT_SPIN_USERPTR, 2);
 
 			igt_subtest("fairslice-all")  {
-				__for_each_physical_engine(fd, e) {
+				for_each_ctx_engine(fd, ctx, e) {
 					igt_fork(child, 1)
-						fairslice(fd, e, 0, 2);
+						fairslice(fd, &ctx->cfg, e, 0, 2);
 				}
 				igt_waitchildren();
 			}
 			igt_subtest("u-fairslice-all")  {
-				__for_each_physical_engine(fd, e) {
+				for_each_ctx_engine(fd, ctx, e) {
 					igt_fork(child, 1)
-						fairslice(fd, e,
+						fairslice(fd, &ctx->cfg, e,
 							  IGT_SPIN_USERPTR,
 							  2);
 				}
@@ -2901,84 +2950,84 @@ igt_main
 			}
 		}
 
-		test_each_engine("submit-early-slice", fd, e)
-			submit_slice(fd, e, EARLY_SUBMIT);
-		test_each_engine("u-submit-early-slice", fd, e)
-			submit_slice(fd, e, EARLY_SUBMIT | USERPTR);
-		test_each_engine("submit-golden-slice", fd, e)
-			submit_slice(fd, e, 0);
-		test_each_engine("u-submit-golden-slice", fd, e)
-			submit_slice(fd, e, USERPTR);
-		test_each_engine("submit-late-slice", fd, e)
-			submit_slice(fd, e, LATE_SUBMIT);
-		test_each_engine("u-submit-late-slice", fd, e)
-			submit_slice(fd, e, LATE_SUBMIT | USERPTR);
+		test_each_engine("submit-early-slice", fd, ctx, e)
+			submit_slice(fd, &ctx->cfg, e, EARLY_SUBMIT);
+		test_each_engine("u-submit-early-slice", fd, ctx, e)
+			submit_slice(fd, &ctx->cfg, e, EARLY_SUBMIT | USERPTR);
+		test_each_engine("submit-golden-slice", fd, ctx, e)
+			submit_slice(fd, &ctx->cfg, e, 0);
+		test_each_engine("u-submit-golden-slice", fd, ctx, e)
+			submit_slice(fd, &ctx->cfg, e, USERPTR);
+		test_each_engine("submit-late-slice", fd, ctx, e)
+			submit_slice(fd, &ctx->cfg, e, LATE_SUBMIT);
+		test_each_engine("u-submit-late-slice", fd, ctx, e)
+			submit_slice(fd, &ctx->cfg, e, LATE_SUBMIT | USERPTR);
 
 		igt_subtest("semaphore-user")
-			semaphore_userlock(fd, 0);
+			semaphore_userlock(fd, ctx, 0);
 		igt_subtest("semaphore-codependency")
-			semaphore_codependency(fd, 0);
+			semaphore_codependency(fd, ctx, 0);
 		igt_subtest("semaphore-resolve")
-			semaphore_resolve(fd, 0);
+			semaphore_resolve(fd, &ctx->cfg, 0);
 		igt_subtest("semaphore-noskip")
-			semaphore_noskip(fd, 0);
+			semaphore_noskip(fd, &ctx->cfg, 0);
 
 		igt_subtest("u-semaphore-user")
-			semaphore_userlock(fd, IGT_SPIN_USERPTR);
+			semaphore_userlock(fd, ctx, IGT_SPIN_USERPTR);
 		igt_subtest("u-semaphore-codependency")
-			semaphore_codependency(fd, IGT_SPIN_USERPTR);
+			semaphore_codependency(fd, ctx, IGT_SPIN_USERPTR);
 		igt_subtest("u-semaphore-resolve")
-			semaphore_resolve(fd, IGT_SPIN_USERPTR);
+			semaphore_resolve(fd, &ctx->cfg, IGT_SPIN_USERPTR);
 		igt_subtest("u-semaphore-noskip")
-			semaphore_noskip(fd, IGT_SPIN_USERPTR);
+			semaphore_noskip(fd, &ctx->cfg, IGT_SPIN_USERPTR);
 
 		igt_subtest("smoketest-all")
-			smoketest(fd, ALL_ENGINES, 30);
+			smoketest(fd, &ctx->cfg, ALL_ENGINES, 30);
 
-		test_each_engine_store("in-order", fd, e)
-			reorder(fd, e->flags, EQUAL);
+		test_each_engine_store("in-order", fd, ctx, e)
+			reorder(fd, &ctx->cfg, e->flags, EQUAL);
 
-		test_each_engine_store("out-order", fd, e)
-			reorder(fd, e->flags, 0);
+		test_each_engine_store("out-order", fd, ctx, e)
+			reorder(fd, &ctx->cfg, e->flags, 0);
 
-		test_each_engine_store("promotion", fd, e)
-			promotion(fd, e->flags);
+		test_each_engine_store("promotion", fd, ctx, e)
+			promotion(fd, &ctx->cfg, e->flags);
 
 		igt_subtest_group {
 			igt_fixture {
 				igt_require(gem_scheduler_has_preemption(fd));
 			}
 
-			test_each_engine_store("preempt", fd, e)
-				preempt(fd, e, 0);
+			test_each_engine_store("preempt", fd, ctx, e)
+				preempt(fd, &ctx->cfg, e, 0);
 
-			test_each_engine_store("preempt-contexts", fd, e)
-				preempt(fd, e, NEW_CTX);
+			test_each_engine_store("preempt-contexts", fd, ctx, e)
+				preempt(fd, &ctx->cfg, e, NEW_CTX);
 
-			test_each_engine_store("preempt-user", fd, e)
-				preempt(fd, e, USERPTR);
+			test_each_engine_store("preempt-user", fd, ctx, e)
+				preempt(fd, &ctx->cfg, e, USERPTR);
 
-			test_each_engine_store("preempt-self", fd, e)
-				preempt_self(fd, e->flags);
+			test_each_engine_store("preempt-self", fd, ctx, e)
+				preempt_self(fd, &ctx->cfg, e->flags);
 
-			test_each_engine_store("preempt-other", fd, e)
-				preempt_other(fd, e->flags, 0);
+			test_each_engine_store("preempt-other", fd, ctx, e)
+				preempt_other(fd, &ctx->cfg, e->flags, 0);
 
-			test_each_engine_store("preempt-other-chain", fd, e)
-				preempt_other(fd, e->flags, CHAIN);
+			test_each_engine_store("preempt-other-chain", fd, ctx, e)
+				preempt_other(fd, &ctx->cfg, e->flags, CHAIN);
 
-			test_each_engine_store("preempt-queue", fd, e)
-				preempt_queue(fd, e->flags, 0);
+			test_each_engine_store("preempt-queue", fd, ctx, e)
+				preempt_queue(fd, &ctx->cfg, e->flags, 0);
 
-			test_each_engine_store("preempt-queue-chain", fd, e)
-				preempt_queue(fd, e->flags, CHAIN);
-			test_each_engine_store("preempt-queue-contexts", fd, e)
-				preempt_queue(fd, e->flags, CONTEXTS);
+			test_each_engine_store("preempt-queue-chain", fd, ctx, e)
+				preempt_queue(fd, &ctx->cfg, e->flags, CHAIN);
+			test_each_engine_store("preempt-queue-contexts", fd, ctx, e)
+				preempt_queue(fd, &ctx->cfg, e->flags, CONTEXTS);
 
-			test_each_engine_store("preempt-queue-contexts-chain", fd, e)
-				preempt_queue(fd, e->flags, CONTEXTS | CHAIN);
+			test_each_engine_store("preempt-queue-contexts-chain", fd, ctx, e)
+				preempt_queue(fd, &ctx->cfg, e->flags, CONTEXTS | CHAIN);
 
-			test_each_engine_store("preempt-engines", fd, e)
+			test_each_engine_store("preempt-engines", fd, ctx, e)
 				preempt_engines(fd, e, 0);
 
 			igt_subtest_group {
@@ -2989,11 +3038,11 @@ igt_main
 					hang = igt_allow_hang(fd, 0, 0);
 				}
 
-				test_each_engine_store("preempt-hang", fd, e)
-					preempt(fd, e, NEW_CTX | HANG_LP);
+				test_each_engine_store("preempt-hang", fd, ctx, e)
+					preempt(fd, &ctx->cfg, e, NEW_CTX | HANG_LP);
 
-				test_each_engine_store("preemptive-hang", fd, e)
-					preemptive_hang(fd, e);
+				test_each_engine_store("preemptive-hang", fd, ctx, e)
+					preemptive_hang(fd, &ctx->cfg, e);
 
 				igt_fixture {
 					igt_disallow_hang(fd, hang);
@@ -3002,30 +3051,30 @@ igt_main
 			}
 		}
 
-		test_each_engine_store("noreorder", fd, e)
-			noreorder(fd, e->flags, 0, 0);
+		test_each_engine_store("noreorder", fd, ctx, e)
+			noreorder(fd, &ctx->cfg, e->flags, 0, 0);
 
-		test_each_engine_store("noreorder-priority", fd, e) {
+		test_each_engine_store("noreorder-priority", fd, ctx, e) {
 			igt_require(gem_scheduler_enabled(fd));
-			noreorder(fd, e->flags, MAX_PRIO, 0);
+			noreorder(fd, &ctx->cfg, e->flags, MAX_PRIO, 0);
 		}
 
-		test_each_engine_store("noreorder-corked", fd, e) {
+		test_each_engine_store("noreorder-corked", fd, ctx, e) {
 			igt_require(gem_scheduler_enabled(fd));
-			noreorder(fd, e->flags, MAX_PRIO, CORKED);
+			noreorder(fd, &ctx->cfg, e->flags, MAX_PRIO, CORKED);
 		}
 
-		test_each_engine_store("deep", fd, e)
-			deep(fd, e->flags);
+		test_each_engine_store("deep", fd, ctx, e)
+			deep(fd, &ctx->cfg, e->flags);
 
-		test_each_engine_store("wide", fd, e)
-			wide(fd, e->flags);
+		test_each_engine_store("wide", fd, ctx, e)
+			wide(fd, &ctx->cfg, e->flags);
 
-		test_each_engine_store("reorder-wide", fd, e)
-			reorder_wide(fd, e->flags);
+		test_each_engine_store("reorder-wide", fd, ctx, e)
+			reorder_wide(fd, &ctx->cfg, e->flags);
 
-		test_each_engine_store("smoketest", fd, e)
-			smoketest(fd, e->flags, 5);
+		test_each_engine_store("smoketest", fd, ctx, e)
+			smoketest(fd, &ctx->cfg, e->flags, 5);
 	}
 
 	igt_subtest_group {
@@ -3037,20 +3086,20 @@ igt_main
 			igt_require(gem_scheduler_has_preemption(fd));
 		}
 
-		test_each_engine("pi-ringfull", fd, e)
-			test_pi_ringfull(fd, e->flags, 0);
+		test_each_engine("pi-ringfull", fd, ctx, e)
+			test_pi_ringfull(fd, &ctx->cfg, e->flags, 0);
 
-		test_each_engine("pi-common", fd, e)
-			test_pi_ringfull(fd, e->flags, SHARED);
+		test_each_engine("pi-common", fd, ctx, e)
+			test_pi_ringfull(fd, &ctx->cfg, e->flags, SHARED);
 
-		test_each_engine("pi-userfault", fd, e)
-			test_pi_userfault(fd, e->flags);
+		test_each_engine("pi-userfault", fd, ctx, e)
+			test_pi_userfault(fd, &ctx->cfg, e->flags);
 
-		test_each_engine("pi-distinct-iova", fd, e)
-			test_pi_iova(fd, e->flags, 0);
+		test_each_engine("pi-distinct-iova", fd, ctx, e)
+			test_pi_iova(fd, &ctx->cfg, e->flags, 0);
 
-		test_each_engine("pi-shared-iova", fd, e)
-			test_pi_iova(fd, e->flags, SHARED);
+		test_each_engine("pi-shared-iova", fd, ctx, e)
+			test_pi_iova(fd, &ctx->cfg, e->flags, SHARED);
 	}
 
 	igt_subtest_group {
@@ -3060,7 +3109,7 @@ igt_main
 		}
 
 		igt_subtest("semaphore-power")
-			measure_semaphore_power(fd);
+			measure_semaphore_power(fd, ctx);
 	}
 
 	igt_fixture {
-- 
2.29.2

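The hunks above all boil down to the same substitution: query the engine configuration once, create immutable contexts from it, and iterate the context's own engines instead of mutating and cloning ctx0.  As a rough sketch of that idiom (helper names are the ones used in the diff; the exact signatures live in lib/intel_ctx.h and are not quoted here):

    static void example_subtest(int fd)
    {
        const struct intel_execution_engine2 *e;
        const intel_ctx_t *ctx;

        /* One context describing every physical engine, queried once. */
        ctx = intel_ctx_create_all_physical(fd);

        for_each_ctx_engine(fd, ctx, e) {
            /* Child contexts come from the same immutable config
             * instead of cloning ctx0's engine set.
             */
            intel_ctx_t *tmp = intel_ctx_create(fd, &ctx->cfg);
            igt_spin_t *spin;

            gem_context_set_priority(fd, tmp->id, MIN_PRIO);
            spin = __igt_spin_new(fd, .ctx = tmp, .engine = e->flags);

            /* ... subtest body ... */

            igt_spin_free(fd, spin);
            intel_ctx_destroy(fd, tmp);
        }

        /* The all-physical ctx is owned by the igt_main fixture. */
    }

The igt_main hunks then thread that one ctx (or just &ctx->cfg where a helper only needs the configuration) into every subtest through the reworked test_each_engine macros.
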

* [igt-dev] [RFC 09/30] tests/i915/perf_pmu: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (7 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 08/30] tests/i915/gem_exec_schedule: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 10/30] tests/i915/gem_exec_nop: " Jason Ekstrand
                   ` (22 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/perf_pmu.c | 227 +++++++++++++++++++++++-------------------
 1 file changed, 127 insertions(+), 100 deletions(-)

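A note on the shape of the converted PMU tests below: each busy-counter check opens the engine's busy event, keeps the engine loaded with a spinner submitted on the shared context, and compares the counter delta against wall time.  A minimal sketch built only from helpers visible in the hunks (open_pmu, spin_sync, pmu_read_single, measured_usleep, batch_duration_ns and tolerance are perf_pmu.c's own definitions), with the real tests' idle/trailing-idle variations omitted:

    static void busy_sketch(int gem_fd, const intel_ctx_t *ctx,
                            const struct intel_execution_engine2 *e)
    {
        unsigned long slept;
        uint64_t before, after;
        igt_spin_t *spin;
        int fd;

        /* Busy counter for this engine, addressed by class:instance. */
        fd = open_pmu(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance));

        /* The spinner now runs on the shared context, not on ctx0. */
        spin = spin_sync(gem_fd, ctx, e);

        before = pmu_read_single(fd);
        slept = measured_usleep(batch_duration_ns / 1000);
        after = pmu_read_single(fd);

        igt_spin_free(gem_fd, spin);
        close(fd);

        /* Busy time should track the sleep while the spinner ran. */
        assert_within_epsilon(after - before, slept, tolerance);
    }
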
diff --git a/tests/i915/perf_pmu.c b/tests/i915/perf_pmu.c
index c7ddac85..568fdb80 100644
--- a/tests/i915/perf_pmu.c
+++ b/tests/i915/perf_pmu.c
@@ -46,6 +46,7 @@
 #include "igt_perf.h"
 #include "igt_sysfs.h"
 #include "igt_pm.h"
+#include "intel_ctx.h"
 #include "sw_sync.h"
 
 IGT_TEST_DESCRIPTION("Test the i915 pmu perf interface");
@@ -171,11 +172,11 @@ static unsigned int measured_usleep(unsigned int usec)
 #define FLAG_HANG (32)
 #define TEST_S3 (64)
 
-static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
+static igt_spin_t * __spin_poll(int fd, const intel_ctx_t *ctx,
 				const struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
-		.ctx_id = ctx,
+		.ctx = ctx,
 		.engine = e->flags,
 	};
 
@@ -214,7 +215,7 @@ static unsigned long __spin_wait(int fd, igt_spin_t *spin)
 	return igt_nsec_elapsed(&start);
 }
 
-static igt_spin_t * __spin_sync(int fd, uint32_t ctx,
+static igt_spin_t * __spin_sync(int fd, const intel_ctx_t *ctx,
 				const struct intel_execution_engine2 *e)
 {
 	igt_spin_t *spin = __spin_poll(fd, ctx, e);
@@ -224,7 +225,7 @@ static igt_spin_t * __spin_sync(int fd, uint32_t ctx,
 	return spin;
 }
 
-static igt_spin_t * spin_sync(int fd, uint32_t ctx,
+static igt_spin_t * spin_sync(int fd, const intel_ctx_t *ctx,
 			      const struct intel_execution_engine2 *e)
 {
 	igt_require_gem(fd);
@@ -232,7 +233,7 @@ static igt_spin_t * spin_sync(int fd, uint32_t ctx,
 	return __spin_sync(fd, ctx, e);
 }
 
-static igt_spin_t * spin_sync_flags(int fd, uint32_t ctx, unsigned int flags)
+static igt_spin_t * spin_sync_flags(int fd, const intel_ctx_t *ctx, unsigned int flags)
 {
 	struct intel_execution_engine2 e = { };
 
@@ -276,7 +277,8 @@ static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
 }
 
 static void
-single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+single(int gem_fd, const intel_ctx_t *ctx,
+       const struct intel_execution_engine2 *e, unsigned int flags)
 {
 	unsigned long slept;
 	igt_spin_t *spin;
@@ -286,7 +288,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	fd = open_pmu(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e);
+		spin = spin_sync(gem_fd, ctx, e);
 	else
 		spin = NULL;
 
@@ -322,7 +324,8 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 }
 
 static void
-busy_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_start(int gem_fd, const intel_ctx_t *ctx,
+	   const struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, ts[2];
@@ -335,7 +338,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	sleep(2);
 
-	spin = __spin_sync(gem_fd, 0, e);
+	spin = __spin_sync(gem_fd, ctx, e);
 
 	fd = open_pmu(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
@@ -357,15 +360,16 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
  * will depend on the CI systems running it a lot to detect issues.
  */
 static void
-busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_double_start(int gem_fd, const intel_ctx_t *ctx,
+		  const struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, val2, ts[2];
 	igt_spin_t *spin[2];
-	uint32_t ctx;
+	intel_ctx_t *tmp_ctx;
 	int fd;
 
-	ctx = gem_context_clone_with_engines(gem_fd, 0);
+	tmp_ctx = intel_ctx_create(gem_fd, &ctx->cfg);
 
 	/*
 	 * Defeat the busy stats delayed disable, we need to guarantee we are
@@ -378,10 +382,10 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 * re-submission in execlists mode. Make sure busyness is correctly
 	 * reported with the engine busy, and after the engine went idle.
 	 */
-	spin[0] = __spin_sync(gem_fd, 0, e);
+	spin[0] = __spin_sync(gem_fd, ctx, e);
 	usleep(500e3);
 	spin[1] = __igt_spin_new(gem_fd,
-				 .ctx_id = ctx,
+				 .ctx = tmp_ctx,
 				 .engine = e->flags);
 
 	/*
@@ -412,7 +416,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 
 	close(fd);
 
-	gem_context_destroy(gem_fd, ctx);
+	intel_ctx_destroy(gem_fd, tmp_ctx);
 
 	assert_within_epsilon(val, ts[1] - ts[0], tolerance);
 	igt_assert_eq(val2, 0);
@@ -440,7 +444,8 @@ static void log_busy(unsigned int num_engines, uint64_t *val)
 }
 
 static void
-busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+busy_check_all(int gem_fd, const intel_ctx_t *ctx,
+	       const struct intel_execution_engine2 *e,
 	       const unsigned int num_engines, unsigned int flags)
 {
 	struct intel_execution_engine2 *e_;
@@ -453,7 +458,7 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	i = 0;
 	fd[0] = -1;
-	__for_each_physical_engine(gem_fd, e_) {
+	for_each_ctx_engine(gem_fd, ctx, e_) {
 		if (e->class == e_->class && e->instance == e_->instance)
 			busy_idx = i;
 
@@ -465,7 +470,7 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	igt_assert_eq(i, num_engines);
 
-	spin = spin_sync(gem_fd, 0, e);
+	spin = spin_sync(gem_fd, ctx, e);
 	pmu_read_multi(fd[0], num_engines, tval[0]);
 	slept = measured_usleep(batch_duration_ns / 1000);
 	if (flags & TEST_TRAILING_IDLE)
@@ -506,7 +511,8 @@ __submit_spin(int gem_fd, igt_spin_t *spin,
 }
 
 static void
-most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+most_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
+		    const struct intel_execution_engine2 *e,
 		    const unsigned int num_engines, unsigned int flags)
 {
 	struct intel_execution_engine2 *e_;
@@ -518,13 +524,13 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 	unsigned int idle_idx, i;
 
 	i = 0;
-	__for_each_physical_engine(gem_fd, e_) {
+	for_each_ctx_engine(gem_fd, ctx, e_) {
 		if (e->class == e_->class && e->instance == e_->instance)
 			idle_idx = i;
 		else if (spin)
 			__submit_spin(gem_fd, spin, e_, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e_);
+			spin = __spin_poll(gem_fd, ctx, e_);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e_->class, e_->instance);
 	}
@@ -564,7 +570,8 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 }
 
 static void
-all_busy_check_all(int gem_fd, const unsigned int num_engines,
+all_busy_check_all(int gem_fd, const intel_ctx_t *ctx,
+		   const unsigned int num_engines,
 		   unsigned int flags)
 {
 	struct intel_execution_engine2 *e;
@@ -576,11 +583,11 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
 	unsigned int i;
 
 	i = 0;
-	__for_each_physical_engine(gem_fd, e) {
+	for_each_ctx_engine(gem_fd, ctx, e) {
 		if (spin)
 			__submit_spin(gem_fd, spin, e, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e);
+			spin = __spin_poll(gem_fd, ctx, e);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	}
@@ -615,7 +622,9 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
 }
 
 static void
-no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+no_sema(int gem_fd, const intel_ctx_t *ctx,
+	const struct intel_execution_engine2 *e,
+	unsigned int flags)
 {
 	igt_spin_t *spin;
 	uint64_t val[2][2];
@@ -627,7 +636,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 			   fd[0]);
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e);
+		spin = spin_sync(gem_fd, ctx, e);
 	else
 		spin = NULL;
 
@@ -658,7 +667,8 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 #define   MI_SEMAPHORE_SAD_NEQ_SDD      (5 << 12)
 
 static void
-sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
+sema_wait(int gem_fd, const intel_ctx_t *ctx,
+	  const struct intel_execution_engine2 *e,
 	  unsigned int flags)
 {
 	struct drm_i915_gem_relocation_entry reloc[2] = {};
@@ -717,6 +727,7 @@ sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
 	eb.buffer_count = 2;
 	eb.buffers_ptr = to_user_pointer(obj);
 	eb.flags = e->flags;
+	eb.rsvd1 = ctx->id;
 
 	/**
 	 * Start the semaphore wait PMU and after some known time let the above
@@ -788,7 +799,7 @@ create_sema(int gem_fd, struct drm_i915_gem_relocation_entry *reloc)
 }
 
 static void
-__sema_busy(int gem_fd, int pmu,
+__sema_busy(int gem_fd, int pmu, const intel_ctx_t *ctx,
 	    const struct intel_execution_engine2 *e,
 	    int sema_pct,
 	    int busy_pct)
@@ -810,6 +821,7 @@ __sema_busy(int gem_fd, int pmu,
 		.buffer_count = 1,
 		.buffers_ptr = to_user_pointer(&obj),
 		.flags = e->flags,
+		.rsvd1 = ctx->id,
 	};
 	igt_spin_t *spin;
 	uint32_t *map;
@@ -821,7 +833,7 @@ __sema_busy(int gem_fd, int pmu,
 
 	map = gem_mmap__wc(gem_fd, obj.handle, 0, 4096, PROT_WRITE);
 	gem_execbuf(gem_fd, &eb);
-	spin = igt_spin_new(gem_fd, .engine = e->flags);
+	spin = igt_spin_new(gem_fd, .ctx = ctx, .engine = e->flags);
 
 	/* Wait until the batch is executed and the semaphore is busy-waiting */
 	while (!READ_ONCE(*map) && gem_bo_busy(gem_fd, obj.handle))
@@ -861,7 +873,7 @@ __sema_busy(int gem_fd, int pmu,
 }
 
 static void
-sema_busy(int gem_fd,
+sema_busy(int gem_fd, const intel_ctx_t *ctx,
 	  const struct intel_execution_engine2 *e,
 	  unsigned int flags)
 {
@@ -874,15 +886,15 @@ sema_busy(int gem_fd,
 	fd[1] = open_group(gem_fd, I915_PMU_ENGINE_BUSY(e->class, e->instance),
 			   fd[0]);
 
-	__sema_busy(gem_fd, fd[0], e, 50, 100);
-	__sema_busy(gem_fd, fd[0], e, 25, 50);
-	__sema_busy(gem_fd, fd[0], e, 75, 75);
+	__sema_busy(gem_fd, fd[0], ctx, e, 50, 100);
+	__sema_busy(gem_fd, fd[0], ctx, e, 25, 50);
+	__sema_busy(gem_fd, fd[0], ctx, e, 75, 75);
 
 	close(fd[0]);
 	close(fd[1]);
 }
 
-static void test_awake(int i915)
+static void test_awake(int i915, const intel_ctx_t *ctx)
 {
 	const struct intel_execution_engine2 *e;
 	unsigned long slept;
@@ -893,8 +905,8 @@ static void test_awake(int i915)
 	igt_skip_on(fd < 0);
 
 	/* Check that each engine is captured by the GT wakeref */
-	__for_each_physical_engine(i915, e) {
-		igt_spin_new(i915, .engine = e->flags);
+	for_each_ctx_engine(i915, ctx, e) {
+		igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
 
 		val = pmu_read_single(fd);
 		slept = measured_usleep(batch_duration_ns / 1000);
@@ -905,8 +917,8 @@ static void test_awake(int i915)
 	}
 
 	/* And that the total GT wakeref matches walltime not summation */
-	__for_each_physical_engine(i915, e)
-		igt_spin_new(i915, .engine = e->flags);
+	for_each_ctx_engine(i915, ctx, e)
+		igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
 
 	val = pmu_read_single(fd);
 	slept = measured_usleep(batch_duration_ns / 1000);
@@ -995,7 +1007,8 @@ static int has_secure_batches(const int fd)
 }
 
 static void
-event_wait(int gem_fd, const struct intel_execution_engine2 *e)
+event_wait(int gem_fd, const intel_ctx_t *ctx,
+	   const struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_exec_object2 obj = { };
 	struct drm_i915_gem_execbuffer2 eb = { };
@@ -1051,6 +1064,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
 	eb.flags = e->flags | I915_EXEC_SECURE;
+	eb.rsvd1 = ctx->id;
 
 	for_each_pipe_with_valid_output(&data.display, p, output) {
 		struct igt_helper_process waiter = { };
@@ -1123,7 +1137,8 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 }
 
 static void
-multi_client(int gem_fd, const struct intel_execution_engine2 *e)
+multi_client(int gem_fd, const intel_ctx_t *ctx,
+	     const struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	unsigned long slept[2];
@@ -1142,7 +1157,7 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	fd[1] = open_pmu(gem_fd, config);
 
-	spin = spin_sync(gem_fd, 0, e);
+	spin = spin_sync(gem_fd, ctx, e);
 
 	val[0] = val[1] = __pmu_read_single(fd[0], &ts[0]);
 	slept[1] = measured_usleep(batch_duration_ns / 1000);
@@ -1705,7 +1720,8 @@ test_rc6(int gem_fd, unsigned int flags)
 }
 
 static void
-test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
+test_enable_race(int gem_fd, const intel_ctx_t *ctx,
+		 const struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	struct igt_helper_process engine_load = { };
@@ -1723,6 +1739,7 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
 	eb.flags = e->flags;
+	eb.rsvd1 = ctx->id;
 
 	/*
 	 * This test is probabilistic so run in a few times to increase the
@@ -1769,7 +1786,8 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
 	__assert_within(x, ref, tolerance, tolerance)
 
 static void
-accuracy(int gem_fd, const struct intel_execution_engine2 *e,
+accuracy(int gem_fd, const intel_ctx_t *ctx,
+	 const struct intel_execution_engine2 *e,
 	 unsigned long target_busy_pct,
 	 unsigned long target_iters)
 {
@@ -1819,7 +1837,7 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e,
 		igt_spin_t *spin;
 
 		/* Allocate our spin batch and idle it. */
-		spin = igt_spin_new(gem_fd, .engine = e->flags);
+		spin = igt_spin_new(gem_fd, .ctx = ctx, .engine = e->flags);
 		igt_spin_end(spin);
 		gem_sync(gem_fd, spin->handle);
 
@@ -1978,6 +1996,7 @@ static int unload_i915(void)
 static void test_unload(unsigned int num_engines)
 {
 	igt_fork(child, 1) {
+		intel_ctx_cfg_t cfg;
 		const struct intel_execution_engine2 *e;
 		int fd[4 + num_engines * 3], i;
 		uint64_t *buf;
@@ -2003,7 +2022,8 @@ static void test_unload(unsigned int num_engines)
 		if (fd[count] != -1)
 			count++;
 
-		__for_each_physical_engine(i915, e) {
+		cfg = intel_ctx_cfg_all_physical(i915);
+		for_each_ctx_cfg_engine(i915, &cfg, e) {
 			fd[count] = perf_i915_open_group(i915,
 							 I915_PMU_ENGINE_BUSY(e->class, e->instance),
 							 fd[count - 1]);
@@ -2051,12 +2071,12 @@ static void test_unload(unsigned int num_engines)
 	igt_assert_eq(unload_i915(), 0);
 }
 
-#define test_each_engine(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		igt_dynamic_f("%s", e->name)
 
-#define test_each_rcs(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_rcs(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		for_each_if((e)->class == I915_ENGINE_CLASS_RENDER) \
 			igt_dynamic_f("%s", e->name)
 
@@ -2064,6 +2084,7 @@ igt_main
 {
 	const struct intel_execution_engine2 *e;
 	unsigned int num_engines = 0;
+	const intel_ctx_t *ctx = NULL;
 	int fd = -1;
 
 	/**
@@ -2078,7 +2099,9 @@ igt_main
 		igt_require_gem(fd);
 		igt_require(i915_perf_type_id(fd) > 0);
 
-		__for_each_physical_engine(fd, e)
+		ctx = intel_ctx_create_all_physical(fd);
+
+		for_each_ctx_engine(fd, ctx, e)
 			num_engines++;
 		igt_require(num_engines);
 	}
@@ -2106,48 +2129,48 @@ igt_main
 	 * Test that a single engine metric can be initialized or it
 	 * is correctly rejected.
 	 */
-	test_each_engine("init-busy", fd, e)
+	test_each_engine("init-busy", fd, ctx, e)
 		init(fd, e, I915_SAMPLE_BUSY);
 
-	test_each_engine("init-wait", fd, e)
+	test_each_engine("init-wait", fd, ctx, e)
 		init(fd, e, I915_SAMPLE_WAIT);
 
-	test_each_engine("init-sema", fd, e)
+	test_each_engine("init-sema", fd, ctx, e)
 		init(fd, e, I915_SAMPLE_SEMA);
 
 	/**
 	 * Test that engines show no load when idle.
 	 */
-	test_each_engine("idle", fd, e)
-		single(fd, e, 0);
+	test_each_engine("idle", fd, ctx, e)
+		single(fd, ctx, e, 0);
 
 	/**
 	 * Test that a single engine reports load correctly.
 	 */
-	test_each_engine("busy", fd, e)
-		single(fd, e, TEST_BUSY);
-	test_each_engine("busy-idle", fd, e)
-		single(fd, e, TEST_BUSY | TEST_TRAILING_IDLE);
+	test_each_engine("busy", fd, ctx, e)
+		single(fd, ctx, e, TEST_BUSY);
+	test_each_engine("busy-idle", fd, ctx, e)
+		single(fd, ctx, e, TEST_BUSY | TEST_TRAILING_IDLE);
 
 	/**
 	 * Test that when one engine is loaded others report no
 	 * load.
 	 */
-	test_each_engine("busy-check-all", fd, e)
-		busy_check_all(fd, e, num_engines, TEST_BUSY);
-	test_each_engine("busy-idle-check-all", fd, e)
-		busy_check_all(fd, e, num_engines,
+	test_each_engine("busy-check-all", fd, ctx, e)
+		busy_check_all(fd, ctx, e, num_engines, TEST_BUSY);
+	test_each_engine("busy-idle-check-all", fd, ctx, e)
+		busy_check_all(fd, ctx, e, num_engines,
 			       TEST_BUSY | TEST_TRAILING_IDLE);
 
 	/**
 	 * Test that when all except one engine are loaded all
 	 * loads are correctly reported.
 	 */
-	test_each_engine("most-busy-check-all", fd, e)
-		most_busy_check_all(fd, e, num_engines,
+	test_each_engine("most-busy-check-all", fd, ctx, e)
+		most_busy_check_all(fd, ctx, e, num_engines,
 				    TEST_BUSY);
-	test_each_engine("most-busy-idle-check-all", fd, e)
-		most_busy_check_all(fd, e, num_engines,
+	test_each_engine("most-busy-idle-check-all", fd, ctx, e)
+		most_busy_check_all(fd, ctx, e, num_engines,
 				    TEST_BUSY |
 				    TEST_TRAILING_IDLE);
 
@@ -2155,40 +2178,40 @@ igt_main
 	 * Test that semaphore counters report no activity on
 	 * idle or busy engines.
 	 */
-	test_each_engine("idle-no-semaphores", fd, e)
-		no_sema(fd, e, 0);
+	test_each_engine("idle-no-semaphores", fd, ctx, e)
+		no_sema(fd, ctx, e, 0);
 
-	test_each_engine("busy-no-semaphores", fd, e)
-		no_sema(fd, e, TEST_BUSY);
+	test_each_engine("busy-no-semaphores", fd, ctx, e)
+		no_sema(fd, ctx, e, TEST_BUSY);
 
-	test_each_engine("busy-idle-no-semaphores", fd, e)
-		no_sema(fd, e, TEST_BUSY | TEST_TRAILING_IDLE);
+	test_each_engine("busy-idle-no-semaphores", fd, ctx, e)
+		no_sema(fd, ctx, e, TEST_BUSY | TEST_TRAILING_IDLE);
 
 	/**
 	 * Test that semaphore waits are correctly reported.
 	 */
-	test_each_engine("semaphore-wait", fd, e)
-		sema_wait(fd, e, TEST_BUSY);
+	test_each_engine("semaphore-wait", fd, ctx, e)
+		sema_wait(fd, ctx, e, TEST_BUSY);
 
-	test_each_engine("semaphore-wait-idle", fd, e)
-		sema_wait(fd, e, TEST_BUSY | TEST_TRAILING_IDLE);
+	test_each_engine("semaphore-wait-idle", fd, ctx, e)
+		sema_wait(fd, ctx, e, TEST_BUSY | TEST_TRAILING_IDLE);
 
-	test_each_engine("semaphore-busy", fd, e)
-		sema_busy(fd, e, 0);
+	test_each_engine("semaphore-busy", fd, ctx, e)
+		sema_busy(fd, ctx, e, 0);
 
 	/**
 	 * Check that two perf clients do not influence each
 	 * others observations.
 	 */
-	test_each_engine("multi-client", fd, e)
-		multi_client(fd, e);
+	test_each_engine("multi-client", fd, ctx, e)
+		multi_client(fd, ctx, e);
 
 	/**
 	 * Check that reported usage is correct when PMU is
 	 * enabled after the batch is running.
 	 */
-	test_each_engine("busy-start", fd, e)
-		busy_start(fd, e);
+	test_each_engine("busy-start", fd, ctx, e)
+		busy_start(fd, ctx, e);
 
 	/**
 	 * Check that reported usage is correct when PMU is
@@ -2197,16 +2220,16 @@ igt_main
 	igt_subtest_group {
 		igt_fixture gem_require_contexts(fd);
 
-		test_each_engine("busy-double-start", fd, e)
-			busy_double_start(fd, e);
+		test_each_engine("busy-double-start", fd, ctx, e)
+			busy_double_start(fd, ctx, e);
 	}
 
 	/**
 	 * Check that the PMU can be safely enabled in face of
 	 * interrupt-heavy engine load.
 	 */
-	test_each_engine("enable-race", fd, e)
-		test_enable_race(fd, e);
+	test_each_engine("enable-race", fd, ctx, e)
+		test_enable_race(fd, ctx, e);
 
 	igt_subtest_group {
 		const unsigned int pct[] = { 2, 50, 98 };
@@ -2216,18 +2239,18 @@ igt_main
 		 */
 		for (unsigned int i = 0; i < ARRAY_SIZE(pct); i++) {
 			igt_subtest_with_dynamic_f("busy-accuracy-%u", pct[i]) {
-				__for_each_physical_engine(fd, e) {
+				for_each_ctx_engine(fd, ctx, e) {
 					igt_dynamic_f("%s", e->name)
-						accuracy(fd, e, pct[i], 10);
+						accuracy(fd, ctx, e, pct[i], 10);
 				}
 			}
 		}
 	}
 
-	test_each_engine("busy-hang", fd, e) {
+	test_each_engine("busy-hang", fd, ctx, e) {
 		igt_hang_t hang = igt_allow_hang(fd, 0, 0);
 
-		single(fd, e, TEST_BUSY | FLAG_HANG);
+		single(fd, ctx, e, TEST_BUSY | FLAG_HANG);
 
 		igt_disallow_hang(fd, hang);
 	}
@@ -2235,17 +2258,18 @@ igt_main
 	/**
 	 * Test that event waits are correctly reported.
 	 */
-	test_each_rcs("event-wait", fd, e)
-		event_wait(fd, e);
+	test_each_rcs("event-wait", fd, ctx, e)
+		event_wait(fd, ctx, e);
 
 	/**
 	 * Test that when all engines are loaded all loads are
 	 * correctly reported.
 	 */
 	igt_subtest("all-busy-check-all")
-		all_busy_check_all(fd, num_engines, TEST_BUSY);
+		all_busy_check_all(fd, ctx, num_engines,
+				   TEST_BUSY);
 	igt_subtest("all-busy-idle-check-all")
-		all_busy_check_all(fd, num_engines,
+		all_busy_check_all(fd, ctx, num_engines,
 				   TEST_BUSY | TEST_TRAILING_IDLE);
 
 	/**
@@ -2290,27 +2314,30 @@ igt_main
 	 * Test GT wakeref tracking (similar to RC0, opposite of RC6)
 	 */
 	igt_subtest("gt-awake")
-		test_awake(fd);
+		test_awake(fd, ctx);
 
 	/**
 	 * Check render nodes are counted.
 	 */
 	igt_subtest_group {
 		int render_fd = -1;
+		intel_ctx_t *render_ctx = NULL;
 
 		igt_fixture {
 			render_fd = __drm_open_driver_render(DRIVER_INTEL);
 			igt_require_gem(render_fd);
+			render_ctx = intel_ctx_create_all_physical(render_fd);
 
 			gem_quiescent_gpu(fd);
 		}
 
-		test_each_engine("render-node-busy", render_fd, e)
-			single(render_fd, e, TEST_BUSY);
-		test_each_engine("render-node-busy-idle", render_fd, e)
-			single(render_fd, e, TEST_BUSY | TEST_TRAILING_IDLE);
+		test_each_engine("render-node-busy", render_fd, ctx, e)
+			single(render_fd, render_ctx, e, TEST_BUSY);
+		test_each_engine("render-node-busy-idle", render_fd, ctx, e)
+			single(render_fd, render_ctx, e, TEST_BUSY | TEST_TRAILING_IDLE);
 
 		igt_fixture {
+			intel_ctx_destroy(render_fd, render_ctx);
 			close(render_fd);
 		}
 	}
-- 
2.29.2


* [igt-dev] [RFC 10/30] tests/i915/gem_exec_nop: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (8 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 09/30] tests/i915/perf_pmu: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 11/30] tests/i915/gem_exec_reloc: " Jason Ekstrand
                   ` (21 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev
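
Instead of leaning on ctx0 having been quietly rewritten by the engine
iterator, every helper now takes an explicit intel_ctx_t, points
execbuf.rsvd1 at it and walks its engine list.  A condensed sketch of
the pattern (illustrative only: nop_everywhere() is not a function in
this test, and the usual IGT headers are assumed):

    static void nop_everywhere(int fd, const intel_ctx_t *ctx,
                               uint32_t handle)
    {
        const struct intel_execution_engine2 *e;
        /* handle is assumed to already contain MI_BATCH_BUFFER_END */
        struct drm_i915_gem_exec_object2 obj = { .handle = handle };
        struct drm_i915_gem_execbuffer2 execbuf = {
            .buffers_ptr = to_user_pointer(&obj),
            .buffer_count = 1,
            .rsvd1 = ctx->id, /* execbuf against the context we were given */
        };

        /* was: __for_each_physical_engine(fd, e) */
        for_each_ctx_engine(fd, ctx, e) {
            execbuf.flags = e->flags;
            gem_execbuf(fd, &execbuf);
        }
    }

Child processes that reopen the driver now build their own context from
the parent's config with intel_ctx_create(i915, &ctx->cfg) rather than
copying engines onto context 0.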

---
 tests/i915/gem_exec_nop.c | 158 +++++++++++++++++++++++---------------
 1 file changed, 94 insertions(+), 64 deletions(-)

diff --git a/tests/i915/gem_exec_nop.c b/tests/i915/gem_exec_nop.c
index f24ff88f..af32ed00 100644
--- a/tests/i915/gem_exec_nop.c
+++ b/tests/i915/gem_exec_nop.c
@@ -45,6 +45,7 @@
 #include "igt_device.h"
 #include "igt_rand.h"
 #include "igt_sysfs.h"
+#include "intel_ctx.h"
 
 
 #define ENGINE_FLAGS  (I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK)
@@ -62,7 +63,7 @@ static double elapsed(const struct timespec *start, const struct timespec *end)
 		(end->tv_nsec - start->tv_nsec)*1e-9);
 }
 
-static double nop_on_ring(int fd, uint32_t handle,
+static double nop_on_ring(int fd, uint32_t handle, const intel_ctx_t *ctx,
 			  const struct intel_execution_engine2 *e,
 			  int timeout_ms,
 			  unsigned long *out)
@@ -81,6 +82,7 @@ static double nop_on_ring(int fd, uint32_t handle,
 	execbuf.flags = e->flags;
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	execbuf.flags |= I915_EXEC_NO_RELOC;
+	execbuf.rsvd1 = ctx->id;
 	if (__gem_execbuf(fd, &execbuf)) {
 		execbuf.flags = e->flags;
 		gem_execbuf(fd, &execbuf);
@@ -101,7 +103,8 @@ static double nop_on_ring(int fd, uint32_t handle,
 	return elapsed(&start, &now);
 }
 
-static void poll_ring(int fd, const struct intel_execution_engine2 *e,
+static void poll_ring(int fd, const intel_ctx_t *ctx,
+		      const struct intel_execution_engine2 *e,
 		      int timeout)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -185,6 +188,7 @@ static void poll_ring(int fd, const struct intel_execution_engine2 *e,
 	execbuf.buffers_ptr = to_user_pointer(&obj);
 	execbuf.buffer_count = 1;
 	execbuf.flags = e->flags | flags;
+	execbuf.rsvd1 = ctx->id;
 
 	cycles = 0;
 	do {
@@ -212,7 +216,8 @@ static void poll_ring(int fd, const struct intel_execution_engine2 *e,
 	gem_close(fd, obj.handle);
 }
 
-static void poll_sequential(int fd, const char *name, int timeout)
+static void poll_sequential(int fd, const intel_ctx_t *ctx,
+			    const char *name, int timeout)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const struct intel_execution_engine2 *e;
@@ -232,7 +237,7 @@ static void poll_sequential(int fd, const char *name, int timeout)
 		flags |= I915_EXEC_SECURE;
 
 	nengine = 0;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		if (!gem_class_can_store_dword(fd, e->class) ||
 		    !gem_class_has_mutable_submission(fd, e->class))
 			continue;
@@ -312,6 +317,7 @@ static void poll_sequential(int fd, const char *name, int timeout)
 	memset(&execbuf, 0, sizeof(execbuf));
 	execbuf.buffers_ptr = to_user_pointer(obj);
 	execbuf.buffer_count = ARRAY_SIZE(obj);
+	execbuf.rsvd1 = ctx->id;
 
 	cycles = 0;
 	do {
@@ -342,19 +348,19 @@ static void poll_sequential(int fd, const char *name, int timeout)
 	gem_close(fd, obj[0].handle);
 }
 
-static void single(int fd, uint32_t handle,
+static void single(int fd, uint32_t handle, const intel_ctx_t *ctx,
 		   const struct intel_execution_engine2 *e)
 {
 	double time;
 	unsigned long count;
 
-	time = nop_on_ring(fd, handle, e, 20000, &count);
+	time = nop_on_ring(fd, handle, ctx, e, 20000, &count);
 	igt_info("%s: %'lu cycles: %.3fus\n",
 		  e->name, count, time*1e6 / count);
 }
 
 static double
-stable_nop_on_ring(int fd, uint32_t handle,
+stable_nop_on_ring(int fd, uint32_t handle, const intel_ctx_t *ctx,
 		   const struct intel_execution_engine2 *e,
 		   int timeout_ms,
 		   int reps)
@@ -371,7 +377,7 @@ stable_nop_on_ring(int fd, uint32_t handle,
 		unsigned long count;
 		double time;
 
-		time = nop_on_ring(fd, handle, e, timeout_ms, &count);
+		time = nop_on_ring(fd, handle, ctx, e, timeout_ms, &count);
 		igt_stats_push_float(&s, time / count);
 	}
 
@@ -387,7 +393,7 @@ stable_nop_on_ring(int fd, uint32_t handle,
                      "'%s' != '%s' (%f not within %f%% tolerance of %f)\n",\
                      #x, #ref, x, tolerance * 100.0, ref)
 
-static void headless(int fd, uint32_t handle,
+static void headless(int fd, uint32_t handle, const intel_ctx_t *ctx,
 		     const struct intel_execution_engine2 *e)
 {
 	unsigned int nr_connected = 0;
@@ -411,11 +417,11 @@ static void headless(int fd, uint32_t handle,
 	/* set graphics mode to prevent blanking */
 	kmstest_set_vt_graphics_mode();
 
-	nop_on_ring(fd, handle, e, 10, &count);
+	nop_on_ring(fd, handle, ctx, e, 10, &count);
 	igt_require_f(count > 100, "submillisecond precision required\n");
 
 	/* benchmark nops */
-	n_display = stable_nop_on_ring(fd, handle, e, 500, 5);
+	n_display = stable_nop_on_ring(fd, handle, ctx, e, 500, 5);
 	igt_info("With one display connected: %.2fus\n",
 		 n_display * 1e6);
 
@@ -423,7 +429,7 @@ static void headless(int fd, uint32_t handle,
 	kmstest_unset_all_crtcs(fd, res);
 
 	/* benchmark nops again */
-	n_headless = stable_nop_on_ring(fd, handle, e, 500, 5);
+	n_headless = stable_nop_on_ring(fd, handle, ctx, e, 500, 5);
 	igt_info("Without a display connected (headless): %.2fus\n",
 		 n_headless * 1e6);
 
@@ -431,7 +437,8 @@ static void headless(int fd, uint32_t handle,
 	assert_within_epsilon(n_headless, n_display, 0.1f);
 }
 
-static void parallel(int fd, uint32_t handle, int timeout)
+static void parallel(int fd, uint32_t handle,
+		     const intel_ctx_t *ctx, int timeout)
 {
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -445,11 +452,11 @@ static void parallel(int fd, uint32_t handle, int timeout)
 	sum = 0;
 	nengine = 0;
 
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		engines[nengine] = e->flags;
 		names[nengine++] = strdup(e->name);
 
-		time = nop_on_ring(fd, handle, e, 250, &count) / count;
+		time = nop_on_ring(fd, handle, ctx, e, 250, &count) / count;
 		sum += time;
 		igt_debug("%s: %.3fus\n", e->name, 1e6*time);
 	}
@@ -464,6 +471,7 @@ static void parallel(int fd, uint32_t handle, int timeout)
 	execbuf.buffer_count = 1;
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	execbuf.flags |= I915_EXEC_NO_RELOC;
+	execbuf.rsvd1 = ctx->id;
 	if (__gem_execbuf(fd, &execbuf)) {
 		execbuf.flags = 0;
 		gem_execbuf(fd, &execbuf);
@@ -494,7 +502,8 @@ static void parallel(int fd, uint32_t handle, int timeout)
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 }
 
-static void independent(int fd, uint32_t handle, int timeout)
+static void independent(int fd, uint32_t handle,
+			const intel_ctx_t *ctx, int timeout)
 {
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -507,11 +516,11 @@ static void independent(int fd, uint32_t handle, int timeout)
 
 	sum = 0;
 	nengine = 0;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		engines[nengine] = e->flags;
 		names[nengine++] = strdup(e->name);
 
-		time = nop_on_ring(fd, handle, e, 250, &count) / count;
+		time = nop_on_ring(fd, handle, ctx, e, 250, &count) / count;
 		sum += time;
 		igt_debug("%s: %.3fus\n", e->name, 1e6*time);
 	}
@@ -526,6 +535,7 @@ static void independent(int fd, uint32_t handle, int timeout)
 	execbuf.buffer_count = 1;
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	execbuf.flags |= I915_EXEC_NO_RELOC;
+	execbuf.rsvd1 = ctx->id;
 	if (__gem_execbuf(fd, &execbuf)) {
 		execbuf.flags = 0;
 		gem_execbuf(fd, &execbuf);
@@ -562,7 +572,7 @@ static void independent(int fd, uint32_t handle, int timeout)
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 }
 
-static void multiple(int fd,
+static void multiple(int fd, const intel_ctx_t *ctx,
 		     const struct intel_execution_engine2 *e,
 		     int timeout)
 {
@@ -581,6 +591,7 @@ static void multiple(int fd,
 	execbuf.flags = e->flags;
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	execbuf.flags |= I915_EXEC_NO_RELOC;
+	execbuf.rsvd1 = ctx->id;
 	if (__gem_execbuf(fd, &execbuf)) {
 		execbuf.flags = e->flags;
 		gem_execbuf(fd, &execbuf);
@@ -592,9 +603,11 @@ static void multiple(int fd,
 		unsigned long count;
 		double time;
 		int i915;
+		intel_ctx_t *child_ctx;
 
 		i915 = gem_reopen_driver(fd);
-		gem_context_copy_engines(fd, 0, i915, 0);
+		child_ctx = intel_ctx_create(i915, &ctx->cfg);
+		execbuf.rsvd1 = child_ctx->id;
 
 		obj.handle = gem_create(i915, 4096);
 		gem_write(i915, obj.handle, 0, &bbe, sizeof(bbe));
@@ -609,6 +622,7 @@ static void multiple(int fd,
 		} while (elapsed(&start, &now) < timeout);
 		time = elapsed(&start, &now) / count;
 		igt_info("%d: %ld cycles, %.3fus\n", child, count, 1e6*time);
+		intel_ctx_destroy(i915, child_ctx);
 	}
 
 	igt_waitchildren();
@@ -617,7 +631,8 @@ static void multiple(int fd,
 	gem_close(fd, obj.handle);
 }
 
-static void series(int fd, uint32_t handle, int timeout)
+static void series(int fd, uint32_t handle,
+		   const intel_ctx_t *ctx, int timeout)
 {
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -630,8 +645,8 @@ static void series(int fd, uint32_t handle, int timeout)
 	const char *name;
 
 	nengine = 0;
-	__for_each_physical_engine(fd, e) {
-		time = nop_on_ring(fd, handle, e, 250, &count) / count;
+	for_each_ctx_engine(fd, ctx, e) {
+		time = nop_on_ring(fd, handle, ctx, e, 250, &count) / count;
 		if (time > max) {
 			name = e->name;
 			max = time;
@@ -653,6 +668,7 @@ static void series(int fd, uint32_t handle, int timeout)
 	execbuf.buffer_count = 1;
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	execbuf.flags |= I915_EXEC_NO_RELOC;
+	execbuf.rsvd1 = ctx->id;
 	if (__gem_execbuf(fd, &execbuf)) {
 		execbuf.flags = 0;
 		gem_execbuf(fd, &execbuf);
@@ -688,9 +704,11 @@ static void xchg(void *array, unsigned i, unsigned j)
 	u[j] = tmp;
 }
 
-static void sequential(int fd, uint32_t handle, unsigned flags, int timeout)
+static void sequential(int fd, uint32_t handle,
+		       const intel_ctx_t *ctx, unsigned flags, int timeout)
 {
 	const int ncpus = flags & FORKED ? sysconf(_SC_NPROCESSORS_ONLN) : 1;
+	intel_ctx_t *tmp_ctx = NULL, *child_ctx = NULL;
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj[2];
@@ -707,10 +725,10 @@ static void sequential(int fd, uint32_t handle, unsigned flags, int timeout)
 
 	nengine = 0;
 	sum = 0;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		unsigned long count;
 
-		time = nop_on_ring(fd, handle, e, 250, &count) / count;
+		time = nop_on_ring(fd, handle, ctx, e, 250, &count) / count;
 		sum += time;
 		igt_debug("%s: %.3fus\n", e->name, 1e6*time);
 
@@ -734,7 +752,8 @@ static void sequential(int fd, uint32_t handle, unsigned flags, int timeout)
 
 	if (flags & CONTEXT) {
 		gem_require_contexts(fd);
-		execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
+		tmp_ctx = intel_ctx_create(fd, &ctx->cfg);
+		execbuf.rsvd1 = tmp_ctx->id;
 	}
 
 	for (n = 0; n < nengine; n++) {
@@ -754,7 +773,8 @@ static void sequential(int fd, uint32_t handle, unsigned flags, int timeout)
 
 		if (flags & CONTEXT) {
 			gem_require_contexts(fd);
-			execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
+			child_ctx = intel_ctx_create(fd, &ctx->cfg);
+			execbuf.rsvd1 = child_ctx->id;
 		}
 
 		hars_petruska_f54_1_random_perturb(child);
@@ -777,7 +797,7 @@ static void sequential(int fd, uint32_t handle, unsigned flags, int timeout)
 		results[child] = elapsed(&start, &now) / count;
 
 		if (flags & CONTEXT)
-			gem_context_destroy(fd, execbuf.rsvd1);
+			intel_ctx_destroy(fd, child_ctx);
 
 		gem_close(fd, obj[0].handle);
 	}
@@ -793,7 +813,7 @@ static void sequential(int fd, uint32_t handle, unsigned flags, int timeout)
 		 nengine, ncpus, 1e6*results[ncpus], 1e6*sum*ncpus);
 
 	if (flags & CONTEXT)
-		gem_context_destroy(fd, execbuf.rsvd1);
+		intel_ctx_destroy(fd, tmp_ctx);
 
 	gem_close(fd, obj[0].handle);
 	munmap(results, 4096);
@@ -810,6 +830,7 @@ static bool fence_wait(int fence)
 }
 
 static void fence_signal(int fd, uint32_t handle,
+			 const intel_ctx_t *ctx,
 			 const struct intel_execution_engine2 *ring_id,
 			 const char *ring_name, int timeout)
 {
@@ -827,7 +848,7 @@ static void fence_signal(int fd, uint32_t handle,
 
 	nengine = 0;
 	if (!ring_id) {
-		__for_each_physical_engine(fd, __e)
+		for_each_ctx_engine(fd, ctx, __e)
 			engines[nengine++] = __e->flags;
 	} else {
 		engines[nengine++] = ring_id->flags;
@@ -845,6 +866,7 @@ static void fence_signal(int fd, uint32_t handle,
 	execbuf.buffers_ptr = to_user_pointer(&obj);
 	execbuf.buffer_count = 1;
 	execbuf.flags = I915_EXEC_FENCE_OUT;
+	execbuf.rsvd1 = ctx->id;
 
 	n = 0;
 	count = 0;
@@ -885,20 +907,21 @@ static void fence_signal(int fd, uint32_t handle,
 }
 
 static void preempt(int fd, uint32_t handle,
+		    const intel_ctx_t *ctx,
 		    const struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_object2 obj;
 	struct timespec start, now;
 	unsigned long count;
-	uint32_t ctx[2];
+	intel_ctx_t *tmp_ctx[2];
 	igt_spin_t *spin;
 
-	ctx[0] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[0], MIN_PRIO);
+	tmp_ctx[0] = intel_ctx_create(fd, &ctx->cfg);
+	gem_context_set_priority(fd, tmp_ctx[0]->id, MIN_PRIO);
 
-	ctx[1] = gem_context_clone_with_engines(fd, 0);
-	gem_context_set_priority(fd, ctx[1], MAX_PRIO);
+	tmp_ctx[1] = intel_ctx_create(fd, &ctx->cfg);
+	gem_context_set_priority(fd, tmp_ctx[1]->id, MAX_PRIO);
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = handle;
@@ -909,15 +932,16 @@ static void preempt(int fd, uint32_t handle,
 	execbuf.flags = e->flags;
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	execbuf.flags |= I915_EXEC_NO_RELOC;
+	execbuf.rsvd1 = ctx->id;
 	if (__gem_execbuf(fd, &execbuf)) {
 		execbuf.flags = e->flags;
 		gem_execbuf(fd, &execbuf);
 	}
-	execbuf.rsvd1 = ctx[1];
+	execbuf.rsvd1 = tmp_ctx[1]->id;
 	intel_detect_and_clear_missed_interrupts(fd);
 
 	count = 0;
-	spin = __igt_spin_new(fd, .ctx_id = ctx[0], .engine = e->flags);
+	spin = __igt_spin_new(fd, .ctx = tmp_ctx[0], .engine = e->flags);
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	do {
 		gem_execbuf(fd, &execbuf);
@@ -927,8 +951,8 @@ static void preempt(int fd, uint32_t handle,
 	igt_spin_free(fd, spin);
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 
-	gem_context_destroy(fd, ctx[1]);
-	gem_context_destroy(fd, ctx[0]);
+	intel_ctx_destroy(fd, tmp_ctx[1]);
+	intel_ctx_destroy(fd, tmp_ctx[0]);
 
 	igt_info("%s: %'lu cycles: %.3fus\n",
 		 e->name, count, elapsed(&start, &now)*1e6 / count);
@@ -937,6 +961,7 @@ static void preempt(int fd, uint32_t handle,
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx = NULL;
 	uint32_t handle = 0;
 	int device = -1;
 
@@ -948,6 +973,11 @@ igt_main
 		gem_submission_print_method(device);
 		gem_scheduler_print_capability(device);
 
+		if (gem_has_contexts(device))
+			ctx = intel_ctx_create_all_physical(device);
+		else
+			ctx = intel_ctx_0(device);
+
 		handle = gem_create(device, 4096);
 		gem_write(device, handle, 0, &bbe, sizeof(bbe));
 
@@ -955,57 +985,57 @@ igt_main
 	}
 
 	igt_subtest("basic-series")
-		series(device, handle, 2);
+		series(device, handle, ctx, 2);
 
 	igt_subtest("basic-parallel")
-		parallel(device, handle, 2);
+		parallel(device, handle, ctx, 2);
 
 	igt_subtest("basic-sequential")
-		sequential(device, handle, 0, 2);
+		sequential(device, handle, ctx, 0, 2);
 
 	igt_subtest_with_dynamic("single") {
-		__for_each_physical_engine(device, e) {
+		for_each_ctx_engine(device, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				single(device, handle, e);
+				single(device, handle, ctx, e);
 		}
 	}
 
 	igt_subtest_with_dynamic("signal") {
-		__for_each_physical_engine(device, e) {
+		for_each_ctx_engine(device, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				fence_signal(device, handle, e,
-					     e->name, 2);
+				fence_signal(device, handle, ctx,
+					     e, e->name, 2);
 		}
 	}
 
 	igt_subtest("signal-all")
 		/* NULL value means all engines */
-		fence_signal(device, handle, NULL, "all", 20);
+		fence_signal(device, handle, ctx, NULL, "all", 20);
 
 	igt_subtest("series")
-		series(device, handle, 20);
+		series(device, handle, ctx, 20);
 
 	igt_subtest("parallel")
-		parallel(device, handle, 20);
+		parallel(device, handle, ctx, 20);
 
 	igt_subtest("independent")
-		independent(device, handle, 20);
+		independent(device, handle, ctx, 20);
 
 	igt_subtest_with_dynamic("multiple") {
-		__for_each_physical_engine(device, e) {
+		for_each_ctx_engine(device, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				multiple(device, e, 20);
+				multiple(device, ctx, e, 20);
 		}
 	}
 
 	igt_subtest("sequential")
-		sequential(device, handle, 0, 20);
+		sequential(device, handle, ctx, 0, 20);
 
 	igt_subtest("forked-sequential")
-		sequential(device, handle, FORKED, 20);
+		sequential(device, handle, ctx, FORKED, 20);
 
 	igt_subtest("context-sequential")
-		sequential(device, handle, FORKED | CONTEXT, 20);
+		sequential(device, handle, ctx, FORKED | CONTEXT, 20);
 
 	igt_subtest_group {
 		igt_fixture {
@@ -1014,9 +1044,9 @@ igt_main
 			igt_require(gem_scheduler_has_preemption(device));
 		}
 		igt_subtest_with_dynamic("preempt") {
-			__for_each_physical_engine(device, e) {
+			for_each_ctx_engine(device, ctx, e) {
 				igt_dynamic_f("%s", e->name)
-					preempt(device, handle, e);
+					preempt(device, handle, ctx, e);
 			}
 		}
 	}
@@ -1027,23 +1057,23 @@ igt_main
 		}
 
 		igt_subtest_with_dynamic("poll") {
-			__for_each_physical_engine(device, e) {
+			for_each_ctx_engine(device, ctx, e) {
 				/* Requires master for STORE_DWORD on gen4/5 */
 				igt_dynamic_f("%s", e->name)
-					poll_ring(device, e, 20);
+					poll_ring(device, ctx, e, 20);
 			}
 		}
 
 		igt_subtest_with_dynamic("headless") {
-			__for_each_physical_engine(device, e) {
+			for_each_ctx_engine(device, ctx, e) {
 				igt_dynamic_f("%s", e->name)
 				/* Requires master for changing display modes */
-					headless(device, handle, e);
+					headless(device, handle, ctx, e);
 			}
 		}
 
 		igt_subtest("poll-sequential")
-			poll_sequential(device, "Sequential", 20);
+			poll_sequential(device, ctx, "Sequential", 20);
 
 	}
 
-- 
2.29.2


* [igt-dev] [RFC 11/30] tests/i915/gem_exec_reloc: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (9 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 10/30] tests/i915/gem_exec_nop: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 12/30] tests/i915/gem_busy: " Jason Ekstrand
                   ` (20 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev
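
Spinners and temporary contexts are now tied to an explicit intel_ctx_t
as well: spin batches are created with .ctx = ctx, and the short-lived
context used under preemption is built from the parent's config instead
of cloning context 0.  A minimal sketch of that pattern (illustrative
only; run_with_temp_ctx() is not a helper in this test):

    static void run_with_temp_ctx(int i915, const intel_ctx_t *ctx,
                                  const struct intel_execution_engine2 *e)
    {
        /* was: uint32_t id = gem_context_clone_with_engines(i915, 0); */
        intel_ctx_t *tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
        igt_spin_t *spin = __igt_spin_new(i915, .ctx = tmp_ctx,
                                          .engine = e->flags);

        igt_spin_free(i915, spin);
        intel_ctx_destroy(i915, tmp_ctx); /* was: gem_context_destroy() */
    }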

---
 tests/i915/gem_exec_reloc.c | 102 ++++++++++++++++++++++--------------
 1 file changed, 62 insertions(+), 40 deletions(-)

diff --git a/tests/i915/gem_exec_reloc.c b/tests/i915/gem_exec_reloc.c
index a897cc67..358c123d 100644
--- a/tests/i915/gem_exec_reloc.c
+++ b/tests/i915/gem_exec_reloc.c
@@ -266,7 +266,7 @@ static void check_bo(int fd, uint32_t handle)
 	munmap(map, 4096);
 }
 
-static void active(int fd, unsigned engine)
+static void active(int fd, const intel_ctx_t *ctx, unsigned engine)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_relocation_entry reloc;
@@ -280,7 +280,7 @@ static void active(int fd, unsigned engine)
 	if (engine == ALL_ENGINES) {
 		const struct intel_execution_engine2 *e;
 
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			if (gem_class_can_store_dword(fd, e->class))
 				engines[nengine++] = e->flags;
 		}
@@ -308,6 +308,7 @@ static void active(int fd, unsigned engine)
 	execbuf.buffer_count = 2;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	for (pass = 0; pass < 1024; pass++) {
 		uint32_t batch[16];
@@ -367,7 +368,8 @@ static uint64_t many_relocs(unsigned long count, unsigned long *out)
 	return to_user_pointer(reloc);
 }
 
-static void __many_active(int i915, unsigned engine, unsigned long count)
+static void __many_active(int i915, const intel_ctx_t *ctx, unsigned engine,
+			  unsigned long count)
 {
 	unsigned long reloc_sz;
 	struct drm_i915_gem_exec_object2 obj[2] = {{
@@ -379,10 +381,12 @@ static void __many_active(int i915, unsigned engine, unsigned long count)
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = ARRAY_SIZE(obj),
 		.flags = engine | I915_EXEC_HANDLE_LUT,
+		.rsvd1 = ctx->id,
 	};
 	igt_spin_t *spin;
 
 	spin = __igt_spin_new(i915,
+			      .ctx = ctx,
 			      .engine = engine,
 			      .dependency = obj[0].handle,
 			      .flags = (IGT_SPIN_FENCE_OUT |
@@ -405,7 +409,7 @@ static void __many_active(int i915, unsigned engine, unsigned long count)
 	gem_close(i915, obj[0].handle);
 }
 
-static void many_active(int i915, unsigned engine)
+static void many_active(int i915, const intel_ctx_t *ctx, unsigned engine)
 {
 	const uint64_t max = gem_aperture_size(i915) / 2;
 	unsigned long count = 256;
@@ -418,7 +422,7 @@ static void many_active(int i915, unsigned engine)
 			break;
 
 		igt_debug("Testing count:%lu\n", count);
-		__many_active(i915, engine, count);
+		__many_active(i915, ctx, engine, count);
 
 		count <<= 1;
 		if (count * 8 >= max)
@@ -426,7 +430,8 @@ static void many_active(int i915, unsigned engine)
 	}
 }
 
-static void __wide_active(int i915, unsigned engine, unsigned long count)
+static void __wide_active(int i915, const intel_ctx_t *ctx, unsigned engine,
+			  unsigned long count)
 {
 	struct drm_i915_gem_relocation_entry *reloc =
 		calloc(count, sizeof(*reloc));
@@ -436,6 +441,7 @@ static void __wide_active(int i915, unsigned engine, unsigned long count)
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = count + 1,
 		.flags = engine | I915_EXEC_HANDLE_LUT,
+		.rsvd1 = ctx->id,
 	};
 	igt_spin_t *spin;
 
@@ -446,6 +452,7 @@ static void __wide_active(int i915, unsigned engine, unsigned long count)
 	}
 
 	spin = __igt_spin_new(i915,
+			      .ctx = ctx,
 			      .engine = engine,
 			      .flags = (IGT_SPIN_FENCE_OUT |
 					IGT_SPIN_NO_PREEMPTION));
@@ -475,7 +482,7 @@ static void __wide_active(int i915, unsigned engine, unsigned long count)
 	free(reloc);
 }
 
-static void wide_active(int i915, unsigned engine)
+static void wide_active(int i915, const intel_ctx_t *ctx, unsigned engine)
 {
 	const uint64_t max = gem_aperture_size(i915) / 4096 / 2;
 	unsigned long count = 256;
@@ -488,7 +495,7 @@ static void wide_active(int i915, unsigned engine)
 			break;
 
 		igt_debug("Testing count:%lu\n", count);
-		__wide_active(i915, engine, count);
+		__wide_active(i915, ctx, engine, count);
 
 		count <<= 1;
 		if (count >= max)
@@ -501,7 +508,7 @@ static unsigned int offset_in_page(void *addr)
 	return (uintptr_t)addr & 4095;
 }
 
-static void active_spin(int fd, unsigned engine)
+static void active_spin(int fd, const intel_ctx_t *ctx, unsigned engine)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_relocation_entry reloc;
@@ -510,6 +517,7 @@ static void active_spin(int fd, unsigned engine)
 	igt_spin_t *spin;
 
 	spin = igt_spin_new(fd,
+			    .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_NO_PREEMPTION);
 
@@ -529,6 +537,7 @@ static void active_spin(int fd, unsigned engine)
 	execbuf.buffers_ptr = to_user_pointer(obj);
 	execbuf.buffer_count = 2;
 	execbuf.flags = engine;
+	execbuf.rsvd1 = ctx->id;
 
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[1].handle);
@@ -541,7 +550,7 @@ static void active_spin(int fd, unsigned engine)
 	igt_spin_free(fd, spin);
 }
 
-static void others_spin(int i915, unsigned engine)
+static void others_spin(int i915, const intel_ctx_t *ctx, unsigned engine)
 {
 	struct drm_i915_gem_relocation_entry reloc = {};
 	struct drm_i915_gem_exec_object2 obj = {
@@ -552,18 +561,20 @@ static void others_spin(int i915, unsigned engine)
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
 		.flags = engine,
+		.rsvd1 = ctx->id,
 	};
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin = NULL;
 	uint64_t addr;
 	int fence;
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (e->flags == engine)
 			continue;
 
 		if (!spin) {
 			spin = igt_spin_new(i915,
+					    .ctx = ctx,
 					    .engine = e->flags,
 					    .flags = IGT_SPIN_FENCE_OUT);
 			fence = dup(spin->out_fence);
@@ -985,12 +996,13 @@ static void sighandler(int sig)
 	stop = 1;
 }
 
-static void parallel_child(int i915,
+static void parallel_child(int i915, const intel_ctx_t *ctx,
 			   const struct intel_execution_engine2 *engine,
 			   struct drm_i915_gem_relocation_entry *reloc,
 			   uint32_t common)
 {
-	igt_spin_t *spin = __igt_spin_new(i915, .engine = engine->flags);
+	igt_spin_t *spin = __igt_spin_new(i915, .ctx = ctx,
+					  .engine = engine->flags);
 	struct drm_i915_gem_exec_object2 reloc_target = {
 		.handle = gem_create(i915, 32 * 1024 * 8),
 		.relocation_count = 32 * 1024,
@@ -1005,6 +1017,7 @@ static void parallel_child(int i915,
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = ARRAY_SIZE(obj),
 		.flags = engine->flags | I915_EXEC_HANDLE_LUT,
+		.rsvd1 = ctx->id,
 	};
 	struct sigaction act = {
 		.sa_handler = sighandler,
@@ -1032,7 +1045,7 @@ static void kill_children(int sig)
 	signal(sig, SIG_DFL);
 }
 
-static void parallel(int i915)
+static void parallel(int i915, const intel_ctx_t *ctx)
 {
 	const struct intel_execution_engine2 *e;
 	struct drm_i915_gem_relocation_entry *reloc;
@@ -1043,16 +1056,16 @@ static void parallel(int i915)
 	reloc = parallel_relocs(32 * 1024, &reloc_sz);
 
 	stop = 0;
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		igt_fork(child, 1)
-			parallel_child(i915, e, reloc, common);
+			parallel_child(i915, ctx, e, reloc, common);
 	}
 	sleep(2);
 
 	if (gem_scheduler_has_preemption(i915)) {
-		uint32_t ctx = gem_context_clone_with_engines(i915, 0);
+		intel_ctx_t *tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
 
-		__for_each_physical_engine(i915, e) {
+		for_each_ctx_engine(i915, ctx, e) {
 			struct drm_i915_gem_exec_object2 obj[2] = {
 				{ .handle = common },
 				{ .handle = batch },
@@ -1061,12 +1074,12 @@ static void parallel(int i915)
 				.buffers_ptr = to_user_pointer(obj),
 				.buffer_count = ARRAY_SIZE(obj),
 				.flags = e->flags,
-				.rsvd1 = ctx,
+				.rsvd1 = tmp_ctx->id,
 			};
 			gem_execbuf(i915, &execbuf);
 		}
 
-		gem_context_destroy(i915, ctx);
+		intel_ctx_destroy(i915, tmp_ctx);
 	}
 	gem_sync(i915, batch);
 	gem_close(i915, batch);
@@ -1120,7 +1133,7 @@ static void xchg_u32(void *array, unsigned i, unsigned j)
 	u32[j] = tmp;
 }
 
-static void concurrent_child(int i915,
+static void concurrent_child(int i915, const intel_ctx_t *ctx,
 			     const struct intel_execution_engine2 *e,
 			     uint32_t *common, int num_common,
 			     int in, int out)
@@ -1133,6 +1146,7 @@ static void concurrent_child(int i915,
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = ARRAY_SIZE(obj),
 		.flags = e->flags | I915_EXEC_HANDLE_LUT | (gen < 6 ? I915_EXEC_SECURE : 0),
+		.rsvd1 = ctx->id,
 	};
 	uint32_t *batch = &obj[num_common + 1].handle;
 	unsigned long count = 0;
@@ -1213,7 +1227,7 @@ static uint32_t create_concurrent_batch(int i915, unsigned int count)
 	return handle;
 }
 
-static void concurrent(int i915, int num_common)
+static void concurrent(int i915, const intel_ctx_t *ctx, int num_common)
 {
 	const struct intel_execution_engine2 *e;
 	int in[2], out[2];
@@ -1239,12 +1253,12 @@ static void concurrent(int i915, int num_common)
 		common[n] = gem_create(i915, 4 * 4 * CONCURRENT);
 
 	nchild = 0;
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
 		igt_fork(child, 1)
-			concurrent_child(i915, e,
+			concurrent_child(i915, ctx, e,
 					 common, num_common,
 					 in[0], out[1]);
 
@@ -1308,6 +1322,7 @@ pin_scanout(igt_display_t *dpy, igt_output_t *output, struct igt_fb *fb)
 
 static void scanout(int i915,
 		    igt_display_t *dpy,
+		    const intel_ctx_t *ctx,
 		    const struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_relocation_entry reloc = {};
@@ -1318,6 +1333,7 @@ static void scanout(int i915,
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = 2,
 		.flags = e->flags,
+		.rsvd1 = ctx->id,
 	};
 	igt_output_t *output;
 	struct igt_fb fb;
@@ -1437,6 +1453,7 @@ static void invalid_domains(int fd)
 
 igt_main
 {
+	const intel_ctx_t *ctx = 0;
 	const struct intel_execution_engine2 *e;
 	const struct mode {
 		const char *name;
@@ -1480,6 +1497,11 @@ igt_main
 		igt_require_gem(fd);
 		/* Check if relocations supported by platform */
 		igt_require(gem_has_relocations(fd));
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
 	}
 
 	for (f = flags; f->name; f++) {
@@ -1541,52 +1563,52 @@ igt_main
 
 	igt_subtest_with_dynamic("basic-active") {
 		igt_dynamic("all")
-			active(fd, ALL_ENGINES);
+			active(fd, ctx, ALL_ENGINES);
 
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			if (!gem_class_can_store_dword(fd, e->class))
 				continue;
 
 			igt_dynamic_f("%s", e->name)
-				active(fd, e->flags);
+				active(fd, ctx, e->flags);
 		}
 	}
 
 	igt_subtest_with_dynamic("basic-spin") {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				active_spin(fd, e->flags);
+				active_spin(fd, ctx, e->flags);
 		}
 	}
 
 	igt_subtest_with_dynamic("basic-spin-others") {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				others_spin(fd, e->flags);
+				others_spin(fd, ctx, e->flags);
 		}
 	}
 
 	igt_subtest_with_dynamic("basic-many-active") {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				many_active(fd, e->flags);
+				many_active(fd, ctx, e->flags);
 		}
 	}
 
 	igt_subtest_with_dynamic("basic-wide-active") {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				wide_active(fd, e->flags);
+				wide_active(fd, ctx, e->flags);
 		}
 	}
 
 	igt_subtest("basic-parallel")
-		parallel(fd);
+		parallel(fd, ctx);
 
 	igt_subtest("basic-concurrent0")
-		concurrent(fd, 0);
+		concurrent(fd, ctx, 0);
 	igt_subtest("basic-concurrent16")
-		concurrent(fd, 16);
+		concurrent(fd, ctx, 16);
 
 	igt_subtest("invalid-domains")
 		invalid_domains(fd);
@@ -1604,9 +1626,9 @@ igt_main
 		}
 
 		igt_subtest_with_dynamic("basic-scanout") {
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				igt_dynamic_f("%s", e->name)
-					scanout(fd, &display, e);
+					scanout(fd, &display, ctx, e);
 			}
 		}
 
-- 
2.29.2


* [igt-dev] [RFC 12/30] tests/i915/gem_busy: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (10 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 11/30] tests/i915/gem_exec_reloc: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 13/30] tests/i915/gem_ctx_isolation: " Jason Ekstrand
                   ` (19 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev
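
The fixture creates one context up front, using the full physical
engine set when contexts are available and falling back to
intel_ctx_0() otherwise, and every busy/spin helper takes it as a
parameter.  Roughly, a converted check looks like this (check_busy() is
an illustrative name, not one of the test's helpers):

    static void check_busy(int fd, const intel_ctx_t *ctx,
                           const struct intel_execution_engine2 *e)
    {
        /* was: igt_spin_new(fd, .engine = e->flags) on the implicit ctx0 */
        igt_spin_t *spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);

        igt_assert(gem_bo_busy(fd, spin->handle));

        igt_spin_free(fd, spin);
        gem_quiescent_gpu(fd);
    }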

---
 tests/i915/gem_busy.c | 80 ++++++++++++++++++++++++++-----------------
 1 file changed, 48 insertions(+), 32 deletions(-)

diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c
index 77a55101..4e79842d 100644
--- a/tests/i915/gem_busy.c
+++ b/tests/i915/gem_busy.c
@@ -67,6 +67,7 @@ static void __gem_busy(int fd,
 
 static bool exec_noop(int fd,
 		      uint32_t *handles,
+		      const intel_ctx_t *ctx,
 		      unsigned flags,
 		      bool write)
 {
@@ -84,6 +85,7 @@ static bool exec_noop(int fd,
 	execbuf.buffers_ptr = to_user_pointer(exec);
 	execbuf.buffer_count = 3;
 	execbuf.flags = flags;
+	execbuf.rsvd1 = ctx->id;
 	igt_debug("Queuing handle for %s on engine %d\n",
 		  write ? "writing" : "reading", flags);
 	return __gem_execbuf(fd, &execbuf) == 0;
@@ -96,7 +98,8 @@ static bool still_busy(int fd, uint32_t handle)
 	return write;
 }
 
-static void semaphore(int fd, const struct intel_execution_engine2 *e)
+static void semaphore(int fd, const intel_ctx_t *ctx,
+		      const struct intel_execution_engine2 *e)
 {
 	struct intel_execution_engine2 *__e;
 	uint32_t bbe = MI_BATCH_BUFFER_END;
@@ -113,18 +116,19 @@ static void semaphore(int fd, const struct intel_execution_engine2 *e)
 	/* Create a long running batch which we can use to hog the GPU */
 	handle[BUSY] = gem_create(fd, 4096);
 	spin = igt_spin_new(fd,
+			    .ctx = ctx,
 			    .engine = e->flags,
 			    .dependency = handle[BUSY]);
 
 	/* Queue a batch after the busy, it should block and remain "busy" */
-	igt_assert(exec_noop(fd, handle, e->flags, false));
+	igt_assert(exec_noop(fd, handle, ctx, e->flags, false));
 	igt_assert(still_busy(fd, handle[BUSY]));
 	__gem_busy(fd, handle[TEST], &read, &write);
 	igt_assert_eq(read, 1 << e->class);
 	igt_assert_eq(write, 0);
 
 	/* Requeue with a write */
-	igt_assert(exec_noop(fd, handle, e->flags, true));
+	igt_assert(exec_noop(fd, handle, ctx, e->flags, true));
 	igt_assert(still_busy(fd, handle[BUSY]));
 	__gem_busy(fd, handle[TEST], &read, &write);
 	igt_assert_eq(read, 1 << e->class);
@@ -132,8 +136,8 @@ static void semaphore(int fd, const struct intel_execution_engine2 *e)
 
 	/* Now queue it for a read across all available rings */
 	active = 0;
-	__for_each_physical_engine(fd, __e) {
-		if (exec_noop(fd, handle, __e->flags, false))
+	for_each_ctx_engine(fd, ctx, __e) {
+		if (exec_noop(fd, handle, ctx, __e->flags, false))
 			active |= 1 << __e->class;
 	}
 	igt_assert(still_busy(fd, handle[BUSY]));
@@ -157,7 +161,8 @@ static void semaphore(int fd, const struct intel_execution_engine2 *e)
 
 #define PARALLEL 1
 #define HANG 2
-static void one(int fd, const struct intel_execution_engine2 *e, unsigned test_flags)
+static void one(int fd, const intel_ctx_t *ctx,
+		const struct intel_execution_engine2 *e, unsigned test_flags)
 {
 	uint32_t scratch = gem_create(fd, 4096);
 	uint32_t read[2], write[2];
@@ -167,6 +172,7 @@ static void one(int fd, const struct intel_execution_engine2 *e, unsigned test_f
 	int timeout;
 
 	spin = igt_spin_new(fd,
+			    .ctx = ctx,
 			    .engine = e->flags,
 			    .dependency = scratch,
 			    .flags = (test_flags & HANG) ? IGT_SPIN_NO_PREEMPTION : 0);
@@ -177,13 +183,13 @@ static void one(int fd, const struct intel_execution_engine2 *e, unsigned test_f
 	if (test_flags & PARALLEL) {
 		struct intel_execution_engine2 *e2;
 
-		__for_each_physical_engine(fd, e2) {
+		for_each_ctx_engine(fd, ctx, e2) {
 			if (e2->class == e->class &&
 			    e2->instance == e->instance)
 				continue;
 
 			igt_debug("Testing %s in parallel\n", e2->name);
-			one(fd, e2, 0);
+			one(fd, ctx, e2, 0);
 		}
 	}
 
@@ -228,7 +234,7 @@ static void xchg_u32(void *array, unsigned i, unsigned j)
 	u32[j] = tmp;
 }
 
-static void close_race(int fd)
+static void close_race(int fd, const intel_ctx_t *ctx)
 {
 	const unsigned int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
 	const unsigned int nhandles = gem_submission_measure(fd, ALL_ENGINES);
@@ -247,7 +253,7 @@ static void close_race(int fd)
 	 */
 
 	nengine = 0;
-	__for_each_physical_engine(fd, e)
+	for_each_ctx_engine(fd, ctx, e)
 		engines[nengine++] = e->flags;
 	igt_require(nengine);
 
@@ -295,6 +301,7 @@ static void close_race(int fd)
 
 		for (i = 0; i < nhandles; i++) {
 			spin[i] = __igt_spin_new(fd,
+						 .ctx = ctx,
 						 .engine = engines[rand() % nengine]);
 			handles[i] = spin[i]->handle;
 		}
@@ -303,6 +310,7 @@ static void close_race(int fd)
 			for (i = 0; i < nhandles; i++) {
 				igt_spin_free(fd, spin[i]);
 				spin[i] = __igt_spin_new(fd,
+							 .ctx = ctx,
 							 .engine = engines[rand() % nengine]);
 				handles[i] = spin[i]->handle;
 				__sync_synchronize();
@@ -354,10 +362,12 @@ static bool has_extended_busy_ioctl(int fd)
 	return read != 0;
 }
 
-static void basic(int fd, const struct intel_execution_engine2 *e, unsigned flags)
+static void basic(int fd, const intel_ctx_t *ctx,
+		  const struct intel_execution_engine2 *e, unsigned flags)
 {
 	igt_spin_t *spin =
 		igt_spin_new(fd,
+			     .ctx = ctx,
 			     .engine = e->flags,
 			     .flags = flags & HANG ?
 			     IGT_SPIN_NO_PREEMPTION | IGT_SPIN_INVALID_CS : 0);
@@ -384,32 +394,38 @@ static void basic(int fd, const struct intel_execution_engine2 *e, unsigned flag
 	igt_spin_free(fd, spin);
 }
 
-static void all(int i915)
+static void all(int i915, const intel_ctx_t *ctx)
 {
 	const struct intel_execution_engine2 *e;
 
-	__for_each_physical_engine(i915, e)
-		igt_fork(child, 1) basic(i915, e, 0);
+	for_each_ctx_engine(i915, ctx, e)
+		igt_fork(child, 1) basic(i915, ctx, e, 0);
 	igt_waitchildren();
 }
 
-#define test_each_engine(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		igt_dynamic_f("%s", (e)->name)
 
-#define test_each_engine_store(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine_store(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		for_each_if (gem_class_can_store_dword(i915, (e)->class)) \
 			igt_dynamic_f("%s", (e)->name)
 
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx = NULL;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver_master(DRIVER_INTEL);
 		igt_require_gem(fd);
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
 	}
 
 	igt_subtest_group {
@@ -420,13 +436,13 @@ igt_main
 		igt_subtest_with_dynamic("busy") {
 			igt_dynamic("all") {
 				gem_quiescent_gpu(fd);
-				all(fd);
+				all(fd, ctx);
 			}
 
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				igt_dynamic_f("%s", e->name) {
 					gem_quiescent_gpu(fd);
-					basic(fd, e, 0);
+					basic(fd, ctx, e, 0);
 				}
 			}
 		}
@@ -437,15 +453,15 @@ igt_main
 				gem_require_mmap_wc(fd);
 			}
 
-			test_each_engine_store("extended", fd, e) {
+			test_each_engine_store("extended", fd, ctx, e) {
 				gem_quiescent_gpu(fd);
-				one(fd, e, 0);
+				one(fd, ctx, e, 0);
 				gem_quiescent_gpu(fd);
 			}
 
-			test_each_engine_store("parallel", fd, e) {
+			test_each_engine_store("parallel", fd, ctx, e) {
 				gem_quiescent_gpu(fd);
-				one(fd, e, PARALLEL);
+				one(fd, ctx, e, PARALLEL);
 				gem_quiescent_gpu(fd);
 			}
 		}
@@ -456,15 +472,15 @@ igt_main
 				igt_require(has_semaphores(fd));
 			}
 
-			test_each_engine("semaphore", fd, e) {
+			test_each_engine("semaphore", fd, ctx, e) {
 				gem_quiescent_gpu(fd);
-				semaphore(fd, e);
+				semaphore(fd, ctx, e);
 				gem_quiescent_gpu(fd);
 			}
 		}
 
 		igt_subtest("close-race")
-			close_race(fd);
+			close_race(fd, ctx);
 
 		igt_fixture {
 			igt_stop_hang_detector();
@@ -478,9 +494,9 @@ igt_main
 			hang = igt_allow_hang(fd, 0, 0);
 		}
 
-		test_each_engine("hang", fd, e) {
+		test_each_engine("hang", fd, ctx, e) {
 			gem_quiescent_gpu(fd);
-			basic(fd, e, HANG);
+			basic(fd, ctx, e, HANG);
 			gem_quiescent_gpu(fd);
 		}
 
@@ -490,9 +506,9 @@ igt_main
 				gem_require_mmap_wc(fd);
 			}
 
-			test_each_engine_store("hang-extended", fd, e) {
+			test_each_engine_store("hang-extended", fd, ctx, e) {
 				gem_quiescent_gpu(fd);
-				one(fd, e, HANG);
+				one(fd, ctx, e, HANG);
 				gem_quiescent_gpu(fd);
 			}
 		}
-- 
2.29.2


* [igt-dev] [RFC 13/30] tests/i915/gem_ctx_isolation: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (11 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 12/30] tests/i915/gem_busy: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 14/30] tests/i915/gem_exec_async: " Jason Ekstrand
                   ` (18 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev
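
This test creates lots of short-lived contexts, so the subtests now
receive an intel_ctx_cfg_t describing all physical engines and mint
their contexts from it with intel_ctx_create() instead of cloning the
engine set of context 0.  A minimal sketch of the shape (illustrative
only; per_engine_contexts() is not a function in this test):

    static void per_engine_contexts(int i915)
    {
        intel_ctx_cfg_t cfg = intel_ctx_cfg_all_physical(i915);
        const struct intel_execution_engine2 *e;

        for_each_ctx_cfg_engine(i915, &cfg, e) {
            /* was: gem_context_clone_with_engines(i915, 0) */
            intel_ctx_t *ctx = intel_ctx_create(i915, &cfg);
            igt_spin_t *spin = igt_spin_new(i915, .ctx = ctx,
                                            .engine = e->flags);

            igt_spin_free(i915, spin);
            intel_ctx_destroy(i915, ctx);
        }
    }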

---
 tests/i915/gem_ctx_isolation.c | 125 +++++++++++++++++----------------
 1 file changed, 65 insertions(+), 60 deletions(-)

diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c
index a57a6637..aa60ce34 100644
--- a/tests/i915/gem_ctx_isolation.c
+++ b/tests/i915/gem_ctx_isolation.c
@@ -232,7 +232,7 @@ static bool ignore_register(uint32_t offset, uint32_t mmio_base)
 }
 
 static void tmpl_regs(int fd,
-		      uint32_t ctx,
+		      const intel_ctx_t *ctx,
 		      const struct intel_execution_engine2 *e,
 		      uint32_t handle,
 		      uint32_t value)
@@ -277,7 +277,7 @@ static void tmpl_regs(int fd,
 }
 
 static uint32_t read_regs(int fd,
-			  uint32_t ctx,
+			  const intel_ctx_t *ctx,
 			  const struct intel_execution_engine2 *e,
 			  unsigned int flags)
 {
@@ -349,7 +349,7 @@ static uint32_t read_regs(int fd,
 	execbuf.buffers_ptr = to_user_pointer(obj);
 	execbuf.buffer_count = 2;
 	execbuf.flags = e->flags;
-	execbuf.rsvd1 = ctx;
+	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[1].handle);
 	free(reloc);
@@ -358,7 +358,7 @@ static uint32_t read_regs(int fd,
 }
 
 static void write_regs(int fd,
-		       uint32_t ctx,
+		       const intel_ctx_t *ctx,
 		       const struct intel_execution_engine2 *e,
 		       unsigned int flags,
 		       uint32_t value)
@@ -413,13 +413,13 @@ static void write_regs(int fd,
 	execbuf.buffers_ptr = to_user_pointer(&obj);
 	execbuf.buffer_count = 1;
 	execbuf.flags = e->flags;
-	execbuf.rsvd1 = ctx;
+	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj.handle);
 }
 
 static void restore_regs(int fd,
-			 uint32_t ctx,
+			 const intel_ctx_t *ctx,
 			 const struct intel_execution_engine2 *e,
 			 unsigned int flags,
 			 uint32_t regs)
@@ -491,7 +491,7 @@ static void restore_regs(int fd,
 	execbuf.buffers_ptr = to_user_pointer(obj);
 	execbuf.buffer_count = 2;
 	execbuf.flags = e->flags;
-	execbuf.rsvd1 = ctx;
+	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[1].handle);
 }
@@ -595,7 +595,7 @@ static void compare_regs(int fd, const struct intel_execution_engine2 *e,
 		     num_errors, who);
 }
 
-static void nonpriv(int fd,
+static void nonpriv(int fd, const intel_ctx_cfg_t *cfg,
 		    const struct intel_execution_engine2 *e,
 		    unsigned int flags)
 {
@@ -620,33 +620,34 @@ static void nonpriv(int fd,
 
 	for (int v = 0; v < num_values; v++) {
 		igt_spin_t *spin = NULL;
-		uint32_t ctx, regs[2], tmpl;
+		intel_ctx_t *ctx;
+		uint32_t regs[2], tmpl;
 
-		ctx = gem_context_clone_with_engines(fd, 0);
+		ctx = intel_ctx_create(fd, cfg);
 
 		tmpl = read_regs(fd, ctx, e, flags);
 		regs[0] = read_regs(fd, ctx, e, flags);
 
 		tmpl_regs(fd, ctx, e, tmpl, values[v]);
 
-		spin = igt_spin_new(fd, .ctx_id = ctx, .engine = e->flags);
+		spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
 
 		igt_debug("%s[%d]: Setting all registers to 0x%08x\n",
 			  __func__, v, values[v]);
 		write_regs(fd, ctx, e, flags, values[v]);
 
 		if (flags & DIRTY2) {
-			uint32_t sw = gem_context_clone_with_engines(fd, 0);
+			intel_ctx_t *sw = intel_ctx_create(fd, &ctx->cfg);
 			igt_spin_t *syncpt, *dirt;
 
 			/* Explicit sync to keep the switch between write/read */
 			syncpt = igt_spin_new(fd,
-					      .ctx_id = ctx,
+					      .ctx = ctx,
 					      .engine = e->flags,
 					      .flags = IGT_SPIN_FENCE_OUT);
 
 			dirt = igt_spin_new(fd,
-					    .ctx_id = sw,
+					    .ctx = sw,
 					    .engine = e->flags,
 					    .fence = syncpt->out_fence,
 					    .flags = (IGT_SPIN_FENCE_IN |
@@ -654,14 +655,14 @@ static void nonpriv(int fd,
 			igt_spin_free(fd, syncpt);
 
 			syncpt = igt_spin_new(fd,
-					      .ctx_id = ctx,
+					      .ctx = ctx,
 					      .engine = e->flags,
 					      .fence = dirt->out_fence,
 					      .flags = IGT_SPIN_FENCE_IN);
 			igt_spin_free(fd, dirt);
 
 			igt_spin_free(fd, syncpt);
-			gem_context_destroy(fd, sw);
+			intel_ctx_destroy(fd, sw);
 		}
 
 		regs[1] = read_regs(fd, ctx, e, flags);
@@ -678,12 +679,12 @@ static void nonpriv(int fd,
 
 		for (int n = 0; n < ARRAY_SIZE(regs); n++)
 			gem_close(fd, regs[n]);
-		gem_context_destroy(fd, ctx);
+		intel_ctx_destroy(fd, ctx);
 		gem_close(fd, tmpl);
 	}
 }
 
-static void isolation(int fd,
+static void isolation(int fd, const intel_ctx_cfg_t *cfg,
 		      const struct intel_execution_engine2 *e,
 		      unsigned int flags)
 {
@@ -703,12 +704,13 @@ static void isolation(int fd,
 
 	for (int v = 0; v < num_values; v++) {
 		igt_spin_t *spin = NULL;
-		uint32_t ctx[2], regs[2], tmp;
+		intel_ctx_t *ctx[2];
+		uint32_t regs[2], tmp;
 
-		ctx[0] = gem_context_clone_with_engines(fd, 0);
+		ctx[0] = intel_ctx_create(fd, cfg);
 		regs[0] = read_regs(fd, ctx[0], e, flags);
 
-		spin = igt_spin_new(fd, .ctx_id = ctx[0], .engine = e->flags);
+		spin = igt_spin_new(fd, .ctx = ctx[0], .engine = e->flags);
 
 		if (flags & DIRTY1) {
 			igt_debug("%s[%d]: Setting all registers of ctx 0 to 0x%08x\n",
@@ -724,7 +726,7 @@ static void isolation(int fd,
 		 * the default values from this context, but if goes badly we
 		 * see the corruption from the previous context instead!
 		 */
-		ctx[1] = gem_context_clone_with_engines(fd, 0);
+		ctx[1] = intel_ctx_create(fd, cfg);
 		regs[1] = read_regs(fd, ctx[1], e, flags);
 
 		if (flags & DIRTY2) {
@@ -749,7 +751,7 @@ static void isolation(int fd,
 
 		for (int n = 0; n < ARRAY_SIZE(ctx); n++) {
 			gem_close(fd, regs[n]);
-			gem_context_destroy(fd, ctx[n]);
+			intel_ctx_destroy(fd, ctx[n]);
 		}
 		gem_close(fd, tmp);
 	}
@@ -762,21 +764,24 @@ static void isolation(int fd,
 #define S4 (4 << 8)
 #define SLEEP_MASK (0xf << 8)
 
-static uint32_t create_reset_context(int i915)
+static intel_ctx_t *create_reset_context(int i915, const intel_ctx_cfg_t *cfg)
 {
+	intel_ctx_t *ctx = intel_ctx_create(i915, cfg);
 	struct drm_i915_gem_context_param param = {
-		.ctx_id = gem_context_clone_with_engines(i915, 0),
+		.ctx_id = ctx->id,
 		.param = I915_CONTEXT_PARAM_BANNABLE,
 	};
 
 	gem_context_set_param(i915, &param);
-	return param.ctx_id;
+	return ctx;
 }
 
-static void inject_reset_context(int fd, const struct intel_execution_engine2 *e)
+static void inject_reset_context(int fd, const intel_ctx_cfg_t *cfg,
+				 const struct intel_execution_engine2 *e)
 {
+	intel_ctx_t *ctx = create_reset_context(fd, cfg);
 	struct igt_spin_factory opts = {
-		.ctx_id = create_reset_context(fd),
+		.ctx = ctx,
 		.engine = e->flags,
 		.flags = IGT_SPIN_FAST,
 	};
@@ -801,10 +806,10 @@ static void inject_reset_context(int fd, const struct intel_execution_engine2 *e
 	igt_force_gpu_reset(fd);
 
 	igt_spin_free(fd, spin);
-	gem_context_destroy(fd, opts.ctx_id);
+	intel_ctx_destroy(fd, ctx);
 }
 
-static void preservation(int fd,
+static void preservation(int fd, const intel_ctx_cfg_t *cfg,
 			 const struct intel_execution_engine2 *e,
 			 unsigned int flags)
 {
@@ -818,17 +823,17 @@ static void preservation(int fd,
 		0xdeadbeef
 	};
 	const unsigned int num_values = ARRAY_SIZE(values);
-	uint32_t ctx[num_values +1 ];
+	intel_ctx_t *ctx[num_values +1 ];
 	uint32_t regs[num_values + 1][2];
 	igt_spin_t *spin;
 
 	gem_quiescent_gpu(fd);
 
-	ctx[num_values] = gem_context_clone_with_engines(fd, 0);
-	spin = igt_spin_new(fd, .ctx_id = ctx[num_values], .engine = e->flags);
+	ctx[num_values] = intel_ctx_create(fd, cfg);
+	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = e->flags);
 	regs[num_values][0] = read_regs(fd, ctx[num_values], e, flags);
 	for (int v = 0; v < num_values; v++) {
-		ctx[v] = gem_context_clone_with_engines(fd, 0);
+		ctx[v] = intel_ctx_create(fd, cfg);
 		write_regs(fd, ctx[v], e, flags, values[v]);
 
 		regs[v][0] = read_regs(fd, ctx[v], e, flags);
@@ -838,7 +843,7 @@ static void preservation(int fd,
 	igt_spin_free(fd, spin);
 
 	if (flags & RESET)
-		inject_reset_context(fd, e);
+		inject_reset_context(fd, cfg, e);
 
 	switch (flags & SLEEP_MASK) {
 	case NOSLEEP:
@@ -865,7 +870,7 @@ static void preservation(int fd,
 		break;
 	}
 
-	spin = igt_spin_new(fd, .ctx_id = ctx[num_values], .engine = e->flags);
+	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = e->flags);
 	for (int v = 0; v < num_values; v++)
 		regs[v][1] = read_regs(fd, ctx[v], e, flags);
 	regs[num_values][1] = read_regs(fd, ctx[num_values], e, flags);
@@ -879,10 +884,10 @@ static void preservation(int fd,
 
 		gem_close(fd, regs[v][0]);
 		gem_close(fd, regs[v][1]);
-		gem_context_destroy(fd, ctx[v]);
+		intel_ctx_destroy(fd, ctx[v]);
 	}
 	compare_regs(fd, e, regs[num_values][0], regs[num_values][1], "clean");
-	gem_context_destroy(fd, ctx[num_values]);
+	intel_ctx_destroy(fd, ctx[num_values]);
 }
 
 static unsigned int __has_context_isolation(int fd)
@@ -900,8 +905,8 @@ static unsigned int __has_context_isolation(int fd)
 	return value;
 }
 
-#define test_each_engine(e, i915, mask) \
-	__for_each_physical_engine(i915, e) \
+#define test_each_engine(e, i915, cfg, mask) \
+	for_each_ctx_cfg_engine(i915, cfg, e) \
 		for_each_if(mask & (1 << (e)->class)) \
 			igt_dynamic_f("%s", (e)->name)
 
@@ -909,6 +914,7 @@ igt_main
 {
 	unsigned int has_context_isolation = 0;
 	const struct intel_execution_engine2 *e;
+	intel_ctx_cfg_t cfg;
 	int i915 = -1;
 
 	igt_fixture {
@@ -917,6 +923,7 @@ igt_main
 		i915 = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(i915);
 		igt_require(gem_has_contexts(i915));
+		cfg = intel_ctx_cfg_all_physical(i915);
 
 		has_context_isolation = __has_context_isolation(i915);
 		igt_require(has_context_isolation);
@@ -928,50 +935,48 @@ igt_main
 		igt_skip_on(gen > LAST_KNOWN_GEN);
 	}
 
-	/* __for_each_physical_engine switches context to all engines. */
-
 	igt_fixture {
 		igt_fork_hang_detector(i915);
 	}
 
 	igt_subtest_with_dynamic("nonpriv") {
-		test_each_engine(e, i915, has_context_isolation)
-			nonpriv(i915, e, 0);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			nonpriv(i915, &cfg, e, 0);
 	}
 
 	igt_subtest_with_dynamic("nonpriv-switch") {
-		test_each_engine(e, i915, has_context_isolation)
-			nonpriv(i915, e, DIRTY2);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			nonpriv(i915, &cfg, e, DIRTY2);
 	}
 
 	igt_subtest_with_dynamic("clean") {
-		test_each_engine(e, i915, has_context_isolation)
-			isolation(i915, e, 0);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			isolation(i915, &cfg, e, 0);
 	}
 
 	igt_subtest_with_dynamic("dirty-create") {
-		test_each_engine(e, i915, has_context_isolation)
-			isolation(i915, e, DIRTY1);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			isolation(i915, &cfg, e, DIRTY1);
 	}
 
 	igt_subtest_with_dynamic("dirty-switch") {
-		test_each_engine(e, i915, has_context_isolation)
-			isolation(i915, e, DIRTY2);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			isolation(i915, &cfg, e, DIRTY2);
 	}
 
 	igt_subtest_with_dynamic("preservation") {
-		test_each_engine(e, i915, has_context_isolation)
-			preservation(i915, e, 0);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			preservation(i915, &cfg, e, 0);
 	}
 
 	igt_subtest_with_dynamic("preservation-S3") {
-		test_each_engine(e, i915, has_context_isolation)
-			preservation(i915, e, S3);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			preservation(i915, &cfg, e, S3);
 	}
 
 	igt_subtest_with_dynamic("preservation-S4") {
-		test_each_engine(e, i915, has_context_isolation)
-			preservation(i915, e, S4);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			preservation(i915, &cfg, e, S4);
 	}
 
 	igt_fixture {
@@ -981,8 +986,8 @@ igt_main
 	igt_subtest_with_dynamic("preservation-reset") {
 		igt_hang_t hang = igt_allow_hang(i915, 0, 0);
 
-		test_each_engine(e, i915, has_context_isolation)
-			preservation(i915, e, RESET);
+		test_each_engine(e, i915, &cfg, has_context_isolation)
+			preservation(i915, &cfg, e, RESET);
 
 		igt_disallow_hang(i915, hang);
 	}
-- 
2.29.2


* [igt-dev] [RFC 14/30] tests/i915/gem_exec_async: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (12 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 13/30] tests/i915/gem_ctx_isolation: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 15/30] tests/i915/sysfs_clients: " Jason Ekstrand
                   ` (17 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_exec_async.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/tests/i915/gem_exec_async.c b/tests/i915/gem_exec_async.c
index 412ad737..c64631de 100644
--- a/tests/i915/gem_exec_async.c
+++ b/tests/i915/gem_exec_async.c
@@ -26,7 +26,7 @@
 
 IGT_TEST_DESCRIPTION("Check that we can issue concurrent writes across the engines.");
 
-static void store_dword(int fd, unsigned ring,
+static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -42,6 +42,7 @@ static void store_dword(int fd, unsigned ring,
 	execbuf.flags = ring;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = target;
@@ -79,7 +80,8 @@ static void store_dword(int fd, unsigned ring,
 	gem_close(fd, obj[1].handle);
 }
 
-static void one(int fd, unsigned engine, unsigned int flags)
+static void one(int fd, const intel_ctx_t *ctx,
+		unsigned engine, unsigned int flags)
 #define FORKED (1 << 0)
 {
 	const struct intel_execution_engine2 *e;
@@ -93,10 +95,11 @@ static void one(int fd, unsigned engine, unsigned int flags)
 	 * the scratch for write. Then on the other rings try and
 	 * write into that target. If it blocks we hang the GPU...
 	 */
-	spin = igt_spin_new(fd, .engine = engine, .dependency = scratch);
+	spin = igt_spin_new(fd, .ctx = ctx, .engine = engine,
+			    .dependency = scratch);
 
 	i = 0;
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		if (e->flags == engine)
 			continue;
 
@@ -105,9 +108,9 @@ static void one(int fd, unsigned engine, unsigned int flags)
 
 		if (flags & FORKED) {
 			igt_fork(child, 1)
-				store_dword(fd, e->flags, scratch, 4*i, ~i);
+				store_dword(fd, ctx, e->flags, scratch, 4*i, ~i);
 		} else {
-			store_dword(fd, e->flags, scratch, 4*i, ~i);
+			store_dword(fd, ctx, e->flags, scratch, 4*i, ~i);
 		}
 		i++;
 	}
@@ -134,13 +137,14 @@ static bool has_async_execbuf(int fd)
 	return async > 0;
 }
 
-#define test_each_engine(T, i915, e) \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine(T, i915, ctx, e) \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		igt_dynamic_f("%s", (e)->name)
 
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx = NULL;
 	int fd = -1;
 
 	igt_fixture {
@@ -148,14 +152,17 @@ igt_main
 		igt_require_gem(fd);
 		gem_require_mmap_wc(fd);
 		igt_require(has_async_execbuf(fd));
+
+		ctx = intel_ctx_create_all_physical(fd);
+
 		igt_fork_hang_detector(fd);
 	}
 
-	test_each_engine("concurrent-writes", fd, e)
-		one(fd, e->flags, 0);
+	test_each_engine("concurrent-writes", fd, ctx, e)
+		one(fd, ctx, e->flags, 0);
 
-	test_each_engine("forked-writes", fd, e)
-		one(fd, e->flags, FORKED);
+	test_each_engine("forked-writes", fd, ctx, e)
+		one(fd, ctx, e->flags, FORKED);
 
 	igt_fixture {
 		igt_stop_hang_detector();
-- 
2.29.2


* [igt-dev] [RFC 15/30] tests/i915/sysfs_clients: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (13 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 14/30] tests/i915/gem_exec_async: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 16/30] tests/i915/gem_exec_fair: " Jason Ekstrand
                   ` (16 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/sysfs_clients.c | 87 ++++++++++++++++++++------------------
 1 file changed, 46 insertions(+), 41 deletions(-)

diff --git a/tests/i915/sysfs_clients.c b/tests/i915/sysfs_clients.c
index 0b7066c0..0a522492 100644
--- a/tests/i915/sysfs_clients.c
+++ b/tests/i915/sysfs_clients.c
@@ -18,12 +18,12 @@
 
 #include "drmtest.h"
 #include "i915/gem.h"
-#include "i915/gem_context.h"
 #include "i915/gem_engine_topology.h"
 #include "i915/gem_mman.h"
 #include "igt_aux.h"
 #include "igt_dummyload.h"
 #include "igt_sysfs.h"
+#include "intel_ctx.h"
 #include "ioctl_wrappers.h"
 
 #define __require_within_epsilon(x, ref, tol_up, tol_down) \
@@ -392,34 +392,26 @@ static uint64_t measured_usleep(unsigned int usec)
 	return igt_nsec_elapsed(&tv);
 }
 
-static int reopen_client(int i915)
-{
-	int clone;
-
-	clone = gem_reopen_driver(i915);
-	gem_context_copy_engines(i915, 0, clone, 0);
-	close(i915);
-
-	return clone;
-}
-
 static void
-busy_one(int i915, int clients, const struct intel_execution_engine2 *e)
+busy_one(int i915, int clients, const intel_ctx_cfg_t *cfg,
+	 const struct intel_execution_engine2 *e)
 {
 	int64_t active, idle, old, other[MAX_CLASS];
 	struct timespec tv;
+	intel_ctx_t *ctx;
 	igt_spin_t *spin;
 	uint64_t delay;
 	int me;
 
 	/* Create a fresh client with 0 runtime */
-	i915 = reopen_client(i915);
+	i915 = gem_reopen_driver(i915);
 
 	me = find_me(clients, getpid());
 	igt_assert(me != -1);
 
+	ctx = intel_ctx_create(i915, cfg);
 	spin = igt_spin_new(i915,
-			    gem_context_clone_with_engines(i915, 0),
+			    .ctx = ctx,
 			    .engine = e->flags,
 			    .flags = IGT_SPIN_POLL_RUN);
 	igt_spin_busywait_until_started(spin);
@@ -471,7 +463,7 @@ busy_one(int i915, int clients, const struct intel_execution_engine2 *e)
 		igt_assert(idle >= active);
 	}
 
-	gem_context_destroy(i915, spin->execbuf.rsvd1);
+	intel_ctx_destroy(i915, ctx);
 
 	/* And finally after the executing context is no more */
 	old = read_runtime(me, e->class);
@@ -512,28 +504,29 @@ busy_one(int i915, int clients, const struct intel_execution_engine2 *e)
 	close(i915);
 }
 
-static void busy_all(int i915, int clients)
+static void busy_all(int i915, int clients, const intel_ctx_cfg_t *cfg)
 {
 	const struct intel_execution_engine2 *e;
 	int64_t active[MAX_CLASS];
 	int64_t idle[MAX_CLASS];
 	int64_t old[MAX_CLASS];
 	uint64_t classes = 0;
+	intel_ctx_t *ctx;
 	igt_spin_t *spin;
 	int expect = 0;
 	int64_t delay;
 	int me;
 
 	/* Create a fresh client with 0 runtime */
-	i915 = reopen_client(i915);
+	i915 = gem_reopen_driver(i915);
 
 	me = find_me(clients, getpid());
 	igt_assert(me != -1);
 
-	spin = igt_spin_new(i915,
-			    gem_context_clone_with_engines(i915, 0),
+	ctx = intel_ctx_create(i915, cfg);
+	spin = igt_spin_new(i915, .ctx = ctx,
 			    .flags = IGT_SPIN_POLL_RUN);
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
@@ -579,7 +572,7 @@ static void busy_all(int i915, int clients)
 		igt_assert(idle[i] >= active[i]);
 	}
 
-	gem_context_destroy(i915, spin->execbuf.rsvd1);
+	intel_ctx_destroy(i915, ctx);
 	igt_spin_free(i915, spin);
 
 	/* And finally after the executing context is no more */
@@ -596,17 +589,19 @@ static void busy_all(int i915, int clients)
 }
 
 static void
-split_child(int i915, int clients,
+split_child(int i915, int clients, const intel_ctx_cfg_t *cfg,
 	    const struct intel_execution_engine2 *e,
 	    int sv)
 {
 	int64_t runtime[2] = {};
+	intel_ctx_t *ctx;
 	igt_spin_t *spin;
 	int go = 1;
 
-	i915 = reopen_client(i915);
+	i915 = gem_reopen_driver(i915);
 
-	spin = igt_spin_new(i915, .engine = e->flags);
+	ctx = intel_ctx_create(i915, cfg);
+	spin = igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
 	igt_spin_end(spin);
 	gem_sync(i915, spin->handle);
 
@@ -626,12 +621,14 @@ split_child(int i915, int clients,
 	igt_spin_free(i915, spin);
 
 	runtime[0] = read_runtime(find_me(clients, getpid()), e->class);
+	intel_ctx_destroy(i915, ctx);
 	write(sv, runtime, sizeof(runtime));
 }
 
 static void
-__split(int i915, int clients, const struct intel_execution_engine2 *e, int f,
-	void (*fn)(int i915, int clients,
+__split(int i915, int clients, const intel_ctx_cfg_t *cfg,
+	const struct intel_execution_engine2 *e, int f,
+	void (*fn)(int i915, int clients, const intel_ctx_cfg_t *cfg,
 		   const struct intel_execution_engine2 *e,
 		   int sv))
 {
@@ -652,7 +649,7 @@ __split(int i915, int clients, const struct intel_execution_engine2 *e, int f,
 
 		igt_assert(socketpair(AF_UNIX, SOCK_DGRAM, 0, c->sv) == 0);
 		igt_fork(child, 1)
-			fn(i915, clients, e, c->sv[1]);
+			fn(i915, clients, cfg, e, c->sv[1]);
 
 		read(c->sv[0], &go, sizeof(go));
 	}
@@ -720,13 +717,14 @@ __split(int i915, int clients, const struct intel_execution_engine2 *e, int f,
 }
 
 static void
-split(int i915, int clients, const struct intel_execution_engine2 *e, int f)
+split(int i915, int clients, const intel_ctx_cfg_t *cfg,
+      const struct intel_execution_engine2 *e, int f)
 {
-	__split(i915, clients, e, f, split_child);
+	__split(i915, clients, cfg, e, f, split_child);
 }
 
 static void
-sema_child(int i915, int clients,
+sema_child(int i915, int clients, const intel_ctx_cfg_t *cfg,
 	   const struct intel_execution_engine2 *e,
 	   int sv)
 {
@@ -739,9 +737,12 @@ sema_child(int i915, int clients,
 		.buffer_count = 1,
 		.flags = e->flags,
 	};
+	intel_ctx_t *ctx;
 	uint32_t *cs, *sema;
 
-	i915 = reopen_client(i915);
+	i915 = gem_reopen_driver(i915);
+	ctx = intel_ctx_create(i915, cfg);
+	execbuf.rsvd1 = ctx->id;
 
 	obj.handle = gem_create(i915, 4096);
 	obj.offset = obj.handle << 12;
@@ -771,6 +772,7 @@ sema_child(int i915, int clients,
 	*sema = 0;
 	gem_execbuf(i915, &execbuf);
 	gem_close(i915, obj.handle);
+	intel_ctx_destroy(i915, ctx);
 
 	write(sv, sema, sizeof(*sema));
 	read(sv, sema, sizeof(*sema));
@@ -794,9 +796,10 @@ sema_child(int i915, int clients,
 }
 
 static void
-sema(int i915, int clients, const struct intel_execution_engine2 *e, int f)
+sema(int i915, int clients, const intel_ctx_cfg_t *cfg,
+     const struct intel_execution_engine2 *e, int f)
 {
-	__split(i915, clients, e, f, sema_child);
+	__split(i915, clients, cfg, e, f, sema_child);
 }
 
 static int read_all(int clients, pid_t pid, int class, uint64_t *runtime)
@@ -944,21 +947,23 @@ static bool has_busy(int clients)
 static void test_busy(int i915, int clients)
 {
 	const struct intel_execution_engine2 *e;
+	intel_ctx_cfg_t cfg;
 	const int frac[] = { 10, 25, 50 };
 
 	igt_fixture {
 		igt_require(gem_has_contexts(i915));
 		igt_require(has_busy(clients));
+		cfg = intel_ctx_cfg_all_physical(i915);
 	}
 
 	igt_subtest_with_dynamic("busy") {
-		__for_each_physical_engine(i915, e) {
+		for_each_ctx_cfg_engine(i915, &cfg, e) {
 			if (!gem_class_can_store_dword(i915, e->class))
 				continue;
 			igt_dynamic_f("%s", e->name) {
 				gem_quiescent_gpu(i915);
 				igt_fork(child, 1)
-					busy_one(i915, clients, e);
+					busy_one(i915, clients, &cfg, e);
 				igt_waitchildren();
 				gem_quiescent_gpu(i915);
 			}
@@ -967,7 +972,7 @@ static void test_busy(int i915, int clients)
 		igt_dynamic("all") {
 			gem_quiescent_gpu(i915);
 			igt_fork(child, 1)
-				busy_all(i915, clients);
+				busy_all(i915, clients, &cfg);
 			igt_waitchildren();
 			gem_quiescent_gpu(i915);
 		}
@@ -975,10 +980,10 @@ static void test_busy(int i915, int clients)
 
 	for (int i = 0; i < ARRAY_SIZE(frac); i++) {
 		igt_subtest_with_dynamic_f("split-%d", frac[i]) {
-			__for_each_physical_engine(i915, e) {
+			for_each_ctx_cfg_engine(i915, &cfg, e) {
 				igt_dynamic_f("%s", e->name) {
 					gem_quiescent_gpu(i915);
-					split(i915, clients, e, frac[i]);
+					split(i915, clients, &cfg, e, frac[i]);
 					gem_quiescent_gpu(i915);
 				}
 			}
@@ -993,13 +998,13 @@ static void test_busy(int i915, int clients)
 
 		for (int i = 0; i < ARRAY_SIZE(frac); i++) {
 			igt_subtest_with_dynamic_f("sema-%d", frac[i]) {
-				__for_each_physical_engine(i915, e) {
+				for_each_ctx_cfg_engine(i915, &cfg, e) {
 					if (!gem_class_has_mutable_submission(i915, e->class))
 						continue;
 
 					igt_dynamic_f("%s", e->name) {
 						igt_drop_caches_set(i915, DROP_RESET_ACTIVE);
-						sema(i915, clients, e, frac[i]);
+						sema(i915, clients, &cfg, e, frac[i]);
 						gem_quiescent_gpu(i915);
 					}
 					igt_drop_caches_set(i915, DROP_RESET_ACTIVE);
-- 
2.29.2


* [igt-dev] [RFC 16/30] tests/i915/gem_exec_fair: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (14 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 15/30] tests/i915/sysfs_clients: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 17/30] tests/i915/gem_spin_batch: " Jason Ekstrand
                   ` (15 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_exec_fair.c | 99 ++++++++++++++++++++++----------------
 1 file changed, 58 insertions(+), 41 deletions(-)

diff --git a/tests/i915/gem_exec_fair.c b/tests/i915/gem_exec_fair.c
index c1a71f77..f3a894c9 100644
--- a/tests/i915/gem_exec_fair.c
+++ b/tests/i915/gem_exec_fair.c
@@ -224,7 +224,7 @@ static void delay(int i915,
 }
 
 static struct drm_i915_gem_exec_object2
-delay_create(int i915, uint32_t ctx,
+delay_create(int i915, const intel_ctx_t *ctx,
 	     const struct intel_execution_engine2 *e,
 	     uint64_t target_ns)
 {
@@ -235,7 +235,7 @@ delay_create(int i915, uint32_t ctx,
 	struct drm_i915_gem_execbuffer2 execbuf = {
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
-		.rsvd1 = ctx,
+		.rsvd1 = ctx->id,
 		.flags = e->flags,
 	};
 
@@ -325,7 +325,8 @@ static void tslog(int i915,
 }
 
 static struct drm_i915_gem_exec_object2
-tslog_create(int i915, uint32_t ctx, const struct intel_execution_engine2 *e)
+tslog_create(int i915, const intel_ctx_t *ctx,
+	     const struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_exec_object2 obj = {
 		.handle = batch_create(i915),
@@ -334,7 +335,7 @@ tslog_create(int i915, uint32_t ctx, const struct intel_execution_engine2 *e)
 	struct drm_i915_gem_execbuffer2 execbuf = {
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
-		.rsvd1 = ctx,
+		.rsvd1 = ctx->id,
 		.flags = e->flags,
 	};
 
@@ -361,7 +362,8 @@ static int cmp_u32(const void *A, const void *B)
 }
 
 static uint32_t
-read_ctx_timestamp(int i915, const struct intel_execution_engine2 *e)
+read_ctx_timestamp(int i915, const intel_ctx_t *ctx,
+		   const struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_exec_object2 obj = {
@@ -373,6 +375,7 @@ read_ctx_timestamp(int i915, const struct intel_execution_engine2 *e)
 	struct drm_i915_gem_execbuffer2 execbuf = {
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
+		.rsvd1 = ctx->id,
 		.flags = e->flags,
 	};
 	const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
@@ -414,23 +417,31 @@ read_ctx_timestamp(int i915, const struct intel_execution_engine2 *e)
 	return ts;
 }
 
-static bool has_ctx_timestamp(int i915, const struct intel_execution_engine2 *e)
+static bool has_ctx_timestamp(int i915, const intel_ctx_cfg_t *cfg,
+			      const struct intel_execution_engine2 *e)
 {
 	const int gen = intel_gen(intel_get_drm_devid(i915));
+	intel_ctx_t *tmp_ctx;
+	uint32_t timestamp;
 
 	if (gen == 8 && e->class == I915_ENGINE_CLASS_VIDEO)
 		return false; /* looks fubar */
 
-	return read_ctx_timestamp(i915, e);
+	tmp_ctx = intel_ctx_create(i915, cfg);
+	timestamp = read_ctx_timestamp(i915, tmp_ctx, e);
+	intel_ctx_destroy(i915, tmp_ctx);
+
+	return timestamp;
 }
 
 static struct intel_execution_engine2
-pick_random_engine(int i915, const struct intel_execution_engine2 *not)
+pick_random_engine(int i915, const intel_ctx_cfg_t *cfg,
+		   const struct intel_execution_engine2 *not)
 {
 	const struct intel_execution_engine2 *e;
 	unsigned int count = 0;
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_cfg_engine(i915, cfg, e) {
 		if (e->flags == not->flags)
 			continue;
 		if (!gem_class_has_mutable_submission(i915, e->class))
@@ -441,7 +452,7 @@ pick_random_engine(int i915, const struct intel_execution_engine2 *not)
 		return *not;
 
 	count = rand() % count;
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_cfg_engine(i915, cfg, e) {
 		if (e->flags == not->flags)
 			continue;
 		if (!gem_class_has_mutable_submission(i915, e->class))
@@ -453,7 +464,7 @@ pick_random_engine(int i915, const struct intel_execution_engine2 *not)
 	return *e;
 }
 
-static void fair_child(int i915, uint32_t ctx,
+static void fair_child(int i915, const intel_ctx_t *ctx,
 		       const struct intel_execution_engine2 *e,
 		       uint64_t frame_ns,
 		       int timeline,
@@ -494,7 +505,7 @@ static void fair_child(int i915, uint32_t ctx,
 
 	srandom(getpid());
 	if (flags & F_PING)
-		ping = pick_random_engine(i915, e);
+		ping = pick_random_engine(i915, &ctx->cfg, e);
 	obj[0] = tslog_create(i915, ctx, &ping);
 
 	/* Synchronize with other children/parent upon construction */
@@ -514,7 +525,7 @@ static void fair_child(int i915, uint32_t ctx,
 		struct drm_i915_gem_execbuffer2 execbuf = {
 			.buffers_ptr = to_user_pointer(obj),
 			.buffer_count = 3,
-			.rsvd1 = ctx,
+			.rsvd1 = ctx->id,
 			.rsvd2 = -1,
 			.flags = aux_flags,
 		};
@@ -636,7 +647,7 @@ static void timeline_advance(int timeline, int delay_ns)
 	sw_sync_timeline_inc(timeline, 1);
 }
 
-static void fairness(int i915,
+static void fairness(int i915, const intel_ctx_cfg_t *cfg,
 		     const struct intel_execution_engine2 *e,
 		     int duration, unsigned int flags)
 {
@@ -649,7 +660,7 @@ static void fairness(int i915,
 		int parent[2];
 	} lnk;
 
-	igt_require(has_ctx_timestamp(i915, e));
+	igt_require(has_ctx_timestamp(i915, cfg, e));
 	igt_require(gem_class_has_mutable_submission(i915, e->class));
 	if (flags & (F_ISOLATE | F_PING))
 		igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
@@ -713,12 +724,12 @@ static void fairness(int i915,
 		if (flags & F_PING) { /* fill the others with light bg load */
 			struct intel_execution_engine2 *ping;
 
-			__for_each_physical_engine(i915, ping) {
+			for_each_ctx_cfg_engine(i915, cfg, ping) {
 				if (ping->flags == e->flags)
 					continue;
 
 				igt_fork(child, 1) {
-					uint32_t ctx = gem_context_clone_with_engines(i915, 0);
+					intel_ctx_t *ctx = intel_ctx_create(i915, cfg);
 
 					fair_child(i915, ctx, ping,
 						   child_ns / 8,
@@ -727,7 +738,7 @@ static void fairness(int i915,
 						   &result[nchild],
 						   NULL, NULL, -1, -1);
 
-					gem_context_destroy(i915, ctx);
+					intel_ctx_destroy(i915, ctx);
 				}
 			}
 		}
@@ -735,7 +746,7 @@ static void fairness(int i915,
 		getrusage(RUSAGE_CHILDREN, &old_usage);
 		igt_nsec_elapsed(memset(&tv, 0, sizeof(tv)));
 		igt_fork(child, nchild) {
-			uint32_t ctx;
+			intel_ctx_t *ctx;
 
 			if (flags & F_ISOLATE) {
 				int clone, dmabuf = -1;
@@ -751,10 +762,10 @@ static void fairness(int i915,
 					common = prime_fd_to_handle(i915, dmabuf);
 			}
 
-			ctx = gem_context_clone_with_engines(i915, 0);
+			ctx = intel_ctx_create(i915, cfg);
 
 			if (flags & F_VIP && child == 0) {
-				gem_context_set_priority(i915, ctx, 1023);
+				gem_context_set_priority(i915, ctx->id, 1023);
 				flags |= F_FLOW;
 			}
 			if (flags & F_RRUL && child == 0)
@@ -766,7 +777,7 @@ static void fairness(int i915,
 				   &result[child], &iqr[child],
 				   lnk.child[1], lnk.parent[0]);
 
-			gem_context_destroy(i915, ctx);
+			intel_ctx_destroy(i915, ctx);
 		}
 
 		{
@@ -911,7 +922,7 @@ static void fairness(int i915,
 }
 
 static void deadline_child(int i915,
-			   uint32_t ctx,
+			   const intel_ctx_t *ctx,
 			   const struct intel_execution_engine2 *e,
 			   uint32_t handle,
 			   int timeline,
@@ -931,7 +942,7 @@ static void deadline_child(int i915,
 		.buffers_ptr = to_user_pointer(obj),
 		.buffer_count = ARRAY_SIZE(obj),
 		.flags = I915_EXEC_FENCE_OUT | e->flags,
-		.rsvd1 = ctx,
+		.rsvd1 = ctx->id,
 	};
 	unsigned int seq = 1;
 	int prev = -1, next = -1;
@@ -976,11 +987,12 @@ static void deadline_child(int i915,
 	close(prev);
 }
 
-static struct intel_execution_engine2 pick_default(int i915)
+static struct intel_execution_engine2
+pick_default(int i915, const intel_ctx_cfg_t *cfg)
 {
 	const struct intel_execution_engine2 *e;
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_cfg_engine(i915, cfg, e) {
 		if (!e->flags)
 			return *e;
 	}
@@ -988,11 +1000,12 @@ static struct intel_execution_engine2 pick_default(int i915)
 	return (struct intel_execution_engine2){};
 }
 
-static struct intel_execution_engine2 pick_engine(int i915, const char *name)
+static struct intel_execution_engine2
+pick_engine(int i915, const intel_ctx_cfg_t *cfg, const char *name)
 {
 	const struct intel_execution_engine2 *e;
 
-	__for_each_physical_engine(i915, e) {
+	for_each_ctx_cfg_engine(i915, cfg, e) {
 		if (!strcmp(e->name, name))
 			return *e;
 	}
@@ -1029,15 +1042,16 @@ static uint64_t time_get_mono_ns(void)
 	return tv.tv_sec * NSEC64 + tv.tv_nsec;
 }
 
-static void deadline(int i915, int duration, unsigned int flags)
+static void deadline(int i915, const intel_ctx_cfg_t *cfg,
+		     int duration, unsigned int flags)
 {
 	const int64_t frame_ns = 33670 * 1000; /* 29.7fps */
 	const int64_t parent_ns = 400 * 1000;
 	const int64_t switch_ns = 50 * 1000;
 	const int64_t overhead_ns = /* estimate timeslicing overhead */
 		(frame_ns / 1000 / 1000 + 2) * switch_ns + parent_ns;
-	struct intel_execution_engine2 pe = pick_default(i915);
-	struct intel_execution_engine2 ve = pick_engine(i915, "vcs0");
+	struct intel_execution_engine2 pe = pick_default(i915, cfg);
+	struct intel_execution_engine2 ve = pick_engine(i915, cfg, "vcs0");
 	struct drm_i915_gem_exec_fence *fences = calloc(sizeof(*fences), 32);
 	struct drm_i915_gem_exec_object2 *obj = calloc(sizeof(*obj), 32);
 	struct drm_i915_gem_execbuffer2 execbuf = {
@@ -1053,9 +1067,9 @@ static void deadline(int i915, int duration, unsigned int flags)
 	igt_require(has_syncobj(i915));
 	igt_require(has_fence_array(i915));
 	igt_require(has_mi_math(i915, &pe));
-	igt_require(has_ctx_timestamp(i915, &pe));
+	igt_require(has_ctx_timestamp(i915, cfg, &pe));
 	igt_require(has_mi_math(i915, &ve));
-	igt_require(has_ctx_timestamp(i915, &ve));
+	igt_require(has_ctx_timestamp(i915, cfg, &ve));
 	igt_assert(obj && fences);
 	if (flags & DL_PRIO)
 		igt_require(gem_scheduler_has_preemption(i915));
@@ -1092,7 +1106,7 @@ static void deadline(int i915, int duration, unsigned int flags)
 
 		*ctl = 0;
 		igt_fork(child, num_children) {
-			uint32_t ctx = gem_context_clone_with_engines(i915, 0);
+			intel_ctx_t *ctx = intel_ctx_create(i915, cfg);
 
 			deadline_child(i915, ctx, &ve, obj[child + 1].handle,
 				       timeline, child_ns,
@@ -1100,7 +1114,7 @@ static void deadline(int i915, int duration, unsigned int flags)
 				       link[child].parent[0],
 				       ctl, flags);
 
-			gem_context_destroy(i915, ctx);
+			intel_ctx_destroy(i915, ctx);
 		}
 
 		for (int i = 0; i < num_children; i++)
@@ -1281,6 +1295,7 @@ igt_main
 		{}
 	};
 	const struct intel_execution_engine2 *e;
+	intel_ctx_cfg_t cfg;
 	int i915 = -1;
 
 	igt_fixture {
@@ -1296,6 +1311,8 @@ igt_main
 		igt_require(gem_scheduler_enabled(i915));
 		igt_require(gem_scheduler_has_ctx_priority(i915));
 
+		cfg = intel_ctx_cfg_all_physical(i915);
+
 		igt_info("CS timestamp frequency: %d\n",
 			 read_timestamp_frequency(i915));
 		igt_require(has_mi_math(i915, NULL));
@@ -1309,7 +1326,7 @@ igt_main
 			continue;
 
 		igt_subtest_with_dynamic_f("basic-%s", f->name)  {
-			__for_each_physical_engine(i915, e) {
+			for_each_ctx_cfg_engine(i915, &cfg, e) {
 				if (!has_mi_math(i915, e))
 					continue;
 
@@ -1320,19 +1337,19 @@ igt_main
 					continue;
 
 				igt_dynamic_f("%s", e->name)
-					fairness(i915, e, 1, f->flags);
+					fairness(i915, &cfg, e, 1, f->flags);
 			}
 		}
 	}
 
 	igt_subtest("basic-deadline")
-		deadline(i915, 2, 0);
+		deadline(i915, &cfg, 2, 0);
 	igt_subtest("deadline-prio")
-		deadline(i915, 2, DL_PRIO);
+		deadline(i915, &cfg, 2, DL_PRIO);
 
 	for (typeof(*fair) *f = fair; f->name; f++) {
 		igt_subtest_with_dynamic_f("fair-%s", f->name)  {
-			__for_each_physical_engine(i915, e) {
+			for_each_ctx_cfg_engine(i915, &cfg, e) {
 				if (!has_mi_math(i915, e))
 					continue;
 
@@ -1343,7 +1360,7 @@ igt_main
 					continue;
 
 				igt_dynamic_f("%s", e->name)
-					fairness(i915, e, 5, f->flags);
+					fairness(i915, &cfg, e, 5, f->flags);
 			}
 		}
 	}
-- 
2.29.2


* [igt-dev] [RFC 17/30] tests/i915/gem_spin_batch: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (15 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 16/30] tests/i915/gem_exec_fair: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 18/30] tests/i915/gem_exec_store: " Jason Ekstrand
                   ` (14 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_spin_batch.c | 83 ++++++++++++++++++++++---------------
 1 file changed, 49 insertions(+), 34 deletions(-)

diff --git a/tests/i915/gem_spin_batch.c b/tests/i915/gem_spin_batch.c
index db0af018..8fbd5ccc 100644
--- a/tests/i915/gem_spin_batch.c
+++ b/tests/i915/gem_spin_batch.c
@@ -34,7 +34,7 @@
 		     "'%s' != '%s' (%lld not within %d%% tolerance of %lld)\n",\
 		     #x, #ref, (long long)x, tolerance, (long long)ref)
 
-static void spin(int fd,
+static void spin(int fd, const intel_ctx_t *ctx_id,
 		 unsigned int engine,
 		 unsigned int flags,
 		 unsigned int timeout_sec)
@@ -46,10 +46,12 @@ static void spin(int fd,
 	struct timespec itv = { };
 	uint64_t elapsed;
 
-	spin = __igt_spin_new(fd, .engine = engine, .flags = flags);
+	spin = __igt_spin_new(fd, .ctx = ctx_id, .engine = engine,
+			      .flags = flags);
 	while ((elapsed = igt_nsec_elapsed(&tv)) >> 30 < timeout_sec) {
 		igt_spin_t *next =
-			__igt_spin_new(fd, .engine = engine, .flags = flags);
+			__igt_spin_new(fd, .ctx = ctx_id, .engine = engine,
+				       .flags = flags);
 
 		igt_spin_set_timeout(spin,
 				     timeout_100ms - igt_nsec_elapsed(&itv));
@@ -75,21 +77,25 @@ static void spin(int fd,
 #define RESUBMIT_NEW_CTX     (1 << 0)
 #define RESUBMIT_ALL_ENGINES (1 << 1)
 
-static void spin_resubmit(int fd, unsigned int engine, unsigned int flags)
+static void spin_resubmit(int fd, const intel_ctx_t *ctx,
+			  unsigned int engine, unsigned int flags)
 {
+	intel_ctx_t *new_ctx = NULL;
 	igt_spin_t *spin;
 
 	if (flags & RESUBMIT_NEW_CTX)
 		igt_require(gem_has_contexts(fd));
 
-	spin = __igt_spin_new(fd, .engine = engine);
-	if (flags & RESUBMIT_NEW_CTX)
-		spin->execbuf.rsvd1 = gem_context_clone_with_engines(fd, 0);
+	spin = __igt_spin_new(fd, .ctx = ctx, .engine = engine);
+	if (flags & RESUBMIT_NEW_CTX) {
+		new_ctx = intel_ctx_create(fd, &ctx->cfg);
+		spin->execbuf.rsvd1 = new_ctx->id;
+	}
 
 	if (flags & RESUBMIT_ALL_ENGINES) {
 		const struct intel_execution_engine2 *other;
 
-		for_each_context_engine(fd, spin->execbuf.rsvd1, other) {
+		for_each_ctx_engine(fd, ctx, other) {
 			spin->execbuf.flags &= ~0x3f;
 			spin->execbuf.flags |= other->flags;
 			gem_execbuf(fd, &spin->execbuf);
@@ -100,8 +106,8 @@ static void spin_resubmit(int fd, unsigned int engine, unsigned int flags)
 	igt_spin_end(spin);
 	gem_sync(fd, spin->handle);
 
-	if (spin->execbuf.rsvd1)
-		gem_context_destroy(fd, spin->execbuf.rsvd1);
+	if (flags & RESUBMIT_NEW_CTX)
+		intel_ctx_destroy(fd, new_ctx);
 
 	igt_spin_free(fd, spin);
 }
@@ -112,45 +118,45 @@ static void spin_exit_handler(int sig)
 }
 
 static void
-spin_on_all_engines(int fd, unsigned long flags, unsigned int timeout_sec)
+spin_on_all_engines(int fd, const intel_ctx_t *ctx,
+		    unsigned long flags, unsigned int timeout_sec)
 {
 	const struct intel_execution_engine2 *e2;
 
-	__for_each_physical_engine(fd, e2) {
+	for_each_ctx_engine(fd, ctx, e2) {
 		igt_fork(child, 1) {
 			igt_install_exit_handler(spin_exit_handler);
-			spin(fd, e2->flags, flags, timeout_sec);
+			spin(fd, ctx, e2->flags, flags, timeout_sec);
 		}
 	}
 
 	igt_waitchildren();
 }
 
-static void spin_all(int i915, unsigned int flags)
+static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
 #define PARALLEL_SPIN_NEW_CTX BIT(0)
 {
 	const struct intel_execution_engine2 *e;
 	struct igt_spin *spin, *n;
 	IGT_LIST_HEAD(list);
 
-	__for_each_physical_engine(i915, e) {
-		uint32_t ctx;
+	for_each_ctx_engine(i915, ctx, e) {
+		intel_ctx_t *new_ctx = NULL;
 
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		ctx = 0;
 		if (flags & PARALLEL_SPIN_NEW_CTX)
-			ctx = gem_context_clone_with_engines(i915, 0);
+			new_ctx = intel_ctx_create(i915, &ctx->cfg);
 
 		/* Prevent preemption so only one is allowed on each engine */
 		spin = igt_spin_new(i915,
-				    .ctx_id = ctx,
+				    .ctx = new_ctx ? new_ctx : ctx,
 				    .engine = e->flags,
 				    .flags = (IGT_SPIN_POLL_RUN |
 					      IGT_SPIN_NO_PREEMPTION));
-		if (ctx)
-			gem_context_destroy(i915, ctx);
+		if (flags & PARALLEL_SPIN_NEW_CTX)
+			intel_ctx_destroy(i915, new_ctx);
 
 		igt_spin_busywait_until_started(spin);
 		igt_list_move(&spin->link, &list);
@@ -187,11 +193,18 @@ igt_main
 {
 	const struct intel_execution_engine2 *e2;
 	const struct intel_execution_ring *e;
+	const intel_ctx_t *ctx = NULL;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(fd);
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
+
 		igt_fork_hang_detector(fd);
 	}
 
@@ -202,49 +215,51 @@ igt_main
 				igt_dynamic_f("%s", e->name)
 
 	test_each_legacy_ring("legacy")
-		spin(fd, eb_ring(e), 0, 3);
+		spin(fd, intel_ctx_0(fd), eb_ring(e), 0, 3);
 	test_each_legacy_ring("legacy-resubmit")
-		spin_resubmit(fd, eb_ring(e), 0);
+		spin_resubmit(fd, intel_ctx_0(fd), eb_ring(e), 0);
 	test_each_legacy_ring("legacy-resubmit-new")
-		spin_resubmit(fd, eb_ring(e), RESUBMIT_NEW_CTX);
+		spin_resubmit(fd, intel_ctx_0(fd), eb_ring(e), RESUBMIT_NEW_CTX);
 
 #undef test_each_legcy_ring
 
 	igt_subtest("spin-all")
-		spin_all(fd, 0);
+		spin_all(fd, ctx, 0);
 	igt_subtest("spin-all-new")
-		spin_all(fd, PARALLEL_SPIN_NEW_CTX);
+		spin_all(fd, ctx, PARALLEL_SPIN_NEW_CTX);
 
 #define test_each_engine(test) \
 	igt_subtest_with_dynamic(test) \
-		__for_each_physical_engine(fd, e2) \
+		for_each_ctx_engine(fd, ctx, e2) \
 			igt_dynamic_f("%s", e2->name)
 
 	test_each_engine("engines")
-		spin(fd, e2->flags, 0, 3);
+		spin(fd, ctx, e2->flags, 0, 3);
 
 	test_each_engine("resubmit")
-		spin_resubmit(fd, e2->flags, 0);
+		spin_resubmit(fd, ctx, e2->flags, 0);
 
 	test_each_engine("resubmit-new")
-		spin_resubmit(fd, e2->flags, RESUBMIT_NEW_CTX);
+		spin_resubmit(fd, ctx, e2->flags,
+			      RESUBMIT_NEW_CTX);
 
 	test_each_engine("resubmit-all")
-		spin_resubmit(fd, e2->flags, RESUBMIT_ALL_ENGINES);
+		spin_resubmit(fd, ctx, e2->flags,
+			      RESUBMIT_ALL_ENGINES);
 
 	test_each_engine("resubmit-new-all")
-		spin_resubmit(fd, e2->flags,
+		spin_resubmit(fd, ctx, e2->flags,
 			      RESUBMIT_NEW_CTX |
 			      RESUBMIT_ALL_ENGINES);
 
 #undef test_each_engine
 
 	igt_subtest("spin-each")
-		spin_on_all_engines(fd, 0, 3);
+		spin_on_all_engines(fd, ctx, 0, 3);
 
 	igt_subtest("user-each") {
 		igt_require(has_userptr(fd));
-		spin_on_all_engines(fd, IGT_SPIN_USERPTR, 3);
+		spin_on_all_engines(fd, ctx, IGT_SPIN_USERPTR, 3);
 	}
 
 	igt_fixture {
-- 
2.29.2


* [igt-dev] [RFC 18/30] tests/i915/gem_exec_store: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (16 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 17/30] tests/i915/gem_spin_batch: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 19/30] tests/amdgpu/amd_prime: " Jason Ekstrand
                   ` (13 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_exec_store.c | 38 +++++++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/tests/i915/gem_exec_store.c b/tests/i915/gem_exec_store.c
index 771ee169..b64ea577 100644
--- a/tests/i915/gem_exec_store.c
+++ b/tests/i915/gem_exec_store.c
@@ -36,7 +36,8 @@
 
 #define ENGINE_MASK  (I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK)
 
-static void store_dword(int fd, const struct intel_execution_engine2 *e)
+static void store_dword(int fd, const intel_ctx_t *ctx,
+			const struct intel_execution_engine2 *e)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
@@ -52,6 +53,7 @@ static void store_dword(int fd, const struct intel_execution_engine2 *e)
 	execbuf.flags = e->flags;
 	if (gen > 3 && gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = gem_create(fd, 4096);
@@ -93,7 +95,8 @@ static void store_dword(int fd, const struct intel_execution_engine2 *e)
 }
 
 #define PAGES 1
-static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
+static void store_cachelines(int fd, const intel_ctx_t *ctx,
+			     const struct intel_execution_engine2 *e,
 			     unsigned int flags)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -113,6 +116,7 @@ static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
 	execbuf.flags = e->flags;
 	if (gen > 3 && gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	obj = calloc(execbuf.buffer_count, sizeof(*obj));
 	igt_assert(obj);
@@ -170,7 +174,7 @@ static void store_cachelines(int fd, const struct intel_execution_engine2 *e,
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 }
 
-static void store_all(int fd)
+static void store_all(int fd, const intel_ctx_t *ctx)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct drm_i915_gem_exec_object2 obj[2];
@@ -185,7 +189,7 @@ static void store_all(int fd)
 	int i, j;
 
 	nengine = 0;
-	__for_each_physical_engine(fd, engine) {
+	for_each_ctx_engine(fd, ctx, engine) {
 		if (!gem_class_can_store_dword(fd, engine->class))
 			continue;
 		nengine++;
@@ -206,6 +210,7 @@ static void store_all(int fd)
 	execbuf.buffer_count = 2;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = gem_create(fd, nengine*sizeof(uint32_t));
@@ -231,7 +236,7 @@ static void store_all(int fd)
 
 	nengine = 0;
 	intel_detect_and_clear_missed_interrupts(fd);
-	__for_each_physical_engine(fd, engine) {
+	for_each_ctx_engine(fd, ctx, engine) {
 		if (!gem_class_can_store_dword(fd, engine->class))
 			continue;
 
@@ -322,14 +327,15 @@ static int print_welcome(int fd)
 	return ffs(info->gen);
 }
 
-#define test_each_engine(T, i915, e)  \
-	igt_subtest_with_dynamic(T) __for_each_physical_engine(i915, e) \
+#define test_each_engine(T, i915, ctx, e)  \
+	igt_subtest_with_dynamic(T) for_each_ctx_engine(i915, ctx, e) \
 		for_each_if(gem_class_can_store_dword(i915, (e)->class)) \
 			igt_dynamic_f("%s", (e)->name)
 
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx = NULL;
 	int fd;
 
 	igt_fixture {
@@ -342,21 +348,25 @@ igt_main
 			igt_device_set_master(fd);
 
 		igt_require_gem(fd);
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
 
 		igt_fork_hang_detector(fd);
 	}
 
 	igt_subtest("basic")
-		store_all(fd);
+		store_all(fd, ctx);
 
-	test_each_engine("dword", fd, e)
-		store_dword(fd, e);
+	test_each_engine("dword", fd, ctx, e)
+		store_dword(fd, ctx, e);
 
-	test_each_engine("cachelines", fd, e)
-		store_cachelines(fd, e, 0);
+	test_each_engine("cachelines", fd, ctx, e)
+		store_cachelines(fd, ctx, e, 0);
 
-	test_each_engine("pages", fd, e)
-		store_cachelines(fd, e, PAGES);
+	test_each_engine("pages", fd, ctx, e)
+		store_cachelines(fd, ctx, e, PAGES);
 
 	igt_fixture {
 		igt_stop_hang_detector();
-- 
2.29.2


* [igt-dev] [RFC 19/30] tests/amdgpu/amd_prime: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (17 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 18/30] tests/i915/gem_exec_store: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 20/30] tests/i915/i915_hangman: " Jason Ekstrand
                   ` (12 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

Written totally blind (untested on hardware), but I'm pretty sure it's right.
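
For reference, the conversion here is the same mechanical pattern applied
across the series: build an explicit engine config, create a context from
it, point execbuf at that context, and destroy it when done.  A rough
sketch follows (the run_on_each_engine() wrapper is made up purely for
illustration; only the intel_ctx_* helpers and for_each_ctx_cfg_engine()
come from the new library support):

#include "igt.h"
#include "intel_ctx.h"

static void run_on_each_engine(int i915)
{
	intel_ctx_cfg_t cfg = intel_ctx_cfg_all_physical(i915);
	const struct intel_execution_engine2 *e;

	for_each_ctx_cfg_engine(i915, &cfg, e) {
		/* A fresh context per engine instead of cloning ctx0 */
		intel_ctx_t *ctx = intel_ctx_create(i915, &cfg);
		struct drm_i915_gem_execbuffer2 execbuf = {
			.rsvd1 = ctx->id,	/* execute in this context */
			.flags = e->flags,	/* engine selector from the cfg */
		};

		/* ... set buffers_ptr/buffer_count and submit with
		 * gem_execbuf(i915, &execbuf) as the test requires ... */

		intel_ctx_destroy(i915, ctx);
	}
}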
---
 tests/amdgpu/amd_prime.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/tests/amdgpu/amd_prime.c b/tests/amdgpu/amd_prime.c
index 537b0bcd..31ca209f 100644
--- a/tests/amdgpu/amd_prime.c
+++ b/tests/amdgpu/amd_prime.c
@@ -172,6 +172,7 @@ static void unplug(struct cork *c)
 static void i915_to_amd(int i915, int amd, amdgpu_device_handle device)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
+	intel_ctx_cfg_t cfg;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
 	const struct intel_execution_engine2 *e;
@@ -180,8 +181,10 @@ static void i915_to_amd(int i915, int amd, amdgpu_device_handle device)
 	unsigned long count;
 	struct cork c;
 
+	cfg = intel_ctx_cfg_all_physical(i915);
+
 	nengine = 0;
-	__for_each_physical_engine(i915, e)
+	for_each_ctx_cfg_engine(i915, &cfg, e)
 		engines[nengine++] = e->flags;
 	igt_require(nengine);
 
@@ -198,14 +201,15 @@ static void i915_to_amd(int i915, int amd, amdgpu_device_handle device)
 
 	count = 0;
 	igt_until_timeout(5) {
-		execbuf.rsvd1 = gem_context_clone_with_engines(i915, 0);
+		intel_ctx_t *ctx = intel_ctx_create(i915, &cfg);
+		execbuf.rsvd1 = ctx->id;
 
 		for (unsigned n = 0; n < nengine; n++) {
 			execbuf.flags = engines[n];
 			gem_execbuf(i915, &execbuf);
 		}
 
-		gem_context_destroy(i915, execbuf.rsvd1);
+		intel_ctx_destroy(i915, ctx);
 		count++;
 
 		if (!gem_uses_full_ppgtt(i915))
-- 
2.29.2


* [igt-dev] [RFC 20/30] tests/i915/i915_hangman: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (18 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 19/30] tests/amdgpu/amd_prime: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 21/30] tests/i915/gem_ringfill: " Jason Ekstrand
                   ` (11 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/i915_hangman.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/tests/i915/i915_hangman.c b/tests/i915/i915_hangman.c
index 72e4d8b8..f40d5c90 100644
--- a/tests/i915/i915_hangman.c
+++ b/tests/i915/i915_hangman.c
@@ -204,7 +204,7 @@ static void check_error_state(const char *expected_ring_name,
 	igt_assert(found);
 }
 
-static void test_error_state_capture(unsigned ring_id,
+static void test_error_state_capture(const intel_ctx_t *ctx, unsigned ring_id,
 				     const char *ring_name)
 {
 	uint32_t *batch;
@@ -213,7 +213,7 @@ static void test_error_state_capture(unsigned ring_id,
 
 	clear_error_state();
 
-	hang = igt_hang_ctx(device, 0, ring_id, HANG_ALLOW_CAPTURE);
+	hang = igt_hang_ctx(device, ctx->id, ring_id, HANG_ALLOW_CAPTURE);
 	offset = hang.spin->obj[IGT_SPIN_BATCH].offset;
 
 	batch = gem_mmap__cpu(device, hang.spin->handle, 0, 4096, PROT_READ);
@@ -226,32 +226,34 @@ static void test_error_state_capture(unsigned ring_id,
 }
 
 static void
-test_engine_hang(const struct intel_execution_engine2 *e, unsigned int flags)
+test_engine_hang(const intel_ctx_t *ctx,
+		 const struct intel_execution_engine2 *e, unsigned int flags)
 {
 	const struct intel_execution_engine2 *other;
+	intel_ctx_t *tmp_ctx;
 	igt_spin_t *spin, *next;
 	IGT_LIST_HEAD(list);
-	uint32_t ctx;
 
 	igt_skip_on(flags & IGT_SPIN_INVALID_CS &&
 		    gem_has_cmdparser(device, e->flags));
 
 	/* Fill all the other engines with background load */
-	__for_each_physical_engine(device, other) {
+	for_each_ctx_engine(device, ctx, other) {
 		if (other->flags == e->flags)
 			continue;
 
-		ctx = gem_context_clone_with_engines(device, 0);
-		spin = __igt_spin_new(device, ctx,
+		tmp_ctx = intel_ctx_create(device, &ctx->cfg);
+		spin = __igt_spin_new(device, .ctx = tmp_ctx,
 				      .engine = other->flags,
 				      .flags = IGT_SPIN_FENCE_OUT);
-		gem_context_destroy(device, ctx);
+		intel_ctx_destroy(device, tmp_ctx);
 
 		igt_list_move(&spin->link, &list);
 	}
 
 	/* And on the target engine, we hang */
 	spin = igt_spin_new(device,
+			    .ctx = ctx,
 			    .engine = e->flags,
 			    .flags = (IGT_SPIN_FENCE_OUT |
 				      IGT_SPIN_NO_PREEMPTION |
@@ -310,13 +312,16 @@ static void hangcheck_unterminated(void)
 igt_main
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx = NULL;
 	igt_hang_t hang = {};
 
 	igt_fixture {
 		device = drm_open_driver(DRIVER_INTEL);
 		igt_require_gem(device);
 
-		hang = igt_allow_hang(device, 0, HANG_ALLOW_CAPTURE);
+		ctx = intel_ctx_create_all_physical(device);
+
+		hang = igt_allow_hang(device, ctx->id, HANG_ALLOW_CAPTURE);
 
 		sysfs = igt_sysfs_open(device);
 		igt_assert(sysfs != -1);
@@ -328,9 +333,9 @@ igt_main
 		test_error_state_basic();
 
 	igt_subtest_with_dynamic("error-state-capture") {
-		__for_each_physical_engine(device, e) {
+		for_each_ctx_engine(device, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				test_error_state_capture(e->flags, e->name);
+				test_error_state_capture(ctx, e->flags, e->name);
 		}
 	}
 
@@ -346,9 +351,9 @@ igt_main
                 ioctl(device, DRM_IOCTL_I915_GETPARAM, &gp);
 		igt_require(has_gpu_reset > 1);
 
-		__for_each_physical_engine(device, e) {
+		for_each_ctx_engine(device, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				test_engine_hang(e, 0);
+				test_engine_hang(ctx, e, 0);
 		}
 	}
 
@@ -363,9 +368,9 @@ igt_main
 		ioctl(device, DRM_IOCTL_I915_GETPARAM, &gp);
 		igt_require(has_gpu_reset > 1);
 
-		__for_each_physical_engine(device, e) {
+		for_each_ctx_engine(device, ctx, e) {
 			igt_dynamic_f("%s", e->name)
-				test_engine_hang(e, IGT_SPIN_INVALID_CS);
+				test_engine_hang(ctx, e, IGT_SPIN_INVALID_CS);
 		}
 	}
 
-- 
2.29.2


* [igt-dev] [RFC 21/30] tests/i915/gem_ringfill: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (19 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 20/30] tests/i915/i915_hangman: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 22/30] tests/prime_busy: " Jason Ekstrand
                   ` (10 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_ringfill.c | 47 +++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/tests/i915/gem_ringfill.c b/tests/i915/gem_ringfill.c
index 78903707..dcbf08ba 100644
--- a/tests/i915/gem_ringfill.c
+++ b/tests/i915/gem_ringfill.c
@@ -93,7 +93,7 @@ static void fill_ring(int fd,
 	}
 }
 
-static void setup_execbuf(int fd,
+static void setup_execbuf(int fd, const intel_ctx_t *ctx,
 			  struct drm_i915_gem_execbuffer2 *execbuf,
 			  struct drm_i915_gem_exec_object2 *obj,
 			  struct drm_i915_gem_relocation_entry *reloc,
@@ -114,6 +114,8 @@ static void setup_execbuf(int fd,
 	if (gen > 3 && gen < 6)
 		execbuf->flags |= I915_EXEC_SECURE;
 
+	execbuf->rsvd1 = ctx->id;
+
 	obj[0].handle = gem_create(fd, 4096);
 	gem_write(fd, obj[0].handle, 0, &bbe, sizeof(bbe));
 	execbuf->buffer_count = 1;
@@ -167,7 +169,8 @@ static void setup_execbuf(int fd,
 	check_bo(fd, obj[0].handle);
 }
 
-static void run_test(int fd, unsigned ring, unsigned flags, unsigned timeout)
+static void run_test(int fd, const intel_ctx_t *ctx, unsigned ring,
+		     unsigned flags, unsigned timeout)
 {
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc[1024];
@@ -175,15 +178,15 @@ static void run_test(int fd, unsigned ring, unsigned flags, unsigned timeout)
 	igt_hang_t hang;
 
 	if (flags & (SUSPEND | HIBERNATE)) {
-		run_test(fd, ring, 0, 0);
+		run_test(fd, ctx, ring, 0, 0);
 		gem_quiescent_gpu(fd);
 	}
 
-	setup_execbuf(fd, &execbuf, obj, reloc, ring);
+	setup_execbuf(fd, ctx, &execbuf, obj, reloc, ring);
 
 	memset(&hang, 0, sizeof(hang));
 	if (flags & HANG)
-		hang = igt_hang_ring(fd, ring & ~(3<<13));
+		hang = igt_hang_ctx(fd, ctx->id, ring & ~(3<<13), 0);
 
 	if (flags & (CHILD | FORKED | BOMB)) {
 		int nchild;
@@ -197,16 +200,19 @@ static void run_test(int fd, unsigned ring, unsigned flags, unsigned timeout)
 
 		igt_debug("Forking %d children\n", nchild);
 		igt_fork(child, nchild) {
+			intel_ctx_t *child_ctx = NULL;
 			if (flags & NEWFD) {
 				int this;
 
 				this = gem_reopen_driver(fd);
-				gem_context_copy_engines(fd, 0, this, 0);
+				child_ctx = intel_ctx_create(this, &ctx->cfg);
 				fd = this;
 
-				setup_execbuf(fd, &execbuf, obj, reloc, ring);
+				setup_execbuf(fd, child_ctx, &execbuf, obj, reloc, ring);
 			}
 			fill_ring(fd, &execbuf, flags, timeout);
+			if (child_ctx)
+				intel_ctx_destroy(fd, child_ctx);
 		}
 
 		if (flags & SUSPEND)
@@ -234,7 +240,7 @@ static void run_test(int fd, unsigned ring, unsigned flags, unsigned timeout)
 
 	if (flags & (SUSPEND | HIBERNATE)) {
 		gem_quiescent_gpu(fd);
-		run_test(fd, ring, 0, 0);
+		run_test(fd, ctx, ring, 0, 0);
 	}
 }
 
@@ -285,6 +291,7 @@ igt_main
 		{ NULL }
 	}, *m;
 	bool master = false;
+	const intel_ctx_t *ctx;
 	int fd = -1;
 
 	igt_fixture {
@@ -304,22 +311,23 @@ igt_main
 		ring_size = gem_measure_ring_inflight(fd, ALL_ENGINES, 0);
 		igt_info("Ring size: %d batches\n", ring_size);
 		igt_require(ring_size);
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
 	}
 
 	/* Legacy path for selecting "rings". */
 	for (m = modes; m->suffix; m++) {
 		igt_subtest_with_dynamic_f("legacy-%s", m->suffix) {
-			const struct intel_execution_ring *e;
-
 			igt_skip_on(m->flags & NEWFD && master);
 
-			for (e = intel_execution_rings; e->name; e++) {
-				if (!gem_has_ring(fd, eb_ring(e)))
-					continue;
-
+			for_each_ring(e, fd) {
 				igt_dynamic_f("%s", e->name) {
 					igt_require(gem_can_store_dword(fd, eb_ring(e)));
-					run_test(fd, eb_ring(e),
+					run_test(fd, intel_ctx_0(fd),
+						 eb_ring(e),
 						 m->flags,
 						 m->timeout);
 					gem_quiescent_gpu(fd);
@@ -334,12 +342,13 @@ igt_main
 			const struct intel_execution_engine2 *e;
 
 			igt_skip_on(m->flags & NEWFD && master);
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				if (!gem_class_can_store_dword(fd, e->class))
 					continue;
 
 				igt_dynamic_f("%s", e->name) {
-					run_test(fd, e->flags,
+					run_test(fd, ctx,
+						 e->flags,
 						 m->flags,
 						 m->timeout);
 					gem_quiescent_gpu(fd);
@@ -351,12 +360,12 @@ igt_main
 	igt_subtest("basic-all") {
 		const struct intel_execution_engine2 *e;
 
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			if (!gem_class_can_store_dword(fd, e->class))
 				continue;
 
 			igt_fork(child, 1)
-				run_test(fd, e->flags, 0, 1);
+				run_test(fd, ctx, e->flags, 0, 1);
 		}
 
 		igt_waitchildren();
-- 
2.29.2


* [igt-dev] [RFC 22/30] tests/prime_busy: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (20 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 21/30] tests/i915/gem_ringfill: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 23/30] tests/prime_vgem: " Jason Ekstrand
                   ` (9 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/prime_busy.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/tests/prime_busy.c b/tests/prime_busy.c
index aec76393..2c88399b 100644
--- a/tests/prime_busy.c
+++ b/tests/prime_busy.c
@@ -39,7 +39,7 @@ static bool prime_busy(struct pollfd *pfd, bool excl)
 #define HANG 0x4
 #define POLL 0x8
 
-static void busy(int fd, unsigned ring, unsigned flags)
+static void busy(int fd, const intel_ctx_t *ctx, unsigned ring, unsigned flags)
 {
 	const int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t _bbe = MI_BATCH_BUFFER_END;
@@ -62,6 +62,7 @@ static void busy(int fd, unsigned ring, unsigned flags)
 	execbuf.flags = ring;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(obj, 0, sizeof(obj));
 	obj[SCRATCH].handle = gem_create(fd, 4096);
@@ -185,7 +186,7 @@ static void busy(int fd, unsigned ring, unsigned flags)
 	close(pfd[SCRATCH].fd);
 }
 
-static void test_mode(int fd, unsigned int flags)
+static void test_mode(int fd, const intel_ctx_t *ctx, unsigned int flags)
 {
 	const struct intel_execution_engine2 *e;
 	igt_hang_t hang = {};
@@ -195,7 +196,7 @@ static void test_mode(int fd, unsigned int flags)
 	else
 		hang = igt_allow_hang(fd, 0, 0);
 
-	__for_each_physical_engine(fd, e) {
+	for_each_ctx_engine(fd, ctx, e) {
 		if (!gem_class_can_store_dword(fd, e->class))
 			continue;
 
@@ -203,7 +204,7 @@ static void test_mode(int fd, unsigned int flags)
 			continue;
 
 		igt_dynamic_f("%s", e->name)
-			busy(fd, e->flags, flags);
+			busy(fd, ctx, e->flags, flags);
 	}
 
 	if ((flags & HANG) == 0)
@@ -214,11 +215,17 @@ static void test_mode(int fd, unsigned int flags)
 
 igt_main
 {
+	const intel_ctx_t *ctx;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver_master(DRIVER_INTEL);
 		igt_require_gem(fd);
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
 	}
 
 	igt_subtest_group {
@@ -237,10 +244,10 @@ igt_main
 
 		for (const struct mode *m = modes; m->name; m++) {
 			igt_subtest_with_dynamic(m->name)
-				test_mode(fd, m->flags);
+				test_mode(fd, ctx, m->flags);
 
 			igt_subtest_with_dynamic_f("%s-wait", m->name)
-				test_mode(fd, m->flags | POLL);
+				test_mode(fd, ctx, m->flags | POLL);
 		}
 	}
 
-- 
2.29.2


* [igt-dev] [RFC 23/30] tests/prime_vgem: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (21 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 22/30] tests/prime_busy: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 24/30] tests/gem_exec_whisper: " Jason Ekstrand
                   ` (8 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/prime_vgem.c | 38 ++++++++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/tests/prime_vgem.c b/tests/prime_vgem.c
index 07ff69a2..9d8db8ba 100644
--- a/tests/prime_vgem.c
+++ b/tests/prime_vgem.c
@@ -558,7 +558,7 @@ static bool prime_busy(int fd, bool excl)
 	return poll(&pfd, 1, 0) == 0;
 }
 
-static void work(int i915, int dmabuf, unsigned ring)
+static void work(int i915, int dmabuf, const intel_ctx_t *ctx, unsigned ring)
 {
 	const int SCRATCH = 0;
 	const int BATCH = 1;
@@ -577,6 +577,7 @@ static void work(int i915, int dmabuf, unsigned ring)
 	execbuf.flags = ring;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(obj, 0, sizeof(obj));
 	obj[SCRATCH].handle = prime_fd_to_handle(i915, dmabuf);
@@ -653,7 +654,7 @@ static void work(int i915, int dmabuf, unsigned ring)
 	igt_assert(read_busy && write_busy);
 }
 
-static void test_busy(int i915, int vgem, unsigned ring)
+static void test_busy(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 {
 	struct vgem_bo scratch;
 	struct timespec tv;
@@ -667,7 +668,7 @@ static void test_busy(int i915, int vgem, unsigned ring)
 	vgem_create(vgem, &scratch);
 	dmabuf = prime_handle_to_fd(vgem, scratch.handle);
 
-	work(i915, dmabuf, ring);
+	work(i915, dmabuf, ctx, ring);
 
 	/* Calling busy in a loop should be enough to flush the rendering */
 	memset(&tv, 0, sizeof(tv));
@@ -683,7 +684,7 @@ static void test_busy(int i915, int vgem, unsigned ring)
 	close(dmabuf);
 }
 
-static void test_wait(int i915, int vgem, unsigned ring)
+static void test_wait(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 {
 	struct vgem_bo scratch;
 	struct pollfd pfd;
@@ -696,7 +697,7 @@ static void test_wait(int i915, int vgem, unsigned ring)
 	vgem_create(vgem, &scratch);
 	pfd.fd = prime_handle_to_fd(vgem, scratch.handle);
 
-	work(i915, pfd.fd, ring);
+	work(i915, pfd.fd, ctx, ring);
 
 	pfd.events = POLLIN;
 	igt_assert_eq(poll(&pfd, 1, 10000), 1);
@@ -710,7 +711,7 @@ static void test_wait(int i915, int vgem, unsigned ring)
 	close(pfd.fd);
 }
 
-static void test_sync(int i915, int vgem, unsigned ring)
+static void test_sync(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 {
 	struct vgem_bo scratch;
 	uint32_t *ptr;
@@ -727,7 +728,7 @@ static void test_sync(int i915, int vgem, unsigned ring)
 	igt_assert(ptr != MAP_FAILED);
 	gem_close(vgem, scratch.handle);
 
-	work(i915, dmabuf, ring);
+	work(i915, dmabuf, ctx, ring);
 
 	prime_sync_start(dmabuf, false);
 	for (i = 0; i < 1024; i++)
@@ -738,7 +739,7 @@ static void test_sync(int i915, int vgem, unsigned ring)
 	munmap(ptr, scratch.size);
 }
 
-static void test_fence_wait(int i915, int vgem, unsigned ring)
+static void test_fence_wait(int i915, int vgem, const intel_ctx_t *ctx, unsigned ring)
 {
 	struct vgem_bo scratch;
 	uint32_t fence;
@@ -759,7 +760,7 @@ static void test_fence_wait(int i915, int vgem, unsigned ring)
 	igt_assert(ptr != MAP_FAILED);
 
 	igt_fork(child, 1)
-		work(i915, dmabuf, ring);
+		work(i915, dmabuf, ctx, ring);
 
 	sleep(1);
 
@@ -799,7 +800,7 @@ static void test_fence_hang(int i915, int vgem, unsigned flags)
 	igt_assert(ptr != MAP_FAILED);
 	gem_close(vgem, scratch.handle);
 
-	work(i915, dmabuf, 0);
+	work(i915, dmabuf, intel_ctx_0(i915), 0);
 
 	/* The work should have been cancelled */
 
@@ -1041,12 +1042,20 @@ static void test_flip(int i915, int vgem, unsigned hang)
 }
 
 static void test_each_engine(const char *name, int vgem, int i915,
-			     void (*fn)(int i915, int vgem, unsigned int flags))
+			     void (*fn)(int i915, int vgem,
+					const intel_ctx_t *ctx,
+					unsigned int flags))
 {
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx;
+
+	if (gem_has_contexts(i915))
+		ctx = intel_ctx_create_all_physical(i915);
+	else
+		ctx = intel_ctx_0(i915);
 
 	igt_subtest_with_dynamic(name) {
-		__for_each_physical_engine(i915, e) {
+		for_each_ctx_engine(i915, ctx, e) {
 			if (!gem_class_can_store_dword(i915, e->class))
 				continue;
 
@@ -1055,7 +1064,7 @@ static void test_each_engine(const char *name, int vgem, int i915,
 
 			igt_dynamic_f("%s", e->name) {
 				gem_quiescent_gpu(i915);
-				fn(i915, vgem, e->flags);
+				fn(i915, vgem, ctx, e->flags);
 			}
 		}
 	}
@@ -1109,7 +1118,8 @@ igt_main
 	{
 		static const struct {
 			const char *name;
-			void (*fn)(int i915, int vgem, unsigned int engine);
+			void (*fn)(int i915, int vgem, const intel_ctx_t *ctx,
+				   unsigned int engine);
 		} tests[] = {
 			{ "sync", test_sync },
 			{ "busy", test_busy },
-- 
2.29.2


* [igt-dev] [RFC 24/30] tests/gem_exec_whisper: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (22 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 23/30] tests/prime_vgem: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 25/30] tests/i915/gem_ctx_exec: " Jason Ekstrand
                   ` (7 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_exec_whisper.c | 86 +++++++++++++++++++++++------------
 1 file changed, 57 insertions(+), 29 deletions(-)

diff --git a/tests/i915/gem_exec_whisper.c b/tests/i915/gem_exec_whisper.c
index 71bd610c..24e706ad 100644
--- a/tests/i915/gem_exec_whisper.c
+++ b/tests/i915/gem_exec_whisper.c
@@ -28,12 +28,14 @@
  */
 
 #include "i915/gem.h"
+#include "i915/gem_vm.h"
 #include "igt.h"
 #include "igt_debugfs.h"
 #include "igt_rapl.h"
 #include "igt_gt.h"
 #include "igt_rand.h"
 #include "igt_sysfs.h"
+#include "intel_ctx.h"
 
 #define ENGINE_MASK  (I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK)
 
@@ -81,13 +83,14 @@ static void verify_reloc(int fd, uint32_t handle,
 #define BASIC 0x400
 
 struct hang {
+	intel_ctx_t *ctx;
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	int fd;
 };
 
-static void init_hang(struct hang *h, int fd)
+static void init_hang(struct hang *h, int fd, const intel_ctx_cfg_t *cfg)
 {
 	uint32_t *batch;
 	int i, gen;
@@ -97,6 +100,13 @@ static void init_hang(struct hang *h, int fd)
 
 	gen = intel_gen(intel_get_drm_devid(h->fd));
 
+	if (gem_has_contexts(fd)) {
+		h->ctx = intel_ctx_create(h->fd, cfg);
+		h->execbuf.rsvd1 = h->ctx->id;
+	} else {
+		h->ctx = NULL;
+	}
+
 	memset(&h->execbuf, 0, sizeof(h->execbuf));
 	h->execbuf.buffers_ptr = to_user_pointer(&h->obj);
 	h->execbuf.buffer_count = 1;
@@ -156,6 +166,7 @@ static void submit_hang(struct hang *h, unsigned *engines, int nengine, unsigned
 
 static void fini_hang(struct hang *h)
 {
+	intel_ctx_destroy(h->fd, h->ctx);
 	close(h->fd);
 }
 
@@ -165,7 +176,8 @@ static void ctx_set_random_priority(int fd, uint32_t ctx)
 	gem_context_set_priority(fd, ctx, prio);
 }
 
-static void whisper(int fd, unsigned engine, unsigned flags)
+static void whisper(int fd, const intel_ctx_t *ctx,
+		    unsigned engine, unsigned flags)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -179,7 +191,8 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 	const struct intel_execution_engine2 *e;
 	struct hang hang;
 	int fds[64];
-	uint32_t contexts[64];
+	intel_ctx_cfg_t local_cfg;
+	intel_ctx_t *contexts[64];
 	unsigned nengine;
 	uint32_t batch[16];
 	unsigned int relocations = 0;
@@ -203,7 +216,7 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 
 	nengine = 0;
 	if (engine == ALL_ENGINES) {
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			if (gem_class_can_store_dword(fd, e->class))
 				engines[nengine++] = e->flags;
 		}
@@ -220,10 +233,10 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 		gem_require_contexts(fd);
 
 	if (flags & QUEUES)
-		igt_require(gem_has_queues(fd));
+		igt_require(gem_has_vm(fd));
 
 	if (flags & HANG)
-		init_hang(&hang, fd);
+		init_hang(&hang, fd, &ctx->cfg);
 
 	nchild = 1;
 	if (flags & FORKED)
@@ -272,6 +285,7 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 			execbuf.flags |= I915_EXEC_NO_RELOC;
 			if (gen < 6)
 				execbuf.flags |= I915_EXEC_SECURE;
+			execbuf.rsvd1 = ctx->id;
 			igt_require(__gem_execbuf(fd, &execbuf) == 0);
 			scratch = tmp[0];
 			store = tmp[1];
@@ -293,18 +307,20 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 		igt_assert(loc == sizeof(uint32_t) * i);
 		batch[++i] = MI_BATCH_BUFFER_END;
 
-		if (flags & CONTEXTS) {
-			for (n = 0; n < 64; n++)
-				contexts[n] = gem_context_clone_with_engines(fd, 0);
-		}
-		if (flags & QUEUES) {
-			for (n = 0; n < 64; n++)
-				contexts[n] = gem_queue_create(fd);
-		}
 		if (flags & FDS) {
 			for (n = 0; n < 64; n++) {
 				fds[n] = gem_reopen_driver(fd);
-				gem_context_copy_engines(fd, 0, fds[n], 0);
+			}
+		}
+		if (flags & (CONTEXTS | QUEUES | FDS)) {
+			local_cfg = ctx->cfg;
+			if (flags & QUEUES) {
+				igt_assert(!(flags & FDS));
+				local_cfg.vm = gem_vm_create(fd);
+			}
+			for (n = 0; n < 64; n++) {
+				int this_fd = (flags & FDS) ? fds[n] : fd;
+				contexts[n] = intel_ctx_create(this_fd, &local_cfg);
 			}
 		}
 
@@ -413,8 +429,8 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 						execbuf.flags &= ~ENGINE_MASK;
 						execbuf.flags |= engines[rand() % nengine];
 					}
-					if (flags & (CONTEXTS | QUEUES)) {
-						execbuf.rsvd1 = contexts[rand() % 64];
+					if (flags & (CONTEXTS | QUEUES | FDS)) {
+						execbuf.rsvd1 = contexts[rand() % 64]->id;
 						if (flags & PRIORITY)
 							ctx_set_random_priority(this_fd, execbuf.rsvd1);
 					}
@@ -442,7 +458,7 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 					}
 				}
 				execbuf.flags &= ~ENGINE_MASK;
-				execbuf.rsvd1 = 0;
+				execbuf.rsvd1 = ctx->id;
 				execbuf.buffers_ptr = to_user_pointer(&tmp);
 
 				tmp[0] = tmp[1];
@@ -492,16 +508,22 @@ static void whisper(int fd, unsigned engine, unsigned flags)
 		gem_close(fd, scratch.handle);
 		gem_close(fd, store.handle);
 
+		if (flags & (CONTEXTS | QUEUES | FDS)) {
+			for (n = 0; n < 64; n++) {
+				int this_fd = (flags & FDS) ? fds[n] : fd;
+				intel_ctx_destroy(this_fd, contexts[n]);
+			}
+			if (local_cfg.vm) {
+				igt_assert(!(flags & FDS));
+				gem_vm_destroy(fd, local_cfg.vm);
+			}
+		}
+		for (n = 0; n < QLEN; n++)
+			gem_close(fd, batches[n].handle);
 		if (flags & FDS) {
 			for (n = 0; n < 64; n++)
 				close(fds[n]);
 		}
-		if (flags & (CONTEXTS | QUEUES)) {
-			for (n = 0; n < 64; n++)
-				gem_context_destroy(fd, contexts[n]);
-		}
-		for (n = 0; n < QLEN; n++)
-			gem_close(fd, batches[n].handle);
 	}
 
 	igt_waitchildren();
@@ -554,6 +576,7 @@ igt_main
 		{ NULL }
 	};
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx;
 	int fd = -1;
 
 	igt_fixture {
@@ -562,16 +585,21 @@ igt_main
 		igt_require(gem_can_store_dword(fd, 0));
 		gem_submission_print_method(fd);
 
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
+
 		igt_fork_hang_detector(fd);
 	}
 
 	for (const struct mode *m = modes; m->name; m++) {
 		igt_subtest_f("%s%s",
 			      m->flags & BASIC ? "basic-" : "", m->name)
-			whisper(fd, ALL_ENGINES, m->flags);
+			whisper(fd, ctx, ALL_ENGINES, m->flags);
 		igt_subtest_f("%s%s-all",
 			      m->flags & BASIC ? "basic-" : "", m->name)
-			whisper(fd, ALL_ENGINES, m->flags | ALL);
+			whisper(fd, ctx, ALL_ENGINES, m->flags | ALL);
 	}
 
 	for (const struct mode *m = modes; m->name; m++) {
@@ -579,12 +607,12 @@ igt_main
 			continue;
 
 		igt_subtest_with_dynamic_f("%s", m->name) {
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				if (!gem_class_can_store_dword(fd, e->class))
 					continue;
 
 				igt_dynamic_f("%s", e->name)
-					whisper(fd, e->flags, m->flags);
+					whisper(fd, ctx, e->flags, m->flags);
 			}
 		}
 	}
@@ -598,7 +626,7 @@ igt_main
 			if (m->flags & INTERRUPTIBLE)
 				continue;
 			igt_subtest_f("hang-%s", m->name)
-				whisper(fd, ALL_ENGINES, m->flags | HANG);
+				whisper(fd, ctx, ALL_ENGINES, m->flags | HANG);
 		}
 	}
 
-- 
2.29.2


* [igt-dev] [RFC 25/30] tests/i915/gem_ctx_exec: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (23 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 24/30] tests/gem_exec_whisper: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 26/30] tests/i915/gem_exec_suspend: " Jason Ekstrand
                   ` (6 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_ctx_exec.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ctx_exec.c
index 03c66bf7..6dfac11c 100644
--- a/tests/i915/gem_ctx_exec.c
+++ b/tests/i915/gem_ctx_exec.c
@@ -267,7 +267,7 @@ static void nohangcheck_hostile(int i915)
 	const struct intel_execution_engine2 *e;
 	igt_hang_t hang;
 	int fence = -1;
-	uint32_t ctx;
+	intel_ctx_t *ctx;
 	int err = 0;
 	int dir;
 
@@ -281,12 +281,12 @@ static void nohangcheck_hostile(int i915)
 	dir = igt_params_open(i915);
 	igt_require(dir != -1);
 
-	ctx = gem_context_create(i915);
-	hang = igt_allow_hang(i915, ctx, 0);
+	ctx = intel_ctx_create_all_physical(i915);
+	hang = igt_allow_hang(i915, ctx->id, 0);
 
 	igt_require(__enable_hangcheck(dir, false));
 
-	____for_each_physical_engine(i915, ctx, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		igt_spin_t *spin;
 		int new;
 
@@ -294,7 +294,7 @@ static void nohangcheck_hostile(int i915)
 		gem_engine_property_printf(i915, e->name,
 					   "preempt_timeout_ms", "%d", 50);
 
-		spin = __igt_spin_new(i915, ctx,
+		spin = __igt_spin_new(i915, .ctx = ctx,
 				      .engine = e->flags,
 				      .flags = (IGT_SPIN_NO_PREEMPTION |
 						IGT_SPIN_FENCE_OUT));
@@ -315,7 +315,7 @@ static void nohangcheck_hostile(int i915)
 			fence = tmp;
 		}
 	}
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 	igt_assert(fence != -1);
 
 	if (sync_fence_wait(fence, MSEC_PER_SEC)) { /* 640ms preempt-timeout */
-- 
2.29.2


* [igt-dev] [RFC 26/30] tests/i915/gem_exec_suspend: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (24 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 25/30] tests/i915/gem_ctx_exec: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 27/30] tests/i915/gem_sync: " Jason Ekstrand
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_exec_suspend.c | 56 +++++++++++++++++++++--------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/tests/i915/gem_exec_suspend.c b/tests/i915/gem_exec_suspend.c
index a31dd662..81625a80 100644
--- a/tests/i915/gem_exec_suspend.c
+++ b/tests/i915/gem_exec_suspend.c
@@ -50,7 +50,8 @@
 #define CACHED (1<<8)
 #define HANG (2<<8)
 
-static void run_test(int fd, unsigned engine, unsigned flags);
+static void run_test(int fd, const intel_ctx_t *ctx,
+		     unsigned engine, unsigned flags);
 
 static void check_bo(int fd, uint32_t handle)
 {
@@ -65,12 +66,13 @@ static void check_bo(int fd, uint32_t handle)
 	munmap(map, 4096);
 }
 
-static void test_all(int fd, unsigned flags)
+static void test_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 {
-	run_test(fd, ALL_ENGINES, flags & ~0xff);
+	run_test(fd, ctx, ALL_ENGINES, flags & ~0xff);
 }
 
-static void run_test(int fd, unsigned engine, unsigned flags)
+static void run_test(int fd, const intel_ctx_t *ctx,
+		     unsigned engine, unsigned flags)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
@@ -85,7 +87,7 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 	if (engine == ALL_ENGINES) {
 		const struct intel_execution_engine2 *e;
 
-		__for_each_physical_engine(fd, e) {
+		for_each_ctx_engine(fd, ctx, e) {
 			if (gem_class_can_store_dword(fd, e->class))
 				engines[nengine++] = e->flags;
 		}
@@ -96,7 +98,7 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 
 	/* Before suspending, check normal operation */
 	if (mode(flags) != NOSLEEP)
-		test_all(fd, flags);
+		test_all(fd, ctx, flags);
 
 	gem_quiescent_gpu(fd);
 
@@ -106,6 +108,7 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 	execbuf.flags = 1 << 11;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = gem_create(fd, 4096);
@@ -202,7 +205,7 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 
 	/* After resume, make sure it still works */
 	if (mode(flags) != NOSLEEP)
-		test_all(fd, flags);
+		test_all(fd, ctx, flags);
 }
 
 struct battery_sample {
@@ -229,7 +232,8 @@ static double d_time(const struct battery_sample *after,
 		(after->tv.tv_nsec - before->tv.tv_nsec) * 1e-9); /* s */
 }
 
-static void power_test(int i915, unsigned engine, unsigned flags)
+static void power_test(int i915, const intel_ctx_t *ctx,
+		       unsigned engine, unsigned flags)
 {
 	struct battery_sample before, after;
 	char *status;
@@ -249,7 +253,7 @@ static void power_test(int i915, unsigned engine, unsigned flags)
 	igt_set_autoresume_delay(5 * 60); /* 5 minutes; longer == more stable */
 
 	igt_assert(get_power(dir, &before));
-	run_test(i915, engine, flags);
+	run_test(i915, ctx, engine, flags);
 	igt_assert(get_power(dir, &after));
 
 	igt_set_autoresume_delay(0);
@@ -273,6 +277,7 @@ igt_main
 	}, *m;
 	const struct intel_execution_engine2 *e;
 	igt_hang_t hang;
+	const intel_ctx_t *ctx;
 	int fd;
 
 	igt_fixture {
@@ -280,38 +285,43 @@ igt_main
 		igt_require_gem(fd);
 		igt_require(gem_can_store_dword(fd, 0));
 
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
+
 		igt_fork_hang_detector(fd);
 	}
 
 	igt_subtest("basic")
-		run_test(fd, ALL_ENGINES, NOSLEEP);
+		run_test(fd, ctx, ALL_ENGINES, NOSLEEP);
 	igt_subtest("basic-S0")
-		run_test(fd, ALL_ENGINES, IDLE);
+		run_test(fd, ctx, ALL_ENGINES, IDLE);
 	igt_subtest("basic-S3-devices")
-		run_test(fd, ALL_ENGINES, SUSPEND_DEVICES);
+		run_test(fd, ctx, ALL_ENGINES, SUSPEND_DEVICES);
 	igt_subtest("basic-S3")
-		run_test(fd, ALL_ENGINES, SUSPEND);
+		run_test(fd, ctx, ALL_ENGINES, SUSPEND);
 	igt_subtest("basic-S4-devices")
-		run_test(fd, ALL_ENGINES, HIBERNATE_DEVICES);
+		run_test(fd, ctx, ALL_ENGINES, HIBERNATE_DEVICES);
 	igt_subtest("basic-S4")
-		run_test(fd, ALL_ENGINES, HIBERNATE);
+		run_test(fd, ctx, ALL_ENGINES, HIBERNATE);
 
 	for (m = modes; m->suffix; m++) {
 		igt_subtest_with_dynamic_f("uncached-%s", m->suffix) {
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				if (!gem_class_can_store_dword(fd, e->class))
 					continue;
 				igt_dynamic_f("%s", e->name)
-					run_test(fd, e->flags, m->mode | UNCACHED);
+					run_test(fd, ctx, e->flags, m->mode | UNCACHED);
 			}
 		}
 
 		igt_subtest_with_dynamic_f("cached-%s", m->suffix) {
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				if (!gem_class_can_store_dword(fd, e->class))
 					continue;
 				igt_dynamic_f("%s", e->name)
-					run_test(fd, e->flags, m->mode | CACHED);
+					run_test(fd, ctx, e->flags, m->mode | CACHED);
 			}
 		}
 	}
@@ -322,14 +332,14 @@ igt_main
 	}
 
 	igt_subtest("hang-S3")
-		run_test(fd, 0, SUSPEND | HANG);
+		run_test(fd, intel_ctx_0(fd), 0, SUSPEND | HANG);
 	igt_subtest("hang-S4")
-		run_test(fd, 0, HIBERNATE | HANG);
+		run_test(fd, intel_ctx_0(fd), 0, HIBERNATE | HANG);
 
 	igt_subtest("power-S0")
-		power_test(fd, 0, IDLE);
+		power_test(fd, intel_ctx_0(fd), 0, IDLE);
 	igt_subtest("power-S3")
-		power_test(fd, 0, SUSPEND);
+		power_test(fd, intel_ctx_0(fd), 0, SUSPEND);
 
 	igt_fixture {
 		igt_disallow_hang(fd, hang);
-- 
2.29.2


* [igt-dev] [RFC 27/30] tests/i915/gem_sync: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (25 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 26/30] tests/i915/gem_exec_suspend: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 28/30] tests/i915/gem_userptr_blits: " Jason Ekstrand
                   ` (4 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_sync.c | 162 ++++++++++++++++++++++++------------------
 1 file changed, 93 insertions(+), 69 deletions(-)

diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c
index 58781a5e..39a90587 100644
--- a/tests/i915/gem_sync.c
+++ b/tests/i915/gem_sync.c
@@ -96,38 +96,33 @@ filter_engines_can_store_dword(int fd, struct intel_engine_data *ied)
 	ied->nengines = count;
 }
 
-static struct intel_engine_data list_store_engines(int fd, unsigned ring)
+static struct intel_engine_data
+list_engines(int fd, const intel_ctx_t *ctx, unsigned ring)
 {
 	struct intel_engine_data ied = { };
 
 	if (ring == ALL_ENGINES) {
-		ied = intel_init_engine_list(fd, 0);
-		filter_engines_can_store_dword(fd, &ied);
+		ied = intel_engine_list_for_ctx_cfg(fd, &ctx->cfg);
+	} else if (ctx->cfg.num_engines) {
+		igt_assert(ring < ctx->cfg.num_engines);
+		ied.engines[ied.nengines].flags = ring;
+		strcpy(ied.engines[ied.nengines].name, " ");
+		ied.nengines++;
 	} else {
-		if (gem_has_ring(fd, ring) && gem_can_store_dword(fd, ring)) {
-			ied.engines[ied.nengines].flags = ring;
-			strcpy(ied.engines[ied.nengines].name, " ");
-			ied.nengines++;
-		}
+		igt_assert(gem_has_ring(fd, ring));
+		ied.engines[ied.nengines].flags = ring;
+		strcpy(ied.engines[ied.nengines].name, " ");
+		ied.nengines++;
 	}
 
 	return ied;
 }
 
-static struct intel_engine_data list_engines(int fd, unsigned ring)
+static struct intel_engine_data
+list_store_engines(int fd, const intel_ctx_t *ctx, unsigned ring)
 {
-	struct intel_engine_data ied = { };
-
-	if (ring == ALL_ENGINES) {
-		ied = intel_init_engine_list(fd, 0);
-	} else {
-		if (gem_has_ring(fd, ring)) {
-			ied.engines[ied.nengines].flags = ring;
-			strcpy(ied.engines[ied.nengines].name, " ");
-			ied.nengines++;
-		}
-	}
-
+	struct intel_engine_data ied = list_engines(fd, ctx, ring);
+	filter_engines_can_store_dword(fd, &ied);
 	return ied;
 }
 
@@ -149,11 +144,12 @@ static void xchg_engine(void *array, unsigned i, unsigned j)
 }
 
 static void
-sync_ring(int fd, unsigned ring, int num_children, int timeout)
+sync_ring(int fd, const intel_ctx_t *ctx,
+	  unsigned ring, int num_children, int timeout)
 {
 	struct intel_engine_data ied;
 
-	ied = list_engines(fd, ring);
+	ied = list_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 	num_children *= ied.nengines;
 
@@ -173,6 +169,7 @@ sync_ring(int fd, unsigned ring, int num_children, int timeout)
 		execbuf.buffers_ptr = to_user_pointer(&object);
 		execbuf.buffer_count = 1;
 		execbuf.flags = ied_flags(&ied, child);
+		execbuf.rsvd1 = ctx->id;
 		gem_execbuf(fd, &execbuf);
 		gem_sync(fd, object.handle);
 
@@ -195,7 +192,8 @@ sync_ring(int fd, unsigned ring, int num_children, int timeout)
 }
 
 static void
-idle_ring(int fd, unsigned int ring, int num_children, int timeout)
+idle_ring(int fd, const intel_ctx_t *ctx, unsigned int ring,
+	  int num_children, int timeout)
 {
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 object;
@@ -213,6 +211,7 @@ idle_ring(int fd, unsigned int ring, int num_children, int timeout)
 	execbuf.buffers_ptr = to_user_pointer(&object);
 	execbuf.buffer_count = 1;
 	execbuf.flags = ring;
+	execbuf.rsvd1 = ctx->id;
 	gem_execbuf(fd, &execbuf);
 	gem_sync(fd, object.handle);
 
@@ -234,11 +233,12 @@ idle_ring(int fd, unsigned int ring, int num_children, int timeout)
 }
 
 static void
-wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
+wakeup_ring(int fd, const intel_ctx_t *ctx, unsigned ring,
+	    int timeout, int wlen)
 {
 	struct intel_engine_data ied;
 
-	ied = list_store_engines(fd, ring);
+	ied = list_store_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -258,8 +258,10 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 		execbuf.buffers_ptr = to_user_pointer(&object);
 		execbuf.buffer_count = 1;
 		execbuf.flags = ied_flags(&ied, child);
+		execbuf.rsvd1 = ctx->id;
 
 		spin = __igt_spin_new(fd,
+				      .ctx = ctx,
 				      .engine = execbuf.flags,
 				      .flags = (IGT_SPIN_POLL_RUN |
 						IGT_SPIN_FAST));
@@ -326,12 +328,12 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 }
 
-static void active_ring(int fd, unsigned int ring,
+static void active_ring(int fd, const intel_ctx_t *ctx, unsigned int ring,
 			int num_children, int timeout)
 {
 	struct intel_engine_data ied;
 
-	ied = list_store_engines(fd, ring);
+	ied = list_store_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -341,10 +343,12 @@ static void active_ring(int fd, unsigned int ring,
 		igt_spin_t *spin[2];
 
 		spin[0] = __igt_spin_new(fd,
+					 .ctx = ctx,
 					 .engine = ied_flags(&ied, child),
 					 .flags = IGT_SPIN_FAST);
 
 		spin[1] = __igt_spin_new(fd,
+					 .ctx = ctx,
 					 .engine = ied_flags(&ied, child),
 					 .flags = IGT_SPIN_FAST);
 
@@ -376,11 +380,12 @@ static void active_ring(int fd, unsigned int ring,
 }
 
 static void
-active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
+active_wakeup_ring(int fd, const intel_ctx_t *ctx, unsigned ring,
+		   int timeout, int wlen)
 {
 	struct intel_engine_data ied;
 
-	ied = list_store_engines(fd, ring);
+	ied = list_store_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -400,6 +405,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 		execbuf.buffers_ptr = to_user_pointer(&object);
 		execbuf.buffer_count = 1;
 		execbuf.flags = ied_flags(&ied, child);
+		execbuf.rsvd1 = ctx->id;
 
 		spin[0] = __igt_spin_new(fd,
 					 .engine = execbuf.flags,
@@ -490,12 +496,13 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 }
 
 static void
-store_ring(int fd, unsigned ring, int num_children, int timeout)
+store_ring(int fd, const intel_ctx_t *ctx, unsigned ring,
+	   int num_children, int timeout)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
-	ied = list_store_engines(fd, ring);
+	ied = list_store_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 	num_children *= ied.nengines;
 
@@ -516,6 +523,7 @@ store_ring(int fd, unsigned ring, int num_children, int timeout)
 		execbuf.flags |= I915_EXEC_HANDLE_LUT;
 		if (gen < 6)
 			execbuf.flags |= I915_EXEC_SECURE;
+		execbuf.rsvd1 = ctx->id;
 
 		memset(object, 0, sizeof(object));
 		object[0].handle = gem_create(fd, 4096);
@@ -586,14 +594,15 @@ store_ring(int fd, unsigned ring, int num_children, int timeout)
 }
 
 static void
-switch_ring(int fd, unsigned ring, int num_children, int timeout)
+switch_ring(int fd, const intel_ctx_t *ctx, unsigned ring,
+	    int num_children, int timeout)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
 	gem_require_contexts(fd);
 
-	ied = list_store_engines(fd, ring);
+	ied = list_store_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 	num_children *= ied.nengines;
 
@@ -603,6 +612,7 @@ switch_ring(int fd, unsigned ring, int num_children, int timeout)
 			struct drm_i915_gem_exec_object2 object[2];
 			struct drm_i915_gem_relocation_entry reloc[1024];
 			struct drm_i915_gem_execbuffer2 execbuf;
+			intel_ctx_t *ctx;
 		} contexts[2];
 		double elapsed, baseline;
 		unsigned long cycles;
@@ -620,7 +630,9 @@ switch_ring(int fd, unsigned ring, int num_children, int timeout)
 			c->execbuf.flags |= I915_EXEC_HANDLE_LUT;
 			if (gen < 6)
 				c->execbuf.flags |= I915_EXEC_SECURE;
-			c->execbuf.rsvd1 = gem_context_create(fd);
+
+			c->ctx = intel_ctx_create(fd, &ctx->cfg);
+			c->execbuf.rsvd1 = c->ctx->id;
 
 			memset(c->object, 0, sizeof(c->object));
 			c->object[0].handle = gem_create(fd, 4096);
@@ -716,7 +728,7 @@ switch_ring(int fd, unsigned ring, int num_children, int timeout)
 		for (int i = 0; i < ARRAY_SIZE(contexts); i++) {
 			gem_close(fd, contexts[i].object[1].handle);
 			gem_close(fd, contexts[i].object[0].handle);
-			gem_context_destroy(fd, contexts[i].execbuf.rsvd1);
+			intel_ctx_destroy(fd, contexts[i].ctx);
 		}
 	}
 	igt_waitchildren_timeout(timeout+10, NULL);
@@ -765,7 +777,8 @@ static void *waiter(void *arg)
 }
 
 static void
-__store_many(int fd, unsigned ring, int timeout, unsigned long *cycles)
+__store_many(int fd, const intel_ctx_t *ctx, unsigned ring,
+	     int timeout, unsigned long *cycles)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
@@ -784,6 +797,7 @@ __store_many(int fd, unsigned ring, int timeout, unsigned long *cycles)
 	execbuf.flags |= I915_EXEC_HANDLE_LUT;
 	if (gen < 6)
 		execbuf.flags |= I915_EXEC_SECURE;
+	execbuf.rsvd1 = ctx->id;
 
 	memset(object, 0, sizeof(object));
 	object[0].handle = gem_create(fd, 4096);
@@ -893,7 +907,8 @@ __store_many(int fd, unsigned ring, int timeout, unsigned long *cycles)
 }
 
 static void
-store_many(int fd, unsigned int ring, int num_children, int timeout)
+store_many(int fd, const intel_ctx_t *ctx, unsigned int ring,
+	   int num_children, int timeout)
 {
 	struct intel_engine_data ied;
 	unsigned long *shared;
@@ -901,14 +916,14 @@ store_many(int fd, unsigned int ring, int num_children, int timeout)
 	shared = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
 	igt_assert(shared != MAP_FAILED);
 
-	ied = list_store_engines(fd, ring);
+	ied = list_store_engines(fd, ctx, ring);
 	igt_require(ied.nengines);
 
 	intel_detect_and_clear_missed_interrupts(fd);
 
 	for (int n = 0; n < ied.nengines; n++) {
 		igt_fork(child, 1)
-			__store_many(fd,
+			__store_many(fd, ctx,
 				     ied_flags(&ied, n),
 				     timeout,
 				     &shared[n]);
@@ -924,11 +939,11 @@ store_many(int fd, unsigned int ring, int num_children, int timeout)
 }
 
 static void
-sync_all(int fd, int num_children, int timeout)
+sync_all(int fd, const intel_ctx_t *ctx, int num_children, int timeout)
 {
 	struct intel_engine_data ied;
 
-	ied = list_engines(fd, ALL_ENGINES);
+	ied = list_engines(fd, ctx, ALL_ENGINES);
 	igt_require(ied.nengines);
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -946,6 +961,7 @@ sync_all(int fd, int num_children, int timeout)
 		memset(&execbuf, 0, sizeof(execbuf));
 		execbuf.buffers_ptr = to_user_pointer(&object);
 		execbuf.buffer_count = 1;
+		execbuf.rsvd1 = ctx->id;
 		gem_execbuf(fd, &execbuf);
 		gem_sync(fd, object.handle);
 
@@ -970,12 +986,12 @@ sync_all(int fd, int num_children, int timeout)
 }
 
 static void
-store_all(int fd, int num_children, int timeout)
+store_all(int fd, const intel_ctx_t *ctx, int num_children, int timeout)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
 	struct intel_engine_data ied;
 
-	ied = list_store_engines(fd, ALL_ENGINES);
+	ied = list_store_engines(fd, ctx, ALL_ENGINES);
 	igt_require(ied.nengines);
 
 	intel_detect_and_clear_missed_interrupts(fd);
@@ -994,6 +1010,7 @@ store_all(int fd, int num_children, int timeout)
 		execbuf.flags |= I915_EXEC_HANDLE_LUT;
 		if (gen < 6)
 			execbuf.flags |= I915_EXEC_SECURE;
+		execbuf.rsvd1 = ctx->id;
 
 		memset(object, 0, sizeof(object));
 		object[0].handle = gem_create(fd, 4096);
@@ -1069,20 +1086,21 @@ store_all(int fd, int num_children, int timeout)
 }
 
 static void
-preempt(int fd, unsigned ring, int num_children, int timeout)
+preempt(int fd, const intel_ctx_t *ctx, unsigned ring,
+	int num_children, int timeout)
 {
 	struct intel_engine_data ied;
-	uint32_t ctx[2];
+	intel_ctx_t *tmp_ctx[2];
 
-	ied = list_engines(fd, ALL_ENGINES);
+	ied = list_engines(fd, ctx, ALL_ENGINES);
 	igt_require(ied.nengines);
 	num_children *= ied.nengines;
 
-	ctx[0] = gem_context_create(fd);
-	gem_context_set_priority(fd, ctx[0], MIN_PRIO);
+	tmp_ctx[0] = intel_ctx_create(fd, &ctx->cfg);
+	gem_context_set_priority(fd, tmp_ctx[0]->id, MIN_PRIO);
 
-	ctx[1] = gem_context_create(fd);
-	gem_context_set_priority(fd, ctx[1], MAX_PRIO);
+	tmp_ctx[1] = intel_ctx_create(fd, &ctx->cfg);
+	gem_context_set_priority(fd, tmp_ctx[1]->id, MAX_PRIO);
 
 	intel_detect_and_clear_missed_interrupts(fd);
 	igt_fork(child, num_children) {
@@ -1100,7 +1118,7 @@ preempt(int fd, unsigned ring, int num_children, int timeout)
 		execbuf.buffers_ptr = to_user_pointer(&object);
 		execbuf.buffer_count = 1;
 		execbuf.flags = ied_flags(&ied, child);
-		execbuf.rsvd1 = ctx[1];
+		execbuf.rsvd1 = tmp_ctx[1]->id;
 		gem_execbuf(fd, &execbuf);
 		gem_sync(fd, object.handle);
 
@@ -1109,7 +1127,7 @@ preempt(int fd, unsigned ring, int num_children, int timeout)
 		do {
 			igt_spin_t *spin =
 				__igt_spin_new(fd,
-					       .ctx_id = ctx[0],
+					       .ctx = tmp_ctx[0],
 					       .engine = execbuf.flags);
 
 			do {
@@ -1128,8 +1146,8 @@ preempt(int fd, unsigned ring, int num_children, int timeout)
 	igt_waitchildren_timeout(timeout+10, NULL);
 	igt_assert_eq(intel_detect_and_clear_missed_interrupts(fd), 0);
 
-	gem_context_destroy(fd, ctx[1]);
-	gem_context_destroy(fd, ctx[0]);
+	intel_ctx_destroy(fd, tmp_ctx[1]);
+	intel_ctx_destroy(fd, tmp_ctx[0]);
 }
 
 igt_main
@@ -1137,7 +1155,7 @@ igt_main
 	const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
 	const struct {
 		const char *name;
-		void (*func)(int fd, unsigned int engine,
+		void (*func)(int fd, const intel_ctx_t *ctx, unsigned int engine,
 			     int num_children, int timeout);
 		int num_children;
 		int timeout;
@@ -1172,6 +1190,7 @@ igt_main
 #define for_each_test(t, T) for(typeof(*T) *t = T; t->name; t++)
 
 	const struct intel_execution_engine2 *e;
+	const intel_ctx_t *ctx;
 	int fd = -1;
 
 	igt_fixture {
@@ -1180,6 +1199,11 @@ igt_main
 		gem_submission_print_method(fd);
 		gem_scheduler_print_capability(fd);
 
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
+
 		igt_fork_hang_detector(fd);
 	}
 
@@ -1188,7 +1212,7 @@ igt_main
 		igt_subtest_with_dynamic_f("%s", t->name) {
 			for (const struct intel_execution_ring *l = intel_execution_rings; l->name; l++) {
 				igt_dynamic_f("%s", l->name) {
-					t->func(fd, eb_ring(l),
+					t->func(fd, intel_ctx_0(fd), eb_ring(l),
 						t->num_children, t->timeout);
 				}
 			}
@@ -1196,30 +1220,30 @@ igt_main
 	}
 
 	igt_subtest("basic-all")
-		sync_all(fd, 1, 2);
+		sync_all(fd, ctx, 1, 2);
 	igt_subtest("basic-store-all")
-		store_all(fd, 1, 2);
+		store_all(fd, ctx, 1, 2);
 
 	igt_subtest("all")
-		sync_all(fd, 1, 20);
+		sync_all(fd, ctx, 1, 20);
 	igt_subtest("store-all")
-		store_all(fd, 1, 20);
+		store_all(fd, ctx, 1, 20);
 	igt_subtest("forked-all")
-		sync_all(fd, ncpus, 20);
+		sync_all(fd, ctx, ncpus, 20);
 	igt_subtest("forked-store-all")
-		store_all(fd, ncpus, 20);
+		store_all(fd, ctx, ncpus, 20);
 
 	for_each_test(t, all) {
 		igt_subtest_f("%s", t->name)
-			t->func(fd, ALL_ENGINES, t->num_children, t->timeout);
+			t->func(fd, ctx, ALL_ENGINES, t->num_children, t->timeout);
 	}
 
 	/* New way of selecting engines. */
 	for_each_test(t, individual) {
 		igt_subtest_with_dynamic_f("%s", t->name) {
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				igt_dynamic_f("%s", e->name) {
-					t->func(fd, e->flags,
+					t->func(fd, ctx, e->flags,
 						t->num_children, t->timeout);
 				}
 			}
@@ -1234,11 +1258,11 @@ igt_main
 		}
 
 		igt_subtest("preempt-all")
-			preempt(fd, ALL_ENGINES, 1, 20);
+			preempt(fd, ctx, ALL_ENGINES, 1, 20);
 		igt_subtest_with_dynamic("preempt") {
-			__for_each_physical_engine(fd, e) {
+			for_each_ctx_engine(fd, ctx, e) {
 				igt_dynamic_f("%s", e->name)
-					preempt(fd, e->flags, ncpus, 20);
+					preempt(fd, ctx, e->flags, ncpus, 20);
 			}
 		}
 	}
-- 
2.29.2


* [igt-dev] [RFC 28/30] tests/i915/gem_userptr_blits: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (26 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 27/30] tests/i915/gem_sync: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 29/30] tests/i915/gem_wait: " Jason Ekstrand
                   ` (3 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_userptr_blits.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/tests/i915/gem_userptr_blits.c b/tests/i915/gem_userptr_blits.c
index 7a80c016..9e7e8abf 100644
--- a/tests/i915/gem_userptr_blits.c
+++ b/tests/i915/gem_userptr_blits.c
@@ -623,7 +623,7 @@ static void test_nohangcheck_hostile(int i915)
 {
 	const struct intel_execution_engine2 *e;
 	igt_hang_t hang;
-	uint32_t ctx;
+	intel_ctx_t *ctx;
 	int fence = -1;
 	int err = 0;
 	int dir;
@@ -638,11 +638,11 @@ static void test_nohangcheck_hostile(int i915)
 	dir = igt_params_open(i915);
 	igt_require(dir != -1);
 
-	ctx = gem_context_create(i915);
-	hang = igt_allow_hang(i915, ctx, 0);
+	ctx = intel_ctx_create_all_physical(i915);
+	hang = igt_allow_hang(i915, ctx->id, 0);
 	igt_require(__enable_hangcheck(dir, false));
 
-	____for_each_physical_engine(i915, ctx, e) {
+	for_each_ctx_engine(i915, ctx, e) {
 		igt_spin_t *spin;
 		int new;
 
@@ -650,7 +650,7 @@ static void test_nohangcheck_hostile(int i915)
 		gem_engine_property_printf(i915, e->name,
 					   "preempt_timeout_ms", "%d", 50);
 
-		spin = __igt_spin_new(i915, ctx,
+		spin = __igt_spin_new(i915, .ctx = ctx,
 				      .engine = e->flags,
 				      .flags = (IGT_SPIN_NO_PREEMPTION |
 						IGT_SPIN_USERPTR |
@@ -672,7 +672,7 @@ static void test_nohangcheck_hostile(int i915)
 			fence = tmp;
 		}
 	}
-	gem_context_destroy(i915, ctx);
+	intel_ctx_destroy(i915, ctx);
 	igt_assert(fence != -1);
 
 	if (sync_fence_wait(fence, MSEC_PER_SEC)) { /* 640ms preempt-timeout */
@@ -1352,7 +1352,8 @@ static int test_dmabuf(void)
 	return 0;
 }
 
-static void store_dword_rand(int i915, unsigned int engine,
+static void store_dword_rand(int i915, const intel_ctx_t *ctx,
+			     unsigned int engine,
 			     uint32_t target, uint64_t sz,
 			     int count)
 {
@@ -1384,6 +1385,7 @@ static void store_dword_rand(int i915, unsigned int engine,
 	exec.flags = engine;
 	if (gen < 6)
 		exec.flags |= I915_EXEC_SECURE;
+	exec.rsvd1 = ctx->id;
 
 	i = 0;
 	for (int n = 0; n < count; n++) {
@@ -1501,17 +1503,24 @@ static void test_readonly(int i915)
 
 	igt_fork(child, 1) {
 		const struct intel_execution_engine2 *e;
+		const intel_ctx_t *ctx;
+		intel_ctx_t *tmp_ctx = NULL;
 		char *orig;
 
 		orig = g_compute_checksum_for_data(G_CHECKSUM_SHA1, pages, sz);
 
 		gem_userptr(i915, space, total, true, userptr_flags, &rhandle);
 
-		__for_each_physical_engine(i915, e) {
+		if (gem_has_contexts(i915))
+			ctx = tmp_ctx = intel_ctx_create_all_physical(i915);
+		else
+			ctx = intel_ctx_0(i915);
+
+		for_each_ctx_engine(i915, ctx, e) {
 			char *ref, *result;
 
 			/* First tweak the backing store through the write */
-			store_dword_rand(i915, e->flags, whandle, sz, 64);
+			store_dword_rand(i915, ctx, e->flags, whandle, sz, 64);
 			gem_sync(i915, whandle);
 			ref = g_compute_checksum_for_data(G_CHECKSUM_SHA1,
 							  pages, sz);
@@ -1520,7 +1529,7 @@ static void test_readonly(int i915)
 			igt_assert(strcmp(ref, orig));
 
 			/* Now try the same through the read-only handle */
-			store_dword_rand(i915, e->flags, rhandle, total, 64);
+			store_dword_rand(i915, ctx, e->flags, rhandle, total, 64);
 			gem_sync(i915, rhandle);
 			result = g_compute_checksum_for_data(G_CHECKSUM_SHA1,
 							     pages, sz);
@@ -1539,6 +1548,8 @@ static void test_readonly(int i915)
 
 		gem_close(i915, rhandle);
 
+		intel_ctx_destroy(i915, tmp_ctx);
+
 		g_free(orig);
 	}
 	igt_waitchildren();
-- 
2.29.2


* [igt-dev] [RFC 29/30] tests/i915/gem_wait: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (27 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 28/30] tests/i915/gem_userptr_blits: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01  2:12 ` [igt-dev] [RFC 30/30] tests/i915/gem_request_retire: " Jason Ekstrand
                   ` (2 subsequent siblings)
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_wait.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/tests/i915/gem_wait.c b/tests/i915/gem_wait.c
index 7b2b1c2b..a3e7f0c1 100644
--- a/tests/i915/gem_wait.c
+++ b/tests/i915/gem_wait.c
@@ -74,13 +74,15 @@ static void invalid_buf(int fd)
 
 #define timespec_isset(x) ((x)->tv_sec | (x)->tv_nsec)
 
-static void basic(int fd, unsigned engine, unsigned flags)
+static void basic(int fd, const intel_ctx_t *ctx, unsigned engine,
+		  unsigned flags)
 {
 	IGT_CORK_HANDLE(cork);
 	uint32_t plug =
 		flags & (WRITE | AWAIT) ? igt_cork_plug(&cork, fd) : 0;
 	igt_spin_t *spin =
 		igt_spin_new(fd,
+			     .ctx = ctx,
 			     .engine = engine,
 			     .dependency = plug,
 			     .flags = (flags & HANG) ? IGT_SPIN_NO_PREEMPTION : 0);
@@ -146,21 +148,22 @@ static void basic(int fd, unsigned engine, unsigned flags)
 	igt_spin_free(fd, spin);
 }
 
-static void test_all_engines(const char *name, int i915, unsigned int test)
+static void test_all_engines(const char *name, int i915, const intel_ctx_t *ctx,
+			     unsigned int test)
 {
 	const struct intel_execution_engine2 *e;
 
 	igt_subtest_with_dynamic(name) {
 		igt_dynamic("all") {
 			gem_quiescent_gpu(i915);
-			basic(i915, ALL_ENGINES, test);
+			basic(i915, ctx, ALL_ENGINES, test);
 			gem_quiescent_gpu(i915);
 		}
 
-		__for_each_physical_engine(i915, e) {
+		for_each_ctx_engine(i915, ctx, e) {
 			igt_dynamic_f("%s", e->name) {
 				gem_quiescent_gpu(i915);
-				basic(i915, e->flags, test);
+				basic(i915, ctx, e->flags, test);
 				gem_quiescent_gpu(i915);
 			}
 		}
@@ -169,11 +172,17 @@ static void test_all_engines(const char *name, int i915, unsigned int test)
 
 igt_main
 {
+	const intel_ctx_t *ctx = NULL;
 	int fd = -1;
 
 	igt_fixture {
 		fd = drm_open_driver_master(DRIVER_INTEL);
 		igt_require_gem(fd);
+
+		if (gem_has_contexts(fd))
+			ctx = intel_ctx_create_all_physical(fd);
+		else
+			ctx = intel_ctx_0(fd);
 	}
 
 	igt_subtest("invalid-flags")
@@ -201,7 +210,7 @@ igt_main
 		}
 
 		for (const typeof(*tests) *t = tests; t->name; t++)
-			test_all_engines(t->name, fd, t->flags);
+			test_all_engines(t->name, fd, ctx, t->flags);
 
 		igt_fixture {
 			igt_stop_signal_helper();
@@ -228,7 +237,7 @@ igt_main
 		}
 
 		for (const typeof(*tests) *t = tests; t->name; t++)
-			test_all_engines(t->name, fd, t->flags);
+			test_all_engines(t->name, fd, ctx, t->flags);
 
 		igt_fixture {
 			igt_stop_signal_helper();
-- 
2.29.2


* [igt-dev] [RFC 30/30] tests/i915/gem_request_retire: Convert to intel_ctx_t
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (28 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 29/30] tests/i915/gem_wait: " Jason Ekstrand
@ 2021-04-01  2:12 ` Jason Ekstrand
  2021-04-01 10:20 ` [igt-dev] ✗ Fi.CI.BUILD: failure for Stop cloning contexts Patchwork
  2021-04-08 18:43 ` [igt-dev] [RFC 00/30] " Daniel Vetter
  31 siblings, 0 replies; 38+ messages in thread
From: Jason Ekstrand @ 2021-04-01  2:12 UTC (permalink / raw)
  To: igt-dev

---
 tests/i915/gem_request_retire.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/tests/i915/gem_request_retire.c b/tests/i915/gem_request_retire.c
index c23ddfb7..1b2878c3 100644
--- a/tests/i915/gem_request_retire.c
+++ b/tests/i915/gem_request_retire.c
@@ -62,24 +62,26 @@ static void
 test_retire_vma_not_inactive(int fd)
 {
 	struct intel_execution_engine2 *e;
-
+	intel_ctx_t *ctx;
 	igt_spin_t *bg = NULL;
 
-	__for_each_physical_engine(fd, e) {
+	ctx = intel_ctx_create_all_physical(fd);
+
+	for_each_ctx_engine(fd, ctx, e) {
 		igt_spin_t *spin;
-		uint32_t ctx;
+		intel_ctx_t *spin_ctx;
 
 		if (!bg) {
-			bg = igt_spin_new(fd, .engine = e->flags);
+			bg = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
 			continue;
 		}
 
-		ctx = gem_context_clone_with_engines(fd, 0);
-		spin = igt_spin_new(fd, ctx,
+		spin_ctx = intel_ctx_create(fd, &ctx->cfg);
+		spin = igt_spin_new(fd, .ctx = spin_ctx,
 				    .engine = e->flags,
 				    .dependency = bg->handle,
 				    .flags = IGT_SPIN_SOFTDEP);
-		gem_context_destroy(fd, ctx);
+		intel_ctx_destroy(fd, spin_ctx);
 		igt_spin_end(spin);
 
 		gem_sync(fd, spin->handle);
@@ -88,6 +90,7 @@ test_retire_vma_not_inactive(int fd)
 
 	igt_drop_caches_set(fd, DROP_RETIRE);
 	igt_spin_free(fd, bg);
+	intel_ctx_destroy(fd, ctx);
 }
 
 int fd;
-- 
2.29.2


* [igt-dev] ✗ Fi.CI.BUILD: failure for Stop cloning contexts
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (29 preceding siblings ...)
  2021-04-01  2:12 ` [igt-dev] [RFC 30/30] tests/i915/gem_request_retire: " Jason Ekstrand
@ 2021-04-01 10:20 ` Patchwork
  2021-04-08 18:43 ` [igt-dev] [RFC 00/30] " Daniel Vetter
  31 siblings, 0 replies; 38+ messages in thread
From: Patchwork @ 2021-04-01 10:20 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

== Series Details ==

Series: Stop cloning contexts
URL   : https://patchwork.freedesktop.org/series/88646/
State : failure

== Summary ==

Applying: lib/i915/gem_engine_topology: Expose the __query_engines helper
Applying: lib: Add an intel_ctx wrapper struct and helpers
Applying: lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t
Applying: tests/i915/gem_exec_basic: Convert to intel_ctx_t
Applying: lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id
Applying: lib/igt_spin: Support intel_ctx_t
Applying: tests/i915/gem_exec_fence: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_schedule: Convert to intel_ctx_t
Applying: tests/i915/perf_pmu: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_nop: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_reloc: Convert to intel_ctx_t
Applying: tests/i915/gem_busy: Convert to intel_ctx_t
Applying: tests/i915/gem_ctx_isolation: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_async: Convert to intel_ctx_t
Applying: tests/i915/sysfs_clients: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_fair: Convert to intel_ctx_t
Applying: tests/i915/gem_spin_batch: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_store: Convert to intel_ctx_t
Applying: tests/amdgpu/amd_prime: Convert to intel_ctx_t
Applying: tests/i915/i915_hangman: Convert to intel_ctx_t
Applying: tests/i915/gem_ringfill: Convert to intel_ctx_t
Applying: tests/prime_busy: Convert to intel_ctx_t
Applying: tests/prime_vgem: Convert to intel_ctx_t
Applying: tests/gem_exec_whisper: Convert to intel_ctx_t
Applying: tests/i915/gem_ctx_exec: Convert to intel_ctx_t
Applying: tests/i915/gem_exec_suspend: Convert to intel_ctx_t
Using index info to reconstruct a base tree...
M	tests/i915/gem_exec_suspend.c
Falling back to patching base and 3-way merge...
Auto-merging tests/i915/gem_exec_suspend.c
CONFLICT (content): Merge conflict in tests/i915/gem_exec_suspend.c
Patch failed at 0026 tests/i915/gem_exec_suspend: Convert to intel_ctx_t
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".



* Re: [igt-dev] [RFC 00/30] Stop cloning contexts
  2021-04-01  2:12 [igt-dev] [RFC 00/30] Stop cloning contexts Jason Ekstrand
                   ` (30 preceding siblings ...)
  2021-04-01 10:20 ` [igt-dev] ✗ Fi.CI.BUILD: failure for Stop cloning contexts Patchwork
@ 2021-04-08 18:43 ` Daniel Vetter
  2021-04-08 20:12   ` Daniel Vetter
  31 siblings, 1 reply; 38+ messages in thread
From: Daniel Vetter @ 2021-04-08 18:43 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

On Wed, Mar 31, 2021 at 09:12:13PM -0500, Jason Ekstrand wrote:
> I'm trying to clean up some of our uAPI technical debt in i915.  One of the
> biggest areas we have right now is context mutability.  There's no good
> reason why things like the set of engines or the VM should be able to be
> changed on the fly and no "real" userspace actually relies on this
> functionality.  It does, however, make for a good excuse for tests and lots
> of bug reports as things like swapping out the set of engines under load
> break randomly.  The solution here is to stop allowing that behavior and
> simplify the i915 internals.

Randomly going to drop this here, but I think we might also want to
rethink the magic behaviour where, for physical engines (unless you've
done your own engine selection), we instantiate intel_context on demand
in execbuf.

Minimally it would be neat to limit that magic behaviour to at least the
default context, but I fear that would be another massive pile of igt
changes. But if we can kinda do that in one go, it might be real sweet to
aim for that with your work here too. Just as some prep.
-Daniel

> 
> In particular, we'd like to remove the following from the i915 API:
> 
>  1. I915_CONTEXT_CLONE_*.  These are only used by IGT and have never been
>     used by any "real" userspace.
> 
>  2. Changing the VM or set of engines via SETPARAM after they've been
>     "used" by an execbuf or similar.  This would effectively make those
>     parameters create params rather than mutable state.  We can't drop
>     setparam entirely for those because media does use it but we can
>     enforce some rules.
> 
>  3. Unused (by non-IGT userspace) GETPARAM for things like engines.
> 
> As much as we'd love to do that, we have a bit of a problem in IGT.  The
> way we handle multi-engine testing today relies heavily on this soon-to-be-
> deprecated functionality.  In particular, the standard flow is usually
> something like this:
> 
>     static void run_test1(int fd, uint32_t engine)
>     {
>         igt_spin_t *spin;
> 
>         ctx = = gem_context_clone_with_engines(fd, 0);
>         __igt_spin_new(fd, ctx, .engine = engine);
> 
>         /* do some testing with ctx */
> 
>         igt_spin_free(fd, spin);
>         gem_destroy_context(fd, ctx);
>     }
> 
>     igt_main
>     {
>         struct intel_execution_engine2 *e;
> 
>         /* Usual fixture code */
> 
>         __for_each_physical_engine(fd, e)
>             run_test1(fd, e->flags);
> 
>         __for_each_physical_engine(fd, e)
>             run_test2(fd, e->flags);
>     }
> 
> Let's walk through what this does:
> 
>  1. __for_each_physical_engine calls intel_init_engine_list() which resets
>     the set of engines on ctx0 to the full set of engines available as per
>     the engine query.  On older kernels/hardware where we don't have the
>     engines query, it leaves the set alone.
> 
>  2. intel_init_engine_list() also returns a set of engines for iteration
>     and __for_each_physical_engine() sets up a for loop to walk the set.
> 
>  3. gem_context_clone_with_engines() creates a new context using
>     I915_CONTEXT_CONTEXT_CLONE_ENGINES (not used by anything other than
>     IGT) to ask that the newly created context has the same set of engines
>     as ctx0.  Remember we changed that at the start of loop iteration!
> 
>  4. When the context is passed to __igt_spin_new(), it calls
>     gem_context_lookup_engine which does a GETPARAM to introspet the set of
>     engines on the context and figure out the engine class.
> 
> If you've been keeping track, this trivial and extremely common example
> uses every single one of these soon-to-be-deprecated APIs even though the
> test author may be completely obvious to it.  It also means that getting
> rid of IGT's use of them is going to require some fairly deep surgery.
> 
> The approach proposed and partially implemented here is to add a new
> wrapper struct intel_ctx_t which wraps a GEM context handle as well as the
> full set of parameters used to create it, represented by intel_ctx_cfg_t.
> We can then use the context anywhere we would regularly use a context, we
> just have to do ctx->id.  If we want to clone it, we can do so by re-using
> the create parameters by calling intel_ctx_create(fd, &old_ctx->cfg);
> 
> So far, I'm pretty happy with this solution.  I've converted around 25 test
> programs and it's working quite well.  The only real sore point so far is
> around dealing with platforms that don't support contexts.  We could
> special case ctx0 a bit more but, right now, I'm just adding an if
> statement and leaking the intel_ctx_t.  I'm happy to take suggestions
> there.
> 
> --Jason
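
In code, the flow described above comes out roughly as the sketch below
(pieced together from the conversions later in the series; the function
name is made up and submission details are omitted, so it is not an
excerpt from any single patch):

    static void example(int fd)
    {
        const struct intel_execution_engine2 *e;
        const intel_ctx_t *ctx;

        if (gem_has_contexts(fd))
            ctx = intel_ctx_create_all_physical(fd);
        else
            ctx = intel_ctx_0(fd);

        for_each_ctx_engine(fd, ctx, e) {
            /* "Cloning" is just re-using the creation config. */
            intel_ctx_t *tmp = intel_ctx_create(fd, &ctx->cfg);
            igt_spin_t *spin;

            spin = igt_spin_new(fd, .ctx = tmp, .engine = e->flags);
            /* For raw execbuf, the context id goes in rsvd1: tmp->id */
            igt_spin_free(fd, spin);
            intel_ctx_destroy(fd, tmp);
        }
    }
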
> 
> 
> Jason Ekstrand (30):
>   lib/i915/gem_engine_topology: Expose the __query_engines helper
>   lib: Add an intel_ctx wrapper struct and helpers
>   lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t
>   tests/i915/gem_exec_basic: Convert to intel_ctx_t
>   lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id
>   lib/igt_spin: Support intel_ctx_t
>   tests/i915/gem_exec_fence: Convert to intel_ctx_t
>   tests/i915/gem_exec_schedule: Convert to intel_ctx_t
>   tests/i915/perf_pmu: Convert to intel_ctx_t
>   tests/i915/gem_exec_nop: Convert to intel_ctx_t
>   tests/i915/gem_exec_reloc: Convert to intel_ctx_t
>   tests/i915/gem_busy: Convert to intel_ctx_t
>   tests/i915/gem_ctx_isolation: Convert to intel_ctx_t
>   tests/i915/gem_exec_async: Convert to intel_ctx_t
>   tests/i915/sysfs_clients: Convert to intel_ctx_t
>   tests/i915/gem_exec_fair: Convert to intel_ctx_t
>   tests/i915/gem_spin_batch: Convert to intel_ctx_t
>   tests/i915/gem_exec_store: Convert to intel_ctx_t
>   tests/amdgpu/amd_prime: Convert to intel_ctx_t
>   tests/i915/i915_hangman: Convert to intel_ctx_t
>   tests/i915/gem_ringfill: Convert to intel_ctx_t
>   tests/prime_busy: Convert to intel_ctx_t
>   tests/prime_vgem: Convert to intel_ctx_t
>   tests/gem_exec_whisper: Convert to intel_ctx_t
>   tests/i915/gem_ctx_exec: Convert to intel_ctx_t
>   tests/i915/gem_exec_suspend: Convert to intel_ctx_t
>   tests/i915/gem_sync: Convert to intel_ctx_t
>   tests/i915/gem_userptr_blits: Convert to intel_ctx_t
>   tests/i915/gem_wait: Convert to intel_ctx_t
>   tests/i915/gem_request_retire: Convert to intel_ctx_t
> 
>  lib/i915/gem_context.c          |  34 ++
>  lib/i915/gem_context.h          |   2 +
>  lib/i915/gem_engine_topology.c  |  61 ++-
>  lib/i915/gem_engine_topology.h  |  16 +-
>  lib/igt_dummyload.c             |  30 +-
>  lib/igt_dummyload.h             |   6 +-
>  lib/igt_gt.c                    |   2 +-
>  lib/intel_ctx.c                 | 159 ++++++
>  lib/intel_ctx.h                 | 110 ++++
>  lib/meson.build                 |   1 +
>  tests/amdgpu/amd_prime.c        |  10 +-
>  tests/i915/gem_busy.c           |  80 +--
>  tests/i915/gem_ctx_engines.c    |   6 +-
>  tests/i915/gem_ctx_exec.c       |  14 +-
>  tests/i915/gem_ctx_isolation.c  | 111 ++--
>  tests/i915/gem_ctx_shared.c     |  16 +-
>  tests/i915/gem_eio.c            |   2 +-
>  tests/i915/gem_exec_async.c     |  31 +-
>  tests/i915/gem_exec_balancer.c  |  26 +-
>  tests/i915/gem_exec_basic.c     |  10 +-
>  tests/i915/gem_exec_fair.c      |  99 ++--
>  tests/i915/gem_exec_fence.c     | 189 ++++---
>  tests/i915/gem_exec_latency.c   |   2 +-
>  tests/i915/gem_exec_nop.c       | 158 +++---
>  tests/i915/gem_exec_reloc.c     | 102 ++--
>  tests/i915/gem_exec_schedule.c  | 875 +++++++++++++++++---------------
>  tests/i915/gem_exec_store.c     |  38 +-
>  tests/i915/gem_exec_suspend.c   |  56 +-
>  tests/i915/gem_exec_whisper.c   |  86 ++--
>  tests/i915/gem_request_retire.c |  17 +-
>  tests/i915/gem_ringfill.c       |  47 +-
>  tests/i915/gem_spin_batch.c     |  83 +--
>  tests/i915/gem_sync.c           | 162 +++---
>  tests/i915/gem_userptr_blits.c  |  31 +-
>  tests/i915/gem_vm_create.c      |   4 +-
>  tests/i915/gem_wait.c           |  23 +-
>  tests/i915/gem_workarounds.c    |   2 +-
>  tests/i915/i915_hangman.c       |  35 +-
>  tests/i915/perf_pmu.c           | 225 ++++----
>  tests/i915/sysfs_clients.c      |  87 ++--
>  tests/prime_busy.c              |  19 +-
>  tests/prime_vgem.c              |  38 +-
>  42 files changed, 1927 insertions(+), 1178 deletions(-)
>  create mode 100644 lib/intel_ctx.c
>  create mode 100644 lib/intel_ctx.h
> 
> -- 
> 2.29.2
> 
> _______________________________________________
> igt-dev mailing list
> igt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/igt-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

* Re: [igt-dev] [RFC 01/30] lib/i915/gem_engine_topology: Expose the __query_engines helper
  2021-04-01  2:12 ` [igt-dev] [RFC 01/30] lib/i915/gem_engine_topology: Expose the __query_engines helper Jason Ekstrand
@ 2021-04-08 18:50   ` Daniel Vetter
  0 siblings, 0 replies; 38+ messages in thread
From: Daniel Vetter @ 2021-04-08 18:50 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

On Wed, Mar 31, 2021 at 09:12:14PM -0500, Jason Ekstrand wrote:
> ---
>  lib/i915/gem_engine_topology.c | 20 +++++++++++---------
>  lib/i915/gem_engine_topology.h |  4 ++++
>  2 files changed, 15 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/i915/gem_engine_topology.c b/lib/i915/gem_engine_topology.c
> index c12cd920..5d196f59 100644
> --- a/lib/i915/gem_engine_topology.c
> +++ b/lib/i915/gem_engine_topology.c
> @@ -62,14 +62,9 @@ static int __gem_query(int fd, struct drm_i915_query *q)
>  	return err;
>  }
>  
> -static void gem_query(int fd, struct drm_i915_query *q)
> -{
> -	igt_assert_eq(__gem_query(fd, q), 0);
> -}
> -
> -static void query_engines(int fd,
> -			  struct drm_i915_query_engine_info *query_engines,
> -			  int length)
> +int __gem_query_engines(int fd,
> +			struct drm_i915_query_engine_info *query_engines,
> +			int length)

Generally __ is supposed to be the internal version without checking, and
the non-__ version is the one everyone is supposed to use.

Also, I know it wasn't you who left things this way, but it would be really
good if you could go through all the library functions (at the end of the
series, or as you go) and
- make sure anything not used by tests is static
- make sure anything non-static has some gtkdoc explaining what it's for and
  how to use it, including the magic iterator macros and the like, especially
  if they come in __ and ____ variants.

It's some work, but imo we really need to aim towards more maintainable
test code here, and at least for library functions that should mean
reasonably documented exported functions.
-Daniel
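
For reference, a gtkdoc comment along these lines would cover the helper
exposed in this patch (the wording is only a sketch, and the description of
@length is an assumption based on how the existing callers use it):

    /**
     * __gem_query_engines:
     * @fd: open i915 drm file descriptor
     * @query_engines: engine info struct to be filled by the query
     * @length: size of the @query_engines buffer in bytes
     *
     * Queries the set of engines available on @fd using the i915 engine
     * query, without asserting on failure so callers can handle kernels
     * that lack the query.
     *
     * Returns: 0 on success, negative error code on failure.
     */
    int __gem_query_engines(int fd,
                            struct drm_i915_query_engine_info *query_engines,
                            int length);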

>  {
>  	struct drm_i915_query_item item = { };
>  	struct drm_i915_query query = { };
> @@ -81,7 +76,14 @@ static void query_engines(int fd,
>  
>  	item.data_ptr = to_user_pointer(query_engines);
>  
> -	gem_query(fd, &query);
> +	return __gem_query(fd, &query);
> +}
> +
> +static void query_engines(int fd,
> +			  struct drm_i915_query_engine_info *query_engines,
> +			  int length)
> +{
> +	igt_assert_eq(__gem_query_engines(fd, query_engines, length), 0);
>  }
>  
>  static void ctx_map_engines(int fd, struct intel_engine_data *ed,
> diff --git a/lib/i915/gem_engine_topology.h b/lib/i915/gem_engine_topology.h
> index f5edcb5d..76b7cd4d 100644
> --- a/lib/i915/gem_engine_topology.h
> +++ b/lib/i915/gem_engine_topology.h
> @@ -29,6 +29,10 @@
>  
>  #define GEM_MAX_ENGINES		I915_EXEC_RING_MASK + 1
>  
> +int __gem_query_engines(int fd,
> +			struct drm_i915_query_engine_info *query_engines,
> +			int length);
> +
>  struct intel_engine_data {
>  	uint32_t nengines;
>  	uint32_t n;
> -- 
> 2.29.2
> 
> _______________________________________________
> igt-dev mailing list
> igt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/igt-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

* Re: [igt-dev] [RFC 02/30] lib: Add an intel_ctx wrapper struct and helpers
  2021-04-01  2:12 ` [igt-dev] [RFC 02/30] lib: Add an intel_ctx wrapper struct and helpers Jason Ekstrand
@ 2021-04-08 18:58   ` Daniel Vetter
  0 siblings, 0 replies; 38+ messages in thread
From: Daniel Vetter @ 2021-04-08 18:58 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

On Wed, Mar 31, 2021 at 09:12:15PM -0500, Jason Ekstrand wrote:
> We're trying to clean up some of our technical debt in the i915 API.  In
> particular, context mutability and unnecessary getparam().  There's
> quite a bit of the introspection stuff that's not used by any userspace
> other than IGT.  Most drivers don't care about fetching the set of
> engines, for instance, because they don't forget about what set of
> engines they asked for int the first place.
> 
> Unfortunately, IGT relies heavily on context introspection for just
> about everything when it comes to multi-engine testing.  It also likes
> to use ctx0 as temporary storage for whatever the current test config
> is.  While effective at keeping IGT simple in some ways, this means
> we're making heavy use of context mutability.  Also, passing data around
> within tests isn't really what contexts are for.
> 
> This patch adds a new intel_ctx_t struct which wraps a context and
> remembers the full context configuration.  This will provide similar
> ease-of-use without having to use ctx0 as temporary storage.
> ---
>  lib/i915/gem_context.c |  34 +++++++++
>  lib/i915/gem_context.h |   2 +
>  lib/intel_ctx.c        | 159 +++++++++++++++++++++++++++++++++++++++++

Maybe a bit of a bikeshed, but I think the idea is to long-term move/keep
all the i915-gem interface helpers in lib/i915, which makes sense.
At least it reflects the structure we've implemented on the tests/ side of
things.

I don't have an opinion on the intel_ctx vs gem_context bikeshed :-)

Also, excellent that you're opting to document stuff, but please make sure
it's actually picked up (you need to add each file to an .xml file iirc, and
it needs an overview section explaining in a few words what that library
does). As explained in my reply to the previous patch, I think we need to
just bite that bullet.

Doc building works with the ninja -C build igt-gpu-tools-doc target, see
README.md. It gives something fairly pretty in the end:

https://drm.pages.freedesktop.org/igt-gpu-tools/igt-gpu-tools-Core.html

The core library docs are probably the best example we have. Scroll down
past the index for some of the overview docs we have and for how you can
link to the reference docs to make it all reasonably useful.

>  lib/intel_ctx.h        | 110 ++++++++++++++++++++++++++++
>  lib/meson.build        |   1 +
>  5 files changed, 306 insertions(+)
>  create mode 100644 lib/intel_ctx.c
>  create mode 100644 lib/intel_ctx.h
> 
> diff --git a/lib/i915/gem_context.c b/lib/i915/gem_context.c
> index 79411e10..0df42d02 100644
> --- a/lib/i915/gem_context.c
> +++ b/lib/i915/gem_context.c
> @@ -107,6 +107,40 @@ int __gem_context_create(int fd, uint32_t *ctx_id)
>         return err;
>  }
>  
> +/**
> + * __gem_context_create_ext:
> + * @fd: open i915 drm file descriptor
> + * @flags: context create flags
> + * @extensions: first extension struct, or 0 for no extensions
> + * @ctx_id: on success, the context ID is written here
> + *
> + * Creates a new GEM context with flags and extensions.  If no flags or
> + * extensions are required, it's the same as __gem_context_create and works
> + * on older kernels.
> + */
> +int __gem_context_create_ext(int fd, uint32_t flags, uint64_t extensions,
> +			     uint32_t *ctx_id)

Stuff this into the same library file, make it static? Documenting
internals is imo just confusing.
-Daniel

> +{
> +	struct drm_i915_gem_context_create_ext ctx_create;
> +	int err = 0;
> +
> +	if (!flags && !extensions)
> +		return __gem_context_create(fd, ctx_id);
> +
> +	memset(&ctx_create, 0, sizeof(ctx_create));
> +	ctx_create.flags = flags;
> +	if (extensions) {
> +		ctx_create.flags |= I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS;
> +		ctx_create.extensions = extensions;
> +	}
> +
> +	err = create_ext_ioctl(fd, &ctx_create);
> +	if (!err)
> +		*ctx_id = ctx_create.ctx_id;
> +
> +	return err;
> +}
> +
>  /**
>   * gem_context_create:
>   * @fd: open i915 drm file descriptor
> diff --git a/lib/i915/gem_context.h b/lib/i915/gem_context.h
> index c2c2b827..9748953c 100644
> --- a/lib/i915/gem_context.h
> +++ b/lib/i915/gem_context.h
> @@ -31,6 +31,8 @@ struct drm_i915_gem_context_param;
>  
>  uint32_t gem_context_create(int fd);
>  int __gem_context_create(int fd, uint32_t *ctx_id);
> +int __gem_context_create_ext(int fd, uint32_t flags, uint64_t extensions,
> +			     uint32_t *ctx_id);
>  void gem_context_destroy(int fd, uint32_t ctx_id);
>  int __gem_context_destroy(int fd, uint32_t ctx_id);
>  
> diff --git a/lib/intel_ctx.c b/lib/intel_ctx.c
> new file mode 100644
> index 00000000..406e85cb
> --- /dev/null
> +++ b/lib/intel_ctx.c
> @@ -0,0 +1,159 @@
> +/*
> + * Copyright © 2021 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <stddef.h>
> +
> +#include "intel_ctx.h"
> +#include "ioctl_wrappers.h"
> +#include "i915/gem_engine_topology.h"
> +
> +static void
> +add_user_ext(uint64_t *root_ext_u64, struct i915_user_extension *ext)
> +{
> +	ext->next_extension = *root_ext_u64;
> +	*root_ext_u64 = to_user_pointer(ext);
> +}
> +
> +static size_t sizeof_param_engines(int count)
> +{
> +	return offsetof(struct i915_context_param_engines, engines[count]);
> +}
> +
> +#define SIZEOF_QUERY		offsetof(struct drm_i915_query_engine_info, \
> +					 engines[GEM_MAX_ENGINES])
> +
> +intel_ctx_cfg_t intel_ctx_cfg_all_physical(int fd)
> +{
> +	uint8_t buff[SIZEOF_QUERY] = { };
> +	struct drm_i915_query_engine_info *qei = (void *) buff;
> +	intel_ctx_cfg_t cfg = {};
> +	int i;
> +
> +	if (__gem_query_engines(fd, qei, SIZEOF_QUERY) == 0) {
> +		cfg.num_engines = qei->num_engines;
> +		for (i = 0; i < qei->num_engines; i++)
> +			cfg.engines[i] = qei->engines[i].engine;
> +	}
> +
> +	return cfg;
> +}
> +
> +static int
> +__context_create_cfg(int fd, const intel_ctx_cfg_t *cfg, uint32_t *ctx_id)
> +{
> +	uint64_t ext_root = 0;
> +	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, GEM_MAX_ENGINES);
> +	struct drm_i915_gem_context_create_ext_setparam engines_param, vm_param;
> +	uint32_t i;
> +
> +	if (cfg->vm) {
> +		vm_param = (struct drm_i915_gem_context_create_ext_setparam) {
> +			.base = {
> +				.name = I915_CONTEXT_CREATE_EXT_SETPARAM,
> +			},
> +			.param = {
> +				.param = I915_CONTEXT_PARAM_VM,
> +				.value = cfg->vm,
> +			},
> +		};
> +		add_user_ext(&ext_root, &vm_param.base);
> +	}
> +
> +	if (cfg->num_engines) {
> +		memset(&engines, 0, sizeof(engines));
> +		for (i = 0; i < cfg->num_engines; i++)
> +			engines.engines[i] = cfg->engines[i];
> +
> +		engines_param = (struct drm_i915_gem_context_create_ext_setparam) {
> +			.base = {
> +				.name = I915_CONTEXT_CREATE_EXT_SETPARAM,
> +			},
> +			.param = {
> +				.param = I915_CONTEXT_PARAM_ENGINES,
> +				.size = sizeof_param_engines(cfg->num_engines),
> +				.value = to_user_pointer(&engines),
> +			},
> +		};
> +		add_user_ext(&ext_root, &engines_param.base);
> +	}
> +
> +	return __gem_context_create_ext(fd, cfg->flags, ext_root, ctx_id);
> +}
> +
> +int __intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg,
> +		       intel_ctx_t **out_ctx)
> +{
> +	uint32_t ctx_id;
> +	intel_ctx_t *ctx;
> +	int err;
> +
> +	if (cfg)
> +		err = __context_create_cfg(fd, cfg, &ctx_id);
> +	else
> +		err = __gem_context_create(fd, &ctx_id);
> +	if (err)
> +		return err;
> +
> +	ctx = calloc(1, sizeof(*ctx));
> +	igt_assert(ctx);
> +
> +	ctx->id = ctx_id;
> +	ctx->cfg = *cfg;
> +
> +	*out_ctx = ctx;
> +	return 0;
> +}
> +
> +intel_ctx_t *intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg)
> +{
> +	intel_ctx_t *ctx;
> +	int err;
> +
> +	err = __intel_ctx_create(fd, cfg, &ctx);
> +	igt_assert_eq(err, 0);
> +
> +	return ctx;
> +}
> +
> +static const intel_ctx_t __intel_ctx_0 = {};
> +
> +const intel_ctx_t *intel_ctx_0(int fd)
> +{
> +	(void)fd;
> +	return &__intel_ctx_0;
> +}
> +
> +intel_ctx_t *intel_ctx_create_all_physical(int fd)
> +{
> +	intel_ctx_cfg_t cfg = intel_ctx_cfg_all_physical(fd);
> +	return intel_ctx_create(fd, &cfg);
> +}
> +
> +void intel_ctx_destroy(int fd, intel_ctx_t *ctx)
> +{
> +	if (!ctx)
> +		return;
> +
> +	gem_context_destroy(fd, ctx->id);
> +	free(ctx);
> +}
> diff --git a/lib/intel_ctx.h b/lib/intel_ctx.h
> new file mode 100644
> index 00000000..94bd667a
> --- /dev/null
> +++ b/lib/intel_ctx.h
> @@ -0,0 +1,110 @@
> +/*
> + * Copyright © 2021 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#ifndef INTEL_CTX_H
> +#define INTEL_CTX_H
> +
> +#include "igt_core.h"
> +
> +#include "i915_drm.h"
> +
> +#define GEM_MAX_ENGINES		I915_EXEC_RING_MASK + 1
> +
> +/**
> + * intel_ctx_cfg_t:
> + * @flags: Context create flags
> + * @vm: VM to inherit or 0 for using a per-context VM
> + * @num_engines: Number of client-specified engines or 0 for legacy mode
> + * @engines: Client-specified engines
> + *
> + * Represents the full configuration of an intel_ctx.
> + */
> +typedef struct intel_ctx_cfg {
> +	uint32_t flags;
> +	uint32_t vm;
> +	unsigned int num_engines;
> +	struct i915_engine_class_instance engines[GEM_MAX_ENGINES];
> +} intel_ctx_cfg_t;
> +
> +intel_ctx_cfg_t intel_ctx_cfg_all_physical(int fd);
> +
> +/**
> + * intel_ctx_t:
> + * @id: the context id/handle
> + * @cfg: the config used to create this context
> + *
> + * Represents an i915 GEM context together with the config used to create it.
> + */
> +typedef struct intel_ctx {
> +	uint32_t id;
> +	intel_ctx_cfg_t cfg;
> +} intel_ctx_t;
> +
> +/**
> + * __intel_ctx_create:
> + * @fd: open i915 drm file descriptor
> + * @cfg: configuration for the created context
> + * @out_ctx: on success, the new intel_ctx_t pointer is written here
> + *
> + * Like intel_ctx_create but returns an error instead of asserting.
> + */
> +int __intel_ctx_create(int fd, const intel_ctx_cfg_t *cfg,
> +		       intel_ctx_t **out_ctx);
> +
> +/**
> + * intel_ctx_create:
> + * @fd: open i915 drm file descriptor
> + * @cfg: configuration for the created context
> + *
> + * Creates a new intel_ctx_t with the given config
> + */
> +intel_ctx_t *intel_ctx_create(int i915, const intel_ctx_cfg_t *cfg);
> +
> +/**
> + * intel_ctx_0:
> + * @fd: open i915 drm file descriptor
> + *
> + * Returns an intel_ctx_t representing the default context.
> + */
> +const intel_ctx_t *intel_ctx_0(int fd);
> +
> +/**
> + * intel_ctx_create_all_physical:
> + * @fd: open i915 drm file descriptor
> + *
> + * Creates an intel_ctx_t containing all physical engines.  On kernels
> + * without the engines API, the created context will be the same as
> + * intel_ctx_0() except that it will be a new GEM context.
> + */
> +intel_ctx_t *intel_ctx_create_all_physical(int fd);
> +
> +/**
> + * intel_ctx_destroy:
> + * @fd: open i915 drm file descriptor
> + * @ctx: context to destroy, or NULL
> + *
> + * Destroys an intel_ctx_t.
> + */
> +void intel_ctx_destroy(int fd, intel_ctx_t *ctx);
> +
> +#endif
> diff --git a/lib/meson.build b/lib/meson.build
> index 672b4206..871c7795 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -37,6 +37,7 @@ lib_sources = [
>  	'intel_batchbuffer.c',
>  	'intel_bufops.c',
>  	'intel_chipset.c',
> +	'intel_ctx.c',
>  	'intel_device_info.c',
>  	'intel_os.c',
>  	'intel_mmio.c',
> -- 
> 2.29.2
> 
> _______________________________________________
> igt-dev mailing list
> igt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/igt-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

* Re: [igt-dev] [RFC 04/30] tests/i915/gem_exec_basic: Convert to intel_ctx_t
  2021-04-01  2:12 ` [igt-dev] [RFC 04/30] tests/i915/gem_exec_basic: Convert to intel_ctx_t Jason Ekstrand
@ 2021-04-08 20:06   ` Daniel Vetter
  0 siblings, 0 replies; 38+ messages in thread
From: Daniel Vetter @ 2021-04-08 20:06 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

On Wed, Mar 31, 2021 at 09:12:17PM -0500, Jason Ekstrand wrote:
> This acts as a template for the rest of this patch series.  The rough
> idea is that we create a new context if the HW supports contexts and
> otherwise we use intel_ctx_0().  Once we have an intel_ctx_t, we can
> iterate over all of the engines in it in a consistent way.
> ---
>  tests/i915/gem_exec_basic.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/i915/gem_exec_basic.c b/tests/i915/gem_exec_basic.c
> index 31f6a234..f50e4c3b 100644
> --- a/tests/i915/gem_exec_basic.c
> +++ b/tests/i915/gem_exec_basic.c
> @@ -41,10 +41,17 @@ static uint32_t batch_create(int fd)
>  igt_main
>  {
>  	const struct intel_execution_engine2 *e;
> +	const intel_ctx_t *ctx = NULL;
>  	int fd = -1;
>  
>  	igt_fixture {
>  		fd = drm_open_driver(DRIVER_INTEL);
> +
> +		if (gem_has_contexts(fd))
> +			ctx = intel_ctx_create_all_physical(fd);
> +		else
> +			ctx = intel_ctx_0(fd);

Can't we push this into the helper, at least by default?
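
Something like this, perhaps (a rough sketch only, with a made-up helper
name; it is not part of the series as posted):

    static const intel_ctx_t *ctx_create_for_test(int fd)
    {
        if (gem_has_contexts(fd))
            return intel_ctx_create_all_physical(fd);

        /* No context support: fall back to the default context.  Note that
         * the intel_ctx_0() case must not be passed to intel_ctx_destroy(). */
        return intel_ctx_0(fd);
    }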


> +
>  		/* igt_require_gem(fd); // test is mandatory */
>  		igt_fork_hang_detector(fd);
>  	}
> @@ -54,12 +61,13 @@ igt_main
>  			.handle = batch_create(fd),
>  		};
>  
> -		__for_each_physical_engine(fd, e) {
> +		for_each_ctx_engine(fd, ctx, e) {

Since we change them all anyway, what about also moving the engine index
into your ctx iterator?

And then explain, with some nice pseudocode, the canonical example of how
to do a testcase over all physical engines on the box with ctx autocreate
and all that. I think that's the main use case for this magic, and it would
give us a fairly clean interface.
-Daniel
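
For reference, the canonical pattern being asked for might look roughly
like this (a sketch built on the intel_ctx_t helpers from patch 2; the
per-engine test body is elided):

    igt_main
    {
        const struct intel_execution_engine2 *e;
        const intel_ctx_t *ctx;
        int fd = -1;

        igt_fixture {
            fd = drm_open_driver(DRIVER_INTEL);
            ctx = gem_has_contexts(fd) ?
                  intel_ctx_create_all_physical(fd) : intel_ctx_0(fd);
        }

        igt_subtest_with_dynamic("engines") {
            for_each_ctx_engine(fd, ctx, e) {
                igt_dynamic_f("%s", e->name) {
                    /* per-engine test goes here, using ctx->id and e->flags */
                }
            }
        }
    }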


>  			igt_dynamic_f("%s", e->name) {
>  				struct drm_i915_gem_execbuffer2 execbuf = {
>  					.buffers_ptr = to_user_pointer(&exec),
>  					.buffer_count = 1,
>  					.flags = e->flags,
> +					.rsvd1 = ctx->id,
>  				};
>  
>  				gem_execbuf(fd, &execbuf);
> -- 
> 2.29.2
> 
> _______________________________________________
> igt-dev mailing list
> igt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/igt-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

* Re: [igt-dev] [RFC 06/30] lib/igt_spin: Support intel_ctx_t
  2021-04-01  2:12 ` [igt-dev] [RFC 06/30] lib/igt_spin: Support intel_ctx_t Jason Ekstrand
@ 2021-04-08 20:08   ` Daniel Vetter
  0 siblings, 0 replies; 38+ messages in thread
From: Daniel Vetter @ 2021-04-08 20:08 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

On Wed, Mar 31, 2021 at 09:12:19PM -0500, Jason Ekstrand wrote:
> ---
>  lib/igt_dummyload.c | 30 ++++++++++++++++++++++--------
>  lib/igt_dummyload.h |  4 ++++
>  2 files changed, 26 insertions(+), 8 deletions(-)
> 
> diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
> index 5a11ec4e..ac83b331 100644
> --- a/lib/igt_dummyload.c
> +++ b/lib/igt_dummyload.c
> @@ -123,16 +123,28 @@ emit_recursive_batch(igt_spin_t *spin,
>  	addr += random() % addr / 2;
>  	addr &= -4096;
>  
> +	assert(!(opts->ctx && opts->ctx_id));
> +
>  	nengine = 0;
>  	if (opts->engine == ALL_ENGINES) {
>  		struct intel_execution_engine2 *engine;
>  
> -		for_each_context_engine(fd, opts->ctx_id, engine) {
> -			if (opts->flags & IGT_SPIN_POLL_RUN &&
> -			    !gem_class_can_store_dword(fd, engine->class))
> -				continue;
> +		if (opts->ctx) {
> +			for_each_ctx_engine(fd, opts->ctx, engine) {
> +				if (opts->flags & IGT_SPIN_POLL_RUN &&
> +				    !gem_class_can_store_dword(fd, engine->class))
> +					continue;
>  
> -			flags[nengine++] = engine->flags;
> +				flags[nengine++] = engine->flags;
> +			}
> +		} else {

I'm assuming at the end of the series we'll have a patch to ditch this
transition code?
-Daniel

> +			for_each_context_engine(fd, opts->ctx_id, engine) {
> +				if (opts->flags & IGT_SPIN_POLL_RUN &&
> +				    !gem_class_can_store_dword(fd, engine->class))
> +					continue;
> +
> +				flags[nengine++] = engine->flags;
> +			}
>  		}
>  	} else {
>  		flags[nengine++] = opts->engine;
> @@ -325,7 +337,7 @@ emit_recursive_batch(igt_spin_t *spin,
>  
>  	execbuf->buffers_ptr =
>  	       	to_user_pointer(obj + (2 - execbuf->buffer_count));
> -	execbuf->rsvd1 = opts->ctx_id;
> +	execbuf->rsvd1 = opts->ctx ? opts->ctx->id : opts->ctx_id;
>  
>  	if (opts->flags & IGT_SPIN_FENCE_OUT)
>  		execbuf->flags |= I915_EXEC_FENCE_OUT;
> @@ -422,8 +434,10 @@ igt_spin_factory(int fd, const struct igt_spin_factory *opts)
>  		struct intel_execution_engine2 e;
>  		int class;
>  
> -		if (!gem_context_lookup_engine(fd, opts->engine,
> -					       opts->ctx_id, &e)) {
> +		if (opts->ctx) {
> +			class = opts->ctx->cfg.engines[opts->engine].engine_class;
> +		} else if (!gem_context_lookup_engine(fd, opts->engine,
> +						      opts->ctx_id, &e)) {
>  			class = e.class;
>  		} else {
>  			gem_require_ring(fd, opts->engine);
> diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
> index aee72da8..b26a7b7d 100644
> --- a/lib/igt_dummyload.h
> +++ b/lib/igt_dummyload.h
> @@ -32,9 +32,12 @@
>  #include "igt_list.h"
>  #include "i915_drm.h"
>  
> +struct intel_ctx;
> +
>  typedef struct igt_spin {
>  	struct igt_list_head link;
>  
> +
>  	uint32_t handle;
>  	uint32_t poll_handle;
>  
> @@ -61,6 +64,7 @@ typedef struct igt_spin {
>  
>  struct igt_spin_factory {
>  	uint32_t ctx_id;
> +	const struct intel_ctx *ctx;
>  	uint32_t dependency;
>  	unsigned int engine;
>  	unsigned int flags;
> -- 
> 2.29.2
> 
> _______________________________________________
> igt-dev mailing list
> igt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/igt-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

* Re: [igt-dev] [RFC 00/30] Stop cloning contexts
  2021-04-08 18:43 ` [igt-dev] [RFC 00/30] " Daniel Vetter
@ 2021-04-08 20:12   ` Daniel Vetter
  0 siblings, 0 replies; 38+ messages in thread
From: Daniel Vetter @ 2021-04-08 20:12 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: igt-dev

On Thu, Apr 08, 2021 at 08:43:59PM +0200, Daniel Vetter wrote:
> On Wed, Mar 31, 2021 at 09:12:13PM -0500, Jason Ekstrand wrote:
> > I'm trying to clean up some of our uAPI technical debt in i915.  One of the
> > biggest areas we have right now is context mutability.  There's no good
> > reason why things like the set of engines or the VM should be able to be
> > changed on the fly and no "real" userspace actually relies on this
> > functionality.  It does, however, make for a good excuse for tests and lots
> > of bug reports as things like swapping out the set of engines under load
> > break randomly.  The solution here is to stop allowing that behavior and
> > simplify the i915 internals.
> 
> Randomly going to drop this here, but I think we might also want to
> rethink the magic behaviour where, for physical engines (unless you've
> done your own engine selection), we instantiate intel_context on demand
> in execbuf.
> 
> Minimally it would be neat to limit that magic behaviour to at least the
> default context, but I fear that would be another massive pile of igt
> changes. But if we can kinda do that in one go, it might be real sweet to
> aim for that with your work here too. Just as some prep.
> -Daniel
> 
> > 
> > In particular, we'd like to remove the following from the i915 API:
> > 
> >  1. I915_CONTEXT_CLONE_*.  These are only used by IGT and have never been
> >     used by any "real" userspace.
> > 
> >  2. Changing the VM or set of engines via SETPARAM after they've been
> >     "used" by an execbuf or similar.  This would effectively make those
> >     parameters create params rather than mutable state.  We can't drop
> >     setparam entirely for those because media does use it but we can
> >     enforce some rules.
> > 
> >  3. Unused (by non-IGT userspace) GETPARAM for things like engines.
> > 
> > As much as we'd love to do that, we have a bit of a problem in IGT.  The
> > way we handle multi-engine testing today relies heavily on this soon-to-be-
> > deprecated functionality.  In particular, the standard flow is usually
> > something like this:
> > 
> >     static void run_test1(int fd, uint32_t engine)
> >     {
> >         igt_spin_t *spin;
> > 
> >         ctx = = gem_context_clone_with_engines(fd, 0);
> >         __igt_spin_new(fd, ctx, .engine = engine);
> > 
> >         /* do some testing with ctx */
> > 
> >         igt_spin_free(fd, spin);
> >         gem_destroy_context(fd, ctx);
> >     }
> > 
> >     igt_main
> >     {
> >         struct intel_execution_engine2 *e;
> > 
> >         /* Usual fixture code */
> > 
> >         __for_each_physical_engine(fd, e)
> >             run_test1(fd, e->flags);
> > 
> >         __for_each_physical_engine(fd, e)
> >             run_test2(fd, e->flags);
> >     }
> > 
> > Let's walk through what this does:
> > 
> >  1. __for_each_physical_engine calls intel_init_engine_list() which resets
> >     the set of engines on ctx0 to the full set of engines available as per
> >     the engine query.  On older kernels/hardware where we don't have the
> >     engines query, it leaves the set alone.
> > 
> >  2. intel_init_engine_list() also returns a set of engines for iteration
> >     and __for_each_physical_engine() sets up a for loop to walk the set.
> > 
> >  3. gem_context_clone_with_engines() creates a new context using
> >     I915_CONTEXT_CONTEXT_CLONE_ENGINES (not used by anything other than
> >     IGT) to ask that the newly created context has the same set of engines
> >     as ctx0.  Remember we changed that at the start of loop iteration!
> > 
> >  4. When the context is passed to __igt_spin_new(), it calls
> >     gem_context_lookup_engine which does a GETPARAM to introspet the set of
> >     engines on the context and figure out the engine class.
> > 
> > If you've been keeping track, this trivial and extremely common example
> > uses every single one of these soon-to-be-deprecated APIs even though the
> > test author may be completely obvious to it.  It also means that getting
> > rid of IGT's use of them is going to require some fairly deep surgery.
> > 
> > The approach proposed and partially implemented here is to add a new
> > wrapper struct intel_ctx_t which wraps a GEM context handle as well as the
> > full set of parameters used to create it, represented by intel_ctx_cfg_t.
> > We can then use the context anywhere we would regularly use a context, we
> > just have to do ctx->id.  If we want to clone it, we can do so by re-using
> > the create parameters by calling intel_ctx_create(fd, &old_ctx->cfg);
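
A minimal sketch of that clone-by-config pattern, assuming the intel_ctx_t
helpers and the igt_spin .ctx support added later in the series (run_test()
and its body are purely illustrative):

    static void run_test(int fd, const intel_ctx_t *old_ctx, unsigned int engine)
    {
        /* "Clone" by re-using the stored creation config. */
        intel_ctx_t *ctx = intel_ctx_create(fd, &old_ctx->cfg);
        igt_spin_t *spin = igt_spin_new(fd, .ctx = ctx, .engine = engine);

        /* do some testing against ctx->id */

        igt_spin_free(fd, spin);
        intel_ctx_destroy(fd, ctx);
    }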

Looks really reasonable from a very cursory read-through. I've dropped a
bunch of bikesheds and questions on the initial patches, just to check that
we're aiming in an agreeable direction for the more polished version.

> > So far, I'm pretty happy with this solution.  I've converted around 25 test
> > programs and it's working quite well.  The only real sore point so far is
> > around dealing with platforms that don't support contexts.  We could
> > special case ctx0 a bit more but, right now, I'm just adding an if
> > statement and leaking the intel_ctx_t.  I'm happy to take suggestions
> > there.

So one thing is that we have tons of tests which have been forever on the
notrun/blacklist. Imo converting those is just wasting time, and it would
maybe be better to ditch them outright before we spend any effort on them.
-Daniel

> > 
> > --Jason
> > 
> > 
> > Jason Ekstrand (30):
> >   lib/i915/gem_engine_topology: Expose the __query_engines helper
> >   lib: Add an intel_ctx wrapper struct and helpers
> >   lib/i915/gem_engine_topology: Add an iterator for intel_ctx_t
> >   tests/i915/gem_exec_basic: Convert to intel_ctx_t
> >   lib/igt_spin: Rename igt_spin_factory::ctx to ctx_id
> >   lib/igt_spin: Support intel_ctx_t
> >   tests/i915/gem_exec_fence: Convert to intel_ctx_t
> >   tests/i915/gem_exec_schedule: Convert to intel_ctx_t
> >   tests/i915/perf_pmu: Convert to intel_ctx_t
> >   tests/i915/gem_exec_nop: Convert to intel_ctx_t
> >   tests/i915/gem_exec_reloc: Convert to intel_ctx_t
> >   tests/i915/gem_busy: Convert to intel_ctx_t
> >   tests/i915/gem_ctx_isolation: Convert to intel_ctx_t
> >   tests/i915/gem_exec_async: Convert to intel_ctx_t
> >   tests/i915/sysfs_clients: Convert to intel_ctx_t
> >   tests/i915/gem_exec_fair: Convert to intel_ctx_t
> >   tests/i915/gem_spin_batch: Convert to intel_ctx_t
> >   tests/i915/gem_exec_store: Convert to intel_ctx_t
> >   tests/amdgpu/amd_prime: Convert to intel_ctx_t
> >   tests/i915/i915_hangman: Convert to intel_ctx_t
> >   tests/i915/gem_ringfill: Convert to intel_ctx_t
> >   tests/prime_busy: Convert to intel_ctx_t
> >   tests/prime_vgem: Convert to intel_ctx_t
> >   tests/gem_exec_whisper: Convert to intel_ctx_t
> >   tests/i915/gem_ctx_exec: Convert to intel_ctx_t
> >   tests/i915/gem_exec_suspend: Convert to intel_ctx_t
> >   tests/i915/gem_sync: Convert to intel_ctx_t
> >   tests/i915/gem_userptr_blits: Convert to intel_ctx_t
> >   tests/i915/gem_wait: Convert to intel_ctx_t
> >   tests/i915/gem_request_retire: Convert to intel_ctx_t
> > 
> >  lib/i915/gem_context.c          |  34 ++
> >  lib/i915/gem_context.h          |   2 +
> >  lib/i915/gem_engine_topology.c  |  61 ++-
> >  lib/i915/gem_engine_topology.h  |  16 +-
> >  lib/igt_dummyload.c             |  30 +-
> >  lib/igt_dummyload.h             |   6 +-
> >  lib/igt_gt.c                    |   2 +-
> >  lib/intel_ctx.c                 | 159 ++++++
> >  lib/intel_ctx.h                 | 110 ++++
> >  lib/meson.build                 |   1 +
> >  tests/amdgpu/amd_prime.c        |  10 +-
> >  tests/i915/gem_busy.c           |  80 +--
> >  tests/i915/gem_ctx_engines.c    |   6 +-
> >  tests/i915/gem_ctx_exec.c       |  14 +-
> >  tests/i915/gem_ctx_isolation.c  | 111 ++--
> >  tests/i915/gem_ctx_shared.c     |  16 +-
> >  tests/i915/gem_eio.c            |   2 +-
> >  tests/i915/gem_exec_async.c     |  31 +-
> >  tests/i915/gem_exec_balancer.c  |  26 +-
> >  tests/i915/gem_exec_basic.c     |  10 +-
> >  tests/i915/gem_exec_fair.c      |  99 ++--
> >  tests/i915/gem_exec_fence.c     | 189 ++++---
> >  tests/i915/gem_exec_latency.c   |   2 +-
> >  tests/i915/gem_exec_nop.c       | 158 +++---
> >  tests/i915/gem_exec_reloc.c     | 102 ++--
> >  tests/i915/gem_exec_schedule.c  | 875 +++++++++++++++++---------------
> >  tests/i915/gem_exec_store.c     |  38 +-
> >  tests/i915/gem_exec_suspend.c   |  56 +-
> >  tests/i915/gem_exec_whisper.c   |  86 ++--
> >  tests/i915/gem_request_retire.c |  17 +-
> >  tests/i915/gem_ringfill.c       |  47 +-
> >  tests/i915/gem_spin_batch.c     |  83 +--
> >  tests/i915/gem_sync.c           | 162 +++---
> >  tests/i915/gem_userptr_blits.c  |  31 +-
> >  tests/i915/gem_vm_create.c      |   4 +-
> >  tests/i915/gem_wait.c           |  23 +-
> >  tests/i915/gem_workarounds.c    |   2 +-
> >  tests/i915/i915_hangman.c       |  35 +-
> >  tests/i915/perf_pmu.c           | 225 ++++----
> >  tests/i915/sysfs_clients.c      |  87 ++--
> >  tests/prime_busy.c              |  19 +-
> >  tests/prime_vgem.c              |  38 +-
> >  42 files changed, 1927 insertions(+), 1178 deletions(-)
> >  create mode 100644 lib/intel_ctx.c
> >  create mode 100644 lib/intel_ctx.h
> > 
> > -- 
> > 2.29.2
> > 
> > _______________________________________________
> > igt-dev mailing list
> > igt-dev@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/igt-dev
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev
