* [igt-dev] [PATCH v21 0/6] new engine discovery interface
@ 2019-04-16 15:11 Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 1/6] include/drm-uapi: import i915_drm.h header file Andi Shyti
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

Hi,

In this patchset I propose an alternative way of engine discovery
thanks to the new interfaces developed by Tvrtko and Chris[4].

The changes to perf_pmu are a proposal and most probably don't
work (this is anyway an RFC), because the dependency on legacy
code is still too strong.

Thanks Tvrtko, Chris, Antonio and Petri for your comments in the
previous RFCs.

Andi

v20 --> v21
===========
 - removed Tvrtko's debug messages
 - a few fixes from Chris's last review

v19 --> v20
===========
 - added some debug messages from Tvrtko to get more information
   about the gem_wait failure.
 - a few fixes in gem_engine_topology.c from Tvrtko's comments,
   including a bigger fix for an uncontrolled variable
   increment in the _next function

v18 --> v19
===========
 - integrated Tvrtko's fixup patch [17]. From this patch some
   changes have been moved to gem_engine_topology as a new helper
   for getting the engine's properties.

v17 --> v18
===========
 - three patches have been applied (the ones that add
   gem_context_has_engine() function)
 - a few cosmetic fixes
 - and some changes coming from Tvrtko's review on v17

v16 --> v17
===========
amongst many little things, three main changes:
 - improved perf_pmu adaptation to gem_engine_topology
 - removed the exec-ctx test, perf_pmu will be the flag test
 - the engine list is now created such that
   for_each_engine_physical can be executed safely during subtest
   listing

v15 --> v16
===========
 - a few changes to the gem_engine_topology stuff
 - added one more dummy test which also loops through the
   physical engines.
 - changes to test/perf_pmu required more work than expected
   (the 3 last patches)

v14 --> v15
===========
PATCH v14: [16]

 - virtual engines will be called "virtual", just as
   unrecognised engines will be called "unknown"

 - renamed the for_each loops to more meaningful names
   (__for_each_static_engine and for_each_context_engine) and
   moved into gem_engine_topology.h

 - minor changes about data types.

v13 --> v14
===========
PATCH v13: [15]
minor changes this time:
 - squashed patch 2 and 3 (from v13) with a little rename and
   added Chris r-b

 - fixed some index issues and string assignment leaks

 - squashed patches 5, 6, 7 and 8 from v13

v12 --> v13
===========
PATCH v12: [14]
This version is also very different from the previous one. Other
than some reorganization of the code, these are the main changes:

 - the previous version lacked the case where the context already
   had its engines mapped. The engines are now fetched by checking
   in the following order:

 if the driver doesn't have the new API
  -> get the engines from the static list
 if the driver has the API but the context has nothing mapped
  -> get the engines from "query" and map them
 if the driver has the API and the context has engines mapped
  -> get the engines from the context

 - the helper functions have been removed as they were of no use.
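The three-way discovery order above can be sketched as a small
decision helper. This is a minimal self-contained illustration,
not IGT code; the enum and function names are hypothetical:

```c
#include <stdbool.h>

/* Hypothetical names, for illustration only. */
enum engine_source {
	SOURCE_STATIC_LIST,   /* no new API: fall back to the static table */
	SOURCE_QUERY_AND_MAP, /* API present, context has nothing mapped */
	SOURCE_CONTEXT,       /* API present, engines already mapped */
};

static enum engine_source engine_source(bool has_engine_api,
					bool ctx_has_engine_map)
{
	/* Without the new getparam/setparam and query ioctls, the
	 * only option is the static engine list. */
	if (!has_engine_api)
		return SOURCE_STATIC_LIST;

	/* With the API, prefer the context's own mapping when one
	 * exists; otherwise query the driver and map the result. */
	return ctx_has_engine_map ? SOURCE_CONTEXT : SOURCE_QUERY_AND_MAP;
}
```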

v11 --> v12
===========
PATCH v11: [13]
This 12th version starts from a completely different approach.
Here are the main differences:

 - The list of engines is provided in an engine_data structure
   which contains an index (useful for looping through and for
   engine/context index mapping) instead of an array of engines.

 - The list of engines is generated every time the init function
   is called and nothing is allocated on the heap.

 - The ioctl check is already done during initialization; if the
   new ioctls are not implemented, the init function still stores
   only the engines present in the GPU.

 - The for_each loop is implemented by re-using the previous
   'for_each_engine_class_instance()' implemented by Tvrtko.

 - The gem_topology library offers a few helper functions for
   checking engine presence, checking whether the ioctls are
   implemented, and executing the buffer, so that users can remain
   completely unaware of the driver implementation.

Thanks Tvrtko for all your inputs.

v10 --> v11
===========
RFC v10: [12]
A few cosmetic changes in v11 and minor architectural details.
Thanks Tvrtko.

- the 'query_engines()' functions are static as no one is using
  them yet.

- removed the 'gem_has_engine_topology()' function because it was
  rarely used; 'get_active_engines()' can be used instead.

- a minor ring -> engine renaming coming from Chris. 

v9 --> v10
==========
RFC v9: [11]
Quite a few changes this time as well; thanks Chris for the
reviews. Here are the most relevant:

- gem_query.[ch] have been renamed to gem_engine_topology.[ch]
  and all the functions ended up there as they are referring to
  the topology of the engines.

- the functions 'get_active_engines()',
  'gem_set_context_get_engines()' and
  'igt_require_gem_engine_list()' will be the main interface to
  the gem_engine_topology library, refer to patch 2 for details.

- the define 'for_each_engine2()' no longer exposes the
  iterator.

- 'gem_context_has_engine()' has been moved from ioctl_wrappers.c
  to gem_context.c.

- the gem_exec_basic exec-ctx subtest does not abort if the new
  getparam/setparam and query APIs are not implemented, as it can
  work with both (as it did at the beginning).

v8 --> v9
=========
RFC v8: [10]
quite a few changes; please refer to the review in [10]. Thanks
Chris for the review. These are the most relevant:

- all the allocations in gem_query are now made on the stack,
  no longer dynamically.

- removed get/set_context as it was already implemented and I
  didn't know.

- renamed some functions and variables to hopefully more
  meaningful names.

V7 --> v8
=========
RFC v7: [9]

- all functions have been moved from lib/igt_gt.{c,h} and
  lib/ioctl_wrappers.{c,h} to lib/i915/gem_query.{c,h} (thanks
  Chris)

- 'for_each_engine_ctx' has been renamed to 'for_each_engine2' to
  be consistent with the '2' that indicates the new 'struct
  intel_execution_engine2' data structure.

V6 --> V7
=========
RFC v6: [8]

- a new patch has been added (patch 3) which adds a new
  requirement check through the igt_require_gem_engine_list()
  function (thanks Chris). This function now initializes the
  engine list instead of igt_require_gem(), as it was done
  in v6

- all the ioctls have been wrapped (thanks Chris and Antonio) and
  new library functions have been added that assert the ioctls

- gem_init_engine_list() function returns the errno from the
  GETPARAM ioctl in order to be used as a requirement. (thanks
  Chris)

- fixed a few requires/asserts

- The engine list "intel_active_engines2" is allocated for the
  actual number of engines instead of an arbitrary 64 (thanks
  Antonio).

- some parameter renaming in gem_has_ring_by_idx(). (thanks
  Chris).

- the original "intel_execution_engines2" has not been renamed,
  because it is used to create subtests before even executing any
  test/ioctl. By renaming it, some subtest generations failed.
  (thanks Petri)

V5 --> V6
=========
RFC v5: [7]
- Chris implemented the getparam ioctl, which allows the test
  to figure out whether the new interface has been implemented.
  This way for_each_engine_ctx() is able to work with both the
  new and the old kernel uapi (thanks Chris)

V4 --> V5
=========
RFC v4: [6]

- the engine list is now built in 'igt_require_gem()' instead of
  '__open_driver()' so that we keep this discovery method
  specific to the i915 driver (thanks Chris).

- The query/setparam structures are no longer dynamically
  allocated based on the number of engines; they are now
  allocated for an arbitrary 64 engines, to avoid the extra
  ioctl calls that retrieve the engine count (thanks Chris)

- use igt_ioctl instead of ioctl (thanks Chris)

- allocate intel_execution_engines2 dynamically instead of
  statically (thanks Tvrtko)

- simplify the test in 'gem_exec_basic()' so that it simply
  checks the presence of the engine instead of executing a buffer
  (thanks Chris)

- a new patch has been added (patch 3) that extends the
  'gem_has_ring()' boolean function. The new version sets the
  index as it is mapped in the kernel. The previous function is
  now a wrapper around the new one.

V3 --> V4
=========
PATCH v3: [3]

- re-architected the discovery mechanism based on Tvrtko's
  suggestions and reviews. In this version the discovery is done
  during device opening and stored in a NULL-terminated array,
  which replaces the existing intel_execution_engines2 that is
  mainly used as a reference.

V2 --> V3
=========
RFC v2: [2]

- removed a standalone gem_query_engines_demo test and added the
  exec-ctx subtest inside gem_exec_basic (thanks Tvrtko).

- fixed most of Tvrtko's comments in [5], which consist of
  wrapping the mallocs in igt_assert and the ioctls in
  igt_require, plus a few refactorings (thanks Tvrtko).

V1 --> V2
=========
RFC v1: [1]

- added a demo test that simply queries the driver about the
  engines and executes a buffer (thanks Tvrtko)

- refactored the for_each_engine_ctx() macro so that what was
  done by the "bind" function in the previous version is now
  done in the first iteration. (Thanks Chris)

- removed "gem_has_ring_ctx()" because it was out of scope.

- renamed functions to more meaningful names

[1] RFC v1: https://lists.freedesktop.org/archives/igt-dev/2018-November/007025.html
[2] RFC v2: https://lists.freedesktop.org/archives/igt-dev/2018-November/007079.html
[3] PATCH v3: https://lists.freedesktop.org/archives/igt-dev/2018-November/007148.html
[4] https://cgit.freedesktop.org/~tursulin/drm-intel/log/?h=media
[5] https://lists.freedesktop.org/archives/igt-dev/2018-November/007100.html
[6] https://lists.freedesktop.org/archives/igt-dev/2019-January/008029.html
[7] https://lists.freedesktop.org/archives/igt-dev/2019-January/008165.html
[8] https://lists.freedesktop.org/archives/igt-dev/2019-February/008902.html
[9] https://lists.freedesktop.org/archives/igt-dev/2019-February/009185.html
[10] https://lists.freedesktop.org/archives/igt-dev/2019-February/009205.html
[11] https://lists.freedesktop.org/archives/igt-dev/2019-February/009277.html
[12] https://lists.freedesktop.org/archives/igt-dev/2019-March/010197.html
[13] https://lists.freedesktop.org/archives/igt-dev/2019-March/010467.html
[14] https://lists.freedesktop.org/archives/igt-dev/2019-March/010776.html
[15] https://lists.freedesktop.org/archives/igt-dev/2019-March/010827.html
[16] https://lists.freedesktop.org/archives/igt-dev/2019-March/010916.html
[17] https://lists.freedesktop.org/archives/igt-dev/2019-April/011821.html

Andi Shyti (6):
  include/drm-uapi: import i915_drm.h header file
  lib/i915: add gem_engine_topology library and for_each loop definition
  lib: igt_gt: add execution buffer flags to class helper
  lib: igt_gt: make gem_engine_can_store_dword() check engine class
  lib: igt_dummyload: use for_each_context_engine()
  test: perf_pmu: use the gem_engine_topology library

 include/drm-uapi/i915_drm.h    | 361 +++++++++++++++++++++++++++------
 lib/Makefile.sources           |   2 +
 lib/i915/gem_engine_topology.c | 291 ++++++++++++++++++++++++++
 lib/i915/gem_engine_topology.h |  80 ++++++++
 lib/igt.h                      |   1 +
 lib/igt_dummyload.c            |  29 ++-
 lib/igt_gt.c                   |  30 ++-
 lib/igt_gt.h                   |  12 +-
 lib/meson.build                |   1 +
 tests/perf_pmu.c               | 148 ++++++++------
 10 files changed, 821 insertions(+), 134 deletions(-)
 create mode 100644 lib/i915/gem_engine_topology.c
 create mode 100644 lib/i915/gem_engine_topology.h

-- 
2.20.1

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [igt-dev] [PATCH v21 1/6] include/drm-uapi: import i915_drm.h header file
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
@ 2019-04-16 15:11 ` Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition Andi Shyti
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

This header file is imported in order to pick up the new ioctls
DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM,
DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM and DRM_IOCTL_I915_QUERY.

Signed-off-by: Andi Shyti <andi.shyti@intel.com>
---
 include/drm-uapi/i915_drm.h | 361 ++++++++++++++++++++++++++++++------
 1 file changed, 304 insertions(+), 57 deletions(-)

diff --git a/include/drm-uapi/i915_drm.h b/include/drm-uapi/i915_drm.h
index 4ae1c6ff6ae6..2bbad08eb9d2 100644
--- a/include/drm-uapi/i915_drm.h
+++ b/include/drm-uapi/i915_drm.h
@@ -62,6 +62,26 @@ extern "C" {
 #define I915_ERROR_UEVENT		"ERROR"
 #define I915_RESET_UEVENT		"RESET"
 
+/*
+ * i915_user_extension: Base class for defining a chain of extensions
+ *
+ * Many interfaces need to grow over time. In most cases we can simply
+ * extend the struct and have userspace pass in more data. Another option,
+ * as demonstrated by Vulkan's approach to providing extensions for forward
+ * and backward compatibility, is to use a list of optional structs to
+ * provide those extra details.
+ *
+ * The key advantage to using an extension chain is that it allows us to
+ * redefine the interface more easily than an ever growing struct of
+ * increasing complexity, and for large parts of that interface to be
+ * entirely optional. The downside is more pointer chasing; chasing across
+ * the boundary with pointers encapsulated inside u64.
+ */
+struct i915_user_extension {
+	__u64 next_extension;
+	__u64 name;
+};
+
 /*
  * MOCS indexes used for GPU surfaces, defining the cacheability of the
  * surface data and the coherency for this data wrt. CPU vs. GPU accesses.
@@ -104,6 +124,9 @@ enum drm_i915_gem_engine_class {
 	I915_ENGINE_CLASS_INVALID	= -1
 };
 
+#define I915_ENGINE_CLASS_INVALID_NONE -1
+#define I915_ENGINE_CLASS_INVALID_VIRTUAL 0
+
 /**
  * DOC: perf_events exposed by i915 through /sys/bus/event_sources/drivers/i915
  *
@@ -321,6 +344,8 @@ typedef struct _drm_i915_sarea {
 #define DRM_I915_PERF_ADD_CONFIG	0x37
 #define DRM_I915_PERF_REMOVE_CONFIG	0x38
 #define DRM_I915_QUERY			0x39
+#define DRM_I915_GEM_VM_CREATE		0x3a
+#define DRM_I915_GEM_VM_DESTROY		0x3b
 /* Must be kept compact -- no holes */
 
 #define DRM_IOCTL_I915_INIT		DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
@@ -370,6 +395,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_IOCTL_I915_GET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
 #define DRM_IOCTL_I915_GEM_WAIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_WAIT, struct drm_i915_gem_wait)
 #define DRM_IOCTL_I915_GEM_CONTEXT_CREATE	DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_CREATE, struct drm_i915_gem_context_create)
+#define DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT	DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_CREATE, struct drm_i915_gem_context_create_ext)
 #define DRM_IOCTL_I915_GEM_CONTEXT_DESTROY	DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_DESTROY, struct drm_i915_gem_context_destroy)
 #define DRM_IOCTL_I915_REG_READ			DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_REG_READ, struct drm_i915_reg_read)
 #define DRM_IOCTL_I915_GET_RESET_STATS		DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GET_RESET_STATS, struct drm_i915_reset_stats)
@@ -380,6 +406,8 @@ typedef struct _drm_i915_sarea {
 #define DRM_IOCTL_I915_PERF_ADD_CONFIG	DRM_IOW(DRM_COMMAND_BASE + DRM_I915_PERF_ADD_CONFIG, struct drm_i915_perf_oa_config)
 #define DRM_IOCTL_I915_PERF_REMOVE_CONFIG	DRM_IOW(DRM_COMMAND_BASE + DRM_I915_PERF_REMOVE_CONFIG, __u64)
 #define DRM_IOCTL_I915_QUERY			DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_QUERY, struct drm_i915_query)
+#define DRM_IOCTL_I915_GEM_VM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_CREATE, struct drm_i915_gem_vm_control)
+#define DRM_IOCTL_I915_GEM_VM_DESTROY	DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_VM_DESTROY, struct drm_i915_gem_vm_control)
 
 /* Allow drivers to submit batchbuffers directly to hardware, relying
  * on the security mechanisms provided by hardware.
@@ -563,6 +591,12 @@ typedef struct drm_i915_irq_wait {
  */
 #define I915_PARAM_MMAP_GTT_COHERENT	52
 
+/*
+ * Query whether DRM_I915_GEM_EXECBUFFER2 supports coordination of parallel
+ * execution through use of explicit fence support.
+ * See I915_EXEC_FENCE_OUT and I915_EXEC_FENCE_SUBMIT.
+ */
+#define I915_PARAM_HAS_EXEC_SUBMIT_FENCE 53
 /* Must be kept compact -- no holes and well documented */
 
 typedef struct drm_i915_getparam {
@@ -1085,7 +1119,16 @@ struct drm_i915_gem_execbuffer2 {
  */
 #define I915_EXEC_FENCE_ARRAY   (1<<19)
 
-#define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_ARRAY<<1))
+/*
+ * Setting I915_EXEC_FENCE_SUBMIT implies that lower_32_bits(rsvd2) represent
+ * a sync_file fd to wait upon (in a nonblocking manner) prior to executing
+ * the batch.
+ *
+ * Returns -EINVAL if the sync_file fd cannot be found.
+ */
+#define I915_EXEC_FENCE_SUBMIT		(1 << 20)
+
+#define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_SUBMIT << 1))
 
 #define I915_EXEC_CONTEXT_ID_MASK	(0xffffffff)
 #define i915_execbuffer2_set_context_id(eb2, context) \
@@ -1421,65 +1464,18 @@ struct drm_i915_gem_wait {
 };
 
 struct drm_i915_gem_context_create {
-	/*  output: id of new context*/
-	__u32 ctx_id;
-	__u32 pad;
-};
-
-struct drm_i915_gem_context_destroy {
-	__u32 ctx_id;
-	__u32 pad;
-};
-
-struct drm_i915_reg_read {
-	/*
-	 * Register offset.
-	 * For 64bit wide registers where the upper 32bits don't immediately
-	 * follow the lower 32bits, the offset of the lower 32bits must
-	 * be specified
-	 */
-	__u64 offset;
-#define I915_REG_READ_8B_WA (1ul << 0)
-
-	__u64 val; /* Return value */
-};
-/* Known registers:
- *
- * Render engine timestamp - 0x2358 + 64bit - gen7+
- * - Note this register returns an invalid value if using the default
- *   single instruction 8byte read, in order to workaround that pass
- *   flag I915_REG_READ_8B_WA in offset field.
- *
- */
-
-struct drm_i915_reset_stats {
-	__u32 ctx_id;
-	__u32 flags;
-
-	/* All resets since boot/module reload, for all contexts */
-	__u32 reset_count;
-
-	/* Number of batches lost when active in GPU, for this context */
-	__u32 batch_active;
-
-	/* Number of batches lost pending for execution, for this context */
-	__u32 batch_pending;
-
+	__u32 ctx_id; /* output: id of new context*/
 	__u32 pad;
 };
 
-struct drm_i915_gem_userptr {
-	__u64 user_ptr;
-	__u64 user_size;
+struct drm_i915_gem_context_create_ext {
+	__u32 ctx_id; /* output: id of new context*/
 	__u32 flags;
-#define I915_USERPTR_READ_ONLY 0x1
-#define I915_USERPTR_UNSYNCHRONIZED 0x80000000
-	/**
-	 * Returned handle for the object.
-	 *
-	 * Object handles are nonzero.
-	 */
-	__u32 handle;
+#define I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS	(1u << 0)
+#define I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE	(1u << 1)
+#define I915_CONTEXT_CREATE_FLAGS_UNKNOWN \
+	(-(I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE << 1))
+	__u64 extensions;
 };
 
 struct drm_i915_gem_context_param {
@@ -1520,7 +1516,43 @@ struct drm_i915_gem_context_param {
  * On creation, all new contexts are marked as recoverable.
  */
 #define I915_CONTEXT_PARAM_RECOVERABLE	0x8
+
+	/*
+	 * The id of the associated virtual memory address space (ppGTT) of
+	 * this context. Can be retrieved and passed to another context
+	 * (on the same fd) for both to use the same ppGTT and so share
+	 * address layouts, and avoid reloading the page tables on context
+	 * switches between themselves.
+	 *
+	 * See DRM_I915_GEM_VM_CREATE and DRM_I915_GEM_VM_DESTROY.
+	 */
+#define I915_CONTEXT_PARAM_VM		0x9
+
+/*
+ * I915_CONTEXT_PARAM_ENGINES:
+ *
+ * Bind this context to operate on this subset of available engines. Henceforth,
+ * the I915_EXEC_RING selector for DRM_IOCTL_I915_GEM_EXECBUFFER2 operates as
+ * an index into this array of engines; I915_EXEC_DEFAULT selecting engine[0]
+ * and upwards. Slots 0...N are filled in using the specified (class, instance).
+ * Use
+ *	engine_class: I915_ENGINE_CLASS_INVALID,
+ *	engine_instance: I915_ENGINE_CLASS_INVALID_NONE
+ * to specify a gap in the array that can be filled in later, e.g. by a
+ * virtual engine used for load balancing.
+ *
+ * Setting the number of engines bound to the context to 0, by passing a zero
+ * sized argument, will revert back to default settings.
+ *
+ * See struct i915_context_param_engines.
+ *
+ * Extensions:
+ *   i915_context_engines_load_balance (I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE)
+ *   i915_context_engines_bond (I915_CONTEXT_ENGINES_EXT_BOND)
+ */
+#define I915_CONTEXT_PARAM_ENGINES	0xa
 /* Must be kept compact -- no holes and well documented */
+
 	__u64 value;
 };
 
@@ -1553,9 +1585,10 @@ struct drm_i915_gem_context_param_sseu {
 	__u16 engine_instance;
 
 	/*
-	 * Unused for now. Must be cleared to zero.
+	 * Unknown flags must be cleared to zero.
 	 */
 	__u32 flags;
+#define I915_CONTEXT_SSEU_FLAG_ENGINE_INDEX (1u << 0)
 
 	/*
 	 * Mask of slices to enable for the context. Valid values are a subset
@@ -1583,6 +1616,175 @@ struct drm_i915_gem_context_param_sseu {
 	__u32 rsvd;
 };
 
+/*
+ * i915_context_engines_load_balance:
+ *
+ * Enable load balancing across this set of engines.
+ *
+ * Into the I915_EXEC_DEFAULT slot [0], a virtual engine is created that when
+ * used will proxy the execbuffer request onto one of the set of engines
+ * in such a way as to distribute the load evenly across the set.
+ *
+ * The set of engines must be compatible (e.g. the same HW class) as they
+ * will share the same logical GPU context and ring.
+ *
+ * To intermix rendering with the virtual engine and direct rendering onto
+ * the backing engines (bypassing the load balancing proxy), the context must
+ * be defined to use a single timeline for all engines.
+ */
+struct i915_context_engines_load_balance {
+	struct i915_user_extension base;
+
+	__u16 engine_index;
+	__u16 mbz16; /* reserved for future use; must be zero */
+	__u32 flags; /* all undefined flags must be zero */
+
+	__u64 engines_mask; /* selection mask of engines[] */
+
+	__u64 mbz64[4]; /* reserved for future use; must be zero */
+};
+
+/*
+ * i915_context_engines_bond:
+ *
+ */
+struct i915_context_engines_bond {
+	struct i915_user_extension base;
+
+	__u16 engine_index;
+	__u16 mbz;
+
+	__u16 master_class;
+	__u16 master_instance;
+
+	__u64 sibling_mask;
+	__u64 flags; /* all undefined flags must be zero */
+};
+
+struct i915_context_param_engines {
+	__u64 extensions; /* linked chain of extension blocks, 0 terminates */
+#define I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE 0
+#define I915_CONTEXT_ENGINES_EXT_BOND 1
+
+	struct {
+		__u16 engine_class; /* see enum drm_i915_gem_engine_class */
+		__u16 engine_instance;
+	} class_instance[0];
+} __attribute__((packed));
+
+#define I915_DEFINE_CONTEXT_PARAM_ENGINES(name__, N__) struct { \
+	__u64 extensions; \
+	struct { \
+		__u16 engine_class; \
+		__u16 engine_instance; \
+	} class_instance[N__]; \
+} __attribute__((packed)) name__
+
+struct drm_i915_gem_context_create_ext_setparam {
+#define I915_CONTEXT_CREATE_EXT_SETPARAM 0
+	struct i915_user_extension base;
+	struct drm_i915_gem_context_param setparam;
+};
+
+struct drm_i915_gem_context_create_ext_clone {
+#define I915_CONTEXT_CREATE_EXT_CLONE 1
+	struct i915_user_extension base;
+	__u32 clone_id;
+	__u32 flags;
+#define I915_CONTEXT_CLONE_FLAGS	(1u << 0)
+#define I915_CONTEXT_CLONE_SCHED	(1u << 1)
+#define I915_CONTEXT_CLONE_SSEU		(1u << 2)
+#define I915_CONTEXT_CLONE_TIMELINE	(1u << 3)
+#define I915_CONTEXT_CLONE_VM		(1u << 4)
+#define I915_CONTEXT_CLONE_ENGINES	(1u << 5)
+#define I915_CONTEXT_CLONE_UNKNOWN -(I915_CONTEXT_CLONE_ENGINES << 1)
+	__u64 rsvd;
+};
+
+struct drm_i915_gem_context_destroy {
+	__u32 ctx_id;
+	__u32 pad;
+};
+
+/*
+ * DRM_I915_GEM_VM_CREATE -
+ *
+ * Create a new virtual memory address space (ppGTT) for use within a context
+ * on the same file. Extensions can be provided to configure exactly how the
+ * address space is setup upon creation.
+ *
+ * The id of new VM (bound to the fd) for use with I915_CONTEXT_PARAM_VM is
+ * returned in the outparam @id.
+ *
+ * No flags are defined, with all bits reserved and must be zero.
+ *
+ * An extension chain maybe provided, starting with @extensions, and terminated
+ * by the @next_extension being 0. Currently, no extensions are defined.
+ *
+ * DRM_I915_GEM_VM_DESTROY -
+ *
+ * Destroys a previously created VM id, specified in @id.
+ *
+ * No extensions or flags are allowed currently, and so must be zero.
+ */
+struct drm_i915_gem_vm_control {
+	__u64 extensions;
+	__u32 flags;
+	__u32 id;
+};
+
+struct drm_i915_reg_read {
+	/*
+	 * Register offset.
+	 * For 64bit wide registers where the upper 32bits don't immediately
+	 * follow the lower 32bits, the offset of the lower 32bits must
+	 * be specified
+	 */
+	__u64 offset;
+#define I915_REG_READ_8B_WA (1ul << 0)
+
+	__u64 val; /* Return value */
+};
+
+/* Known registers:
+ *
+ * Render engine timestamp - 0x2358 + 64bit - gen7+
+ * - Note this register returns an invalid value if using the default
+ *   single instruction 8byte read, in order to workaround that pass
+ *   flag I915_REG_READ_8B_WA in offset field.
+ *
+ */
+
+struct drm_i915_reset_stats {
+	__u32 ctx_id;
+	__u32 flags;
+
+	/* All resets since boot/module reload, for all contexts */
+	__u32 reset_count;
+
+	/* Number of batches lost when active in GPU, for this context */
+	__u32 batch_active;
+
+	/* Number of batches lost pending for execution, for this context */
+	__u32 batch_pending;
+
+	__u32 pad;
+};
+
+struct drm_i915_gem_userptr {
+	__u64 user_ptr;
+	__u64 user_size;
+	__u32 flags;
+#define I915_USERPTR_READ_ONLY 0x1
+#define I915_USERPTR_UNSYNCHRONIZED 0x80000000
+	/**
+	 * Returned handle for the object.
+	 *
+	 * Object handles are nonzero.
+	 */
+	__u32 handle;
+};
+
 enum drm_i915_oa_format {
 	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
 	I915_OA_FORMAT_A29,	    /* HSW only */
@@ -1744,6 +1946,7 @@ struct drm_i915_perf_oa_config {
 struct drm_i915_query_item {
 	__u64 query_id;
 #define DRM_I915_QUERY_TOPOLOGY_INFO    1
+#define DRM_I915_QUERY_ENGINE_INFO	2
 /* Must be kept compact -- no holes and well documented */
 
 	/*
@@ -1842,6 +2045,50 @@ struct drm_i915_query_topology_info {
 	__u8 data[];
 };
 
+/**
+ * struct drm_i915_engine_info
+ *
+ * Describes one engine and it's capabilities as known to the driver.
+ */
+struct drm_i915_engine_info {
+	/** Engine class as in enum drm_i915_gem_engine_class. */
+	__u16 engine_class;
+
+	/** Engine instance number. */
+	__u16 engine_instance;
+
+	/** Reserved field. */
+	__u32 rsvd0;
+
+	/** Engine flags. */
+	__u64 flags;
+
+	/** Capabilities of this engine. */
+	__u64 capabilities;
+#define I915_VIDEO_CLASS_CAPABILITY_HEVC		(1 << 0)
+#define I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC	(1 << 1)
+
+	/** Reserved fields. */
+	__u64 rsvd1[4];
+};
+
+/**
+ * struct drm_i915_query_engine_info
+ *
+ * Engine info query enumerates all engines known to the driver by filling in
+ * an array of struct drm_i915_engine_info structures.
+ */
+struct drm_i915_query_engine_info {
+	/** Number of struct drm_i915_engine_info structs following. */
+	__u32 num_engines;
+
+	/** MBZ */
+	__u32 rsvd[3];
+
+	/** Marker for drm_i915_engine_info structures. */
+	struct drm_i915_engine_info engines[];
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.20.1


* [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 1/6] include/drm-uapi: import i915_drm.h header file Andi Shyti
@ 2019-04-16 15:11 ` Andi Shyti
  2019-04-16 15:21   ` Chris Wilson
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 3/6] lib: igt_gt: add execution buffer flags to class helper Andi Shyti
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

The gem_engine_topology library is a set of functions that
interface with the query and getparam/setparam ioctls.

The library's entry point is the 'intel_init_engine_list()'
function which, every time it is called, generates the list of
active engines and returns them in a 'struct intel_engine_data'.
The structure contains only the engines that are actually present
in the GPU.

The function works whether or not the query and getparam ioctls
are implemented by the running kernel. If they are implemented,
the driver is queried for the list of active engines. If they are
not, the list is taken from the 'intel_execution_engines2' array
and stored only after checking each engine's presence.

The gem_engine_topology library provides some iteration helpers:

 - intel_get_current_engine(): provides the current engine in the
   iteration.

 - intel_get_current_physical_engine(): provides the current
   physical engine; if the current engine is a virtual engine,
   it moves forward until it finds a physical engine.

 - intel_next_engine(): simply increments the counter so that it
   points to the next engine.
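As a rough sketch of how these helpers fit together (the struct
layout and names below only mimic the library and are assumptions
for illustration, not the actual gem_engine_topology code):

```c
#include <stddef.h>

/* Hypothetical mock-up of the engine list plus iteration index. */
struct engine { int class; int instance; int is_virtual; };

struct engine_data {
	unsigned int nengines;
	unsigned int n;          /* iteration index */
	struct engine engines[8];
};

/* current engine, or NULL when the iteration is exhausted */
static struct engine *get_current_engine(struct engine_data *ed)
{
	return ed->n < ed->nengines ? &ed->engines[ed->n] : NULL;
}

/* skip forward past virtual engines to the next physical one */
static struct engine *get_current_physical_engine(struct engine_data *ed)
{
	struct engine *e;

	while ((e = get_current_engine(ed)) && e->is_virtual)
		ed->n++;

	return e;
}

static void next_engine(struct engine_data *ed)
{
	ed->n++; /* just advance the counter */
}
```

A caller would typically pair get_current_engine() (or its
physical variant) with next_engine() in a for loop, which is the
shape the for_each_context_engine macro wraps up.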

Extend the 'for_each_engine_class_instance' so that it can loop
using the new 'intel_init_engine_list()' and rename it to
'for_each_context_engine'.

Move '__for_each_engine_class_instance' to gem_engine_topology.h
and rename it to '__for_each_static_engine'.

Update tests/perf_pmu.c accordingly so that it uses the new
for_each loops correctly.

Signed-off-by: Andi Shyti <andi.shyti@intel.com>
---
 lib/Makefile.sources           |   2 +
 lib/i915/gem_engine_topology.c | 291 +++++++++++++++++++++++++++++++++
 lib/i915/gem_engine_topology.h |  80 +++++++++
 lib/igt.h                      |   1 +
 lib/igt_gt.h                   |   2 +
 lib/meson.build                |   1 +
 6 files changed, 377 insertions(+)
 create mode 100644 lib/i915/gem_engine_topology.c
 create mode 100644 lib/i915/gem_engine_topology.h

diff --git a/lib/Makefile.sources b/lib/Makefile.sources
index a1d253511030..082049bf7c6a 100644
--- a/lib/Makefile.sources
+++ b/lib/Makefile.sources
@@ -13,6 +13,8 @@ lib_source_list =	 	\
 	i915/gem_ring.c	\
 	i915/gem_mman.c	\
 	i915/gem_mman.h	\
+	i915/gem_engine_topology.c	\
+	i915/gem_engine_topology.h	\
 	i915_3d.h		\
 	i915_reg.h		\
 	i915_pciids.h		\
diff --git a/lib/i915/gem_engine_topology.c b/lib/i915/gem_engine_topology.c
new file mode 100644
index 000000000000..1c89e425b81d
--- /dev/null
+++ b/lib/i915/gem_engine_topology.c
@@ -0,0 +1,291 @@
+/*
+ * Copyright © 2019 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include "drmtest.h"
+#include "ioctl_wrappers.h"
+
+#include "i915/gem_engine_topology.h"
+
+#define DEFINE_CONTEXT_PARAM(e__, p__, c__, N__) \
+		I915_DEFINE_CONTEXT_PARAM_ENGINES(e__, N__); \
+		struct drm_i915_gem_context_param p__ = { \
+			.param = I915_CONTEXT_PARAM_ENGINES, \
+			.ctx_id = c__, \
+			.size = SIZEOF_CTX_PARAM, \
+			.value = to_user_pointer(&e__), \
+		}
+
+static int __gem_query(int fd, struct drm_i915_query *q)
+{
+	int err = 0;
+
+	if (igt_ioctl(fd, DRM_IOCTL_I915_QUERY, q))
+		err = -errno;
+
+	errno = 0;
+	return err;
+}
+
+static void gem_query(int fd, struct drm_i915_query *q)
+{
+	igt_assert_eq(__gem_query(fd, q), 0);
+}
+
+static void query_engines(int fd,
+			  struct drm_i915_query_engine_info *query_engines,
+			  int length)
+{
+	struct drm_i915_query_item item = { };
+	struct drm_i915_query query = { };
+
+	item.query_id = DRM_I915_QUERY_ENGINE_INFO;
+	query.items_ptr = to_user_pointer(&item);
+	query.num_items = 1;
+	item.length = length;
+
+	item.data_ptr = to_user_pointer(query_engines);
+
+	gem_query(fd, &query);
+}
+
+static void ctx_map_engines(int fd, struct intel_engine_data *ed,
+			    struct drm_i915_gem_context_param *param)
+{
+	struct i915_context_param_engines *engines =
+			(struct i915_context_param_engines *) param->value;
+	int i = 0;
+
+	for (typeof(engines->class_instance[0]) *p =
+	     &engines->class_instance[0];
+	     i < ed->nengines; i++, p++) {
+		p->engine_class = ed->engines[i].class;
+		p->engine_instance = ed->engines[i].instance;
+	}
+
+	param->size = offsetof(typeof(*engines), class_instance[i]);
+	engines->extensions = 0;
+
+	gem_context_set_param(fd, param);
+}
+
+static void init_engine(struct intel_execution_engine2 *e2,
+			int class, int instance, uint64_t flags)
+{
+	const struct intel_execution_engine2 *__e2;
+	static const char *unknown_name = "unknown",
+			  *virtual_name = "virtual";
+
+	e2->class    = class;
+	e2->instance = instance;
+	e2->flags    = flags;
+
+	/* engine is a virtual engine */
+	if (class == I915_ENGINE_CLASS_INVALID) {
+		e2->name = virtual_name;
+		e2->is_virtual = true;
+		return;
+	}
+
+	__for_each_static_engine(__e2)
+		if (__e2->class == class && __e2->instance == instance)
+			break;
+
+	if (__e2->name) {
+		e2->name = __e2->name;
+	} else {
+		igt_warn("found unknown engine (%d, %d)\n", class, instance);
+		e2->name = unknown_name;
+	}
+
+	/* mark explicitly that this is not a virtual engine */
+	e2->is_virtual = false;
+}
+
+static void query_engine_list(int fd, struct intel_engine_data *ed)
+{
+	uint8_t buff[SIZEOF_QUERY] = { };
+	struct drm_i915_query_engine_info *query_engine =
+			(struct drm_i915_query_engine_info *) buff;
+	int i;
+
+	query_engines(fd, query_engine, SIZEOF_QUERY);
+
+	for (i = 0; i < query_engine->num_engines; i++)
+		init_engine(&ed->engines[i],
+			    query_engine->engines[i].engine_class,
+			    query_engine->engines[i].engine_instance, i);
+
+	ed->nengines = query_engine->num_engines;
+}
+
+struct intel_execution_engine2 *
+intel_get_current_engine(struct intel_engine_data *ed)
+{
+	if (!ed->n)
+		ed->current_engine = &ed->engines[0];
+	else if (ed->n >= ed->nengines)
+		ed->current_engine = NULL;
+
+	return ed->current_engine;
+}
+
+void intel_next_engine(struct intel_engine_data *ed)
+{
+	if (ed->n + 1 < ed->nengines) {
+		ed->n++;
+		ed->current_engine = &ed->engines[ed->n];
+	} else {
+		ed->n = ed->nengines;
+		ed->current_engine = NULL;
+	}
+}
+
+struct intel_execution_engine2 *
+intel_get_current_physical_engine(struct intel_engine_data *ed)
+{
+	struct intel_execution_engine2 *e;
+
+	for (e = intel_get_current_engine(ed);
+	     e && e->is_virtual;
+	     intel_next_engine(ed))
+		;
+
+	return e;
+}
+
+static int gem_topology_get_param(int fd,
+				  struct drm_i915_gem_context_param *p)
+{
+	if (igt_only_list_subtests())
+		return -ENODEV;
+
+	if (__gem_context_get_param(fd, p))
+		return -1; /* using default engine map */
+
+	if (!p->size)
+		return 0;
+
+	p->size = (p->size - sizeof(struct i915_context_param_engines)) /
+		  (offsetof(struct i915_context_param_engines,
+			    class_instance[1]) -
+		  sizeof(struct i915_context_param_engines));
+
+	igt_assert_f(p->size <= GEM_MAX_ENGINES, "unsupported engine count\n");
+
+	return 0;
+}
+
+struct intel_engine_data intel_init_engine_list(int fd, uint32_t ctx_id)
+{
+	DEFINE_CONTEXT_PARAM(engines, param, ctx_id, GEM_MAX_ENGINES);
+	struct intel_engine_data engine_data = { };
+	int i;
+
+	if (gem_topology_get_param(fd, &param)) {
+		/* if kernel does not support engine/context mapping */
+		const struct intel_execution_engine2 *e2;
+
+		igt_debug("using pre-allocated engine list\n");
+
+		__for_each_static_engine(e2) {
+			struct intel_execution_engine2 *__e2 =
+				&engine_data.engines[engine_data.nengines];
+
+			if (!igt_only_list_subtests()) {
+				__e2->flags = gem_class_instance_to_eb_flags(fd,
+						e2->class, e2->instance);
+
+				if (!gem_has_ring(fd, __e2->flags))
+					continue;
+			} else {
+				__e2->flags = -1; /* 0xfff... */
+			}
+
+			__e2->name       = e2->name;
+			__e2->instance   = e2->instance;
+			__e2->class      = e2->class;
+			__e2->is_virtual = false;
+
+			engine_data.nengines++;
+		}
+		return engine_data;
+	}
+
+	if (!param.size) {
+		query_engine_list(fd, &engine_data);
+		ctx_map_engines(fd, &engine_data, &param);
+	} else {
+		for (i = 0; i < param.size; i++)
+			init_engine(&engine_data.engines[i],
+				    engines.class_instance[i].engine_class,
+				    engines.class_instance[i].engine_instance,
+				    i);
+
+		engine_data.nengines = i;
+	}
+
+	return engine_data;
+}
+
+int gem_context_lookup_engine(int fd, uint64_t engine, uint32_t ctx_id,
+			      struct intel_execution_engine2 *e)
+{
+	DEFINE_CONTEXT_PARAM(engines, param, ctx_id, GEM_MAX_ENGINES);
+
+	if (!e || gem_topology_get_param(fd, &param) || !param.size)
+		return -EINVAL;
+
+	e->class = engines.class_instance[engine].engine_class;
+	e->instance = engines.class_instance[engine].engine_instance;
+
+	return 0;
+}
+
+uint32_t gem_make_context_set_all_engines(int fd)
+{
+	DEFINE_CONTEXT_PARAM(engines, param, 0, GEM_MAX_ENGINES);
+	struct intel_engine_data engine_data = { };
+
+	param.ctx_id = gem_context_create(fd);
+
+	if (!gem_topology_get_param(fd, &param) && !param.size) {
+		query_engine_list(fd, &engine_data);
+		ctx_map_engines(fd, &engine_data, &param);
+	}
+
+	return param.ctx_id;
+}
+
+void gem_unset_context_engines(int fd, uint32_t ctx)
+{
+	gem_context_destroy(fd, ctx);
+}
+
+bool gem_has_engine_topology(int fd)
+{
+	struct drm_i915_gem_context_param param = {
+		.param = I915_CONTEXT_PARAM_ENGINES,
+	};
+
+	return !__gem_context_get_param(fd, &param);
+}
diff --git a/lib/i915/gem_engine_topology.h b/lib/i915/gem_engine_topology.h
new file mode 100644
index 000000000000..a9371ba1dc3b
--- /dev/null
+++ b/lib/i915/gem_engine_topology.h
@@ -0,0 +1,80 @@
+/*
+ * Copyright © 2019 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef GEM_ENGINE_TOPOLOGY_H
+#define GEM_ENGINE_TOPOLOGY_H
+
+#include "igt_gt.h"
+#include "i915_drm.h"
+
+/*
+ * Limit what we support for simplicity, due to limitations in how much
+ * we can address via execbuf2.
+ */
+#define SIZEOF_CTX_PARAM	offsetof(struct i915_context_param_engines, \
+					class_instance[GEM_MAX_ENGINES])
+#define SIZEOF_QUERY		offsetof(struct drm_i915_query_engine_info, \
+					engines[GEM_MAX_ENGINES])
+
+#define GEM_MAX_ENGINES		(I915_EXEC_RING_MASK + 1)
+
+struct intel_engine_data {
+	uint32_t nengines;
+	uint32_t n;
+	struct intel_execution_engine2 *current_engine;
+	struct intel_execution_engine2 engines[GEM_MAX_ENGINES];
+};
+
+bool gem_has_engine_topology(int fd);
+struct intel_engine_data intel_init_engine_list(int fd, uint32_t ctx_id);
+
+/* iteration functions */
+struct intel_execution_engine2 *
+intel_get_current_engine(struct intel_engine_data *ed);
+
+struct intel_execution_engine2 *
+intel_get_current_physical_engine(struct intel_engine_data *ed);
+
+void intel_next_engine(struct intel_engine_data *ed);
+
+int gem_context_lookup_engine(int fd, uint64_t engine, uint32_t ctx_id,
+			      struct intel_execution_engine2 *e);
+
+uint32_t gem_make_context_set_all_engines(int fd);
+void gem_unset_context_engines(int fd, uint32_t ctx);
+
+#define __for_each_static_engine(e__) \
+	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++)
+
+#define for_each_context_engine(fd__, ctx__, e__) \
+	for (struct intel_engine_data i__ = intel_init_engine_list(fd__, ctx__); \
+	     ((e__) = intel_get_current_engine(&i__)); \
+	     intel_next_engine(&i__))
+
+/* needs to replace "for_each_physical_engine" when conflicts are fixed */
+#define __for_each_physical_engine(fd__, e__) \
+	for (struct intel_engine_data i__ = intel_init_engine_list(fd__, 0); \
+	     ((e__) = intel_get_current_physical_engine(&i__)); \
+	     intel_next_engine(&i__))
+
+#endif /* GEM_ENGINE_TOPOLOGY_H */
diff --git a/lib/igt.h b/lib/igt.h
index 6654a659c062..03f19ca2dfb6 100644
--- a/lib/igt.h
+++ b/lib/igt.h
@@ -53,5 +53,6 @@
 #include "media_spin.h"
 #include "rendercopy.h"
 #include "i915/gem_mman.h"
+#include "i915/gem_engine_topology.h"
 
 #endif /* IGT_H */
diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index 475c0b3c3cc6..52b2f1ea95a5 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -95,6 +95,8 @@ extern const struct intel_execution_engine2 {
 	const char *name;
 	int class;
 	int instance;
+	uint64_t flags;
+	bool is_virtual;
 } intel_execution_engines2[];
 
 unsigned int
diff --git a/lib/meson.build b/lib/meson.build
index a846293307cb..e55e512403d9 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -5,6 +5,7 @@ lib_sources = [
 	'i915/gem_submission.c',
 	'i915/gem_ring.c',
 	'i915/gem_mman.c',
+	'i915/gem_engine_topology.c',
 	'igt_color_encoding.c',
 	'igt_debugfs.c',
 	'igt_device.c',
-- 
2.20.1

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [igt-dev] [PATCH v21 3/6] lib: igt_gt: add execution buffer flags to class helper
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 1/6] include/drm-uapi: import i915_drm.h header file Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition Andi Shyti
@ 2019-04-16 15:11 ` Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 4/6] lib: igt_gt: make gem_engine_can_store_dword() check engine class Andi Shyti
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

We have a "class/instance to eb flags" helper but not the
opposite; add it.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 lib/igt_gt.c | 18 ++++++++++++++++++
 lib/igt_gt.h |  2 ++
 2 files changed, 20 insertions(+)

diff --git a/lib/igt_gt.c b/lib/igt_gt.c
index 5999524326d0..3fa47903d853 100644
--- a/lib/igt_gt.c
+++ b/lib/igt_gt.c
@@ -41,6 +41,7 @@
 #include "intel_reg.h"
 #include "intel_chipset.h"
 #include "igt_dummyload.h"
+#include "i915/gem_engine_topology.h"
 
 /**
  * SECTION:igt_gt
@@ -586,6 +587,23 @@ const struct intel_execution_engine2 intel_execution_engines2[] = {
 	{ }
 };
 
+int gem_execbuf_flags_to_engine_class(unsigned int flags)
+{
+	switch (flags & 0x3f) {
+	case I915_EXEC_DEFAULT:
+	case I915_EXEC_RENDER:
+		return I915_ENGINE_CLASS_RENDER;
+	case I915_EXEC_BLT:
+		return I915_ENGINE_CLASS_COPY;
+	case I915_EXEC_BSD:
+		return I915_ENGINE_CLASS_VIDEO;
+	case I915_EXEC_VEBOX:
+		return I915_ENGINE_CLASS_VIDEO_ENHANCE;
+	default:
+		igt_assert(0);
+	}
+}
+
 unsigned int
 gem_class_instance_to_eb_flags(int gem_fd,
 			       enum drm_i915_gem_engine_class class,
diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index 52b2f1ea95a5..8ceed14288c7 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -99,6 +99,8 @@ extern const struct intel_execution_engine2 {
 	bool is_virtual;
 } intel_execution_engines2[];
 
+int gem_execbuf_flags_to_engine_class(unsigned int flags);
+
 unsigned int
 gem_class_instance_to_eb_flags(int gem_fd,
 			       enum drm_i915_gem_engine_class class,
-- 
2.20.1


* [igt-dev] [PATCH v21 4/6] lib: igt_gt: make gem_engine_can_store_dword() check engine class
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
                   ` (2 preceding siblings ...)
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 3/6] lib: igt_gt: add execution buffer flags to class helper Andi Shyti
@ 2019-04-16 15:11 ` Andi Shyti
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine() Andi Shyti
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

Engines referred to by class and instance are becoming more
popular; gem_engine_can_store_dword() should handle this.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 lib/igt_gt.c | 12 +++++++++---
 lib/igt_gt.h |  1 +
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/lib/igt_gt.c b/lib/igt_gt.c
index 3fa47903d853..f9ca058247c8 100644
--- a/lib/igt_gt.c
+++ b/lib/igt_gt.c
@@ -557,7 +557,7 @@ const struct intel_execution_engine intel_execution_engines[] = {
 	{ NULL, 0, 0 }
 };
 
-bool gem_can_store_dword(int fd, unsigned int engine)
+bool gem_class_can_store_dword(int fd, int class)
 {
 	uint16_t devid = intel_get_drm_devid(fd);
 	const struct intel_device_info *info = intel_get_device_info(devid);
@@ -569,8 +569,8 @@ bool gem_can_store_dword(int fd, unsigned int engine)
 	if (gen == 3 && (info->is_grantsdale || info->is_alviso))
 		return false; /* only supports physical addresses */
 
-	if (gen == 6 && ((engine & 0x3f) == I915_EXEC_BSD))
-		return false; /* kills the machine! */
+	if (gen == 6 && class == I915_ENGINE_CLASS_VIDEO)
+		return false;
 
 	if (info->is_broadwater)
 		return false; /* Not sure yet... */
@@ -578,6 +578,12 @@ bool gem_can_store_dword(int fd, unsigned int engine)
 	return true;
 }
 
+bool gem_can_store_dword(int fd, unsigned int engine)
+{
+	return gem_class_can_store_dword(fd,
+				gem_execbuf_flags_to_engine_class(engine));
+}
+
 const struct intel_execution_engine2 intel_execution_engines2[] = {
 	{ "rcs0", I915_ENGINE_CLASS_RENDER, 0 },
 	{ "bcs0", I915_ENGINE_CLASS_COPY, 0 },
diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index 8ceed14288c7..0b5c7fcb4c3c 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -90,6 +90,7 @@ bool gem_ring_is_physical_engine(int fd, unsigned int ring);
 bool gem_ring_has_physical_engine(int fd, unsigned int ring);
 
 bool gem_can_store_dword(int fd, unsigned int engine);
+bool gem_class_can_store_dword(int fd, int class);
 
 extern const struct intel_execution_engine2 {
 	const char *name;
-- 
2.20.1


* [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine()
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
                   ` (3 preceding siblings ...)
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 4/6] lib: igt_gt: make gem_engine_can_store_dword() check engine class Andi Shyti
@ 2019-04-16 15:11 ` Andi Shyti
  2019-04-17 15:42   ` Daniele Ceraolo Spurio
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library Andi Shyti
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

With the new getparam/setparam API, engines are mapped to the
context. Use for_each_context_engine() to loop through the
existing engines.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 lib/igt_dummyload.c | 29 ++++++++++++++++++++---------
 1 file changed, 20 insertions(+), 9 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 47f6b92b424b..747335f31ec8 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -39,6 +39,7 @@
 #include "ioctl_wrappers.h"
 #include "sw_sync.h"
 #include "igt_vgem.h"
+#include "i915/gem_engine_topology.h"
 #include "i915/gem_mman.h"
 
 /**
@@ -86,7 +87,7 @@ emit_recursive_batch(igt_spin_t *spin,
 	struct drm_i915_gem_relocation_entry relocs[2], *r;
 	struct drm_i915_gem_execbuffer2 *execbuf;
 	struct drm_i915_gem_exec_object2 *obj;
-	unsigned int engines[16];
+	unsigned int flags[GEM_MAX_ENGINES];
 	unsigned int nengine;
 	int fence_fd = -1;
 	uint32_t *batch, *batch_start;
@@ -94,17 +95,17 @@ emit_recursive_batch(igt_spin_t *spin,
 
 	nengine = 0;
 	if (opts->engine == ALL_ENGINES) {
-		unsigned int engine;
+		struct intel_execution_engine2 *engine;
 
-		for_each_physical_engine(fd, engine) {
+		for_each_context_engine(fd, opts->ctx, engine) {
 			if (opts->flags & IGT_SPIN_POLL_RUN &&
-			    !gem_can_store_dword(fd, engine))
+			    !gem_class_can_store_dword(fd, engine->class))
 				continue;
 
-			engines[nengine++] = engine;
+			flags[nengine++] = engine->flags;
 		}
 	} else {
-		engines[nengine++] = opts->engine;
+		flags[nengine++] = opts->engine;
 	}
 	igt_require(nengine);
 
@@ -234,7 +235,7 @@ emit_recursive_batch(igt_spin_t *spin,
 
 	for (i = 0; i < nengine; i++) {
 		execbuf->flags &= ~ENGINE_MASK;
-		execbuf->flags |= engines[i];
+		execbuf->flags |= flags[i];
 
 		gem_execbuf_wr(fd, execbuf);
 
@@ -309,9 +310,19 @@ igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts)
 	igt_require_gem(fd);
 
 	if (opts->engine != ALL_ENGINES) {
-		gem_require_ring(fd, opts->engine);
+		struct intel_execution_engine2 e;
+		int class;
+
+		if (!gem_context_lookup_engine(fd, opts->engine,
+					       opts->ctx, &e)) {
+			class = e.class;
+		} else {
+			gem_require_ring(fd, opts->engine);
+			class = gem_execbuf_flags_to_engine_class(opts->engine);
+		}
+
 		if (opts->flags & IGT_SPIN_POLL_RUN)
-			igt_require(gem_can_store_dword(fd, opts->engine));
+			igt_require(gem_class_can_store_dword(fd, class));
 	}
 
 	spin = spin_batch_create(fd, opts);
-- 
2.20.1


* [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
                   ` (4 preceding siblings ...)
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine() Andi Shyti
@ 2019-04-16 15:11 ` Andi Shyti
  2019-04-16 17:06   ` Tvrtko Ursulin
  2019-04-16 16:36 ` [igt-dev] ✓ Fi.CI.BAT: success for new engine discovery interface Patchwork
  2019-04-17  0:46 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  7 siblings, 1 reply; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:11 UTC (permalink / raw)
  To: IGT dev; +Cc: Andi Shyti

Replace the legacy for_each_engine* defines with the ones
implemented in the gem_engine_topology library.

Wherever possible, use gem_engine_can_store_dword(), which checks
the engine class instead of execbuf flags.

Now that __for_each_engine_class_instance and
for_each_engine_class_instance are unused, remove them.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 lib/igt_gt.h     |   7 ---
 tests/perf_pmu.c | 148 ++++++++++++++++++++++++++++-------------------
 2 files changed, 90 insertions(+), 65 deletions(-)

diff --git a/lib/igt_gt.h b/lib/igt_gt.h
index 0b5c7fcb4c3c..77318e2a82b8 100644
--- a/lib/igt_gt.h
+++ b/lib/igt_gt.h
@@ -119,11 +119,4 @@ void gem_require_engine(int gem_fd,
 	igt_require(gem_has_engine(gem_fd, class, instance));
 }
 
-#define __for_each_engine_class_instance(e__) \
-	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++)
-
-#define for_each_engine_class_instance(fd__, e__) \
-	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++) \
-		for_if (gem_has_engine((fd__), (e__)->class, (e__)->instance))
-
 #endif /* IGT_GT_H */
diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
index 4f552bc2ae28..93e8efc9a645 100644
--- a/tests/perf_pmu.c
+++ b/tests/perf_pmu.c
@@ -72,7 +72,7 @@ static int open_group(uint64_t config, int group)
 }
 
 static void
-init(int gem_fd, const struct intel_execution_engine2 *e, uint8_t sample)
+init(int gem_fd, struct intel_execution_engine2 *e, uint8_t sample)
 {
 	int fd, err = 0;
 	bool exists;
@@ -82,7 +82,7 @@ init(int gem_fd, const struct intel_execution_engine2 *e, uint8_t sample)
 	if (fd < 0)
 		err = errno;
 
-	exists = gem_has_engine(gem_fd, e->class, e->instance);
+	exists = gem_context_has_engine(gem_fd, 0, e->flags);
 	if (intel_gen(intel_get_drm_devid(gem_fd)) < 6 &&
 	    sample == I915_SAMPLE_SEMA)
 		exists = false;
@@ -158,11 +158,6 @@ static unsigned int measured_usleep(unsigned int usec)
 	return igt_nsec_elapsed(&ts);
 }
 
-static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
-{
-	return gem_class_instance_to_eb_flags(gem_fd, e->class, e->instance);
-}
-
 #define TEST_BUSY (1)
 #define FLAG_SYNC (2)
 #define TEST_TRAILING_IDLE (4)
@@ -170,14 +165,15 @@ static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
 #define FLAG_LONG (16)
 #define FLAG_HANG (32)
 
-static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
+				struct intel_execution_engine2 *e)
 {
 	struct igt_spin_factory opts = {
 		.ctx = ctx,
-		.engine = flags,
+		.engine = e->flags,
 	};
 
-	if (gem_can_store_dword(fd, flags))
+	if (gem_class_can_store_dword(fd, e->class))
 		opts.flags |= IGT_SPIN_POLL_RUN;
 
 	return __igt_spin_batch_factory(fd, &opts);
@@ -209,20 +205,34 @@ static unsigned long __spin_wait(int fd, igt_spin_t *spin)
 	return igt_nsec_elapsed(&start);
 }
 
-static igt_spin_t * __spin_sync(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * __spin_sync(int fd, uint32_t ctx,
+				struct intel_execution_engine2 *e)
 {
-	igt_spin_t *spin = __spin_poll(fd, ctx, flags);
+	igt_spin_t *spin = __spin_poll(fd, ctx, e);
 
 	__spin_wait(fd, spin);
 
 	return spin;
 }
 
-static igt_spin_t * spin_sync(int fd, uint32_t ctx, unsigned long flags)
+static igt_spin_t * spin_sync(int fd, uint32_t ctx,
+			      struct intel_execution_engine2 *e)
 {
 	igt_require_gem(fd);
 
-	return __spin_sync(fd, ctx, flags);
+	return __spin_sync(fd, ctx, e);
+}
+
+static igt_spin_t * spin_sync_flags(int fd, uint32_t ctx, unsigned int flags)
+{
+	struct intel_execution_engine2 e = { };
+
+	e.class = gem_execbuf_flags_to_engine_class(flags);
+	e.instance = (flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK)) ==
+		     (I915_EXEC_BSD | I915_EXEC_BSD_RING2) ? 1 : 0;
+	e.flags = flags;
+
+	return spin_sync(fd, ctx, &e);
 }
 
 static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
@@ -257,7 +267,7 @@ static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
 }
 
 static void
-single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+single(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
 {
 	unsigned long slept;
 	igt_spin_t *spin;
@@ -267,7 +277,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+		spin = spin_sync(gem_fd, 0, e);
 	else
 		spin = NULL;
 
@@ -303,7 +313,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 }
 
 static void
-busy_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_start(int gem_fd, struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, ts[2];
@@ -316,7 +326,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	sleep(2);
 
-	spin = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = __spin_sync(gem_fd, 0, e);
 
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
 
@@ -338,7 +348,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
  * will depend on the CI systems running it a lot to detect issues.
  */
 static void
-busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
+busy_double_start(int gem_fd, struct intel_execution_engine2 *e)
 {
 	unsigned long slept;
 	uint64_t val, val2, ts[2];
@@ -346,7 +356,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	uint32_t ctx;
 	int fd;
 
-	ctx = gem_context_create(gem_fd);
+	ctx = gem_make_context_set_all_engines(gem_fd);
 
 	/*
 	 * Defeat the busy stats delayed disable, we need to guarantee we are
@@ -359,11 +369,11 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 	 * re-submission in execlists mode. Make sure busyness is correctly
 	 * reported with the engine busy, and after the engine went idle.
 	 */
-	spin[0] = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin[0] = __spin_sync(gem_fd, 0, e);
 	usleep(500e3);
 	spin[1] = __igt_spin_batch_new(gem_fd,
 				       .ctx = ctx,
-				       .engine = e2ring(gem_fd, e));
+				       .engine = e->flags);
 
 	/*
 	 * Open PMU as fast as possible after the second spin batch in attempt
@@ -393,7 +403,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
 
 	close(fd);
 
-	gem_context_destroy(gem_fd, ctx);
+	gem_unset_context_engines(gem_fd, ctx);
 
 	assert_within_epsilon(val, ts[1] - ts[0], tolerance);
 	igt_assert_eq(val2, 0);
@@ -421,10 +431,10 @@ static void log_busy(unsigned int num_engines, uint64_t *val)
 }
 
 static void
-busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
 	       const unsigned int num_engines, unsigned int flags)
 {
-	const struct intel_execution_engine2 *e_;
+	struct intel_execution_engine2 *e_;
 	uint64_t tval[2][num_engines];
 	unsigned int busy_idx = 0, i;
 	uint64_t val[num_engines];
@@ -434,8 +444,8 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	i = 0;
 	fd[0] = -1;
-	for_each_engine_class_instance(gem_fd, e_) {
-		if (e == e_)
+	__for_each_physical_engine(gem_fd, e_) {
+		if (e->class == e_->class && e->instance == e_->instance)
 			busy_idx = i;
 
 		fd[i++] = open_group(I915_PMU_ENGINE_BUSY(e_->class,
@@ -445,7 +455,7 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 	igt_assert_eq(i, num_engines);
 
-	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = spin_sync(gem_fd, 0, e);
 	pmu_read_multi(fd[0], num_engines, tval[0]);
 	slept = measured_usleep(batch_duration_ns / 1000);
 	if (flags & TEST_TRAILING_IDLE)
@@ -472,23 +482,23 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 
 static void
 __submit_spin_batch(int gem_fd, igt_spin_t *spin,
-		    const struct intel_execution_engine2 *e,
+		    struct intel_execution_engine2 *e,
 		    int offset)
 {
 	struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
 
 	eb.flags &= ~(0x3f | I915_EXEC_BSD_MASK);
-	eb.flags |= e2ring(gem_fd, e) | I915_EXEC_NO_RELOC;
+	eb.flags |= e->flags | I915_EXEC_NO_RELOC;
 	eb.batch_start_offset += offset;
 
 	gem_execbuf(gem_fd, &eb);
 }
 
 static void
-most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
+most_busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
 		    const unsigned int num_engines, unsigned int flags)
 {
-	const struct intel_execution_engine2 *e_;
+	struct intel_execution_engine2 *e_;
 	uint64_t tval[2][num_engines];
 	uint64_t val[num_engines];
 	int fd[num_engines];
@@ -497,13 +507,13 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
 	unsigned int idle_idx, i;
 
 	i = 0;
-	for_each_engine_class_instance(gem_fd, e_) {
-		if (e == e_)
+	__for_each_physical_engine(gem_fd, e_) {
+		if (e->class == e_->class && e->instance == e_->instance)
 			idle_idx = i;
 		else if (spin)
 			__submit_spin_batch(gem_fd, spin, e_, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e_));
+			spin = __spin_poll(gem_fd, 0, e_);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e_->class, e_->instance);
 	}
@@ -545,7 +555,7 @@ static void
 all_busy_check_all(int gem_fd, const unsigned int num_engines,
 		   unsigned int flags)
 {
-	const struct intel_execution_engine2 *e;
+	struct intel_execution_engine2 *e;
 	uint64_t tval[2][num_engines];
 	uint64_t val[num_engines];
 	int fd[num_engines];
@@ -554,11 +564,11 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
 	unsigned int i;
 
 	i = 0;
-	for_each_engine_class_instance(gem_fd, e) {
+	__for_each_physical_engine(gem_fd, e) {
 		if (spin)
 			__submit_spin_batch(gem_fd, spin, e, 64);
 		else
-			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e));
+			spin = __spin_poll(gem_fd, 0, e);
 
 		val[i++] = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	}
@@ -592,7 +602,7 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
 }
 
 static void
-no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+no_sema(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
 {
 	igt_spin_t *spin;
 	uint64_t val[2][2];
@@ -602,7 +612,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 	open_group(I915_PMU_ENGINE_WAIT(e->class, e->instance), fd);
 
 	if (flags & TEST_BUSY)
-		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+		spin = spin_sync(gem_fd, 0, e);
 	else
 		spin = NULL;
 
@@ -631,7 +641,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
 #define   MI_SEMAPHORE_SAD_GTE_SDD	(1<<12)
 
 static void
-sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
+sema_wait(int gem_fd, struct intel_execution_engine2 *e,
 	  unsigned int flags)
 {
 	struct drm_i915_gem_relocation_entry reloc[2] = {};
@@ -689,7 +699,7 @@ sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
 
 	eb.buffer_count = 2;
 	eb.buffers_ptr = to_user_pointer(obj);
-	eb.flags = e2ring(gem_fd, e);
+	eb.flags = e->flags;
 
 	/**
 	 * Start the semaphore wait PMU and after some known time let the above
@@ -792,7 +802,7 @@ static int wait_vblank(int fd, union drm_wait_vblank *vbl)
 }
 
 static void
-event_wait(int gem_fd, const struct intel_execution_engine2 *e)
+event_wait(int gem_fd, struct intel_execution_engine2 *e)
 {
 	struct drm_i915_gem_exec_object2 obj = { };
 	struct drm_i915_gem_execbuffer2 eb = { };
@@ -845,7 +855,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
-	eb.flags = e2ring(gem_fd, e) | I915_EXEC_SECURE;
+	eb.flags = e->flags | I915_EXEC_SECURE;
 
 	for_each_pipe_with_valid_output(&data.display, p, output) {
 		struct igt_helper_process waiter = { };
@@ -917,7 +927,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
 }
 
 static void
-multi_client(int gem_fd, const struct intel_execution_engine2 *e)
+multi_client(int gem_fd, struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	unsigned long slept[2];
@@ -936,7 +946,7 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e)
 	 */
 	fd[1] = open_pmu(config);
 
-	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
+	spin = spin_sync(gem_fd, 0, e);
 
 	val[0] = val[1] = __pmu_read_single(fd[0], &ts[0]);
 	slept[1] = measured_usleep(batch_duration_ns / 1000);
@@ -1039,6 +1049,7 @@ static void cpu_hotplug(int gem_fd)
 	igt_spin_t *spin[2];
 	uint64_t ts[2];
 	uint64_t val;
+	uint32_t ctx;
 	int link[2];
 	int fd, ret;
 	int cur = 0;
@@ -1046,14 +1057,18 @@ static void cpu_hotplug(int gem_fd)
 
 	igt_require(cpu0_hotplug_support());
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_ENGINE_BUSY(I915_ENGINE_CLASS_RENDER, 0));
 
 	/*
 	 * Create two spinners so test can ensure shorter gaps in engine
 	 * busyness as it is terminating one and re-starting the other.
 	 */
-	spin[0] = igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
-	spin[1] = __igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
+	spin[0] = igt_spin_batch_new(gem_fd,
+				     .engine = I915_EXEC_RENDER, .ctx = ctx);
+	spin[1] = __igt_spin_batch_new(gem_fd,
+				       .engine = I915_EXEC_RENDER, .ctx = ctx);
 
 	val = __pmu_read_single(fd, &ts[0]);
 
@@ -1137,6 +1152,7 @@ static void cpu_hotplug(int gem_fd)
 
 		igt_spin_batch_free(gem_fd, spin[cur]);
 		spin[cur] = __igt_spin_batch_new(gem_fd,
+						 .ctx = ctx,
 						 .engine = I915_EXEC_RENDER);
 		cur ^= 1;
 	}
@@ -1150,6 +1166,7 @@ static void cpu_hotplug(int gem_fd)
 	igt_waitchildren();
 	close(fd);
 	close(link[0]);
+	gem_context_destroy(gem_fd, ctx);
 
 	/* Skip if child signals a problem with offlining a CPU. */
 	igt_skip_on(buf == 's');
@@ -1165,17 +1182,21 @@ test_interrupts(int gem_fd)
 	igt_spin_t *spin[target];
 	struct pollfd pfd;
 	uint64_t idle, busy;
+	uint32_t ctx;
 	int fence_fd;
 	int fd;
 
 	gem_quiescent_gpu(gem_fd);
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_INTERRUPTS);
 
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++) {
 		spin[i] = __igt_spin_batch_new(gem_fd,
 					       .engine = I915_EXEC_RENDER,
+					       .ctx = ctx,
 					       .flags = IGT_SPIN_FENCE_OUT);
 		if (i == 0) {
 			fence_fd = spin[i]->out_fence;
@@ -1217,6 +1238,7 @@ test_interrupts(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	gem_context_destroy(gem_fd, ctx);
 
 	igt_assert_lte(target, busy);
 }
@@ -1229,15 +1251,19 @@ test_interrupts_sync(int gem_fd)
 	igt_spin_t *spin[target];
 	struct pollfd pfd;
 	uint64_t idle, busy;
+	uint32_t ctx;
 	int fd;
 
 	gem_quiescent_gpu(gem_fd);
 
+	ctx = gem_context_create(gem_fd);
+
 	fd = open_pmu(I915_PMU_INTERRUPTS);
 
 	/* Queue spinning batches. */
 	for (int i = 0; i < target; i++)
 		spin[i] = __igt_spin_batch_new(gem_fd,
+					       .ctx = ctx,
 					       .flags = IGT_SPIN_FENCE_OUT);
 
 	/* Wait for idle state. */
@@ -1262,6 +1288,7 @@ test_interrupts_sync(int gem_fd)
 	/* Check at least as many interrupts has been generated. */
 	busy = pmu_read_single(fd) - idle;
 	close(fd);
+	gem_context_destroy(gem_fd, ctx);
 
 	igt_assert_lte(target, busy);
 }
@@ -1274,6 +1301,9 @@ test_frequency(int gem_fd)
 	double min[2], max[2];
 	igt_spin_t *spin;
 	int fd, sysfs;
+	uint32_t ctx;
+
+	ctx = gem_context_create(gem_fd);
 
 	sysfs = igt_sysfs_open(gem_fd);
 	igt_require(sysfs >= 0);
@@ -1301,7 +1331,7 @@ test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_boost_freq_mhz") == min_freq);
 
 	gem_quiescent_gpu(gem_fd); /* Idle to be sure the change takes effect */
-	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
+	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
 
 	slept = pmu_read_multi(fd, 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1327,7 +1357,7 @@ test_frequency(int gem_fd)
 	igt_require(igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz") == max_freq);
 
 	gem_quiescent_gpu(gem_fd);
-	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
+	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
 
 	slept = pmu_read_multi(fd, 2, start);
 	measured_usleep(batch_duration_ns / 1000);
@@ -1348,6 +1378,8 @@ test_frequency(int gem_fd)
 			 min_freq, igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz"));
 	close(fd);
 
+	gem_context_destroy(gem_fd, ctx);
+
 	igt_info("Min frequency: requested %.1f, actual %.1f\n",
 		 min[0], min[1]);
 	igt_info("Max frequency: requested %.1f, actual %.1f\n",
@@ -1448,7 +1480,7 @@ test_rc6(int gem_fd, unsigned int flags)
 }
 
 static void
-test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
+test_enable_race(int gem_fd, struct intel_execution_engine2 *e)
 {
 	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
 	struct igt_helper_process engine_load = { };
@@ -1465,7 +1497,7 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
 
 	eb.buffer_count = 1;
 	eb.buffers_ptr = to_user_pointer(&obj);
-	eb.flags = e2ring(gem_fd, e);
+	eb.flags = e->flags;
 
 	/*
 	 * This test is probabilistic so run in a few times to increase the
@@ -1520,7 +1552,7 @@ static void __rearm_spin_batch(igt_spin_t *spin)
 	__assert_within(x, ref, tolerance, tolerance)
 
 static void
-accuracy(int gem_fd, const struct intel_execution_engine2 *e,
+accuracy(int gem_fd, struct intel_execution_engine2 *e,
 	 unsigned long target_busy_pct,
 	 unsigned long target_iters)
 {
@@ -1570,7 +1602,7 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e,
 		igt_spin_t *spin;
 
 		/* Allocate our spin batch and idle it. */
-		spin = igt_spin_batch_new(gem_fd, .engine = e2ring(gem_fd, e));
+		spin = igt_spin_batch_new(gem_fd, .engine = e->flags);
 		igt_spin_batch_end(spin);
 		gem_sync(gem_fd, spin->handle);
 
@@ -1674,7 +1706,7 @@ igt_main
 				I915_PMU_LAST - __I915_PMU_OTHER(0) + 1;
 	unsigned int num_engines = 0;
 	int fd = -1;
-	const struct intel_execution_engine2 *e;
+	struct intel_execution_engine2 *e;
 	unsigned int i;
 
 	igt_fixture {
@@ -1683,7 +1715,7 @@ igt_main
 		igt_require_gem(fd);
 		igt_require(i915_type_id() > 0);
 
-		for_each_engine_class_instance(fd, e)
+		__for_each_physical_engine(fd, e)
 			num_engines++;
 	}
 
@@ -1693,7 +1725,7 @@ igt_main
 	igt_subtest("invalid-init")
 		invalid_init();
 
-	__for_each_engine_class_instance(e) {
+	__for_each_physical_engine(fd, e) {
 		const unsigned int pct[] = { 2, 50, 98 };
 
 		/**
@@ -1897,7 +1929,7 @@ igt_main
 			gem_quiescent_gpu(fd);
 		}
 
-		__for_each_engine_class_instance(e) {
+		__for_each_physical_engine(render_fd, e) {
 			igt_subtest_group {
 				igt_fixture {
 					gem_require_engine(render_fd,
-- 
2.20.1

_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition Andi Shyti
@ 2019-04-16 15:21   ` Chris Wilson
  2019-04-16 15:28     ` Andi Shyti
  0 siblings, 1 reply; 20+ messages in thread
From: Chris Wilson @ 2019-04-16 15:21 UTC (permalink / raw)
  To: Andi Shyti, IGT dev; +Cc: Andi Shyti

Quoting Andi Shyti (2019-04-16 16:11:24)
> +void gem_unset_context_engines(int fd, uint32_t ctx)
> +{
> +       gem_context_destroy(fd, ctx);
> +}

That's a little surprising. I certainly would not expect my context to
be destroyed just to reset the engines back to default.

On the Rusty scale that would be
	-7. The obvious use is wrong.
-Chris
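
[Editor's note: a less surprising helper would clear only the engine map while keeping the context alive. A minimal sketch follows, assuming the I915_CONTEXT_PARAM_ENGINES uAPI from the imported i915_drm.h; the struct and constant below are local stand-ins for the real uAPI definitions so the snippet is self-contained, and the zero-sized ENGINES property restoring the default map is the expected behaviour, not something stated in this thread.]

```c
#include <stdint.h>

/* Local stand-in for the uAPI setparam struct; the real definition
 * lives in the imported i915_drm.h. */
#define I915_CONTEXT_PARAM_ENGINES 0xa

struct i915_gem_context_param_sketch {
	uint32_t ctx_id;
	uint64_t param;
	uint64_t size;
	uint64_t value;
};

/* Build the request that resets a context's engine map: a zero-sized
 * ENGINES property is expected to restore the default map, while the
 * context itself survives, so nothing is destroyed. */
static struct i915_gem_context_param_sketch
make_reset_engines_param(uint32_t ctx)
{
	return (struct i915_gem_context_param_sketch){
		.ctx_id = ctx,
		.param = I915_CONTEXT_PARAM_ENGINES,
		.size = 0,
		.value = 0,
	};
}
```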

* Re: [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition
  2019-04-16 15:21   ` Chris Wilson
@ 2019-04-16 15:28     ` Andi Shyti
  2019-04-16 17:09       ` Tvrtko Ursulin
  0 siblings, 1 reply; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 15:28 UTC (permalink / raw)
  To: Chris Wilson; +Cc: IGT dev, Andi Shyti

> > +void gem_unset_context_engines(int fd, uint32_t ctx)
> > +{
> > +       gem_context_destroy(fd, ctx);
> > +}
> 
> That's a little surprising. I certainly would not expect my context to
> be destroyed just to reset the engines back to default.
> 
> On the Rusty scale that would be
> 	-7. The obvious use is wrong.

If I have a context creator, 'gem_make_context_set_all_engines()'
(which you already disliked in the previous version), I should
also have a context destroyer.

I understand your concern, I will remove the context creation and
destruction from gem_engine_topology.

Andi

* [igt-dev] ✓ Fi.CI.BAT: success for new engine discovery interface
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
                   ` (5 preceding siblings ...)
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library Andi Shyti
@ 2019-04-16 16:36 ` Patchwork
  2019-04-17  0:46 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
  7 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2019-04-16 16:36 UTC (permalink / raw)
  To: Andi Shyti; +Cc: igt-dev

== Series Details ==

Series: new engine discovery interface
URL   : https://patchwork.freedesktop.org/series/59591/
State : success

== Summary ==

CI Bug Log - changes from IGT_4952 -> IGTPW_2871
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://patchwork.freedesktop.org/api/1.0/series/59591/revisions/1/mbox/

Known issues
------------

  Here are the changes found in IGTPW_2871 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_cs_nop@fork-compute0:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#109315] +17

  * igt@gem_exec_basic@basic-bsd2:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#109276] +7

  * igt@gem_exec_parse@basic-rejected:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#109289] +1

  * igt@i915_selftest@live_execlists:
    - fi-apl-guc:         PASS -> INCOMPLETE [fdo#103927] / [fdo#109720]

  * igt@kms_chamelium@dp-crc-fast:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#109284] +8

  * igt@kms_force_connector_basic@force-load-detect:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#109285] +3

  * igt@kms_frontbuffer_tracking@basic:
    - fi-byt-clapper:     PASS -> FAIL [fdo#103167]

  * igt@kms_psr@primary_mmap_gtt:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#110189] +3

  * igt@prime_vgem@basic-fence-flip:
    - fi-icl-y:           NOTRUN -> SKIP [fdo#109294]

  * igt@runner@aborted:
    - fi-apl-guc:         NOTRUN -> FAIL [fdo#108622] / [fdo#109720]

  
  [fdo#103167]: https://bugs.freedesktop.org/show_bug.cgi?id=103167
  [fdo#103927]: https://bugs.freedesktop.org/show_bug.cgi?id=103927
  [fdo#108622]: https://bugs.freedesktop.org/show_bug.cgi?id=108622
  [fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
  [fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109294]: https://bugs.freedesktop.org/show_bug.cgi?id=109294
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#109720]: https://bugs.freedesktop.org/show_bug.cgi?id=109720
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189


Participating hosts (48 -> 44)
------------------------------

  Additional (1): fi-icl-y 
  Missing    (5): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-bdw-samus 


Build changes
-------------

    * IGT: IGT_4952 -> IGTPW_2871

  CI_DRM_5939: 757f5370dc4baed0475b6e28efd67ecc267e8745 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_2871: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2871/
  IGT_4952: d196925ed16221768689efa1ea06c4869e9fc2a9 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2871/

* Re: [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library Andi Shyti
@ 2019-04-16 17:06   ` Tvrtko Ursulin
  2019-04-16 23:05     ` Andi Shyti
  0 siblings, 1 reply; 20+ messages in thread
From: Tvrtko Ursulin @ 2019-04-16 17:06 UTC (permalink / raw)
  To: Andi Shyti, IGT dev; +Cc: Andi Shyti


On 16/04/2019 16:11, Andi Shyti wrote:
> Replace the legacy for_each_engine* defines with the ones
> implemented in the gem_engine_topology library.
> 
> Wherever possible, use gem_engine_can_store_dword(), which checks
> the engine class instead of the execbuf flags.
> 
> The __for_each_engine_class_instance and
> for_each_engine_class_instance macros are now unused; remove them.
> 
> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Signed-off-by: Andi Shyti <andi.shyti@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   lib/igt_gt.h     |   7 ---
>   tests/perf_pmu.c | 148 ++++++++++++++++++++++++++++-------------------
>   2 files changed, 90 insertions(+), 65 deletions(-)
> 
> diff --git a/lib/igt_gt.h b/lib/igt_gt.h
> index 0b5c7fcb4c3c..77318e2a82b8 100644
> --- a/lib/igt_gt.h
> +++ b/lib/igt_gt.h
> @@ -119,11 +119,4 @@ void gem_require_engine(int gem_fd,
>   	igt_require(gem_has_engine(gem_fd, class, instance));
>   }
>   
> -#define __for_each_engine_class_instance(e__) \
> -	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++)
> -
> -#define for_each_engine_class_instance(fd__, e__) \
> -	for ((e__) = intel_execution_engines2; (e__)->name; (e__)++) \
> -		for_if (gem_has_engine((fd__), (e__)->class, (e__)->instance))
> -
>   #endif /* IGT_GT_H */
> diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
> index 4f552bc2ae28..93e8efc9a645 100644
> --- a/tests/perf_pmu.c
> +++ b/tests/perf_pmu.c
> @@ -72,7 +72,7 @@ static int open_group(uint64_t config, int group)
>   }
>   
>   static void
> -init(int gem_fd, const struct intel_execution_engine2 *e, uint8_t sample)
> +init(int gem_fd, struct intel_execution_engine2 *e, uint8_t sample)
>   {
>   	int fd, err = 0;
>   	bool exists;
> @@ -82,7 +82,7 @@ init(int gem_fd, const struct intel_execution_engine2 *e, uint8_t sample)
>   	if (fd < 0)
>   		err = errno;
>   
> -	exists = gem_has_engine(gem_fd, e->class, e->instance);
> +	exists = gem_context_has_engine(gem_fd, 0, e->flags);
>   	if (intel_gen(intel_get_drm_devid(gem_fd)) < 6 &&
>   	    sample == I915_SAMPLE_SEMA)
>   		exists = false;
> @@ -158,11 +158,6 @@ static unsigned int measured_usleep(unsigned int usec)
>   	return igt_nsec_elapsed(&ts);
>   }
>   
> -static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
> -{
> -	return gem_class_instance_to_eb_flags(gem_fd, e->class, e->instance);
> -}
> -
>   #define TEST_BUSY (1)
>   #define FLAG_SYNC (2)
>   #define TEST_TRAILING_IDLE (4)
> @@ -170,14 +165,15 @@ static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
>   #define FLAG_LONG (16)
>   #define FLAG_HANG (32)
>   
> -static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
> +static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
> +				struct intel_execution_engine2 *e)
>   {
>   	struct igt_spin_factory opts = {
>   		.ctx = ctx,
> -		.engine = flags,
> +		.engine = e->flags,
>   	};
>   
> -	if (gem_can_store_dword(fd, flags))
> +	if (gem_class_can_store_dword(fd, e->class))
>   		opts.flags |= IGT_SPIN_POLL_RUN;
>   
>   	return __igt_spin_batch_factory(fd, &opts);
> @@ -209,20 +205,34 @@ static unsigned long __spin_wait(int fd, igt_spin_t *spin)
>   	return igt_nsec_elapsed(&start);
>   }
>   
> -static igt_spin_t * __spin_sync(int fd, uint32_t ctx, unsigned long flags)
> +static igt_spin_t * __spin_sync(int fd, uint32_t ctx,
> +				struct intel_execution_engine2 *e)
>   {
> -	igt_spin_t *spin = __spin_poll(fd, ctx, flags);
> +	igt_spin_t *spin = __spin_poll(fd, ctx, e);
>   
>   	__spin_wait(fd, spin);
>   
>   	return spin;
>   }
>   
> -static igt_spin_t * spin_sync(int fd, uint32_t ctx, unsigned long flags)
> +static igt_spin_t * spin_sync(int fd, uint32_t ctx,
> +			      struct intel_execution_engine2 *e)
>   {
>   	igt_require_gem(fd);
>   
> -	return __spin_sync(fd, ctx, flags);
> +	return __spin_sync(fd, ctx, e);
> +}
> +
> +static igt_spin_t * spin_sync_flags(int fd, uint32_t ctx, unsigned int flags)
> +{
> +	struct intel_execution_engine2 e = { };
> +
> +	e.class = gem_execbuf_flags_to_engine_class(flags);
> +	e.instance = (flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK)) ==
> +		     (I915_EXEC_BSD | I915_EXEC_BSD_RING2) ? 1 : 0;
> +	e.flags = flags;
> +
> +	return spin_sync(fd, ctx, &e);
>   }
>   
>   static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
> @@ -257,7 +267,7 @@ static void end_spin(int fd, igt_spin_t *spin, unsigned int flags)
>   }
>   
>   static void
> -single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
> +single(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
>   {
>   	unsigned long slept;
>   	igt_spin_t *spin;
> @@ -267,7 +277,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
>   	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
>   
>   	if (flags & TEST_BUSY)
> -		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
> +		spin = spin_sync(gem_fd, 0, e);
>   	else
>   		spin = NULL;
>   
> @@ -303,7 +313,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
>   }
>   
>   static void
> -busy_start(int gem_fd, const struct intel_execution_engine2 *e)
> +busy_start(int gem_fd, struct intel_execution_engine2 *e)
>   {
>   	unsigned long slept;
>   	uint64_t val, ts[2];
> @@ -316,7 +326,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
>   	 */
>   	sleep(2);
>   
> -	spin = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
> +	spin = __spin_sync(gem_fd, 0, e);
>   
>   	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));
>   
> @@ -338,7 +348,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e)
>    * will depend on the CI systems running it a lot to detect issues.
>    */
>   static void
> -busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
> +busy_double_start(int gem_fd, struct intel_execution_engine2 *e)
>   {
>   	unsigned long slept;
>   	uint64_t val, val2, ts[2];
> @@ -346,7 +356,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
>   	uint32_t ctx;
>   	int fd;
>   
> -	ctx = gem_context_create(gem_fd);
> +	ctx = gem_make_context_set_all_engines(gem_fd);
>   
>   	/*
>   	 * Defeat the busy stats delayed disable, we need to guarantee we are
> @@ -359,11 +369,11 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
>   	 * re-submission in execlists mode. Make sure busyness is correctly
>   	 * reported with the engine busy, and after the engine went idle.
>   	 */
> -	spin[0] = __spin_sync(gem_fd, 0, e2ring(gem_fd, e));
> +	spin[0] = __spin_sync(gem_fd, 0, e);
>   	usleep(500e3);
>   	spin[1] = __igt_spin_batch_new(gem_fd,
>   				       .ctx = ctx,
> -				       .engine = e2ring(gem_fd, e));
> +				       .engine = e->flags);
>   
>   	/*
>   	 * Open PMU as fast as possible after the second spin batch in attempt
> @@ -393,7 +403,7 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e)
>   
>   	close(fd);
>   
> -	gem_context_destroy(gem_fd, ctx);
> +	gem_unset_context_engines(gem_fd, ctx);
>   
>   	assert_within_epsilon(val, ts[1] - ts[0], tolerance);
>   	igt_assert_eq(val2, 0);
> @@ -421,10 +431,10 @@ static void log_busy(unsigned int num_engines, uint64_t *val)
>   }
>   
>   static void
> -busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
> +busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
>   	       const unsigned int num_engines, unsigned int flags)
>   {
> -	const struct intel_execution_engine2 *e_;
> +	struct intel_execution_engine2 *e_;
>   	uint64_t tval[2][num_engines];
>   	unsigned int busy_idx = 0, i;
>   	uint64_t val[num_engines];
> @@ -434,8 +444,8 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
>   
>   	i = 0;
>   	fd[0] = -1;
> -	for_each_engine_class_instance(gem_fd, e_) {
> -		if (e == e_)
> +	__for_each_physical_engine(gem_fd, e_) {
> +		if (e->class == e_->class && e->instance == e_->instance)
>   			busy_idx = i;
>   
>   		fd[i++] = open_group(I915_PMU_ENGINE_BUSY(e_->class,
> @@ -445,7 +455,7 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
>   
>   	igt_assert_eq(i, num_engines);
>   
> -	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
> +	spin = spin_sync(gem_fd, 0, e);
>   	pmu_read_multi(fd[0], num_engines, tval[0]);
>   	slept = measured_usleep(batch_duration_ns / 1000);
>   	if (flags & TEST_TRAILING_IDLE)
> @@ -472,23 +482,23 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
>   
>   static void
>   __submit_spin_batch(int gem_fd, igt_spin_t *spin,
> -		    const struct intel_execution_engine2 *e,
> +		    struct intel_execution_engine2 *e,
>   		    int offset)
>   {
>   	struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
>   
>   	eb.flags &= ~(0x3f | I915_EXEC_BSD_MASK);
> -	eb.flags |= e2ring(gem_fd, e) | I915_EXEC_NO_RELOC;
> +	eb.flags |= e->flags | I915_EXEC_NO_RELOC;
>   	eb.batch_start_offset += offset;
>   
>   	gem_execbuf(gem_fd, &eb);
>   }
>   
>   static void
> -most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
> +most_busy_check_all(int gem_fd, struct intel_execution_engine2 *e,
>   		    const unsigned int num_engines, unsigned int flags)
>   {
> -	const struct intel_execution_engine2 *e_;
> +	struct intel_execution_engine2 *e_;
>   	uint64_t tval[2][num_engines];
>   	uint64_t val[num_engines];
>   	int fd[num_engines];
> @@ -497,13 +507,13 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
>   	unsigned int idle_idx, i;
>   
>   	i = 0;
> -	for_each_engine_class_instance(gem_fd, e_) {
> -		if (e == e_)
> +	__for_each_physical_engine(gem_fd, e_) {
> +		if (e->class == e_->class && e->instance == e_->instance)
>   			idle_idx = i;
>   		else if (spin)
>   			__submit_spin_batch(gem_fd, spin, e_, 64);
>   		else
> -			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e_));
> +			spin = __spin_poll(gem_fd, 0, e_);
>   
>   		val[i++] = I915_PMU_ENGINE_BUSY(e_->class, e_->instance);
>   	}
> @@ -545,7 +555,7 @@ static void
>   all_busy_check_all(int gem_fd, const unsigned int num_engines,
>   		   unsigned int flags)
>   {
> -	const struct intel_execution_engine2 *e;
> +	struct intel_execution_engine2 *e;
>   	uint64_t tval[2][num_engines];
>   	uint64_t val[num_engines];
>   	int fd[num_engines];
> @@ -554,11 +564,11 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
>   	unsigned int i;
>   
>   	i = 0;
> -	for_each_engine_class_instance(gem_fd, e) {
> +	__for_each_physical_engine(gem_fd, e) {
>   		if (spin)
>   			__submit_spin_batch(gem_fd, spin, e, 64);
>   		else
> -			spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e));
> +			spin = __spin_poll(gem_fd, 0, e);
>   
>   		val[i++] = I915_PMU_ENGINE_BUSY(e->class, e->instance);
>   	}
> @@ -592,7 +602,7 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines,
>   }
>   
>   static void
> -no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
> +no_sema(int gem_fd, struct intel_execution_engine2 *e, unsigned int flags)
>   {
>   	igt_spin_t *spin;
>   	uint64_t val[2][2];
> @@ -602,7 +612,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
>   	open_group(I915_PMU_ENGINE_WAIT(e->class, e->instance), fd);
>   
>   	if (flags & TEST_BUSY)
> -		spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
> +		spin = spin_sync(gem_fd, 0, e);
>   	else
>   		spin = NULL;
>   
> @@ -631,7 +641,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
>   #define   MI_SEMAPHORE_SAD_GTE_SDD	(1<<12)
>   
>   static void
> -sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
> +sema_wait(int gem_fd, struct intel_execution_engine2 *e,
>   	  unsigned int flags)
>   {
>   	struct drm_i915_gem_relocation_entry reloc[2] = {};
> @@ -689,7 +699,7 @@ sema_wait(int gem_fd, const struct intel_execution_engine2 *e,
>   
>   	eb.buffer_count = 2;
>   	eb.buffers_ptr = to_user_pointer(obj);
> -	eb.flags = e2ring(gem_fd, e);
> +	eb.flags = e->flags;
>   
>   	/**
>   	 * Start the semaphore wait PMU and after some known time let the above
> @@ -792,7 +802,7 @@ static int wait_vblank(int fd, union drm_wait_vblank *vbl)
>   }
>   
>   static void
> -event_wait(int gem_fd, const struct intel_execution_engine2 *e)
> +event_wait(int gem_fd, struct intel_execution_engine2 *e)
>   {
>   	struct drm_i915_gem_exec_object2 obj = { };
>   	struct drm_i915_gem_execbuffer2 eb = { };
> @@ -845,7 +855,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
>   
>   	eb.buffer_count = 1;
>   	eb.buffers_ptr = to_user_pointer(&obj);
> -	eb.flags = e2ring(gem_fd, e) | I915_EXEC_SECURE;
> +	eb.flags = e->flags | I915_EXEC_SECURE;
>   
>   	for_each_pipe_with_valid_output(&data.display, p, output) {
>   		struct igt_helper_process waiter = { };
> @@ -917,7 +927,7 @@ event_wait(int gem_fd, const struct intel_execution_engine2 *e)
>   }
>   
>   static void
> -multi_client(int gem_fd, const struct intel_execution_engine2 *e)
> +multi_client(int gem_fd, struct intel_execution_engine2 *e)
>   {
>   	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
>   	unsigned long slept[2];
> @@ -936,7 +946,7 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e)
>   	 */
>   	fd[1] = open_pmu(config);
>   
> -	spin = spin_sync(gem_fd, 0, e2ring(gem_fd, e));
> +	spin = spin_sync(gem_fd, 0, e);
>   
>   	val[0] = val[1] = __pmu_read_single(fd[0], &ts[0]);
>   	slept[1] = measured_usleep(batch_duration_ns / 1000);
> @@ -1039,6 +1049,7 @@ static void cpu_hotplug(int gem_fd)
>   	igt_spin_t *spin[2];
>   	uint64_t ts[2];
>   	uint64_t val;
> +	uint32_t ctx;
>   	int link[2];
>   	int fd, ret;
>   	int cur = 0;
> @@ -1046,14 +1057,18 @@ static void cpu_hotplug(int gem_fd)
>   
>   	igt_require(cpu0_hotplug_support());
>   
> +	ctx = gem_context_create(gem_fd);

A remaining TODO was to try running without new contexts in the
handful of tests where I added them. Instead, try I915_EXEC_DEFAULT
when submitting and (ab)use the fact that RCS is at index zero for
the foreseeable future. When that assumption breaks we'll know about
it. This applies to the hotplug, frequency and interrupts subtests
AFAIR.

Regards,

Tvrtko
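
[Editor's note: the suggestion above can be sketched as a hypothetical flag-selection helper. It assumes, as Tvrtko says, that RCS sits at index zero of the default engine map, so submitting with I915_EXEC_DEFAULT on the default context lands on the render engine without creating a new context; the flag values below match the i915 execbuf uAPI.]

```c
#include <stdint.h>

/* Execbuf engine-selection flags from the i915 uAPI. */
#define I915_EXEC_DEFAULT (0 << 0)	/* index 0 of the engine map */
#define I915_EXEC_RENDER  (1 << 0)	/* legacy explicit RCS selection */

/* Choose the spinner's execbuf flags: with the default engine map,
 * index zero (I915_EXEC_DEFAULT) is RCS, so the explicit
 * I915_EXEC_RENDER selection and the extra context become unnecessary. */
static uint64_t spinner_exec_flags(int use_default_map)
{
	return use_default_map ? I915_EXEC_DEFAULT : I915_EXEC_RENDER;
}
```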

> +
>   	fd = open_pmu(I915_PMU_ENGINE_BUSY(I915_ENGINE_CLASS_RENDER, 0));
>   
>   	/*
>   	 * Create two spinners so test can ensure shorter gaps in engine
>   	 * busyness as it is terminating one and re-starting the other.
>   	 */
> -	spin[0] = igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
> -	spin[1] = __igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER);
> +	spin[0] = igt_spin_batch_new(gem_fd,
> +				     .engine = I915_EXEC_RENDER, .ctx = ctx);
> +	spin[1] = __igt_spin_batch_new(gem_fd,
> +				       .engine = I915_EXEC_RENDER, .ctx = ctx);
>   
>   	val = __pmu_read_single(fd, &ts[0]);
>   
> @@ -1137,6 +1152,7 @@ static void cpu_hotplug(int gem_fd)
>   
>   		igt_spin_batch_free(gem_fd, spin[cur]);
>   		spin[cur] = __igt_spin_batch_new(gem_fd,
> +						 .ctx = ctx,
>   						 .engine = I915_EXEC_RENDER);
>   		cur ^= 1;
>   	}
> @@ -1150,6 +1166,7 @@ static void cpu_hotplug(int gem_fd)
>   	igt_waitchildren();
>   	close(fd);
>   	close(link[0]);
> +	gem_context_destroy(gem_fd, ctx);
>   
>   	/* Skip if child signals a problem with offlining a CPU. */
>   	igt_skip_on(buf == 's');
> @@ -1165,17 +1182,21 @@ test_interrupts(int gem_fd)
>   	igt_spin_t *spin[target];
>   	struct pollfd pfd;
>   	uint64_t idle, busy;
> +	uint32_t ctx;
>   	int fence_fd;
>   	int fd;
>   
>   	gem_quiescent_gpu(gem_fd);
>   
> +	ctx = gem_context_create(gem_fd);
> +
>   	fd = open_pmu(I915_PMU_INTERRUPTS);
>   
>   	/* Queue spinning batches. */
>   	for (int i = 0; i < target; i++) {
>   		spin[i] = __igt_spin_batch_new(gem_fd,
>   					       .engine = I915_EXEC_RENDER,
> +					       .ctx = ctx,
>   					       .flags = IGT_SPIN_FENCE_OUT);
>   		if (i == 0) {
>   			fence_fd = spin[i]->out_fence;
> @@ -1217,6 +1238,7 @@ test_interrupts(int gem_fd)
>   	/* Check at least as many interrupts has been generated. */
>   	busy = pmu_read_single(fd) - idle;
>   	close(fd);
> +	gem_context_destroy(gem_fd, ctx);
>   
>   	igt_assert_lte(target, busy);
>   }
> @@ -1229,15 +1251,19 @@ test_interrupts_sync(int gem_fd)
>   	igt_spin_t *spin[target];
>   	struct pollfd pfd;
>   	uint64_t idle, busy;
> +	uint32_t ctx;
>   	int fd;
>   
>   	gem_quiescent_gpu(gem_fd);
>   
> +	ctx = gem_context_create(gem_fd);
> +
>   	fd = open_pmu(I915_PMU_INTERRUPTS);
>   
>   	/* Queue spinning batches. */
>   	for (int i = 0; i < target; i++)
>   		spin[i] = __igt_spin_batch_new(gem_fd,
> +					       .ctx = ctx,
>   					       .flags = IGT_SPIN_FENCE_OUT);
>   
>   	/* Wait for idle state. */
> @@ -1262,6 +1288,7 @@ test_interrupts_sync(int gem_fd)
>   	/* Check at least as many interrupts has been generated. */
>   	busy = pmu_read_single(fd) - idle;
>   	close(fd);
> +	gem_context_destroy(gem_fd, ctx);
>   
>   	igt_assert_lte(target, busy);
>   }
> @@ -1274,6 +1301,9 @@ test_frequency(int gem_fd)
>   	double min[2], max[2];
>   	igt_spin_t *spin;
>   	int fd, sysfs;
> +	uint32_t ctx;
> +
> +	ctx = gem_context_create(gem_fd);
>   
>   	sysfs = igt_sysfs_open(gem_fd);
>   	igt_require(sysfs >= 0);
> @@ -1301,7 +1331,7 @@ test_frequency(int gem_fd)
>   	igt_require(igt_sysfs_get_u32(sysfs, "gt_boost_freq_mhz") == min_freq);
>   
>   	gem_quiescent_gpu(gem_fd); /* Idle to be sure the change takes effect */
> -	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
> +	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
>   
>   	slept = pmu_read_multi(fd, 2, start);
>   	measured_usleep(batch_duration_ns / 1000);
> @@ -1327,7 +1357,7 @@ test_frequency(int gem_fd)
>   	igt_require(igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz") == max_freq);
>   
>   	gem_quiescent_gpu(gem_fd);
> -	spin = spin_sync(gem_fd, 0, I915_EXEC_RENDER);
> +	spin = spin_sync_flags(gem_fd, ctx, I915_EXEC_RENDER);
>   
>   	slept = pmu_read_multi(fd, 2, start);
>   	measured_usleep(batch_duration_ns / 1000);
> @@ -1348,6 +1378,8 @@ test_frequency(int gem_fd)
>   			 min_freq, igt_sysfs_get_u32(sysfs, "gt_min_freq_mhz"));
>   	close(fd);
>   
> +	gem_context_destroy(gem_fd, ctx);
> +
>   	igt_info("Min frequency: requested %.1f, actual %.1f\n",
>   		 min[0], min[1]);
>   	igt_info("Max frequency: requested %.1f, actual %.1f\n",
> @@ -1448,7 +1480,7 @@ test_rc6(int gem_fd, unsigned int flags)
>   }
>   
>   static void
> -test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
> +test_enable_race(int gem_fd, struct intel_execution_engine2 *e)
>   {
>   	uint64_t config = I915_PMU_ENGINE_BUSY(e->class, e->instance);
>   	struct igt_helper_process engine_load = { };
> @@ -1465,7 +1497,7 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e)
>   
>   	eb.buffer_count = 1;
>   	eb.buffers_ptr = to_user_pointer(&obj);
> -	eb.flags = e2ring(gem_fd, e);
> +	eb.flags = e->flags;
>   
>   	/*
>   	 * This test is probabilistic so run in a few times to increase the
> @@ -1520,7 +1552,7 @@ static void __rearm_spin_batch(igt_spin_t *spin)
>   	__assert_within(x, ref, tolerance, tolerance)
>   
>   static void
> -accuracy(int gem_fd, const struct intel_execution_engine2 *e,
> +accuracy(int gem_fd, struct intel_execution_engine2 *e,
>   	 unsigned long target_busy_pct,
>   	 unsigned long target_iters)
>   {
> @@ -1570,7 +1602,7 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e,
>   		igt_spin_t *spin;
>   
>   		/* Allocate our spin batch and idle it. */
> -		spin = igt_spin_batch_new(gem_fd, .engine = e2ring(gem_fd, e));
> +		spin = igt_spin_batch_new(gem_fd, .engine = e->flags);
>   		igt_spin_batch_end(spin);
>   		gem_sync(gem_fd, spin->handle);
>   
> @@ -1674,7 +1706,7 @@ igt_main
>   				I915_PMU_LAST - __I915_PMU_OTHER(0) + 1;
>   	unsigned int num_engines = 0;
>   	int fd = -1;
> -	const struct intel_execution_engine2 *e;
> +	struct intel_execution_engine2 *e;
>   	unsigned int i;
>   
>   	igt_fixture {
> @@ -1683,7 +1715,7 @@ igt_main
>   		igt_require_gem(fd);
>   		igt_require(i915_type_id() > 0);
>   
> -		for_each_engine_class_instance(fd, e)
> +		__for_each_physical_engine(fd, e)
>   			num_engines++;
>   	}
>   
> @@ -1693,7 +1725,7 @@ igt_main
>   	igt_subtest("invalid-init")
>   		invalid_init();
>   
> -	__for_each_engine_class_instance(e) {
> +	__for_each_physical_engine(fd, e) {
>   		const unsigned int pct[] = { 2, 50, 98 };
>   
>   		/**
> @@ -1897,7 +1929,7 @@ igt_main
>   			gem_quiescent_gpu(fd);
>   		}
>   
> -		__for_each_engine_class_instance(e) {
> +		__for_each_physical_engine(render_fd, e) {
>   			igt_subtest_group {
>   				igt_fixture {
>   					gem_require_engine(render_fd,
> 
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition
  2019-04-16 15:28     ` Andi Shyti
@ 2019-04-16 17:09       ` Tvrtko Ursulin
  0 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2019-04-16 17:09 UTC (permalink / raw)
  To: Andi Shyti, Chris Wilson; +Cc: IGT dev, Andi Shyti


On 16/04/2019 16:28, Andi Shyti wrote:
>>> +void gem_unset_context_engines(int fd, uint32_t ctx)
>>> +{
>>> +       gem_context_destroy(fd, ctx);
>>> +}
>>
>> That's a little surprising. I certainly would not expect my context to
>> be destroyed just to reset the engines back to default.
>>
>> On the Rusty scale that would be
>> 	-7. The obvious use is wrong.
> 
> if I have a context creator 'gem_make_context_set_all_engines()'
> (which you already disliked in the previous version), I should
> have a context destroyer.
> 
> I understand your concern, I will remove the context creation and
> destruction from gem_engine_topology.

Yes, I only asked for a helper to create a default/full engine map on a 
context. :) (So one call site does not have to abuse intel_init_engines 
or so.)

Regards,

Tvrtko

* Re: [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library
  2019-04-16 17:06   ` Tvrtko Ursulin
@ 2019-04-16 23:05     ` Andi Shyti
  2019-04-17 13:42       ` Tvrtko Ursulin
  0 siblings, 1 reply; 20+ messages in thread
From: Andi Shyti @ 2019-04-16 23:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: IGT dev, Andi Shyti

> > @@ -1046,14 +1057,18 @@ static void cpu_hotplug(int gem_fd)
> >   	igt_require(cpu0_hotplug_support());
> > +	ctx = gem_context_create(gem_fd);
> 
> A TODO left was to try without new contexts for a handful of tests where I
> added them. Instead try I915_EXEC_DEFAULT when submitting and (ab)use the
> fact that RCS is at index zero for the foreseeable future. When it fails
> we'll know about it. This applies to the hotplug, frequency and interrupts
> subtests AFAIR.

yes, I was also thinking about adding a few wrappers around.

I managed to build quite a hefty list of things to do over these 21
versions (22 with the coming one).

But I would happily send them once this series gets accepted, so as
not to delay it further.

Thanks,
Andi

* [igt-dev] ✓ Fi.CI.IGT: success for new engine discovery interface
  2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
                   ` (6 preceding siblings ...)
  2019-04-16 16:36 ` [igt-dev] ✓ Fi.CI.BAT: success for new engine discovery interface Patchwork
@ 2019-04-17  0:46 ` Patchwork
  7 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2019-04-17  0:46 UTC (permalink / raw)
  To: Andi Shyti; +Cc: igt-dev

== Series Details ==

Series: new engine discovery interface
URL   : https://patchwork.freedesktop.org/series/59591/
State : success

== Summary ==

CI Bug Log - changes from IGT_4952_full -> IGTPW_2871_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://patchwork.freedesktop.org/api/1.0/series/59591/revisions/1/mbox/

Known issues
------------

  Here are the changes found in IGTPW_2871_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_schedule@preempt-queue-chain-blt:
    - shard-snb:          NOTRUN -> SKIP [fdo#109271] +36

  * igt@gem_softpin@evict-snoop-interruptible:
    - shard-glk:          NOTRUN -> SKIP [fdo#109271] +17

  * igt@gem_workarounds@suspend-resume:
    - shard-apl:          PASS -> DMESG-WARN [fdo#108566] +5

  * igt@kms_atomic_transition@3x-modeset-transitions-nonblocking:
    - shard-kbl:          NOTRUN -> SKIP [fdo#109271] / [fdo#109278] +3

  * igt@kms_busy@extended-modeset-hang-newfb-render-c:
    - shard-snb:          NOTRUN -> SKIP [fdo#109271] / [fdo#109278] +3

  * igt@kms_cursor_crc@cursor-128x128-suspend:
    - shard-glk:          NOTRUN -> FAIL [fdo#103232] +1

  * igt@kms_cursor_crc@cursor-64x21-onscreen:
    - shard-kbl:          PASS -> FAIL [fdo#103232]
    - shard-apl:          PASS -> FAIL [fdo#103232]

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-move:
    - shard-iclb:         PASS -> FAIL [fdo#103167] +6

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-fullscreen:
    - shard-iclb:         PASS -> FAIL [fdo#109247] +20

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - shard-kbl:          NOTRUN -> SKIP [fdo#109271] +28

  * igt@kms_pipe_crc_basic@hang-read-crc-pipe-f:
    - shard-glk:          NOTRUN -> SKIP [fdo#109271] / [fdo#109278]

  * igt@kms_plane@pixel-format-pipe-b-planes-source-clamping:
    - shard-glk:          PASS -> SKIP [fdo#109271]

  * igt@kms_plane_lowres@pipe-b-tiling-yf:
    - shard-hsw:          NOTRUN -> SKIP [fdo#109271] +20

  * igt@kms_psr@psr2_primary_render:
    - shard-iclb:         PASS -> SKIP [fdo#109441]

  * igt@kms_psr@suspend:
    - shard-iclb:         PASS -> FAIL [fdo#107383] / [fdo#110215] +1

  * igt@kms_rotation_crc@multiplane-rotation:
    - shard-kbl:          PASS -> INCOMPLETE [fdo#103665]

  * igt@kms_universal_plane@universal-plane-gen9-features-pipe-e:
    - shard-hsw:          NOTRUN -> SKIP [fdo#109271] / [fdo#109278] +2

  * igt@tools_test@tools_test:
    - shard-apl:          PASS -> SKIP [fdo#109271]

  
#### Possible fixes ####

  * igt@i915_selftest@live_workarounds:
    - shard-iclb:         DMESG-FAIL [fdo#108954] -> PASS

  * igt@i915_suspend@forcewake:
    - shard-apl:          DMESG-WARN [fdo#108566] -> PASS

  * igt@kms_color@pipe-b-legacy-gamma:
    - shard-apl:          FAIL [fdo#104782] -> PASS
    - shard-kbl:          FAIL [fdo#104782] -> PASS

  * igt@kms_frontbuffer_tracking@fbc-indfb-scaledprimary:
    - shard-kbl:          FAIL [fdo#103167] -> PASS
    - shard-apl:          FAIL [fdo#103167] -> PASS

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-blt:
    - shard-iclb:         FAIL [fdo#103167] -> PASS +2

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-blt:
    - shard-iclb:         FAIL [fdo#109247] -> PASS +15

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - shard-kbl:          INCOMPLETE [fdo#103665] -> PASS

  * igt@kms_plane_scaling@pipe-a-scaler-with-rotation:
    - shard-glk:          SKIP [fdo#109271] / [fdo#109278] -> PASS +1

  * igt@kms_psr@no_drrs:
    - shard-iclb:         FAIL [fdo#108341] -> PASS

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         SKIP [fdo#109441] -> PASS +4

  * igt@kms_psr@sprite_mmap_gtt:
    - shard-iclb:         FAIL [fdo#107383] / [fdo#110215] -> PASS +1

  * igt@kms_setmode@basic:
    - shard-hsw:          FAIL [fdo#99912] -> PASS

  
#### Warnings ####

  * igt@gem_tiled_swapping@non-threaded:
    - shard-iclb:         FAIL [fdo#108686] -> DMESG-WARN [fdo#108686]

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#103167]: https://bugs.freedesktop.org/show_bug.cgi?id=103167
  [fdo#103232]: https://bugs.freedesktop.org/show_bug.cgi?id=103232
  [fdo#103665]: https://bugs.freedesktop.org/show_bug.cgi?id=103665
  [fdo#104782]: https://bugs.freedesktop.org/show_bug.cgi?id=104782
  [fdo#107383]: https://bugs.freedesktop.org/show_bug.cgi?id=107383
  [fdo#108341]: https://bugs.freedesktop.org/show_bug.cgi?id=108341
  [fdo#108566]: https://bugs.freedesktop.org/show_bug.cgi?id=108566
  [fdo#108686]: https://bugs.freedesktop.org/show_bug.cgi?id=108686
  [fdo#108954]: https://bugs.freedesktop.org/show_bug.cgi?id=108954
  [fdo#109247]: https://bugs.freedesktop.org/show_bug.cgi?id=109247
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110215]: https://bugs.freedesktop.org/show_bug.cgi?id=110215
  [fdo#99912]: https://bugs.freedesktop.org/show_bug.cgi?id=99912


Participating hosts (7 -> 6)
------------------------------

  Missing    (1): shard-skl 


Build changes
-------------

    * IGT: IGT_4952 -> IGTPW_2871

  CI_DRM_5939: 757f5370dc4baed0475b6e28efd67ecc267e8745 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_2871: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2871/
  IGT_4952: d196925ed16221768689efa1ea06c4869e9fc2a9 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2871/

* Re: [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library
  2019-04-16 23:05     ` Andi Shyti
@ 2019-04-17 13:42       ` Tvrtko Ursulin
  0 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2019-04-17 13:42 UTC (permalink / raw)
  To: Andi Shyti; +Cc: IGT dev


On 17/04/2019 00:05, Andi Shyti wrote:
>>> @@ -1046,14 +1057,18 @@ static void cpu_hotplug(int gem_fd)
>>>    	igt_require(cpu0_hotplug_support());
>>> +	ctx = gem_context_create(gem_fd);
>>
>> A TODO left was to try without new contexts for a handful of tests where I
>> added them. Instead try I915_EXEC_DEFAULT when submitting and (ab)use the
>> fact that RCS is at index zero for the foreseeable future. When it fails
>> we'll know about it. This applies to the hotplug, frequency and interrupts
>> subtests AFAIR.
> 
> yes, I was also thinking about adding a few wrappers around.

I'm afraid I lost you here; I did not mention any wrappers. I was 
thinking about the thing we talked about before, where I knee-jerk added 
context creation to some subtests and later realized we could possibly 
do without that.

> I managed to build quite a hefty list of things to do over these 21
> versions (22 with the coming one).

Next time you can occasionally send updated individual patches with 
--in-reply-to instead of resending the whole series, which would keep 
the cover letter change log smaller. ;)

You only need to send the whole series when there are new or removed 
patches, if the time delay between posts is long, or if there was a 
rebase/conflict due to the base moving. Or if the reply chain gets too 
long. Those are my guidelines anyway; I don't claim they are documented 
anywhere or match other people's opinions.

> But I would happily send them once this series gets accepted, so as
> not to delay it further.

In my view it is basically almost there and accepted in principle, just 
needs some final tweaks.

Regards,

Tvrtko
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine()
  2019-04-16 15:11 ` [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine() Andi Shyti
@ 2019-04-17 15:42   ` Daniele Ceraolo Spurio
  2019-04-17 15:48     ` Chris Wilson
  2019-04-17 16:14     ` Tvrtko Ursulin
  0 siblings, 2 replies; 20+ messages in thread
From: Daniele Ceraolo Spurio @ 2019-04-17 15:42 UTC (permalink / raw)
  To: Andi Shyti, IGT dev; +Cc: Andi Shyti

<snip>

> @@ -94,17 +95,17 @@ emit_recursive_batch(igt_spin_t *spin,
>   
>   	nengine = 0;
>   	if (opts->engine == ALL_ENGINES) {
> -		unsigned int engine;
> +		struct intel_execution_engine2 *engine;
>   
> -		for_each_physical_engine(fd, engine) {
> +		for_each_context_engine(fd, opts->ctx, engine) {

On a kernel that has the new I915_CONTEXT_PARAM_ENGINES, wouldn't this 
implicitly update opts->ctx to use it (via the ctx_map_engines in 
intel_init_engine_list)? What if the caller then tries to submit with 
that ctx using an execbuf flag? Only an issue until all the callers are 
updated, I guess.

Thanks,
Daniele

>   			if (opts->flags & IGT_SPIN_POLL_RUN &&
> -			    !gem_can_store_dword(fd, engine))
> +			    !gem_class_can_store_dword(fd, engine->class))
>   				continue;
>   
> -			engines[nengine++] = engine;
> +			flags[nengine++] = engine->flags;
>   		}
>   	} else {
> -		engines[nengine++] = opts->engine;
> +		flags[nengine++] = opts->engine;
>   	}
>   	igt_require(nengine);
>   
> @@ -234,7 +235,7 @@ emit_recursive_batch(igt_spin_t *spin,
>   
>   	for (i = 0; i < nengine; i++) {
>   		execbuf->flags &= ~ENGINE_MASK;
> -		execbuf->flags |= engines[i];
> +		execbuf->flags |= flags[i];
>   
>   		gem_execbuf_wr(fd, execbuf);
>   
> @@ -309,9 +310,19 @@ igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts)
>   	igt_require_gem(fd);
>   
>   	if (opts->engine != ALL_ENGINES) {
> -		gem_require_ring(fd, opts->engine);
> +		struct intel_execution_engine2 e;
> +		int class;
> +
> +		if (!gem_context_lookup_engine(fd, opts->engine,
> +					       opts->ctx, &e)) {
> +			class = e.class;
> +		} else {
> +			gem_require_ring(fd, opts->engine);
> +			class = gem_execbuf_flags_to_engine_class(opts->engine);
> +		}
> +
>   		if (opts->flags & IGT_SPIN_POLL_RUN)
> -			igt_require(gem_can_store_dword(fd, opts->engine));
> +			igt_require(gem_class_can_store_dword(fd, class));
>   	}
>   
>   	spin = spin_batch_create(fd, opts);


* Re: [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine()
  2019-04-17 15:42   ` Daniele Ceraolo Spurio
@ 2019-04-17 15:48     ` Chris Wilson
  2019-04-17 16:14     ` Tvrtko Ursulin
  1 sibling, 0 replies; 20+ messages in thread
From: Chris Wilson @ 2019-04-17 15:48 UTC (permalink / raw)
  To: Andi Shyti, Daniele Ceraolo Spurio, IGT dev; +Cc: Andi Shyti

Quoting Daniele Ceraolo Spurio (2019-04-17 16:42:41)
> <snip>
> 
> > @@ -94,17 +95,17 @@ emit_recursive_batch(igt_spin_t *spin,
> >   
> >       nengine = 0;
> >       if (opts->engine == ALL_ENGINES) {
> > -             unsigned int engine;
> > +             struct intel_execution_engine2 *engine;
> >   
> > -             for_each_physical_engine(fd, engine) {
> > +             for_each_context_engine(fd, opts->ctx, engine) {
> 
> On a kernel that has the new I915_CONTEXT_PARAM_ENGINES, wouldn't this 
> implicitly update opts->ctx to use it (via the ctx_map_engines in 
> intel_init_engine_list)? What if the caller then tries to submit with 
> that ctx using an execbuf flag? Only an issue until all the callers are 
> updated, I guess.

Using this iterator means to use both ctx and engine->flags for your
execbuf.
-Chris

* Re: [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine()
  2019-04-17 15:42   ` Daniele Ceraolo Spurio
  2019-04-17 15:48     ` Chris Wilson
@ 2019-04-17 16:14     ` Tvrtko Ursulin
  2019-04-17 16:21       ` Chris Wilson
  1 sibling, 1 reply; 20+ messages in thread
From: Tvrtko Ursulin @ 2019-04-17 16:14 UTC (permalink / raw)
  To: Daniele Ceraolo Spurio, Andi Shyti, IGT dev; +Cc: Andi Shyti


On 17/04/2019 16:42, Daniele Ceraolo Spurio wrote:
> <snip>
> 
>> @@ -94,17 +95,17 @@ emit_recursive_batch(igt_spin_t *spin,
>>       nengine = 0;
>>       if (opts->engine == ALL_ENGINES) {
>> -        unsigned int engine;
>> +        struct intel_execution_engine2 *engine;
>> -        for_each_physical_engine(fd, engine) {
>> +        for_each_context_engine(fd, opts->ctx, engine) {
> 
> On a kernel that has the new I915_CONTEXT_PARAM_ENGINES, wouldn't this 
> implicitly update opts->ctx to use it (via the ctx_map_engines in 
> intel_init_engine_list)? What if the caller then tries to submit with 
> that ctx using an execbuf flag? Only an issue until all the callers are 
> updated, I guess.

I think you are right. And I think we will need dual paths not only 
until all tests are updated, but maybe even longer.

Something like:

if (ctx_has_map(ctx)) {
	for_each_context_engine()
	...
} else {
	for_each_physical_engine
	...
}

The problem, though, is the end game of replacing for_each_physical_engine 
with the new implementation. It is okay to have for_each_physical_engine 
configure the context when called directly in a test, but a bit less 
okay if done deep down in some function like this one.

So what to do to preserve compatibility with eb.flags and not mess up 
the context in here.. use the static iterator for contexts wo/ maps and 
legacy eb? But what does ALL_ENGINES mean then? Phase out ALL_ENGINES 
support in spin batch? Needs an audit of how many call sites we have and 
how they look.

Regards,

Tvrtko

> 
> Thanks,
> Daniele
> 
>>               if (opts->flags & IGT_SPIN_POLL_RUN &&
>> -                !gem_can_store_dword(fd, engine))
>> +                !gem_class_can_store_dword(fd, engine->class))
>>                   continue;
>> -            engines[nengine++] = engine;
>> +            flags[nengine++] = engine->flags;
>>           }
>>       } else {
>> -        engines[nengine++] = opts->engine;
>> +        flags[nengine++] = opts->engine;
>>       }
>>       igt_require(nengine);
>> @@ -234,7 +235,7 @@ emit_recursive_batch(igt_spin_t *spin,
>>       for (i = 0; i < nengine; i++) {
>>           execbuf->flags &= ~ENGINE_MASK;
>> -        execbuf->flags |= engines[i];
>> +        execbuf->flags |= flags[i];
>>           gem_execbuf_wr(fd, execbuf);
>> @@ -309,9 +310,19 @@ igt_spin_batch_factory(int fd, const struct 
>> igt_spin_factory *opts)
>>       igt_require_gem(fd);
>>       if (opts->engine != ALL_ENGINES) {
>> -        gem_require_ring(fd, opts->engine);
>> +        struct intel_execution_engine2 e;
>> +        int class;
>> +
>> +        if (!gem_context_lookup_engine(fd, opts->engine,
>> +                           opts->ctx, &e)) {
>> +            class = e.class;
>> +        } else {
>> +            gem_require_ring(fd, opts->engine);
>> +            class = gem_execbuf_flags_to_engine_class(opts->engine);
>> +        }
>> +
>>           if (opts->flags & IGT_SPIN_POLL_RUN)
>> -            igt_require(gem_can_store_dword(fd, opts->engine));
>> +            igt_require(gem_class_can_store_dword(fd, class));
>>       }
>>       spin = spin_batch_create(fd, opts);
> 
> 

* Re: [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine()
  2019-04-17 16:14     ` Tvrtko Ursulin
@ 2019-04-17 16:21       ` Chris Wilson
  2019-04-18  8:46         ` Tvrtko Ursulin
  0 siblings, 1 reply; 20+ messages in thread
From: Chris Wilson @ 2019-04-17 16:21 UTC (permalink / raw)
  To: Andi Shyti, Daniele Ceraolo Spurio, IGT dev, Tvrtko Ursulin; +Cc: Andi Shyti

Quoting Tvrtko Ursulin (2019-04-17 17:14:45)
> So what to do to preserve compatibility with eb.flags and not mess up 
> the context in here.. use the static iterator for contexts wo/ maps and 
> legacy eb? But what does ALL_ENGINES mean then? Phase out ALL_ENGINES 
> support in spin batch? Needs an audit of how many call sites we have and 
> how they look.

Actually, we've^I've open-coded several call sites that could do with 
ALL_ENGINES. And I think that's a good thing -- as the setup requires a
bit of finesse and resubmitting the spin batch works quite well. No
doubt someone else will be tempted to refactor.
-Chris

* Re: [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine()
  2019-04-17 16:21       ` Chris Wilson
@ 2019-04-18  8:46         ` Tvrtko Ursulin
  0 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2019-04-18  8:46 UTC (permalink / raw)
  To: Chris Wilson, Andi Shyti, Daniele Ceraolo Spurio, IGT dev; +Cc: Andi Shyti


On 17/04/2019 17:21, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-04-17 17:14:45)
>> So what to do to preserve compatibility with eb.flags and not mess up
>> the context in here.. use the static iterator for contexts wo/ maps and
>> legacy eb? But what does ALL_ENGINES mean then? Phase out ALL_ENGINES
>> support in spin batch? Needs an audit of how many call sites we have and
>> how they look.
> 
> Actually, we've^I've open-coded several call sites that could do with
> ALL_ENGINES. And I think that's a good thing -- as the setup requires a
> bit of finesse and resubmitting the spin batch works quite well. No
> doubt someone else will be tempted to refactor.

I didn't quite get whether you want more uses of spin_batch(ALL_ENGINES) 
or fewer in the future.

Do we agree that, for now at least, we don't want to silently modify the 
context, and so should go with dual paths in the spin batch constructor, 
so Andi has a way forward?

Regards,

Tvrtko

end of thread, other threads:[~2019-04-18  8:46 UTC | newest]

Thread overview: 20+ messages
2019-04-16 15:11 [igt-dev] [PATCH v21 0/6] new engine discovery interface Andi Shyti
2019-04-16 15:11 ` [igt-dev] [PATCH v21 1/6] include/drm-uapi: import i915_drm.h header file Andi Shyti
2019-04-16 15:11 ` [igt-dev] [PATCH v21 2/6] lib/i915: add gem_engine_topology library and for_each loop definition Andi Shyti
2019-04-16 15:21   ` Chris Wilson
2019-04-16 15:28     ` Andi Shyti
2019-04-16 17:09       ` Tvrtko Ursulin
2019-04-16 15:11 ` [igt-dev] [PATCH v21 3/6] lib: igt_gt: add execution buffer flags to class helper Andi Shyti
2019-04-16 15:11 ` [igt-dev] [PATCH v21 4/6] lib: igt_gt: make gem_engine_can_store_dword() check engine class Andi Shyti
2019-04-16 15:11 ` [igt-dev] [PATCH v21 5/6] lib: igt_dummyload: use for_each_context_engine() Andi Shyti
2019-04-17 15:42   ` Daniele Ceraolo Spurio
2019-04-17 15:48     ` Chris Wilson
2019-04-17 16:14     ` Tvrtko Ursulin
2019-04-17 16:21       ` Chris Wilson
2019-04-18  8:46         ` Tvrtko Ursulin
2019-04-16 15:11 ` [igt-dev] [PATCH v21 6/6] test: perf_pmu: use the gem_engine_topology library Andi Shyti
2019-04-16 17:06   ` Tvrtko Ursulin
2019-04-16 23:05     ` Andi Shyti
2019-04-17 13:42       ` Tvrtko Ursulin
2019-04-16 16:36 ` [igt-dev] ✓ Fi.CI.BAT: success for new engine discovery interface Patchwork
2019-04-17  0:46 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork
