* [Intel-gfx] [RFC PATCH v2 0/2] Introduce a ww transaction utility
@ 2020-09-17 18:59 Thomas Hellström (Intel)
  2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 1/2] drm/i915: Break out dma_resv ww locking utilities to separate files Thomas Hellström (Intel)
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Thomas Hellström (Intel) @ 2020-09-17 18:59 UTC (permalink / raw)
  To: intel-gfx; +Cc: maarten.lankhorst, chris

A ww transaction utility intended to help remove the
obj->mm.lock from the driver and to introduce ww transactions
in a robust way.

Patch 1/2 breaks the current i915 utilities out to separate files.
Patch 2/2 introduces the ww transaction utility.

A similar utility could easily be introduced in the core
ww_mutex code, even allowing for cross-driver ww transactions,
and the template argument could then allow for per-ww-class derived
ww_acquire_ctx types. To facilitate a core implementation (since we
can never guarantee that the contended lock stays alive), we'd need a

void ww_mutex_relax(struct ww_acquire_ctx *)

and its interruptible variant that does the equivalent of
locking and unlocking the contended mutex.

With this driver implementation, we can extend the code to take a
reference on the object containing the contended lock to make sure
it stays alive.

Finally, a drawback of the current implementation is the use of a hash
table and its corresponding performance cost, but as mentioned in
patch 2, a core variant could probably do this in a much more
efficient way.

v2: Version 1 of patch 2 was obviously a WIP patch. Fix that.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [RFC PATCH v2 1/2] drm/i915: Break out dma_resv ww locking utilities to separate files
  2020-09-17 18:59 [Intel-gfx] [RFC PATCH v2 0/2] Introduce a ww transaction utility Thomas Hellström (Intel)
@ 2020-09-17 18:59 ` Thomas Hellström (Intel)
  2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility Thomas Hellström (Intel)
  2020-09-17 19:16 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for Introduce a ww transaction utility (rev2) Patchwork
  2 siblings, 0 replies; 6+ messages in thread
From: Thomas Hellström (Intel) @ 2020-09-17 18:59 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, maarten.lankhorst, chris

From: Thomas Hellström <thomas.hellstrom@intel.com>

As we're about to add more ww-related functionality,
break out the dma_resv ww locking utilities to their own files.

Signed-off-by: Thomas Hellström <thomas.hellstrom@intel.com>
---
 drivers/gpu/drm/i915/Makefile               |  1 +
 drivers/gpu/drm/i915/gem/i915_gem_object.h  |  1 +
 drivers/gpu/drm/i915/gt/intel_renderstate.h |  1 +
 drivers/gpu/drm/i915/i915_gem.c             | 64 ------------------
 drivers/gpu/drm/i915/i915_gem.h             | 15 -----
 drivers/gpu/drm/i915/i915_gem_ww.c          | 72 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_ww.h          | 23 +++++++
 7 files changed, 98 insertions(+), 79 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gem_ww.c
 create mode 100644 drivers/gpu/drm/i915/i915_gem_ww.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 58d129b5a65a..71503bc26d98 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -45,6 +45,7 @@ i915-y += i915_drv.o \
 	  i915_switcheroo.o \
 	  i915_sysfs.o \
 	  i915_utils.o \
+	  i915_gem_ww.o \
 	  intel_device_info.o \
 	  intel_dram.o \
 	  intel_memory_region.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index f084a25c5121..cd64b1fdf53c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -15,6 +15,7 @@
 #include "i915_gem_object_types.h"
 #include "i915_gem_gtt.h"
 #include "i915_vma_types.h"
+#include "i915_gem_ww.h"
 
 void i915_gem_init__objects(struct drm_i915_private *i915);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_renderstate.h b/drivers/gpu/drm/i915/gt/intel_renderstate.h
index 713aa1e86c80..d9db833b873b 100644
--- a/drivers/gpu/drm/i915/gt/intel_renderstate.h
+++ b/drivers/gpu/drm/i915/gt/intel_renderstate.h
@@ -26,6 +26,7 @@
 
 #include <linux/types.h>
 #include "i915_gem.h"
+#include "i915_gem_ww.h"
 
 struct i915_request;
 struct intel_context;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3f83ac729644..fa1b7861b954 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1365,70 +1365,6 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 	return ret;
 }
 
-void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ww, bool intr)
-{
-	ww_acquire_init(&ww->ctx, &reservation_ww_class);
-	INIT_LIST_HEAD(&ww->obj_list);
-	ww->intr = intr;
-	ww->contended = NULL;
-}
-
-static void i915_gem_ww_ctx_unlock_all(struct i915_gem_ww_ctx *ww)
-{
-	struct drm_i915_gem_object *obj;
-
-	while ((obj = list_first_entry_or_null(&ww->obj_list, struct drm_i915_gem_object, obj_link))) {
-		if (WARN_ON(!kref_read(&obj->base.refcount))) {
-			unsigned long *entries;
-			unsigned int nr_entries;
-
-			nr_entries = stack_depot_fetch(obj->bt, &entries);
-			stack_trace_print(entries, nr_entries, 4);
-		}
-
-		obj->bt = 0;
-		list_del(&obj->obj_link);
-		i915_gem_object_unlock(obj);
-	}
-}
-
-void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj)
-{
-	list_del(&obj->obj_link);
-	i915_gem_object_unlock(obj);
-}
-
-void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ww)
-{
-	i915_gem_ww_ctx_unlock_all(ww);
-	WARN_ON(ww->contended);
-	ww_acquire_fini(&ww->ctx);
-}
-
-int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ww)
-{
-	int ret = 0;
-
-	if (WARN_ON(!ww->contended))
-		return -EINVAL;
-
-	i915_gem_ww_ctx_unlock_all(ww);
-	if (ww->intr)
-		ret = dma_resv_lock_slow_interruptible(ww->contended->base.resv, &ww->ctx);
-	else
-		dma_resv_lock_slow(ww->contended->base.resv, &ww->ctx);
-
-	if (!ret) {
-		list_add_tail(&ww->contended->obj_link, &ww->obj_list);
-		ww->contended->bt = ww->contended_bt;
-	}
-
-	ww->contended = NULL;
-	ww->contended_bt = 0;
-
-	return ret;
-}
-
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_gem_device.c"
 #include "selftests/i915_gem.c"
diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
index 4d50afab43f2..db0b2835095d 100644
--- a/drivers/gpu/drm/i915/i915_gem.h
+++ b/drivers/gpu/drm/i915/i915_gem.h
@@ -27,8 +27,6 @@
 
 #include <linux/bug.h>
 #include <linux/interrupt.h>
-#include <linux/stackdepot.h>
-#include <linux/stacktrace.h>
 #include <drm/drm_drv.h>
 
 #include "i915_utils.h"
@@ -117,17 +115,4 @@ static inline bool __tasklet_is_scheduled(struct tasklet_struct *t)
 	return test_bit(TASKLET_STATE_SCHED, &t->state);
 }
 
-struct i915_gem_ww_ctx {
-	struct ww_acquire_ctx ctx;
-	struct list_head obj_list;
-	bool intr;
-	struct drm_i915_gem_object *contended;
-	depot_stack_handle_t contended_bt;
-};
-
-void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ctx, bool intr);
-void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ctx);
-int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ctx);
-void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj);
-
 #endif /* __I915_GEM_H__ */
diff --git a/drivers/gpu/drm/i915/i915_gem_ww.c b/drivers/gpu/drm/i915/i915_gem_ww.c
new file mode 100644
index 000000000000..3490b72cf613
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_ww.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+#include <linux/dma-resv.h>
+#include <linux/stacktrace.h>
+#include "i915_gem_ww.h"
+#include "gem/i915_gem_object.h"
+
+void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ww, bool intr)
+{
+	ww_acquire_init(&ww->ctx, &reservation_ww_class);
+	INIT_LIST_HEAD(&ww->obj_list);
+	ww->intr = intr;
+	ww->contended = NULL;
+}
+
+static void i915_gem_ww_ctx_unlock_all(struct i915_gem_ww_ctx *ww)
+{
+	struct drm_i915_gem_object *obj;
+
+	while ((obj = list_first_entry_or_null(&ww->obj_list, struct drm_i915_gem_object, obj_link))) {
+		if (WARN_ON(!kref_read(&obj->base.refcount))) {
+			unsigned long *entries;
+			unsigned int nr_entries;
+
+			nr_entries = stack_depot_fetch(obj->bt, &entries);
+			stack_trace_print(entries, nr_entries, 4);
+		}
+
+		obj->bt = 0;
+		list_del(&obj->obj_link);
+		i915_gem_object_unlock(obj);
+	}
+}
+
+void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj)
+{
+	list_del(&obj->obj_link);
+	i915_gem_object_unlock(obj);
+}
+
+void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ww)
+{
+	i915_gem_ww_ctx_unlock_all(ww);
+	WARN_ON(ww->contended);
+	ww_acquire_fini(&ww->ctx);
+}
+
+int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ww)
+{
+	int ret = 0;
+
+	if (WARN_ON(!ww->contended))
+		return -EINVAL;
+
+	i915_gem_ww_ctx_unlock_all(ww);
+	if (ww->intr)
+		ret = dma_resv_lock_slow_interruptible(ww->contended->base.resv, &ww->ctx);
+	else
+		dma_resv_lock_slow(ww->contended->base.resv, &ww->ctx);
+
+	if (!ret) {
+		list_add_tail(&ww->contended->obj_link, &ww->obj_list);
+		ww->contended->bt = ww->contended_bt;
+	}
+
+	ww->contended = NULL;
+	ww->contended_bt = 0;
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_ww.h b/drivers/gpu/drm/i915/i915_gem_ww.h
new file mode 100644
index 000000000000..94fdf8c5f89b
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_ww.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+#ifndef __I915_GEM_WW_H__
+#define __I915_GEM_WW_H__
+
+#include <linux/stackdepot.h>
+#include <drm/drm_drv.h>
+
+struct i915_gem_ww_ctx {
+	struct ww_acquire_ctx ctx;
+	struct list_head obj_list;
+	struct drm_i915_gem_object *contended;
+	depot_stack_handle_t contended_bt;
+	bool intr;
+};
+
+void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ctx, bool intr);
+void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ctx);
+int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ctx);
+void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj);
+#endif
-- 
2.25.1



* [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility
  2020-09-17 18:59 [Intel-gfx] [RFC PATCH v2 0/2] Introduce a ww transaction utility Thomas Hellström (Intel)
  2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 1/2] drm/i915: Break out dma_resv ww locking utilities to separate files Thomas Hellström (Intel)
@ 2020-09-17 18:59 ` Thomas Hellström (Intel)
  2020-09-22  9:12   ` Tvrtko Ursulin
  2020-09-17 19:16 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for Introduce a ww transaction utility (rev2) Patchwork
  2 siblings, 1 reply; 6+ messages in thread
From: Thomas Hellström (Intel) @ 2020-09-17 18:59 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, maarten.lankhorst, chris

From: Thomas Hellström <thomas.hellstrom@intel.com>

With the huge number of sites where multiple-object locking is
needed in the driver, it becomes difficult to avoid recursive
ww_acquire_ctx initialization, and the function prototypes become
bloated from passing the ww_acquire_ctx around. Furthermore, it's not
always easy to get the -EDEADLK handling correct or to follow it.

Introduce an i915_gem_do_ww() utility that tries to remedy all these
problems by enclosing parts of a ww transaction in the following way:

my_function() {
	struct i915_gem_ww_ctx *ww, template;
	int err;
	bool interruptible = true;

	i915_gem_do_ww(ww, &template, err, interruptible) {
		err = ww_transaction_part(ww);
	}
	return err;
}

The utility will automatically look up an active ww_acquire_ctx if one
was initialized previously in the call chain, and if one is found it will
forward the -EDEADLK instead of handling it, which takes care of the
recursive initialization. Using the utility also discourages nested ww
unlocking / relocking, which is both very fragile and hard to follow.

To look up and register an active ww_acquire_ctx, use a
driver-wide hash table for now. But noting that a task can only have
a single active ww_acquire_ctx per ww_class, the active context is really
task state, and a generic version of this utility in the ww_mutex code
could thus probably use a quick lookup from a list in the
struct task_struct.

Signed-off-by: Thomas Hellström <thomas.hellstrom@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_ww.c  | 74 ++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_ww.h  | 56 +++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_globals.c |  1 +
 drivers/gpu/drm/i915/i915_globals.h |  1 +
 4 files changed, 130 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_ww.c b/drivers/gpu/drm/i915/i915_gem_ww.c
index 3490b72cf613..6247af1dba87 100644
--- a/drivers/gpu/drm/i915/i915_gem_ww.c
+++ b/drivers/gpu/drm/i915/i915_gem_ww.c
@@ -1,10 +1,12 @@
 // SPDX-License-Identifier: MIT
 /*
- * Copyright © 2020 Intel Corporation
+ * Copyright © 2019 Intel Corporation
  */
+#include <linux/rhashtable.h>
 #include <linux/dma-resv.h>
 #include <linux/stacktrace.h>
 #include "i915_gem_ww.h"
+#include "i915_globals.h"
 #include "gem/i915_gem_object.h"
 
 void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ww, bool intr)
@@ -70,3 +72,73 @@ int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ww)
 
 	return ret;
 }
+
+static struct rhashtable ww_ht;
+static const struct rhashtable_params ww_params = {
+	.key_len = sizeof(struct task_struct *),
+	.key_offset = offsetof(struct i915_gem_ww_ctx, ctx.task),
+	.head_offset = offsetof(struct i915_gem_ww_ctx, head),
+	.min_size = 128,
+};
+
+static void i915_ww_item_free(void *ptr, void *arg)
+{
+	WARN_ON_ONCE(1);
+}
+
+static void i915_global_ww_exit(void)
+{
+	rhashtable_free_and_destroy(&ww_ht, i915_ww_item_free, NULL);
+}
+
+static void i915_global_ww_shrink(void)
+{
+}
+
+static struct i915_global global = {
+	.shrink = i915_global_ww_shrink,
+	.exit = i915_global_ww_exit,
+};
+
+int __init i915_global_ww_init(void)
+{
+	int ret = rhashtable_init(&ww_ht, &ww_params);
+
+	if (ret)
+		return ret;
+
+	i915_global_register(&global);
+
+	return 0;
+}
+
+void __i915_gem_ww_mark_unused(struct i915_gem_ww_ctx *ww)
+{
+	GEM_WARN_ON(rhashtable_remove_fast(&ww_ht, &ww->head, ww_params));
+}
+
+/**
+ * __i915_gem_ww_locate_or_use - return the task's i915_gem_ww_ctx context
+ * to use.
+ *
+ * @template: The context to use if there was none initialized previously
+ * in the call chain.
+ *
+ * RETURN: The task's i915_gem_ww_ctx context.
+ */
+struct i915_gem_ww_ctx *
+__i915_gem_ww_locate_or_use(struct i915_gem_ww_ctx *template)
+{
+	struct i915_gem_ww_ctx *tmp;
+
+	/* ctx.task is the hash key, so set it first. */
+	template->ctx.task = current;
+
+	/*
+	 * Ideally we'd just hook the active context to the
+	 * struct task_struct. But for now use a hash table.
+	 */
+	tmp = rhashtable_lookup_get_insert_fast(&ww_ht, &template->head,
+						ww_params);
+	return tmp;
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_ww.h b/drivers/gpu/drm/i915/i915_gem_ww.h
index 94fdf8c5f89b..b844596067c7 100644
--- a/drivers/gpu/drm/i915/i915_gem_ww.h
+++ b/drivers/gpu/drm/i915/i915_gem_ww.h
@@ -6,18 +6,72 @@
 #define __I915_GEM_WW_H__
 
 #include <linux/stackdepot.h>
+#include <linux/rhashtable-types.h>
 #include <drm/drm_drv.h>
 
 struct i915_gem_ww_ctx {
 	struct ww_acquire_ctx ctx;
+	struct rhash_head head;
 	struct list_head obj_list;
 	struct drm_i915_gem_object *contended;
 	depot_stack_handle_t contended_bt;
-	bool intr;
+	u32 call_depth;
+	unsigned short intr;
+	unsigned short loop;
 };
 
 void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ctx, bool intr);
 void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ctx);
 int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ctx);
 void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj);
+
+/* Internal functions used by the inlines! Don't use. */
+void __i915_gem_ww_mark_unused(struct i915_gem_ww_ctx *ww);
+struct i915_gem_ww_ctx *
+__i915_gem_ww_locate_or_use(struct i915_gem_ww_ctx *template);
+
+static inline int __i915_gem_ww_fini(struct i915_gem_ww_ctx *ww, int err)
+{
+	ww->loop = 0;
+	if (ww->call_depth) {
+		ww->call_depth--;
+		return err;
+	}
+
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(ww);
+		if (!err)
+			ww->loop = 1;
+	}
+
+	if (!ww->loop) {
+		i915_gem_ww_ctx_fini(ww);
+		__i915_gem_ww_mark_unused(ww);
+	}
+
+	return err;
+}
+
+static inline struct i915_gem_ww_ctx *
+__i915_gem_ww_init(struct i915_gem_ww_ctx *template, bool intr)
+{
+	struct i915_gem_ww_ctx *ww = __i915_gem_ww_locate_or_use(template);
+
+	if (!ww) {
+		ww = template;
+		ww->call_depth = 0;
+		i915_gem_ww_ctx_init(ww, intr);
+	} else {
+		ww->call_depth++;
+	}
+
+	ww->loop = 1;
+
+	return ww;
+}
+
+#define i915_gem_do_ww(_ww, _template, _err, _intr)			\
+	for ((_ww) = __i915_gem_ww_init(_template, _intr); (_ww)->loop; \
+	     _err = __i915_gem_ww_fini(_ww, _err))
+
 #endif
diff --git a/drivers/gpu/drm/i915/i915_globals.c b/drivers/gpu/drm/i915/i915_globals.c
index 3aa213684293..9087cc8c2ee3 100644
--- a/drivers/gpu/drm/i915/i915_globals.c
+++ b/drivers/gpu/drm/i915/i915_globals.c
@@ -94,6 +94,7 @@ static __initconst int (* const initfn[])(void) = {
 	i915_global_request_init,
 	i915_global_scheduler_init,
 	i915_global_vma_init,
+	i915_global_ww_init,
 };
 
 int __init i915_globals_init(void)
diff --git a/drivers/gpu/drm/i915/i915_globals.h b/drivers/gpu/drm/i915/i915_globals.h
index b2f5cd9b9b1a..5976b460ee39 100644
--- a/drivers/gpu/drm/i915/i915_globals.h
+++ b/drivers/gpu/drm/i915/i915_globals.h
@@ -34,5 +34,6 @@ int i915_global_objects_init(void);
 int i915_global_request_init(void);
 int i915_global_scheduler_init(void);
 int i915_global_vma_init(void);
+int i915_global_ww_init(void);
 
 #endif /* _I915_GLOBALS_H_ */
-- 
2.25.1



* [Intel-gfx] ✗ Fi.CI.BUILD: failure for Introduce a ww transaction utility (rev2)
  2020-09-17 18:59 [Intel-gfx] [RFC PATCH v2 0/2] Introduce a ww transaction utility Thomas Hellström (Intel)
  2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 1/2] drm/i915: Break out dma_resv ww locking utilities to separate files Thomas Hellström (Intel)
  2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility Thomas Hellström (Intel)
@ 2020-09-17 19:16 ` Patchwork
  2 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2020-09-17 19:16 UTC (permalink / raw)
  To: Thomas Hellström (Intel); +Cc: intel-gfx

== Series Details ==

Series: Introduce a ww transaction utility (rev2)
URL   : https://patchwork.freedesktop.org/series/81787/
State : failure

== Summary ==

Applying: drm/i915: Break out dma_resv ww locking utilities to separate files
error: sha1 information is lacking or useless (drivers/gpu/drm/i915/gem/i915_gem_object.h).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 drm/i915: Break out dma_resv ww locking utilities to separate files
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".




* Re: [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility
  2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility Thomas Hellström (Intel)
@ 2020-09-22  9:12   ` Tvrtko Ursulin
  2020-09-22 13:31     ` Thomas Hellström (Intel)
  0 siblings, 1 reply; 6+ messages in thread
From: Tvrtko Ursulin @ 2020-09-22  9:12 UTC (permalink / raw)
  To: Thomas Hellström (Intel), intel-gfx
  Cc: Thomas Hellström, maarten.lankhorst, chris


On 17/09/2020 19:59, Thomas Hellström (Intel) wrote:
> From: Thomas Hellström <thomas.hellstrom@intel.com>
> 
> With the huge number of sites where multiple-object locking is
> needed in the driver, it becomes difficult to avoid recursive
> ww_acquire_ctx initialization, and the function prototypes become
> bloated from passing the ww_acquire_ctx around. Furthermore, it's not
> always easy to get the -EDEADLK handling correct or to follow it.
> 
> Introduce an i915_gem_do_ww() utility that tries to remedy all these
> problems by enclosing parts of a ww transaction in the following way:
> 
> my_function() {
> 	struct i915_gem_ww_ctx *ww, template;
> 	int err;
> 	bool interruptible = true;
> 
> 	i915_gem_do_ww(ww, &template, err, interruptible) {
> 		err = ww_transaction_part(ww);
> 	}
> 	return err;
> }
> 
> The utility will automatically look up an active ww_acquire_ctx if one
> was initialized previously in the call chain, and if one is found it will
> forward the -EDEADLK instead of handling it, which takes care of the
> recursive initialization. Using the utility also discourages nested ww
> unlocking / relocking, which is both very fragile and hard to follow.
> 
> To look up and register an active ww_acquire_ctx, use a
> driver-wide hash table for now. But noting that a task can only have
> a single active ww_acquire_ctx per ww_class, the active context is really
> task state, and a generic version of this utility in the ww_mutex code
> could thus probably use a quick lookup from a list in the
> struct task_struct.

Maybe a stupid question, but is it safe to assume process context is the
only entry point to a ww transaction? I guess I was thinking about
things like background scrub/migrate threads, but yes, those would be
kernel threads so it would work. Other than those I have no idea who
could need to lock multiple objects, so from this aspect it looks okay.

But it is kind of neat to avoid changing deep hierarchies of function 
prototypes.

My concern is that the approach may be too "magicky"? I mean too hidden
and too stateful, and that some unwanted surprises could come up when
using this model. But it is a very vague feeling at this point, so I
don't know.

I also worry that if the graphics subsystem started thinking it is
so special that it needs dedicated handling in task_struct, it might
make the subsystem sound a bit pretentious. I had a quick browse
through struct task_struct and couldn't spot any precedent for such
things, so I don't know what the core kernel folks would say.

Regards,

Tvrtko

> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_gem_ww.c  | 74 ++++++++++++++++++++++++++++-
>   drivers/gpu/drm/i915/i915_gem_ww.h  | 56 +++++++++++++++++++++-
>   drivers/gpu/drm/i915/i915_globals.c |  1 +
>   drivers/gpu/drm/i915/i915_globals.h |  1 +
>   4 files changed, 130 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem_ww.c b/drivers/gpu/drm/i915/i915_gem_ww.c
> index 3490b72cf613..6247af1dba87 100644
> --- a/drivers/gpu/drm/i915/i915_gem_ww.c
> +++ b/drivers/gpu/drm/i915/i915_gem_ww.c
> @@ -1,10 +1,12 @@
>   // SPDX-License-Identifier: MIT
>   /*
> - * Copyright © 2020 Intel Corporation
> + * Copyright © 2019 Intel Corporation
>    */
> +#include <linux/rhashtable.h>
>   #include <linux/dma-resv.h>
>   #include <linux/stacktrace.h>
>   #include "i915_gem_ww.h"
> +#include "i915_globals.h"
>   #include "gem/i915_gem_object.h"
>   
>   void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ww, bool intr)
> @@ -70,3 +72,73 @@ int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ww)
>   
>   	return ret;
>   }
> +
> +static struct rhashtable ww_ht;
> +static const struct rhashtable_params ww_params = {
> +	.key_len = sizeof(struct task_struct *),
> +	.key_offset = offsetof(struct i915_gem_ww_ctx, ctx.task),
> +	.head_offset = offsetof(struct i915_gem_ww_ctx, head),
> +	.min_size = 128,
> +};
> +
> +static void i915_ww_item_free(void *ptr, void *arg)
> +{
> +	WARN_ON_ONCE(1);
> +}
> +
> +static void i915_global_ww_exit(void)
> +{
> +	rhashtable_free_and_destroy(&ww_ht, i915_ww_item_free, NULL);
> +}
> +
> +static void i915_global_ww_shrink(void)
> +{
> +}
> +
> +static struct i915_global global = {
> +	.shrink = i915_global_ww_shrink,
> +	.exit = i915_global_ww_exit,
> +};
> +
> +int __init i915_global_ww_init(void)
> +{
> +	int ret = rhashtable_init(&ww_ht, &ww_params);
> +
> +	if (ret)
> +		return ret;
> +
> +	i915_global_register(&global);
> +
> +	return 0;
> +}
> +
> +void __i915_gem_ww_mark_unused(struct i915_gem_ww_ctx *ww)
> +{
> +	GEM_WARN_ON(rhashtable_remove_fast(&ww_ht, &ww->head, ww_params));
> +}
> +
> +/**
> + * __i915_gem_ww_locate_or_use - return the task's i915_gem_ww_ctx context
> + * to use.
> + *
> + * @template: The context to use if there was none initialized previously
> + * in the call chain.
> + *
> + * RETURN: The task's i915_gem_ww_ctx context.
> + */
> +struct i915_gem_ww_ctx *
> +__i915_gem_ww_locate_or_use(struct i915_gem_ww_ctx *template)
> +{
> +	struct i915_gem_ww_ctx *tmp;
> +
> +	/* ctx.task is the hash key, so set it first. */
> +	template->ctx.task = current;
> +
> +	/*
> +	 * Ideally we'd just hook the active context to the
> +	 * struct task_struct. But for now use a hash table.
> +	 */
> +	tmp = rhashtable_lookup_get_insert_fast(&ww_ht, &template->head,
> +						ww_params);
> +	return tmp;
> +}
> diff --git a/drivers/gpu/drm/i915/i915_gem_ww.h b/drivers/gpu/drm/i915/i915_gem_ww.h
> index 94fdf8c5f89b..b844596067c7 100644
> --- a/drivers/gpu/drm/i915/i915_gem_ww.h
> +++ b/drivers/gpu/drm/i915/i915_gem_ww.h
> @@ -6,18 +6,72 @@
>   #define __I915_GEM_WW_H__
>   
>   #include <linux/stackdepot.h>
> +#include <linux/rhashtable-types.h>
>   #include <drm/drm_drv.h>
>   
>   struct i915_gem_ww_ctx {
>   	struct ww_acquire_ctx ctx;
> +	struct rhash_head head;
>   	struct list_head obj_list;
>   	struct drm_i915_gem_object *contended;
>   	depot_stack_handle_t contended_bt;
> -	bool intr;
> +	u32 call_depth;
> +	unsigned short intr;
> +	unsigned short loop;
>   };
>   
>   void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ctx, bool intr);
>   void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ctx);
>   int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ctx);
>   void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj);
> +
> +/* Internal functions used by the inlines! Don't use. */
> +void __i915_gem_ww_mark_unused(struct i915_gem_ww_ctx *ww);
> +struct i915_gem_ww_ctx *
> +__i915_gem_ww_locate_or_use(struct i915_gem_ww_ctx *template);
> +
> +static inline int __i915_gem_ww_fini(struct i915_gem_ww_ctx *ww, int err)
> +{
> +	ww->loop = 0;
> +	if (ww->call_depth) {
> +		ww->call_depth--;
> +		return err;
> +	}
> +
> +	if (err == -EDEADLK) {
> +		err = i915_gem_ww_ctx_backoff(ww);
> +		if (!err)
> +			ww->loop = 1;
> +	}
> +
> +	if (!ww->loop) {
> +		i915_gem_ww_ctx_fini(ww);
> +		__i915_gem_ww_mark_unused(ww);
> +	}
> +
> +	return err;
> +}
> +
> +static inline struct i915_gem_ww_ctx *
> +__i915_gem_ww_init(struct i915_gem_ww_ctx *template, bool intr)
> +{
> +	struct i915_gem_ww_ctx *ww = __i915_gem_ww_locate_or_use(template);
> +
> +	if (!ww) {
> +		ww = template;
> +		ww->call_depth = 0;
> +		i915_gem_ww_ctx_init(ww, intr);
> +	} else {
> +		ww->call_depth++;
> +	}
> +
> +	ww->loop = 1;
> +
> +	return ww;
> +}
> +
> +#define i915_gem_do_ww(_ww, _template, _err, _intr)			\
> +	for ((_ww) = __i915_gem_ww_init(_template, _intr); (_ww)->loop; \
> +	     _err = __i915_gem_ww_fini(_ww, _err))
> +
>   #endif
> diff --git a/drivers/gpu/drm/i915/i915_globals.c b/drivers/gpu/drm/i915/i915_globals.c
> index 3aa213684293..9087cc8c2ee3 100644
> --- a/drivers/gpu/drm/i915/i915_globals.c
> +++ b/drivers/gpu/drm/i915/i915_globals.c
> @@ -94,6 +94,7 @@ static __initconst int (* const initfn[])(void) = {
>   	i915_global_request_init,
>   	i915_global_scheduler_init,
>   	i915_global_vma_init,
> +	i915_global_ww_init,
>   };
>   
>   int __init i915_globals_init(void)
> diff --git a/drivers/gpu/drm/i915/i915_globals.h b/drivers/gpu/drm/i915/i915_globals.h
> index b2f5cd9b9b1a..5976b460ee39 100644
> --- a/drivers/gpu/drm/i915/i915_globals.h
> +++ b/drivers/gpu/drm/i915/i915_globals.h
> @@ -34,5 +34,6 @@ int i915_global_objects_init(void);
>   int i915_global_request_init(void);
>   int i915_global_scheduler_init(void);
>   int i915_global_vma_init(void);
> +int i915_global_ww_init(void);
>   
>   #endif /* _I915_GLOBALS_H_ */
> 


* Re: [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility
  2020-09-22  9:12   ` Tvrtko Ursulin
@ 2020-09-22 13:31     ` Thomas Hellström (Intel)
  0 siblings, 0 replies; 6+ messages in thread
From: Thomas Hellström (Intel) @ 2020-09-22 13:31 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx; +Cc: Thomas Hellström, maarten.lankhorst, chris


On 9/22/20 11:12 AM, Tvrtko Ursulin wrote:
>
> On 17/09/2020 19:59, Thomas Hellström (Intel) wrote:
>> From: Thomas Hellström <thomas.hellstrom@intel.com>
>>
>> With the huge number of sites where multiple-object locking is
>> needed in the driver, it becomes difficult to avoid recursive
>> ww_acquire_ctx initialization, and the function prototypes become
>> bloated from passing the ww_acquire_ctx around. Furthermore, it's not
>> always easy to get the -EDEADLK handling correct or to follow it.
>>
>> Introduce an i915_gem_do_ww() utility that tries to remedy all these
>> problems by enclosing parts of a ww transaction in the following way:
>>
>> my_function() {
>>     struct i915_gem_ww_ctx *ww, template;
>>     int err;
>>     bool interruptible = true;
>>
>>     i915_gem_do_ww(ww, &template, err, interruptible) {
>>         err = ww_transaction_part(ww);
>>     }
>>     return err;
>> }
>>
>> The utility will automatically look up an active ww_acquire_ctx if one
>> was initialized previously in the call chain, and if one is found it will
>> forward the -EDEADLK instead of handling it, which takes care of the
>> recursive initialization. Using the utility also discourages nested ww
>> unlocking / relocking, which is both very fragile and hard to follow.
>>
>> To look up and register an active ww_acquire_ctx, use a
>> driver-wide hash table for now. But noting that a task can only have
>> a single active ww_acquire_ctx per ww_class, the active context is really
>> task state, and a generic version of this utility in the ww_mutex code
>> could thus probably use a quick lookup from a list in the
>> struct task_struct.
>
> Maybe a stupid question, but is it safe to assume process context is
> the only entry point to a ww transaction? I guess I was thinking about
> things like background scrub/migrate threads, but yes, those would be
> kernel threads so it would work. Other than those I have no idea who
> could need to lock multiple objects, so from this aspect it looks okay.
>
> But it is kind of neat to avoid changing deep hierarchies of function 
> prototypes.
>
> My concern is that the approach may be too "magicky"? I mean too hidden
> and too stateful, and that some unwanted surprises could come up when
> using this model. But it is a very vague feeling at this point, so I
> don't know.
>
> I also worry that if the graphics subsystem started thinking it is
> so special that it needs dedicated handling in task_struct, it might
> make the subsystem sound a bit pretentious. I had a quick browse
> through struct task_struct and couldn't spot any precedent for such
> things, so I don't know what the core kernel folks would say.

Thanks for taking a look Tvrtko.

Yes, I've discussed this with Maarten and we'll favour changing the 
prototypes for now, although for code clarity and simplicity we could 
still use the i915_do_ww() notation.

For any upstream move we would have to move the utility to locking / 
ww_mutex (with driver-specific adaptation still possible, so any 
additional task_struct fields would belong to locking rather than to 
graphics), so I'll post an RFC for a generic utility on dri-devel, time 
permitting, to see if there is any interest.
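
To make the ww_mutex_relax() idea from the cover letter concrete, here is a
minimal user-space analogue using pthreads. This is purely a sketch of the
intended semantics (the names toy_relax / toy_relax_demo are made up, and no
such helper exists in this series): after backing off, the task briefly
takes and drops the contended lock, so it sleeps until the current holder
has finished instead of retrying immediately.

```c
#include <pthread.h>

/* Toy analogue of the proposed ww_mutex_relax(): after an -EDEADLK
 * backoff, block on the contended lock and release it again, so the
 * retry only happens once the current holder has made progress. */
static void toy_relax(pthread_mutex_t *contended)
{
	pthread_mutex_lock(contended);
	pthread_mutex_unlock(contended);
}

/* Demo: relaxing against an uncontended mutex simply returns. */
int toy_relax_demo(void)
{
	pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

	toy_relax(&m);
	return 0;
}
```

In the kernel variant the interesting part is lifetime: since the contended
lock's object may be freed, a core implementation would need the relax
helper (or the caller, as in this driver version) to hold a reference on
the containing object across the wait.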

/Thomas

>
> Regards,
>
> Tvrtko
>
>>
>> Signed-off-by: Thomas Hellström <thomas.hellstrom@intel.com>
>> ---
>>   drivers/gpu/drm/i915/i915_gem_ww.c  | 74 ++++++++++++++++++++++++++++-
>>   drivers/gpu/drm/i915/i915_gem_ww.h  | 56 +++++++++++++++++++++-
>>   drivers/gpu/drm/i915/i915_globals.c |  1 +
>>   drivers/gpu/drm/i915/i915_globals.h |  1 +
>>   4 files changed, 130 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_gem_ww.c 
>> b/drivers/gpu/drm/i915/i915_gem_ww.c
>> index 3490b72cf613..6247af1dba87 100644
>> --- a/drivers/gpu/drm/i915/i915_gem_ww.c
>> +++ b/drivers/gpu/drm/i915/i915_gem_ww.c
>> @@ -1,10 +1,12 @@
>>   // SPDX-License-Identifier: MIT
>>   /*
>> - * Copyright © 2020 Intel Corporation
>> + * Copyright © 2019 Intel Corporation
>>    */
>> +#include <linux/rhashtable.h>
>>   #include <linux/dma-resv.h>
>>   #include <linux/stacktrace.h>
>>   #include "i915_gem_ww.h"
>> +#include "i915_globals.h"
>>   #include "gem/i915_gem_object.h"
>>     void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ww, bool intr)
>> @@ -70,3 +72,73 @@ int __must_check i915_gem_ww_ctx_backoff(struct 
>> i915_gem_ww_ctx *ww)
>>         return ret;
>>   }
>> +
>> +static struct rhashtable ww_ht;
>> +static const struct rhashtable_params ww_params = {
>> +    .key_len = sizeof(struct task_struct *),
>> +    .key_offset = offsetof(struct i915_gem_ww_ctx, ctx.task),
>> +    .head_offset = offsetof(struct i915_gem_ww_ctx, head),
>> +    .min_size = 128,
>> +};
>> +
>> +static void i915_ww_item_free(void *ptr, void *arg)
>> +{
>> +    WARN_ON_ONCE(1);
>> +}
>> +
>> +static void i915_global_ww_exit(void)
>> +{
>> +    rhashtable_free_and_destroy(&ww_ht, i915_ww_item_free, NULL);
>> +}
>> +
>> +static void i915_global_ww_shrink(void)
>> +{
>> +}
>> +
>> +static struct i915_global global = {
>> +    .shrink = i915_global_ww_shrink,
>> +    .exit = i915_global_ww_exit,
>> +};
>> +
>> +int __init i915_global_ww_init(void)
>> +{
>> +    int ret = rhashtable_init(&ww_ht, &ww_params);
>> +
>> +    if (ret)
>> +        return ret;
>> +
>> +    i915_global_register(&global);
>> +
>> +    return 0;
>> +}
>> +
>> +void __i915_gem_ww_mark_unused(struct i915_gem_ww_ctx *ww)
>> +{
>> +    GEM_WARN_ON(rhashtable_remove_fast(&ww_ht, &ww->head, ww_params));
>> +}
>> +
>> +/**
>> + * __i915_gem_ww_locate_or_use - return the task's i915_gem_ww_ctx 
>> context
>> + * to use.
>> + *
>> + * @template: The context to use if there was none initialized 
>> previously
>> + * in the call chain.
>> + *
>> + * RETURN: The task's i915_gem_ww_ctx context.
>> + */
>> +struct i915_gem_ww_ctx *
>> +__i915_gem_ww_locate_or_use(struct i915_gem_ww_ctx *template)
>> +{
>> +    struct i915_gem_ww_ctx *tmp;
>> +
>> +    /* ctx.task is the hash key, so set it first. */
>> +    template->ctx.task = current;
>> +
>> +    /*
>> +     * Ideally we'd just hook the active context to the
>> +     * struct task_struct. But for now use a hash table.
>> +     */
>> +    tmp = rhashtable_lookup_get_insert_fast(&ww_ht, &template->head,
>> +                        ww_params);
>> +    return tmp;
>> +}
>> diff --git a/drivers/gpu/drm/i915/i915_gem_ww.h 
>> b/drivers/gpu/drm/i915/i915_gem_ww.h
>> index 94fdf8c5f89b..b844596067c7 100644
>> --- a/drivers/gpu/drm/i915/i915_gem_ww.h
>> +++ b/drivers/gpu/drm/i915/i915_gem_ww.h
>> @@ -6,18 +6,72 @@
>>   #define __I915_GEM_WW_H__
>>     #include <linux/stackdepot.h>
>> +#include <linux/rhashtable-types.h>
>>   #include <drm/drm_drv.h>
>>     struct i915_gem_ww_ctx {
>>       struct ww_acquire_ctx ctx;
>> +    struct rhash_head head;
>>       struct list_head obj_list;
>>       struct drm_i915_gem_object *contended;
>>       depot_stack_handle_t contended_bt;
>> -    bool intr;
>> +    u32 call_depth;
>> +    unsigned short intr;
>> +    unsigned short loop;
>>   };
>>     void i915_gem_ww_ctx_init(struct i915_gem_ww_ctx *ctx, bool intr);
>>   void i915_gem_ww_ctx_fini(struct i915_gem_ww_ctx *ctx);
>>   int __must_check i915_gem_ww_ctx_backoff(struct i915_gem_ww_ctx *ctx);
>>   void i915_gem_ww_unlock_single(struct drm_i915_gem_object *obj);
>> +
>> +/* Internal functions used by the inlines! Don't use. */
>> +void __i915_gem_ww_mark_unused(struct i915_gem_ww_ctx *ww);
>> +struct i915_gem_ww_ctx *
>> +__i915_gem_ww_locate_or_use(struct i915_gem_ww_ctx *template);
>> +
>> +static inline int __i915_gem_ww_fini(struct i915_gem_ww_ctx *ww, int 
>> err)
>> +{
>> +    ww->loop = 0;
>> +    if (ww->call_depth) {
>> +        ww->call_depth--;
>> +        return err;
>> +    }
>> +
>> +    if (err == -EDEADLK) {
>> +        err = i915_gem_ww_ctx_backoff(ww);
>> +        if (!err)
>> +            ww->loop = 1;
>> +    }
>> +
>> +    if (!ww->loop) {
>> +        i915_gem_ww_ctx_fini(ww);
>> +        __i915_gem_ww_mark_unused(ww);
>> +    }
>> +
>> +    return err;
>> +}
>> +
>> +static inline struct i915_gem_ww_ctx *
>> +__i915_gem_ww_init(struct i915_gem_ww_ctx *template, bool intr)
>> +{
>> +    struct i915_gem_ww_ctx *ww = __i915_gem_ww_locate_or_use(template);
>> +
>> +    if (!ww) {
>> +        ww = template;
>> +        ww->call_depth = 0;
>> +        i915_gem_ww_ctx_init(ww, intr);
>> +    } else {
>> +        ww->call_depth++;
>> +    }
>> +
>> +    ww->loop = 1;
>> +
>> +    return ww;
>> +}
>> +
>> +#define i915_gem_do_ww(_ww, _template, _err, _intr) \
>> +    for ((_ww) = __i915_gem_ww_init(_template, _intr); (_ww)->loop; \
>> +         _err = __i915_gem_ww_fini(_ww, _err))
>> +
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/i915_globals.c 
>> b/drivers/gpu/drm/i915/i915_globals.c
>> index 3aa213684293..9087cc8c2ee3 100644
>> --- a/drivers/gpu/drm/i915/i915_globals.c
>> +++ b/drivers/gpu/drm/i915/i915_globals.c
>> @@ -94,6 +94,7 @@ static __initconst int (* const initfn[])(void) = {
>>       i915_global_request_init,
>>       i915_global_scheduler_init,
>>       i915_global_vma_init,
>> +    i915_global_ww_init,
>>   };
>>     int __init i915_globals_init(void)
>> diff --git a/drivers/gpu/drm/i915/i915_globals.h 
>> b/drivers/gpu/drm/i915/i915_globals.h
>> index b2f5cd9b9b1a..5976b460ee39 100644
>> --- a/drivers/gpu/drm/i915/i915_globals.h
>> +++ b/drivers/gpu/drm/i915/i915_globals.h
>> @@ -34,5 +34,6 @@ int i915_global_objects_init(void);
>>   int i915_global_request_init(void);
>>   int i915_global_scheduler_init(void);
>>   int i915_global_vma_init(void);
>> +int i915_global_ww_init(void);
>>     #endif /* _I915_GLOBALS_H_ */
>>

end of thread, other threads:[~2020-09-22 13:31 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
2020-09-17 18:59 [Intel-gfx] [RFC PATCH v2 0/2] Introduce a ww transaction utility Thomas Hellström (Intel)
2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 1/2] drm/i915: Break out dma_resv ww locking utilities to separate files Thomas Hellström (Intel)
2020-09-17 18:59 ` [Intel-gfx] [RFC PATCH v2 2/2] drm/i915: Introduce a i915_gem_do_ww(){} utility Thomas Hellström (Intel)
2020-09-22  9:12   ` Tvrtko Ursulin
2020-09-22 13:31     ` Thomas Hellström (Intel)
2020-09-17 19:16 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for Introduce a ww transaction utility (rev2) Patchwork
