intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH 01/11] drm/i915: Release i915_gem_context from a worker
@ 2021-08-13 20:30 Daniel Vetter
  2021-08-13 20:30 ` [Intel-gfx] [PATCH 02/11] drm/i915: Release ctx->syncobj on final put, not on ctx close Daniel Vetter
                   ` (21 more replies)
  0 siblings, 22 replies; 34+ messages in thread
From: Daniel Vetter @ 2021-08-13 20:30 UTC
  To: DRI Development
  Cc: Intel Graphics Development, Daniel Vetter, Jon Bloomfield,
	Chris Wilson, Maarten Lankhorst, Joonas Lahtinen,
	Thomas Hellström, Matthew Auld, Lionel Landwerlin,
	Dave Airlie, Jason Ekstrand

The only real reason for this is the i915_gem_engines->fence
callback engines_notify(), which exists purely as a fairly funky
reference counting scheme. All other callers are from process
context, and generally from a fairly benign locking context.

Unfortunately untangling that requires some major surgery, and we have
a few i915_gem_context reference counting bugs that need fixing first;
those bugs blow up in the current hardirq calling context, so we need
a stop-gap measure.

Add a FIXME comment noting when this should be removable again.
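
The pattern itself is the usual one for punting a kref release out of
atomic context; roughly like the sketch below, with made-up foo names,
and with the real patch using the driver-private i915->wq instead of
system_wq:

	#include <linux/kernel.h>
	#include <linux/kref.h>
	#include <linux/slab.h>
	#include <linux/workqueue.h>

	struct foo {
		struct kref ref;
		struct work_struct release_work;
	};

	static void foo_release_work(struct work_struct *work)
	{
		struct foo *foo = container_of(work, typeof(*foo),
					       release_work);

		/* all the heavy cleanup runs in process context here */
		kfree(foo);
	}

	static void foo_release(struct kref *ref)
	{
		struct foo *foo = container_of(ref, typeof(*foo), ref);

		/* hardirq-safe: only queue the actual release */
		queue_work(system_wq, &foo->release_work);
	}

	/* at creation time, paired with kref_init(&foo->ref) */
	INIT_WORK(&foo->release_work, foo_release_work);

Callers then drop references as usual with
kref_put(&foo->ref, foo_release) from any context.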

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 13 +++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h | 12 ++++++++++++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index fd169cf2f75a..051bc357ff65 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -986,9 +986,10 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
 	return err;
 }
 
-void i915_gem_context_release(struct kref *ref)
+static void i915_gem_context_release_work(struct work_struct *work)
 {
-	struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
+	struct i915_gem_context *ctx = container_of(work, typeof(*ctx),
+						    release_work);
 
 	trace_i915_context_free(ctx);
 	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
@@ -1002,6 +1003,13 @@ void i915_gem_context_release(struct kref *ref)
 	kfree_rcu(ctx, rcu);
 }
 
+void i915_gem_context_release(struct kref *ref)
+{
+	struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
+
+	queue_work(ctx->i915->wq, &ctx->release_work);
+}
+
 static inline struct i915_gem_engines *
 __context_engines_static(const struct i915_gem_context *ctx)
 {
@@ -1303,6 +1311,7 @@ i915_gem_create_context(struct drm_i915_private *i915,
 	ctx->sched = pc->sched;
 	mutex_init(&ctx->mutex);
 	INIT_LIST_HEAD(&ctx->link);
+	INIT_WORK(&ctx->release_work, i915_gem_context_release_work);
 
 	spin_lock_init(&ctx->stale.lock);
 	INIT_LIST_HEAD(&ctx->stale.engines);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 94c03a97cb77..0c38789bd4a8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -288,6 +288,18 @@ struct i915_gem_context {
 	 */
 	struct kref ref;
 
+	/**
+	 * @release_work:
+	 *
+	 * Work item for deferred cleanup, since i915_gem_context_put() tends to
+	 * be called from hardirq context.
+	 *
+	 * FIXME: The only real reason for this is &i915_gem_engines.fence, all
+	 * other callers are from process context and need at most some mild
+	 * shuffling to pull the i915_gem_context_put() call out of a spinlock.
+	 */
+	struct work_struct release_work;
+
 	/**
 	 * @rcu: rcu_head for deferred freeing.
 	 */
-- 
2.32.0


* [Intel-gfx] [PATCH 01/11] drm/i915: Release i915_gem_context from a worker
@ 2021-09-02 14:20 Daniel Vetter
  2021-09-02 16:50 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/11] " Patchwork
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Vetter @ 2021-09-02 14:20 UTC
  To: DRI Development
  Cc: Intel Graphics Development, Daniel Vetter, Maarten Lankhorst,
	Jon Bloomfield, Chris Wilson, Joonas Lahtinen, Thomas Hellström,
	Matthew Auld, Lionel Landwerlin, Dave Airlie, Jason Ekstrand

The only real reason for this is the i915_gem_engines->fence
callback engines_notify(), which exists purely as a fairly funky
reference counting scheme. All other callers are from process
context, and generally from a fairly benign locking context.

Unfortunately untangling that requires some major surgery, and we have
a few i915_gem_context reference counting bugs that need fixing first;
those bugs blow up in the current hardirq calling context, so we need
a stop-gap measure.

Add a FIXME comment noting when this should be removable again.

v2: Fix mock_context(), noticed by intel-gfx-ci.
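
The v2 delta boils down to making sure this construction path also
initializes the work item before the first put can run, pairing the
existing kref_init() with INIT_WORK(); schematically (just the
pairing, not the full mock_context()):

	kref_init(&ctx->ref);
	INIT_WORK(&ctx->release_work, i915_gem_context_release_work);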

Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 13 +++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h | 12 ++++++++++++
 drivers/gpu/drm/i915/gem/selftests/mock_context.c |  1 +
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index fd169cf2f75a..051bc357ff65 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -986,9 +986,10 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
 	return err;
 }
 
-void i915_gem_context_release(struct kref *ref)
+static void i915_gem_context_release_work(struct work_struct *work)
 {
-	struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
+	struct i915_gem_context *ctx = container_of(work, typeof(*ctx),
+						    release_work);
 
 	trace_i915_context_free(ctx);
 	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
@@ -1002,6 +1003,13 @@ void i915_gem_context_release(struct kref *ref)
 	kfree_rcu(ctx, rcu);
 }
 
+void i915_gem_context_release(struct kref *ref)
+{
+	struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
+
+	queue_work(ctx->i915->wq, &ctx->release_work);
+}
+
 static inline struct i915_gem_engines *
 __context_engines_static(const struct i915_gem_context *ctx)
 {
@@ -1303,6 +1311,7 @@ i915_gem_create_context(struct drm_i915_private *i915,
 	ctx->sched = pc->sched;
 	mutex_init(&ctx->mutex);
 	INIT_LIST_HEAD(&ctx->link);
+	INIT_WORK(&ctx->release_work, i915_gem_context_release_work);
 
 	spin_lock_init(&ctx->stale.lock);
 	INIT_LIST_HEAD(&ctx->stale.engines);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 94c03a97cb77..0c38789bd4a8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -288,6 +288,18 @@ struct i915_gem_context {
 	 */
 	struct kref ref;
 
+	/**
+	 * @release_work:
+	 *
+	 * Work item for deferred cleanup, since i915_gem_context_put() tends to
+	 * be called from hardirq context.
+	 *
+	 * FIXME: The only real reason for this is &i915_gem_engines.fence, all
+	 * other callers are from process context and need at most some mild
+	 * shuffling to pull the i915_gem_context_put() call out of a spinlock.
+	 */
+	struct work_struct release_work;
+
 	/**
 	 * @rcu: rcu_head for deferred freeing.
 	 */
diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_context.c b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
index fee070df1c97..067d68a6fe4c 100644
--- a/drivers/gpu/drm/i915/gem/selftests/mock_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
@@ -23,6 +23,7 @@ mock_context(struct drm_i915_private *i915,
 	kref_init(&ctx->ref);
 	INIT_LIST_HEAD(&ctx->link);
 	ctx->i915 = i915;
+	INIT_WORK(&ctx->release_work, i915_gem_context_release_work);
 
 	mutex_init(&ctx->mutex);
 
-- 
2.33.0



end of thread, other threads:[~2021-09-03 10:40 UTC | newest]

Thread overview: 34+ messages
-- links below jump to the message on this page --
2021-08-13 20:30 [Intel-gfx] [PATCH 01/11] drm/i915: Release i915_gem_context from a worker Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 02/11] drm/i915: Release ctx->syncobj on final put, not on ctx close Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 03/11] drm/i915: Keep gem ctx->vm alive until the final put Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 04/11] drm/i915: Drop code to handle set-vm races from execbuf Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 05/11] drm/i915: Rename i915_gem_context_get_vm_rcu to i915_gem_context_get_eb_vm Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 06/11] drm/i915: Use i915_gem_context_get_eb_vm in ctx_getparam Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 07/11] drm/i915: Add i915_gem_context_is_full_ppgtt Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 08/11] drm/i915: Use i915_gem_context_get_eb_vm in intel_context_set_gem Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 09/11] drm/i915: Drop __rcu from gem_context->vm Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 10/11] drm/i915: use xa_lock/unlock for fpriv->vm_xa lookups Daniel Vetter
2021-08-31  9:29   ` Maarten Lankhorst
2021-08-31 12:14   ` [Intel-gfx] [PATCH] " Daniel Vetter
2021-08-13 20:30 ` [Intel-gfx] [PATCH 11/11] drm/i915: Stop rcu support for i915_address_space Daniel Vetter
2021-08-13 21:48 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/11] drm/i915: Release i915_gem_context from a worker Patchwork
2021-08-13 21:49 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-08-13 22:18 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-08-14  1:26 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-08-14 10:43 ` [Intel-gfx] [PATCH] " Daniel Vetter
2021-08-31  9:38   ` Maarten Lankhorst
2021-08-31 12:16     ` Daniel Vetter
2021-08-31 15:14       ` Daniel Vetter
2021-09-02 10:04         ` Maarten Lankhorst
2021-08-14 10:55 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with drm/i915: Release i915_gem_context from a worker (rev2) Patchwork
2021-08-14 10:56 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-08-14 11:19 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-08-14 12:38 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2021-08-31 10:16 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with drm/i915: Release i915_gem_context from a worker (rev3) Patchwork
2021-08-31 12:18 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with drm/i915: Release i915_gem_context from a worker (rev4) Patchwork
2021-09-02 12:42 ` [Intel-gfx] [PATCH 01/11] drm/i915: Release i915_gem_context from a worker Tvrtko Ursulin
2021-09-02 15:05   ` Daniel Vetter
2021-09-02 16:20     ` Tvrtko Ursulin
2021-09-02 20:02       ` Daniel Vetter
2021-09-03 10:40         ` Tvrtko Ursulin
2021-09-02 14:20 Daniel Vetter
2021-09-02 16:50 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/11] " Patchwork
