From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH 30/33] drm/i915: Flush the execution-callbacks on retiring
Date: Mon, 20 May 2019 09:01:24 +0100
Message-ID: <20190520080127.18255-30-chris@chris-wilson.co.uk>
In-Reply-To: <20190520080127.18255-1-chris@chris-wilson.co.uk>

In the unlikely case that the request completes while we still regard it
as not even executing on the GPU (see the next patch!), we have to flush
any pending execution callbacks at retirement and ensure that no more
can be added afterwards.
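
For reference, the other half of the guard lives in
__i915_request_await_execution() (not touched by this patch). The
following is a simplified paraphrase of that path, a sketch rather than
the exact code: callbacks are only queued while the signaler is not yet
marked active, so setting I915_FENCE_FLAG_ACTIVE at retirement closes
the window for new additions:

	spin_lock_irq(&signal->lock);
	if (i915_request_is_active(signal)) {
		/*
		 * The signaler is already executing (or, after this
		 * patch, retired with I915_FENCE_FLAG_ACTIVE set), so
		 * invoke the hook directly instead of queuing it.
		 */
		if (hook)
			hook(rq, &signal->fence);
	} else {
		/* Queued; flushed later by __notify_execute_cb(). */
		list_add_tail(&cb->link, &signal->execute_cb);
	}
	spin_unlock_irq(&signal->lock);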

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_request.c | 93 +++++++++++++++--------------
 1 file changed, 49 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index a98e781cb178..ac30ed39d3a5 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -119,6 +119,50 @@ const struct dma_fence_ops i915_fence_ops = {
 	.release = i915_fence_release,
 };
 
+static void irq_execute_cb(struct irq_work *wrk)
+{
+	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
+
+	i915_sw_fence_complete(cb->fence);
+	kmem_cache_free(global.slab_execute_cbs, cb);
+}
+
+static void irq_execute_cb_hook(struct irq_work *wrk)
+{
+	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
+
+	cb->hook(container_of(cb->fence, struct i915_request, submit),
+		 &cb->signal->fence);
+	i915_request_put(cb->signal);
+
+	irq_execute_cb(wrk);
+}
+
+static void __notify_execute_cb(struct i915_request *rq)
+{
+	struct execute_cb *cb;
+
+	lockdep_assert_held(&rq->lock);
+
+	if (list_empty(&rq->execute_cb))
+		return;
+
+	list_for_each_entry(cb, &rq->execute_cb, link)
+		irq_work_queue(&cb->work);
+
+	/*
+	 * XXX Rollback on __i915_request_unsubmit()
+	 *
+	 * In the future, perhaps when we have an active time-slicing scheduler,
+	 * it will be interesting to unsubmit parallel execution and remove
+	 * busywaits from the GPU until their master is restarted. This is
+	 * quite hairy, we have to carefully rollback the fence and do a
+	 * preempt-to-idle cycle on the target engine, all the while the
+	 * master execute_cb may refire.
+	 */
+	INIT_LIST_HEAD(&rq->execute_cb);
+}
+
 static inline void
 i915_request_remove_from_client(struct i915_request *request)
 {
@@ -246,6 +290,11 @@ static bool i915_request_retire(struct i915_request *rq)
 		GEM_BUG_ON(!atomic_read(&rq->i915->gt_pm.rps.num_waiters));
 		atomic_dec(&rq->i915->gt_pm.rps.num_waiters);
 	}
+	if (!test_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags)) {
+		set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
+		__notify_execute_cb(rq);
+	}
+	GEM_BUG_ON(!list_empty(&rq->execute_cb));
 	spin_unlock(&rq->lock);
 
 	local_irq_enable();
@@ -285,50 +334,6 @@ void i915_request_retire_upto(struct i915_request *rq)
 	} while (i915_request_retire(tmp) && tmp != rq);
 }
 
-static void irq_execute_cb(struct irq_work *wrk)
-{
-	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
-
-	i915_sw_fence_complete(cb->fence);
-	kmem_cache_free(global.slab_execute_cbs, cb);
-}
-
-static void irq_execute_cb_hook(struct irq_work *wrk)
-{
-	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
-
-	cb->hook(container_of(cb->fence, struct i915_request, submit),
-		 &cb->signal->fence);
-	i915_request_put(cb->signal);
-
-	irq_execute_cb(wrk);
-}
-
-static void __notify_execute_cb(struct i915_request *rq)
-{
-	struct execute_cb *cb;
-
-	lockdep_assert_held(&rq->lock);
-
-	if (list_empty(&rq->execute_cb))
-		return;
-
-	list_for_each_entry(cb, &rq->execute_cb, link)
-		irq_work_queue(&cb->work);
-
-	/*
-	 * XXX Rollback on __i915_request_unsubmit()
-	 *
-	 * In the future, perhaps when we have an active time-slicing scheduler,
-	 * it will be interesting to unsubmit parallel execution and remove
-	 * busywaits from the GPU until their master is restarted. This is
-	 * quite hairy, we have to carefully rollback the fence and do a
-	 * preempt-to-idle cycle on the target engine, all the while the
-	 * master execute_cb may refire.
-	 */
-	INIT_LIST_HEAD(&rq->execute_cb);
-}
-
 static int
 __i915_request_await_execution(struct i915_request *rq,
 			       struct i915_request *signal,
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 38+ messages
2019-05-20  8:00 [PATCH 01/33] drm/i915: Restore control over ppgtt for context creation ABI Chris Wilson
2019-05-20  8:00 ` [PATCH 02/33] drm/i915: Allow a context to define its set of engines Chris Wilson
2019-05-20  8:00 ` [PATCH 03/33] drm/i915: Extend I915_CONTEXT_PARAM_SSEU to support local ctx->engine[] Chris Wilson
2019-05-20  8:00 ` [PATCH 04/33] drm/i915: Re-expose SINGLE_TIMELINE flags for context creation Chris Wilson
2019-05-20  8:00 ` [PATCH 05/33] drm/i915: Allow userspace to clone contexts on creation Chris Wilson
2019-05-20  8:01 ` [PATCH 06/33] drm/i915: Load balancing across a virtual engine Chris Wilson
2019-05-20  8:01 ` [PATCH 07/33] drm/i915: Apply an execution_mask to the virtual_engine Chris Wilson
2019-05-20  8:01 ` [PATCH 08/33] drm/i915: Extend execution fence to support a callback Chris Wilson
2019-05-20  8:01 ` [PATCH 09/33] drm/i915/execlists: Virtual engine bonding Chris Wilson
2019-05-20  8:01 ` [PATCH 10/33] drm/i915: Allow specification of parallel execbuf Chris Wilson
2019-05-20  8:01 ` [PATCH 11/33] drm/i915: Split GEM object type definition to its own header Chris Wilson
2019-05-20  8:01 ` [PATCH 12/33] drm/i915: Pull GEM ioctls interface to its own file Chris Wilson
2019-05-20  8:01 ` [PATCH 13/33] drm/i915: Move object->pages API to i915_gem_object.[ch] Chris Wilson
2019-05-20  8:01 ` [PATCH 14/33] drm/i915: Move shmem object setup to its own file Chris Wilson
2019-05-20  8:01 ` [PATCH 15/33] drm/i915: Move phys objects " Chris Wilson
2019-05-20  8:01 ` [PATCH 16/33] drm/i915: Move mmap and friends " Chris Wilson
2019-05-20  8:01 ` [PATCH 17/33] drm/i915: Move GEM domain management " Chris Wilson
2019-05-20  8:01 ` [PATCH 18/33] drm/i915: Move more GEM objects under gem/ Chris Wilson
2019-05-20 12:58   ` Mika Kuoppala
2019-05-20  8:01 ` [PATCH 19/33] drm/i915: Pull scatterlist utils out of i915_gem.h Chris Wilson
2019-05-20  8:01 ` [PATCH 20/33] drm/i915: Move GEM object domain management from struct_mutex to local Chris Wilson
2019-05-20  8:01 ` [PATCH 21/33] drm/i915: Move GEM object waiting to its own file Chris Wilson
2019-05-20  8:01 ` [PATCH 22/33] drm/i915: Move GEM object busy checking " Chris Wilson
2019-05-20  8:01 ` [PATCH 23/33] drm/i915: Move GEM client throttling " Chris Wilson
2019-05-20  8:01 ` [PATCH 24/33] drm/i915: Drop the deferred active reference Chris Wilson
2019-05-20  8:01 ` [PATCH 25/33] drm/i915: Move object close under its own lock Chris Wilson
2019-05-22 14:32   ` Mika Kuoppala
2019-05-22 14:47     ` Chris Wilson
2019-05-22 14:52     ` Chris Wilson
2019-05-20  8:01 ` [PATCH 26/33] drm/i915: Rename intel_context.active to .inflight Chris Wilson
2019-05-20  8:01 ` [PATCH 27/33] drm/i915: Keep contexts pinned until after the next kernel context switch Chris Wilson
2019-05-20  8:01 ` [PATCH 28/33] drm/i915: Stop retiring along engine Chris Wilson
2019-05-20  8:01 ` [PATCH 29/33] drm/i915: Replace engine->timeline with a plain list Chris Wilson
2019-05-20  8:01 ` Chris Wilson [this message]
2019-05-20  8:01 ` [PATCH 31/33] drm/i915/execlists: Preempt-to-busy Chris Wilson
2019-05-20  8:01 ` [PATCH 32/33] drm/i915/execlists: Minimalistic timeslicing Chris Wilson
2019-05-20  8:01 ` [PATCH 33/33] drm/i915/execlists: Force preemption Chris Wilson
2019-05-20 13:05 ` ✗ Fi.CI.BAT: failure for series starting with [01/33] drm/i915: Restore control over ppgtt for context creation ABI Patchwork
