From: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
To: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 01/18] drm/i915/guc: Use a local cancel_port_requests
Date: Mon, 19 Aug 2019 12:06:08 +0200	[thread overview]
Message-ID: <2962209.gCHcVtk5aB@jkrzyszt-desk.ger.corp.intel.com> (raw)
In-Reply-To: <bd7578ee-ca93-00ef-91ea-73de9affbe00@intel.com>

On Monday, August 12, 2019 10:23:29 PM CEST Daniele Ceraolo Spurio wrote:
> 
> On 8/12/19 6:38 AM, Chris Wilson wrote:
> > Since execlists and the guc have diverged in their port tracking, we
> > cannot simply reuse the execlists cancellation code as it leads to
> > unbalanced reference counting. Use a local simpler routine for the guc.
> > 
> 
> LGTM, just wondering if we should put a comment somewhere to note that 
> ce->inflight and execlists->pending are currently unused in the GuC 
> path, so we don't incorrectly assume they're always set at certain 
> stages of the submission. But anyway:
> 
> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> 
> Janusz, does this work for you?

Yes, since the patch contains my original fix for the unbalanced reference, 
it still works for me (it resolves the kernel panics I observed).

As for the other modifications included, I haven't had time to look into them 
yet, so I don't feel competent to comment on those.  I don't know how to test 
them; I can only confirm I haven't noticed any unexpected behavior.

Thanks,
Janusz

PS. Sorry for the late answer, I had no internet access for the last few days.

> 
> Thanks,
> Daniele
> 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gt/intel_engine.h        |  3 ---
> >   drivers/gpu/drm/i915/gt/intel_lrc.c           |  6 ++---
> >   .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 23 +++++++++++--------
> >   3 files changed, 16 insertions(+), 16 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> > index e1228b0e577f..4b6a1cf80706 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> > +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> > @@ -136,9 +136,6 @@ execlists_active(const struct intel_engine_execlists *execlists)
> >   	return READ_ONCE(*execlists->active);
> >   }
> >   
> > -void
> > -execlists_cancel_port_requests(struct intel_engine_execlists * const execlists);
> > -
> >   struct i915_request *
> >   execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists);
> >   
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index b97047d58d3d..5c26c4ae139b 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -1297,8 +1297,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >   	}
> >   }
> >   
> > -void
> > -execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
> > +static void
> > +cancel_port_requests(struct intel_engine_execlists * const execlists)
> >   {
> >   	struct i915_request * const *port, *rq;
> >   
> > @@ -2355,7 +2355,7 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
> >   
> >   unwind:
> >   	/* Push back any incomplete requests for replay after the reset. */
> > -	execlists_cancel_port_requests(execlists);
> > +	cancel_port_requests(execlists);
> >   	__unwind_incomplete_requests(engine);
> >   }
> >   
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index 449ca6357018..a37afc6266ec 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -517,11 +517,7 @@ static struct i915_request *schedule_in(struct i915_request *rq, int idx)
> >   {
> >   	trace_i915_request_in(rq, idx);
> >   
> > -	if (!rq->hw_context->inflight)
> > -		rq->hw_context->inflight = rq->engine;
> > -	intel_context_inflight_inc(rq->hw_context);
> >   	intel_gt_pm_get(rq->engine->gt);
> > -
> >   	return i915_request_get(rq);
> >   }
> >   
> > @@ -529,10 +525,6 @@ static void schedule_out(struct i915_request *rq)
> >   {
> >   	trace_i915_request_out(rq);
> >   
> > -	intel_context_inflight_dec(rq->hw_context);
> > -	if (!intel_context_inflight_count(rq->hw_context))
> > -		rq->hw_context->inflight = NULL;
> > -
> >   	intel_gt_pm_put(rq->engine->gt);
> >   	i915_request_put(rq);
> >   }
> > @@ -636,6 +628,17 @@ static void guc_reset_prepare(struct intel_engine_cs *engine)
> >   	__tasklet_disable_sync_once(&execlists->tasklet);
> >   }
> >   
> > +static void
> > +cancel_port_requests(struct intel_engine_execlists * const execlists)
> > +{
> > +	struct i915_request * const *port, *rq;
> > +
> > +	for (port = execlists->active; (rq = *port); port++)
> > +		schedule_out(rq);
> > +	execlists->active =
> > +		memset(execlists->inflight, 0, sizeof(execlists->inflight));
> > +}
> > +
> >   static void guc_reset(struct intel_engine_cs *engine, bool stalled)
> >   {
> >   	struct intel_engine_execlists * const execlists = &engine->execlists;
> > @@ -644,7 +647,7 @@ static void guc_reset(struct intel_engine_cs *engine, bool stalled)
> >   
> >   	spin_lock_irqsave(&engine->active.lock, flags);
> >   
> > -	execlists_cancel_port_requests(execlists);
> > +	cancel_port_requests(execlists);
> >   
> >   	/* Push back any incomplete requests for replay after the reset. */
> >   	rq = execlists_unwind_incomplete_requests(execlists);
> > @@ -687,7 +690,7 @@ static void guc_cancel_requests(struct intel_engine_cs *engine)
> >   	spin_lock_irqsave(&engine->active.lock, flags);
> >   
> >   	/* Cancel the requests on the HW and clear the ELSP tracker. */
> > -	execlists_cancel_port_requests(execlists);
> > +	cancel_port_requests(execlists);
> >   
> >   	/* Mark all executing requests as skipped. */
> >   	list_for_each_entry(rq, &engine->active.requests, sched.link) {
> > 
> 




_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 27+ messages
2019-08-12 13:38 [PATCH 01/18] drm/i915/guc: Use a local cancel_port_requests Chris Wilson
2019-08-12 13:38 ` [PATCH 02/18] drm/i915: Push the wakeref->count deferral to the backend Chris Wilson
2019-08-12 13:39 ` [PATCH 03/18] drm/i915/gt: Save/restore interrupts around breadcrumb disable Chris Wilson
2019-08-12 13:39 ` [PATCH 04/18] drm/i915/execlists: Lift process_csb() out of the irq-off spinlock Chris Wilson
2019-08-12 13:39 ` [PATCH 05/18] drm/i915/gt: Track timeline activeness in enter/exit Chris Wilson
2019-08-12 13:39 ` [PATCH 06/18] drm/i915/gt: Convert timeline tracking to spinlock Chris Wilson
2019-08-12 13:39 ` [PATCH 07/18] drm/i915/gt: Guard timeline pinning with its own mutex Chris Wilson
2019-08-12 13:39 ` [PATCH 08/18] drm/i915: Protect request retirement with timeline->mutex Chris Wilson
2019-08-12 13:39 ` [PATCH 09/18] drm/i915/gt: Mark context->active_count as protected by timeline->mutex Chris Wilson
2019-08-12 13:39 ` [PATCH 10/18] drm/i915: Forgo last_fence active request tracking Chris Wilson
2019-08-12 13:39 ` [PATCH 11/18] drm/i915/overlay: Switch to using i915_active tracking Chris Wilson
2019-08-12 15:38   ` Matthew Auld
2019-08-12 13:39 ` [PATCH 12/18] drm/i915: Extract intel_frontbuffer active tracking Chris Wilson
2019-08-12 13:39 ` [PATCH 13/18] drm/i915: Markup expected timeline locks for i915_active Chris Wilson
2019-08-12 13:39 ` [PATCH 14/18] drm/i915: Remove logical HW ID Chris Wilson
2019-08-12 13:39 ` [PATCH 15/18] drm/i915: Move context management under GEM Chris Wilson
2019-08-12 13:39 ` [PATCH 16/18] drm/i915/pmu: Use GT parked for estimating RC6 while asleep Chris Wilson
2019-08-12 13:39 ` [PATCH 17/18] drm/i915: Drop GEM context as a direct link from i915_request Chris Wilson
2019-08-12 13:39 ` [PATCH 18/18] drm/i915: Push the use-semaphore marker onto the intel_context Chris Wilson
2019-08-12 14:11 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/18] drm/i915/guc: Use a local cancel_port_requests Patchwork
2019-08-12 14:21 ` ✗ Fi.CI.SPARSE: " Patchwork
2019-08-12 14:31 ` ✓ Fi.CI.BAT: success " Patchwork
2019-08-12 20:23 ` [PATCH 01/18] " Daniele Ceraolo Spurio
2019-08-12 20:36   ` [PATCH] " Chris Wilson
2019-08-19 10:06   ` Janusz Krzysztofik [this message]
2019-08-12 20:59 ` ✗ Fi.CI.IGT: failure for series starting with [01/18] " Patchwork
2019-08-12 22:00 ` ✗ Fi.CI.BAT: failure for series starting with drm/i915/guc: Use a local cancel_port_requests (rev2) Patchwork
