From: Matthew Brost <matthew.brost@intel.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	daniel.vetter@ffwll.ch
Subject: Re: [Intel-gfx] [PATCH 19/22] drm/i915/guc: Proper xarray usage for contexts_lookup
Date: Tue, 17 Aug 2021 10:13:52 -0700	[thread overview]
Message-ID: <20210817171352.GA30887@jons-linux-dev-box> (raw)
In-Reply-To: <YRvuPcQFyglVyuMa@phenom.ffwll.local>

On Tue, Aug 17, 2021 at 07:13:33PM +0200, Daniel Vetter wrote:
> On Tue, Aug 17, 2021 at 08:26:28AM -0700, Matthew Brost wrote:
> > On Tue, Aug 17, 2021 at 12:27:29PM +0200, Daniel Vetter wrote:
> > > On Mon, Aug 16, 2021 at 06:51:36AM -0700, Matthew Brost wrote:
> > > > Lock the xarray and take ref to the context if needed.
> > > > 
> > > > v2:
> > > >  (Checkpatch)
> > > >   - Add new line after declaration
> > > > 
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > >  .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 84 ++++++++++++++++---
> > > >  1 file changed, 73 insertions(+), 11 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > index ba19b99173fc..2ecb2f002bed 100644
> > > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > @@ -599,8 +599,18 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc)
> > > >  	unsigned long index, flags;
> > > >  	bool pending_disable, pending_enable, deregister, destroyed, banned;
> > > >  
> > > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > > >  	xa_for_each(&guc->context_lookup, index, ce) {
> > > > -		spin_lock_irqsave(&ce->guc_state.lock, flags);
> > > > +		/*
> > > > +		 * Corner case where the ref count on the object is zero but a
> > > > +		 * deregister G2H was lost. In this case we don't touch the ref
> > > > +		 * count and finish the destroy of the context.
> > > > +		 */
> > > > +		bool do_put = kref_get_unless_zero(&ce->ref);
> > > 
> > > This looks really scary, because in another loop below you have an
> > > unconditional refcount increase. This means sometimes guc->context_lookup
> > 
> > Yea, good catch those loops need something like this too.
> > 
> > > xarray guarantees we hold a full reference on the context, sometimes we
> > > don't. So we're right back in "protect the code" O(N^2) review complexity
> > > instead of invariant rules about the datastructure, which is linear.
> > > 
> > > Essentially anytime you feel like you have to add a comment to explain
> > > what's going on about concurrent stuff you're racing with, you're
> > > protecting code, not data.
> > > 
> > > Since guc can't do a whole lot without the guc_id registered and all that,
> > > I kinda expected you'd always have a full reference here. If there's
> > 
> > The deregister is triggered by the ref count going to zero and we can't
> > fully release the guc_id until that operation completes hence why it is
> > still in the xarray. I think the solution here is to use iterator like
> > you mention below that ref counts this correctly.
> 
> Hm but if the refcount drops to zero while we have a guc_id, how does that
> work? Do we delay the guc_context_destroy until that's done, or is the

Yes, we don't want to release the guc_id and deregister the context with
the GuC until the i915 is done with the context (no refs). We issue the
deregister when the ref count hits zero (done directly now; an upcoming
patch adds a worker for this). When the deregister completes we release
the guc_id, remove the context from the xarray, and destroy it.

> context handed off internally somehow to a worker?
> 
> Afaik intel_context_put is called from all kinds of nasty context, so
> waiting is not an option as-is ...

Right, it definitely can be called from nasty contexts, hence why an
upcoming patch moves this to a worker.

Matt

> -Daniel
> 
> > > intermediate stages (e.g. around unregister) where this is currently not
> > > always the case, then those should make sure a full reference is held.
> > > 
> > > Another option would be to treat ->context_lookup as a weak reference that
> > > we lazily clean up when the context is finalized. That works too, but
> > > probably not with a spinlock (since you most likely have to wait for all
> > > pending guc transactions to complete), but it's another option.
> > > 
> > > Either way I think standard process is needed here for locking design,
> > > i.e.
> > > 1. come up with the right invariants ("we always have a full reference
> > > when a context is on the guc->context_lookup xarray")
> > > 2. come up with the locks. From the guc side the xa_lock is maybe good
> > > enough, but from the context side this doesn't protect against a
> > > re-registering racing against a deregistering. So probably needs more
> > > rules on top, and then you have a nice lock inversion in a few places like
> > > here.
> > > 3. document it and roll it out.
> > > 
> > > The other thing is that this is a very tricky iterator, and there's a few
> > > copies of it. That is, if this is the right solution. As-is this should be
> > > abstracted away into guc_context_iter_begin/next/end() helpers, e.g. like
> > > we have for drm_connector_list_iter_begin/next/end as an example.
> > >
> > 
> > I can check this out.
> > 
> > Matt
> >  
> > > Cheers, Daniel
> > > 
> > > > +
> > > > +		xa_unlock(&guc->context_lookup);
> > > > +
> > > > +		spin_lock(&ce->guc_state.lock);
> > > >  
> > > >  		/*
> > > >  		 * Once we are at this point submission_disabled() is guaranteed
> > > > @@ -616,7 +626,9 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc)
> > > >  		banned = context_banned(ce);
> > > >  		init_sched_state(ce);
> > > >  
> > > > -		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> > > > +		spin_unlock(&ce->guc_state.lock);
> > > > +
> > > > +		GEM_BUG_ON(!do_put && !destroyed);
> > > >  
> > > >  		if (pending_enable || destroyed || deregister) {
> > > >  			atomic_dec(&guc->outstanding_submission_g2h);
> > > > @@ -645,7 +657,12 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc)
> > > >  
> > > >  			intel_context_put(ce);
> > > >  		}
> > > > +
> > > > +		if (do_put)
> > > > +			intel_context_put(ce);
> > > > +		xa_lock(&guc->context_lookup);
> > > >  	}
> > > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > >  }
> > > >  
> > > >  static inline bool
> > > > @@ -866,16 +883,26 @@ void intel_guc_submission_reset(struct intel_guc *guc, bool stalled)
> > > >  {
> > > >  	struct intel_context *ce;
> > > >  	unsigned long index;
> > > > +	unsigned long flags;
> > > >  
> > > >  	if (unlikely(!guc_submission_initialized(guc))) {
> > > >  		/* Reset called during driver load? GuC not yet initialised! */
> > > >  		return;
> > > >  	}
> > > >  
> > > > -	xa_for_each(&guc->context_lookup, index, ce)
> > > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > > > +	xa_for_each(&guc->context_lookup, index, ce) {
> > > > +		intel_context_get(ce);
> > > > +		xa_unlock(&guc->context_lookup);
> > > > +
> > > >  		if (intel_context_is_pinned(ce))
> > > >  			__guc_reset_context(ce, stalled);
> > > >  
> > > > +		intel_context_put(ce);
> > > > +		xa_lock(&guc->context_lookup);
> > > > +	}
> > > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > > +
> > > >  	/* GuC is blown away, drop all references to contexts */
> > > >  	xa_destroy(&guc->context_lookup);
> > > >  }
> > > > @@ -950,11 +977,21 @@ void intel_guc_submission_cancel_requests(struct intel_guc *guc)
> > > >  {
> > > >  	struct intel_context *ce;
> > > >  	unsigned long index;
> > > > +	unsigned long flags;
> > > > +
> > > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > > > +	xa_for_each(&guc->context_lookup, index, ce) {
> > > > +		intel_context_get(ce);
> > > > +		xa_unlock(&guc->context_lookup);
> > > >  
> > > > -	xa_for_each(&guc->context_lookup, index, ce)
> > > >  		if (intel_context_is_pinned(ce))
> > > >  			guc_cancel_context_requests(ce);
> > > >  
> > > > +		intel_context_put(ce);
> > > > +		xa_lock(&guc->context_lookup);
> > > > +	}
> > > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > > +
> > > >  	guc_cancel_sched_engine_requests(guc->sched_engine);
> > > >  
> > > >  	/* GuC is blown away, drop all references to contexts */
> > > > @@ -2848,21 +2885,26 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
> > > >  	struct intel_context *ce;
> > > >  	struct i915_request *rq;
> > > >  	unsigned long index;
> > > > +	unsigned long flags;
> > > >  
> > > >  	/* Reset called during driver load? GuC not yet initialised! */
> > > >  	if (unlikely(!guc_submission_initialized(guc)))
> > > >  		return;
> > > >  
> > > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > > >  	xa_for_each(&guc->context_lookup, index, ce) {
> > > > +		intel_context_get(ce);
> > > > +		xa_unlock(&guc->context_lookup);
> > > > +
> > > >  		if (!intel_context_is_pinned(ce))
> > > > -			continue;
> > > > +			goto next;
> > > >  
> > > >  		if (intel_engine_is_virtual(ce->engine)) {
> > > >  			if (!(ce->engine->mask & engine->mask))
> > > > -				continue;
> > > > +				goto next;
> > > >  		} else {
> > > >  			if (ce->engine != engine)
> > > > -				continue;
> > > > +				goto next;
> > > >  		}
> > > >  
> > > >  		list_for_each_entry(rq, &ce->guc_active.requests, sched.link) {
> > > > @@ -2872,9 +2914,17 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
> > > >  			intel_engine_set_hung_context(engine, ce);
> > > >  
> > > >  			/* Can only cope with one hang at a time... */
> > > > -			return;
> > > > +			intel_context_put(ce);
> > > > +			xa_lock(&guc->context_lookup);
> > > > +			goto done;
> > > >  		}
> > > > +next:
> > > > +		intel_context_put(ce);
> > > > +		xa_lock(&guc->context_lookup);
> > > > +
> > > >  	}
> > > > +done:
> > > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > >  }
> > > >  
> > > >  void intel_guc_dump_active_requests(struct intel_engine_cs *engine,
> > > > @@ -2890,23 +2940,32 @@ void intel_guc_dump_active_requests(struct intel_engine_cs *engine,
> > > >  	if (unlikely(!guc_submission_initialized(guc)))
> > > >  		return;
> > > >  
> > > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > > >  	xa_for_each(&guc->context_lookup, index, ce) {
> > > > +		intel_context_get(ce);
> > > > +		xa_unlock(&guc->context_lookup);
> > > > +
> > > >  		if (!intel_context_is_pinned(ce))
> > > > -			continue;
> > > > +			goto next;
> > > >  
> > > >  		if (intel_engine_is_virtual(ce->engine)) {
> > > >  			if (!(ce->engine->mask & engine->mask))
> > > > -				continue;
> > > > +				goto next;
> > > >  		} else {
> > > >  			if (ce->engine != engine)
> > > > -				continue;
> > > > +				goto next;
> > > >  		}
> > > >  
> > > >  		spin_lock_irqsave(&ce->guc_active.lock, flags);
> > > >  		intel_engine_dump_active_requests(&ce->guc_active.requests,
> > > >  						  hung_rq, m);
> > > >  		spin_unlock_irqrestore(&ce->guc_active.lock, flags);
> > > > +
> > > > +next:
> > > > +		intel_context_put(ce);
> > > > +		xa_lock(&guc->context_lookup);
> > > >  	}
> > > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > >  }
> > > >  
> > > >  void intel_guc_submission_print_info(struct intel_guc *guc,
> > > > @@ -2960,7 +3019,9 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc,
> > > >  {
> > > >  	struct intel_context *ce;
> > > >  	unsigned long index;
> > > > +	unsigned long flags;
> > > >  
> > > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > > >  	xa_for_each(&guc->context_lookup, index, ce) {
> > > >  		drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id);
> > > >  		drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca);
> > > > @@ -2979,6 +3040,7 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc,
> > > >  
> > > >  		guc_log_context_priority(p, ce);
> > > >  	}
> > > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > >  }
> > > >  
> > > >  static struct intel_context *
> > > > -- 
> > > > 2.32.0
> > > > 
> > > 
> > > -- 
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

