From: "Teres Alexis, Alan Previn" <alan.previn.teres.alexis@intel.com>
To: "tvrtko.ursulin@linux.intel.com" <tvrtko.ursulin@linux.intel.com>,
	"intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>
Cc: "Teres Alexis, Alan Previn" <alan.previn.teres.alexis@intel.com>
Subject: Re: [Intel-gfx] [PATCH 1/1] drm/i915/guc: Delay disabling guc_id scheduling for better hysteresis
Date: Fri, 16 Sep 2022 07:53:52 +0000	[thread overview]
Message-ID: <2808b0f67797543e453e74b4e156df4a5cdd8656.camel@intel.com> (raw)
In-Reply-To: <5aec4a0d-e99b-011d-68a9-84ad1f1120bf@linux.intel.com>


On Thu, 2022-09-15 at 09:48 +0100, Tvrtko Ursulin wrote:
> On 15/09/2022 03:12, Alan Previn wrote:
> > From: Matthew Brost <matthew.brost@intel.com>
> > 
> > Add a delay, configurable via debugfs (default 34ms), to disable
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > 

> > +		u16 guc_ids_in_use;
> 
> Any specific reason to use u16? It can usually just result in larger 
> code generated and I don't see any space saving needed or achieved when 
> it is sandwiched between two struct list_heads.
> 
No specific reason - will switch to u32.

> > +		u64 sched_disable_delay_ms;
> 
> 64-bits for the delay then sounds like overkill. Both should IMO just be 
> unsigned ints.
> 
It was avoiding some typecasting in related functions that reference this, but that's
not a good excuse, so I will fix it.


> > +		int sched_disable_gucid_threshold;
> 
> unsigned int as well, so reader does not have to think about:
>   return guc->submission_state.guc_ids_in_use >
> 	guc->submission_state.sched_disable_gucid_threshold;
> 
> further down.
> 
Yes, agreed - will fix.


> > +static void __delay_sched_disable(struct work_struct *wrk)
> > +{
> > +	struct intel_context *ce =
> > +		container_of(wrk, typeof(*ce), guc_state.sched_disable_delay.work);
> > +	struct intel_guc *guc = ce_to_guc(ce);
> > +	unsigned long flags;
> > +
> >   	spin_lock_irqsave(&ce->guc_state.lock, flags);
> >   
> > +	if (bypass_sched_disable(guc, ce)) {
> > +		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> > +		intel_context_sched_disable_unpin(ce);
> > +	} else {
> > +		do_sched_disable(guc, ce, flags);
> > +	}
> 
> lock
> if
>    unlock
>    do stuff
> else
>    do_sched_disable - which unlocks inside
> 
> Now move to next block..
> 
> > +}
> > +
> > +static bool guc_id_pressure(struct intel_guc *guc, struct intel_context *ce)
> > +{
> >   	/*
> > -	 * We have to check if the context has been disabled by another thread,
> > -	 * check if submssion has been disabled to seal a race with reset and
> > -	 * finally check if any more requests have been committed to the
> > -	 * context ensursing that a request doesn't slip through the
> > -	 * 'context_pending_disable' fence.
> > +	 * parent contexts are perma-pinned, if we are unpinning do schedule
> > +	 * disable immediately.
> >   	 */
> > -	if (unlikely(!context_enabled(ce) || submission_disabled(guc) ||
> > -		     context_has_committed_requests(ce))) {
> > -		clr_context_enabled(ce);
> > +	if (intel_context_is_parent(ce))
> > +		return true;
> > +
> > +	/*
> > +	 * If we are beyond the threshold for avail guc_ids, do schedule disable immediately.
> > +	 */
> > +	return guc->submission_state.guc_ids_in_use >
> > +		guc->submission_state.sched_disable_gucid_threshold;
> > +}
> > +
> > +static void guc_context_sched_disable(struct intel_context *ce)
> > +{
> > +	struct intel_guc *guc = ce_to_guc(ce);
> > +	u64 delay = guc->submission_state.sched_disable_delay_ms;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&ce->guc_state.lock, flags);
> > +
> > +	if (bypass_sched_disable(guc, ce)) {
> > +		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> > +		intel_context_sched_disable_unpin(ce);
> > +	} else if (!intel_context_is_closed(ce) && !guc_id_pressure(guc, ce) &&
> > +		   delay) {
> >   		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> > -		goto unpin;
> > +		mod_delayed_work(system_unbound_wq,
> > +				 &ce->guc_state.sched_disable_delay,
> > +				 msecs_to_jiffies(delay));
> > +	} else {
> > +		do_sched_disable(guc, ce, flags);
> >   	}
> 
> lock
> if
>    unlock
>    do stuff
> else if
>    unlock
>    do stuff
> else
>    do_sched_disable - which unlocks inside
> 
> IMO it creates less readable code for the benefit of not repeating 
> with_intel_runtime_pm -> __guc_context_sched_disable two times. Dunno.. 
> it's ugly but I have no suggestions. Hm does it have to send using the 
> busy loop? It couldn't just queue the request and then wait for reply if 
> disable message was emitted?
> 
I agree that the above code lacks readability - I will see if I can break it down into
smaller functions with cleaner in-function lock/unlock pairs. I also agree that a little
code duplication is better than less readable code. It was inherited code I didn't want
to modify, but I'll go ahead and refactor it.
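
Very roughly, what I have in mind is something like the below - a sketch only, so the
exact split and the locals are placeholders rather than the final code:

static void guc_context_sched_disable(struct intel_context *ce)
{
	struct intel_guc *guc = ce_to_guc(ce);
	struct intel_runtime_pm *runtime_pm = &guc_to_gt(guc)->i915->runtime_pm;
	u64 delay = guc->submission_state.sched_disable_delay_ms;
	intel_wakeref_t wakeref;
	unsigned long flags;
	bool bypass, defer;
	u16 guc_id;

	spin_lock_irqsave(&ce->guc_state.lock, flags);

	bypass = bypass_sched_disable(guc, ce);
	defer = !bypass && !intel_context_is_closed(ce) &&
		!guc_id_pressure(guc, ce) && delay;

	if (bypass || defer) {
		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
		if (bypass)
			intel_context_sched_disable_unpin(ce);
		else
			mod_delayed_work(system_unbound_wq,
					 &ce->guc_state.sched_disable_delay,
					 msecs_to_jiffies(delay));
		return;
	}

	/*
	 * Immediate disable: prep under the lock, drop the lock in this
	 * same function so the lock/unlock pair stays visible here, then
	 * send the H2G outside the lock. __delay_sched_disable() would
	 * duplicate this tail instead of sharing an unlock-inside helper.
	 */
	guc_id = prep_context_pending_disable(ce);
	spin_unlock_irqrestore(&ce->guc_state.lock, flags);

	with_intel_runtime_pm(runtime_pm, wakeref)
		__guc_context_sched_disable(guc, ce, guc_id);
}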

On the busy loop - I'm assuming you are referring to the actual CT send. I'll check with
my team in case I'm missing something, but based on the comments, I believe callers must
use that function to guarantee reservation of space in the G2H CTB, so there is always
room to capture responses for actions that MUST be acknowledged by the GuC (acknowledged
by replying with either a success or a failure). This is necessary to keep the guc-id
state machine coherent (because the GuC firmware will drop requests for guc-ids that are
not registered or not in a 'sched-enabled' state).
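
To illustrate the constraint only - this is a made-up sketch, not the real i915 CT API;
reserve_g2h_space() and send_h2g() are hypothetical names:

/* Hypothetical sketch of the constraint, not the actual CT functions. */
static int send_action_needing_ack(struct hypothetical_ct *ct,
				   const u32 *action, u32 len, u32 reply_len_dw)
{
	/*
	 * Reserve room for the GuC's reply in the G2H buffer before the
	 * H2G goes out; keep retrying (the busy loop) rather than emit an
	 * action whose acknowledgement could be dropped and leave the
	 * guc-id state machine out of sync.
	 */
	while (!reserve_g2h_space(ct, reply_len_dw))
		cpu_relax();

	return send_h2g(ct, action, len);
}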


> > -	guc_id = prep_context_pending_disable(ce);
> > +}
> >   
> > -	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> > +static void guc_flush_all_delayed_disable_sched_contexts(struct intel_guc *guc)
> > +{
> > +	struct intel_context *ce;
> > +	unsigned long index;
> > +	unsigned long flags;
> > +	unsigned long ceflags;
> >   
> > -	with_intel_runtime_pm(runtime_pm, wakeref)
> > -		__guc_context_sched_disable(guc, ce, guc_id);
> > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > +	xa_for_each(&guc->context_lookup, index, ce) {
> > +		if (!kref_get_unless_zero(&ce->ref))
> > +			continue;
> > +		xa_unlock(&guc->context_lookup);
> 
> So this whole loop _needs_ to run with interrupts disabled? Explaining 
> why in a comment would be good.
> 
Being mid-reset, the locking mode is consistent with the other functions used as part of
reset preparation that walk and potentially modify contexts. I believe the goal is to
handle all of this parsing without conflicting, latent G2H replies that would break the
preparation flow (which herds active contexts into a smaller set of states as part of
reset) - but I will double check with my colleagues.

> > +		if (test_bit(CONTEXT_GUC_INIT, &ce->flags) &&
> > +		    cancel_delayed_work(&ce->guc_state.sched_disable_delay)) {
> > +			spin_lock_irqsave(&ce->guc_state.lock, ceflags);
> > +			spin_unlock_irqrestore(&ce->guc_state.lock, ceflags);
> 
> This deserves a comment about what lock toggling wants to ensure.
> 
I realize this might have been my local rebasing mistake; the intention was to handle
the case where sched_disable_delay wasn't pending but was potentially still executing
do_sched_disable. I believe I could try cancel_delayed_work_sync (though I'm not sure I
can call that might-sleep function mid-reset while not interruptible). Otherwise, I would
move that lock/unlock toggle to the case where cancel_delayed_work did not return true
(as per the original intent before my rebase error).
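
For reference, the originally intended shape was roughly the following - a sketch only,
using the plain spin_lock variants since, as you note below, interrupts are already
disabled in this loop:

if (test_bit(CONTEXT_GUC_INIT, &ce->flags)) {
	if (!cancel_delayed_work(&ce->guc_state.sched_disable_delay)) {
		/*
		 * The delayed work was not pending, so it may currently be
		 * running do_sched_disable(); toggle the context lock so we
		 * do not proceed until it has left its locked section.
		 */
		spin_lock(&ce->guc_state.lock);
		spin_unlock(&ce->guc_state.lock);
	}
}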

> Also, if the loop runs with interrupts disabled, what is the point of the
> irqsave variant in here??
Yes - it's redundant; let me fix that, apologies.

> 
> Also2, what is the reason for dropping the lock? intel_context_put?
To be consistent with other reset preparation code that closes contexts, the lock is
dropped before the intel_context_put.
(I hope I am not misunderstanding your question.)
> 
> > +	/*
> > +	 * If the context gets closed while the execbuf is ongoing, the context
> > +	 * close code will race with the below code to cancel the delayed work.
> > +	 * If the context close wins the race and cancels the work, it will
> > +	 * immediately call the sched disable (see guc_context_close), so there
> > +	 * is a chance we can get past this check while the sched_disable code
> > +	 * is being executed. To make sure that code completes before we check
> > +	 * the status further down, we wait for the close process to complete.
> > +	 */
> > +	if (cancel_delayed_work_sync(&ce->guc_state.sched_disable_delay))
> > +		intel_context_sched_disable_unpin(ce);
> > +	else if (intel_context_is_closed(ce))
> > +		wait_for(context_close_done(ce), 1);
> 
> Comment makes it sound important to handle the race, although it
> doesn't really explain the consequences. But most importantly, what if
> close doesn't complete in 1ms?

I will add the consequence (I believe the consequence is that we could prematurely read
the context flags bit indicating its guc-id is still registered and, after skipping
re-registration, find that the context gets closed and the guc-id freed).

Yes, the 1 ms timeout is arbitrary and unnervingly short. I just spent some time working
through the SCHED_foo state machine bits, and I believe it is possible for
guc_request_alloc to force the context_close to be completed from here, since that would
push the context into a state requiring re-registration, which is handled a few lines
below. I will verify with my teammates, however, as I am new to these SCHED_foo state
machine bits.
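
If we do end up keeping the wait, the minimum I would do is make a timeout loud rather
than silently falling through - something like the sketch below, where the 1 ms bound
and the warning text are placeholders:

if (cancel_delayed_work_sync(&ce->guc_state.sched_disable_delay)) {
	intel_context_sched_disable_unpin(ce);
} else if (intel_context_is_closed(ce)) {
	/* wait_for() returns -ETIMEDOUT if the close does not complete */
	if (wait_for(context_close_done(ce), 1))
		drm_warn(&ce->engine->i915->drm,
			 "timed out waiting for context close before re-registration\n");
}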

> 
> Regards,
> 
> Tvrtko
> 
