From: Michal Wajdeczko <michal.wajdeczko@intel.com>
To: Matthew Brost <matthew.brost@intel.com>,
	intel-gfx@lists.freedesktop.org,
	 dri-devel@lists.freedesktop.org
Cc: daniele.ceraolospurio@intel.com, john.c.harrison@intel.com
Subject: Re: [PATCH 1/1] drm/i915/guc: Relax CTB response timeout
Date: Fri, 11 Jun 2021 08:55:49 +0200	[thread overview]
Message-ID: <2576cc2b-ae11-1ae4-127d-7b2b1b51bfb6@intel.com> (raw)
In-Reply-To: <20210611000555.133859-2-matthew.brost@intel.com>



On 11.06.2021 02:05, Matthew Brost wrote:
> In upcoming patch we will allow more CTB requests to be sent in
> parallel to the GuC for processing, so we shouldn't assume any more
> that GuC will always reply without 10ms.

s/without/within

> 
> Use a bigger hardcoded value of 1s instead.
> 
> v2: Add CONFIG_DRM_I915_GUC_CTB_TIMEOUT config option
> v3:
>  (Daniel Vetter)
>   - Use hardcoded value of 1s rather than config option

if this is v3 then it's likely still my patch, so I can't give r-b

> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> index 8f7b148fef58..bc626ca0a9eb 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> @@ -475,12 +475,14 @@ static int wait_for_ct_request_update(struct ct_request *req, u32 *status)
>  	/*
>  	 * Fast commands should complete in less than 10us, so sample quickly
>  	 * up to that length of time, then switch to a slower sleep-wait loop.
> -	 * No GuC command should ever take longer than 10ms.
> +	 * No GuC command should ever take longer than 10ms but many GuC
> +	 * commands can be inflight at time, so use a 1s timeout on the slower
> +	 * sleep-wait loop.

this is a 100x increase of the timeout that not only looks nice, but should
also cover ~100 CTB messages (of 10 dwords each) in our current 4K send CT
buffer, since at the documented 10 ms per command that worst case adds up
to ~1 s, so LGTM

Michal

ps. unless in the future we decide to increase that CT buffer size to
something much bigger; maybe we should then derive this timeout from the
number of possible concurrent messages in flight, along the lines of the
rough sketch below? not a blocker
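
A minimal sketch of that idea; all of the names and numbers below are made
up for illustration (the real capacity would come from the CTB descriptor,
and the 10-dword message length is just the rough figure from above), so
read it as a sketch, not driver code:

	#include <linux/types.h>

	#define GUC_CMD_BUDGET_MS	10	/* per-command limit from the comment in the patch */
	#define CTB_TYPICAL_MSG_DWORDS	10	/* assumed typical message length */

	/*
	 * Hypothetical helper: scale the slow-path response timeout with the
	 * number of messages the send buffer can hold, instead of a fixed 1s.
	 */
	static unsigned long ct_response_timeout_ms(u32 send_buf_dwords)
	{
		u32 max_inflight = send_buf_dwords / CTB_TYPICAL_MSG_DWORDS;

		return (unsigned long)max_inflight * GUC_CMD_BUDGET_MS;
	}

For the current 4K buffer that comes out to 1024 / 10 * 10 ms, i.e. roughly
the same 1s this patch hardcodes, but it would keep scaling if we ever grow
the buffer, and wait_for(done, ...) could presumably take that value instead
of the literal 1000.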

>  	 */
>  #define done INTEL_GUC_MSG_IS_RESPONSE(READ_ONCE(req->status))
>  	err = wait_for_us(done, 10);
>  	if (err)
> -		err = wait_for(done, 10);
> +		err = wait_for(done, 1000);
>  #undef done
>  
>  	if (unlikely(err))
> 

Thread overview: 3+ messages
2021-06-11  0:05 [PATCH 0/1] Relax CTB response timeout Matthew Brost
2021-06-11  0:05 ` [PATCH 1/1] drm/i915/guc: " Matthew Brost
2021-06-11  6:55   ` Michal Wajdeczko [this message]
