From: "Michał Winiarski" <michal.winiarski@intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 10/19] drm/i915/execlists: Assert there are no simple cycles in the dependencies
Date: Wed, 3 Jan 2018 12:13:40 +0100	[thread overview]
Message-ID: <20180103111340.3sc5ogabpikdnlwt@mwiniars-main.ger.corp.intel.com> (raw)
In-Reply-To: <20180102151235.3949-10-chris@chris-wilson.co.uk>

On Tue, Jan 02, 2018 at 03:12:26PM +0000, Chris Wilson wrote:
> The dependency chain must be an acyclic graph. This is checked by the
> swfence, but for sanity, also do a simple check that we do not corrupt
> our list iteration in execlists_schedule() by a shallow dependency
> cycle.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>

-Michał

> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 007aec9d95c9..8c9d6cef2482 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1006,7 +1006,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>  	stack.signaler = &request->priotree;
>  	list_add(&stack.dfs_link, &dfs);
>  
> -	/* Recursively bump all dependent priorities to match the new request.
> +	/*
> +	 * Recursively bump all dependent priorities to match the new request.
>  	 *
>  	 * A naive approach would be to use recursion:
>  	 * static void update_priorities(struct i915_priotree *pt, prio) {
> @@ -1026,12 +1027,15 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>  	list_for_each_entry_safe(dep, p, &dfs, dfs_link) {
>  		struct i915_priotree *pt = dep->signaler;
>  
> -		/* Within an engine, there can be no cycle, but we may
> +		/*
> +		 * Within an engine, there can be no cycle, but we may
>  		 * refer to the same dependency chain multiple times
>  		 * (redundant dependencies are not eliminated) and across
>  		 * engines.
>  		 */
>  		list_for_each_entry(p, &pt->signalers_list, signal_link) {
> +			GEM_BUG_ON(p == dep); /* no cycles! */
> +
>  			if (i915_gem_request_completed(priotree_to_request(p->signaler)))
>  				continue;
>  
> @@ -1043,7 +1047,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>  		list_safe_reset_next(dep, p, dfs_link);
>  	}
>  
> -	/* If we didn't need to bump any existing priorities, and we haven't
> +	/*
> +	 * If we didn't need to bump any existing priorities, and we haven't
>  	 * yet submitted this request (i.e. there is no potential race with
>  	 * execlists_submit_request()), we can set our own priority and skip
>  	 * acquiring the engine locks.
> -- 
> 2.15.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
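The patch's comment contrasts naive recursion with the explicit worklist walk that execlists_schedule() actually uses, and the new GEM_BUG_ON asserts that no node lists itself as its own signaler (a shallow cycle would corrupt the walk). A minimal userspace sketch of that scheme, with a hypothetical `struct node` standing in for i915_priotree and plain arrays instead of kernel lists, might look like:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DEPS 4

/* Hypothetical, simplified stand-in for i915_priotree: each node lists
 * the nodes that signal it (its dependencies) and carries a priority. */
struct node {
	int priority;
	int nsignalers;
	struct node *signalers[MAX_DEPS];
	struct node *next;	/* intrusive worklist link, like dfs_link */
};

/* Bump every transitive signaler of `request` to at least `prio` using
 * an explicit worklist rather than recursion (which could overflow a
 * small stack on deep dependency chains).  A node appearing in its own
 * signalers list would corrupt the walk, so assert against that shallow
 * cycle -- the analogue of the patch's GEM_BUG_ON(p == dep). */
void schedule_prio(struct node *request, int prio)
{
	struct node *tail = request, *dep;

	request->next = NULL;
	if (request->priority < prio)
		request->priority = prio;

	for (dep = request; dep; dep = dep->next) {
		for (int i = 0; i < dep->nsignalers; i++) {
			struct node *s = dep->signalers[i];

			assert(s != dep); /* no simple cycles! */

			if (s->priority < prio) {
				s->priority = prio; /* doubles as visited mark */
				s->next = NULL;
				tail->next = s;
				tail = s;
			}
		}
	}
}
```

Because a node's priority is raised the moment it is appended, the `s->priority < prio` test also prevents re-queueing nodes reached through redundant dependencies, mirroring how the real code tolerates duplicate edges across engines.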
