* [PATCH] drm/i915/execlists: Tweak virtual unsubmission
@ 2019-10-13 20:30 Chris Wilson
  2019-10-13 20:37 ` ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Chris Wilson @ 2019-10-13 20:30 UTC (permalink / raw)
  To: intel-gfx

Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
overtaking each other on preemption") we have restricted requests to run
on their chosen engine across preemption events. We can take this
restriction into account to know that we will want to resubmit those
requests onto the same physical engine, and so can shortcircuit the
virtual engine selection process and keep the request on the same
engine during unwind.
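
As an illustrative aside (not part of the diff below), the unwind
decision then reduces to a single mask comparison; the helper and its
name here are hypothetical:

	/*
	 * A physical engine's ->mask is a single bit, while a request that
	 * may still migrate between the siblings of a virtual engine
	 * carries a wider execution_mask. Equality therefore means "this
	 * request can only ever run right here", so unwind can skip the
	 * virtual engine selection entirely.
	 */
	static inline bool rq_pinned_here(const struct i915_request *rq,
					  const struct intel_engine_cs *engine)
	{
		return rq->execution_mask == engine->mask;
	}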

References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
 drivers/gpu/drm/i915/i915_request.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index e6bf633b48d5..03732e3f5ec7 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 	list_for_each_entry_safe_reverse(rq, rn,
 					 &engine->active.requests,
 					 sched.link) {
-		struct intel_engine_cs *owner;
 
 		if (i915_request_completed(rq))
 			continue; /* XXX */
@@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 		 * engine so that it can be moved across onto another physical
 		 * engine as load dictates.
 		 */
-		owner = rq->hw_context->engine;
-		if (likely(owner == engine)) {
+		if (likely(rq->execution_mask == engine->mask)) {
 			GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
 			if (rq_prio(rq) != prio) {
 				prio = rq_prio(rq);
@@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 			list_move(&rq->sched.link, pl);
 			active = rq;
 		} else {
+			struct intel_engine_cs *owner = rq->hw_context->engine;
+
 			/*
 			 * Decouple the virtual breadcrumb before moving it
 			 * back to the virtual engine -- we don't want the
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 437f9fc6282e..b8a54572a4f8 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -649,6 +649,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->gem_context = ce->gem_context;
 	rq->engine = ce->engine;
 	rq->ring = ce->ring;
+	rq->execution_mask = ce->engine->mask;
 
 	rcu_assign_pointer(rq->timeline, tl);
 	rq->hwsp_seqno = tl->hwsp_seqno;
@@ -671,7 +672,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->batch = NULL;
 	rq->capture_list = NULL;
 	rq->flags = 0;
-	rq->execution_mask = ALL_ENGINES;
 
 	INIT_LIST_HEAD(&rq->execute_cb);
 
-- 
2.23.0

* ✗ Fi.CI.CHECKPATCH: warning for drm/i915/execlists: Tweak virtual unsubmission
  2019-10-13 20:30 [PATCH] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
@ 2019-10-13 20:37 ` Patchwork
  2019-10-14  9:28 ` [PATCH] " Ramalingam C
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Patchwork @ 2019-10-13 20:37 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/execlists: Tweak virtual unsubmission
URL   : https://patchwork.freedesktop.org/series/67958/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
3b2b0acd537f drm/i915/execlists: Tweak virtual unsubmission
-:14: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#14: 
References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")

-:14: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")'
#14: 
References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")

total: 1 errors, 1 warnings, 0 checks, 38 lines checked

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-13 20:30 [PATCH] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
  2019-10-13 20:37 ` ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
@ 2019-10-14  9:28 ` Ramalingam C
  2019-10-14  9:45   ` Chris Wilson
  2019-10-14  9:34 ` Tvrtko Ursulin
  2019-10-14 16:14 ` ✗ Fi.CI.BUILD: failure for " Patchwork
  3 siblings, 1 reply; 11+ messages in thread
From: Ramalingam C @ 2019-10-14  9:28 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

On 2019-10-13 at 21:30:12 +0100, Chris Wilson wrote:
> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
> overtaking each other on preemption") we have restricted requests to run
> on their chosen engine across preemption events. We can take this
> restriction into account to know that we will want to resubmit those
> requests onto the same physical engine, and so can shortcircuit the
> virtual engine selection process and keep the request on the same
> engine during unwind.
> 
> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
Chris,

Based on what I understood here, the change looks good to me.

If it helps, please use
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
>  drivers/gpu/drm/i915/i915_request.c | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index e6bf633b48d5..03732e3f5ec7 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>  	list_for_each_entry_safe_reverse(rq, rn,
>  					 &engine->active.requests,
>  					 sched.link) {
> -		struct intel_engine_cs *owner;
>  
>  		if (i915_request_completed(rq))
>  			continue; /* XXX */
> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>  		 * engine so that it can be moved across onto another physical
>  		 * engine as load dictates.
>  		 */
> -		owner = rq->hw_context->engine;
> -		if (likely(owner == engine)) {
> +		if (likely(rq->execution_mask == engine->mask)) {
>  			GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
>  			if (rq_prio(rq) != prio) {
>  				prio = rq_prio(rq);
> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>  			list_move(&rq->sched.link, pl);
>  			active = rq;
>  		} else {
> +			struct intel_engine_cs *owner = rq->hw_context->engine;
> +
>  			/*
>  			 * Decouple the virtual breadcrumb before moving it
>  			 * back to the virtual engine -- we don't want the
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 437f9fc6282e..b8a54572a4f8 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -649,6 +649,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>  	rq->gem_context = ce->gem_context;
>  	rq->engine = ce->engine;
>  	rq->ring = ce->ring;
> +	rq->execution_mask = ce->engine->mask;
>  
>  	rcu_assign_pointer(rq->timeline, tl);
>  	rq->hwsp_seqno = tl->hwsp_seqno;
> @@ -671,7 +672,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>  	rq->batch = NULL;
>  	rq->capture_list = NULL;
>  	rq->flags = 0;
> -	rq->execution_mask = ALL_ENGINES;
>  
>  	INIT_LIST_HEAD(&rq->execute_cb);
>  
> -- 
> 2.23.0
> 

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-13 20:30 [PATCH] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
  2019-10-13 20:37 ` ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
  2019-10-14  9:28 ` [PATCH] " Ramalingam C
@ 2019-10-14  9:34 ` Tvrtko Ursulin
  2019-10-14  9:41   ` Chris Wilson
  2019-10-14  9:42   ` Chris Wilson
  2019-10-14 16:14 ` ✗ Fi.CI.BUILD: failure for " Patchwork
  3 siblings, 2 replies; 11+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14  9:34 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 13/10/2019 21:30, Chris Wilson wrote:
> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
> overtaking each other on preemption") we have restricted requests to run
> on their chosen engine across preemption events. We can take this
> restriction into account to know that we will want to resubmit those
> requests onto the same physical engine, and so can shortcircuit the
> virtual engine selection process and keep the request on the same
> engine during unwind.
> 
> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
>   drivers/gpu/drm/i915/i915_request.c | 2 +-
>   2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index e6bf633b48d5..03732e3f5ec7 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>   	list_for_each_entry_safe_reverse(rq, rn,
>   					 &engine->active.requests,
>   					 sched.link) {
> -		struct intel_engine_cs *owner;
>   
>   		if (i915_request_completed(rq))
>   			continue; /* XXX */
> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>   		 * engine so that it can be moved across onto another physical
>   		 * engine as load dictates.
>   		 */
> -		owner = rq->hw_context->engine;
> -		if (likely(owner == engine)) {
> +		if (likely(rq->execution_mask == engine->mask)) {
>   			GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
>   			if (rq_prio(rq) != prio) {
>   				prio = rq_prio(rq);
> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>   			list_move(&rq->sched.link, pl);
>   			active = rq;
>   		} else {
> +			struct intel_engine_cs *owner = rq->hw_context->engine;

I guess there is some benefit in doing fewer operations as long as we 
are fixing the engine anyway (at the moment at least).

However on this branch here the concern was request completion racing 
with preemption handling and with this change the breadcrumb will not 
get canceled any longer and may get signaled on the virtual engine. 
Which then leads to the explosion this branch fixed. At least that's 
what I remembered in combination with the comment below..

Regards,

Tvrtko

> +
>   			/*
>   			 * Decouple the virtual breadcrumb before moving it
>   			 * back to the virtual engine -- we don't want the
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 437f9fc6282e..b8a54572a4f8 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -649,6 +649,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>   	rq->gem_context = ce->gem_context;
>   	rq->engine = ce->engine;
>   	rq->ring = ce->ring;
> +	rq->execution_mask = ce->engine->mask;
>   
>   	rcu_assign_pointer(rq->timeline, tl);
>   	rq->hwsp_seqno = tl->hwsp_seqno;
> @@ -671,7 +672,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>   	rq->batch = NULL;
>   	rq->capture_list = NULL;
>   	rq->flags = 0;
> -	rq->execution_mask = ALL_ENGINES;
>   
>   	INIT_LIST_HEAD(&rq->execute_cb);
>   
> 

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:34 ` Tvrtko Ursulin
@ 2019-10-14  9:41   ` Chris Wilson
  2019-10-14  9:50     ` Tvrtko Ursulin
  2019-10-14  9:42   ` Chris Wilson
  1 sibling, 1 reply; 11+ messages in thread
From: Chris Wilson @ 2019-10-14  9:41 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
> 
> On 13/10/2019 21:30, Chris Wilson wrote:
> > Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
> > overtaking each other on preemption") we have restricted requests to run
> > on their chosen engine across preemption events. We can take this
> > restriction into account to know that we will want to resubmit those
> > requests onto the same physical engine, and so can shortcircuit the
> > virtual engine selection process and keep the request on the same
> > engine during unwind.
> > 
> > References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
> >   drivers/gpu/drm/i915/i915_request.c | 2 +-
> >   2 files changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index e6bf633b48d5..03732e3f5ec7 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >       list_for_each_entry_safe_reverse(rq, rn,
> >                                        &engine->active.requests,
> >                                        sched.link) {
> > -             struct intel_engine_cs *owner;
> >   
> >               if (i915_request_completed(rq))
> >                       continue; /* XXX */
> > @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >                * engine so that it can be moved across onto another physical
> >                * engine as load dictates.
> >                */
> > -             owner = rq->hw_context->engine;
> > -             if (likely(owner == engine)) {
> > +             if (likely(rq->execution_mask == engine->mask)) {
> >                       GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
> >                       if (rq_prio(rq) != prio) {
> >                               prio = rq_prio(rq);
> > @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >                       list_move(&rq->sched.link, pl);
> >                       active = rq;
> >               } else {
> > +                     struct intel_engine_cs *owner = rq->hw_context->engine;
> 
> I guess there is some benefit in doing fewer operations as long as we 
> are fixing the engine anyway (at the moment at least).
> 
> However on this branch here the concern was request completion racing 
> with preemption handling and with this change the breadcrumb will not 
> get canceled any longer and may get signaled on the virtual engine. 
> Which then leads to the explosion this branch fixed. At least that's 
> what I remembered in combination with the comment below..

No, we don't change back to the virtual engine, so that is not an issue.
The problem was only because of the rq->engine = owner where the
breadcrumbs were still on the previous engine lists and assumed to be
under that engine->breadcrumbs.lock (but would in future be assumed to be
under rq->engine->breadcrumbs.lock).
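
A minimal sketch of that ordering hazard, with the lock and field names
treated as hypothetical rather than the actual i915 ones:

	/* Hypothetical sketch, not a quote of the driver. */
	spin_lock(&old_engine_breadcrumbs_lock);
	/* rq's signaling entry still sits on old_engine's list ... */
	rq->engine = owner;	/* ... but rq now points at the virtual owner */
	spin_unlock(&old_engine_breadcrumbs_lock);

	/*
	 * Anyone who later derives the lock from rq->engine would take the
	 * owner's breadcrumbs lock while the entry is still on the old
	 * engine's list, which is why the breadcrumb has to be decoupled
	 * before rq->engine is switched back to the virtual engine.
	 */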
-Chris

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:34 ` Tvrtko Ursulin
  2019-10-14  9:41   ` Chris Wilson
@ 2019-10-14  9:42   ` Chris Wilson
  1 sibling, 0 replies; 11+ messages in thread
From: Chris Wilson @ 2019-10-14  9:42 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
> 
> On 13/10/2019 21:30, Chris Wilson wrote:
> > Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
> > overtaking each other on preemption") we have restricted requests to run
> > on their chosen engine across preemption events. We can take this
> > restriction into account to know that we will want to resubmit those
> > requests onto the same physical engine, and so can shortcircuit the
> > virtual engine selection process and keep the request on the same
> > engine during unwind.
> > 
> > References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
> >   drivers/gpu/drm/i915/i915_request.c | 2 +-
> >   2 files changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index e6bf633b48d5..03732e3f5ec7 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >       list_for_each_entry_safe_reverse(rq, rn,
> >                                        &engine->active.requests,
> >                                        sched.link) {
> > -             struct intel_engine_cs *owner;
> >   
> >               if (i915_request_completed(rq))
> >                       continue; /* XXX */
> > @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >                * engine so that it can be moved across onto another physical
> >                * engine as load dictates.
> >                */
> > -             owner = rq->hw_context->engine;
> > -             if (likely(owner == engine)) {
> > +             if (likely(rq->execution_mask == engine->mask)) {
> >                       GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
> >                       if (rq_prio(rq) != prio) {
> >                               prio = rq_prio(rq);
> > @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >                       list_move(&rq->sched.link, pl);
> >                       active = rq;
> >               } else {
> > +                     struct intel_engine_cs *owner = rq->hw_context->engine;
> 
> I guess there is some benefit in doing fewer operations as long as we 
> are fixing the engine anyway (at the moment at least).

It also added a bit of consistency to how we detect a virtual engine
during i915_request construction (e.g. __i915_request_add_to_timeline);
I liked that.
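
For instance, something along these lines becomes possible once every
request starts life with a single-engine execution_mask (illustrative
only, not a quote of __i915_request_add_to_timeline):

	/*
	 * Illustrative: a request that is still eligible for more than one
	 * engine (i.e. owned by a virtual engine) has more than one bit set
	 * in its execution_mask.
	 */
	static inline bool rq_uses_virtual_engine(const struct i915_request *rq)
	{
		return !is_power_of_2(rq->execution_mask);
	}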
-Chris

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:28 ` [PATCH] " Ramalingam C
@ 2019-10-14  9:45   ` Chris Wilson
  0 siblings, 0 replies; 11+ messages in thread
From: Chris Wilson @ 2019-10-14  9:45 UTC (permalink / raw)
  To: Ramalingam C; +Cc: intel-gfx

Quoting Ramalingam C (2019-10-14 10:28:18)
> On 2019-10-13 at 21:30:12 +0100, Chris Wilson wrote:
> > Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
> > overtaking each other on preemption") we have restricted requests to run
> > on their chosen engine across preemption events. We can take this
> > restriction into account to know that we will want to resubmit those
> > requests onto the same physical engine, and so can shortcircuit the
> > virtual engine selection process and keep the request on the same
> > engine during unwind.
> > 
> > References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
> Chris,
> 
> Based on what I understood here, the change looks good to me.
> 
> If it helps, please use
> Reviewed-by: Ramalingam C <ramalingam.c@intel.com>

Welcome!
-Chris

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:41   ` Chris Wilson
@ 2019-10-14  9:50     ` Tvrtko Ursulin
  2019-10-14  9:59       ` Chris Wilson
  0 siblings, 1 reply; 11+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14  9:50 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:41, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
>>
>> On 13/10/2019 21:30, Chris Wilson wrote:
>>> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
>>> overtaking each other on preemption") we have restricted requests to run
>>> on their chosen engine across preemption events. We can take this
>>> restriction into account to know that we will want to resubmit those
>>> requests onto the same physical engine, and so can shortcircuit the
>>> virtual engine selection process and keep the request on the same
>>> engine during unwind.
>>>
>>> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> ---
>>>    drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
>>>    drivers/gpu/drm/i915/i915_request.c | 2 +-
>>>    2 files changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> index e6bf633b48d5..03732e3f5ec7 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>        list_for_each_entry_safe_reverse(rq, rn,
>>>                                         &engine->active.requests,
>>>                                         sched.link) {
>>> -             struct intel_engine_cs *owner;
>>>    
>>>                if (i915_request_completed(rq))
>>>                        continue; /* XXX */
>>> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>                 * engine so that it can be moved across onto another physical
>>>                 * engine as load dictates.
>>>                 */
>>> -             owner = rq->hw_context->engine;
>>> -             if (likely(owner == engine)) {
>>> +             if (likely(rq->execution_mask == engine->mask)) {
>>>                        GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
>>>                        if (rq_prio(rq) != prio) {
>>>                                prio = rq_prio(rq);
>>> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>                        list_move(&rq->sched.link, pl);
>>>                        active = rq;
>>>                } else {
>>> +                     struct intel_engine_cs *owner = rq->hw_context->engine;
>>
>> I guess there is some benefit in doing fewer operations as long as we
>> are fixing the engine anyway (at the moment at least).
>>
>> However on this branch here the concern was request completion racing
>> with preemption handling and with this change the breadcrumb will not
>> get canceled any longer and may get signaled on the virtual engine.
>> Which then leads to the explosion this branch fixed. At least that's
>> what I remembered in combination with the comment below..
> 
> No, we don't change back to the virtual engine, so that is not an issue.
> The problem was only because of the rq->engine = owner where the
> breadcrumbs were still on the previous engine lists and assumed to be
> under that engine->breadcrumbs.lock (but would in future be assumed to be
> under rq->engine->breadcrumbs.lock).

Breadcrumb signaling can only be set up on the physical engine? Hm, must 
be fine since without preemption that would be the scenario exactly. 
Okay, I see there is r-b from Ram already so no need for another one.

Regards,

Tvrtko





* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:50     ` Tvrtko Ursulin
@ 2019-10-14  9:59       ` Chris Wilson
  2019-10-14 13:15         ` Tvrtko Ursulin
  0 siblings, 1 reply; 11+ messages in thread
From: Chris Wilson @ 2019-10-14  9:59 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 10:50:25)
> 
> On 14/10/2019 10:41, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
> >>
> >> On 13/10/2019 21:30, Chris Wilson wrote:
> >>> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
> >>> overtaking each other on preemption") we have restricted requests to run
> >>> on their chosen engine across preemption events. We can take this
> >>> restriction into account to know that we will want to resubmit those
> >>> requests onto the same physical engine, and so can shortcircuit the
> >>> virtual engine selection process and keep the request on the same
> >>> engine during unwind.
> >>>
> >>> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
> >>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>> ---
> >>>    drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
> >>>    drivers/gpu/drm/i915/i915_request.c | 2 +-
> >>>    2 files changed, 4 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> >>> index e6bf633b48d5..03732e3f5ec7 100644
> >>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> >>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> >>> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >>>        list_for_each_entry_safe_reverse(rq, rn,
> >>>                                         &engine->active.requests,
> >>>                                         sched.link) {
> >>> -             struct intel_engine_cs *owner;
> >>>    
> >>>                if (i915_request_completed(rq))
> >>>                        continue; /* XXX */
> >>> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >>>                 * engine so that it can be moved across onto another physical
> >>>                 * engine as load dictates.
> >>>                 */
> >>> -             owner = rq->hw_context->engine;
> >>> -             if (likely(owner == engine)) {
> >>> +             if (likely(rq->execution_mask == engine->mask)) {
> >>>                        GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
> >>>                        if (rq_prio(rq) != prio) {
> >>>                                prio = rq_prio(rq);
> >>> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
> >>>                        list_move(&rq->sched.link, pl);
> >>>                        active = rq;
> >>>                } else {
> >>> +                     struct intel_engine_cs *owner = rq->hw_context->engine;
> >>
> >> I guess there is some benefit in doing fewer operations as long as we
> >> are fixing the engine anyway (at the moment at least).
> >>
> >> However on this branch here the concern was request completion racing
> >> with preemption handling and with this change the breadcrumb will not
> >> get canceled any longer and may get signaled on the virtual engine.
> >> Which then leads to the explosion this branch fixed. At least that's
> >> what I remembered in combination with the comment below..
> > 
> > No, we don't change back to the virtual engine, so that is not an issue.
> > The problem was only because of the rq->engine = owner where the
> > breadcrumbs were still on the previous engine lists and assumed to be
> > under that engine->breadcrumbs.lock (but would in future be assumed to be
> > under rq->engine->breadcrumbs.lock).
> 
> Breadcrumb signaling can only be set up on the physical engine? Hm, must 
> be fine since without preemption that would be the scenario exactly. 
> Okay, I see there is r-b from Ram already so no need for another one.

With no disrespect to Ram, as the expert you raised a technical point that
I would be happier to record as resolved with an r-b from yourself.
-Chris

* Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:59       ` Chris Wilson
@ 2019-10-14 13:15         ` Tvrtko Ursulin
  0 siblings, 0 replies; 11+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 13:15 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:59, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 10:50:25)
>>
>> On 14/10/2019 10:41, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
>>>>
>>>> On 13/10/2019 21:30, Chris Wilson wrote:
>>>>> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
>>>>> overtaking each other on preemption") we have restricted requests to run
>>>>> on their chosen engine across preemption events. We can take this
>>>>> restriction into account to know that we will want to resubmit those
>>>>> requests onto the same physical engine, and so can shortcircuit the
>>>>> virtual engine selection process and keep the request on the same
>>>>> engine during unwind.
>>>>>
>>>>> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
>>>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>> ---
>>>>>     drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
>>>>>     drivers/gpu/drm/i915/i915_request.c | 2 +-
>>>>>     2 files changed, 4 insertions(+), 4 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>>>> index e6bf633b48d5..03732e3f5ec7 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>>>> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>>>         list_for_each_entry_safe_reverse(rq, rn,
>>>>>                                          &engine->active.requests,
>>>>>                                          sched.link) {
>>>>> -             struct intel_engine_cs *owner;
>>>>>     
>>>>>                 if (i915_request_completed(rq))
>>>>>                         continue; /* XXX */
>>>>> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>>>                  * engine so that it can be moved across onto another physical
>>>>>                  * engine as load dictates.
>>>>>                  */
>>>>> -             owner = rq->hw_context->engine;
>>>>> -             if (likely(owner == engine)) {
>>>>> +             if (likely(rq->execution_mask == engine->mask)) {
>>>>>                         GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
>>>>>                         if (rq_prio(rq) != prio) {
>>>>>                                 prio = rq_prio(rq);
>>>>> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>>>                         list_move(&rq->sched.link, pl);
>>>>>                         active = rq;
>>>>>                 } else {
>>>>> +                     struct intel_engine_cs *owner = rq->hw_context->engine;
>>>>
>>>> I guess there is some benefit in doing fewer operations as long as we
>>>> are fixing the engine anyway (at the moment at least).
>>>>
>>>> However on this branch here the concern was request completion racing
>>>> with preemption handling and with this change the breadcrumb will not
>>>> get canceled any longer and may get signaled on the virtual engine.
>>>> Which then leads to the explosion this branch fixed. At least that's
>>>> what I remembered in combination with the comment below..
>>>
>>> No, we don't change back to the virtual engine, so that is not an issue.
>>> The problem was only because of the rq->engine = owner where the
>>> breadcrumbs were still on the previous engine lists and assumed to be
>>> under that engine->breadcrumbs.lock (but would in future be assumed to be
>>> under rq->engine->breadcrumbs.lock).
>>
>> Breadcrumb signaling can only be set up on the physical engine? Hm, must
>> be fine since without preemption that would be the scenario exactly.
>> Okay, I see there is r-b from Ram already so no need for another one.
> 
> With no disrespect to Ram, as the expert you raised a technical point that
> I would be happier to record as resolved with an r-b from yourself.

I went back to the patch I reviewed in July and it checks out.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* ✗ Fi.CI.BUILD: failure for drm/i915/execlists: Tweak virtual unsubmission
  2019-10-13 20:30 [PATCH] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
                   ` (2 preceding siblings ...)
  2019-10-14  9:34 ` Tvrtko Ursulin
@ 2019-10-14 16:14 ` Patchwork
  3 siblings, 0 replies; 11+ messages in thread
From: Patchwork @ 2019-10-14 16:14 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/execlists: Tweak virtual unsubmission
URL   : https://patchwork.freedesktop.org/series/67958/
State : failure

== Summary ==

Applying: drm/i915/execlists: Tweak virtual unsubmission
error: sha1 information is lacking or useless (drivers/gpu/drm/i915/gt/intel_lrc.c).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch' to see the failed patch
Patch failed at 0001 drm/i915/execlists: Tweak virtual unsubmission
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
