* [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
@ 2020-02-06 15:23 Chris Wilson
2020-02-06 15:29 ` Chris Wilson
` (5 more replies)
0 siblings, 6 replies; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 15:23 UTC (permalink / raw)
To: intel-gfx
Virtual engines are fleeting. They carry a reference count and may be freed
when their last request is retired. This makes them unsuitable for the
task of housing engine->retire.work so assert that it is not used.
Tvrtko tracked down an instance where we did indeed violate this rule.
In virtual_submit_request, we flush a completed request directly with
__i915_request_submit, and this causes us to queue that request on the
veng's breadcrumb list and signal it, leading us down a path where we
should not attach the retire worker.
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 3 +++
drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
2 files changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 0ba524a414c6..cbad7fe722ce 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -136,6 +136,9 @@ static void add_retire(struct intel_breadcrumbs *b, struct intel_timeline *tl)
struct intel_engine_cs *engine =
container_of(b, struct intel_engine_cs, breadcrumbs);
+ if (unlikely(intel_engine_is_virtual(engine)))
+ engine = intel_virtual_engine_get_sibling(engine, 0);
+
intel_engine_add_retire(engine, tl);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 7ef1d37970f6..8a5054f21bf8 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
void intel_engine_add_retire(struct intel_engine_cs *engine,
struct intel_timeline *tl)
{
+ /* We don't deal well with the engine disappearing beneath us */
+ GEM_BUG_ON(intel_engine_is_virtual(engine));
+
if (add_retire(engine, tl))
schedule_work(&engine->retire_work);
}
--
2.25.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
@ 2020-02-06 15:29 ` Chris Wilson
2020-02-06 15:57 ` Tvrtko Ursulin
2020-02-06 15:44 ` Mika Kuoppala
` (4 subsequent siblings)
5 siblings, 1 reply; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 15:29 UTC (permalink / raw)
To: intel-gfx
Quoting Chris Wilson (2020-02-06 15:23:25)
> Virtual engines are fleeting. They carry a reference count and may be freed
> when their last request is retired. This makes them unsuitable for the
> task of housing engine->retire.work so assert that it is not used.
>
> Tvrtko tracked down an instance where we did indeed violate this rule.
> In virtal_submit_request, we flush a completed request directly with
> __i915_request_submit and this causes us to queue that request on the
> veng's breadcrumb list and signal it. Leading us down a path where we
> should not attach the retire.
>
> Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Alternatively we could fixup the rq->engine before
__i915_request_submit. That would stop the spread of
intel_virtual_engine_get_sibling().
This is likely to be the cleaner fix, so I think I would prefer this and
then remove the get_sibling().
-Chris
* Re: [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
2020-02-06 15:29 ` Chris Wilson
@ 2020-02-06 15:44 ` Mika Kuoppala
2020-02-06 16:12 ` [Intel-gfx] [PATCH v2] " Chris Wilson
` (3 subsequent siblings)
5 siblings, 0 replies; 13+ messages in thread
From: Mika Kuoppala @ 2020-02-06 15:44 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
Chris Wilson <chris@chris-wilson.co.uk> writes:
> Virtual engines are fleeting. They carry a reference count and may be freed
> when their last request is retired. This makes them unsuitable for the
> task of housing engine->retire.work so assert that it is not used.
>
> Tvrtko tracked down an instance where we did indeed violate this rule.
> In virtal_submit_request, we flush a completed request directly with
s/virtal/virtual
-Mika
> __i915_request_submit and this causes us to queue that request on the
> veng's breadcrumb list and signal it. Leading us down a path where we
> should not attach the retire.
>
> Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 3 +++
> drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
> 2 files changed, 6 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
> index 0ba524a414c6..cbad7fe722ce 100644
> --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
> @@ -136,6 +136,9 @@ static void add_retire(struct intel_breadcrumbs *b, struct intel_timeline *tl)
> struct intel_engine_cs *engine =
> container_of(b, struct intel_engine_cs, breadcrumbs);
>
> + if (unlikely(intel_engine_is_virtual(engine)))
> + engine = intel_virtual_engine_get_sibling(engine, 0);
> +
> intel_engine_add_retire(engine, tl);
> }
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> index 7ef1d37970f6..8a5054f21bf8 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> @@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
> void intel_engine_add_retire(struct intel_engine_cs *engine,
> struct intel_timeline *tl)
> {
> + /* We don't deal well with the engine disappearing beneath us */
> + GEM_BUG_ON(intel_engine_is_virtual(engine));
> +
> if (add_retire(engine, tl))
> schedule_work(&engine->retire_work);
> }
> --
> 2.25.0
>
* Re: [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 15:29 ` Chris Wilson
@ 2020-02-06 15:57 ` Tvrtko Ursulin
0 siblings, 0 replies; 13+ messages in thread
From: Tvrtko Ursulin @ 2020-02-06 15:57 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
On 06/02/2020 15:29, Chris Wilson wrote:
> Quoting Chris Wilson (2020-02-06 15:23:25)
>> Virtual engines are fleeting. They carry a reference count and may be freed
>> when their last request is retired. This makes them unsuitable for the
>> task of housing engine->retire.work so assert that it is not used.
>>
>> Tvrtko tracked down an instance where we did indeed violate this rule.
>> In virtal_submit_request, we flush a completed request directly with
>> __i915_request_submit and this causes us to queue that request on the
>> veng's breadcrumb list and signal it. Leading us down a path where we
>> should not attach the retire.
>>
>> Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> Alternatively we could fixup the rq->engine before
> __i915_request_submit. That would stop the spread of
> intel_virtual_engine_get_sibling().
>
> This is likely to be the cleaner fix, so I think I would prefer this and
> then remove the get_sibling().
Yes, it makes more sense for rq->engine to always be physical at the
point of __i915_request_submit.
Regards,
Tvrtko
* [Intel-gfx] [PATCH v2] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
2020-02-06 15:29 ` Chris Wilson
2020-02-06 15:44 ` Mika Kuoppala
@ 2020-02-06 16:12 ` Chris Wilson
2020-02-06 16:23 ` Chris Wilson
2020-02-06 16:41 ` [Intel-gfx] [PATCH v1] " Chris Wilson
` (2 subsequent siblings)
5 siblings, 1 reply; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 16:12 UTC (permalink / raw)
To: intel-gfx
Virtual engines are fleeting. They carry a reference count and may be freed
when their last request is retired. This makes them unsuitable for the
task of housing engine->retire.work so assert that it is not used.
Tvrtko tracked down an instance where we did indeed violate this rule.
In virtual_submit_request, we flush a completed request directly with
__i915_request_submit and this causes us to queue that request on the
veng's breadcrumb list and signal it, leading us down a path where we
should not attach the retire worker.
v2: Always select a physical engine before submitting, and so avoid
using the veng as a signaler.
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
drivers/gpu/drm/i915/gt/intel_engine.h | 1 +
drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
drivers/gpu/drm/i915/gt/intel_lrc.c | 17 ++++++++++++++---
drivers/gpu/drm/i915/i915_request.c | 2 ++
4 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index b36ec1fddc3d..5b21ca5478c2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -217,6 +217,7 @@ void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine);
static inline void
intel_engine_signal_breadcrumbs(struct intel_engine_cs *engine)
{
+ GEM_BUG_ON(!engine->breadcrumbs.irq_work.func);
irq_work_queue(&engine->breadcrumbs.irq_work);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 7ef1d37970f6..8a5054f21bf8 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
void intel_engine_add_retire(struct intel_engine_cs *engine,
struct intel_timeline *tl)
{
+ /* We don't deal well with the engine disappearing beneath us */
+ GEM_BUG_ON(intel_engine_is_virtual(engine));
+
if (add_retire(engine, tl))
schedule_work(&engine->retire_work);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index c196fb90c59f..e2bd1c357afc 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -4883,6 +4883,18 @@ static void virtual_submission_tasklet(unsigned long data)
local_irq_enable();
}
+static void __ve_request_submit(const struct virtual_engine *ve,
+ struct i915_request *rq)
+{
+ /*
+ * Select a real engine to act as our permanent storage
+ * and signaler for the stale request, and prevent
+ * this virtual engine from leaking into the execution state.
+ */
+ rq->engine = ve->siblings[0]; /* chosen at random! */
+ __i915_request_submit(rq);
+}
+
static void virtual_submit_request(struct i915_request *rq)
{
struct virtual_engine *ve = to_virtual_engine(rq->engine);
@@ -4900,12 +4912,12 @@ static void virtual_submit_request(struct i915_request *rq)
old = ve->request;
if (old) { /* background completion event from preempt-to-busy */
GEM_BUG_ON(!i915_request_completed(old));
- __i915_request_submit(old);
+ __ve_request_submit(ve, old);
i915_request_put(old);
}
if (i915_request_completed(rq)) {
- __i915_request_submit(rq);
+ __ve_request_submit(ve, rq);
ve->base.execlists.queue_priority_hint = INT_MIN;
ve->request = NULL;
@@ -5004,7 +5016,6 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);
- intel_engine_init_breadcrumbs(&ve->base);
intel_engine_init_execlists(&ve->base);
ve->base.cops = &virtual_context_ops;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 0ecc2cf64216..2c45d4b93e2c 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -358,6 +358,8 @@ bool __i915_request_submit(struct i915_request *request)
GEM_BUG_ON(!irqs_disabled());
lockdep_assert_held(&engine->active.lock);
+ GEM_BUG_ON(intel_engine_is_virtual(engine));
+
/*
* With the advent of preempt-to-busy, we frequently encounter
* requests that we have unsubmitted from HW, but left running
--
2.25.0
* Re: [Intel-gfx] [PATCH v2] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 16:12 ` [Intel-gfx] [PATCH v2] " Chris Wilson
@ 2020-02-06 16:23 ` Chris Wilson
2020-02-06 16:32 ` [Intel-gfx] [PATCH] " Chris Wilson
0 siblings, 1 reply; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 16:23 UTC (permalink / raw)
To: intel-gfx
Quoting Chris Wilson (2020-02-06 16:12:32)
> +static void __ve_request_submit(const struct virtual_engine *ve,
> + struct i915_request *rq)
> +{
> + /*
> + * Select a real engine to act as our permanent storage
> + * and signaler for the stale request, and prevent
> + * this virtual engine from leaking into the execution state.
> + */
> + rq->engine = ve->siblings[0]; /* chosen at random! */
> + __i915_request_submit(rq);
Wait just a minute, whose lock do you think this is!
-Chris
* [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 16:23 ` Chris Wilson
@ 2020-02-06 16:32 ` Chris Wilson
2020-02-06 16:40 ` Chris Wilson
2020-02-06 16:44 ` Tvrtko Ursulin
0 siblings, 2 replies; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 16:32 UTC (permalink / raw)
To: intel-gfx
Virtual engines are fleeting. They carry a reference count and may be freed
when their last request is retired. This makes them unsuitable for the
task of housing engine->retire.work so assert that it is not used.
Tvrtko tracked down an instance where we did indeed violate this rule.
In virtual_submit_request, we flush a completed request directly with
__i915_request_submit and this causes us to queue that request on the
veng's breadcrumb list and signal it, leading us down a path where we
should not attach the retire worker.
v2: Always select a physical engine before submitting, and so avoid
using the veng as a signaler.
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
drivers/gpu/drm/i915/gt/intel_engine.h | 1 +
drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
drivers/gpu/drm/i915/gt/intel_lrc.c | 21 ++++++++++++++++++---
drivers/gpu/drm/i915/i915_request.c | 2 ++
4 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index b36ec1fddc3d..5b21ca5478c2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -217,6 +217,7 @@ void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine);
static inline void
intel_engine_signal_breadcrumbs(struct intel_engine_cs *engine)
{
+ GEM_BUG_ON(!engine->breadcrumbs.irq_work.func);
irq_work_queue(&engine->breadcrumbs.irq_work);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 7ef1d37970f6..8a5054f21bf8 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
void intel_engine_add_retire(struct intel_engine_cs *engine,
struct intel_timeline *tl)
{
+ /* We don't deal well with the engine disappearing beneath us */
+ GEM_BUG_ON(intel_engine_is_virtual(engine));
+
if (add_retire(engine, tl))
schedule_work(&engine->retire_work);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index c196fb90c59f..639b5be56026 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -4883,6 +4883,22 @@ static void virtual_submission_tasklet(unsigned long data)
local_irq_enable();
}
+static void __ve_request_submit(const struct virtual_engine *ve,
+ struct i915_request *rq)
+{
+ struct intel_engine_cs *engine = ve->siblings[0]; /* totally random! */
+
+ /*
+ * Select a real engine to act as our permanent storage
+ * and signaler for the stale request, and prevent
+ * this virtual engine from leaking into the execution state.
+ */
+ spin_lock(&engine->active.lock);
+ rq->engine = engine;
+ __i915_request_submit(rq);
+ spin_unlock(&engine->active.lock);
+}
+
static void virtual_submit_request(struct i915_request *rq)
{
struct virtual_engine *ve = to_virtual_engine(rq->engine);
@@ -4900,12 +4916,12 @@ static void virtual_submit_request(struct i915_request *rq)
old = ve->request;
if (old) { /* background completion event from preempt-to-busy */
GEM_BUG_ON(!i915_request_completed(old));
- __i915_request_submit(old);
+ __ve_request_submit(ve, old);
i915_request_put(old);
}
if (i915_request_completed(rq)) {
- __i915_request_submit(rq);
+ __ve_request_submit(ve, rq);
ve->base.execlists.queue_priority_hint = INT_MIN;
ve->request = NULL;
@@ -5004,7 +5020,6 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);
- intel_engine_init_breadcrumbs(&ve->base);
intel_engine_init_execlists(&ve->base);
ve->base.cops = &virtual_context_ops;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 0ecc2cf64216..2c45d4b93e2c 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -358,6 +358,8 @@ bool __i915_request_submit(struct i915_request *request)
GEM_BUG_ON(!irqs_disabled());
lockdep_assert_held(&engine->active.lock);
+ GEM_BUG_ON(intel_engine_is_virtual(engine));
+
/*
* With the advent of preempt-to-busy, we frequently encounter
* requests that we have unsubmitted from HW, but left running
--
2.25.0
* Re: [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 16:32 ` [Intel-gfx] [PATCH] " Chris Wilson
@ 2020-02-06 16:40 ` Chris Wilson
2020-02-06 16:44 ` Tvrtko Ursulin
1 sibling, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 16:40 UTC (permalink / raw)
To: intel-gfx
Quoting Chris Wilson (2020-02-06 16:32:43)
> Virtual engines are fleeting. They carry a reference count and may be freed
> when their last request is retired. This makes them unsuitable for the
> task of housing engine->retire.work so assert that it is not used.
>
> Tvrtko tracked down an instance where we did indeed violate this rule.
> In virtual_submit_request, we flush a completed request directly with
> __i915_request_submit and this causes us to queue that request on the
> veng's breadcrumb list and signal it. Leading us down a path where we
> should not attach the retire.
>
> v2: Always select a physical engine before submitting, and so avoid
> using the veng as a signaler.
>
> Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_engine.h | 1 +
> drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
> drivers/gpu/drm/i915/gt/intel_lrc.c | 21 ++++++++++++++++++---
> drivers/gpu/drm/i915/i915_request.c | 2 ++
> 4 files changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> index b36ec1fddc3d..5b21ca5478c2 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> @@ -217,6 +217,7 @@ void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine);
> static inline void
> intel_engine_signal_breadcrumbs(struct intel_engine_cs *engine)
> {
> + GEM_BUG_ON(!engine->breadcrumbs.irq_work.func);
> irq_work_queue(&engine->breadcrumbs.irq_work);
> }
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> index 7ef1d37970f6..8a5054f21bf8 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> @@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
> void intel_engine_add_retire(struct intel_engine_cs *engine,
> struct intel_timeline *tl)
> {
> + /* We don't deal well with the engine disappearing beneath us */
> + GEM_BUG_ON(intel_engine_is_virtual(engine));
> +
> if (add_retire(engine, tl))
> schedule_work(&engine->retire_work);
> }
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index c196fb90c59f..639b5be56026 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -4883,6 +4883,22 @@ static void virtual_submission_tasklet(unsigned long data)
> local_irq_enable();
> }
>
> +static void __ve_request_submit(const struct virtual_engine *ve,
> + struct i915_request *rq)
> +{
> + struct intel_engine_cs *engine = ve->siblings[0]; /* totally random! */
> +
> + /*
> + * Select a real engine to act as our permanent storage
> + * and signaler for the stale request, and prevent
> + * this virtual engine from leaking into the execution state.
> + */
> + spin_lock(&engine->active.lock);
> + rq->engine = engine;
> + __i915_request_submit(rq);
> + spin_unlock(&engine->active.lock);
This won't do either as it inverts the ve/phys locking order... And wait
for it...
We call ve->submit_request() underneath the phys->active.lock when
unsubmitting.
Bleurgh. Let's take the path in v1 for a bit while I see if this can be
unravelled.
-Chris
* [Intel-gfx] [PATCH v1] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
` (2 preceding siblings ...)
2020-02-06 16:12 ` [Intel-gfx] [PATCH v2] " Chris Wilson
@ 2020-02-06 16:41 ` Chris Wilson
2020-02-06 17:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4) Patchwork
2020-02-09 10:44 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
5 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 16:41 UTC (permalink / raw)
To: intel-gfx
Virtual engines are fleeting. They carry a reference count and may be freed
when their last request is retired. This makes them unsuitable for the
task of housing engine->retire.work so assert that it is not used.
Tvrtko tracked down an instance where we did indeed violate this rule.
In virtual_submit_request, we flush a completed request directly with
__i915_request_submit and this causes us to queue that request on the
veng's breadcrumb list and signal it, leading us down a path where we
should not attach the retire worker.
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 3 +++
drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
2 files changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 0ba524a414c6..cbad7fe722ce 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -136,6 +136,9 @@ static void add_retire(struct intel_breadcrumbs *b, struct intel_timeline *tl)
struct intel_engine_cs *engine =
container_of(b, struct intel_engine_cs, breadcrumbs);
+ if (unlikely(intel_engine_is_virtual(engine)))
+ engine = intel_virtual_engine_get_sibling(engine, 0);
+
intel_engine_add_retire(engine, tl);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 7ef1d37970f6..8a5054f21bf8 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
void intel_engine_add_retire(struct intel_engine_cs *engine,
struct intel_timeline *tl)
{
+ /* We don't deal well with the engine disappearing beneath us */
+ GEM_BUG_ON(intel_engine_is_virtual(engine));
+
if (add_retire(engine, tl))
schedule_work(&engine->retire_work);
}
--
2.25.0
* Re: [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 16:32 ` [Intel-gfx] [PATCH] " Chris Wilson
2020-02-06 16:40 ` Chris Wilson
@ 2020-02-06 16:44 ` Tvrtko Ursulin
2020-02-06 16:48 ` Chris Wilson
1 sibling, 1 reply; 13+ messages in thread
From: Tvrtko Ursulin @ 2020-02-06 16:44 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
On 06/02/2020 16:32, Chris Wilson wrote:
> Virtual engines are fleeting. They carry a reference count and may be freed
> when their last request is retired. This makes them unsuitable for the
> task of housing engine->retire.work so assert that it is not used.
>
> Tvrtko tracked down an instance where we did indeed violate this rule.
> In virtual_submit_request, we flush a completed request directly with
> __i915_request_submit and this causes us to queue that request on the
> veng's breadcrumb list and signal it. Leading us down a path where we
> should not attach the retire.
>
> v2: Always select a physical engine before submitting, and so avoid
> using the veng as a signaler.
>
> Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_engine.h | 1 +
> drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
> drivers/gpu/drm/i915/gt/intel_lrc.c | 21 ++++++++++++++++++---
> drivers/gpu/drm/i915/i915_request.c | 2 ++
> 4 files changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> index b36ec1fddc3d..5b21ca5478c2 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> @@ -217,6 +217,7 @@ void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine);
> static inline void
> intel_engine_signal_breadcrumbs(struct intel_engine_cs *engine)
> {
> + GEM_BUG_ON(!engine->breadcrumbs.irq_work.func);
> irq_work_queue(&engine->breadcrumbs.irq_work);
> }
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> index 7ef1d37970f6..8a5054f21bf8 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> @@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
> void intel_engine_add_retire(struct intel_engine_cs *engine,
> struct intel_timeline *tl)
> {
> + /* We don't deal well with the engine disappearing beneath us */
> + GEM_BUG_ON(intel_engine_is_virtual(engine));
> +
> if (add_retire(engine, tl))
> schedule_work(&engine->retire_work);
> }
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index c196fb90c59f..639b5be56026 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -4883,6 +4883,22 @@ static void virtual_submission_tasklet(unsigned long data)
> local_irq_enable();
> }
>
> +static void __ve_request_submit(const struct virtual_engine *ve,
> + struct i915_request *rq)
> +{
> + struct intel_engine_cs *engine = ve->siblings[0]; /* totally random! */
We don't preserve the execution engine in ce->inflight? No.. Will random
engine have any effect? Will proper waiters get signaled?
> +
> + /*
> + * Select a real engine to act as our permanent storage
> + * and signaler for the stale request, and prevent
> + * this virtual engine from leaking into the execution state.
> + */
> + spin_lock(&engine->active.lock);
Nesting phys lock under veng lock will be okay?
Regards,
Tvrtko
> + rq->engine = engine;
> + __i915_request_submit(rq);
> + spin_unlock(&engine->active.lock);
> +}
> +
> static void virtual_submit_request(struct i915_request *rq)
> {
> struct virtual_engine *ve = to_virtual_engine(rq->engine);
> @@ -4900,12 +4916,12 @@ static void virtual_submit_request(struct i915_request *rq)
> old = ve->request;
> if (old) { /* background completion event from preempt-to-busy */
> GEM_BUG_ON(!i915_request_completed(old));
> - __i915_request_submit(old);
> + __ve_request_submit(ve, old);
> i915_request_put(old);
> }
>
> if (i915_request_completed(rq)) {
> - __i915_request_submit(rq);
> + __ve_request_submit(ve, rq);
>
> ve->base.execlists.queue_priority_hint = INT_MIN;
> ve->request = NULL;
> @@ -5004,7 +5020,6 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
> snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
>
> intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);
> - intel_engine_init_breadcrumbs(&ve->base);
> intel_engine_init_execlists(&ve->base);
>
> ve->base.cops = &virtual_context_ops;
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 0ecc2cf64216..2c45d4b93e2c 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -358,6 +358,8 @@ bool __i915_request_submit(struct i915_request *request)
> GEM_BUG_ON(!irqs_disabled());
> lockdep_assert_held(&engine->active.lock);
>
> + GEM_BUG_ON(intel_engine_is_virtual(engine));
> +
> /*
> * With the advent of preempt-to-busy, we frequently encounter
> * requests that we have unsubmitted from HW, but left running
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine
2020-02-06 16:44 ` Tvrtko Ursulin
@ 2020-02-06 16:48 ` Chris Wilson
0 siblings, 0 replies; 13+ messages in thread
From: Chris Wilson @ 2020-02-06 16:48 UTC (permalink / raw)
To: Tvrtko Ursulin, intel-gfx
Quoting Tvrtko Ursulin (2020-02-06 16:44:34)
>
> On 06/02/2020 16:32, Chris Wilson wrote:
> > Virtual engines are fleeting. They carry a reference count and may be freed
> > when their last request is retired. This makes them unsuitable for the
> > task of housing engine->retire.work so assert that it is not used.
> >
> > Tvrtko tracked down an instance where we did indeed violate this rule.
> > In virtual_submit_request, we flush a completed request directly with
> > __i915_request_submit and this causes us to queue that request on the
> > veng's breadcrumb list and signal it. Leading us down a path where we
> > should not attach the retire.
> >
> > v2: Always select a physical engine before submitting, and so avoid
> > using the veng as a signaler.
> >
> > Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > Fixes: dc93c9b69315 ("drm/i915/gt: Schedule request retirement when signaler idles")
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> > drivers/gpu/drm/i915/gt/intel_engine.h | 1 +
> > drivers/gpu/drm/i915/gt/intel_gt_requests.c | 3 +++
> > drivers/gpu/drm/i915/gt/intel_lrc.c | 21 ++++++++++++++++++---
> > drivers/gpu/drm/i915/i915_request.c | 2 ++
> > 4 files changed, 24 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> > index b36ec1fddc3d..5b21ca5478c2 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> > +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> > @@ -217,6 +217,7 @@ void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine);
> > static inline void
> > intel_engine_signal_breadcrumbs(struct intel_engine_cs *engine)
> > {
> > + GEM_BUG_ON(!engine->breadcrumbs.irq_work.func);
> > irq_work_queue(&engine->breadcrumbs.irq_work);
> > }
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> > index 7ef1d37970f6..8a5054f21bf8 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> > @@ -99,6 +99,9 @@ static bool add_retire(struct intel_engine_cs *engine,
> > void intel_engine_add_retire(struct intel_engine_cs *engine,
> > struct intel_timeline *tl)
> > {
> > + /* We don't deal well with the engine disappearing beneath us */
> > + GEM_BUG_ON(intel_engine_is_virtual(engine));
> > +
> > if (add_retire(engine, tl))
> > schedule_work(&engine->retire_work);
> > }
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index c196fb90c59f..639b5be56026 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -4883,6 +4883,22 @@ static void virtual_submission_tasklet(unsigned long data)
> > local_irq_enable();
> > }
> >
> > +static void __ve_request_submit(const struct virtual_engine *ve,
> > + struct i915_request *rq)
> > +{
> > + struct intel_engine_cs *engine = ve->siblings[0]; /* totally random! */
>
> We don't preserve the execution engine in ce->inflight? No.. Will random
> engine have any effect? Will proper waiters get signaled?
Ok, it's not totally random ;) it's the engine on which we last executed,
so it's a match wrt the previous breadcrumbs/waiters. It's a good
choice :)
> > + /*
> > + * Select a real engine to act as our permanent storage
> > + * and signaler for the stale request, and prevent
> > + * this virtual engine from leaking into the execution state.
> > + */
> > + spin_lock(&engine->active.lock);
>
> Nesting phys lock under veng lock will be okay?
No. Far from it.
-Chris
* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4)
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
` (3 preceding siblings ...)
2020-02-06 16:41 ` [Intel-gfx] [PATCH v1] " Chris Wilson
@ 2020-02-06 17:08 ` Patchwork
2020-02-09 10:44 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
5 siblings, 0 replies; 13+ messages in thread
From: Patchwork @ 2020-02-06 17:08 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4)
URL : https://patchwork.freedesktop.org/series/73102/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_7876 -> Patchwork_16462
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/index.html
Known issues
------------
Here are the changes found in Patchwork_16462 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_close_race@basic-threads:
- fi-byt-j1900: [PASS][1] -> [INCOMPLETE][2] ([i915#45])
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/fi-byt-j1900/igt@gem_close_race@basic-threads.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/fi-byt-j1900/igt@gem_close_race@basic-threads.html
* igt@i915_selftest@live_blt:
- fi-hsw-4770r: [PASS][3] -> [DMESG-FAIL][4] ([i915#553] / [i915#725])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/fi-hsw-4770r/igt@i915_selftest@live_blt.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/fi-hsw-4770r/igt@i915_selftest@live_blt.html
- fi-hsw-4770: [PASS][5] -> [DMESG-FAIL][6] ([i915#553] / [i915#725])
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/fi-hsw-4770/igt@i915_selftest@live_blt.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/fi-hsw-4770/igt@i915_selftest@live_blt.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
- fi-icl-dsi: [PASS][7] -> [INCOMPLETE][8] ([i915#140])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/fi-icl-dsi/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/fi-icl-dsi/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
#### Possible fixes ####
* igt@gem_exec_parallel@fds:
- fi-byt-n2820: [FAIL][9] ([i915#694]) -> [PASS][10]
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/fi-byt-n2820/igt@gem_exec_parallel@fds.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/fi-byt-n2820/igt@gem_exec_parallel@fds.html
* igt@i915_selftest@live_blt:
- fi-bsw-nick: [INCOMPLETE][11] ([i915#392]) -> [PASS][12]
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/fi-bsw-nick/igt@i915_selftest@live_blt.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/fi-bsw-nick/igt@i915_selftest@live_blt.html
[i915#140]: https://gitlab.freedesktop.org/drm/intel/issues/140
[i915#392]: https://gitlab.freedesktop.org/drm/intel/issues/392
[i915#45]: https://gitlab.freedesktop.org/drm/intel/issues/45
[i915#553]: https://gitlab.freedesktop.org/drm/intel/issues/553
[i915#694]: https://gitlab.freedesktop.org/drm/intel/issues/694
[i915#725]: https://gitlab.freedesktop.org/drm/intel/issues/725
Participating hosts (41 -> 39)
------------------------------
Additional (6): fi-snb-2520m fi-ivb-3770 fi-skl-lmem fi-blb-e6850 fi-skl-6700k2 fi-snb-2600
Missing (8): fi-bdw-5557u fi-hsw-peppy fi-skl-6770hq fi-byt-squawks fi-bwr-2160 fi-cfl-8109u fi-byt-clapper fi-bdw-samus
Build changes
-------------
* CI: CI-20190529 -> None
* Linux: CI_DRM_7876 -> Patchwork_16462
CI-20190529: 20190529
CI_DRM_7876: 6ac39d9964f464065511d439afcf4da065ff96db @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_5421: 40946e61f9c47e23fdf1fff8090fadee8a4d7d3b @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_16462: 1e826279b1745739828d1aef4fc2332843cee508 @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
1e826279b174 drm/i915/gt: Prevent queuing retire workers on the virtual engine
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/index.html
* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4)
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
` (4 preceding siblings ...)
2020-02-06 17:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4) Patchwork
@ 2020-02-09 10:44 ` Patchwork
5 siblings, 0 replies; 13+ messages in thread
From: Patchwork @ 2020-02-09 10:44 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4)
URL : https://patchwork.freedesktop.org/series/73102/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_7876_full -> Patchwork_16462_full
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Known issues
------------
Here are the changes found in Patchwork_16462_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_ctx_isolation@vecs0-s3:
- shard-apl: [PASS][1] -> [DMESG-WARN][2] ([i915#180]) +3 similar issues
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-apl7/igt@gem_ctx_isolation@vecs0-s3.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-apl2/igt@gem_ctx_isolation@vecs0-s3.html
* igt@gem_ctx_shared@exec-shared-gtt-bsd2:
- shard-kbl: [PASS][3] -> [FAIL][4] ([i915#616])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-kbl7/igt@gem_ctx_shared@exec-shared-gtt-bsd2.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-kbl2/igt@gem_ctx_shared@exec-shared-gtt-bsd2.html
* igt@gem_exec_schedule@pi-distinct-iova-bsd:
- shard-iclb: [PASS][5] -> [SKIP][6] ([i915#677]) +2 similar issues
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb7/igt@gem_exec_schedule@pi-distinct-iova-bsd.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb1/igt@gem_exec_schedule@pi-distinct-iova-bsd.html
* igt@gem_exec_schedule@preempt-contexts-bsd2:
- shard-iclb: [PASS][7] -> [SKIP][8] ([fdo#109276]) +18 similar issues
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb1/igt@gem_exec_schedule@preempt-contexts-bsd2.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb7/igt@gem_exec_schedule@preempt-contexts-bsd2.html
* igt@gem_exec_schedule@wide-bsd:
- shard-iclb: [PASS][9] -> [SKIP][10] ([fdo#112146]) +6 similar issues
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb7/igt@gem_exec_schedule@wide-bsd.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb1/igt@gem_exec_schedule@wide-bsd.html
* igt@gem_ppgtt@flink-and-close-vma-leak:
- shard-apl: [PASS][11] -> [FAIL][12] ([i915#644])
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-apl6/igt@gem_ppgtt@flink-and-close-vma-leak.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-apl1/igt@gem_ppgtt@flink-and-close-vma-leak.html
- shard-kbl: [PASS][13] -> [FAIL][14] ([i915#644])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-kbl3/igt@gem_ppgtt@flink-and-close-vma-leak.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-kbl4/igt@gem_ppgtt@flink-and-close-vma-leak.html
* igt@kms_cursor_crc@pipe-c-cursor-256x85-offscreen:
- shard-skl: [PASS][15] -> [FAIL][16] ([i915#54])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl9/igt@kms_cursor_crc@pipe-c-cursor-256x85-offscreen.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl7/igt@kms_cursor_crc@pipe-c-cursor-256x85-offscreen.html
* igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
- shard-skl: [PASS][17] -> [FAIL][18] ([fdo#108145] / [i915#265])
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
* igt@kms_plane_lowres@pipe-a-tiling-y:
- shard-glk: [PASS][19] -> [FAIL][20] ([i915#899])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-glk2/igt@kms_plane_lowres@pipe-a-tiling-y.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-glk5/igt@kms_plane_lowres@pipe-a-tiling-y.html
* igt@kms_plane_multiple@atomic-pipe-b-tiling-yf:
- shard-skl: [PASS][21] -> [DMESG-WARN][22] ([IGT#6])
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl1/igt@kms_plane_multiple@atomic-pipe-b-tiling-yf.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl9/igt@kms_plane_multiple@atomic-pipe-b-tiling-yf.html
* igt@kms_psr@psr2_sprite_plane_move:
- shard-iclb: [PASS][23] -> [SKIP][24] ([fdo#109441]) +2 similar issues
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb2/igt@kms_psr@psr2_sprite_plane_move.html
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb5/igt@kms_psr@psr2_sprite_plane_move.html
* igt@perf_pmu@busy-vcs1:
- shard-iclb: [PASS][25] -> [SKIP][26] ([fdo#112080]) +15 similar issues
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb4/igt@perf_pmu@busy-vcs1.html
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb3/igt@perf_pmu@busy-vcs1.html
#### Possible fixes ####
* igt@gem_busy@busy-vcs1:
- shard-iclb: [SKIP][27] ([fdo#112080]) -> [PASS][28] +16 similar issues
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb7/igt@gem_busy@busy-vcs1.html
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb1/igt@gem_busy@busy-vcs1.html
* igt@gem_ctx_persistence@processes:
- shard-tglb: [FAIL][29] ([i915#570]) -> [PASS][30]
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-tglb3/igt@gem_ctx_persistence@processes.html
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-tglb2/igt@gem_ctx_persistence@processes.html
* igt@gem_exec_balancer@hang:
- shard-tglb: [TIMEOUT][31] ([fdo#112271]) -> [PASS][32]
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-tglb6/igt@gem_exec_balancer@hang.html
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-tglb5/igt@gem_exec_balancer@hang.html
* igt@gem_exec_schedule@pi-userfault-bsd:
- shard-iclb: [SKIP][33] ([i915#677]) -> [PASS][34]
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb4/igt@gem_exec_schedule@pi-userfault-bsd.html
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb8/igt@gem_exec_schedule@pi-userfault-bsd.html
* igt@gem_exec_schedule@preempt-other-chain-bsd:
- shard-iclb: [SKIP][35] ([fdo#112146]) -> [PASS][36] +5 similar issues
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb4/igt@gem_exec_schedule@preempt-other-chain-bsd.html
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb3/igt@gem_exec_schedule@preempt-other-chain-bsd.html
* igt@gem_ppgtt@flink-and-close-vma-leak:
- shard-skl: [FAIL][37] ([i915#644]) -> [PASS][38]
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl2/igt@gem_ppgtt@flink-and-close-vma-leak.html
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl10/igt@gem_ppgtt@flink-and-close-vma-leak.html
* igt@gem_render_copy_redux@normal:
- shard-hsw: [FAIL][39] ([i915#694]) -> [PASS][40]
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-hsw5/igt@gem_render_copy_redux@normal.html
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-hsw2/igt@gem_render_copy_redux@normal.html
* igt@i915_pm_dc@dc6-psr:
- shard-iclb: [FAIL][41] ([i915#454]) -> [PASS][42]
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb2/igt@i915_pm_dc@dc6-psr.html
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb4/igt@i915_pm_dc@dc6-psr.html
* igt@i915_pm_rps@waitboost:
- shard-tglb: [FAIL][43] ([i915#413]) -> [PASS][44]
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-tglb2/igt@i915_pm_rps@waitboost.html
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-tglb3/igt@i915_pm_rps@waitboost.html
* igt@i915_selftest@live_gtt:
- shard-apl: [TIMEOUT][45] ([fdo#112271]) -> [PASS][46]
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-apl3/igt@i915_selftest@live_gtt.html
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-apl7/igt@i915_selftest@live_gtt.html
* igt@kms_cursor_crc@pipe-c-cursor-256x85-sliding:
- shard-skl: [FAIL][47] ([i915#54]) -> [PASS][48]
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl7/igt@kms_cursor_crc@pipe-c-cursor-256x85-sliding.html
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl8/igt@kms_cursor_crc@pipe-c-cursor-256x85-sliding.html
* igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-kbl: [DMESG-WARN][49] ([i915#180]) -> [PASS][50] +3 similar issues
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-kbl6/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-kbl6/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
* igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled:
- shard-skl: [FAIL][51] ([i915#52] / [i915#54]) -> [PASS][52]
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl2/igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled.html
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl9/igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled.html
* igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-mmap-gtt:
- shard-tglb: [SKIP][53] ([i915#668]) -> [PASS][54] +5 similar issues
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-tglb2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-mmap-gtt.html
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-tglb3/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-mmap-gtt.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
- shard-iclb: [INCOMPLETE][55] ([i915#140]) -> [PASS][56]
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
* igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes:
- shard-apl: [DMESG-WARN][57] ([i915#180]) -> [PASS][58]
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-apl8/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-apl3/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html
* igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
- shard-skl: [FAIL][59] ([fdo#108145]) -> [PASS][60] +1 similar issue
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-skl8/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
* igt@kms_plane_lowres@pipe-a-tiling-x:
- shard-glk: [FAIL][61] ([i915#899]) -> [PASS][62]
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-glk6/igt@kms_plane_lowres@pipe-a-tiling-x.html
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-glk1/igt@kms_plane_lowres@pipe-a-tiling-x.html
* igt@kms_psr@psr2_cursor_mmap_cpu:
- shard-iclb: [SKIP][63] ([fdo#109441]) -> [PASS][64] +3 similar issues
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb8/igt@kms_psr@psr2_cursor_mmap_cpu.html
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_cpu.html
* igt@prime_vgem@fence-wait-bsd2:
- shard-iclb: [SKIP][65] ([fdo#109276]) -> [PASS][66] +23 similar issues
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb7/igt@prime_vgem@fence-wait-bsd2.html
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb1/igt@prime_vgem@fence-wait-bsd2.html
#### Warnings ####
* igt@gem_tiled_blits@normal:
- shard-hsw: [FAIL][67] ([i915#818]) -> [FAIL][68] ([i915#694])
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-hsw5/igt@gem_tiled_blits@normal.html
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-hsw1/igt@gem_tiled_blits@normal.html
* igt@gen9_exec_parse@allowed-all:
- shard-glk: [INCOMPLETE][69] ([i915#58] / [k.org#198133]) -> [DMESG-WARN][70] ([i915#716])
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-glk2/igt@gen9_exec_parse@allowed-all.html
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-glk5/igt@gen9_exec_parse@allowed-all.html
* igt@kms_dp_dsc@basic-dsc-enable-edp:
- shard-iclb: [SKIP][71] ([fdo#109349]) -> [DMESG-WARN][72] ([fdo#107724])
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7876/shard-iclb8/igt@kms_dp_dsc@basic-dsc-enable-edp.html
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/shard-iclb2/igt@kms_dp_dsc@basic-dsc-enable-edp.html
[IGT#6]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/6
[fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
[fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
[fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
[fdo#109349]: https://bugs.freedesktop.org/show_bug.cgi?id=109349
[fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
[fdo#112080]: https://bugs.freedesktop.org/show_bug.cgi?id=112080
[fdo#112146]: https://bugs.freedesktop.org/show_bug.cgi?id=112146
[fdo#112271]: https://bugs.freedesktop.org/show_bug.cgi?id=112271
[i915#140]: https://gitlab.freedesktop.org/drm/intel/issues/140
[i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
[i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
[i915#413]: https://gitlab.freedesktop.org/drm/intel/issues/413
[i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
[i915#52]: https://gitlab.freedesktop.org/drm/intel/issues/52
[i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54
[i915#570]: https://gitlab.freedesktop.org/drm/intel/issues/570
[i915#58]: https://gitlab.freedesktop.org/drm/intel/issues/58
[i915#616]: https://gitlab.freedesktop.org/drm/intel/issues/616
[i915#644]: https://gitlab.freedesktop.org/drm/intel/issues/644
[i915#668]: https://gitlab.freedesktop.org/drm/intel/issues/668
[i915#677]: https://gitlab.freedesktop.org/drm/intel/issues/677
[i915#694]: https://gitlab.freedesktop.org/drm/intel/issues/694
[i915#716]: https://gitlab.freedesktop.org/drm/intel/issues/716
[i915#818]: https://gitlab.freedesktop.org/drm/intel/issues/818
[i915#899]: https://gitlab.freedesktop.org/drm/intel/issues/899
[k.org#198133]: https://bugzilla.kernel.org/show_bug.cgi?id=198133
Participating hosts (10 -> 10)
------------------------------
No changes in participating hosts
Build changes
-------------
* CI: CI-20190529 -> None
* Linux: CI_DRM_7876 -> Patchwork_16462
CI-20190529: 20190529
CI_DRM_7876: 6ac39d9964f464065511d439afcf4da065ff96db @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_5421: 40946e61f9c47e23fdf1fff8090fadee8a4d7d3b @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_16462: 1e826279b1745739828d1aef4fc2332843cee508 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16462/index.html
end of thread
Thread overview: 13+ messages
2020-02-06 15:23 [Intel-gfx] [PATCH] drm/i915/gt: Prevent queuing retire workers on the virtual engine Chris Wilson
2020-02-06 15:29 ` Chris Wilson
2020-02-06 15:57 ` Tvrtko Ursulin
2020-02-06 15:44 ` Mika Kuoppala
2020-02-06 16:12 ` [Intel-gfx] [PATCH v2] " Chris Wilson
2020-02-06 16:23 ` Chris Wilson
2020-02-06 16:32 ` [Intel-gfx] [PATCH] " Chris Wilson
2020-02-06 16:40 ` Chris Wilson
2020-02-06 16:44 ` Tvrtko Ursulin
2020-02-06 16:48 ` Chris Wilson
2020-02-06 16:41 ` [Intel-gfx] [PATCH v1] " Chris Wilson
2020-02-06 17:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: Prevent queuing retire workers on the virtual engine (rev4) Patchwork
2020-02-09 10:44 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork