* [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers
@ 2018-04-17 14:31 Chris Wilson
2018-04-17 14:31 ` [PATCH v2 2/3] drm/i915: Rename priotree to sched Chris Wilson
` (14 more replies)
0 siblings, 15 replies; 21+ messages in thread
From: Chris Wilson @ 2018-04-17 14:31 UTC (permalink / raw)
To: intel-gfx
Over time the priotree has grown from a sorted list into a more
complicated structure for propagating constraints along the dependency
chain to try to resolve priority inversion. Start to segregate this
information from the rest of the request/fence tracking.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_request.h | 39 +-------------------
drivers/gpu/drm/i915/i915_scheduler.h | 52 +++++++++++++++++++++++++++
2 files changed, 53 insertions(+), 38 deletions(-)
create mode 100644 drivers/gpu/drm/i915/i915_scheduler.h
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 7d6eb82eeb91..e6f7c5f4ec7f 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -28,6 +28,7 @@
#include <linux/dma-fence.h>
#include "i915_gem.h"
+#include "i915_scheduler.h"
#include "i915_sw_fence.h"
#include <uapi/drm/i915_drm.h>
@@ -48,44 +49,6 @@ struct intel_signal_node {
struct list_head link;
};
-struct i915_dependency {
- struct i915_priotree *signaler;
- struct list_head signal_link;
- struct list_head wait_link;
- struct list_head dfs_link;
- unsigned long flags;
-#define I915_DEPENDENCY_ALLOC BIT(0)
-};
-
-/*
- * "People assume that time is a strict progression of cause to effect, but
- * actually, from a nonlinear, non-subjective viewpoint, it's more like a big
- * ball of wibbly-wobbly, timey-wimey ... stuff." -The Doctor, 2015
- *
- * Requests exist in a complex web of interdependencies. Each request
- * has to wait for some other request to complete before it is ready to be run
- * (e.g. we have to wait until the pixels have been rendering into a texture
- * before we can copy from it). We track the readiness of a request in terms
- * of fences, but we also need to keep the dependency tree for the lifetime
- * of the request (beyond the life of an individual fence). We use the tree
- * at various points to reorder the requests whilst keeping the requests
- * in order with respect to their various dependencies.
- */
-struct i915_priotree {
- struct list_head signalers_list; /* those before us, we depend upon */
- struct list_head waiters_list; /* those after us, they depend upon us */
- struct list_head link;
- int priority;
-};
-
-enum {
- I915_PRIORITY_MIN = I915_CONTEXT_MIN_USER_PRIORITY - 1,
- I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY,
- I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1,
-
- I915_PRIORITY_INVALID = INT_MIN
-};
-
struct i915_capture_list {
struct i915_capture_list *next;
struct i915_vma *vma;
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
new file mode 100644
index 000000000000..bd588f06ce23
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2018 Intel Corporation
+ */
+
+#ifndef _I915_SCHEDULER_H_
+#define _I915_SCHEDULER_H_
+
+#include <linux/bitops.h>
+
+#include <uapi/drm/i915_drm.h>
+
+enum {
+ I915_PRIORITY_MIN = I915_CONTEXT_MIN_USER_PRIORITY - 1,
+ I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY,
+ I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1,
+
+ I915_PRIORITY_INVALID = INT_MIN
+};
+
+/*
+ * "People assume that time is a strict progression of cause to effect, but
+ * actually, from a nonlinear, non-subjective viewpoint, it's more like a big
+ * ball of wibbly-wobbly, timey-wimey ... stuff." -The Doctor, 2015
+ *
+ * Requests exist in a complex web of interdependencies. Each request
+ * has to wait for some other request to complete before it is ready to be run
+ * (e.g. we have to wait until the pixels have been rendering into a texture
+ * before we can copy from it). We track the readiness of a request in terms
+ * of fences, but we also need to keep the dependency tree for the lifetime
+ * of the request (beyond the life of an individual fence). We use the tree
+ * at various points to reorder the requests whilst keeping the requests
+ * in order with respect to their various dependencies.
+ */
+struct i915_priotree {
+ struct list_head signalers_list; /* those before us, we depend upon */
+ struct list_head waiters_list; /* those after us, they depend upon us */
+ struct list_head link;
+ int priority;
+};
+
+struct i915_dependency {
+ struct i915_priotree *signaler;
+ struct list_head signal_link;
+ struct list_head wait_link;
+ struct list_head dfs_link;
+ unsigned long flags;
+#define I915_DEPENDENCY_ALLOC BIT(0)
+};
+
+#endif /* _I915_SCHEDULER_H_ */
--
2.17.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [PATCH v2 2/3] drm/i915: Rename priotree to sched
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
@ 2018-04-17 14:31 ` Chris Wilson
2018-04-18 9:29 ` Joonas Lahtinen
2018-04-17 14:31 ` [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct Chris Wilson
` (13 subsequent siblings)
14 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2018-04-17 14:31 UTC (permalink / raw)
To: intel-gfx
Having moved the priotree struct into i915_scheduler.h, identify it as
the scheduling element and rebrand it as i915_sched. This becomes more
useful as we start attaching more of the information we need to
propagate through the scheduler.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/i915_request.c | 44 ++++++------
drivers/gpu/drm/i915/i915_request.h | 6 +-
drivers/gpu/drm/i915/i915_scheduler.h | 4 +-
drivers/gpu/drm/i915/intel_engine_cs.c | 4 +-
drivers/gpu/drm/i915/intel_guc_submission.c | 8 +--
drivers/gpu/drm/i915/intel_lrc.c | 77 +++++++++++----------
7 files changed, 73 insertions(+), 72 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index effaf982b19b..6b5b9b3ded02 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1278,7 +1278,7 @@ static void record_request(struct i915_request *request,
struct drm_i915_error_request *erq)
{
erq->context = request->ctx->hw_id;
- erq->priority = request->priotree.priority;
+ erq->priority = request->sched.priority;
erq->ban_score = atomic_read(&request->ctx->ban_score);
erq->seqno = request->global_seqno;
erq->jiffies = request->emitted_jiffies;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 9ca9c24b4421..0939c120b82c 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -125,10 +125,10 @@ i915_dependency_free(struct drm_i915_private *i915,
}
static void
-__i915_priotree_add_dependency(struct i915_priotree *pt,
- struct i915_priotree *signal,
- struct i915_dependency *dep,
- unsigned long flags)
+__i915_sched_add_dependency(struct i915_sched *pt,
+ struct i915_sched *signal,
+ struct i915_dependency *dep,
+ unsigned long flags)
{
INIT_LIST_HEAD(&dep->dfs_link);
list_add(&dep->wait_link, &signal->waiters_list);
@@ -138,9 +138,9 @@ __i915_priotree_add_dependency(struct i915_priotree *pt,
}
static int
-i915_priotree_add_dependency(struct drm_i915_private *i915,
- struct i915_priotree *pt,
- struct i915_priotree *signal)
+i915_sched_add_dependency(struct drm_i915_private *i915,
+ struct i915_sched *pt,
+ struct i915_sched *signal)
{
struct i915_dependency *dep;
@@ -148,12 +148,12 @@ i915_priotree_add_dependency(struct drm_i915_private *i915,
if (!dep)
return -ENOMEM;
- __i915_priotree_add_dependency(pt, signal, dep, I915_DEPENDENCY_ALLOC);
+ __i915_sched_add_dependency(pt, signal, dep, I915_DEPENDENCY_ALLOC);
return 0;
}
static void
-i915_priotree_fini(struct drm_i915_private *i915, struct i915_priotree *pt)
+i915_sched_fini(struct drm_i915_private *i915, struct i915_sched *pt)
{
struct i915_dependency *dep, *next;
@@ -166,7 +166,7 @@ i915_priotree_fini(struct drm_i915_private *i915, struct i915_priotree *pt)
* so we may be called out-of-order.
*/
list_for_each_entry_safe(dep, next, &pt->signalers_list, signal_link) {
- GEM_BUG_ON(!i915_priotree_signaled(dep->signaler));
+ GEM_BUG_ON(!i915_sched_signaled(dep->signaler));
GEM_BUG_ON(!list_empty(&dep->dfs_link));
list_del(&dep->wait_link);
@@ -186,7 +186,7 @@ i915_priotree_fini(struct drm_i915_private *i915, struct i915_priotree *pt)
}
static void
-i915_priotree_init(struct i915_priotree *pt)
+i915_sched_init(struct i915_sched *pt)
{
INIT_LIST_HEAD(&pt->signalers_list);
INIT_LIST_HEAD(&pt->waiters_list);
@@ -422,7 +422,7 @@ static void i915_request_retire(struct i915_request *request)
}
spin_unlock_irq(&request->lock);
- i915_priotree_fini(request->i915, &request->priotree);
+ i915_sched_fini(request->i915, &request->sched);
i915_request_put(request);
}
@@ -725,7 +725,7 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
i915_sw_fence_init(&i915_request_get(rq)->submit, submit_notify);
init_waitqueue_head(&rq->execute);
- i915_priotree_init(&rq->priotree);
+ i915_sched_init(&rq->sched);
INIT_LIST_HEAD(&rq->active_list);
rq->i915 = i915;
@@ -777,8 +777,8 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
/* Make sure we didn't add ourselves to external state before freeing */
GEM_BUG_ON(!list_empty(&rq->active_list));
- GEM_BUG_ON(!list_empty(&rq->priotree.signalers_list));
- GEM_BUG_ON(!list_empty(&rq->priotree.waiters_list));
+ GEM_BUG_ON(!list_empty(&rq->sched.signalers_list));
+ GEM_BUG_ON(!list_empty(&rq->sched.waiters_list));
kmem_cache_free(i915->requests, rq);
err_unreserve:
@@ -800,9 +800,9 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
return 0;
if (to->engine->schedule) {
- ret = i915_priotree_add_dependency(to->i915,
- &to->priotree,
- &from->priotree);
+ ret = i915_sched_add_dependency(to->i915,
+ &to->sched,
+ &from->sched);
if (ret < 0)
return ret;
}
@@ -1033,10 +1033,10 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
i915_sw_fence_await_sw_fence(&request->submit, &prev->submit,
&request->submitq);
if (engine->schedule)
- __i915_priotree_add_dependency(&request->priotree,
- &prev->priotree,
- &request->dep,
- 0);
+ __i915_sched_add_dependency(&request->sched,
+ &prev->sched,
+ &request->dep,
+ 0);
}
spin_lock_irq(&timeline->lock);
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index e6f7c5f4ec7f..5d6619a245ba 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -117,7 +117,7 @@ struct i915_request {
* to retirement), i.e. bidirectional dependency information for the
* request not tied to individual fences.
*/
- struct i915_priotree priotree;
+ struct i915_sched sched;
struct i915_dependency dep;
/**
@@ -306,10 +306,10 @@ static inline bool i915_request_started(const struct i915_request *rq)
seqno - 1);
}
-static inline bool i915_priotree_signaled(const struct i915_priotree *pt)
+static inline bool i915_sched_signaled(const struct i915_sched *pt)
{
const struct i915_request *rq =
- container_of(pt, const struct i915_request, priotree);
+ container_of(pt, const struct i915_request, sched);
return i915_request_completed(rq);
}
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index bd588f06ce23..b34fca3ba17f 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -33,7 +33,7 @@ enum {
* at various points to reorder the requests whilst keeping the requests
* in order with respect to their various dependencies.
*/
-struct i915_priotree {
+struct i915_sched {
struct list_head signalers_list; /* those before us, we depend upon */
struct list_head waiters_list; /* those after us, they depend upon us */
struct list_head link;
@@ -41,7 +41,7 @@ struct i915_priotree {
};
struct i915_dependency {
- struct i915_priotree *signaler;
+ struct i915_sched *signaler;
struct list_head signal_link;
struct list_head wait_link;
struct list_head dfs_link;
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 1a8370779bbb..b542b1a4dddc 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1123,7 +1123,7 @@ static void print_request(struct drm_printer *m,
rq->global_seqno,
i915_request_completed(rq) ? "!" : "",
rq->fence.context, rq->fence.seqno,
- rq->priotree.priority,
+ rq->sched.priority,
jiffies_to_msecs(jiffies - rq->emitted_jiffies),
name);
}
@@ -1367,7 +1367,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
struct i915_priolist *p =
rb_entry(rb, typeof(*p), node);
- list_for_each_entry(rq, &p->requests, priotree.link)
+ list_for_each_entry(rq, &p->requests, sched.link)
print_request(m, rq, "\t\tQ ");
}
spin_unlock_irq(&engine->timeline->lock);
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index 97121230656c..0755f5cae950 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -659,7 +659,7 @@ static void port_assign(struct execlist_port *port, struct i915_request *rq)
static inline int rq_prio(const struct i915_request *rq)
{
- return rq->priotree.priority;
+ return rq->sched.priority;
}
static inline int port_prio(const struct execlist_port *port)
@@ -706,11 +706,11 @@ static void guc_dequeue(struct intel_engine_cs *engine)
struct i915_priolist *p = to_priolist(rb);
struct i915_request *rq, *rn;
- list_for_each_entry_safe(rq, rn, &p->requests, priotree.link) {
+ list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
if (last && rq->ctx != last->ctx) {
if (port == last_port) {
__list_del_many(&p->requests,
- &rq->priotree.link);
+ &rq->sched.link);
goto done;
}
@@ -719,7 +719,7 @@ static void guc_dequeue(struct intel_engine_cs *engine)
port++;
}
- INIT_LIST_HEAD(&rq->priotree.link);
+ INIT_LIST_HEAD(&rq->sched.link);
__i915_request_submit(rq);
trace_i915_request_in(rq, port_index(port, execlists));
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 4f728587a756..01f356cb3e25 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -177,7 +177,7 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb)
static inline int rq_prio(const struct i915_request *rq)
{
- return rq->priotree.priority;
+ return rq->sched.priority;
}
static inline bool need_preempt(const struct intel_engine_cs *engine,
@@ -258,7 +258,7 @@ intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
static struct i915_priolist *
lookup_priolist(struct intel_engine_cs *engine,
- struct i915_priotree *pt,
+ struct i915_sched *sched,
int prio)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -344,10 +344,10 @@ static void __unwind_incomplete_requests(struct intel_engine_cs *engine)
GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
if (rq_prio(rq) != last_prio) {
last_prio = rq_prio(rq);
- p = lookup_priolist(engine, &rq->priotree, last_prio);
+ p = lookup_priolist(engine, &rq->sched, last_prio);
}
- list_add(&rq->priotree.link, &p->requests);
+ list_add(&rq->sched.link, &p->requests);
}
}
@@ -654,7 +654,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
struct i915_priolist *p = to_priolist(rb);
struct i915_request *rq, *rn;
- list_for_each_entry_safe(rq, rn, &p->requests, priotree.link) {
+ list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
/*
* Can we combine this request with the current port?
* It has to be the same context/ringbuffer and not
@@ -674,7 +674,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
*/
if (port == last_port) {
__list_del_many(&p->requests,
- &rq->priotree.link);
+ &rq->sched.link);
goto done;
}
@@ -688,7 +688,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
if (ctx_single_port_submission(last->ctx) ||
ctx_single_port_submission(rq->ctx)) {
__list_del_many(&p->requests,
- &rq->priotree.link);
+ &rq->sched.link);
goto done;
}
@@ -701,7 +701,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
GEM_BUG_ON(port_isset(port));
}
- INIT_LIST_HEAD(&rq->priotree.link);
+ INIT_LIST_HEAD(&rq->sched.link);
__i915_request_submit(rq);
trace_i915_request_in(rq, port_index(port, execlists));
last = rq;
@@ -882,8 +882,8 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
while (rb) {
struct i915_priolist *p = to_priolist(rb);
- list_for_each_entry_safe(rq, rn, &p->requests, priotree.link) {
- INIT_LIST_HEAD(&rq->priotree.link);
+ list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
+ INIT_LIST_HEAD(&rq->sched.link);
dma_fence_set_error(&rq->fence, -EIO);
__i915_request_submit(rq);
@@ -1116,10 +1116,11 @@ static void execlists_submission_tasklet(unsigned long data)
}
static void queue_request(struct intel_engine_cs *engine,
- struct i915_priotree *pt,
+ struct i915_sched *sched,
int prio)
{
- list_add_tail(&pt->link, &lookup_priolist(engine, pt, prio)->requests);
+ list_add_tail(&sched->link,
+ &lookup_priolist(engine, sched, prio)->requests);
}
static void __submit_queue(struct intel_engine_cs *engine, int prio)
@@ -1142,24 +1143,24 @@ static void execlists_submit_request(struct i915_request *request)
/* Will be called from irq-context when using foreign fences. */
spin_lock_irqsave(&engine->timeline->lock, flags);
- queue_request(engine, &request->priotree, rq_prio(request));
+ queue_request(engine, &request->sched, rq_prio(request));
submit_queue(engine, rq_prio(request));
GEM_BUG_ON(!engine->execlists.first);
- GEM_BUG_ON(list_empty(&request->priotree.link));
+ GEM_BUG_ON(list_empty(&request->sched.link));
spin_unlock_irqrestore(&engine->timeline->lock, flags);
}
-static struct i915_request *pt_to_request(struct i915_priotree *pt)
+static struct i915_request *sched_to_request(struct i915_sched *sched)
{
- return container_of(pt, struct i915_request, priotree);
+ return container_of(sched, struct i915_request, sched);
}
static struct intel_engine_cs *
-pt_lock_engine(struct i915_priotree *pt, struct intel_engine_cs *locked)
+sched_lock_engine(struct i915_sched *sched, struct intel_engine_cs *locked)
{
- struct intel_engine_cs *engine = pt_to_request(pt)->engine;
+ struct intel_engine_cs *engine = sched_to_request(sched)->engine;
GEM_BUG_ON(!locked);
@@ -1183,23 +1184,23 @@ static void execlists_schedule(struct i915_request *request, int prio)
if (i915_request_completed(request))
return;
- if (prio <= READ_ONCE(request->priotree.priority))
+ if (prio <= READ_ONCE(request->sched.priority))
return;
/* Need BKL in order to use the temporary link inside i915_dependency */
lockdep_assert_held(&request->i915->drm.struct_mutex);
- stack.signaler = &request->priotree;
+ stack.signaler = &request->sched;
list_add(&stack.dfs_link, &dfs);
/*
* Recursively bump all dependent priorities to match the new request.
*
* A naive approach would be to use recursion:
- * static void update_priorities(struct i915_priotree *pt, prio) {
- * list_for_each_entry(dep, &pt->signalers_list, signal_link)
+ * static void update_priorities(struct i915_sched *sched, prio) {
+ * list_for_each_entry(dep, &sched->signalers_list, signal_link)
* update_priorities(dep->signal, prio)
- * queue_request(pt);
+ * queue_request(sched);
* }
* but that may have unlimited recursion depth and so runs a very
* real risk of overunning the kernel stack. Instead, we build
@@ -1211,7 +1212,7 @@ static void execlists_schedule(struct i915_request *request, int prio)
* last element in the list is the request we must execute first.
*/
list_for_each_entry(dep, &dfs, dfs_link) {
- struct i915_priotree *pt = dep->signaler;
+ struct i915_sched *sched = dep->signaler;
/*
* Within an engine, there can be no cycle, but we may
@@ -1219,13 +1220,13 @@ static void execlists_schedule(struct i915_request *request, int prio)
* (redundant dependencies are not eliminated) and across
* engines.
*/
- list_for_each_entry(p, &pt->signalers_list, signal_link) {
+ list_for_each_entry(p, &sched->signalers_list, signal_link) {
GEM_BUG_ON(p == dep); /* no cycles! */
- if (i915_priotree_signaled(p->signaler))
+ if (i915_sched_signaled(p->signaler))
continue;
- GEM_BUG_ON(p->signaler->priority < pt->priority);
+ GEM_BUG_ON(p->signaler->priority < sched->priority);
if (prio > READ_ONCE(p->signaler->priority))
list_move_tail(&p->dfs_link, &dfs);
}
@@ -1237,9 +1238,9 @@ static void execlists_schedule(struct i915_request *request, int prio)
* execlists_submit_request()), we can set our own priority and skip
* acquiring the engine locks.
*/
- if (request->priotree.priority == I915_PRIORITY_INVALID) {
- GEM_BUG_ON(!list_empty(&request->priotree.link));
- request->priotree.priority = prio;
+ if (request->sched.priority == I915_PRIORITY_INVALID) {
+ GEM_BUG_ON(!list_empty(&request->sched.link));
+ request->sched.priority = prio;
if (stack.dfs_link.next == stack.dfs_link.prev)
return;
__list_del_entry(&stack.dfs_link);
@@ -1250,23 +1251,23 @@ static void execlists_schedule(struct i915_request *request, int prio)
/* Fifo and depth-first replacement ensure our deps execute before us */
list_for_each_entry_safe_reverse(dep, p, &dfs, dfs_link) {
- struct i915_priotree *pt = dep->signaler;
+ struct i915_sched *sched = dep->signaler;
INIT_LIST_HEAD(&dep->dfs_link);
- engine = pt_lock_engine(pt, engine);
+ engine = sched_lock_engine(sched, engine);
- if (prio <= pt->priority)
+ if (prio <= sched->priority)
continue;
- pt->priority = prio;
- if (!list_empty(&pt->link)) {
- __list_del_entry(&pt->link);
- queue_request(engine, pt, prio);
+ sched->priority = prio;
+ if (!list_empty(&sched->link)) {
+ __list_del_entry(&sched->link);
+ queue_request(engine, sched, prio);
}
if (prio > engine->execlists.queue_priority &&
- i915_sw_fence_done(&pt_to_request(pt)->submit))
+ i915_sw_fence_done(&sched_to_request(sched)->submit))
__submit_queue(engine, prio);
}
--
2.17.0
* [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
2018-04-17 14:31 ` [PATCH v2 2/3] drm/i915: Rename priotree to sched Chris Wilson
@ 2018-04-17 14:31 ` Chris Wilson
2018-04-18 9:41 ` [PATCH v2] " Chris Wilson
2018-04-18 9:46 ` [PATCH v2 3/3] " Joonas Lahtinen
2018-04-17 15:43 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers Patchwork
` (12 subsequent siblings)
14 siblings, 2 replies; 21+ messages in thread
From: Chris Wilson @ 2018-04-17 14:31 UTC (permalink / raw)
To: intel-gfx
Today we only want to pass along the priority to engine->schedule(), but
in the future we want much more control over the various aspects of the
GPU during a context's execution, for example controlling the frequency
allowed. As we need an ever-growing number of parameters for scheduling,
move them into a struct for convenience.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/gvt/scheduler.c | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 3 ++-
drivers/gpu/drm/i915/i915_gem.c | 18 +++++++++--------
drivers/gpu/drm/i915/i915_gem_context.c | 8 ++++----
drivers/gpu/drm/i915/i915_gem_context.h | 13 +-----------
drivers/gpu/drm/i915/i915_gpu_error.c | 8 ++++----
drivers/gpu/drm/i915/i915_gpu_error.h | 5 +++--
drivers/gpu/drm/i915/i915_request.c | 4 ++--
drivers/gpu/drm/i915/i915_request.h | 1 +
drivers/gpu/drm/i915/i915_scheduler.h | 17 +++++++++++++++-
drivers/gpu/drm/i915/intel_display.c | 4 +++-
drivers/gpu/drm/i915/intel_engine_cs.c | 18 ++++++++++++++---
drivers/gpu/drm/i915/intel_guc_submission.c | 2 +-
drivers/gpu/drm/i915/intel_lrc.c | 20 ++++++++++---------
drivers/gpu/drm/i915/intel_ringbuffer.h | 4 +++-
.../gpu/drm/i915/selftests/intel_hangcheck.c | 4 ++--
drivers/gpu/drm/i915/selftests/intel_lrc.c | 8 +++++---
17 files changed, 84 insertions(+), 55 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 638abe84857c..f3d21849b0cb 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -1135,7 +1135,7 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
return PTR_ERR(s->shadow_ctx);
if (HAS_LOGICAL_RING_PREEMPTION(vgpu->gvt->dev_priv))
- s->shadow_ctx->priority = INT_MAX;
+ s->shadow_ctx->sched.priority = INT_MAX;
bitmap_zero(s->shadow_ctx_desc_updated, I915_NUM_ENGINES);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8e8667d9b084..028691108125 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -75,6 +75,7 @@
#include "i915_gem_timeline.h"
#include "i915_gpu_error.h"
#include "i915_request.h"
+#include "i915_scheduler.h"
#include "i915_vma.h"
#include "intel_gvt.h"
@@ -3158,7 +3159,7 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
struct intel_rps_client *rps);
int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
unsigned int flags,
- int priority);
+ const struct i915_sched_attr *attr);
#define I915_PRIORITY_DISPLAY I915_PRIORITY_MAX
int __must_check
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4c9d2a6f7d28..795ca83aed7a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -564,7 +564,8 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
return timeout;
}
-static void __fence_set_priority(struct dma_fence *fence, int prio)
+static void __fence_set_priority(struct dma_fence *fence,
+ const struct i915_sched_attr *attr)
{
struct i915_request *rq;
struct intel_engine_cs *engine;
@@ -577,11 +578,12 @@ static void __fence_set_priority(struct dma_fence *fence, int prio)
rcu_read_lock();
if (engine->schedule)
- engine->schedule(rq, prio);
+ engine->schedule(rq, attr);
rcu_read_unlock();
}
-static void fence_set_priority(struct dma_fence *fence, int prio)
+static void fence_set_priority(struct dma_fence *fence,
+ const struct i915_sched_attr *attr)
{
/* Recurse once into a fence-array */
if (dma_fence_is_array(fence)) {
@@ -589,16 +591,16 @@ static void fence_set_priority(struct dma_fence *fence, int prio)
int i;
for (i = 0; i < array->num_fences; i++)
- __fence_set_priority(array->fences[i], prio);
+ __fence_set_priority(array->fences[i], attr);
} else {
- __fence_set_priority(fence, prio);
+ __fence_set_priority(fence, attr);
}
}
int
i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
unsigned int flags,
- int prio)
+ const struct i915_sched_attr *attr)
{
struct dma_fence *excl;
@@ -613,7 +615,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
return ret;
for (i = 0; i < count; i++) {
- fence_set_priority(shared[i], prio);
+ fence_set_priority(shared[i], attr);
dma_fence_put(shared[i]);
}
@@ -623,7 +625,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
}
if (excl) {
- fence_set_priority(excl, prio);
+ fence_set_priority(excl, attr);
dma_fence_put(excl);
}
return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 9b3834a846e8..74435affe23f 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -281,7 +281,7 @@ __create_hw_context(struct drm_i915_private *dev_priv,
kref_init(&ctx->ref);
list_add_tail(&ctx->link, &dev_priv->contexts.list);
ctx->i915 = dev_priv;
- ctx->priority = I915_PRIORITY_NORMAL;
+ ctx->sched.priority = I915_PRIORITY_NORMAL;
INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL);
INIT_LIST_HEAD(&ctx->handles_list);
@@ -431,7 +431,7 @@ i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio)
return ctx;
i915_gem_context_clear_bannable(ctx);
- ctx->priority = prio;
+ ctx->sched.priority = prio;
ctx->ring_size = PAGE_SIZE;
GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
@@ -753,7 +753,7 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
args->value = i915_gem_context_is_bannable(ctx);
break;
case I915_CONTEXT_PARAM_PRIORITY:
- args->value = ctx->priority;
+ args->value = ctx->sched.priority;
break;
default:
ret = -EINVAL;
@@ -826,7 +826,7 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
!capable(CAP_SYS_NICE))
ret = -EPERM;
else
- ctx->priority = priority;
+ ctx->sched.priority = priority;
}
break;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 7854262ddfd9..b12a8a8c5af9 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -137,18 +137,7 @@ struct i915_gem_context {
*/
u32 user_handle;
- /**
- * @priority: execution and service priority
- *
- * All clients are equal, but some are more equal than others!
- *
- * Requests from a context with a greater (more positive) value of
- * @priority will be executed before those with a lower @priority
- * value, forming a simple QoS.
- *
- * The &drm_i915_private.kernel_context is assigned the lowest priority.
- */
- int priority;
+ struct i915_sched_attr sched;
/** ggtt_offset_bias: placement restriction for context objects */
u32 ggtt_offset_bias;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 6b5b9b3ded02..671ffa37614e 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -411,7 +411,7 @@ static void error_print_request(struct drm_i915_error_state_buf *m,
err_printf(m, "%s pid %d, ban score %d, seqno %8x:%08x, prio %d, emitted %dms ago, head %08x, tail %08x\n",
prefix, erq->pid, erq->ban_score,
- erq->context, erq->seqno, erq->priority,
+ erq->context, erq->seqno, erq->sched_attr.priority,
jiffies_to_msecs(jiffies - erq->jiffies),
erq->head, erq->tail);
}
@@ -422,7 +422,7 @@ static void error_print_context(struct drm_i915_error_state_buf *m,
{
err_printf(m, "%s%s[%d] user_handle %d hw_id %d, prio %d, ban score %d%s guilty %d active %d\n",
header, ctx->comm, ctx->pid, ctx->handle, ctx->hw_id,
- ctx->priority, ctx->ban_score, bannable(ctx),
+ ctx->sched_attr.priority, ctx->ban_score, bannable(ctx),
ctx->guilty, ctx->active);
}
@@ -1278,7 +1278,7 @@ static void record_request(struct i915_request *request,
struct drm_i915_error_request *erq)
{
erq->context = request->ctx->hw_id;
- erq->priority = request->sched.priority;
+ erq->sched_attr = request->sched.attr;
erq->ban_score = atomic_read(&request->ctx->ban_score);
erq->seqno = request->global_seqno;
erq->jiffies = request->emitted_jiffies;
@@ -1372,7 +1372,7 @@ static void record_context(struct drm_i915_error_context *e,
e->handle = ctx->user_handle;
e->hw_id = ctx->hw_id;
- e->priority = ctx->priority;
+ e->sched_attr = ctx->sched;
e->ban_score = atomic_read(&ctx->ban_score);
e->bannable = i915_gem_context_is_bannable(ctx);
e->guilty = atomic_read(&ctx->guilty_count);
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h
index c05b6034d718..5d6fdcbc092c 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.h
+++ b/drivers/gpu/drm/i915/i915_gpu_error.h
@@ -20,6 +20,7 @@
#include "i915_gem.h"
#include "i915_gem_gtt.h"
#include "i915_params.h"
+#include "i915_scheduler.h"
struct drm_i915_private;
struct intel_overlay_error_state;
@@ -122,11 +123,11 @@ struct i915_gpu_state {
pid_t pid;
u32 handle;
u32 hw_id;
- int priority;
int ban_score;
int active;
int guilty;
bool bannable;
+ struct i915_sched_attr sched_attr;
} context;
struct drm_i915_error_object {
@@ -147,11 +148,11 @@ struct i915_gpu_state {
long jiffies;
pid_t pid;
u32 context;
- int priority;
int ban_score;
u32 seqno;
u32 head;
u32 tail;
+ struct i915_sched_attr sched_attr;
} *requests, execlist[EXECLIST_MAX_PORTS];
unsigned int num_ports;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 0939c120b82c..c3a908436510 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -191,7 +191,7 @@ i915_sched_init(struct i915_sched *pt)
INIT_LIST_HEAD(&pt->signalers_list);
INIT_LIST_HEAD(&pt->waiters_list);
INIT_LIST_HEAD(&pt->link);
- pt->priority = I915_PRIORITY_INVALID;
+ pt->attr.priority = I915_PRIORITY_INVALID;
}
static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
@@ -1062,7 +1062,7 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
*/
rcu_read_lock();
if (engine->schedule)
- engine->schedule(request, request->ctx->priority);
+ engine->schedule(request, &request->ctx->sched);
rcu_read_unlock();
local_bh_disable();
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index b34fca3ba17f..4cae1edeb40d 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -19,6 +19,21 @@ enum {
I915_PRIORITY_INVALID = INT_MIN
};
+struct i915_sched_attr {
+ /**
+ * @priority: execution and service priority
+ *
+ * All clients are equal, but some are more equal than others!
+ *
+ * Requests from a context with a greater (more positive) value of
+ * @priority will be executed before those with a lower @priority
+ * value, forming a simple QoS.
+ *
+ * The &drm_i915_private.kernel_context is assigned the lowest priority.
+ */
+ int priority;
+};
+
/*
* "People assume that time is a strict progression of cause to effect, but
* actually, from a nonlinear, non-subjective viewpoint, it's more like a big
@@ -37,7 +52,7 @@ struct i915_sched {
struct list_head signalers_list; /* those before us, we depend upon */
struct list_head waiters_list; /* those after us, they depend upon us */
struct list_head link;
- int priority;
+ struct i915_sched_attr attr;
};
struct i915_dependency {
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 020900e08d42..7c34b8c854be 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -12839,7 +12839,9 @@ intel_prepare_plane_fb(struct drm_plane *plane,
ret = intel_plane_pin_fb(to_intel_plane_state(new_state));
- i915_gem_object_wait_priority(obj, 0, I915_PRIORITY_DISPLAY);
+ i915_gem_object_wait_priority(obj, 0, &(struct i915_sched_attr){
+ .priority = I915_PRIORITY_DISPLAY,
+ });
mutex_unlock(&dev_priv->drm.struct_mutex);
i915_gem_object_unpin_pages(obj);
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index b542b1a4dddc..be608f7111f5 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1113,17 +1113,29 @@ unsigned int intel_engines_has_context_isolation(struct drm_i915_private *i915)
return which;
}
+static void print_sched_attr(struct drm_printer *m,
+ const struct drm_i915_private *i915,
+ const struct i915_sched_attr *attr)
+{
+ if (attr->priority == I915_PRIORITY_INVALID)
+ return;
+
+ drm_printf(m, "prio=%d", attr->priority);
+}
+
static void print_request(struct drm_printer *m,
struct i915_request *rq,
const char *prefix)
{
const char *name = rq->fence.ops->get_timeline_name(&rq->fence);
- drm_printf(m, "%s%x%s [%llx:%x] prio=%d @ %dms: %s\n", prefix,
+ drm_printf(m, "%s%x%s [%llx:%x] ",
+ prefix,
rq->global_seqno,
i915_request_completed(rq) ? "!" : "",
- rq->fence.context, rq->fence.seqno,
- rq->sched.priority,
+ rq->fence.context, rq->fence.seqno);
+ print_sched_attr(m, rq->i915, &rq->sched.attr);
+ drm_printf(m, " @ %dms: %s\n",
jiffies_to_msecs(jiffies - rq->emitted_jiffies),
name);
}
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index 0755f5cae950..02da05875aa7 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -659,7 +659,7 @@ static void port_assign(struct execlist_port *port, struct i915_request *rq)
static inline int rq_prio(const struct i915_request *rq)
{
- return rq->sched.priority;
+ return rq->sched.attr.priority;
}
static inline int port_prio(const struct execlist_port *port)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 01f356cb3e25..6be8ccd18b72 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -177,7 +177,7 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb)
static inline int rq_prio(const struct i915_request *rq)
{
- return rq->sched.priority;
+ return rq->sched.attr.priority;
}
static inline bool need_preempt(const struct intel_engine_cs *engine,
@@ -1172,11 +1172,13 @@ sched_lock_engine(struct i915_sched *sched, struct intel_engine_cs *locked)
return engine;
}
-static void execlists_schedule(struct i915_request *request, int prio)
+static void execlists_schedule(struct i915_request *request,
+ const struct i915_sched_attr *attr)
{
struct intel_engine_cs *engine;
struct i915_dependency *dep, *p;
struct i915_dependency stack;
+ const int prio = attr->priority;
LIST_HEAD(dfs);
GEM_BUG_ON(prio == I915_PRIORITY_INVALID);
@@ -1184,7 +1186,7 @@ static void execlists_schedule(struct i915_request *request, int prio)
if (i915_request_completed(request))
return;
- if (prio <= READ_ONCE(request->sched.priority))
+ if (prio <= READ_ONCE(request->sched.attr.priority))
return;
/* Need BKL in order to use the temporary link inside i915_dependency */
@@ -1226,8 +1228,8 @@ static void execlists_schedule(struct i915_request *request, int prio)
if (i915_sched_signaled(p->signaler))
continue;
- GEM_BUG_ON(p->signaler->priority < sched->priority);
- if (prio > READ_ONCE(p->signaler->priority))
+ GEM_BUG_ON(p->signaler->attr.priority < sched->attr.priority);
+ if (prio > READ_ONCE(p->signaler->attr.priority))
list_move_tail(&p->dfs_link, &dfs);
}
}
@@ -1238,9 +1240,9 @@ static void execlists_schedule(struct i915_request *request, int prio)
* execlists_submit_request()), we can set our own priority and skip
* acquiring the engine locks.
*/
- if (request->sched.priority == I915_PRIORITY_INVALID) {
+ if (request->sched.attr.priority == I915_PRIORITY_INVALID) {
GEM_BUG_ON(!list_empty(&request->sched.link));
- request->sched.priority = prio;
+ request->sched.attr = *attr;
if (stack.dfs_link.next == stack.dfs_link.prev)
return;
__list_del_entry(&stack.dfs_link);
@@ -1257,10 +1259,10 @@ static void execlists_schedule(struct i915_request *request, int prio)
engine = sched_lock_engine(sched, engine);
- if (prio <= sched->priority)
+ if (prio <= sched->attr.priority)
continue;
- sched->priority = prio;
+ sched->attr.priority = prio;
if (!list_empty(&sched->link)) {
__list_del_entry(&sched->link);
queue_request(engine, sched, prio);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 717041640135..c5e27905b0e1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -14,6 +14,7 @@
#include "intel_gpu_commands.h"
struct drm_printer;
+struct i915_sched_attr;
#define I915_CMD_HASH_ORDER 9
@@ -460,7 +461,8 @@ struct intel_engine_cs {
*
* Called under the struct_mutex.
*/
- void (*schedule)(struct i915_request *request, int priority);
+ void (*schedule)(struct i915_request *request,
+ const struct i915_sched_attr *attr);
/*
* Cancel all requests on the hardware, or queued for execution.
diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
index 24f913f26a7b..f7ee54e109ae 100644
--- a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
@@ -628,7 +628,7 @@ static int active_engine(void *data)
}
if (arg->flags & TEST_PRIORITY)
- ctx[idx]->priority =
+ ctx[idx]->sched.priority =
i915_prandom_u32_max_state(512, &prng);
rq[idx] = i915_request_get(new);
@@ -683,7 +683,7 @@ static int __igt_reset_engines(struct drm_i915_private *i915,
return err;
if (flags & TEST_PRIORITY)
- h.ctx->priority = 1024;
+ h.ctx->sched.priority = 1024;
}
for_each_engine(engine, i915, id) {
diff --git a/drivers/gpu/drm/i915/selftests/intel_lrc.c b/drivers/gpu/drm/i915/selftests/intel_lrc.c
index 0481e2e01146..ee7e22d18ff8 100644
--- a/drivers/gpu/drm/i915/selftests/intel_lrc.c
+++ b/drivers/gpu/drm/i915/selftests/intel_lrc.c
@@ -335,12 +335,12 @@ static int live_preempt(void *arg)
ctx_hi = kernel_context(i915);
if (!ctx_hi)
goto err_spin_lo;
- ctx_hi->priority = I915_CONTEXT_MAX_USER_PRIORITY;
+ ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY;
ctx_lo = kernel_context(i915);
if (!ctx_lo)
goto err_ctx_hi;
- ctx_lo->priority = I915_CONTEXT_MIN_USER_PRIORITY;
+ ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY;
for_each_engine(engine, i915, id) {
struct i915_request *rq;
@@ -407,6 +407,7 @@ static int live_late_preempt(void *arg)
struct i915_gem_context *ctx_hi, *ctx_lo;
struct spinner spin_hi, spin_lo;
struct intel_engine_cs *engine;
+ struct i915_sched_attr attr = {};
enum intel_engine_id id;
int err = -ENOMEM;
@@ -458,7 +459,8 @@ static int live_late_preempt(void *arg)
goto err_wedged;
}
- engine->schedule(rq, I915_PRIORITY_MAX);
+ attr.priority = I915_PRIORITY_MAX;
+ engine->schedule(rq, &attr);
if (!wait_for_spinner(&spin_hi, rq)) {
pr_err("High priority context failed to preempt the low priority context\n");
--
2.17.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
2018-04-17 14:31 ` [PATCH v2 2/3] drm/i915: Rename priotree to sched Chris Wilson
2018-04-17 14:31 ` [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct Chris Wilson
@ 2018-04-17 15:43 ` Patchwork
2018-04-17 15:44 ` ✗ Fi.CI.SPARSE: " Patchwork
` (11 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-17 15:43 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
22ae52c5d491 drm/i915: Move the priotree struct to its own headers
-:71: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#71:
new file mode 100644
-:76: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#76: FILE: drivers/gpu/drm/i915/i915_scheduler.h:1:
+/*
total: 0 errors, 2 warnings, 0 checks, 103 lines checked
6e113aaa4db4 drm/i915: Rename priotree to sched
f3776d9abfc3 drm/i915: Pack params to engine->schedule() into a struct
* ✗ Fi.CI.SPARSE: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (2 preceding siblings ...)
2018-04-17 15:43 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers Patchwork
@ 2018-04-17 15:44 ` Patchwork
2018-04-17 15:50 ` ✓ Fi.CI.BAT: success " Patchwork
` (10 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-17 15:44 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : warning
== Summary ==
$ dim sparse origin/drm-tip
Commit: drm/i915: Move the priotree struct to its own headers
Okay!
Commit: drm/i915: Rename priotree to sched
Okay!
Commit: drm/i915: Pack params to engine->schedule() into a struct
-drivers/gpu/drm/i915/selftests/../i915_drv.h:2207:33: warning: constant 0xffffea0000000000 is so big it is unsigned long
-drivers/gpu/drm/i915/selftests/../i915_drv.h:3655:16: warning: expression using sizeof(void)
+drivers/gpu/drm/i915/selftests/../i915_drv.h:2208:33: warning: constant 0xffffea0000000000 is so big it is unsigned long
+drivers/gpu/drm/i915/selftests/../i915_drv.h:3656:16: warning: expression using sizeof(void)
* ✓ Fi.CI.BAT: success for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (3 preceding siblings ...)
2018-04-17 15:44 ` ✗ Fi.CI.SPARSE: " Patchwork
@ 2018-04-17 15:50 ` Patchwork
2018-04-17 17:09 ` ✓ Fi.CI.IGT: " Patchwork
` (9 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-17 15:50 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : success
== Summary ==
= CI Bug Log - changes from CI_DRM_4059 -> Patchwork_8708 =
== Summary - WARNING ==
Minor unknown changes coming with Patchwork_8708 need to be verified
manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_8708, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
External URL: https://patchwork.freedesktop.org/api/1.0/series/41827/revisions/1/mbox/
== Possible new issues ==
Here are the unknown changes that may have been introduced in Patchwork_8708:
=== IGT changes ===
==== Warnings ====
igt@core_auth@basic-auth:
fi-kbl-r: PASS -> NOTRUN +257
igt@drv_getparams_basic@basic-subslice-total:
fi-snb-2600: PASS -> NOTRUN +244
igt@drv_hangman@error-state-basic:
fi-elk-e7500: PASS -> NOTRUN +181
igt@gem_busy@basic-busy-default:
fi-glk-j4005: PASS -> NOTRUN +255
igt@gem_close_race@basic-process:
fi-ivb-3770: PASS -> NOTRUN +251
igt@gem_ctx_param@basic:
fi-gdg-551: SKIP -> NOTRUN +107
igt@gem_exec_basic@basic-bsd1:
fi-cfl-u: SKIP -> NOTRUN +25
igt@gem_exec_basic@basic-vebox:
fi-ivb-3770: SKIP -> NOTRUN +32
igt@gem_exec_basic@gtt-bsd:
fi-bwr-2160: SKIP -> NOTRUN +104
igt@gem_exec_basic@gtt-bsd2:
fi-kbl-7500u: SKIP -> NOTRUN +23
fi-cnl-y3: SKIP -> NOTRUN +25
igt@gem_exec_basic@readonly-bsd:
fi-pnv-d510: SKIP -> NOTRUN +63
igt@gem_exec_basic@readonly-bsd1:
fi-snb-2520m: SKIP -> NOTRUN +39
igt@gem_exec_flush@basic-batch-kernel-default-cmd:
fi-bxt-dsi: SKIP -> NOTRUN +29
igt@gem_exec_flush@basic-batch-kernel-default-wb:
fi-kbl-7567u: PASS -> NOTRUN +264
igt@gem_exec_flush@basic-uc-rw-default:
fi-byt-j1900: PASS -> NOTRUN +249
igt@gem_exec_gttfill@basic:
fi-skl-gvtdvm: SKIP -> NOTRUN +22
igt@gem_exec_reloc@basic-cpu-active:
fi-bsw-n3050: PASS -> NOTRUN +238
igt@gem_exec_reloc@basic-write-cpu-noreloc:
fi-skl-6770hq: PASS -> NOTRUN +264
igt@gem_exec_reloc@basic-write-gtt-noreloc:
fi-ivb-3520m: PASS -> NOTRUN +253
igt@gem_exec_store@basic-bsd1:
fi-kbl-r: SKIP -> NOTRUN +26
igt@gem_exec_store@basic-bsd2:
fi-hsw-4770: SKIP -> NOTRUN +26
igt@gem_flink_basic@basic:
fi-gdg-551: PASS -> NOTRUN +175
igt@gem_mmap@basic-small-bo:
fi-skl-gvtdvm: PASS -> NOTRUN +261
igt@gem_mmap_gtt@basic-read:
fi-cnl-y3: PASS -> NOTRUN +258
igt@gem_mmap_gtt@basic-read-write-distinct:
fi-hsw-4770: PASS -> NOTRUN +257
igt@gem_mmap_gtt@basic-small-bo:
fi-kbl-7500u: PASS -> NOTRUN +259
igt@gem_mmap_gtt@basic-wc:
fi-pnv-d510: PASS -> NOTRUN +219
igt@gem_mmap_gtt@basic-write:
fi-cfl-8700k: PASS -> NOTRUN +256
igt@gem_mmap_gtt@basic-write-gtt:
fi-blb-e6850: PASS -> NOTRUN +219
igt@gem_ringfill@basic-default-fd:
fi-elk-e7500: SKIP -> NOTRUN +46
igt@gem_sync@basic-store-all:
fi-byt-n2820: PASS -> NOTRUN +245
igt@gem_wait@basic-await-all:
fi-glk-1: PASS -> NOTRUN +256
igt@gem_workarounds@basic-read:
fi-snb-2600: SKIP -> NOTRUN +39
igt@gvt_basic@invalid-placeholder-test:
fi-skl-6260u: SKIP -> NOTRUN +19
igt@kms_addfb_basic@addfb25-bad-modifier:
fi-bdw-gvtdvm: PASS -> NOTRUN +260
igt@kms_addfb_basic@too-high:
fi-bwr-2160: PASS -> NOTRUN +179
igt@kms_addfb_basic@unused-modifier:
fi-bdw-5557u: PASS -> NOTRUN +263
igt@kms_chamelium@common-hpd-after-suspend:
fi-ivb-3520m: SKIP -> NOTRUN +28
igt@kms_chamelium@dp-crc-fast:
fi-skl-guc: SKIP -> NOTRUN +27
igt@kms_chamelium@dp-edid-read:
fi-skl-6770hq: SKIP -> NOTRUN +19
fi-byt-n2820: SKIP -> NOTRUN +38
igt@kms_chamelium@dp-hpd-fast:
fi-ilk-650: SKIP -> NOTRUN +59
igt@kms_chamelium@hdmi-crc-fast:
fi-cfl-s3: SKIP -> NOTRUN +25
fi-bsw-n3050: SKIP -> NOTRUN +45
fi-byt-j1900: SKIP -> NOTRUN +34
igt@kms_chamelium@hdmi-edid-read:
fi-glk-1: SKIP -> NOTRUN +27
fi-blb-e6850: SKIP -> NOTRUN +63
igt@kms_chamelium@vga-edid-read:
fi-cfl-8700k: SKIP -> NOTRUN +27
fi-skl-6600u: SKIP -> NOTRUN +26
igt@kms_flip@basic-flip-vs-dpms:
fi-ilk-650: PASS -> NOTRUN +224
igt@kms_flip@basic-plain-flip:
fi-bxt-j4205: PASS -> NOTRUN +255
igt@kms_force_connector_basic@force-connector-state:
fi-kbl-7567u: SKIP -> NOTRUN +19
igt@kms_force_connector_basic@prune-stale-modes:
fi-glk-j4005: SKIP -> NOTRUN +28
fi-skl-6700k2: SKIP -> NOTRUN +23
igt@kms_pipe_crc_basic@read-crc-pipe-a-frame-sequence:
fi-skl-6600u: PASS -> NOTRUN +257
igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
fi-snb-2520m: PASS -> NOTRUN +244
igt@kms_sink_crc_basic:
fi-bdw-gvtdvm: SKIP -> NOTRUN +23
igt@pm_backlight@basic-brightness:
fi-bxt-j4205: SKIP -> NOTRUN +28
fi-bdw-5557u: SKIP -> NOTRUN +20
igt@pm_rpm@basic-rte:
fi-skl-6260u: PASS -> NOTRUN +264
igt@prime_self_import@basic-llseek-bad:
fi-skl-guc: PASS -> NOTRUN +256
igt@prime_self_import@basic-with_two_bos:
fi-bxt-dsi: PASS -> NOTRUN +254
igt@prime_vgem@basic-busy-default:
fi-cfl-u: PASS -> NOTRUN +258
igt@vgem_basic@create:
fi-cfl-s3: PASS -> NOTRUN +258
igt@vgem_basic@mmap:
fi-skl-6700k2: PASS -> NOTRUN +260
== Known issues ==
Here are the changes found in Patchwork_8708 that come from known issues:
=== IGT changes ===
==== Possible fixes ====
igt@gem_exec_suspend@basic-s3:
fi-ivb-3520m: DMESG-WARN (fdo#106084) -> NOTRUN +1
igt@gem_mmap_gtt@basic-small-bo-tiledx:
fi-gdg-551: FAIL (fdo#102575) -> NOTRUN
igt@gem_ringfill@basic-default-hang:
fi-pnv-d510: DMESG-WARN (fdo#101600) -> NOTRUN
fi-blb-e6850: DMESG-WARN (fdo#101600) -> NOTRUN
igt@kms_chamelium@common-hpd-after-suspend:
fi-kbl-7500u: DMESG-WARN (fdo#102505) -> NOTRUN
igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a-frame-sequence:
fi-elk-e7500: INCOMPLETE (fdo#103989) -> NOTRUN
fdo#101600 https://bugs.freedesktop.org/show_bug.cgi?id=101600
fdo#102505 https://bugs.freedesktop.org/show_bug.cgi?id=102505
fdo#102575 https://bugs.freedesktop.org/show_bug.cgi?id=102575
fdo#103989 https://bugs.freedesktop.org/show_bug.cgi?id=103989
fdo#106084 https://bugs.freedesktop.org/show_bug.cgi?id=106084
== Participating hosts (36 -> 34) ==
Additional (1): fi-cnl-psr
Missing (3): fi-ctg-p8600 fi-ilk-m540 fi-skl-6700hq
== Build changes ==
* Linux: CI_DRM_4059 -> Patchwork_8708
CI_DRM_4059: c1645edc253f2b52a8c94565a75b479a6782e75f @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_4435: ddbe5a4d8bb1780ecf07f72e815062d3bce8ff71 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_8708: f3776d9abfc3a5a073cc6531119697128be256e5 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4435: e60d247eb359f044caf0c09904da14e39d7adca1 @ git://anongit.freedesktop.org/piglit
== Linux commits ==
f3776d9abfc3 drm/i915: Pack params to engine->schedule() into a struct
6e113aaa4db4 drm/i915: Rename priotree to sched
22ae52c5d491 drm/i915: Move the priotree struct to its own headers
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8708/issues.html
* ✓ Fi.CI.IGT: success for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (4 preceding siblings ...)
2018-04-17 15:50 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2018-04-17 17:09 ` Patchwork
2018-04-18 9:15 ` [PATCH v2 1/3] " Joonas Lahtinen
` (8 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-17 17:09 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : success
== Summary ==
= CI Bug Log - changes from CI_DRM_4059_full -> Patchwork_8708_full =
== Summary - WARNING ==
Minor unknown changes coming with Patchwork_8708_full need to be verified
manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_8708_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
External URL: https://patchwork.freedesktop.org/api/1.0/series/41827/revisions/1/mbox/
== Possible new issues ==
Here are the unknown changes that may have been introduced in Patchwork_8708_full:
=== IGT changes ===
==== Warnings ====
igt@gem_busy@extended-parallel-bsd1:
shard-hsw: SKIP -> NOTRUN +890
igt@gem_exec_params@dr1-dirt:
shard-kbl: PASS -> NOTRUN +1940
igt@gem_pread@stolen-uncached:
shard-kbl: SKIP -> NOTRUN +700
igt@gem_pwrite@display:
shard-snb: PASS -> NOTRUN +1377
igt@kms_chv_cursor_fail@pipe-b-256x256-top-edge:
shard-hsw: PASS -> NOTRUN +1783
igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
shard-apl: PASS -> NOTRUN +1834
igt@perf_pmu@busy-start-vcs1:
shard-snb: SKIP -> NOTRUN +1298
igt@prime_vgem@sync-bsd1:
shard-apl: SKIP -> NOTRUN +835
== Known issues ==
Here are the changes found in Patchwork_8708_full that come from known issues:
=== IGT changes ===
==== Possible fixes ====
igt@drv_selftest@mock_breadcrumbs:
shard-hsw: DMESG-FAIL (fdo#106085) -> NOTRUN
shard-snb: DMESG-FAIL (fdo#106085) -> NOTRUN
shard-apl: DMESG-FAIL (fdo#106085) -> NOTRUN
shard-kbl: DMESG-FAIL (fdo#106085) -> NOTRUN
igt@drv_selftest@mock_scatterlist:
shard-hsw: DMESG-WARN (fdo#103667) -> NOTRUN
shard-kbl: DMESG-WARN (fdo#103667) -> NOTRUN
shard-snb: DMESG-WARN (fdo#103667) -> NOTRUN
shard-apl: DMESG-WARN (fdo#103667) -> NOTRUN
igt@gem_ctx_isolation@vcs0-s3:
shard-kbl: INCOMPLETE (fdo#103665) -> NOTRUN
igt@gem_exec_schedule@pi-ringfull-blt:
shard-apl: FAIL (fdo#103158) -> NOTRUN +3
igt@gem_exec_schedule@pi-ringfull-bsd1:
shard-kbl: FAIL (fdo#103158) -> NOTRUN +4
igt@kms_flip@2x-flip-vs-expired-vblank:
shard-hsw: FAIL (fdo#102887) -> NOTRUN
igt@kms_flip@flip-vs-expired-vblank-interruptible:
shard-apl: FAIL (fdo#105363, fdo#102887) -> NOTRUN
igt@kms_flip@modeset-vs-vblank-race:
shard-hsw: FAIL (fdo#103060) -> NOTRUN
igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw:
shard-snb: FAIL (fdo#103167) -> NOTRUN
igt@kms_setmode@basic:
shard-apl: FAIL (fdo#99912) -> NOTRUN
shard-hsw: FAIL (fdo#99912) -> NOTRUN
shard-snb: FAIL (fdo#99912) -> NOTRUN
igt@kms_sysfs_edid_timing:
shard-hsw: WARN (fdo#100047) -> NOTRUN
shard-kbl: FAIL (fdo#100047) -> NOTRUN
igt@prime_vgem@coherency-gtt:
shard-apl: FAIL (fdo#100587) -> NOTRUN +1
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
fdo#100047 https://bugs.freedesktop.org/show_bug.cgi?id=100047
fdo#100587 https://bugs.freedesktop.org/show_bug.cgi?id=100587
fdo#102887 https://bugs.freedesktop.org/show_bug.cgi?id=102887
fdo#103060 https://bugs.freedesktop.org/show_bug.cgi?id=103060
fdo#103158 https://bugs.freedesktop.org/show_bug.cgi?id=103158
fdo#103167 https://bugs.freedesktop.org/show_bug.cgi?id=103167
fdo#103665 https://bugs.freedesktop.org/show_bug.cgi?id=103665
fdo#103667 https://bugs.freedesktop.org/show_bug.cgi?id=103667
fdo#105363 https://bugs.freedesktop.org/show_bug.cgi?id=105363
fdo#106085 https://bugs.freedesktop.org/show_bug.cgi?id=106085
fdo#99912 https://bugs.freedesktop.org/show_bug.cgi?id=99912
== Participating hosts (6 -> 4) ==
Missing (2): shard-glk shard-glkb
== Build changes ==
* Linux: CI_DRM_4059 -> Patchwork_8708
CI_DRM_4059: c1645edc253f2b52a8c94565a75b479a6782e75f @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_4435: ddbe5a4d8bb1780ecf07f72e815062d3bce8ff71 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_8708: f3776d9abfc3a5a073cc6531119697128be256e5 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4435: e60d247eb359f044caf0c09904da14e39d7adca1 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8708/shards.html
* Re: [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (5 preceding siblings ...)
2018-04-17 17:09 ` ✓ Fi.CI.IGT: " Patchwork
@ 2018-04-18 9:15 ` Joonas Lahtinen
2018-04-18 9:19 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] " Patchwork
` (7 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Joonas Lahtinen @ 2018-04-18 9:15 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
Quoting Chris Wilson (2018-04-17 17:31:30)
> Over time the priotree has grown from a sorted list to a more
> complicated structure for propagating constraints along the dependency
> chain to try and resolve priority inversion. Start to segregate this
> information from the rest of the request/fence tracking.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Regards, Joonas
* ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (6 preceding siblings ...)
2018-04-18 9:15 ` [PATCH v2 1/3] " Joonas Lahtinen
@ 2018-04-18 9:19 ` Patchwork
2018-04-18 9:20 ` ✗ Fi.CI.SPARSE: " Patchwork
` (6 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 9:19 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
68659914943f drm/i915: Move the priotree struct to its own headers
-:72: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#72:
new file mode 100644
-:77: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#77: FILE: drivers/gpu/drm/i915/i915_scheduler.h:1:
+/*
total: 0 errors, 2 warnings, 0 checks, 103 lines checked
3727950a05db drm/i915: Rename priotree to sched
f9fa785ad190 drm/i915: Pack params to engine->schedule() into a struct
* ✗ Fi.CI.SPARSE: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (7 preceding siblings ...)
2018-04-18 9:19 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] " Patchwork
@ 2018-04-18 9:20 ` Patchwork
2018-04-18 9:36 ` ✓ Fi.CI.BAT: success " Patchwork
` (5 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 9:20 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : warning
== Summary ==
$ dim sparse origin/drm-tip
Commit: drm/i915: Move the priotree struct to its own headers
Okay!
Commit: drm/i915: Rename priotree to sched
Okay!
Commit: drm/i915: Pack params to engine->schedule() into a struct
-drivers/gpu/drm/i915/selftests/../i915_drv.h:2207:33: warning: constant 0xffffea0000000000 is so big it is unsigned long
-drivers/gpu/drm/i915/selftests/../i915_drv.h:3655:16: warning: expression using sizeof(void)
+drivers/gpu/drm/i915/selftests/../i915_drv.h:2208:33: warning: constant 0xffffea0000000000 is so big it is unsigned long
+drivers/gpu/drm/i915/selftests/../i915_drv.h:3656:16: warning: expression using sizeof(void)
* Re: [PATCH v2 2/3] drm/i915: Rename priotree to sched
2018-04-17 14:31 ` [PATCH v2 2/3] drm/i915: Rename priotree to sched Chris Wilson
@ 2018-04-18 9:29 ` Joonas Lahtinen
0 siblings, 0 replies; 21+ messages in thread
From: Joonas Lahtinen @ 2018-04-18 9:29 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
Quoting Chris Wilson (2018-04-17 17:31:31)
> Having moved the priotree struct into i915_scheduler.h, identify it as
> the scheduling element and rebrand into i915_sched. This becomes more
> useful as we start attaching more information we require to propagate
> through the scheduler.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
"i915_sched_node" might be a less confusing name compared to the DRM
core scheduler.
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Regards, Joonas
* ✓ Fi.CI.BAT: success for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (8 preceding siblings ...)
2018-04-18 9:20 ` ✗ Fi.CI.SPARSE: " Patchwork
@ 2018-04-18 9:36 ` Patchwork
2018-04-18 12:18 ` ✗ Fi.CI.IGT: failure " Patchwork
` (4 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 9:36 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : success
== Summary ==
= CI Bug Log - changes from CI_DRM_4063 -> Patchwork_8719 =
== Summary - SUCCESS ==
No regressions found.
External URL: https://patchwork.freedesktop.org/api/1.0/series/41827/revisions/1/mbox/
== Known issues ==
Here are the changes found in Patchwork_8719 that come from known issues:
=== IGT changes ===
==== Issues hit ====
igt@gem_exec_suspend@basic-s3:
fi-ivb-3520m: PASS -> DMESG-WARN (fdo#106084)
igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
fi-cnl-psr: PASS -> DMESG-WARN (fdo#104951) +1
==== Possible fixes ====
igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
fi-ivb-3520m: DMESG-WARN (fdo#106084) -> PASS +1
fdo#104951 https://bugs.freedesktop.org/show_bug.cgi?id=104951
fdo#106084 https://bugs.freedesktop.org/show_bug.cgi?id=106084
== Participating hosts (34 -> 33) ==
Additional (1): fi-bxt-dsi
Missing (2): fi-ilk-m540 fi-skl-6700hq
== Build changes ==
* Linux: CI_DRM_4063 -> Patchwork_8719
CI_DRM_4063: 9bdf0998d567cbe94f712c8f3e8295fb0446e114 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_4441: 83ba5b7d3bde48b383df41792fc9c955a5a23bdb @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_8719: f9fa785ad1907ac0598c84b45fc6ea1326ad9c01 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4441: e60d247eb359f044caf0c09904da14e39d7adca1 @ git://anongit.freedesktop.org/piglit
== Linux commits ==
f9fa785ad190 drm/i915: Pack params to engine->schedule() into a struct
3727950a05db drm/i915: Rename priotree to sched
68659914943f drm/i915: Move the priotree struct to its own headers
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8719/issues.html
* [PATCH v2] drm/i915: Pack params to engine->schedule() into a struct
2018-04-17 14:31 ` [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct Chris Wilson
@ 2018-04-18 9:41 ` Chris Wilson
2018-04-18 10:32 ` Joonas Lahtinen
2018-04-18 9:46 ` [PATCH v2 3/3] " Joonas Lahtinen
1 sibling, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2018-04-18 9:41 UTC (permalink / raw)
To: intel-gfx
Today we only want to pass along the priority to engine->schedule(), but
in the future we want to have much more control over the various aspects
of the GPU during a context's execution, for example controlling the
frequency allowed. As we need an ever growing number of parameters for
scheduling, move those into a struct for convenience.
v2: Move the anonymous struct into its own function for legibility and
ye olde gcc.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/gvt/scheduler.c | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 3 ++-
drivers/gpu/drm/i915/i915_gem.c | 18 +++++++++--------
drivers/gpu/drm/i915/i915_gem_context.c | 8 ++++----
drivers/gpu/drm/i915/i915_gem_context.h | 13 +-----------
drivers/gpu/drm/i915/i915_gpu_error.c | 8 ++++----
drivers/gpu/drm/i915/i915_gpu_error.h | 5 +++--
drivers/gpu/drm/i915/i915_request.c | 4 ++--
drivers/gpu/drm/i915/i915_request.h | 1 +
drivers/gpu/drm/i915/i915_scheduler.h | 17 +++++++++++++++-
drivers/gpu/drm/i915/intel_display.c | 11 +++++++++-
drivers/gpu/drm/i915/intel_engine_cs.c | 18 ++++++++++++++---
drivers/gpu/drm/i915/intel_guc_submission.c | 2 +-
drivers/gpu/drm/i915/intel_lrc.c | 20 ++++++++++---------
drivers/gpu/drm/i915/intel_ringbuffer.h | 4 +++-
.../gpu/drm/i915/selftests/intel_hangcheck.c | 4 ++--
drivers/gpu/drm/i915/selftests/intel_lrc.c | 8 +++++---
17 files changed, 91 insertions(+), 55 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 638abe84857c..f3d21849b0cb 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -1135,7 +1135,7 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
return PTR_ERR(s->shadow_ctx);
if (HAS_LOGICAL_RING_PREEMPTION(vgpu->gvt->dev_priv))
- s->shadow_ctx->priority = INT_MAX;
+ s->shadow_ctx->sched.priority = INT_MAX;
bitmap_zero(s->shadow_ctx_desc_updated, I915_NUM_ENGINES);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8e8667d9b084..028691108125 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -75,6 +75,7 @@
#include "i915_gem_timeline.h"
#include "i915_gpu_error.h"
#include "i915_request.h"
+#include "i915_scheduler.h"
#include "i915_vma.h"
#include "intel_gvt.h"
@@ -3158,7 +3159,7 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
struct intel_rps_client *rps);
int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
unsigned int flags,
- int priority);
+ const struct i915_sched_attr *attr);
#define I915_PRIORITY_DISPLAY I915_PRIORITY_MAX
int __must_check
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4c9d2a6f7d28..795ca83aed7a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -564,7 +564,8 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
return timeout;
}
-static void __fence_set_priority(struct dma_fence *fence, int prio)
+static void __fence_set_priority(struct dma_fence *fence,
+ const struct i915_sched_attr *attr)
{
struct i915_request *rq;
struct intel_engine_cs *engine;
@@ -577,11 +578,12 @@ static void __fence_set_priority(struct dma_fence *fence, int prio)
rcu_read_lock();
if (engine->schedule)
- engine->schedule(rq, prio);
+ engine->schedule(rq, attr);
rcu_read_unlock();
}
-static void fence_set_priority(struct dma_fence *fence, int prio)
+static void fence_set_priority(struct dma_fence *fence,
+ const struct i915_sched_attr *attr)
{
/* Recurse once into a fence-array */
if (dma_fence_is_array(fence)) {
@@ -589,16 +591,16 @@ static void fence_set_priority(struct dma_fence *fence, int prio)
int i;
for (i = 0; i < array->num_fences; i++)
- __fence_set_priority(array->fences[i], prio);
+ __fence_set_priority(array->fences[i], attr);
} else {
- __fence_set_priority(fence, prio);
+ __fence_set_priority(fence, attr);
}
}
int
i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
unsigned int flags,
- int prio)
+ const struct i915_sched_attr *attr)
{
struct dma_fence *excl;
@@ -613,7 +615,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
return ret;
for (i = 0; i < count; i++) {
- fence_set_priority(shared[i], prio);
+ fence_set_priority(shared[i], attr);
dma_fence_put(shared[i]);
}
@@ -623,7 +625,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
}
if (excl) {
- fence_set_priority(excl, prio);
+ fence_set_priority(excl, attr);
dma_fence_put(excl);
}
return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 9b3834a846e8..74435affe23f 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -281,7 +281,7 @@ __create_hw_context(struct drm_i915_private *dev_priv,
kref_init(&ctx->ref);
list_add_tail(&ctx->link, &dev_priv->contexts.list);
ctx->i915 = dev_priv;
- ctx->priority = I915_PRIORITY_NORMAL;
+ ctx->sched.priority = I915_PRIORITY_NORMAL;
INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL);
INIT_LIST_HEAD(&ctx->handles_list);
@@ -431,7 +431,7 @@ i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio)
return ctx;
i915_gem_context_clear_bannable(ctx);
- ctx->priority = prio;
+ ctx->sched.priority = prio;
ctx->ring_size = PAGE_SIZE;
GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
@@ -753,7 +753,7 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
args->value = i915_gem_context_is_bannable(ctx);
break;
case I915_CONTEXT_PARAM_PRIORITY:
- args->value = ctx->priority;
+ args->value = ctx->sched.priority;
break;
default:
ret = -EINVAL;
@@ -826,7 +826,7 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
!capable(CAP_SYS_NICE))
ret = -EPERM;
else
- ctx->priority = priority;
+ ctx->sched.priority = priority;
}
break;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 7854262ddfd9..b12a8a8c5af9 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -137,18 +137,7 @@ struct i915_gem_context {
*/
u32 user_handle;
- /**
- * @priority: execution and service priority
- *
- * All clients are equal, but some are more equal than others!
- *
- * Requests from a context with a greater (more positive) value of
- * @priority will be executed before those with a lower @priority
- * value, forming a simple QoS.
- *
- * The &drm_i915_private.kernel_context is assigned the lowest priority.
- */
- int priority;
+ struct i915_sched_attr sched;
/** ggtt_offset_bias: placement restriction for context objects */
u32 ggtt_offset_bias;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 6b5b9b3ded02..671ffa37614e 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -411,7 +411,7 @@ static void error_print_request(struct drm_i915_error_state_buf *m,
err_printf(m, "%s pid %d, ban score %d, seqno %8x:%08x, prio %d, emitted %dms ago, head %08x, tail %08x\n",
prefix, erq->pid, erq->ban_score,
- erq->context, erq->seqno, erq->priority,
+ erq->context, erq->seqno, erq->sched_attr.priority,
jiffies_to_msecs(jiffies - erq->jiffies),
erq->head, erq->tail);
}
@@ -422,7 +422,7 @@ static void error_print_context(struct drm_i915_error_state_buf *m,
{
err_printf(m, "%s%s[%d] user_handle %d hw_id %d, prio %d, ban score %d%s guilty %d active %d\n",
header, ctx->comm, ctx->pid, ctx->handle, ctx->hw_id,
- ctx->priority, ctx->ban_score, bannable(ctx),
+ ctx->sched_attr.priority, ctx->ban_score, bannable(ctx),
ctx->guilty, ctx->active);
}
@@ -1278,7 +1278,7 @@ static void record_request(struct i915_request *request,
struct drm_i915_error_request *erq)
{
erq->context = request->ctx->hw_id;
- erq->priority = request->sched.priority;
+ erq->sched_attr = request->sched.attr;
erq->ban_score = atomic_read(&request->ctx->ban_score);
erq->seqno = request->global_seqno;
erq->jiffies = request->emitted_jiffies;
@@ -1372,7 +1372,7 @@ static void record_context(struct drm_i915_error_context *e,
e->handle = ctx->user_handle;
e->hw_id = ctx->hw_id;
- e->priority = ctx->priority;
+ e->sched_attr = ctx->sched;
e->ban_score = atomic_read(&ctx->ban_score);
e->bannable = i915_gem_context_is_bannable(ctx);
e->guilty = atomic_read(&ctx->guilty_count);
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h
index c05b6034d718..5d6fdcbc092c 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.h
+++ b/drivers/gpu/drm/i915/i915_gpu_error.h
@@ -20,6 +20,7 @@
#include "i915_gem.h"
#include "i915_gem_gtt.h"
#include "i915_params.h"
+#include "i915_scheduler.h"
struct drm_i915_private;
struct intel_overlay_error_state;
@@ -122,11 +123,11 @@ struct i915_gpu_state {
pid_t pid;
u32 handle;
u32 hw_id;
- int priority;
int ban_score;
int active;
int guilty;
bool bannable;
+ struct i915_sched_attr sched_attr;
} context;
struct drm_i915_error_object {
@@ -147,11 +148,11 @@ struct i915_gpu_state {
long jiffies;
pid_t pid;
u32 context;
- int priority;
int ban_score;
u32 seqno;
u32 head;
u32 tail;
+ struct i915_sched_attr sched_attr;
} *requests, execlist[EXECLIST_MAX_PORTS];
unsigned int num_ports;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 0939c120b82c..c3a908436510 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -191,7 +191,7 @@ i915_sched_init(struct i915_sched *pt)
INIT_LIST_HEAD(&pt->signalers_list);
INIT_LIST_HEAD(&pt->waiters_list);
INIT_LIST_HEAD(&pt->link);
- pt->priority = I915_PRIORITY_INVALID;
+ pt->attr.priority = I915_PRIORITY_INVALID;
}
static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
@@ -1062,7 +1062,7 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
*/
rcu_read_lock();
if (engine->schedule)
- engine->schedule(request, request->ctx->priority);
+ engine->schedule(request, &request->ctx->sched);
rcu_read_unlock();
local_bh_disable();
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 5d6619a245ba..701ee8c7325c 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -30,6 +30,7 @@
#include "i915_gem.h"
#include "i915_scheduler.h"
#include "i915_sw_fence.h"
+#include "i915_scheduler.h"
#include <uapi/drm/i915_drm.h>
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index b34fca3ba17f..4cae1edeb40d 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -19,6 +19,21 @@ enum {
I915_PRIORITY_INVALID = INT_MIN
};
+struct i915_sched_attr {
+ /**
+ * @priority: execution and service priority
+ *
+ * All clients are equal, but some are more equal than others!
+ *
+ * Requests from a context with a greater (more positive) value of
+ * @priority will be executed before those with a lower @priority
+ * value, forming a simple QoS.
+ *
+ * The &drm_i915_private.kernel_context is assigned the lowest priority.
+ */
+ int priority;
+};
+
/*
* "People assume that time is a strict progression of cause to effect, but
* actually, from a nonlinear, non-subjective viewpoint, it's more like a big
@@ -37,7 +52,7 @@ struct i915_sched {
struct list_head signalers_list; /* those before us, we depend upon */
struct list_head waiters_list; /* those after us, they depend upon us */
struct list_head link;
- int priority;
+ struct i915_sched_attr attr;
};
struct i915_dependency {
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 020900e08d42..5fb00f1fa2a1 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -12763,6 +12763,15 @@ static void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
intel_unpin_fb_vma(vma, old_plane_state->flags);
}
+static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj)
+{
+ struct i915_sched_attr attr = {
+ .priority = I915_PRIORITY_DISPLAY,
+ };
+
+ i915_gem_object_wait_priority(obj, 0, &attr);
+}
+
/**
* intel_prepare_plane_fb - Prepare fb for usage on plane
* @plane: drm plane to prepare for
@@ -12839,7 +12848,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
ret = intel_plane_pin_fb(to_intel_plane_state(new_state));
- i915_gem_object_wait_priority(obj, 0, I915_PRIORITY_DISPLAY);
+ fb_obj_bump_render_priority(obj);
mutex_unlock(&dev_priv->drm.struct_mutex);
i915_gem_object_unpin_pages(obj);
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index b542b1a4dddc..be608f7111f5 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1113,17 +1113,29 @@ unsigned int intel_engines_has_context_isolation(struct drm_i915_private *i915)
return which;
}
+static void print_sched_attr(struct drm_printer *m,
+ const struct drm_i915_private *i915,
+ const struct i915_sched_attr *attr)
+{
+ if (attr->priority == I915_PRIORITY_INVALID)
+ return;
+
+ drm_printf(m, "prio=%d", attr->priority);
+}
+
static void print_request(struct drm_printer *m,
struct i915_request *rq,
const char *prefix)
{
const char *name = rq->fence.ops->get_timeline_name(&rq->fence);
- drm_printf(m, "%s%x%s [%llx:%x] prio=%d @ %dms: %s\n", prefix,
+ drm_printf(m, "%s%x%s [%llx:%x] ",
+ prefix,
rq->global_seqno,
i915_request_completed(rq) ? "!" : "",
- rq->fence.context, rq->fence.seqno,
- rq->sched.priority,
+ rq->fence.context, rq->fence.seqno);
+ print_sched_attr(m, rq->i915, &rq->sched.attr);
+ drm_printf(m, " @ %dms: %s\n",
jiffies_to_msecs(jiffies - rq->emitted_jiffies),
name);
}
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index 0755f5cae950..02da05875aa7 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -659,7 +659,7 @@ static void port_assign(struct execlist_port *port, struct i915_request *rq)
static inline int rq_prio(const struct i915_request *rq)
{
- return rq->sched.priority;
+ return rq->sched.attr.priority;
}
static inline int port_prio(const struct execlist_port *port)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 01f356cb3e25..6be8ccd18b72 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -177,7 +177,7 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb)
static inline int rq_prio(const struct i915_request *rq)
{
- return rq->sched.priority;
+ return rq->sched.attr.priority;
}
static inline bool need_preempt(const struct intel_engine_cs *engine,
@@ -1172,11 +1172,13 @@ sched_lock_engine(struct i915_sched *sched, struct intel_engine_cs *locked)
return engine;
}
-static void execlists_schedule(struct i915_request *request, int prio)
+static void execlists_schedule(struct i915_request *request,
+ const struct i915_sched_attr *attr)
{
struct intel_engine_cs *engine;
struct i915_dependency *dep, *p;
struct i915_dependency stack;
+ const int prio = attr->priority;
LIST_HEAD(dfs);
GEM_BUG_ON(prio == I915_PRIORITY_INVALID);
@@ -1184,7 +1186,7 @@ static void execlists_schedule(struct i915_request *request, int prio)
if (i915_request_completed(request))
return;
- if (prio <= READ_ONCE(request->sched.priority))
+ if (prio <= READ_ONCE(request->sched.attr.priority))
return;
/* Need BKL in order to use the temporary link inside i915_dependency */
@@ -1226,8 +1228,8 @@ static void execlists_schedule(struct i915_request *request, int prio)
if (i915_sched_signaled(p->signaler))
continue;
- GEM_BUG_ON(p->signaler->priority < sched->priority);
- if (prio > READ_ONCE(p->signaler->priority))
+ GEM_BUG_ON(p->signaler->attr.priority < sched->attr.priority);
+ if (prio > READ_ONCE(p->signaler->attr.priority))
list_move_tail(&p->dfs_link, &dfs);
}
}
@@ -1238,9 +1240,9 @@ static void execlists_schedule(struct i915_request *request, int prio)
* execlists_submit_request()), we can set our own priority and skip
* acquiring the engine locks.
*/
- if (request->sched.priority == I915_PRIORITY_INVALID) {
+ if (request->sched.attr.priority == I915_PRIORITY_INVALID) {
GEM_BUG_ON(!list_empty(&request->sched.link));
- request->sched.priority = prio;
+ request->sched.attr = *attr;
if (stack.dfs_link.next == stack.dfs_link.prev)
return;
__list_del_entry(&stack.dfs_link);
@@ -1257,10 +1259,10 @@ static void execlists_schedule(struct i915_request *request, int prio)
engine = sched_lock_engine(sched, engine);
- if (prio <= sched->priority)
+ if (prio <= sched->attr.priority)
continue;
- sched->priority = prio;
+ sched->attr.priority = prio;
if (!list_empty(&sched->link)) {
__list_del_entry(&sched->link);
queue_request(engine, sched, prio);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 717041640135..c5e27905b0e1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -14,6 +14,7 @@
#include "intel_gpu_commands.h"
struct drm_printer;
+struct i915_sched_attr;
#define I915_CMD_HASH_ORDER 9
@@ -460,7 +461,8 @@ struct intel_engine_cs {
*
* Called under the struct_mutex.
*/
- void (*schedule)(struct i915_request *request, int priority);
+ void (*schedule)(struct i915_request *request,
+ const struct i915_sched_attr *attr);
/*
* Cancel all requests on the hardware, or queued for execution.
diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
index 24f913f26a7b..f7ee54e109ae 100644
--- a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
@@ -628,7 +628,7 @@ static int active_engine(void *data)
}
if (arg->flags & TEST_PRIORITY)
- ctx[idx]->priority =
+ ctx[idx]->sched.priority =
i915_prandom_u32_max_state(512, &prng);
rq[idx] = i915_request_get(new);
@@ -683,7 +683,7 @@ static int __igt_reset_engines(struct drm_i915_private *i915,
return err;
if (flags & TEST_PRIORITY)
- h.ctx->priority = 1024;
+ h.ctx->sched.priority = 1024;
}
for_each_engine(engine, i915, id) {
diff --git a/drivers/gpu/drm/i915/selftests/intel_lrc.c b/drivers/gpu/drm/i915/selftests/intel_lrc.c
index 0481e2e01146..ee7e22d18ff8 100644
--- a/drivers/gpu/drm/i915/selftests/intel_lrc.c
+++ b/drivers/gpu/drm/i915/selftests/intel_lrc.c
@@ -335,12 +335,12 @@ static int live_preempt(void *arg)
ctx_hi = kernel_context(i915);
if (!ctx_hi)
goto err_spin_lo;
- ctx_hi->priority = I915_CONTEXT_MAX_USER_PRIORITY;
+ ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY;
ctx_lo = kernel_context(i915);
if (!ctx_lo)
goto err_ctx_hi;
- ctx_lo->priority = I915_CONTEXT_MIN_USER_PRIORITY;
+ ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY;
for_each_engine(engine, i915, id) {
struct i915_request *rq;
@@ -407,6 +407,7 @@ static int live_late_preempt(void *arg)
struct i915_gem_context *ctx_hi, *ctx_lo;
struct spinner spin_hi, spin_lo;
struct intel_engine_cs *engine;
+ struct i915_sched_attr attr = {};
enum intel_engine_id id;
int err = -ENOMEM;
@@ -458,7 +459,8 @@ static int live_late_preempt(void *arg)
goto err_wedged;
}
- engine->schedule(rq, I915_PRIORITY_MAX);
+ attr.priority = I915_PRIORITY_MAX;
+ engine->schedule(rq, &attr);
if (!wait_for_spinner(&spin_hi, rq)) {
pr_err("High priority context failed to preempt the low priority context\n");
--
2.17.0
* Re: [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct
2018-04-17 14:31 ` [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct Chris Wilson
2018-04-18 9:41 ` [PATCH v2] " Chris Wilson
@ 2018-04-18 9:46 ` Joonas Lahtinen
2018-04-18 10:39 ` Chris Wilson
1 sibling, 1 reply; 21+ messages in thread
From: Joonas Lahtinen @ 2018-04-18 9:46 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
Quoting Chris Wilson (2018-04-17 17:31:32)
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -12839,7 +12839,9 @@ intel_prepare_plane_fb(struct drm_plane *plane,
>
> ret = intel_plane_pin_fb(to_intel_plane_state(new_state));
>
> - i915_gem_object_wait_priority(obj, 0, I915_PRIORITY_DISPLAY);
> + i915_gem_object_wait_priority(obj, 0, &(struct i915_sched_attr){
> + .priority = I915_PRIORITY_DISPLAY,
> + });
Just lift the parameter to previous line :P
>
> mutex_unlock(&dev_priv->drm.struct_mutex);
> i915_gem_object_unpin_pages(obj);
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index b542b1a4dddc..be608f7111f5 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -1113,17 +1113,29 @@ unsigned int intel_engines_has_context_isolation(struct drm_i915_private *i915)
> return which;
> }
>
> +static void print_sched_attr(struct drm_printer *m,
> + const struct drm_i915_private *i915,
> + const struct i915_sched_attr *attr)
> +{
> + if (attr->priority == I915_PRIORITY_INVALID)
> + return;
This will yield a double space in the output. Just sayin'
> +
> + drm_printf(m, "prio=%d", attr->priority);
> +}
With the parameter passing normalized, this is:
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Regards, Joonas
* Re: [PATCH v2] drm/i915: Pack params to engine->schedule() into a struct
2018-04-18 9:41 ` [PATCH v2] " Chris Wilson
@ 2018-04-18 10:32 ` Joonas Lahtinen
0 siblings, 0 replies; 21+ messages in thread
From: Joonas Lahtinen @ 2018-04-18 10:32 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
Quoting Chris Wilson (2018-04-18 12:41:49)
> Today we only want to pass along the priority to engine->schedule(), but
> in the future we want to have much more control over the various aspects
> of the GPU during a context's execution, for example controlling the
> frequency allowed. As we need an ever growing number of parameters for
> scheduling, move those into a struct for convenience.
>
> v2: Move the anonymous struct into its own function for legibility and
> ye olde gcc.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Regards, Joonas
* Re: [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct
2018-04-18 9:46 ` [PATCH v2 3/3] " Joonas Lahtinen
@ 2018-04-18 10:39 ` Chris Wilson
0 siblings, 0 replies; 21+ messages in thread
From: Chris Wilson @ 2018-04-18 10:39 UTC (permalink / raw)
To: Joonas Lahtinen, intel-gfx
Quoting Joonas Lahtinen (2018-04-18 10:46:27)
> Quoting Chris Wilson (2018-04-17 17:31:32)
> > +static void print_sched_attr(struct drm_printer *m,
> > + const struct drm_i915_private *i915,
> > + const struct i915_sched_attr *attr)
> > +{
> > + if (attr->priority == I915_PRIORITY_INVALID)
> > + return;
>
> This will yield a double space in the output. Just sayin'
Yes, but one too many spaces is only a problem for our OCD and only for
those of us brave enough to venture that far back.
-Chris
* ✗ Fi.CI.IGT: failure for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (9 preceding siblings ...)
2018-04-18 9:36 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2018-04-18 12:18 ` Patchwork
2018-04-18 15:15 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2) Patchwork
` (3 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 12:18 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers
URL : https://patchwork.freedesktop.org/series/41827/
State : failure
== Summary ==
= CI Bug Log - changes from CI_DRM_4063_full -> Patchwork_8719_full =
== Summary - FAILURE ==
Serious unknown changes coming with Patchwork_8719_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_8719_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
External URL: https://patchwork.freedesktop.org/api/1.0/series/41827/revisions/1/mbox/
== Possible new issues ==
Here are the unknown changes that may have been introduced in Patchwork_8719_full:
=== IGT changes ===
==== Possible regressions ====
igt@kms_cursor_legacy@short-flip-after-cursor-atomic-transitions-varying-size:
shard-hsw: PASS -> FAIL
==== Warnings ====
igt@gem_mocs_settings@mocs-rc6-vebox:
shard-kbl: SKIP -> PASS +1
igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-pwrite:
shard-hsw: SKIP -> PASS
== Known issues ==
Here are the changes found in Patchwork_8719_full that come from known issues:
=== IGT changes ===
==== Possible fixes ====
igt@kms_flip@dpms-vs-vblank-race-interruptible:
shard-hsw: FAIL (fdo#103060) -> PASS
igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
shard-apl: FAIL (fdo#103925) -> PASS
igt@kms_sysfs_edid_timing:
shard-apl: WARN (fdo#100047) -> PASS
fdo#100047 https://bugs.freedesktop.org/show_bug.cgi?id=100047
fdo#103060 https://bugs.freedesktop.org/show_bug.cgi?id=103060
fdo#103925 https://bugs.freedesktop.org/show_bug.cgi?id=103925
== Participating hosts (6 -> 4) ==
Missing (2): shard-glk shard-glkb
== Build changes ==
* Linux: CI_DRM_4063 -> Patchwork_8719
CI_DRM_4063: 9bdf0998d567cbe94f712c8f3e8295fb0446e114 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_4441: 83ba5b7d3bde48b383df41792fc9c955a5a23bdb @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_8719: f9fa785ad1907ac0598c84b45fc6ea1326ad9c01 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4441: e60d247eb359f044caf0c09904da14e39d7adca1 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8719/shards.html
* ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (10 preceding siblings ...)
2018-04-18 12:18 ` ✗ Fi.CI.IGT: failure " Patchwork
@ 2018-04-18 15:15 ` Patchwork
2018-04-18 15:17 ` ✗ Fi.CI.SPARSE: " Patchwork
` (2 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 15:15 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
URL : https://patchwork.freedesktop.org/series/41827/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
690cae7b1c87 drm/i915: Move the priotree struct to its own headers
-:72: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#72:
new file mode 100644
-:77: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#77: FILE: drivers/gpu/drm/i915/i915_scheduler.h:1:
+/*
total: 0 errors, 2 warnings, 0 checks, 103 lines checked
a11ef25b82dd drm/i915: Rename priotree to sched
d45621446c37 drm/i915: Pack params to engine->schedule() into a struct
* ✗ Fi.CI.SPARSE: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (11 preceding siblings ...)
2018-04-18 15:15 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2) Patchwork
@ 2018-04-18 15:17 ` Patchwork
2018-04-18 15:30 ` ✓ Fi.CI.BAT: success " Patchwork
2018-04-18 19:51 ` ✓ Fi.CI.IGT: " Patchwork
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 15:17 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
URL : https://patchwork.freedesktop.org/series/41827/
State : warning
== Summary ==
$ dim sparse origin/drm-tip
Commit: drm/i915: Move the priotree struct to its own headers
Okay!
Commit: drm/i915: Rename priotree to sched
Okay!
Commit: drm/i915: Pack params to engine->schedule() into a struct
-drivers/gpu/drm/i915/selftests/../i915_drv.h:2207:33: warning: constant 0xffffea0000000000 is so big it is unsigned long
-drivers/gpu/drm/i915/selftests/../i915_drv.h:3655:16: warning: expression using sizeof(void)
+drivers/gpu/drm/i915/selftests/../i915_drv.h:2208:33: warning: constant 0xffffea0000000000 is so big it is unsigned long
+drivers/gpu/drm/i915/selftests/../i915_drv.h:3656:16: warning: expression using sizeof(void)
* ✓ Fi.CI.BAT: success for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (12 preceding siblings ...)
2018-04-18 15:17 ` ✗ Fi.CI.SPARSE: " Patchwork
@ 2018-04-18 15:30 ` Patchwork
2018-04-18 19:51 ` ✓ Fi.CI.IGT: " Patchwork
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 15:30 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
URL : https://patchwork.freedesktop.org/series/41827/
State : success
== Summary ==
= CI Bug Log - changes from CI_DRM_4066 -> Patchwork_8734 =
== Summary - SUCCESS ==
No regressions found.
External URL: https://patchwork.freedesktop.org/api/1.0/series/41827/revisions/2/mbox/
== Known issues ==
Here are the changes found in Patchwork_8734 that come from known issues:
=== IGT changes ===
==== Issues hit ====
igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c:
fi-ivb-3520m: PASS -> DMESG-WARN (fdo#106084)
==== Possible fixes ====
igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
fi-ivb-3520m: DMESG-WARN (fdo#106084) -> PASS
fdo#106084 https://bugs.freedesktop.org/show_bug.cgi?id=106084
== Participating hosts (33 -> 32) ==
Additional (2): fi-glk-j4005 fi-bxt-dsi
Missing (3): fi-ctg-p8600 fi-ilk-m540 fi-skl-6700hq
== Build changes ==
* Linux: CI_DRM_4066 -> Patchwork_8734
CI_DRM_4066: e1fbca4821d0700551df233285a5c28db09fd0f6 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_4441: 83ba5b7d3bde48b383df41792fc9c955a5a23bdb @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_8734: d45621446c373467ef2f658526ce56b70a42fba7 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4441: e60d247eb359f044caf0c09904da14e39d7adca1 @ git://anongit.freedesktop.org/piglit
== Linux commits ==
d45621446c37 drm/i915: Pack params to engine->schedule() into a struct
a11ef25b82dd drm/i915: Rename priotree to sched
690cae7b1c87 drm/i915: Move the priotree struct to its own headers
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8734/issues.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 21+ messages in thread
* ✓ Fi.CI.IGT: success for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
` (13 preceding siblings ...)
2018-04-18 15:30 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2018-04-18 19:51 ` Patchwork
14 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2018-04-18 19:51 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2)
URL : https://patchwork.freedesktop.org/series/41827/
State : success
== Summary ==
= CI Bug Log - changes from CI_DRM_4066_full -> Patchwork_8734_full =
== Summary - SUCCESS ==
No regressions found.
External URL: https://patchwork.freedesktop.org/api/1.0/series/41827/revisions/2/mbox/
== Known issues ==
Here are the changes found in Patchwork_8734_full that come from known issues:
=== IGT changes ===
==== Issues hit ====
igt@kms_flip@2x-modeset-vs-vblank-race:
shard-hsw: PASS -> FAIL (fdo#103060)
igt@kms_flip@wf_vblank-ts-check-interruptible:
shard-hsw: PASS -> FAIL (fdo#100368)
igt@kms_setmode@basic:
shard-hsw: PASS -> FAIL (fdo#99912)
igt@prime_mmap_coherency@read:
shard-hsw: PASS -> DMESG-WARN (fdo#102614) +1
==== Possible fixes ====
igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy:
shard-hsw: FAIL (fdo#105767) -> PASS
igt@kms_flip@2x-dpms-vs-vblank-race:
shard-hsw: FAIL (fdo#103060) -> PASS
igt@kms_flip@2x-flip-vs-expired-vblank:
shard-hsw: FAIL (fdo#102887) -> PASS
igt@kms_flip@flip-vs-blocking-wf-vblank:
shard-hsw: FAIL (fdo#100368) -> PASS
igt@kms_sysfs_edid_timing:
shard-apl: WARN (fdo#100047) -> PASS
fdo#100047 https://bugs.freedesktop.org/show_bug.cgi?id=100047
fdo#100368 https://bugs.freedesktop.org/show_bug.cgi?id=100368
fdo#102614 https://bugs.freedesktop.org/show_bug.cgi?id=102614
fdo#102887 https://bugs.freedesktop.org/show_bug.cgi?id=102887
fdo#103060 https://bugs.freedesktop.org/show_bug.cgi?id=103060
fdo#105767 https://bugs.freedesktop.org/show_bug.cgi?id=105767
fdo#99912 https://bugs.freedesktop.org/show_bug.cgi?id=99912
== Participating hosts (9 -> 3) ==
Missing (6): shard-glk8 shard-glk6 shard-glk7 shard-glk shard-kbl shard-glkb
== Build changes ==
* Linux: CI_DRM_4066 -> Patchwork_8734
CI_DRM_4066: e1fbca4821d0700551df233285a5c28db09fd0f6 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_4441: 83ba5b7d3bde48b383df41792fc9c955a5a23bdb @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_8734: d45621446c373467ef2f658526ce56b70a42fba7 @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4441: e60d247eb359f044caf0c09904da14e39d7adca1 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8734/shards.html
end of thread, other threads:[~2018-04-18 19:51 UTC | newest]
Thread overview: 21+ messages
2018-04-17 14:31 [PATCH v2 1/3] drm/i915: Move the priotree struct to its own headers Chris Wilson
2018-04-17 14:31 ` [PATCH v2 2/3] drm/i915: Rename priotree to sched Chris Wilson
2018-04-18 9:29 ` Joonas Lahtinen
2018-04-17 14:31 ` [PATCH v2 3/3] drm/i915: Pack params to engine->schedule() into a struct Chris Wilson
2018-04-18 9:41 ` [PATCH v2] " Chris Wilson
2018-04-18 10:32 ` Joonas Lahtinen
2018-04-18 9:46 ` [PATCH v2 3/3] " Joonas Lahtinen
2018-04-18 10:39 ` Chris Wilson
2018-04-17 15:43 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers Patchwork
2018-04-17 15:44 ` ✗ Fi.CI.SPARSE: " Patchwork
2018-04-17 15:50 ` ✓ Fi.CI.BAT: success " Patchwork
2018-04-17 17:09 ` ✓ Fi.CI.IGT: " Patchwork
2018-04-18 9:15 ` [PATCH v2 1/3] " Joonas Lahtinen
2018-04-18 9:19 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] " Patchwork
2018-04-18 9:20 ` ✗ Fi.CI.SPARSE: " Patchwork
2018-04-18 9:36 ` ✓ Fi.CI.BAT: success " Patchwork
2018-04-18 12:18 ` ✗ Fi.CI.IGT: failure " Patchwork
2018-04-18 15:15 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [v2,1/3] drm/i915: Move the priotree struct to its own headers (rev2) Patchwork
2018-04-18 15:17 ` ✗ Fi.CI.SPARSE: " Patchwork
2018-04-18 15:30 ` ✓ Fi.CI.BAT: success " Patchwork
2018-04-18 19:51 ` ✓ Fi.CI.IGT: " Patchwork