* [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches.
@ 2016-05-26 10:37 Maarten Lankhorst
2016-05-26 10:37 ` [PATCH 1/9] drm/i915: Allow mmio updates on all platforms, v3 Maarten Lankhorst
` (10 more replies)
0 siblings, 11 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:37 UTC (permalink / raw)
To: intel-gfx
Some minor changes are added to prevent breaking bisection.
The main change is making sure the crtc_state is not freed while the mmio update is still running.
Maarten Lankhorst (9):
drm/i915: Allow mmio updates on all platforms, v3.
drm/i915: Convert flip_work to a list, v2.
drm/i915: Add the exclusive fence to plane_state.
drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4.
drm/i915: Remove cs based page flip support, v2.
drm/i915: Remove use_mmio_flip kernel parameter.
drm/i915: Remove queue_flip pointer.
drm/i915: Remove reset_counter from intel_crtc.
drm/i915: Pass atomic states to fbc update functions.
drivers/gpu/drm/i915/i915_debugfs.c | 89 ++-
drivers/gpu/drm/i915/i915_drv.h | 5 -
drivers/gpu/drm/i915/i915_irq.c | 120 +---
drivers/gpu/drm/i915/i915_params.c | 5 -
drivers/gpu/drm/i915/i915_params.h | 1 -
drivers/gpu/drm/i915/intel_atomic_plane.c | 1 +
drivers/gpu/drm/i915/intel_display.c | 1118 ++++++++---------------------
drivers/gpu/drm/i915/intel_drv.h | 37 +-
drivers/gpu/drm/i915/intel_fbc.c | 39 +-
drivers/gpu/drm/i915/intel_lrc.c | 4 +-
10 files changed, 417 insertions(+), 1002 deletions(-)
--
2.5.5
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [PATCH 1/9] drm/i915: Allow mmio updates on all platforms, v3.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
@ 2016-05-26 10:37 ` Maarten Lankhorst
2016-05-26 10:37 ` [PATCH 2/9] drm/i915: Convert flip_work to a list, v2 Maarten Lankhorst
` (9 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:37 UTC (permalink / raw)
To: intel-gfx
With intel_pipe_update begin/end we ensure that the mmio updates
don't run during the vblank interrupt. Using the hw counter we can
be sure that the mmio update is complete when the current vblank
count differs from the vblank count at the time of pipe_update_end.
This allows us to use mmio updates on all platforms, through the
update_plane call.
With Chris Wilson's patch to skip waiting for vblanks on
legacy_cursor_update, this potentially leaves a small race
condition. In the !legacy_cursor_update case we wait for flips to
complete, so there is no race on freeing crtc_state. In the
legacy_cursor_update case there is a check for
work->crtc_state == old_crtc_state; when it matches, the
old_crtc_state is removed from intel_atomic_state and freed by
intel_unpin_work_fn instead. This ensures that
intel_mmio_flip_work_func never uses a freed pointer to crtc_state.
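As an aside for reviewers, the completion rule described above can be sketched as follows; this is a standalone illustration with made-up types and names, not the driver's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for intel_flip_work; not the real struct. */
struct flip_work_sketch {
	uint32_t flip_queued_vblank; /* hw count sampled at pipe_update_end */
};

/*
 * The mmio write lands before the next vblank, so once the hardware
 * counter has moved past the value sampled in pipe_update_end the
 * update is guaranteed to have completed.
 */
static bool mmio_flip_finished(const struct flip_work_sketch *work,
			       uint32_t current_vblank)
{
	return current_vblank != work->flip_queued_vblank;
}
```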
Changes since v1:
- Split out the flip_work rename.
Changes since v2:
- Do not break bisect by reverting the stall fix for cursor updates,
instead add crtc_state to intel_flip_work, and make sure it's not
freed in intel_atomic_commit for legacy cursor updates.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/gpu/drm/i915/intel_display.c | 118 ++++++++---------------------------
drivers/gpu/drm/i915/intel_drv.h | 4 +-
2 files changed, 29 insertions(+), 93 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 9ccd76699f48..ae8036b5fe7c 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -10976,6 +10976,9 @@ static void intel_unpin_work_fn(struct work_struct *__work)
BUG_ON(atomic_read(&crtc->unpin_work_count) == 0);
atomic_dec(&crtc->unpin_work_count);
+ if (work->free_new_crtc_state)
+ intel_crtc_destroy_state(&crtc->base, &work->new_crtc_state->base);
+
kfree(work);
}
@@ -11373,9 +11376,6 @@ static bool use_mmio_flip(struct intel_engine_cs *engine,
if (engine == NULL)
return true;
- if (INTEL_GEN(engine->i915) < 5)
- return false;
-
if (i915.use_mmio_flip < 0)
return false;
else if (i915.use_mmio_flip > 0)
@@ -11390,92 +11390,15 @@ static bool use_mmio_flip(struct intel_engine_cs *engine,
return engine != i915_gem_request_get_engine(obj->last_write_req);
}
-static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
- unsigned int rotation,
- struct intel_flip_work *work)
-{
- struct drm_device *dev = intel_crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_framebuffer *fb = intel_crtc->base.primary->fb;
- const enum pipe pipe = intel_crtc->pipe;
- u32 ctl, stride, tile_height;
-
- ctl = I915_READ(PLANE_CTL(pipe, 0));
- ctl &= ~PLANE_CTL_TILED_MASK;
- switch (fb->modifier[0]) {
- case DRM_FORMAT_MOD_NONE:
- break;
- case I915_FORMAT_MOD_X_TILED:
- ctl |= PLANE_CTL_TILED_X;
- break;
- case I915_FORMAT_MOD_Y_TILED:
- ctl |= PLANE_CTL_TILED_Y;
- break;
- case I915_FORMAT_MOD_Yf_TILED:
- ctl |= PLANE_CTL_TILED_YF;
- break;
- default:
- MISSING_CASE(fb->modifier[0]);
- }
-
- /*
- * The stride is either expressed as a multiple of 64 bytes chunks for
- * linear buffers or in number of tiles for tiled buffers.
- */
- if (intel_rotation_90_or_270(rotation)) {
- /* stride = Surface height in tiles */
- tile_height = intel_tile_height(dev_priv, fb->modifier[0], 0);
- stride = DIV_ROUND_UP(fb->height, tile_height);
- } else {
- stride = fb->pitches[0] /
- intel_fb_stride_alignment(dev_priv, fb->modifier[0],
- fb->pixel_format);
- }
-
- /*
- * Both PLANE_CTL and PLANE_STRIDE are not updated on vblank but on
- * PLANE_SURF updates, the update is then guaranteed to be atomic.
- */
- I915_WRITE(PLANE_CTL(pipe, 0), ctl);
- I915_WRITE(PLANE_STRIDE(pipe, 0), stride);
-
- I915_WRITE(PLANE_SURF(pipe, 0), work->gtt_offset);
- POSTING_READ(PLANE_SURF(pipe, 0));
-}
-
-static void ilk_do_mmio_flip(struct intel_crtc *intel_crtc,
- struct intel_flip_work *work)
-{
- struct drm_device *dev = intel_crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_framebuffer *intel_fb =
- to_intel_framebuffer(intel_crtc->base.primary->fb);
- struct drm_i915_gem_object *obj = intel_fb->obj;
- i915_reg_t reg = DSPCNTR(intel_crtc->plane);
- u32 dspcntr;
-
- dspcntr = I915_READ(reg);
-
- if (obj->tiling_mode != I915_TILING_NONE)
- dspcntr |= DISPPLANE_TILED;
- else
- dspcntr &= ~DISPPLANE_TILED;
-
- I915_WRITE(reg, dspcntr);
-
- I915_WRITE(DSPSURF(intel_crtc->plane), work->gtt_offset);
- POSTING_READ(DSPSURF(intel_crtc->plane));
-}
-
static void intel_mmio_flip_work_func(struct work_struct *w)
{
struct intel_flip_work *work =
container_of(w, struct intel_flip_work, mmio_work);
struct intel_crtc *crtc = to_intel_crtc(work->crtc);
- struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
- struct intel_framebuffer *intel_fb =
- to_intel_framebuffer(crtc->base.primary->fb);
- struct drm_i915_gem_object *obj = intel_fb->obj;
+ struct drm_device *dev = crtc->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_plane *primary = to_intel_plane(crtc->base.primary);
+ struct drm_i915_gem_object *obj = intel_fb_obj(primary->base.state->fb);
if (work->flip_queued_req)
WARN_ON(__i915_wait_request(work->flip_queued_req,
@@ -11489,13 +11412,9 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
MAX_SCHEDULE_TIMEOUT) < 0);
intel_pipe_update_start(crtc);
-
- if (INTEL_GEN(dev_priv) >= 9)
- skl_do_mmio_flip(crtc, work->rotation, work);
- else
- /* use_mmio_flip() retricts MMIO flips to ilk+ */
- ilk_do_mmio_flip(crtc, work);
-
+ primary->update_plane(&primary->base,
+ work->new_crtc_state,
+ to_intel_plane_state(primary->base.state));
intel_pipe_update_end(crtc, work);
}
@@ -11622,6 +11541,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
if (work == NULL)
return -ENOMEM;
+ work->new_crtc_state = to_intel_crtc_state(crtc->state);
+
work->event = event;
work->crtc = crtc;
work->old_fb = old_fb;
@@ -11720,7 +11641,6 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
work->gtt_offset = intel_plane_obj_offset(to_intel_plane(primary),
obj, 0);
work->gtt_offset += intel_crtc->dspaddr_offset;
- work->rotation = crtc->primary->state->rotation;
if (mmio_flip) {
INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
@@ -13779,6 +13699,20 @@ static int intel_atomic_commit(struct drm_device *dev,
modeset_put_power_domains(dev_priv, put_domains[i]);
intel_modeset_verify_crtc(crtc, old_crtc_state, crtc->state);
+
+ if (state->legacy_cursor_update &&
+ to_intel_crtc(crtc)->flip_work) {
+ struct intel_flip_work *work;
+
+ spin_lock_irq(&dev->event_lock);
+ work = to_intel_crtc(crtc)->flip_work;
+ if (work && &work->new_crtc_state->base == old_crtc_state) {
+ state->crtcs[i] = NULL;
+ state->crtc_states[i] = NULL;
+ work->free_new_crtc_state = true;
+ }
+ spin_unlock_irq(&dev->event_lock);
+ }
}
if (intel_state->modeset)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 9b5f6634c558..ab778193bccd 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -976,6 +976,9 @@ struct intel_flip_work {
struct work_struct unpin_work;
struct work_struct mmio_work;
+ struct intel_crtc_state *new_crtc_state;
+ bool free_new_crtc_state;
+
struct drm_crtc *crtc;
struct drm_framebuffer *old_fb;
struct drm_i915_gem_object *pending_flip_obj;
@@ -986,7 +989,6 @@ struct intel_flip_work {
struct drm_i915_gem_request *flip_queued_req;
u32 flip_queued_vblank;
u32 flip_ready_vblank;
- unsigned int rotation;
};
struct intel_load_detect_pipe {
--
2.5.5
* [PATCH 2/9] drm/i915: Convert flip_work to a list, v2.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
2016-05-26 10:37 ` [PATCH 1/9] drm/i915: Allow mmio updates on all platforms, v3 Maarten Lankhorst
@ 2016-05-26 10:37 ` Maarten Lankhorst
2016-05-26 10:37 ` [PATCH 3/9] drm/i915: Add the exclusive fence to plane_state Maarten Lankhorst
` (8 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:37 UTC (permalink / raw)
To: intel-gfx
This will be required to allow more than one outstanding
update in the future. For now it's unclear how this will
be handled, but with a list it's definitely possible.
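The queue-order retirement that the list enables can be sketched like this; hypothetical, simplified types rather than the intel_flip_work/list_head machinery:

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for a queued flip; real code uses struct list_head. */
struct flip_sketch {
	struct flip_sketch *next;
	bool finished;
};

/*
 * Retire completed flips from the head of the queue and stop at the
 * first one still pending, the same pattern the list-walking loops in
 * intel_finish_page_flip_mmio() and intel_check_page_flip() follow.
 */
static struct flip_sketch *retire_finished(struct flip_sketch *head,
					   int *retired)
{
	while (head && head->finished) {
		(*retired)++;
		head = head->next;
	}
	return head;
}
```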
Changes since v1:
- Changed to prevent breaking with the legacy cursor update changes.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/gpu/drm/i915/i915_debugfs.c | 90 +++++++++++---------
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/intel_display.c | 156 +++++++++++++++++++++--------------
drivers/gpu/drm/i915/intel_drv.h | 4 +-
4 files changed, 149 insertions(+), 103 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index ac7e5692496d..cced527af109 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -621,6 +621,53 @@ static int i915_gem_gtt_info(struct seq_file *m, void *data)
return 0;
}
+static void i915_dump_pageflip(struct seq_file *m,
+ struct drm_i915_private *dev_priv,
+ struct intel_crtc *crtc,
+ struct intel_flip_work *work)
+{
+ const char pipe = pipe_name(crtc->pipe);
+ const char plane = plane_name(crtc->plane);
+ u32 pending;
+ u32 addr;
+
+ pending = atomic_read(&work->pending);
+ if (pending) {
+ seq_printf(m, "Flip ioctl preparing on pipe %c (plane %c)\n",
+ pipe, plane);
+ } else {
+ seq_printf(m, "Flip pending (waiting for vsync) on pipe %c (plane %c)\n",
+ pipe, plane);
+ }
+ if (work->flip_queued_req) {
+ struct intel_engine_cs *engine = i915_gem_request_get_engine(work->flip_queued_req);
+
+ seq_printf(m, "Flip queued on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
+ engine->name,
+ i915_gem_request_get_seqno(work->flip_queued_req),
+ dev_priv->next_seqno,
+ engine->get_seqno(engine),
+ i915_gem_request_completed(work->flip_queued_req, true));
+ } else
+ seq_printf(m, "Flip not associated with any ring\n");
+ seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
+ work->flip_queued_vblank,
+ work->flip_ready_vblank,
+ intel_crtc_get_vblank_counter(crtc));
+ seq_printf(m, "%d prepares\n", atomic_read(&work->pending));
+
+ if (INTEL_INFO(dev_priv)->gen >= 4)
+ addr = I915_HI_DISPBASE(I915_READ(DSPSURF(crtc->plane)));
+ else
+ addr = I915_READ(DSPADDR(crtc->plane));
+ seq_printf(m, "Current scanout address 0x%08x\n", addr);
+
+ if (work->pending_flip_obj) {
+ seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
+ seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
+ }
+}
+
static int i915_gem_pageflip_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = m->private;
@@ -639,48 +686,13 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
struct intel_flip_work *work;
spin_lock_irq(&dev->event_lock);
- work = crtc->flip_work;
- if (work == NULL) {
+ if (list_empty(&crtc->flip_work)) {
seq_printf(m, "No flip due on pipe %c (plane %c)\n",
pipe, plane);
} else {
- u32 pending;
- u32 addr;
-
- pending = atomic_read(&work->pending);
- if (pending) {
- seq_printf(m, "Flip ioctl preparing on pipe %c (plane %c)\n",
- pipe, plane);
- } else {
- seq_printf(m, "Flip pending (waiting for vsync) on pipe %c (plane %c)\n",
- pipe, plane);
- }
- if (work->flip_queued_req) {
- struct intel_engine_cs *engine = i915_gem_request_get_engine(work->flip_queued_req);
-
- seq_printf(m, "Flip queued on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
- engine->name,
- i915_gem_request_get_seqno(work->flip_queued_req),
- dev_priv->next_seqno,
- engine->get_seqno(engine),
- i915_gem_request_completed(work->flip_queued_req, true));
- } else
- seq_printf(m, "Flip not associated with any ring\n");
- seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
- work->flip_queued_vblank,
- work->flip_ready_vblank,
- intel_crtc_get_vblank_counter(crtc));
- seq_printf(m, "%d prepares\n", atomic_read(&work->pending));
-
- if (INTEL_INFO(dev)->gen >= 4)
- addr = I915_HI_DISPBASE(I915_READ(DSPSURF(crtc->plane)));
- else
- addr = I915_READ(DSPADDR(crtc->plane));
- seq_printf(m, "Current scanout address 0x%08x\n", addr);
-
- if (work->pending_flip_obj) {
- seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
- seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
+ list_for_each_entry(work, &crtc->flip_work, head) {
+ i915_dump_pageflip(m, dev_priv, crtc, work);
+ seq_puts(m, "\n");
}
}
spin_unlock_irq(&dev->event_lock);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e4c8e341655c..ce1d368e4e50 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -622,7 +622,7 @@ struct drm_i915_display_funcs {
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags);
+ uint64_t gtt_offset);
void (*hpd_irq_setup)(struct drm_i915_private *dev_priv);
/* clock updates for mode set */
/* cursor updates */
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ae8036b5fe7c..ffd9b555d23f 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -3214,17 +3214,12 @@ static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
struct drm_device *dev = crtc->dev;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
unsigned reset_counter;
- bool pending;
reset_counter = i915_reset_counter(&to_i915(dev)->gpu_error);
if (intel_crtc->reset_counter != reset_counter)
return false;
- spin_lock_irq(&dev->event_lock);
- pending = to_intel_crtc(crtc)->flip_work != NULL;
- spin_unlock_irq(&dev->event_lock);
-
- return pending;
+ return !list_empty_careful(&to_intel_crtc(crtc)->flip_work);
}
static void intel_update_pipe_config(struct intel_crtc *crtc,
@@ -3800,7 +3795,7 @@ bool intel_has_pending_fb_unpin(struct drm_device *dev)
if (atomic_read(&crtc->unpin_work_count) == 0)
continue;
- if (crtc->flip_work)
+ if (!list_empty_careful(&crtc->flip_work))
intel_wait_for_vblank(dev, crtc->pipe);
return true;
@@ -3809,12 +3804,11 @@ bool intel_has_pending_fb_unpin(struct drm_device *dev)
return false;
}
-static void page_flip_completed(struct intel_crtc *intel_crtc)
+static void page_flip_completed(struct intel_crtc *intel_crtc, struct intel_flip_work *work)
{
struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev);
- struct intel_flip_work *work = intel_crtc->flip_work;
- intel_crtc->flip_work = NULL;
+ list_del_init(&work->head);
if (work->event)
drm_crtc_send_vblank_event(&intel_crtc->base, work->event);
@@ -3849,10 +3843,16 @@ static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
struct intel_flip_work *work;
spin_lock_irq(&dev->event_lock);
- work = intel_crtc->flip_work;
+
+ /*
+ * If we're waiting for page flips, it's the first
+ * flip on the list that's stuck.
+ */
+ work = list_first_entry_or_null(&intel_crtc->flip_work,
+ struct intel_flip_work, head);
if (work && !is_mmio_work(work)) {
WARN_ONCE(1, "Removing stuck page flip\n");
- page_flip_completed(intel_crtc);
+ page_flip_completed(intel_crtc, work);
}
spin_unlock_irq(&dev->event_lock);
}
@@ -6273,7 +6273,7 @@ static void intel_crtc_disable_noatomic(struct drm_crtc *crtc)
return;
if (to_intel_plane_state(crtc->primary->state)->visible) {
- WARN_ON(intel_crtc->flip_work);
+ WARN_ON(list_empty(&intel_crtc->flip_work));
intel_pre_disable_primary_noatomic(crtc);
@@ -10935,15 +10935,19 @@ static void intel_crtc_destroy(struct drm_crtc *crtc)
struct intel_flip_work *work;
spin_lock_irq(&dev->event_lock);
- work = intel_crtc->flip_work;
- intel_crtc->flip_work = NULL;
- spin_unlock_irq(&dev->event_lock);
+ while (!list_empty(&intel_crtc->flip_work)) {
+ work = list_first_entry(&intel_crtc->flip_work,
+ struct intel_flip_work, head);
+ list_del_init(&work->head);
+ spin_unlock_irq(&dev->event_lock);
- if (work) {
cancel_work_sync(&work->mmio_work);
cancel_work_sync(&work->unpin_work);
kfree(work);
+
+ spin_lock_irq(&dev->event_lock);
}
+ spin_unlock_irq(&dev->event_lock);
drm_crtc_cleanup(crtc);
@@ -11031,9 +11035,9 @@ static bool __pageflip_finished_cs(struct intel_crtc *crtc,
* anyway, we don't really care.
*/
return (I915_READ(DSPSURFLIVE(crtc->plane)) & ~0xfff) ==
- crtc->flip_work->gtt_offset &&
+ work->gtt_offset &&
g4x_flip_count_after_eq(I915_READ(PIPE_FLIPCOUNT_G4X(crtc->pipe)),
- crtc->flip_work->flip_count);
+ work->flip_count);
}
static bool
@@ -11083,13 +11087,19 @@ void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe)
* lost pageflips) so needs the full irqsave spinlocks.
*/
spin_lock_irqsave(&dev->event_lock, flags);
- work = intel_crtc->flip_work;
+ while (!list_empty(&intel_crtc->flip_work)) {
+ work = list_first_entry(&intel_crtc->flip_work,
+ struct intel_flip_work,
+ head);
+
+ if (is_mmio_work(work))
+ break;
- if (work != NULL &&
- !is_mmio_work(work) &&
- pageflip_finished(intel_crtc, work))
- page_flip_completed(intel_crtc);
+ if (!pageflip_finished(intel_crtc, work))
+ break;
+ page_flip_completed(intel_crtc, work);
+ }
spin_unlock_irqrestore(&dev->event_lock, flags);
}
@@ -11110,13 +11120,19 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
* lost pageflips) so needs the full irqsave spinlocks.
*/
spin_lock_irqsave(&dev->event_lock, flags);
- work = intel_crtc->flip_work;
+ while (!list_empty(&intel_crtc->flip_work)) {
+ work = list_first_entry(&intel_crtc->flip_work,
+ struct intel_flip_work,
+ head);
- if (work != NULL &&
- is_mmio_work(work) &&
- pageflip_finished(intel_crtc, work))
- page_flip_completed(intel_crtc);
+ if (!is_mmio_work(work))
+ break;
+ if (!pageflip_finished(intel_crtc, work))
+ break;
+
+ page_flip_completed(intel_crtc, work);
+ }
spin_unlock_irqrestore(&dev->event_lock, flags);
}
@@ -11135,7 +11151,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags)
+ uint64_t gtt_offset)
{
struct intel_engine_cs *engine = req->engine;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
@@ -11158,7 +11174,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
intel_ring_emit(engine, MI_DISPLAY_FLIP |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(engine, fb->pitches[0]);
- intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
+ intel_ring_emit(engine, gtt_offset);
intel_ring_emit(engine, 0); /* aux display base address, unused */
return 0;
@@ -11169,7 +11185,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags)
+ uint64_t gtt_offset)
{
struct intel_engine_cs *engine = req->engine;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
@@ -11189,7 +11205,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
intel_ring_emit(engine, MI_DISPLAY_FLIP_I915 |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(engine, fb->pitches[0]);
- intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
+ intel_ring_emit(engine, gtt_offset);
intel_ring_emit(engine, MI_NOOP);
return 0;
@@ -11200,7 +11216,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags)
+ uint64_t gtt_offset)
{
struct intel_engine_cs *engine = req->engine;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -11219,8 +11235,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
intel_ring_emit(engine, MI_DISPLAY_FLIP |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(engine, fb->pitches[0]);
- intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset |
- obj->tiling_mode);
+ intel_ring_emit(engine, gtt_offset | obj->tiling_mode);
/* XXX Enabling the panel-fitter across page-flip is so far
* untested on non-native modes, so ignore it for now.
@@ -11238,7 +11253,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags)
+ uint64_t gtt_offset)
{
struct intel_engine_cs *engine = req->engine;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -11253,7 +11268,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
intel_ring_emit(engine, MI_DISPLAY_FLIP |
MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
intel_ring_emit(engine, fb->pitches[0] | obj->tiling_mode);
- intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
+ intel_ring_emit(engine, gtt_offset);
/* Contrary to the suggestions in the documentation,
* "Enable Panel Fitter" does not seem to be required when page
@@ -11273,7 +11288,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags)
+ uint64_t gtt_offset)
{
struct intel_engine_cs *engine = req->engine;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
@@ -11356,7 +11371,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
intel_ring_emit(engine, MI_DISPLAY_FLIP_I915 | plane_bit);
intel_ring_emit(engine, (fb->pitches[0] | obj->tiling_mode));
- intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
+ intel_ring_emit(engine, gtt_offset);
intel_ring_emit(engine, (MI_NOOP));
return 0;
@@ -11423,7 +11438,7 @@ static int intel_default_queue_flip(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req,
- uint32_t flags)
+ uint64_t gtt_offset)
{
return -ENODEV;
}
@@ -11478,20 +11493,26 @@ void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe)
return;
spin_lock(&dev->event_lock);
- work = intel_crtc->flip_work;
+ while (!list_empty(&intel_crtc->flip_work)) {
+ work = list_first_entry(&intel_crtc->flip_work,
+ struct intel_flip_work, head);
- if (work != NULL && !is_mmio_work(work) &&
- __pageflip_stall_check_cs(dev_priv, intel_crtc, work)) {
- WARN_ONCE(1,
- "Kicking stuck page flip: queued at %d, now %d\n",
- work->flip_queued_vblank, intel_crtc_get_vblank_counter(intel_crtc));
- page_flip_completed(intel_crtc);
- work = NULL;
- }
+ if (is_mmio_work(work))
+ break;
+
+ if (__pageflip_stall_check_cs(dev_priv, intel_crtc, work)) {
+ WARN_ONCE(1,
+ "Kicking stuck page flip: queued at %d, now %d\n",
+ work->flip_queued_vblank, intel_crtc_get_vblank_counter(intel_crtc));
+ page_flip_completed(intel_crtc, work);
+ continue;
+ }
- if (work != NULL && !is_mmio_work(work) &&
- intel_crtc_get_vblank_counter(intel_crtc) - work->flip_queued_vblank > 1)
- intel_queue_rps_boost_for_request(work->flip_queued_req);
+ if (intel_crtc_get_vblank_counter(intel_crtc) - work->flip_queued_vblank > 1)
+ intel_queue_rps_boost_for_request(work->flip_queued_req);
+
+ break;
+ }
spin_unlock(&dev->event_lock);
}
@@ -11554,13 +11575,18 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
/* We borrow the event spin lock for protecting flip_work */
spin_lock_irq(&dev->event_lock);
- if (intel_crtc->flip_work) {
+ if (!list_empty(&intel_crtc->flip_work)) {
+ struct intel_flip_work *old_work;
+
+ old_work = list_last_entry(&intel_crtc->flip_work,
+ struct intel_flip_work, head);
+
/* Before declaring the flip queue wedged, check if
* the hardware completed the operation behind our backs.
*/
- if (pageflip_finished(intel_crtc, intel_crtc->flip_work)) {
+ if (pageflip_finished(intel_crtc, old_work)) {
DRM_DEBUG_DRIVER("flip queue: previous flip completed, continuing\n");
- page_flip_completed(intel_crtc);
+ page_flip_completed(intel_crtc, old_work);
} else {
DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
spin_unlock_irq(&dev->event_lock);
@@ -11570,7 +11596,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
return -EBUSY;
}
}
- intel_crtc->flip_work = work;
+ list_add_tail(&work->head, &intel_crtc->flip_work);
spin_unlock_irq(&dev->event_lock);
if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
@@ -11652,7 +11678,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
} else {
i915_gem_request_assign(&work->flip_queued_req, request);
ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
- page_flip_flags);
+ work->gtt_offset);
if (ret)
goto cleanup_unpin;
@@ -11687,7 +11713,7 @@ cleanup:
drm_framebuffer_unreference(work->old_fb);
spin_lock_irq(&dev->event_lock);
- intel_crtc->flip_work = NULL;
+ list_del(&work->head);
spin_unlock_irq(&dev->event_lock);
drm_crtc_vblank_put(crtc);
@@ -13701,11 +13727,15 @@ static int intel_atomic_commit(struct drm_device *dev,
intel_modeset_verify_crtc(crtc, old_crtc_state, crtc->state);
if (state->legacy_cursor_update &&
- to_intel_crtc(crtc)->flip_work) {
- struct intel_flip_work *work;
+ !list_empty_careful(&to_intel_crtc(crtc)->flip_work)) {
+ struct intel_flip_work *work = NULL;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
spin_lock_irq(&dev->event_lock);
- work = to_intel_crtc(crtc)->flip_work;
+ if (!list_empty(&intel_crtc->flip_work))
+ work = list_last_entry(&intel_crtc->flip_work,
+ struct intel_flip_work, head);
+
if (work && &work->new_crtc_state->base == old_crtc_state) {
state->crtcs[i] = NULL;
state->crtc_states[i] = NULL;
@@ -14319,6 +14349,8 @@ static void intel_crtc_init(struct drm_device *dev, int pipe)
intel_crtc->base.state = &crtc_state->base;
crtc_state->base.crtc = &intel_crtc->base;
+ INIT_LIST_HEAD(&intel_crtc->flip_work);
+
/* initialize shared scalers */
if (INTEL_INFO(dev)->gen >= 9) {
if (pipe == PIPE_C)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index ab778193bccd..6944202d3de0 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -644,7 +644,7 @@ struct intel_crtc {
unsigned long enabled_power_domains;
bool lowfreq_avail;
struct intel_overlay *overlay;
- struct intel_flip_work *flip_work;
+ struct list_head flip_work;
atomic_t unpin_work_count;
@@ -973,6 +973,8 @@ intel_get_crtc_for_plane(struct drm_device *dev, int plane)
}
struct intel_flip_work {
+ struct list_head head;
+
struct work_struct unpin_work;
struct work_struct mmio_work;
--
2.5.5
* [PATCH 3/9] drm/i915: Add the exclusive fence to plane_state.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
2016-05-26 10:37 ` [PATCH 1/9] drm/i915: Allow mmio updates on all platforms, v3 Maarten Lankhorst
2016-05-26 10:37 ` [PATCH 2/9] drm/i915: Convert flip_work to a list, v2 Maarten Lankhorst
@ 2016-05-26 10:37 ` Maarten Lankhorst
2016-05-26 10:38 ` [PATCH 4/9] drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4 Maarten Lankhorst
` (7 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:37 UTC (permalink / raw)
To: intel-gfx
Set plane_state->base.fence to the dma_buf exclusive fence,
and add a wait to the mmio function. This will make it easier
to unify plane updates later on.
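The new flow, waiting on a per-plane exclusive fence in prepare_commit instead of open-coding a reservation wait, could be sketched roughly as below; these are stub types for illustration, not the kernel's struct fence API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for struct fence; the kernel type is opaque here. */
struct fence_sketch {
	bool signaled;
};

/* Pretend wait: returns 0 on success, negative if interrupted. */
static long fence_wait_sketch(struct fence_sketch *f, bool intr)
{
	(void)intr;
	return f->signaled ? 0 : -512; /* stands in for -ERESTARTSYS */
}

/*
 * Mirrors the hunk added to intel_atomic_prepare_commit(): if the
 * plane state carries an exclusive fence, wait for it and bail out
 * on error before touching the hardware.
 */
static int prepare_commit_sketch(struct fence_sketch *plane_fence)
{
	if (plane_fence) {
		long lret = fence_wait_sketch(plane_fence, true);
		if (lret < 0)
			return (int)lret;
	}
	return 0;
}
```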
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
---
drivers/gpu/drm/i915/intel_atomic_plane.c | 1 +
drivers/gpu/drm/i915/intel_display.c | 54 +++++++++++++++++++++++--------
2 files changed, 42 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_atomic_plane.c b/drivers/gpu/drm/i915/intel_atomic_plane.c
index 7de7721f65bc..2ab45f16fa65 100644
--- a/drivers/gpu/drm/i915/intel_atomic_plane.c
+++ b/drivers/gpu/drm/i915/intel_atomic_plane.c
@@ -102,6 +102,7 @@ intel_plane_destroy_state(struct drm_plane *plane,
struct drm_plane_state *state)
{
WARN_ON(state && to_intel_plane_state(state)->wait_req);
+ WARN_ON(state && state->fence);
drm_atomic_helper_plane_destroy_state(plane, state);
}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ffd9b555d23f..0de232401f1d 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -13476,6 +13476,15 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
struct intel_plane_state *intel_plane_state =
to_intel_plane_state(plane_state);
+ if (plane_state->fence) {
+ long lret = fence_wait(plane_state->fence, true);
+
+ if (lret < 0) {
+ ret = lret;
+ break;
+ }
+ }
+
if (!intel_plane_state->wait_req)
continue;
@@ -13820,6 +13829,33 @@ static const struct drm_crtc_funcs intel_crtc_funcs = {
.atomic_destroy_state = intel_crtc_destroy_state,
};
+static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
+{
+ struct reservation_object *resv;
+
+
+ if (!obj->base.dma_buf)
+ return NULL;
+
+ resv = obj->base.dma_buf->resv;
+
+ /* For framebuffer backed by dmabuf, wait for fence */
+ while (1) {
+ struct fence *fence_excl, *ret = NULL;
+
+ rcu_read_lock();
+
+ fence_excl = rcu_dereference(resv->fence_excl);
+ if (fence_excl)
+ ret = fence_get_rcu(fence_excl);
+
+ rcu_read_unlock();
+
+ if (ret == fence_excl)
+ return ret;
+ }
+}
+
/**
* intel_prepare_plane_fb - Prepare fb for usage on plane
* @plane: drm plane to prepare for
@@ -13872,19 +13908,6 @@ intel_prepare_plane_fb(struct drm_plane *plane,
}
}
- /* For framebuffer backed by dmabuf, wait for fence */
- if (obj && obj->base.dma_buf) {
- long lret;
-
- lret = reservation_object_wait_timeout_rcu(obj->base.dma_buf->resv,
- false, true,
- MAX_SCHEDULE_TIMEOUT);
- if (lret == -ERESTARTSYS)
- return lret;
-
- WARN(lret < 0, "waiting returns %li\n", lret);
- }
-
if (!obj) {
ret = 0;
} else if (plane->type == DRM_PLANE_TYPE_CURSOR &&
@@ -13904,6 +13927,8 @@ intel_prepare_plane_fb(struct drm_plane *plane,
i915_gem_request_assign(&plane_state->wait_req,
obj->last_write_req);
+
+ plane_state->base.fence = intel_get_excl_fence(obj);
}
i915_gem_track_fb(old_obj, obj, intel_plane->frontbuffer_bit);
@@ -13946,6 +13971,9 @@ intel_cleanup_plane_fb(struct drm_plane *plane,
i915_gem_track_fb(old_obj, obj, intel_plane->frontbuffer_bit);
i915_gem_request_assign(&old_intel_state->wait_req, NULL);
+
+ fence_put(old_intel_state->base.fence);
+ old_intel_state->base.fence = NULL;
}
int
--
2.5.5
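The retry loop added above in intel_get_excl_fence() is a standard RCU pattern: dereference the published pointer, try to take a reference, and only return once the fence we managed to reference is still the one in the slot. A minimal userspace sketch of the same pattern, with hypothetical sketch_* names standing in for the kernel fence API and the RCU read lock elided:

```c
#include <stddef.h>

/* Hypothetical stand-in for struct fence: just a refcount (0 = being
 * freed).  The sketch_* names are illustrative, not kernel API. */
struct sketch_fence {
	int refcount;
};

/* Mimics fence_get_rcu(): taking a reference only succeeds while the
 * refcount has not already dropped to zero. */
static struct sketch_fence *sketch_tryget(struct sketch_fence *f)
{
	if (!f || f->refcount == 0)
		return NULL;
	f->refcount++;
	return f;
}

/* Mirrors the loop in intel_get_excl_fence(): re-read the published
 * pointer and retry until the fence we hold is the one we read.
 * ret == excl also covers the "no exclusive fence" case (NULL == NULL). */
static struct sketch_fence *sketch_get_excl(struct sketch_fence **slot)
{
	while (1) {
		struct sketch_fence *excl = *slot;  /* rcu_dereference() */
		struct sketch_fence *ret = sketch_tryget(excl);

		if (ret == excl)
			return ret;
		/* Otherwise we raced with a free; re-read and retry. */
	}
}
```

In the real code the re-read matters because the exclusive fence can be replaced, and its refcount can hit zero, between the rcu_dereference() and the fence_get_rcu() call.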
* [PATCH 4/9] drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (2 preceding siblings ...)
2016-05-26 10:37 ` [PATCH 3/9] drm/i915: Add the exclusive fence to plane_state Maarten Lankhorst
@ 2016-05-26 10:38 ` Maarten Lankhorst
2016-05-30 7:54 ` [PATCH v2 4/9] drm/i915: Rework intel_crtc_page_flip to be almost atomic, v5 Maarten Lankhorst
2016-05-26 10:38 ` [PATCH 5/9] drm/i915: Remove cs based page flip support, v2 Maarten Lankhorst
` (6 subsequent siblings)
10 siblings, 1 reply; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:38 UTC (permalink / raw)
To: intel-gfx
Create a work structure that will be used for all changes, so it can later
be reused by the atomic commit function.
Changes since v1:
- Free old_crtc_state from unpin_work_fn properly.
Changes since v2:
- Add hunk for calling hw state verifier.
- Add missing support for color spaces.
Changes since v3:
- Update for legacy cursor work.
- Passing a null pointer to request_unreference is no longer allowed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
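The decision ladder this rework introduces in intel_flip_schedule_request() — fall back to the mmio path whenever a CS flip cannot be used — can be sketched as follows. The field names are illustrative condensations of the driver checks (wedged GPU, pending modeset, non-primary or multiple planes, unsignaled dma-buf fence, incompatible framebuffer), not real driver structures:

```c
#include <stdbool.h>

/* Hypothetical condensed form of the checks in intel_flip_schedule_request();
 * each condition maps to one of the "goto mmio" fallbacks in the patch. */
struct flip_request {
	bool gpu_wedged;      /* reset in progress or terminally wedged */
	bool needs_modeset;   /* surface parameters must be rewritten */
	int  num_planes;
	bool primary_plane;   /* CS flips only handle the primary plane */
	bool fence_pending;   /* unsignaled dma-buf exclusive fence */
	bool fb_compatible;   /* same format/pitch/offset; tiling on vlv */
};

enum flip_path { FLIP_CS, FLIP_MMIO };

static enum flip_path choose_flip_path(const struct flip_request *rq)
{
	if (rq->gpu_wedged || rq->needs_modeset)
		return FLIP_MMIO;
	if (rq->num_planes != 1 || !rq->primary_plane)
		return FLIP_MMIO;
	if (rq->fence_pending || !rq->fb_compatible)
		return FLIP_MMIO;
	return FLIP_CS;	/* queue_flip() on the selected engine */
}
```

The ordering matters: all cheap global checks come before the per-plane and per-framebuffer ones, and any failure lands on the always-available mmio path rather than returning an error to userspace.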
---
drivers/gpu/drm/i915/i915_debugfs.c | 36 +-
drivers/gpu/drm/i915/intel_display.c | 676 +++++++++++++++++++++--------------
drivers/gpu/drm/i915/intel_drv.h | 17 +-
3 files changed, 444 insertions(+), 285 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index cced527af109..b52c1a5f3451 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -627,29 +627,43 @@ static void i915_dump_pageflip(struct seq_file *m,
struct intel_flip_work *work)
{
const char pipe = pipe_name(crtc->pipe);
- const char plane = plane_name(crtc->plane);
u32 pending;
u32 addr;
+ int i;
pending = atomic_read(&work->pending);
if (pending) {
seq_printf(m, "Flip ioctl preparing on pipe %c (plane %c)\n",
- pipe, plane);
+ pipe, plane_name(crtc->plane));
} else {
seq_printf(m, "Flip pending (waiting for vsync) on pipe %c (plane %c)\n",
- pipe, plane);
+ pipe, plane_name(crtc->plane));
}
- if (work->flip_queued_req) {
- struct intel_engine_cs *engine = i915_gem_request_get_engine(work->flip_queued_req);
- seq_printf(m, "Flip queued on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
+
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *old_plane_state = work->old_plane_state[i];
+ struct drm_plane *plane = old_plane_state->base.plane;
+ struct drm_i915_gem_request *req = old_plane_state->wait_req;
+ struct intel_engine_cs *engine;
+
+ seq_printf(m, "[PLANE:%i] part of flip.\n", plane->base.id);
+
+ if (!req) {
+ seq_printf(m, "Plane not associated with any engine\n");
+ continue;
+ }
+
+ engine = i915_gem_request_get_engine(req);
+
+ seq_printf(m, "Plane blocked on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
engine->name,
- i915_gem_request_get_seqno(work->flip_queued_req),
+ i915_gem_request_get_seqno(req),
dev_priv->next_seqno,
engine->get_seqno(engine),
- i915_gem_request_completed(work->flip_queued_req, true));
- } else
- seq_printf(m, "Flip not associated with any ring\n");
+ i915_gem_request_completed(req, true));
+ }
+
seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
work->flip_queued_vblank,
work->flip_ready_vblank,
@@ -662,7 +676,7 @@ static void i915_dump_pageflip(struct seq_file *m,
addr = I915_READ(DSPADDR(crtc->plane));
seq_printf(m, "Current scanout address 0x%08x\n", addr);
- if (work->pending_flip_obj) {
+ if (work->flip_queued_req) {
seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0de232401f1d..0531cdb1cfa1 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -50,7 +50,7 @@
static bool is_mmio_work(struct intel_flip_work *work)
{
- return work->mmio_work.func;
+ return !work->flip_queued_req;
}
/* Primary plane formats for gen <= 3 */
@@ -124,6 +124,9 @@ static void intel_modeset_setup_hw_state(struct drm_device *dev);
static void intel_pre_disable_primary_noatomic(struct drm_crtc *crtc);
static int ilk_max_pixel_rate(struct drm_atomic_state *state);
static int broxton_calc_cdclk(int max_pixclk);
+static void intel_modeset_verify_crtc(struct drm_crtc *crtc,
+ struct drm_crtc_state *old_state,
+ struct drm_crtc_state *new_state);
struct intel_limit {
struct {
@@ -2528,20 +2531,6 @@ out_unref_obj:
return false;
}
-/* Update plane->state->fb to match plane->fb after driver-internal updates */
-static void
-update_state_fb(struct drm_plane *plane)
-{
- if (plane->fb == plane->state->fb)
- return;
-
- if (plane->state->fb)
- drm_framebuffer_unreference(plane->state->fb);
- plane->state->fb = plane->fb;
- if (plane->state->fb)
- drm_framebuffer_reference(plane->state->fb);
-}
-
static void
intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
struct intel_initial_plane_config *plane_config)
@@ -3807,19 +3796,27 @@ bool intel_has_pending_fb_unpin(struct drm_device *dev)
static void page_flip_completed(struct intel_crtc *intel_crtc, struct intel_flip_work *work)
{
struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev);
-
- list_del_init(&work->head);
+ struct drm_plane_state *new_plane_state;
+ struct drm_plane *primary = intel_crtc->base.primary;
if (work->event)
drm_crtc_send_vblank_event(&intel_crtc->base, work->event);
drm_crtc_vblank_put(&intel_crtc->base);
- wake_up_all(&dev_priv->pending_flip_queue);
- queue_work(dev_priv->wq, &work->unpin_work);
+ new_plane_state = &work->old_plane_state[0]->base;
+ if (work->num_planes >= 1 &&
+ new_plane_state->plane == primary &&
+ new_plane_state->fb)
+ trace_i915_flip_complete(intel_crtc->plane,
+ intel_fb_obj(new_plane_state->fb));
- trace_i915_flip_complete(intel_crtc->plane,
- work->pending_flip_obj);
+ if (work->can_async_unpin) {
+ list_del_init(&work->head);
+ wake_up_all(&dev_priv->pending_flip_queue);
+ }
+
+ queue_work(dev_priv->wq, &work->unpin_work);
}
static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
@@ -3850,7 +3847,9 @@ static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
*/
work = list_first_entry_or_null(&intel_crtc->flip_work,
struct intel_flip_work, head);
- if (work && !is_mmio_work(work)) {
+
+ if (work && !is_mmio_work(work) &&
+ !work_busy(&work->unpin_work)) {
WARN_ONCE(1, "Removing stuck page flip\n");
page_flip_completed(intel_crtc, work);
}
@@ -10954,34 +10953,115 @@ static void intel_crtc_destroy(struct drm_crtc *crtc)
kfree(intel_crtc);
}
+static void intel_crtc_post_flip_update(struct intel_flip_work *work,
+ struct drm_crtc *crtc)
+{
+ struct intel_crtc_state *crtc_state = work->new_crtc_state;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ if (crtc_state->disable_cxsr)
+ intel_crtc->wm.cxsr_allowed = true;
+
+ if (crtc_state->update_wm_post && crtc_state->base.active)
+ intel_update_watermarks(crtc);
+
+ if (work->num_planes > 0 &&
+ work->old_plane_state[0]->base.plane == crtc->primary) {
+ struct intel_plane_state *plane_state =
+ work->new_plane_state[0];
+
+ if (plane_state->visible &&
+ (needs_modeset(&crtc_state->base) ||
+ !work->old_plane_state[0]->visible))
+ intel_post_enable_primary(crtc);
+ }
+}
+
static void intel_unpin_work_fn(struct work_struct *__work)
{
struct intel_flip_work *work =
container_of(__work, struct intel_flip_work, unpin_work);
- struct intel_crtc *crtc = to_intel_crtc(work->crtc);
- struct drm_device *dev = crtc->base.dev;
- struct drm_plane *primary = crtc->base.primary;
+ struct drm_crtc *crtc = work->old_crtc_state->base.crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int i;
- if (is_mmio_work(work))
- flush_work(&work->mmio_work);
+ if (work->fb_bits)
+ intel_frontbuffer_flip_complete(dev, work->fb_bits);
- mutex_lock(&dev->struct_mutex);
- intel_unpin_fb_obj(work->old_fb, primary->state->rotation);
- drm_gem_object_unreference(&work->pending_flip_obj->base);
+ /*
+ * Unless work->can_async_unpin is false, there's no way to ensure
+ * that work->new_crtc_state contains valid memory during unpin
+ * because intel_atomic_commit may free it before this runs.
+ */
+ if (!work->can_async_unpin)
+ intel_crtc_post_flip_update(work, crtc);
- if (work->flip_queued_req)
- i915_gem_request_assign(&work->flip_queued_req, NULL);
- mutex_unlock(&dev->struct_mutex);
+ if (work->fb_bits & to_intel_plane(crtc->primary)->frontbuffer_bit)
+ intel_fbc_post_update(intel_crtc);
+
+ if (work->put_power_domains)
+ modeset_put_power_domains(dev_priv, work->put_power_domains);
- intel_frontbuffer_flip_complete(dev, to_intel_plane(primary)->frontbuffer_bit);
- intel_fbc_post_update(crtc);
- drm_framebuffer_unreference(work->old_fb);
+ /* Make sure mmio work is completely finished before freeing all state here. */
+ flush_work(&work->mmio_work);
- BUG_ON(atomic_read(&crtc->unpin_work_count) == 0);
- atomic_dec(&crtc->unpin_work_count);
+ if (!work->can_async_unpin)
+ /* This must be called before work is unpinned for serialization. */
+ intel_modeset_verify_crtc(crtc, &work->old_crtc_state->base,
+ &work->new_crtc_state->base);
+
+ if (!work->can_async_unpin || !list_empty(&work->head)) {
+ spin_lock_irq(&dev->event_lock);
+ WARN(list_empty(&work->head) != work->can_async_unpin,
+ "[CRTC:%i] Pin work %p async %i with %i planes, active %i -> %i ms %i\n",
+ crtc->base.id, work, work->can_async_unpin, work->num_planes,
+ work->old_crtc_state->base.active, work->new_crtc_state->base.active,
+ needs_modeset(&work->new_crtc_state->base));
+
+ if (!list_empty(&work->head))
+ list_del(&work->head);
+
+ wake_up_all(&dev_priv->pending_flip_queue);
+ spin_unlock_irq(&dev->event_lock);
+ }
+ intel_crtc_destroy_state(crtc, &work->old_crtc_state->base);
if (work->free_new_crtc_state)
- intel_crtc_destroy_state(&crtc->base, &work->new_crtc_state->base);
+ intel_crtc_destroy_state(crtc, &work->new_crtc_state->base);
+
+ if (work->flip_queued_req)
+ i915_gem_request_unreference(work->flip_queued_req);
+
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *old_plane_state =
+ work->old_plane_state[i];
+ struct drm_framebuffer *old_fb = old_plane_state->base.fb;
+ struct drm_plane *plane = old_plane_state->base.plane;
+ struct drm_i915_gem_request *req;
+
+ req = old_plane_state->wait_req;
+ old_plane_state->wait_req = NULL;
+ if (req)
+ i915_gem_request_unreference(req);
+
+ fence_put(old_plane_state->base.fence);
+ old_plane_state->base.fence = NULL;
+
+ if (old_fb &&
+ (plane->type != DRM_PLANE_TYPE_CURSOR ||
+ !INTEL_INFO(dev_priv)->cursor_needs_physical)) {
+ mutex_lock(&dev->struct_mutex);
+ intel_unpin_fb_obj(old_fb, old_plane_state->base.rotation);
+ mutex_unlock(&dev->struct_mutex);
+ }
+
+ intel_plane_destroy_state(plane, &old_plane_state->base);
+ }
+
+ if (!WARN_ON(atomic_read(&intel_crtc->unpin_work_count) == 0))
+ atomic_dec(&intel_crtc->unpin_work_count);
kfree(work);
}
@@ -11095,7 +11175,8 @@ void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe)
if (is_mmio_work(work))
break;
- if (!pageflip_finished(intel_crtc, work))
+ if (!pageflip_finished(intel_crtc, work) ||
+ work_busy(&work->unpin_work))
break;
page_flip_completed(intel_crtc, work);
@@ -11128,7 +11209,8 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
if (!is_mmio_work(work))
break;
- if (!pageflip_finished(intel_crtc, work))
+ if (!pageflip_finished(intel_crtc, work) ||
+ work_busy(&work->unpin_work))
break;
page_flip_completed(intel_crtc, work);
@@ -11377,70 +11459,204 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
return 0;
}
-static bool use_mmio_flip(struct intel_engine_cs *engine,
- struct drm_i915_gem_object *obj)
+static struct intel_engine_cs *
+intel_get_flip_engine(struct drm_device *dev,
+ struct drm_i915_private *dev_priv,
+ struct drm_i915_gem_object *obj)
{
- /*
- * This is not being used for older platforms, because
- * non-availability of flip done interrupt forces us to use
- * CS flips. Older platforms derive flip done using some clever
- * tricks involving the flip_pending status bits and vblank irqs.
- * So using MMIO flips there would disrupt this mechanism.
- */
+ if (IS_VALLEYVIEW(dev) || IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
+ return &dev_priv->engine[BCS];
- if (engine == NULL)
- return true;
+ if (dev_priv->info.gen >= 7) {
+ struct intel_engine_cs *engine;
+
+ engine = i915_gem_request_get_engine(obj->last_write_req);
+ if (engine && engine->id == RCS)
+ return engine;
- if (i915.use_mmio_flip < 0)
+ return &dev_priv->engine[BCS];
+ } else
+ return &dev_priv->engine[RCS];
+}
+
+static bool
+flip_fb_compatible(struct drm_device *dev,
+ struct drm_framebuffer *fb,
+ struct drm_framebuffer *old_fb)
+{
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct drm_i915_gem_object *old_obj = intel_fb_obj(old_fb);
+
+ if (old_fb->pixel_format != fb->pixel_format)
return false;
- else if (i915.use_mmio_flip > 0)
- return true;
- else if (i915.enable_execlists)
- return true;
- else if (obj->base.dma_buf &&
- !reservation_object_test_signaled_rcu(obj->base.dma_buf->resv,
- false))
- return true;
- else
- return engine != i915_gem_request_get_engine(obj->last_write_req);
+
+ if (INTEL_INFO(dev)->gen > 3 &&
+ (fb->offsets[0] != old_fb->offsets[0] ||
+ fb->pitches[0] != old_fb->pitches[0]))
+ return false;
+
+ /* vlv: DISPLAY_FLIP fails to change tiling */
+ if (IS_VALLEYVIEW(dev) && obj->tiling_mode != old_obj->tiling_mode)
+ return false;
+
+ return true;
+}
+
+static void
+intel_display_flip_prepare(struct drm_device *dev, struct drm_crtc *crtc,
+ struct intel_flip_work *work)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ if (work->flip_prepared)
+ return;
+
+ work->flip_prepared = true;
+
+ if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
+ work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(intel_crtc->pipe)) + 1;
+ work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
+
+ intel_frontbuffer_flip_prepare(dev, work->new_crtc_state->fb_bits);
+}
+
+static void intel_flip_schedule_request(struct intel_flip_work *work, struct drm_crtc *crtc)
+{
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_plane_state *new_state = work->new_plane_state[0];
+ struct intel_plane_state *old_state = work->old_plane_state[0];
+ struct drm_framebuffer *fb, *old_fb;
+ struct drm_i915_gem_request *request = NULL;
+ struct intel_engine_cs *engine;
+ struct drm_i915_gem_object *obj;
+ struct fence *fence;
+ int ret;
+
+ to_intel_crtc(crtc)->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ if (__i915_reset_in_progress_or_wedged(to_intel_crtc(crtc)->reset_counter))
+ goto mmio;
+
+ if (i915_terminally_wedged(&dev_priv->gpu_error) ||
+ i915_reset_in_progress(&dev_priv->gpu_error) ||
+ i915.enable_execlists || i915.use_mmio_flip > 0 ||
+ !dev_priv->display.queue_flip)
+ goto mmio;
+
+ /* Not right after modesetting, surface parameters need to be updated */
+ if (needs_modeset(crtc->state) ||
+ to_intel_crtc_state(crtc->state)->update_pipe)
+ goto mmio;
+
+ /* Only allow a mmio flip for a primary plane without a dma-buf fence */
+ if (work->num_planes != 1 ||
+ new_state->base.plane != crtc->primary ||
+ new_state->base.fence)
+ goto mmio;
+
+ fence = work->old_plane_state[0]->base.fence;
+ if (fence && !fence_is_signaled(fence))
+ goto mmio;
+
+ old_fb = old_state->base.fb;
+ fb = new_state->base.fb;
+ obj = intel_fb_obj(fb);
+
+ trace_i915_flip_request(to_intel_crtc(crtc)->plane, obj);
+
+ /* Only when updating an already visible fb. */
+ if (!new_state->visible || !old_state->visible)
+ goto mmio;
+
+ if (!flip_fb_compatible(dev, fb, old_fb))
+ goto mmio;
+
+ engine = intel_get_flip_engine(dev, dev_priv, obj);
+ if (i915.use_mmio_flip == 0 && obj->last_write_req &&
+ i915_gem_request_get_engine(obj->last_write_req) != engine)
+ goto mmio;
+
+ work->gtt_offset = intel_plane_obj_offset(to_intel_plane(crtc->primary), obj, 0);
+ work->gtt_offset += to_intel_crtc(crtc)->dspaddr_offset;
+
+ ret = i915_gem_object_sync(obj, engine, &request);
+ if (!ret && !request) {
+ request = i915_gem_request_alloc(engine, NULL);
+ ret = PTR_ERR_OR_ZERO(request);
+
+ if (ret)
+ request = NULL;
+ }
+
+ intel_display_flip_prepare(dev, crtc, work);
+
+ if (!ret)
+ ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request, 0);
+
+ if (!ret) {
+ i915_gem_request_assign(&work->flip_queued_req, request);
+ intel_mark_page_flip_active(to_intel_crtc(crtc), work);
+ i915_add_request_no_flush(request);
+ return;
+ }
+ if (request)
+ i915_add_request_no_flush(request);
+
+mmio:
+ schedule_work(&work->mmio_work);
}
static void intel_mmio_flip_work_func(struct work_struct *w)
{
struct intel_flip_work *work =
container_of(w, struct intel_flip_work, mmio_work);
- struct intel_crtc *crtc = to_intel_crtc(work->crtc);
- struct drm_device *dev = crtc->base.dev;
+ struct drm_crtc *crtc = work->old_crtc_state->base.crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct intel_crtc_state *crtc_state = work->new_crtc_state;
+ struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *primary = to_intel_plane(crtc->base.primary);
- struct drm_i915_gem_object *obj = intel_fb_obj(primary->base.state->fb);
+ struct drm_i915_gem_request *req;
+ int i;
- if (work->flip_queued_req)
- WARN_ON(__i915_wait_request(work->flip_queued_req,
- false, NULL,
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *old_plane_state = work->old_plane_state[i];
+
+ /* For framebuffer backed by dmabuf, wait for fence */
+ if (old_plane_state->base.fence)
+ WARN_ON(fence_wait(old_plane_state->base.fence, false) < 0);
+
+ req = old_plane_state->wait_req;
+ if (!req)
+ continue;
+
+ WARN_ON(__i915_wait_request(req, false, NULL,
&dev_priv->rps.mmioflips));
+ }
- /* For framebuffer backed by dmabuf, wait for fence */
- if (obj->base.dma_buf)
- WARN_ON(reservation_object_wait_timeout_rcu(obj->base.dma_buf->resv,
- false, false,
- MAX_SCHEDULE_TIMEOUT) < 0);
+ intel_display_flip_prepare(dev, crtc, work);
- intel_pipe_update_start(crtc);
- primary->update_plane(&primary->base,
- work->new_crtc_state,
- to_intel_plane_state(primary->base.state));
- intel_pipe_update_end(crtc, work);
-}
+ intel_pipe_update_start(intel_crtc);
+ if (!needs_modeset(&crtc_state->base)) {
+ if (crtc_state->base.color_mgmt_changed || crtc_state->update_pipe) {
+ intel_color_set_csc(&crtc_state->base);
+ intel_color_load_luts(&crtc_state->base);
+ }
-static int intel_default_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- return -ENODEV;
+ if (crtc_state->update_pipe)
+ intel_update_pipe_config(intel_crtc, work->old_crtc_state);
+ else if (INTEL_INFO(dev)->gen >= 9)
+ skl_detach_scalers(intel_crtc);
+ }
+
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *new_plane_state = work->new_plane_state[i];
+ struct intel_plane *plane = to_intel_plane(new_plane_state->base.plane);
+
+ plane->update_plane(&plane->base, crtc_state, new_plane_state);
+ }
+
+ intel_pipe_update_end(intel_crtc, work);
}
static bool __pageflip_stall_check_cs(struct drm_i915_private *dev_priv,
@@ -11449,7 +11665,8 @@ static bool __pageflip_stall_check_cs(struct drm_i915_private *dev_priv,
{
u32 addr, vblank;
- if (!atomic_read(&work->pending))
+ if (!atomic_read(&work->pending) ||
+ work_busy(&work->unpin_work))
return false;
smp_rmb();
@@ -11516,6 +11733,33 @@ void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe)
spin_unlock(&dev->event_lock);
}
+static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
+{
+ struct reservation_object *resv;
+
+
+ if (!obj->base.dma_buf)
+ return NULL;
+
+ resv = obj->base.dma_buf->resv;
+
+ /* For framebuffer backed by dmabuf, wait for fence */
+ while (1) {
+ struct fence *fence_excl, *ret = NULL;
+
+ rcu_read_lock();
+
+ fence_excl = rcu_dereference(resv->fence_excl);
+ if (fence_excl)
+ ret = fence_get_rcu(fence_excl);
+
+ rcu_read_unlock();
+
+ if (ret == fence_excl)
+ return ret;
+ }
+}
+
static int intel_crtc_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
@@ -11523,17 +11767,20 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_framebuffer *old_fb = crtc->primary->fb;
+ struct drm_plane_state *old_state, *new_state = NULL;
+ struct drm_crtc_state *new_crtc_state = NULL;
+ struct drm_framebuffer *old_fb = crtc->primary->state->fb;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_plane *primary = crtc->primary;
- enum pipe pipe = intel_crtc->pipe;
struct intel_flip_work *work;
- struct intel_engine_cs *engine;
- bool mmio_flip;
- struct drm_i915_gem_request *request = NULL;
int ret;
+ old_state = crtc->primary->state;
+
+ if (!crtc->state->active)
+ return -EINVAL;
+
/*
* drm_mode_page_flip_ioctl() should already catch this, but double
* check to be safe. In the future we may enable pageflipping from
@@ -11543,7 +11790,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
return -EBUSY;
/* Can't change pixel format via MI display flips. */
- if (fb->pixel_format != crtc->primary->fb->pixel_format)
+ if (fb->pixel_format != old_fb->pixel_format)
return -EINVAL;
/*
@@ -11551,27 +11798,46 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
* Note that pitch changes could also affect these register.
*/
if (INTEL_INFO(dev)->gen > 3 &&
- (fb->offsets[0] != crtc->primary->fb->offsets[0] ||
- fb->pitches[0] != crtc->primary->fb->pitches[0]))
+ (fb->offsets[0] != old_fb->offsets[0] ||
+ fb->pitches[0] != old_fb->pitches[0]))
return -EINVAL;
- if (i915_terminally_wedged(&dev_priv->gpu_error))
- goto out_hang;
-
work = kzalloc(sizeof(*work), GFP_KERNEL);
- if (work == NULL)
- return -ENOMEM;
+ new_crtc_state = intel_crtc_duplicate_state(crtc);
+ new_state = intel_plane_duplicate_state(primary);
+
+ if (!work || !new_crtc_state || !new_state) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ drm_framebuffer_unreference(new_state->fb);
+ drm_framebuffer_reference(fb);
+ new_state->fb = fb;
work->new_crtc_state = to_intel_crtc_state(crtc->state);
work->event = event;
- work->crtc = crtc;
- work->old_fb = old_fb;
INIT_WORK(&work->unpin_work, intel_unpin_work_fn);
+ INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
+
+ work->new_crtc_state = to_intel_crtc_state(new_crtc_state);
+ work->old_crtc_state = intel_crtc->config;
+ work->fb_bits = to_intel_plane(primary)->frontbuffer_bit;
+ work->new_crtc_state->fb_bits = work->fb_bits;
+
+ work->can_async_unpin = true;
+ work->num_planes = 1;
+ work->old_plane_state[0] = to_intel_plane_state(old_state);
+ work->new_plane_state[0] = to_intel_plane_state(new_state);
+
+ /* Step 1: vblank waiting and workqueue throttling,
+ * similar to intel_atomic_prepare_commit
+ */
ret = drm_crtc_vblank_get(crtc);
if (ret)
- goto free_work;
+ goto cleanup;
/* We borrow the event spin lock for protecting flip_work */
spin_lock_irq(&dev->event_lock);
@@ -11591,9 +11857,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
spin_unlock_irq(&dev->event_lock);
- drm_crtc_vblank_put(crtc);
- kfree(work);
- return -EBUSY;
+ ret = -EBUSY;
+ goto cleanup_vblank;
}
}
list_add_tail(&work->head, &intel_crtc->flip_work);
@@ -11602,160 +11867,62 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
flush_workqueue(dev_priv->wq);
- /* Reference the objects for the scheduled work. */
- drm_framebuffer_reference(work->old_fb);
- drm_gem_object_reference(&obj->base);
-
- crtc->primary->fb = fb;
- update_state_fb(crtc->primary);
- intel_fbc_pre_update(intel_crtc);
-
- work->pending_flip_obj = obj;
-
- ret = i915_mutex_lock_interruptible(dev);
+ /* step 2, similar to intel_prepare_plane_fb */
+ ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
- goto cleanup;
-
- intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (__i915_reset_in_progress_or_wedged(intel_crtc->reset_counter)) {
- ret = -EIO;
- goto cleanup;
- }
-
- atomic_inc(&intel_crtc->unpin_work_count);
-
- if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
- work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(pipe)) + 1;
-
- if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) {
- engine = &dev_priv->engine[BCS];
- if (obj->tiling_mode != intel_fb_obj(work->old_fb)->tiling_mode)
- /* vlv: DISPLAY_FLIP fails to change tiling */
- engine = NULL;
- } else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
- engine = &dev_priv->engine[BCS];
- } else if (INTEL_INFO(dev)->gen >= 7) {
- engine = i915_gem_request_get_engine(obj->last_write_req);
- if (engine == NULL || engine->id != RCS)
- engine = &dev_priv->engine[BCS];
- } else {
- engine = &dev_priv->engine[RCS];
- }
+ goto cleanup_work;
- mmio_flip = use_mmio_flip(engine, obj);
-
- /* When using CS flips, we want to emit semaphores between rings.
- * However, when using mmio flips we will create a task to do the
- * synchronisation, so all we want here is to pin the framebuffer
- * into the display plane and skip any waits.
- */
- if (!mmio_flip) {
- ret = i915_gem_object_sync(obj, engine, &request);
- if (!ret && !request) {
- request = i915_gem_request_alloc(engine, NULL);
- ret = PTR_ERR_OR_ZERO(request);
- }
-
- if (ret)
- goto cleanup_pending;
- }
-
- ret = intel_pin_and_fence_fb_obj(fb, primary->state->rotation);
+ ret = intel_pin_and_fence_fb_obj(fb, new_state->rotation);
if (ret)
- goto cleanup_pending;
+ goto cleanup_unlock;
+
+ i915_gem_track_fb(intel_fb_obj(old_fb), obj,
+ to_intel_plane(primary)->frontbuffer_bit);
- work->gtt_offset = intel_plane_obj_offset(to_intel_plane(primary),
- obj, 0);
- work->gtt_offset += intel_crtc->dspaddr_offset;
+ /* point of no return, swap state */
+ primary->state = new_state;
+ crtc->state = new_crtc_state;
+ intel_crtc->config = to_intel_crtc_state(new_crtc_state);
+ primary->fb = fb;
- if (mmio_flip) {
- INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
+ /* scheduling flip work */
+ atomic_inc(&intel_crtc->unpin_work_count);
- i915_gem_request_assign(&work->flip_queued_req,
+ if (obj->last_write_req &&
+ !i915_gem_request_completed(obj->last_write_req, true))
+ i915_gem_request_assign(&work->old_plane_state[0]->wait_req,
obj->last_write_req);
- schedule_work(&work->mmio_work);
- } else {
- i915_gem_request_assign(&work->flip_queued_req, request);
- ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
- work->gtt_offset);
- if (ret)
- goto cleanup_unpin;
+ if (obj->base.dma_buf)
+ work->old_plane_state[0]->base.fence = intel_get_excl_fence(obj);
- intel_mark_page_flip_active(intel_crtc, work);
+ intel_fbc_pre_update(intel_crtc);
- i915_add_request_no_flush(request);
- }
+ intel_flip_schedule_request(work, crtc);
- i915_gem_track_fb(intel_fb_obj(old_fb), obj,
- to_intel_plane(primary)->frontbuffer_bit);
mutex_unlock(&dev->struct_mutex);
- intel_frontbuffer_flip_prepare(dev,
- to_intel_plane(primary)->frontbuffer_bit);
-
trace_i915_flip_request(intel_crtc->plane, obj);
return 0;
-cleanup_unpin:
- intel_unpin_fb_obj(fb, crtc->primary->state->rotation);
-cleanup_pending:
- if (!IS_ERR_OR_NULL(request))
- i915_add_request_no_flush(request);
- atomic_dec(&intel_crtc->unpin_work_count);
+cleanup_unlock:
mutex_unlock(&dev->struct_mutex);
-cleanup:
- crtc->primary->fb = old_fb;
- update_state_fb(crtc->primary);
-
- drm_gem_object_unreference_unlocked(&obj->base);
- drm_framebuffer_unreference(work->old_fb);
-
+cleanup_work:
spin_lock_irq(&dev->event_lock);
list_del(&work->head);
spin_unlock_irq(&dev->event_lock);
+cleanup_vblank:
drm_crtc_vblank_put(crtc);
-free_work:
- kfree(work);
-
- if (ret == -EIO) {
- struct drm_atomic_state *state;
- struct drm_plane_state *plane_state;
-
-out_hang:
- state = drm_atomic_state_alloc(dev);
- if (!state)
- return -ENOMEM;
- state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc);
-
-retry:
- plane_state = drm_atomic_get_plane_state(state, primary);
- ret = PTR_ERR_OR_ZERO(plane_state);
- if (!ret) {
- drm_atomic_set_fb_for_plane(plane_state, fb);
-
- ret = drm_atomic_set_crtc_for_plane(plane_state, crtc);
- if (!ret)
- ret = drm_atomic_commit(state);
- }
+cleanup:
+ if (new_state)
+ intel_plane_destroy_state(primary, new_state);
- if (ret == -EDEADLK) {
- drm_modeset_backoff(state->acquire_ctx);
- drm_atomic_state_clear(state);
- goto retry;
- }
+ if (new_crtc_state)
+ intel_crtc_destroy_state(crtc, new_crtc_state);
- if (ret)
- drm_atomic_state_free(state);
-
- if (ret == 0 && event) {
- spin_lock_irq(&dev->event_lock);
- drm_crtc_send_vblank_event(crtc, event);
- spin_unlock_irq(&dev->event_lock);
- }
- }
+ kfree(work);
return ret;
}
@@ -13829,33 +13996,6 @@ static const struct drm_crtc_funcs intel_crtc_funcs = {
.atomic_destroy_state = intel_crtc_destroy_state,
};
-static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
-{
- struct reservation_object *resv;
-
-
- if (!obj->base.dma_buf)
- return NULL;
-
- resv = obj->base.dma_buf->resv;
-
- /* For framebuffer backed by dmabuf, wait for fence */
- while (1) {
- struct fence *fence_excl, *ret = NULL;
-
- rcu_read_lock();
-
- fence_excl = rcu_dereference(resv->fence_excl);
- if (fence_excl)
- ret = fence_get_rcu(fence_excl);
-
- rcu_read_unlock();
-
- if (ret == fence_excl)
- return ret;
- }
-}
-
/**
* intel_prepare_plane_fb - Prepare fb for usage on plane
* @plane: drm plane to prepare for
@@ -15159,7 +15299,7 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv)
/* Drop through - unsupported since execlist only. */
default:
/* Default just returns -ENODEV to indicate unsupported */
- dev_priv->display.queue_flip = intel_default_queue_flip;
+ break;
}
}
@@ -16119,9 +16259,9 @@ void intel_modeset_gem_init(struct drm_device *dev)
DRM_ERROR("failed to pin boot fb on pipe %d\n",
to_intel_crtc(c)->pipe);
drm_framebuffer_unreference(c->primary->fb);
- c->primary->fb = NULL;
+ drm_framebuffer_unreference(c->primary->state->fb);
+ c->primary->fb = c->primary->state->fb = NULL;
c->primary->crtc = c->primary->state->crtc = NULL;
- update_state_fb(c->primary);
c->state->plane_mask &= ~(1 << drm_plane_index(c->primary));
}
}
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 6944202d3de0..c6d40bfce147 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -978,12 +978,6 @@ struct intel_flip_work {
struct work_struct unpin_work;
struct work_struct mmio_work;
- struct intel_crtc_state *new_crtc_state;
- bool free_new_crtc_state;
-
- struct drm_crtc *crtc;
- struct drm_framebuffer *old_fb;
- struct drm_i915_gem_object *pending_flip_obj;
struct drm_pending_vblank_event *event;
atomic_t pending;
u32 flip_count;
@@ -991,6 +985,17 @@ struct intel_flip_work {
struct drm_i915_gem_request *flip_queued_req;
u32 flip_queued_vblank;
u32 flip_ready_vblank;
+
+ unsigned put_power_domains;
+ unsigned num_planes;
+
+ bool can_async_unpin, flip_prepared, free_new_crtc_state;
+
+ unsigned fb_bits;
+
+ struct intel_crtc_state *old_crtc_state, *new_crtc_state;
+ struct intel_plane_state *old_plane_state[I915_MAX_PLANES + 1];
+ struct intel_plane_state *new_plane_state[I915_MAX_PLANES + 1];
};
struct intel_load_detect_pipe {
--
2.5.5
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 5/9] drm/i915: Remove cs based page flip support, v2.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (3 preceding siblings ...)
2016-05-26 10:38 ` [PATCH 4/9] drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4 Maarten Lankhorst
@ 2016-05-26 10:38 ` Maarten Lankhorst
2016-05-30 7:55 ` [PATCH 5/9] drm/i915: Remove cs based page flip support, v3 Maarten Lankhorst
2016-05-26 10:38 ` [PATCH 6/9] drm/i915: Remove use_mmio_flip kernel parameter Maarten Lankhorst
` (5 subsequent siblings)
10 siblings, 1 reply; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:38 UTC (permalink / raw)
To: intel-gfx
With mmio flips now available on all platforms, it's time to remove
support for cs flips.
Changes since v1:
- Rebase for legacy cursor updates.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/gpu/drm/i915/i915_debugfs.c | 19 +-
drivers/gpu/drm/i915/i915_irq.c | 120 ++---------
drivers/gpu/drm/i915/intel_display.c | 392 +----------------------------------
drivers/gpu/drm/i915/intel_drv.h | 9 +-
4 files changed, 33 insertions(+), 507 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index b52c1a5f3451..b29ba16c90b3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -628,7 +628,6 @@ static void i915_dump_pageflip(struct seq_file *m,
{
const char pipe = pipe_name(crtc->pipe);
u32 pending;
- u32 addr;
int i;
pending = atomic_read(&work->pending);
@@ -640,7 +639,6 @@ static void i915_dump_pageflip(struct seq_file *m,
pipe, plane_name(crtc->plane));
}
-
for (i = 0; i < work->num_planes; i++) {
struct intel_plane_state *old_plane_state = work->old_plane_state[i];
struct drm_plane *plane = old_plane_state->base.plane;
@@ -664,22 +662,9 @@ static void i915_dump_pageflip(struct seq_file *m,
i915_gem_request_completed(req, true));
}
- seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
- work->flip_queued_vblank,
- work->flip_ready_vblank,
+ seq_printf(m, "Flip queued on frame %d, now %d\n",
+ pending ? work->flip_queued_vblank : -1,
intel_crtc_get_vblank_counter(crtc));
- seq_printf(m, "%d prepares\n", atomic_read(&work->pending));
-
- if (INTEL_INFO(dev_priv)->gen >= 4)
- addr = I915_HI_DISPBASE(I915_READ(DSPSURF(crtc->plane)));
- else
- addr = I915_READ(DSPADDR(crtc->plane));
- seq_printf(m, "Current scanout address 0x%08x\n", addr);
-
- if (work->flip_queued_req) {
- seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
- seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
- }
}
static int i915_gem_pageflip_info(struct seq_file *m, void *data)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index caaf1e2a7bc1..fc2b2a7e2c55 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -136,6 +136,12 @@ static const u32 hpd_bxt[HPD_NUM_PINS] = {
POSTING_READ(type##IIR); \
} while (0)
+static void
+intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, unsigned pipe)
+{
+ DRM_DEBUG_KMS("Finished page flip\n");
+}
+
/*
* We should clear IMR at preinstall/uninstall, and just check at postinstall.
*/
@@ -1631,16 +1637,11 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
}
}
-static bool intel_pipe_handle_vblank(struct drm_i915_private *dev_priv,
+static void intel_pipe_handle_vblank(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
- bool ret;
-
- ret = drm_handle_vblank(dev_priv->dev, pipe);
- if (ret)
+ if (drm_handle_vblank(dev_priv->dev, pipe))
intel_finish_page_flip_mmio(dev_priv, pipe);
-
- return ret;
}
static void valleyview_pipestat_irq_ack(struct drm_i915_private *dev_priv,
@@ -1707,9 +1708,8 @@ static void valleyview_pipestat_irq_handler(struct drm_i915_private *dev_priv,
enum pipe pipe;
for_each_pipe(dev_priv, pipe) {
- if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PLANE_FLIP_DONE_INT_STATUS_VLV)
intel_finish_page_flip_cs(dev_priv, pipe);
@@ -2155,9 +2155,8 @@ static void ilk_display_irq_handler(struct drm_i915_private *dev_priv,
DRM_ERROR("Poison interrupt\n");
for_each_pipe(dev_priv, pipe) {
- if (de_iir & DE_PIPE_VBLANK(pipe) &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (de_iir & DE_PIPE_VBLANK(pipe))
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (de_iir & DE_PIPE_FIFO_UNDERRUN(pipe))
intel_cpu_fifo_underrun_irq_handler(dev_priv, pipe);
@@ -2206,9 +2205,8 @@ static void ivb_display_irq_handler(struct drm_i915_private *dev_priv,
intel_opregion_asle_intr(dev_priv);
for_each_pipe(dev_priv, pipe) {
- if (de_iir & (DE_PIPE_VBLANK_IVB(pipe)) &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (de_iir & (DE_PIPE_VBLANK_IVB(pipe)))
+ intel_pipe_handle_vblank(dev_priv, pipe);
/* plane/pipes map 1:1 on ilk+ */
if (de_iir & DE_PLANE_FLIP_DONE_IVB(pipe))
@@ -2407,9 +2405,8 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
ret = IRQ_HANDLED;
I915_WRITE(GEN8_DE_PIPE_IIR(pipe), iir);
- if (iir & GEN8_PIPE_VBLANK &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (iir & GEN8_PIPE_VBLANK)
+ intel_pipe_handle_vblank(dev_priv, pipe);
flip_done = iir;
if (INTEL_INFO(dev_priv)->gen >= 9)
@@ -3975,37 +3972,6 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
return 0;
}
-/*
- * Returns true when a page flip has completed.
- */
-static bool i8xx_handle_vblank(struct drm_i915_private *dev_priv,
- int plane, int pipe, u32 iir)
-{
- u16 flip_pending = DISPLAY_PLANE_FLIP_PENDING(plane);
-
- if (!intel_pipe_handle_vblank(dev_priv, pipe))
- return false;
-
- if ((iir & flip_pending) == 0)
- goto check_page_flip;
-
- /* We detect FlipDone by looking for the change in PendingFlip from '1'
- * to '0' on the following vblank, i.e. IIR has the Pendingflip
- * asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
- * the flip is completed (no longer pending). Since this doesn't raise
- * an interrupt per se, we watch for the change at vblank.
- */
- if (I915_READ16(ISR) & flip_pending)
- goto check_page_flip;
-
- intel_finish_page_flip_cs(dev_priv, pipe);
- return true;
-
-check_page_flip:
- intel_check_page_flip(dev_priv, pipe);
- return false;
-}
-
static irqreturn_t i8xx_irq_handler(int irq, void *arg)
{
struct drm_device *dev = arg;
@@ -4058,13 +4024,8 @@ static irqreturn_t i8xx_irq_handler(int irq, void *arg)
notify_ring(&dev_priv->engine[RCS]);
for_each_pipe(dev_priv, pipe) {
- int plane = pipe;
- if (HAS_FBC(dev_priv))
- plane = !plane;
-
- if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS &&
- i8xx_handle_vblank(dev_priv, plane, pipe, iir))
- flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(plane);
+ if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PIPE_CRC_DONE_INTERRUPT_STATUS)
i9xx_pipe_crc_irq_handler(dev_priv, pipe);
@@ -4164,37 +4125,6 @@ static int i915_irq_postinstall(struct drm_device *dev)
return 0;
}
-/*
- * Returns true when a page flip has completed.
- */
-static bool i915_handle_vblank(struct drm_i915_private *dev_priv,
- int plane, int pipe, u32 iir)
-{
- u32 flip_pending = DISPLAY_PLANE_FLIP_PENDING(plane);
-
- if (!intel_pipe_handle_vblank(dev_priv, pipe))
- return false;
-
- if ((iir & flip_pending) == 0)
- goto check_page_flip;
-
- /* We detect FlipDone by looking for the change in PendingFlip from '1'
- * to '0' on the following vblank, i.e. IIR has the Pendingflip
- * asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
- * the flip is completed (no longer pending). Since this doesn't raise
- * an interrupt per se, we watch for the change at vblank.
- */
- if (I915_READ(ISR) & flip_pending)
- goto check_page_flip;
-
- intel_finish_page_flip_cs(dev_priv, pipe);
- return true;
-
-check_page_flip:
- intel_check_page_flip(dev_priv, pipe);
- return false;
-}
-
static irqreturn_t i915_irq_handler(int irq, void *arg)
{
struct drm_device *dev = arg;
@@ -4255,13 +4185,8 @@ static irqreturn_t i915_irq_handler(int irq, void *arg)
notify_ring(&dev_priv->engine[RCS]);
for_each_pipe(dev_priv, pipe) {
- int plane = pipe;
- if (HAS_FBC(dev_priv))
- plane = !plane;
-
- if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS &&
- i915_handle_vblank(dev_priv, plane, pipe, iir))
- flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(plane);
+ if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PIPE_LEGACY_BLC_EVENT_STATUS)
blc_event = true;
@@ -4489,9 +4414,8 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
notify_ring(&dev_priv->engine[VCS]);
for_each_pipe(dev_priv, pipe) {
- if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS &&
- i915_handle_vblank(dev_priv, pipe, pipe, iir))
- flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(pipe);
+ if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PIPE_LEGACY_BLC_EVENT_STATUS)
blc_event = true;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0531cdb1cfa1..2324b74f72f4 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -48,11 +48,6 @@
#include <linux/reservation.h>
#include <linux/dma-buf.h>
-static bool is_mmio_work(struct intel_flip_work *work)
-{
- return !work->flip_queued_req;
-}
-
/* Primary plane formats for gen <= 3 */
static const uint32_t i8xx_primary_formats[] = {
DRM_FORMAT_C8,
@@ -3103,14 +3098,6 @@ intel_pipe_set_base_atomic(struct drm_crtc *crtc, struct drm_framebuffer *fb,
return -ENODEV;
}
-static void intel_complete_page_flips(struct drm_i915_private *dev_priv)
-{
- struct intel_crtc *crtc;
-
- for_each_intel_crtc(dev_priv->dev, crtc)
- intel_finish_page_flip_cs(dev_priv, crtc->pipe);
-}
-
static void intel_update_primary_planes(struct drm_device *dev)
{
struct drm_crtc *crtc;
@@ -3151,13 +3138,6 @@ void intel_prepare_reset(struct drm_i915_private *dev_priv)
void intel_finish_reset(struct drm_i915_private *dev_priv)
{
- /*
- * Flips in the rings will be nuked by the reset,
- * so complete all pending flips so that user space
- * will get its events and not get stuck.
- */
- intel_complete_page_flips(dev_priv);
-
/* no reset support for gen2 */
if (IS_GEN2(dev_priv))
return;
@@ -3835,26 +3815,7 @@ static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
if (ret < 0)
return ret;
- if (ret == 0) {
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
-
- spin_lock_irq(&dev->event_lock);
-
- /*
- * If we're waiting for page flips, it's the first
- * flip on the list that's stuck.
- */
- work = list_first_entry_or_null(&intel_crtc->flip_work,
- struct intel_flip_work, head);
-
- if (work && !is_mmio_work(work) &&
- !work_busy(&work->unpin_work)) {
- WARN_ONCE(1, "Removing stuck page flip\n");
- page_flip_completed(intel_crtc, work);
- }
- spin_unlock_irq(&dev->event_lock);
- }
+ WARN(ret == 0, "Stuck page flip\n");
return 0;
}
@@ -11031,9 +10992,6 @@ static void intel_unpin_work_fn(struct work_struct *__work)
if (work->free_new_crtc_state)
intel_crtc_destroy_state(crtc, &work->new_crtc_state->base);
- if (work->flip_queued_req)
- i915_gem_request_unreference(work->flip_queued_req);
-
for (i = 0; i < work->num_planes; i++) {
struct intel_plane_state *old_plane_state =
work->old_plane_state[i];
@@ -11066,75 +11024,6 @@ static void intel_unpin_work_fn(struct work_struct *__work)
kfree(work);
}
-/* Is 'a' after or equal to 'b'? */
-static bool g4x_flip_count_after_eq(u32 a, u32 b)
-{
- return !((a - b) & 0x80000000);
-}
-
-static bool __pageflip_finished_cs(struct intel_crtc *crtc,
- struct intel_flip_work *work)
-{
- struct drm_device *dev = crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned reset_counter;
-
- reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (crtc->reset_counter != reset_counter)
- return true;
-
- /*
- * The relevant registers doen't exist on pre-ctg.
- * As the flip done interrupt doesn't trigger for mmio
- * flips on gmch platforms, a flip count check isn't
- * really needed there. But since ctg has the registers,
- * include it in the check anyway.
- */
- if (INTEL_INFO(dev)->gen < 5 && !IS_G4X(dev))
- return true;
-
- /*
- * BDW signals flip done immediately if the plane
- * is disabled, even if the plane enable is already
- * armed to occur at the next vblank :(
- */
-
- /*
- * A DSPSURFLIVE check isn't enough in case the mmio and CS flips
- * used the same base address. In that case the mmio flip might
- * have completed, but the CS hasn't even executed the flip yet.
- *
- * A flip count check isn't enough as the CS might have updated
- * the base address just after start of vblank, but before we
- * managed to process the interrupt. This means we'd complete the
- * CS flip too soon.
- *
- * Combining both checks should get us a good enough result. It may
- * still happen that the CS flip has been executed, but has not
- * yet actually completed. But in case the base address is the same
- * anyway, we don't really care.
- */
- return (I915_READ(DSPSURFLIVE(crtc->plane)) & ~0xfff) ==
- work->gtt_offset &&
- g4x_flip_count_after_eq(I915_READ(PIPE_FLIPCOUNT_G4X(crtc->pipe)),
- work->flip_count);
-}
-
-static bool
-__pageflip_finished_mmio(struct intel_crtc *crtc,
- struct intel_flip_work *work)
-{
- /*
- * MMIO work completes when vblank is different from
- * flip_queued_vblank.
- *
- * Reset counter value doesn't matter, this is handled by
- * i915_wait_request finishing early, so no need to handle
- * reset here.
- */
- return intel_crtc_get_vblank_counter(crtc) != work->flip_queued_vblank;
-}
-
static bool pageflip_finished(struct intel_crtc *crtc,
struct intel_flip_work *work)
@@ -11144,44 +11033,11 @@ static bool pageflip_finished(struct intel_crtc *crtc,
smp_rmb();
- if (is_mmio_work(work))
- return __pageflip_finished_mmio(crtc, work);
- else
- return __pageflip_finished_cs(crtc, work);
-}
-
-void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe)
-{
- struct drm_device *dev = dev_priv->dev;
- struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
- unsigned long flags;
-
- /* Ignore early vblank irqs */
- if (!crtc)
- return;
-
/*
- * This is called both by irq handlers and the reset code (to complete
- * lost pageflips) so needs the full irqsave spinlocks.
+ * MMIO work completes when vblank is different from
+ * flip_queued_vblank.
*/
- spin_lock_irqsave(&dev->event_lock, flags);
- while (!list_empty(&intel_crtc->flip_work)) {
- work = list_first_entry(&intel_crtc->flip_work,
- struct intel_flip_work,
- head);
-
- if (is_mmio_work(work))
- break;
-
- if (!pageflip_finished(intel_crtc, work) ||
- work_busy(&work->unpin_work))
- break;
-
- page_flip_completed(intel_crtc, work);
- }
- spin_unlock_irqrestore(&dev->event_lock, flags);
+ return intel_crtc_get_vblank_counter(crtc) != work->flip_queued_vblank;
}
void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
@@ -11206,9 +11062,6 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
struct intel_flip_work,
head);
- if (!is_mmio_work(work))
- break;
-
if (!pageflip_finished(intel_crtc, work) ||
work_busy(&work->unpin_work))
break;
@@ -11218,16 +11071,6 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
spin_unlock_irqrestore(&dev->event_lock, flags);
}
-static inline void intel_mark_page_flip_active(struct intel_crtc *crtc,
- struct intel_flip_work *work)
-{
- work->flip_queued_vblank = intel_crtc_get_vblank_counter(crtc);
-
- /* Ensure that the work item is consistent when activating it ... */
- smp_mb__before_atomic();
- atomic_set(&work->pending, 1);
-}
-
static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
@@ -11459,154 +11302,6 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
return 0;
}
-static struct intel_engine_cs *
-intel_get_flip_engine(struct drm_device *dev,
- struct drm_i915_private *dev_priv,
- struct drm_i915_gem_object *obj)
-{
- if (IS_VALLEYVIEW(dev) || IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
- return &dev_priv->engine[BCS];
-
- if (dev_priv->info.gen >= 7) {
- struct intel_engine_cs *engine;
-
- engine = i915_gem_request_get_engine(obj->last_write_req);
- if (engine && engine->id == RCS)
- return engine;
-
- return &dev_priv->engine[BCS];
- } else
- return &dev_priv->engine[RCS];
-}
-
-static bool
-flip_fb_compatible(struct drm_device *dev,
- struct drm_framebuffer *fb,
- struct drm_framebuffer *old_fb)
-{
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
- struct drm_i915_gem_object *old_obj = intel_fb_obj(old_fb);
-
- if (old_fb->pixel_format != fb->pixel_format)
- return false;
-
- if (INTEL_INFO(dev)->gen > 3 &&
- (fb->offsets[0] != old_fb->offsets[0] ||
- fb->pitches[0] != old_fb->pitches[0]))
- return false;
-
- /* vlv: DISPLAY_FLIP fails to change tiling */
- if (IS_VALLEYVIEW(dev) && obj->tiling_mode != old_obj->tiling_mode)
- return false;
-
- return true;
-}
-
-static void
-intel_display_flip_prepare(struct drm_device *dev, struct drm_crtc *crtc,
- struct intel_flip_work *work)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-
- if (work->flip_prepared)
- return;
-
- work->flip_prepared = true;
-
- if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
- work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(intel_crtc->pipe)) + 1;
- work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
-
- intel_frontbuffer_flip_prepare(dev, work->new_crtc_state->fb_bits);
-}
-
-static void intel_flip_schedule_request(struct intel_flip_work *work, struct drm_crtc *crtc)
-{
- struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane_state *new_state = work->new_plane_state[0];
- struct intel_plane_state *old_state = work->old_plane_state[0];
- struct drm_framebuffer *fb, *old_fb;
- struct drm_i915_gem_request *request = NULL;
- struct intel_engine_cs *engine;
- struct drm_i915_gem_object *obj;
- struct fence *fence;
- int ret;
-
- to_intel_crtc(crtc)->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (__i915_reset_in_progress_or_wedged(to_intel_crtc(crtc)->reset_counter))
- goto mmio;
-
- if (i915_terminally_wedged(&dev_priv->gpu_error) ||
- i915_reset_in_progress(&dev_priv->gpu_error) ||
- i915.enable_execlists || i915.use_mmio_flip > 0 ||
- !dev_priv->display.queue_flip)
- goto mmio;
-
- /* Not right after modesetting, surface parameters need to be updated */
- if (needs_modeset(crtc->state) ||
- to_intel_crtc_state(crtc->state)->update_pipe)
- goto mmio;
-
- /* Only allow a mmio flip for a primary plane without a dma-buf fence */
- if (work->num_planes != 1 ||
- new_state->base.plane != crtc->primary ||
- new_state->base.fence)
- goto mmio;
-
- fence = work->old_plane_state[0]->base.fence;
- if (fence && !fence_is_signaled(fence))
- goto mmio;
-
- old_fb = old_state->base.fb;
- fb = new_state->base.fb;
- obj = intel_fb_obj(fb);
-
- trace_i915_flip_request(to_intel_crtc(crtc)->plane, obj);
-
- /* Only when updating a already visible fb. */
- if (!new_state->visible || !old_state->visible)
- goto mmio;
-
- if (!flip_fb_compatible(dev, fb, old_fb))
- goto mmio;
-
- engine = intel_get_flip_engine(dev, dev_priv, obj);
- if (i915.use_mmio_flip == 0 && obj->last_write_req &&
- i915_gem_request_get_engine(obj->last_write_req) != engine)
- goto mmio;
-
- work->gtt_offset = intel_plane_obj_offset(to_intel_plane(crtc->primary), obj, 0);
- work->gtt_offset += to_intel_crtc(crtc)->dspaddr_offset;
-
- ret = i915_gem_object_sync(obj, engine, &request);
- if (!ret && !request) {
- request = i915_gem_request_alloc(engine, NULL);
- ret = PTR_ERR_OR_ZERO(request);
-
- if (ret)
- request = NULL;
- }
-
- intel_display_flip_prepare(dev, crtc, work);
-
- if (!ret)
- ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request, 0);
-
- if (!ret) {
- i915_gem_request_assign(&work->flip_queued_req, request);
- intel_mark_page_flip_active(to_intel_crtc(crtc), work);
- i915_add_request_no_flush(request);
- return;
- }
- if (request)
- i915_add_request_no_flush(request);
-
-mmio:
- schedule_work(&work->mmio_work);
-}
-
static void intel_mmio_flip_work_func(struct work_struct *w)
{
struct intel_flip_work *work =
@@ -11634,7 +11329,7 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
&dev_priv->rps.mmioflips));
}
- intel_display_flip_prepare(dev, crtc, work);
+ intel_frontbuffer_flip_prepare(dev, crtc_state->fb_bits);
intel_pipe_update_start(intel_crtc);
if (!needs_modeset(&crtc_state->base)) {
@@ -11659,80 +11354,6 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
intel_pipe_update_end(intel_crtc, work);
}
-static bool __pageflip_stall_check_cs(struct drm_i915_private *dev_priv,
- struct intel_crtc *intel_crtc,
- struct intel_flip_work *work)
-{
- u32 addr, vblank;
-
- if (!atomic_read(&work->pending) ||
- work_busy(&work->unpin_work))
- return false;
-
- smp_rmb();
-
- vblank = intel_crtc_get_vblank_counter(intel_crtc);
- if (work->flip_ready_vblank == 0) {
- if (work->flip_queued_req &&
- !i915_gem_request_completed(work->flip_queued_req, true))
- return false;
-
- work->flip_ready_vblank = vblank;
- }
-
- if (vblank - work->flip_ready_vblank < 3)
- return false;
-
- /* Potential stall - if we see that the flip has happened,
- * assume a missed interrupt. */
- if (INTEL_GEN(dev_priv) >= 4)
- addr = I915_HI_DISPBASE(I915_READ(DSPSURF(intel_crtc->plane)));
- else
- addr = I915_READ(DSPADDR(intel_crtc->plane));
-
- /* There is a potential issue here with a false positive after a flip
- * to the same address. We could address this by checking for a
- * non-incrementing frame counter.
- */
- return addr == work->gtt_offset;
-}
-
-void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe)
-{
- struct drm_device *dev = dev_priv->dev;
- struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
-
- WARN_ON(!in_interrupt());
-
- if (crtc == NULL)
- return;
-
- spin_lock(&dev->event_lock);
- while (!list_empty(&intel_crtc->flip_work)) {
- work = list_first_entry(&intel_crtc->flip_work,
- struct intel_flip_work, head);
-
- if (is_mmio_work(work))
- break;
-
- if (__pageflip_stall_check_cs(dev_priv, intel_crtc, work)) {
- WARN_ONCE(1,
- "Kicking stuck page flip: queued at %d, now %d\n",
- work->flip_queued_vblank, intel_crtc_get_vblank_counter(intel_crtc));
- page_flip_completed(intel_crtc, work);
- continue;
- }
-
- if (intel_crtc_get_vblank_counter(intel_crtc) - work->flip_queued_vblank > 1)
- intel_queue_rps_boost_for_request(work->flip_queued_req);
-
- break;
- }
- spin_unlock(&dev->event_lock);
-}
-
static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
{
struct reservation_object *resv;
@@ -11898,7 +11519,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
intel_fbc_pre_update(intel_crtc);
- intel_flip_schedule_request(work, crtc);
+ intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ schedule_work(&work->mmio_work);
mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index c6d40bfce147..e7e262ac1f99 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -980,16 +980,12 @@ struct intel_flip_work {
struct drm_pending_vblank_event *event;
atomic_t pending;
- u32 flip_count;
- u32 gtt_offset;
- struct drm_i915_gem_request *flip_queued_req;
u32 flip_queued_vblank;
- u32 flip_ready_vblank;
unsigned put_power_domains;
unsigned num_planes;
- bool can_async_unpin, flip_prepared, free_new_crtc_state;
+ bool can_async_unpin, free_new_crtc_state;
unsigned fb_bits;
@@ -1207,9 +1203,8 @@ struct drm_framebuffer *
__intel_framebuffer_create(struct drm_device *dev,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj);
-void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe);
void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe);
-void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe);
+
int intel_prepare_plane_fb(struct drm_plane *plane,
const struct drm_plane_state *new_state);
void intel_cleanup_plane_fb(struct drm_plane *plane,
--
2.5.5
* [PATCH 6/9] drm/i915: Remove use_mmio_flip kernel parameter.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (4 preceding siblings ...)
2016-05-26 10:38 ` [PATCH 5/9] drm/i915: Remove cs based page flip support, v2 Maarten Lankhorst
@ 2016-05-26 10:38 ` Maarten Lankhorst
2016-05-26 10:38 ` [PATCH 7/9] drm/i915: Remove queue_flip pointer Maarten Lankhorst
` (4 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:38 UTC (permalink / raw)
To: intel-gfx
With the removal of cs flips, mmio flips are effectively always enabled, so this parameter no longer has any effect.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
---
drivers/gpu/drm/i915/i915_params.c | 5 -----
drivers/gpu/drm/i915/i915_params.h | 1 -
drivers/gpu/drm/i915/intel_lrc.c | 4 +---
3 files changed, 1 insertion(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_params.c b/drivers/gpu/drm/i915/i915_params.c
index 5e18cf9f754d..9a5d58b251f5 100644
--- a/drivers/gpu/drm/i915/i915_params.c
+++ b/drivers/gpu/drm/i915/i915_params.c
@@ -49,7 +49,6 @@ struct i915_params i915 __read_mostly = {
.invert_brightness = 0,
.disable_display = 0,
.enable_cmd_parser = 1,
- .use_mmio_flip = 0,
.mmio_debug = 0,
.verbose_state_checks = 1,
.nuclear_pageflip = 0,
@@ -175,10 +174,6 @@ module_param_named_unsafe(enable_cmd_parser, i915.enable_cmd_parser, int, 0600);
MODULE_PARM_DESC(enable_cmd_parser,
"Enable command parsing (1=enabled [default], 0=disabled)");
-module_param_named_unsafe(use_mmio_flip, i915.use_mmio_flip, int, 0600);
-MODULE_PARM_DESC(use_mmio_flip,
- "use MMIO flips (-1=never, 0=driver discretion [default], 1=always)");
-
module_param_named(mmio_debug, i915.mmio_debug, int, 0600);
MODULE_PARM_DESC(mmio_debug,
"Enable the MMIO debug code for the first N failures (default: off). "
diff --git a/drivers/gpu/drm/i915/i915_params.h b/drivers/gpu/drm/i915/i915_params.h
index 1323261a0cdd..658ce7379671 100644
--- a/drivers/gpu/drm/i915/i915_params.h
+++ b/drivers/gpu/drm/i915/i915_params.h
@@ -48,7 +48,6 @@ struct i915_params {
int enable_guc_loading;
int enable_guc_submission;
int guc_log_level;
- int use_mmio_flip;
int mmio_debug;
int edp_vswing;
unsigned int inject_load_failure;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 5c191a1afaaf..53715037ab54 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -260,9 +260,7 @@ int intel_sanitize_enable_execlists(struct drm_i915_private *dev_priv, int enabl
if (enable_execlists == 0)
return 0;
- if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) &&
- USES_PPGTT(dev_priv) &&
- i915.use_mmio_flip >= 0)
+ if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) && USES_PPGTT(dev_priv))
return 1;
return 0;
--
2.5.5
* [PATCH 7/9] drm/i915: Remove queue_flip pointer.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (5 preceding siblings ...)
2016-05-26 10:38 ` [PATCH 6/9] drm/i915: Remove use_mmio_flip kernel parameter Maarten Lankhorst
@ 2016-05-26 10:38 ` Maarten Lankhorst
2016-05-26 10:38 ` [PATCH 8/9] drm/i915: Remove reset_counter from intel_crtc Maarten Lankhorst
` (3 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:38 UTC (permalink / raw)
To: intel-gfx
With the removal of cs flip support the queue_flip hook is no longer reachable.
It can be revived if needed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 5 -
drivers/gpu/drm/i915/intel_display.c | 259 -----------------------------------
2 files changed, 264 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ce1d368e4e50..85a7c44ed55c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -618,11 +618,6 @@ struct drm_i915_display_funcs {
void (*audio_codec_disable)(struct intel_encoder *encoder);
void (*fdi_link_train)(struct drm_crtc *crtc);
void (*init_clock_gating)(struct drm_device *dev);
- int (*queue_flip)(struct drm_device *dev, struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset);
void (*hpd_irq_setup)(struct drm_i915_private *dev_priv);
/* clock updates for mode set */
/* cursor updates */
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 2324b74f72f4..d0653f87a53a 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11071,237 +11071,6 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
spin_unlock_irqrestore(&dev->event_lock, flags);
}
-static int intel_gen2_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- struct intel_engine_cs *engine = req->engine;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- u32 flip_mask;
- int ret;
-
- ret = intel_ring_begin(req, 6);
- if (ret)
- return ret;
-
- /* Can't queue multiple flips, so wait for the previous
- * one to finish before executing the next.
- */
- if (intel_crtc->plane)
- flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
- else
- flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
- intel_ring_emit(engine, MI_WAIT_FOR_EVENT | flip_mask);
- intel_ring_emit(engine, MI_NOOP);
- intel_ring_emit(engine, MI_DISPLAY_FLIP |
- MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
- intel_ring_emit(engine, fb->pitches[0]);
- intel_ring_emit(engine, gtt_offset);
- intel_ring_emit(engine, 0); /* aux display base address, unused */
-
- return 0;
-}
-
-static int intel_gen3_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- struct intel_engine_cs *engine = req->engine;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- u32 flip_mask;
- int ret;
-
- ret = intel_ring_begin(req, 6);
- if (ret)
- return ret;
-
- if (intel_crtc->plane)
- flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
- else
- flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
- intel_ring_emit(engine, MI_WAIT_FOR_EVENT | flip_mask);
- intel_ring_emit(engine, MI_NOOP);
- intel_ring_emit(engine, MI_DISPLAY_FLIP_I915 |
- MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
- intel_ring_emit(engine, fb->pitches[0]);
- intel_ring_emit(engine, gtt_offset);
- intel_ring_emit(engine, MI_NOOP);
-
- return 0;
-}
-
-static int intel_gen4_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- struct intel_engine_cs *engine = req->engine;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- uint32_t pf, pipesrc;
- int ret;
-
- ret = intel_ring_begin(req, 4);
- if (ret)
- return ret;
-
- /* i965+ uses the linear or tiled offsets from the
- * Display Registers (which do not change across a page-flip)
- * so we need only reprogram the base address.
- */
- intel_ring_emit(engine, MI_DISPLAY_FLIP |
- MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
- intel_ring_emit(engine, fb->pitches[0]);
- intel_ring_emit(engine, gtt_offset | obj->tiling_mode);
-
- /* XXX Enabling the panel-fitter across page-flip is so far
- * untested on non-native modes, so ignore it for now.
- * pf = I915_READ(pipe == 0 ? PFA_CTL_1 : PFB_CTL_1) & PF_ENABLE;
- */
- pf = 0;
- pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
- intel_ring_emit(engine, pf | pipesrc);
-
- return 0;
-}
-
-static int intel_gen6_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- struct intel_engine_cs *engine = req->engine;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- uint32_t pf, pipesrc;
- int ret;
-
- ret = intel_ring_begin(req, 4);
- if (ret)
- return ret;
-
- intel_ring_emit(engine, MI_DISPLAY_FLIP |
- MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
- intel_ring_emit(engine, fb->pitches[0] | obj->tiling_mode);
- intel_ring_emit(engine, gtt_offset);
-
- /* Contrary to the suggestions in the documentation,
- * "Enable Panel Fitter" does not seem to be required when page
- * flipping with a non-native mode, and worse causes a normal
- * modeset to fail.
- * pf = I915_READ(PF_CTL(intel_crtc->pipe)) & PF_ENABLE;
- */
- pf = 0;
- pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
- intel_ring_emit(engine, pf | pipesrc);
-
- return 0;
-}
-
-static int intel_gen7_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- struct intel_engine_cs *engine = req->engine;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- uint32_t plane_bit = 0;
- int len, ret;
-
- switch (intel_crtc->plane) {
- case PLANE_A:
- plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_A;
- break;
- case PLANE_B:
- plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_B;
- break;
- case PLANE_C:
- plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_C;
- break;
- default:
- WARN_ONCE(1, "unknown plane in flip command\n");
- return -ENODEV;
- }
-
- len = 4;
- if (engine->id == RCS) {
- len += 6;
- /*
- * On Gen 8, SRM is now taking an extra dword to accommodate
- * 48bits addresses, and we need a NOOP for the batch size to
- * stay even.
- */
- if (IS_GEN8(dev))
- len += 2;
- }
-
- /*
- * BSpec MI_DISPLAY_FLIP for IVB:
- * "The full packet must be contained within the same cache line."
- *
- * Currently the LRI+SRM+MI_DISPLAY_FLIP all fit within the same
- * cacheline, if we ever start emitting more commands before
- * the MI_DISPLAY_FLIP we may need to first emit everything else,
- * then do the cacheline alignment, and finally emit the
- * MI_DISPLAY_FLIP.
- */
- ret = intel_ring_cacheline_align(req);
- if (ret)
- return ret;
-
- ret = intel_ring_begin(req, len);
- if (ret)
- return ret;
-
- /* Unmask the flip-done completion message. Note that the bspec says that
- * we should do this for both the BCS and RCS, and that we must not unmask
- * more than one flip event at any time (or ensure that one flip message
- * can be sent by waiting for flip-done prior to queueing new flips).
- * Experimentation says that BCS works despite DERRMR masking all
- * flip-done completion events and that unmasking all planes at once
- * for the RCS also doesn't appear to drop events. Setting the DERRMR
- * to zero does lead to lockups within MI_DISPLAY_FLIP.
- */
- if (engine->id == RCS) {
- intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(engine, DERRMR);
- intel_ring_emit(engine, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
- DERRMR_PIPEB_PRI_FLIP_DONE |
- DERRMR_PIPEC_PRI_FLIP_DONE));
- if (IS_GEN8(dev))
- intel_ring_emit(engine, MI_STORE_REGISTER_MEM_GEN8 |
- MI_SRM_LRM_GLOBAL_GTT);
- else
- intel_ring_emit(engine, MI_STORE_REGISTER_MEM |
- MI_SRM_LRM_GLOBAL_GTT);
- intel_ring_emit_reg(engine, DERRMR);
- intel_ring_emit(engine, engine->scratch.gtt_offset + 256);
- if (IS_GEN8(dev)) {
- intel_ring_emit(engine, 0);
- intel_ring_emit(engine, MI_NOOP);
- }
- }
-
- intel_ring_emit(engine, MI_DISPLAY_FLIP_I915 | plane_bit);
- intel_ring_emit(engine, (fb->pitches[0] | obj->tiling_mode));
- intel_ring_emit(engine, gtt_offset);
- intel_ring_emit(engine, (MI_NOOP));
-
- return 0;
-}
-
static void intel_mmio_flip_work_func(struct work_struct *w)
{
struct intel_flip_work *work =
@@ -14895,34 +14664,6 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv)
dev_priv->display.modeset_calc_cdclk =
skl_modeset_calc_cdclk;
}
-
- switch (INTEL_INFO(dev_priv)->gen) {
- case 2:
- dev_priv->display.queue_flip = intel_gen2_queue_flip;
- break;
-
- case 3:
- dev_priv->display.queue_flip = intel_gen3_queue_flip;
- break;
-
- case 4:
- case 5:
- dev_priv->display.queue_flip = intel_gen4_queue_flip;
- break;
-
- case 6:
- dev_priv->display.queue_flip = intel_gen6_queue_flip;
- break;
- case 7:
- case 8: /* FIXME(BDW): Check that the gen8 RCS flip works. */
- dev_priv->display.queue_flip = intel_gen7_queue_flip;
- break;
- case 9:
- /* Drop through - unsupported since execlist only. */
- default:
- /* Default just returns -ENODEV to indicate unsupported */
- break;
- }
}
/*
--
2.5.5
* [PATCH 8/9] drm/i915: Remove reset_counter from intel_crtc.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (6 preceding siblings ...)
2016-05-26 10:38 ` [PATCH 7/9] drm/i915: Remove queue_flip pointer Maarten Lankhorst
@ 2016-05-26 10:38 ` Maarten Lankhorst
2016-05-26 10:38 ` [PATCH 9/9] drm/i915: Pass atomic states to fbc update functions Maarten Lankhorst
` (2 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:38 UTC (permalink / raw)
To: intel-gfx
With the removal of cs-based flips, all mmio waits will
finish without requiring the reset counter, because the
waits complete during gpu reset.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
---
drivers/gpu/drm/i915/intel_display.c | 9 ---------
drivers/gpu/drm/i915/intel_drv.h | 3 ---
2 files changed, 12 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index d0653f87a53a..e6d3721eeda3 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -3180,14 +3180,6 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
{
- struct drm_device *dev = crtc->dev;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- unsigned reset_counter;
-
- reset_counter = i915_reset_counter(&to_i915(dev)->gpu_error);
- if (intel_crtc->reset_counter != reset_counter)
- return false;
-
return !list_empty_careful(&to_intel_crtc(crtc)->flip_work);
}
@@ -11288,7 +11280,6 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
intel_fbc_pre_update(intel_crtc);
- intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
schedule_work(&work->mmio_work);
mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index e7e262ac1f99..40f7925623fd 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -662,9 +662,6 @@ struct intel_crtc {
struct intel_crtc_state *config;
- /* reset counter value when the last flip was submitted */
- unsigned int reset_counter;
-
/* Access to these should be protected by dev_priv->irq_lock. */
bool cpu_fifo_underrun_disabled;
bool pch_fifo_underrun_disabled;
--
2.5.5
* [PATCH 9/9] drm/i915: Pass atomic states to fbc update functions.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (7 preceding siblings ...)
2016-05-26 10:38 ` [PATCH 8/9] drm/i915: Remove reset_counter from intel_crtc Maarten Lankhorst
@ 2016-05-26 10:38 ` Maarten Lankhorst
2016-05-26 11:02 ` ✗ Ro.CI.BAT: failure for drm/i915: Reapply page flip atomic preparation patches Patchwork
2016-05-26 11:35 ` [PATCH 0/9] " Ville Syrjälä
10 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 10:38 UTC (permalink / raw)
To: intel-gfx
This is required to let fbc updates run async. The fbc code has a
lot of checks for whether certain locks are held, which can be
removed once the relevant states are passed in as pointers.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
---
drivers/gpu/drm/i915/intel_display.c | 8 +++++---
drivers/gpu/drm/i915/intel_drv.h | 8 ++++++--
drivers/gpu/drm/i915/intel_fbc.c | 39 +++++++++++++++++-------------------
3 files changed, 29 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index e6d3721eeda3..f7f2ca24d062 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -4590,7 +4590,7 @@ static void intel_pre_plane_update(struct intel_crtc_state *old_crtc_state)
struct intel_plane_state *old_primary_state =
to_intel_plane_state(old_pri_state);
- intel_fbc_pre_update(crtc);
+ intel_fbc_pre_update(crtc, pipe_config, primary_state);
if (old_primary_state->visible &&
(modeset || !primary_state->visible))
@@ -11278,7 +11278,9 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
if (obj->base.dma_buf)
work->old_plane_state[0]->base.fence = intel_get_excl_fence(obj);
- intel_fbc_pre_update(intel_crtc);
+ intel_fbc_pre_update(intel_crtc,
+ to_intel_crtc_state(new_crtc_state),
+ to_intel_plane_state(new_state));
schedule_work(&work->mmio_work);
@@ -13247,7 +13249,7 @@ static int intel_atomic_commit(struct drm_device *dev,
if (crtc->state->active &&
drm_atomic_get_existing_plane_state(state, crtc->primary))
- intel_fbc_enable(intel_crtc);
+ intel_fbc_enable(intel_crtc, pipe_config, to_intel_plane_state(crtc->primary->state));
if (crtc->state->active &&
(crtc->state->planes_changed || update_pipe))
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 40f7925623fd..070b602ac594 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1424,11 +1424,15 @@ static inline void intel_fbdev_restore_mode(struct drm_device *dev)
void intel_fbc_choose_crtc(struct drm_i915_private *dev_priv,
struct drm_atomic_state *state);
bool intel_fbc_is_active(struct drm_i915_private *dev_priv);
-void intel_fbc_pre_update(struct intel_crtc *crtc);
+void intel_fbc_pre_update(struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
+ struct intel_plane_state *plane_state);
void intel_fbc_post_update(struct intel_crtc *crtc);
void intel_fbc_init(struct drm_i915_private *dev_priv);
void intel_fbc_init_pipe_state(struct drm_i915_private *dev_priv);
-void intel_fbc_enable(struct intel_crtc *crtc);
+void intel_fbc_enable(struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
+ struct intel_plane_state *plane_state);
void intel_fbc_disable(struct intel_crtc *crtc);
void intel_fbc_global_disable(struct drm_i915_private *dev_priv);
void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
index 0dea5fbcd8aa..d2b0269b2fe4 100644
--- a/drivers/gpu/drm/i915/intel_fbc.c
+++ b/drivers/gpu/drm/i915/intel_fbc.c
@@ -480,10 +480,10 @@ static void intel_fbc_deactivate(struct drm_i915_private *dev_priv)
intel_fbc_hw_deactivate(dev_priv);
}
-static bool multiple_pipes_ok(struct intel_crtc *crtc)
+static bool multiple_pipes_ok(struct intel_crtc *crtc,
+ struct intel_plane_state *plane_state)
{
- struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
- struct drm_plane *primary = crtc->base.primary;
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_fbc *fbc = &dev_priv->fbc;
enum pipe pipe = crtc->pipe;
@@ -491,9 +491,7 @@ static bool multiple_pipes_ok(struct intel_crtc *crtc)
if (!no_fbc_on_multiple_pipes(dev_priv))
return true;
- WARN_ON(!drm_modeset_is_locked(&primary->mutex));
-
- if (to_intel_plane_state(primary->state)->visible)
+ if (plane_state->visible)
fbc->visible_pipes_mask |= (1 << pipe);
else
fbc->visible_pipes_mask &= ~(1 << pipe);
@@ -708,21 +706,16 @@ static bool intel_fbc_hw_tracking_covers_screen(struct intel_crtc *crtc)
return effective_w <= max_w && effective_h <= max_h;
}
-static void intel_fbc_update_state_cache(struct intel_crtc *crtc)
+static void intel_fbc_update_state_cache(struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
+ struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
struct intel_fbc *fbc = &dev_priv->fbc;
struct intel_fbc_state_cache *cache = &fbc->state_cache;
- struct intel_crtc_state *crtc_state =
- to_intel_crtc_state(crtc->base.state);
- struct intel_plane_state *plane_state =
- to_intel_plane_state(crtc->base.primary->state);
struct drm_framebuffer *fb = plane_state->base.fb;
struct drm_i915_gem_object *obj;
- WARN_ON(!drm_modeset_is_locked(&crtc->base.mutex));
- WARN_ON(!drm_modeset_is_locked(&crtc->base.primary->mutex));
-
cache->crtc.mode_flags = crtc_state->base.adjusted_mode.flags;
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
cache->crtc.hsw_bdw_pixel_rate =
@@ -887,7 +880,9 @@ static bool intel_fbc_reg_params_equal(struct intel_fbc_reg_params *params1,
return memcmp(params1, params2, sizeof(*params1)) == 0;
}
-void intel_fbc_pre_update(struct intel_crtc *crtc)
+void intel_fbc_pre_update(struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
+ struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
struct intel_fbc *fbc = &dev_priv->fbc;
@@ -897,7 +892,7 @@ void intel_fbc_pre_update(struct intel_crtc *crtc)
mutex_lock(&fbc->lock);
- if (!multiple_pipes_ok(crtc)) {
+ if (!multiple_pipes_ok(crtc, plane_state)) {
fbc->no_fbc_reason = "more than one pipe active";
goto deactivate;
}
@@ -905,7 +900,7 @@ void intel_fbc_pre_update(struct intel_crtc *crtc)
if (!fbc->enabled || fbc->crtc != crtc)
goto unlock;
- intel_fbc_update_state_cache(crtc);
+ intel_fbc_update_state_cache(crtc, crtc_state, plane_state);
deactivate:
intel_fbc_deactivate(dev_priv);
@@ -1089,7 +1084,9 @@ out:
* intel_fbc_enable multiple times for the same pipe without an
* intel_fbc_disable in the middle, as long as it is deactivated.
*/
-void intel_fbc_enable(struct intel_crtc *crtc)
+void intel_fbc_enable(struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
+ struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
struct intel_fbc *fbc = &dev_priv->fbc;
@@ -1102,19 +1099,19 @@ void intel_fbc_enable(struct intel_crtc *crtc)
if (fbc->enabled) {
WARN_ON(fbc->crtc == NULL);
if (fbc->crtc == crtc) {
- WARN_ON(!crtc->config->enable_fbc);
+ WARN_ON(!crtc_state->enable_fbc);
WARN_ON(fbc->active);
}
goto out;
}
- if (!crtc->config->enable_fbc)
+ if (!crtc_state->enable_fbc)
goto out;
WARN_ON(fbc->active);
WARN_ON(fbc->crtc != NULL);
- intel_fbc_update_state_cache(crtc);
+ intel_fbc_update_state_cache(crtc, crtc_state, plane_state);
if (intel_fbc_alloc_cfb(crtc)) {
fbc->no_fbc_reason = "not enough stolen memory";
goto out;
--
2.5.5
* ✗ Ro.CI.BAT: failure for drm/i915: Reapply page flip atomic preparation patches.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (8 preceding siblings ...)
2016-05-26 10:38 ` [PATCH 9/9] drm/i915: Pass atomic states to fbc update functions Maarten Lankhorst
@ 2016-05-26 11:02 ` Patchwork
2016-05-26 11:23 ` Maarten Lankhorst
2016-05-26 11:35 ` [PATCH 0/9] " Ville Syrjälä
10 siblings, 1 reply; 18+ messages in thread
From: Patchwork @ 2016-05-26 11:02 UTC (permalink / raw)
To: Maarten Lankhorst; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Reapply page flip atomic preparation patches.
URL : https://patchwork.freedesktop.org/series/7801/
State : failure
== Summary ==
Series 7801v1 drm/i915: Reapply page flip atomic preparation patches.
http://patchwork.freedesktop.org/api/1.0/series/7801/revisions/1/mbox
Test gem_busy:
Subgroup basic-blt:
dmesg-warn -> PASS (ro-skl-i7-6700hq)
Test gem_exec_flush:
Subgroup basic-batch-kernel-default-cmd:
pass -> FAIL (ro-byt-n2820)
Test gem_flink_basic:
Subgroup bad-open:
pass -> DMESG-WARN (ro-skl-i7-6700hq)
Test kms_flip:
Subgroup basic-flip-vs-wf_vblank:
fail -> PASS (ro-bdw-i7-5600u)
skip -> PASS (fi-skl-i5-6260u)
Test kms_frontbuffer_tracking:
Subgroup basic:
pass -> DMESG-WARN (ro-skl-i7-6700hq)
Test kms_psr_sink_crc:
Subgroup psr_basic:
pass -> DMESG-WARN (ro-skl-i7-6700hq)
fi-bdw-i7-5557u total:209 pass:197 dwarn:0 dfail:0 fail:0 skip:12
fi-byt-n2820 total:209 pass:168 dwarn:0 dfail:0 fail:3 skip:38
fi-hsw-i7-4770k total:209 pass:190 dwarn:0 dfail:0 fail:0 skip:19
fi-hsw-i7-4770r total:209 pass:186 dwarn:0 dfail:0 fail:0 skip:23
fi-skl-i5-6260u total:209 pass:198 dwarn:0 dfail:0 fail:0 skip:11
fi-skl-i7-6700k total:209 pass:184 dwarn:0 dfail:0 fail:0 skip:25
fi-snb-i7-2600 total:209 pass:170 dwarn:0 dfail:0 fail:0 skip:39
ro-bdw-i5-5250u total:209 pass:172 dwarn:0 dfail:0 fail:0 skip:37
ro-bdw-i7-5557U total:209 pass:197 dwarn:0 dfail:0 fail:0 skip:12
ro-bdw-i7-5600u total:209 pass:181 dwarn:0 dfail:0 fail:0 skip:28
ro-bsw-n3050 total:209 pass:168 dwarn:0 dfail:0 fail:2 skip:39
ro-byt-n2820 total:209 pass:169 dwarn:0 dfail:0 fail:3 skip:37
ro-hsw-i3-4010u total:209 pass:186 dwarn:0 dfail:0 fail:0 skip:23
ro-hsw-i7-4770r total:209 pass:186 dwarn:0 dfail:0 fail:0 skip:23
ro-ilk-i7-620lm total:209 pass:146 dwarn:0 dfail:0 fail:1 skip:62
ro-ilk1-i5-650 total:204 pass:146 dwarn:0 dfail:0 fail:1 skip:57
ro-ivb-i7-3770 total:209 pass:177 dwarn:0 dfail:0 fail:0 skip:32
ro-ivb2-i7-3770 total:209 pass:181 dwarn:0 dfail:0 fail:0 skip:28
ro-skl-i7-6700hq total:204 pass:178 dwarn:5 dfail:0 fail:0 skip:21
ro-snb-i7-2620M total:209 pass:170 dwarn:0 dfail:0 fail:1 skip:38
fi-bsw-n3050 failed to connect after reboot
Results at /archive/results/CI_IGT_test/RO_Patchwork_1020/
fc9d741 drm-intel-nightly: 2016y-05m-25d-07h-45m-48s UTC integration manifest
6a3f8a8 drm/i915: Pass atomic states to fbc update functions.
4d540eb drm/i915: Remove reset_counter from intel_crtc.
b6103e9 drm/i915: Remove queue_flip pointer.
95a5895 drm/i915: Remove use_mmio_flip kernel parameter.
526331d drm/i915: Remove cs based page flip support, v2.
d159e01 drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4.
e8f5004 drm/i915: Add the exclusive fence to plane_state.
616a4b0 drm/i915: Convert flip_work to a list, v2.
0f8403b drm/i915: Allow mmio updates on all platforms, v3.
* Re: ✗ Ro.CI.BAT: failure for drm/i915: Reapply page flip atomic preparation patches.
2016-05-26 11:02 ` ✗ Ro.CI.BAT: failure for drm/i915: Reapply page flip atomic preparation patches Patchwork
@ 2016-05-26 11:23 ` Maarten Lankhorst
0 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 11:23 UTC (permalink / raw)
To: intel-gfx
Op 26-05-16 om 13:02 schreef Patchwork:
> == Series Details ==
>
> Series: drm/i915: Reapply page flip atomic preparation patches.
> URL : https://patchwork.freedesktop.org/series/7801/
> State : failure
>
> == Summary ==
>
> Series 7801v1 drm/i915: Reapply page flip atomic preparation patches.
> http://patchwork.freedesktop.org/api/1.0/series/7801/revisions/1/mbox
>
> Test gem_busy:
> Subgroup basic-blt:
> dmesg-warn -> PASS (ro-skl-i7-6700hq)
> Test gem_exec_flush:
> Subgroup basic-batch-kernel-default-cmd:
> pass -> FAIL (ro-byt-n2820)
Seems to fail pretty randomly on ro-byt-n2820.
https://bugs.freedesktop.org/show_bug.cgi?id=95372
> Test gem_flink_basic:
> Subgroup bad-open:
> pass -> DMESG-WARN (ro-skl-i7-6700hq)
> Test kms_flip:
> Subgroup basic-flip-vs-wf_vblank:
> fail -> PASS (ro-bdw-i7-5600u)
> skip -> PASS (fi-skl-i5-6260u)
> Test kms_frontbuffer_tracking:
> Subgroup basic:
> pass -> DMESG-WARN (ro-skl-i7-6700hq)
> Test kms_psr_sink_crc:
> Subgroup psr_basic:
> pass -> DMESG-WARN (ro-skl-i7-6700hq)
-skl failures are all "[drm:intel_pipe_update_start [i915]] *ERROR* Potential atomic update failure on pipe A", which seem to happen randomly.
https://bugs.freedesktop.org/show_bug.cgi?id=95632
* Re: [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches.
2016-05-26 10:37 [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches Maarten Lankhorst
` (9 preceding siblings ...)
2016-05-26 11:02 ` ✗ Ro.CI.BAT: failure for drm/i915: Reapply page flip atomic preparation patches Patchwork
@ 2016-05-26 11:35 ` Ville Syrjälä
2016-05-26 11:38 ` Maarten Lankhorst
10 siblings, 1 reply; 18+ messages in thread
From: Ville Syrjälä @ 2016-05-26 11:35 UTC (permalink / raw)
To: Maarten Lankhorst; +Cc: intel-gfx
On Thu, May 26, 2016 at 12:37:56PM +0200, Maarten Lankhorst wrote:
> Add some minor changes to prevent bisect breaking.
>
> Main change is making sure crtc_state is not freed while the mmio update still runs.
I didn't see fixes for the other obvious issues.
>
> Maarten Lankhorst (9):
> drm/i915: Allow mmio updates on all platforms, v3.
> drm/i915: Convert flip_work to a list, v2.
> drm/i915: Add the exclusive fence to plane_state.
> drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4.
> drm/i915: Remove cs based page flip support, v2.
> drm/i915: Remove use_mmio_flip kernel parameter.
> drm/i915: Remove queue_flip pointer.
> drm/i915: Remove reset_counter from intel_crtc.
> drm/i915: Pass atomic states to fbc update functions.
>
> drivers/gpu/drm/i915/i915_debugfs.c | 89 ++-
> drivers/gpu/drm/i915/i915_drv.h | 5 -
> drivers/gpu/drm/i915/i915_irq.c | 120 +---
> drivers/gpu/drm/i915/i915_params.c | 5 -
> drivers/gpu/drm/i915/i915_params.h | 1 -
> drivers/gpu/drm/i915/intel_atomic_plane.c | 1 +
> drivers/gpu/drm/i915/intel_display.c | 1118 ++++++++---------------------
> drivers/gpu/drm/i915/intel_drv.h | 37 +-
> drivers/gpu/drm/i915/intel_fbc.c | 39 +-
> drivers/gpu/drm/i915/intel_lrc.c | 4 +-
> 10 files changed, 417 insertions(+), 1002 deletions(-)
>
> --
> 2.5.5
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Ville Syrjälä
Intel OTC
* Re: [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches.
2016-05-26 11:35 ` [PATCH 0/9] " Ville Syrjälä
@ 2016-05-26 11:38 ` Maarten Lankhorst
2016-05-26 11:46 ` Ville Syrjälä
0 siblings, 1 reply; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 11:38 UTC (permalink / raw)
To: Ville Syrjälä; +Cc: intel-gfx
Op 26-05-16 om 13:35 schreef Ville Syrjälä:
> On Thu, May 26, 2016 at 12:37:56PM +0200, Maarten Lankhorst wrote:
>> Add some minor changes to prevent bisect breaking.
>>
>> Main change is making sure crtc_state is not freed while the mmio update still runs.
> I didn't see fixes for the other obvious issues.
This doesn't reapply the nonblocking unpin/pageflip, which is what caused all the problems. So which issues do you mean?
~Maarten
* Re: [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches.
2016-05-26 11:38 ` Maarten Lankhorst
@ 2016-05-26 11:46 ` Ville Syrjälä
2016-05-26 11:54 ` Maarten Lankhorst
0 siblings, 1 reply; 18+ messages in thread
From: Ville Syrjälä @ 2016-05-26 11:46 UTC (permalink / raw)
To: Maarten Lankhorst; +Cc: intel-gfx
On Thu, May 26, 2016 at 01:38:02PM +0200, Maarten Lankhorst wrote:
> Op 26-05-16 om 13:35 schreef Ville Syrjälä:
> > On Thu, May 26, 2016 at 12:37:56PM +0200, Maarten Lankhorst wrote:
> >> Add some minor changes to prevent bisect breaking.
> >>
> >> Main change is making sure crtc_state is not freed while the mmio update still runs.
> > I didn't see fixes for the other obvious issues.
> This doesn't reapply nonblocking unpin/pageflip, which caused all problems. So what issues do you mean?
The two I now remember off the top of my head were the killing of the
flip tracepoints and the annoying dmesg spamming.
--
Ville Syrjälä
Intel OTC
* Re: [PATCH 0/9] drm/i915: Reapply page flip atomic preparation patches.
2016-05-26 11:46 ` Ville Syrjälä
@ 2016-05-26 11:54 ` Maarten Lankhorst
0 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-26 11:54 UTC (permalink / raw)
To: Ville Syrjälä; +Cc: intel-gfx
Op 26-05-16 om 13:46 schreef Ville Syrjälä:
> On Thu, May 26, 2016 at 01:38:02PM +0200, Maarten Lankhorst wrote:
>> Op 26-05-16 om 13:35 schreef Ville Syrjälä:
>>> On Thu, May 26, 2016 at 12:37:56PM +0200, Maarten Lankhorst wrote:
>>>> Add some minor changes to prevent bisect breaking.
>>>>
>>>> Main change is making sure crtc_state is not freed while the mmio update still runs.
>>> I didn't see fixes for the other obvious issues.
>> This doesn't reapply nonblocking unpin/pageflip, which caused all problems. So what issues do you mean?
> The two I now remember off the top of my head were the killing of the
> flip tracepoints and the annoying dmesg spamming.
>
The flip tracepoint is still there, but I can remove the 'Finished page flip' spam from patch 5.
Patch 4 seems to accidentally call trace_i915_flip_request twice, which is fixed in patch 5.
I'll send new versions for those two patches.
~Maarten
* [PATCH v2 4/9] drm/i915: Rework intel_crtc_page_flip to be almost atomic, v5.
2016-05-26 10:38 ` [PATCH 4/9] drm/i915: Rework intel_crtc_page_flip to be almost atomic, v4 Maarten Lankhorst
@ 2016-05-30 7:54 ` Maarten Lankhorst
0 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-30 7:54 UTC (permalink / raw)
To: Intel Graphics Development
Create a work structure that will be used for all changes. This will
be used later on in the atomic commit function.
Changes since v1:
- Free old_crtc_state from unpin_work_fn properly.
Changes since v2:
- Add hunk for calling hw state verifier.
- Add missing support for color spaces.
Changes since v3:
- Update for legacy cursor work.
- null pointer to request_unreference is no longer allowed.
Changes since v4:
- Only call trace_i915_flip_request once. (Ville)
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/gpu/drm/i915/i915_debugfs.c | 36 +-
drivers/gpu/drm/i915/intel_display.c | 674 +++++++++++++++++++++--------------
drivers/gpu/drm/i915/intel_drv.h | 17 +-
3 files changed, 442 insertions(+), 285 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index cced527af109..b52c1a5f3451 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -627,29 +627,43 @@ static void i915_dump_pageflip(struct seq_file *m,
struct intel_flip_work *work)
{
const char pipe = pipe_name(crtc->pipe);
- const char plane = plane_name(crtc->plane);
u32 pending;
u32 addr;
+ int i;
pending = atomic_read(&work->pending);
if (pending) {
seq_printf(m, "Flip ioctl preparing on pipe %c (plane %c)\n",
- pipe, plane);
+ pipe, plane_name(crtc->plane));
} else {
seq_printf(m, "Flip pending (waiting for vsync) on pipe %c (plane %c)\n",
- pipe, plane);
+ pipe, plane_name(crtc->plane));
}
- if (work->flip_queued_req) {
- struct intel_engine_cs *engine = i915_gem_request_get_engine(work->flip_queued_req);
- seq_printf(m, "Flip queued on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
+
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *old_plane_state = work->old_plane_state[i];
+ struct drm_plane *plane = old_plane_state->base.plane;
+ struct drm_i915_gem_request *req = old_plane_state->wait_req;
+ struct intel_engine_cs *engine;
+
+ seq_printf(m, "[PLANE:%i] part of flip.\n", plane->base.id);
+
+ if (!req) {
+ seq_printf(m, "Plane not associated with any engine\n");
+ continue;
+ }
+
+ engine = i915_gem_request_get_engine(req);
+
+ seq_printf(m, "Plane blocked on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
engine->name,
- i915_gem_request_get_seqno(work->flip_queued_req),
+ i915_gem_request_get_seqno(req),
dev_priv->next_seqno,
engine->get_seqno(engine),
- i915_gem_request_completed(work->flip_queued_req, true));
- } else
- seq_printf(m, "Flip not associated with any ring\n");
+ i915_gem_request_completed(req, true));
+ }
+
seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
work->flip_queued_vblank,
work->flip_ready_vblank,
@@ -662,7 +676,7 @@ static void i915_dump_pageflip(struct seq_file *m,
addr = I915_READ(DSPADDR(crtc->plane));
seq_printf(m, "Current scanout address 0x%08x\n", addr);
- if (work->pending_flip_obj) {
+ if (work->flip_queued_req) {
seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0de232401f1d..e2d6fd0cd42c 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -50,7 +50,7 @@
static bool is_mmio_work(struct intel_flip_work *work)
{
- return work->mmio_work.func;
+ return !work->flip_queued_req;
}
/* Primary plane formats for gen <= 3 */
@@ -124,6 +124,9 @@ static void intel_modeset_setup_hw_state(struct drm_device *dev);
static void intel_pre_disable_primary_noatomic(struct drm_crtc *crtc);
static int ilk_max_pixel_rate(struct drm_atomic_state *state);
static int broxton_calc_cdclk(int max_pixclk);
+static void intel_modeset_verify_crtc(struct drm_crtc *crtc,
+ struct drm_crtc_state *old_state,
+ struct drm_crtc_state *new_state);
struct intel_limit {
struct {
@@ -2528,20 +2531,6 @@ out_unref_obj:
return false;
}
-/* Update plane->state->fb to match plane->fb after driver-internal updates */
-static void
-update_state_fb(struct drm_plane *plane)
-{
- if (plane->fb == plane->state->fb)
- return;
-
- if (plane->state->fb)
- drm_framebuffer_unreference(plane->state->fb);
- plane->state->fb = plane->fb;
- if (plane->state->fb)
- drm_framebuffer_reference(plane->state->fb);
-}
-
static void
intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
struct intel_initial_plane_config *plane_config)
@@ -3807,19 +3796,27 @@ bool intel_has_pending_fb_unpin(struct drm_device *dev)
static void page_flip_completed(struct intel_crtc *intel_crtc, struct intel_flip_work *work)
{
struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev);
-
- list_del_init(&work->head);
+ struct drm_plane_state *new_plane_state;
+ struct drm_plane *primary = intel_crtc->base.primary;
if (work->event)
drm_crtc_send_vblank_event(&intel_crtc->base, work->event);
drm_crtc_vblank_put(&intel_crtc->base);
- wake_up_all(&dev_priv->pending_flip_queue);
- queue_work(dev_priv->wq, &work->unpin_work);
+ new_plane_state = &work->old_plane_state[0]->base;
+ if (work->num_planes >= 1 &&
+ new_plane_state->plane == primary &&
+ new_plane_state->fb)
+ trace_i915_flip_complete(intel_crtc->plane,
+ intel_fb_obj(new_plane_state->fb));
- trace_i915_flip_complete(intel_crtc->plane,
- work->pending_flip_obj);
+ if (work->can_async_unpin) {
+ list_del_init(&work->head);
+ wake_up_all(&dev_priv->pending_flip_queue);
+ }
+
+ queue_work(dev_priv->wq, &work->unpin_work);
}
static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
@@ -3850,7 +3847,9 @@ static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
*/
work = list_first_entry_or_null(&intel_crtc->flip_work,
struct intel_flip_work, head);
- if (work && !is_mmio_work(work)) {
+
+ if (work && !is_mmio_work(work) &&
+ !work_busy(&work->unpin_work)) {
WARN_ONCE(1, "Removing stuck page flip\n");
page_flip_completed(intel_crtc, work);
}
@@ -10954,34 +10953,115 @@ static void intel_crtc_destroy(struct drm_crtc *crtc)
kfree(intel_crtc);
}
+static void intel_crtc_post_flip_update(struct intel_flip_work *work,
+ struct drm_crtc *crtc)
+{
+ struct intel_crtc_state *crtc_state = work->new_crtc_state;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ if (crtc_state->disable_cxsr)
+ intel_crtc->wm.cxsr_allowed = true;
+
+ if (crtc_state->update_wm_post && crtc_state->base.active)
+ intel_update_watermarks(crtc);
+
+ if (work->num_planes > 0 &&
+ work->old_plane_state[0]->base.plane == crtc->primary) {
+ struct intel_plane_state *plane_state =
+ work->new_plane_state[0];
+
+ if (plane_state->visible &&
+ (needs_modeset(&crtc_state->base) ||
+ !work->old_plane_state[0]->visible))
+ intel_post_enable_primary(crtc);
+ }
+}
+
static void intel_unpin_work_fn(struct work_struct *__work)
{
struct intel_flip_work *work =
container_of(__work, struct intel_flip_work, unpin_work);
- struct intel_crtc *crtc = to_intel_crtc(work->crtc);
- struct drm_device *dev = crtc->base.dev;
- struct drm_plane *primary = crtc->base.primary;
+ struct drm_crtc *crtc = work->old_crtc_state->base.crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int i;
- if (is_mmio_work(work))
- flush_work(&work->mmio_work);
+ if (work->fb_bits)
+ intel_frontbuffer_flip_complete(dev, work->fb_bits);
- mutex_lock(&dev->struct_mutex);
- intel_unpin_fb_obj(work->old_fb, primary->state->rotation);
- drm_gem_object_unreference(&work->pending_flip_obj->base);
+ /*
+ * Unless work->can_async_unpin is false, there's no way to ensure
+ * that work->new_crtc_state contains valid memory during unpin
+ * because intel_atomic_commit may free it before this runs.
+ */
+ if (!work->can_async_unpin)
+ intel_crtc_post_flip_update(work, crtc);
- if (work->flip_queued_req)
- i915_gem_request_assign(&work->flip_queued_req, NULL);
- mutex_unlock(&dev->struct_mutex);
+ if (work->fb_bits & to_intel_plane(crtc->primary)->frontbuffer_bit)
+ intel_fbc_post_update(intel_crtc);
+
+ if (work->put_power_domains)
+ modeset_put_power_domains(dev_priv, work->put_power_domains);
- intel_frontbuffer_flip_complete(dev, to_intel_plane(primary)->frontbuffer_bit);
- intel_fbc_post_update(crtc);
- drm_framebuffer_unreference(work->old_fb);
+ /* Make sure mmio work is completely finished before freeing all state here. */
+ flush_work(&work->mmio_work);
- BUG_ON(atomic_read(&crtc->unpin_work_count) == 0);
- atomic_dec(&crtc->unpin_work_count);
+ if (!work->can_async_unpin)
+ /* This must be called before work is unpinned for serialization. */
+ intel_modeset_verify_crtc(crtc, &work->old_crtc_state->base,
+ &work->new_crtc_state->base);
+
+ if (!work->can_async_unpin || !list_empty(&work->head)) {
+ spin_lock_irq(&dev->event_lock);
+ WARN(list_empty(&work->head) != work->can_async_unpin,
+ "[CRTC:%i] Pin work %p async %i with %i planes, active %i -> %i ms %i\n",
+ crtc->base.id, work, work->can_async_unpin, work->num_planes,
+ work->old_crtc_state->base.active, work->new_crtc_state->base.active,
+ needs_modeset(&work->new_crtc_state->base));
+
+ if (!list_empty(&work->head))
+ list_del(&work->head);
+
+ wake_up_all(&dev_priv->pending_flip_queue);
+ spin_unlock_irq(&dev->event_lock);
+ }
+ intel_crtc_destroy_state(crtc, &work->old_crtc_state->base);
if (work->free_new_crtc_state)
- intel_crtc_destroy_state(&crtc->base, &work->new_crtc_state->base);
+ intel_crtc_destroy_state(crtc, &work->new_crtc_state->base);
+
+ if (work->flip_queued_req)
+ i915_gem_request_unreference(work->flip_queued_req);
+
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *old_plane_state =
+ work->old_plane_state[i];
+ struct drm_framebuffer *old_fb = old_plane_state->base.fb;
+ struct drm_plane *plane = old_plane_state->base.plane;
+ struct drm_i915_gem_request *req;
+
+ req = old_plane_state->wait_req;
+ old_plane_state->wait_req = NULL;
+ if (req)
+ i915_gem_request_unreference(req);
+
+ fence_put(old_plane_state->base.fence);
+ old_plane_state->base.fence = NULL;
+
+ if (old_fb &&
+ (plane->type != DRM_PLANE_TYPE_CURSOR ||
+ !INTEL_INFO(dev_priv)->cursor_needs_physical)) {
+ mutex_lock(&dev->struct_mutex);
+ intel_unpin_fb_obj(old_fb, old_plane_state->base.rotation);
+ mutex_unlock(&dev->struct_mutex);
+ }
+
+ intel_plane_destroy_state(plane, &old_plane_state->base);
+ }
+
+ if (!WARN_ON(atomic_read(&intel_crtc->unpin_work_count) == 0))
+ atomic_dec(&intel_crtc->unpin_work_count);
kfree(work);
}
@@ -11095,7 +11175,8 @@ void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe)
if (is_mmio_work(work))
break;
- if (!pageflip_finished(intel_crtc, work))
+ if (!pageflip_finished(intel_crtc, work) ||
+ work_busy(&work->unpin_work))
break;
page_flip_completed(intel_crtc, work);
@@ -11128,7 +11209,8 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
if (!is_mmio_work(work))
break;
- if (!pageflip_finished(intel_crtc, work))
+ if (!pageflip_finished(intel_crtc, work) ||
+ work_busy(&work->unpin_work))
break;
page_flip_completed(intel_crtc, work);
@@ -11377,70 +11459,202 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
return 0;
}
-static bool use_mmio_flip(struct intel_engine_cs *engine,
- struct drm_i915_gem_object *obj)
+static struct intel_engine_cs *
+intel_get_flip_engine(struct drm_device *dev,
+ struct drm_i915_private *dev_priv,
+ struct drm_i915_gem_object *obj)
{
- /*
- * This is not being used for older platforms, because
- * non-availability of flip done interrupt forces us to use
- * CS flips. Older platforms derive flip done using some clever
- * tricks involving the flip_pending status bits and vblank irqs.
- * So using MMIO flips there would disrupt this mechanism.
- */
+ if (IS_VALLEYVIEW(dev) || IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
+ return &dev_priv->engine[BCS];
- if (engine == NULL)
- return true;
+ if (dev_priv->info.gen >= 7) {
+ struct intel_engine_cs *engine;
+
+ engine = i915_gem_request_get_engine(obj->last_write_req);
+ if (engine && engine->id == RCS)
+ return engine;
- if (i915.use_mmio_flip < 0)
+ return &dev_priv->engine[BCS];
+ } else
+ return &dev_priv->engine[RCS];
+}
+
+static bool
+flip_fb_compatible(struct drm_device *dev,
+ struct drm_framebuffer *fb,
+ struct drm_framebuffer *old_fb)
+{
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct drm_i915_gem_object *old_obj = intel_fb_obj(old_fb);
+
+ if (old_fb->pixel_format != fb->pixel_format)
return false;
- else if (i915.use_mmio_flip > 0)
- return true;
- else if (i915.enable_execlists)
- return true;
- else if (obj->base.dma_buf &&
- !reservation_object_test_signaled_rcu(obj->base.dma_buf->resv,
- false))
- return true;
- else
- return engine != i915_gem_request_get_engine(obj->last_write_req);
+
+ if (INTEL_INFO(dev)->gen > 3 &&
+ (fb->offsets[0] != old_fb->offsets[0] ||
+ fb->pitches[0] != old_fb->pitches[0]))
+ return false;
+
+ /* vlv: DISPLAY_FLIP fails to change tiling */
+ if (IS_VALLEYVIEW(dev) && obj->tiling_mode != old_obj->tiling_mode)
+ return false;
+
+ return true;
+}
+
+static void
+intel_display_flip_prepare(struct drm_device *dev, struct drm_crtc *crtc,
+ struct intel_flip_work *work)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ if (work->flip_prepared)
+ return;
+
+ work->flip_prepared = true;
+
+ if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
+ work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(intel_crtc->pipe)) + 1;
+ work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
+
+ intel_frontbuffer_flip_prepare(dev, work->new_crtc_state->fb_bits);
+}
+
+static void intel_flip_schedule_request(struct intel_flip_work *work, struct drm_crtc *crtc)
+{
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_plane_state *new_state = work->new_plane_state[0];
+ struct intel_plane_state *old_state = work->old_plane_state[0];
+ struct drm_framebuffer *fb, *old_fb;
+ struct drm_i915_gem_request *request = NULL;
+ struct intel_engine_cs *engine;
+ struct drm_i915_gem_object *obj;
+ struct fence *fence;
+ int ret;
+
+ to_intel_crtc(crtc)->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ if (__i915_reset_in_progress_or_wedged(to_intel_crtc(crtc)->reset_counter))
+ goto mmio;
+
+ if (i915_terminally_wedged(&dev_priv->gpu_error) ||
+ i915_reset_in_progress(&dev_priv->gpu_error) ||
+ i915.enable_execlists || i915.use_mmio_flip > 0 ||
+ !dev_priv->display.queue_flip)
+ goto mmio;
+
+ /* Not right after modesetting, surface parameters need to be updated */
+ if (needs_modeset(crtc->state) ||
+ to_intel_crtc_state(crtc->state)->update_pipe)
+ goto mmio;
+
+ /* Only allow a mmio flip for a primary plane without a dma-buf fence */
+ if (work->num_planes != 1 ||
+ new_state->base.plane != crtc->primary ||
+ new_state->base.fence)
+ goto mmio;
+
+ fence = work->old_plane_state[0]->base.fence;
+ if (fence && !fence_is_signaled(fence))
+ goto mmio;
+
+ old_fb = old_state->base.fb;
+ fb = new_state->base.fb;
+ obj = intel_fb_obj(fb);
+
+ /* Only when updating an already visible fb. */
+ if (!new_state->visible || !old_state->visible)
+ goto mmio;
+
+ if (!flip_fb_compatible(dev, fb, old_fb))
+ goto mmio;
+
+ engine = intel_get_flip_engine(dev, dev_priv, obj);
+ if (i915.use_mmio_flip == 0 && obj->last_write_req &&
+ i915_gem_request_get_engine(obj->last_write_req) != engine)
+ goto mmio;
+
+ work->gtt_offset = intel_plane_obj_offset(to_intel_plane(crtc->primary), obj, 0);
+ work->gtt_offset += to_intel_crtc(crtc)->dspaddr_offset;
+
+ ret = i915_gem_object_sync(obj, engine, &request);
+ if (!ret && !request) {
+ request = i915_gem_request_alloc(engine, NULL);
+ ret = PTR_ERR_OR_ZERO(request);
+
+ if (ret)
+ request = NULL;
+ }
+
+ intel_display_flip_prepare(dev, crtc, work);
+
+ if (!ret)
+ ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request, 0);
+
+ if (!ret) {
+ i915_gem_request_assign(&work->flip_queued_req, request);
+ intel_mark_page_flip_active(to_intel_crtc(crtc), work);
+ i915_add_request_no_flush(request);
+ return;
+ }
+ if (request)
+ i915_add_request_no_flush(request);
+
+mmio:
+ schedule_work(&work->mmio_work);
}
static void intel_mmio_flip_work_func(struct work_struct *w)
{
struct intel_flip_work *work =
container_of(w, struct intel_flip_work, mmio_work);
- struct intel_crtc *crtc = to_intel_crtc(work->crtc);
- struct drm_device *dev = crtc->base.dev;
+ struct drm_crtc *crtc = work->old_crtc_state->base.crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct intel_crtc_state *crtc_state = work->new_crtc_state;
+ struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *primary = to_intel_plane(crtc->base.primary);
- struct drm_i915_gem_object *obj = intel_fb_obj(primary->base.state->fb);
+ struct drm_i915_gem_request *req;
+ int i;
- if (work->flip_queued_req)
- WARN_ON(__i915_wait_request(work->flip_queued_req,
- false, NULL,
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *old_plane_state = work->old_plane_state[i];
+
+ /* For framebuffer backed by dmabuf, wait for fence */
+ if (old_plane_state->base.fence)
+ WARN_ON(fence_wait(old_plane_state->base.fence, false) < 0);
+
+ req = old_plane_state->wait_req;
+ if (!req)
+ continue;
+
+ WARN_ON(__i915_wait_request(req, false, NULL,
&dev_priv->rps.mmioflips));
+ }
- /* For framebuffer backed by dmabuf, wait for fence */
- if (obj->base.dma_buf)
- WARN_ON(reservation_object_wait_timeout_rcu(obj->base.dma_buf->resv,
- false, false,
- MAX_SCHEDULE_TIMEOUT) < 0);
+ intel_display_flip_prepare(dev, crtc, work);
- intel_pipe_update_start(crtc);
- primary->update_plane(&primary->base,
- work->new_crtc_state,
- to_intel_plane_state(primary->base.state));
- intel_pipe_update_end(crtc, work);
-}
+ intel_pipe_update_start(intel_crtc);
+ if (!needs_modeset(&crtc_state->base)) {
+ if (crtc_state->base.color_mgmt_changed || crtc_state->update_pipe) {
+ intel_color_set_csc(&crtc_state->base);
+ intel_color_load_luts(&crtc_state->base);
+ }
-static int intel_default_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req,
- uint64_t gtt_offset)
-{
- return -ENODEV;
+ if (crtc_state->update_pipe)
+ intel_update_pipe_config(intel_crtc, work->old_crtc_state);
+ else if (INTEL_INFO(dev)->gen >= 9)
+ skl_detach_scalers(intel_crtc);
+ }
+
+ for (i = 0; i < work->num_planes; i++) {
+ struct intel_plane_state *new_plane_state = work->new_plane_state[i];
+ struct intel_plane *plane = to_intel_plane(new_plane_state->base.plane);
+
+ plane->update_plane(&plane->base, crtc_state, new_plane_state);
+ }
+
+ intel_pipe_update_end(intel_crtc, work);
}
static bool __pageflip_stall_check_cs(struct drm_i915_private *dev_priv,
@@ -11449,7 +11663,8 @@ static bool __pageflip_stall_check_cs(struct drm_i915_private *dev_priv,
{
u32 addr, vblank;
- if (!atomic_read(&work->pending))
+ if (!atomic_read(&work->pending) ||
+ work_busy(&work->unpin_work))
return false;
smp_rmb();
@@ -11516,6 +11731,33 @@ void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe)
spin_unlock(&dev->event_lock);
}
+static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
+{
+ struct reservation_object *resv;
+
+
+ if (!obj->base.dma_buf)
+ return NULL;
+
+ resv = obj->base.dma_buf->resv;
+
+ /* For framebuffer backed by dmabuf, wait for fence */
+ while (1) {
+ struct fence *fence_excl, *ret = NULL;
+
+ rcu_read_lock();
+
+ fence_excl = rcu_dereference(resv->fence_excl);
+ if (fence_excl)
+ ret = fence_get_rcu(fence_excl);
+
+ rcu_read_unlock();
+
+ if (ret == fence_excl)
+ return ret;
+ }
+}
+
static int intel_crtc_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
@@ -11523,17 +11765,20 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_framebuffer *old_fb = crtc->primary->fb;
+ struct drm_plane_state *old_state, *new_state = NULL;
+ struct drm_crtc_state *new_crtc_state = NULL;
+ struct drm_framebuffer *old_fb = crtc->primary->state->fb;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_plane *primary = crtc->primary;
- enum pipe pipe = intel_crtc->pipe;
struct intel_flip_work *work;
- struct intel_engine_cs *engine;
- bool mmio_flip;
- struct drm_i915_gem_request *request = NULL;
int ret;
+ old_state = crtc->primary->state;
+
+ if (!crtc->state->active)
+ return -EINVAL;
+
/*
* drm_mode_page_flip_ioctl() should already catch this, but double
* check to be safe. In the future we may enable pageflipping from
@@ -11543,7 +11788,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
return -EBUSY;
/* Can't change pixel format via MI display flips. */
- if (fb->pixel_format != crtc->primary->fb->pixel_format)
+ if (fb->pixel_format != old_fb->pixel_format)
return -EINVAL;
/*
@@ -11551,27 +11796,46 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
* Note that pitch changes could also affect these register.
*/
if (INTEL_INFO(dev)->gen > 3 &&
- (fb->offsets[0] != crtc->primary->fb->offsets[0] ||
- fb->pitches[0] != crtc->primary->fb->pitches[0]))
+ (fb->offsets[0] != old_fb->offsets[0] ||
+ fb->pitches[0] != old_fb->pitches[0]))
return -EINVAL;
- if (i915_terminally_wedged(&dev_priv->gpu_error))
- goto out_hang;
-
work = kzalloc(sizeof(*work), GFP_KERNEL);
- if (work == NULL)
- return -ENOMEM;
+ new_crtc_state = intel_crtc_duplicate_state(crtc);
+ new_state = intel_plane_duplicate_state(primary);
+
+ if (!work || !new_crtc_state || !new_state) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ drm_framebuffer_unreference(new_state->fb);
+ drm_framebuffer_reference(fb);
+ new_state->fb = fb;
work->new_crtc_state = to_intel_crtc_state(crtc->state);
work->event = event;
- work->crtc = crtc;
- work->old_fb = old_fb;
INIT_WORK(&work->unpin_work, intel_unpin_work_fn);
+ INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
+
+ work->new_crtc_state = to_intel_crtc_state(new_crtc_state);
+ work->old_crtc_state = intel_crtc->config;
+
+ work->fb_bits = to_intel_plane(primary)->frontbuffer_bit;
+ work->new_crtc_state->fb_bits = work->fb_bits;
+ work->can_async_unpin = true;
+ work->num_planes = 1;
+ work->old_plane_state[0] = to_intel_plane_state(old_state);
+ work->new_plane_state[0] = to_intel_plane_state(new_state);
+
+ /* Step 1: vblank waiting and workqueue throttling,
+ * similar to intel_atomic_prepare_commit
+ */
ret = drm_crtc_vblank_get(crtc);
if (ret)
- goto free_work;
+ goto cleanup;
/* We borrow the event spin lock for protecting flip_work */
spin_lock_irq(&dev->event_lock);
@@ -11591,9 +11855,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
spin_unlock_irq(&dev->event_lock);
- drm_crtc_vblank_put(crtc);
- kfree(work);
- return -EBUSY;
+ ret = -EBUSY;
+ goto cleanup_vblank;
}
}
list_add_tail(&work->head, &intel_crtc->flip_work);
@@ -11602,160 +11865,62 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
flush_workqueue(dev_priv->wq);
- /* Reference the objects for the scheduled work. */
- drm_framebuffer_reference(work->old_fb);
- drm_gem_object_reference(&obj->base);
-
- crtc->primary->fb = fb;
- update_state_fb(crtc->primary);
- intel_fbc_pre_update(intel_crtc);
-
- work->pending_flip_obj = obj;
-
- ret = i915_mutex_lock_interruptible(dev);
+ /* step 2, similar to intel_prepare_plane_fb */
+ ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
- goto cleanup;
-
- intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (__i915_reset_in_progress_or_wedged(intel_crtc->reset_counter)) {
- ret = -EIO;
- goto cleanup;
- }
-
- atomic_inc(&intel_crtc->unpin_work_count);
-
- if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
- work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(pipe)) + 1;
-
- if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) {
- engine = &dev_priv->engine[BCS];
- if (obj->tiling_mode != intel_fb_obj(work->old_fb)->tiling_mode)
- /* vlv: DISPLAY_FLIP fails to change tiling */
- engine = NULL;
- } else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
- engine = &dev_priv->engine[BCS];
- } else if (INTEL_INFO(dev)->gen >= 7) {
- engine = i915_gem_request_get_engine(obj->last_write_req);
- if (engine == NULL || engine->id != RCS)
- engine = &dev_priv->engine[BCS];
- } else {
- engine = &dev_priv->engine[RCS];
- }
-
- mmio_flip = use_mmio_flip(engine, obj);
+ goto cleanup_work;
- /* When using CS flips, we want to emit semaphores between rings.
- * However, when using mmio flips we will create a task to do the
- * synchronisation, so all we want here is to pin the framebuffer
- * into the display plane and skip any waits.
- */
- if (!mmio_flip) {
- ret = i915_gem_object_sync(obj, engine, &request);
- if (!ret && !request) {
- request = i915_gem_request_alloc(engine, NULL);
- ret = PTR_ERR_OR_ZERO(request);
- }
-
- if (ret)
- goto cleanup_pending;
- }
-
- ret = intel_pin_and_fence_fb_obj(fb, primary->state->rotation);
+ ret = intel_pin_and_fence_fb_obj(fb, new_state->rotation);
if (ret)
- goto cleanup_pending;
+ goto cleanup_unlock;
- work->gtt_offset = intel_plane_obj_offset(to_intel_plane(primary),
- obj, 0);
- work->gtt_offset += intel_crtc->dspaddr_offset;
+ i915_gem_track_fb(intel_fb_obj(old_fb), obj,
+ to_intel_plane(primary)->frontbuffer_bit);
- if (mmio_flip) {
- INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
+ /* point of no return, swap state */
+ primary->state = new_state;
+ crtc->state = new_crtc_state;
+ intel_crtc->config = to_intel_crtc_state(new_crtc_state);
+ primary->fb = fb;
- i915_gem_request_assign(&work->flip_queued_req,
+ /* scheduling flip work */
+ atomic_inc(&intel_crtc->unpin_work_count);
+
+ if (obj->last_write_req &&
+ !i915_gem_request_completed(obj->last_write_req, true))
+ i915_gem_request_assign(&work->old_plane_state[0]->wait_req,
obj->last_write_req);
- schedule_work(&work->mmio_work);
- } else {
- i915_gem_request_assign(&work->flip_queued_req, request);
- ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
- work->gtt_offset);
- if (ret)
- goto cleanup_unpin;
+ if (obj->base.dma_buf)
+ work->old_plane_state[0]->base.fence = intel_get_excl_fence(obj);
- intel_mark_page_flip_active(intel_crtc, work);
+ intel_fbc_pre_update(intel_crtc);
- i915_add_request_no_flush(request);
- }
+ intel_flip_schedule_request(work, crtc);
- i915_gem_track_fb(intel_fb_obj(old_fb), obj,
- to_intel_plane(primary)->frontbuffer_bit);
mutex_unlock(&dev->struct_mutex);
- intel_frontbuffer_flip_prepare(dev,
- to_intel_plane(primary)->frontbuffer_bit);
-
trace_i915_flip_request(intel_crtc->plane, obj);
return 0;
-cleanup_unpin:
- intel_unpin_fb_obj(fb, crtc->primary->state->rotation);
-cleanup_pending:
- if (!IS_ERR_OR_NULL(request))
- i915_add_request_no_flush(request);
- atomic_dec(&intel_crtc->unpin_work_count);
+cleanup_unlock:
mutex_unlock(&dev->struct_mutex);
-cleanup:
- crtc->primary->fb = old_fb;
- update_state_fb(crtc->primary);
-
- drm_gem_object_unreference_unlocked(&obj->base);
- drm_framebuffer_unreference(work->old_fb);
-
+cleanup_work:
spin_lock_irq(&dev->event_lock);
list_del(&work->head);
spin_unlock_irq(&dev->event_lock);
+cleanup_vblank:
drm_crtc_vblank_put(crtc);
-free_work:
- kfree(work);
-
- if (ret == -EIO) {
- struct drm_atomic_state *state;
- struct drm_plane_state *plane_state;
-
-out_hang:
- state = drm_atomic_state_alloc(dev);
- if (!state)
- return -ENOMEM;
- state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc);
-
-retry:
- plane_state = drm_atomic_get_plane_state(state, primary);
- ret = PTR_ERR_OR_ZERO(plane_state);
- if (!ret) {
- drm_atomic_set_fb_for_plane(plane_state, fb);
-
- ret = drm_atomic_set_crtc_for_plane(plane_state, crtc);
- if (!ret)
- ret = drm_atomic_commit(state);
- }
-
- if (ret == -EDEADLK) {
- drm_modeset_backoff(state->acquire_ctx);
- drm_atomic_state_clear(state);
- goto retry;
- }
+cleanup:
+ if (new_state)
+ intel_plane_destroy_state(primary, new_state);
- if (ret)
- drm_atomic_state_free(state);
+ if (new_crtc_state)
+ intel_crtc_destroy_state(crtc, new_crtc_state);
- if (ret == 0 && event) {
- spin_lock_irq(&dev->event_lock);
- drm_crtc_send_vblank_event(crtc, event);
- spin_unlock_irq(&dev->event_lock);
- }
- }
+ kfree(work);
return ret;
}
@@ -13829,33 +13994,6 @@ static const struct drm_crtc_funcs intel_crtc_funcs = {
.atomic_destroy_state = intel_crtc_destroy_state,
};
-static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
-{
- struct reservation_object *resv;
-
-
- if (!obj->base.dma_buf)
- return NULL;
-
- resv = obj->base.dma_buf->resv;
-
- /* For framebuffer backed by dmabuf, wait for fence */
- while (1) {
- struct fence *fence_excl, *ret = NULL;
-
- rcu_read_lock();
-
- fence_excl = rcu_dereference(resv->fence_excl);
- if (fence_excl)
- ret = fence_get_rcu(fence_excl);
-
- rcu_read_unlock();
-
- if (ret == fence_excl)
- return ret;
- }
-}
-
/**
* intel_prepare_plane_fb - Prepare fb for usage on plane
* @plane: drm plane to prepare for
@@ -15159,7 +15297,7 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv)
/* Drop through - unsupported since execlist only. */
default:
/* Default just returns -ENODEV to indicate unsupported */
- dev_priv->display.queue_flip = intel_default_queue_flip;
+ break;
}
}
@@ -16119,9 +16257,9 @@ void intel_modeset_gem_init(struct drm_device *dev)
DRM_ERROR("failed to pin boot fb on pipe %d\n",
to_intel_crtc(c)->pipe);
drm_framebuffer_unreference(c->primary->fb);
- c->primary->fb = NULL;
+ drm_framebuffer_unreference(c->primary->state->fb);
+ c->primary->fb = c->primary->state->fb = NULL;
c->primary->crtc = c->primary->state->crtc = NULL;
- update_state_fb(c->primary);
c->state->plane_mask &= ~(1 << drm_plane_index(c->primary));
}
}
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 6944202d3de0..c6d40bfce147 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -978,12 +978,6 @@ struct intel_flip_work {
struct work_struct unpin_work;
struct work_struct mmio_work;
- struct intel_crtc_state *new_crtc_state;
- bool free_new_crtc_state;
-
- struct drm_crtc *crtc;
- struct drm_framebuffer *old_fb;
- struct drm_i915_gem_object *pending_flip_obj;
struct drm_pending_vblank_event *event;
atomic_t pending;
u32 flip_count;
@@ -991,6 +985,17 @@ struct intel_flip_work {
struct drm_i915_gem_request *flip_queued_req;
u32 flip_queued_vblank;
u32 flip_ready_vblank;
+
+ unsigned put_power_domains;
+ unsigned num_planes;
+
+ bool can_async_unpin, flip_prepared, free_new_crtc_state;
+
+ unsigned fb_bits;
+
+ struct intel_crtc_state *old_crtc_state, *new_crtc_state;
+ struct intel_plane_state *old_plane_state[I915_MAX_PLANES + 1];
+ struct intel_plane_state *new_plane_state[I915_MAX_PLANES + 1];
};
struct intel_load_detect_pipe {
--
2.5.5
* [PATCH 5/9] drm/i915: Remove cs based page flip support, v3.
2016-05-26 10:38 ` [PATCH 5/9] drm/i915: Remove cs based page flip support, v2 Maarten Lankhorst
@ 2016-05-30 7:55 ` Maarten Lankhorst
0 siblings, 0 replies; 18+ messages in thread
From: Maarten Lankhorst @ 2016-05-30 7:55 UTC (permalink / raw)
To: Intel Graphics Development
With mmio flips now available on all platforms, it's time to remove
support for cs flips.
Changes since v1:
- Rebase for legacy cursor updates.
Changes since v2:
- Silence remainder of cs page flip handler.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/gpu/drm/i915/i915_debugfs.c | 19 +-
drivers/gpu/drm/i915/i915_irq.c | 120 ++---------
drivers/gpu/drm/i915/intel_display.c | 390 +----------------------------------
drivers/gpu/drm/i915/intel_drv.h | 9 +-
4 files changed, 33 insertions(+), 505 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index b52c1a5f3451..b29ba16c90b3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -628,7 +628,6 @@ static void i915_dump_pageflip(struct seq_file *m,
{
const char pipe = pipe_name(crtc->pipe);
u32 pending;
- u32 addr;
int i;
pending = atomic_read(&work->pending);
@@ -640,7 +639,6 @@ static void i915_dump_pageflip(struct seq_file *m,
pipe, plane_name(crtc->plane));
}
-
for (i = 0; i < work->num_planes; i++) {
struct intel_plane_state *old_plane_state = work->old_plane_state[i];
struct drm_plane *plane = old_plane_state->base.plane;
@@ -664,22 +662,9 @@ static void i915_dump_pageflip(struct seq_file *m,
i915_gem_request_completed(req, true));
}
- seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
- work->flip_queued_vblank,
- work->flip_ready_vblank,
+ seq_printf(m, "Flip queued on frame %d, now %d\n",
+ pending ? work->flip_queued_vblank : -1,
intel_crtc_get_vblank_counter(crtc));
- seq_printf(m, "%d prepares\n", atomic_read(&work->pending));
-
- if (INTEL_INFO(dev_priv)->gen >= 4)
- addr = I915_HI_DISPBASE(I915_READ(DSPSURF(crtc->plane)));
- else
- addr = I915_READ(DSPADDR(crtc->plane));
- seq_printf(m, "Current scanout address 0x%08x\n", addr);
-
- if (work->flip_queued_req) {
- seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
- seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
- }
}
static int i915_gem_pageflip_info(struct seq_file *m, void *data)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index caaf1e2a7bc1..65e0fecf362b 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -136,6 +136,12 @@ static const u32 hpd_bxt[HPD_NUM_PINS] = {
POSTING_READ(type##IIR); \
} while (0)
+static void
+intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, unsigned pipe)
+{
+ /* No-op, in case CS page flip support is ever re-added. */
+}
+
/*
* We should clear IMR at preinstall/uninstall, and just check at postinstall.
*/
@@ -1631,16 +1637,11 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
}
}
-static bool intel_pipe_handle_vblank(struct drm_i915_private *dev_priv,
+static void intel_pipe_handle_vblank(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
- bool ret;
-
- ret = drm_handle_vblank(dev_priv->dev, pipe);
- if (ret)
+ if (drm_handle_vblank(dev_priv->dev, pipe))
intel_finish_page_flip_mmio(dev_priv, pipe);
-
- return ret;
}
static void valleyview_pipestat_irq_ack(struct drm_i915_private *dev_priv,
@@ -1707,9 +1708,8 @@ static void valleyview_pipestat_irq_handler(struct drm_i915_private *dev_priv,
enum pipe pipe;
for_each_pipe(dev_priv, pipe) {
- if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PLANE_FLIP_DONE_INT_STATUS_VLV)
intel_finish_page_flip_cs(dev_priv, pipe);
@@ -2155,9 +2155,8 @@ static void ilk_display_irq_handler(struct drm_i915_private *dev_priv,
DRM_ERROR("Poison interrupt\n");
for_each_pipe(dev_priv, pipe) {
- if (de_iir & DE_PIPE_VBLANK(pipe) &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (de_iir & DE_PIPE_VBLANK(pipe))
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (de_iir & DE_PIPE_FIFO_UNDERRUN(pipe))
intel_cpu_fifo_underrun_irq_handler(dev_priv, pipe);
@@ -2206,9 +2205,8 @@ static void ivb_display_irq_handler(struct drm_i915_private *dev_priv,
intel_opregion_asle_intr(dev_priv);
for_each_pipe(dev_priv, pipe) {
- if (de_iir & (DE_PIPE_VBLANK_IVB(pipe)) &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (de_iir & (DE_PIPE_VBLANK_IVB(pipe)))
+ intel_pipe_handle_vblank(dev_priv, pipe);
/* plane/pipes map 1:1 on ilk+ */
if (de_iir & DE_PLANE_FLIP_DONE_IVB(pipe))
@@ -2407,9 +2405,8 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
ret = IRQ_HANDLED;
I915_WRITE(GEN8_DE_PIPE_IIR(pipe), iir);
- if (iir & GEN8_PIPE_VBLANK &&
- intel_pipe_handle_vblank(dev_priv, pipe))
- intel_check_page_flip(dev_priv, pipe);
+ if (iir & GEN8_PIPE_VBLANK)
+ intel_pipe_handle_vblank(dev_priv, pipe);
flip_done = iir;
if (INTEL_INFO(dev_priv)->gen >= 9)
@@ -3975,37 +3972,6 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
return 0;
}
-/*
- * Returns true when a page flip has completed.
- */
-static bool i8xx_handle_vblank(struct drm_i915_private *dev_priv,
- int plane, int pipe, u32 iir)
-{
- u16 flip_pending = DISPLAY_PLANE_FLIP_PENDING(plane);
-
- if (!intel_pipe_handle_vblank(dev_priv, pipe))
- return false;
-
- if ((iir & flip_pending) == 0)
- goto check_page_flip;
-
- /* We detect FlipDone by looking for the change in PendingFlip from '1'
- * to '0' on the following vblank, i.e. IIR has the Pendingflip
- * asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
- * the flip is completed (no longer pending). Since this doesn't raise
- * an interrupt per se, we watch for the change at vblank.
- */
- if (I915_READ16(ISR) & flip_pending)
- goto check_page_flip;
-
- intel_finish_page_flip_cs(dev_priv, pipe);
- return true;
-
-check_page_flip:
- intel_check_page_flip(dev_priv, pipe);
- return false;
-}
-
static irqreturn_t i8xx_irq_handler(int irq, void *arg)
{
struct drm_device *dev = arg;
@@ -4058,13 +4024,8 @@ static irqreturn_t i8xx_irq_handler(int irq, void *arg)
notify_ring(&dev_priv->engine[RCS]);
for_each_pipe(dev_priv, pipe) {
- int plane = pipe;
- if (HAS_FBC(dev_priv))
- plane = !plane;
-
- if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS &&
- i8xx_handle_vblank(dev_priv, plane, pipe, iir))
- flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(plane);
+ if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PIPE_CRC_DONE_INTERRUPT_STATUS)
i9xx_pipe_crc_irq_handler(dev_priv, pipe);
@@ -4164,37 +4125,6 @@ static int i915_irq_postinstall(struct drm_device *dev)
return 0;
}
-/*
- * Returns true when a page flip has completed.
- */
-static bool i915_handle_vblank(struct drm_i915_private *dev_priv,
- int plane, int pipe, u32 iir)
-{
- u32 flip_pending = DISPLAY_PLANE_FLIP_PENDING(plane);
-
- if (!intel_pipe_handle_vblank(dev_priv, pipe))
- return false;
-
- if ((iir & flip_pending) == 0)
- goto check_page_flip;
-
- /* We detect FlipDone by looking for the change in PendingFlip from '1'
- * to '0' on the following vblank, i.e. IIR has the Pendingflip
- * asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
- * the flip is completed (no longer pending). Since this doesn't raise
- * an interrupt per se, we watch for the change at vblank.
- */
- if (I915_READ(ISR) & flip_pending)
- goto check_page_flip;
-
- intel_finish_page_flip_cs(dev_priv, pipe);
- return true;
-
-check_page_flip:
- intel_check_page_flip(dev_priv, pipe);
- return false;
-}
-
static irqreturn_t i915_irq_handler(int irq, void *arg)
{
struct drm_device *dev = arg;
@@ -4255,13 +4185,8 @@ static irqreturn_t i915_irq_handler(int irq, void *arg)
notify_ring(&dev_priv->engine[RCS]);
for_each_pipe(dev_priv, pipe) {
- int plane = pipe;
- if (HAS_FBC(dev_priv))
- plane = !plane;
-
- if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS &&
- i915_handle_vblank(dev_priv, plane, pipe, iir))
- flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(plane);
+ if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PIPE_LEGACY_BLC_EVENT_STATUS)
blc_event = true;
@@ -4489,9 +4414,8 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
notify_ring(&dev_priv->engine[VCS]);
for_each_pipe(dev_priv, pipe) {
- if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS &&
- i915_handle_vblank(dev_priv, pipe, pipe, iir))
- flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(pipe);
+ if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS)
+ intel_pipe_handle_vblank(dev_priv, pipe);
if (pipe_stats[pipe] & PIPE_LEGACY_BLC_EVENT_STATUS)
blc_event = true;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index e2d6fd0cd42c..2324b74f72f4 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -48,11 +48,6 @@
#include <linux/reservation.h>
#include <linux/dma-buf.h>
-static bool is_mmio_work(struct intel_flip_work *work)
-{
- return !work->flip_queued_req;
-}
-
/* Primary plane formats for gen <= 3 */
static const uint32_t i8xx_primary_formats[] = {
DRM_FORMAT_C8,
@@ -3103,14 +3098,6 @@ intel_pipe_set_base_atomic(struct drm_crtc *crtc, struct drm_framebuffer *fb,
return -ENODEV;
}
-static void intel_complete_page_flips(struct drm_i915_private *dev_priv)
-{
- struct intel_crtc *crtc;
-
- for_each_intel_crtc(dev_priv->dev, crtc)
- intel_finish_page_flip_cs(dev_priv, crtc->pipe);
-}
-
static void intel_update_primary_planes(struct drm_device *dev)
{
struct drm_crtc *crtc;
@@ -3151,13 +3138,6 @@ void intel_prepare_reset(struct drm_i915_private *dev_priv)
void intel_finish_reset(struct drm_i915_private *dev_priv)
{
- /*
- * Flips in the rings will be nuked by the reset,
- * so complete all pending flips so that user space
- * will get its events and not get stuck.
- */
- intel_complete_page_flips(dev_priv);
-
/* no reset support for gen2 */
if (IS_GEN2(dev_priv))
return;
@@ -3835,26 +3815,7 @@ static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
if (ret < 0)
return ret;
- if (ret == 0) {
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
-
- spin_lock_irq(&dev->event_lock);
-
- /*
- * If we're waiting for page flips, it's the first
- * flip on the list that's stuck.
- */
- work = list_first_entry_or_null(&intel_crtc->flip_work,
- struct intel_flip_work, head);
-
- if (work && !is_mmio_work(work) &&
- !work_busy(&work->unpin_work)) {
- WARN_ONCE(1, "Removing stuck page flip\n");
- page_flip_completed(intel_crtc, work);
- }
- spin_unlock_irq(&dev->event_lock);
- }
+ WARN(ret == 0, "Stuck page flip\n");
return 0;
}
@@ -11031,9 +10992,6 @@ static void intel_unpin_work_fn(struct work_struct *__work)
if (work->free_new_crtc_state)
intel_crtc_destroy_state(crtc, &work->new_crtc_state->base);
- if (work->flip_queued_req)
- i915_gem_request_unreference(work->flip_queued_req);
-
for (i = 0; i < work->num_planes; i++) {
struct intel_plane_state *old_plane_state =
work->old_plane_state[i];
@@ -11066,75 +11024,6 @@ static void intel_unpin_work_fn(struct work_struct *__work)
kfree(work);
}
-/* Is 'a' after or equal to 'b'? */
-static bool g4x_flip_count_after_eq(u32 a, u32 b)
-{
- return !((a - b) & 0x80000000);
-}
-
-static bool __pageflip_finished_cs(struct intel_crtc *crtc,
- struct intel_flip_work *work)
-{
- struct drm_device *dev = crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned reset_counter;
-
- reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (crtc->reset_counter != reset_counter)
- return true;
-
- /*
- * The relevant registers doen't exist on pre-ctg.
- * As the flip done interrupt doesn't trigger for mmio
- * flips on gmch platforms, a flip count check isn't
- * really needed there. But since ctg has the registers,
- * include it in the check anyway.
- */
- if (INTEL_INFO(dev)->gen < 5 && !IS_G4X(dev))
- return true;
-
- /*
- * BDW signals flip done immediately if the plane
- * is disabled, even if the plane enable is already
- * armed to occur at the next vblank :(
- */
-
- /*
- * A DSPSURFLIVE check isn't enough in case the mmio and CS flips
- * used the same base address. In that case the mmio flip might
- * have completed, but the CS hasn't even executed the flip yet.
- *
- * A flip count check isn't enough as the CS might have updated
- * the base address just after start of vblank, but before we
- * managed to process the interrupt. This means we'd complete the
- * CS flip too soon.
- *
- * Combining both checks should get us a good enough result. It may
- * still happen that the CS flip has been executed, but has not
- * yet actually completed. But in case the base address is the same
- * anyway, we don't really care.
- */
- return (I915_READ(DSPSURFLIVE(crtc->plane)) & ~0xfff) ==
- work->gtt_offset &&
- g4x_flip_count_after_eq(I915_READ(PIPE_FLIPCOUNT_G4X(crtc->pipe)),
- work->flip_count);
-}
-
-static bool
-__pageflip_finished_mmio(struct intel_crtc *crtc,
- struct intel_flip_work *work)
-{
- /*
- * MMIO work completes when vblank is different from
- * flip_queued_vblank.
- *
- * Reset counter value doesn't matter, this is handled by
- * i915_wait_request finishing early, so no need to handle
- * reset here.
- */
- return intel_crtc_get_vblank_counter(crtc) != work->flip_queued_vblank;
-}
-
static bool pageflip_finished(struct intel_crtc *crtc,
struct intel_flip_work *work)
@@ -11144,44 +11033,11 @@ static bool pageflip_finished(struct intel_crtc *crtc,
smp_rmb();
- if (is_mmio_work(work))
- return __pageflip_finished_mmio(crtc, work);
- else
- return __pageflip_finished_cs(crtc, work);
-}
-
-void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe)
-{
- struct drm_device *dev = dev_priv->dev;
- struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
- unsigned long flags;
-
- /* Ignore early vblank irqs */
- if (!crtc)
- return;
-
/*
- * This is called both by irq handlers and the reset code (to complete
- * lost pageflips) so needs the full irqsave spinlocks.
+ * MMIO work completes when vblank is different from
+ * flip_queued_vblank.
*/
- spin_lock_irqsave(&dev->event_lock, flags);
- while (!list_empty(&intel_crtc->flip_work)) {
- work = list_first_entry(&intel_crtc->flip_work,
- struct intel_flip_work,
- head);
-
- if (is_mmio_work(work))
- break;
-
- if (!pageflip_finished(intel_crtc, work) ||
- work_busy(&work->unpin_work))
- break;
-
- page_flip_completed(intel_crtc, work);
- }
- spin_unlock_irqrestore(&dev->event_lock, flags);
+ return intel_crtc_get_vblank_counter(crtc) != work->flip_queued_vblank;
}
void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
@@ -11206,9 +11062,6 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
struct intel_flip_work,
head);
- if (!is_mmio_work(work))
- break;
-
if (!pageflip_finished(intel_crtc, work) ||
work_busy(&work->unpin_work))
break;
@@ -11218,16 +11071,6 @@ void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe)
spin_unlock_irqrestore(&dev->event_lock, flags);
}
-static inline void intel_mark_page_flip_active(struct intel_crtc *crtc,
- struct intel_flip_work *work)
-{
- work->flip_queued_vblank = intel_crtc_get_vblank_counter(crtc);
-
- /* Ensure that the work item is consistent when activating it ... */
- smp_mb__before_atomic();
- atomic_set(&work->pending, 1);
-}
-
static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
@@ -11459,152 +11302,6 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
return 0;
}
-static struct intel_engine_cs *
-intel_get_flip_engine(struct drm_device *dev,
- struct drm_i915_private *dev_priv,
- struct drm_i915_gem_object *obj)
-{
- if (IS_VALLEYVIEW(dev) || IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
- return &dev_priv->engine[BCS];
-
- if (dev_priv->info.gen >= 7) {
- struct intel_engine_cs *engine;
-
- engine = i915_gem_request_get_engine(obj->last_write_req);
- if (engine && engine->id == RCS)
- return engine;
-
- return &dev_priv->engine[BCS];
- } else
- return &dev_priv->engine[RCS];
-}
-
-static bool
-flip_fb_compatible(struct drm_device *dev,
- struct drm_framebuffer *fb,
- struct drm_framebuffer *old_fb)
-{
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
- struct drm_i915_gem_object *old_obj = intel_fb_obj(old_fb);
-
- if (old_fb->pixel_format != fb->pixel_format)
- return false;
-
- if (INTEL_INFO(dev)->gen > 3 &&
- (fb->offsets[0] != old_fb->offsets[0] ||
- fb->pitches[0] != old_fb->pitches[0]))
- return false;
-
- /* vlv: DISPLAY_FLIP fails to change tiling */
- if (IS_VALLEYVIEW(dev) && obj->tiling_mode != old_obj->tiling_mode)
- return false;
-
- return true;
-}
-
-static void
-intel_display_flip_prepare(struct drm_device *dev, struct drm_crtc *crtc,
- struct intel_flip_work *work)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-
- if (work->flip_prepared)
- return;
-
- work->flip_prepared = true;
-
- if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
- work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(intel_crtc->pipe)) + 1;
- work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
-
- intel_frontbuffer_flip_prepare(dev, work->new_crtc_state->fb_bits);
-}
-
-static void intel_flip_schedule_request(struct intel_flip_work *work, struct drm_crtc *crtc)
-{
- struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane_state *new_state = work->new_plane_state[0];
- struct intel_plane_state *old_state = work->old_plane_state[0];
- struct drm_framebuffer *fb, *old_fb;
- struct drm_i915_gem_request *request = NULL;
- struct intel_engine_cs *engine;
- struct drm_i915_gem_object *obj;
- struct fence *fence;
- int ret;
-
- to_intel_crtc(crtc)->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (__i915_reset_in_progress_or_wedged(to_intel_crtc(crtc)->reset_counter))
- goto mmio;
-
- if (i915_terminally_wedged(&dev_priv->gpu_error) ||
- i915_reset_in_progress(&dev_priv->gpu_error) ||
- i915.enable_execlists || i915.use_mmio_flip > 0 ||
- !dev_priv->display.queue_flip)
- goto mmio;
-
- /* Not right after modesetting, surface parameters need to be updated */
- if (needs_modeset(crtc->state) ||
- to_intel_crtc_state(crtc->state)->update_pipe)
- goto mmio;
-
- /* Only allow a mmio flip for a primary plane without a dma-buf fence */
- if (work->num_planes != 1 ||
- new_state->base.plane != crtc->primary ||
- new_state->base.fence)
- goto mmio;
-
- fence = work->old_plane_state[0]->base.fence;
- if (fence && !fence_is_signaled(fence))
- goto mmio;
-
- old_fb = old_state->base.fb;
- fb = new_state->base.fb;
- obj = intel_fb_obj(fb);
-
- /* Only when updating a already visible fb. */
- if (!new_state->visible || !old_state->visible)
- goto mmio;
-
- if (!flip_fb_compatible(dev, fb, old_fb))
- goto mmio;
-
- engine = intel_get_flip_engine(dev, dev_priv, obj);
- if (i915.use_mmio_flip == 0 && obj->last_write_req &&
- i915_gem_request_get_engine(obj->last_write_req) != engine)
- goto mmio;
-
- work->gtt_offset = intel_plane_obj_offset(to_intel_plane(crtc->primary), obj, 0);
- work->gtt_offset += to_intel_crtc(crtc)->dspaddr_offset;
-
- ret = i915_gem_object_sync(obj, engine, &request);
- if (!ret && !request) {
- request = i915_gem_request_alloc(engine, NULL);
- ret = PTR_ERR_OR_ZERO(request);
-
- if (ret)
- request = NULL;
- }
-
- intel_display_flip_prepare(dev, crtc, work);
-
- if (!ret)
- ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request, 0);
-
- if (!ret) {
- i915_gem_request_assign(&work->flip_queued_req, request);
- intel_mark_page_flip_active(to_intel_crtc(crtc), work);
- i915_add_request_no_flush(request);
- return;
- }
- if (request)
- i915_add_request_no_flush(request);
-
-mmio:
- schedule_work(&work->mmio_work);
-}
-
static void intel_mmio_flip_work_func(struct work_struct *w)
{
struct intel_flip_work *work =
@@ -11632,7 +11329,7 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
&dev_priv->rps.mmioflips));
}
- intel_display_flip_prepare(dev, crtc, work);
+ intel_frontbuffer_flip_prepare(dev, crtc_state->fb_bits);
intel_pipe_update_start(intel_crtc);
if (!needs_modeset(&crtc_state->base)) {
@@ -11657,80 +11354,6 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
intel_pipe_update_end(intel_crtc, work);
}
-static bool __pageflip_stall_check_cs(struct drm_i915_private *dev_priv,
- struct intel_crtc *intel_crtc,
- struct intel_flip_work *work)
-{
- u32 addr, vblank;
-
- if (!atomic_read(&work->pending) ||
- work_busy(&work->unpin_work))
- return false;
-
- smp_rmb();
-
- vblank = intel_crtc_get_vblank_counter(intel_crtc);
- if (work->flip_ready_vblank == 0) {
- if (work->flip_queued_req &&
- !i915_gem_request_completed(work->flip_queued_req, true))
- return false;
-
- work->flip_ready_vblank = vblank;
- }
-
- if (vblank - work->flip_ready_vblank < 3)
- return false;
-
- /* Potential stall - if we see that the flip has happened,
- * assume a missed interrupt. */
- if (INTEL_GEN(dev_priv) >= 4)
- addr = I915_HI_DISPBASE(I915_READ(DSPSURF(intel_crtc->plane)));
- else
- addr = I915_READ(DSPADDR(intel_crtc->plane));
-
- /* There is a potential issue here with a false positive after a flip
- * to the same address. We could address this by checking for a
- * non-incrementing frame counter.
- */
- return addr == work->gtt_offset;
-}
-
-void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe)
-{
- struct drm_device *dev = dev_priv->dev;
- struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
-
- WARN_ON(!in_interrupt());
-
- if (crtc == NULL)
- return;
-
- spin_lock(&dev->event_lock);
- while (!list_empty(&intel_crtc->flip_work)) {
- work = list_first_entry(&intel_crtc->flip_work,
- struct intel_flip_work, head);
-
- if (is_mmio_work(work))
- break;
-
- if (__pageflip_stall_check_cs(dev_priv, intel_crtc, work)) {
- WARN_ONCE(1,
- "Kicking stuck page flip: queued at %d, now %d\n",
- work->flip_queued_vblank, intel_crtc_get_vblank_counter(intel_crtc));
- page_flip_completed(intel_crtc, work);
- continue;
- }
-
- if (intel_crtc_get_vblank_counter(intel_crtc) - work->flip_queued_vblank > 1)
- intel_queue_rps_boost_for_request(work->flip_queued_req);
-
- break;
- }
- spin_unlock(&dev->event_lock);
-}
-
static struct fence *intel_get_excl_fence(struct drm_i915_gem_object *obj)
{
struct reservation_object *resv;
@@ -11896,7 +11519,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
intel_fbc_pre_update(intel_crtc);
- intel_flip_schedule_request(work, crtc);
+ intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ schedule_work(&work->mmio_work);
mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index c6d40bfce147..e7e262ac1f99 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -980,16 +980,12 @@ struct intel_flip_work {
struct drm_pending_vblank_event *event;
atomic_t pending;
- u32 flip_count;
- u32 gtt_offset;
- struct drm_i915_gem_request *flip_queued_req;
u32 flip_queued_vblank;
- u32 flip_ready_vblank;
unsigned put_power_domains;
unsigned num_planes;
- bool can_async_unpin, flip_prepared, free_new_crtc_state;
+ bool can_async_unpin, free_new_crtc_state;
unsigned fb_bits;
@@ -1207,9 +1203,8 @@ struct drm_framebuffer *
__intel_framebuffer_create(struct drm_device *dev,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj);
-void intel_finish_page_flip_cs(struct drm_i915_private *dev_priv, int pipe);
void intel_finish_page_flip_mmio(struct drm_i915_private *dev_priv, int pipe);
-void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe);
+
int intel_prepare_plane_fb(struct drm_plane *plane,
const struct drm_plane_state *new_state);
void intel_cleanup_plane_fb(struct drm_plane *plane,
--
2.5.5