* [PATCH 0/4] More nuclear pageflip
@ 2015-01-31  0:22 Matt Roper
From: Matt Roper @ 2015-01-31  0:22 UTC (permalink / raw)
  To: intel-gfx

The first two patches here were already posted earlier this week; they allow
our legacy plane updates to make use of the main atomic helpers rather than the
transitional atomic helpers.  This shouldn't result in any functional change,
but it will cause us to exercise the full atomic pipeline rather than just the
plane subset of it.

The third patch is the interesting one; it allows us to handle nuclear pageflip
requests in a non-blocking manner and deliver a uevent upon completion.  This
is important functionality for compositors since it allows them to ask the
kernel to perform a flip and then continue doing other work while they wait
for the flip to actually happen.
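
As a rough illustration, the split this aims for can be modeled in plain C:
the synchronous half of a commit does everything that can still fail, and once
the new state is swapped in, the remaining work may run either inline
(blocking, legacy behavior) or from a deferred worker (non-blocking).  This is
a userspace sketch only; names like atomic_commit() and finish_commit() are
illustrative, not the driver's actual entry points.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the driver's per-commit tracking structure. */
struct commit {
	unsigned int crtc_mask;   /* CRTCs touched by this commit */
	bool finished;            /* hardware programming done */
};

/* Stand-in for schedule_work(): remember the commit to finish later. */
static struct commit *deferred_commit;

static void finish_commit(struct commit *c)
{
	/* program planes, wait for vblank, send completion event ... */
	c->finished = true;
}

/* After validation and the state swap, nothing may fail; the only
 * remaining choice is whether to finish now or let a worker do it. */
int atomic_commit(struct commit *c, bool async)
{
	if (async) {
		deferred_commit = c;   /* worker picks it up later */
		return 0;
	}
	finish_commit(c);
	return 0;
}
```

A compositor taking the async path gets control back immediately and learns
about completion from the event, instead of blocking inside the ioctl.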

The final patch here switches our legacy pageflip ioctl over to the atomic
helper (thus exercising the new asynchronous support added in patch #3).
Removing the i915-specific pageflip handling should allow us to drop a bunch
of our display code.  I've been somewhat conservative in my code removal for
now (just enough to get rid of the 'function unused' compiler warnings);
further cleanup of the legacy pageflip pipeline can happen in a future
patchset.

Matt Roper (4):
  drm/i915: Keep plane->state updated on pageflip
  drm/i915: Switch planes from transitional helpers to full atomic
    helpers
  drm/i915: Enable asynchronous nuclear flips
  drm/i915: Use atomic helper for pageflips

 drivers/gpu/drm/i915/i915_drv.h      |   9 +-
 drivers/gpu/drm/i915/i915_params.c   |   5 -
 drivers/gpu/drm/i915/intel_atomic.c  | 162 ++++++--
 drivers/gpu/drm/i915/intel_display.c | 716 +----------------------------------
 drivers/gpu/drm/i915/intel_drv.h     |   8 +
 drivers/gpu/drm/i915/intel_lrc.c     |   3 +-
 6 files changed, 155 insertions(+), 748 deletions(-)

-- 
1.8.5.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [PATCH 1/4] drm/i915: Keep plane->state updated on pageflip
From: Matt Roper @ 2015-01-31  0:22 UTC (permalink / raw)
  To: intel-gfx

Until all drivers have transitioned to atomic, the framebuffer
associated with a plane is tracked in both plane->fb (for legacy code) and
plane->state->fb (for the new atomic code flow).  All of our modeset
and plane updates go through drm_plane->update_plane(), so in theory plane->fb
and plane->state->fb should always stay in sync and point at the same
thing for i915.  However, we forgot about the pageflip ioctl path, which
currently only updates plane->fb and leaves plane->state->fb at a stale
value.

Surprisingly, this doesn't cause any real problems at the moment since
internally we use the plane->fb pointer in most of the places that
matter, and on the next .update_plane() call, we use plane->fb to figure
out which framebuffer to clean up.  However, when we switch to the full
atomic helpers for update_plane()/disable_plane(), those helpers use
plane->state->fb to figure out which framebuffer to clean up, so not
having updated the plane->state->fb pointer causes things to blow up
following a pageflip ioctl.

The fix here is to just make sure we update plane->state->fb at the same
time we update plane->fb in the pageflip ioctl.
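
The bookkeeping involved can be modeled outside the kernel: both pointers must
be retargeted, and the reference held via the state must move from the old
framebuffer to the new one.  This is a toy model with illustrative names, not
the real drm structures; the actual code uses drm_framebuffer_reference() and
drm_framebuffer_unreference().

```c
#include <stddef.h>

/* Toy stand-ins for the drm framebuffer/plane structures. */
struct fb { int refs; };
struct plane_state { struct fb *fb; };
struct plane {
	struct fb *fb;              /* legacy pointer */
	struct plane_state *state;  /* atomic pointer */
};

static void fb_ref(struct fb *fb)   { if (fb) fb->refs++; }
static void fb_unref(struct fb *fb) { if (fb) fb->refs--; }

/* Mirror of the fix: update the legacy pointer as before, but also
 * drop the state's reference on the old fb, retarget state->fb, and
 * take a reference on the new fb through the state pointer. */
void page_flip(struct plane *plane, struct fb *new_fb)
{
	plane->fb = new_fb;

	fb_unref(plane->state->fb);
	plane->state->fb = new_fb;
	fb_ref(plane->state->fb);
}
```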

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
 drivers/gpu/drm/i915/intel_display.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 3d220a6..08e2bab 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -9801,6 +9801,13 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 	crtc->primary->fb = fb;
 
+	/* Keep state structure in sync */
+	if (crtc->primary->state->fb)
+		drm_framebuffer_unreference(crtc->primary->state->fb);
+	crtc->primary->state->fb = fb;
+	if (crtc->primary->state->fb)
+		drm_framebuffer_reference(crtc->primary->state->fb);
+
 	work->pending_flip_obj = obj;
 
 	atomic_inc(&intel_crtc->unpin_work_count);
-- 
1.8.5.1


* [PATCH 2/4] drm/i915: Switch planes from transitional helpers to full atomic helpers
From: Matt Roper @ 2015-01-31  0:22 UTC (permalink / raw)
  To: intel-gfx

There are two sets of helper functions provided by the DRM core that can
implement the .update_plane() and .disable_plane() hooks in terms of a
driver's atomic entrypoints.  The transitional helpers (which we have
been using so far) create a plane state and then use the plane's atomic
entrypoints to perform the atomic begin/check/prepare/commit/finish
sequence on that single plane only.  The full atomic helpers create a
top-level atomic state (which is capable of holding multiple object
states for planes, CRTCs, and/or connectors) and then pass that
top-level atomic state through the full "atomic modeset" pipeline.

Switching from the transitional helpers to the full helpers shouldn't result
in any functional change, but it will enable us to exercise and test more of
the internal atomic pipeline with the legacy APIs used by existing
applications.

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
 drivers/gpu/drm/i915/intel_display.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 08e2bab..ebf973c 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -12076,8 +12076,8 @@ void intel_plane_destroy(struct drm_plane *plane)
 }
 
 const struct drm_plane_funcs intel_plane_funcs = {
-	.update_plane = drm_plane_helper_update,
-	.disable_plane = drm_plane_helper_disable,
+	.update_plane = drm_atomic_helper_update_plane,
+	.disable_plane = drm_atomic_helper_disable_plane,
 	.destroy = intel_plane_destroy,
 	.set_property = drm_atomic_helper_plane_set_property,
 	.atomic_get_property = intel_plane_atomic_get_property,
-- 
1.8.5.1


* [PATCH 3/4] drm/i915: Enable asynchronous nuclear flips
From: Matt Roper @ 2015-01-31  0:22 UTC (permalink / raw)
  To: intel-gfx

The initial i915 nuclear pageflip support rejected asynchronous updates.
Allow all work after we swap in the new state structures to run
asynchronously.  We also need to start sending completion events to
userspace if they were requested.
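
The serialization this relies on (the pending_atomic CRTC mask guarded by the
pending_flip_queue lock) can be sketched in userspace with a mutex and
condition variable standing in for the kernel's locked waitqueue:
claim_crtcs() plays the role of wait_for_pending_flip() and release_crtcs()
the role of flip_completion().  The function names here are illustrative, not
part of the patch.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

static pthread_mutex_t flip_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t flip_done = PTHREAD_COND_INITIALIZER;
static uint32_t pending_crtcs;  /* mask of CRTCs with a flip in flight */

/* Block until none of the CRTCs in 'mask' has a pending flip, then
 * mark them busy in one step so concurrent committers can't race. */
void claim_crtcs(uint32_t mask)
{
	pthread_mutex_lock(&flip_lock);
	while (pending_crtcs & mask)
		pthread_cond_wait(&flip_done, &flip_lock);
	pending_crtcs |= mask;
	pthread_mutex_unlock(&flip_lock);
}

/* Mark the CRTCs idle again and wake every waiter, as the completion
 * worker does once the flip has actually happened. */
void release_crtcs(uint32_t mask)
{
	pthread_mutex_lock(&flip_lock);
	pending_crtcs &= ~mask;
	pthread_cond_broadcast(&flip_done);
	pthread_mutex_unlock(&flip_lock);
}

bool crtcs_busy(uint32_t mask)
{
	pthread_mutex_lock(&flip_lock);
	bool busy = (pending_crtcs & mask) != 0;
	pthread_mutex_unlock(&flip_lock);
	return busy;
}
```

Flips on disjoint CRTC masks proceed in parallel; a second flip on a busy CRTC
waits for the first to complete, matching the legacy one-flip-per-CRTC rule.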

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h     |   3 +
 drivers/gpu/drm/i915/intel_atomic.c | 162 +++++++++++++++++++++++++++++++-----
 drivers/gpu/drm/i915/intel_drv.h    |   8 ++
 3 files changed, 150 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8fad702..c7a520a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1777,6 +1777,9 @@ struct drm_i915_private {
 	struct drm_crtc *pipe_to_crtc_mapping[I915_MAX_PIPES];
 	wait_queue_head_t pending_flip_queue;
 
+	/* CRTC mask of pending atomic flips */
+	uint32_t pending_atomic;
+
 #ifdef CONFIG_DEBUG_FS
 	struct intel_pipe_crc pipe_crc[I915_MAX_PIPES];
 #endif
diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
index 19a9dd5..5dd7897 100644
--- a/drivers/gpu/drm/i915/intel_atomic.c
+++ b/drivers/gpu/drm/i915/intel_atomic.c
@@ -76,6 +76,8 @@ int intel_atomic_check(struct drm_device *dev,
 	state->allow_modeset = false;
 	for (i = 0; i < ncrtcs; i++) {
 		struct intel_crtc *crtc = to_intel_crtc(state->crtcs[i]);
+		if (crtc)
+			state->crtc_states[i]->enable = crtc->active;
 		if (crtc && crtc->pipe != nuclear_pipe)
 			not_nuclear = true;
 	}
@@ -96,6 +98,87 @@ int intel_atomic_check(struct drm_device *dev,
 }
 
 
+/*
+ * Wait until the CRTCs have no pending flip, then atomically mark those
+ * CRTCs as busy.
+ */
+static int wait_for_pending_flip(uint32_t crtc_mask,
+				 struct intel_pending_atomic *commit)
+{
+	struct drm_i915_private *dev_priv = commit->dev->dev_private;
+	int ret;
+
+	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
+	ret = wait_event_interruptible_locked(dev_priv->pending_flip_queue,
+					      !(dev_priv->pending_atomic & crtc_mask));
+	if (ret == 0)
+		dev_priv->pending_atomic |= crtc_mask;
+	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
+
+	return ret;
+}
+
+/* Finish the pending flip operation on the specified CRTCs */
+static void flip_completion(struct intel_pending_atomic *commit)
+{
+	struct drm_i915_private *dev_priv = commit->dev->dev_private;
+
+	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
+	dev_priv->pending_atomic &= ~commit->crtc_mask;
+	wake_up_all_locked(&dev_priv->pending_flip_queue);
+	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
+}
+
+/*
+ * Finish an atomic commit.  The work here can be performed asynchronously
+ * if desired.  The new state has already been applied to the DRM objects
+ * and no modeset locks are needed.
+ */
+static void finish_atomic_commit(struct work_struct *work)
+{
+	struct intel_pending_atomic *commit =
+		container_of(work, struct intel_pending_atomic, work);
+	struct drm_device *dev = commit->dev;
+	struct drm_crtc *crtc;
+	struct drm_atomic_state *state = commit->state;
+	int i;
+
+	/*
+	 * FIXME:  The proper sequence here will eventually be:
+	 *
+	 * drm_atomic_helper_commit_pre_planes(dev, state);
+	 * drm_atomic_helper_commit_planes(dev, state);
+	 * drm_atomic_helper_commit_post_planes(dev, state);
+	 * drm_atomic_helper_wait_for_vblanks(dev, state);
+	 * drm_atomic_helper_cleanup_planes(dev, state);
+	 * drm_atomic_state_free(state);
+	 *
+	 * once we have full atomic modeset.  For now, just manually update
+	 * plane states to avoid clobbering good states with dummy states
+	 * while nuclear pageflipping.
+	 */
+	drm_atomic_helper_commit_planes(dev, state);
+	drm_atomic_helper_wait_for_vblanks(dev, state);
+
+	/* Send CRTC completion events. */
+	for (i = 0; i < dev->mode_config.num_crtc; i++) {
+		crtc = state->crtcs[i];
+		if (crtc && crtc->state->event) {
+			spin_lock_irq(&dev->event_lock);
+			drm_send_vblank_event(dev, to_intel_crtc(crtc)->pipe,
+					      crtc->state->event);
+			spin_unlock_irq(&dev->event_lock);
+			crtc->state->event = NULL;
+		}
+	}
+
+	drm_atomic_helper_cleanup_planes(dev, state);
+	drm_atomic_state_free(state);
+
+	flip_completion(commit);
+	kfree(commit);
+}
+
 /**
  * intel_atomic_commit - commit validated state object
  * @dev: DRM device
@@ -116,34 +199,48 @@ int intel_atomic_commit(struct drm_device *dev,
 			struct drm_atomic_state *state,
 			bool async)
 {
+	struct intel_pending_atomic *commit;
 	int ret;
 	int i;
 
-	if (async) {
-		DRM_DEBUG_KMS("i915 does not yet support async commit\n");
-		return -EINVAL;
-	}
-
 	ret = drm_atomic_helper_prepare_planes(dev, state);
 	if (ret)
 		return ret;
 
-	/* Point of no return */
+	commit = kzalloc(sizeof(*commit), GFP_KERNEL);
+	if (!commit)
+		return -ENOMEM;
+
+	commit->dev = dev;
+	commit->state = state;
+
+	for (i = 0; i < dev->mode_config.num_crtc; i++)
+		if (state->crtcs[i])
+			commit->crtc_mask |=
+				(1 << drm_crtc_index(state->crtcs[i]));
 
 	/*
-	 * FIXME:  The proper sequence here will eventually be:
-	 *
-	 * drm_atomic_helper_swap_state(dev, state)
-	 * drm_atomic_helper_commit_pre_planes(dev, state);
-	 * drm_atomic_helper_commit_planes(dev, state);
-	 * drm_atomic_helper_commit_post_planes(dev, state);
-	 * drm_atomic_helper_wait_for_vblanks(dev, state);
-	 * drm_atomic_helper_cleanup_planes(dev, state);
-	 * drm_atomic_state_free(state);
-	 *
-	 * once we have full atomic modeset.  For now, just manually update
-	 * plane states to avoid clobbering good states with dummy states
-	 * while nuclear pageflipping.
+	 * If there's already a flip pending, we won't schedule another one
+	 * until it completes.  We may relax this in the future, but for now
+	 * this matches the legacy pageflip behavior.
+	 */
+	ret = wait_for_pending_flip(commit->crtc_mask, commit);
+	if (ret) {
+		kfree(commit);
+		return ret;
+	}
+
+	/*
+	 * Point of no return; everything from here on can't fail, so swap
+	 * the new state into the DRM objects.
+	 */
+
+	/*
+	 * FIXME:  Should eventually use drm_atomic_helper_swap_state().
+	 * However since we only handle planes at the moment, we don't
+	 * want to clobber our good crtc state with something bogus,
+	 * so just manually swap the plane states and copy over any completion
+	 * events for CRTC's.
 	 */
 	for (i = 0; i < dev->mode_config.num_total_plane; i++) {
 		struct drm_plane *plane = state->planes[i];
@@ -155,10 +252,29 @@ int intel_atomic_commit(struct drm_device *dev,
 		swap(state->plane_states[i], plane->state);
 		plane->state->state = NULL;
 	}
-	drm_atomic_helper_commit_planes(dev, state);
-	drm_atomic_helper_wait_for_vblanks(dev, state);
-	drm_atomic_helper_cleanup_planes(dev, state);
-	drm_atomic_state_free(state);
+
+	for (i = 0; i < dev->mode_config.num_crtc; i++) {
+		struct drm_crtc *crtc = state->crtcs[i];
+		struct drm_crtc_state *crtc_state = state->crtc_states[i];
+
+		if (!crtc || !crtc_state)
+			continue;
+
+		/* n.b., we only copy the event and enabled flag for now */
+		swap(crtc->state->event, crtc_state->event);
+		swap(crtc->state->enable, crtc_state->enable);
+	}
+
+	/*
+	 * From here on, we can do everything asynchronously without any
+	 * modeset locks.
+	 */
+	if (async) {
+		INIT_WORK(&commit->work, finish_atomic_commit);
+		schedule_work(&commit->work);
+	} else {
+		finish_atomic_commit(&commit->work);
+	}
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index eef79cc..15b02b0 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -391,6 +391,14 @@ struct intel_crtc_state {
 	int pbn;
 };
 
+/* In-flight atomic operation */
+struct intel_pending_atomic {
+	struct work_struct work;
+	struct drm_device *dev;
+	struct drm_atomic_state *state;
+	uint32_t crtc_mask;
+};
+
 struct intel_pipe_wm {
 	struct intel_wm_level wm[5];
 	uint32_t linetime;
-- 
1.8.5.1


* [PATCH 4/4] drm/i915: Use atomic helper for pageflips
From: Matt Roper @ 2015-01-31  0:22 UTC (permalink / raw)
  To: intel-gfx

Now that we have asynchronous nuclear pageflip, we should be able to
push our legacy pageflip ioctl through the atomic helper and rip out a
bunch of i915-specific pageflip code.

This patch is actually pretty conservative; I think there's a lot more
flip-related infrastructure that should be ripped out now that we're
never doing command streamer flips and always letting the DRM core
handle the flip logic.

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h      |   6 -
 drivers/gpu/drm/i915/i915_params.c   |   5 -
 drivers/gpu/drm/i915/intel_display.c | 719 +----------------------------------
 drivers/gpu/drm/i915/intel_lrc.c     |   3 +-
 4 files changed, 3 insertions(+), 730 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c7a520a..f13f109 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -556,11 +556,6 @@ struct drm_i915_display_funcs {
 	void (*audio_codec_disable)(struct intel_encoder *encoder);
 	void (*fdi_link_train)(struct drm_crtc *crtc);
 	void (*init_clock_gating)(struct drm_device *dev);
-	int (*queue_flip)(struct drm_device *dev, struct drm_crtc *crtc,
-			  struct drm_framebuffer *fb,
-			  struct drm_i915_gem_object *obj,
-			  struct intel_engine_cs *ring,
-			  uint32_t flags);
 	void (*update_primary_plane)(struct drm_crtc *crtc,
 				     struct drm_framebuffer *fb,
 				     int x, int y);
@@ -2526,7 +2521,6 @@ struct i915_params {
 	bool reset;
 	bool disable_display;
 	bool disable_vtd_wa;
-	int use_mmio_flip;
 	bool mmio_debug;
 	bool verbose_state_checks;
 	bool nuclear_pageflip;
diff --git a/drivers/gpu/drm/i915/i915_params.c b/drivers/gpu/drm/i915/i915_params.c
index 44f2262..794811f 100644
--- a/drivers/gpu/drm/i915/i915_params.c
+++ b/drivers/gpu/drm/i915/i915_params.c
@@ -49,7 +49,6 @@ struct i915_params i915 __read_mostly = {
 	.disable_display = 0,
 	.enable_cmd_parser = 1,
 	.disable_vtd_wa = 0,
-	.use_mmio_flip = 0,
 	.mmio_debug = 0,
 	.verbose_state_checks = 1,
 	.nuclear_pageflip = 0,
@@ -167,10 +166,6 @@ module_param_named(enable_cmd_parser, i915.enable_cmd_parser, int, 0600);
 MODULE_PARM_DESC(enable_cmd_parser,
 		 "Enable command parsing (1=enabled [default], 0=disabled)");
 
-module_param_named(use_mmio_flip, i915.use_mmio_flip, int, 0600);
-MODULE_PARM_DESC(use_mmio_flip,
-		 "use MMIO flips (-1=never, 0=driver discretion [default], 1=always)");
-
 module_param_named(mmio_debug, i915.mmio_debug, bool, 0600);
 MODULE_PARM_DESC(mmio_debug,
 	"Enable the MMIO debug code (default: false). This may negatively "
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ebf973c..706bb3f 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2934,17 +2934,12 @@ static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
 	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	bool pending;
 
 	if (i915_reset_in_progress(&dev_priv->gpu_error) ||
 	    intel_crtc->reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
 		return false;
 
-	spin_lock_irq(&dev->event_lock);
-	pending = to_intel_crtc(crtc)->unpin_work != NULL;
-	spin_unlock_irq(&dev->event_lock);
-
-	return pending;
+	return 0;
 }
 
 static void intel_update_pipe_size(struct intel_crtc *crtc)
@@ -9047,32 +9042,6 @@ static void intel_crtc_destroy(struct drm_crtc *crtc)
 	kfree(intel_crtc);
 }
 
-static void intel_unpin_work_fn(struct work_struct *__work)
-{
-	struct intel_unpin_work *work =
-		container_of(__work, struct intel_unpin_work, work);
-	struct drm_device *dev = work->crtc->dev;
-	enum pipe pipe = to_intel_crtc(work->crtc)->pipe;
-
-	mutex_lock(&dev->struct_mutex);
-	intel_unpin_fb_obj(work->old_fb_obj);
-	drm_gem_object_unreference(&work->pending_flip_obj->base);
-	drm_gem_object_unreference(&work->old_fb_obj->base);
-
-	intel_fbc_update(dev);
-
-	if (work->flip_queued_req)
-		i915_gem_request_assign(&work->flip_queued_req, NULL);
-	mutex_unlock(&dev->struct_mutex);
-
-	intel_frontbuffer_flip_complete(dev, INTEL_FRONTBUFFER_PRIMARY(pipe));
-
-	BUG_ON(atomic_read(&to_intel_crtc(work->crtc)->unpin_work_count) == 0);
-	atomic_dec(&to_intel_crtc(work->crtc)->unpin_work_count);
-
-	kfree(work);
-}
-
 static void do_intel_finish_page_flip(struct drm_device *dev,
 				      struct drm_crtc *crtc)
 {
@@ -9188,473 +9157,6 @@ void intel_prepare_page_flip(struct drm_device *dev, int plane)
 	spin_unlock_irqrestore(&dev->event_lock, flags);
 }
 
-static inline void intel_mark_page_flip_active(struct intel_crtc *intel_crtc)
-{
-	/* Ensure that the work item is consistent when activating it ... */
-	smp_wmb();
-	atomic_set(&intel_crtc->unpin_work->pending, INTEL_FLIP_PENDING);
-	/* and that it is marked active as soon as the irq could fire. */
-	smp_wmb();
-}
-
-static int intel_gen2_queue_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	u32 flip_mask;
-	int ret;
-
-	ret = intel_ring_begin(ring, 6);
-	if (ret)
-		return ret;
-
-	/* Can't queue multiple flips, so wait for the previous
-	 * one to finish before executing the next.
-	 */
-	if (intel_crtc->plane)
-		flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
-	else
-		flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
-	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
-	intel_ring_emit(ring, MI_NOOP);
-	intel_ring_emit(ring, MI_DISPLAY_FLIP |
-			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(ring, fb->pitches[0]);
-	intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset);
-	intel_ring_emit(ring, 0); /* aux display base address, unused */
-
-	intel_mark_page_flip_active(intel_crtc);
-	__intel_ring_advance(ring);
-	return 0;
-}
-
-static int intel_gen3_queue_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	u32 flip_mask;
-	int ret;
-
-	ret = intel_ring_begin(ring, 6);
-	if (ret)
-		return ret;
-
-	if (intel_crtc->plane)
-		flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
-	else
-		flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
-	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
-	intel_ring_emit(ring, MI_NOOP);
-	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 |
-			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(ring, fb->pitches[0]);
-	intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset);
-	intel_ring_emit(ring, MI_NOOP);
-
-	intel_mark_page_flip_active(intel_crtc);
-	__intel_ring_advance(ring);
-	return 0;
-}
-
-static int intel_gen4_queue_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	uint32_t pf, pipesrc;
-	int ret;
-
-	ret = intel_ring_begin(ring, 4);
-	if (ret)
-		return ret;
-
-	/* i965+ uses the linear or tiled offsets from the
-	 * Display Registers (which do not change across a page-flip)
-	 * so we need only reprogram the base address.
-	 */
-	intel_ring_emit(ring, MI_DISPLAY_FLIP |
-			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(ring, fb->pitches[0]);
-	intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset |
-			obj->tiling_mode);
-
-	/* XXX Enabling the panel-fitter across page-flip is so far
-	 * untested on non-native modes, so ignore it for now.
-	 * pf = I915_READ(pipe == 0 ? PFA_CTL_1 : PFB_CTL_1) & PF_ENABLE;
-	 */
-	pf = 0;
-	pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
-	intel_ring_emit(ring, pf | pipesrc);
-
-	intel_mark_page_flip_active(intel_crtc);
-	__intel_ring_advance(ring);
-	return 0;
-}
-
-static int intel_gen6_queue_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	uint32_t pf, pipesrc;
-	int ret;
-
-	ret = intel_ring_begin(ring, 4);
-	if (ret)
-		return ret;
-
-	intel_ring_emit(ring, MI_DISPLAY_FLIP |
-			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(ring, fb->pitches[0] | obj->tiling_mode);
-	intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset);
-
-	/* Contrary to the suggestions in the documentation,
-	 * "Enable Panel Fitter" does not seem to be required when page
-	 * flipping with a non-native mode, and worse causes a normal
-	 * modeset to fail.
-	 * pf = I915_READ(PF_CTL(intel_crtc->pipe)) & PF_ENABLE;
-	 */
-	pf = 0;
-	pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
-	intel_ring_emit(ring, pf | pipesrc);
-
-	intel_mark_page_flip_active(intel_crtc);
-	__intel_ring_advance(ring);
-	return 0;
-}
-
-static int intel_gen7_queue_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	uint32_t plane_bit = 0;
-	int len, ret;
-
-	switch (intel_crtc->plane) {
-	case PLANE_A:
-		plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_A;
-		break;
-	case PLANE_B:
-		plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_B;
-		break;
-	case PLANE_C:
-		plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_C;
-		break;
-	default:
-		WARN_ONCE(1, "unknown plane in flip command\n");
-		return -ENODEV;
-	}
-
-	len = 4;
-	if (ring->id == RCS) {
-		len += 6;
-		/*
-		 * On Gen 8, SRM is now taking an extra dword to accommodate
-		 * 48bits addresses, and we need a NOOP for the batch size to
-		 * stay even.
-		 */
-		if (IS_GEN8(dev))
-			len += 2;
-	}
-
-	/*
-	 * BSpec MI_DISPLAY_FLIP for IVB:
-	 * "The full packet must be contained within the same cache line."
-	 *
-	 * Currently the LRI+SRM+MI_DISPLAY_FLIP all fit within the same
-	 * cacheline, if we ever start emitting more commands before
-	 * the MI_DISPLAY_FLIP we may need to first emit everything else,
-	 * then do the cacheline alignment, and finally emit the
-	 * MI_DISPLAY_FLIP.
-	 */
-	ret = intel_ring_cacheline_align(ring);
-	if (ret)
-		return ret;
-
-	ret = intel_ring_begin(ring, len);
-	if (ret)
-		return ret;
-
-	/* Unmask the flip-done completion message. Note that the bspec says that
-	 * we should do this for both the BCS and RCS, and that we must not unmask
-	 * more than one flip event at any time (or ensure that one flip message
-	 * can be sent by waiting for flip-done prior to queueing new flips).
-	 * Experimentation says that BCS works despite DERRMR masking all
-	 * flip-done completion events and that unmasking all planes at once
-	 * for the RCS also doesn't appear to drop events. Setting the DERRMR
-	 * to zero does lead to lockups within MI_DISPLAY_FLIP.
-	 */
-	if (ring->id == RCS) {
-		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit(ring, DERRMR);
-		intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
-					DERRMR_PIPEB_PRI_FLIP_DONE |
-					DERRMR_PIPEC_PRI_FLIP_DONE));
-		if (IS_GEN8(dev))
-			intel_ring_emit(ring, MI_STORE_REGISTER_MEM_GEN8(1) |
-					      MI_SRM_LRM_GLOBAL_GTT);
-		else
-			intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) |
-					      MI_SRM_LRM_GLOBAL_GTT);
-		intel_ring_emit(ring, DERRMR);
-		intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
-		if (IS_GEN8(dev)) {
-			intel_ring_emit(ring, 0);
-			intel_ring_emit(ring, MI_NOOP);
-		}
-	}
-
-	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | plane_bit);
-	intel_ring_emit(ring, (fb->pitches[0] | obj->tiling_mode));
-	intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset);
-	intel_ring_emit(ring, (MI_NOOP));
-
-	intel_mark_page_flip_active(intel_crtc);
-	__intel_ring_advance(ring);
-	return 0;
-}
-
-static bool use_mmio_flip(struct intel_engine_cs *ring,
-			  struct drm_i915_gem_object *obj)
-{
-	/*
-	 * This is not being used for older platforms, because
-	 * non-availability of flip done interrupt forces us to use
-	 * CS flips. Older platforms derive flip done using some clever
-	 * tricks involving the flip_pending status bits and vblank irqs.
-	 * So using MMIO flips there would disrupt this mechanism.
-	 */
-
-	if (ring == NULL)
-		return true;
-
-	if (INTEL_INFO(ring->dev)->gen < 5)
-		return false;
-
-	if (i915.use_mmio_flip < 0)
-		return false;
-	else if (i915.use_mmio_flip > 0)
-		return true;
-	else if (i915.enable_execlists)
-		return true;
-	else
-		return ring != i915_gem_request_get_ring(obj->last_read_req);
-}
-
-static void skl_do_mmio_flip(struct intel_crtc *intel_crtc)
-{
-	struct drm_device *dev = intel_crtc->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_framebuffer *fb = intel_crtc->base.primary->fb;
-	struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
-	struct drm_i915_gem_object *obj = intel_fb->obj;
-	const enum pipe pipe = intel_crtc->pipe;
-	u32 ctl, stride;
-
-	ctl = I915_READ(PLANE_CTL(pipe, 0));
-	ctl &= ~PLANE_CTL_TILED_MASK;
-	if (obj->tiling_mode == I915_TILING_X)
-		ctl |= PLANE_CTL_TILED_X;
-
-	/*
-	 * The stride is either expressed as a multiple of 64 bytes chunks for
-	 * linear buffers or in number of tiles for tiled buffers.
-	 */
-	stride = fb->pitches[0] >> 6;
-	if (obj->tiling_mode == I915_TILING_X)
-		stride = fb->pitches[0] >> 9; /* X tiles are 512 bytes wide */
-
-	/*
-	 * Both PLANE_CTL and PLANE_STRIDE are not updated on vblank but on
-	 * PLANE_SURF updates, the update is then guaranteed to be atomic.
-	 */
-	I915_WRITE(PLANE_CTL(pipe, 0), ctl);
-	I915_WRITE(PLANE_STRIDE(pipe, 0), stride);
-
-	I915_WRITE(PLANE_SURF(pipe, 0), intel_crtc->unpin_work->gtt_offset);
-	POSTING_READ(PLANE_SURF(pipe, 0));
-}
-
-static void ilk_do_mmio_flip(struct intel_crtc *intel_crtc)
-{
-	struct drm_device *dev = intel_crtc->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_framebuffer *intel_fb =
-		to_intel_framebuffer(intel_crtc->base.primary->fb);
-	struct drm_i915_gem_object *obj = intel_fb->obj;
-	u32 dspcntr;
-	u32 reg;
-
-	reg = DSPCNTR(intel_crtc->plane);
-	dspcntr = I915_READ(reg);
-
-	if (obj->tiling_mode != I915_TILING_NONE)
-		dspcntr |= DISPPLANE_TILED;
-	else
-		dspcntr &= ~DISPPLANE_TILED;
-
-	I915_WRITE(reg, dspcntr);
-
-	I915_WRITE(DSPSURF(intel_crtc->plane),
-		   intel_crtc->unpin_work->gtt_offset);
-	POSTING_READ(DSPSURF(intel_crtc->plane));
-
-}
-
-/*
- * XXX: This is the temporary way to update the plane registers until we get
- * around to using the usual plane update functions for MMIO flips
- */
-static void intel_do_mmio_flip(struct intel_crtc *intel_crtc)
-{
-	struct drm_device *dev = intel_crtc->base.dev;
-	bool atomic_update;
-	u32 start_vbl_count;
-
-	intel_mark_page_flip_active(intel_crtc);
-
-	atomic_update = intel_pipe_update_start(intel_crtc, &start_vbl_count);
-
-	if (INTEL_INFO(dev)->gen >= 9)
-		skl_do_mmio_flip(intel_crtc);
-	else
-		/* use_mmio_flip() restricts MMIO flips to ilk+ */
-		ilk_do_mmio_flip(intel_crtc);
-
-	if (atomic_update)
-		intel_pipe_update_end(intel_crtc, start_vbl_count);
-}
-
-static void intel_mmio_flip_work_func(struct work_struct *work)
-{
-	struct intel_crtc *crtc =
-		container_of(work, struct intel_crtc, mmio_flip.work);
-	struct intel_mmio_flip *mmio_flip;
-
-	mmio_flip = &crtc->mmio_flip;
-	if (mmio_flip->req)
-		WARN_ON(__i915_wait_request(mmio_flip->req,
-					    crtc->reset_counter,
-					    false, NULL, NULL) != 0);
-
-	intel_do_mmio_flip(crtc);
-	if (mmio_flip->req) {
-		mutex_lock(&crtc->base.dev->struct_mutex);
-		i915_gem_request_assign(&mmio_flip->req, NULL);
-		mutex_unlock(&crtc->base.dev->struct_mutex);
-	}
-}
-
-static int intel_queue_mmio_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-
-	i915_gem_request_assign(&intel_crtc->mmio_flip.req,
-				obj->last_write_req);
-
-	schedule_work(&intel_crtc->mmio_flip.work);
-
-	return 0;
-}
-
-static int intel_gen9_queue_flip(struct drm_device *dev,
-				 struct drm_crtc *crtc,
-				 struct drm_framebuffer *fb,
-				 struct drm_i915_gem_object *obj,
-				 struct intel_engine_cs *ring,
-				 uint32_t flags)
-{
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	uint32_t plane = 0, stride;
-	int ret;
-
-	switch(intel_crtc->pipe) {
-	case PIPE_A:
-		plane = MI_DISPLAY_FLIP_SKL_PLANE_1_A;
-		break;
-	case PIPE_B:
-		plane = MI_DISPLAY_FLIP_SKL_PLANE_1_B;
-		break;
-	case PIPE_C:
-		plane = MI_DISPLAY_FLIP_SKL_PLANE_1_C;
-		break;
-	default:
-		WARN_ONCE(1, "unknown plane in flip command\n");
-		return -ENODEV;
-	}
-
-	switch (obj->tiling_mode) {
-	case I915_TILING_NONE:
-		stride = fb->pitches[0] >> 6;
-		break;
-	case I915_TILING_X:
-		stride = fb->pitches[0] >> 9;
-		break;
-	default:
-		WARN_ONCE(1, "unknown tiling in flip command\n");
-		return -ENODEV;
-	}
-
-	ret = intel_ring_begin(ring, 10);
-	if (ret)
-		return ret;
-
-	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
-	intel_ring_emit(ring, DERRMR);
-	intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
-				DERRMR_PIPEB_PRI_FLIP_DONE |
-				DERRMR_PIPEC_PRI_FLIP_DONE));
-	intel_ring_emit(ring, MI_STORE_REGISTER_MEM_GEN8(1) |
-			      MI_SRM_LRM_GLOBAL_GTT);
-	intel_ring_emit(ring, DERRMR);
-	intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
-	intel_ring_emit(ring, 0);
-
-	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | plane);
-	intel_ring_emit(ring, stride << 6 | obj->tiling_mode);
-	intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset);
-
-	intel_mark_page_flip_active(intel_crtc);
-	__intel_ring_advance(ring);
-
-	return 0;
-}
-
-static int intel_default_queue_flip(struct drm_device *dev,
-				    struct drm_crtc *crtc,
-				    struct drm_framebuffer *fb,
-				    struct drm_i915_gem_object *obj,
-				    struct intel_engine_cs *ring,
-				    uint32_t flags)
-{
-	return -ENODEV;
-}
-
 static bool __intel_pageflip_stall_check(struct drm_device *dev,
 					 struct drm_crtc *crtc)
 {
@@ -9714,192 +9216,6 @@ void intel_check_page_flip(struct drm_device *dev, int pipe)
 	spin_unlock(&dev->event_lock);
 }
 
-static int intel_crtc_page_flip(struct drm_crtc *crtc,
-				struct drm_framebuffer *fb,
-				struct drm_pending_vblank_event *event,
-				uint32_t page_flip_flags)
-{
-	struct drm_device *dev = crtc->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_framebuffer *old_fb = crtc->primary->fb;
-	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	struct drm_plane *primary = crtc->primary;
-	enum pipe pipe = intel_crtc->pipe;
-	struct intel_unpin_work *work;
-	struct intel_engine_cs *ring;
-	int ret;
-
-	/*
-	 * drm_mode_page_flip_ioctl() should already catch this, but double
-	 * check to be safe.  In the future we may enable pageflipping from
-	 * a disabled primary plane.
-	 */
-	if (WARN_ON(intel_fb_obj(old_fb) == NULL))
-		return -EBUSY;
-
-	/* Can't change pixel format via MI display flips. */
-	if (fb->pixel_format != crtc->primary->fb->pixel_format)
-		return -EINVAL;
-
-	/*
-	 * TILEOFF/LINOFF registers can't be changed via MI display flips.
-	 * Note that pitch changes could also affect these registers.
-	 */
-	if (INTEL_INFO(dev)->gen > 3 &&
-	    (fb->offsets[0] != crtc->primary->fb->offsets[0] ||
-	     fb->pitches[0] != crtc->primary->fb->pitches[0]))
-		return -EINVAL;
-
-	if (i915_terminally_wedged(&dev_priv->gpu_error))
-		goto out_hang;
-
-	work = kzalloc(sizeof(*work), GFP_KERNEL);
-	if (work == NULL)
-		return -ENOMEM;
-
-	work->event = event;
-	work->crtc = crtc;
-	work->old_fb_obj = intel_fb_obj(old_fb);
-	INIT_WORK(&work->work, intel_unpin_work_fn);
-
-	ret = drm_crtc_vblank_get(crtc);
-	if (ret)
-		goto free_work;
-
-	/* We borrow the event spin lock for protecting unpin_work */
-	spin_lock_irq(&dev->event_lock);
-	if (intel_crtc->unpin_work) {
-		/* Before declaring the flip queue wedged, check if
-		 * the hardware completed the operation behind our backs.
-		 */
-		if (__intel_pageflip_stall_check(dev, crtc)) {
-			DRM_DEBUG_DRIVER("flip queue: previous flip completed, continuing\n");
-			page_flip_completed(intel_crtc);
-		} else {
-			DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
-			spin_unlock_irq(&dev->event_lock);
-
-			drm_crtc_vblank_put(crtc);
-			kfree(work);
-			return -EBUSY;
-		}
-	}
-	intel_crtc->unpin_work = work;
-	spin_unlock_irq(&dev->event_lock);
-
-	if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
-		flush_workqueue(dev_priv->wq);
-
-	ret = i915_mutex_lock_interruptible(dev);
-	if (ret)
-		goto cleanup;
-
-	/* Reference the objects for the scheduled work. */
-	drm_gem_object_reference(&work->old_fb_obj->base);
-	drm_gem_object_reference(&obj->base);
-
-	crtc->primary->fb = fb;
-
-	/* Keep state structure in sync */
-	if (crtc->primary->state->fb)
-		drm_framebuffer_unreference(crtc->primary->state->fb);
-	crtc->primary->state->fb = fb;
-	if (crtc->primary->state->fb)
-		drm_framebuffer_reference(crtc->primary->state->fb);
-
-	work->pending_flip_obj = obj;
-
-	atomic_inc(&intel_crtc->unpin_work_count);
-	intel_crtc->reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
-
-	if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
-		work->flip_count = I915_READ(PIPE_FLIPCOUNT_GM45(pipe)) + 1;
-
-	if (IS_VALLEYVIEW(dev)) {
-		ring = &dev_priv->ring[BCS];
-		if (obj->tiling_mode != work->old_fb_obj->tiling_mode)
-			/* vlv: DISPLAY_FLIP fails to change tiling */
-			ring = NULL;
-	} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
-		ring = &dev_priv->ring[BCS];
-	} else if (INTEL_INFO(dev)->gen >= 7) {
-		ring = i915_gem_request_get_ring(obj->last_read_req);
-		if (ring == NULL || ring->id != RCS)
-			ring = &dev_priv->ring[BCS];
-	} else {
-		ring = &dev_priv->ring[RCS];
-	}
-
-	ret = intel_pin_and_fence_fb_obj(crtc->primary, fb, ring);
-	if (ret)
-		goto cleanup_pending;
-
-	work->gtt_offset =
-		i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset;
-
-	if (use_mmio_flip(ring, obj)) {
-		ret = intel_queue_mmio_flip(dev, crtc, fb, obj, ring,
-					    page_flip_flags);
-		if (ret)
-			goto cleanup_unpin;
-
-		i915_gem_request_assign(&work->flip_queued_req,
-					obj->last_write_req);
-	} else {
-		ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, ring,
-						   page_flip_flags);
-		if (ret)
-			goto cleanup_unpin;
-
-		i915_gem_request_assign(&work->flip_queued_req,
-					intel_ring_get_request(ring));
-	}
-
-	work->flip_queued_vblank = drm_vblank_count(dev, intel_crtc->pipe);
-	work->enable_stall_check = true;
-
-	i915_gem_track_fb(work->old_fb_obj, obj,
-			  INTEL_FRONTBUFFER_PRIMARY(pipe));
-
-	intel_fbc_disable(dev);
-	intel_frontbuffer_flip_prepare(dev, INTEL_FRONTBUFFER_PRIMARY(pipe));
-	mutex_unlock(&dev->struct_mutex);
-
-	trace_i915_flip_request(intel_crtc->plane, obj);
-
-	return 0;
-
-cleanup_unpin:
-	intel_unpin_fb_obj(obj);
-cleanup_pending:
-	atomic_dec(&intel_crtc->unpin_work_count);
-	crtc->primary->fb = old_fb;
-	drm_gem_object_unreference(&work->old_fb_obj->base);
-	drm_gem_object_unreference(&obj->base);
-	mutex_unlock(&dev->struct_mutex);
-
-cleanup:
-	spin_lock_irq(&dev->event_lock);
-	intel_crtc->unpin_work = NULL;
-	spin_unlock_irq(&dev->event_lock);
-
-	drm_crtc_vblank_put(crtc);
-free_work:
-	kfree(work);
-
-	if (ret == -EIO) {
-out_hang:
-		ret = intel_plane_restore(primary);
-		if (ret == 0 && event) {
-			spin_lock_irq(&dev->event_lock);
-			drm_send_vblank_event(dev, pipe, event);
-			spin_unlock_irq(&dev->event_lock);
-		}
-	}
-	return ret;
-}
-
 static struct drm_crtc_helper_funcs intel_helper_funcs = {
 	.mode_set_base_atomic = intel_pipe_set_base_atomic,
 	.load_lut = intel_crtc_load_lut,
@@ -11675,7 +10991,7 @@ static const struct drm_crtc_funcs intel_crtc_funcs = {
 	.gamma_set = intel_crtc_gamma_set,
 	.set_config = intel_crtc_set_config,
 	.destroy = intel_crtc_destroy,
-	.page_flip = intel_crtc_page_flip,
+	.page_flip = drm_atomic_helper_page_flip,
 	.atomic_duplicate_state = intel_crtc_duplicate_state,
 	.atomic_destroy_state = intel_crtc_destroy_state,
 };
@@ -12357,8 +11673,6 @@ static void intel_crtc_init(struct drm_device *dev, int pipe)
 	dev_priv->plane_to_crtc_mapping[intel_crtc->plane] = &intel_crtc->base;
 	dev_priv->pipe_to_crtc_mapping[intel_crtc->pipe] = &intel_crtc->base;
 
-	INIT_WORK(&intel_crtc->mmio_flip.work, intel_mmio_flip_work_func);
-
 	drm_crtc_helper_add(&intel_crtc->base, &intel_helper_funcs);
 
 	WARN_ON(drm_crtc_index(&intel_crtc->base) != intel_crtc->pipe);
@@ -12930,35 +12244,6 @@ static void intel_init_display(struct drm_device *dev)
 			valleyview_modeset_global_resources;
 	}
 
-	/* Default just returns -ENODEV to indicate unsupported */
-	dev_priv->display.queue_flip = intel_default_queue_flip;
-
-	switch (INTEL_INFO(dev)->gen) {
-	case 2:
-		dev_priv->display.queue_flip = intel_gen2_queue_flip;
-		break;
-
-	case 3:
-		dev_priv->display.queue_flip = intel_gen3_queue_flip;
-		break;
-
-	case 4:
-	case 5:
-		dev_priv->display.queue_flip = intel_gen4_queue_flip;
-		break;
-
-	case 6:
-		dev_priv->display.queue_flip = intel_gen6_queue_flip;
-		break;
-	case 7:
-	case 8: /* FIXME(BDW): Check that the gen8 RCS flip works. */
-		dev_priv->display.queue_flip = intel_gen7_queue_flip;
-		break;
-	case 9:
-		dev_priv->display.queue_flip = intel_gen9_queue_flip;
-		break;
-	}
-
 	intel_panel_init_backlight_funcs(dev);
 
 	mutex_init(&dev_priv->pps_mutex);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index a94346f..ceec373 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -226,8 +226,7 @@ int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists
 	if (enable_execlists == 0)
 		return 0;
 
-	if (HAS_LOGICAL_RING_CONTEXTS(dev) && USES_PPGTT(dev) &&
-	    i915.use_mmio_flip >= 0)
+	if (HAS_LOGICAL_RING_CONTEXTS(dev) && USES_PPGTT(dev))
 		return 1;
 
 	return 0;
-- 
1.8.5.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [PATCH 3/4] drm/i915: Enable asynchronous nuclear flips
  2015-01-31  0:22 ` [PATCH 3/4] drm/i915: Enable asynchronous nuclear flips Matt Roper
@ 2015-01-31  9:30   ` Daniel Vetter
  2015-01-31 20:07     ` Matt Roper
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Vetter @ 2015-01-31  9:30 UTC (permalink / raw)
  To: Matt Roper; +Cc: intel-gfx

On Fri, Jan 30, 2015 at 04:22:38PM -0800, Matt Roper wrote:
> The initial i915 nuclear pageflip support rejected asynchronous updates.
> Allow all work after we swap in the new state structures to run
> asynchronously.  We also need to start sending completion events to
> userspace if they were requested.
> 
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_drv.h     |   3 +
>  drivers/gpu/drm/i915/intel_atomic.c | 162 +++++++++++++++++++++++++++++++-----
>  drivers/gpu/drm/i915/intel_drv.h    |   8 ++
>  3 files changed, 150 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 8fad702..c7a520a 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1777,6 +1777,9 @@ struct drm_i915_private {
>  	struct drm_crtc *pipe_to_crtc_mapping[I915_MAX_PIPES];
>  	wait_queue_head_t pending_flip_queue;
>  
> +	/* CRTC mask of pending atomic flips */
> +	uint32_t pending_atomic;
> +
>  #ifdef CONFIG_DEBUG_FS
>  	struct intel_pipe_crc pipe_crc[I915_MAX_PIPES];
>  #endif
> diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
> index 19a9dd5..5dd7897 100644
> --- a/drivers/gpu/drm/i915/intel_atomic.c
> +++ b/drivers/gpu/drm/i915/intel_atomic.c
> @@ -76,6 +76,8 @@ int intel_atomic_check(struct drm_device *dev,
>  	state->allow_modeset = false;
>  	for (i = 0; i < ncrtcs; i++) {
>  		struct intel_crtc *crtc = to_intel_crtc(state->crtcs[i]);
> +		if (crtc)
> +			state->crtc_states[i]->enable = crtc->active;
>  		if (crtc && crtc->pipe != nuclear_pipe)
>  			not_nuclear = true;
>  	}
> @@ -96,6 +98,87 @@ int intel_atomic_check(struct drm_device *dev,
>  }
>  
>  
> +/*
> + * Wait until CRTC's have no pending flip, then atomically mark those CRTC's
> + * as busy.
> + */
> +static int wait_for_pending_flip(uint32_t crtc_mask,
> +				 struct intel_pending_atomic *commit)
> +{
> +	struct drm_i915_private *dev_priv = commit->dev->dev_private;
> +	int ret;
> +
> +	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
> +	ret = wait_event_interruptible_locked(dev_priv->pending_flip_queue,
> +					      !(dev_priv->pending_atomic & crtc_mask));
> +	if (ret == 0)
> +		dev_priv->pending_atomic |= crtc_mask;
> +	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
> +
> +	return ret;
> +}
> +
> +/* Finish pending flip operation on specified CRTC's */
> +static void flip_completion(struct intel_pending_atomic *commit)
> +{
> +	struct drm_i915_private *dev_priv = commit->dev->dev_private;
> +
> +	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
> +	dev_priv->pending_atomic &= ~commit->crtc_mask;
> +	wake_up_all_locked(&dev_priv->pending_flip_queue);
> +	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
> +}
> +
> +/*
> + * Finish an atomic commit.  The work here can be performed asynchronously
> + * if desired.  The new state has already been applied to the DRM objects
> + * and no modeset locks are needed.
> + */
> +static void finish_atomic_commit(struct work_struct *work)
> +{
> +	struct intel_pending_atomic *commit =
> +		container_of(work, struct intel_pending_atomic, work);
> +	struct drm_device *dev = commit->dev;
> +	struct drm_crtc *crtc;
> +	struct drm_atomic_state *state = commit->state;
> +	int i;
> +
> +	/*
> +	 * FIXME:  The proper sequence here will eventually be:
> +	 *
> +	 * drm_atomic_helper_commit_pre_planes(dev, state);
> +	 * drm_atomic_helper_commit_planes(dev, state);
> +	 * drm_atomic_helper_commit_post_planes(dev, state);
> +	 * drm_atomic_helper_wait_for_vblanks(dev, state);
> +	 * drm_atomic_helper_cleanup_planes(dev, state);
> +	 * drm_atomic_state_free(state);
> +	 *
> +	 * once we have full atomic modeset.  For now, just manually update
> +	 * plane states to avoid clobbering good states with dummy states
> +	 * while nuclear pageflipping.
> +	 */
> +	drm_atomic_helper_commit_planes(dev, state);
> +	drm_atomic_helper_wait_for_vblanks(dev, state);
> +
> +	/* Send CRTC completion events. */
> +	for (i = 0; i < dev->mode_config.num_crtc; i++) {
> +		crtc = state->crtcs[i];
> +		if (crtc && crtc->state->event) {
> +			spin_lock_irq(&dev->event_lock);
> +			drm_send_vblank_event(dev, to_intel_crtc(crtc)->pipe,
> +					      crtc->state->event);
> +			spin_unlock_irq(&dev->event_lock);
> +			crtc->state->event = NULL;
> +		}
> +	}
> +
> +	drm_atomic_helper_cleanup_planes(dev, state);
> +	drm_atomic_state_free(state);
> +
> +	flip_completion(commit);
> +	kfree(commit);
> +}
> +
>  /**
>   * intel_atomic_commit - commit validated state object
>   * @dev: DRM device
> @@ -116,34 +199,48 @@ int intel_atomic_commit(struct drm_device *dev,
>  			struct drm_atomic_state *state,
>  			bool async)
>  {
> +	struct intel_pending_atomic *commit;
>  	int ret;
>  	int i;
>  
> -	if (async) {
> -		DRM_DEBUG_KMS("i915 does not yet support async commit\n");
> -		return -EINVAL;
> -	}
> -
>  	ret = drm_atomic_helper_prepare_planes(dev, state);
>  	if (ret)
>  		return ret;
>  
> -	/* Point of no return */
> +	commit = kzalloc(sizeof(*commit), GFP_KERNEL);
> +	if (!commit)
> +		return -ENOMEM;
> +
> +	commit->dev = dev;
> +	commit->state = state;
> +
> +	for (i = 0; i < dev->mode_config.num_crtc; i++)
> +		if (state->crtcs[i])
> +			commit->crtc_mask |=
> +				(1 << drm_crtc_index(state->crtcs[i]));
>  
>  	/*
> -	 * FIXME:  The proper sequence here will eventually be:
> -	 *
> -	 * drm_atomic_helper_swap_state(dev, state)
> -	 * drm_atomic_helper_commit_pre_planes(dev, state);
> -	 * drm_atomic_helper_commit_planes(dev, state);
> -	 * drm_atomic_helper_commit_post_planes(dev, state);
> -	 * drm_atomic_helper_wait_for_vblanks(dev, state);
> -	 * drm_atomic_helper_cleanup_planes(dev, state);
> -	 * drm_atomic_state_free(state);
> -	 *
> -	 * once we have full atomic modeset.  For now, just manually update
> -	 * plane states to avoid clobbering good states with dummy states
> -	 * while nuclear pageflipping.
> +	 * If there's already a flip pending, we won't schedule another one
> +	 * until it completes.  We may relax this in the future, but for now
> +	 * this matches the legacy pageflip behavior.
> +	 */
> +	ret = wait_for_pending_flip(commit->crtc_mask, commit);
> +	if (ret) {
> +		kfree(commit);
> +		return ret;
> +	}
> +
> +	/*
> +	 * Point of no return; everything from here on can't fail, so swap
> +	 * the new state into the DRM objects.
> +	 */
> +
> +	/*
> +	 * FIXME:  Should eventually use drm_atomic_helper_swap_state().
> +	 * However since we only handle planes at the moment, we don't
> +	 * want to clobber our good crtc state with something bogus,
> +	 * so just manually swap the plane states and copy over any completion
> +	 * events for CRTC's.
>  	 */
>  	for (i = 0; i < dev->mode_config.num_total_plane; i++) {
>  		struct drm_plane *plane = state->planes[i];
> @@ -155,10 +252,29 @@ int intel_atomic_commit(struct drm_device *dev,
>  		swap(state->plane_states[i], plane->state);
>  		plane->state->state = NULL;
>  	}
> -	drm_atomic_helper_commit_planes(dev, state);
> -	drm_atomic_helper_wait_for_vblanks(dev, state);
> -	drm_atomic_helper_cleanup_planes(dev, state);
> -	drm_atomic_state_free(state);
> +
> +	for (i = 0; i < dev->mode_config.num_crtc; i++) {
> +		struct drm_crtc *crtc = state->crtcs[i];
> +		struct drm_crtc_state *crtc_state = state->crtc_states[i];
> +
> +		if (!crtc || !crtc_state)
> +			continue;
> +
> +		/* n.b., we only copy the event and enabled flag for now */
> +		swap(crtc->state->event, crtc_state->event);
> +		swap(crtc->state->enable, crtc_state->enable);
> +	}

Before we can swap in the new state we need to stall for any outstanding
async flip to complete. At least until we have a full-blown flip queue.

> +
> +	/*
> +	 * From here on, we can do everything asynchronously without any
> +	 * modeset locks.
> +	 */
> +	if (async) {
> +		INIT_WORK(&commit->work, finish_atomic_commit);
> +		schedule_work(&commit->work);
> +	} else {
> +		finish_atomic_commit(&commit->work);

That also solves the problem of stalling for any outstanding async commit
when we process a synchronous one. With just legacy pageflips we do that
in the crtc shutdown funcs, but long-term I think it'll be much more
robust to do this in the top-level commit code (and rip out all the
pageflip waits deep down in the low-level functions).
-Daniel

> +	}
>  
>  	return 0;
>  }
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index eef79cc..15b02b0 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -391,6 +391,14 @@ struct intel_crtc_state {
>  	int pbn;
>  };
>  
> +/* In-flight atomic operation */
> +struct intel_pending_atomic {
> +	struct work_struct work;
> +	struct drm_device *dev;
> +	struct drm_atomic_state *state;
> +	uint32_t crtc_mask;
> +};
> +
>  struct intel_pipe_wm {
>  	struct intel_wm_level wm[5];
>  	uint32_t linetime;
> -- 
> 1.8.5.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH 0/4] More nuclear pageflip
  2015-01-31  0:22 [PATCH 0/4] More nuclear pageflip Matt Roper
                   ` (3 preceding siblings ...)
  2015-01-31  0:22 ` [PATCH 4/4] drm/i915: Use atomic helper for pageflips Matt Roper
@ 2015-01-31  9:35 ` Daniel Vetter
  2015-01-31 20:22   ` Matt Roper
  4 siblings, 1 reply; 11+ messages in thread
From: Daniel Vetter @ 2015-01-31  9:35 UTC (permalink / raw)
  To: Matt Roper; +Cc: intel-gfx

On Fri, Jan 30, 2015 at 04:22:35PM -0800, Matt Roper wrote:
> The first two patches here were already posted earlier this week; they allow
> our legacy plane updates to make use of the main atomic helpers rather than the
> transitional atomic helpers.  This shouldn't have any functional change, but it
> will cause us to exercise the full atomic pipeline rather than just the plane
> subset of it.
> 
> The third patch is the interesting one; it allows us to handle nuclear pageflip
> requests in a non-blocking manner and deliver a uevent upon completion.  This
> is important functionality for compositors since it allows them to request that
> the kernel perform a flip, then continue on doing other work while they wait
> for the flip to actually happen.
> 
> The final patch here switches our legacy pageflip ioctl to use the atomic
> helper (thus exercising the new asynchronous support added in patch #3).
> Removing the i915-specific pageflip handling should allow us to drop a bunch of
> our display code; I've been somewhat conservative in my code removal for now
> (just enough to get rid of the 'function unused' compiler warnings); we can do
> further cleanup of code that relates to the legacy pageflip pipeline in a
> future patchset.

Our current mmio flip doesn't work on gen4 and earlier because the flip
completion event generation only works with MI_ flips. So I think we need
to (at least for now) keep the legacy flip code around and wired up for
those platforms.

But we need to redo the flip completion anyway (and probably just key off
the next vblank interrupt) since if you'd do an atomic flip with just
sprites the event stuff won't work on any platform. Once we have that we
can then also roll out the legacy-pageflip-on-atomic for older platforms.
-Daniel

> 
> Matt Roper (4):
>   drm/i915: Keep plane->state updated on pageflip
>   drm/i915: Switch planes from transitional helpers to full atomic
>     helpers
>   drm/i915: Enable asynchronous nuclear flips
>   drm/i915: Use atomic helper for pageflips
> 
>  drivers/gpu/drm/i915/i915_drv.h      |   9 +-
>  drivers/gpu/drm/i915/i915_params.c   |   5 -
>  drivers/gpu/drm/i915/intel_atomic.c  | 162 ++++++--
>  drivers/gpu/drm/i915/intel_display.c | 716 +----------------------------------
>  drivers/gpu/drm/i915/intel_drv.h     |   8 +
>  drivers/gpu/drm/i915/intel_lrc.c     |   3 +-
>  6 files changed, 155 insertions(+), 748 deletions(-)
> 
> -- 
> 1.8.5.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH 1/4] drm/i915: Keep plane->state updated on pageflip
  2015-01-31  0:22 ` [PATCH 1/4] drm/i915: Keep plane->state updated on pageflip Matt Roper
@ 2015-01-31  9:36   ` Daniel Vetter
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Vetter @ 2015-01-31  9:36 UTC (permalink / raw)
  To: Matt Roper; +Cc: intel-gfx

On Fri, Jan 30, 2015 at 04:22:36PM -0800, Matt Roper wrote:
> Until all drivers have transitioned to atomic, the framebuffer
> associated with a plane is tracked in both plane->fb (for legacy) and
> plane->state->fb (for all the new atomic codeflow).  All of our modeset
> and plane updates use drm_plane->update_plane(), so in theory plane->fb
> and plane->state->fb should always stay in sync and point at the same
> thing for i915.  However we forgot about the pageflip ioctl case, which
> currently only updates plane->fb and leaves plane->state->fb at a stale
> value.
> 
> Surprisingly, this doesn't cause any real problems at the moment since
> internally we use the plane->fb pointer in most of the places that
> matter, and on the next .update_plane() call, we use plane->fb to figure
> out which framebuffer to cleanup.  However when we switch to the full
> atomic helpers for update_plane()/disable_plane(), those helpers use
> plane->state->fb to figure out which framebuffer to cleanup, so not
> having updated the plane->state->fb pointer causes things to blow up
> following a pageflip ioctl.
> 
> The fix here is to just make sure we update plane->state->fb at the same
> time we update plane->fb in the pageflip ioctl.
> 
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_display.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 3d220a6..08e2bab 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -9801,6 +9801,13 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>  
>  	crtc->primary->fb = fb;
>  
> +	/* Keep state structure in sync */
> +	if (crtc->primary->state->fb)
> +		drm_framebuffer_unreference(crtc->primary->state->fb);
> +	crtc->primary->state->fb = fb;
> +	if (crtc->primary->state->fb)
> +		drm_framebuffer_reference(crtc->primary->state->fb);

Yeah, I had the same fixup in my own test conversion. So merged this and
the 2nd patch to dinq (for 3.21).

Thanks, Daniel
> +
>  	work->pending_flip_obj = obj;
>  
>  	atomic_inc(&intel_crtc->unpin_work_count);
> -- 
> 1.8.5.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH 3/4] drm/i915: Enable asynchronous nuclear flips
  2015-01-31  9:30   ` Daniel Vetter
@ 2015-01-31 20:07     ` Matt Roper
  2015-02-02  9:36       ` Daniel Vetter
  0 siblings, 1 reply; 11+ messages in thread
From: Matt Roper @ 2015-01-31 20:07 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Sat, Jan 31, 2015 at 10:30:29AM +0100, Daniel Vetter wrote:
> On Fri, Jan 30, 2015 at 04:22:38PM -0800, Matt Roper wrote:
> > The initial i915 nuclear pageflip support rejected asynchronous updates.
> > Allow all work after we swap in the new state structures to run
> > asynchronously.  We also need to start sending completion events to
> > userspace if they were requested.
> > 
> > Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_drv.h     |   3 +
> >  drivers/gpu/drm/i915/intel_atomic.c | 162 +++++++++++++++++++++++++++++++-----
> >  drivers/gpu/drm/i915/intel_drv.h    |   8 ++
> >  3 files changed, 150 insertions(+), 23 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 8fad702..c7a520a 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -1777,6 +1777,9 @@ struct drm_i915_private {
> >  	struct drm_crtc *pipe_to_crtc_mapping[I915_MAX_PIPES];
> >  	wait_queue_head_t pending_flip_queue;
> >  
> > +	/* CRTC mask of pending atomic flips */
> > +	uint32_t pending_atomic;
> > +
> >  #ifdef CONFIG_DEBUG_FS
> >  	struct intel_pipe_crc pipe_crc[I915_MAX_PIPES];
> >  #endif
> > diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
> > index 19a9dd5..5dd7897 100644
> > --- a/drivers/gpu/drm/i915/intel_atomic.c
> > +++ b/drivers/gpu/drm/i915/intel_atomic.c
> > @@ -76,6 +76,8 @@ int intel_atomic_check(struct drm_device *dev,
> >  	state->allow_modeset = false;
> >  	for (i = 0; i < ncrtcs; i++) {
> >  		struct intel_crtc *crtc = to_intel_crtc(state->crtcs[i]);
> > +		if (crtc)
> > +			state->crtc_states[i]->enable = crtc->active;
> >  		if (crtc && crtc->pipe != nuclear_pipe)
> >  			not_nuclear = true;
> >  	}
> > @@ -96,6 +98,87 @@ int intel_atomic_check(struct drm_device *dev,
> >  }
> >  
> >  
> > +/*
> > + * Wait until CRTC's have no pending flip, then atomically mark those CRTC's
> > + * as busy.
> > + */
> > +static int wait_for_pending_flip(uint32_t crtc_mask,
> > +				 struct intel_pending_atomic *commit)
> > +{
> > +	struct drm_i915_private *dev_priv = commit->dev->dev_private;
> > +	int ret;
> > +
> > +	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
> > +	ret = wait_event_interruptible_locked(dev_priv->pending_flip_queue,
> > +					      !(dev_priv->pending_atomic & crtc_mask));
> > +	if (ret == 0)
> > +		dev_priv->pending_atomic |= crtc_mask;
> > +	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
> > +
> > +	return ret;
> > +}
> > +
> > +/* Finish pending flip operation on specified CRTC's */
> > +static void flip_completion(struct intel_pending_atomic *commit)
> > +{
> > +	struct drm_i915_private *dev_priv = commit->dev->dev_private;
> > +
> > +	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
> > +	dev_priv->pending_atomic &= ~commit->crtc_mask;
> > +	wake_up_all_locked(&dev_priv->pending_flip_queue);
> > +	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
> > +}
> > +
> > +/*
> > + * Finish an atomic commit.  The work here can be performed asynchronously
> > + * if desired.  The new state has already been applied to the DRM objects
> > + * and no modeset locks are needed.
> > + */
> > +static void finish_atomic_commit(struct work_struct *work)
> > +{
> > +	struct intel_pending_atomic *commit =
> > +		container_of(work, struct intel_pending_atomic, work);
> > +	struct drm_device *dev = commit->dev;
> > +	struct drm_crtc *crtc;
> > +	struct drm_atomic_state *state = commit->state;
> > +	int i;
> > +
> > +	/*
> > +	 * FIXME:  The proper sequence here will eventually be:
> > +	 *
> > +	 * drm_atomic_helper_commit_pre_planes(dev, state);
> > +	 * drm_atomic_helper_commit_planes(dev, state);
> > +	 * drm_atomic_helper_commit_post_planes(dev, state);
> > +	 * drm_atomic_helper_wait_for_vblanks(dev, state);
> > +	 * drm_atomic_helper_cleanup_planes(dev, state);
> > +	 * drm_atomic_state_free(state);
> > +	 *
> > +	 * once we have full atomic modeset.  For now, just manually update
> > +	 * plane states to avoid clobbering good states with dummy states
> > +	 * while nuclear pageflipping.
> > +	 */
> > +	drm_atomic_helper_commit_planes(dev, state);
> > +	drm_atomic_helper_wait_for_vblanks(dev, state);
> > +
> > +	/* Send CRTC completion events. */
> > +	for (i = 0; i < dev->mode_config.num_crtc; i++) {
> > +		crtc = state->crtcs[i];
> > +		if (crtc && crtc->state->event) {
> > +			spin_lock_irq(&dev->event_lock);
> > +			drm_send_vblank_event(dev, to_intel_crtc(crtc)->pipe,
> > +					      crtc->state->event);
> > +			spin_unlock_irq(&dev->event_lock);
> > +			crtc->state->event = NULL;
> > +		}
> > +	}
> > +
> > +	drm_atomic_helper_cleanup_planes(dev, state);
> > +	drm_atomic_state_free(state);
> > +
> > +	flip_completion(commit);
> > +	kfree(commit);
> > +}
> > +
> >  /**
> >   * intel_atomic_commit - commit validated state object
> >   * @dev: DRM device
> > @@ -116,34 +199,48 @@ int intel_atomic_commit(struct drm_device *dev,
> >  			struct drm_atomic_state *state,
> >  			bool async)
> >  {
> > +	struct intel_pending_atomic *commit;
> >  	int ret;
> >  	int i;
> >  
> > -	if (async) {
> > -		DRM_DEBUG_KMS("i915 does not yet support async commit\n");
> > -		return -EINVAL;
> > -	}
> > -
> >  	ret = drm_atomic_helper_prepare_planes(dev, state);
> >  	if (ret)
> >  		return ret;
> >  
> > -	/* Point of no return */
> > +	commit = kzalloc(sizeof(*commit), GFP_KERNEL);
> > +	if (!commit)
> > +		return -ENOMEM;
> > +
> > +	commit->dev = dev;
> > +	commit->state = state;
> > +
> > +	for (i = 0; i < dev->mode_config.num_crtc; i++)
> > +		if (state->crtcs[i])
> > +			commit->crtc_mask |=
> > +				(1 << drm_crtc_index(state->crtcs[i]));
> >  
> >  	/*
> > -	 * FIXME:  The proper sequence here will eventually be:
> > -	 *
> > -	 * drm_atomic_helper_swap_state(dev, state)
> > -	 * drm_atomic_helper_commit_pre_planes(dev, state);
> > -	 * drm_atomic_helper_commit_planes(dev, state);
> > -	 * drm_atomic_helper_commit_post_planes(dev, state);
> > -	 * drm_atomic_helper_wait_for_vblanks(dev, state);
> > -	 * drm_atomic_helper_cleanup_planes(dev, state);
> > -	 * drm_atomic_state_free(state);
> > -	 *
> > -	 * once we have full atomic modeset.  For now, just manually update
> > -	 * plane states to avoid clobbering good states with dummy states
> > -	 * while nuclear pageflipping.
> > +	 * If there's already a flip pending, we won't schedule another one
> > +	 * until it completes.  We may relax this in the future, but for now
> > +	 * this matches the legacy pageflip behavior.
> > +	 */
> > +	ret = wait_for_pending_flip(commit->crtc_mask, commit);
> > +	if (ret) {
> > +		kfree(commit);
> > +		return ret;
> > +	}
> > +
> > +	/*
> > +	 * Point of no return; everything from here on can't fail, so swap
> > +	 * the new state into the DRM objects.
> > +	 */
> > +
> > +	/*
> > +	 * FIXME:  Should eventually use drm_atomic_helper_swap_state().
> > +	 * However since we only handle planes at the moment, we don't
> > +	 * want to clobber our good crtc state with something bogus,
> > +	 * so just manually swap the plane states and copy over any completion
> > +	 * events for CRTC's.
> >  	 */
> >  	for (i = 0; i < dev->mode_config.num_total_plane; i++) {
> >  		struct drm_plane *plane = state->planes[i];
> > @@ -155,10 +252,29 @@ int intel_atomic_commit(struct drm_device *dev,
> >  		swap(state->plane_states[i], plane->state);
> >  		plane->state->state = NULL;
> >  	}
> > -	drm_atomic_helper_commit_planes(dev, state);
> > -	drm_atomic_helper_wait_for_vblanks(dev, state);
> > -	drm_atomic_helper_cleanup_planes(dev, state);
> > -	drm_atomic_state_free(state);
> > +
> > +	for (i = 0; i < dev->mode_config.num_crtc; i++) {
> > +		struct drm_crtc *crtc = state->crtcs[i];
> > +		struct drm_crtc_state *crtc_state = state->crtc_states[i];
> > +
> > +		if (!crtc || !crtc_state)
> > +			continue;
> > +
> > +		/* n.b., we only copy the event and enabled flag for now */
> > +		swap(crtc->state->event, crtc_state->event);
> > +		swap(crtc->state->enable, crtc_state->enable);
> > +	}
> 
> Before we can swap in the new state we need to stall for any outstanding
> async flip to complete. At least until we have a full-blown flip queue.

I think the wait_for_pending_flip() call above the two comment blocks
before the loop should take care of that --- it waits until all the
CRTC's in commit->crtc_mask have no pending flip, then atomically marks
them as busy before proceeding.


Matt

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [PATCH 0/4] More nuclear pageflip
  2015-01-31  9:35 ` [PATCH 0/4] More nuclear pageflip Daniel Vetter
@ 2015-01-31 20:22   ` Matt Roper
  0 siblings, 0 replies; 11+ messages in thread
From: Matt Roper @ 2015-01-31 20:22 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Sat, Jan 31, 2015 at 10:35:32AM +0100, Daniel Vetter wrote:
> On Fri, Jan 30, 2015 at 04:22:35PM -0800, Matt Roper wrote:
> > The first two patches here were already posted earlier this week; they allow
> > our legacy plane updates to make use of the main atomic helpers rather than the
> > transitional atomic helpers.  This shouldn't have any functional change, but it
> > will cause us to exercise the full atomic pipeline rather than just the plane
> > subset of it.
> > 
> > The third patch is the interesting one; it allows us to handle nuclear pageflip
> > requests in a non-blocking manner and deliver a uevent upon completion.  This
> > is important functionality for compositors since it allows them to request that
> > the kernel perform a flip, then continue on doing other work while they wait
> > for the flip to actually happen.
> > 
> > The final patch here switches our legacy pageflip ioctl to use the atomic
> > helper (thus exercising the new asynchronous support added in patch #3).
> > Removing the i915-specific pageflip handling should allow us to drop a bunch of
> > our display code; I've been somewhat conservative in my code removal for now
> > (just enough to get rid of the 'function unused' compiler warnings); we can do
> > further cleanup of code that relates to the legacy pageflip pipeline in a
> > future patchset.
> 
> Our current mmio flip doesn't work on gen4 and earlier because the flip
> completion event generation only works with MI_ flips. So I think we need
> to (at least for now) keep the legacy flip code around and wired up for
> those platforms.
> 
> But we need to redo the flip completion anyway (and probably just key off
> the next vblank interrupt) since if you'd do an atomic flip with just
> sprites the event stuff wont work on any platform. Once we have that we
> can then also roll out the legacy-pageflip-on-atomic for older platforms.
> -Daniel

Hmm, my code at the moment sends the completion event to userspace and
marks the CRTC as no longer having a flip pending after the
drm_atomic_helper_wait_for_vblanks(dev, state) call, which probably
works when we're actually updating the framebuffer, but I guess that
will send the uevent too early in cases where the fb doesn't change
(since we already made a point of optimizing out that unnecessary vblank
wait).

So maybe I should just get the "schedule stuff for future vblank"
mechanism done that I already had on my plate for the watermark stuff,
and then use that to send the uevent once the next vblank following the
commit happens.

It occurs to me that I probably need to take a look at all the i915
hangcheck/recovery stuff too; I haven't paid much attention to how that
works, but I'm guessing it probably needs to be updated a bit now as
well.


Matt

> 
> > 
> > Matt Roper (4):
> >   drm/i915: Keep plane->state updated on pageflip
> >   drm/i915: Switch planes from transitional helpers to full atomic
> >     helpers
> >   drm/i915: Enable asynchronous nuclear flips
> >   drm/i915: Use atomic helper for pageflips
> > 
> >  drivers/gpu/drm/i915/i915_drv.h      |   9 +-
> >  drivers/gpu/drm/i915/i915_params.c   |   5 -
> >  drivers/gpu/drm/i915/intel_atomic.c  | 162 ++++++--
> >  drivers/gpu/drm/i915/intel_display.c | 716 +----------------------------------
> >  drivers/gpu/drm/i915/intel_drv.h     |   8 +
> >  drivers/gpu/drm/i915/intel_lrc.c     |   3 +-
> >  6 files changed, 155 insertions(+), 748 deletions(-)
> > 
> > -- 
> > 1.8.5.1
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH 3/4] drm/i915: Enable asynchronous nuclear flips
  2015-01-31 20:07     ` Matt Roper
@ 2015-02-02  9:36       ` Daniel Vetter
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Vetter @ 2015-02-02  9:36 UTC (permalink / raw)
  To: Matt Roper; +Cc: intel-gfx

On Sat, Jan 31, 2015 at 12:07:56PM -0800, Matt Roper wrote:
> On Sat, Jan 31, 2015 at 10:30:29AM +0100, Daniel Vetter wrote:
> > On Fri, Jan 30, 2015 at 04:22:38PM -0800, Matt Roper wrote:
> > > The initial i915 nuclear pageflip support rejected asynchronous updates.
> > > Allow all work after we swap in the new state structures to run
> > > asynchronously.  We also need to start sending completion events to
> > > userspace if they were requested.
> > > 
> > > Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/i915_drv.h     |   3 +
> > >  drivers/gpu/drm/i915/intel_atomic.c | 162 +++++++++++++++++++++++++++++++-----
> > >  drivers/gpu/drm/i915/intel_drv.h    |   8 ++
> > >  3 files changed, 150 insertions(+), 23 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > > index 8fad702..c7a520a 100644
> > > --- a/drivers/gpu/drm/i915/i915_drv.h
> > > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > > @@ -1777,6 +1777,9 @@ struct drm_i915_private {
> > >  	struct drm_crtc *pipe_to_crtc_mapping[I915_MAX_PIPES];
> > >  	wait_queue_head_t pending_flip_queue;
> > >  
> > > +	/* CRTC mask of pending atomic flips */
> > > +	uint32_t pending_atomic;
> > > +
> > >  #ifdef CONFIG_DEBUG_FS
> > >  	struct intel_pipe_crc pipe_crc[I915_MAX_PIPES];
> > >  #endif
> > > diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
> > > index 19a9dd5..5dd7897 100644
> > > --- a/drivers/gpu/drm/i915/intel_atomic.c
> > > +++ b/drivers/gpu/drm/i915/intel_atomic.c
> > > @@ -76,6 +76,8 @@ int intel_atomic_check(struct drm_device *dev,
> > >  	state->allow_modeset = false;
> > >  	for (i = 0; i < ncrtcs; i++) {
> > >  		struct intel_crtc *crtc = to_intel_crtc(state->crtcs[i]);
> > > +		if (crtc)
> > > +			state->crtc_states[i]->enable = crtc->active;
> > >  		if (crtc && crtc->pipe != nuclear_pipe)
> > >  			not_nuclear = true;
> > >  	}
> > > @@ -96,6 +98,87 @@ int intel_atomic_check(struct drm_device *dev,
> > >  }
> > >  
> > >  
> > > +/*
> > > + * Wait until CRTC's have no pending flip, then atomically mark those CRTC's
> > > + * as busy.
> > > + */
> > > +static int wait_for_pending_flip(uint32_t crtc_mask,
> > > +				 struct intel_pending_atomic *commit)
> > > +{
> > > +	struct drm_i915_private *dev_priv = commit->dev->dev_private;
> > > +	int ret;
> > > +
> > > +	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
> > > +	ret = wait_event_interruptible_locked(dev_priv->pending_flip_queue,
> > > +					      !(dev_priv->pending_atomic & crtc_mask));
> > > +	if (ret == 0)
> > > +		dev_priv->pending_atomic |= crtc_mask;
> > > +	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +/* Finish pending flip operation on specified CRTC's */
> > > +static void flip_completion(struct intel_pending_atomic *commit)
> > > +{
> > > +	struct drm_i915_private *dev_priv = commit->dev->dev_private;
> > > +
> > > +	spin_lock_irq(&dev_priv->pending_flip_queue.lock);
> > > +	dev_priv->pending_atomic &= ~commit->crtc_mask;
> > > +	wake_up_all_locked(&dev_priv->pending_flip_queue);
> > > +	spin_unlock_irq(&dev_priv->pending_flip_queue.lock);
> > > +}
> > > +
> > > +/*
> > > + * Finish an atomic commit.  The work here can be performed asynchronously
> > > + * if desired.  The new state has already been applied to the DRM objects
> > > + * and no modeset locks are needed.
> > > + */
> > > +static void finish_atomic_commit(struct work_struct *work)
> > > +{
> > > +	struct intel_pending_atomic *commit =
> > > +		container_of(work, struct intel_pending_atomic, work);
> > > +	struct drm_device *dev = commit->dev;
> > > +	struct drm_crtc *crtc;
> > > +	struct drm_atomic_state *state = commit->state;
> > > +	int i;
> > > +
> > > +	/*
> > > +	 * FIXME:  The proper sequence here will eventually be:
> > > +	 *
> > > +	 * drm_atomic_helper_commit_pre_planes(dev, state);
> > > +	 * drm_atomic_helper_commit_planes(dev, state);
> > > +	 * drm_atomic_helper_commit_post_planes(dev, state);
> > > +	 * drm_atomic_helper_wait_for_vblanks(dev, state);
> > > +	 * drm_atomic_helper_cleanup_planes(dev, state);
> > > +	 * drm_atomic_state_free(state);
> > > +	 *
> > > +	 * once we have full atomic modeset.  For now, just manually update
> > > +	 * plane states to avoid clobbering good states with dummy states
> > > +	 * while nuclear pageflipping.
> > > +	 */
> > > +	drm_atomic_helper_commit_planes(dev, state);
> > > +	drm_atomic_helper_wait_for_vblanks(dev, state);
> > > +
> > > +	/* Send CRTC completion events. */
> > > +	for (i = 0; i < dev->mode_config.num_crtc; i++) {
> > > +		crtc = state->crtcs[i];
> > > +		if (crtc && crtc->state->event) {
> > > +			spin_lock_irq(&dev->event_lock);
> > > +			drm_send_vblank_event(dev, to_intel_crtc(crtc)->pipe,
> > > +					      crtc->state->event);
> > > +			spin_unlock_irq(&dev->event_lock);
> > > +			crtc->state->event = NULL;
> > > +		}
> > > +	}
> > > +
> > > +	drm_atomic_helper_cleanup_planes(dev, state);
> > > +	drm_atomic_state_free(state);
> > > +
> > > +	flip_completion(commit);
> > > +	kfree(commit);
> > > +}
> > > +
> > >  /**
> > >   * intel_atomic_commit - commit validated state object
> > >   * @dev: DRM device
> > > @@ -116,34 +199,48 @@ int intel_atomic_commit(struct drm_device *dev,
> > >  			struct drm_atomic_state *state,
> > >  			bool async)
> > >  {
> > > +	struct intel_pending_atomic *commit;
> > >  	int ret;
> > >  	int i;
> > >  
> > > -	if (async) {
> > > -		DRM_DEBUG_KMS("i915 does not yet support async commit\n");
> > > -		return -EINVAL;
> > > -	}
> > > -
> > >  	ret = drm_atomic_helper_prepare_planes(dev, state);
> > >  	if (ret)
> > >  		return ret;
> > >  
> > > -	/* Point of no return */
> > > +	commit = kzalloc(sizeof(*commit), GFP_KERNEL);
> > > +	if (!commit)
> > > +		return -ENOMEM;
> > > +
> > > +	commit->dev = dev;
> > > +	commit->state = state;
> > > +
> > > +	for (i = 0; i < dev->mode_config.num_crtc; i++)
> > > +		if (state->crtcs[i])
> > > +			commit->crtc_mask |=
> > > +				(1 << drm_crtc_index(state->crtcs[i]));
> > >  
> > >  	/*
> > > -	 * FIXME:  The proper sequence here will eventually be:
> > > -	 *
> > > -	 * drm_atomic_helper_swap_state(dev, state)
> > > -	 * drm_atomic_helper_commit_pre_planes(dev, state);
> > > -	 * drm_atomic_helper_commit_planes(dev, state);
> > > -	 * drm_atomic_helper_commit_post_planes(dev, state);
> > > -	 * drm_atomic_helper_wait_for_vblanks(dev, state);
> > > -	 * drm_atomic_helper_cleanup_planes(dev, state);
> > > -	 * drm_atomic_state_free(state);
> > > -	 *
> > > -	 * once we have full atomic modeset.  For now, just manually update
> > > -	 * plane states to avoid clobbering good states with dummy states
> > > -	 * while nuclear pageflipping.
> > > +	 * If there's already a flip pending, we won't schedule another one
> > > +	 * until it completes.  We may relax this in the future, but for now
> > > +	 * this matches the legacy pageflip behavior.
> > > +	 */
> > > +	ret = wait_for_pending_flip(commit->crtc_mask, commit);
> > > +	if (ret) {
> > > +		kfree(commit);
> > > +		return ret;
> > > +	}
> > > +
> > > +	/*
> > > +	 * Point of no return; everything from here on can't fail, so swap
> > > +	 * the new state into the DRM objects.
> > > +	 */
> > > +
> > > +	/*
> > > +	 * FIXME:  Should eventually use drm_atomic_helper_swap_state().
> > > +	 * However since we only handle planes at the moment, we don't
> > > +	 * want to clobber our good crtc state with something bogus,
> > > +	 * so just manually swap the plane states and copy over any completion
> > > +	 * events for CRTC's.
> > >  	 */
> > >  	for (i = 0; i < dev->mode_config.num_total_plane; i++) {
> > >  		struct drm_plane *plane = state->planes[i];
> > > @@ -155,10 +252,29 @@ int intel_atomic_commit(struct drm_device *dev,
> > >  		swap(state->plane_states[i], plane->state);
> > >  		plane->state->state = NULL;
> > >  	}
> > > -	drm_atomic_helper_commit_planes(dev, state);
> > > -	drm_atomic_helper_wait_for_vblanks(dev, state);
> > > -	drm_atomic_helper_cleanup_planes(dev, state);
> > > -	drm_atomic_state_free(state);
> > > +
> > > +	for (i = 0; i < dev->mode_config.num_crtc; i++) {
> > > +		struct drm_crtc *crtc = state->crtcs[i];
> > > +		struct drm_crtc_state *crtc_state = state->crtc_states[i];
> > > +
> > > +		if (!crtc || !crtc_state)
> > > +			continue;
> > > +
> > > +		/* n.b., we only copy the event and enabled flag for now */
> > > +		swap(crtc->state->event, crtc_state->event);
> > > +		swap(crtc->state->enable, crtc_state->enable);
> > > +	}
> > 
> > Before we can swap in the new state we need to stall for any outstanding
> > async flip to complete. At least until we have a full-blown flip queue.
> 
> I think the wait_for_pending_flip() call above the two comment blocks
> before the loop should take care of that --- it waits until all the
> CRTC's in commit->crtc_mask have no pending flip, then atomically marks
> them as busy before proceeding.

Right, completely missed that one, must have been blind. Looks sensible.

Another lesson learned from the old pageflip code: free-standing
flip/unpin works that only communicate their completion status to the
core of the driver through some bitmasks are a pain to integrate into
the shrinker. Even more so with atomic flips, since we must be able to
stall for the unpin to complete on any outstanding flips if we run out
of ggtt space.

Currently that's handled by restarting the entire ioctl from the eviction
code (see the EAGAIN in i915_gem_evict.c), which isn't terribly pretty.
Imo some explicit queue which the shrinker could stall on directly (and
maybe even process synchronously if needed) would be much better. With
atomic it should be possible since the async worker doesn't need any
modeset locks, only the dev->struct_mutex to unpin objects.

Anyway something to keep in mind, but maybe only once we have the full
flip queue in place. For now the important part is not to break things,
but I think we should have full coverage for everything in igt (but maybe
not in the limited prts subset).

Another thing I might or might not have missed is the async waiting.
Currently prepare_plane_fb does a synchronous stall for rendering, which
needs to be split like with the mmio_flip code which has an explicit
wait_request in the work function. Long-term I'd like to use the
plane_state->fence pointer, but unfortunately i915 requests haven't been
subclassed properly yet. So we'd need our own request pointer for now - at
least those are refcounted already.

And one more wishlist: Would it be feasible to build up the atomic async
stuff from the mmio_flip code? Just for additional continuity and testing
...
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


Thread overview: 11+ messages
2015-01-31  0:22 [PATCH 0/4] More nuclear pageflip Matt Roper
2015-01-31  0:22 ` [PATCH 1/4] drm/i915: Keep plane->state updated on pageflip Matt Roper
2015-01-31  9:36   ` Daniel Vetter
2015-01-31  0:22 ` [PATCH 2/4] drm/i915: Switch planes from transitional helpers to full atomic helpers Matt Roper
2015-01-31  0:22 ` [PATCH 3/4] drm/i915: Enable asynchronous nuclear flips Matt Roper
2015-01-31  9:30   ` Daniel Vetter
2015-01-31 20:07     ` Matt Roper
2015-02-02  9:36       ` Daniel Vetter
2015-01-31  0:22 ` [PATCH 4/4] drm/i915: Use atomic helper for pageflips Matt Roper
2015-01-31  9:35 ` [PATCH 0/4] More nuclear pageflip Daniel Vetter
2015-01-31 20:22   ` Matt Roper
