intel-gfx.lists.freedesktop.org archive mirror
* [PATCH 0/4] drm/i915: ring and flip leftovers
@ 2012-11-27 18:34 ville.syrjala
  2012-11-27 18:34 ` [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head ville.syrjala
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: ville.syrjala @ 2012-11-27 18:34 UTC (permalink / raw)
  To: intel-gfx

Some patches slightly updated, and rebased on top of robustify-reset-transitions.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head
  2012-11-27 18:34 [PATCH 0/4] drm/i915: ring and flip leftovers ville.syrjala
@ 2012-11-27 18:34 ` ville.syrjala
  2012-11-28 20:45   ` Daniel Vetter
  2012-11-27 18:34 ` [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base() ville.syrjala
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: ville.syrjala @ 2012-11-27 18:34 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

From BSpec:
"If the Ring Buffer Head Pointer and the Tail Pointer are on the same
cacheline, the Head Pointer must not be greater than the Tail
Pointer."

The easiest way to enforce this is to reduce the reported ring space.

References:
Gen2 BSpec "1. Programming Environment" / "1.4.4.6 Ring Buffer Use"
Gen3 BSpec "vol1c Memory Interface Functions" / "2.3.4.5 Ring Buffer Use"
Gen4+ BSpec "vol1c Memory Interface and Command Stream" / "5.3.4.5 Ring Buffer Use"
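The effect of the change can be sketched with a small userspace model of the ring-space computation (the struct and constant mirror the patch, but are illustrative rather than the driver's real types):

```c
#include <assert.h>

/* Illustrative model of the ring bookkeeping. */
struct ring {
	int head;	/* HW read offset, in bytes  */
	int tail;	/* SW write offset, in bytes */
	int size;	/* ring size, in bytes       */
};

/* Report 64 bytes (one cacheline) less free space than really exists,
 * so the tail can never advance onto the cacheline holding the head. */
static int ring_space(const struct ring *ring)
{
	int space = ring->head - (ring->tail + 64);

	if (space < 0)
		space += ring->size;
	return space;
}
```

With head == tail (empty ring) this reports size - 64 instead of size - 8, which is all the enforcement needed.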

v2: Include the exact BSpec references in the description

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_dma.c         |    2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c |    4 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.h |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 86b93b2..79089aa 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -141,7 +141,7 @@ void i915_kernel_lost_context(struct drm_device * dev)
 
 	ring->head = I915_READ_HEAD(ring) & HEAD_ADDR;
 	ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
-	ring->space = ring->head - (ring->tail + 8);
+	ring->space = ring->head - (ring->tail + 64);
 	if (ring->space < 0)
 		ring->space += ring->size;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index b0a83ce..cf11f8a 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -45,7 +45,7 @@ struct pipe_control {
 
 static inline int ring_space(struct intel_ring_buffer *ring)
 {
-	int space = (ring->head & HEAD_ADDR) - (ring->tail + 8);
+	int space = (ring->head & HEAD_ADDR) - (ring->tail + 64);
 	if (space < 0)
 		space += ring->size;
 	return space;
@@ -1260,7 +1260,7 @@ static int intel_ring_wait_request(struct intel_ring_buffer *ring, int n)
 		if (request->tail == -1)
 			continue;
 
-		space = request->tail - (ring->tail + 8);
+		space = request->tail - (ring->tail + 64);
 		if (space < 0)
 			space += ring->size;
 		if (space >= n) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 5af65b8..2f6d9b1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -191,7 +191,7 @@ void intel_cleanup_ring_buffer(struct intel_ring_buffer *ring);
 int __must_check intel_wait_ring_buffer(struct intel_ring_buffer *ring, int n);
 static inline int intel_wait_ring_idle(struct intel_ring_buffer *ring)
 {
-	return intel_wait_ring_buffer(ring, ring->size - 8);
+	return intel_wait_ring_buffer(ring, ring->size - 64);
 }
 
 int __must_check intel_ring_begin(struct intel_ring_buffer *ring, int n);
-- 
1.7.8.6

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base()
  2012-11-27 18:34 [PATCH 0/4] drm/i915: ring and flip leftovers ville.syrjala
  2012-11-27 18:34 ` [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head ville.syrjala
@ 2012-11-27 18:34 ` ville.syrjala
  2012-11-28 20:51   ` Daniel Vetter
  2012-11-27 18:34 ` [PATCH 3/4] drm/i915: Wake up pending flip waiters when the GPU hangs ville.syrjala
  2012-11-27 18:34 ` [PATCH v2 4/4] drm/i915: Kill i915_gem_execbuffer_wait_for_flips() ville.syrjala
  3 siblings, 1 reply; 13+ messages in thread
From: ville.syrjala @ 2012-11-27 18:34 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

intel_pipe_set_base() never actually waited for any pending page flips
on the CRTC. It looks like it tried to, by calling intel_finish_fb() on
the current front buffer. But the pending flips were actually tracked
in the BO of the previous front buffer, so the call to intel_finish_fb()
never did anything useful.

intel_crtc_wait_for_pending_flips() is the current _working_ way to wait
for pending page flips. So use it in intel_pipe_set_base() too. Some
refactoring was necessary to avoid locking struct_mutex twice.
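The refactoring follows the usual kernel `_locked` wrapper pattern; a minimal standalone sketch of the idea (the flag and counter stand in for `dev->struct_mutex` and the real wait, names are illustrative):

```c
#include <assert.h>

static int mutex_held;	/* stands in for dev->struct_mutex */
static int waits_done;

static void mutex_lock(void)   { assert(!mutex_held); mutex_held = 1; }
static void mutex_unlock(void) { assert(mutex_held);  mutex_held = 0; }

/* _locked variant: caller already holds the lock (as
 * intel_pipe_set_base() does), so taking it again would deadlock. */
static void wait_for_pending_flips_locked(void)
{
	assert(mutex_held);
	waits_done++;	/* stands in for wait_event() + intel_finish_fb() */
}

/* Plain variant for callers that do not hold the lock: just the
 * _locked variant wrapped in lock/unlock, so the wait logic exists
 * exactly once. */
static void wait_for_pending_flips(void)
{
	mutex_lock();
	wait_for_pending_flips_locked();
	mutex_unlock();
}
```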

v2: Shuffle the code around so that intel_crtc_wait_for_pending_flips()
    just wraps intel_crtc_wait_for_pending_flips_locked().

v3: Kill leftover wait_event() in intel_finish_fb()

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_display.c |   80 +++++++++++++++++----------------
 1 files changed, 41 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 8c2d810..ea710af 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2238,10 +2238,6 @@ intel_finish_fb(struct drm_framebuffer *old_fb)
 	bool was_interruptible = dev_priv->mm.interruptible;
 	int ret;
 
-	wait_event(dev_priv->pending_flip_queue,
-		   i915_reset_in_progress(&dev_priv->gpu_error) ||
-		   atomic_read(&obj->pending_flip) == 0);
-
 	/* Big Hammer, we also need to ensure that any pending
 	 * MI_WAIT_FOR_EVENT inside a user batch buffer on the
 	 * current scanout is retired before unpinning the old
@@ -2284,6 +2280,46 @@ static void intel_crtc_update_sarea_pos(struct drm_crtc *crtc, int x, int y)
 	}
 }
 
+static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	unsigned long flags;
+	bool pending;
+
+	if (i915_reset_in_progress(&dev_priv->gpu_error))
+		return false;
+
+	spin_lock_irqsave(&dev->event_lock, flags);
+	pending = to_intel_crtc(crtc)->unpin_work != NULL;
+	spin_unlock_irqrestore(&dev->event_lock, flags);
+
+	return pending;
+}
+
+static void intel_crtc_wait_for_pending_flips_locked(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	if (crtc->fb == NULL)
+		return;
+
+	wait_event(dev_priv->pending_flip_queue,
+		   !intel_crtc_has_pending_flip(crtc));
+
+	intel_finish_fb(crtc->fb);
+}
+
+static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+
+	mutex_lock(&dev->struct_mutex);
+	intel_crtc_wait_for_pending_flips_locked(crtc);
+	mutex_unlock(&dev->struct_mutex);
+}
+
 static int
 intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
 		    struct drm_framebuffer *fb)
@@ -2317,8 +2353,7 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
 		return ret;
 	}
 
-	if (crtc->fb)
-		intel_finish_fb(crtc->fb);
+	intel_crtc_wait_for_pending_flips_locked(crtc);
 
 	ret = dev_priv->display.update_plane(crtc, fb, x, y);
 	if (ret) {
@@ -2950,39 +2985,6 @@ static void ironlake_fdi_disable(struct drm_crtc *crtc)
 	udelay(100);
 }
 
-static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
-{
-	struct drm_device *dev = crtc->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	unsigned long flags;
-	bool pending;
-
-	if (i915_reset_in_progress(&dev_priv->gpu_error))
-		return false;
-
-	spin_lock_irqsave(&dev->event_lock, flags);
-	pending = to_intel_crtc(crtc)->unpin_work != NULL;
-	spin_unlock_irqrestore(&dev->event_lock, flags);
-
-	return pending;
-}
-
-static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
-{
-	struct drm_device *dev = crtc->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-
-	if (crtc->fb == NULL)
-		return;
-
-	wait_event(dev_priv->pending_flip_queue,
-		   !intel_crtc_has_pending_flip(crtc));
-
-	mutex_lock(&dev->struct_mutex);
-	intel_finish_fb(crtc->fb);
-	mutex_unlock(&dev->struct_mutex);
-}
-
 static bool ironlake_crtc_driving_pch(struct drm_crtc *crtc)
 {
 	struct drm_device *dev = crtc->dev;
-- 
1.7.8.6

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/4] drm/i915: Wake up pending flip waiters when the GPU hangs
  2012-11-27 18:34 [PATCH 0/4] drm/i915: ring and flip leftovers ville.syrjala
  2012-11-27 18:34 ` [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head ville.syrjala
  2012-11-27 18:34 ` [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base() ville.syrjala
@ 2012-11-27 18:34 ` ville.syrjala
  2012-11-27 18:34 ` [PATCH v2 4/4] drm/i915: Kill i915_gem_execbuffer_wait_for_flips() ville.syrjala
  3 siblings, 0 replies; 13+ messages in thread
From: ville.syrjala @ 2012-11-27 18:34 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Anyone stuck waiting for pending flips should get woken up when the GPU
hangs.
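The one-line wake-up matters because `wait_event()` only re-evaluates its condition when the queue is woken; schematically (flags model the real driver state):

```c
#include <assert.h>
#include <stdbool.h>

static bool reset_in_progress;	/* models i915_reset_in_progress() */
static bool flip_pending;	/* models "unpin_work != NULL"     */

/* The condition the pending-flip wait_event() sleeps on.  A waiter may
 * stop either because the flip completed or because the GPU hung, but
 * wait_event() only re-checks this when pending_flip_queue is woken,
 * which is why i915_handle_error() must issue the wake-up. */
static bool flip_wait_done(void)
{
	return reset_in_progress || !flip_pending;
}
```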

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_irq.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 77de66c..2fe5707 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1464,6 +1464,7 @@ void i915_handle_error(struct drm_device *dev, bool wedged)
 		 */
 		for_each_ring(ring, dev_priv, i)
 			wake_up_all(&ring->irq_queue);
+		wake_up(&dev_priv->pending_flip_queue);
 	}
 
 	queue_work(dev_priv->wq, &dev_priv->gpu_error.work);
-- 
1.7.8.6

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 4/4] drm/i915: Kill i915_gem_execbuffer_wait_for_flips()
  2012-11-27 18:34 [PATCH 0/4] drm/i915: ring and flip leftovers ville.syrjala
                   ` (2 preceding siblings ...)
  2012-11-27 18:34 ` [PATCH 3/4] drm/i915: Wake up pending flip waiters when the GPU hangs ville.syrjala
@ 2012-11-27 18:34 ` ville.syrjala
  2012-11-28 20:56   ` Daniel Vetter
  3 siblings, 1 reply; 13+ messages in thread
From: ville.syrjala @ 2012-11-27 18:34 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

As per Chris Wilson's suggestion make
i915_gem_execbuffer_wait_for_flips() go away.

This was used to stall the GPU ring while there are pending
page flips involving the relevant BO. I.e. while the BO is still
being scanned out by the display controller.

The recommended alternative is to use the page flip events to
wait for the page flips to fully complete before reusing the BO
of the old front buffer. Or use more buffers.
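For context, a simplified sketch of the mechanism being removed (a global stands in for the per-object atomic `obj->pending_flip`; the real code used `atomic_add`/`atomic_clear_mask` and woke `pending_flip_queue` when the mask hit zero):

```c
#include <assert.h>

/* Mask of planes that still scan this BO out while a flip away from it
 * is pending. */
static unsigned int pending_flip;

static void queue_flip(int plane)
{
	pending_flip |= 1u << plane;	/* atomic_add(1 << plane, ...) */
}

static void complete_flip(int plane)
{
	pending_flip &= ~(1u << plane);	/* atomic_clear_mask(...)      */
	/* when the mask reaches zero the real code wakes
	 * dev_priv->pending_flip_queue */
}
```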

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h            |    7 ----
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   42 ----------------------------
 drivers/gpu/drm/i915/intel_display.c       |    7 ----
 3 files changed, 0 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8a9856c..2c052bd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1119,13 +1119,6 @@ struct drm_i915_gem_object {
 
 	/** for phy allocated objects */
 	struct drm_i915_gem_phys_object *phys_obj;
-
-	/**
-	 * Number of crtcs where this object is currently the fb, but
-	 * will be page flipped away on the next vblank.  When it
-	 * reaches 0, dev_priv->pending_flip_queue will be woken up.
-	 */
-	atomic_t pending_flip;
 };
 
 #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 48e4317..19d084c 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -602,44 +602,11 @@ err:
 }
 
 static int
-i915_gem_execbuffer_wait_for_flips(struct intel_ring_buffer *ring, u32 flips)
-{
-	u32 plane, flip_mask;
-	int ret;
-
-	/* Check for any pending flips. As we only maintain a flip queue depth
-	 * of 1, we can simply insert a WAIT for the next display flip prior
-	 * to executing the batch and avoid stalling the CPU.
-	 */
-
-	for (plane = 0; flips >> plane; plane++) {
-		if (((flips >> plane) & 1) == 0)
-			continue;
-
-		if (plane)
-			flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
-		else
-			flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
-
-		ret = intel_ring_begin(ring, 2);
-		if (ret)
-			return ret;
-
-		intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
-		intel_ring_emit(ring, MI_NOOP);
-		intel_ring_advance(ring);
-	}
-
-	return 0;
-}
-
-static int
 i915_gem_execbuffer_move_to_gpu(struct intel_ring_buffer *ring,
 				struct list_head *objects)
 {
 	struct drm_i915_gem_object *obj;
 	uint32_t flush_domains = 0;
-	uint32_t flips = 0;
 	int ret;
 
 	list_for_each_entry(obj, objects, exec_list) {
@@ -650,18 +617,9 @@ i915_gem_execbuffer_move_to_gpu(struct intel_ring_buffer *ring,
 		if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
 			i915_gem_clflush_object(obj);
 
-		if (obj->base.pending_write_domain)
-			flips |= atomic_read(&obj->pending_flip);
-
 		flush_domains |= obj->base.write_domain;
 	}
 
-	if (flips) {
-		ret = i915_gem_execbuffer_wait_for_flips(ring, flips);
-		if (ret)
-			return ret;
-	}
-
 	if (flush_domains & I915_GEM_DOMAIN_CPU)
 		i915_gem_chipset_flush(ring->dev);
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ea710af..c7f7295 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -6947,8 +6947,6 @@ static void do_intel_finish_page_flip(struct drm_device *dev,
 
 	obj = work->old_fb_obj;
 
-	atomic_clear_mask(1 << intel_crtc->plane,
-			  &obj->pending_flip.counter);
 	wake_up(&dev_priv->pending_flip_queue);
 
 	queue_work(dev_priv->wq, &work->work);
@@ -7294,10 +7292,6 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 	work->enable_stall_check = true;
 
-	/* Block clients from rendering to the new back buffer until
-	 * the flip occurs and the object is no longer visible.
-	 */
-	atomic_add(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip);
 	atomic_inc(&intel_crtc->unpin_work_count);
 
 	ret = dev_priv->display.queue_flip(dev, crtc, fb, obj);
@@ -7314,7 +7308,6 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 cleanup_pending:
 	atomic_dec(&intel_crtc->unpin_work_count);
-	atomic_sub(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip);
 	drm_gem_object_unreference(&work->old_fb_obj->base);
 	drm_gem_object_unreference(&obj->base);
 	mutex_unlock(&dev->struct_mutex);
-- 
1.7.8.6

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head
  2012-11-27 18:34 ` [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head ville.syrjala
@ 2012-11-28 20:45   ` Daniel Vetter
  2012-11-29 11:25     ` Ville Syrjälä
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel Vetter @ 2012-11-28 20:45 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Tue, Nov 27, 2012 at 08:34:55PM +0200, ville.syrjala@linux.intel.com wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> From BSpec:
> "If the Ring Buffer Head Pointer and the Tail Pointer are on the same
> cacheline, the Head Pointer must not be greater than the Tail
> Pointer."
> 
> The easiest way to enforce this is to reduce the reported ring space.
> 
> References:
> Gen2 BSpec "1. Programming Environment" / "1.4.4.6 Ring Buffer Use"
> Gen3 BSpec "vol1c Memory Interface Functions" / "2.3.4.5 Ring Buffer Use"
> Gen4+ BSpec "vol1c Memory Interface and Command Stream" / "5.3.4.5 Ring Buffer Use"
> 
> v2: Include the exact BSpec references in the description
> 
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Another small bikeshed for this one: Less magic numbers with

#define RING_FREE_SPACE 64

-Daniel
> ---
>  drivers/gpu/drm/i915/i915_dma.c         |    2 +-
>  drivers/gpu/drm/i915/intel_ringbuffer.c |    4 ++--
>  drivers/gpu/drm/i915/intel_ringbuffer.h |    2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> index 86b93b2..79089aa 100644
> --- a/drivers/gpu/drm/i915/i915_dma.c
> +++ b/drivers/gpu/drm/i915/i915_dma.c
> @@ -141,7 +141,7 @@ void i915_kernel_lost_context(struct drm_device * dev)
>  
>  	ring->head = I915_READ_HEAD(ring) & HEAD_ADDR;
>  	ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
> -	ring->space = ring->head - (ring->tail + 8);
> +	ring->space = ring->head - (ring->tail + 64);
>  	if (ring->space < 0)
>  		ring->space += ring->size;
>  
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index b0a83ce..cf11f8a 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -45,7 +45,7 @@ struct pipe_control {
>  
>  static inline int ring_space(struct intel_ring_buffer *ring)
>  {
> -	int space = (ring->head & HEAD_ADDR) - (ring->tail + 8);
> +	int space = (ring->head & HEAD_ADDR) - (ring->tail + 64);
>  	if (space < 0)
>  		space += ring->size;
>  	return space;
> @@ -1260,7 +1260,7 @@ static int intel_ring_wait_request(struct intel_ring_buffer *ring, int n)
>  		if (request->tail == -1)
>  			continue;
>  
> -		space = request->tail - (ring->tail + 8);
> +		space = request->tail - (ring->tail + 64);
>  		if (space < 0)
>  			space += ring->size;
>  		if (space >= n) {
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 5af65b8..2f6d9b1 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -191,7 +191,7 @@ void intel_cleanup_ring_buffer(struct intel_ring_buffer *ring);
>  int __must_check intel_wait_ring_buffer(struct intel_ring_buffer *ring, int n);
>  static inline int intel_wait_ring_idle(struct intel_ring_buffer *ring)
>  {
> -	return intel_wait_ring_buffer(ring, ring->size - 8);
> +	return intel_wait_ring_buffer(ring, ring->size - 64);
>  }
>  
>  int __must_check intel_ring_begin(struct intel_ring_buffer *ring, int n);
> -- 
> 1.7.8.6
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base()
  2012-11-27 18:34 ` [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base() ville.syrjala
@ 2012-11-28 20:51   ` Daniel Vetter
  2012-11-29 11:26     ` Ville Syrjälä
  2012-11-30 14:01     ` Ville Syrjälä
  0 siblings, 2 replies; 13+ messages in thread
From: Daniel Vetter @ 2012-11-28 20:51 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Tue, Nov 27, 2012 at 08:34:56PM +0200, ville.syrjala@linux.intel.com wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> intel_pipe_set_base() never actually waited for any pending page flips
> on the CRTC. It looks like it tried to, by calling intel_finish_fb() on
> the current front buffer. But the pending flips were actually tracked
> in the BO of the previous front buffer, so the call to intel_finish_fb()
> never did anything useful.
> 
> intel_crtc_wait_for_pending_flips() is the current _working_ way to wait
> for pending page flips. So use it in intel_pipe_set_base() too. Some
> refactoring was necessary to avoid locking struct_mutex twice.
> 
> v2: Shuffle the code around so that intel_crtc_wait_for_pending_flips()
>     just wraps intel_crtc_wait_for_pending_flips_locked().
> 
> v3: Kill leftover wait_event() in intel_finish_fb()
> 
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/intel_display.c |   80 +++++++++++++++++----------------
>  1 files changed, 41 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 8c2d810..ea710af 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -2238,10 +2238,6 @@ intel_finish_fb(struct drm_framebuffer *old_fb)
>  	bool was_interruptible = dev_priv->mm.interruptible;
>  	int ret;
>  
> -	wait_event(dev_priv->pending_flip_queue,
> -		   i915_reset_in_progress(&dev_priv->gpu_error) ||
> -		   atomic_read(&obj->pending_flip) == 0);
> -
>  	/* Big Hammer, we also need to ensure that any pending
>  	 * MI_WAIT_FOR_EVENT inside a user batch buffer on the
>  	 * current scanout is retired before unpinning the old
> @@ -2284,6 +2280,46 @@ static void intel_crtc_update_sarea_pos(struct drm_crtc *crtc, int x, int y)
>  	}
>  }
>  
> +static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
> +{
> +	struct drm_device *dev = crtc->dev;
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +	unsigned long flags;
> +	bool pending;
> +
> +	if (i915_reset_in_progress(&dev_priv->gpu_error))
> +		return false;

This check is not enough, since a full gpu reset might have happened while
we didn't look, which means that we'll still be waiting forever. To really
close all gaps and races I think we need the full multi-state transitions
already implemented in __wait_seqno (minus the first kick, since no one is
holding the dev->struct_mutex while waiting for a pageflip to complete). So
- we need a reset_counter here like in wait_seqno
- need to reset the unpin_work state (and fire off any pending drm events
  while at it) to unblock kernel waiters and userspace
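A minimal model of the reset_counter scheme being suggested (hypothetical names; the real __wait_seqno logic is considerably more involved):

```c
#include <assert.h>

#define EAGAIN 11

static unsigned int reset_counter;	/* bumped after every completed reset */
static int flip_pending;

/* One re-check of the wait condition, run each time pending_flip_queue
 * is woken.  Comparing against a counter sampled before sleeping
 * catches a reset that completed while we slept, which the transient
 * reset-in-progress flag alone would miss. */
static int flip_wait_step(unsigned int sampled)
{
	if (sampled != reset_counter)
		return -EAGAIN;	/* state changed under us, restart */
	if (!flip_pending)
		return 0;	/* flip completed */
	return 1;		/* keep sleeping  */
}
```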

Also: igt test highly preferred ;-) Best would be to convert the hangman
test over to the subtest infrastructure (maybe easier when first ported to
python and using argparse, but I don't mind bash's getopt support) and then
add a new subtest which exercises flips vs. hangs.

Yours, Daniel

> +
> +	spin_lock_irqsave(&dev->event_lock, flags);
> +	pending = to_intel_crtc(crtc)->unpin_work != NULL;
> +	spin_unlock_irqrestore(&dev->event_lock, flags);
> +
> +	return pending;
> +}
> +
> +static void intel_crtc_wait_for_pending_flips_locked(struct drm_crtc *crtc)
> +{
> +	struct drm_device *dev = crtc->dev;
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +
> +	if (crtc->fb == NULL)
> +		return;
> +
> +	wait_event(dev_priv->pending_flip_queue,
> +		   !intel_crtc_has_pending_flip(crtc));
> +
> +	intel_finish_fb(crtc->fb);
> +}
> +
> +static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
> +{
> +	struct drm_device *dev = crtc->dev;
> +
> +	mutex_lock(&dev->struct_mutex);
> +	intel_crtc_wait_for_pending_flips_locked(crtc);
> +	mutex_unlock(&dev->struct_mutex);
> +}
> +
>  static int
>  intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
>  		    struct drm_framebuffer *fb)
> @@ -2317,8 +2353,7 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
>  		return ret;
>  	}
>  
> -	if (crtc->fb)
> -		intel_finish_fb(crtc->fb);
> +	intel_crtc_wait_for_pending_flips_locked(crtc);
>  
>  	ret = dev_priv->display.update_plane(crtc, fb, x, y);
>  	if (ret) {
> @@ -2950,39 +2985,6 @@ static void ironlake_fdi_disable(struct drm_crtc *crtc)
>  	udelay(100);
>  }
>  
> -static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
> -{
> -	struct drm_device *dev = crtc->dev;
> -	struct drm_i915_private *dev_priv = dev->dev_private;
> -	unsigned long flags;
> -	bool pending;
> -
> -	if (i915_reset_in_progress(&dev_priv->gpu_error))
> -		return false;
> -
> -	spin_lock_irqsave(&dev->event_lock, flags);
> -	pending = to_intel_crtc(crtc)->unpin_work != NULL;
> -	spin_unlock_irqrestore(&dev->event_lock, flags);
> -
> -	return pending;
> -}
> -
> -static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
> -{
> -	struct drm_device *dev = crtc->dev;
> -	struct drm_i915_private *dev_priv = dev->dev_private;
> -
> -	if (crtc->fb == NULL)
> -		return;
> -
> -	wait_event(dev_priv->pending_flip_queue,
> -		   !intel_crtc_has_pending_flip(crtc));
> -
> -	mutex_lock(&dev->struct_mutex);
> -	intel_finish_fb(crtc->fb);
> -	mutex_unlock(&dev->struct_mutex);
> -}
> -
>  static bool ironlake_crtc_driving_pch(struct drm_crtc *crtc)
>  {
>  	struct drm_device *dev = crtc->dev;
> -- 
> 1.7.8.6
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 4/4] drm/i915: Kill i915_gem_execbuffer_wait_for_flips()
  2012-11-27 18:34 ` [PATCH v2 4/4] drm/i915: Kill i915_gem_execbuffer_wait_for_flips() ville.syrjala
@ 2012-11-28 20:56   ` Daniel Vetter
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Vetter @ 2012-11-28 20:56 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Tue, Nov 27, 2012 at 08:34:58PM +0200, ville.syrjala@linux.intel.com wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> As per Chris Wilson's suggestion make
> i915_gem_execbuffer_wait_for_flips() go away.
> 
> This was used to stall the GPU ring while there are pending
> page flips involving the relevant BO. I.e. while the BO is still
> being scanned out by the display controller.
> 
> The recommended alternative is to use the page flip events to
> wait for the page flips to fully complete before reusing the BO
> of the old front buffer. Or use more buffers.
> 
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Queued for -next (with acks from Jesse and Kristian smashed on top),
thanks for the patch. Also dropped the obj->pending_flip removal hunk,
since that's still required.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head
  2012-11-28 20:45   ` Daniel Vetter
@ 2012-11-29 11:25     ` Ville Syrjälä
  0 siblings, 0 replies; 13+ messages in thread
From: Ville Syrjälä @ 2012-11-29 11:25 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Wed, Nov 28, 2012 at 09:45:46PM +0100, Daniel Vetter wrote:
> On Tue, Nov 27, 2012 at 08:34:55PM +0200, ville.syrjala@linux.intel.com wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > From BSpec:
> > "If the Ring Buffer Head Pointer and the Tail Pointer are on the same
> > cacheline, the Head Pointer must not be greater than the Tail
> > Pointer."
> > 
> > The easiest way to enforce this is to reduce the reported ring space.
> > 
> > References:
> > Gen2 BSpec "1. Programming Environment" / "1.4.4.6 Ring Buffer Use"
> > Gen3 BSpec "vol1c Memory Interface Functions" / "2.3.4.5 Ring Buffer Use"
> > Gen4+ BSpec "vol1c Memory Interface and Command Stream" / "5.3.4.5 Ring Buffer Use"
> > 
> > v2: Include the exact BSpec references in the description
> > 
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> Another small bikeshed for this one: Less magic numbers with
> 
> #define RING_FREE_SPACE 64

Can do. And since the number will only appear in one place, I think
I'll stick the BSpec references into a comment above it.

-- 
Ville Syrjälä
Intel OTC

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base()
  2012-11-28 20:51   ` Daniel Vetter
@ 2012-11-29 11:26     ` Ville Syrjälä
  2012-11-30 14:01     ` Ville Syrjälä
  1 sibling, 0 replies; 13+ messages in thread
From: Ville Syrjälä @ 2012-11-29 11:26 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Wed, Nov 28, 2012 at 09:51:18PM +0100, Daniel Vetter wrote:
> On Tue, Nov 27, 2012 at 08:34:56PM +0200, ville.syrjala@linux.intel.com wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > intel_pipe_set_base() never actually waited for any pending page flips
> > on the CRTC. It looks like it tried to, by calling intel_finish_fb() on
> > the current front buffer. But the pending flips were actually tracked
> > in the BO of the previous front buffer, so the call to intel_finish_fb()
> > never did anything useful.
> > 
> > intel_crtc_wait_for_pending_flips() is the current _working_ way to wait
> > for pending page flips. So use it in intel_pipe_set_base() too. Some
> > refactoring was necessary to avoid locking struct_mutex twice.
> > 
> > v2: Shuffle the code around so that intel_crtc_wait_for_pending_flips()
> >     just wraps intel_crtc_wait_for_pending_flips_locked().
> > 
> > v3: Kill leftover wait_event() in intel_finish_fb()
> > 
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/intel_display.c |   80 +++++++++++++++++----------------
> >  1 files changed, 41 insertions(+), 39 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > index 8c2d810..ea710af 100644
> > --- a/drivers/gpu/drm/i915/intel_display.c
> > +++ b/drivers/gpu/drm/i915/intel_display.c
> > @@ -2238,10 +2238,6 @@ intel_finish_fb(struct drm_framebuffer *old_fb)
> >  	bool was_interruptible = dev_priv->mm.interruptible;
> >  	int ret;
> >  
> > -	wait_event(dev_priv->pending_flip_queue,
> > -		   i915_reset_in_progress(&dev_priv->gpu_error) ||
> > -		   atomic_read(&obj->pending_flip) == 0);
> > -
> >  	/* Big Hammer, we also need to ensure that any pending
> >  	 * MI_WAIT_FOR_EVENT inside a user batch buffer on the
> >  	 * current scanout is retired before unpinning the old
> > @@ -2284,6 +2280,46 @@ static void intel_crtc_update_sarea_pos(struct drm_crtc *crtc, int x, int y)
> >  	}
> >  }
> >  
> > +static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
> > +{
> > +	struct drm_device *dev = crtc->dev;
> > +	struct drm_i915_private *dev_priv = dev->dev_private;
> > +	unsigned long flags;
> > +	bool pending;
> > +
> > +	if (i915_reset_in_progress(&dev_priv->gpu_error))
> > +		return false;
> 
> This check is not enough, since a full gpu reset might have happened while
> we didn't look. Which means that we'll still be waiting forever. To really
> close all gaps and races I think we need the full multi-state transitions
> like it's already implemented in __wait_seqno (minus the first kick, since
> no one is holding the dev->struct_mutex while waiting for a pageflip to
> complete). So
> - we need a reset_counter here like in wait_seqno
> - need to reset the unpin_work state (and fire off any pending drm events
>   while at it) to unblock kernel waiters and userspace
> 
> Also: igt test highly preferred ;-) Best would be to convert the hangman
> test over to the subtest infrastructure (maybe easier when first ported
> to python and using argparse, but I don't mind bash's getopt support) and then
> adding a new subtestcase which exercises flips vs. hangs.

I'm going to plead innocent on this one. I just copied code around, but
you wrote it ;)

-- 
Ville Syrjälä
Intel OTC

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base()
  2012-11-28 20:51   ` Daniel Vetter
  2012-11-29 11:26     ` Ville Syrjälä
@ 2012-11-30 14:01     ` Ville Syrjälä
  2012-11-30 15:44       ` Daniel Vetter
  1 sibling, 1 reply; 13+ messages in thread
From: Ville Syrjälä @ 2012-11-30 14:01 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Wed, Nov 28, 2012 at 09:51:18PM +0100, Daniel Vetter wrote:
> On Tue, Nov 27, 2012 at 08:34:56PM +0200, ville.syrjala@linux.intel.com wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > intel_pipe_set_base() never actually waited for any pending page flips
> > on the CRTC. It looks like it tried to, by calling intel_finish_fb() on
> > the current front buffer. But the pending flips were actually tracked
> > in the BO of the previous front buffer, so the call to intel_finish_fb()
> > never did anything useful.
> > 
> > intel_crtc_wait_for_pending_flips() is the current _working_ way to wait
> > for pending page flips. So use it in intel_pipe_set_base() too. Some
> > refactoring was necessary to avoid locking struct_mutex twice.
> > 
> > v2: Shuffle the code around so that intel_crtc_wait_for_pending_flips()
> >     just wraps intel_crtc_wait_for_pending_flips_locked().
> > 
> > v3: Kill leftover wait_event() in intel_finish_fb()
> > 
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/intel_display.c |   80 +++++++++++++++++----------------
> >  1 files changed, 41 insertions(+), 39 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > index 8c2d810..ea710af 100644
> > --- a/drivers/gpu/drm/i915/intel_display.c
> > +++ b/drivers/gpu/drm/i915/intel_display.c
> > @@ -2238,10 +2238,6 @@ intel_finish_fb(struct drm_framebuffer *old_fb)
> >  	bool was_interruptible = dev_priv->mm.interruptible;
> >  	int ret;
> >  
> > -	wait_event(dev_priv->pending_flip_queue,
> > -		   i915_reset_in_progress(&dev_priv->gpu_error) ||
> > -		   atomic_read(&obj->pending_flip) == 0);
> > -
> >  	/* Big Hammer, we also need to ensure that any pending
> >  	 * MI_WAIT_FOR_EVENT inside a user batch buffer on the
> >  	 * current scanout is retired before unpinning the old
> > @@ -2284,6 +2280,46 @@ static void intel_crtc_update_sarea_pos(struct drm_crtc *crtc, int x, int y)
> >  	}
> >  }
> >  
> > +static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
> > +{
> > +	struct drm_device *dev = crtc->dev;
> > +	struct drm_i915_private *dev_priv = dev->dev_private;
> > +	unsigned long flags;
> > +	bool pending;
> > +
> > +	if (i915_reset_in_progress(&dev_priv->gpu_error))
> > +		return false;
> 
> This check is not enough, since a full gpu reset might have happened while
> we didn't look. Which means that we'll still be waiting forever. To really
> close all gaps and races I think we need the full multi-state transitions
> like it's already implemented in __wait_seqno (minus the first kick, since
> no one is holding the dev->struct_mutex while waiting for a pageflip to
> complete). So
> - we need a reset_counter here like in wait_seqno
> - need to reset the unpin_work state (and fire off any pending drm events
>   while at it) to unblock kernel waiters and userspace

I had another look at this, and I think you're aiming for something other
than what this patch is trying to do.

The thing is that we are holding struct_mutex while waiting for pending
flips here, so we just need to get out asap to allow the reset work to
do its thing.

Completing pending page flips when a reset occurs is a separate issue.
This code never did any of that, and I don't see why it should. We
would need to complete them anyway regardless of whether anyone is
currently waiting for them.

Perhaps we can just call intel_finish_page_flip() from the reset
code and call it a day. I suppose doing that could end up unpinning
the current scanout buffer in case the display registers were never
written with the new values. One solution for that would be to always
do a set_base() after a reset. That would make sure we end up showing
the latest buffer at least, although the contents could be total
garbage.

-- 
Ville Syrjälä
Intel OTC

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base()
  2012-11-30 14:01     ` Ville Syrjälä
@ 2012-11-30 15:44       ` Daniel Vetter
  2012-12-03 16:42         ` Ville Syrjälä
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel Vetter @ 2012-11-30 15:44 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx

On Fri, Nov 30, 2012 at 3:01 PM, Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
> I had another look at this, and I think you're aiming for something other
> than what this patch is trying to do.

Oh, I know that I'm trying to volunteer you to do a bit more ... it's
why I'm the bastard maintainer ;-)

> The thing is that we are holding struct_mutex while waiting for pending
> flips here, so we just need to get out asap to allow the reset work to
> do its thing.

Hm, I didn't notice that we're holding the struct_mutex there, so I
guess we indeed need to bail out. Or rework the finish_fb vs. gpu hang
logic to not require the struct_mutex in finish_fb ...

> Completing pending page flips when a reset occurs is a separate issue.
> This code never did any of that, and I don't see why it should. We
> would need to complete them anyway regardless of whether anyone is
> currently waiting for them.

Yeah, that's my idea, but I haven't thought through the details (see my
ignorance with locking details above ...).

> Perhaps we can just call intel_finish_page_flip() from the reset
> code and call it a day. I suppose doing that could end up unpinning
> the current scanout buffer in case the display registers were never
> written with the new values. One solution for that would be to always
> do a set_base() after a reset. That would make sure we end up showing
> the latest buffer at least, although the contents could be total
> garbage.

Hm, I think first we should aim to no longer hang either the kernel or
leave stuck userspace (since the drm_event never shows up) behind. Atm
a gpu hang is allowed to corrupt pretty much everything. Later on,
when we successfully avoid the hung kernel/userspace problems we can
worry about making it look decent. Judging by how long it took to get
basic reset working somewhat okish, it'll take a while. And we need
some form of testcase to exercise the code.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base()
  2012-11-30 15:44       ` Daniel Vetter
@ 2012-12-03 16:42         ` Ville Syrjälä
  0 siblings, 0 replies; 13+ messages in thread
From: Ville Syrjälä @ 2012-12-03 16:42 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Fri, Nov 30, 2012 at 04:44:04PM +0100, Daniel Vetter wrote:
> On Fri, Nov 30, 2012 at 3:01 PM, Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> > I had another look at this, and I think you're aiming for something other
> > than what this patch is trying to do.
> 
> Oh, I know that I'm trying to volunteer you to do a bit more ... it's
> why I'm the bastard maintainer ;-)
> 
> > The thing is that we are holding struct_mutex while waiting for pending
> > flips here, so we just need to get out asap to allow the reset work to
> > do its thing.
> 
> Hm, I didn't notice that we're holding the struct_mutex there, so I
> guess we indeed need to bail out. Or rework the finish_fb vs. gpu hang
> logic to not require the struct_mutex in finish_fb ...
> 
> > Completing pending page flips when a reset occurs is a separate issue.
> > This code never did any of that, and I don't see why it should. We
> > would need to complete them anyway regardless of whether anyone is
> > currently waiting for them.
> 
> Yeah, that's my idea, but I haven't thought through the details (see my
> ignorance with locking details above ...).
> 
> > Perhaps we can just call intel_finish_page_flip() from the reset
> > code and call it a day. I suppose doing that could end up unpinning
> > the current scanout buffer in case the display registers were never
> > written with the new values. One solution for that would be to always
> > do a set_base() after a reset. That would make sure we end up showing
> > the latest buffer at least, although the contents could be total
> > garbage.
> 
> Hm, I think first we should aim to no longer hang either the kernel or
> leave stuck userspace (since the drm_event never shows up) behind. Atm
> a gpu hang is allowed to corrupt pretty much everything. Later on,
> when we successfully avoid the hung kernel/userspace problems we can
> worry about making it look decent. Judging by how long it took to get
> basic reset working somewhat okish, it'll take a while. And we need
> some form of testcase to exercise the code.

At least the current patch would avoid having the process stuck in
D state. So I think it's better than nothing.

I do have other things on my plate, so I don't want to spend any more
time on this currently.

-- 
Ville Syrjälä
Intel OTC

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2012-12-03 16:42 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-11-27 18:34 [PATCH 0/4] drm/i915: ring and flip leftovers ville.syrjala
2012-11-27 18:34 ` [PATCH v2 1/4] drm/i915: Don't allow ring tail to reach the same cacheline as head ville.syrjala
2012-11-28 20:45   ` Daniel Vetter
2012-11-29 11:25     ` Ville Syrjälä
2012-11-27 18:34 ` [PATCH v3 2/4] drm/i915: Wait for pending flips in intel_pipe_set_base() ville.syrjala
2012-11-28 20:51   ` Daniel Vetter
2012-11-29 11:26     ` Ville Syrjälä
2012-11-30 14:01     ` Ville Syrjälä
2012-11-30 15:44       ` Daniel Vetter
2012-12-03 16:42         ` Ville Syrjälä
2012-11-27 18:34 ` [PATCH 3/4] drm/i915: Wake up pending flip waiters when the GPU hangs ville.syrjala
2012-11-27 18:34 ` [PATCH v2 4/4] drm/i915: Kill i915_gem_execbuffer_wait_for_flips() ville.syrjala
2012-11-28 20:56   ` Daniel Vetter

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).