From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Cc: Andy Whitcroft <apw@canonical.com>
Subject: [PATCH 29/30] drm/i915: Track fence setup separately from fenced object lifetime
Date: Tue, 12 Apr 2011 21:31:57 +0100	[thread overview]
Message-ID: <1302640318-23165-30-git-send-email-chris@chris-wilson.co.uk> (raw)
In-Reply-To: <1302640318-23165-1-git-send-email-chris@chris-wilson.co.uk>

This fixes a bookkeeping error causing an OOPS whilst waiting for an
object to finish using a fence. Now we can simply wait for the fence to
be written independently of the objects currently inhabiting it (past,
present and future).
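
Concretely, the pending write is now tracked on the fence register
itself (setup_ring/setup_seqno), so waiting for it no longer depends on
which object the register currently serves. A condensed sketch of that
pattern, using the field and function names from the diff below (the
real code open-codes this inside i915_gem_object_get_fence(); the
helper name here is only for illustration):

	/* Sketch only: condensed from the diff below, not a drop-in hunk. */
	static int wait_for_fence_setup(struct drm_i915_fence_reg *reg,
					struct intel_ring_buffer *pipelined)
	{
		int ret;

		/* A write to this register is still in flight on setup_ring. */
		if (reg->setup_seqno && pipelined != reg->setup_ring) {
			ret = i915_wait_request(reg->setup_ring, reg->setup_seqno);
			if (ret)
				return ret;

			/* The register now holds its final value. */
			reg->setup_ring = NULL;
			reg->setup_seqno = 0;
		}

		return 0;
	}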

A large part of the change is to delay updating the fence bookkeeping
on the object (bo) until after we have successfully written, or queued
the write to, the register. This avoids the complication of having to
undo a partial change should we fail to pipeline the change.
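
To make that ordering concrete, the success path in
i915_gem_object_get_fence() now looks roughly like the outline below
(error handling and the gen-specific switch are elided; see the full
hunk further down):

	/* Outline only; the real code selects the *_write_fence_reg()
	 * variant by INTEL_INFO(dev)->gen and has proper error paths.
	 */
	ret = i965_write_fence_reg(obj, pipelined, regnum);
	if (ret)
		goto err;	/* nothing has been updated yet, so nothing to undo */

	if (pipelined) {
		/* The write is only queued: remember which request retires it. */
		reg->setup_seqno = i915_gem_next_request_seqno(pipelined);
		reg->setup_ring = pipelined;
	}

	/* Only now hand the register over to the new object. */
	list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
	reg->obj = obj;
	obj->fence_reg = regnum;
	obj->tiling_changed = false;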

Cc: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_drv.h |    1 +
 drivers/gpu/drm/i915/i915_gem.c |  155 ++++++++++++++++++++-------------------
 2 files changed, 82 insertions(+), 74 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c837e10..d1fadb8 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -129,6 +129,7 @@ struct drm_i915_master_private {
 struct drm_i915_fence_reg {
 	struct list_head lru_list;
 	struct drm_i915_gem_object *obj;
+	struct intel_ring_buffer *setup_ring;
 	uint32_t setup_seqno;
 };
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ca14a86..1949048 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1731,6 +1731,8 @@ i915_gem_object_move_to_inactive(struct drm_i915_gem_object *obj)
 	i915_gem_object_move_off_active(obj);
 	obj->fenced_gpu_access = false;
 
+	obj->last_fenced_seqno = 0;
+
 	obj->active = 0;
 	obj->pending_gpu_write = false;
 	drm_gem_object_unreference(&obj->base);
@@ -1896,7 +1898,6 @@ static void i915_gem_reset_fences(struct drm_device *dev)
 		reg->obj->fence_reg = I915_FENCE_REG_NONE;
 		reg->obj->fenced_gpu_access = false;
 		reg->obj->last_fenced_seqno = 0;
-		reg->obj->last_fenced_ring = NULL;
 		i915_gem_clear_fence_reg(dev, reg);
 	}
 }
@@ -2497,7 +2498,7 @@ i915_gem_object_flush_fence(struct drm_i915_gem_object *obj,
 
 	if (obj->fenced_gpu_access) {
 		if (obj->base.write_domain & I915_GEM_GPU_DOMAINS) {
-			ret = i915_gem_flush_ring(obj->last_fenced_ring,
+			ret = i915_gem_flush_ring(obj->ring,
 						  0, obj->base.write_domain);
 			if (ret)
 				return ret;
@@ -2508,17 +2509,22 @@ i915_gem_object_flush_fence(struct drm_i915_gem_object *obj,
 		obj->fenced_gpu_access = false;
 	}
 
+	if (obj->last_fenced_seqno &&
+	    ring_passed_seqno(obj->last_fenced_ring, obj->last_fenced_seqno))
+		obj->last_fenced_seqno = 0;
+
 	if (obj->last_fenced_seqno && pipelined != obj->last_fenced_ring) {
-		if (!ring_passed_seqno(obj->last_fenced_ring,
-				       obj->last_fenced_seqno)) {
-			ret = i915_wait_request(obj->last_fenced_ring,
-						obj->last_fenced_seqno);
-			if (ret)
-				return ret;
-		}
+		ret = i915_wait_request(obj->last_fenced_ring,
+					obj->last_fenced_seqno);
+		if (ret)
+			return ret;
 
+		/* Since last_fence_seqno can retire much earlier than
+		 * last_rendering_seqno, we track that here for efficiency.
+		 * (With a catch-all in move_to_inactive() to prevent very
+		 * old seqno from lying around.)
+		 */
 		obj->last_fenced_seqno = 0;
-		obj->last_fenced_ring = NULL;
 	}
 
 	/* Ensure that all CPU reads are completed before installing a fence
@@ -2585,7 +2591,7 @@ i915_find_fence_reg(struct drm_device *dev,
 			first = reg;
 
 		if (!pipelined ||
-		    !reg->obj->last_fenced_ring ||
+		    !reg->obj->last_fenced_seqno ||
 		    reg->obj->last_fenced_ring == pipelined) {
 			avail = reg;
 			break;
@@ -2602,7 +2608,6 @@ i915_find_fence_reg(struct drm_device *dev,
  * i915_gem_object_get_fence - set up a fence reg for an object
  * @obj: object to map through a fence reg
  * @pipelined: ring on which to queue the change, or NULL for CPU access
- * @interruptible: must we wait uninterruptibly for the register to retire?
  *
  * When mapping objects through the GTT, userspace wants to be able to write
  * to them without having to worry about swizzling if the object is tiled.
@@ -2610,6 +2615,10 @@ i915_find_fence_reg(struct drm_device *dev,
  * This function walks the fence regs looking for a free one for @obj,
  * stealing one if it can't find any.
  *
+ * Note: if two fence registers point to the same or overlapping memory region
+ * the results are undefined. This is even more fun with asynchronous updates
+ * via the GPU!
+ *
  * It then sets up the reg based on the object's properties: address, pitch
  * and tiling format.
  */
@@ -2620,6 +2629,7 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_fence_reg *reg;
+	struct drm_i915_gem_object *old = NULL;
 	int regnum;
 	int ret;
 
@@ -2629,45 +2639,21 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
 	/* Just update our place in the LRU if our fence is getting reused. */
 	if (obj->fence_reg != I915_FENCE_REG_NONE) {
 		reg = &dev_priv->fence_regs[obj->fence_reg];
-		list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
-
-		if (obj->tiling_changed) {
-			ret = i915_gem_object_flush_fence(obj, pipelined);
-			if (ret)
-				return ret;
-
-			if (!obj->fenced_gpu_access && !obj->last_fenced_seqno)
-				pipelined = NULL;
-
-			if (pipelined) {
-				reg->setup_seqno =
-					i915_gem_next_request_seqno(pipelined);
-				obj->last_fenced_seqno = reg->setup_seqno;
-				obj->last_fenced_ring = pipelined;
-			}
 
+		if (obj->tiling_changed)
 			goto update;
-		}
-
-		if (!pipelined) {
-			if (reg->setup_seqno) {
-				if (!ring_passed_seqno(obj->last_fenced_ring,
-						       reg->setup_seqno)) {
-					ret = i915_wait_request(obj->last_fenced_ring,
-								reg->setup_seqno);
-					if (ret)
-						return ret;
-				}
 
-				reg->setup_seqno = 0;
-			}
-		} else if (obj->last_fenced_ring &&
-			   obj->last_fenced_ring != pipelined) {
-			ret = i915_gem_object_flush_fence(obj, pipelined);
+		if (reg->setup_seqno && pipelined != reg->setup_ring) {
+			ret = i915_wait_request(reg->setup_ring,
+						reg->setup_seqno);
 			if (ret)
 				return ret;
+
+			reg->setup_ring = 0;
+			reg->setup_seqno = 0;
 		}
 
+		list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
 		return 0;
 	}
 
@@ -2675,47 +2661,43 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
 	if (reg == NULL)
 		return -ENOSPC;
 
-	ret = i915_gem_object_flush_fence(obj, pipelined);
-	if (ret)
-		return ret;
-
-	if (reg->obj) {
-		struct drm_i915_gem_object *old = reg->obj;
-
+	if ((old = reg->obj)) {
 		drm_gem_object_reference(&old->base);
 
 		if (old->tiling_mode)
 			i915_gem_release_mmap(old);
 
-		ret = i915_gem_object_flush_fence(old, pipelined);
-		if (ret) {
-			drm_gem_object_unreference(&old->base);
-			return ret;
-		}
+		ret = i915_gem_object_flush_fence(old, NULL); //pipelined);
+		if (ret)
+			goto err;
 
-		if (old->last_fenced_seqno == 0 && obj->last_fenced_seqno == 0)
-			pipelined = NULL;
+		/* Mark the fence register as in-use if pipelined */
+		reg->setup_ring = old->last_fenced_ring;
+		reg->setup_seqno = old->last_fenced_seqno;
+	}
 
-		old->fence_reg = I915_FENCE_REG_NONE;
-		old->last_fenced_ring = pipelined;
-		old->last_fenced_seqno =
-			pipelined ? i915_gem_next_request_seqno(pipelined) : 0;
+update:
+	ret = i915_gem_object_flush_fence(obj, pipelined);
+	if (ret)
+		goto err;
 
-		drm_gem_object_unreference(&old->base);
-	} else if (obj->last_fenced_seqno == 0)
-		pipelined = NULL;
+	if (reg->setup_seqno && pipelined != reg->setup_ring) {
+		ret = i915_wait_request(reg->setup_ring,
+					reg->setup_seqno);
+		if (ret)
+			goto err;
 
-	reg->obj = obj;
-	list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
-	obj->fence_reg = reg - dev_priv->fence_regs;
-	obj->last_fenced_ring = pipelined;
+		reg->setup_ring = 0;
+		reg->setup_seqno = 0;
+	}
 
-	reg->setup_seqno =
-		pipelined ? i915_gem_next_request_seqno(pipelined) : 0;
-	obj->last_fenced_seqno = reg->setup_seqno;
+	/* If we had a pipelined request, but there is no pending GPU access or
+	 * update to a fence register for this memory region, we can write
+	 * the new fence register immediately.
+	 */
+	if (obj->last_fenced_seqno == 0 && reg->setup_seqno == 0)
+		pipelined = NULL;
 
-update:
-	obj->tiling_changed = false;
 	regnum = reg - dev_priv->fence_regs;
 	switch (INTEL_INFO(dev)->gen) {
 	case 6:
@@ -2732,7 +2714,31 @@ update:
 		ret = i830_write_fence_reg(obj, pipelined, regnum);
 		break;
 	}
+	if (ret)
+		goto err;
+
+	if (pipelined) {
+		reg->setup_seqno = i915_gem_next_request_seqno(pipelined);
+		reg->setup_ring = pipelined;
+		if (old) {
+			old->last_fenced_ring = pipelined;
+			old->last_fenced_seqno = reg->setup_seqno;
+		}
+	}
+
+	if (old) {
+		old->fence_reg = I915_FENCE_REG_NONE;
+		drm_gem_object_unreference(&old->base);
+	}
+
+	list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
+	reg->obj = obj;
+	obj->fence_reg = regnum;
+	obj->tiling_changed = false;
+	return 0;
 
+err:
+	drm_gem_object_unreference(&old->base);
 	return ret;
 }
 
@@ -2771,6 +2777,7 @@ i915_gem_clear_fence_reg(struct drm_device *dev,
 
 	list_del_init(&reg->lru_list);
 	reg->obj = NULL;
+	reg->setup_ring = 0;
 	reg->setup_seqno = 0;
 }
 
-- 
1.7.4.1

Thread overview: 71+ messages
2011-04-12 20:31 i915 next Chris Wilson
2011-04-12 20:31 ` [PATCH 01/30] drm/i915: Split the crtc_mode_set function along HAS_PCH_SPLIT() lines Chris Wilson
2011-04-12 20:31 ` [PATCH 02/30] drm/i915: Move the vblank pre/post modeset to the common crtc_mode_set Chris Wilson
2011-04-12 20:31 ` [PATCH 03/30] drm/i915: Remove the PCH paths from the pre-Ironlake crtc_mode_set() Chris Wilson
2011-04-12 20:31 ` [PATCH 04/30] drm/i915: Drop the eDP paths from the pre-Ironlake crtc_mode_set Chris Wilson
2011-04-12 20:31 ` [PATCH 05/30] drm/i915: Drop the remaining bit of Ironlake code from i9xx_crtc_mode_set() Chris Wilson
2011-04-12 20:31 ` [PATCH 06/30] drm/i915: Drop non-HAS_PCH_SPLIT() code from ironlake_crtc_mode_set() Chris Wilson
2011-04-12 20:31 ` [PATCH 07/30] drm/i915: Drop remaining pre-Ironlake " Chris Wilson
2011-04-12 20:31 ` [PATCH 08/30] drm/i915: Clean up leftover DPLL and LVDS register choice from pch split Chris Wilson
2011-04-12 20:31 ` [PATCH 09/30] drm/i915: Fold the DPLL limit defines into the structs that use them Chris Wilson
2011-04-12 20:31 ` [PATCH 10/30] drm/i915: fix ilk rc6 teardown locking Chris Wilson
2011-04-12 20:31 ` [PATCH 11/30] drm/1915: ringbuffer wait for idle function Chris Wilson
2011-04-12 20:31 ` [PATCH 12/30] drm/i915: fix rc6 initialization on Ironlake Chris Wilson
2011-04-12 20:31 ` [PATCH 13/30] drm/i915: re-enable rc6 for ironlake Chris Wilson
2011-04-12 20:31 ` [PATCH 14/30] drm/i915: use i915_enable_rc6 on SNB too Chris Wilson
2011-04-12 20:31 ` [PATCH 15/30] drm/i915: Rename agp_type to cache_level Chris Wilson
2011-04-13 15:57   ` Daniel Vetter
2011-04-12 20:31 ` [PATCH 16/30] drm/i915: Mark the cursor and the overlay as being part of the display planes Chris Wilson
2011-04-13 16:00   ` Daniel Vetter
2011-04-12 20:31 ` [PATCH 17/30] drm/i915: Do not clflush snooped objects Chris Wilson
2011-04-13 16:04   ` Daniel Vetter
2011-04-13 17:34     ` Chris Wilson
2011-04-13 20:47       ` Daniel Vetter
2011-04-12 20:31 ` [PATCH 18/30] drm/i915: Add an interface to dynamically change the cache level Chris Wilson
2011-04-13 18:59   ` Daniel Vetter
2011-04-13 19:21     ` Chris Wilson
2011-04-13 22:27     ` [PATCH 1/3] drm/i915: Introduce i915_gem_object_finish_gpu() Chris Wilson
2011-04-13 22:27       ` [PATCH 2/3] drm/i915: Introduce i915_gem_object_finish_gtt() Chris Wilson
2011-04-13 22:27       ` [PATCH 3/3] drm/i915: Add an interface to dynamically change the cache level Chris Wilson
2011-04-12 20:31 ` [PATCH 19/30] drm/i915: Use the uncached domain for the display planes v2 Chris Wilson
2011-04-12 20:31 ` [PATCH 20/30] drm/i915: Use the CPU domain for snooped pwrites Chris Wilson
2011-04-12 20:31 ` [PATCH 21/30] drm/i915: Redirect GTT mappings to the CPU page if cache-coherent Chris Wilson
2011-04-13 15:57   ` Eric Anholt
2011-04-13 16:19     ` Chris Wilson
2011-04-13 18:35     ` [PATCH] " Chris Wilson
2011-04-13 19:13       ` Daniel Vetter
2011-04-13 19:47         ` Chris Wilson
2011-04-13 20:26         ` [PATCH] drm/i915: Prevent mmap access through the GTT of snooped pages Chris Wilson
2011-04-13 20:51           ` Daniel Vetter
2011-04-12 20:31 ` [PATCH 22/30] drm/i915: Use the LLC mode on gen6 for everything but display Chris Wilson
2011-04-13 19:15   ` Daniel Vetter
2011-04-12 20:31 ` [PATCH 23/30] drm/i915: Cache GT fifo count for SandyBridge Chris Wilson
2011-04-14  2:21   ` Ben Widawsky
2011-04-14  4:48     ` Ben Widawsky
2011-04-12 20:31 ` [PATCH 24/30] drm/i915: Refactor pwrite/pread to use single copy of get_user_pages Chris Wilson
2011-04-13 15:59   ` Eric Anholt
2011-04-13 17:24     ` Chris Wilson
2011-04-13 19:35       ` Eric Anholt
2011-04-13 19:26   ` Daniel Vetter
2011-04-13 19:56     ` Chris Wilson
2011-04-13 20:56       ` Daniel Vetter
2011-04-14 23:23       ` Ben Widawsky
2011-04-15  9:48         ` Paul Menzel
2011-04-16  8:03           ` Chris Wilson
2011-04-12 20:31 ` [PATCH 25/30] drm/i915: s/addr & ~PAGE_MASK/offset_in_page(addr)/ Chris Wilson
2011-04-12 20:31 ` [PATCH 26/30] drm/i915: Maintain fenced gpu access until we flush the fence Chris Wilson
2011-04-13 19:37   ` Daniel Vetter
2011-04-13 20:15     ` Chris Wilson
2011-04-13 20:58       ` Daniel Vetter
2011-04-13 21:37         ` Chris Wilson
2011-04-12 20:31 ` [PATCH 27/30] drm/i915: Invalidate fenced read domains upon flush Chris Wilson
2011-04-13 19:43   ` Daniel Vetter
2011-04-13 20:38     ` Chris Wilson
2011-04-13 21:02       ` Daniel Vetter
2011-04-12 20:31 ` [PATCH 28/30] drm/i915: Pass the fence register number to be written Chris Wilson
2011-04-13 19:48   ` Daniel Vetter
2011-04-12 20:31 ` Chris Wilson [this message]
2011-04-13 20:42   ` [PATCH 29/30] drm/i915: Track fence setup separately from fenced object lifetime Daniel Vetter
2011-04-13 21:56     ` Chris Wilson
2011-04-12 20:31 ` [PATCH 30/30] drm/i915: Only print out the actual number of fences for i915_error_state Chris Wilson
2011-04-13  7:26 ` i915 next Chris Wilson
