intel-gfx.lists.freedesktop.org archive mirror
* Re: [PATCH 2/4] drm/i915: kill obj->gtt_offset
  2011-04-15 18:57 ` [PATCH 2/4] drm/i915: kill obj->gtt_offset Daniel Vetter
@ 2011-04-15 18:56   ` Chris Wilson
  2011-04-15 19:19     ` Daniel Vetter
  0 siblings, 1 reply; 10+ messages in thread
From: Chris Wilson @ 2011-04-15 18:56 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

On Fri, 15 Apr 2011 20:57:36 +0200, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> Yet another massive round of sed'ing.

The only hitch here is that in the vmap code obj->gtt_offset !=
obj->gtt_space.offset.

There, obj->gtt_space.offset is the base of the page-aligned region allocated
in the GTT, and obj->gtt_offset is obj->gtt_space.offset +
offset_in_page(user_addr).

I haven't checked, but is obj->gtt_space treated as immutable by the caller,
i.e. can we modify obj->gtt_space.offset and have drm_mm still function
correctly? Should we bake the page-alignment assumption into drm_mm? Or
simply undo the page_offset when releasing the gtt_space? The last option
sounds like it would work best.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


* [PATCH 0/4] embed drm_mm_node
@ 2011-04-15 18:57 Daniel Vetter
  2011-04-15 18:57 ` [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object Daniel Vetter
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Daniel Vetter @ 2011-04-15 18:57 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

Hi all,

Rebased version of my "embed drm_mm_node" series that missed the last
merge window. Applies on rather current -next-proposed.

As usual, comments highly welcome.

Yours, Daniel

Daniel Vetter (4):
  drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object
  drm/i915: kill obj->gtt_offset
  drm/i915: kill gtt_list
  drm/i915: use drm_mm_for_each_scanned_node_reverse helper

 drivers/gpu/drm/i915/i915_debugfs.c        |   51 +++++++-----
 drivers/gpu/drm/i915/i915_drv.h            |   13 +---
 drivers/gpu/drm/i915/i915_gem.c            |  125 ++++++++++++----------------
 drivers/gpu/drm/i915/i915_gem_debug.c      |    6 +-
 drivers/gpu/drm/i915/i915_gem_evict.c      |   40 ++++-----
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   18 ++--
 drivers/gpu/drm/i915/i915_gem_gtt.c        |   14 ++--
 drivers/gpu/drm/i915/i915_gem_tiling.c     |   14 ++--
 drivers/gpu/drm/i915/i915_irq.c            |   10 +-
 drivers/gpu/drm/i915/i915_trace.h          |    8 +-
 drivers/gpu/drm/i915/intel_display.c       |   22 +++---
 drivers/gpu/drm/i915/intel_fb.c            |    6 +-
 drivers/gpu/drm/i915/intel_overlay.c       |   14 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c    |   12 ++--
 14 files changed, 166 insertions(+), 187 deletions(-)

-- 
1.7.4.1


* [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object
  2011-04-15 18:57 [PATCH 0/4] embed drm_mm_node Daniel Vetter
@ 2011-04-15 18:57 ` Daniel Vetter
  2011-04-15 23:01   ` Dave Airlie
  2011-04-15 18:57 ` [PATCH 2/4] drm/i915: kill obj->gtt_offset Daniel Vetter
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2011-04-15 18:57 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |   16 +++---
 drivers/gpu/drm/i915/i915_drv.h            |    2 +-
 drivers/gpu/drm/i915/i915_gem.c            |   75 +++++++++++----------------
 drivers/gpu/drm/i915/i915_gem_evict.c      |    6 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |    8 ++--
 drivers/gpu/drm/i915/i915_gem_gtt.c        |   10 ++--
 drivers/gpu/drm/i915/i915_gem_tiling.c     |    4 +-
 drivers/gpu/drm/i915/i915_trace.h          |    8 ++--
 8 files changed, 58 insertions(+), 71 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 52d2306..ad94a12 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -135,9 +135,9 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		seq_printf(m, " (name: %d)", obj->base.name);
 	if (obj->fence_reg != I915_FENCE_REG_NONE)
 		seq_printf(m, " (fence: %d)", obj->fence_reg);
-	if (obj->gtt_space != NULL)
+	if (drm_mm_node_allocated(&obj->gtt_space))
 		seq_printf(m, " (gtt offset: %08x, size: %08x)",
-			   obj->gtt_offset, (unsigned int)obj->gtt_space->size);
+			   obj->gtt_offset, (unsigned int)obj->gtt_space.size);
 	if (obj->pin_mappable || obj->fault_mappable) {
 		char s[3], *t = s;
 		if (obj->pin_mappable)
@@ -198,7 +198,7 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
 		describe_obj(m, obj);
 		seq_printf(m, "\n");
 		total_obj_size += obj->base.size;
-		total_gtt_size += obj->gtt_space->size;
+		total_gtt_size += obj->gtt_space.size;
 		count++;
 	}
 	mutex_unlock(&dev->struct_mutex);
@@ -210,10 +210,10 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
 
 #define count_objects(list, member) do { \
 	list_for_each_entry(obj, list, member) { \
-		size += obj->gtt_space->size; \
+		size += obj->gtt_space.size; \
 		++count; \
 		if (obj->map_and_fenceable) { \
-			mappable_size += obj->gtt_space->size; \
+			mappable_size += obj->gtt_space.size; \
 			++mappable_count; \
 		} \
 	} \
@@ -266,11 +266,11 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 	size = count = mappable_size = mappable_count = 0;
 	list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
 		if (obj->fault_mappable) {
-			size += obj->gtt_space->size;
+			size += obj->gtt_space.size;
 			++count;
 		}
 		if (obj->pin_mappable) {
-			mappable_size += obj->gtt_space->size;
+			mappable_size += obj->gtt_space.size;
 			++mappable_count;
 		}
 	}
@@ -306,7 +306,7 @@ static int i915_gem_gtt_info(struct seq_file *m, void* data)
 		describe_obj(m, obj);
 		seq_printf(m, "\n");
 		total_obj_size += obj->base.size;
-		total_gtt_size += obj->gtt_space->size;
+		total_gtt_size += obj->gtt_space.size;
 		count++;
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1a74af7..21ac706 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -721,7 +721,7 @@ struct drm_i915_gem_object {
 	struct drm_gem_object base;
 
 	/** Current space allocated to this object in the GTT, if any. */
-	struct drm_mm_node *gtt_space;
+	struct drm_mm_node gtt_space;
 	struct list_head gtt_list;
 
 	/** This object's place on the active/flushing/inactive lists */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6dd6250..429c5d3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -122,7 +122,8 @@ int i915_mutex_lock_interruptible(struct drm_device *dev)
 static inline bool
 i915_gem_object_is_inactive(struct drm_i915_gem_object *obj)
 {
-	return obj->gtt_space && !obj->active && obj->pin_count == 0;
+	return drm_mm_node_allocated(&obj->gtt_space)
+		&& !obj->active && obj->pin_count == 0;
 }
 
 void i915_gem_do_init(struct drm_device *dev,
@@ -176,7 +177,7 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 	pinned = 0;
 	mutex_lock(&dev->struct_mutex);
 	list_for_each_entry(obj, &dev_priv->mm.pinned_list, mm_list)
-		pinned += obj->gtt_space->size;
+		pinned += obj->gtt_space.size;
 	mutex_unlock(&dev->struct_mutex);
 
 	args->aper_size = dev_priv->mm.gtt_total;
@@ -1000,7 +1001,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	 */
 	if (obj->phys_obj)
 		ret = i915_gem_phys_pwrite(dev, obj, args, file);
-	else if (obj->gtt_space &&
+	else if (drm_mm_node_allocated(&obj->gtt_space) &&
 		 obj->cache_level == I915_CACHE_NONE &&
 		 obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
 		ret = i915_gem_object_pin(obj, 0, true);
@@ -1227,7 +1228,7 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		if (ret)
 			goto unlock;
 	}
-	if (!obj->gtt_space) {
+	if (!drm_mm_node_allocated(&obj->gtt_space)) {
 		ret = i915_gem_object_bind_to_gtt(obj, 0, true);
 		if (ret)
 			goto unlock;
@@ -2194,7 +2195,7 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 {
 	int ret = 0;
 
-	if (obj->gtt_space == NULL)
+	if (!drm_mm_node_allocated(&obj->gtt_space))
 		return 0;
 
 	if (obj->pin_count != 0) {
@@ -2243,8 +2244,7 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 	/* Avoid an unnecessary call to unbind on rebind. */
 	obj->map_and_fenceable = true;
 
-	drm_mm_put_block(obj->gtt_space);
-	obj->gtt_space = NULL;
+	drm_mm_remove_node(&obj->gtt_space);
 	obj->gtt_offset = 0;
 
 	if (i915_gem_object_is_purgeable(obj))
@@ -2319,7 +2319,7 @@ static int sandybridge_write_fence_reg(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	u32 size = obj->gtt_space->size;
+	u32 size = obj->gtt_space.size;
 	int regnum = obj->fence_reg;
 	uint64_t val;
 
@@ -2356,7 +2356,7 @@ static int i965_write_fence_reg(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	u32 size = obj->gtt_space->size;
+	u32 size = obj->gtt_space.size;
 	int regnum = obj->fence_reg;
 	uint64_t val;
 
@@ -2391,7 +2391,7 @@ static int i915_write_fence_reg(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	u32 size = obj->gtt_space->size;
+	u32 size = obj->gtt_space.size;
 	u32 fence_reg, val, pitch_val;
 	int tile_width;
 
@@ -2445,7 +2445,7 @@ static int i830_write_fence_reg(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	u32 size = obj->gtt_space->size;
+	u32 size = obj->gtt_space.size;
 	int regnum = obj->fence_reg;
 	uint32_t val;
 	uint32_t pitch_val;
@@ -2779,7 +2779,6 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct drm_mm_node *free_space;
 	gfp_t gfpmask = __GFP_NORETRY | __GFP_NOWARN;
 	u32 size, fence_size, fence_alignment, unfenced_alignment;
 	bool mappable, fenceable;
@@ -2815,27 +2814,17 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 
  search_free:
 	if (map_and_fenceable)
-		free_space =
-			drm_mm_search_free_in_range(&dev_priv->mm.gtt_space,
+		ret =
+			drm_mm_insert_node_in_range(&dev_priv->mm.gtt_space,
+						    &obj->gtt_space,
 						    size, alignment, 0,
-						    dev_priv->mm.gtt_mappable_end,
-						    0);
+						    dev_priv->mm.gtt_mappable_end);
 	else
-		free_space = drm_mm_search_free(&dev_priv->mm.gtt_space,
-						size, alignment, 0);
-
-	if (free_space != NULL) {
-		if (map_and_fenceable)
-			obj->gtt_space =
-				drm_mm_get_block_range_generic(free_space,
-							       size, alignment, 0,
-							       dev_priv->mm.gtt_mappable_end,
-							       0);
-		else
-			obj->gtt_space =
-				drm_mm_get_block(free_space, size, alignment);
-	}
-	if (obj->gtt_space == NULL) {
+		ret = drm_mm_insert_node(&dev_priv->mm.gtt_space,
+					 &obj->gtt_space,
+					 size, alignment);
+
+	if (ret != 0) {
 		/* If the gtt is empty and we're still having trouble
 		 * fitting our object in, we're out of memory.
 		 */
@@ -2849,8 +2838,7 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 
 	ret = i915_gem_object_get_pages_gtt(obj, gfpmask);
 	if (ret) {
-		drm_mm_put_block(obj->gtt_space);
-		obj->gtt_space = NULL;
+		drm_mm_remove_node(&obj->gtt_space);
 
 		if (ret == -ENOMEM) {
 			/* first try to reclaim some memory by clearing the GTT */
@@ -2874,8 +2862,7 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 	ret = i915_gem_gtt_bind_object(obj);
 	if (ret) {
 		i915_gem_object_put_pages_gtt(obj);
-		drm_mm_put_block(obj->gtt_space);
-		obj->gtt_space = NULL;
+		drm_mm_remove_node(&obj->gtt_space);
 
 		if (i915_gem_evict_everything(dev, false))
 			return ret;
@@ -2893,11 +2880,11 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 	BUG_ON(obj->base.read_domains & I915_GEM_GPU_DOMAINS);
 	BUG_ON(obj->base.write_domain & I915_GEM_GPU_DOMAINS);
 
-	obj->gtt_offset = obj->gtt_space->start;
+	obj->gtt_offset = obj->gtt_space.start;
 
 	fenceable =
-		obj->gtt_space->size == fence_size &&
-		(obj->gtt_space->start & (fence_alignment -1)) == 0;
+		obj->gtt_space.size == fence_size &&
+		(obj->gtt_space.start & (fence_alignment -1)) == 0;
 
 	mappable =
 		obj->gtt_offset + obj->base.size <= dev_priv->mm.gtt_mappable_end;
@@ -3006,7 +2993,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
 	int ret;
 
 	/* Not valid to be called on unbound objects. */
-	if (obj->gtt_space == NULL)
+	if (!drm_mm_node_allocated(&obj->gtt_space))
 		return -EINVAL;
 
 	if (obj->base.write_domain == I915_GEM_DOMAIN_GTT)
@@ -3066,7 +3053,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 		return -EBUSY;
 	}
 
-	if (obj->gtt_space) {
+	if (drm_mm_node_allocated(&obj->gtt_space)) {
 		ret = i915_gem_object_finish_gpu(obj);
 		if (ret)
 			return ret;
@@ -3421,7 +3408,7 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj,
 	BUG_ON(obj->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT);
 	WARN_ON(i915_verify_lists(dev));
 
-	if (obj->gtt_space != NULL) {
+	if (drm_mm_node_allocated(&obj->gtt_space)) {
 		if ((alignment && obj->gtt_offset & (alignment - 1)) ||
 		    (map_and_fenceable && !obj->map_and_fenceable)) {
 			WARN(obj->pin_count,
@@ -3437,7 +3424,7 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj,
 		}
 	}
 
-	if (obj->gtt_space == NULL) {
+	if (!drm_mm_node_allocated(&obj->gtt_space)) {
 		ret = i915_gem_object_bind_to_gtt(obj, alignment,
 						  map_and_fenceable);
 		if (ret)
@@ -3463,7 +3450,7 @@ i915_gem_object_unpin(struct drm_i915_gem_object *obj)
 
 	WARN_ON(i915_verify_lists(dev));
 	BUG_ON(obj->pin_count == 0);
-	BUG_ON(obj->gtt_space == NULL);
+	BUG_ON(!drm_mm_node_allocated(&obj->gtt_space));
 
 	if (--obj->pin_count == 0) {
 		if (!obj->active)
@@ -3668,7 +3655,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 
 	/* if the object is no longer bound, discard its backing storage */
 	if (i915_gem_object_is_purgeable(obj) &&
-	    obj->gtt_space == NULL)
+	    !drm_mm_node_allocated(&obj->gtt_space))
 		i915_gem_object_truncate(obj);
 
 	args->retained = obj->madv != __I915_MADV_PURGED;
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index da05a26..1a9c96b 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -37,7 +37,7 @@ mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind)
 {
 	list_add(&obj->exec_list, unwind);
 	drm_gem_object_reference(&obj->base);
-	return drm_mm_scan_add_block(obj->gtt_space);
+	return drm_mm_scan_add_block(&obj->gtt_space);
 }
 
 int
@@ -135,7 +135,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 				       struct drm_i915_gem_object,
 				       exec_list);
 
-		ret = drm_mm_scan_remove_block(obj->gtt_space);
+		ret = drm_mm_scan_remove_block(&obj->gtt_space);
 		BUG_ON(ret);
 
 		list_del_init(&obj->exec_list);
@@ -156,7 +156,7 @@ found:
 		obj = list_first_entry(&unwind_list,
 				       struct drm_i915_gem_object,
 				       exec_list);
-		if (drm_mm_scan_remove_block(obj->gtt_space)) {
+		if (drm_mm_scan_remove_block(&obj->gtt_space)) {
 			list_move(&obj->exec_list, &eviction_list);
 			continue;
 		}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 20a4cc5..7774843 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -521,7 +521,7 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
 		list_for_each_entry(obj, objects, exec_list) {
 			struct drm_i915_gem_exec_object2 *entry = obj->exec_entry;
 			bool need_fence, need_mappable;
-			if (!obj->gtt_space)
+			if (!drm_mm_node_allocated(&obj->gtt_space))
 				continue;
 
 			need_fence =
@@ -554,7 +554,7 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
 				entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
 				obj->tiling_mode != I915_TILING_NONE;
 
-			if (!obj->gtt_space) {
+			if (!drm_mm_node_allocated(&obj->gtt_space)) {
 				bool need_mappable =
 					entry->relocation_count ? true : need_fence;
 
@@ -585,7 +585,7 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
 
 		/* Decrement pin count for bound objects */
 		list_for_each_entry(obj, objects, exec_list) {
-			if (obj->gtt_space)
+			if (drm_mm_node_allocated(&obj->gtt_space))
 				i915_gem_object_unpin(obj);
 		}
 
@@ -607,7 +607,7 @@ err:
 			 struct drm_i915_gem_object,
 			 exec_list);
 	while (objects != &obj->exec_list) {
-		if (obj->gtt_space)
+		if (drm_mm_node_allocated(&obj->gtt_space))
 			i915_gem_object_unpin(obj);
 
 		obj = list_entry(obj->exec_list.prev,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 7a709cd..bca5a97 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -83,10 +83,10 @@ int i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj)
 
 		intel_gtt_insert_sg_entries(obj->sg_list,
 					    obj->num_sg,
-					    obj->gtt_space->start >> PAGE_SHIFT,
+					    obj->gtt_space.start >> PAGE_SHIFT,
 					    agp_type);
 	} else
-		intel_gtt_insert_pages(obj->gtt_space->start >> PAGE_SHIFT,
+		intel_gtt_insert_pages(obj->gtt_space.start >> PAGE_SHIFT,
 				       obj->base.size >> PAGE_SHIFT,
 				       obj->pages,
 				       agp_type);
@@ -106,10 +106,10 @@ void i915_gem_gtt_rebind_object(struct drm_i915_gem_object *obj,
 
 		intel_gtt_insert_sg_entries(obj->sg_list,
 					    obj->num_sg,
-					    obj->gtt_space->start >> PAGE_SHIFT,
+					    obj->gtt_space.start >> PAGE_SHIFT,
 					    agp_type);
 	} else
-		intel_gtt_insert_pages(obj->gtt_space->start >> PAGE_SHIFT,
+		intel_gtt_insert_pages(obj->gtt_space.start >> PAGE_SHIFT,
 				       obj->base.size >> PAGE_SHIFT,
 				       obj->pages,
 				       agp_type);
@@ -117,7 +117,7 @@ void i915_gem_gtt_rebind_object(struct drm_i915_gem_object *obj,
 
 void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj)
 {
-	intel_gtt_clear_range(obj->gtt_space->start >> PAGE_SHIFT,
+	intel_gtt_clear_range(obj->gtt_space.start >> PAGE_SHIFT,
 			      obj->base.size >> PAGE_SHIFT);
 
 	if (obj->sg_list) {
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index dfb682b..e894a81 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -264,7 +264,7 @@ i915_gem_object_fence_ok(struct drm_i915_gem_object *obj, int tiling_mode)
 	while (size < obj->base.size)
 		size <<= 1;
 
-	if (obj->gtt_space->size != size)
+	if (obj->gtt_space.size != size)
 		return false;
 
 	if (obj->gtt_offset & (size - 1))
@@ -349,7 +349,7 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 		i915_gem_release_mmap(obj);
 
 		obj->map_and_fenceable =
-			obj->gtt_space == NULL ||
+			!drm_mm_node_allocated(&obj->gtt_space) ||
 			(obj->gtt_offset + obj->base.size <= dev_priv->mm.gtt_mappable_end &&
 			 i915_gem_object_fence_ok(obj, args->tiling_mode));
 
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index d623fef..59aa5ad 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -46,8 +46,8 @@ TRACE_EVENT(i915_gem_object_bind,
 
 	    TP_fast_assign(
 			   __entry->obj = obj;
-			   __entry->offset = obj->gtt_space->start;
-			   __entry->size = obj->gtt_space->size;
+			   __entry->offset = obj->gtt_space.start;
+			   __entry->size = obj->gtt_space.size;
 			   __entry->mappable = mappable;
 			   ),
 
@@ -68,8 +68,8 @@ TRACE_EVENT(i915_gem_object_unbind,
 
 	    TP_fast_assign(
 			   __entry->obj = obj;
-			   __entry->offset = obj->gtt_space->start;
-			   __entry->size = obj->gtt_space->size;
+			   __entry->offset = obj->gtt_space.start;
+			   __entry->size = obj->gtt_space.size;
 			   ),
 
 	    TP_printk("obj=%p, offset=%08x size=%x",
-- 
1.7.4.1


* [PATCH 2/4] drm/i915: kill obj->gtt_offset
  2011-04-15 18:57 [PATCH 0/4] embed drm_mm_node Daniel Vetter
  2011-04-15 18:57 ` [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object Daniel Vetter
@ 2011-04-15 18:57 ` Daniel Vetter
  2011-04-15 18:56   ` Chris Wilson
  2011-04-15 18:57 ` [PATCH 3/4] drm/i915: kill gtt_list Daniel Vetter
  2011-04-15 18:57 ` [PATCH 4/4] drm/i915: use drm_mm_for_each_scanned_node_reverse helper Daniel Vetter
  3 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2011-04-15 18:57 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

Yet another massive round of sed'ing.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |   12 +++---
 drivers/gpu/drm/i915/i915_drv.h            |    7 ----
 drivers/gpu/drm/i915/i915_gem.c            |   48 +++++++++++++--------------
 drivers/gpu/drm/i915/i915_gem_debug.c      |    6 ++--
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   10 +++---
 drivers/gpu/drm/i915/i915_gem_tiling.c     |   10 +++---
 drivers/gpu/drm/i915/i915_irq.c            |   10 +++---
 drivers/gpu/drm/i915/intel_display.c       |   22 ++++++------
 drivers/gpu/drm/i915/intel_fb.c            |    6 ++--
 drivers/gpu/drm/i915/intel_overlay.c       |   14 ++++----
 drivers/gpu/drm/i915/intel_ringbuffer.c    |   12 +++---
 11 files changed, 74 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index ad94a12..1a6783f 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -136,8 +136,8 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	if (obj->fence_reg != I915_FENCE_REG_NONE)
 		seq_printf(m, " (fence: %d)", obj->fence_reg);
 	if (drm_mm_node_allocated(&obj->gtt_space))
-		seq_printf(m, " (gtt offset: %08x, size: %08x)",
-			   obj->gtt_offset, (unsigned int)obj->gtt_space.size);
+		seq_printf(m, " (gtt offset: %08lx, size: %08x)",
+			   obj->gtt_space.start, (unsigned int)obj->gtt_space.size);
 	if (obj->pin_mappable || obj->fault_mappable) {
 		char s[3], *t = s;
 		if (obj->pin_mappable)
@@ -353,12 +353,12 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
 			if (work->old_fb_obj) {
 				struct drm_i915_gem_object *obj = work->old_fb_obj;
 				if (obj)
-					seq_printf(m, "Old framebuffer gtt_offset 0x%08x\n", obj->gtt_offset);
+					seq_printf(m, "Old framebuffer gtt_offset 0x%08lx\n", obj->gtt_space.start);
 			}
 			if (work->pending_flip_obj) {
 				struct drm_i915_gem_object *obj = work->pending_flip_obj;
 				if (obj)
-					seq_printf(m, "New framebuffer gtt_offset 0x%08x\n", obj->gtt_offset);
+					seq_printf(m, "New framebuffer gtt_offset 0x%08lx\n", obj->gtt_space.start);
 			}
 		}
 		spin_unlock_irqrestore(&dev->event_lock, flags);
@@ -570,7 +570,7 @@ static void i915_dump_object(struct seq_file *m,
 	page_count = obj->base.size / PAGE_SIZE;
 	for (page = 0; page < page_count; page++) {
 		u32 *mem = io_mapping_map_wc(mapping,
-					     obj->gtt_offset + page * PAGE_SIZE);
+					     obj->gtt_space.start + page * PAGE_SIZE);
 		for (i = 0; i < PAGE_SIZE; i += 4)
 			seq_printf(m, "%08x :  %08x\n", i, mem[i / 4]);
 		io_mapping_unmap(mem);
@@ -591,7 +591,7 @@ static int i915_batchbuffer_info(struct seq_file *m, void *data)
 
 	list_for_each_entry(obj, &dev_priv->mm.active_list, mm_list) {
 		if (obj->base.read_domains & I915_GEM_DOMAIN_COMMAND) {
-		    seq_printf(m, "--- gtt_offset = 0x%08x\n", obj->gtt_offset);
+		    seq_printf(m, "--- gtt_offset = 0x%08lx\n", obj->gtt_space.start);
 		    i915_dump_object(m, dev_priv->mm.gtt_mapping, obj);
 		}
 	}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 21ac706..2301a6a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -820,13 +820,6 @@ struct drm_i915_gem_object {
 	unsigned long exec_handle;
 	struct drm_i915_gem_exec_object2 *exec_entry;
 
-	/**
-	 * Current offset of the object in GTT space.
-	 *
-	 * This is the same as gtt_space->start
-	 */
-	uint32_t gtt_offset;
-
 	/** Breadcrumb of last rendering to the buffer. */
 	uint32_t last_rendering_seqno;
 	struct intel_ring_buffer *ring;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 429c5d3..d08ad01 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -632,7 +632,7 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
 	user_data = (char __user *) (uintptr_t) args->data_ptr;
 	remain = args->size;
 
-	offset = obj->gtt_offset + args->offset;
+	offset = obj->gtt_space.start + args->offset;
 
 	while (remain > 0) {
 		/* Operation in this page
@@ -721,7 +721,7 @@ i915_gem_gtt_pwrite_slow(struct drm_device *dev,
 	if (ret)
 		goto out_unpin_pages;
 
-	offset = obj->gtt_offset + args->offset;
+	offset = obj->gtt_space.start + args->offset;
 
 	while (remain > 0) {
 		/* Operation in this page
@@ -1250,7 +1250,7 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 
 	obj->fault_mappable = true;
 
-	pfn = ((dev->agp->base + obj->gtt_offset) >> PAGE_SHIFT) +
+	pfn = ((dev->agp->base + obj->gtt_space.start) >> PAGE_SHIFT) +
 		page_offset;
 
 	/* Finally, remap it using the new GTT offset */
@@ -2245,7 +2245,7 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 	obj->map_and_fenceable = true;
 
 	drm_mm_remove_node(&obj->gtt_space);
-	obj->gtt_offset = 0;
+	obj->gtt_space.start = 0;
 
 	if (i915_gem_object_is_purgeable(obj))
 		i915_gem_object_truncate(obj);
@@ -2323,9 +2323,9 @@ static int sandybridge_write_fence_reg(struct drm_i915_gem_object *obj,
 	int regnum = obj->fence_reg;
 	uint64_t val;
 
-	val = (uint64_t)((obj->gtt_offset + size - 4096) &
+	val = (uint64_t)((obj->gtt_space.start + size - 4096) &
 			 0xfffff000) << 32;
-	val |= obj->gtt_offset & 0xfffff000;
+	val |= obj->gtt_space.start & 0xfffff000;
 	val |= (uint64_t)((obj->stride / 128) - 1) <<
 		SANDYBRIDGE_FENCE_PITCH_SHIFT;
 
@@ -2360,9 +2360,9 @@ static int i965_write_fence_reg(struct drm_i915_gem_object *obj,
 	int regnum = obj->fence_reg;
 	uint64_t val;
 
-	val = (uint64_t)((obj->gtt_offset + size - 4096) &
+	val = (uint64_t)((obj->gtt_space.start + size - 4096) &
 		    0xfffff000) << 32;
-	val |= obj->gtt_offset & 0xfffff000;
+	val |= obj->gtt_space.start & 0xfffff000;
 	val |= ((obj->stride / 128) - 1) << I965_FENCE_PITCH_SHIFT;
 	if (obj->tiling_mode == I915_TILING_Y)
 		val |= 1 << I965_FENCE_TILING_Y_SHIFT;
@@ -2395,11 +2395,11 @@ static int i915_write_fence_reg(struct drm_i915_gem_object *obj,
 	u32 fence_reg, val, pitch_val;
 	int tile_width;
 
-	if (WARN((obj->gtt_offset & ~I915_FENCE_START_MASK) ||
+	if (WARN((obj->gtt_space.start & ~I915_FENCE_START_MASK) ||
 		 (size & -size) != size ||
-		 (obj->gtt_offset & (size - 1)),
-		 "object 0x%08x [fenceable? %d] not 1M or pot-size (0x%08x) aligned\n",
-		 obj->gtt_offset, obj->map_and_fenceable, size))
+		 (obj->gtt_space.start & (size - 1)),
+		 "object 0x%08lx [fenceable? %d] not 1M or pot-size (0x%08x) aligned\n",
+		 obj->gtt_space.start, obj->map_and_fenceable, size))
 		return -EINVAL;
 
 	if (obj->tiling_mode == I915_TILING_Y && HAS_128_BYTE_Y_TILING(dev))
@@ -2411,7 +2411,7 @@ static int i915_write_fence_reg(struct drm_i915_gem_object *obj,
 	pitch_val = obj->stride / tile_width;
 	pitch_val = ffs(pitch_val) - 1;
 
-	val = obj->gtt_offset;
+	val = obj->gtt_space.start;
 	if (obj->tiling_mode == I915_TILING_Y)
 		val |= 1 << I830_FENCE_TILING_Y_SHIFT;
 	val |= I915_FENCE_SIZE_BITS(size);
@@ -2450,17 +2450,17 @@ static int i830_write_fence_reg(struct drm_i915_gem_object *obj,
 	uint32_t val;
 	uint32_t pitch_val;
 
-	if (WARN((obj->gtt_offset & ~I830_FENCE_START_MASK) ||
+	if (WARN((obj->gtt_space.start & ~I830_FENCE_START_MASK) ||
 		 (size & -size) != size ||
-		 (obj->gtt_offset & (size - 1)),
-		 "object 0x%08x not 512K or pot-size 0x%08x aligned\n",
-		 obj->gtt_offset, size))
+		 (obj->gtt_space.start & (size - 1)),
+		 "object 0x%08lx not 512K or pot-size 0x%08x aligned\n",
+		 obj->gtt_space.start, size))
 		return -EINVAL;
 
 	pitch_val = obj->stride / 128;
 	pitch_val = ffs(pitch_val) - 1;
 
-	val = obj->gtt_offset;
+	val = obj->gtt_space.start;
 	if (obj->tiling_mode == I915_TILING_Y)
 		val |= 1 << I830_FENCE_TILING_Y_SHIFT;
 	val |= I830_FENCE_SIZE_BITS(size);
@@ -2880,14 +2880,12 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 	BUG_ON(obj->base.read_domains & I915_GEM_GPU_DOMAINS);
 	BUG_ON(obj->base.write_domain & I915_GEM_GPU_DOMAINS);
 
-	obj->gtt_offset = obj->gtt_space.start;
-
 	fenceable =
 		obj->gtt_space.size == fence_size &&
 		(obj->gtt_space.start & (fence_alignment -1)) == 0;
 
 	mappable =
-		obj->gtt_offset + obj->base.size <= dev_priv->mm.gtt_mappable_end;
+		obj->gtt_space.start + obj->base.size <= dev_priv->mm.gtt_mappable_end;
 
 	obj->map_and_fenceable = mappable && fenceable;
 
@@ -3409,13 +3407,13 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj,
 	WARN_ON(i915_verify_lists(dev));
 
 	if (drm_mm_node_allocated(&obj->gtt_space)) {
-		if ((alignment && obj->gtt_offset & (alignment - 1)) ||
+		if ((alignment && obj->gtt_space.start & (alignment - 1)) ||
 		    (map_and_fenceable && !obj->map_and_fenceable)) {
 			WARN(obj->pin_count,
 			     "bo is already pinned with incorrect alignment:"
-			     " offset=%x, req.alignment=%x, req.map_and_fenceable=%d,"
+			     " offset=%lx, req.alignment=%x, req.map_and_fenceable=%d,"
 			     " obj->map_and_fenceable=%d\n",
-			     obj->gtt_offset, alignment,
+			     obj->gtt_space.start, alignment,
 			     map_and_fenceable,
 			     obj->map_and_fenceable);
 			ret = i915_gem_object_unbind(obj);
@@ -3504,7 +3502,7 @@ i915_gem_pin_ioctl(struct drm_device *dev, void *data,
 	 * as the X server doesn't manage domains yet
 	 */
 	i915_gem_object_flush_cpu_write_domain(obj);
-	args->offset = obj->gtt_offset;
+	args->offset = obj->gtt_space.start;
 out:
 	drm_gem_object_unreference(&obj->base);
 unlock:
diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
index 8da1899..1af7b9d 100644
--- a/drivers/gpu/drm/i915/i915_gem_debug.c
+++ b/drivers/gpu/drm/i915/i915_gem_debug.c
@@ -145,10 +145,10 @@ i915_gem_object_check_coherency(struct drm_i915_gem_object *obj, int handle)
 	int bad_count = 0;
 
 	DRM_INFO("%s: checking coherency of object %p@0x%08x (%d, %zdkb):\n",
-		 __func__, obj, obj->gtt_offset, handle,
+		 __func__, obj, obj->gtt_space.start, handle,
 		 obj->size / 1024);
 
-	gtt_mapping = ioremap(dev->agp->base + obj->gtt_offset, obj->base.size);
+	gtt_mapping = ioremap(dev->agp->base + obj->gtt_space.start, obj->base.size);
 	if (gtt_mapping == NULL) {
 		DRM_ERROR("failed to map GTT space\n");
 		return;
@@ -172,7 +172,7 @@ i915_gem_object_check_coherency(struct drm_i915_gem_object *obj, int handle)
 			if (cpuval != gttval) {
 				DRM_INFO("incoherent CPU vs GPU at 0x%08x: "
 					 "0x%08x vs 0x%08x\n",
-					 (int)(obj->gtt_offset +
+					 (int)(obj->gtt_space.start +
 					       page * PAGE_SIZE + i * 4),
 					 cpuval, gttval);
 				if (bad_count++ >= 8) {
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 7774843..a06fac5 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -284,7 +284,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
 	if (unlikely(target_obj == NULL))
 		return -ENOENT;
 
-	target_offset = to_intel_bo(target_obj)->gtt_offset;
+	target_offset = to_intel_bo(target_obj)->gtt_space.start;
 
 	/* The target buffer should have appeared before us in the
 	 * exec_object list, so it should have a GTT space bound by now.
@@ -376,7 +376,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
 			return ret;
 
 		/* Map the page containing the relocation we're going to perform.  */
-		reloc->offset += obj->gtt_offset;
+		reloc->offset += obj->gtt_space.start;
 		reloc_page = io_mapping_map_atomic_wc(dev_priv->mm.gtt_mapping,
 						      reloc->offset & PAGE_MASK);
 		reloc_entry = (uint32_t __iomem *)
@@ -531,7 +531,7 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
 			need_mappable =
 				entry->relocation_count ? true : need_fence;
 
-			if ((entry->alignment && obj->gtt_offset & (entry->alignment - 1)) ||
+			if ((entry->alignment && obj->gtt_space.start & (entry->alignment - 1)) ||
 			    (need_mappable && !obj->map_and_fenceable))
 				ret = i915_gem_object_unbind(obj);
 			else
@@ -580,7 +580,7 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
 				obj->pending_fenced_gpu_access = need_fence;
 			}
 
-			entry->offset = obj->gtt_offset;
+			entry->offset = obj->gtt_space.start;
 		}
 
 		/* Decrement pin count for bound objects */
@@ -1164,7 +1164,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 
 	trace_i915_gem_ring_dispatch(ring, seqno);
 
-	exec_start = batch_obj->gtt_offset + args->batch_start_offset;
+	exec_start = batch_obj->gtt_space.start + args->batch_start_offset;
 	exec_len = args->batch_len;
 	if (cliprects) {
 		for (i = 0; i < args->num_cliprects; i++) {
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index e894a81..820c984 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -245,10 +245,10 @@ i915_gem_object_fence_ok(struct drm_i915_gem_object *obj, int tiling_mode)
 		return true;
 
 	if (INTEL_INFO(obj->base.dev)->gen == 3) {
-		if (obj->gtt_offset & ~I915_FENCE_START_MASK)
+		if (obj->gtt_space.start & ~I915_FENCE_START_MASK)
 			return false;
 	} else {
-		if (obj->gtt_offset & ~I830_FENCE_START_MASK)
+		if (obj->gtt_space.start & ~I830_FENCE_START_MASK)
 			return false;
 	}
 
@@ -267,7 +267,7 @@ i915_gem_object_fence_ok(struct drm_i915_gem_object *obj, int tiling_mode)
 	if (obj->gtt_space.size != size)
 		return false;
 
-	if (obj->gtt_offset & (size - 1))
+	if (obj->gtt_space.start & (size - 1))
 		return false;
 
 	return true;
@@ -350,14 +350,14 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 
 		obj->map_and_fenceable =
 			!drm_mm_node_allocated(&obj->gtt_space) ||
-			(obj->gtt_offset + obj->base.size <= dev_priv->mm.gtt_mappable_end &&
+			(obj->gtt_space.start + obj->base.size <= dev_priv->mm.gtt_mappable_end &&
 			 i915_gem_object_fence_ok(obj, args->tiling_mode));
 
 		/* Rebind if we need a change of alignment */
 		if (!obj->map_and_fenceable) {
 			u32 unfenced_alignment =
 				i915_gem_get_unfenced_gtt_alignment(obj);
-			if (obj->gtt_offset & (unfenced_alignment - 1))
+			if (obj->gtt_space.start & (unfenced_alignment - 1))
 				ret = i915_gem_object_unbind(obj);
 		}
 
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 5c0466e..e9d43e7 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -588,7 +588,7 @@ i915_error_object_create(struct drm_i915_private *dev_priv,
 	if (dst == NULL)
 		return NULL;
 
-	reloc_offset = src->gtt_offset;
+	reloc_offset = src->gtt_space.start;
 	for (page = 0; page < page_count; page++) {
 		unsigned long flags;
 		void __iomem *s;
@@ -610,7 +610,7 @@ i915_error_object_create(struct drm_i915_private *dev_priv,
 		reloc_offset += PAGE_SIZE;
 	}
 	dst->page_count = page_count;
-	dst->gtt_offset = src->gtt_offset;
+	dst->gtt_offset = src->gtt_space.start;
 
 	return dst;
 
@@ -663,7 +663,7 @@ static u32 capture_bo_list(struct drm_i915_error_buffer *err,
 		err->size = obj->base.size;
 		err->name = obj->base.name;
 		err->seqno = obj->last_rendering_seqno;
-		err->gtt_offset = obj->gtt_offset;
+		err->gtt_offset = obj->gtt_space.start;
 		err->read_domains = obj->base.read_domains;
 		err->write_domain = obj->base.write_domain;
 		err->fence_reg = obj->fence_reg;
@@ -1071,10 +1071,10 @@ static void i915_pageflip_stall_check(struct drm_device *dev, int pipe)
 	obj = work->pending_flip_obj;
 	if (INTEL_INFO(dev)->gen >= 4) {
 		int dspsurf = DSPSURF(intel_crtc->plane);
-		stall_detected = I915_READ(dspsurf) == obj->gtt_offset;
+		stall_detected = I915_READ(dspsurf) == obj->gtt_space.start;
 	} else {
 		int dspaddr = DSPADDR(intel_crtc->plane);
-		stall_detected = I915_READ(dspaddr) == (obj->gtt_offset +
+		stall_detected = I915_READ(dspaddr) == (obj->gtt_space.start +
 							crtc->y * crtc->fb->pitch +
 							crtc->x * crtc->fb->bits_per_pixel/8);
 	}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 62f9e52..c55f5ac 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -1571,7 +1571,7 @@ static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
 		if (dev_priv->cfb_pitch == dev_priv->cfb_pitch / 64 - 1 &&
 		    dev_priv->cfb_fence == obj->fence_reg &&
 		    dev_priv->cfb_plane == intel_crtc->plane &&
-		    dev_priv->cfb_offset == obj->gtt_offset &&
+		    dev_priv->cfb_offset == obj->gtt_space.start &&
 		    dev_priv->cfb_y == crtc->y)
 			return;
 
@@ -1582,7 +1582,7 @@ static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
 	dev_priv->cfb_pitch = (dev_priv->cfb_pitch / 64) - 1;
 	dev_priv->cfb_fence = obj->fence_reg;
 	dev_priv->cfb_plane = intel_crtc->plane;
-	dev_priv->cfb_offset = obj->gtt_offset;
+	dev_priv->cfb_offset = obj->gtt_space.start;
 	dev_priv->cfb_y = crtc->y;
 
 	dpfc_ctl &= DPFC_RESERVED;
@@ -1598,7 +1598,7 @@ static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
 		   (stall_watermark << DPFC_RECOMP_STALL_WM_SHIFT) |
 		   (interval << DPFC_RECOMP_TIMER_COUNT_SHIFT));
 	I915_WRITE(ILK_DPFC_FENCE_YOFF, crtc->y);
-	I915_WRITE(ILK_FBC_RT_BASE, obj->gtt_offset | ILK_FBC_RT_VALID);
+	I915_WRITE(ILK_FBC_RT_BASE, obj->gtt_space.start | ILK_FBC_RT_VALID);
 	/* enable it... */
 	I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
 
@@ -1894,7 +1894,7 @@ intel_pipe_set_base_atomic(struct drm_crtc *crtc, struct drm_framebuffer *fb,
 
 	I915_WRITE(reg, dspcntr);
 
-	Start = obj->gtt_offset;
+	Start = obj->gtt_space.start;
 	Offset = y * fb->pitch + x * (fb->bits_per_pixel / 8);
 
 	DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
@@ -5362,7 +5362,7 @@ static int intel_crtc_cursor_set(struct drm_crtc *crtc,
 			goto fail_unpin;
 		}
 
-		addr = obj->gtt_offset;
+		addr = obj->gtt_space.start;
 	} else {
 		int align = IS_I830(dev) ? 16 * 1024 : 256;
 		ret = i915_gem_attach_phys_object(dev, obj,
@@ -6144,7 +6144,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 		OUT_RING(MI_DISPLAY_FLIP |
 			 MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
 		OUT_RING(fb->pitch);
-		OUT_RING(obj->gtt_offset + offset);
+		OUT_RING(obj->gtt_space.start + offset);
 		OUT_RING(MI_NOOP);
 		break;
 
@@ -6152,7 +6152,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 		OUT_RING(MI_DISPLAY_FLIP_I915 |
 			 MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
 		OUT_RING(fb->pitch);
-		OUT_RING(obj->gtt_offset + offset);
+		OUT_RING(obj->gtt_space.start + offset);
 		OUT_RING(MI_NOOP);
 		break;
 
@@ -6165,7 +6165,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 		OUT_RING(MI_DISPLAY_FLIP |
 			 MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
 		OUT_RING(fb->pitch);
-		OUT_RING(obj->gtt_offset | obj->tiling_mode);
+		OUT_RING(obj->gtt_space.start | obj->tiling_mode);
 
 		/* XXX Enabling the panel-fitter across page-flip is so far
 		 * untested on non-native modes, so ignore it for now.
@@ -6180,7 +6180,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 		OUT_RING(MI_DISPLAY_FLIP |
 			 MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
 		OUT_RING(fb->pitch | obj->tiling_mode);
-		OUT_RING(obj->gtt_offset);
+		OUT_RING(obj->gtt_space.start);
 
 		pf = I915_READ(PF_CTL(pipe)) & PF_ENABLE;
 		pipesrc = I915_READ(PIPESRC(pipe)) & 0x0fff0fff;
@@ -7197,7 +7197,7 @@ void ironlake_enable_rc6(struct drm_device *dev)
 
 	OUT_RING(MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN);
 	OUT_RING(MI_SET_CONTEXT);
-	OUT_RING(dev_priv->renderctx->gtt_offset |
+	OUT_RING(dev_priv->renderctx->gtt_space.start |
 		 MI_MM_SPACE_GTT |
 		 MI_SAVE_EXT_STATE_EN |
 		 MI_RESTORE_EXT_STATE_EN |
@@ -7220,7 +7220,7 @@ void ironlake_enable_rc6(struct drm_device *dev)
 		return;
 	}
 
-	I915_WRITE(PWRCTXA, dev_priv->pwrctx->gtt_offset | PWRCTX_EN);
+	I915_WRITE(PWRCTXA, dev_priv->pwrctx->gtt_space.start | PWRCTX_EN);
 	I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
 	mutex_unlock(&dev->struct_mutex);
 }
diff --git a/drivers/gpu/drm/i915/intel_fb.c b/drivers/gpu/drm/i915/intel_fb.c
index 5127827..9e8a67d 100644
--- a/drivers/gpu/drm/i915/intel_fb.c
+++ b/drivers/gpu/drm/i915/intel_fb.c
@@ -136,10 +136,10 @@ static int intelfb_create(struct intel_fbdev *ifbdev,
 	info->apertures->ranges[0].size =
 		dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
 
-	info->fix.smem_start = dev->mode_config.fb_base + obj->gtt_offset;
+	info->fix.smem_start = dev->mode_config.fb_base + obj->gtt_space.start;
 	info->fix.smem_len = size;
 
-	info->screen_base = ioremap_wc(dev->agp->base + obj->gtt_offset, size);
+	info->screen_base = ioremap_wc(dev->agp->base + obj->gtt_space.start, size);
 	if (!info->screen_base) {
 		ret = -ENOSPC;
 		goto out_unpin;
@@ -159,7 +159,7 @@ static int intelfb_create(struct intel_fbdev *ifbdev,
 
 	DRM_DEBUG_KMS("allocated %dx%d fb: 0x%08x, bo %p\n",
 		      fb->width, fb->height,
-		      obj->gtt_offset, obj);
+		      obj->gtt_space.start, obj);
 
 
 	mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index fcf6fcb..8fa3597 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -199,7 +199,7 @@ intel_overlay_map_regs(struct intel_overlay *overlay)
 		regs = overlay->reg_bo->phys_obj->handle->vaddr;
 	else
 		regs = io_mapping_map_wc(dev_priv->mm.gtt_mapping,
-					 overlay->reg_bo->gtt_offset);
+					 overlay->reg_bo->gtt_space.start);
 
 	return regs;
 }
@@ -817,7 +817,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 	regs->SWIDTHSW = calc_swidthsw(overlay->dev,
 				       params->offset_Y, tmp_width);
 	regs->SHEIGHT = params->src_h;
-	regs->OBUF_0Y = new_bo->gtt_offset + params-> offset_Y;
+	regs->OBUF_0Y = new_bo->gtt_space.start + params-> offset_Y;
 	regs->OSTRIDE = params->stride_Y;
 
 	if (params->format & I915_OVERLAY_YUV_PLANAR) {
@@ -831,8 +831,8 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 				      params->src_w/uv_hscale);
 		regs->SWIDTHSW |= max_t(u32, tmp_U, tmp_V) << 16;
 		regs->SHEIGHT |= (params->src_h/uv_vscale) << 16;
-		regs->OBUF_0U = new_bo->gtt_offset + params->offset_U;
-		regs->OBUF_0V = new_bo->gtt_offset + params->offset_V;
+		regs->OBUF_0U = new_bo->gtt_space.start + params->offset_U;
+		regs->OBUF_0V = new_bo->gtt_space.start + params->offset_V;
 		regs->OSTRIDE |= params->stride_UV << 16;
 	}
 
@@ -1427,7 +1427,7 @@ void intel_setup_overlay(struct drm_device *dev)
                         DRM_ERROR("failed to pin overlay register bo\n");
                         goto out_free_bo;
                 }
-		overlay->flip_addr = reg_bo->gtt_offset;
+		overlay->flip_addr = reg_bo->gtt_space.start;
 
 		ret = i915_gem_object_set_to_gtt_domain(reg_bo, true);
 		if (ret) {
@@ -1501,7 +1501,7 @@ intel_overlay_map_regs_atomic(struct intel_overlay *overlay)
 		regs = overlay->reg_bo->phys_obj->handle->vaddr;
 	else
 		regs = io_mapping_map_atomic_wc(dev_priv->mm.gtt_mapping,
-						overlay->reg_bo->gtt_offset);
+						overlay->reg_bo->gtt_space.start);
 
 	return regs;
 }
@@ -1534,7 +1534,7 @@ intel_overlay_capture_error_state(struct drm_device *dev)
 	if (OVERLAY_NEEDS_PHYSICAL(overlay->dev))
 		error->base = (long) overlay->reg_bo->phys_obj->handle->vaddr;
 	else
-		error->base = (long) overlay->reg_bo->gtt_offset;
+		error->base = (long) overlay->reg_bo->gtt_space.start;
 
 	regs = intel_overlay_map_regs_atomic(overlay);
 	if (!regs)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index f15d80f..638e63f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -151,7 +151,7 @@ static int init_ring_common(struct intel_ring_buffer *ring)
 	ring->write_tail(ring, 0);
 
 	/* Initialize the ring. */
-	I915_WRITE_START(ring, obj->gtt_offset);
+	I915_WRITE_START(ring, obj->gtt_space.start);
 	head = I915_READ_HEAD(ring) & HEAD_ADDR;
 
 	/* G45 ring initialization fails to reset head to zero */
@@ -183,7 +183,7 @@ static int init_ring_common(struct intel_ring_buffer *ring)
 
 	/* If the head is still not zero, the ring is dead */
 	if ((I915_READ_CTL(ring) & RING_VALID) == 0 ||
-	    I915_READ_START(ring) != obj->gtt_offset ||
+	    I915_READ_START(ring) != obj->gtt_space.start ||
 	    (I915_READ_HEAD(ring) & HEAD_ADDR) != 0) {
 		DRM_ERROR("%s initialization failed "
 				"ctl %08x head %08x tail %08x start %08x\n",
@@ -243,7 +243,7 @@ init_pipe_control(struct intel_ring_buffer *ring)
 	if (ret)
 		goto err_unref;
 
-	pc->gtt_offset = obj->gtt_offset;
+	pc->gtt_offset = obj->gtt_space.start;
 	pc->cpu_page =  kmap(obj->pages[0]);
 	if (pc->cpu_page == NULL)
 		goto err_unpin;
@@ -768,7 +768,7 @@ static int init_status_page(struct intel_ring_buffer *ring)
 		goto err_unref;
 	}
 
-	ring->status_page.gfx_addr = obj->gtt_offset;
+	ring->status_page.gfx_addr = obj->gtt_space.start;
 	ring->status_page.page_addr = kmap(obj->pages[0]);
 	if (ring->status_page.page_addr == NULL) {
 		memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map));
@@ -825,7 +825,7 @@ int intel_init_ring_buffer(struct drm_device *dev,
 		goto err_unref;
 
 	ring->map.size = ring->size;
-	ring->map.offset = dev->agp->base + obj->gtt_offset;
+	ring->map.offset = dev->agp->base + obj->gtt_space.start;
 	ring->map.type = 0;
 	ring->map.flags = 0;
 	ring->map.mtrr = 0;
@@ -1211,7 +1211,7 @@ static int blt_ring_begin(struct intel_ring_buffer *ring,
 			return ret;
 
 		intel_ring_emit(ring, MI_BATCH_BUFFER_START);
-		intel_ring_emit(ring, to_blt_workaround(ring)->gtt_offset);
+		intel_ring_emit(ring, to_blt_workaround(ring)->gtt_space.start);
 
 		return 0;
 	} else
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/4] drm/i915: kill gtt_list
  2011-04-15 18:57 [PATCH 0/4] embed drm_mm_node Daniel Vetter
  2011-04-15 18:57 ` [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object Daniel Vetter
  2011-04-15 18:57 ` [PATCH 2/4] drm/i915: kill obj->gtt_offset Daniel Vetter
@ 2011-04-15 18:57 ` Daniel Vetter
  2011-04-15 18:57 ` [PATCH 4/4] drm/i915: use drm_mm_for_each_scanned_node_reverse helper Daniel Vetter
  3 siblings, 0 replies; 10+ messages in thread
From: Daniel Vetter @ 2011-04-15 18:57 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

Use the list iterator provided by drm_mm instead.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c |   29 ++++++++++++++++++++---------
 drivers/gpu/drm/i915/i915_drv.h     |    4 ----
 drivers/gpu/drm/i915/i915_gem.c     |    4 ----
 drivers/gpu/drm/i915/i915_gem_gtt.c |    4 +++-
 4 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 1a6783f..6148bdf 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -208,14 +208,18 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
 	return 0;
 }
 
+#define count_object(obj) do { \
+	size += obj->gtt_space.size; \
+	++count; \
+	if (obj->map_and_fenceable) { \
+		mappable_size += obj->gtt_space.size; \
+		++mappable_count; \
+	} \
+} while(0)
+
 #define count_objects(list, member) do { \
 	list_for_each_entry(obj, list, member) { \
-		size += obj->gtt_space.size; \
-		++count; \
-		if (obj->map_and_fenceable) { \
-			mappable_size += obj->gtt_space.size; \
-			++mappable_count; \
-		} \
+		count_object(obj); \
 	} \
 } while(0)
 
@@ -226,6 +230,7 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	u32 count, mappable_count;
 	size_t size, mappable_size;
+	struct drm_mm_node *mm_node;
 	struct drm_i915_gem_object *obj;
 	int ret;
 
@@ -238,7 +243,10 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 		   dev_priv->mm.object_memory);
 
 	size = count = mappable_size = mappable_count = 0;
-	count_objects(&dev_priv->mm.gtt_list, gtt_list);
+	drm_mm_for_each_node(mm_node, &dev_priv->mm.gtt_space) {
+		obj = container_of(mm_node, struct drm_i915_gem_object, gtt_space);
+		count_object(obj);
+	}
 	seq_printf(m, "%u [%u] objects, %zu [%zu] bytes in gtt\n",
 		   count, mappable_count, size, mappable_size);
 
@@ -264,7 +272,8 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 		   count, mappable_count, size, mappable_size);
 
 	size = count = mappable_size = mappable_count = 0;
-	list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
+	drm_mm_for_each_node(mm_node, &dev_priv->mm.gtt_space) {
+		obj = container_of(mm_node, struct drm_i915_gem_object, gtt_space);
 		if (obj->fault_mappable) {
 			size += obj->gtt_space.size;
 			++count;
@@ -292,6 +301,7 @@ static int i915_gem_gtt_info(struct seq_file *m, void* data)
 	struct drm_info_node *node = (struct drm_info_node *) m->private;
 	struct drm_device *dev = node->minor->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_mm_node *mm_node;
 	struct drm_i915_gem_object *obj;
 	size_t total_obj_size, total_gtt_size;
 	int count, ret;
@@ -301,7 +311,8 @@ static int i915_gem_gtt_info(struct seq_file *m, void* data)
 		return ret;
 
 	total_obj_size = total_gtt_size = count = 0;
-	list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
+	drm_mm_for_each_node(mm_node, &dev_priv->mm.gtt_space) {
+		obj = container_of(mm_node, struct drm_i915_gem_object, gtt_space);
 		seq_printf(m, "   ");
 		describe_obj(m, obj);
 		seq_printf(m, "\n");
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2301a6a..5186429 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -542,9 +542,6 @@ typedef struct drm_i915_private {
 		struct drm_mm stolen;
 		/** Memory allocator for GTT */
 		struct drm_mm gtt_space;
-		/** List of all objects in gtt_space. Used to restore gtt
-		 * mappings on resume */
-		struct list_head gtt_list;
 
 		/** Usable portion of the GTT for GEM */
 		unsigned long gtt_start;
@@ -722,7 +719,6 @@ struct drm_i915_gem_object {
 
 	/** Current space allocated to this object in the GTT, if any. */
 	struct drm_mm_node gtt_space;
-	struct list_head gtt_list;
 
 	/** This object's place on the active/flushing/inactive lists */
 	struct list_head ring_list;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d08ad01..d9984e3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2239,7 +2239,6 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 	i915_gem_gtt_unbind_object(obj);
 	i915_gem_object_put_pages_gtt(obj);
 
-	list_del_init(&obj->gtt_list);
 	list_del_init(&obj->mm_list);
 	/* Avoid an unnecessary call to unbind on rebind. */
 	obj->map_and_fenceable = true;
@@ -2870,7 +2869,6 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 		goto search_free;
 	}
 
-	list_add_tail(&obj->gtt_list, &dev_priv->mm.gtt_list);
 	list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
 
 	/* Assert that the object is not currently in any GPU domain. As it
@@ -3705,7 +3703,6 @@ struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
 	obj->base.driver_private = NULL;
 	obj->fence_reg = I915_FENCE_REG_NONE;
 	INIT_LIST_HEAD(&obj->mm_list);
-	INIT_LIST_HEAD(&obj->gtt_list);
 	INIT_LIST_HEAD(&obj->ring_list);
 	INIT_LIST_HEAD(&obj->exec_list);
 	INIT_LIST_HEAD(&obj->gpu_write_list);
@@ -3946,7 +3943,6 @@ i915_gem_load(struct drm_device *dev)
 	INIT_LIST_HEAD(&dev_priv->mm.pinned_list);
 	INIT_LIST_HEAD(&dev_priv->mm.fence_list);
 	INIT_LIST_HEAD(&dev_priv->mm.deferred_free_list);
-	INIT_LIST_HEAD(&dev_priv->mm.gtt_list);
 	for (i = 0; i < I915_NUM_RINGS; i++)
 		init_ring_lists(&dev_priv->ring[i]);
 	for (i = 0; i < 16; i++)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index bca5a97..bb627bf 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -53,12 +53,14 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
+	struct drm_mm_node *node;
 
 	/* First fill our portion of the GTT with scratch pages */
 	intel_gtt_clear_range(dev_priv->mm.gtt_start / PAGE_SIZE,
 			      (dev_priv->mm.gtt_end - dev_priv->mm.gtt_start) / PAGE_SIZE);
 
-	list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
+	drm_mm_for_each_node(node, &dev_priv->mm.gtt_space) {
+		obj = container_of(node, struct drm_i915_gem_object, gtt_space);
 		i915_gem_clflush_object(obj);
 		i915_gem_gtt_rebind_object(obj, obj->cache_level);
 	}
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 4/4] drm/i915: use drm_mm_for_each_scanned_node_reverse helper
  2011-04-15 18:57 [PATCH 0/4] embed drm_mm_node Daniel Vetter
                   ` (2 preceding siblings ...)
  2011-04-15 18:57 ` [PATCH 3/4] drm/i915: kill gtt_list Daniel Vetter
@ 2011-04-15 18:57 ` Daniel Vetter
  3 siblings, 0 replies; 10+ messages in thread
From: Daniel Vetter @ 2011-04-15 18:57 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

Doesn't really buy much, but looks nicer.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_gem_evict.c |   34 ++++++++++++++------------------
 1 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 1a9c96b..7d3ea89 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -33,9 +33,8 @@
 #include "i915_trace.h"
 
 static bool
-mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind)
+mark_free(struct drm_i915_gem_object *obj)
 {
-	list_add(&obj->exec_list, unwind);
 	drm_gem_object_reference(&obj->base);
 	return drm_mm_scan_add_block(&obj->gtt_space);
 }
@@ -45,8 +44,9 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 			 unsigned alignment, bool mappable)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct list_head eviction_list, unwind_list;
+	struct list_head eviction_list;
 	struct drm_i915_gem_object *obj;
+	struct drm_mm_node *node, *next;
 	int ret = 0;
 
 	i915_gem_retire_requests(dev);
@@ -89,7 +89,6 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 	 * object on the TAIL.
 	 */
 
-	INIT_LIST_HEAD(&unwind_list);
 	if (mappable)
 		drm_mm_init_scan_with_range(&dev_priv->mm.gtt_space, min_size,
 					    alignment, 0,
@@ -99,7 +98,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 
 	/* First see if there is a large enough contiguous idle region... */
 	list_for_each_entry(obj, &dev_priv->mm.inactive_list, mm_list) {
-		if (mark_free(obj, &unwind_list))
+		if (mark_free(obj))
 			goto found;
 	}
 
@@ -109,7 +108,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 		if (obj->base.write_domain || obj->pin_count)
 			continue;
 
-		if (mark_free(obj, &unwind_list))
+		if (mark_free(obj))
 			goto found;
 	}
 
@@ -118,27 +117,25 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 		if (obj->pin_count)
 			continue;
 
-		if (mark_free(obj, &unwind_list))
+		if (mark_free(obj))
 			goto found;
 	}
 	list_for_each_entry(obj, &dev_priv->mm.active_list, mm_list) {
 		if (! obj->base.write_domain || obj->pin_count)
 			continue;
 
-		if (mark_free(obj, &unwind_list))
+		if (mark_free(obj))
 			goto found;
 	}
 
 	/* Nothing found, clean up and bail out! */
-	while (!list_empty(&unwind_list)) {
-		obj = list_first_entry(&unwind_list,
-				       struct drm_i915_gem_object,
-				       exec_list);
+	drm_mm_for_each_scanned_node_reverse(node, next,
+					     &dev_priv->mm.gtt_space) {
+		obj = container_of(node, struct drm_i915_gem_object, gtt_space);
 
 		ret = drm_mm_scan_remove_block(&obj->gtt_space);
 		BUG_ON(ret);
 
-		list_del_init(&obj->exec_list);
 		drm_gem_object_unreference(&obj->base);
 	}
 
@@ -152,15 +149,14 @@ found:
 	 * scanning, therefore store to be evicted objects on a
 	 * temporary list. */
 	INIT_LIST_HEAD(&eviction_list);
-	while (!list_empty(&unwind_list)) {
-		obj = list_first_entry(&unwind_list,
-				       struct drm_i915_gem_object,
-				       exec_list);
+	drm_mm_for_each_scanned_node_reverse(node, next,
+					     &dev_priv->mm.gtt_space) {
+		obj = container_of(node, struct drm_i915_gem_object, gtt_space);
+
 		if (drm_mm_scan_remove_block(&obj->gtt_space)) {
-			list_move(&obj->exec_list, &eviction_list);
+			list_add(&obj->exec_list, &eviction_list);
 			continue;
 		}
-		list_del_init(&obj->exec_list);
 		drm_gem_object_unreference(&obj->base);
 	}
 
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 2/4] drm/i915: kill obj->gtt_offset
  2011-04-15 18:56   ` Chris Wilson
@ 2011-04-15 19:19     ` Daniel Vetter
  2011-04-15 20:04       ` Chris Wilson
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2011-04-15 19:19 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Daniel Vetter, intel-gfx

On Fri, Apr 15, 2011 at 07:56:10PM +0100, Chris Wilson wrote:
> On Fri, 15 Apr 2011 20:57:36 +0200, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> > Yet another massive round of sed'ing.
> 
> The only hitch here is that in the vmap code obj->gtt_offset !=
> obj->gtt_space.offset.
> 
> There obj->gtt_space.offset is the base of the page aligned region allocated
> in the GTT and obj->gtt_offset is obj->gtt_space.offset +
> offset_in_page(user_addr).
> 
> I haven't checked but is obj->gtt_space immutable by the caller, i.e. can
> we modify obj->gtt_space.offset and drm_mm still function correctly? Bake
> the page aligned assumption into drm_mm? Or simply undo the page_offset
> when releasing the gtt_space...? The latter sounds like it would work
> best.

Mucking around with drm_mm_node->start is a bad idea: it's used to track
the end of the preceding free area (if there is one).

Also I find having a bo with a not-page-aligned gtt offset kinda creepy
... So if the kernel really needs to track this, could it be tracked in a
special vmap handle object? Or is this really required at all? The normal
memory-mapping syscalls only work on page boundaries too, i.e. why can't
userspace keep track of the offset?
-Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 2/4] drm/i915: kill obj->gtt_offset
  2011-04-15 19:19     ` Daniel Vetter
@ 2011-04-15 20:04       ` Chris Wilson
  0 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2011-04-15 20:04 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Daniel Vetter, intel-gfx

On Fri, 15 Apr 2011 21:19:00 +0200, Daniel Vetter <daniel@ffwll.ch> wrote:
> Mucking around with drm_mm_node->start is a bad idea, it's used to track
> the end of the preceding free area (if there is one).
> 
> Also I find having a bo with a not-page-aligned gtt offset kinda creepy
> ... So if the kernel really needs to track this, could it be tracked in a
> special vmap handle object?

All the relocation handling code is generic: gtt_offset + user delta
Since the gtt_offset is computed once, the vmap code applies the offset to
it directly. I suppose drm_i915_gem_object could grow an additional
gtt_offset_offset...

> Or is this really required, because all the
> normal memory mapper syscalls only work on page boundaries, too. I.e. why
> can't userspace keep track of the offset?

Because that is ugly. Userspace passes in user_addr + user_length, which
can be precisely checked for the proposed access.

The alternative you propose is to pass in (user_page_base_addr,
user_page_offset) + user_length and then continue to track
user_page_offset in the userspace code to apply to reloc.delta as well.

In all, I'm favouring keeping gtt_offset. Can we postpone this one until
you've had a chance to review vmap, which I promise will be in the next
set for drm-intel-next-proposed...
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object
  2011-04-15 18:57 ` [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object Daniel Vetter
@ 2011-04-15 23:01   ` Dave Airlie
  2011-04-16 10:59     ` Daniel Vetter
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Airlie @ 2011-04-15 23:01 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Sat, Apr 16, 2011 at 4:57 AM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:

Why? seriously patches need to start having some information in them.

Dave.

> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object
  2011-04-15 23:01   ` Dave Airlie
@ 2011-04-16 10:59     ` Daniel Vetter
  0 siblings, 0 replies; 10+ messages in thread
From: Daniel Vetter @ 2011-04-16 10:59 UTC (permalink / raw)
  To: Dave Airlie; +Cc: Daniel Vetter, intel-gfx

On Sat, Apr 16, 2011 at 09:01:12AM +1000, Dave Airlie wrote:
> On Sat, Apr 16, 2011 at 4:57 AM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> 
> Why? seriously patches need to start having some information in them.

It's the drm/i915 portion of the drm_mm free space tracking rework you've
merged for .39. The i915 part missed the merge window last time around.
Original submission was equally terse, but I admit that without the
context that's simply not enough. I am a lazy bastard, so no cookies for
me tonight ...

Anyway, I've again miss-timed and it conflicts with Chris' patch queue for
-next. I'll resend.
-Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2011-04-16 10:59 UTC | newest]

Thread overview: 10+ messages
2011-04-15 18:57 [PATCH 0/4] embed drm_mm_node Daniel Vetter
2011-04-15 18:57 ` [PATCH 1/4] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object Daniel Vetter
2011-04-15 23:01   ` Dave Airlie
2011-04-16 10:59     ` Daniel Vetter
2011-04-15 18:57 ` [PATCH 2/4] drm/i915: kill obj->gtt_offset Daniel Vetter
2011-04-15 18:56   ` Chris Wilson
2011-04-15 19:19     ` Daniel Vetter
2011-04-15 20:04       ` Chris Wilson
2011-04-15 18:57 ` [PATCH 3/4] drm/i915: kill gtt_list Daniel Vetter
2011-04-15 18:57 ` [PATCH 4/4] drm/i915: use drm_mm_for_each_scanned_node_reverse helper Daniel Vetter
