All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 00/11] drm-intel-collector - update
@ 2015-01-26 12:43 Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 01/11] drm/i915: Put logical pipe_control emission into a helper Rodrigo Vivi
                   ` (10 more replies)
  0 siblings, 11 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi


This is another drm-intel-collector updated notice:
http://cgit.freedesktop.org/~vivijim/drm-intel/log/?h=drm-intel-collector

Here goes the update list in order for better reviewers assignment:

Patch     drm/i915: Put logical pipe_control emission into a helper. - Reviewer:
Patch     drm/i915: Add WaCsStallBeforeStateCacheInvalidate:bdw, chv to logical ring - Reviewer:
Patch     drm/i915: Remove pinned check from madvise_ioctl - Reviewer:
Patch     drm/i915: Extend GET_APERTURE ioctl to report available map space - Reviewer:
Patch     drm/i915: Display current hangcheck status in debugfs - Reviewer:
Patch     drm/i915/vlv: check port in infoframe_enabled v2 - Reviewer:
Patch     drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT reg - Reviewer:
Patch     Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES" - Reviewer:
Patch     drm/i915: FIFO space query code refactor - Reviewer:
Patch     drm/i915: add irq_barrier operation for synchronising reads - Reviewer:
Patch     drm/i915: use effective_size for ringbuffer calculations - Reviewer:

Another round for discussions finished from Dec-05 to Dec-19.

Thanks in advance,
Rodrigo.


Chris Wilson (3):
  drm/i915: Remove pinned check from madvise_ioctl
  drm/i915: Display current hangcheck status in debugfs
  Revert "drm/i915: Fix mutex->owner inspection race under
    DEBUG_MUTEXES"

Dave Gordon (3):
  drm/i915: FIFO space query code refactor
  drm/i915: add irq_barrier operation for synchronising reads
  drm/i915: use effective_size for ringbuffer calculations

Imre Deak (1):
  drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT reg

Jesse Barnes (1):
  drm/i915/vlv: check port in infoframe_enabled v2

Rodrigo Vivi (3):
  drm/i915: Put logical pipe_control emission into a helper.
  drm/i915: Add WaCsStallBeforeStateCacheInvalidate:bdw, chv to logical
    ring
  drm/i915: Extend GET_APERTURE ioctl to report available map space

 drivers/gpu/drm/i915/i915_debugfs.c     |  62 ++++++++++++++++
 drivers/gpu/drm/i915/i915_drv.c         |   4 +-
 drivers/gpu/drm/i915/i915_gem.c         | 124 +++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/intel_hdmi.c       |   7 +-
 drivers/gpu/drm/i915/intel_lrc.c        |  43 +++++++----
 drivers/gpu/drm/i915/intel_ringbuffer.c |  42 +++++++++--
 drivers/gpu/drm/i915/intel_ringbuffer.h |   1 +
 drivers/gpu/drm/i915/intel_uncore.c     |  19 +++--
 include/uapi/drm/i915_drm.h             |  25 +++++++
 9 files changed, 289 insertions(+), 38 deletions(-)

-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 01/11] drm/i915: Put logical pipe_control emission into a helper.
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 02/11] drm/i915: Add WaCsStallBeforeStateCacheInvalidate:bdw, chv to logical ring Rodrigo Vivi
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

To be used for a Workaroud. Similar to:

commit 884ceacee308f0e4616d0c933518af2639f7b1d8
Author: Kenneth Graunke <kenneth@whitecape.org>
Date:   Sat Jun 28 02:04:20 2014 +0300

    drm/i915: Refactor Broadwell PIPE_CONTROL emission into a helper.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 35 +++++++++++++++++++++--------------
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index e405b61..9b95233 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1259,6 +1259,26 @@ static int gen8_emit_flush(struct intel_ringbuffer *ringbuf,
 	return 0;
 }
 
+static int gen8_emit_pipe_control(struct intel_ringbuffer *ringbuf,
+				  u32 flags, u32 scratch_addr)
+{
+	int ret;
+
+	ret = intel_logical_ring_begin(ringbuf, 6);
+	if (ret)
+		return ret;
+
+	intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+	intel_logical_ring_emit(ringbuf, flags);
+	intel_logical_ring_emit(ringbuf, scratch_addr);
+	intel_logical_ring_emit(ringbuf, 0);
+	intel_logical_ring_emit(ringbuf, 0);
+	intel_logical_ring_emit(ringbuf, 0);
+	intel_logical_ring_advance(ringbuf);
+
+	return 0;
+}
+
 static int gen8_emit_flush_render(struct intel_ringbuffer *ringbuf,
 				  u32 invalidate_domains,
 				  u32 flush_domains)
@@ -1266,7 +1286,6 @@ static int gen8_emit_flush_render(struct intel_ringbuffer *ringbuf,
 	struct intel_engine_cs *ring = ringbuf->ring;
 	u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
-	int ret;
 
 	flags |= PIPE_CONTROL_CS_STALL;
 
@@ -1286,19 +1305,7 @@ static int gen8_emit_flush_render(struct intel_ringbuffer *ringbuf,
 		flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
 	}
 
-	ret = intel_logical_ring_begin(ringbuf, 6);
-	if (ret)
-		return ret;
-
-	intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
-	intel_logical_ring_emit(ringbuf, flags);
-	intel_logical_ring_emit(ringbuf, scratch_addr);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_advance(ringbuf);
-
-	return 0;
+	return gen8_emit_pipe_control(ringbuf, flags, scratch_addr);
 }
 
 static u32 gen8_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 02/11] drm/i915: Add WaCsStallBeforeStateCacheInvalidate:bdw, chv to logical ring
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 01/11] drm/i915: Put logical pipe_control emission into a helper Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 03/11] drm/i915: Remove pinned check from madvise_ioctl Rodrigo Vivi
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

Similar to:

commit 02c9f7e3cfe76a7f54ef03438c36aade86cc1c8b
Author: Kenneth Graunke <kenneth@whitecape.org>
Date:   Mon Jan 27 14:20:16 2014 -0800

    drm/i915: Add the WaCsStallBeforeStateCacheInvalidate:bdw workaround.

    On Broadwell, any PIPE_CONTROL with the "State Cache Invalidate" bit set
    must be preceded by a PIPE_CONTROL with the "CS Stall" bit set.

    Documented on the BSpec 3D workarounds page.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9b95233..ae29f30d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1286,6 +1286,7 @@ static int gen8_emit_flush_render(struct intel_ringbuffer *ringbuf,
 	struct intel_engine_cs *ring = ringbuf->ring;
 	u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
+	int ret;
 
 	flags |= PIPE_CONTROL_CS_STALL;
 
@@ -1303,6 +1304,15 @@ static int gen8_emit_flush_render(struct intel_ringbuffer *ringbuf,
 		flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
 		flags |= PIPE_CONTROL_QW_WRITE;
 		flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
+
+
+		/* WaCsStallBeforeStateCacheInvalidate:bdw,chv */
+		ret = gen8_emit_pipe_control(ring,
+					     PIPE_CONTROL_CS_STALL |
+					     PIPE_CONTROL_STALL_AT_SCOREBOARD,
+					     0);
+		if (ret)
+			return ret;
 	}
 
 	return gen8_emit_pipe_control(ringbuf, flags, scratch_addr);
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 03/11] drm/i915: Remove pinned check from madvise_ioctl
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 01/11] drm/i915: Put logical pipe_control emission into a helper Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 02/11] drm/i915: Add WaCsStallBeforeStateCacheInvalidate:bdw, chv to logical ring Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-28  9:52   ` Daniel Vetter
  2015-01-26 12:43 ` [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space Rodrigo Vivi
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

From: Chris Wilson <chris@chris-wilson.co.uk>

We don't need to incur the overhead of checking whether the object is
pinned prior to changing its madvise. If the object is pinned, the
madvise will not take effect until it is unpinned and so we cannot free
the pages being pointed at by hardware. Marking a pinned object with
allocated pages as DONTNEED will not trigger any undue warnings. The check
is therefore superfluous, and by removing it we can remove a linear walk
over all the vma the object has.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6c40365..123ce34 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4365,11 +4365,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 		goto unlock;
 	}
 
-	if (i915_gem_obj_is_pinned(obj)) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	if (obj->pages &&
 	    obj->tiling_mode != I915_TILING_NONE &&
 	    dev_priv->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -4388,7 +4383,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 
 	args->retained = obj->madv != __I915_MADV_PURGED;
 
-out:
 	drm_gem_object_unreference(&obj->base);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (2 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 03/11] drm/i915: Remove pinned check from madvise_ioctl Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-28  9:59   ` Daniel Vetter
  2015-01-26 12:43 ` [PATCH 05/11] drm/i915: Display current hangcheck status in debugfs Rodrigo Vivi
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

When constructing a batchbuffer, it is sometimes crucial to know the
largest hole into which we can fit a fenceable buffer (for example when
handling very large objects on gen2 and gen3). This depends on the
fragmentation of pinned buffers inside the aperture, a question only the
kernel can easily answer.

This patch extends the current DRM_I915_GEM_GET_APERTURE ioctl to
include a couple of new fields in its reply to userspace - the total
amount of space available in the mappable region of the aperture and
also the single largest block available.

This is not quite what userspace wants to answer the question of whether
this batch will fit as fences are also required to meet severe alignment
constraints within the batch. For this purpose, a third conservative
estimate of largest fence available is also provided. For when userspace
needs more than one batch, we also provide the culmulative space
available for fences such that it has some additional guidance to how
much space it could allocate to fences. Conservatism still wins.

The patch also adds a debugfs file for convenient testing and reporting.

v2: The first object cannot end at offset 0, so we can use last==0 to
detect the empty list.

v3: Expand all values to 64bit, just in case.
    Report total mappable aperture size for userspace that cannot easily
    determine it by inspecting the PCI device.

v4: (Rodrigo) Fixed rebase conflicts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c |  27 +++++++++
 drivers/gpu/drm/i915/i915_gem.c     | 116 ++++++++++++++++++++++++++++++++++--
 include/uapi/drm/i915_drm.h         |  25 ++++++++
 3 files changed, 164 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 0d11cbe..f1aea86 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -498,6 +498,32 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 	return 0;
 }
 
+static int i915_gem_aperture_info(struct seq_file *m, void *data)
+{
+	struct drm_info_node *node = m->private;
+	struct drm_i915_gem_get_aperture arg;
+	int ret;
+
+	ret = i915_gem_get_aperture_ioctl(node->minor->dev, &arg, NULL);
+	if (ret)
+		return ret;
+
+	seq_printf(m, "Total size of the GTT: %llu bytes\n",
+		   arg.aper_size);
+	seq_printf(m, "Available space in the GTT: %llu bytes\n",
+		   arg.aper_available_size);
+	seq_printf(m, "Available space in the mappable aperture: %llu bytes\n",
+		   arg.map_available_size);
+	seq_printf(m, "Single largest space in the mappable aperture: %llu bytes\n",
+		   arg.map_largest_size);
+	seq_printf(m, "Available space for fences: %llu bytes\n",
+		   arg.fence_available_size);
+	seq_printf(m, "Single largest fence available: %llu bytes\n",
+		   arg.fence_largest_size);
+
+	return 0;
+}
+
 static int i915_gem_gtt_info(struct seq_file *m, void *data)
 {
 	struct drm_info_node *node = m->private;
@@ -4398,6 +4424,7 @@ static int i915_debugfs_create(struct dentry *root,
 static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_capabilities", i915_capabilities, 0},
 	{"i915_gem_objects", i915_gem_object_info, 0},
+	{"i915_gem_aperture", i915_gem_aperture_info, 0},
 	{"i915_gem_gtt", i915_gem_gtt_info, 0},
 	{"i915_gem_pinned", i915_gem_gtt_info, 0, (void *) PINNED_LIST},
 	{"i915_gem_active", i915_gem_object_list_info, 0, (void *) ACTIVE_LIST},
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 123ce34..0a074bc 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -31,6 +31,7 @@
 #include "i915_drv.h"
 #include "i915_trace.h"
 #include "intel_drv.h"
+#include <linux/list_sort.h>
 #include <linux/oom.h>
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
@@ -153,6 +154,55 @@ int i915_mutex_lock_interruptible(struct drm_device *dev)
 	return 0;
 }
 
+static inline bool
+i915_gem_object_is_inactive(struct drm_i915_gem_object *obj)
+{
+	return i915_gem_obj_bound_any(obj) && !obj->active;
+}
+
+static int obj_rank_by_ggtt(void *priv,
+			    struct list_head *A,
+			    struct list_head *B)
+{
+	struct drm_i915_gem_object *a = list_entry(A,typeof(*a), obj_exec_link);
+	struct drm_i915_gem_object *b = list_entry(B,typeof(*b), obj_exec_link);
+
+	return i915_gem_obj_ggtt_offset(a) - i915_gem_obj_ggtt_offset(b);
+}
+
+static u32 __fence_size(struct drm_i915_private *dev_priv, u32 start, u32 end)
+{
+	u32 size = end - start;
+	u32 fence_size;
+
+	if (INTEL_INFO(dev_priv)->gen < 4) {
+		u32 fence_max;
+		u32 fence_next;
+
+		if (IS_GEN3(dev_priv)) {
+			fence_max = I830_FENCE_MAX_SIZE_VAL << 20;
+			fence_next = 1024*1024;
+		} else {
+			fence_max = I830_FENCE_MAX_SIZE_VAL << 19;
+			fence_next = 512*1024;
+		}
+
+		fence_max = min(fence_max, size);
+		fence_size = 0;
+		while (fence_next <= fence_max) {
+			u32 base = ALIGN(start, fence_next);
+			if (base + fence_next > end)
+				break;
+
+			fence_size = fence_next;
+			fence_next <<= 1;
+		}
+	} else
+		fence_size = size;
+
+	return fence_size;
+}
+
 int
 i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 			    struct drm_file *file)
@@ -160,17 +210,75 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_get_aperture *args = data;
 	struct drm_i915_gem_object *obj;
-	size_t pinned;
+	struct list_head map_list;
+	const u32 map_limit = dev_priv->gtt.mappable_end;
+	size_t pinned, map_space, map_largest, fence_space, fence_largest;
+	u32 last, size;
+
+	INIT_LIST_HEAD(&map_list);
 
 	pinned = 0;
+	map_space = map_largest = 0;
+	fence_space = fence_largest = 0;
+
 	mutex_lock(&dev->struct_mutex);
-	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
-		if (i915_gem_obj_is_pinned(obj))
-			pinned += i915_gem_obj_ggtt_size(obj);
+	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+		struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);
+
+		if (vma == NULL || !vma->pin_count)
+			continue;
+
+		pinned += vma->node.size;
+
+		if (vma->node.start < map_limit)
+			list_add(&obj->obj_exec_link, &map_list);
+	}
+
+	last = 0;
+	list_sort(NULL, &map_list, obj_rank_by_ggtt);
+	while (!list_empty(&map_list)) {
+		struct i915_vma *vma;
+
+		obj = list_first_entry(&map_list, typeof(*obj), obj_exec_link);
+		list_del_init(&obj->obj_exec_link);
+
+		vma = i915_gem_obj_to_ggtt(obj);
+		if (last == 0)
+			goto skip_first;
+
+		size = vma->node.start - last;
+		if (size > map_largest)
+			map_largest = size;
+		map_space += size;
+
+		size = __fence_size(dev_priv, last, vma->node.start);
+		if (size > fence_largest)
+			fence_largest = size;
+		fence_space += size;
+
+skip_first:
+		last = vma->node.start + vma->node.size;
+	}
+	if (last < map_limit) {
+		size = map_limit - last;
+		if (size > map_largest)
+			map_largest = size;
+		map_space += size;
+
+		size = __fence_size(dev_priv, last, map_limit);
+		if (size > fence_largest)
+			fence_largest = size;
+		fence_space += size;
+	}
 	mutex_unlock(&dev->struct_mutex);
 
 	args->aper_size = dev_priv->gtt.base.total;
 	args->aper_available_size = args->aper_size - pinned;
+	args->map_available_size = map_space;
+	args->map_largest_size = map_largest;
+	args->map_total_size = dev_priv->gtt.mappable_end;
+	args->fence_available_size = fence_space;
+	args->fence_largest_size = fence_largest;
 
 	return 0;
 }
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 2e559f6e..28f614d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -907,6 +907,31 @@ struct drm_i915_gem_get_aperture {
 	 * bytes
 	 */
 	__u64 aper_available_size;
+
+	/**
+	 * Total space in the mappable region of the aperture, in bytes
+	 */
+	__u64 map_total_size;
+
+	/**
+	 * Available space in the mappable region of the aperture, in bytes
+	 */
+	__u64 map_available_size;
+
+	/**
+	 * Single largest available region inside the mappable region, in bytes.
+	 */
+	__u64 map_largest_size;
+
+	/**
+	 * Culmulative space available for fences, in bytes
+	 */
+	__u64 fence_available_size;
+
+	/**
+	 * Single largest fenceable region, in bytes.
+	 */
+	__u64 fence_largest_size;
 };
 
 struct drm_i915_get_pipe_from_crtc_id {
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 05/11] drm/i915: Display current hangcheck status in debugfs
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (3 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 06/11] drm/i915/vlv: check port in infoframe_enabled v2 Rodrigo Vivi
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi, Mika Kuoppala

From: Chris Wilson <chris@chris-wilson.co.uk>

For example,

/sys/kernel/debug/dri/0/i915_hangcheck_info:

Hangcheck active, fires in 15887800ms
render ring:
        seqno = -4059 [current -583]
        action = 2
        score = 0
        ACTHD = 1ee8 [current 21f980]
        max ACTHD = 0

v2: Include expiration ETA. Can anyone spot a problem?

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index f1aea86..bc06ad4 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1245,6 +1245,40 @@ out:
 	return ret;
 }
 
+static int i915_hangcheck_info(struct seq_file *m, void *unused)
+{
+	struct drm_info_node *node = m->private;
+	struct drm_i915_private *dev_priv = to_i915(node->minor->dev);
+	struct intel_engine_cs *ring;
+	int i;
+
+	if (!i915.enable_hangcheck) {
+		seq_printf(m, "Hangcheck disabled\n");
+		return 0;
+	}
+
+	if (timer_pending(&dev_priv->gpu_error.hangcheck_timer)) {
+		seq_printf(m, "Hangcheck active, fires in %dms\n",
+			   jiffies_to_msecs(dev_priv->gpu_error.hangcheck_timer.expires - jiffies));
+	} else
+		seq_printf(m, "Hangcheck inactive\n");
+
+	for_each_ring(ring, dev_priv, i) {
+		seq_printf(m, "%s:\n", ring->name);
+		seq_printf(m, "\tseqno = %d [current %d]\n",
+			   ring->hangcheck.seqno, ring->get_seqno(ring, false));
+		seq_printf(m, "\taction = %d\n", ring->hangcheck.action);
+		seq_printf(m, "\tscore = %d\n", ring->hangcheck.score);
+		seq_printf(m, "\tACTHD = 0x%08llx [current 0x%08llx]\n",
+			   (long long)ring->hangcheck.acthd,
+			   (long long)intel_ring_get_active_head(ring));
+		seq_printf(m, "\tmax ACTHD = 0x%08llx\n",
+			   (long long)ring->hangcheck.max_acthd);
+	}
+
+	return 0;
+}
+
 static int ironlake_drpc_info(struct seq_file *m)
 {
 	struct drm_info_node *node = m->private;
@@ -4441,6 +4475,7 @@ static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_gem_hws_vebox", i915_hws_info, 0, (void *)VECS},
 	{"i915_gem_batch_pool", i915_gem_batch_pool_info, 0},
 	{"i915_frequency_info", i915_frequency_info, 0},
+	{"i915_hangcheck_info", i915_hangcheck_info, 0},
 	{"i915_drpc_info", i915_drpc_info, 0},
 	{"i915_emon_status", i915_emon_status, 0},
 	{"i915_ring_freq_table", i915_ring_freq_table, 0},
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 06/11] drm/i915/vlv: check port in infoframe_enabled v2
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (4 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 05/11] drm/i915: Display current hangcheck status in debugfs Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 07/11] drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT reg Rodrigo Vivi
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

From: Jesse Barnes <jbarnes@virtuousgeek.org>

Same as IBX and G4x, they all share the same genetic material.

v2: we all need a bit more port in our lives

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/intel_hdmi.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 3abc200..62606e6 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -323,10 +323,15 @@ static bool vlv_infoframe_enabled(struct drm_encoder *encoder)
 	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(encoder->crtc);
+	struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
 	int reg = VLV_TVIDEO_DIP_CTL(intel_crtc->pipe);
 	u32 val = I915_READ(reg);
+	u32 port = intel_dig_port->port;
 
-	return val & VIDEO_DIP_ENABLE;
+	if (port == (val & VIDEO_DIP_PORT_MASK))
+		return val & VIDEO_DIP_ENABLE;
+
+	return false;
 }
 
 static void hsw_write_infoframe(struct drm_encoder *encoder,
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 07/11] drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT reg
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (5 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 06/11] drm/i915/vlv: check port in infoframe_enabled v2 Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 08/11] Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES" Rodrigo Vivi
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

From: Imre Deak <imre.deak@intel.com>

Due this typo we don't save/restore the GFX_MAX_REQ_COUNT register across
suspend/resume, so fix this.

This was introduced in

commit ddeea5b0c36f3665446518c609be91f9336ef674
Author: Imre Deak <imre.deak@intel.com>
Date:   Mon May 5 15:19:56 2014 +0300

    drm/i915: vlv: add runtime PM support

I noticed this only by reading the code. To my knowledge it shouldn't
cause any real problems at the moment, since the power well backing this
register remains on across a runtime s/r. This may change once
system-wide s0ix functionality is enabled in the kernel.

v2:
- resend after a missing git add -u :/

Signed-off-by: Imre Deak <imre.deak@intel.com>
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 308774f..bf39a1d 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1028,7 +1028,7 @@ static void vlv_save_gunit_s0ix_state(struct drm_i915_private *dev_priv)
 		s->lra_limits[i] = I915_READ(GEN7_LRA_LIMITS_BASE + i * 4);
 
 	s->media_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
-	s->gfx_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
+	s->gfx_max_req_count	= I915_READ(GEN7_GFX_MAX_REQ_COUNT);
 
 	s->render_hwsp		= I915_READ(RENDER_HWS_PGA_GEN7);
 	s->ecochk		= I915_READ(GAM_ECOCHK);
@@ -1109,7 +1109,7 @@ static void vlv_restore_gunit_s0ix_state(struct drm_i915_private *dev_priv)
 		I915_WRITE(GEN7_LRA_LIMITS_BASE + i * 4, s->lra_limits[i]);
 
 	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->media_max_req_count);
-	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->gfx_max_req_count);
+	I915_WRITE(GEN7_GFX_MAX_REQ_COUNT, s->gfx_max_req_count);
 
 	I915_WRITE(RENDER_HWS_PGA_GEN7,	s->render_hwsp);
 	I915_WRITE(GAM_ECOCHK,		s->ecochk);
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 08/11] Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES"
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (6 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 07/11] drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT reg Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-28  9:53   ` Daniel Vetter
  2015-01-26 12:43 ` [PATCH 09/11] drm/i915: FIFO space query code refactor Rodrigo Vivi
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Jani Nikula, Daniel Vetter, Rodrigo Vivi

From: Chris Wilson <chris@chris-wilson.co.uk>

The core fix was applied in

commit a63b03e2d2477586440741677ecac45bcf28d7b1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Tue Jan 6 10:29:35 2015 +0000

    mutex: Always clear owner field upon mutex_unlock()

(note the absence of stable@ tag)

so we can now revert our band-aid commit 226e5ae9e5f910 for -next.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0a074bc..b50a2b4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5213,7 +5213,7 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
 	if (!mutex_is_locked(mutex))
 		return false;
 
-#if defined(CONFIG_SMP) && !defined(CONFIG_DEBUG_MUTEXES)
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
 	return mutex->owner == task;
 #else
 	/* Since UP may be pre-empted, we cannot assume that we own the lock */
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 09/11] drm/i915: FIFO space query code refactor
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (7 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 08/11] Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES" Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads Rodrigo Vivi
  2015-01-26 12:43 ` [PATCH 11/11] drm/i915: use effective_size for ringbuffer calculations Rodrigo Vivi
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

From: Dave Gordon <david.s.gordon@intel.com>

When querying the GTFIFOCTL register to check the FIFO space, the read value
must be masked. The operation is repeated explicitly in several places. This
change refactors the read-and-mask code into a function call.

v2*: rebased on top of Mika's forcewake changes

Change-Id: Id1a9f3785cb20b82d4caa330c37b31e4e384a3ef
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/intel_uncore.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index e9561de..d29b4d4 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -147,6 +147,13 @@ static void __gen7_gt_force_wake_mt_put(struct drm_i915_private *dev_priv,
 		gen6_gt_check_fifodbg(dev_priv);
 }
 
+static inline u32 fifo_free_entries(struct drm_i915_private *dev_priv)
+{
+	u32 count = __raw_i915_read32(dev_priv, GTFIFOCTL);
+
+	return count & GT_FIFO_FREE_ENTRIES_MASK;
+}
+
 static int __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv)
 {
 	int ret = 0;
@@ -154,16 +161,15 @@ static int __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv)
 	/* On VLV, FIFO will be shared by both SW and HW.
 	 * So, we need to read the FREE_ENTRIES everytime */
 	if (IS_VALLEYVIEW(dev_priv->dev))
-		dev_priv->uncore.fifo_count =
-			__raw_i915_read32(dev_priv, GTFIFOCTL) &
-						GT_FIFO_FREE_ENTRIES_MASK;
+		dev_priv->uncore.fifo_count = fifo_free_entries(dev_priv);
 
 	if (dev_priv->uncore.fifo_count < GT_FIFO_NUM_RESERVED_ENTRIES) {
 		int loop = 500;
-		u32 fifo = __raw_i915_read32(dev_priv, GTFIFOCTL) & GT_FIFO_FREE_ENTRIES_MASK;
+		u32 fifo = fifo_free_entries(dev_priv);
+
 		while (fifo <= GT_FIFO_NUM_RESERVED_ENTRIES && loop--) {
 			udelay(10);
-			fifo = __raw_i915_read32(dev_priv, GTFIFOCTL) & GT_FIFO_FREE_ENTRIES_MASK;
+			fifo = fifo_free_entries(dev_priv);
 		}
 		if (WARN_ON(loop < 0 && fifo <= GT_FIFO_NUM_RESERVED_ENTRIES))
 			++ret;
@@ -505,8 +511,7 @@ void intel_uncore_forcewake_reset(struct drm_device *dev, bool restore)
 
 		if (IS_GEN6(dev) || IS_GEN7(dev))
 			dev_priv->uncore.fifo_count =
-				__raw_i915_read32(dev_priv, GTFIFOCTL) &
-				GT_FIFO_FREE_ENTRIES_MASK;
+				fifo_free_entries(dev_priv);
 	}
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (8 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 09/11] drm/i915: FIFO space query code refactor Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  2015-01-28  9:55   ` Daniel Vetter
  2015-01-26 12:43 ` [PATCH 11/11] drm/i915: use effective_size for ringbuffer calculations Rodrigo Vivi
  10 siblings, 1 reply; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

From: Dave Gordon <david.s.gordon@intel.com>

On some generations of chips, it is necessary to read an MMIO register
before getting the sequence number from the status page in main memory,
in order to ensure coherency; and on all generations this should be
either helpful or harmless.

In general, we want this operation to be the cheapest possible, since
we require only the side-effect of DMA completion and don't interpret
the result of the read, and don't require any coordination with other
threads, power domains, or anything else.

However, finding a suitable register may be problematic; on GEN6 chips
the ACTHD register was used, but on VLV et al access to this register
requires FORCEWAKE and therefore many complications involving spinlocks
and polling.

So this commit introduces this synchronising operation as a distinct
vfunc in the engine structure, so that it can be GEN- or chip-specific
if needed.

And there are three implementations; a dummy one, for chips where no
synchronising read is needed, a gen6(+) version that issues a posting
read (to TAIL), and a VLV-specific one that issues a raw read instead,
avoiding touching FORCEWAKE and GTFIFO and other such complications.

We then change gen6_ring_get_seqno() to use this new irq_barrier rather
than a POSTING_READ of ACTHD. Note that both older (pre-GEN6) and newer
(GEN8+) devices running in LRC mode do not currently include any posting
read in their own get_seqno() implementations, so this change only
makes a difference on VLV (and not CHV+).

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 37 +++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
 2 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 23020d6..97473ed 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1227,6 +1227,28 @@ pc_render_add_request(struct intel_engine_cs *ring)
 	return 0;
 }
 
+static void
+dummy_irq_barrier(struct intel_engine_cs *ring)
+{
+}
+
+static void
+gen6_irq_barrier(struct intel_engine_cs *ring)
+{
+	struct drm_i915_private *dev_priv = to_i915(ring->dev);
+	POSTING_READ(RING_TAIL(ring->mmio_base));
+}
+
+#define __raw_i915_read32(dev_priv__, reg__)	readl((dev_priv__)->regs + (reg__))
+#define RAW_POSTING_READ(reg__)			(void)__raw_i915_read32(dev_priv, reg__)
+
+static void
+vlv_irq_barrier(struct intel_engine_cs *ring)
+{
+	struct drm_i915_private *dev_priv = to_i915(ring->dev);
+	RAW_POSTING_READ(RING_TAIL(ring->mmio_base));
+}
+
 static u32
 gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
 {
@@ -1234,8 +1256,7 @@ gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
 	 * ivb (and maybe also on snb) by reading from a CS register (like
 	 * ACTHD) before reading the status page. */
 	if (!lazy_coherency) {
-		struct drm_i915_private *dev_priv = ring->dev->dev_private;
-		POSTING_READ(RING_ACTHD(ring->mmio_base));
+		ring->irq_barrier(ring);
 	}
 
 	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
@@ -2393,6 +2414,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		ring->irq_get = gen8_ring_get_irq;
 		ring->irq_put = gen8_ring_put_irq;
 		ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
+		ring->irq_barrier = gen6_irq_barrier;
 		ring->get_seqno = gen6_ring_get_seqno;
 		ring->set_seqno = ring_set_seqno;
 		if (i915_semaphore_is_enabled(dev)) {
@@ -2409,6 +2431,10 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		ring->irq_get = gen6_ring_get_irq;
 		ring->irq_put = gen6_ring_put_irq;
 		ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
+		if (IS_VALLEYVIEW(dev) && !IS_GEN8(dev))
+			ring->irq_barrier = vlv_irq_barrier;
+		else
+			ring->irq_barrier = gen6_irq_barrier;
 		ring->get_seqno = gen6_ring_get_seqno;
 		ring->set_seqno = ring_set_seqno;
 		if (i915_semaphore_is_enabled(dev)) {
@@ -2435,6 +2461,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	} else if (IS_GEN5(dev)) {
 		ring->add_request = pc_render_add_request;
 		ring->flush = gen4_render_ring_flush;
+		ring->irq_barrier = dummy_irq_barrier;
 		ring->get_seqno = pc_render_get_seqno;
 		ring->set_seqno = pc_render_set_seqno;
 		ring->irq_get = gen5_ring_get_irq;
@@ -2447,6 +2474,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 			ring->flush = gen2_render_ring_flush;
 		else
 			ring->flush = gen4_render_ring_flush;
+		ring->irq_barrier = dummy_irq_barrier;
 		ring->get_seqno = ring_get_seqno;
 		ring->set_seqno = ring_set_seqno;
 		if (IS_GEN2(dev)) {
@@ -2523,6 +2551,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 			ring->write_tail = gen6_bsd_ring_write_tail;
 		ring->flush = gen6_bsd_ring_flush;
 		ring->add_request = gen6_add_request;
+		ring->irq_barrier = gen6_irq_barrier;
 		ring->get_seqno = gen6_ring_get_seqno;
 		ring->set_seqno = ring_set_seqno;
 		if (INTEL_INFO(dev)->gen >= 8) {
@@ -2562,6 +2591,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 		ring->mmio_base = BSD_RING_BASE;
 		ring->flush = bsd_ring_flush;
 		ring->add_request = i9xx_add_request;
+		ring->irq_barrier = dummy_irq_barrier;
 		ring->get_seqno = ring_get_seqno;
 		ring->set_seqno = ring_set_seqno;
 		if (IS_GEN5(dev)) {
@@ -2601,6 +2631,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 	ring->mmio_base = GEN8_BSD2_RING_BASE;
 	ring->flush = gen6_bsd_ring_flush;
 	ring->add_request = gen6_add_request;
+	ring->irq_barrier = gen6_irq_barrier;
 	ring->get_seqno = gen6_ring_get_seqno;
 	ring->set_seqno = ring_set_seqno;
 	ring->irq_enable_mask =
@@ -2631,6 +2662,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 	ring->write_tail = ring_write_tail;
 	ring->flush = gen6_ring_flush;
 	ring->add_request = gen6_add_request;
+	ring->irq_barrier = gen6_irq_barrier;
 	ring->get_seqno = gen6_ring_get_seqno;
 	ring->set_seqno = ring_set_seqno;
 	if (INTEL_INFO(dev)->gen >= 8) {
@@ -2688,6 +2720,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 	ring->write_tail = ring_write_tail;
 	ring->flush = gen6_ring_flush;
 	ring->add_request = gen6_add_request;
+	ring->irq_barrier = gen6_irq_barrier;
 	ring->get_seqno = gen6_ring_get_seqno;
 	ring->set_seqno = ring_set_seqno;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 6dbb6f4..f686929 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -163,6 +163,7 @@ struct  intel_engine_cs {
 	 * seen value is good enough. Note that the seqno will always be
 	 * monotonic, even if not coherent.
 	 */
+	void		(*irq_barrier)(struct intel_engine_cs *ring);
 	u32		(*get_seqno)(struct intel_engine_cs *ring,
 				     bool lazy_coherency);
 	void		(*set_seqno)(struct intel_engine_cs *ring,
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 11/11] drm/i915: use effective_size for ringbuffer calculations
  2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
                   ` (9 preceding siblings ...)
  2015-01-26 12:43 ` [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads Rodrigo Vivi
@ 2015-01-26 12:43 ` Rodrigo Vivi
  10 siblings, 0 replies; 20+ messages in thread
From: Rodrigo Vivi @ 2015-01-26 12:43 UTC (permalink / raw)
  To: intel-gfx; +Cc: Rodrigo Vivi

From: Dave Gordon <david.s.gordon@intel.com>

When calculating the available space in a ringbuffer, we should
use the effective_size rather than the true size of the ring.

v2: rebase to latest drm-intel-nightly

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c        | 2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ae29f30d..59e8517 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -941,7 +941,7 @@ static int logical_ring_wait_request(struct intel_ringbuffer *ringbuf,
 
 		/* Would completion of this request free enough space? */
 		if (__intel_ring_space(request->tail, ringbuf->tail,
-				       ringbuf->size) >= bytes) {
+				       ringbuf->effective_size) >= bytes) {
 			break;
 		}
 	}
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 97473ed..0c46410 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -66,7 +66,8 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
 	}
 
 	ringbuf->space = __intel_ring_space(ringbuf->head & HEAD_ADDR,
-					    ringbuf->tail, ringbuf->size);
+					    ringbuf->tail,
+					    ringbuf->effective_size);
 }
 
 int intel_ring_space(struct intel_ringbuffer *ringbuf)
@@ -1971,7 +1972,7 @@ static int intel_ring_wait_request(struct intel_engine_cs *ring, int n)
 
 	list_for_each_entry(request, &ring->request_list, list) {
 		if (__intel_ring_space(request->tail, ringbuf->tail,
-				       ringbuf->size) >= n) {
+				       ringbuf->effective_size) >= n) {
 			break;
 		}
 	}
-- 
1.9.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 03/11] drm/i915: Remove pinned check from madvise_ioctl
  2015-01-26 12:43 ` [PATCH 03/11] drm/i915: Remove pinned check from madvise_ioctl Rodrigo Vivi
@ 2015-01-28  9:52   ` Daniel Vetter
  0 siblings, 0 replies; 20+ messages in thread
From: Daniel Vetter @ 2015-01-28  9:52 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: intel-gfx

On Mon, Jan 26, 2015 at 04:43:17AM -0800, Rodrigo Vivi wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
> 
> We don't need to incur the overhead of checking whether the object is
> pinned prior to changing its madvise. If the object is pinned, the
> madvise will not take effect until it is unpinned and so we cannot free
> the pages being pointed at by hardware. Marking a pinned object with
> allocated pages as DONTNEED will not trigger any undue warnings. The check
> is therefore superfluous, and by removing it we can remove a linear walk
> over all the vma the object has.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

I'd still would like to see some igt testcase which marks a pinned scanout
buffer asl DONTNEED. Just to make sure we don't accidentally create a gap
somewhere for a CVE to sneak through.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 6c40365..123ce34 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4365,11 +4365,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
>  		goto unlock;
>  	}
>  
> -	if (i915_gem_obj_is_pinned(obj)) {
> -		ret = -EINVAL;
> -		goto out;
> -	}
> -
>  	if (obj->pages &&
>  	    obj->tiling_mode != I915_TILING_NONE &&
>  	    dev_priv->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
> @@ -4388,7 +4383,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
>  
>  	args->retained = obj->madv != __I915_MADV_PURGED;
>  
> -out:
>  	drm_gem_object_unreference(&obj->base);
>  unlock:
>  	mutex_unlock(&dev->struct_mutex);
> -- 
> 1.9.3
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 08/11] Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES"
  2015-01-26 12:43 ` [PATCH 08/11] Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES" Rodrigo Vivi
@ 2015-01-28  9:53   ` Daniel Vetter
  0 siblings, 0 replies; 20+ messages in thread
From: Daniel Vetter @ 2015-01-28  9:53 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: Jani Nikula, Daniel Vetter, intel-gfx

On Mon, Jan 26, 2015 at 04:43:22AM -0800, Rodrigo Vivi wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
> 
> The core fix was applied in
> 
> commit a63b03e2d2477586440741677ecac45bcf28d7b1
> Author: Chris Wilson <chris@chris-wilson.co.uk>
> Date:   Tue Jan 6 10:29:35 2015 +0000
> 
>     mutex: Always clear owner field upon mutex_unlock()
> 
> (note the absence of stable@ tag)
> 
> so we can now revert our band-aid commit 226e5ae9e5f910 for -next.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Jani Nikula <jani.nikula@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

Queued for -next, thanks for the patch.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0a074bc..b50a2b4 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -5213,7 +5213,7 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
>  	if (!mutex_is_locked(mutex))
>  		return false;
>  
> -#if defined(CONFIG_SMP) && !defined(CONFIG_DEBUG_MUTEXES)
> +#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
>  	return mutex->owner == task;
>  #else
>  	/* Since UP may be pre-empted, we cannot assume that we own the lock */
> -- 
> 1.9.3
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads
  2015-01-26 12:43 ` [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads Rodrigo Vivi
@ 2015-01-28  9:55   ` Daniel Vetter
  2015-01-28 10:02     ` Chris Wilson
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2015-01-28  9:55 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: intel-gfx

On Mon, Jan 26, 2015 at 04:43:24AM -0800, Rodrigo Vivi wrote:
> From: Dave Gordon <david.s.gordon@intel.com>
> 
> On some generations of chips, it is necessary to read an MMIO register
> before getting the sequence number from the status page in main memory,
> in order to ensure coherency; and on all generations this should be
> either helpful or harmless.
> 
> In general, we want this operation to be the cheapest possible, since
> we require only the side-effect of DMA completion and don't interpret
> the result of the read, and don't require any coordination with other
> threads, power domains, or anything else.
> 
> However, finding a suitable register may be problematic; on GEN6 chips
> the ACTHD register was used, but on VLV et al access to this register
> requires FORCEWAKE and therefore many complications involving spinlocks
> and polling.
> 
> So this commit introduces this synchronising operation as a distinct
> vfunc in the engine structure, so that it can be GEN- or chip-specific
> if needed.
> 
> And there are three implementations; a dummy one, for chips where no
> synchronising read is needed, a gen6(+) version that issues a posting
> read (to TAIL), and a VLV-specific one that issues a raw read instead,
> avoiding touching FORCEWAKE and GTFIFO and other such complications.
> 
> We then change gen6_ring_get_seqno() to use this new irq_barrier rather
> than a POSTING_READ of ACTHD. Note that both older (pre-GEN6) and newer
> (GEN8+) devices running in LRC mode do not currently include any posting
> read in their own get_seqno() implementations, so this change only
> makes a difference on VLV (and not CHV+).
> 
> Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
> Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_ringbuffer.c | 37 +++++++++++++++++++++++++++++++--
>  drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
>  2 files changed, 36 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index 23020d6..97473ed 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -1227,6 +1227,28 @@ pc_render_add_request(struct intel_engine_cs *ring)
>  	return 0;
>  }
>  
> +static void
> +dummy_irq_barrier(struct intel_engine_cs *ring)
> +{
> +}
> +
> +static void
> +gen6_irq_barrier(struct intel_engine_cs *ring)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(ring->dev);
> +	POSTING_READ(RING_TAIL(ring->mmio_base));
> +}
> +
> +#define __raw_i915_read32(dev_priv__, reg__)	readl((dev_priv__)->regs + (reg__))
> +#define RAW_POSTING_READ(reg__)			(void)__raw_i915_read32(dev_priv, reg__)
> +
> +static void
> +vlv_irq_barrier(struct intel_engine_cs *ring)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(ring->dev);
> +	RAW_POSTING_READ(RING_TAIL(ring->mmio_base));
> +}
> +
>  static u32
>  gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
>  {
> @@ -1234,8 +1256,7 @@ gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
>  	 * ivb (and maybe also on snb) by reading from a CS register (like
>  	 * ACTHD) before reading the status page. */
>  	if (!lazy_coherency) {
> -		struct drm_i915_private *dev_priv = ring->dev->dev_private;
> -		POSTING_READ(RING_ACTHD(ring->mmio_base));
> +		ring->irq_barrier(ring);
>  	}

Imo just do a vlv_ring_get_seqno if this is a problem. Adding a vfunc with
mostly empty or same implemenation to another very tiny vfunc isn't doing
a whole lot of good to the codebase.
-Daniel

>  
>  	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
> @@ -2393,6 +2414,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
>  		ring->irq_get = gen8_ring_get_irq;
>  		ring->irq_put = gen8_ring_put_irq;
>  		ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
> +		ring->irq_barrier = gen6_irq_barrier;
>  		ring->get_seqno = gen6_ring_get_seqno;
>  		ring->set_seqno = ring_set_seqno;
>  		if (i915_semaphore_is_enabled(dev)) {
> @@ -2409,6 +2431,10 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
>  		ring->irq_get = gen6_ring_get_irq;
>  		ring->irq_put = gen6_ring_put_irq;
>  		ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
> +		if (IS_VALLEYVIEW(dev) && !IS_GEN8(dev))
> +			ring->irq_barrier = vlv_irq_barrier;
> +		else
> +			ring->irq_barrier = gen6_irq_barrier;
>  		ring->get_seqno = gen6_ring_get_seqno;
>  		ring->set_seqno = ring_set_seqno;
>  		if (i915_semaphore_is_enabled(dev)) {
> @@ -2435,6 +2461,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
>  	} else if (IS_GEN5(dev)) {
>  		ring->add_request = pc_render_add_request;
>  		ring->flush = gen4_render_ring_flush;
> +		ring->irq_barrier = dummy_irq_barrier;
>  		ring->get_seqno = pc_render_get_seqno;
>  		ring->set_seqno = pc_render_set_seqno;
>  		ring->irq_get = gen5_ring_get_irq;
> @@ -2447,6 +2474,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
>  			ring->flush = gen2_render_ring_flush;
>  		else
>  			ring->flush = gen4_render_ring_flush;
> +		ring->irq_barrier = dummy_irq_barrier;
>  		ring->get_seqno = ring_get_seqno;
>  		ring->set_seqno = ring_set_seqno;
>  		if (IS_GEN2(dev)) {
> @@ -2523,6 +2551,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
>  			ring->write_tail = gen6_bsd_ring_write_tail;
>  		ring->flush = gen6_bsd_ring_flush;
>  		ring->add_request = gen6_add_request;
> +		ring->irq_barrier = gen6_irq_barrier;
>  		ring->get_seqno = gen6_ring_get_seqno;
>  		ring->set_seqno = ring_set_seqno;
>  		if (INTEL_INFO(dev)->gen >= 8) {
> @@ -2562,6 +2591,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
>  		ring->mmio_base = BSD_RING_BASE;
>  		ring->flush = bsd_ring_flush;
>  		ring->add_request = i9xx_add_request;
> +		ring->irq_barrier = dummy_irq_barrier;
>  		ring->get_seqno = ring_get_seqno;
>  		ring->set_seqno = ring_set_seqno;
>  		if (IS_GEN5(dev)) {
> @@ -2601,6 +2631,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
>  	ring->mmio_base = GEN8_BSD2_RING_BASE;
>  	ring->flush = gen6_bsd_ring_flush;
>  	ring->add_request = gen6_add_request;
> +	ring->irq_barrier = gen6_irq_barrier;
>  	ring->get_seqno = gen6_ring_get_seqno;
>  	ring->set_seqno = ring_set_seqno;
>  	ring->irq_enable_mask =
> @@ -2631,6 +2662,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
>  	ring->write_tail = ring_write_tail;
>  	ring->flush = gen6_ring_flush;
>  	ring->add_request = gen6_add_request;
> +	ring->irq_barrier = gen6_irq_barrier;
>  	ring->get_seqno = gen6_ring_get_seqno;
>  	ring->set_seqno = ring_set_seqno;
>  	if (INTEL_INFO(dev)->gen >= 8) {
> @@ -2688,6 +2720,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
>  	ring->write_tail = ring_write_tail;
>  	ring->flush = gen6_ring_flush;
>  	ring->add_request = gen6_add_request;
> +	ring->irq_barrier = gen6_irq_barrier;
>  	ring->get_seqno = gen6_ring_get_seqno;
>  	ring->set_seqno = ring_set_seqno;
>  
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 6dbb6f4..f686929 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -163,6 +163,7 @@ struct  intel_engine_cs {
>  	 * seen value is good enough. Note that the seqno will always be
>  	 * monotonic, even if not coherent.
>  	 */
> +	void		(*irq_barrier)(struct intel_engine_cs *ring);
>  	u32		(*get_seqno)(struct intel_engine_cs *ring,
>  				     bool lazy_coherency);
>  	void		(*set_seqno)(struct intel_engine_cs *ring,
> -- 
> 1.9.3
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2015-01-26 12:43 ` [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space Rodrigo Vivi
@ 2015-01-28  9:59   ` Daniel Vetter
  2015-04-29 10:24     ` Chris Wilson
  2015-04-29 10:27     ` Chris Wilson
  0 siblings, 2 replies; 20+ messages in thread
From: Daniel Vetter @ 2015-01-28  9:59 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: intel-gfx

On Mon, Jan 26, 2015 at 04:43:18AM -0800, Rodrigo Vivi wrote:
> When constructing a batchbuffer, it is sometimes crucial to know the
> largest hole into which we can fit a fenceable buffer (for example when
> handling very large objects on gen2 and gen3). This depends on the
> fragmentation of pinned buffers inside the aperture, a question only the
> kernel can easily answer.
> 
> This patch extends the current DRM_I915_GEM_GET_APERTURE ioctl to
> include a couple of new fields in its reply to userspace - the total
> amount of space available in the mappable region of the aperture and
> also the single largest block available.
> 
> This is not quite what userspace wants to answer the question of whether
> this batch will fit as fences are also required to meet severe alignment
> constraints within the batch. For this purpose, a third conservative
> estimate of largest fence available is also provided. For when userspace
> needs more than one batch, we also provide the culmulative space
> available for fences such that it has some additional guidance to how
> much space it could allocate to fences. Conservatism still wins.
> 
> The patch also adds a debugfs file for convenient testing and reporting.
> 
> v2: The first object cannot end at offset 0, so we can use last==0 to
> detect the empty list.
> 
> v3: Expand all values to 64bit, just in case.
>     Report total mappable aperture size for userspace that cannot easily
>     determine it by inspecting the PCI device.
> 
> v4: (Rodrigo) Fixed rebase conflicts.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

Do we have the libdrm patch for this too? Imo there's not much use in this
if mesa remains broken, especially since this is for gen2/3 ... most DE
use gl nowadays.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_debugfs.c |  27 +++++++++
>  drivers/gpu/drm/i915/i915_gem.c     | 116 ++++++++++++++++++++++++++++++++++--
>  include/uapi/drm/i915_drm.h         |  25 ++++++++
>  3 files changed, 164 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index 0d11cbe..f1aea86 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -498,6 +498,32 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
>  	return 0;
>  }
>  
> +static int i915_gem_aperture_info(struct seq_file *m, void *data)
> +{
> +	struct drm_info_node *node = m->private;
> +	struct drm_i915_gem_get_aperture arg;
> +	int ret;
> +
> +	ret = i915_gem_get_aperture_ioctl(node->minor->dev, &arg, NULL);
> +	if (ret)
> +		return ret;
> +
> +	seq_printf(m, "Total size of the GTT: %llu bytes\n",
> +		   arg.aper_size);
> +	seq_printf(m, "Available space in the GTT: %llu bytes\n",
> +		   arg.aper_available_size);
> +	seq_printf(m, "Available space in the mappable aperture: %llu bytes\n",
> +		   arg.map_available_size);
> +	seq_printf(m, "Single largest space in the mappable aperture: %llu bytes\n",
> +		   arg.map_largest_size);
> +	seq_printf(m, "Available space for fences: %llu bytes\n",
> +		   arg.fence_available_size);
> +	seq_printf(m, "Single largest fence available: %llu bytes\n",
> +		   arg.fence_largest_size);
> +
> +	return 0;
> +}
> +
>  static int i915_gem_gtt_info(struct seq_file *m, void *data)
>  {
>  	struct drm_info_node *node = m->private;
> @@ -4398,6 +4424,7 @@ static int i915_debugfs_create(struct dentry *root,
>  static const struct drm_info_list i915_debugfs_list[] = {
>  	{"i915_capabilities", i915_capabilities, 0},
>  	{"i915_gem_objects", i915_gem_object_info, 0},
> +	{"i915_gem_aperture", i915_gem_aperture_info, 0},
>  	{"i915_gem_gtt", i915_gem_gtt_info, 0},
>  	{"i915_gem_pinned", i915_gem_gtt_info, 0, (void *) PINNED_LIST},
>  	{"i915_gem_active", i915_gem_object_list_info, 0, (void *) ACTIVE_LIST},
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 123ce34..0a074bc 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -31,6 +31,7 @@
>  #include "i915_drv.h"
>  #include "i915_trace.h"
>  #include "intel_drv.h"
> +#include <linux/list_sort.h>
>  #include <linux/oom.h>
>  #include <linux/shmem_fs.h>
>  #include <linux/slab.h>
> @@ -153,6 +154,55 @@ int i915_mutex_lock_interruptible(struct drm_device *dev)
>  	return 0;
>  }
>  
> +static inline bool
> +i915_gem_object_is_inactive(struct drm_i915_gem_object *obj)
> +{
> +	return i915_gem_obj_bound_any(obj) && !obj->active;
> +}
> +
> +static int obj_rank_by_ggtt(void *priv,
> +			    struct list_head *A,
> +			    struct list_head *B)
> +{
> +	struct drm_i915_gem_object *a = list_entry(A,typeof(*a), obj_exec_link);
> +	struct drm_i915_gem_object *b = list_entry(B,typeof(*b), obj_exec_link);
> +
> +	return i915_gem_obj_ggtt_offset(a) - i915_gem_obj_ggtt_offset(b);
> +}
> +
> +static u32 __fence_size(struct drm_i915_private *dev_priv, u32 start, u32 end)
> +{
> +	u32 size = end - start;
> +	u32 fence_size;
> +
> +	if (INTEL_INFO(dev_priv)->gen < 4) {
> +		u32 fence_max;
> +		u32 fence_next;
> +
> +		if (IS_GEN3(dev_priv)) {
> +			fence_max = I830_FENCE_MAX_SIZE_VAL << 20;
> +			fence_next = 1024*1024;
> +		} else {
> +			fence_max = I830_FENCE_MAX_SIZE_VAL << 19;
> +			fence_next = 512*1024;
> +		}
> +
> +		fence_max = min(fence_max, size);
> +		fence_size = 0;
> +		while (fence_next <= fence_max) {
> +			u32 base = ALIGN(start, fence_next);
> +			if (base + fence_next > end)
> +				break;
> +
> +			fence_size = fence_next;
> +			fence_next <<= 1;
> +		}
> +	} else
> +		fence_size = size;
> +
> +	return fence_size;
> +}
> +
>  int
>  i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
>  			    struct drm_file *file)
> @@ -160,17 +210,75 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
>  	struct drm_i915_private *dev_priv = dev->dev_private;
>  	struct drm_i915_gem_get_aperture *args = data;
>  	struct drm_i915_gem_object *obj;
> -	size_t pinned;
> +	struct list_head map_list;
> +	const u32 map_limit = dev_priv->gtt.mappable_end;
> +	size_t pinned, map_space, map_largest, fence_space, fence_largest;
> +	u32 last, size;
> +
> +	INIT_LIST_HEAD(&map_list);
>  
>  	pinned = 0;
> +	map_space = map_largest = 0;
> +	fence_space = fence_largest = 0;
> +
>  	mutex_lock(&dev->struct_mutex);
> -	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
> -		if (i915_gem_obj_is_pinned(obj))
> -			pinned += i915_gem_obj_ggtt_size(obj);
> +	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
> +		struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);
> +
> +		if (vma == NULL || !vma->pin_count)
> +			continue;
> +
> +		pinned += vma->node.size;
> +
> +		if (vma->node.start < map_limit)
> +			list_add(&obj->obj_exec_link, &map_list);
> +	}
> +
> +	last = 0;
> +	list_sort(NULL, &map_list, obj_rank_by_ggtt);
> +	while (!list_empty(&map_list)) {
> +		struct i915_vma *vma;
> +
> +		obj = list_first_entry(&map_list, typeof(*obj), obj_exec_link);
> +		list_del_init(&obj->obj_exec_link);
> +
> +		vma = i915_gem_obj_to_ggtt(obj);
> +		if (last == 0)
> +			goto skip_first;
> +
> +		size = vma->node.start - last;
> +		if (size > map_largest)
> +			map_largest = size;
> +		map_space += size;
> +
> +		size = __fence_size(dev_priv, last, vma->node.start);
> +		if (size > fence_largest)
> +			fence_largest = size;
> +		fence_space += size;
> +
> +skip_first:
> +		last = vma->node.start + vma->node.size;
> +	}
> +	if (last < map_limit) {
> +		size = map_limit - last;
> +		if (size > map_largest)
> +			map_largest = size;
> +		map_space += size;
> +
> +		size = __fence_size(dev_priv, last, map_limit);
> +		if (size > fence_largest)
> +			fence_largest = size;
> +		fence_space += size;
> +	}
>  	mutex_unlock(&dev->struct_mutex);
>  
>  	args->aper_size = dev_priv->gtt.base.total;
>  	args->aper_available_size = args->aper_size - pinned;
> +	args->map_available_size = map_space;
> +	args->map_largest_size = map_largest;
> +	args->map_total_size = dev_priv->gtt.mappable_end;
> +	args->fence_available_size = fence_space;
> +	args->fence_largest_size = fence_largest;
>  
>  	return 0;
>  }
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index 2e559f6e..28f614d 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -907,6 +907,31 @@ struct drm_i915_gem_get_aperture {
>  	 * bytes
>  	 */
>  	__u64 aper_available_size;
> +
> +	/**
> +	 * Total space in the mappable region of the aperture, in bytes
> +	 */
> +	__u64 map_total_size;
> +
> +	/**
> +	 * Available space in the mappable region of the aperture, in bytes
> +	 */
> +	__u64 map_available_size;
> +
> +	/**
> +	 * Single largest available region inside the mappable region, in bytes.
> +	 */
> +	__u64 map_largest_size;
> +
> +	/**
> +	 * Culmulative space available for fences, in bytes
> +	 */
> +	__u64 fence_available_size;
> +
> +	/**
> +	 * Single largest fenceable region, in bytes.
> +	 */
> +	__u64 fence_largest_size;
>  };
>  
>  struct drm_i915_get_pipe_from_crtc_id {
> -- 
> 1.9.3
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads
  2015-01-28  9:55   ` Daniel Vetter
@ 2015-01-28 10:02     ` Chris Wilson
  0 siblings, 0 replies; 20+ messages in thread
From: Chris Wilson @ 2015-01-28 10:02 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, Rodrigo Vivi

On Wed, Jan 28, 2015 at 10:55:53AM +0100, Daniel Vetter wrote:
> On Mon, Jan 26, 2015 at 04:43:24AM -0800, Rodrigo Vivi wrote:
> > From: Dave Gordon <david.s.gordon@intel.com>
> > 
> > On some generations of chips, it is necessary to read an MMIO register
> > before getting the sequence number from the status page in main memory,
> > in order to ensure coherency; and on all generations this should be
> > either helpful or harmless.
> > 
> > In general, we want this operation to be the cheapest possible, since
> > we require only the side-effect of DMA completion and don't interpret
> > the result of the read, and don't require any coordination with other
> > threads, power domains, or anything else.
> > 
> > However, finding a suitable register may be problematic; on GEN6 chips
> > the ACTHD register was used, but on VLV et al access to this register
> > requires FORCEWAKE and therefore many complications involving spinlocks
> > and polling.
> > 
> > So this commit introduces this synchronising operation as a distinct
> > vfunc in the engine structure, so that it can be GEN- or chip-specific
> > if needed.
> > 
> > And there are three implementations; a dummy one, for chips where no
> > synchronising read is needed, a gen6(+) version that issues a posting
> > read (to TAIL), and a VLV-specific one that issues a raw read instead,
> > avoiding touching FORCEWAKE and GTFIFO and other such complications.
> > 
> > We then change gen6_ring_get_seqno() to use this new irq_barrier rather
> > than a POSTING_READ of ACTHD. Note that both older (pre-GEN6) and newer
> > (GEN8+) devices running in LRC mode do not currently include any posting
> > read in their own get_seqno() implementations, so this change only
> > makes a difference on VLV (and not CHV+).
> > 
> > Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
> > Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_ringbuffer.c | 37 +++++++++++++++++++++++++++++++--
> >  drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
> >  2 files changed, 36 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> > index 23020d6..97473ed 100644
> > --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> > @@ -1227,6 +1227,28 @@ pc_render_add_request(struct intel_engine_cs *ring)
> >  	return 0;
> >  }
> >  
> > +static void
> > +dummy_irq_barrier(struct intel_engine_cs *ring)
> > +{
> > +}
> > +
> > +static void
> > +gen6_irq_barrier(struct intel_engine_cs *ring)
> > +{
> > +	struct drm_i915_private *dev_priv = to_i915(ring->dev);
> > +	POSTING_READ(RING_TAIL(ring->mmio_base));
> > +}
> > +
> > +#define __raw_i915_read32(dev_priv__, reg__)	readl((dev_priv__)->regs + (reg__))
> > +#define RAW_POSTING_READ(reg__)			(void)__raw_i915_read32(dev_priv, reg__)
> > +
> > +static void
> > +vlv_irq_barrier(struct intel_engine_cs *ring)
> > +{
> > +	struct drm_i915_private *dev_priv = to_i915(ring->dev);
> > +	RAW_POSTING_READ(RING_TAIL(ring->mmio_base));
> > +}
> > +
> >  static u32
> >  gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
> >  {
> > @@ -1234,8 +1256,7 @@ gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
> >  	 * ivb (and maybe also on snb) by reading from a CS register (like
> >  	 * ACTHD) before reading the status page. */
> >  	if (!lazy_coherency) {
> > -		struct drm_i915_private *dev_priv = ring->dev->dev_private;
> > -		POSTING_READ(RING_ACTHD(ring->mmio_base));
> > +		ring->irq_barrier(ring);
> >  	}
> 
> Imo just do a vlv_ring_get_seqno if this is a problem. Adding a vfunc with
> mostly empty or same implemenation to another very tiny vfunc isn't doing
> a whole lot of good to the codebase.

Or rather since there is just one place that cares about the
irq_barrier, just call it from that callsite and simplify get_seqno.

But the whole vlv is special argument still seems nebulous in the first
place.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2015-01-28  9:59   ` Daniel Vetter
@ 2015-04-29 10:24     ` Chris Wilson
  2015-04-29 10:27     ` Chris Wilson
  1 sibling, 0 replies; 20+ messages in thread
From: Chris Wilson @ 2015-04-29 10:24 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, Rodrigo Vivi

On Wed, Jan 28, 2015 at 10:59:28AM +0100, Daniel Vetter wrote:
> On Mon, Jan 26, 2015 at 04:43:18AM -0800, Rodrigo Vivi wrote:
> > When constructing a batchbuffer, it is sometimes crucial to know the
> > largest hole into which we can fit a fenceable buffer (for example when
> > handling very large objects on gen2 and gen3). This depends on the
> > fragmentation of pinned buffers inside the aperture, a question only the
> > kernel can easily answer.
> > 
> > This patch extends the current DRM_I915_GEM_GET_APERTURE ioctl to
> > include a couple of new fields in its reply to userspace - the total
> > amount of space available in the mappable region of the aperture and
> > also the single largest block available.
> > 
> > This is not quite what userspace wants to answer the question of whether
> > this batch will fit as fences are also required to meet severe alignment
> > constraints within the batch. For this purpose, a third conservative
> > estimate of largest fence available is also provided. For when userspace
> > needs more than one batch, we also provide the culmulative space
> > available for fences such that it has some additional guidance to how
> > much space it could allocate to fences. Conservatism still wins.
> > 
> > The patch also adds a debugfs file for convenient testing and reporting.
> > 
> > v2: The first object cannot end at offset 0, so we can use last==0 to
> > detect the empty list.
> > 
> > v3: Expand all values to 64bit, just in case.
> >     Report total mappable aperture size for userspace that cannot easily
> >     determine it by inspecting the PCI device.
> > 
> > v4: (Rodrigo) Fixed rebase conflicts.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Do we have the libdrm patch for this too? Imo there's not much use in this
> if mesa remains broken, especially since this is for gen2/3 ... most DE
> use gl nowadays.

Mesa on gen2/3 is broken full stop as it cannot handle the full desktop
size anyway. Just like Broadwell.

There was a user ready to go and waiting.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2015-01-28  9:59   ` Daniel Vetter
  2015-04-29 10:24     ` Chris Wilson
@ 2015-04-29 10:27     ` Chris Wilson
  2015-04-30 10:17       ` Joonas Lahtinen
  1 sibling, 1 reply; 20+ messages in thread
From: Chris Wilson @ 2015-04-29 10:27 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, Rodrigo Vivi

On Wed, Jan 28, 2015 at 10:59:28AM +0100, Daniel Vetter wrote:
> Do we have the libdrm patch for this too? Imo there's not much use in this
> if mesa remains broken, especially since this is for gen2/3 ... most DE
> use gl nowadays.

On the other hand, there is a bug report open for mesa being broken in
how it determines the aperture size that is causing an X crash.
So maybe I should fix it up anyway. Hard to do without such a patch.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2015-04-29 10:27     ` Chris Wilson
@ 2015-04-30 10:17       ` Joonas Lahtinen
  0 siblings, 0 replies; 20+ messages in thread
From: Joonas Lahtinen @ 2015-04-30 10:17 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, Rodrigo Vivi

On ke, 2015-04-29 at 11:27 +0100, Chris Wilson wrote:
> On Wed, Jan 28, 2015 at 10:59:28AM +0100, Daniel Vetter wrote:
> > Do we have the libdrm patch for this too? Imo there's not much use in this
> > if mesa remains broken, especially since this is for gen2/3 ... most DE
> > use gl nowadays.
> 
> On the other hand, there is a bug report open for mesa being broken in
> how it determines the aperture size that is causing an X crash.
> So maybe I should fix it up anyway. Hard to do without such a patch.

If the patch is merged, it should be upgraded to loop through all the
VMA's in GGTT, not only the ones that present whole objects.

Just a matter of adding looping in place of the obj_to_ggtt call.

Also, the i915_gem_object_is_inactive seems to be there as a leftover
added function, never gets called.

Regards, Joonas

> -Chris
> 


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2015-04-30 10:17 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-26 12:43 [PATCH 00/11] drm-intel-collector - update Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 01/11] drm/i915: Put logical pipe_control emission into a helper Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 02/11] drm/i915: Add WaCsStallBeforeStateCacheInvalidate:bdw, chv to logical ring Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 03/11] drm/i915: Remove pinned check from madvise_ioctl Rodrigo Vivi
2015-01-28  9:52   ` Daniel Vetter
2015-01-26 12:43 ` [PATCH 04/11] drm/i915: Extend GET_APERTURE ioctl to report available map space Rodrigo Vivi
2015-01-28  9:59   ` Daniel Vetter
2015-04-29 10:24     ` Chris Wilson
2015-04-29 10:27     ` Chris Wilson
2015-04-30 10:17       ` Joonas Lahtinen
2015-01-26 12:43 ` [PATCH 05/11] drm/i915: Display current hangcheck status in debugfs Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 06/11] drm/i915/vlv: check port in infoframe_enabled v2 Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 07/11] drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT reg Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 08/11] Revert "drm/i915: Fix mutex->owner inspection race under DEBUG_MUTEXES" Rodrigo Vivi
2015-01-28  9:53   ` Daniel Vetter
2015-01-26 12:43 ` [PATCH 09/11] drm/i915: FIFO space query code refactor Rodrigo Vivi
2015-01-26 12:43 ` [PATCH 10/11] drm/i915: add irq_barrier operation for synchronising reads Rodrigo Vivi
2015-01-28  9:55   ` Daniel Vetter
2015-01-28 10:02     ` Chris Wilson
2015-01-26 12:43 ` [PATCH 11/11] drm/i915: use effective_size for ringbuffer calculations Rodrigo Vivi

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.