* [PATCH v20 00/11] Support for creating/using Stolen memory backed objects
@ 2016-05-31  6:19 ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
                   ` (11 more replies)
  0 siblings, 12 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

This patch series adds support for creating/using Stolen memory backed
objects.

Despite being a unified memory architecture (UMA), some bits of memory
are more equal than others. In particular we have the thorny issue of
stolen memory, memory stolen from the system by the BIOS and reserved
for igfx use. Stolen memory is required for some functions of the GPU
and display engine, but in general it goes to waste. Whilst we cannot
return it to the system, we need to find some other method for
utilising it. As we do not support direct access to the physical address
in the stolen region, it behaves like a different class of memory,
closer in kind to local GPU memory. This strongly suggests that we need a
placement model like TTM if we are to fully utilize these discrete
chunks of differing memory.

To support creating Stolen memory backed objects, we extend the
drm_i915_gem_create structure with a new flag through which the user
can request that the object be allocated from stolen memory. When the
flag is set, an attempt is made to allocate the object from stolen
memory, subject to the availability of free space in the stolen region.
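
As a concrete illustration, here is a minimal userspace sketch (not part
of the series; it assumes libdrm's drmIoctl() and an open DRM fd, and uses
the flag names defined by the uapi change in patch 05):

    /* Hedged sketch: request a 1 MiB object backed by stolen memory.
     * There is no fallback to shmem; the ioctl fails if the stolen
     * region has no free space.
     */
    struct drm_i915_gem_create create = {
        .size = 1 << 20,
        .flags = I915_CREATE_PLACEMENT_STOLEN,
    };

    if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create))
        return -errno;

    /* create.handle now names a stolen-backed object. CPU mmap of it
     * is not permitted; use GTT mmap, pread or pwrite instead.
     */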

The series also adds support for clearing buffer objects via the
CPU/GTT. This is particularly useful for scrubbing memory from the
stolen region, though it can equally be used for other shmem allocated
objects; at present it is only used for buffers allocated in the stolen
region. Finally, we add support for stealing purgeable stolen pages if
we run out of stolen memory while trying to allocate an object.

v2: Added support for read/write from/to objects not backed by
shmem using the pread/pwrite interface.
Also extended the current get_aperture ioctl to retrieve the
total and available size of the stolen region.

v3: Removed the extended get_aperture ioctl patch 5 (to be submitted as
part of another patch series), addressed comments by Chris about
pread/pwrite for non-shmem backed objects.

v4: Rebased to the latest drm-intel-nightly.

v5: Addressed comments, replaced patch 1/4 "Clearing buffers via blitter
engine" with "Clearing buffers via CPU/GTT".

v6: Rebased to the latest drm-intel-nightly, addressed comments, updated
the stolen memory purging logic by maintaining a list of purgeable stolen
memory objects, and enabled pread/pwrite for all non-shmem backed objects
without tiling restrictions.

v7: Addressed comments, applied a compiler optimization, and added a new
patch for correct error code propagation to userspace.

v8: Added a new patch to the series to migrate stolen objects before
hibernation, as stolen memory is not preserved across hibernation. Added
correct error propagation for both shmem and non-shmem backed object
allocation.

v9: Addressed comments, used the insert_page helper function to map an
object page by page, which is helpful when little aperture space is
available.

v10: Addressed comments, used insert_page for clearing out the stolen memory

v11: Addressed comments, added 3 new patches to support allocation from
stolen memory:
1. Allow use of i915_gem_object_get_dma_address for stolen backed objects
2. Use insert_page for pwrite_fast
3. Fail the execbuff using stolen objects as batchbuffers

v12: Addressed comments, Removed patch "Fail the execbuff using stolen objects
as batchbuffers"

v13: Addressed comments, added 2 patches to detect Intel RST and disable
stolen for persistent data if an RST device is found:
1. acpi: Export acpi_bus_type
2. drm/i915: Disable use of stolen area by User when Intel RST is present

v14: Addressed comments, added 2 base patches to the series:
1. drm/i915: Add support for mapping an object page by page
2. drm/i915: Introduce i915_gem_object_get_dma_address()

v15: Addressed comments, disabled stolen memory by default

v16: Addressed comments, added low-level rpm assertions, enabled stolen
memory

v17: Addressed comments

v18: Rebased and fixed an issue

v19: Rebased and added 2 more patches to report mappable and stolen size
numbers:
1. drm/i915: Extend GET_APERTURE ioctl to report available map space
2. drm/i915: Extend GET_APERTURE ioctl to report size of the stolen region

v20: Rebased and squashed last 2 patches into one.

This can be verified using IGT tests:
igt/gem_stolen, igt/gem_create, igt/gem_pread, igt/gem_pwrite

Ankitprasad Sharma (7):
  drm/i915: Use insert_page for pwrite_fast
  drm/i915: Clearing buffer objects via CPU/GTT
  drm/i915: Support for creating Stolen memory backed objects
  drm/i915: Propagating correct error codes to the userspace
  drm/i915: Support for pread/pwrite from/to non shmem backed objects
  drm/i915: Disable use of stolen area by User when Intel RST is present
  drm/i915: Extend GET_APERTURE ioctl to report available map space

Chris Wilson (4):
  drm/i915: Add support for mapping an object page by page
  drm/i915: Introduce i915_gem_object_get_dma_address()
  drm/i915: Add support for stealing purgable stolen pages
  drm/i915: Migrate stolen objects before hibernation

 drivers/char/agp/intel-gtt.c                 |   8 +
 drivers/gpu/drm/i915/i915_debugfs.c          | 149 ++++++-
 drivers/gpu/drm/i915/i915_dma.c              |   3 +
 drivers/gpu/drm/i915/i915_drv.c              |  22 +-
 drivers/gpu/drm/i915/i915_drv.h              |  61 ++-
 drivers/gpu/drm/i915/i915_gem.c              | 626 ++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_gem_gtt.c          |  66 ++-
 drivers/gpu/drm/i915/i915_gem_gtt.h          |   5 +
 drivers/gpu/drm/i915/i915_gem_render_state.c |   7 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c       | 306 +++++++++++--
 drivers/gpu/drm/i915/i915_guc_submission.c   |  50 ++-
 drivers/gpu/drm/i915/intel_acpi.c            |   7 +
 drivers/gpu/drm/i915/intel_display.c         |   5 +-
 drivers/gpu/drm/i915/intel_fbdev.c           |   8 +-
 drivers/gpu/drm/i915/intel_overlay.c         |   3 +-
 drivers/gpu/drm/i915/intel_pm.c              |  13 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c      |  10 +-
 include/drm/intel-gtt.h                      |   3 +
 include/uapi/drm/i915_drm.h                  |  58 +++
 19 files changed, 1262 insertions(+), 148 deletions(-)

-- 
1.9.1


* [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-06-02 22:28   ` Chris Wilson
  2016-05-31  6:19 ` [PATCH 02/11] drm/i915: Introduce i915_gem_object_get_dma_address() ankitprasad.r.sharma
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Chris Wilson <chris@chris-wilson.co.uk>

Introduced a new vm-specific callback insert_page() to program a single
PTE in the GGTT or ppGTT. This allows us to map a single page into the
mappable aperture space, and can be iterated over to access the whole
object using as little aperture space as a single page.
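
As a hedged illustration of that iteration (the concrete users are the
pwrite and clear patches later in this series, which also add the
barriers, pinning and error handling omitted here; 'node' stands for a
single page reserved in the mappable aperture, and
i915_gem_object_get_dma_address() is introduced by the next patch):

    /* Sketch: touch every page of an object through one GGTT page by
     * reprogramming the same PTE once per iteration.
     */
    for (i = 0; i < obj->base.size >> PAGE_SHIFT; i++) {
        ggtt->base.insert_page(&ggtt->base,
                               i915_gem_object_get_dma_address(obj, i),
                               node.start, I915_CACHE_NONE, 0);
        wmb(); /* make the PTE update visible before accessing it */
        /* ... access the page through the aperture mapping ... */
    }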

v2: Added low level rpm assertions to insert_page routines (Chris)

v3: Added POSTING_READ post register write (Tvrtko)

v4: Rebase (Ankit)

v5: Removed wmb() and the FLSH_CNTL write from insert_page; caller to
take care of it (Chris)

v6: insert_page was not working correctly without the FLSH_CNTL write,
so added the write back.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/char/agp/intel-gtt.c        |  8 +++++
 drivers/gpu/drm/i915/i915_gem_gtt.c | 66 ++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_gtt.h |  5 +++
 include/drm/intel-gtt.h             |  3 ++
 4 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
index aef87fd..4431129 100644
--- a/drivers/char/agp/intel-gtt.c
+++ b/drivers/char/agp/intel-gtt.c
@@ -840,6 +840,14 @@ static bool i830_check_flags(unsigned int flags)
 	return false;
 }
 
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags)
+{
+	intel_private.driver->write_entry(addr, pg, flags);
+}
+EXPORT_SYMBOL(intel_gtt_insert_page);
+
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4668477..7a139a6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2355,6 +2355,28 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
 #endif
 }
 
+static void gen8_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen8_pte_t __iomem *pte =
+		(gen8_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(offset >> PAGE_SHIFT);
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	gen8_set_pte(pte, gen8_pte_encode(addr, level, true));
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+	POSTING_READ(GFX_FLSH_CNTL_GEN6);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *st,
 				     uint64_t start,
@@ -2424,6 +2446,28 @@ static void gen8_ggtt_insert_entries__BKL(struct i915_address_space *vm,
 	stop_machine(gen8_ggtt_insert_entries__cb, &arg, NULL);
 }
 
+static void gen6_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 flags)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen6_pte_t __iomem *pte =
+		(gen6_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(offset >> PAGE_SHIFT);
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	iowrite32(vm->pte_encode(addr, level, true, flags), pte);
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+	POSTING_READ(GFX_FLSH_CNTL_GEN6);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 /*
  * Binds an object into the global gtt with the specified cache level. The object
  * will be accessible to the GPU via commands whose operands reference offsets
@@ -2543,6 +2587,24 @@ static void gen6_ggtt_clear_range(struct i915_address_space *vm,
 	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
 }
 
+static void i915_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level cache_level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
+		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	intel_gtt_insert_page(addr, offset >> PAGE_SHIFT, flags);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 static void i915_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *pages,
 				     uint64_t start,
@@ -3076,7 +3138,7 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
-
+	ggtt->base.insert_page = gen8_ggtt_insert_page;
 	ggtt->base.clear_range = nop_clear_range;
 	if (!USES_FULL_PPGTT(dev_priv))
 		ggtt->base.clear_range = gen8_ggtt_clear_range;
@@ -3116,6 +3178,7 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	ret = ggtt_probe_common(dev, ggtt->size);
 
 	ggtt->base.clear_range = gen6_ggtt_clear_range;
+	ggtt->base.insert_page = gen6_ggtt_insert_page;
 	ggtt->base.insert_entries = gen6_ggtt_insert_entries;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
@@ -3147,6 +3210,7 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
 		      &ggtt->mappable_base, &ggtt->mappable_end);
 
 	ggtt->do_idle_maps = needs_idle_maps(dev_priv->dev);
+	ggtt->base.insert_page = i915_ggtt_insert_page;
 	ggtt->base.insert_entries = i915_ggtt_insert_entries;
 	ggtt->base.clear_range = i915_ggtt_clear_range;
 	ggtt->base.bind_vma = ggtt_bind_vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 62be77c..163b564 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -319,6 +319,11 @@ struct i915_address_space {
 			    uint64_t start,
 			    uint64_t length,
 			    bool use_scratch);
+	void (*insert_page)(struct i915_address_space *vm,
+			    dma_addr_t addr,
+			    uint64_t offset,
+			    enum i915_cache_level cache_level,
+			    u32 flags);
 	void (*insert_entries)(struct i915_address_space *vm,
 			       struct sg_table *st,
 			       uint64_t start,
diff --git a/include/drm/intel-gtt.h b/include/drm/intel-gtt.h
index 9e9bddaa5..f49edec 100644
--- a/include/drm/intel-gtt.h
+++ b/include/drm/intel-gtt.h
@@ -13,6 +13,9 @@ void intel_gmch_remove(void);
 bool intel_enable_gtt(void);
 
 void intel_gtt_chipset_flush(void);
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags);
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags);
-- 
1.9.1


* [PATCH 02/11] drm/i915: Introduce i915_gem_object_get_dma_address()
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 03/11] drm/i915: Use insert_page for pwrite_fast ankitprasad.r.sharma
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Chris Wilson <chris@chris-wilson.co.uk>

This utility function is a companion to i915_gem_object_get_page() that
uses the same cached iterator for the scatterlist to perform fast
sequential lookup of the dma address associated with any page within the
object.
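
A short usage sketch (hedged; the real callers are the pwrite and clear
patches that follow) showing why the cached iterator matters:

    /* Sequential walk: each call resumes from the cached scatterlist
     * position, so the loop costs one pass over the sg entries rather
     * than one pass per lookup.
     */
    for (i = 0; i < obj->base.size >> PAGE_SHIFT; i++)
        addr = i915_gem_object_get_dma_address(obj, i);

    /* Looking up an index below the cached position is still correct:
     * the iterator is simply rewound to the first sg entry.
     */
    addr = i915_gem_object_get_dma_address(obj, 0);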

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e4c8e34..2a09ccf 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3107,6 +3107,23 @@ static inline int __sg_page_count(struct scatterlist *sg)
 struct page *
 i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, int n);
 
+static inline dma_addr_t
+i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, int n)
+{
+	if (n < obj->get_page.last) {
+		obj->get_page.sg = obj->pages->sgl;
+		obj->get_page.last = 0;
+	}
+
+	while (obj->get_page.last + __sg_page_count(obj->get_page.sg) <= n) {
+		obj->get_page.last += __sg_page_count(obj->get_page.sg++);
+		if (unlikely(sg_is_chain(obj->get_page.sg)))
+			obj->get_page.sg = sg_chain_ptr(obj->get_page.sg);
+	}
+
+	return sg_dma_address(obj->get_page.sg) + ((n - obj->get_page.last) << PAGE_SHIFT);
+}
+
 static inline struct page *
 i915_gem_object_get_page(struct drm_i915_gem_object *obj, int n)
 {
-- 
1.9.1


* [PATCH 03/11] drm/i915: Use insert_page for pwrite_fast
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 02/11] drm/i915: Introduce i915_gem_object_get_dma_address() ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 04/11] drm/i915: Clearing buffer objects via CPU/GTT ankitprasad.r.sharma
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

In pwrite_fast, map the object page by page if obj_ggtt_pin fails. First
we try a nonblocking pin for the whole object (since that is fastest if
reused), then failing that we try to grab one page in the mappable
aperture. This also allows us to handle objects larger than the mappable
aperture (e.g. if we need to pwrite with a vGPU restricting the aperture
to a measly 8MiB or something like that).
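
In condensed form (a sketch only; names as in the diff below, with the
pinning, barriers and error handling elided), the function picks one of
two modes:

    /* Fast path: pin the whole object in the mappable aperture, so
     * node.start is simply the object's GGTT offset. Fallback: reserve
     * a single scratch page and repoint its PTE at each object page in
     * turn, bounding the aperture footprint to PAGE_SIZE regardless of
     * object size.
     */
    ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE | PIN_NONBLOCK);
    if (ret)
        ret = insert_mappable_node(i915, &node, PAGE_SIZE);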

v2: Pin pages before starting pwrite, Combined duplicate loops (Chris)

v3: Combined loops based on local patch by Chris (Chris)

v4: Added i915 wrapper function for drm_mm_insert_node_in_range (Chris)

v5: Renamed wrapper function for drm_mm_insert_node_in_range, added
wrapper for drm_mm_remove_node() (Chris)

v6: Added get_pages call before pinning the pages (Tvrtko)
Added remove_mappable_node() wrapper for drm_mm_remove_node() (Chris)

v7: Added size argument for insert_mappable_node (Tvrtko)

v8: Do not put_pages after pwrite, do memset of node in the wrapper
function (insert_mappable_node) (Chris)

v9: Rebase (Ankit)

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 90 +++++++++++++++++++++++++++++++----------
 1 file changed, 68 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 89967f9..74346cf 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -60,6 +60,24 @@ static bool cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
 	return obj->pin_display;
 }
 
+static int
+insert_mappable_node(struct drm_i915_private *i915,
+                     struct drm_mm_node *node, u32 size)
+{
+	memset(node, 0, sizeof(*node));
+	return drm_mm_insert_node_in_range_generic(&i915->ggtt.base.mm, node,
+						   size, 0, 0, 0,
+						   i915->ggtt.mappable_end,
+						   DRM_MM_SEARCH_DEFAULT,
+						   DRM_MM_CREATE_DEFAULT);
+}
+
+static void
+remove_mappable_node(struct drm_mm_node *node)
+{
+	drm_mm_remove_node(node);
+}
+
 /* some bookkeeping */
 static void i915_gem_info_add_obj(struct drm_i915_private *dev_priv,
 				  size_t size)
@@ -755,21 +773,34 @@ fast_user_write(struct io_mapping *mapping,
  * user into the GTT, uncached.
  */
 static int
-i915_gem_gtt_pwrite_fast(struct drm_device *dev,
+i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
 			 struct drm_i915_gem_object *obj,
 			 struct drm_i915_gem_pwrite *args,
 			 struct drm_file *file)
 {
-	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	ssize_t remain;
-	loff_t offset, page_base;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_mm_node node;
+	uint64_t remain, offset;
 	char __user *user_data;
-	int page_offset, page_length, ret;
+	int ret;
 
 	ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE | PIN_NONBLOCK);
-	if (ret)
-		goto out;
+	if (ret) {
+		ret = insert_mappable_node(i915, &node, PAGE_SIZE);
+		if (ret)
+			goto out;
+
+		ret = i915_gem_object_get_pages(obj);
+		if (ret) {
+			remove_mappable_node(&node);
+			goto out;
+		}
+
+		i915_gem_object_pin_pages(obj);
+	} else {
+		node.start = i915_gem_obj_ggtt_offset(obj);
+		node.allocated = false;
+	}
 
 	ret = i915_gem_object_set_to_gtt_domain(obj, true);
 	if (ret)
@@ -779,26 +810,32 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
 	if (ret)
 		goto out_unpin;
 
-	user_data = to_user_ptr(args->data_ptr);
-	remain = args->size;
-
-	offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
-
 	intel_fb_obj_invalidate(obj, ORIGIN_GTT);
+	obj->dirty = true;
 
-	while (remain > 0) {
+	user_data = to_user_ptr(args->data_ptr);
+	offset = args->offset;
+	remain = args->size;
+	while (remain) {
 		/* Operation in this page
 		 *
 		 * page_base = page offset within aperture
 		 * page_offset = offset within page
 		 * page_length = bytes to copy for this page
 		 */
-		page_base = offset & PAGE_MASK;
-		page_offset = offset_in_page(offset);
-		page_length = remain;
-		if ((page_offset + remain) > PAGE_SIZE)
-			page_length = PAGE_SIZE - page_offset;
-
+		u32 page_base = node.start;
+		unsigned page_offset = offset_in_page(offset);
+		unsigned page_length = PAGE_SIZE - page_offset;
+		page_length = remain < page_length ? remain : page_length;
+		if (node.allocated) {
+			wmb(); /* flush the write before we modify the GGTT */
+			ggtt->base.insert_page(&ggtt->base,
+					       i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
+					       node.start, I915_CACHE_NONE, 0);
+			wmb(); /* flush modifications to the GGTT (insert_page) */
+		} else {
+			page_base += offset & PAGE_MASK;
+		}
 		/* If we get a fault while copying data, then (presumably) our
 		 * source page isn't available.  Return the error and we'll
 		 * retry in the slow path.
@@ -817,7 +854,16 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
 out_flush:
 	intel_fb_obj_flush(obj, false, ORIGIN_GTT);
 out_unpin:
-	i915_gem_object_ggtt_unpin(obj);
+	if (node.allocated) {
+		wmb();
+		ggtt->base.clear_range(&ggtt->base,
+				       node.start, node.size,
+				       true);
+		i915_gem_object_unpin_pages(obj);
+		remove_mappable_node(&node);
+	} else {
+		i915_gem_object_ggtt_unpin(obj);
+	}
 out:
 	return ret;
 }
@@ -1082,7 +1128,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	if (obj->tiling_mode == I915_TILING_NONE &&
 	    obj->base.write_domain != I915_GEM_DOMAIN_CPU &&
 	    cpu_write_needs_clflush(obj)) {
-		ret = i915_gem_gtt_pwrite_fast(dev, obj, args, file);
+		ret = i915_gem_gtt_pwrite_fast(dev_priv, obj, args, file);
 		/* Note that the gtt paths might fail with non-page-backed user
 		 * pointers (e.g. gtt mappings when moving data between
 		 * textures). Fallback to the shmem path in that case. */
-- 
1.9.1


* [PATCH 04/11] drm/i915: Clearing buffer objects via CPU/GTT
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (2 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 03/11] drm/i915: Use insert_page for pwrite_fast ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 05/11] drm/i915: Support for creating Stolen memory backed objects ankitprasad.r.sharma
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

This patch adds support for clearing buffer objects via the CPU/GTT.
This is particularly useful for clearing out non-shmem backed objects.
Currently we intend to use this only for buffers allocated from the
stolen region.
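
For context, a hedged sketch of the intended call site (patch 05 wires
this up almost verbatim when allocating a stolen object for userspace;
the error-pointer convention shown here is only introduced by patch 06):

    obj = i915_gem_object_create_stolen(dev, size);
    if (IS_ERR(obj))
        return obj;

    /* Stolen memory is not scrubbed for us, so always clear fresh
     * buffers before handing them to userspace.
     */
    ret = i915_gem_object_clear(obj);
    if (ret) {
        drm_gem_object_unreference(&obj->base);
        return ERR_PTR(ret);
    }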

v2: Added kernel doc for i915_gem_clear_object(), corrected/removed
variable assignments (Tvrtko)

v3: Map the object page by page to the GTT if pinning of the whole object
to the GGTT fails, corrected function name (Chris)

v4: Clear the buffer page by page, rather than mapping the whole object
into the GTT aperture. Use the i915 wrapper function in place of
drm_mm_insert_node_in_range.

v5: Use renamed wrapper function for drm_mm_insert_node_in_range,
updated barrier positioning (Chris)

v6: Use PAGE_SIZE instead of 4096, use get_pages call before pinning pages
(Tvrtko)

v7: Fixed the onion (undo operation in reverse order) (Chris)

v8: Rebase (Ankit)

Testcase: igt/gem_stolen

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h |  1 +
 drivers/gpu/drm/i915/i915_gem.c | 45 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2a09ccf..73643ec 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3098,6 +3098,7 @@ int i915_gem_obj_prepare_shmem_read(struct drm_i915_gem_object *obj,
 				    int *needs_clflush);
 
 int __must_check i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
+int i915_gem_object_clear(struct drm_i915_gem_object *obj);
 
 static inline int __sg_page_count(struct scatterlist *sg)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 74346cf..982f632 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5373,3 +5373,48 @@ fail:
 	drm_gem_object_unreference(&obj->base);
 	return ERR_PTR(ret);
 }
+
+/**
+ * i915_gem_object_clear() - Clear buffer object via CPU/GTT
+ * @obj: Buffer object to be cleared
+ *
+ * Return: 0 - success, non-zero - failure
+ */
+int i915_gem_object_clear(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_mm_node node;
+	char __iomem *base;
+	uint64_t size = obj->base.size;
+	int ret, i;
+
+	lockdep_assert_held(&obj->base.dev->struct_mutex);
+	ret = insert_mappable_node(i915, &node, PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	ret = i915_gem_object_get_pages(obj);
+	if (ret)
+		goto err_remove_node;
+
+	i915_gem_object_pin_pages(obj);
+	base = io_mapping_map_wc(ggtt->mappable, node.start, PAGE_SIZE);
+
+	for (i = 0; i < size/PAGE_SIZE; i++) {
+		ggtt->base.insert_page(&ggtt->base,
+				       i915_gem_object_get_dma_address(obj, i),
+				       node.start, I915_CACHE_NONE, 0);
+		wmb(); /* flush modifications to the GGTT (insert_page) */
+		memset_io(base, 0, PAGE_SIZE);
+		wmb(); /* flush the write before we modify the GGTT */
+	}
+
+	io_mapping_unmap(base);
+	ggtt->base.clear_range(&ggtt->base, node.start, node.size, true);
+	i915_gem_object_unpin_pages(obj);
+
+err_remove_node:
+	remove_mappable_node(&node);
+	return ret;
+}
-- 
1.9.1


* [PATCH 05/11] drm/i915: Support for creating Stolen memory backed objects
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (3 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 04/11] drm/i915: Clearing buffer objects via CPU/GTT ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 06/11] drm/i915: Propagating correct error codes to the userspace ankitprasad.r.sharma
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

Extend the drm_i915_gem_create structure to support creating Stolen
memory backed objects. A new flag is added through which the user can
request that the object be allocated from stolen memory; when set, an
attempt is made to allocate the object from stolen memory, subject to
the availability of free space in the stolen region.

v2: Rebased to the latest drm-intel-nightly (Ankit)

v3: Changed versioning of GEM_CREATE param, added new comments (Tvrtko)

v4: Changed size from 32b to 64b to prevent userspace overflow (Tvrtko)
Corrected function arguments ordering (Chris)

v5: Corrected function name (Chris)

v6: Updated datatype for flags to keep sizeof(drm_i915_gem_create) u64
aligned (Chris)

v7: Use first 8 bits of gem_create flags for placement (Chris), add helper
function for object allocation from stolen region (Ankit)

v8: Added comment explaining STOLEN placement flag (Chris)

Testcase: igt/gem_stolen

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_dma.c        |  3 +++
 drivers/gpu/drm/i915/i915_drv.h        |  2 +-
 drivers/gpu/drm/i915/i915_gem.c        | 49 ++++++++++++++++++++++++++++++----
 drivers/gpu/drm/i915/i915_gem_stolen.c |  4 +--
 include/uapi/drm/i915_drm.h            | 41 ++++++++++++++++++++++++++++
 5 files changed, 91 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 07edaed..83ae436 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -231,6 +231,9 @@ static int i915_getparam(struct drm_device *dev, void *data,
 	case I915_PARAM_HAS_EXEC_SOFTPIN:
 		value = 1;
 		break;
+	case I915_PARAM_CREATE_VERSION:
+		value = 2;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 73643ec..0e8fe13 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3521,7 +3521,7 @@ void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv,
 int i915_gem_init_stolen(struct drm_device *dev);
 void i915_gem_cleanup_stolen(struct drm_device *dev);
 struct drm_i915_gem_object *
-i915_gem_object_create_stolen(struct drm_device *dev, u32 size);
+i915_gem_object_create_stolen(struct drm_device *dev, u64 size);
 struct drm_i915_gem_object *
 i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 					       u32 stolen_offset,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 982f632..0224969 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -384,10 +384,36 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
 	kmem_cache_free(dev_priv->objects, obj);
 }
 
+static struct drm_i915_gem_object *
+i915_gem_alloc_object_stolen(struct drm_device *dev, size_t size)
+{
+	struct drm_i915_gem_object *obj;
+	int ret;
+
+	mutex_lock(&dev->struct_mutex);
+	obj = i915_gem_object_create_stolen(dev, size);
+	if (!obj) {
+		mutex_unlock(&dev->struct_mutex);
+		return NULL;
+	}
+
+	/* Always clear fresh buffers before handing to userspace */
+	ret = i915_gem_object_clear(obj);
+	if (ret) {
+		drm_gem_object_unreference(&obj->base);
+		mutex_unlock(&dev->struct_mutex);
+		return NULL;
+	}
+
+	mutex_unlock(&dev->struct_mutex);
+	return obj;
+}
+
 static int
 i915_gem_create(struct drm_file *file,
 		struct drm_device *dev,
 		uint64_t size,
+		uint64_t flags,
 		uint32_t *handle_p)
 {
 	struct drm_i915_gem_object *obj;
@@ -398,10 +424,23 @@ i915_gem_create(struct drm_file *file,
 	if (size == 0)
 		return -EINVAL;
 
+	if (flags & __I915_CREATE_UNKNOWN_FLAGS)
+		return -EINVAL;
+
 	/* Allocate the new object */
-	obj = i915_gem_object_create(dev, size);
-	if (IS_ERR(obj))
-		return PTR_ERR(obj);
+	switch (flags & I915_CREATE_PLACEMENT_MASK) {
+	case I915_CREATE_PLACEMENT_NORMAL:
+		obj = i915_gem_object_create(dev, size);
+		break;
+	case I915_CREATE_PLACEMENT_STOLEN:
+		obj = i915_gem_alloc_object_stolen(dev, size);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (IS_ERR_OR_NULL(obj))
+		return -ENOMEM;
 
 	ret = drm_gem_handle_create(file, &obj->base, &handle);
 	/* drop reference from allocate - handle holds it now */
@@ -422,7 +461,7 @@ i915_gem_dumb_create(struct drm_file *file,
 	args->pitch = ALIGN(args->width * DIV_ROUND_UP(args->bpp, 8), 64);
 	args->size = args->pitch * args->height;
 	return i915_gem_create(file, dev,
-			       args->size, &args->handle);
+			       args->size, 0, &args->handle);
 }
 
 /**
@@ -435,7 +474,7 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_gem_create *args = data;
 
 	return i915_gem_create(file, dev,
-			       args->size, &args->handle);
+			       args->size, args->flags, &args->handle);
 }
 
 static inline int
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index f9253f2..81d5b6b 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -597,7 +597,7 @@ cleanup:
 }
 
 struct drm_i915_gem_object *
-i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
+i915_gem_object_create_stolen(struct drm_device *dev, u64 size)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
@@ -607,7 +607,7 @@ i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
 		return NULL;
 
-	DRM_DEBUG_KMS("creating stolen object: size=%x\n", size);
+	DRM_DEBUG_KMS("creating stolen object: size=%llx\n", size);
 	if (size == 0)
 		return NULL;
 
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index c17d63d..ceff4c1 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -361,6 +361,7 @@ typedef struct drm_i915_irq_wait {
 #define I915_PARAM_HAS_GPU_RESET	 35
 #define I915_PARAM_HAS_RESOURCE_STREAMER 36
 #define I915_PARAM_HAS_EXEC_SOFTPIN	 37
+#define I915_PARAM_CREATE_VERSION	 38
 
 typedef struct drm_i915_getparam {
 	__s32 param;
@@ -460,6 +461,46 @@ struct drm_i915_gem_create {
 	 */
 	__u32 handle;
 	__u32 pad;
+	/**
+	 * Requested flags (currently used for placement
+	 * (which memory domain))
+	 *
+	 * You can request that the object be created from special memory
+	 * rather than regular system pages using this parameter. Such
+	 * irregular objects may have certain restrictions (such as CPU
+	 * access to a stolen object is verboten).
+	 *
+	 * This can be used in the future for other purposes too
+	 * e.g. specifying tiling/caching/madvise
+	 */
+	__u64 flags;
+#define I915_CREATE_PLACEMENT_NORMAL 	0 /* standard swappable bo  */
+/* Allocate the object from memory reserved for the igfx (stolen).
+ *
+ * Objects allocated from stolen are restricted in the API they can use,
+ * as direct CPU access to stolen memory is prohibited by the system.
+ * This means that you cannot use a regular CPU mmap (either using WB
+ * or with the WC extension). You can still use a GTT mmap, pwrite,
+ * pread and pass it around for use by execbuffer and between processes
+ * like normal.
+ *
+ * Stolen memory is a very limited resource and certain functions of the
+ * hardware can only work from within stolen memory. Userspace's
+ * allocations may be evicted from stolen and moved to normal memory as
+ * required. If the allocation is marked as purgeable (using madvise),
+ * the allocation will be dropped and further access to the object's
+ * backing storage will result in -EFAULT. Stolen objects will also be
+ * migrated to normal memory across suspend and resume, as the stolen
+ * memory is not preserved.
+ *
+ * Stolen memory is regarded as a resource placement hint, most suitable
+ * for medium-sized buffers that are only accessed by the GPU and can be
+ * discarded.
+ */
+#define I915_CREATE_PLACEMENT_STOLEN 	1 /* Cannot use CPU mmaps */
+
+#define I915_CREATE_PLACEMENT_MASK	0xff
+#define __I915_CREATE_UNKNOWN_FLAGS	~I915_CREATE_PLACEMENT_MASK
 };
 
 struct drm_i915_gem_pread {
-- 
1.9.1


* [PATCH 06/11] drm/i915: Propagating correct error codes to the userspace
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (4 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 05/11] drm/i915: Support for creating Stolen memory backed objects ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 07/11] drm/i915: Add support for stealing purgable stolen pages ankitprasad.r.sharma
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

Propagate correct error codes to userspace by using the ERR_PTR and
PTR_ERR macros for stolen memory based object allocation. We generally
return -ENOMEM to the user whenever there is a failure in object
allocation. This patch helps the user identify the actual reason for
the failure, rather than just seeing -ENOMEM each time.
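
For reference, a minimal sketch of the idiom applied throughout this
patch (alloc_example() is a hypothetical stand-in for the allocation
paths being converted):

    /* Allocator: encode the precise errno in the returned pointer
     * instead of collapsing every failure to NULL.
     */
    static struct drm_i915_gem_object *alloc_example(struct drm_device *dev)
    {
        struct drm_i915_gem_object *obj;

        if (!drm_mm_initialized(&to_i915(dev)->mm.stolen))
            return ERR_PTR(-ENODEV);

        obj = i915_gem_object_alloc(dev);
        if (!obj)
            return ERR_PTR(-ENOMEM);

        return obj;
    }

    /* Caller: recover the real errno instead of assuming -ENOMEM. */
    obj = alloc_example(dev);
    if (IS_ERR(obj))
        return PTR_ERR(obj); /* e.g. -ENODEV, -ENOSPC or -ENOMEM */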

v2: Moved the patch up in the series, added error propagation for
i915_gem_alloc_object too (Chris)

v3: Removed storing of error pointers inside structs, corrected error
propagation in caller functions (Chris)

v4: Remove assignments inside the predicate (Chris)

v5: Removed unnecessary initializations, updated kerneldoc for
i915_guc_client, corrected missed error pointer handling (Tvrtko)

v6: Use ERR_CAST/temporary variable to avoid storing invalid pointer
in a common field (Chris)

v7: Resolved rebasing conflicts (Ankit)

v8: Removed redundant code (Chris)

v9: Rebase

v10: Rebase, resolve merge conflicts

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c              | 15 ++++----
 drivers/gpu/drm/i915/i915_gem_render_state.c |  7 ++--
 drivers/gpu/drm/i915/i915_gem_stolen.c       | 53 +++++++++++++++-------------
 drivers/gpu/drm/i915/i915_guc_submission.c   | 50 ++++++++++++++++----------
 drivers/gpu/drm/i915/intel_display.c         |  2 +-
 drivers/gpu/drm/i915/intel_fbdev.c           |  2 +-
 drivers/gpu/drm/i915/intel_overlay.c         |  3 +-
 drivers/gpu/drm/i915/intel_pm.c              |  7 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c      |  4 +--
 9 files changed, 83 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0224969..4d7a135 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -392,19 +392,18 @@ i915_gem_alloc_object_stolen(struct drm_device *dev, size_t size)
 
 	mutex_lock(&dev->struct_mutex);
 	obj = i915_gem_object_create_stolen(dev, size);
-	if (!obj) {
-		mutex_unlock(&dev->struct_mutex);
-		return NULL;
-	}
+	if (IS_ERR(obj))
+		goto out;
 
 	/* Always clear fresh buffers before handing to userspace */
 	ret = i915_gem_object_clear(obj);
 	if (ret) {
 		drm_gem_object_unreference(&obj->base);
-		mutex_unlock(&dev->struct_mutex);
-		return NULL;
+		obj = ERR_PTR(ret);
+		goto out;
 	}
 
+out:
 	mutex_unlock(&dev->struct_mutex);
 	return obj;
 }
@@ -439,8 +438,8 @@ i915_gem_create(struct drm_file *file,
 		return -EINVAL;
 	}
 
-	if (IS_ERR_OR_NULL(obj))
-		return -ENOMEM;
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
 
 	ret = drm_gem_handle_create(file, &obj->base, &handle);
 	/* drop reference from allocate - handle holds it now */
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 7c93327..84d91c9 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -59,8 +59,11 @@ static int render_state_init(struct render_state *so,
 		return -EINVAL;
 
 	so->obj = i915_gem_object_create(dev_priv->dev, 4096);
-	if (IS_ERR(so->obj))
-		return PTR_ERR(so->obj);
+	if (IS_ERR(so->obj)) {
+		ret = PTR_ERR(so->obj);
+		so->obj = NULL;
+		return ret;
+	}
 
 	ret = i915_gem_obj_ggtt_pin(so->obj, 4096, 0);
 	if (ret)
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 81d5b6b..dcb70c1 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -503,6 +503,7 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
 	struct sg_table *st;
 	struct scatterlist *sg;
+	int ret;
 
 	DRM_DEBUG_DRIVER("offset=0x%x, size=%d\n", offset, size);
 	BUG_ON(offset > ggtt->stolen_size - size);
@@ -514,11 +515,12 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (st == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
-	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
+	ret = sg_alloc_table(st, 1, GFP_KERNEL);
+	if (ret) {
 		kfree(st);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	sg = st->sgl;
@@ -567,18 +569,23 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
 			       struct drm_mm_node *stolen)
 {
 	struct drm_i915_gem_object *obj;
+	struct sg_table *pages;
 
 	obj = i915_gem_object_alloc(dev);
 	if (obj == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(dev, &obj->base, stolen->size);
 	i915_gem_object_init(obj, &i915_gem_object_stolen_ops);
 
-	obj->pages = i915_pages_create_for_stolen(dev,
-						  stolen->start, stolen->size);
-	if (obj->pages == NULL)
-		goto cleanup;
+	pages = i915_pages_create_for_stolen(dev,
+					     stolen->start, stolen->size);
+	if (IS_ERR(pages)) {
+		i915_gem_object_free(obj);
+		return ERR_CAST(pages);
+	}
+
+	obj->pages = pages;
 
 	obj->get_page.sg = obj->pages->sgl;
 	obj->get_page.last = 0;
@@ -590,10 +597,6 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
 	obj->cache_level = HAS_LLC(dev) ? I915_CACHE_LLC : I915_CACHE_NONE;
 
 	return obj;
-
-cleanup:
-	i915_gem_object_free(obj);
-	return NULL;
 }
 
 struct drm_i915_gem_object *
@@ -605,29 +608,29 @@ i915_gem_object_create_stolen(struct drm_device *dev, u64 size)
 	int ret;
 
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
+		return ERR_PTR(-ENODEV);
 
 	DRM_DEBUG_KMS("creating stolen object: size=%llx\n", size);
 	if (size == 0)
-		return NULL;
+		return ERR_PTR(-EINVAL);
 
 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
 	if (!stolen)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	ret = i915_gem_stolen_insert_node(dev_priv, stolen, size, 4096);
 	if (ret) {
 		kfree(stolen);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	obj = _i915_gem_object_create_stolen(dev, stolen);
-	if (obj)
+	if (!IS_ERR(obj))
 		return obj;
 
 	i915_gem_stolen_remove_node(dev_priv, stolen);
 	kfree(stolen);
-	return NULL;
+	return obj;
 }
 
 struct drm_i915_gem_object *
@@ -644,7 +647,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	int ret;
 
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
+		return ERR_PTR(-ENODEV);
 
 	lockdep_assert_held(&dev->struct_mutex);
 
@@ -654,11 +657,11 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	/* KISS and expect everything to be page-aligned */
 	if (WARN_ON(size == 0) || WARN_ON(size & 4095) ||
 	    WARN_ON(stolen_offset & 4095))
-		return NULL;
+		return ERR_PTR(-EINVAL);
 
 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
 	if (!stolen)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	stolen->start = stolen_offset;
 	stolen->size = size;
@@ -668,15 +671,15 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (ret) {
 		DRM_DEBUG_KMS("failed to allocate stolen space\n");
 		kfree(stolen);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	obj = _i915_gem_object_create_stolen(dev, stolen);
-	if (obj == NULL) {
+	if (IS_ERR(obj)) {
 		DRM_DEBUG_KMS("failed to allocate stolen object\n");
 		i915_gem_stolen_remove_node(dev_priv, stolen);
 		kfree(stolen);
-		return NULL;
+		return obj;
 	}
 
 	/* Some objects just need physical mem from stolen space */
@@ -715,5 +718,5 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 
 err:
 	drm_gem_object_unreference(&obj->base);
-	return NULL;
+	return ERR_PTR(ret);
 }
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index ac72451..6ae4f49 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -598,27 +598,30 @@ int i915_guc_submit(struct drm_i915_gem_request *rq)
  * object needs to be pinned lifetime. Also we must pin it to gtt space other
  * than [0, GUC_WOPCM_TOP) because this range is reserved inside GuC.
  *
- * Return:	A drm_i915_gem_object if successful, otherwise NULL.
+ * Return:	A drm_i915_gem_object if successful, otherwise error pointer.
  */
 static struct drm_i915_gem_object *gem_allocate_guc_obj(struct drm_device *dev,
 							u32 size)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
+	int ret;
 
 	obj = i915_gem_object_create(dev, size);
 	if (IS_ERR(obj))
-		return NULL;
+		return obj;
 
-	if (i915_gem_object_get_pages(obj)) {
+	ret = i915_gem_object_get_pages(obj);
+	if (ret) {
 		drm_gem_object_unreference(&obj->base);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
-	if (i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
-			PIN_OFFSET_BIAS | GUC_WOPCM_TOP)) {
+	ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
+				    PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
+	if (ret) {
 		drm_gem_object_unreference(&obj->base);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	/* Invalidate GuC TLB to let GuC take the latest updates to GTT. */
@@ -693,7 +696,7 @@ static void guc_client_free(struct drm_device *dev,
  * @ctx:	the context that owns the client (we use the default render
  * 		context)
  *
- * Return:	An i915_guc_client object if success, else NULL.
+ * Return:	An i915_guc_client object if success, error pointer on failure.
  */
 static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 						uint32_t priority,
@@ -703,10 +706,11 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_guc *guc = &dev_priv->guc;
 	struct drm_i915_gem_object *obj;
+	int ret;
 
 	client = kzalloc(sizeof(*client), GFP_KERNEL);
 	if (!client)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	client->doorbell_id = GUC_INVALID_DOORBELL_ID;
 	client->priority = priority;
@@ -717,13 +721,16 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 			GUC_MAX_GPU_CONTEXTS, GFP_KERNEL);
 	if (client->ctx_index >= GUC_MAX_GPU_CONTEXTS) {
 		client->ctx_index = GUC_INVALID_CTX_ID;
+		ret = -EINVAL;
 		goto err;
 	}
 
 	/* The first page is doorbell/proc_desc. Two followed pages are wq. */
 	obj = gem_allocate_guc_obj(dev, GUC_DB_SIZE + GUC_WQ_SIZE);
-	if (!obj)
+	if (IS_ERR(obj)) {
+		ret = PTR_ERR(obj);
 		goto err;
+	}
 
 	/* We'll keep just the first (doorbell/proc) page permanently kmap'd. */
 	client->client_obj = obj;
@@ -744,9 +751,11 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 		client->proc_desc_offset = (GUC_DB_SIZE / 2);
 
 	client->doorbell_id = assign_doorbell(guc, client->priority);
-	if (client->doorbell_id == GUC_INVALID_DOORBELL_ID)
+	if (client->doorbell_id == GUC_INVALID_DOORBELL_ID) {
 		/* XXX: evict a doorbell instead */
+		ret = -EINVAL;
 		goto err;
+	}
 
 	guc_init_proc_desc(guc, client);
 	guc_init_ctx_desc(guc, client);
@@ -754,7 +763,8 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 
 	/* XXX: Any cache flushes needed? General domain mgmt calls? */
 
-	if (host2guc_allocate_doorbell(guc, client))
+	ret = host2guc_allocate_doorbell(guc, client);
+	if (ret)
 		goto err;
 
 	DRM_DEBUG_DRIVER("new priority %u client %p: ctx_index %u db_id %u\n",
@@ -766,7 +776,7 @@ err:
 	DRM_ERROR("FAILED to create priority %u GuC client!\n", priority);
 
 	guc_client_free(dev, client);
-	return NULL;
+	return ERR_PTR(ret);
 }
 
 static void guc_create_log(struct intel_guc *guc)
@@ -791,7 +801,7 @@ static void guc_create_log(struct intel_guc *guc)
 	obj = guc->log_obj;
 	if (!obj) {
 		obj = gem_allocate_guc_obj(dev_priv->dev, size);
-		if (!obj) {
+		if (IS_ERR(obj)) {
 			/* logging will be off */
 			i915.guc_log_level = -1;
 			return;
@@ -911,6 +921,7 @@ int i915_guc_submission_init(struct drm_device *dev)
 	const size_t poolsize = GUC_MAX_GPU_CONTEXTS * ctxsize;
 	const size_t gemsize = round_up(poolsize, PAGE_SIZE);
 	struct intel_guc *guc = &dev_priv->guc;
+	int ret;
 
 	if (!i915.enable_guc_submission)
 		return 0; /* not enabled  */
@@ -919,8 +930,11 @@ int i915_guc_submission_init(struct drm_device *dev)
 		return 0; /* already allocated */
 
 	guc->ctx_pool_obj = gem_allocate_guc_obj(dev_priv->dev, gemsize);
-	if (!guc->ctx_pool_obj)
-		return -ENOMEM;
+	if (IS_ERR(guc->ctx_pool_obj)) {
+		ret = PTR_ERR(guc->ctx_pool_obj);
+		guc->ctx_pool_obj = NULL;
+		return ret;
+	}
 
 	ida_init(&guc->ctx_ids);
 
@@ -941,9 +955,9 @@ int i915_guc_submission_enable(struct drm_device *dev)
 	client = guc_client_alloc(dev,
 				  GUC_CTX_PRIORITY_KMD_NORMAL,
 				  dev_priv->kernel_context);
-	if (!client) {
+	if (IS_ERR(client)) {
 		DRM_ERROR("Failed to create execbuf guc_client\n");
-		return -ENOMEM;
+		return PTR_ERR(client);
 	}
 
 	guc->execbuf_client = client;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 1c700a2..4bbda9c 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2495,7 +2495,7 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 							     base_aligned,
 							     base_aligned,
 							     size_aligned);
-	if (!obj) {
+	if (IS_ERR(obj)) {
 		mutex_unlock(&dev->struct_mutex);
 		return false;
 	}
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index 99e2753..e094005 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -149,7 +149,7 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
 	 * features. */
 	if (size * 2 < ggtt->stolen_usable_size)
 		obj = i915_gem_object_create_stolen(dev, size);
-	if (obj == NULL)
+	if (IS_ERR_OR_NULL(obj))
 		obj = i915_gem_object_create(dev, size);
 	if (IS_ERR(obj)) {
 		DRM_ERROR("failed to allocate framebuffer\n");
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index eb93f90..3323596 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -1388,10 +1388,11 @@ void intel_setup_overlay(struct drm_i915_private *dev_priv)
 	reg_bo = NULL;
 	if (!OVERLAY_NEEDS_PHYSICAL(dev_priv))
 		reg_bo = i915_gem_object_create_stolen(dev_priv->dev, PAGE_SIZE);
-	if (reg_bo == NULL)
+	if (IS_ERR_OR_NULL(reg_bo))
 		reg_bo = i915_gem_object_create(dev_priv->dev, PAGE_SIZE);
 	if (IS_ERR(reg_bo))
 		goto out_free;
+
 	overlay->reg_bo = reg_bo;
 
 	if (OVERLAY_NEEDS_PHYSICAL(dev_priv)) {
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index b6dfd02..eea2804 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5517,10 +5517,13 @@ static void valleyview_setup_pctx(struct drm_i915_private *dev_priv)
 								      pcbr_offset,
 								      I915_GTT_OFFSET_NONE,
 								      pctx_size);
+		if (IS_ERR(pctx))
+			pctx = NULL;
+
 		goto out;
 	}
 
-	DRM_DEBUG_DRIVER("BIOS didn't set up PCBR, fixing up\n");
+	DRM_DEBUG_DRIVER("BIOS didn't set up PCBR or prealloc failed, fixing up\n");
 
 	/*
 	 * From the Gunit register HAS:
@@ -5531,7 +5534,7 @@ static void valleyview_setup_pctx(struct drm_i915_private *dev_priv)
 	 * memory, or any other relevant ranges.
 	 */
 	pctx = i915_gem_object_create_stolen(dev_priv->dev, pctx_size);
-	if (!pctx) {
+	if (IS_ERR(pctx)) {
 		DRM_DEBUG("not enough stolen space for PCTX, disabling\n");
 		goto out;
 	}
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 8d35a39..586360b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2179,7 +2179,7 @@ static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
 	obj = NULL;
 	if (!HAS_LLC(dev))
 		obj = i915_gem_object_create_stolen(dev, ringbuf->size);
-	if (obj == NULL)
+	if (IS_ERR_OR_NULL(obj))
 		obj = i915_gem_object_create(dev, ringbuf->size);
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
@@ -2740,7 +2740,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	engine->hw_id = 0;
 	engine->mmio_base = RENDER_RING_BASE;
 
-	if (INTEL_GEN(dev_priv) >= 8) {
+	if (INTEL_INFO(dev)->gen >= 8) {
 		if (i915_semaphore_is_enabled(dev_priv)) {
 			obj = i915_gem_object_create(dev, 4096);
 			if (IS_ERR(obj)) {
-- 
1.9.1


* [PATCH 07/11] drm/i915: Add support for stealing purgable stolen pages
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (5 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 06/11] drm/i915: Propagating correct error codes to the userspace ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 08/11] drm/i915: Support for pread/pwrite from/to non shmem backed objects ankitprasad.r.sharma
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Chris Wilson <chris@chris-wilson.co.uk>

If we run out of stolen memory when trying to allocate an object, see if
we can reap enough purgeable objects to free up enough contiguous free
space for the allocation. This is in principle very much like evicting
objects to free up enough contiguous space in the vma when binding
a new object - and you will be forgiven for thinking that the code looks
very similar.

At the moment, we do not allow userspace to allocate objects in stolen,
so there is neither the memory pressure to trigger stolen eviction nor
any purgeable objects inside the stolen arena. However, this will change
in the near future, and so better management and defragmentation of
stolen memory will become a real issue.
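
A rough sketch of the reaping flow (hypothetical; the real implementation
in i915_gem_stolen.c retires requests first, uses the tmp_link list and,
unlike this sketch, must ensure the freed space is contiguous, much like
vma eviction; purge_stolen_object() is a made-up helper name):

    struct i915_stolen_node *stolen, *next;
    u64 freed = 0;

    /* Purgeable stolen objects sit on mm.stolen_list in order of
     * marking; drop their backing store until enough bytes have been
     * reclaimed for the pending allocation of 'size' bytes.
     */
    list_for_each_entry_safe(stolen, next,
                             &dev_priv->mm.stolen_list, mm_link) {
        if (freed >= size)
            break;
        freed += stolen->base.size;
        list_del_init(&stolen->mm_link);
        purge_stolen_object(stolen->obj); /* hypothetical helper */
    }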

v2: Remember to remove the drm_mm_node.

v3: Rebased to the latest drm-intel-nightly (Ankit)

v4: corrected if-else braces format (Tvrtko/kerneldoc)

v5: Rebased to the latest drm-intel-nightly (Ankit)
Added a separate list to maintain purgeable objects from the stolen
memory region (Chris/Daniel)

v6: Compiler optimization (merging 2 single loops into one for() loop),
corrected code for object eviction, retire_requests before starting
object eviction (Chris)

v7: Added kernel doc for i915_gem_object_create_stolen()

v8: Check for struct_mutex lock before creating object from stolen
region (Tvrtko)

v9: Renamed variables to make usage clear, added a comment, removed a
macro that was only used once (Tvrtko)

v10: Avoid masking of error when stolen_alloc fails (Tvrtko)

v11: Renamed stolen_link to tmp_link, as it may be used for other
purposes too (Chris)
Used ERR_CAST to cast error pointers while returning

v12: Added lockdep_assert before starting stolen-backed object
eviction (Chris)

v13: Rebased

Testcase: igt/gem_stolen

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c    |   6 +-
 drivers/gpu/drm/i915/i915_drv.h        |  17 +++-
 drivers/gpu/drm/i915/i915_gem.c        |  15 +++
 drivers/gpu/drm/i915/i915_gem_stolen.c | 171 +++++++++++++++++++++++++++++----
 drivers/gpu/drm/i915/intel_pm.c        |   4 +-
 5 files changed, 188 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index ac7e569..aeae600 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -182,7 +182,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		seq_puts(m, ")");
 	}
 	if (obj->stolen)
-		seq_printf(m, " (stolen: %08llx)", obj->stolen->start);
+		seq_printf(m, " (stolen: %08llx)", obj->stolen->base.start);
 	if (obj->pin_display || obj->fault_mappable) {
 		char s[3], *t = s;
 		if (obj->pin_display)
@@ -254,9 +254,9 @@ static int obj_rank_by_stolen(void *priv,
 	struct drm_i915_gem_object *b =
 		container_of(B, struct drm_i915_gem_object, obj_exec_link);
 
-	if (a->stolen->start < b->stolen->start)
+	if (a->stolen->base.start < b->stolen->base.start)
 		return -1;
-	if (a->stolen->start > b->stolen->start)
+	if (a->stolen->base.start > b->stolen->base.start)
 		return 1;
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0e8fe13..d15c018 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -831,6 +831,12 @@ struct i915_ctx_hang_stats {
 	bool banned;
 };
 
+struct i915_stolen_node {
+	struct drm_mm_node base;
+	struct list_head mm_link;
+	struct drm_i915_gem_object *obj;
+};
+
 /* This must match up with the value previously used for execbuf2.rsvd1. */
 #define DEFAULT_CONTEXT_HANDLE 0
 
@@ -1279,6 +1285,13 @@ struct i915_gem_mm {
 	 */
 	struct list_head unbound_list;
 
+	/**
+	 * List of stolen objects that have been marked as purgeable and
+	 * thus available for reaping if we need more space for a new
+	 * allocation. Ordered by time of marking purgeable.
+	 */
+	struct list_head stolen_list;
+
 	/** Usable portion of the GTT for GEM */
 	unsigned long stolen_base; /* limited to low memory (32-bit) */
 
@@ -2132,7 +2145,7 @@ struct drm_i915_gem_object {
 	struct list_head vma_list;
 
 	/** Stolen memory for this object, instead of being backed by shmem. */
-	struct drm_mm_node *stolen;
+	struct i915_stolen_node *stolen;
 	struct list_head global_list;
 
 	struct list_head engine_list[I915_NUM_ENGINES];
@@ -2140,6 +2153,8 @@ struct drm_i915_gem_object {
 	struct list_head obj_exec_link;
 
 	struct list_head batch_pool_link;
+	/** Used to link an object to a list temporarily */
+	struct list_head tmp_link;
 
 	/**
 	 * This is set if the object is on the active lists (has pending
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4d7a135..980624a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4546,6 +4546,20 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	if (obj->madv == I915_MADV_DONTNEED && obj->pages == NULL)
 		i915_gem_object_truncate(obj);
 
+	if (obj->stolen) {
+		switch (obj->madv) {
+		case I915_MADV_WILLNEED:
+			list_del_init(&obj->stolen->mm_link);
+			break;
+		case I915_MADV_DONTNEED:
+			list_move(&obj->stolen->mm_link,
+				  &dev_priv->mm.stolen_list);
+			break;
+		default:
+			break;
+		}
+	}
+
 	args->retained = obj->madv != __I915_MADV_PURGED;
 
 out:
@@ -5107,6 +5121,7 @@ i915_gem_load_init(struct drm_device *dev)
 	INIT_LIST_HEAD(&dev_priv->context_list);
 	INIT_LIST_HEAD(&dev_priv->mm.unbound_list);
 	INIT_LIST_HEAD(&dev_priv->mm.bound_list);
+	INIT_LIST_HEAD(&dev_priv->mm.stolen_list);
 	INIT_LIST_HEAD(&dev_priv->mm.fence_list);
 	for (i = 0; i < I915_NUM_ENGINES; i++)
 		init_engine_lists(&dev_priv->engine[i]);
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index dcb70c1..1b1b9c1 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -553,7 +553,8 @@ i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)
 	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
 
 	if (obj->stolen) {
-		i915_gem_stolen_remove_node(dev_priv, obj->stolen);
+		list_del(&obj->stolen->mm_link);
+		i915_gem_stolen_remove_node(dev_priv, &obj->stolen->base);
 		kfree(obj->stolen);
 		obj->stolen = NULL;
 	}
@@ -566,7 +567,7 @@ static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = {
 
 static struct drm_i915_gem_object *
 _i915_gem_object_create_stolen(struct drm_device *dev,
-			       struct drm_mm_node *stolen)
+			       struct i915_stolen_node *stolen)
 {
 	struct drm_i915_gem_object *obj;
 	struct sg_table *pages;
@@ -575,11 +576,12 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
 	if (obj == NULL)
 		return ERR_PTR(-ENOMEM);
 
-	drm_gem_private_object_init(dev, &obj->base, stolen->size);
+	drm_gem_private_object_init(dev, &obj->base, stolen->base.size);
 	i915_gem_object_init(obj, &i915_gem_object_stolen_ops);
 
 	pages = i915_pages_create_for_stolen(dev,
-					     stolen->start, stolen->size);
+					     stolen->base.start,
+					     stolen->base.size);
 	if (IS_ERR(pages)) {
 		i915_gem_object_free(obj);
 		return ERR_CAST(pages);
@@ -593,24 +595,112 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
 	i915_gem_object_pin_pages(obj);
 	obj->stolen = stolen;
 
+	stolen->obj = obj;
+	INIT_LIST_HEAD(&stolen->mm_link);
+
 	obj->base.read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
 	obj->cache_level = HAS_LLC(dev) ? I915_CACHE_LLC : I915_CACHE_NONE;
 
 	return obj;
 }
 
-struct drm_i915_gem_object *
-i915_gem_object_create_stolen(struct drm_device *dev, u64 size)
+static bool
+mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind)
+{
+	BUG_ON(obj->stolen == NULL);
+
+	if (obj->madv != I915_MADV_DONTNEED)
+		return false;
+
+	if (obj->pin_display)
+		return false;
+
+	list_add(&obj->tmp_link, unwind);
+	return drm_mm_scan_add_block(&obj->stolen->base);
+}
+
+static int
+stolen_evict(struct drm_i915_private *dev_priv, u64 size)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
-	struct drm_mm_node *stolen;
-	int ret;
+	struct list_head unwind, evict;
+	struct i915_stolen_node *iter;
+	int ret, active;
 
-	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return ERR_PTR(-ENODEV);
+	lockdep_assert_held(&dev_priv->dev->struct_mutex);
+	drm_mm_init_scan(&dev_priv->mm.stolen, size, 0, 0);
+	INIT_LIST_HEAD(&unwind);
+
+	/* Retire all requests before creating the evict list */
+	i915_gem_retire_requests(dev_priv);
+
+	for (active = 0; active <= 1; active++) {
+		list_for_each_entry(iter, &dev_priv->mm.stolen_list, mm_link) {
+			if (iter->obj->active != active)
+				continue;
+
+			if (mark_free(iter->obj, &unwind))
+				goto found;
+		}
+	}
+
+found:
+	INIT_LIST_HEAD(&evict);
+	while (!list_empty(&unwind)) {
+		obj = list_first_entry(&unwind,
+				       struct drm_i915_gem_object,
+				       tmp_link);
+		list_del(&obj->tmp_link);
+
+		if (drm_mm_scan_remove_block(&obj->stolen->base)) {
+			list_add(&obj->tmp_link, &evict);
+			drm_gem_object_reference(&obj->base);
+		}
+	}
+
+	ret = 0;
+	while (!list_empty(&evict)) {
+		obj = list_first_entry(&evict,
+				       struct drm_i915_gem_object,
+				       tmp_link);
+		list_del(&obj->tmp_link);
+
+		if (ret == 0) {
+			struct i915_vma *vma, *vma_next;
+
+			list_for_each_entry_safe(vma, vma_next,
+						 &obj->vma_list,
+						 obj_link)
+				if (i915_vma_unbind(vma))
+					break;
+
+			/* Stolen pins its pages to prevent the
+			 * normal shrinker from processing stolen
+			 * objects.
+			 */
+			i915_gem_object_unpin_pages(obj);
+
+			ret = i915_gem_object_put_pages(obj);
+			if (ret == 0) {
+				i915_gem_object_release_stolen(obj);
+				obj->madv = __I915_MADV_PURGED;
+			} else {
+				i915_gem_object_pin_pages(obj);
+			}
+		}
+
+		drm_gem_object_unreference(&obj->base);
+	}
+
+	return ret;
+}
+
+static struct i915_stolen_node *
+stolen_alloc(struct drm_i915_private *dev_priv, u64 size)
+{
+	struct i915_stolen_node *stolen;
+	int ret;
 
-	DRM_DEBUG_KMS("creating stolen object: size=%llx\n", size);
 	if (size == 0)
 		return ERR_PTR(-EINVAL);
 
@@ -618,17 +708,60 @@ i915_gem_object_create_stolen(struct drm_device *dev, u64 size)
 	if (!stolen)
 		return ERR_PTR(-ENOMEM);
 
-	ret = i915_gem_stolen_insert_node(dev_priv, stolen, size, 4096);
+	ret = i915_gem_stolen_insert_node(dev_priv, &stolen->base, size, 4096);
+	if (ret == 0)
+		goto out;
+
+	/* No more stolen memory available, or too fragmented.
+	 * Try evicting purgeable objects and search again.
+	 */
+	ret = stolen_evict(dev_priv, size);
+	if (ret == 0)
+		ret = i915_gem_stolen_insert_node(dev_priv, &stolen->base,
+						  size, 4096);
+out:
 	if (ret) {
 		kfree(stolen);
 		return ERR_PTR(ret);
 	}
 
+	return stolen;
+}
+
+/**
+ * i915_gem_object_create_stolen() - creates object using the stolen memory
+ * @dev:	drm device
+ * @size:	size of the object requested
+ *
+ * i915_gem_object_create_stolen() tries to allocate memory for the object
+ * from the stolen memory region. If not enough memory is found, it tries
+ * evicting purgeable objects and searching again.
+ *
+ * Returns: Object pointer - success and error pointer - failure
+ */
+struct drm_i915_gem_object *
+i915_gem_object_create_stolen(struct drm_device *dev, u64 size)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_object *obj;
+	struct i915_stolen_node *stolen;
+
+	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+
+	if (!drm_mm_initialized(&dev_priv->mm.stolen))
+		return ERR_PTR(-ENODEV);
+
+	DRM_DEBUG_KMS("creating stolen object: size=%llx\n", size);
+
+	stolen = stolen_alloc(dev_priv, size);
+	if (IS_ERR(stolen))
+		return ERR_CAST(stolen);
+
 	obj = _i915_gem_object_create_stolen(dev, stolen);
 	if (!IS_ERR(obj))
 		return obj;
 
-	i915_gem_stolen_remove_node(dev_priv, stolen);
+	i915_gem_stolen_remove_node(dev_priv, &stolen->base);
 	kfree(stolen);
 	return obj;
 }
@@ -642,7 +775,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
 	struct drm_i915_gem_object *obj;
-	struct drm_mm_node *stolen;
+	struct i915_stolen_node *stolen;
 	struct i915_vma *vma;
 	int ret;
 
@@ -663,10 +796,10 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (!stolen)
 		return ERR_PTR(-ENOMEM);
 
-	stolen->start = stolen_offset;
-	stolen->size = size;
+	stolen->base.start = stolen_offset;
+	stolen->base.size = size;
 	mutex_lock(&dev_priv->mm.stolen_lock);
-	ret = drm_mm_reserve_node(&dev_priv->mm.stolen, stolen);
+	ret = drm_mm_reserve_node(&dev_priv->mm.stolen, &stolen->base);
 	mutex_unlock(&dev_priv->mm.stolen_lock);
 	if (ret) {
 		DRM_DEBUG_KMS("failed to allocate stolen space\n");
@@ -677,7 +810,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	obj = _i915_gem_object_create_stolen(dev, stolen);
 	if (IS_ERR(obj)) {
 		DRM_DEBUG_KMS("failed to allocate stolen object\n");
-		i915_gem_stolen_remove_node(dev_priv, stolen);
+		i915_gem_stolen_remove_node(dev_priv, &stolen->base);
 		kfree(stolen);
 		return obj;
 	}
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index eea2804..12aa36f 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5466,7 +5466,7 @@ static void valleyview_check_pctx(struct drm_i915_private *dev_priv)
 	unsigned long pctx_addr = I915_READ(VLV_PCBR) & ~4095;
 
 	WARN_ON(pctx_addr != dev_priv->mm.stolen_base +
-			     dev_priv->vlv_pctx->stolen->start);
+			     dev_priv->vlv_pctx->stolen->base.start);
 }
 
 
@@ -5539,7 +5539,7 @@ static void valleyview_setup_pctx(struct drm_i915_private *dev_priv)
 		goto out;
 	}
 
-	pctx_paddr = dev_priv->mm.stolen_base + pctx->stolen->start;
+	pctx_paddr = dev_priv->mm.stolen_base + pctx->stolen->base.start;
 	I915_WRITE(VLV_PCBR, pctx_paddr);
 
 out:
-- 
1.9.1


* [PATCH 08/11] drm/i915: Support for pread/pwrite from/to non shmem backed objects
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (6 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 07/11] drm/i915: Add support for stealing purgeable stolen pages ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 09/11] drm/i915: Migrate stolen objects before hibernation ankitprasad.r.sharma
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

This patch adds support for extending the pread/pwrite functionality
to objects not backed by shmem. The access is made through the GTT
interface, which covers objects backed by stolen memory as well as
other non-shmem backed objects.

v2: Drop locks around slow_user_access, prefault the pages before
access (Chris)

v3: Rebased to the latest drm-intel-nightly (Ankit)

v4: Moved page base & offset calculations outside the copy loop,
corrected data types for size and offset variables, corrected if-else
braces format (Tvrtko/kerneldocs)

v5: Enabled pread/pwrite for all non-shmem backed objects including
without tiling restrictions (Ankit)

v6: Using pwrite_fast for non-shmem backed objects as well (Chris)

v7: Updated commit message, Renamed i915_gem_gtt_read to i915_gem_gtt_copy,
added pwrite slow path for non-shmem backed objects (Chris/Tvrtko)

v8: Updated v7 commit message, mutex unlock around pwrite slow path for
non-shmem backed objects (Tvrtko)

v9: Corrected check during pread_ioctl, to avoid shmem_pread being
called for non-shmem backed objects (Tvrtko)

v10: Moved the write_domain check to needs_clflush and tiling mode check
to pwrite_fast (Chris)

v11: Use pwrite_fast fallback for all objects (shmem and non-shmem backed),
call fast_user_write regardless of pagefault in previous iteration

v12: Use page-by-page copy for slow user access too (Chris)

v13: Handled EFAULT, Avoid use of WARN_ON, put_fence only if whole obj
pinned (Chris)

v14: Corrected datatypes/initializations (Tvrtko)

Testcase: igt/gem_stolen, igt/gem_pread, igt/gem_pwrite

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 218 ++++++++++++++++++++++++++++++++++------
 1 file changed, 188 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 980624a..fca3f6d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -54,6 +54,9 @@ static bool cpu_cache_is_coherent(struct drm_device *dev,
 
 static bool cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
 {
+	if (obj->base.write_domain == I915_GEM_DOMAIN_CPU)
+		return false;
+
 	if (!cpu_cache_is_coherent(obj->base.dev, obj->cache_level))
 		return true;
 
@@ -641,6 +644,142 @@ shmem_pread_slow(struct page *page, int shmem_page_offset, int page_length,
 	return ret ? - EFAULT : 0;
 }
 
+static inline unsigned long
+slow_user_access(struct io_mapping *mapping,
+		 uint64_t page_base, int page_offset,
+		 char __user *user_data,
+		 unsigned long length, bool pwrite)
+{
+	void __iomem *ioaddr;
+	void *vaddr;
+	uint64_t unwritten;
+
+	ioaddr = io_mapping_map_wc(mapping, page_base, PAGE_SIZE);
+	/* We can use the cpu mem copy function because this is X86. */
+	vaddr = (void __force *)ioaddr + page_offset;
+	if (pwrite)
+		unwritten = __copy_from_user(vaddr, user_data, length);
+	else
+		unwritten = __copy_to_user(user_data, vaddr, length);
+
+	io_mapping_unmap(ioaddr);
+	return unwritten;
+}
+
+static int
+i915_gem_gtt_pread(struct drm_device *dev,
+		   struct drm_i915_gem_object *obj, uint64_t size,
+		   uint64_t data_offset, uint64_t data_ptr)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct i915_ggtt *ggtt = &dev_priv->ggtt;
+	struct drm_mm_node node;
+	char __user *user_data;
+	uint64_t remain;
+	uint64_t offset;
+	int ret;
+
+	ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE);
+	if (ret) {
+		ret = insert_mappable_node(dev_priv, &node, PAGE_SIZE);
+		if (ret)
+			goto out;
+
+		ret = i915_gem_object_get_pages(obj);
+		if (ret) {
+			remove_mappable_node(&node);
+			goto out;
+		}
+
+		i915_gem_object_pin_pages(obj);
+	} else {
+		node.start = i915_gem_obj_ggtt_offset(obj);
+		node.allocated = false;
+		ret = i915_gem_object_put_fence(obj);
+		if (ret)
+			goto out_unpin;
+	}
+
+	ret = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (ret)
+		goto out_unpin;
+
+	user_data = to_user_ptr(data_ptr);
+	remain = size;
+	offset = data_offset;
+
+	mutex_unlock(&dev->struct_mutex);
+	if (likely(!i915.prefault_disable)) {
+		ret = fault_in_multipages_writeable(user_data, remain);
+		if (ret) {
+			mutex_lock(&dev->struct_mutex);
+			goto out_unpin;
+		}
+	}
+
+	while (remain > 0) {
+		/* Operation in this page
+		 *
+		 * page_base = page offset within aperture
+		 * page_offset = offset within page
+		 * page_length = bytes to copy for this page
+		 */
+		u32 page_base = node.start;
+		unsigned page_offset = offset_in_page(offset);
+		unsigned page_length = PAGE_SIZE - page_offset;
+		page_length = remain < page_length ? remain : page_length;
+		if (node.allocated) {
+			wmb();
+			ggtt->base.insert_page(&ggtt->base,
+					       i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
+					       node.start,
+					       I915_CACHE_NONE, 0);
+			wmb();
+		} else {
+			page_base += offset & PAGE_MASK;
+		}
+		/* This is a slow read/write as it tries to read from
+		 * and write to user memory which may result into page
+		 * faults, and so we cannot perform this under struct_mutex.
+		 */
+		if (slow_user_access(ggtt->mappable, page_base,
+				     page_offset, user_data,
+				     page_length, false)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		remain -= page_length;
+		user_data += page_length;
+		offset += page_length;
+	}
+
+	mutex_lock(&dev->struct_mutex);
+	if (ret == 0 && (obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0) {
+		/* The user has modified the object whilst we tried
+		 * reading from it, and we now have no idea what domain
+		 * the pages should be in. As we have just been touching
+		 * them directly, flush everything back to the GTT
+		 * domain.
+		 */
+		ret = i915_gem_object_set_to_gtt_domain(obj, false);
+	}
+
+out_unpin:
+	if (node.allocated) {
+		wmb();
+		ggtt->base.clear_range(&ggtt->base,
+				       node.start, node.size,
+				       true);
+		i915_gem_object_unpin_pages(obj);
+		remove_mappable_node(&node);
+	} else {
+		i915_gem_object_ggtt_unpin(obj);
+	}
+out:
+	return ret;
+}
+
 static int
 i915_gem_shmem_pread(struct drm_device *dev,
 		     struct drm_i915_gem_object *obj,
@@ -656,6 +795,9 @@ i915_gem_shmem_pread(struct drm_device *dev,
 	int needs_clflush = 0;
 	struct sg_page_iter sg_iter;
 
+	if (!obj->base.filp)
+		return -ENODEV;
+
 	user_data = to_user_ptr(args->data_ptr);
 	remain = args->size;
 
@@ -764,18 +906,15 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 		goto out;
 	}
 
-	/* prime objects have no backing filp to GEM pread/pwrite
-	 * pages from.
-	 */
-	if (!obj->base.filp) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	trace_i915_gem_object_pread(obj, args->offset, args->size);
 
 	ret = i915_gem_shmem_pread(dev, obj, args, file);
 
+	/* pread for non shmem backed objects */
+	if (ret == -EFAULT || ret == -ENODEV)
+		ret = i915_gem_gtt_pread(dev, obj, args->size,
+					args->offset, args->data_ptr);
+
 out:
 	drm_gem_object_unreference(&obj->base);
 unlock:
@@ -817,10 +956,15 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
 			 struct drm_file *file)
 {
 	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_device *dev = obj->base.dev;
 	struct drm_mm_node node;
 	uint64_t remain, offset;
 	char __user *user_data;
 	int ret;
+	bool hit_slow_path = false;
+
+	if (obj->tiling_mode != I915_TILING_NONE)
+		return -EFAULT;
 
 	ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE | PIN_NONBLOCK);
 	if (ret) {
@@ -838,16 +982,15 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
 	} else {
 		node.start = i915_gem_obj_ggtt_offset(obj);
 		node.allocated = false;
+		ret = i915_gem_object_put_fence(obj);
+		if (ret)
+			goto out_unpin;
 	}
 
 	ret = i915_gem_object_set_to_gtt_domain(obj, true);
 	if (ret)
 		goto out_unpin;
 
-	ret = i915_gem_object_put_fence(obj);
-	if (ret)
-		goto out_unpin;
-
 	intel_fb_obj_invalidate(obj, ORIGIN_GTT);
 	obj->dirty = true;
 
@@ -877,11 +1020,23 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
 		/* If we get a fault while copying data, then (presumably) our
 		 * source page isn't available.  Return the error and we'll
 		 * retry in the slow path.
+		 * If the object is non-shmem backed, we retry again with the
+		 * path that handles page fault.
 		 */
 		if (fast_user_write(ggtt->mappable, page_base,
 				    page_offset, user_data, page_length)) {
-			ret = -EFAULT;
-			goto out_flush;
+			hit_slow_path = true;
+			mutex_unlock(&dev->struct_mutex);
+			if (slow_user_access(ggtt->mappable,
+					     page_base,
+					     page_offset, user_data,
+					     page_length, true)) {
+				ret = -EFAULT;
+				mutex_lock(&dev->struct_mutex);
+				goto out_flush;
+			}
+
+			mutex_lock(&dev->struct_mutex);
 		}
 
 		remain -= page_length;
@@ -890,6 +1045,19 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
 	}
 
 out_flush:
+	if (hit_slow_path) {
+		if (ret == 0 &&
+		    (obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0) {
+			/* The user has modified the object whilst we tried
+			 * reading from it, and we now have no idea what domain
+			 * the pages should be in. As we have just been touching
+			 * them directly, flush everything back to the GTT
+			 * domain.
+			 */
+			ret = i915_gem_object_set_to_gtt_domain(obj, false);
+		}
+	}
+
 	intel_fb_obj_flush(obj, false, ORIGIN_GTT);
 out_unpin:
 	if (node.allocated) {
@@ -1146,14 +1314,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 		goto out;
 	}
 
-	/* prime objects have no backing filp to GEM pread/pwrite
-	 * pages from.
-	 */
-	if (!obj->base.filp) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	trace_i915_gem_object_pwrite(obj, args->offset, args->size);
 
 	ret = -EFAULT;
@@ -1163,20 +1323,20 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	 * pread/pwrite currently are reading and writing from the CPU
 	 * perspective, requiring manual detiling by the client.
 	 */
-	if (obj->tiling_mode == I915_TILING_NONE &&
-	    obj->base.write_domain != I915_GEM_DOMAIN_CPU &&
-	    cpu_write_needs_clflush(obj)) {
+	if (!obj->base.filp || cpu_write_needs_clflush(obj)) {
 		ret = i915_gem_gtt_pwrite_fast(dev_priv, obj, args, file);
 		/* Note that the gtt paths might fail with non-page-backed user
 		 * pointers (e.g. gtt mappings when moving data between
 		 * textures). Fallback to the shmem path in that case. */
 	}
 
-	if (ret == -EFAULT || ret == -ENOSPC) {
+	if (ret == -EFAULT) {
 		if (obj->phys_handle)
 			ret = i915_gem_phys_pwrite(obj, args, file);
-		else
+		else if (obj->base.filp)
 			ret = i915_gem_shmem_pwrite(dev, obj, args, file);
+		else
+			ret = -ENODEV;
 	}
 
 out:
@@ -4007,9 +4167,7 @@ out:
 	 * object is now coherent at its new cache level (with respect
 	 * to the access domain).
 	 */
-	if (obj->cache_dirty &&
-	    obj->base.write_domain != I915_GEM_DOMAIN_CPU &&
-	    cpu_write_needs_clflush(obj)) {
+	if (obj->cache_dirty && cpu_write_needs_clflush(obj)) {
 		if (i915_gem_clflush_object(obj, true))
 			i915_gem_chipset_flush(to_i915(obj->base.dev));
 	}
-- 
1.9.1


* [PATCH 09/11] drm/i915: Migrate stolen objects before hibernation
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (7 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 08/11] drm/i915: Support for pread/pwrite from/to non shmem backed objects ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 10/11] drm/i915: Disable use of stolen area by User when Intel RST is present ankitprasad.r.sharma
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Chris Wilson <chris@chris-wilson.co.uk>

Ville reminded us that stolen memory is not preserved across
hibernation; as a result, context objects now being allocated from
stolen were corrupted on S4 and promptly hung the GPU on resume.

We want to utilise stolen as much as possible (nothing else will use
that wasted memory otherwise), so we need a strategy for handling
general objects allocated from stolen across hibernation. A simple
solution is to do a CPU copy through the GTT of the stolen object into
a fresh shmemfs backing store and thenceforth treat it as a normal
object. This can be refined in future either to use a GPU copy to
avoid the slow uncached reads (though it's hibernation!) or to
recreate stolen objects upon resume/first-use. For now, a simple
approach should suffice for testing the object migration.
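
The core of the copy is a single-scratch-page trick, condensed below
(not the literal patch code; see copy_content() in the diff for the
full version with error handling and domain management):

/*
 * One scratch PTE in the mappable aperture is re-pointed at each
 * stolen page in turn, so the copy needs only a single GTT page no
 * matter how large the object is.
 */
for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
	ggtt->base.insert_page(&ggtt->base,
			       i915_gem_object_get_dma_address(obj, i),
			       node.start, I915_CACHE_NONE, 0);

	src = io_mapping_map_atomic_wc(ggtt->mappable, node.start);
	dst = kmap_atomic(page);	/* page i of the new shmemfs file */
	memcpy_fromio(dst, src, PAGE_SIZE);	/* the slow uncached read */
	kunmap_atomic(dst);
	io_mapping_unmap_atomic(src);
}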

v2:
Swap PTE for pinned bindings over to the shmemfs. This adds a
complicated dance, but is required as many stolen objects are likely to
be pinned for use by the hardware. Swapping the PTEs should not result
in externally visible behaviour, as each PTE update should be atomic and
the two pages identical. (danvet)

Be safe by default, per the principle of least surprise: we need a new
flag to mark objects that we can wilfully discard and recreate across
hibernation. (danvet)

Just use the global_list rather than invent a new stolen_list. This is
the slowpath hibernate and so adding a new list and the associated
complexity isn't worth it.

v3: Rebased on drm-intel-nightly (Ankit)

v4: Use insert_page to map stolen memory backed pages for migration to
shmem (Chris)

v5: Acquire mutex lock while copying stolen buffer objects to shmem (Chris)

v6: Handled file leak, split the object migration function, added
kerneldoc for migrate_stolen_to_shmemfs() (Tvrtko)
Use i915 wrapper function for drm_mm_insert_node_in_range()

v7: Keep the object in cpu domain after get_pages, remove the object from
the unbound list only when marked PURGED, Corrected split of object migration
function (Chris)

v8: Split i915_gem_freeze(), removed redundant use of barrier, corrected
use of set_to_cpu_domain() (Chris)

v9: Replaced WARN_ON by BUG_ON and added a comment explaining it
(Daniel/Tvrtko)

v10: Document use of barriers (Chris)

v11: Resolved list corruption due to not removing obj from global_list
if no reference to pages is held, Rebased (Ankit)

v12: Rebase (Ankit)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c         |  22 +++-
 drivers/gpu/drm/i915/i915_drv.h         |  10 ++
 drivers/gpu/drm/i915/i915_gem.c         | 204 ++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_gem_stolen.c  |  49 ++++++++
 drivers/gpu/drm/i915/intel_display.c    |   3 +
 drivers/gpu/drm/i915/intel_fbdev.c      |   6 +
 drivers/gpu/drm/i915/intel_pm.c         |   2 +
 drivers/gpu/drm/i915/intel_ringbuffer.c |   6 +
 8 files changed, 284 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 943d7b2..827a3fe 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1066,6 +1066,22 @@ static int i915_pm_suspend(struct device *dev)
 	return i915_drm_suspend(drm_dev);
 }
 
+/* freeze: before creating the hibernation_image */
+static int i915_pm_freeze(struct device *dev)
+{
+	int ret;
+
+	ret = i915_gem_freeze(pci_get_drvdata(to_pci_dev(dev)));
+	if (ret)
+		return ret;
+
+	ret = i915_pm_suspend(dev);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static int i915_pm_suspend_late(struct device *dev)
 {
 	struct drm_device *drm_dev = dev_to_i915(dev)->dev;
@@ -1115,12 +1131,6 @@ static int i915_pm_resume(struct device *dev)
 	return i915_drm_resume(drm_dev);
 }
 
-/* freeze: before creating the hibernation_image */
-static int i915_pm_freeze(struct device *dev)
-{
-	return i915_pm_suspend(dev);
-}
-
 static int i915_pm_freeze_late(struct device *dev)
 {
 	int ret;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d15c018..6b7e790 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2180,6 +2180,12 @@ struct drm_i915_gem_object {
 	 * Advice: are the backing pages purgeable?
 	 */
 	unsigned int madv:2;
+	/**
+	 * Whereas madv is for userspace, there are certain situations
+	 * where we want I915_MADV_DONTNEED behaviour on internal objects
+	 * without conflating the userspace setting.
+	 */
+	unsigned int internal_volatile:1;
 
 	/**
 	 * Current tiling mode for the object.
@@ -3317,6 +3323,9 @@ int __must_check i915_gem_init_hw(struct drm_device *dev);
 void i915_gem_init_swizzling(struct drm_device *dev);
 void i915_gem_cleanup_engines(struct drm_device *dev);
 int __must_check i915_gpu_idle(struct drm_device *dev);
+int __must_check i915_gem_freeze(struct drm_device *dev);
+int __must_check
+i915_gem_object_migrate_stolen_to_shmemfs(struct drm_i915_gem_object *obj);
 int __must_check i915_gem_suspend(struct drm_device *dev);
 void __i915_add_request(struct drm_i915_gem_request *req,
 			struct drm_i915_gem_object *batch_obj,
@@ -3542,6 +3551,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 					       u32 stolen_offset,
 					       u32 gtt_offset,
 					       u32 size);
+int __must_check i915_gem_stolen_freeze(struct drm_i915_private *i915);
 
 /* i915_gem_shrinker.c */
 unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fca3f6d..e0adb2a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4753,12 +4753,27 @@ static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
 	.put_pages = i915_gem_object_put_pages_gtt,
 };
 
+static struct address_space *
+i915_gem_set_inode_gfp(struct drm_device *dev, struct file *file)
+{
+	struct address_space *mapping = file_inode(file)->i_mapping;
+	gfp_t mask;
+
+	mask = GFP_HIGHUSER | __GFP_RECLAIMABLE;
+	if (IS_CRESTLINE(dev) || IS_BROADWATER(dev)) {
+		/* 965gm cannot relocate objects above 4GiB. */
+		mask &= ~__GFP_HIGHMEM;
+		mask |= __GFP_DMA32;
+	}
+	mapping_set_gfp_mask(mapping, mask);
+
+	return mapping;
+}
+
 struct drm_i915_gem_object *i915_gem_object_create(struct drm_device *dev,
-						  size_t size)
+						   size_t size)
 {
 	struct drm_i915_gem_object *obj;
-	struct address_space *mapping;
-	gfp_t mask;
 	int ret;
 
 	obj = i915_gem_object_alloc(dev);
@@ -4769,15 +4784,7 @@ struct drm_i915_gem_object *i915_gem_object_create(struct drm_device *dev,
 	if (ret)
 		goto fail;
 
-	mask = GFP_HIGHUSER | __GFP_RECLAIMABLE;
-	if (IS_CRESTLINE(dev) || IS_BROADWATER(dev)) {
-		/* 965gm cannot relocate objects above 4GiB. */
-		mask &= ~__GFP_HIGHMEM;
-		mask |= __GFP_DMA32;
-	}
-
-	mapping = file_inode(obj->base.filp)->i_mapping;
-	mapping_set_gfp_mask(mapping, mask);
+	i915_gem_set_inode_gfp(dev, obj->base.filp);
 
 	i915_gem_object_init(obj, &i915_gem_object_ops);
 
@@ -4950,6 +4957,179 @@ i915_gem_stop_engines(struct drm_device *dev)
 		dev_priv->gt.stop_engine(engine);
 }
 
+static int
+copy_content(struct drm_i915_gem_object *obj,
+		struct drm_i915_private *i915,
+		struct address_space *mapping)
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_mm_node node;
+	int ret, i;
+
+	ret = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (ret)
+		return ret;
+
+	/* stolen objects are already pinned to prevent shrinkage */
+	memset(&node, 0, sizeof(node));
+	ret = insert_mappable_node(i915, &node, PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
+		struct page *page;
+		void __iomem *src;
+		void *dst;
+
+		ggtt->base.insert_page(&ggtt->base,
+				       i915_gem_object_get_dma_address(obj, i),
+				       node.start,
+				       I915_CACHE_NONE, 0);
+
+		page = shmem_read_mapping_page(mapping, i);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			break;
+		}
+
+		src = io_mapping_map_atomic_wc(ggtt->mappable, node.start);
+		dst = kmap_atomic(page);
+		wmb(); /* flush modifications to the GGTT (insert_page) */
+		memcpy_fromio(dst, src, PAGE_SIZE);
+		wmb(); /* flush the write before we modify the GGTT */
+		kunmap_atomic(dst);
+		io_mapping_unmap_atomic(src);
+
+		put_page(page);
+	}
+
+	ggtt->base.clear_range(&ggtt->base,
+			       node.start, node.size,
+			       true);
+	remove_mappable_node(&node);
+	if (ret)
+		return ret;
+
+	return i915_gem_object_set_to_cpu_domain(obj, true);
+}
+
+/**
+ * i915_gem_object_migrate_stolen_to_shmemfs() - migrates a stolen backed
+ * object to shmemfs
+ * @obj: stolen backed object to be migrated
+ *
+ * Returns: 0 on successful migration, errno on failure
+ */
+
+int
+i915_gem_object_migrate_stolen_to_shmemfs(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_vma *vma, *vn;
+	struct file *file;
+	struct address_space *mapping;
+	struct sg_table *stolen_pages, *shmemfs_pages;
+	int ret;
+
+	if (WARN_ON_ONCE(i915_gem_object_needs_bit17_swizzle(obj)))
+		return -EINVAL;
+
+	file = shmem_file_setup("drm mm object", obj->base.size, VM_NORESERVE);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+	mapping = i915_gem_set_inode_gfp(obj->base.dev, file);
+
+	list_for_each_entry_safe(vma, vn, &obj->vma_list, obj_link)
+		if (i915_vma_unbind(vma))
+			continue;
+
+	if (obj->madv != I915_MADV_WILLNEED && list_empty(&obj->vma_list)) {
+		/* Discard the stolen reservation, and replace with
+		 * an unpopulated shmemfs object.
+		 */
+		obj->madv = __I915_MADV_PURGED;
+	} else {
+		ret = copy_content(obj, i915, mapping);
+		if (ret)
+			goto err_file;
+	}
+
+	stolen_pages = obj->pages;
+	obj->pages = NULL;
+
+	obj->base.filp = file;
+
+	/* Recreate any pinned binding with pointers to the new storage */
+	if (!list_empty(&obj->vma_list)) {
+		ret = i915_gem_object_get_pages_gtt(obj);
+		if (ret) {
+			obj->pages = stolen_pages;
+			goto err_file;
+		}
+
+		obj->get_page.sg = obj->pages->sgl;
+		obj->get_page.last = 0;
+
+		list_for_each_entry(vma, &obj->vma_list, obj_link) {
+			if (!drm_mm_node_allocated(&vma->node))
+				continue;
+
+			/* As vma is already allocated and only the PTEs
+			 * have to be reprogrammed, makes this vma_bind
+			 * call extremely unlikely to fail.
+			 */
+			BUG_ON(i915_vma_bind(vma,
+					      obj->cache_level,
+					      PIN_UPDATE));
+		}
+	} else {
+		/* Remove object from global list if no reference to the
+		 * pages is held.
+		 */
+		list_del(&obj->global_list);
+	}
+
+	/* drop the stolen pin and backing */
+	shmemfs_pages = obj->pages;
+	obj->pages = stolen_pages;
+
+	i915_gem_object_unpin_pages(obj);
+	obj->ops->put_pages(obj);
+	if (obj->ops->release)
+		obj->ops->release(obj);
+
+	obj->ops = &i915_gem_object_ops;
+	obj->pages = shmemfs_pages;
+
+	return 0;
+
+err_file:
+	fput(file);
+	obj->base.filp = NULL;
+	return ret;
+}
+
+int
+i915_gem_freeze(struct drm_device *dev)
+{
+	/* Called before i915_gem_suspend() when hibernating */
+	struct drm_i915_private *i915 = to_i915(dev);
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret)
+		return ret;
+
+	/* Across hibernation, the stolen area is not preserved.
+	 * Anything inside stolen must copied back to normal
+	 * memory if we wish to preserve it.
+	 */
+	ret = i915_gem_stolen_freeze(i915);
+
+	mutex_unlock(&dev->struct_mutex);
+	return ret;
+}
+
 int
 i915_gem_suspend(struct drm_device *dev)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 1b1b9c1..2518ebb 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -853,3 +853,52 @@ err:
 	drm_gem_object_unreference(&obj->base);
 	return ERR_PTR(ret);
 }
+
+int i915_gem_stolen_freeze(struct drm_i915_private *i915)
+{
+	struct drm_i915_gem_object *obj, *tmp;
+	struct list_head *phase[] = {
+		&i915->mm.unbound_list, &i915->mm.bound_list, NULL
+	}, **p;
+	int ret = 0;
+
+	for (p = phase; *p; p++) {
+		struct list_head migrate;
+		int ret;
+
+		INIT_LIST_HEAD(&migrate);
+		list_for_each_entry_safe(obj, tmp, *p, global_list) {
+			if (obj->stolen == NULL)
+				continue;
+
+			if (obj->internal_volatile)
+				continue;
+
+			/* In the general case, this object may only be alive
+			 * due to an active reference, and that may disappear
+			 * when we unbind any of the objects (and so wait upon
+			 * the GPU and retire requests). To prevent one of the
+			 * objects from disappearing beneath us, we need to
+			 * take a reference to each as we build the migration
+			 * list.
+			 *
+			 * This is similar to the strategy required whilst
+			 * shrinking or evicting objects (for the same reason).
+			 */
+			drm_gem_object_reference(&obj->base);
+			list_move(&obj->global_list, &migrate);
+		}
+
+		ret = 0;
+		list_for_each_entry_safe(obj, tmp, &migrate, global_list) {
+			if (ret == 0)
+				ret = i915_gem_object_migrate_stolen_to_shmemfs(obj);
+			drm_gem_object_unreference(&obj->base);
+		}
+		list_splice(&migrate, *p);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 4bbda9c..1f4e4a5 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2500,6 +2500,9 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 		return false;
 	}
 
+	/* Not to be preserved across hibernation */
+	obj->internal_volatile = true;
+
 	obj->tiling_mode = plane_config->tiling;
 	if (obj->tiling_mode == I915_TILING_X)
 		obj->stride = fb->pitches[0];
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index e094005..a0c2855 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -157,6 +157,12 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
 		goto out;
 	}
 
+	/* Discard the contents of the BIOS fb across hibernation.
+	 * We really want to completely throw away the earlier fbdev
+	 * and reconfigure it anyway.
+	 */
+	obj->internal_volatile = true;
+
 	fb = __intel_framebuffer_create(dev, &mode_cmd, obj);
 	if (IS_ERR(fb)) {
 		drm_gem_object_unreference(&obj->base);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 12aa36f..89db045 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5543,6 +5543,8 @@ static void valleyview_setup_pctx(struct drm_i915_private *dev_priv)
 	I915_WRITE(VLV_PCBR, pctx_paddr);
 
 out:
+	/* The power context need not be preserved across hibernation */
+	pctx->internal_volatile = true;
 	DRM_DEBUG_DRIVER("PCBR: 0x%08x\n", I915_READ(VLV_PCBR));
 	dev_priv->vlv_pctx = pctx;
 	mutex_unlock(&dev_priv->dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 586360b..a0dfcab 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2184,6 +2184,12 @@ static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
+	/* Ringbuffer objects are by definition volatile - only the commands
+	 * between HEAD and TAIL need to be preserved and whilst there are
+	 * any commands there, the ringbuffer is pinned by activity.
+	 */
+	obj->internal_volatile = true;
+
 	/* mark ring buffers as read-only from GPU side by default */
 	obj->gt_ro = 1;
 
-- 
1.9.1


* [PATCH 10/11] drm/i915: Disable use of stolen area by User when Intel RST is present
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (8 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 09/11] drm/i915: Migrate stolen objects before hibernation ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-05-31  6:19 ` [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space ankitprasad.r.sharma
  2016-05-31 11:05 ` ✗ Ro.CI.BAT: failure for Support for creating/using Stolen memory backed objects (rev15) Patchwork
  11 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

The BIOS Rapid Start Technology may corrupt stolen memory across S3
suspend by transparently entering hibernation without notifying the
driver, in which case we cannot preserve the user data stored in the
stolen region. Hence this patch identifies the presence of the RST
device on the ACPI bus, and disables use of stolen memory (for
persistent data) if it is found.
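
The intended userspace reaction is a plain fallback. A hypothetical
sketch follows; gem_create_stolen() and gem_create() stand in for
wrappers around the extended DRM_I915_GEM_CREATE call introduced
earlier in this series and are illustrative names, not real libdrm
helpers:

/* Names are illustrative only; see the create ioctl extension
 * earlier in this series.
 */
int bo = gem_create_stolen(fd, size);
if (bo < 0 && errno == ENODEV) {
	/* Intel RST present: stolen is volatile, unfit for user data */
	bo = gem_create(fd, size);	/* ordinary shmem-backed object */
}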

v2: Updated comment, updated/corrected new functions private to driver
(Chris/Tvrtko)

v3: Disabling stolen by default, wait till required acpi changes to
detect device presence are pulled in (Ankit)

v4: Enabled stolen by default as required acpi changes are merged
(Ankit)

v5: Renamed variable, used IS_ENABLED() in place of #ifdef, used char*
instead of structures (Lukas)

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Cc: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h        | 11 +++++++++++
 drivers/gpu/drm/i915/i915_gem.c        |  8 ++++++++
 drivers/gpu/drm/i915/i915_gem_stolen.c | 12 ++++++++++++
 drivers/gpu/drm/i915/intel_acpi.c      |  7 +++++++
 4 files changed, 38 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 6b7e790..f2dbb26 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1338,6 +1338,16 @@ struct i915_gem_mm {
 	 */
 	bool busy;
 
+	/**
+	 * Stolen will be lost upon hibernate (as the memory is unpowered).
+	 * Across resume, we expect stolen to be intact - however, it may
+	 * also be utilised by third parties (e.g. Intel RapidStart
+	 * Technology) and if so we have to assume that any data stored in
+	 * stolen across resume is lost and we set this flag to indicate that
+	 * the stolen memory is volatile.
+	 */
+	bool volatile_stolen;
+
 	/* the indicator for dispatch video commands on two BSD rings */
 	unsigned int bsd_ring_dispatch_index;
 
@@ -3701,6 +3711,7 @@ static inline int intel_opregion_get_panel_type(struct drm_i915_private *dev)
 #endif
 
 /* intel_acpi.c */
+bool intel_detect_acpi_rst(void);
 #ifdef CONFIG_ACPI
 extern void intel_register_dsm_handler(void);
 extern void intel_unregister_dsm_handler(void);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e0adb2a..8bdcc55 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -391,8 +391,16 @@ static struct drm_i915_gem_object *
 i915_gem_alloc_object_stolen(struct drm_device *dev, size_t size)
 {
 	struct drm_i915_gem_object *obj;
+	struct drm_i915_private *dev_priv = dev->dev_private;
 	int ret;
 
+	if (dev_priv->mm.volatile_stolen) {
+		/* Stolen may be overwritten by external parties
+		 * so unsuitable for persistent user data.
+		 */
+		return ERR_PTR(-ENODEV);
+	}
+
 	mutex_lock(&dev->struct_mutex);
 	obj = i915_gem_object_create_stolen(dev, size);
 	if (IS_ERR(obj))
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 2518ebb..0e6203c 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -492,6 +492,18 @@ int i915_gem_init_stolen(struct drm_device *dev)
 	 */
 	drm_mm_init(&dev_priv->mm.stolen, 0, ggtt->stolen_usable_size);
 
+	/* If the stolen region can be modified behind our backs upon suspend,
+	 * then we cannot use it to store nonvolatile contents (i.e user data)
+	 * as it will be corrupted upon resume.
+	 */
+	dev_priv->mm.volatile_stolen = false;
+	if (IS_ENABLED(CONFIG_SUSPEND)) {
+		/* BIOSes using RapidStart Technology have been reported
+		 * to overwrite stolen across S3, not just S4.
+		 */
+		dev_priv->mm.volatile_stolen = intel_detect_acpi_rst();
+	}
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/intel_acpi.c b/drivers/gpu/drm/i915/intel_acpi.c
index eb638a1..05fd67f 100644
--- a/drivers/gpu/drm/i915/intel_acpi.c
+++ b/drivers/gpu/drm/i915/intel_acpi.c
@@ -23,6 +23,8 @@ static const u8 intel_dsm_guid[] = {
 	0x0f, 0x13, 0x17, 0xb0, 0x1c, 0x2c
 };
 
+static const char *irst_id = "INT3392";
+
 static char *intel_dsm_port_name(u8 id)
 {
 	switch (id) {
@@ -162,3 +164,8 @@ void intel_register_dsm_handler(void)
 void intel_unregister_dsm_handler(void)
 {
 }
+
+bool intel_detect_acpi_rst(void)
+{
+	return acpi_dev_present(irst_id);
+}
-- 
1.9.1


* [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (9 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 10/11] drm/i915: Disable use of stolen area by User when Intel RST is present ankitprasad.r.sharma
@ 2016-05-31  6:19 ` ankitprasad.r.sharma
  2016-06-01 10:31   ` Tvrtko Ursulin
  2016-06-01 12:11   ` Emil Velikov
  2016-05-31 11:05 ` ✗ Ro.CI.BAT: failure for Support for creating/using Stolen memory backed objects (rev15) Patchwork
  11 siblings, 2 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-05-31  6:19 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

When constructing a batchbuffer, it is sometimes crucial to know the
largest hole into which we can fit a fenceable buffer (for example when
handling very large objects on gen2 and gen3). This depends on the
fragmentation of pinned buffers inside the aperture, a question only the
kernel can easily answer.

This patch extends the current DRM_I915_GEM_GET_APERTURE ioctl to
include a couple of new fields in its reply to userspace - the total
amount of space available in the mappable region of the aperture and
also the single largest block available.

This is not quite what userspace wants to answer the question of whether
this batch will fit, as fences are also required to meet severe alignment
constraints within the batch. For this purpose, a third, conservative
estimate of the largest fence available is also provided. For when
userspace needs more than one batch, we also provide the cumulative space
available for fences, giving it some additional guidance as to how much
space it could allocate to fences. Conservatism still wins.
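
As a concrete example of the conservatism: on gen3 a fence region must
be a power-of-two size and naturally aligned, so a 7 MiB hole spanning
[5 MiB, 12 MiB) can hold at most a 4 MiB fence (placed at the 8 MiB
boundary). An 8 MiB fence would not fit in 7 MiB at all, and alignment
eats the rest, leaving almost half of the raw space unusable for any
single fence.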

This patch also extends the GET_APERTURE ioctl to report the total and
available sizes of the stolen region, as well as the single largest
block available there.

The patch also adds a debugfs file for convenient testing and
reporting.
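
A sketch of how userspace would consume the extended reply, assuming
the uapi additions below; zero-initialising the struct means the
version field still reads back as 0 on kernels predating this patch,
which copy back a shorter struct:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static void report_aperture(int fd)
{
	struct drm_i915_gem_get_aperture arg;

	memset(&arg, 0, sizeof(arg));
	if (ioctl(fd, DRM_IOCTL_I915_GEM_GET_APERTURE, &arg))
		return;

	if (arg.version >= 1)
		printf("mappable %llu bytes, stolen %llu bytes\n",
		       (unsigned long long)arg.map_total_size,
		       (unsigned long long)arg.stolen_total_size);
	/* version == 0: only aper_size/aper_available_size are valid */
}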

v2: The first object cannot end at offset 0, so we can use last==0 to
detect the empty list.

v3: Expand all values to 64bit, just in case.
    Report total mappable aperture size for userspace that cannot easily
    determine it by inspecting the PCI device.

v4: (Rodrigo) Fixed rebase conflicts.

v5: Rebased to the latest drm-intel-nightly (Ankit)

v6: Kept the limits in the get_aperture ioctl and moved the dynamically
changing numbers to debugfs; addressed comments (Chris/Tvrtko)

v7: Squashed the stolen memory size patch into this one, added a new
version field to validate the map_size and stolen size values, changed
the author to me (Ankit) due to significant changes in the logic used
to get the size values

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c    | 143 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_drv.h        |   3 +
 drivers/gpu/drm/i915/i915_gem.c        |   4 +
 drivers/gpu/drm/i915/i915_gem_stolen.c |  27 +++++++
 include/uapi/drm/i915_drm.h            |  17 ++++
 5 files changed, 194 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index aeae600..625e3cc 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -586,6 +586,148 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 	return 0;
 }
 
+static int vma_rank_by_ggtt(void *priv,
+			    struct list_head *A,
+			    struct list_head *B)
+{
+	struct i915_vma *a = list_entry(A, typeof(*a), exec_list);
+	struct i915_vma *b = list_entry(B, typeof(*b), exec_list);
+
+	return a->node.start - b->node.start;
+}
+
+static u32 __fence_size(struct drm_i915_private *dev_priv, u32 start, u32 end)
+{
+	u32 size = end - start;
+	u32 fence_size;
+
+	if (INTEL_INFO(dev_priv)->gen < 4) {
+		u32 fence_max;
+		u32 fence_next;
+
+		if (IS_GEN3(dev_priv)) {
+			fence_max = I830_FENCE_MAX_SIZE_VAL << 20;
+			fence_next = 1024*1024;
+		} else {
+			fence_max = I830_FENCE_MAX_SIZE_VAL << 19;
+			fence_next = 512*1024;
+		}
+
+		fence_max = min(fence_max, size);
+		fence_size = 0;
+		/* Find fence_size less than fence_max and power of 2 */
+		while (fence_next <= fence_max) {
+			u32 base = ALIGN(start, fence_next);
+			if (base + fence_next > end)
+				break;
+
+			fence_size = fence_next;
+			fence_next <<= 1;
+		}
+	} else {
+		fence_size = size;
+	}
+
+	return fence_size;
+}
+
+static int i915_gem_aperture_info(struct seq_file *m, void *data)
+{
+	struct drm_info_node *node = m->private;
+	struct drm_device *dev = node->minor->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct i915_ggtt *ggtt = &dev_priv->ggtt;
+	struct drm_i915_gem_get_aperture arg;
+	struct i915_vma *vma;
+	struct list_head map_list;
+	const uint64_t map_limit = ggtt->mappable_end;
+	uint64_t map_space, map_largest, fence_space, fence_largest;
+	uint64_t last, hole_size, stolen_free, stolen_largest;
+	int ret;
+
+	INIT_LIST_HEAD(&map_list);
+
+	map_space = map_largest = 0;
+	fence_space = fence_largest = 0;
+
+	ret = i915_gem_get_aperture_ioctl(node->minor->dev, &arg, NULL);
+	if (ret)
+		return ret;
+
+	mutex_lock(&dev->struct_mutex);
+
+	i915_gem_stolen_size_info(dev_priv, &stolen_free,
+				  &stolen_largest);
+
+	list_for_each_entry(vma, &ggtt->base.active_list, vm_link)
+		if (vma->pin_count &&
+			(vma->node.start + vma->node.size) <= map_limit)
+			list_add(&vma->exec_list, &map_list);
+	list_for_each_entry(vma, &ggtt->base.inactive_list, vm_link)
+		if (vma->pin_count &&
+			(vma->node.start + vma->node.size) <= map_limit)
+			list_add(&vma->exec_list, &map_list);
+
+	last = 0;
+	list_sort(NULL, &map_list, vma_rank_by_ggtt);
+	while (!list_empty(&map_list)) {
+		vma = list_first_entry(&map_list, typeof(*vma), exec_list);
+		list_del_init(&vma->exec_list);
+
+		if (last == 0)
+			goto skip_first;
+
+		hole_size = vma->node.start - last;
+		if (hole_size > map_largest)
+			map_largest = hole_size;
+		map_space += hole_size;
+
+		hole_size = __fence_size(dev_priv, last, vma->node.start);
+		if (hole_size > fence_largest)
+			fence_largest = hole_size;
+		fence_space += hole_size;
+
+skip_first:
+		last = vma->node.start + vma->node.size;
+	}
+	if (last < map_limit) {
+		hole_size = map_limit - last;
+		if (hole_size > map_largest)
+			map_largest = hole_size;
+		map_space += hole_size;
+
+		hole_size = __fence_size(dev_priv, last, map_limit);
+		if (hole_size > fence_largest)
+			fence_largest = hole_size;
+		fence_space += hole_size;
+	}
+
+	mutex_unlock(&dev->struct_mutex);
+
+	seq_printf(m, "Total size of the GTT: %llu bytes\n",
+		   arg.aper_size);
+	seq_printf(m, "Available space in the GTT: %llu bytes\n",
+		   arg.aper_available_size);
+	seq_printf(m, "Total space in the mappable aperture: %llu bytes\n",
+		   arg.map_total_size);
+	seq_printf(m, "Available space in the mappable aperture: %llu bytes\n",
+		   map_space);
+	seq_printf(m, "Single largest space in the mappable aperture: %llu bytes\n",
+		   map_largest);
+	seq_printf(m, "Available space for fences: %llu bytes\n",
+		   fence_space);
+	seq_printf(m, "Single largest fence available: %llu bytes\n",
+		   fence_largest);
+	seq_printf(m, "Total size of the stolen region: %llu bytes\n",
+		   arg.stolen_total_size);
+	seq_printf(m, "Available size of the stolen region: %llu bytes\n",
+		   stolen_free);
+	seq_printf(m, "Single largest area in the stolen region: %llu bytes\n",
+		   stolen_largest);
+
+	return 0;
+}
+
 static int i915_gem_gtt_info(struct seq_file *m, void *data)
 {
 	struct drm_info_node *node = m->private;
@@ -5405,6 +5547,7 @@ static int i915_debugfs_create(struct dentry *root,
 static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_capabilities", i915_capabilities, 0},
 	{"i915_gem_objects", i915_gem_object_info, 0},
+	{"i915_gem_aperture", i915_gem_aperture_info, 0},
 	{"i915_gem_gtt", i915_gem_gtt_info, 0},
 	{"i915_gem_pinned", i915_gem_gtt_info, 0, (void *) PINNED_LIST},
 	{"i915_gem_active", i915_gem_object_list_info, 0, (void *) ACTIVE_LIST},
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f2dbb26..aa2aa7b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3562,6 +3562,9 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 					       u32 gtt_offset,
 					       u32 size);
 int __must_check i915_gem_stolen_freeze(struct drm_i915_private *i915);
+void i915_gem_stolen_size_info(struct drm_i915_private *dev_priv,
+			       uint64_t *stolen_free,
+			       uint64_t *stolen_largest);
 
 /* i915_gem_shrinker.c */
 unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8bdcc55..ce3b73b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -165,6 +165,10 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 
 	args->aper_size = ggtt->base.total;
 	args->aper_available_size = args->aper_size - pinned;
+	args->version = 1;
+	args->map_total_size = ggtt->mappable_end;
+	args->stolen_total_size =
+		dev_priv->mm.volatile_stolen ? 0 : ggtt->stolen_usable_size;
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 0e6203c..1fb4597 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -914,3 +914,30 @@ int i915_gem_stolen_freeze(struct drm_i915_private *i915)
 
 	return ret;
 }
+
+
+void i915_gem_stolen_size_info(struct drm_i915_private *dev_priv,
+			       uint64_t *stolen_free,
+			       uint64_t *stolen_largest)
+{
+	struct drm_mm *mm = &dev_priv->mm.stolen;
+	struct drm_mm_node *hole;
+	uint64_t hole_size, hole_start, hole_end, largest_hole = 0;
+	uint64_t total_free = 0;
+
+	if (dev_priv->mm.volatile_stolen) {
+		*stolen_free = 0;
+		*stolen_largest = 0;
+		return;
+	}
+
+	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
+		hole_size = hole_end - hole_start;
+		total_free += hole_size;
+		if (largest_hole < hole_size)
+			largest_hole = hole_size;
+	}
+
+	*stolen_free = total_free;
+	*stolen_largest = largest_hole;
+}
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index ceff4c1..bc16f0a 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1011,6 +1011,23 @@ struct drm_i915_gem_get_aperture {
 	 * bytes
 	 */
 	__u64 aper_available_size;
+
+	/**
+	 * Versioning to indicate if map_total_size and stolen_total_size
+	 * values returned are valid or not
+	 */
+	__u64 version;
+
+	/**
+	 * Total space in the mappable region of the aperture, in bytes
+	 */
+	__u64 map_total_size;
+
+	/**
+	 * Total space in the stolen region, in bytes
+	 */
+	__u64 stolen_total_size;
+
 };
 
 struct drm_i915_get_pipe_from_crtc_id {
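
For illustration only, not part of the patch: a minimal userspace sketch of
how the extended ioctl might be consumed, assuming libdrm's drmIoctl() and
the struct layout above. Provided userspace clears the struct up front, an
older kernel leaves version at 0, which marks the new fields as invalid.

#include <string.h>
#include <xf86drm.h>
#include <i915_drm.h>

static int query_aperture(int fd)
{
	struct drm_i915_gem_get_aperture arg;

	memset(&arg, 0, sizeof(arg));	/* version stays 0 on old kernels */
	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_APERTURE, &arg))
		return -1;

	if (arg.version >= 1) {
		/* map_total_size and stolen_total_size are valid here;
		 * stolen_total_size == 0 also covers the volatile stolen
		 * case, where the region cannot be relied upon */
	}

	return 0;
}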
-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* ✗ Ro.CI.BAT: failure for Support for creating/using Stolen memory backed objects (rev15)
  2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
                   ` (10 preceding siblings ...)
  2016-05-31  6:19 ` [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space ankitprasad.r.sharma
@ 2016-05-31 11:05 ` Patchwork
  11 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2016-05-31 11:05 UTC (permalink / raw)
  To: kbuild test robot; +Cc: intel-gfx

== Series Details ==

Series: Support for creating/using Stolen memory backed objects (rev15)
URL   : https://patchwork.freedesktop.org/series/659/
State : failure

== Summary ==

Applying: drm/i915: Add support for mapping an object page by page
Applying: drm/i915: Introduce i915_gem_object_get_dma_address()
Applying: drm/i915: Use insert_page for pwrite_fast
Using index info to reconstruct a base tree...
M	drivers/gpu/drm/i915/i915_gem.c
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/i915_gem.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/i915_gem.c
Patch failed at 0003 drm/i915: Use insert_page for pwrite_fast
The copy of the patch that failed is found in: .git/rebase-apply/patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2016-05-31  6:19 ` [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space ankitprasad.r.sharma
@ 2016-06-01 10:31   ` Tvrtko Ursulin
  2016-06-07  8:52     ` Ankitprasad Sharma
  2016-06-01 12:11   ` Emil Velikov
  1 sibling, 1 reply; 26+ messages in thread
From: Tvrtko Ursulin @ 2016-06-01 10:31 UTC (permalink / raw)
  To: ankitprasad.r.sharma, intel-gfx; +Cc: akash.goel


On 31/05/16 07:19, ankitprasad.r.sharma@intel.com wrote:
> From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
>
> When constructing a batchbuffer, it is sometimes crucial to know the
> largest hole into which we can fit a fenceable buffer (for example when
> handling very large objects on gen2 and gen3). This depends on the
> fragmentation of pinned buffers inside the aperture, a question only the
> kernel can easily answer.
>
> This patch extends the current DRM_I915_GEM_GET_APERTURE ioctl to
> include a couple of new fields in its reply to userspace - the total
> amount of space available in the mappable region of the aperture and
> also the single largest block available.
>
> This is not quite what userspace wants to answer the question of whether
> this batch will fit as fences are also required to meet severe alignment
> constraints within the batch. For this purpose, a third conservative
> estimate of largest fence available is also provided. For when userspace
> needs more than one batch, we also provide the culmulative space

cumulative

> available for fences such that it has some additional guidance to how
> much space it could allocate to fences. Conservatism still wins.
>
> This patch extends the GET_APERTURE ioctl to add support
> for getting total size and available size of the stolen region as
> well as single largest block available in the stolen region too.
>
> The patch also adds a debugfs file for convenient testing and
> reporting.
>
> v2: The first object cannot end at offset 0, so we can use last==0 to
> detect the empty list.
>
> v3: Expand all values to 64bit, just in case.
>      Report total mappable aperture size for userspace that cannot easily
>      determine it by inspecting the PCI device.
>
> v4: (Rodrigo) Fixed rebase conflicts.
>
> v5: Rebased to the latest drm-intel-nightly (Ankit)
>
> v6: Keeping limits to get_aperture ioctl, and moved changing numbers to
> debugfs, Addressed comments (Chris/Tvrtko)
>
> v7: Squashed stolen memory size patch to this one, Added a new version
> field to validate the map_size and stolen size values, Changed Author
> to me (Ankit) due to signifcant changes in the logic used to get size

significant

> values
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
> Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_debugfs.c    | 143 +++++++++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/i915_drv.h        |   3 +
>   drivers/gpu/drm/i915/i915_gem.c        |   4 +
>   drivers/gpu/drm/i915/i915_gem_stolen.c |  27 +++++++
>   include/uapi/drm/i915_drm.h            |  17 ++++
>   5 files changed, 194 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index aeae600..625e3cc 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -586,6 +586,148 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
>   	return 0;
>   }
>
> +static int vma_rank_by_ggtt(void *priv,
> +			    struct list_head *A,
> +			    struct list_head *B)
> +{
> +	struct i915_vma *a = list_entry(A, typeof(*a), exec_list);
> +	struct i915_vma *b = list_entry(B, typeof(*b), exec_list);
> +
> +	return a->node.start - b->node.start;
> +}
> +
> +static u32 __fence_size(struct drm_i915_private *dev_priv, u32 start, u32 end)
> +{
> +	u32 size = end - start;
> +	u32 fence_size;
> +
> +	if (INTEL_INFO(dev_priv)->gen < 4) {
> +		u32 fence_max;
> +		u32 fence_next;
> +
> +		if (IS_GEN3(dev_priv)) {
> +			fence_max = I830_FENCE_MAX_SIZE_VAL << 20;
> +			fence_next = 1024*1024;
> +		} else {
> +			fence_max = I830_FENCE_MAX_SIZE_VAL << 19;
> +			fence_next = 512*1024;
> +		}
> +
> +		fence_max = min(fence_max, size);
> +		fence_size = 0;
> +		/* Find fence_size less than fence_max and power of 2 */
> +		while (fence_next <= fence_max) {
> +			u32 base = ALIGN(start, fence_next);
> +			if (base + fence_next > end)
> +				break;
> +
> +			fence_size = fence_next;
> +			fence_next <<= 1;
> +		}
> +	} else {
> +		fence_size = size;
> +	}
> +
> +	return fence_size;
> +}
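
A quick worked example of the gen3 branch above, to make the alignment
handling concrete (assuming fence_max does not clamp first): take a 2 MiB
hole running from start = 3 MiB to end = 5 MiB. With fence_next = 1 MiB,
base = ALIGN(3M, 1M) = 3M and 3M + 1M <= 5M, so fence_size becomes 1 MiB;
with fence_next = 2 MiB, base = ALIGN(3M, 2M) = 4M and 4M + 2M > 5M, so the
loop stops. The hole therefore contributes only 1 MiB of fence space despite
being 2 MiB long - natural alignment eats the rest, which is what makes the
estimate conservative.
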
> +
> +static int i915_gem_aperture_info(struct seq_file *m, void *data)
> +{
> +	struct drm_info_node *node = m->private;
> +	struct drm_device *dev = node->minor->dev;
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +	struct i915_ggtt *ggtt = &dev_priv->ggtt;
> +	struct drm_i915_gem_get_aperture arg;
> +	struct i915_vma *vma;
> +	struct list_head map_list;
> +	const uint64_t map_limit = ggtt->mappable_end;
> +	uint64_t map_space, map_largest, fence_space, fence_largest;
> +	uint64_t last, hole_size, stolen_free, stolen_largest;
> +	int ret;
> +
> +	INIT_LIST_HEAD(&map_list);
> +
> +	map_space = map_largest = 0;
> +	fence_space = fence_largest = 0;
> +
> +	ret = i915_gem_get_aperture_ioctl(node->minor->dev, &arg, NULL);
> +	if (ret)
> +		return ret;

Is it a concern that the snapshot of available space and largest ggtt 
hole is not atomic given the mutex drop and re-acquire between the two? 
Probably not..

> +
> +	mutex_lock(&dev->struct_mutex);
> +
> +	i915_gem_stolen_size_info(dev_priv, &stolen_free,
> +				  &stolen_largest);
> +
> +	list_for_each_entry(vma, &ggtt->base.active_list, vm_link)
> +		if (vma->pin_count &&
> +			(vma->node.start + vma->node.size) <= map_limit)
> +			list_add(&vma->exec_list, &map_list);
> +	list_for_each_entry(vma, &ggtt->base.inactive_list, vm_link)
> +		if (vma->pin_count &&
> +			(vma->node.start + vma->node.size) <= map_limit)
> +			list_add(&vma->exec_list, &map_list);
> +
> +	last = 0;
> +	list_sort(NULL, &map_list, vma_rank_by_ggtt);
> +	while (!list_empty(&map_list)) {
> +		vma = list_first_entry(&map_list, typeof(*vma), exec_list);
> +		list_del_init(&vma->exec_list);
> +
> +		if (last == 0)
> +			goto skip_first;
> +
> +		hole_size = vma->node.start - last;
> +		if (hole_size > map_largest)
> +			map_largest = hole_size;
> +		map_space += hole_size;
> +
> +		hole_size = __fence_size(dev_priv, last, vma->node.start);
> +		if (hole_size > fence_largest)
> +			fence_largest = hole_size;
> +		fence_space += hole_size;
> +
> +skip_first:
> +		last = vma->node.start + vma->node.size;
> +	}
> +	if (last < map_limit) {
> +		hole_size = map_limit - last;
> +		if (hole_size > map_largest)
> +			map_largest = hole_size;
> +		map_space += hole_size;
> +
> +		hole_size = __fence_size(dev_priv, last, map_limit);
> +		if (hole_size > fence_largest)
> +			fence_largest = hole_size;
> +		fence_space += hole_size;
> +	}
> +
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	seq_printf(m, "Total size of the GTT: %llu bytes\n",
> +		   arg.aper_size);
> +	seq_printf(m, "Available space in the GTT: %llu bytes\n",
> +		   arg.aper_available_size);
> +	seq_printf(m, "Total space in the mappable aperture: %llu bytes\n",
> +		   arg.map_total_size);
> +	seq_printf(m, "Available space in the mappable aperture: %llu bytes\n",
> +		   map_space);
> +	seq_printf(m, "Single largest space in the mappable aperture: %llu bytes\n",
> +		   map_largest);
> +	seq_printf(m, "Available space for fences: %llu bytes\n",
> +		   fence_space);
> +	seq_printf(m, "Single largest fence available: %llu bytes\n",
> +		   fence_largest);
> +	seq_printf(m, "Total size of the stolen region: %llu bytes\n",
> +		   arg.stolen_total_size);
> +	seq_printf(m, "Available size of the stolen region: %llu bytes\n",
> +		   stolen_free);
> +	seq_printf(m, "Single largest area in the stolen region: %llu bytes\n",
> +		   stolen_largest);
> +
> +	return 0;
> +}
> +
>   static int i915_gem_gtt_info(struct seq_file *m, void *data)
>   {
>   	struct drm_info_node *node = m->private;
> @@ -5405,6 +5547,7 @@ static int i915_debugfs_create(struct dentry *root,
>   static const struct drm_info_list i915_debugfs_list[] = {
>   	{"i915_capabilities", i915_capabilities, 0},
>   	{"i915_gem_objects", i915_gem_object_info, 0},
> +	{"i915_gem_aperture", i915_gem_aperture_info, 0},
>   	{"i915_gem_gtt", i915_gem_gtt_info, 0},
>   	{"i915_gem_pinned", i915_gem_gtt_info, 0, (void *) PINNED_LIST},
>   	{"i915_gem_active", i915_gem_object_list_info, 0, (void *) ACTIVE_LIST},
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index f2dbb26..aa2aa7b 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3562,6 +3562,9 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
>   					       u32 gtt_offset,
>   					       u32 size);
>   int __must_check i915_gem_stolen_freeze(struct drm_i915_private *i915);
> +void i915_gem_stolen_size_info(struct drm_i915_private *dev_priv,
> +			       uint64_t *stolen_free,
> +			       uint64_t *stolen_largest);
>
>   /* i915_gem_shrinker.c */
>   unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 8bdcc55..ce3b73b 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -165,6 +165,10 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
>
>   	args->aper_size = ggtt->base.total;
>   	args->aper_available_size = args->aper_size - pinned;
> +	args->version = 1;
> +	args->map_total_size = ggtt->mappable_end;
> +	args->stolen_total_size =
> +		dev_priv->mm.volatile_stolen ? 0 : ggtt->stolen_usable_size;
>
>   	return 0;
>   }
> diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
> index 0e6203c..1fb4597 100644
> --- a/drivers/gpu/drm/i915/i915_gem_stolen.c
> +++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
> @@ -914,3 +914,30 @@ int i915_gem_stolen_freeze(struct drm_i915_private *i915)
>
>   	return ret;
>   }
> +
> +
> +void i915_gem_stolen_size_info(struct drm_i915_private *dev_priv,
> +			       uint64_t *stolen_free,
> +			       uint64_t *stolen_largest)
> +{
> +	struct drm_mm *mm = &dev_priv->mm.stolen;
> +	struct drm_mm_node *hole;
> +	uint64_t hole_size, hole_start, hole_end, largest_hole = 0;
> +	uint64_t total_free = 0;
> +
> +	if (dev_priv->mm.volatile_stolen) {
> +		*stolen_free = 0;
> +		*stolen_largest = 0;
> +		return;
> +	}

Is it worth having this special case? I am assuming the list below will 
always be empty in the volatile_stolen case?

> +
> +	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
> +		hole_size = hole_end - hole_start;
> +		total_free += hole_size;
> +		if (largest_hole < hole_size)
> +			largest_hole = hole_size;
> +	}
> +
> +	*stolen_free = total_free;
> +	*stolen_largest = largest_hole;
> +}
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index ceff4c1..bc16f0a 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -1011,6 +1011,23 @@ struct drm_i915_gem_get_aperture {
>   	 * bytes
>   	 */
>   	__u64 aper_available_size;
> +
> +	/**
> +	 * Versioning to indicate if map_total_size and stolen_total_size
> +	 * values returned are valid or not
> +	 */
> +	__u64 version;

64 bits sounds like overkill to me. But then again it keeps the struct
aligned, so I don't know. Probably not worth complicating it by shrinking
and reserving some for later.

> +
> +	/**
> +	 * Total space in the mappable region of the aperture, in bytes
> +	 */
> +	__u64 map_total_size;
> +
> +	/**
> +	 * Total space in the stolen region, in bytes
> +	 */
> +	__u64 stolen_total_size;
> +
>   };
>
>   struct drm_i915_get_pipe_from_crtc_id {
>

Anyway, looks OK to me.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

You can fix the typos of course and keep the r-b.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2016-05-31  6:19 ` [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space ankitprasad.r.sharma
  2016-06-01 10:31   ` Tvrtko Ursulin
@ 2016-06-01 12:11   ` Emil Velikov
  2016-06-01 12:31     ` Emil Velikov
  1 sibling, 1 reply; 26+ messages in thread
From: Emil Velikov @ 2016-06-01 12:11 UTC (permalink / raw)
  To: arsharma, Daniel Vetter; +Cc: intel-gfx, akash.goel

[+Daniel Vetter]

Hi Ankitprasad,

On 31 May 2016 at 07:19,  <ankitprasad.r.sharma@intel.com> wrote:

> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
Seems like you've picked the patch from the mailing list, which does
s/@/ at /. You might want to revert that and normalise the emails.

> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -1011,6 +1011,23 @@ struct drm_i915_gem_get_aperture {
>          * bytes
>          */
>         __u64 aper_available_size;
> +
> +       /**
> +        * Versioning to indicate if map_total_size and stolen_total_size
> +        * values returned are valid or not
> +        */
> +       __u64 version;
> +
> +       /**
> +        * Total space in the mappable region of the aperture, in bytes
> +        */
> +       __u64 map_total_size;
> +
> +       /**
> +        * Total space in the stolen region, in bytes
> +        */
> +       __u64 stolen_total_size;
> +
I'm not sure if this is going to work with old userspace/new kernel
and vice-versa. Are you sure that things won't explode in such cases?
Sadly this struct is missing a flags field, so I'm not sure how one can
extend it without breaking things - Daniel, any ideas?

I believe that when proposing UAPI changes one should point to the
relevant userspace patch set. Did I miss it, or isn't there any?

Regards,
Emil
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2016-06-01 12:11   ` Emil Velikov
@ 2016-06-01 12:31     ` Emil Velikov
  0 siblings, 0 replies; 26+ messages in thread
From: Emil Velikov @ 2016-06-01 12:31 UTC (permalink / raw)
  To: arsharma, Daniel Vetter; +Cc: intel-gfx, akash.goel

On 1 June 2016 at 13:11, Emil Velikov <emil.l.velikov@gmail.com> wrote:
> [+Daniel Vetter]
>
> Hi Ankitprasad,
>
> On 31 May 2016 at 07:19,  <ankitprasad.r.sharma@intel.com> wrote:
>
>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>> Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
> Seems like you've picked the patch from the mailing list, which does
> s/@/ at /. You might want to revert that and normalise the emails.
>
>> --- a/include/uapi/drm/i915_drm.h
>> +++ b/include/uapi/drm/i915_drm.h
>> @@ -1011,6 +1011,23 @@ struct drm_i915_gem_get_aperture {
>>          * bytes
>>          */
>>         __u64 aper_available_size;
>> +
>> +       /**
>> +        * Versioning to indicate if map_total_size and stolen_total_size
>> +        * values returned are valid or not
>> +        */
>> +       __u64 version;
>> +
>> +       /**
>> +        * Total space in the mappable region of the aperture, in bytes
>> +        */
>> +       __u64 map_total_size;
>> +
>> +       /**
>> +        * Total space in the stolen region, in bytes
>> +        */
>> +       __u64 stolen_total_size;
>> +
> I'm not sure if this is going to work with old userspace/new kernel
> and vice-versa. Are you sure that things won't explode in such cases?
> Sadly this struct is missing a flags field, so I'm not sure how one can
> extend it without breaking things - Daniel, any ideas?
>
Please ignore this. Daniel just pointed out that this cannot happen as
drm_ioctl() handles any issues this could have caused.
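
(For reference: drm_ioctl() looks the ioctl up by its number alone and
reconciles the size encoded in the request with the size the driver
registered - it zero-fills the kernel-side copy when userspace passes a
shorter struct, and copies back only as much as userspace asked for, so
mixed old/new combinations degrade gracefully as long as userspace clears
the struct up front.)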

Thanks
Emil
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-05-31  6:19 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
@ 2016-06-02 22:28   ` Chris Wilson
  0 siblings, 0 replies; 26+ messages in thread
From: Chris Wilson @ 2016-06-02 22:28 UTC (permalink / raw)
  To: ankitprasad.r.sharma; +Cc: intel-gfx, akash.goel

On Tue, May 31, 2016 at 11:49:14AM +0530, ankitprasad.r.sharma@intel.com wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
> 
> Introduced a new vm specific callback insert_page() to program a single pte in
> ggtt or ppgtt. This allows us to map a single page into the mappable aperture
> space. This can be iterated over to access the whole object by using space as
> meagre as page size.
> 
> v2: Added low level rpm assertions to insert_page routines (Chris)
> 
> v3: Added POSTING_READ post register write (Tvrtko)
> 
> v4: Rebase (Ankit)
> 
> v5: Removed wmb() and FLUSH_CTL from insert_page, caller to take care
> of it (Chris)
> 
> v6: insert_page not working correctly without FLSH_CNTL write, added the
> write again.

Confirmed. Sorry, it took me a bit to get a test box ready for a bit of
digging.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space
  2016-06-01 10:31   ` Tvrtko Ursulin
@ 2016-06-07  8:52     ` Ankitprasad Sharma
  0 siblings, 0 replies; 26+ messages in thread
From: Ankitprasad Sharma @ 2016-06-07  8:52 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx, akash.goel

On Wed, 2016-06-01 at 11:31 +0100, Tvrtko Ursulin wrote:
> On 31/05/16 07:19, ankitprasad.r.sharma@intel.com wrote:
> > From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
.
.
.
> > +
> > +void i915_gem_stolen_size_info(struct drm_i915_private *dev_priv,
> > +			       uint64_t *stolen_free,
> > +			       uint64_t *stolen_largest)
> > +{
> > +	struct drm_mm *mm = &dev_priv->mm.stolen;
> > +	struct drm_mm_node *hole;
> > +	uint64_t hole_size, hole_start, hole_end, largest_hole = 0;
> > +	uint64_t total_free = 0;
> > +
> > +	if (dev_priv->mm.volatile_stolen) {
> > +		*stolen_free = 0;
> > +		*stolen_largest = 0;
> > +		return;
> > +	}
> 
> Is it worth having this special case? I am assuming the list below will 
> always be empty in the volatile_stolen case?

FBC uses this space to store the compressed framebuffer, so the list may
not be empty when volatile_stolen is true.

Thanks,
Ankit


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-06-08 12:20 [PATCH v22 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
@ 2016-06-08 12:20 ` ankitprasad.r.sharma
  0 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-06-08 12:20 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Chris Wilson <chris@chris-wilson.co.uk>

Introduced a new vm specific callback insert_page() to program a single pte in
ggtt or ppgtt. This allows us to map a single page into the mappable aperture
space. This can be iterated over to access the whole object by using space as
meagre as page size.

v2: Added low level rpm assertions to insert_page routines (Chris)

v3: Added POSTING_READ post register write (Tvrtko)

v4: Rebase (Ankit)

v5: Removed wmb() and FLUSH_CTL from insert_page, caller to take care
of it (Chris)

v6: insert_page not working correctly without FLSH_CNTL write, added the
write again.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/char/agp/intel-gtt.c        |  8 +++++
 drivers/gpu/drm/i915/i915_gem_gtt.c | 66 ++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_gtt.h |  5 +++
 include/drm/intel-gtt.h             |  3 ++
 4 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
index aef87fd..4431129 100644
--- a/drivers/char/agp/intel-gtt.c
+++ b/drivers/char/agp/intel-gtt.c
@@ -840,6 +840,14 @@ static bool i830_check_flags(unsigned int flags)
 	return false;
 }
 
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags)
+{
+	intel_private.driver->write_entry(addr, pg, flags);
+}
+EXPORT_SYMBOL(intel_gtt_insert_page);
+
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4668477..7a139a6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2355,6 +2355,28 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
 #endif
 }
 
+static void gen8_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen8_pte_t __iomem *pte =
+		(gen8_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(offset >> PAGE_SHIFT);
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	gen8_set_pte(pte, gen8_pte_encode(addr, level, true));
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+	POSTING_READ(GFX_FLSH_CNTL_GEN6);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *st,
 				     uint64_t start,
@@ -2424,6 +2446,28 @@ static void gen8_ggtt_insert_entries__BKL(struct i915_address_space *vm,
 	stop_machine(gen8_ggtt_insert_entries__cb, &arg, NULL);
 }
 
+static void gen6_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 flags)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen6_pte_t __iomem *pte =
+		(gen6_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(offset >> PAGE_SHIFT);
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	iowrite32(vm->pte_encode(addr, level, true, flags), pte);
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+	POSTING_READ(GFX_FLSH_CNTL_GEN6);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 /*
  * Binds an object into the global gtt with the specified cache level. The object
  * will be accessible to the GPU via commands whose operands reference offsets
@@ -2543,6 +2587,24 @@ static void gen6_ggtt_clear_range(struct i915_address_space *vm,
 	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
 }
 
+static void i915_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level cache_level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
+		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	intel_gtt_insert_page(addr, offset >> PAGE_SHIFT, flags);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 static void i915_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *pages,
 				     uint64_t start,
@@ -3076,7 +3138,7 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
-
+	ggtt->base.insert_page = gen8_ggtt_insert_page;
 	ggtt->base.clear_range = nop_clear_range;
 	if (!USES_FULL_PPGTT(dev_priv))
 		ggtt->base.clear_range = gen8_ggtt_clear_range;
@@ -3116,6 +3178,7 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	ret = ggtt_probe_common(dev, ggtt->size);
 
 	ggtt->base.clear_range = gen6_ggtt_clear_range;
+	ggtt->base.insert_page = gen6_ggtt_insert_page;
 	ggtt->base.insert_entries = gen6_ggtt_insert_entries;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
@@ -3147,6 +3210,7 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
 		      &ggtt->mappable_base, &ggtt->mappable_end);
 
 	ggtt->do_idle_maps = needs_idle_maps(dev_priv->dev);
+	ggtt->base.insert_page = i915_ggtt_insert_page;
 	ggtt->base.insert_entries = i915_ggtt_insert_entries;
 	ggtt->base.clear_range = i915_ggtt_clear_range;
 	ggtt->base.bind_vma = ggtt_bind_vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 62be77c..163b564 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -319,6 +319,11 @@ struct i915_address_space {
 			    uint64_t start,
 			    uint64_t length,
 			    bool use_scratch);
+	void (*insert_page)(struct i915_address_space *vm,
+			    dma_addr_t addr,
+			    uint64_t offset,
+			    enum i915_cache_level cache_level,
+			    u32 flags);
 	void (*insert_entries)(struct i915_address_space *vm,
 			       struct sg_table *st,
 			       uint64_t start,
diff --git a/include/drm/intel-gtt.h b/include/drm/intel-gtt.h
index 9e9bddaa5..f49edec 100644
--- a/include/drm/intel-gtt.h
+++ b/include/drm/intel-gtt.h
@@ -13,6 +13,9 @@ void intel_gmch_remove(void);
 bool intel_enable_gtt(void);
 
 void intel_gtt_chipset_flush(void);
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags);
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags);
-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-06-08  9:26 [PATCH v21 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
@ 2016-06-08  9:26 ` ankitprasad.r.sharma
  0 siblings, 0 replies; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-06-08  9:26 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma

From: Chris Wilson <chris@chris-wilson.co.uk>

Introduced a new vm specific callback insert_page() to program a single pte in
ggtt or ppgtt. This allows us to map a single page into the mappable aperture
space. This can be iterated over to access the whole object by using space as
meagre as page size.

v2: Added low level rpm assertions to insert_page routines (Chris)

v3: Added POSTING_READ post register write (Tvrtko)

v4: Rebase (Ankit)

v5: Removed wmb() and FLUSH_CTL from insert_page, caller to take care
of it (Chris)

v6: insert_page not working correctly without FLSH_CNTL write, added the
write again.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/char/agp/intel-gtt.c        |  8 +++++
 drivers/gpu/drm/i915/i915_gem_gtt.c | 66 ++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_gtt.h |  5 +++
 include/drm/intel-gtt.h             |  3 ++
 4 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
index aef87fd..4431129 100644
--- a/drivers/char/agp/intel-gtt.c
+++ b/drivers/char/agp/intel-gtt.c
@@ -840,6 +840,14 @@ static bool i830_check_flags(unsigned int flags)
 	return false;
 }
 
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags)
+{
+	intel_private.driver->write_entry(addr, pg, flags);
+}
+EXPORT_SYMBOL(intel_gtt_insert_page);
+
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4668477..7a139a6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2355,6 +2355,28 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
 #endif
 }
 
+static void gen8_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen8_pte_t __iomem *pte =
+		(gen8_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(offset >> PAGE_SHIFT);
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	gen8_set_pte(pte, gen8_pte_encode(addr, level, true));
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+	POSTING_READ(GFX_FLSH_CNTL_GEN6);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *st,
 				     uint64_t start,
@@ -2424,6 +2446,28 @@ static void gen8_ggtt_insert_entries__BKL(struct i915_address_space *vm,
 	stop_machine(gen8_ggtt_insert_entries__cb, &arg, NULL);
 }
 
+static void gen6_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 flags)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen6_pte_t __iomem *pte =
+		(gen6_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(offset >> PAGE_SHIFT);
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	iowrite32(vm->pte_encode(addr, level, true, flags), pte);
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+	POSTING_READ(GFX_FLSH_CNTL_GEN6);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 /*
  * Binds an object into the global gtt with the specified cache level. The object
  * will be accessible to the GPU via commands whose operands reference offsets
@@ -2543,6 +2587,24 @@ static void gen6_ggtt_clear_range(struct i915_address_space *vm,
 	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
 }
 
+static void i915_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level cache_level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
+		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
+	int rpm_atomic_seq;
+
+	rpm_atomic_seq = assert_rpm_atomic_begin(dev_priv);
+
+	intel_gtt_insert_page(addr, offset >> PAGE_SHIFT, flags);
+
+	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
+}
+
 static void i915_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *pages,
 				     uint64_t start,
@@ -3076,7 +3138,7 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
-
+	ggtt->base.insert_page = gen8_ggtt_insert_page;
 	ggtt->base.clear_range = nop_clear_range;
 	if (!USES_FULL_PPGTT(dev_priv))
 		ggtt->base.clear_range = gen8_ggtt_clear_range;
@@ -3116,6 +3178,7 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	ret = ggtt_probe_common(dev, ggtt->size);
 
 	ggtt->base.clear_range = gen6_ggtt_clear_range;
+	ggtt->base.insert_page = gen6_ggtt_insert_page;
 	ggtt->base.insert_entries = gen6_ggtt_insert_entries;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
@@ -3147,6 +3210,7 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
 		      &ggtt->mappable_base, &ggtt->mappable_end);
 
 	ggtt->do_idle_maps = needs_idle_maps(dev_priv->dev);
+	ggtt->base.insert_page = i915_ggtt_insert_page;
 	ggtt->base.insert_entries = i915_ggtt_insert_entries;
 	ggtt->base.clear_range = i915_ggtt_clear_range;
 	ggtt->base.bind_vma = ggtt_bind_vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 62be77c..163b564 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -319,6 +319,11 @@ struct i915_address_space {
 			    uint64_t start,
 			    uint64_t length,
 			    bool use_scratch);
+	void (*insert_page)(struct i915_address_space *vm,
+			    dma_addr_t addr,
+			    uint64_t offset,
+			    enum i915_cache_level cache_level,
+			    u32 flags);
 	void (*insert_entries)(struct i915_address_space *vm,
 			       struct sg_table *st,
 			       uint64_t start,
diff --git a/include/drm/intel-gtt.h b/include/drm/intel-gtt.h
index 9e9bddaa5..f49edec 100644
--- a/include/drm/intel-gtt.h
+++ b/include/drm/intel-gtt.h
@@ -13,6 +13,9 @@ void intel_gmch_remove(void);
 bool intel_enable_gtt(void);
 
 void intel_gtt_chipset_flush(void);
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags);
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags);
-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-01-15 11:44         ` Chris Wilson
@ 2016-01-15 15:15           ` Tvrtko Ursulin
  0 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2016-01-15 15:15 UTC (permalink / raw)
  To: Chris Wilson, ankitprasad.r.sharma, intel-gfx, akash.goel,
	shashidhar.hiremath


On 15/01/16 11:44, Chris Wilson wrote:
> On Fri, Jan 15, 2016 at 09:53:46AM +0000, Tvrtko Ursulin wrote:
>>
>> On 14/01/16 13:34, Chris Wilson wrote:
>>> On Thu, Jan 14, 2016 at 10:32:11AM +0000, Tvrtko Ursulin wrote:
>>>>> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
>>>>> index b448ad8..5f86596 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_gem_gtt.h
>>>>> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
>>>>> @@ -317,6 +317,11 @@ struct i915_address_space {
>>>>>   			    uint64_t start,
>>>>>   			    uint64_t length,
>>>>>   			    bool use_scratch);
>>>>> +	void (*insert_page)(struct i915_address_space *vm,
>>>>> +			    dma_addr_t addr,
>>>>> +			    uint64_t offset,
>>>>> +			    enum i915_cache_level cache_level,
>>>>> +			    u32 flags);
>>>>>   	void (*insert_entries)(struct i915_address_space *vm,
>>>>>   			       struct sg_table *st,
>>>>>   			       uint64_t start,
>>>>
>>>> Why not extend the current API to support start page offset and
>>>> number of pages? Could default to full object like today if zero.
>>>> Eg:
>>>>
>>>>   void (*insert_entries)(struct i915_address_space *vm,
>>>> 			struct sg_table *st,
>>>> +			unsigned page_offset,
>>>> +			unsigned num_pages,
>>>
>>> Ouch. That would be quite slow for the insert_page() use case of
>>> page-by-page looping.
>>
>> It could use the same last page caching trick so why would it be so slow?
>
> We don't have the convenient struct? I presumed you were thinking of
> passing the offset through to sg_page_iter
>
> for_each_sg_page(st->sgl, &sg_iter, st->nents, page_offset) {
> 	if (!num_pages--)
> 		break;
> 	
> 	/* original pt insertion code */
> }
>
> Is it worth expanding struct sg_table for this shortcut?
>
> There are some games where we spend more time in sg_page_iter_next() than
> anything else in the kernel. The games have more textures than GTT
> space; they aren't playable right now nor are they ever likely to be. :(
> Improving these loops would definitely be a boon nevertheless.

Okay, dead end, mission abort. :)

Ankit, please just sync the patch in the next respin with the RPM changes
which happened in the meantime.

Regards,

Tvrtko


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-01-15  9:53       ` Tvrtko Ursulin
@ 2016-01-15 11:44         ` Chris Wilson
  2016-01-15 15:15           ` Tvrtko Ursulin
  0 siblings, 1 reply; 26+ messages in thread
From: Chris Wilson @ 2016-01-15 11:44 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: ankitprasad.r.sharma, intel-gfx, akash.goel, shashidhar.hiremath

On Fri, Jan 15, 2016 at 09:53:46AM +0000, Tvrtko Ursulin wrote:
> 
> On 14/01/16 13:34, Chris Wilson wrote:
> >On Thu, Jan 14, 2016 at 10:32:11AM +0000, Tvrtko Ursulin wrote:
> >>>diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
> >>>index b448ad8..5f86596 100644
> >>>--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
> >>>+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
> >>>@@ -317,6 +317,11 @@ struct i915_address_space {
> >>>  			    uint64_t start,
> >>>  			    uint64_t length,
> >>>  			    bool use_scratch);
> >>>+	void (*insert_page)(struct i915_address_space *vm,
> >>>+			    dma_addr_t addr,
> >>>+			    uint64_t offset,
> >>>+			    enum i915_cache_level cache_level,
> >>>+			    u32 flags);
> >>>  	void (*insert_entries)(struct i915_address_space *vm,
> >>>  			       struct sg_table *st,
> >>>  			       uint64_t start,
> >>
> >>Why not extend the current API to support start page offset and
> >>number of pages? Could default to full object like today if zero.
> >>Eg:
> >>
> >>  void (*insert_entries)(struct i915_address_space *vm,
> >>			struct sg_table *st,
> >>+			unsigned page_offset,
> >>+			unsigned num_pages,
> >
> >Ouch. That would be quite slow for the insert_page() use case of
> >page-by-page looping.
> 
> It could use the same last page caching trick so why would it be so slow?

We don't have the convenient struct? I presumed you were thinking of
passing the offset through to sg_page_iter

for_each_sg_page(st->sgl, &sg_iter, st->nents, page_offset) {
	if (!num_pages--)
		break;
	
	/* original pt insertion code */
}

Is it worth expanding struct sg_table for this shortcut?
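
Something like this, perhaps - rough sketch, names invented, assuming every
sg chunk is a whole number of pages (true for GEM objects):

#include <linux/scatterlist.h>

struct sg_cache {
	struct scatterlist *sg;	/* last chunk visited */
	unsigned int idx;	/* index of the first page covered by sg */
};

static dma_addr_t cached_dma_address(struct sg_table *st,
				     struct sg_cache *c, unsigned int n)
{
	if (!c->sg || n < c->idx) {	/* cold cache, or walking backwards */
		c->sg = st->sgl;
		c->idx = 0;
	}

	/* resume from the cached position instead of restarting the walk */
	while (n >= c->idx + (c->sg->length >> PAGE_SHIFT)) {
		c->idx += c->sg->length >> PAGE_SHIFT;
		c->sg = sg_next(c->sg);
	}

	return sg_dma_address(c->sg) +
	       ((dma_addr_t)(n - c->idx) << PAGE_SHIFT);
}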

There are some games where we spend more time in sg_page_iter_next() than
anything else in the kernel. The games have more textures than GTT
space; they aren't playable right now nor are they ever likely to be. :(
Improving these loops would definitely be a boon nevertheless.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-01-14 13:34     ` Chris Wilson
@ 2016-01-15  9:53       ` Tvrtko Ursulin
  2016-01-15 11:44         ` Chris Wilson
  0 siblings, 1 reply; 26+ messages in thread
From: Tvrtko Ursulin @ 2016-01-15  9:53 UTC (permalink / raw)
  To: Chris Wilson, ankitprasad.r.sharma, intel-gfx, akash.goel,
	shashidhar.hiremath


On 14/01/16 13:34, Chris Wilson wrote:
> On Thu, Jan 14, 2016 at 10:32:11AM +0000, Tvrtko Ursulin wrote:
>>> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
>>> index b448ad8..5f86596 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem_gtt.h
>>> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
>>> @@ -317,6 +317,11 @@ struct i915_address_space {
>>>   			    uint64_t start,
>>>   			    uint64_t length,
>>>   			    bool use_scratch);
>>> +	void (*insert_page)(struct i915_address_space *vm,
>>> +			    dma_addr_t addr,
>>> +			    uint64_t offset,
>>> +			    enum i915_cache_level cache_level,
>>> +			    u32 flags);
>>>   	void (*insert_entries)(struct i915_address_space *vm,
>>>   			       struct sg_table *st,
>>>   			       uint64_t start,
>>
>> Why not extend the current API to support start page offset and
>> number of pages? Could default to full object like today if zero.
>> Eg:
>>
>>   void (*insert_entries)(struct i915_address_space *vm,
>> 			struct sg_table *st,
>> +			unsigned page_offset,
>> +			unsigned num_pages,
>
> Ouch. That would be quite slow for the insert_page() use case of
> page-by-page looping.

It could use the same last page caching trick so why would it be so slow?

>> 			uint64_t start,
>> 			enum i915_cache_level cache_level,
>> 			u32 flags);
>>
>> That way we would not have two functions for effectively the same
>> thing operating on different types of input parameters.
>>
>> If extending insert_entries is not preferable, we could still add
>> another compatible one, like insert_entries_range or something, and
>> then both could share the same underlying implementation for less
>> code.
>>
>> As it stands, this patch already does not match the current codebase -
>> see assert_rpm_atomic_begin in insert_entries.
>>
>> Also, if the API between the two was compatible there would be no need
>> for i915_gem_object_get_dma_address(), and i915_gem_object_get_page()
>> could be used instead.
>
> The point was to write a low-level analog to provide a complementary API
> to insert_entries that could be used everywhere we wanted to peek
> through the GTT without even touching an object, i.e. for cases where we
> might allocate a scratch page and temporarily put it into the GTT.

Well maybe it is not that bad on the API front, but code duplication 
would still not be my first choice, as demonstrated by the divergence 
which already happened. Need to think about it some more.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-01-14 10:32   ` Tvrtko Ursulin
@ 2016-01-14 13:34     ` Chris Wilson
  2016-01-15  9:53       ` Tvrtko Ursulin
  0 siblings, 1 reply; 26+ messages in thread
From: Chris Wilson @ 2016-01-14 13:34 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: ankitprasad.r.sharma, intel-gfx, akash.goel, shashidhar.hiremath

On Thu, Jan 14, 2016 at 10:32:11AM +0000, Tvrtko Ursulin wrote:
> >diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
> >index b448ad8..5f86596 100644
> >--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
> >+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
> >@@ -317,6 +317,11 @@ struct i915_address_space {
> >  			    uint64_t start,
> >  			    uint64_t length,
> >  			    bool use_scratch);
> >+	void (*insert_page)(struct i915_address_space *vm,
> >+			    dma_addr_t addr,
> >+			    uint64_t offset,
> >+			    enum i915_cache_level cache_level,
> >+			    u32 flags);
> >  	void (*insert_entries)(struct i915_address_space *vm,
> >  			       struct sg_table *st,
> >  			       uint64_t start,
> 
> Why not extend the current API to support start page offset and
> number of pages? Could default to full object like today if zero.
> Eg:
> 
>  void (*insert_entries)(struct i915_address_space *vm,
> 			struct sg_table *st,
> +			unsigned page_offset,
> +			unsigned num_pages,

Ouch. That would be quite slow for the insert_page() use case of
page-by-page looping.

> 			uint64_t start,
> 			enum i915_cache_level cache_level,
> 			u32 flags);
> 
> That way we would not have two functions for effectively the same
> thing operating on different types of input parameters.
>
> If extending insert_entries is not preferable, we could still add
> another compatible one, like insert_entries_range or something, and
> then both could share the same underlying implementation for less
> code.
> 
> As it stands, this patch already does not match the current codebase - see
> assert_rpm_atomic_begin in insert_entries.
> 
> Also, if the API between the two was compatible there would be no need
> for i915_gem_object_get_dma_address(), and i915_gem_object_get_page()
> could be used instead.

The point was to write a low-level analog to provide a complementary API
to insert_entries that could be used everywhere we wanted to peek
through the GTT without even touching an object, i.e. for cases where we
might allocate a scratch page and temporarily put it into the GTT.
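
Concretely, something along these lines - rough sketch only, with locking,
rpm and the drm_mm node reservation elided; addr, offset and buf stand in
for a dma address, a reserved PAGE_SIZE slot in the mappable aperture, and
a destination buffer:

	struct i915_address_space *vm = &dev_priv->gtt.base;
	void __iomem *vaddr;

	/* point one PTE at an arbitrary dma address, no object involved */
	vm->insert_page(vm, addr, offset, I915_CACHE_NONE, 0);

	vaddr = io_mapping_map_wc(dev_priv->gtt.mappable, offset);
	memcpy_fromio(buf, vaddr, PAGE_SIZE);
	io_mapping_unmap(vaddr);

	/* scrub the PTE again once done */
	vm->clear_range(vm, offset, PAGE_SIZE, true);
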
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-01-14  6:16 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
@ 2016-01-14 10:32   ` Tvrtko Ursulin
  2016-01-14 13:34     ` Chris Wilson
  0 siblings, 1 reply; 26+ messages in thread
From: Tvrtko Ursulin @ 2016-01-14 10:32 UTC (permalink / raw)
  To: ankitprasad.r.sharma, intel-gfx; +Cc: akash.goel, shashidhar.hiremath


On 14/01/16 06:16, ankitprasad.r.sharma@intel.com wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
>
> Introduced a new vm specific callback insert_page() to program a single pte in
> ggtt or ppgtt. This allows us to map a single page into the mappable aperture
> space. This can be iterated over to access the whole object by using space as
> meagre as page size.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
> ---
>   drivers/char/agp/intel-gtt.c        |  9 +++++++
>   drivers/gpu/drm/i915/i915_gem_gtt.c | 49 +++++++++++++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/i915_gem_gtt.h |  5 ++++
>   include/drm/intel-gtt.h             |  3 +++
>   4 files changed, 66 insertions(+)
>
> diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
> index 1341a94..7c68576 100644
> --- a/drivers/char/agp/intel-gtt.c
> +++ b/drivers/char/agp/intel-gtt.c
> @@ -838,6 +838,15 @@ static bool i830_check_flags(unsigned int flags)
>   	return false;
>   }
>
> +void intel_gtt_insert_page(dma_addr_t addr,
> +			   unsigned int pg,
> +			   unsigned int flags)
> +{
> +	intel_private.driver->write_entry(addr, pg, flags);
> +	wmb();
> +}
> +EXPORT_SYMBOL(intel_gtt_insert_page);
> +
>   void intel_gtt_insert_sg_entries(struct sg_table *st,
>   				 unsigned int pg_start,
>   				 unsigned int flags)
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index 56f4f2e..48bd6a4 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -2339,6 +2339,23 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
>   #endif
>   }
>
> +static void gen8_ggtt_insert_page(struct i915_address_space *vm,
> +				  dma_addr_t addr,
> +				  uint64_t offset,
> +				  enum i915_cache_level level,
> +				  u32 unused)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(vm->dev);
> +	gen8_pte_t __iomem *pte =
> +		(gen8_pte_t __iomem *)dev_priv->gtt.gsm +
> +		(offset >> PAGE_SHIFT);
> +
> +	gen8_set_pte(pte, gen8_pte_encode(addr, level, true));
> +	wmb();
> +
> +	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
> +}
> +
>   static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
>   				     struct sg_table *st,
>   				     uint64_t start,
> @@ -2410,6 +2427,23 @@ static void gen8_ggtt_insert_entries__BKL(struct i915_address_space *vm,
>   	stop_machine(gen8_ggtt_insert_entries__cb, &arg, NULL);
>   }
>
> +static void gen6_ggtt_insert_page(struct i915_address_space *vm,
> +				  dma_addr_t addr,
> +				  uint64_t offset,
> +				  enum i915_cache_level level,
> +				  u32 flags)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(vm->dev);
> +	gen6_pte_t __iomem *pte =
> +		(gen6_pte_t __iomem *)dev_priv->gtt.gsm +
> +		(offset >> PAGE_SHIFT);
> +
> +	iowrite32(vm->pte_encode(addr, level, true, flags), pte);
> +	wmb();
> +
> +	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
> +}
> +
>   /*
>    * Binds an object into the global gtt with the specified cache level. The object
>    * will be accessible to the GPU via commands whose operands reference offsets
> @@ -2521,6 +2555,18 @@ static void gen6_ggtt_clear_range(struct i915_address_space *vm,
>   	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
>   }
>
> +static void i915_ggtt_insert_page(struct i915_address_space *vm,
> +				  dma_addr_t addr,
> +				  uint64_t offset,
> +				  enum i915_cache_level cache_level,
> +				  u32 unused)
> +{
> +	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
> +		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
> +
> +	intel_gtt_insert_page(addr, offset >> PAGE_SHIFT, flags);
> +}
> +
>   static void i915_ggtt_insert_entries(struct i915_address_space *vm,
>   				     struct sg_table *pages,
>   				     uint64_t start,
> @@ -3050,6 +3096,7 @@ static int gen8_gmch_probe(struct drm_device *dev,
>   	ret = ggtt_probe_common(dev, gtt_size);
>
>   	dev_priv->gtt.base.clear_range = gen8_ggtt_clear_range;
> +	dev_priv->gtt.base.insert_page = gen8_ggtt_insert_page;
>   	dev_priv->gtt.base.insert_entries = gen8_ggtt_insert_entries;
>   	dev_priv->gtt.base.bind_vma = ggtt_bind_vma;
>   	dev_priv->gtt.base.unbind_vma = ggtt_unbind_vma;
> @@ -3095,6 +3142,7 @@ static int gen6_gmch_probe(struct drm_device *dev,
>   	ret = ggtt_probe_common(dev, gtt_size);
>
>   	dev_priv->gtt.base.clear_range = gen6_ggtt_clear_range;
> +	dev_priv->gtt.base.insert_page = gen6_ggtt_insert_page;
>   	dev_priv->gtt.base.insert_entries = gen6_ggtt_insert_entries;
>   	dev_priv->gtt.base.bind_vma = ggtt_bind_vma;
>   	dev_priv->gtt.base.unbind_vma = ggtt_unbind_vma;
> @@ -3129,6 +3177,7 @@ static int i915_gmch_probe(struct drm_device *dev,
>   	intel_gtt_get(gtt_total, stolen, mappable_base, mappable_end);
>
>   	dev_priv->gtt.do_idle_maps = needs_idle_maps(dev_priv->dev);
> +	dev_priv->gtt.base.insert_page = i915_ggtt_insert_page;
>   	dev_priv->gtt.base.insert_entries = i915_ggtt_insert_entries;
>   	dev_priv->gtt.base.clear_range = i915_ggtt_clear_range;
>   	dev_priv->gtt.base.bind_vma = ggtt_bind_vma;
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
> index b448ad8..5f86596 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.h
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
> @@ -317,6 +317,11 @@ struct i915_address_space {
>   			    uint64_t start,
>   			    uint64_t length,
>   			    bool use_scratch);
> +	void (*insert_page)(struct i915_address_space *vm,
> +			    dma_addr_t addr,
> +			    uint64_t offset,
> +			    enum i915_cache_level cache_level,
> +			    u32 flags);
>   	void (*insert_entries)(struct i915_address_space *vm,
>   			       struct sg_table *st,
>   			       uint64_t start,

Why not extend the current API to support start page offset and number 
of pages? Could default to full object like today if zero. Eg:

  void (*insert_entries)(struct i915_address_space *vm,
			struct sg_table *st,
+			unsigned page_offset,
+			unsigned num_pages,
			uint64_t start,
			enum i915_cache_level cache_level,
			u32 flags);

That way we would not have two functions for effectively the same thing
operating on different types of input parameters.

If extending insert_entries is not preferable, we could still add
another compatible one, like insert_entries_range or something, and then
both could share the same underlying implementation for less code.

As it stands, this patch already does not match the current codebase - see
assert_rpm_atomic_begin in insert_entries.

Also, if the API between the two was compatible there would be no need for
i915_gem_object_get_dma_address(), and i915_gem_object_get_page() could
be used instead.

Regards,

Tvrtko

> diff --git a/include/drm/intel-gtt.h b/include/drm/intel-gtt.h
> index 9e9bddaa5..f49edec 100644
> --- a/include/drm/intel-gtt.h
> +++ b/include/drm/intel-gtt.h
> @@ -13,6 +13,9 @@ void intel_gmch_remove(void);
>   bool intel_enable_gtt(void);
>
>   void intel_gtt_chipset_flush(void);
> +void intel_gtt_insert_page(dma_addr_t addr,
> +			   unsigned int pg,
> +			   unsigned int flags);
>   void intel_gtt_insert_sg_entries(struct sg_table *st,
>   				 unsigned int pg_start,
>   				 unsigned int flags);
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 01/11] drm/i915: Add support for mapping an object page by page
  2016-01-14  6:16 [PATCH v14 0/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
@ 2016-01-14  6:16 ` ankitprasad.r.sharma
  2016-01-14 10:32   ` Tvrtko Ursulin
  0 siblings, 1 reply; 26+ messages in thread
From: ankitprasad.r.sharma @ 2016-01-14  6:16 UTC (permalink / raw)
  To: intel-gfx; +Cc: akash.goel, Ankitprasad Sharma, shashidhar.hiremath

From: Chris Wilson <chris@chris-wilson.co.uk>

Introduced a new vm specific callback insert_page() to program a single pte in
ggtt or ppgtt. This allows us to map a single page into the mappable aperture
space. This can be iterated over to access the whole object by using space as
meagre as page size.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
---
 drivers/char/agp/intel-gtt.c        |  9 +++++++
 drivers/gpu/drm/i915/i915_gem_gtt.c | 49 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h |  5 ++++
 include/drm/intel-gtt.h             |  3 +++
 4 files changed, 66 insertions(+)

diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
index 1341a94..7c68576 100644
--- a/drivers/char/agp/intel-gtt.c
+++ b/drivers/char/agp/intel-gtt.c
@@ -838,6 +838,15 @@ static bool i830_check_flags(unsigned int flags)
 	return false;
 }
 
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags)
+{
+	intel_private.driver->write_entry(addr, pg, flags);
+	wmb();
+}
+EXPORT_SYMBOL(intel_gtt_insert_page);
+
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 56f4f2e..48bd6a4 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2339,6 +2339,23 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
 #endif
 }
 
+static void gen8_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 unused)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen8_pte_t __iomem *pte =
+		(gen8_pte_t __iomem *)dev_priv->gtt.gsm +
+		(offset >> PAGE_SHIFT);
+
+	gen8_set_pte(pte, gen8_pte_encode(addr, level, true));
+	wmb();
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+}
+
 static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *st,
 				     uint64_t start,
@@ -2410,6 +2427,23 @@ static void gen8_ggtt_insert_entries__BKL(struct i915_address_space *vm,
 	stop_machine(gen8_ggtt_insert_entries__cb, &arg, NULL);
 }
 
+static void gen6_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level level,
+				  u32 flags)
+{
+	struct drm_i915_private *dev_priv = to_i915(vm->dev);
+	gen6_pte_t __iomem *pte =
+		(gen6_pte_t __iomem *)dev_priv->gtt.gsm +
+		(offset >> PAGE_SHIFT);
+
+	iowrite32(vm->pte_encode(addr, level, true, flags), pte);
+	wmb();
+
+	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
+}
+
 /*
  * Binds an object into the global gtt with the specified cache level. The object
  * will be accessible to the GPU via commands whose operands reference offsets
@@ -2521,6 +2555,18 @@ static void gen6_ggtt_clear_range(struct i915_address_space *vm,
 	assert_rpm_atomic_end(dev_priv, rpm_atomic_seq);
 }
 
+static void i915_ggtt_insert_page(struct i915_address_space *vm,
+				  dma_addr_t addr,
+				  uint64_t offset,
+				  enum i915_cache_level cache_level,
+				  u32 unused)
+{
+	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
+		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
+
+	intel_gtt_insert_page(addr, offset >> PAGE_SHIFT, flags);
+}
+
 static void i915_ggtt_insert_entries(struct i915_address_space *vm,
 				     struct sg_table *pages,
 				     uint64_t start,
@@ -3050,6 +3096,7 @@ static int gen8_gmch_probe(struct drm_device *dev,
 	ret = ggtt_probe_common(dev, gtt_size);
 
 	dev_priv->gtt.base.clear_range = gen8_ggtt_clear_range;
+	dev_priv->gtt.base.insert_page = gen8_ggtt_insert_page;
 	dev_priv->gtt.base.insert_entries = gen8_ggtt_insert_entries;
 	dev_priv->gtt.base.bind_vma = ggtt_bind_vma;
 	dev_priv->gtt.base.unbind_vma = ggtt_unbind_vma;
@@ -3095,6 +3142,7 @@ static int gen6_gmch_probe(struct drm_device *dev,
 	ret = ggtt_probe_common(dev, gtt_size);
 
 	dev_priv->gtt.base.clear_range = gen6_ggtt_clear_range;
+	dev_priv->gtt.base.insert_page = gen6_ggtt_insert_page;
 	dev_priv->gtt.base.insert_entries = gen6_ggtt_insert_entries;
 	dev_priv->gtt.base.bind_vma = ggtt_bind_vma;
 	dev_priv->gtt.base.unbind_vma = ggtt_unbind_vma;
@@ -3129,6 +3177,7 @@ static int i915_gmch_probe(struct drm_device *dev,
 	intel_gtt_get(gtt_total, stolen, mappable_base, mappable_end);
 
 	dev_priv->gtt.do_idle_maps = needs_idle_maps(dev_priv->dev);
+	dev_priv->gtt.base.insert_page = i915_ggtt_insert_page;
 	dev_priv->gtt.base.insert_entries = i915_ggtt_insert_entries;
 	dev_priv->gtt.base.clear_range = i915_ggtt_clear_range;
 	dev_priv->gtt.base.bind_vma = ggtt_bind_vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index b448ad8..5f86596 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -317,6 +317,11 @@ struct i915_address_space {
 			    uint64_t start,
 			    uint64_t length,
 			    bool use_scratch);
+	void (*insert_page)(struct i915_address_space *vm,
+			    dma_addr_t addr,
+			    uint64_t offset,
+			    enum i915_cache_level cache_level,
+			    u32 flags);
 	void (*insert_entries)(struct i915_address_space *vm,
 			       struct sg_table *st,
 			       uint64_t start,
diff --git a/include/drm/intel-gtt.h b/include/drm/intel-gtt.h
index 9e9bddaa5..f49edec 100644
--- a/include/drm/intel-gtt.h
+++ b/include/drm/intel-gtt.h
@@ -13,6 +13,9 @@ void intel_gmch_remove(void);
 bool intel_enable_gtt(void);
 
 void intel_gtt_chipset_flush(void);
+void intel_gtt_insert_page(dma_addr_t addr,
+			   unsigned int pg,
+			   unsigned int flags);
 void intel_gtt_insert_sg_entries(struct sg_table *st,
 				 unsigned int pg_start,
 				 unsigned int flags);
-- 
1.9.1


end of thread, other threads:[~2016-06-08 12:46 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-05-31  6:19 [PATCH v20 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
2016-06-02 22:28   ` Chris Wilson
2016-05-31  6:19 ` [PATCH 02/11] drm/i915: Introduce i915_gem_object_get_dma_address() ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 03/11] drm/i915: Use insert_page for pwrite_fast ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 04/11] drm/i915: Clearing buffer objects via CPU/GTT ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 05/11] drm/i915: Support for creating Stolen memory backed objects ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 06/11] drm/i915: Propagating correct error codes to the userspace ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 07/11] drm/i915: Add support for stealing purgable stolen pages ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 08/11] drm/i915: Support for pread/pwrite from/to non shmem backed objects ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 09/11] drm/i915: Migrate stolen objects before hibernation ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 10/11] drm/i915: Disable use of stolen area by User when Intel RST is present ankitprasad.r.sharma
2016-05-31  6:19 ` [PATCH 11/11] drm/i915: Extend GET_APERTURE ioctl to report available map space ankitprasad.r.sharma
2016-06-01 10:31   ` Tvrtko Ursulin
2016-06-07  8:52     ` Ankitprasad Sharma
2016-06-01 12:11   ` Emil Velikov
2016-06-01 12:31     ` Emil Velikov
2016-05-31 11:05 ` ✗ Ro.CI.BAT: failure for Support for creating/using Stolen memory backed objects (rev15) Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2016-06-08 12:20 [PATCH v22 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
2016-06-08 12:20 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
2016-06-08  9:26 [PATCH v21 00/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
2016-06-08  9:26 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
2016-01-14  6:16 [PATCH v14 0/11] Support for creating/using Stolen memory backed objects ankitprasad.r.sharma
2016-01-14  6:16 ` [PATCH 01/11] drm/i915: Add support for mapping an object page by page ankitprasad.r.sharma
2016-01-14 10:32   ` Tvrtko Ursulin
2016-01-14 13:34     ` Chris Wilson
2016-01-15  9:53       ` Tvrtko Ursulin
2016-01-15 11:44         ` Chris Wilson
2016-01-15 15:15           ` Tvrtko Ursulin
