* [PATCH 00/21] huge gtt pages
@ 2017-07-03 14:14 Matthew Auld
  2017-07-03 14:14 ` [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt Matthew Auld
                   ` (21 more replies)
  0 siblings, 22 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

We now try running all mockable selftests through the mock_device, and disable
vGPU huge gtt pages for now.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_size_mask to dev_info
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M/1G pages
  drm/i915: support 1G pages for the 48b PPGTT
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages
  drm/i915: enable platform support for 1G pages

 drivers/gpu/drm/i915/Makefile                      |   1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |  42 +-
 drivers/gpu/drm/i915/i915_drv.h                    |   9 +-
 drivers/gpu/drm/i915/i915_gem.c                    |  98 ++-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |  17 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                | 195 ++++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |  15 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |   5 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |  30 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |  13 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |  26 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |  67 ++
 drivers/gpu/drm/i915/i915_gemfs.h                  |  34 +
 drivers/gpu/drm/i915/i915_pci.c                    |  25 +
 drivers/gpu/drm/i915/i915_reg.h                    |   3 +
 drivers/gpu/drm/i915/i915_vma.c                    |  49 +-
 drivers/gpu/drm/i915/i915_vma.h                    |   1 +
 drivers/gpu/drm/i915/intel_pm.c                    |   9 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |   4 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 931 +++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |   3 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |  16 +-
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |  11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |  15 +
 include/linux/shmem_fs.h                           |   2 +
 mm/shmem.c                                         |  30 +-
 28 files changed, 1558 insertions(+), 95 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14   ` Matthew Auld
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx
  Cc: Joonas Lahtinen, Chris Wilson, Dave Hansen, Kirill A . Shutemov,
	Hugh Dickins, linux-mm

We are planning to use our own tmpfs mnt in i915 in place of the
shm_mnt, such that we can control the mount options, in particular
huge=, which we require to support huge-gtt-pages. So rather than roll
our own version of __shmem_file_setup, it would be preferred if we could
just give shmem our mnt, and let it do the rest.
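
A minimal usage sketch of the helper, assuming the caller already holds
its own tmpfs mount; the mount variable and function name below are
hypothetical and only for illustration, not part of this patch:

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mount.h>
#include <linux/shmem_fs.h>

/* my_mnt would come from kern_mount()ing tmpfs, e.g. with huge= options. */
static struct file *my_create_backing_file(struct vfsmount *my_mnt,
					   loff_t size)
{
	/*
	 * Same semantics as shmem_file_setup(), except the unlinked file
	 * lives on my_mnt and therefore inherits its mount options.
	 */
	return shmem_file_setup_with_mnt(my_mnt, "my-object", size,
					 VM_NORESERVE);
}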

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org
---
 include/linux/shmem_fs.h |  2 ++
 mm/shmem.c               | 30 ++++++++++++++++++++++--------
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index a7d6bd2a918f..27de676f0b63 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -53,6 +53,8 @@ extern struct file *shmem_file_setup(const char *name,
 					loff_t size, unsigned long flags);
 extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
 					    unsigned long flags);
+extern struct file *shmem_file_setup_with_mnt(struct vfsmount *mnt,
+		const char *name, loff_t size, unsigned long flags);
 extern int shmem_zero_setup(struct vm_area_struct *);
 extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
diff --git a/mm/shmem.c b/mm/shmem.c
index e67d6ba4e98e..88cca428403d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -4130,7 +4130,7 @@ static const struct dentry_operations anon_ops = {
 	.d_dname = simple_dname
 };
 
-static struct file *__shmem_file_setup(const char *name, loff_t size,
+static struct file *__shmem_file_setup(struct vfsmount *mnt, const char *name, loff_t size,
 				       unsigned long flags, unsigned int i_flags)
 {
 	struct file *res;
@@ -4139,8 +4139,8 @@ static struct file *__shmem_file_setup(const char *name, loff_t size,
 	struct super_block *sb;
 	struct qstr this;
 
-	if (IS_ERR(shm_mnt))
-		return ERR_CAST(shm_mnt);
+	if (IS_ERR(mnt))
+		return ERR_CAST(mnt);
 
 	if (size < 0 || size > MAX_LFS_FILESIZE)
 		return ERR_PTR(-EINVAL);
@@ -4152,8 +4152,8 @@ static struct file *__shmem_file_setup(const char *name, loff_t size,
 	this.name = name;
 	this.len = strlen(name);
 	this.hash = 0; /* will go */
-	sb = shm_mnt->mnt_sb;
-	path.mnt = mntget(shm_mnt);
+	sb = mnt->mnt_sb;
+	path.mnt = mntget(mnt);
 	path.dentry = d_alloc_pseudo(sb, &this);
 	if (!path.dentry)
 		goto put_memory;
@@ -4198,7 +4198,7 @@ static struct file *__shmem_file_setup(const char *name, loff_t size,
  */
 struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned long flags)
 {
-	return __shmem_file_setup(name, size, flags, S_PRIVATE);
+	return __shmem_file_setup(shm_mnt, name, size, flags, S_PRIVATE);
 }
 
 /**
@@ -4209,11 +4209,25 @@ struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned lon
  */
 struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
 {
-	return __shmem_file_setup(name, size, flags, 0);
+	return __shmem_file_setup(shm_mnt, name, size, flags, 0);
 }
 EXPORT_SYMBOL_GPL(shmem_file_setup);
 
 /**
+ * shmem_file_setup_with_mnt - get an unlinked file living in tmpfs
+ * @mnt: the tmpfs mount where the file will be created
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_file_setup_with_mnt(struct vfsmount *mnt, const char *name,
+				       loff_t size, unsigned long flags)
+{
+	return __shmem_file_setup(mnt, name, size, flags, 0);
+}
+EXPORT_SYMBOL_GPL(shmem_file_setup_with_mnt);
+
+/**
  * shmem_zero_setup - setup a shared anonymous mapping
  * @vma: the vma to be mmapped is prepared by do_mmap_pgoff
  */
@@ -4228,7 +4242,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
 	 * accessible to the user through its mapping, use S_PRIVATE flag to
 	 * bypass file security, in the same way as shmem_kernel_file_setup().
 	 */
-	file = __shmem_file_setup("dev/zero", size, vma->vm_flags, S_PRIVATE);
+	file = shmem_kernel_file_setup("dev/zero", size, vma->vm_flags);
 	if (IS_ERR(file))
 		return PTR_ERR(file);
 
-- 
2.9.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 02/21] drm/i915: introduce simple gemfs
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-07-03 14:14   ` Matthew Auld
  2017-07-03 14:14   ` Matthew Auld
                     ` (20 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx
  Cc: Joonas Lahtinen, Chris Wilson, Dave Hansen, Kirill A . Shutemov,
	Hugh Dickins, linux-mm

Not a full-blown gemfs, just our very own tmpfs kernel mount. Doing so
moves us away from the shmemfs shm_mnt, and gives us the much needed
flexibility to do things like set our own mount options, namely huge=
which should allow us to enable the use of transparent-huge-pages for
our shmem backed objects.

v2: various improvements suggested by Joonas

v3: move gemfs instance to i915.mm and simplify now that we have
file_setup_with_mnt
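
The essence of the change, condensed into a standalone sketch (the
example_* names are just for illustration; the real code below stores
the mount in i915->mm.gemfs and hands it to shmem_file_setup_with_mnt()
when creating shmem backed objects):

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mount.h>

static int example_gemfs_init(struct vfsmount **gemfs)
{
	struct file_system_type *type;
	struct vfsmount *mnt;

	type = get_fs_type("tmpfs");
	if (!type)
		return -ENODEV;

	mnt = kern_mount(type);		/* our own instance, not shm_mnt */
	if (IS_ERR(mnt))
		return PTR_ERR(mnt);

	*gemfs = mnt;
	return 0;
}

static void example_gemfs_fini(struct vfsmount *gemfs)
{
	kern_unmount(gemfs);
}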

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org
---
 drivers/gpu/drm/i915/Makefile                    |  1 +
 drivers/gpu/drm/i915/i915_drv.h                  |  3 ++
 drivers/gpu/drm/i915/i915_gem.c                  | 34 +++++++++++++++-
 drivers/gpu/drm/i915/i915_gemfs.c                | 52 ++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gemfs.h                | 34 ++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 10 ++++-
 6 files changed, 131 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f8227318dcaf..29e3cfdf56ce 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -46,6 +46,7 @@ i915-y += i915_cmd_parser.o \
 	  i915_gem_tiling.o \
 	  i915_gem_timeline.o \
 	  i915_gem_userptr.o \
+	  i915_gemfs.o \
 	  i915_trace_points.o \
 	  i915_vma.o \
 	  intel_breadcrumbs.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0029bb949e90..f36ebd2e1e5a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1442,6 +1442,9 @@ struct i915_gem_mm {
 	/** Usable portion of the GTT for GEM */
 	dma_addr_t stolen_base; /* limited to low memory (32-bit) */
 
+	/** tmpfs instance used for shmem backed objects */
+	struct vfsmount *gemfs;
+
 	/** PPGTT used for aliasing the PPGTT with the GTT */
 	struct i915_hw_ppgtt *aliasing_ppgtt;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1b2dfa8bdeef..509e5fc4af56 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -35,6 +35,7 @@
 #include "intel_drv.h"
 #include "intel_frontbuffer.h"
 #include "intel_mocs.h"
+#include "i915_gemfs.h"
 #include <linux/dma-fence-array.h>
 #include <linux/kthread.h>
 #include <linux/reservation.h>
@@ -4289,6 +4290,25 @@ static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
 	.pwrite = i915_gem_object_pwrite_gtt,
 };
 
+static int i915_gem_object_create_shmem(struct drm_device *dev,
+					struct drm_gem_object *obj,
+					size_t size)
+{
+	struct drm_i915_private *i915 = to_i915(dev);
+	struct file *filp;
+
+	drm_gem_private_object_init(dev, obj, size);
+
+	filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size,
+					 VM_NORESERVE);
+	if (IS_ERR(filp))
+		return PTR_ERR(filp);
+
+	obj->filp = filp;
+
+	return 0;
+}
+
 struct drm_i915_gem_object *
 i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
 {
@@ -4312,7 +4332,7 @@ i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
 	if (obj == NULL)
 		return ERR_PTR(-ENOMEM);
 
-	ret = drm_gem_object_init(&dev_priv->drm, &obj->base, size);
+	ret = i915_gem_object_create_shmem(&dev_priv->drm, &obj->base, size);
 	if (ret)
 		goto fail;
 
@@ -4888,7 +4908,13 @@ i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
 int
 i915_gem_load_init(struct drm_i915_private *dev_priv)
 {
-	int err = -ENOMEM;
+	int err;
+
+	err = i915_gemfs_init(dev_priv);
+	if (err)
+		return err;
+
+	err = -ENOMEM;
 
 	dev_priv->objects = KMEM_CACHE(drm_i915_gem_object, SLAB_HWCACHE_ALIGN);
 	if (!dev_priv->objects)
@@ -4954,6 +4980,8 @@ i915_gem_load_init(struct drm_i915_private *dev_priv)
 err_objects:
 	kmem_cache_destroy(dev_priv->objects);
 err_out:
+	i915_gemfs_fini(dev_priv);
+
 	return err;
 }
 
@@ -4976,6 +5004,8 @@ void i915_gem_load_cleanup(struct drm_i915_private *dev_priv)
 
 	/* And ensure that our DESTROY_BY_RCU slabs are truly destroyed */
 	rcu_barrier();
+
+	i915_gemfs_fini(dev_priv);
 }
 
 int i915_gem_freeze(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_gemfs.c b/drivers/gpu/drm/i915/i915_gemfs.c
new file mode 100644
index 000000000000..168d0bd98f60
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gemfs.c
@@ -0,0 +1,52 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/fs.h>
+#include <linux/mount.h>
+
+#include "i915_drv.h"
+#include "i915_gemfs.h"
+
+int i915_gemfs_init(struct drm_i915_private *i915)
+{
+	struct file_system_type *type;
+	struct vfsmount *gemfs;
+
+	type = get_fs_type("tmpfs");
+	if (!type)
+		return -ENODEV;
+
+	gemfs = kern_mount(type);
+	if (IS_ERR(gemfs))
+		return PTR_ERR(gemfs);
+
+	i915->mm.gemfs = gemfs;
+
+	return 0;
+}
+
+void i915_gemfs_fini(struct drm_i915_private *i915)
+{
+	kern_unmount(i915->mm.gemfs);
+}
diff --git a/drivers/gpu/drm/i915/i915_gemfs.h b/drivers/gpu/drm/i915/i915_gemfs.h
new file mode 100644
index 000000000000..cca8bdc5b93e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gemfs.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_GEMFS_H__
+#define __I915_GEMFS_H__
+
+struct drm_i915_private;
+
+int i915_gemfs_init(struct drm_i915_private *i915);
+
+void i915_gemfs_fini(struct drm_i915_private *i915);
+
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 47613d20bba8..0ac4efd5c7a2 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -81,6 +81,8 @@ static void mock_device_release(struct drm_device *dev)
 	kmem_cache_destroy(i915->vmas);
 	kmem_cache_destroy(i915->objects);
 
+	i915_gemfs_fini(i915);
+
 	drm_dev_fini(&i915->drm);
 	put_device(&i915->drm.pdev->dev);
 }
@@ -168,9 +170,13 @@ struct drm_i915_private *mock_gem_device(void)
 
 	i915->gt.awake = true;
 
+	err = i915_gemfs_init(i915);
+	if (err)
+		goto err_wq;
+
 	i915->objects = KMEM_CACHE(mock_object, SLAB_HWCACHE_ALIGN);
 	if (!i915->objects)
-		goto err_wq;
+		goto err_gemfs;
 
 	i915->vmas = KMEM_CACHE(i915_vma, SLAB_HWCACHE_ALIGN);
 	if (!i915->vmas)
@@ -228,6 +234,8 @@ struct drm_i915_private *mock_gem_device(void)
 	kmem_cache_destroy(i915->vmas);
 err_objects:
 	kmem_cache_destroy(i915->objects);
+err_gemfs:
+	i915_gemfs_fini(i915);
 err_wq:
 	destroy_workqueue(i915->wq);
 put_device:
-- 
2.9.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 03/21] drm/i915/gemfs: enable THP
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
  2017-07-03 14:14 ` [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt Matthew Auld
  2017-07-03 14:14   ` Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 04/21] drm/i915: introduce page_size_mask to dev_info Matthew Auld
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Enable transparent-huge-pages through gemfs by mounting with
huge=within_size.
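
With huge=within_size, shmem only allocates a huge page when it would
fall entirely within i_size, so small objects keep using 4K pages. The
remount trick used below, as a standalone sketch operating on the gemfs
mount from the previous patch (the example_* name is illustrative):

#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/pagemap.h>

static void example_gemfs_enable_thp(struct vfsmount *gemfs)
{
	struct super_block *sb = gemfs->mnt_sb;
	char options[] = "huge=within_size";
	int flags = 0;

	if (!has_transparent_hugepage())
		return;

	/* Remount in place; on failure the mount keeps its defaults. */
	sb->s_op->remount_fs(sb, &flags, options);
}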

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gemfs.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gemfs.c b/drivers/gpu/drm/i915/i915_gemfs.c
index 168d0bd98f60..999f0b6a2d64 100644
--- a/drivers/gpu/drm/i915/i915_gemfs.c
+++ b/drivers/gpu/drm/i915/i915_gemfs.c
@@ -24,6 +24,7 @@
 
 #include <linux/fs.h>
 #include <linux/mount.h>
+#include <linux/pagemap.h>
 
 #include "i915_drv.h"
 #include "i915_gemfs.h"
@@ -41,6 +42,20 @@ int i915_gemfs_init(struct drm_i915_private *i915)
 	if (IS_ERR(gemfs))
 		return PTR_ERR(gemfs);
 
+	if (has_transparent_hugepage()) {
+		struct super_block *sb = gemfs->mnt_sb;
+		char options[] = "huge=within_size";
+		int flags = 0;
+
+		/* We don't consider failure to remount fatal, since this should
+		 * only ever attempt to modify the mount options of the sb, and
+		 * so should always leave us with a working mount upon failure.
+		 * Hence decoupling this from the actual kern_mount is probably
+		 * advisable.
+		 */
+		WARN_ON(sb->s_op->remount_fs(sb, &flags, options));
+	}
+
 	i915->mm.gemfs = gemfs;
 
 	return 0;
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 04/21] drm/i915: introduce page_size_mask to dev_info
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (2 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 03/21] drm/i915/gemfs: enable THP Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 05/21] drm/i915: introduce page_size members Matthew Auld
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

In preparation for huge gtt pages, expose a page_size_mask as part of the
device info, to indicate the page sizes supported by the HW.  Currently
only 4K is supported.
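
Each bit in the mask sits at the page shift of the size it represents,
so the bit value equals the page size itself (BIT(12) = 4K, BIT(16) =
64K, BIT(21) = 2M, BIT(30) = 1G below). A sketch of how such a mask is
meant to be queried; the helper name is hypothetical here (a real
HAS_PAGE_SIZE() macro follows in a later patch):

#include <linux/types.h>

static inline bool example_has_page_size(unsigned int page_size_mask,
					 unsigned int page_size)
{
	return page_size_mask & page_size;
}

/* e.g. example_has_page_size(info->page_size_mask, I915_GTT_PAGE_SIZE_64K) */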

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h                  |  1 +
 drivers/gpu/drm/i915/i915_gem_gtt.h              |  8 +++++++-
 drivers/gpu/drm/i915/i915_pci.c                  | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.c |  3 +++
 4 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f36ebd2e1e5a..979752584585 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -842,6 +842,7 @@ struct intel_device_info {
 	enum intel_platform platform;
 	u8 ring_mask; /* Rings supported by the HW */
 	u8 num_rings;
+	unsigned int page_size_mask; /* page sizes supported by the HW */
 #define DEFINE_FLAG(name) u8 name:1
 	DEV_INFO_FOR_EACH_FLAG(DEFINE_FLAG);
 #undef DEFINE_FLAG
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index b4e3aa7c0ce1..4c2f7d7c1e7d 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -42,7 +42,13 @@
 #include "i915_gem_request.h"
 #include "i915_selftest.h"
 
-#define I915_GTT_PAGE_SIZE 4096UL
+#define I915_GTT_PAGE_SIZE_4K BIT(12)
+#define I915_GTT_PAGE_SIZE_64K BIT(16)
+#define I915_GTT_PAGE_SIZE_2M BIT(21)
+#define I915_GTT_PAGE_SIZE_1G BIT(30)
+
+#define I915_GTT_PAGE_SIZE I915_GTT_PAGE_SIZE_4K
+
 #define I915_GTT_MIN_ALIGNMENT I915_GTT_PAGE_SIZE
 
 #define I915_FENCE_REG_NONE -1
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index 04aaf553e3fa..b73c1eb778d1 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -56,6 +56,10 @@
 	.color = { .degamma_lut_size = 65, .gamma_lut_size = 257 }
 
 /* Keep in gen based order, and chronological order within a gen */
+
+#define GEN_DEFAULT_PAGE_SIZES \
+	.page_size_mask = I915_GTT_PAGE_SIZE_4K
+
 #define GEN2_FEATURES \
 	.gen = 2, .num_pipes = 1, \
 	.has_overlay = 1, .overlay_needs_physical = 1, \
@@ -64,6 +68,7 @@
 	.unfenced_needs_alignment = 1, \
 	.ring_mask = RENDER_RING, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_i830_info = {
@@ -96,6 +101,7 @@ static const struct intel_device_info intel_i865g_info = {
 	.has_gmch_display = 1, \
 	.ring_mask = RENDER_RING, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_i915g_info = {
@@ -158,6 +164,7 @@ static const struct intel_device_info intel_pineview_info = {
 	.has_gmch_display = 1, \
 	.ring_mask = RENDER_RING, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_i965g_info = {
@@ -198,6 +205,7 @@ static const struct intel_device_info intel_gm45_info = {
 	.has_gmbus_irq = 1, \
 	.ring_mask = RENDER_RING | BSD_RING, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_ironlake_d_info = {
@@ -222,6 +230,7 @@ static const struct intel_device_info intel_ironlake_m_info = {
 	.has_gmbus_irq = 1, \
 	.has_aliasing_ppgtt = 1, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_sandybridge_d_info = {
@@ -247,6 +256,7 @@ static const struct intel_device_info intel_sandybridge_m_info = {
 	.has_aliasing_ppgtt = 1, \
 	.has_full_ppgtt = 1, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	IVB_CURSOR_OFFSETS
 
 static const struct intel_device_info intel_ivybridge_d_info = {
@@ -284,6 +294,7 @@ static const struct intel_device_info intel_valleyview_info = {
 	.has_full_ppgtt = 1,
 	.ring_mask = RENDER_RING | BSD_RING | BLT_RING,
 	.display_mmio_offset = VLV_DISPLAY_BASE,
+	GEN_DEFAULT_PAGE_SIZES,
 	GEN_DEFAULT_PIPEOFFSETS,
 	CURSOR_OFFSETS
 };
@@ -308,6 +319,7 @@ static const struct intel_device_info intel_haswell_info = {
 #define BDW_FEATURES \
 	HSW_FEATURES, \
 	BDW_COLORS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	.has_logical_ring_contexts = 1, \
 	.has_full_48bit_ppgtt = 1, \
 	.has_64bit_reloc = 1, \
@@ -345,13 +357,18 @@ static const struct intel_device_info intel_cherryview_info = {
 	.has_full_ppgtt = 1,
 	.has_reset_engine = 1,
 	.display_mmio_offset = VLV_DISPLAY_BASE,
+	GEN_DEFAULT_PAGE_SIZES,
 	GEN_CHV_PIPEOFFSETS,
 	CURSOR_OFFSETS,
 	CHV_COLORS,
 };
 
+#define GEN9_DEFAULT_PAGE_SIZES \
+	.page_size_mask = I915_GTT_PAGE_SIZE_4K
+
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
+	GEN9_DEFAULT_PAGE_SIZES, \
 	.gen = 9, \
 	.platform = INTEL_SKYLAKE, \
 	.has_csr = 1, \
@@ -390,6 +407,7 @@ static const struct intel_device_info intel_skylake_gt3_info = {
 	.has_full_ppgtt = 1, \
 	.has_full_48bit_ppgtt = 1, \
 	.has_reset_engine = 1, \
+	GEN9_DEFAULT_PAGE_SIZES, \
 	GEN_DEFAULT_PIPEOFFSETS, \
 	IVB_CURSOR_OFFSETS, \
 	BDW_COLORS
@@ -409,6 +427,7 @@ static const struct intel_device_info intel_geminilake_info = {
 
 #define KBL_PLATFORM \
 	BDW_FEATURES, \
+	GEN9_DEFAULT_PAGE_SIZES, \
 	.gen = 9, \
 	.platform = INTEL_KABYLAKE, \
 	.has_csr = 1, \
@@ -444,6 +463,7 @@ static const struct intel_device_info intel_coffeelake_gt3_info = {
 
 static const struct intel_device_info intel_cannonlake_info = {
 	BDW_FEATURES,
+	GEN9_DEFAULT_PAGE_SIZES,
 	.is_alpha_support = 1,
 	.platform = INTEL_CANNONLAKE,
 	.gen = 10,
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 0ac4efd5c7a2..0002ba28780c 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -148,6 +148,9 @@ struct drm_i915_private *mock_gem_device(void)
 
 	mkwrite_device_info(i915)->gen = -1;
 
+	mkwrite_device_info(i915)->page_size_mask =
+		I915_GTT_PAGE_SIZE_4K;
+
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
 
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 05/21] drm/i915: introduce page_size members
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (3 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 04/21] drm/i915: introduce page_size_mask to dev_info Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 06/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

In preparation for supporting huge gtt pages for the ppgtt, we introduce
page size members for gem objects.  We fill in the page sizes by
scanning the sg table.

v2: pass the sg_mask to set_pages

v3: calculate the sg_mask inline with populating the sg_table where
possible, and pass to set_pages along with the pages.
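
A standalone sketch of the derivation performed in
__i915_gem_object_set_pages() below (the example_* name is
illustrative): sg_mask is the OR of every sg entry length, and a
supported GTT page size 2^bit is kept if at least one entry is at least
that large, i.e. sg_mask has a bit set at or above that position:

#include <linux/bitops.h>

static unsigned int example_gtt_page_sizes(unsigned int sg_mask,
					   unsigned long supported)
{
	unsigned int sizes = 0;
	unsigned int bit;

	for_each_set_bit(bit, &supported, BITS_PER_LONG) {
		if (sg_mask & (~0u << bit))
			sizes |= BIT(bit);
	}

	return sizes;
}

/* e.g. sg_mask = 2M | 64K with 4K/64K/2M supported -> 4K | 64K | 2M */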

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_drv.h                  |  5 ++-
 drivers/gpu/drm/i915/i915_gem.c                  | 43 ++++++++++++++++++++----
 drivers/gpu/drm/i915/i915_gem_dmabuf.c           | 17 ++++++++--
 drivers/gpu/drm/i915/i915_gem_internal.c         |  5 ++-
 drivers/gpu/drm/i915/i915_gem_object.h           | 20 ++++++++++-
 drivers/gpu/drm/i915/i915_gem_stolen.c           | 13 ++++---
 drivers/gpu/drm/i915/i915_gem_userptr.c          | 26 ++++++++++----
 drivers/gpu/drm/i915/selftests/huge_gem_object.c |  4 ++-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c    |  3 +-
 9 files changed, 110 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 979752584585..75f7e9f5b424 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2947,6 +2947,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define USES_PPGTT(dev_priv)		(i915.enable_ppgtt)
 #define USES_FULL_PPGTT(dev_priv)	(i915.enable_ppgtt >= 2)
 #define USES_FULL_48BIT_PPGTT(dev_priv)	(i915.enable_ppgtt == 3)
+#define HAS_PAGE_SIZE(dev_priv, page_size) \
+	((dev_priv)->info.page_size_mask & (page_size))
 
 #define HAS_OVERLAY(dev_priv)		 ((dev_priv)->info.has_overlay)
 #define OVERLAY_NEEDS_PHYSICAL(dev_priv) \
@@ -3331,7 +3333,8 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 				unsigned long n);
 
 void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
-				 struct sg_table *pages);
+				 struct sg_table *pages,
+				 unsigned int sg_mask);
 int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 
 static inline int __must_check
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 509e5fc4af56..76b4657e779a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -163,7 +163,8 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 }
 
 static struct sg_table *
-i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
+i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj,
+			       unsigned int *sg_mask)
 {
 	struct address_space *mapping = obj->base.filp->f_mapping;
 	drm_dma_handle_t *phys;
@@ -223,6 +224,8 @@ i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 	sg->offset = 0;
 	sg->length = obj->base.size;
 
+	*sg_mask = sg->length;
+
 	sg_dma_address(sg) = phys->busaddr;
 	sg_dma_len(sg) = obj->base.size;
 
@@ -2298,6 +2301,8 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
 	if (!IS_ERR(pages))
 		obj->ops->put_pages(obj, pages);
 
+	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+
 unlock:
 	mutex_unlock(&obj->mm.lock);
 }
@@ -2329,7 +2334,8 @@ static bool i915_sg_trim(struct sg_table *orig_st)
 }
 
 static struct sg_table *
-i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
+i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj,
+			      unsigned int *sg_mask)
 {
 	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
 	const unsigned long page_count = obj->base.size / PAGE_SIZE;
@@ -2376,6 +2382,7 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 
 	sg = st->sgl;
 	st->nents = 0;
+	*sg_mask = 0;
 	for (i = 0; i < page_count; i++) {
 		const unsigned int shrink[] = {
 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE,
@@ -2427,8 +2434,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		if (!i ||
 		    sg->length >= max_segment ||
 		    page_to_pfn(page) != last_pfn + 1) {
-			if (i)
+			if (i) {
+				*sg_mask |= sg->length;
 				sg = sg_next(sg);
+			}
 			st->nents++;
 			sg_set_page(sg, page, PAGE_SIZE, 0);
 		} else {
@@ -2439,8 +2448,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		/* Check that the i965g/gm workaround works. */
 		WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
 	}
-	if (sg) /* loop terminated early; short sg table */
+	if (sg) { /* loop terminated early; short sg table */
+		*sg_mask |= sg->length;
 		sg_mark_end(sg);
+	}
 
 	/* Trim unused sg entries to avoid wasting memory. */
 	i915_sg_trim(st);
@@ -2494,8 +2505,13 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 }
 
 void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
-				 struct sg_table *pages)
+				 struct sg_table *pages,
+				 unsigned int sg_mask)
 {
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	unsigned long supported_page_sizes = INTEL_INFO(i915)->page_size_mask;
+	unsigned int bit;
+
 	lockdep_assert_held(&obj->mm.lock);
 
 	obj->mm.get_page.sg_pos = pages->sgl;
@@ -2509,11 +2525,24 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		__i915_gem_object_pin_pages(obj);
 		obj->mm.quirked = true;
 	}
+
+	GEM_BUG_ON(!sg_mask);
+
+	obj->mm.page_sizes.phys = sg_mask;
+
+	obj->mm.page_sizes.sg = 0;
+	for_each_set_bit(bit, &supported_page_sizes, BITS_PER_LONG) {
+		if (obj->mm.page_sizes.phys & ~0u << bit)
+			obj->mm.page_sizes.sg |= BIT(bit);
+	}
+
+	GEM_BUG_ON(!HAS_PAGE_SIZE(i915, obj->mm.page_sizes.sg));
 }
 
 static int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
+	unsigned int sg_mask = 0;
 
 	GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
@@ -2522,11 +2551,11 @@ static int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 		return -EFAULT;
 	}
 
-	pages = obj->ops->get_pages(obj);
+	pages = obj->ops->get_pages(obj, &sg_mask);
 	if (unlikely(IS_ERR(pages)))
 		return PTR_ERR(pages);
 
-	__i915_gem_object_set_pages(obj, pages);
+	__i915_gem_object_set_pages(obj, pages, sg_mask);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 6176e589cf09..2b3b16d88d4b 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -257,10 +257,21 @@ struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
 }
 
 static struct sg_table *
-i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
+i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj,
+				 unsigned int *sg_mask)
 {
-	return dma_buf_map_attachment(obj->base.import_attach,
-				      DMA_BIDIRECTIONAL);
+	struct sg_table *pages;
+	struct scatterlist *sg;
+	int n;
+
+	pages = dma_buf_map_attachment(obj->base.import_attach,
+				       DMA_BIDIRECTIONAL);
+	if (!IS_ERR(pages)) {
+		for_each_sg(pages->sgl, sg, pages->nents, n)
+			*sg_mask |= sg->length;
+	}
+
+	return pages;
 }
 
 static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem_internal.c b/drivers/gpu/drm/i915/i915_gem_internal.c
index 568bf83af1f5..a505cb82eb82 100644
--- a/drivers/gpu/drm/i915/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/i915_gem_internal.c
@@ -45,7 +45,8 @@ static void internal_free_pages(struct sg_table *st)
 }
 
 static struct sg_table *
-i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj,
+				   unsigned int *sg_mask)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct sg_table *st;
@@ -76,6 +77,7 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	}
 
 create_st:
+	*sg_mask = 0;
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (!st)
 		return ERR_PTR(-ENOMEM);
@@ -105,6 +107,7 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 		} while (1);
 
 		sg_set_page(sg, page, PAGE_SIZE << order, 0);
+		*sg_mask |= PAGE_SIZE << order;
 		st->nents++;
 
 		npages -= 1 << order;
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 5b19a4916a4d..7fc8b8402897 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -53,7 +53,8 @@ struct drm_i915_gem_object_ops {
 	 * being released or under memory pressure (where we attempt to
 	 * reap pages for the shrinker).
 	 */
-	struct sg_table *(*get_pages)(struct drm_i915_gem_object *);
+	struct sg_table *(*get_pages)(struct drm_i915_gem_object *,
+				      unsigned int *sg_mask);
 	void (*put_pages)(struct drm_i915_gem_object *, struct sg_table *);
 
 	int (*pwrite)(struct drm_i915_gem_object *,
@@ -143,6 +144,23 @@ struct drm_i915_gem_object {
 		struct sg_table *pages;
 		void *mapping;
 
+		struct i915_page_sizes {
+			/**
+			 * The sg mask of the pages sg_table. i.e the mask of
+			 * of the lengths for each sg entry.
+			 */
+			unsigned int phys;
+
+			/**
+			 * The gtt page sizes we are allowed to use given the
+			 * sg mask and the supported page sizes. This will
+			 * express the smallest unit we can use for the whole
+			 * object, as well as the larger sizes we may be able
+			 * to use opportunistically.
+			 */
+			unsigned int sg;
+		} page_sizes;
+
 		struct i915_gem_object_page_iter {
 			struct scatterlist *sg_pos;
 			unsigned int sg_idx; /* in pages, but 32bit eek! */
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index a817b3e0b17e..2cc09517d46c 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -539,11 +539,16 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 }
 
 static struct sg_table *
-i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
+i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj,
+				 unsigned int *sg_mask)
 {
-	return i915_pages_create_for_stolen(obj->base.dev,
-					    obj->stolen->start,
-					    obj->stolen->size);
+	struct sg_table *pages =
+		i915_pages_create_for_stolen(obj->base.dev,
+					     obj->stolen->start,
+					     obj->stolen->size);
+	*sg_mask = obj->stolen->size;
+
+	return pages;
 }
 
 static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index ccd09e8419f5..b4c15a847a63 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -406,7 +406,8 @@ struct get_pages_work {
 #endif
 
 static int
-st_set_pages(struct sg_table **st, struct page **pvec, int num_pages)
+st_set_pages(struct sg_table **st, struct page **pvec, int num_pages,
+	     unsigned int *sg_mask)
 {
 	struct scatterlist *sg;
 	int ret, n;
@@ -422,12 +423,17 @@ st_set_pages(struct sg_table **st, struct page **pvec, int num_pages)
 
 		for_each_sg((*st)->sgl, sg, num_pages, n)
 			sg_set_page(sg, pvec[n], PAGE_SIZE, 0);
+
+		*sg_mask = PAGE_SIZE;
 	} else {
 		ret = sg_alloc_table_from_pages(*st, pvec, num_pages,
 						0, num_pages << PAGE_SHIFT,
 						GFP_KERNEL);
 		if (ret)
 			goto err;
+
+		for_each_sg((*st)->sgl, sg, num_pages, n)
+			*sg_mask |= sg->length;
 	}
 
 	return 0;
@@ -440,12 +446,13 @@ st_set_pages(struct sg_table **st, struct page **pvec, int num_pages)
 
 static struct sg_table *
 __i915_gem_userptr_set_pages(struct drm_i915_gem_object *obj,
-			     struct page **pvec, int num_pages)
+			     struct page **pvec, int num_pages,
+			     unsigned int *sg_mask)
 {
 	struct sg_table *pages;
 	int ret;
 
-	ret = st_set_pages(&pages, pvec, num_pages);
+	ret = st_set_pages(&pages, pvec, num_pages, sg_mask);
 	if (ret)
 		return ERR_PTR(ret);
 
@@ -540,9 +547,12 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 		struct sg_table *pages = ERR_PTR(ret);
 
 		if (pinned == npages) {
-			pages = __i915_gem_userptr_set_pages(obj, pvec, npages);
+			unsigned int sg_mask = 0;
+
+			pages = __i915_gem_userptr_set_pages(obj, pvec, npages,
+							     &sg_mask);
 			if (!IS_ERR(pages)) {
-				__i915_gem_object_set_pages(obj, pages);
+				__i915_gem_object_set_pages(obj, pages, sg_mask);
 				pinned = 0;
 				pages = NULL;
 			}
@@ -604,7 +614,8 @@ __i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
 }
 
 static struct sg_table *
-i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
+i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj,
+			   unsigned int *sg_mask)
 {
 	const int num_pages = obj->base.size >> PAGE_SHIFT;
 	struct mm_struct *mm = obj->userptr.mm->mm;
@@ -661,7 +672,8 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 		pages = __i915_gem_userptr_get_pages_schedule(obj);
 		active = pages == ERR_PTR(-EAGAIN);
 	} else {
-		pages = __i915_gem_userptr_set_pages(obj, pvec, num_pages);
+		pages = __i915_gem_userptr_set_pages(obj, pvec, num_pages,
+						     sg_mask);
 		active = !IS_ERR(pages);
 	}
 	if (active)
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
index caf76af36aba..3f1afe4b65f1 100644
--- a/drivers/gpu/drm/i915/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
@@ -38,7 +38,7 @@ static void huge_free_pages(struct drm_i915_gem_object *obj,
 }
 
 static struct sg_table *
-huge_get_pages(struct drm_i915_gem_object *obj)
+huge_get_pages(struct drm_i915_gem_object *obj, unsigned int *sg_mask)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
 	const unsigned long nreal = obj->scratch / PAGE_SIZE;
@@ -81,6 +81,8 @@ huge_get_pages(struct drm_i915_gem_object *obj)
 	if (i915_gem_gtt_prepare_pages(obj, pages))
 		goto err;
 
+	*sg_mask = PAGE_SIZE;
+
 	return pages;
 
 err:
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 6b132caffa18..0e1ded4239f9 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -40,7 +40,7 @@ static void fake_free_pages(struct drm_i915_gem_object *obj,
 }
 
 static struct sg_table *
-fake_get_pages(struct drm_i915_gem_object *obj)
+fake_get_pages(struct drm_i915_gem_object *obj, unsigned int *sg_mask)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
 #define PFN_BIAS 0x1000
@@ -66,6 +66,7 @@ fake_get_pages(struct drm_i915_gem_object *obj)
 		sg_set_page(sg, pfn_to_page(PFN_BIAS), len, 0);
 		sg_dma_address(sg) = page_to_phys(sg_page(sg));
 		sg_dma_len(sg) = len;
+		*sg_mask |= len;
 
 		rem -= len;
 	}
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 06/21] drm/i915: introduce vm set_pages/clear_pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (4 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 05/21] drm/i915: introduce page_size members Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 07/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Move the setting/clearing of the vma->pages to a vm operation. Doing so
neatens things up a little, but more importantly gives us a sane place
to also set/clear the vma->page_sizes, which we introduce later in
preparation for supporting huge-pages.
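
The two new hooks for the ppgtt case, condensed from the hunks below
into a standalone sketch with illustrative example_* names (the ggtt
variant additionally builds partial/rotated view sg_tables, which is
why clear_pages must cope with vma->pages differing from the object's
own pages):

#include <linux/scatterlist.h>
#include <linux/slab.h>

#include "i915_vma.h"

static int example_set_pages(struct i915_vma *vma)
{
	/* ppgtt vmas simply borrow the object's sg_table */
	vma->pages = vma->obj->mm.pages;
	return 0;
}

static void example_clear_pages(struct i915_vma *vma)
{
	/* only free a table the vma allocated for itself */
	if (vma->pages != vma->obj->mm.pages) {
		sg_free_table(vma->pages);
		kfree(vma->pages);
	}
	vma->pages = NULL;
}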

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c       | 70 +++++++++++++++++++------------
 drivers/gpu/drm/i915/i915_gem_gtt.h       |  2 +
 drivers/gpu/drm/i915/i915_vma.c           | 29 +++++++------
 drivers/gpu/drm/i915/selftests/mock_gtt.c | 11 ++---
 4 files changed, 66 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index de67084d5fcf..071ba8e56606 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -200,8 +200,6 @@ static int ppgtt_bind_vma(struct i915_vma *vma,
 			return ret;
 	}
 
-	vma->pages = vma->obj->mm.pages;
-
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
 	if (vma->obj->gt_ro)
@@ -217,6 +215,26 @@ static void ppgtt_unbind_vma(struct i915_vma *vma)
 	vma->vm->clear_range(vma->vm, vma->node.start, vma->size);
 }
 
+static int ppgtt_set_pages(struct i915_vma *vma)
+{
+	GEM_BUG_ON(vma->pages);
+
+	vma->pages = vma->obj->mm.pages;
+
+	return 0;
+}
+
+static void clear_pages(struct i915_vma *vma)
+{
+	GEM_BUG_ON(!vma->pages);
+
+	if (vma->pages != vma->obj->mm.pages) {
+		sg_free_table(vma->pages);
+		kfree(vma->pages);
+	}
+	vma->pages = NULL;
+}
+
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
 				  enum i915_cache_level level)
 {
@@ -1380,6 +1398,8 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 	ppgtt->base.cleanup = gen8_ppgtt_cleanup;
 	ppgtt->base.unbind_vma = ppgtt_unbind_vma;
 	ppgtt->base.bind_vma = ppgtt_bind_vma;
+	ppgtt->base.set_pages = ppgtt_set_pages;
+	ppgtt->base.clear_pages = clear_pages;
 	ppgtt->debug_dump = gen8_dump_ppgtt;
 
 	return 0;
@@ -1822,6 +1842,8 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 	ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
 	ppgtt->base.unbind_vma = ppgtt_unbind_vma;
 	ppgtt->base.bind_vma = ppgtt_bind_vma;
+	ppgtt->base.set_pages = ppgtt_set_pages;
+	ppgtt->base.clear_pages = clear_pages;
 	ppgtt->base.cleanup = gen6_ppgtt_cleanup;
 	ppgtt->debug_dump = gen6_dump_ppgtt;
 
@@ -2333,12 +2355,6 @@ static int ggtt_bind_vma(struct i915_vma *vma,
 	struct drm_i915_gem_object *obj = vma->obj;
 	u32 pte_flags;
 
-	if (unlikely(!vma->pages)) {
-		int ret = i915_get_ggtt_vma_pages(vma);
-		if (ret)
-			return ret;
-	}
-
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
 	if (obj->gt_ro)
@@ -2375,12 +2391,6 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 	u32 pte_flags;
 	int ret;
 
-	if (unlikely(!vma->pages)) {
-		ret = i915_get_ggtt_vma_pages(vma);
-		if (ret)
-			return ret;
-	}
-
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
 	if (vma->obj->gt_ro)
@@ -2395,7 +2405,7 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 							     vma->node.start,
 							     vma->size);
 			if (ret)
-				goto err_pages;
+				return ret;
 		}
 
 		appgtt->base.insert_entries(&appgtt->base, vma, cache_level,
@@ -2409,17 +2419,6 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 	}
 
 	return 0;
-
-err_pages:
-	if (!(vma->flags & (I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND))) {
-		if (vma->pages != vma->obj->mm.pages) {
-			GEM_BUG_ON(!vma->pages);
-			sg_free_table(vma->pages);
-			kfree(vma->pages);
-		}
-		vma->pages = NULL;
-	}
-	return ret;
 }
 
 static void aliasing_gtt_unbind_vma(struct i915_vma *vma)
@@ -2457,6 +2456,19 @@ void i915_gem_gtt_finish_pages(struct drm_i915_gem_object *obj,
 	dma_unmap_sg(kdev, pages->sgl, pages->nents, PCI_DMA_BIDIRECTIONAL);
 }
 
+static int ggtt_set_pages(struct i915_vma *vma)
+{
+	int ret;
+
+	GEM_BUG_ON(vma->pages);
+
+	ret = i915_get_ggtt_vma_pages(vma);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static void i915_gtt_color_adjust(const struct drm_mm_node *node,
 				  unsigned long color,
 				  u64 *start,
@@ -2859,6 +2871,8 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->base.cleanup = gen6_gmch_remove;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.insert_page = gen8_ggtt_insert_page;
 	ggtt->base.clear_range = nop_clear_range;
 	if (!USES_FULL_PPGTT(dev_priv) || intel_scanout_needs_vtd_wa(dev_priv))
@@ -2915,6 +2929,8 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->base.insert_entries = gen6_ggtt_insert_entries;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.cleanup = gen6_gmch_remove;
 
 	ggtt->invalidate = gen6_ggtt_invalidate;
@@ -2960,6 +2976,8 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->base.clear_range = i915_ggtt_clear_range;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.cleanup = i915_gmch_remove;
 
 	ggtt->invalidate = gmch_ggtt_invalidate;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 4c2f7d7c1e7d..57738a61ea6e 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -330,6 +330,8 @@ struct i915_address_space {
 	int (*bind_vma)(struct i915_vma *vma,
 			enum i915_cache_level cache_level,
 			u32 flags);
+	int (*set_pages)(struct i915_vma *vma);
+	void (*clear_pages)(struct i915_vma *vma);
 
 	I915_SELFTEST_DECLARE(struct fault_attr fault_attr);
 };
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 958be0a95960..c8d1b98cc260 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -266,6 +266,8 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 	if (bind_flags == 0)
 		return 0;
 
+	GEM_BUG_ON(!vma->pages);
+
 	trace_i915_vma_bind(vma, bind_flags);
 	ret = vma->vm->bind_vma(vma, cache_level, bind_flags);
 	if (ret)
@@ -471,25 +473,31 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 	if (ret)
 		return ret;
 
+	if (!vma->pages) {
+		ret = vma->vm->set_pages(vma);
+		if (ret)
+			goto err_unpin;
+	}
+
 	if (flags & PIN_OFFSET_FIXED) {
 		u64 offset = flags & PIN_OFFSET_MASK;
 		if (!IS_ALIGNED(offset, alignment) ||
 		    range_overflows(offset, size, end)) {
 			ret = -EINVAL;
-			goto err_unpin;
+			goto err_clear;
 		}
 
 		ret = i915_gem_gtt_reserve(vma->vm, &vma->node,
 					   size, offset, obj->cache_level,
 					   flags);
 		if (ret)
-			goto err_unpin;
+			goto err_clear;
 	} else {
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
 		if (ret)
-			goto err_unpin;
+			goto err_clear;
 
 		GEM_BUG_ON(vma->node.start < start);
 		GEM_BUG_ON(vma->node.start + vma->node.size > end);
@@ -504,6 +512,8 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 
 	return 0;
 
+err_clear:
+	vma->vm->clear_pages(vma);
 err_unpin:
 	i915_gem_object_unpin_pages(obj);
 	return ret;
@@ -517,6 +527,8 @@ i915_vma_remove(struct i915_vma *vma)
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 	GEM_BUG_ON(vma->flags & (I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND));
 
+	vma->vm->clear_pages(vma);
+
 	drm_mm_remove_node(&vma->node);
 	list_move_tail(&vma->vm_link, &vma->vm->unbound_list);
 
@@ -568,10 +580,8 @@ int __i915_vma_do_pin(struct i915_vma *vma,
 	return 0;
 
 err_remove:
-	if ((bound & I915_VMA_BIND_MASK) == 0) {
-		GEM_BUG_ON(vma->pages);
+	if ((bound & I915_VMA_BIND_MASK) == 0)
 		i915_vma_remove(vma);
-	}
 err_unpin:
 	__i915_vma_unpin(vma);
 	return ret;
@@ -717,13 +727,6 @@ int i915_vma_unbind(struct i915_vma *vma)
 	}
 	vma->flags &= ~(I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND);
 
-	if (vma->pages != obj->mm.pages) {
-		GEM_BUG_ON(!vma->pages);
-		sg_free_table(vma->pages);
-		kfree(vma->pages);
-	}
-	vma->pages = NULL;
-
 	i915_vma_remove(vma);
 
 destroy:
diff --git a/drivers/gpu/drm/i915/selftests/mock_gtt.c b/drivers/gpu/drm/i915/selftests/mock_gtt.c
index f2118cf535a0..336e1afb250f 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gtt.c
@@ -43,7 +43,6 @@ static int mock_bind_ppgtt(struct i915_vma *vma,
 			   u32 flags)
 {
 	GEM_BUG_ON(flags & I915_VMA_GLOBAL_BIND);
-	vma->pages = vma->obj->mm.pages;
 	vma->flags |= I915_VMA_LOCAL_BIND;
 	return 0;
 }
@@ -84,6 +83,8 @@ mock_ppgtt(struct drm_i915_private *i915,
 	ppgtt->base.insert_entries = mock_insert_entries;
 	ppgtt->base.bind_vma = mock_bind_ppgtt;
 	ppgtt->base.unbind_vma = mock_unbind_ppgtt;
+	ppgtt->base.set_pages = ppgtt_set_pages;
+	ppgtt->base.clear_pages = clear_pages;
 	ppgtt->base.cleanup = mock_cleanup;
 
 	return ppgtt;
@@ -93,12 +94,6 @@ static int mock_bind_ggtt(struct i915_vma *vma,
 			  enum i915_cache_level cache_level,
 			  u32 flags)
 {
-	int err;
-
-	err = i915_get_ggtt_vma_pages(vma);
-	if (err)
-		return err;
-
 	vma->flags |= I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
 	return 0;
 }
@@ -124,6 +119,8 @@ void mock_init_ggtt(struct drm_i915_private *i915)
 	ggtt->base.insert_entries = mock_insert_entries;
 	ggtt->base.bind_vma = mock_bind_ggtt;
 	ggtt->base.unbind_vma = mock_unbind_ggtt;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.cleanup = mock_cleanup;
 
 	i915_address_space_init(&ggtt->base, i915, "global");
-- 
2.9.4


* [PATCH 07/21] drm/i915: align the vma start to the largest gtt page size
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (5 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 06/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 08/21] drm/i915: align 64K objects to 2M Matthew Auld
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

For the 48b PPGTT, try to align the vma start address to the required
page size boundary to guarantee we use said page size in the gtt. If we
are dealing with multiple page sizes, we can't guarantee anything, so we
just align to the largest. For soft pinning and objects which need to be
tightly packed into the lower 32 bits, we don't force any alignment.
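
A minimal userspace sketch of the alignment choice, with the size
constants and rounddown_pow_of_two() re-implemented purely for
illustration: taking rounddown_pow_of_two() of the sg page-size mask
picks out the largest page size present, which is then used as the
minimum vma alignment.

#include <stdio.h>
#include <stdint.h>

#define SZ_4K	(1ULL << 12)
#define SZ_64K	(1ULL << 16)
#define SZ_2M	(1ULL << 21)
#define SZ_1G	(1ULL << 30)

static uint64_t rounddown_pow_of_two(uint64_t x)
{
	return 1ULL << (63 - __builtin_clzll(x)); /* largest bit set */
}

int main(void)
{
	uint64_t masks[] = { SZ_64K, SZ_2M | SZ_64K, SZ_1G | SZ_2M | SZ_4K };
	int i;

	for (i = 0; i < sizeof(masks) / sizeof(masks[0]); i++)
		printf("sg mask %#llx -> align vma start to %#llx\n",
		       (unsigned long long)masks[i],
		       (unsigned long long)rounddown_pow_of_two(masks[i]));
	return 0;
}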

v2: various improvements suggested by Chris

v3: use set_pages and better placement of page_sizes

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c |  8 ++++++++
 drivers/gpu/drm/i915/i915_vma.c     | 12 ++++++++++++
 drivers/gpu/drm/i915/i915_vma.h     |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 071ba8e56606..ae362abd4729 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -221,6 +221,9 @@ static int ppgtt_set_pages(struct i915_vma *vma)
 
 	vma->pages = vma->obj->mm.pages;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
@@ -233,6 +236,8 @@ static void clear_pages(struct i915_vma *vma)
 		kfree(vma->pages);
 	}
 	vma->pages = NULL;
+
+	memset(&vma->page_sizes, 0, sizeof(struct i915_page_sizes));
 }
 
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
@@ -2466,6 +2471,9 @@ static int ggtt_set_pages(struct i915_vma *vma)
 	if (ret)
 		return ret;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index c8d1b98cc260..212da056ef7d 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -493,6 +493,18 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (ret)
 			goto err_clear;
 	} else {
+		/* We only support huge gtt pages through the 48b PPGTT,
+		 * however we also don't want to force any alignment for
+		 * objects which need to be tightly packed into the low 32bits.
+		 */
+		if (end > (1ULL << 32) &&
+		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			u64 page_alignment =
+				rounddown_pow_of_two(vma->page_sizes.sg);
+
+			alignment = max(alignment, page_alignment);
+		}
+
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 4a673fc1a432..58256aec35f8 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -55,6 +55,7 @@ struct i915_vma {
 	void __iomem *iomap;
 	u64 size;
 	u64 display_alignment;
+	struct i915_page_sizes page_sizes;
 
 	u32 fence_size;
 	u32 fence_alignment;
-- 
2.9.4


* [PATCH 08/21] drm/i915: align 64K objects to 2M
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (6 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 07/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 09/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

We can't mix 64K and 4K PTEs in the same page-table, so for now we
align 64K objects to 2M to avoid any potential mixing. This is
potentially wasteful, but in reality it shouldn't be too bad, since it
only applies to the virtual address space of a 48b PPGTT.
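
A minimal userspace sketch of the arithmetic, assuming only the
kernel-style helpers named here and re-implementing them purely for
illustration: OR-ing 2M into the sg mask before rounddown_pow_of_two()
guarantees at least 2M alignment once any huge page size is involved,
and an object backed by 64K pages has its node size padded to a whole
2M page-table block.

#include <stdio.h>
#include <stdint.h>

#define SZ_64K	(1ULL << 16)
#define SZ_2M	(1ULL << 21)

static uint64_t rounddown_pow_of_two(uint64_t x)
{
	return 1ULL << (63 - __builtin_clzll(x));
}

static uint64_t round_up(uint64_t x, uint64_t align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	uint64_t sg_mask = SZ_64K;	/* object backed purely by 64K chunks */
	uint64_t size = 3 * SZ_64K;	/* 192K object */
	uint64_t alignment = rounddown_pow_of_two(sg_mask | SZ_2M);

	if (sg_mask & SZ_64K)
		size = round_up(size, SZ_2M);

	printf("alignment=%#llx node size=%#llx\n",
	       (unsigned long long)alignment, (unsigned long long)size);
	return 0;
}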

v2: don't separate logically connected ops

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_vma.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 212da056ef7d..879d15b815e0 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -500,9 +500,17 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (end > (1ULL << 32) &&
 		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
 			u64 page_alignment =
-				rounddown_pow_of_two(vma->page_sizes.sg);
+				rounddown_pow_of_two(vma->page_sizes.sg |
+						     I915_GTT_PAGE_SIZE_2M);
 
 			alignment = max(alignment, page_alignment);
+
+			/* We can't mix 64K and 4K PTEs in the same page-table (2M
+			 * block), and so to avoid the ugliness and complexity of
+			 * coloring we opt for just aligning 64K objects to 2M.
+			 */
+			if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K)
+				size = round_up(size, I915_GTT_PAGE_SIZE_2M);
 		}
 
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
-- 
2.9.4


* [PATCH 09/21] drm/i915: enable IPS bit for 64K pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (7 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 08/21] drm/i915: align 64K objects to 2M Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 10/21] drm/i915: disable GTT cache for 2M/1G pages Matthew Auld
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Before we can enable 64K pages through the IPS bit in the PDE, we must
first enable the IPS support through MMIO, otherwise the page-walker
will simply ignore the bit.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 11 +++++++++++
 drivers/gpu/drm/i915/i915_reg.h |  3 +++
 2 files changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 76b4657e779a..09da46178c31 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4775,6 +4775,17 @@ int i915_gem_init_hw(struct drm_i915_private *dev_priv)
 		}
 	}
 
+	/* To support 64K PTE's we need to first enable the use of the
+	 * Intermediate-Page-Size(IPS) bit of the PDE field via some magical
+	 * mmio, otherwise the page-walker will simply ignore the IPS bit. This
+	 * shouldn't be needed after GEN10.
+	 */
+	if (HAS_PAGE_SIZE(dev_priv, I915_GTT_PAGE_SIZE_64K) &&
+	    INTEL_GEN(dev_priv) <= 10)
+		I915_WRITE(GEN8_GAMW_ECO_DEV_RW_IA,
+			   I915_READ(GEN8_GAMW_ECO_DEV_RW_IA) |
+			   GAMW_ECO_ENABLE_64K_IPS_FIELD);
+
 	i915_gem_init_swizzling(dev_priv);
 
 	/*
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 64cc674b652a..b812fc5612da 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -2190,6 +2190,9 @@ enum skl_disp_power_wells {
 #define GEN9_GAMT_ECO_REG_RW_IA _MMIO(0x4ab0)
 #define   GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS	(1<<18)
 
+#define GEN8_GAMW_ECO_DEV_RW_IA _MMIO(0x4080)
+#define   GAMW_ECO_ENABLE_64K_IPS_FIELD 0xF
+
 #define GAMT_CHKN_BIT_REG	_MMIO(0x4ab8)
 #define   GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING	(1<<28)
 
-- 
2.9.4


* [PATCH 10/21] drm/i915: disable GTT cache for 2M/1G pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (8 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 09/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 11/21] drm/i915: support 1G pages for the 48b PPGTT Matthew Auld
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

When SW enables the use of 2M/1G pages, it must disable the GTT cache.

v2: don't disable for Cherryview which doesn't even support 48b PPGTT!

v3: explicitly check that the system does support 2M/1G pages

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_pm.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index c3fcadfa0ae7..375be20c2d22 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -8296,10 +8296,13 @@ static void broadwell_init_clock_gating(struct drm_i915_private *dev_priv)
 
 	/*
 	 * WaGttCachingOffByDefault:bdw
-	 * GTT cache may not work with big pages, so if those
-	 * are ever enabled GTT cache may need to be disabled.
+	 * The GTT cache must be disabled if the system is planning to use
+	 * 2M/1G pages.
 	 */
-	I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
+	I915_WRITE(HSW_GTT_CACHE_EN,
+		   HAS_PAGE_SIZE(dev_priv,
+				 I915_GTT_PAGE_SIZE_2M |
+				 I915_GTT_PAGE_SIZE_1G) ? 0 : GTT_CACHE_EN_ALL);
 
 	/* WaKVMNotificationOnConfigChange:bdw */
 	I915_WRITE(CHICKEN_PAR2_1, I915_READ(CHICKEN_PAR2_1)
-- 
2.9.4


* [PATCH 11/21] drm/i915: support 1G pages for the 48b PPGTT
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (9 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 10/21] drm/i915: disable GTT cache for 2M/1G pages Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 12/21] drm/i915: support 2M " Matthew Auld
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Support inserting 1G gtt pages into the 48b PPGTT.
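
A minimal userspace sketch of the offset-index side of the check,
purely for illustration (the real code additionally requires the 1G
page-size bit in the sg mask, a 1G-aligned dma address and at least 1G
remaining in the sg entry): a 1G entry lives at the PDPE level of the
gen8 4-level layout, so it can only be used when the pde and pte
indices of the GTT offset are both zero, i.e. the offset is 1G aligned.

#include <stdio.h>
#include <stdint.h>

struct gen8_idx { unsigned int pml4e, pdpe, pde, pte; };

static struct gen8_idx gen8_split(uint64_t offset)
{
	return (struct gen8_idx) {
		.pml4e = (offset >> 39) & 0x1ff,
		.pdpe  = (offset >> 30) & 0x1ff,
		.pde   = (offset >> 21) & 0x1ff,
		.pte   = (offset >> 12) & 0x1ff,
	};
}

int main(void)
{
	uint64_t offsets[] = { 1ULL << 30, (1ULL << 30) + (2ULL << 20) };
	int i;

	for (i = 0; i < 2; i++) {
		struct gen8_idx idx = gen8_split(offsets[i]);

		printf("%#llx -> pml4e=%u pdpe=%u pde=%u pte=%u, 1G entry possible: %s\n",
		       (unsigned long long)offsets[i],
		       idx.pml4e, idx.pdpe, idx.pde, idx.pte,
		       (idx.pde | idx.pte) ? "no" : "yes");
	}
	return 0;
}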

v2: sanity check sg->length against page_size

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 73 +++++++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_gem_gtt.h |  2 +
 2 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ae362abd4729..861b1442b6c4 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -945,6 +945,66 @@ static void gen8_ppgtt_insert_3lvl(struct i915_address_space *vm,
 				      cache_level);
 }
 
+static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
+					   struct i915_page_directory_pointer **pdps,
+					   struct sgt_dma *iter,
+					   enum i915_cache_level cache_level)
+{
+	const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level);
+	u64 start = vma->node.start;
+
+	do {
+		struct gen8_insert_pte idx = gen8_insert_pte(start);
+		struct i915_page_directory_pointer *pdp = pdps[idx.pml4e];
+		struct i915_page_directory *pd = pdp->page_directory[idx.pdpe];
+		struct i915_page_table *pt = pd->page_table[idx.pde];
+		dma_addr_t rem = iter->max - iter->dma;
+		unsigned int page_size;
+		gen8_pte_t encode = pte_encode;
+		gen8_pte_t *vaddr;
+		u16 index, max;
+
+		if (unlikely(vma->page_sizes.sg & I915_GTT_PAGE_SIZE_1G) &&
+		    IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_1G) &&
+		    rem >= I915_GTT_PAGE_SIZE_1G && !(idx.pte | idx.pde)) {
+			vaddr = kmap_atomic_px(pdp);
+			index = idx.pdpe;
+			max = GEN8_PML4ES_PER_PML4;
+			page_size = I915_GTT_PAGE_SIZE_1G;
+			encode |= GEN8_PDPE_PS_1G;
+		} else {
+			vaddr = kmap_atomic_px(pt);
+			index = idx.pte;
+			max = GEN8_PTES;
+			page_size = I915_GTT_PAGE_SIZE;
+		}
+
+		do {
+			GEM_BUG_ON(iter->sg->length < page_size);
+			vaddr[index++] = encode | iter->dma;
+
+			start += page_size;
+			iter->dma += page_size;
+			if (iter->dma >= iter->max) {
+				iter->sg = __sg_next(iter->sg);
+				if (!iter->sg)
+					break;
+
+				iter->dma = sg_dma_address(iter->sg);
+				iter->max = iter->dma + iter->sg->length;
+
+				if (unlikely(!IS_ALIGNED(iter->dma, page_size)))
+					break;
+			}
+			rem = iter->max - iter->dma;
+
+		} while (rem >= page_size && index < max);
+
+		kunmap_atomic(vaddr);
+
+	} while (iter->sg);
+}
+
 static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 				   struct i915_vma *vma,
 				   enum i915_cache_level cache_level,
@@ -957,11 +1017,16 @@ static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 		.max = iter.dma + iter.sg->length,
 	};
 	struct i915_page_directory_pointer **pdps = ppgtt->pml4.pdps;
-	struct gen8_insert_pte idx = gen8_insert_pte(vma->node.start);
 
-	while (gen8_ppgtt_insert_pte_entries(ppgtt, pdps[idx.pml4e++], &iter,
-					     &idx, cache_level))
-		GEM_BUG_ON(idx.pml4e >= GEN8_PML4ES_PER_PML4);
+	if (vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+		gen8_ppgtt_insert_huge_entries(vma, pdps, &iter, cache_level);
+	} else {
+		struct gen8_insert_pte idx = gen8_insert_pte(vma->node.start);
+
+		while (gen8_ppgtt_insert_pte_entries(ppgtt, pdps[idx.pml4e++],
+						     &iter, &idx, cache_level))
+			GEM_BUG_ON(idx.pml4e >= GEN8_PML4ES_PER_PML4);
+	}
 }
 
 static void gen8_free_page_tables(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 57738a61ea6e..e46f05f0cfd9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -149,6 +149,8 @@ typedef u64 gen8_ppgtt_pml4e_t;
 #define GEN8_PPAT_ELLC_OVERRIDE		(0<<2)
 #define GEN8_PPAT(i, x)			((u64)(x) << ((i) * 8))
 
+#define GEN8_PDPE_PS_1G  BIT(7)
+
 struct sg_table;
 
 struct intel_rotation_info {
-- 
2.9.4


* [PATCH 12/21] drm/i915: support 2M pages for the 48b PPGTT
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (10 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 11/21] drm/i915: support 1G pages for the 48b PPGTT Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 13/21] drm/i915: support 64K " Matthew Auld
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 8 ++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h | 2 ++
 2 files changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 861b1442b6c4..be11c83d5c91 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -972,6 +972,14 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 			max = GEN8_PML4ES_PER_PML4;
 			page_size = I915_GTT_PAGE_SIZE_1G;
 			encode |= GEN8_PDPE_PS_1G;
+		} else if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_2M &&
+			   IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_2M) &&
+			   rem >= I915_GTT_PAGE_SIZE_2M && !idx.pte) {
+			vaddr = kmap_atomic_px(pd);
+			index = idx.pde;
+			max = I915_PDES;
+			page_size = I915_GTT_PAGE_SIZE_2M;
+			encode |= GEN8_PDE_PS_2M;
 		} else {
 			vaddr = kmap_atomic_px(pt);
 			index = idx.pte;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index e46f05f0cfd9..aa4488637fc9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -149,6 +149,8 @@ typedef u64 gen8_ppgtt_pml4e_t;
 #define GEN8_PPAT_ELLC_OVERRIDE		(0<<2)
 #define GEN8_PPAT(i, x)			((u64)(x) << ((i) * 8))
 
+#define GEN8_PDE_PS_2M   BIT(7)
+
 #define GEN8_PDPE_PS_1G  BIT(7)
 
 struct sg_table;
-- 
2.9.4


* [PATCH 13/21] drm/i915: support 64K pages for the 48b PPGTT
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (11 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 12/21] drm/i915: support 2M " Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 14/21] drm/i915: accurate page size tracking for the ppgtt Matthew Auld
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 26 ++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index be11c83d5c91..ee9ed2d6dd11 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -960,6 +960,7 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 		struct i915_page_table *pt = pd->page_table[idx.pde];
 		dma_addr_t rem = iter->max - iter->dma;
 		unsigned int page_size;
+		bool maybe_64K = false;
 		gen8_pte_t encode = pte_encode;
 		gen8_pte_t *vaddr;
 		u16 index, max;
@@ -985,9 +986,17 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 			index = idx.pte;
 			max = GEN8_PTES;
 			page_size = I915_GTT_PAGE_SIZE;
+
+			if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K && !idx.pte)
+				maybe_64K = true;
 		}
 
 		do {
+			if (maybe_64K && (index % 16 == 0) &&
+			    (!IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_64K) ||
+			     rem < I915_GTT_PAGE_SIZE_64K))
+				maybe_64K = false;
+
 			GEM_BUG_ON(iter->sg->length < page_size);
 			vaddr[index++] = encode | iter->dma;
 
@@ -1010,6 +1019,23 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 
 		kunmap_atomic(vaddr);
 
+
+		/* Is it safe to mark the 2M block as 64K? -- Either we have
+		 * filled whole page-table with 64K entries, or filled part of
+		 * it and have reached the end of the sg table and we have
+		 * enough padding.
+		 */
+		if (maybe_64K) {
+			if (index == max ||
+			    (!iter->sg && IS_ALIGNED(vma->node.start +
+						     vma->node.size,
+						     I915_GTT_PAGE_SIZE_2M))) {
+				vaddr = kmap_atomic_px(pd);
+				vaddr[idx.pde] |= GEN8_PDE_IPS_64K;
+				kunmap_atomic(vaddr);
+			}
+		}
+
 	} while (iter->sg);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index aa4488637fc9..42be89d27193 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -149,6 +149,7 @@ typedef u64 gen8_ppgtt_pml4e_t;
 #define GEN8_PPAT_ELLC_OVERRIDE		(0<<2)
 #define GEN8_PPAT(i, x)			((u64)(x) << ((i) * 8))
 
+#define GEN8_PDE_IPS_64K BIT(11)
 #define GEN8_PDE_PS_2M   BIT(7)
 
 #define GEN8_PDPE_PS_1G  BIT(7)
-- 
2.9.4


* [PATCH 14/21] drm/i915: accurate page size tracking for the ppgtt
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (12 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 13/21] drm/i915: support 64K " Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 15/21] drm/i915/debugfs: include some gtt page size metrics Matthew Auld
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Now that we support multiple page sizes for the ppgtt, it would be
useful to track the real usage for debugging purposes.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c    | 10 ++++++++++
 drivers/gpu/drm/i915/i915_gem_object.h | 10 ++++++++++
 2 files changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ee9ed2d6dd11..d643fc486451 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -943,6 +943,8 @@ static void gen8_ppgtt_insert_3lvl(struct i915_address_space *vm,
 
 	gen8_ppgtt_insert_pte_entries(ppgtt, &ppgtt->pdp, &iter, &idx,
 				      cache_level);
+
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
 }
 
 static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
@@ -1033,8 +1035,10 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 				vaddr = kmap_atomic_px(pd);
 				vaddr[idx.pde] |= GEN8_PDE_IPS_64K;
 				kunmap_atomic(vaddr);
+				page_size = I915_GTT_PAGE_SIZE_64K;
 			}
 		}
+		vma->page_sizes.gtt |= page_size;
 
 	} while (iter->sg);
 }
@@ -1060,6 +1064,8 @@ static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 		while (gen8_ppgtt_insert_pte_entries(ppgtt, pdps[idx.pml4e++],
 						     &iter, &idx, cache_level))
 			GEM_BUG_ON(idx.pml4e >= GEN8_PML4ES_PER_PML4);
+
+		vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
 	}
 }
 
@@ -1778,6 +1784,8 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 		}
 	} while (1);
 	kunmap_atomic(vaddr);
+
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
 }
 
 static int gen6_alloc_va_range(struct i915_address_space *vm,
@@ -2468,6 +2476,8 @@ static int ggtt_bind_vma(struct i915_vma *vma,
 	vma->vm->insert_entries(vma->vm, vma, cache_level, pte_flags);
 	intel_runtime_pm_put(i915);
 
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
+
 	/*
 	 * Without aliasing PPGTT there's no difference between
 	 * GLOBAL/LOCAL_BIND, it's all the same ptes. Hence unconditionally
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 7fc8b8402897..2e0f3b48a81a 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -144,6 +144,7 @@ struct drm_i915_gem_object {
 		struct sg_table *pages;
 		void *mapping;
 
+		/* TODO: whack some of this into the error state */
 		struct i915_page_sizes {
 			/**
 			 * The sg mask of the pages sg_table. i.e the mask of
@@ -159,6 +160,15 @@ struct drm_i915_gem_object {
 			 * to use opportunistically.
 			 */
 			unsigned int sg;
+
+			/**
+			 * The actual gtt page size usage. Since we can have
+			 * multiple vma associated with this object we need to
+			 * prevent any trampling of state, hence a copy of this
+			 * struct also lives in each vma, therefore the gtt
+			 * value here should only be read/write through the vma.
+			 */
+			unsigned int gtt;
 		} page_sizes;
 
 		struct i915_gem_object_page_iter {
-- 
2.9.4


* [PATCH 15/21] drm/i915/debugfs: include some gtt page size metrics
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (13 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 14/21] drm/i915: accurate page size tracking for the ppgtt Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:14 ` [PATCH 16/21] drm/i915/selftests: huge page tests Matthew Auld
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Good to know, mostly for debugging purposes.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 42 +++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 580bd4f4a49e..9dc0feb36ac8 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -117,6 +117,26 @@ static u64 i915_gem_obj_total_ggtt_size(struct drm_i915_gem_object *obj)
 	return size;
 }
 
+static const char *stringify_page_sizes(unsigned int page_sizes)
+{
+	switch (page_sizes) {
+	case I915_GTT_PAGE_SIZE_4K:
+		return "4K";
+	case I915_GTT_PAGE_SIZE_64K:
+		return "64K";
+	case I915_GTT_PAGE_SIZE_2M:
+		return "2M";
+	case I915_GTT_PAGE_SIZE_1G:
+		return "1G";
+	default:
+		/* mixed-mode? */
+		if (page_sizes)
+			return "M";
+
+		return "";
+	}
+}
+
 static void
 describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 {
@@ -154,9 +174,10 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		if (!drm_mm_node_allocated(&vma->node))
 			continue;
 
-		seq_printf(m, " (%sgtt offset: %08llx, size: %08llx",
+		seq_printf(m, " (%sgtt offset: %08llx, size: %08llx, pages: %s",
 			   i915_vma_is_ggtt(vma) ? "g" : "pp",
-			   vma->node.start, vma->node.size);
+			   vma->node.start, vma->node.size,
+			   stringify_page_sizes(vma->page_sizes.gtt));
 		if (i915_vma_is_ggtt(vma)) {
 			switch (vma->ggtt_view.type) {
 			case I915_GGTT_VIEW_NORMAL:
@@ -401,8 +422,8 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
 	struct drm_device *dev = &dev_priv->drm;
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	u32 count, mapped_count, purgeable_count, dpy_count;
-	u64 size, mapped_size, purgeable_size, dpy_size;
+	u32 count, mapped_count, purgeable_count, dpy_count, huge_count;
+	u64 size, mapped_size, purgeable_size, dpy_size, huge_size;
 	struct drm_i915_gem_object *obj;
 	struct drm_file *file;
 	int ret;
@@ -418,6 +439,7 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 	size = count = 0;
 	mapped_size = mapped_count = 0;
 	purgeable_size = purgeable_count = 0;
+	huge_size = huge_count = 0;
 	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_link) {
 		size += obj->base.size;
 		++count;
@@ -431,6 +453,11 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 			mapped_count++;
 			mapped_size += obj->base.size;
 		}
+
+		if (obj->mm.page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			huge_count++;
+			huge_size += obj->base.size;
+		}
 	}
 	seq_printf(m, "%u unbound objects, %llu bytes\n", count, size);
 
@@ -453,6 +480,11 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 			mapped_count++;
 			mapped_size += obj->base.size;
 		}
+
+		if (obj->mm.page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			huge_count++;
+			huge_size += obj->base.size;
+		}
 	}
 	seq_printf(m, "%u bound objects, %llu bytes\n",
 		   count, size);
@@ -460,6 +492,8 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 		   purgeable_count, purgeable_size);
 	seq_printf(m, "%u mapped objects, %llu bytes\n",
 		   mapped_count, mapped_size);
+	seq_printf(m, "%u huge-paged objects, %llu bytes\n",
+		   huge_count, huge_size);
 	seq_printf(m, "%u display objects (pinned), %llu bytes\n",
 		   dpy_count, dpy_size);
 
-- 
2.9.4


* [PATCH 16/21] drm/i915/selftests: huge page tests
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (14 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 15/21] drm/i915/debugfs: include some gtt page size metrics Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:48   ` Chris Wilson
  2017-07-05 15:40   ` Chris Wilson
  2017-07-03 14:14 ` [PATCH 17/21] drm/i915/selftests: mix huge pages Matthew Auld
                   ` (5 subsequent siblings)
  21 siblings, 2 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

v2: mock test page support configurations and add MI_STORE_DWORD test

v3: run all mockable huge page tests on all platforms via the mock_device

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c                    |   1 +
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 931 +++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |   1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
 4 files changed, 934 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 09da46178c31..183657f9b096 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5377,6 +5377,7 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 #include "selftests/scatterlist.c"
 #include "selftests/mock_gem_device.c"
 #include "selftests/huge_gem_object.c"
+#include "selftests/huge_pages.c"
 #include "selftests/i915_gem_object.c"
 #include "selftests/i915_gem_coherency.c"
 #endif
diff --git a/drivers/gpu/drm/i915/selftests/huge_pages.c b/drivers/gpu/drm/i915/selftests/huge_pages.c
new file mode 100644
index 000000000000..159e8213a767
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/huge_pages.c
@@ -0,0 +1,931 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include <linux/prime_numbers.h>
+
+#include "mock_drm.h"
+
+static const unsigned int page_sizes[] = {
+	I915_GTT_PAGE_SIZE_1G,
+	I915_GTT_PAGE_SIZE_2M,
+	I915_GTT_PAGE_SIZE_64K,
+	I915_GTT_PAGE_SIZE_4K,
+};
+
+static unsigned int get_largest_page_size(struct drm_i915_private *i915,
+					  size_t rem)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
+		unsigned int page_size = page_sizes[i];
+
+		if (HAS_PAGE_SIZE(i915, page_size) && rem >= page_size)
+			return page_size;
+	}
+
+	GEM_BUG_ON(1);
+}
+
+static struct sg_table *
+fake_get_huge_pages(struct drm_i915_gem_object *obj,
+		    unsigned int *sg_mask)
+{
+#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	const unsigned long max = obj->base.size >> PAGE_SHIFT;
+	struct sg_table *st;
+	struct scatterlist *sg;
+	size_t rem;
+
+	st = kmalloc(sizeof(*st), GFP);
+	if (!st)
+		return ERR_PTR(-ENOMEM);
+
+	if (sg_alloc_table(st, max, GFP)) {
+		kfree(st);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	/* Use optimal page sized chunks to fill in the sg table */
+	rem = obj->base.size;
+	sg = st->sgl;
+	st->nents = 0;
+	do {
+		unsigned int page_size = get_largest_page_size(i915, rem);
+		unsigned int len = page_size * (rem / page_size);
+
+		sg->offset = 0;
+		sg->length = len;
+		sg_dma_len(sg) = len;
+		sg_dma_address(sg) = page_size;
+
+		*sg_mask |= len;
+
+		st->nents++;
+
+		rem -= len;
+		if (!rem) {
+			sg_mark_end(sg);
+			break;
+		}
+
+		sg = sg_next(sg);
+	} while (1);
+
+	obj->mm.madv = I915_MADV_DONTNEED;
+
+	return st;
+#undef GFP
+}
+
+static void fake_free_huge_pages(struct drm_i915_gem_object *obj,
+				 struct sg_table *pages)
+{
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+static void fake_put_huge_pages(struct drm_i915_gem_object *obj,
+				struct sg_table *pages)
+{
+	fake_free_huge_pages(obj, pages);
+	obj->mm.dirty = false;
+	obj->mm.madv = I915_MADV_WILLNEED;
+}
+
+static const struct drm_i915_gem_object_ops fake_ops = {
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
+	.get_pages = fake_get_huge_pages,
+	.put_pages = fake_put_huge_pages,
+};
+
+static struct drm_i915_gem_object *
+fake_huge_pages_object(struct drm_i915_private *i915, u64 size)
+{
+	struct drm_i915_gem_object *obj;
+
+	GEM_BUG_ON(!size);
+	GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE));
+
+	if (overflows_type(size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc(i915);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, size);
+	i915_gem_object_init(obj, &fake_ops);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+	obj->cache_level = I915_CACHE_NONE;
+
+	return obj;
+}
+
+static int igt_mock_ppgtt_huge_fill(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	struct drm_i915_gem_object *obj;
+	unsigned long max_pages = ppgtt->base.total >> PAGE_SHIFT;
+	unsigned long page_num;
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	for_each_prime_number_from(page_num, 1, max_pages) {
+		unsigned int expected_gtt = 0;
+		struct i915_vma *vma;
+		size_t size = page_num << PAGE_SHIFT;
+		int i;
+
+		obj = fake_huge_pages_object(i915, size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		GEM_BUG_ON(obj->base.size != size);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		GEM_BUG_ON(!obj->mm.page_sizes.sg);
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_unpin;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_USER);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		GEM_BUG_ON(obj->mm.page_sizes.gtt);
+		GEM_BUG_ON(!vma->page_sizes.sg);
+		GEM_BUG_ON(!vma->page_sizes.phys);
+
+		/* Figure out the expected gtt page size knowing that we go from
+		 * largest to smallest page size sg chunks, and that we align to
+		 * the largest page size.
+		 */
+		for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
+			unsigned int page_size = page_sizes[i];
+
+			if (HAS_PAGE_SIZE(i915, page_size) && size >= page_size) {
+				expected_gtt |= page_size;
+				size &= page_size-1;
+			}
+		}
+
+		GEM_BUG_ON(!expected_gtt);
+		GEM_BUG_ON(size);
+
+		if (expected_gtt & I915_GTT_PAGE_SIZE_4K)
+			expected_gtt &= ~I915_GTT_PAGE_SIZE_64K;
+
+		GEM_BUG_ON(vma->page_sizes.gtt != expected_gtt);
+
+		if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
+			GEM_BUG_ON(!IS_ALIGNED(vma->node.start,
+					       I915_GTT_PAGE_SIZE_2M));
+			GEM_BUG_ON(!IS_ALIGNED(vma->node.size,
+					       I915_GTT_PAGE_SIZE_2M));
+		}
+
+		i915_vma_unpin(vma);
+		i915_vma_close(vma);
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+
+		if (igt_timeout(end_time,
+				"%s timed out at size %lx\n",
+				__func__, obj->base.size))
+			break;
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_mock_ppgtt_misaligned_dma(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	unsigned long supported = INTEL_INFO(i915)->page_size_mask;
+	struct drm_i915_gem_object *obj;
+	int err;
+	int bit;
+
+	/* Sanity check dma misalignment for huge pages -- the dma addresses we
+	 * insert into the paging structures need to always respect the page
+	 * size alignment.
+	 */
+
+	bit = ilog2(I915_GTT_PAGE_SIZE_64K);
+
+	for_each_set_bit_from(bit, &supported, BITS_PER_LONG) {
+		IGT_TIMEOUT(end_time);
+		unsigned int page_size = BIT(bit);
+		unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
+		struct i915_vma *vma;
+		unsigned int offset;
+		unsigned int size =
+			round_up(page_size, I915_GTT_PAGE_SIZE_2M) << 1;
+
+		obj = fake_huge_pages_object(i915, size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		GEM_BUG_ON(obj->base.size != size);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		GEM_BUG_ON(!(obj->mm.page_sizes.sg & page_size));
+
+		/* Force the page size for this object */
+		obj->mm.page_sizes.sg = page_size;
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_unpin;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, flags);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		GEM_BUG_ON(vma->page_sizes.gtt != page_size);
+
+		i915_vma_unpin(vma);
+		err = i915_vma_unbind(vma);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		/* Try all the other valid offsets until the next boundary --
+		 * should always fall back to using 4K pages.
+		 */
+		for (offset = 4096; offset < page_size; offset += 4096) {
+			err = i915_vma_pin(vma, 0, 0, flags | offset);
+			if (err) {
+				i915_vma_close(vma);
+				goto out_unpin;
+			}
+
+			GEM_BUG_ON(vma->page_sizes.gtt != I915_GTT_PAGE_SIZE_4K);
+
+			i915_vma_unpin(vma);
+			err = i915_vma_unbind(vma);
+			if (err) {
+				i915_vma_close(vma);
+				goto out_unpin;
+			}
+
+			if (igt_timeout(end_time,
+					"%s timed out at offset %x with page-size %x\n",
+					__func__, offset, page_size))
+				break;
+		}
+
+		i915_vma_close(vma);
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_mock_ppgtt_64K(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	struct drm_i915_gem_object *obj;
+	const struct object_info {
+		unsigned int size;
+		unsigned int gtt;
+		unsigned int offset;
+	} objects[] = {
+		/* Cases with forced padding/alignment */
+		{
+			.size = SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_64K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_64K + SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M - SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M + SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_64K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M + SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_64K | I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		/* Try without any forced padding/alignment */
+		{
+			.size = SZ_64K,
+			.offset = SZ_2M,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+		},
+		{
+			.size = SZ_128K,
+			.offset = SZ_2M - SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+		},
+	};
+	int i;
+	int err;
+
+	if (!HAS_PAGE_SIZE(i915, I915_GTT_PAGE_SIZE_64K))
+		return 0;
+
+	/* Sanity check some of the trickiness with 64K pages -- either we can
+	 * safely mark the whole page-table (2M block) as 64K, or we have to
+	 * always fall back to 4K.
+	 */
+
+	for (i = 0; i < ARRAY_SIZE(objects); ++i) {
+		unsigned int size = objects[i].size;
+		unsigned int expected_gtt = objects[i].gtt;
+		unsigned int offset = objects[i].offset;
+		struct i915_vma *vma;
+		int flags = PIN_USER;
+
+		obj = fake_huge_pages_object(i915, size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		GEM_BUG_ON(!obj->mm.page_sizes.sg);
+
+		/* Disable 2M pages -- We only want to use 64K/4K pages for
+		 * this test.
+		 */
+		obj->mm.page_sizes.sg &= ~I915_GTT_PAGE_SIZE_2M;
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_unpin;
+		}
+
+		if (offset)
+			flags |= PIN_OFFSET_FIXED | offset;
+
+		err = i915_vma_pin(vma, 0, 0, flags);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		GEM_BUG_ON(obj->mm.page_sizes.gtt);
+		GEM_BUG_ON(!vma->page_sizes.sg);
+		GEM_BUG_ON(!vma->page_sizes.phys);
+
+		GEM_BUG_ON(vma->page_sizes.gtt != expected_gtt);
+
+		if (!offset && vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
+			GEM_BUG_ON(!IS_ALIGNED(vma->node.start,
+					       I915_GTT_PAGE_SIZE_2M));
+			GEM_BUG_ON(!IS_ALIGNED(vma->node.size,
+					       I915_GTT_PAGE_SIZE_2M));
+		}
+
+		i915_vma_unpin(vma);
+		i915_vma_close(vma);
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_mock_exhaust_device_supported_pages(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	unsigned int saved_mask = INTEL_INFO(i915)->page_size_mask;
+	struct drm_i915_gem_object *obj;
+	int i, j;
+	int err;
+
+	/* Sanity check creating objects with every valid page support
+	 * combination for our mock device.
+	 */
+
+	for (i = 1; i < BIT(ARRAY_SIZE(page_sizes)); i++) {
+		unsigned int combination = 0;
+		struct i915_vma *vma;
+
+		for (j = 0; j < ARRAY_SIZE(page_sizes); j++) {
+			if (i & BIT(j))
+				combination |= page_sizes[j];
+		}
+
+		mkwrite_device_info(i915)->page_size_mask = combination;
+
+		obj = fake_huge_pages_object(i915, combination);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			goto out_device;
+		}
+
+		GEM_BUG_ON(obj->base.size != combination);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		GEM_BUG_ON(obj->mm.page_sizes.sg != combination);
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_unpin;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_USER);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		GEM_BUG_ON(obj->mm.page_sizes.gtt);
+		GEM_BUG_ON(!vma->page_sizes.sg);
+		GEM_BUG_ON(!vma->page_sizes.phys);
+
+		GEM_BUG_ON(vma->page_sizes.gtt & ~combination);
+
+		i915_vma_unpin(vma);
+		i915_vma_close(vma);
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	goto out_device;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+out_device:
+	mkwrite_device_info(i915)->page_size_mask = saved_mask;
+
+	return err;
+}
+
+static struct i915_vma *
+gpu_write_dw(struct i915_vma *vma, u64 offset, u32 value)
+{
+	struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *batch;
+	unsigned int size;
+	u32 *cmd;
+	int err;
+
+	GEM_BUG_ON(INTEL_GEN(i915) < 8);
+
+	size = 5 * sizeof(u32);
+	size = round_up(size, PAGE_SIZE);
+	obj = i915_gem_object_create_internal(i915, size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		goto err;
+	}
+
+	offset += vma->node.start;
+
+	*cmd++ = MI_STORE_DWORD_IMM_GEN4;
+	*cmd++ = lower_32_bits(offset);
+	*cmd++ = upper_32_bits(offset);
+	*cmd++ = value;
+
+	*cmd = MI_BATCH_BUFFER_END;
+	i915_gem_object_unpin_map(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		goto err;
+
+	batch = i915_vma_instance(obj, vma->vm, NULL);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		goto err;
+	}
+
+	err = i915_vma_pin(batch, 0, 0, PIN_USER);
+	if (err)
+		goto err;
+
+	return batch;
+
+err:
+	i915_gem_object_put(obj);
+
+	return ERR_PTR(err);
+}
+
+static int gpu_write(struct i915_vma *vma,
+		     struct i915_gem_context *ctx,
+		     unsigned long offset,
+		     u32 value)
+{
+	struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *batch;
+	int flags = 0;
+	int err;
+
+	batch = gpu_write_dw(vma, offset, value);
+	if (IS_ERR(batch))
+		return PTR_ERR(batch);
+
+	rq = i915_gem_request_alloc(i915->engine[RCS], ctx);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto err_batch;
+	}
+
+	err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
+	if (err)
+		goto err_request;
+
+	err = i915_switch_context(rq);
+	if (err)
+		goto err_request;
+
+	err = rq->engine->emit_bb_start(rq,
+			batch->node.start, batch->node.size,
+			flags);
+	if (err)
+		goto err_request;
+
+	i915_vma_move_to_active(batch, rq, 0);
+	i915_gem_object_set_active_reference(batch->obj);
+	i915_vma_unpin(batch);
+	i915_vma_close(batch);
+
+	i915_vma_move_to_active(vma, rq, 0);
+
+	reservation_object_lock(vma->obj->resv, NULL);
+	reservation_object_add_excl_fence(vma->obj->resv, &rq->fence);
+	reservation_object_unlock(vma->obj->resv);
+
+	__i915_add_request(rq, true);
+
+	return 0;
+
+err_request:
+	__i915_add_request(rq, false);
+err_batch:
+	i915_vma_unpin(batch);
+
+	return err;
+}
+
+static inline int igt_can_allocate_thp(struct drm_i915_private *i915)
+{
+	return HAS_PAGE_SIZE(i915, I915_GTT_PAGE_SIZE_2M) &&
+		has_transparent_hugepage();
+}
+
+static int igt_ppgtt_write_huge(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	struct drm_i915_gem_object *obj;
+	const unsigned int page_sizes[] = {
+		I915_GTT_PAGE_SIZE_64K,
+		I915_GTT_PAGE_SIZE_2M,
+	};
+	struct i915_vma *vma;
+	int err = 0;
+	int i;
+
+	if (!igt_can_allocate_thp(i915))
+		return 0;
+
+	/* Sanity check that the HW uses huge-pages correctly */
+
+	ppgtt = i915->kernel_context->ppgtt;
+
+	obj = i915_gem_object_create(i915, I915_GTT_PAGE_SIZE_2M);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_pin_pages(obj);
+	if (err)
+		goto out_put;
+
+	GEM_BUG_ON(!obj->mm.page_sizes.sg);
+
+	if (obj->mm.page_sizes.sg < I915_GTT_PAGE_SIZE_2M) {
+		pr_err("Failed to allocate huge pages\n");
+		err = -EINVAL;
+		goto out_unpin;
+	}
+
+	vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto out_unpin;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
+		unsigned int page_size = page_sizes[i];
+		u32 *map;
+
+		if (!HAS_PAGE_SIZE(i915, page_size))
+			continue;
+
+		/* Force the page size */
+		obj->mm.page_sizes.sg = page_size;
+
+		err = i915_gem_object_set_to_gtt_domain(obj, true);
+		if (err)
+			goto out_close;
+
+		err = i915_vma_pin(vma, 0, 0, PIN_USER);
+		if (err)
+			goto out_close;
+
+		GEM_BUG_ON(obj->mm.page_sizes.gtt);
+		GEM_BUG_ON(vma->page_sizes.sg != page_size);
+		GEM_BUG_ON(!vma->page_sizes.phys);
+
+		GEM_BUG_ON(vma->page_sizes.gtt != page_size);
+		GEM_BUG_ON(!IS_ALIGNED(vma->node.start, I915_GTT_PAGE_SIZE_2M));
+		GEM_BUG_ON(!IS_ALIGNED(vma->node.size, I915_GTT_PAGE_SIZE_2M));
+
+		err = gpu_write(vma, i915->kernel_context, 0, page_size);
+		if (err) {
+			i915_vma_unpin(vma);
+			goto out_close;
+		}
+
+		err = i915_gem_object_set_to_cpu_domain(obj, false);
+		if (err) {
+			i915_vma_unpin(vma);
+			goto out_close;
+		}
+
+		i915_vma_unpin(vma);
+		err = i915_vma_unbind(vma);
+		if (err)
+			goto out_close;
+
+		map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		if (IS_ERR(map)) {
+			err = PTR_ERR(map);
+			goto out_close;
+		}
+
+		GEM_BUG_ON(map[0] != page_size);
+
+		i915_gem_object_unpin_map(obj);
+	}
+
+out_close:
+	i915_vma_close(vma);
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_ppgtt_gemfs_huge(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	struct drm_i915_gem_object *obj;
+	const unsigned int object_sizes[] = {
+		I915_GTT_PAGE_SIZE_2M,
+		I915_GTT_PAGE_SIZE_2M + I915_GTT_PAGE_SIZE_4K,
+	};
+	int err;
+	int i;
+
+	if (!igt_can_allocate_thp(i915))
+		return 0;
+
+	/* Sanity check THP through gemfs */
+
+	for (i = 0; i < ARRAY_SIZE(object_sizes); ++i) {
+		unsigned int size = object_sizes[i];
+		struct i915_vma *vma;
+
+		obj = i915_gem_object_create(i915, size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		GEM_BUG_ON(!obj->mm.page_sizes.sg);
+
+		if (obj->mm.page_sizes.sg < size) {
+			pr_err("Failed to allocate huge pages\n");
+			err = -EINVAL;
+			goto out_unpin;
+		}
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_unpin;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_USER);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		GEM_BUG_ON(obj->mm.page_sizes.gtt);
+		GEM_BUG_ON(!vma->page_sizes.sg);
+		GEM_BUG_ON(!vma->page_sizes.phys);
+
+		GEM_BUG_ON(vma->page_sizes.gtt != size);
+		GEM_BUG_ON(!IS_ALIGNED(vma->node.start, I915_GTT_PAGE_SIZE_2M));
+
+		if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
+			GEM_BUG_ON(!IS_ALIGNED(vma->node.size,
+					       I915_GTT_PAGE_SIZE_2M));
+		}
+
+		i915_vma_unpin(vma);
+		i915_vma_close(vma);
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+int i915_gem_huge_page_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_mock_ppgtt_huge_fill),
+		SUBTEST(igt_mock_ppgtt_misaligned_dma),
+		SUBTEST(igt_mock_ppgtt_64K),
+		SUBTEST(igt_mock_exhaust_device_supported_pages),
+	};
+	int saved_ppgtt = i915.enable_ppgtt;
+	struct drm_i915_private *dev_priv;
+	struct i915_hw_ppgtt *ppgtt;
+	int err;
+
+	dev_priv = mock_gem_device();
+	if (!dev_priv)
+		return -ENOMEM;
+
+	/* Pretend to be a device which supports the 48b PPGTT */
+	i915.enable_ppgtt = 3;
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, NULL, "mock");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto out_unlock;
+	}
+
+	GEM_BUG_ON(!i915_vm_is_48bit(&ppgtt->base));
+
+	err = i915_subtests(tests, ppgtt);
+
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+
+out_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	i915.enable_ppgtt = saved_ppgtt;
+
+	drm_dev_unref(&dev_priv->drm);
+
+	return err;
+}
+
+int i915_gem_huge_page_live_selftests(struct drm_i915_private *dev_priv)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_ppgtt_gemfs_huge),
+		SUBTEST(igt_ppgtt_write_huge),
+	};
+	struct i915_hw_ppgtt *ppgtt;
+	struct drm_file *file;
+	int err;
+
+	if (!USES_FULL_48BIT_PPGTT(dev_priv))
+		return 0;
+
+	file = mock_file(dev_priv);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, file->driver_priv, "live");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto out_unlock;
+	}
+
+	err = i915_subtests(tests, ppgtt);
+
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+
+out_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	mock_file_free(dev_priv, file);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 18b174d855ca..64acd7eccc5c 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -15,5 +15,6 @@ selftest(objects, i915_gem_object_live_selftests)
 selftest(dmabuf, i915_gem_dmabuf_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
 selftest(gtt, i915_gem_gtt_live_selftests)
+selftest(hugepages, i915_gem_huge_page_live_selftests)
 selftest(contexts, i915_gem_context_live_selftests)
 selftest(hangcheck, intel_hangcheck_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index fc74687501ba..9961b44f76ed 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -21,3 +21,4 @@ selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
 selftest(vma, i915_vma_mock_selftests)
 selftest(evict, i915_gem_evict_mock_selftests)
 selftest(gtt, i915_gem_gtt_mock_selftests)
+selftest(hugepages, i915_gem_huge_page_mock_selftests)
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 17/21] drm/i915/selftests: mix huge pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (15 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 16/21] drm/i915/selftests: huge page tests Matthew Auld
@ 2017-07-03 14:14 ` Matthew Auld
  2017-07-03 14:15 ` [PATCH 18/21] drm/i915: disable platform support for vGPU huge gtt pages Matthew Auld
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:14 UTC (permalink / raw)
  To: intel-gfx

Try to mix sg page sizes for 4K, 64K and 2M pages.

v2: s/BIT(x) >> 12/BIT(x) >> PAGE_SHIFT/

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/scatterlist.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
index 1cc5d2931753..cd6d2a16071f 100644
--- a/drivers/gpu/drm/i915/selftests/scatterlist.c
+++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
@@ -189,6 +189,20 @@ static unsigned int random(unsigned long n,
 	return 1 + (prandom_u32_state(rnd) % 1024);
 }
 
+static unsigned int random_page_size_pages(unsigned long n,
+					   unsigned long count,
+					   struct rnd_state *rnd)
+{
+	/* 4K, 64K, 2M */
+	static unsigned int page_count[] = {
+		BIT(12) >> PAGE_SHIFT,
+		BIT(16) >> PAGE_SHIFT,
+		BIT(21) >> PAGE_SHIFT,
+	};
+
+	return page_count[(prandom_u32_state(rnd) % 3)];
+}
+
 static inline bool page_contiguous(struct page *first,
 				   struct page *last,
 				   unsigned long npages)
@@ -252,6 +266,7 @@ static const npages_fn_t npages_funcs[] = {
 	grow,
 	shrink,
 	random,
+	random_page_size_pages,
 	NULL,
 };
 
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 18/21] drm/i915: disable platform support for vGPU huge gtt pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (16 preceding siblings ...)
  2017-07-03 14:14 ` [PATCH 17/21] drm/i915/selftests: mix huge pages Matthew Auld
@ 2017-07-03 14:15 ` Matthew Auld
  2017-07-03 14:15 ` [PATCH 19/21] drm/i915: enable platform support for 64K pages Matthew Auld
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:15 UTC (permalink / raw)
  To: intel-gfx

Currently gvt gtt handling doesn't support huge page entries, so disable
them for now.

Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 183657f9b096..6cb0a3b8988b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4846,6 +4846,15 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 
 	mutex_lock(&dev_priv->drm.struct_mutex);
 
+	/* If the guest supports the 48b PPGTT, then we need to fall back to
+	 * 4K pages, since gvt gtt handling doesn't support huge page entries
+	 * - we need to check whether the hypervisor mm can support huge
+	 * guest pages, or just do the emulation in gvt.
+	 */
+	if (USES_FULL_48BIT_PPGTT(dev_priv) && intel_vgpu_active(dev_priv))
+		mkwrite_device_info(dev_priv)->page_size_mask =
+			I915_GTT_PAGE_SIZE_4K;
+
 	dev_priv->mm.unordered_timeline = dma_fence_context_alloc(1);
 
 	if (!i915.enable_execlists) {
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 19/21] drm/i915: enable platform support for 64K pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (17 preceding siblings ...)
  2017-07-03 14:15 ` [PATCH 18/21] drm/i915: disable platform support for vGPU huge gtt pages Matthew Auld
@ 2017-07-03 14:15 ` Matthew Auld
  2017-07-03 14:15 ` [PATCH 20/21] drm/i915: enable platform support for 2M pages Matthew Auld
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:15 UTC (permalink / raw)
  To: intel-gfx

For gen9+, enable platform level support for 64K pages. Also enable for
mock testing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_pci.c                  | 3 ++-
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index b73c1eb778d1..d8fa06988914 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -364,7 +364,8 @@ static const struct intel_device_info intel_cherryview_info = {
 };
 
 #define GEN9_DEFAULT_PAGE_SIZES \
-	.page_size_mask = I915_GTT_PAGE_SIZE_4K
+	.page_size_mask = I915_GTT_PAGE_SIZE_4K | \
+			  I915_GTT_PAGE_SIZE_64K
 
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 0002ba28780c..370ded2a5166 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -149,7 +149,8 @@ struct drm_i915_private *mock_gem_device(void)
 	mkwrite_device_info(i915)->gen = -1;
 
 	mkwrite_device_info(i915)->page_size_mask =
-		I915_GTT_PAGE_SIZE_4K;
+		I915_GTT_PAGE_SIZE_4K |
+		I915_GTT_PAGE_SIZE_64K;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 20/21] drm/i915: enable platform support for 2M pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (18 preceding siblings ...)
  2017-07-03 14:15 ` [PATCH 19/21] drm/i915: enable platform support for 64K pages Matthew Auld
@ 2017-07-03 14:15 ` Matthew Auld
  2017-07-03 14:15 ` [PATCH 21/21] drm/i915: enable platform support for 1G pages Matthew Auld
  2017-07-03 14:55 ` ✓ Fi.CI.BAT: success for huge gtt pages (rev4) Patchwork
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:15 UTC (permalink / raw)
  To: intel-gfx

For gen8+ platforms which support the 48b PPGTT, enable platform level
support for 2M pages. Also enable for mock testing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_pci.c                  | 6 ++++--
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index d8fa06988914..bfe6a79be969 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -319,7 +319,8 @@ static const struct intel_device_info intel_haswell_info = {
 #define BDW_FEATURES \
 	HSW_FEATURES, \
 	BDW_COLORS, \
-	GEN_DEFAULT_PAGE_SIZES, \
+	.page_size_mask = I915_GTT_PAGE_SIZE_4K | \
+			  I915_GTT_PAGE_SIZE_2M, \
 	.has_logical_ring_contexts = 1, \
 	.has_full_48bit_ppgtt = 1, \
 	.has_64bit_reloc = 1, \
@@ -365,7 +366,8 @@ static const struct intel_device_info intel_cherryview_info = {
 
 #define GEN9_DEFAULT_PAGE_SIZES \
 	.page_size_mask = I915_GTT_PAGE_SIZE_4K | \
-			  I915_GTT_PAGE_SIZE_64K
+			  I915_GTT_PAGE_SIZE_64K | \
+			  I915_GTT_PAGE_SIZE_2M
 
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 370ded2a5166..499ec437cc15 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -150,7 +150,8 @@ struct drm_i915_private *mock_gem_device(void)
 
 	mkwrite_device_info(i915)->page_size_mask =
 		I915_GTT_PAGE_SIZE_4K |
-		I915_GTT_PAGE_SIZE_64K;
+		I915_GTT_PAGE_SIZE_64K |
+		I915_GTT_PAGE_SIZE_2M;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 21/21] drm/i915: enable platform support for 1G pages
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (19 preceding siblings ...)
  2017-07-03 14:15 ` [PATCH 20/21] drm/i915: enable platform support for 2M pages Matthew Auld
@ 2017-07-03 14:15 ` Matthew Auld
  2017-07-03 14:55 ` ✓ Fi.CI.BAT: success for huge gtt pages (rev4) Patchwork
  21 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-03 14:15 UTC (permalink / raw)
  To: intel-gfx

For gen8+ platforms which support the 48b PPGTT, enable platform level
support for 1G pages. Also enable for mock testing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_pci.c                  | 6 ++++--
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index bfe6a79be969..12363ae5da06 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -320,7 +320,8 @@ static const struct intel_device_info intel_haswell_info = {
 	HSW_FEATURES, \
 	BDW_COLORS, \
 	.page_size_mask = I915_GTT_PAGE_SIZE_4K | \
-			  I915_GTT_PAGE_SIZE_2M, \
+			  I915_GTT_PAGE_SIZE_2M | \
+			  I915_GTT_PAGE_SIZE_1G, \
 	.has_logical_ring_contexts = 1, \
 	.has_full_48bit_ppgtt = 1, \
 	.has_64bit_reloc = 1, \
@@ -367,7 +368,8 @@ static const struct intel_device_info intel_cherryview_info = {
 #define GEN9_DEFAULT_PAGE_SIZES \
 	.page_size_mask = I915_GTT_PAGE_SIZE_4K | \
 			  I915_GTT_PAGE_SIZE_64K | \
-			  I915_GTT_PAGE_SIZE_2M
+			  I915_GTT_PAGE_SIZE_2M | \
+			  I915_GTT_PAGE_SIZE_1G
 
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 499ec437cc15..a8327febf02f 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -151,7 +151,8 @@ struct drm_i915_private *mock_gem_device(void)
 	mkwrite_device_info(i915)->page_size_mask =
 		I915_GTT_PAGE_SIZE_4K |
 		I915_GTT_PAGE_SIZE_64K |
-		I915_GTT_PAGE_SIZE_2M;
+		I915_GTT_PAGE_SIZE_2M |
+		I915_GTT_PAGE_SIZE_1G;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
-- 
2.9.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 16/21] drm/i915/selftests: huge page tests
  2017-07-03 14:14 ` [PATCH 16/21] drm/i915/selftests: huge page tests Matthew Auld
@ 2017-07-03 14:48   ` Chris Wilson
  2017-07-06 11:05     ` Matthew Auld
  2017-07-05 15:40   ` Chris Wilson
  1 sibling, 1 reply; 34+ messages in thread
From: Chris Wilson @ 2017-07-03 14:48 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-07-03 15:14:58)
> +static struct i915_vma *
> +gpu_write_dw(struct i915_vma *vma, u64 offset, u32 value)
> +{
> +       struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
> +       struct drm_i915_gem_object *obj;
> +       struct i915_vma *batch;
> +       unsigned int size;
> +       u32 *cmd;
> +       int err;
> +
> +       GEM_BUG_ON(INTEL_GEN(i915) < 8);

Didn't you enable large pages for snb and earlier?

> +static int gpu_write(struct i915_vma *vma,
> +                    struct i915_gem_context *ctx,
> +                    unsigned long offset,
> +                    u32 value)
> +{
> +       struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
> +       struct drm_i915_gem_request *rq;
> +       struct i915_vma *batch;
> +       int flags = 0;
> +       int err;
> +
> +       batch = gpu_write_dw(vma, offset, value);
> +       if (IS_ERR(batch))
> +               return PTR_ERR(batch);
> +
> +       rq = i915_gem_request_alloc(i915->engine[RCS], ctx);
> +       if (IS_ERR(rq)) {
> +               err = PTR_ERR(rq);
> +               goto err_batch;
> +       }
> +
> +       err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
> +       if (err)
> +               goto err_request;
> +
> +       err = i915_switch_context(rq);
> +       if (err)
> +               goto err_request;
> +
> +       err = rq->engine->emit_bb_start(rq,
> +                       batch->node.start, batch->node.size,
> +                       flags);
> +       if (err)
> +               goto err_request;
> +
> +       i915_vma_move_to_active(batch, rq, 0);
> +       i915_gem_object_set_active_reference(batch->obj);
> +       i915_vma_unpin(batch);
> +       i915_vma_close(batch);
> +
> +       i915_vma_move_to_active(vma, rq, 0);
> +
> +       reservation_object_lock(vma->obj->resv, NULL);

vma->resv

> +       reservation_object_add_excl_fence(vma->obj->resv, &rq->fence);
> +       reservation_object_unlock(vma->obj->resv);
> +
> +       __i915_add_request(rq, true);
> +
> +       return 0;
> +
> +err_request:
> +       __i915_add_request(rq, false);
> +err_batch:
> +       i915_vma_unpin(batch);

Leaking batch. Just tie the lifetime of the batch to the request
immediately after allocating the request.
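
Roughly (untested sketch, just reordering the calls already in the patch):

	rq = i915_gem_request_alloc(i915->engine[RCS], ctx);
	if (IS_ERR(rq)) {
		err = PTR_ERR(rq);
		goto err_batch;
	}

	/* Hand the batch over to the request straight away, so the error
	 * paths below no longer have to unwind it by hand.
	 */
	i915_vma_move_to_active(batch, rq, 0);
	i915_gem_object_set_active_reference(batch->obj);
	i915_vma_unpin(batch);
	i915_vma_close(batch);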

> +
> +       return err;
> +}
> +
> +static inline int igt_can_allocate_thp(struct drm_i915_private *i915)
> +{
> +       return HAS_PAGE_SIZE(i915, I915_GTT_PAGE_SIZE_2M) &&
> +               has_transparent_hugepage();
> +}
> +
> +static int igt_ppgtt_write_huge(void *arg)
> +{
> +       struct i915_hw_ppgtt *ppgtt = arg;
> +       struct drm_i915_private *i915 = ppgtt->base.i915;
> +       struct drm_i915_gem_object *obj;
> +       const unsigned int page_sizes[] = {
> +               I915_GTT_PAGE_SIZE_64K,
> +               I915_GTT_PAGE_SIZE_2M,
> +       };
> +       struct i915_vma *vma;
> +       int err = 0;
> +       int i;
> +
> +       if (!igt_can_allocate_thp(i915))
> +               return 0;
> +
> +       /* Sanity check that the HW uses huge-pages correctly */
> +
> +       ppgtt = i915->kernel_context->ppgtt;
> +
> +       obj = i915_gem_object_create(i915, I915_GTT_PAGE_SIZE_2M);
> +       if (IS_ERR(obj))
> +               return PTR_ERR(obj);

2M is MAX_ORDER, so you don't need thp for this and can just use
i915_gem_object_create_internal() (or both, once to test without relying
on thp, perhaps more common, then once to test thp shmemfs).
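
i.e. something like (sketch):

	obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE_2M);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

and then run the same loop again with the shmem object to also cover the
thp path.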

> +       err = i915_gem_object_pin_pages(obj);
> +       if (err)
> +               goto out_put;
> +
> +       GEM_BUG_ON(!obj->mm.page_sizes.sg);
> +
> +       if (obj->mm.page_sizes.sg < I915_GTT_PAGE_SIZE_2M) {
> +               pr_err("Failed to allocate huge pages\n");
> +               err = -EINVAL;

Failure? Or just unable to test? Nothing forces the kernel to give you
contiguous space.
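
i.e. perhaps just skip rather than fail (sketch):

	if (obj->mm.page_sizes.sg < I915_GTT_PAGE_SIZE_2M) {
		pr_info("huge pages not allocated, skipping\n");
		err = 0;
		goto out_unpin;
	}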

> +               goto out_unpin;
> +       }
> +
> +       vma = i915_vma_instance(obj, &ppgtt->base, NULL);
> +       if (IS_ERR(vma)) {
> +               err = PTR_ERR(vma);
> +               goto out_unpin;
> +       }
> +
> +       for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
> +               unsigned int page_size = page_sizes[i];
> +               u32 *map;
> +
> +               if (!HAS_PAGE_SIZE(i915, page_size))
> +                       continue;
> +
> +               /* Force the page size */
> +               obj->mm.page_sizes.sg = page_size;
> +
> +               err = i915_gem_object_set_to_gtt_domain(obj, true);
> +               if (err)
> +                       goto out_close;

Gets in the way of the important details, make it coherent and get rid
of this.

> +               err = i915_vma_pin(vma, 0, 0, PIN_USER);
> +               if (err)
> +                       goto out_close;

Move the unbind above this:

/* Force the page size */
i915_vma_unbind(vma);

obj->mm.page_sizes.sg = page_size;

i915_vma_pin(vma);
GEM_BUG_ON(vma->page_sizes.sg != page_size);

> +
> +               GEM_BUG_ON(obj->mm.page_sizes.gtt);
> +               GEM_BUG_ON(vma->page_sizes.sg != page_size);
> +               GEM_BUG_ON(!vma->page_sizes.phys);
> +
> +               GEM_BUG_ON(vma->page_sizes.gtt != page_size);
> +               GEM_BUG_ON(!IS_ALIGNED(vma->node.start, I915_GTT_PAGE_SIZE_2M));
> +               GEM_BUG_ON(!IS_ALIGNED(vma->node.size, I915_GTT_PAGE_SIZE_2M));
> +
> +               err = gpu_write(vma, i915->kernel_context, 0, page_size);
> +               if (err) {
> +                       i915_vma_unpin(vma);
> +                       goto out_close;
> +               }
> +
> +               err = i915_gem_object_set_to_cpu_domain(obj, false);
> +               if (err) {
> +                       i915_vma_unpin(vma);
> +                       goto out_close;
> +               }

Again, this is getting in the way of the important details.

> +               i915_vma_unpin(vma);
> +               err = i915_vma_unbind(vma);
> +               if (err)
> +                       goto out_close;
> +
> +               map = i915_gem_object_pin_map(obj, I915_MAP_WB);
> +               if (IS_ERR(map)) {
> +                       err = PTR_ERR(map);
> +                       goto out_close;
> +               }

Do we need to test different PTE bits?

If you don't make this coherent, use

map = pin_map(obj);
set_to_cpu_domain(obj);

for_each_page()
	test

set_to_gtt_domain(obj);
unpin_map(obj);

> +
> +               GEM_BUG_ON(map[0] != page_size);

This is the essential test being performed. Check it, and report an
error properly. Check every page at different offsets!
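
Something along these lines (rough, untested):

	if (map[0] != page_size) {
		pr_err("huge gtt write: found %x, expected %x\n",
		       map[0], page_size);
		err = -EINVAL;
		i915_gem_object_unpin_map(obj);
		goto out_close;
	}

and ideally repeat the gpu_write() at several offsets within the object,
checking each dword that was written, instead of only offset 0.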

> +
> +               i915_gem_object_unpin_map(obj);
> +       }
> +
> +out_close:
> +       i915_vma_close(vma);
> +out_unpin:
> +       i915_gem_object_unpin_pages(obj);
> +out_put:
> +       i915_gem_object_put(obj);
> +
> +       return err;
> +}
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* ✓ Fi.CI.BAT: success for huge gtt pages (rev4)
  2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (20 preceding siblings ...)
  2017-07-03 14:15 ` [PATCH 21/21] drm/i915: enable platform support for 1G pages Matthew Auld
@ 2017-07-03 14:55 ` Patchwork
  21 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2017-07-03 14:55 UTC (permalink / raw)
  To: Matthew Auld; +Cc: intel-gfx

== Series Details ==

Series: huge gtt pages (rev4)
URL   : https://patchwork.freedesktop.org/series/25118/
State : success

== Summary ==

Series 25118v4 huge gtt pages
https://patchwork.freedesktop.org/api/1.0/series/25118/revisions/4/mbox/

Test gem_exec_suspend:
        Subgroup basic-s4-devices:
                dmesg-warn -> PASS       (fi-kbl-7560u) fdo#100125
Test kms_cursor_legacy:
        Subgroup basic-busy-flip-before-cursor-legacy:
                pass       -> FAIL       (fi-snb-2600) fdo#100215
Test kms_pipe_crc_basic:
        Subgroup hang-read-crc-pipe-b:
                pass       -> DMESG-WARN (fi-pnv-d510) fdo#101597
        Subgroup suspend-read-crc-pipe-b:
                dmesg-warn -> PASS       (fi-byt-j1900) fdo#101516

fdo#100125 https://bugs.freedesktop.org/show_bug.cgi?id=100125
fdo#100215 https://bugs.freedesktop.org/show_bug.cgi?id=100215
fdo#101597 https://bugs.freedesktop.org/show_bug.cgi?id=101597
fdo#101516 https://bugs.freedesktop.org/show_bug.cgi?id=101516

fi-bdw-5557u     total:279  pass:264  dwarn:0   dfail:0   fail:3   skip:11  time:432s
fi-bdw-gvtdvm    total:279  pass:257  dwarn:8   dfail:0   fail:0   skip:14  time:427s
fi-blb-e6850     total:279  pass:224  dwarn:1   dfail:0   fail:0   skip:54  time:351s
fi-bsw-n3050     total:279  pass:239  dwarn:0   dfail:0   fail:3   skip:36  time:517s
fi-bxt-j4205     total:279  pass:256  dwarn:0   dfail:0   fail:3   skip:19  time:489s
fi-byt-j1900     total:279  pass:251  dwarn:0   dfail:0   fail:3   skip:24  time:471s
fi-byt-n2820     total:279  pass:246  dwarn:1   dfail:0   fail:3   skip:28  time:467s
fi-glk-2a        total:279  pass:256  dwarn:0   dfail:0   fail:3   skip:19  time:579s
fi-hsw-4770      total:279  pass:259  dwarn:0   dfail:0   fail:3   skip:16  time:424s
fi-hsw-4770r     total:279  pass:259  dwarn:0   dfail:0   fail:3   skip:16  time:410s
fi-ilk-650       total:279  pass:225  dwarn:0   dfail:0   fail:3   skip:50  time:411s
fi-ivb-3520m     total:279  pass:257  dwarn:0   dfail:0   fail:3   skip:18  time:490s
fi-ivb-3770      total:279  pass:257  dwarn:0   dfail:0   fail:3   skip:18  time:467s
fi-kbl-7500u     total:279  pass:257  dwarn:0   dfail:0   fail:3   skip:18  time:453s
fi-kbl-7560u     total:279  pass:265  dwarn:0   dfail:0   fail:3   skip:10  time:555s
fi-kbl-r         total:279  pass:256  dwarn:1   dfail:0   fail:3   skip:18  time:562s
fi-pnv-d510      total:279  pass:222  dwarn:2   dfail:0   fail:0   skip:55  time:556s
fi-skl-6260u     total:279  pass:265  dwarn:0   dfail:0   fail:3   skip:10  time:455s
fi-skl-6700hq    total:279  pass:219  dwarn:1   dfail:0   fail:33  skip:24  time:300s
fi-skl-6700k     total:279  pass:257  dwarn:0   dfail:0   fail:3   skip:18  time:458s
fi-skl-6770hq    total:279  pass:265  dwarn:0   dfail:0   fail:3   skip:10  time:469s
fi-skl-gvtdvm    total:279  pass:266  dwarn:0   dfail:0   fail:0   skip:13  time:437s
fi-snb-2520m     total:279  pass:247  dwarn:0   dfail:0   fail:3   skip:28  time:531s
fi-snb-2600      total:279  pass:245  dwarn:0   dfail:0   fail:4   skip:29  time:396s

df0182c2c95385492772c6e4ace76b463298b8ca drm-tip: 2017y-07m-03d-13h-20m-24s UTC integration manifest
cc43739 drm/i915: enable platform support for 1G pages
bb814db drm/i915: enable platform support for 2M pages
f090dc2 drm/i915: enable platform support for 64K pages
1bd3408 drm/i915: disable platform support for vGPU huge gtt pages
f0d316e drm/i915/selftests: mix huge pages
2a782c7 drm/i915/selftests: huge page tests
fcf6ab2 drm/i915/debugfs: include some gtt page size metrics
9afbe00 drm/i915: accurate page size tracking for the ppgtt
add7b6f drm/i915: support 64K pages for the 48b PPGTT
b7d1e82 drm/i915: support 2M pages for the 48b PPGTT
9a5c5fd drm/i915: support 1G pages for the 48b PPGTT
3e40e96 drm/i915: disable GTT cache for 2M/1G pages
7b699cfc drm/i915: enable IPS bit for 64K pages
7c4cafe drm/i915: align 64K objects to 2M
e5a3fb5 drm/i915: align the vma start to the largest gtt page size
72b637d2 drm/i915: introduce vm set_pages/clear_pages
dc51240 drm/i915: introduce page_size members
7f554d3 drm/i915: introduce page_size_mask to dev_info
3ceaa93 drm/i915/gemfs: enable THP
05c509f drm/i915: introduce simple gemfs
b30d90d mm/shmem: introduce shmem_file_setup_with_mnt

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_5095/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 16/21] drm/i915/selftests: huge page tests
  2017-07-03 14:14 ` [PATCH 16/21] drm/i915/selftests: huge page tests Matthew Auld
  2017-07-03 14:48   ` Chris Wilson
@ 2017-07-05 15:40   ` Chris Wilson
  1 sibling, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2017-07-05 15:40 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-07-03 15:14:58)
> v2: mock test page support configurations and add MI_STORE_DWORD test
> 
> v3: run all mockable huge page tests on all platforms via the mock_device

Another thought is to have multiple objects in the ppgtt, to avoid any
issues hidden by effectively always using offset = 0. Or move the object
around.
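
e.g. one cheap way to get different offsets into play (rough sketch,
reusing the gpu_write() helper from the patch, names illustrative):

	u64 offset;

	for (offset = 0; offset < obj->base.size; offset += page_size) {
		err = gpu_write(vma, i915->kernel_context, offset,
				lower_32_bits(offset));
		if (err)
			break;
	}

so that every page of the mapping is written and checked at a distinct
offset, rather than always offset 0.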
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 16/21] drm/i915/selftests: huge page tests
  2017-07-03 14:48   ` Chris Wilson
@ 2017-07-06 11:05     ` Matthew Auld
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-07-06 11:05 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development, Matthew Auld

On 3 July 2017 at 15:48, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Quoting Matthew Auld (2017-07-03 15:14:58)
>> +static struct i915_vma *
>> +gpu_write_dw(struct i915_vma *vma, u64 offset, u32 value)
>> +{
>> +       struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
>> +       struct drm_i915_gem_object *obj;
>> +       struct i915_vma *batch;
>> +       unsigned int size;
>> +       u32 *cmd;
>> +       int err;
>> +
>> +       GEM_BUG_ON(INTEL_GEN(i915) < 8);
>
> Didn't you enable large pages for snb and earlier?

So run the huge_write test regardless of the supported gtt page sizes?

>
>> +static int gpu_write(struct i915_vma *vma,
>> +                    struct i915_gem_context *ctx,
>> +                    unsigned long offset,
>> +                    u32 value)
>> +{
>> +       struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
>> +       struct drm_i915_gem_request *rq;
>> +       struct i915_vma *batch;
>> +       int flags = 0;
>> +       int err;
>> +
>> +       batch = gpu_write_dw(vma, offset, value);
>> +       if (IS_ERR(batch))
>> +               return PTR_ERR(batch);
>> +
>> +       rq = i915_gem_request_alloc(i915->engine[RCS], ctx);
>> +       if (IS_ERR(rq)) {
>> +               err = PTR_ERR(rq);
>> +               goto err_batch;
>> +       }
>> +
>> +       err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
>> +       if (err)
>> +               goto err_request;
>> +
>> +       err = i915_switch_context(rq);
>> +       if (err)
>> +               goto err_request;
>> +
>> +       err = rq->engine->emit_bb_start(rq,
>> +                       batch->node.start, batch->node.size,
>> +                       flags);
>> +       if (err)
>> +               goto err_request;
>> +
>> +       i915_vma_move_to_active(batch, rq, 0);
>> +       i915_gem_object_set_active_reference(batch->obj);
>> +       i915_vma_unpin(batch);
>> +       i915_vma_close(batch);
>> +
>> +       i915_vma_move_to_active(vma, rq, 0);
>> +
>> +       reservation_object_lock(vma->obj->resv, NULL);
>
> vma->resv
>
>> +       reservation_object_add_excl_fence(vma->obj->resv, &rq->fence);
>> +       reservation_object_unlock(vma->obj->resv);
>> +
>> +       __i915_add_request(rq, true);
>> +
>> +       return 0;
>> +
>> +err_request:
>> +       __i915_add_request(rq, false);
>> +err_batch:
>> +       i915_vma_unpin(batch);
>
> Leaking batch. Just tie the lifetime of the batch to the request
> immediately after allocating the request.
>
>> +
>> +       return err;
>> +}
>> +
>> +static inline int igt_can_allocate_thp(struct drm_i915_private *i915)
>> +{
>> +       return HAS_PAGE_SIZE(i915, I915_GTT_PAGE_SIZE_2M) &&
>> +               has_transparent_hugepage();
>> +}
>> +
>> +static int igt_ppgtt_write_huge(void *arg)
>> +{
>> +       struct i915_hw_ppgtt *ppgtt = arg;
>> +       struct drm_i915_private *i915 = ppgtt->base.i915;
>> +       struct drm_i915_gem_object *obj;
>> +       const unsigned int page_sizes[] = {
>> +               I915_GTT_PAGE_SIZE_64K,
>> +               I915_GTT_PAGE_SIZE_2M,
>> +       };
>> +       struct i915_vma *vma;
>> +       int err = 0;
>> +       int i;
>> +
>> +       if (!igt_can_allocate_thp(i915))
>> +               return 0;
>> +
>> +       /* Sanity check that the HW uses huge-pages correctly */
>> +
>> +       ppgtt = i915->kernel_context->ppgtt;
>> +
>> +       obj = i915_gem_object_create(i915, I915_GTT_PAGE_SIZE_2M);
>> +       if (IS_ERR(obj))
>> +               return PTR_ERR(obj);
>
> 2M is MAX_ORDER, so you don't need thp for this and can just use
> i915_gem_object_create_internal() (or both, once to test without relying
> on thp, perhaps more common, then once to test thp shmemfs).
>
>> +       err = i915_gem_object_pin_pages(obj);
>> +       if (err)
>> +               goto out_put;
>> +
>> +       GEM_BUG_ON(!obj->mm.page_sizes.sg);
>> +
>> +       if (obj->mm.page_sizes.sg < I915_GTT_PAGE_SIZE_2M) {
>> +               pr_err("Failed to allocate huge pages\n");
>> +               err = -EINVAL;
>
> Failure? Or just unable to test? Nothing forces the kernel to give you
> contiguous space.
>
>> +               goto out_unpin;
>> +       }
>> +
>> +       vma = i915_vma_instance(obj, &ppgtt->base, NULL);
>> +       if (IS_ERR(vma)) {
>> +               err = PTR_ERR(vma);
>> +               goto out_unpin;
>> +       }
>> +
>> +       for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
>> +               unsigned int page_size = page_sizes[i];
>> +               u32 *map;
>> +
>> +               if (!HAS_PAGE_SIZE(i915, page_size))
>> +                       continue;
>> +
>> +               /* Force the page size */
>> +               obj->mm.page_sizes.sg = page_size;
>> +
>> +               err = i915_gem_object_set_to_gtt_domain(obj, true);
>> +               if (err)
>> +                       goto out_close;
>
> Gets in the way of the important details, make it coherent and get rid
> of this.

Just make it coherent?

>
>> +               err = i915_vma_pin(vma, 0, 0, PIN_USER);
>> +               if (err)
>> +                       goto out_close;
>
> Move the unbind above this:
>
> /* Force the page size */
> i915_vma_unbind(vma);
>
> obj->mm.page_sizes.sg = page_size;
>
> i915_vma_pin(vma);
> GEM_BUG_ON(vma->page_sizes.sg != page_size);
>
>> +
>> +               GEM_BUG_ON(obj->mm.page_sizes.gtt);
>> +               GEM_BUG_ON(vma->page_sizes.sg != page_size);
>> +               GEM_BUG_ON(!vma->page_sizes.phys);
>> +
>> +               GEM_BUG_ON(vma->page_sizes.gtt != page_size);
>> +               GEM_BUG_ON(!IS_ALIGNED(vma->node.start, I915_GTT_PAGE_SIZE_2M));
>> +               GEM_BUG_ON(!IS_ALIGNED(vma->node.size, I915_GTT_PAGE_SIZE_2M));
>> +
>> +               err = gpu_write(vma, i915->kernel_context, 0, page_size);
>> +               if (err) {
>> +                       i915_vma_unpin(vma);
>> +                       goto out_close;
>> +               }
>> +
>> +               err = i915_gem_object_set_to_cpu_domain(obj, false);
>> +               if (err) {
>> +                       i915_vma_unpin(vma);
>> +                       goto out_close;
>> +               }
>
> Again, this is getting in the way of the important details.
>
>> +               i915_vma_unpin(vma);
>> +               err = i915_vma_unbind(vma);
>> +               if (err)
>> +                       goto out_close;
>> +
>> +               map = i915_gem_object_pin_map(obj, I915_MAP_WB);
>> +               if (IS_ERR(map)) {
>> +                       err = PTR_ERR(map);
>> +                       goto out_close;
>> +               }
>
> Do we need to test different PTE bits?

So WB vs WC? Or are you talking about the gtt PTEs?

>
> If you don't make this coherent, use
>
> map = pin_map(obj);
> set_to_cpu_domain(obj);
>
> for_each_page()
>         test
>
> set_to_gtt_domain(obj);
> unpin_map(obj);
>
>> +
>> +               GEM_BUG_ON(map[0] != page_size);
>
> This is the essential test being performed. Check it, and report an
> error properly. Check every page at different offsets!
>
>> +
>> +               i915_gem_object_unpin_map(obj);
>> +       }
>> +
>> +out_close:
>> +       i915_vma_close(vma);
>> +out_unpin:
>> +       i915_gem_object_unpin_pages(obj);
>> +out_put:
>> +       i915_gem_object_put(obj);
>> +
>> +       return err;
>> +}
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 00/21] huge gtt pages
@ 2017-10-06 14:50 Matthew Auld
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-10-06 14:50 UTC (permalink / raw)
  To: intel-gfx

Some more bits of polish.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_sizes field to dev_info
  drm/i915: push set_pages down to the callers
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M pages
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: add support for 64K scratch page
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages

 drivers/gpu/drm/i915/Makefile                      |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |   61 +-
 drivers/gpu/drm/i915/i915_drv.h                    |   29 +-
 drivers/gpu/drm/i915/i915_gem.c                    |  126 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   18 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  275 +++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |   20 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |   18 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |   31 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |   16 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |   15 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |   74 +
 drivers/gpu/drm/i915/i915_gemfs.h                  |   34 +
 drivers/gpu/drm/i915/i915_pci.c                    |   21 +
 drivers/gpu/drm/i915/i915_reg.h                    |    3 +
 drivers/gpu/drm/i915/i915_vma.c                    |   49 +-
 drivers/gpu/drm/i915/i915_vma.h                    |    1 +
 drivers/gpu/drm/i915/intel_pm.c                    |   11 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |   14 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1715 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |   15 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |    9 +
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |   11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |   15 +
 include/linux/shmem_fs.h                           |    2 +
 mm/shmem.c                                         |   30 +-
 28 files changed, 2479 insertions(+), 137 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 00/21] huge gtt pages
  2017-10-05 15:18 Matthew Auld
@ 2017-10-05 20:31 ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2017-10-05 20:31 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-10-05 16:18:54)
> Some more bits of polish, just missing R-b's on patches 6 and 11.
> 
> Matthew Auld (21):
>   mm/shmem: introduce shmem_file_setup_with_mnt
>   drm/i915: introduce simple gemfs
>   drm/i915/gemfs: enable THP
>   drm/i915: introduce page_sizes field to dev_info
>   drm/i915: push set_pages down to the callers
>   drm/i915: introduce page_size members
>   drm/i915: introduce vm set_pages/clear_pages
>   drm/i915: align the vma start to the largest gtt page size
>   drm/i915: align 64K objects to 2M
>   drm/i915: enable IPS bit for 64K pages
>   drm/i915: disable GTT cache for 2M pages
>   drm/i915: support 2M pages for the 48b PPGTT
>   drm/i915: add support for 64K scratch page
>   drm/i915: support 64K pages for the 48b PPGTT
>   drm/i915: accurate page size tracking for the ppgtt
>   drm/i915/debugfs: include some gtt page size metrics
>   drm/i915/selftests: huge page tests
>   drm/i915/selftests: mix huge pages
>   drm/i915: disable platform support for vGPU huge gtt pages
>   drm/i915: enable platform support for 64K pages
>   drm/i915: enable platform support for 2M pages

Hmm, I really would like a GETPARAM to retrieve
INTEL_INFO()->info.page_sizes. That will be useful to expose the change
for testing, and for Vk to allocate its memfd correctly.
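
On the kernel side that should just be a new case in i915_getparam()
(param name purely illustrative):

	case I915_PARAM_PAGE_SIZES:
		value = INTEL_INFO(dev_priv)->page_sizes;
		break;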
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 00/21] huge gtt pages
@ 2017-10-05 15:18 Matthew Auld
  2017-10-05 20:31 ` Chris Wilson
  0 siblings, 1 reply; 34+ messages in thread
From: Matthew Auld @ 2017-10-05 15:18 UTC (permalink / raw)
  To: intel-gfx

Some more bits of polish, just missing R-b's on patches 6 and 11.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_sizes field to dev_info
  drm/i915: push set_pages down to the callers
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M pages
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: add support for 64K scratch page
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages

 drivers/gpu/drm/i915/Makefile                      |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |   61 +-
 drivers/gpu/drm/i915/i915_drv.h                    |   27 +-
 drivers/gpu/drm/i915/i915_gem.c                    |  126 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   18 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  275 +++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |   19 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |   18 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |   31 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |   16 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |   15 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |   74 +
 drivers/gpu/drm/i915/i915_gemfs.h                  |   34 +
 drivers/gpu/drm/i915/i915_pci.c                    |   21 +
 drivers/gpu/drm/i915/i915_reg.h                    |    3 +
 drivers/gpu/drm/i915/i915_vma.c                    |   49 +-
 drivers/gpu/drm/i915/i915_vma.h                    |    1 +
 drivers/gpu/drm/i915/intel_pm.c                    |    7 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |   14 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1712 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |   15 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |    9 +
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |   11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |   15 +
 include/linux/shmem_fs.h                           |    2 +
 mm/shmem.c                                         |   30 +-
 28 files changed, 2472 insertions(+), 134 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 00/21] huge gtt pages
@ 2017-09-29 16:10 Matthew Auld
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

The less contentious version of gemfs.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_sizes field to dev_info
  drm/i915: push set_pages down to the callers
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M pages
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: add support for 64K scratch page
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages

 drivers/gpu/drm/i915/Makefile                      |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |   61 +-
 drivers/gpu/drm/i915/i915_drv.h                    |   12 +-
 drivers/gpu/drm/i915/i915_gem.c                    |  124 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   22 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  277 +++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |   19 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |   18 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |   31 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |   16 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |   19 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |   73 +
 drivers/gpu/drm/i915/i915_gemfs.h                  |   34 +
 drivers/gpu/drm/i915/i915_pci.c                    |   23 +
 drivers/gpu/drm/i915/i915_reg.h                    |    3 +
 drivers/gpu/drm/i915/i915_vma.c                    |   49 +-
 drivers/gpu/drm/i915/i915_vma.h                    |    1 +
 drivers/gpu/drm/i915/intel_pm.c                    |    7 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |   14 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1711 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |   15 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |    9 +
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |   11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |   15 +
 include/linux/shmem_fs.h                           |    2 +
 mm/shmem.c                                         |   30 +-
 28 files changed, 2465 insertions(+), 134 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 00/21] huge gtt pages
@ 2017-09-22 17:32 Matthew Auld
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2017-09-22 17:32 UTC (permalink / raw)
  To: intel-gfx

Bunch of changes all round, mostly in the kselftest department. Of note, we
drop support for 1G pages for the time being, since testing has proven to be
a pita, and instead focus on getting the 64K and 2M support landed.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_sizes field to dev_info
  drm/i915: push set_pages down to the callers
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M pages
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: add support for 64K scratch page
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages

 drivers/gpu/drm/i915/Makefile                      |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |   61 +-
 drivers/gpu/drm/i915/i915_drv.h                    |   11 +-
 drivers/gpu/drm/i915/i915_gem.c                    |  137 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   22 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  257 ++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |   19 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |   18 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |   31 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |   16 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |   19 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |   66 +
 drivers/gpu/drm/i915/i915_gemfs.h                  |   34 +
 drivers/gpu/drm/i915/i915_pci.c                    |   23 +
 drivers/gpu/drm/i915/i915_reg.h                    |    3 +
 drivers/gpu/drm/i915/i915_vma.c                    |   47 +-
 drivers/gpu/drm/i915/i915_vma.h                    |    1 +
 drivers/gpu/drm/i915/intel_pm.c                    |    8 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |   14 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1636 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |   15 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |    9 +
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |   11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |   15 +
 include/linux/shmem_fs.h                           |    2 +
 mm/shmem.c                                         |   30 +-
 28 files changed, 2374 insertions(+), 134 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 00/21] huge gtt pages
  2017-07-25 19:21 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-08-12 11:55 ` Chris Wilson
  0 siblings, 0 replies; 34+ messages in thread
From: Chris Wilson @ 2017-08-12 11:55 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-07-25 20:21:12)
> Some updates to the kselftests as per Chris' comments.
> 
> Matthew Auld (21):
>   mm/shmem: introduce shmem_file_setup_with_mnt
>   drm/i915: introduce simple gemfs
>   drm/i915/gemfs: enable THP
>   drm/i915: introduce page_size_mask to dev_info
>   drm/i915: introduce page_size members
>   drm/i915: introduce vm set_pages/clear_pages
>   drm/i915: align the vma start to the largest gtt page size
>   drm/i915: align 64K objects to 2M
>   drm/i915: enable IPS bit for 64K pages
>   drm/i915: disable GTT cache for 2M/1G pages
>   drm/i915: support 1G pages for the 48b PPGTT
>   drm/i915: support 2M pages for the 48b PPGTT
>   drm/i915: support 64K pages for the 48b PPGTT
>   drm/i915: accurate page size tracking for the ppgtt
>   drm/i915/debugfs: include some gtt page size metrics
>   drm/i915/selftests: huge page tests
>   drm/i915/selftests: mix huge pages
>   drm/i915: disable platform support for vGPU huge gtt pages
>   drm/i915: enable platform support for 64K pages
>   drm/i915: enable platform support for 2M pages
>   drm/i915: enable platform support for 1G pages

memfd is about to gain the ability to create huge pages as well, and
that is being used by anv to allocate userptr objects (in pieces) so if
we want to tie those two together we also need a I915_GETPARAM to report
the i915->info.page_sizes;
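
On the userspace side that would then just be the usual getparam dance,
something like (param name hypothetical):

	int page_sizes = 0;
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_PAGE_SIZES, /* hypothetical */
		.value = &page_sizes,
	};

	drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp);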

For debugging purposes, I'm contemplating a
i915_gem_object_getparam_ioctl.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 00/21] huge gtt pages
@ 2017-07-25 19:21 Matthew Auld
  2017-08-12 11:55 ` Chris Wilson
  0 siblings, 1 reply; 34+ messages in thread
From: Matthew Auld @ 2017-07-25 19:21 UTC (permalink / raw)
  To: intel-gfx

Some updates to the kselftests as per Chris' comments.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_size_mask to dev_info
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M/1G pages
  drm/i915: support 1G pages for the 48b PPGTT
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages
  drm/i915: enable platform support for 1G pages

 drivers/gpu/drm/i915/Makefile                      |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |   42 +-
 drivers/gpu/drm/i915/i915_drv.h                    |    9 +-
 drivers/gpu/drm/i915/i915_gem.c                    |   98 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   17 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  195 +++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |   15 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |    5 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |   30 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |   13 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |   26 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |   67 ++
 drivers/gpu/drm/i915/i915_gemfs.h                  |   34 +
 drivers/gpu/drm/i915/i915_pci.c                    |   25 +
 drivers/gpu/drm/i915/i915_reg.h                    |    3 +
 drivers/gpu/drm/i915/i915_vma.c                    |   49 +-
 drivers/gpu/drm/i915/i915_vma.h                    |    1 +
 drivers/gpu/drm/i915/intel_pm.c                    |    9 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |    4 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1170 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |    3 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |   16 +-
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |   11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |   15 +
 include/linux/shmem_fs.h                           |    2 +
 mm/shmem.c                                         |   30 +-
 28 files changed, 1797 insertions(+), 95 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.13.3


end of thread

Thread overview: 34+ messages
2017-07-03 14:14 [PATCH 00/21] huge gtt pages Matthew Auld
2017-07-03 14:14 ` [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt Matthew Auld
2017-07-03 14:14 ` [PATCH 02/21] drm/i915: introduce simple gemfs Matthew Auld
2017-07-03 14:14   ` Matthew Auld
2017-07-03 14:14 ` [PATCH 03/21] drm/i915/gemfs: enable THP Matthew Auld
2017-07-03 14:14 ` [PATCH 04/21] drm/i915: introduce page_size_mask to dev_info Matthew Auld
2017-07-03 14:14 ` [PATCH 05/21] drm/i915: introduce page_size members Matthew Auld
2017-07-03 14:14 ` [PATCH 06/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
2017-07-03 14:14 ` [PATCH 07/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
2017-07-03 14:14 ` [PATCH 08/21] drm/i915: align 64K objects to 2M Matthew Auld
2017-07-03 14:14 ` [PATCH 09/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
2017-07-03 14:14 ` [PATCH 10/21] drm/i915: disable GTT cache for 2M/1G pages Matthew Auld
2017-07-03 14:14 ` [PATCH 11/21] drm/i915: support 1G pages for the 48b PPGTT Matthew Auld
2017-07-03 14:14 ` [PATCH 12/21] drm/i915: support 2M " Matthew Auld
2017-07-03 14:14 ` [PATCH 13/21] drm/i915: support 64K " Matthew Auld
2017-07-03 14:14 ` [PATCH 14/21] drm/i915: accurate page size tracking for the ppgtt Matthew Auld
2017-07-03 14:14 ` [PATCH 15/21] drm/i915/debugfs: include some gtt page size metrics Matthew Auld
2017-07-03 14:14 ` [PATCH 16/21] drm/i915/selftests: huge page tests Matthew Auld
2017-07-03 14:48   ` Chris Wilson
2017-07-06 11:05     ` Matthew Auld
2017-07-05 15:40   ` Chris Wilson
2017-07-03 14:14 ` [PATCH 17/21] drm/i915/selftests: mix huge pages Matthew Auld
2017-07-03 14:15 ` [PATCH 18/21] drm/i915: disable platform support for vGPU huge gtt pages Matthew Auld
2017-07-03 14:15 ` [PATCH 19/21] drm/i915: enable platform support for 64K pages Matthew Auld
2017-07-03 14:15 ` [PATCH 20/21] drm/i915: enable platform support for 2M pages Matthew Auld
2017-07-03 14:15 ` [PATCH 21/21] drm/i915: enable platform support for 1G pages Matthew Auld
2017-07-03 14:55 ` ✓ Fi.CI.BAT: success for huge gtt pages (rev4) Patchwork
2017-07-25 19:21 [PATCH 00/21] huge gtt pages Matthew Auld
2017-08-12 11:55 ` Chris Wilson
2017-09-22 17:32 Matthew Auld
2017-09-29 16:10 Matthew Auld
2017-10-05 15:18 Matthew Auld
2017-10-05 20:31 ` Chris Wilson
2017-10-06 14:50 Matthew Auld
