* [PATCH 00/21] huge gtt pages
@ 2017-09-29 16:10 Matthew Auld
  2017-09-29 16:10 ` [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt Matthew Auld
                   ` (22 more replies)
  0 siblings, 23 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

The less contentious version of gemfs.

Matthew Auld (21):
  mm/shmem: introduce shmem_file_setup_with_mnt
  drm/i915: introduce simple gemfs
  drm/i915/gemfs: enable THP
  drm/i915: introduce page_sizes field to dev_info
  drm/i915: push set_pages down to the callers
  drm/i915: introduce page_size members
  drm/i915: introduce vm set_pages/clear_pages
  drm/i915: align the vma start to the largest gtt page size
  drm/i915: align 64K objects to 2M
  drm/i915: enable IPS bit for 64K pages
  drm/i915: disable GTT cache for 2M pages
  drm/i915: support 2M pages for the 48b PPGTT
  drm/i915: add support for 64K scratch page
  drm/i915: support 64K pages for the 48b PPGTT
  drm/i915: accurate page size tracking for the ppgtt
  drm/i915/debugfs: include some gtt page size metrics
  drm/i915/selftests: huge page tests
  drm/i915/selftests: mix huge pages
  drm/i915: disable platform support for vGPU huge gtt pages
  drm/i915: enable platform support for 64K pages
  drm/i915: enable platform support for 2M pages

 drivers/gpu/drm/i915/Makefile                      |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c                |   61 +-
 drivers/gpu/drm/i915/i915_drv.h                    |   12 +-
 drivers/gpu/drm/i915/i915_gem.c                    |  124 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c             |   22 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c                |  277 +++-
 drivers/gpu/drm/i915/i915_gem_gtt.h                |   19 +-
 drivers/gpu/drm/i915/i915_gem_internal.c           |   18 +-
 drivers/gpu/drm/i915/i915_gem_object.h             |   31 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c             |   16 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c            |   19 +-
 drivers/gpu/drm/i915/i915_gemfs.c                  |   73 +
 drivers/gpu/drm/i915/i915_gemfs.h                  |   34 +
 drivers/gpu/drm/i915/i915_pci.c                    |   23 +
 drivers/gpu/drm/i915/i915_reg.h                    |    3 +
 drivers/gpu/drm/i915/i915_vma.c                    |   49 +-
 drivers/gpu/drm/i915/i915_vma.h                    |    1 +
 drivers/gpu/drm/i915/intel_pm.c                    |    7 +-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c   |   14 +-
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1711 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c      |   15 +-
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 drivers/gpu/drm/i915/selftests/mock_gem_device.c   |    9 +
 drivers/gpu/drm/i915/selftests/mock_gtt.c          |   11 +-
 drivers/gpu/drm/i915/selftests/scatterlist.c       |   15 +
 include/linux/shmem_fs.h                           |    2 +
 mm/shmem.c                                         |   30 +-
 28 files changed, 2465 insertions(+), 134 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

-- 
2.13.5

* [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10   ` Matthew Auld
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx
  Cc: Joonas Lahtinen, Chris Wilson, Dave Hansen, Kirill A . Shutemov,
	Hugh Dickins, linux-mm

We are planning to use our own tmpfs mount in i915 in place of the
shm_mnt, so that we can control the mount options, in particular
huge=, which we need in order to support huge GTT pages. Rather than
rolling our own version of __shmem_file_setup, it would be preferable
to simply hand shmem our mount and let it do the rest.
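
For illustration, the intended call pattern for a driver holding its
own tmpfs mount would be something like the following sketch (mnt, the
object name and size are placeholders; the real i915 usage lands later
in this series):

	struct file *filp;

	filp = shmem_file_setup_with_mnt(mnt, "my-object", size, VM_NORESERVE);
	if (IS_ERR(filp))
		return PTR_ERR(filp);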

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 include/linux/shmem_fs.h |  2 ++
 mm/shmem.c               | 30 ++++++++++++++++++++++--------
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index b6c3540e07bc..0937d9a7d8fb 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -53,6 +53,8 @@ extern struct file *shmem_file_setup(const char *name,
 					loff_t size, unsigned long flags);
 extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
 					    unsigned long flags);
+extern struct file *shmem_file_setup_with_mnt(struct vfsmount *mnt,
+		const char *name, loff_t size, unsigned long flags);
 extern int shmem_zero_setup(struct vm_area_struct *);
 extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
diff --git a/mm/shmem.c b/mm/shmem.c
index 07a1d22807be..3229d27503ec 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -4183,7 +4183,7 @@ static const struct dentry_operations anon_ops = {
 	.d_dname = simple_dname
 };
 
-static struct file *__shmem_file_setup(const char *name, loff_t size,
+static struct file *__shmem_file_setup(struct vfsmount *mnt, const char *name, loff_t size,
 				       unsigned long flags, unsigned int i_flags)
 {
 	struct file *res;
@@ -4192,8 +4192,8 @@ static struct file *__shmem_file_setup(const char *name, loff_t size,
 	struct super_block *sb;
 	struct qstr this;
 
-	if (IS_ERR(shm_mnt))
-		return ERR_CAST(shm_mnt);
+	if (IS_ERR(mnt))
+		return ERR_CAST(mnt);
 
 	if (size < 0 || size > MAX_LFS_FILESIZE)
 		return ERR_PTR(-EINVAL);
@@ -4205,8 +4205,8 @@ static struct file *__shmem_file_setup(const char *name, loff_t size,
 	this.name = name;
 	this.len = strlen(name);
 	this.hash = 0; /* will go */
-	sb = shm_mnt->mnt_sb;
-	path.mnt = mntget(shm_mnt);
+	sb = mnt->mnt_sb;
+	path.mnt = mntget(mnt);
 	path.dentry = d_alloc_pseudo(sb, &this);
 	if (!path.dentry)
 		goto put_memory;
@@ -4251,7 +4251,7 @@ static struct file *__shmem_file_setup(const char *name, loff_t size,
  */
 struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned long flags)
 {
-	return __shmem_file_setup(name, size, flags, S_PRIVATE);
+	return __shmem_file_setup(shm_mnt, name, size, flags, S_PRIVATE);
 }
 
 /**
@@ -4262,11 +4262,25 @@ struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned lon
  */
 struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
 {
-	return __shmem_file_setup(name, size, flags, 0);
+	return __shmem_file_setup(shm_mnt, name, size, flags, 0);
 }
 EXPORT_SYMBOL_GPL(shmem_file_setup);
 
 /**
+ * shmem_file_setup_with_mnt - get an unlinked file living in tmpfs
+ * @mnt: the tmpfs mount where the file will be created
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_file_setup_with_mnt(struct vfsmount *mnt, const char *name,
+				       loff_t size, unsigned long flags)
+{
+	return __shmem_file_setup(mnt, name, size, flags, 0);
+}
+EXPORT_SYMBOL_GPL(shmem_file_setup_with_mnt);
+
+/**
  * shmem_zero_setup - setup a shared anonymous mapping
  * @vma: the vma to be mmapped is prepared by do_mmap_pgoff
  */
@@ -4281,7 +4295,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
 	 * accessible to the user through its mapping, use S_PRIVATE flag to
 	 * bypass file security, in the same way as shmem_kernel_file_setup().
 	 */
-	file = __shmem_file_setup("dev/zero", size, vma->vm_flags, S_PRIVATE);
+	file = shmem_kernel_file_setup("dev/zero", size, vma->vm_flags);
 	if (IS_ERR(file))
 		return PTR_ERR(file);
 
-- 
2.13.5

* [PATCH 02/21] drm/i915: introduce simple gemfs
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-09-29 16:10   ` Matthew Auld
  2017-09-29 16:10   ` Matthew Auld
                     ` (21 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx
  Cc: Joonas Lahtinen, Chris Wilson, Dave Hansen, Kirill A . Shutemov,
	Hugh Dickins, linux-mm

Not a full-blown gemfs, just our very own tmpfs kernel mount. Doing so
moves us away from the shmemfs shm_mnt and gives us the much-needed
flexibility to do things like set our own mount options, namely huge=,
which should allow us to enable transparent huge pages for our
shmem-backed objects.

v2: various improvements suggested by Joonas

v3: move gemfs instance to i915.mm and simplify now that we have
file_setup_with_mnt

v4: fallback to tmpfs shm_mnt upon failure to setup gemfs

v5: make tmpfs fallback kinder
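
Note what the fallback buys us in practice: if the private mount can't
be created, i915_gemfs_init() fails, we emit a one-time DRM_NOTE, and
object creation simply falls back to the global shm_mnt via
shmem_file_setup(), losing only huge page support.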

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org
---
 drivers/gpu/drm/i915/Makefile                    |  1 +
 drivers/gpu/drm/i915/i915_drv.h                  |  5 +++
 drivers/gpu/drm/i915/i915_gem.c                  | 31 +++++++++++++-
 drivers/gpu/drm/i915/i915_gemfs.c                | 52 ++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gemfs.h                | 34 ++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.c |  4 ++
 6 files changed, 126 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.c
 create mode 100644 drivers/gpu/drm/i915/i915_gemfs.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 5182e3d5557d..980c41568f46 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -47,6 +47,7 @@ i915-y += i915_cmd_parser.o \
 	  i915_gem_tiling.o \
 	  i915_gem_timeline.o \
 	  i915_gem_userptr.o \
+	  i915_gemfs.o \
 	  i915_trace_points.o \
 	  i915_vma.o \
 	  intel_breadcrumbs.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7ca11318ac69..b50fe24454ed 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1508,6 +1508,11 @@ struct i915_gem_mm {
 	/** Usable portion of the GTT for GEM */
 	dma_addr_t stolen_base; /* limited to low memory (32-bit) */
 
+	/**
+	 * tmpfs instance used for shmem backed objects
+	 */
+	struct vfsmount *gemfs;
+
 	/** PPGTT used for aliasing the PPGTT with the GTT */
 	struct i915_hw_ppgtt *aliasing_ppgtt;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ab8c6946fea4..a6a0a7ee2df8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -35,6 +35,7 @@
 #include "intel_drv.h"
 #include "intel_frontbuffer.h"
 #include "intel_mocs.h"
+#include "i915_gemfs.h"
 #include <linux/dma-fence-array.h>
 #include <linux/kthread.h>
 #include <linux/reservation.h>
@@ -4251,6 +4252,29 @@ static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
 	.pwrite = i915_gem_object_pwrite_gtt,
 };
 
+static int i915_gem_object_create_shmem(struct drm_device *dev,
+					struct drm_gem_object *obj,
+					size_t size)
+{
+	struct drm_i915_private *i915 = to_i915(dev);
+	struct file *filp;
+
+	drm_gem_private_object_init(dev, obj, size);
+
+	if (i915->mm.gemfs)
+		filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size,
+						 VM_NORESERVE);
+	else
+		filp = shmem_file_setup("i915", size, VM_NORESERVE);
+
+	if (IS_ERR(filp))
+		return PTR_ERR(filp);
+
+	obj->filp = filp;
+
+	return 0;
+}
+
 struct drm_i915_gem_object *
 i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
 {
@@ -4275,7 +4299,7 @@ i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
 	if (obj == NULL)
 		return ERR_PTR(-ENOMEM);
 
-	ret = drm_gem_object_init(&dev_priv->drm, &obj->base, size);
+	ret = i915_gem_object_create_shmem(&dev_priv->drm, &obj->base, size);
 	if (ret)
 		goto fail;
 
@@ -4915,6 +4939,9 @@ i915_gem_load_init(struct drm_i915_private *dev_priv)
 
 	spin_lock_init(&dev_priv->fb_tracking.lock);
 
+	if (i915_gemfs_init(dev_priv))
+		DRM_NOTE("Unable to create a private tmpfs mountpoint, hugepage support will be disabled.\n");
+
 	return 0;
 
 err_priorities:
@@ -4953,6 +4980,8 @@ void i915_gem_load_cleanup(struct drm_i915_private *dev_priv)
 
 	/* And ensure that our DESTROY_BY_RCU slabs are truly destroyed */
 	rcu_barrier();
+
+	i915_gemfs_fini(dev_priv);
 }
 
 int i915_gem_freeze(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_gemfs.c b/drivers/gpu/drm/i915/i915_gemfs.c
new file mode 100644
index 000000000000..168d0bd98f60
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gemfs.c
@@ -0,0 +1,52 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/fs.h>
+#include <linux/mount.h>
+
+#include "i915_drv.h"
+#include "i915_gemfs.h"
+
+int i915_gemfs_init(struct drm_i915_private *i915)
+{
+	struct file_system_type *type;
+	struct vfsmount *gemfs;
+
+	type = get_fs_type("tmpfs");
+	if (!type)
+		return -ENODEV;
+
+	gemfs = kern_mount(type);
+	if (IS_ERR(gemfs))
+		return PTR_ERR(gemfs);
+
+	i915->mm.gemfs = gemfs;
+
+	return 0;
+}
+
+void i915_gemfs_fini(struct drm_i915_private *i915)
+{
+	kern_unmount(i915->mm.gemfs);
+}
diff --git a/drivers/gpu/drm/i915/i915_gemfs.h b/drivers/gpu/drm/i915/i915_gemfs.h
new file mode 100644
index 000000000000..cca8bdc5b93e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gemfs.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_GEMFS_H__
+#define __I915_GEMFS_H__
+
+struct drm_i915_private;
+
+int i915_gemfs_init(struct drm_i915_private *i915);
+
+void i915_gemfs_fini(struct drm_i915_private *i915);
+
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 2388424a14da..ed3407b078e7 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -83,6 +83,8 @@ static void mock_device_release(struct drm_device *dev)
 	kmem_cache_destroy(i915->vmas);
 	kmem_cache_destroy(i915->objects);
 
+	i915_gemfs_fini(i915);
+
 	drm_dev_fini(&i915->drm);
 	put_device(&i915->drm.pdev->dev);
 }
@@ -239,6 +241,8 @@ struct drm_i915_private *mock_gem_device(void)
 	if (!i915->kernel_context)
 		goto err_engine;
 
+	WARN_ON(i915_gemfs_init(i915));
+
 	return i915;
 
 err_engine:
-- 
2.13.5

* [PATCH 03/21] drm/i915/gemfs: enable THP
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
  2017-09-29 16:10 ` [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt Matthew Auld
  2017-09-29 16:10   ` Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:18   ` Chris Wilson
  2017-10-02 12:05   ` Joonas Lahtinen
  2017-09-29 16:10 ` [PATCH 04/21] drm/i915: introduce page_sizes field to dev_info Matthew Auld
                   ` (19 subsequent siblings)
  22 siblings, 2 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Enable transparent huge pages through gemfs by mounting with
huge=within_size.
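
For reference, the result is roughly equivalent to mounting a private
tmpfs instance from userspace with "mount -t tmpfs -o huge=within_size
tmpfs <dir>", except that gemfs is a kernel-internal mount and never
user-visible.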

v2: sprinkle within_size comment

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gemfs.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gemfs.c b/drivers/gpu/drm/i915/i915_gemfs.c
index 168d0bd98f60..b1b4d13d8f97 100644
--- a/drivers/gpu/drm/i915/i915_gemfs.c
+++ b/drivers/gpu/drm/i915/i915_gemfs.c
@@ -24,6 +24,7 @@
 
 #include <linux/fs.h>
 #include <linux/mount.h>
+#include <linux/pagemap.h>
 
 #include "i915_drv.h"
 #include "i915_gemfs.h"
@@ -41,6 +42,26 @@ int i915_gemfs_init(struct drm_i915_private *i915)
 	if (IS_ERR(gemfs))
 		return PTR_ERR(gemfs);
 
+	/*
+	 * Enable huge-pages for objects that are at least HPAGE_PMD_SIZE, most
+	 * likely 2M. Note that within_size may overallocate huge-pages, if say
+	 * we allocate an object of size 2M + 4K, but under memory pressure
+	 * should split any huge-pages which can be shrunk.
+	 */
+
+	if (has_transparent_hugepage()) {
+		struct super_block *sb = gemfs->mnt_sb;
+		char options[] = "huge=within_size";
+		int flags = 0;
+		int err;
+
+		err = sb->s_op->remount_fs(sb, &flags, options);
+		if (err) {
+			kern_unmount(gemfs);
+			return err;
+		}
+	}
+
 	i915->mm.gemfs = gemfs;
 
 	return 0;
-- 
2.13.5

* [PATCH 04/21] drm/i915: introduce page_sizes field to dev_info
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (2 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 03/21] drm/i915/gemfs: enable THP Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 05/21] drm/i915: push set_pages down to the callers Matthew Auld
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

In preparation for huge GTT pages, expose page_sizes as part of the
device info to indicate the page sizes supported by the HW. Currently
only 4K is supported.
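
A minimal sketch of the intended query pattern, using the field and
bit definitions added below (illustrative only):

	if (INTEL_INFO(dev_priv)->page_sizes & I915_GTT_PAGE_SIZE_64K) {
		/* the HW can map objects with 64K GTT entries */
	}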

v2: s/page_size_mask/page_sizes/

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h                  |  2 ++
 drivers/gpu/drm/i915/i915_gem_gtt.h              |  7 ++++++-
 drivers/gpu/drm/i915/i915_pci.c                  | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/selftests/mock_gem_device.c |  3 +++
 4 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b50fe24454ed..3011056da9ee 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -868,6 +868,8 @@ struct intel_device_info {
 	u8 num_sprites[I915_MAX_PIPES];
 	u8 num_scalers[I915_MAX_PIPES];
 
+	unsigned int page_sizes; /* page sizes supported by the HW */
+
 #define DEFINE_FLAG(name) u8 name:1
 	DEV_INFO_FOR_EACH_FLAG(DEFINE_FLAG);
 #undef DEFINE_FLAG
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index f62fb903dc24..88e9724d5fc7 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -42,7 +42,12 @@
 #include "i915_gem_request.h"
 #include "i915_selftest.h"
 
-#define I915_GTT_PAGE_SIZE 4096UL
+#define I915_GTT_PAGE_SIZE_4K BIT(12)
+#define I915_GTT_PAGE_SIZE_64K BIT(16)
+#define I915_GTT_PAGE_SIZE_2M BIT(21)
+
+#define I915_GTT_PAGE_SIZE I915_GTT_PAGE_SIZE_4K
+
 #define I915_GTT_MIN_ALIGNMENT I915_GTT_PAGE_SIZE
 
 #define I915_FENCE_REG_NONE -1
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index da60866b6628..84cdab3c5f09 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -56,6 +56,10 @@
 	.color = { .degamma_lut_size = 65, .gamma_lut_size = 257 }
 
 /* Keep in gen based order, and chronological order within a gen */
+
+#define GEN_DEFAULT_PAGE_SIZES \
+	.page_sizes = I915_GTT_PAGE_SIZE_4K
+
 #define GEN2_FEATURES \
 	.gen = 2, .num_pipes = 1, \
 	.has_overlay = 1, .overlay_needs_physical = 1, \
@@ -65,6 +69,7 @@
 	.ring_mask = RENDER_RING, \
 	.has_snoop = true, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_i830_info __initconst = {
@@ -98,6 +103,7 @@ static const struct intel_device_info intel_i865g_info __initconst = {
 	.ring_mask = RENDER_RING, \
 	.has_snoop = true, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_i915g_info __initconst = {
@@ -161,6 +167,7 @@ static const struct intel_device_info intel_pineview_info __initconst = {
 	.ring_mask = RENDER_RING, \
 	.has_snoop = true, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_i965g_info __initconst = {
@@ -203,6 +210,7 @@ static const struct intel_device_info intel_gm45_info __initconst = {
 	.ring_mask = RENDER_RING | BSD_RING, \
 	.has_snoop = true, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 static const struct intel_device_info intel_ironlake_d_info __initconst = {
@@ -226,6 +234,7 @@ static const struct intel_device_info intel_ironlake_m_info __initconst = {
 	.has_rc6p = 1, \
 	.has_aliasing_ppgtt = 1, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	CURSOR_OFFSETS
 
 #define SNB_D_PLATFORM \
@@ -269,6 +278,7 @@ static const struct intel_device_info intel_sandybridge_m_gt2_info __initconst =
 	.has_aliasing_ppgtt = 1, \
 	.has_full_ppgtt = 1, \
 	GEN_DEFAULT_PIPEOFFSETS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	IVB_CURSOR_OFFSETS
 
 #define IVB_D_PLATFORM \
@@ -325,6 +335,7 @@ static const struct intel_device_info intel_valleyview_info __initconst = {
 	.has_snoop = true,
 	.ring_mask = RENDER_RING | BSD_RING | BLT_RING,
 	.display_mmio_offset = VLV_DISPLAY_BASE,
+	GEN_DEFAULT_PAGE_SIZES,
 	GEN_DEFAULT_PIPEOFFSETS,
 	CURSOR_OFFSETS
 };
@@ -363,6 +374,7 @@ static const struct intel_device_info intel_haswell_gt3_info __initconst = {
 #define BDW_FEATURES \
 	HSW_FEATURES, \
 	BDW_COLORS, \
+	GEN_DEFAULT_PAGE_SIZES, \
 	.has_logical_ring_contexts = 1, \
 	.has_full_48bit_ppgtt = 1, \
 	.has_64bit_reloc = 1, \
@@ -415,13 +427,18 @@ static const struct intel_device_info intel_cherryview_info __initconst = {
 	.has_reset_engine = 1,
 	.has_snoop = true,
 	.display_mmio_offset = VLV_DISPLAY_BASE,
+	GEN_DEFAULT_PAGE_SIZES,
 	GEN_CHV_PIPEOFFSETS,
 	CURSOR_OFFSETS,
 	CHV_COLORS,
 };
 
+#define GEN9_DEFAULT_PAGE_SIZES \
+	.page_sizes = I915_GTT_PAGE_SIZE_4K
+
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
+	GEN9_DEFAULT_PAGE_SIZES, \
 	.gen = 9, \
 	.platform = INTEL_SKYLAKE, \
 	.has_csr = 1, \
@@ -477,6 +494,7 @@ static const struct intel_device_info intel_skylake_gt4_info __initconst = {
 	.has_reset_engine = 1, \
 	.has_snoop = true, \
 	.has_ipc = 1, \
+	GEN9_DEFAULT_PAGE_SIZES, \
 	GEN_DEFAULT_PIPEOFFSETS, \
 	IVB_CURSOR_OFFSETS, \
 	BDW_COLORS
@@ -496,6 +514,7 @@ static const struct intel_device_info intel_geminilake_info __initconst = {
 
 #define KBL_PLATFORM \
 	BDW_FEATURES, \
+	GEN9_DEFAULT_PAGE_SIZES, \
 	.gen = 9, \
 	.platform = INTEL_KABYLAKE, \
 	.has_csr = 1, \
@@ -546,6 +565,7 @@ static const struct intel_device_info intel_coffeelake_gt3_info __initconst = {
 
 static const struct intel_device_info intel_cannonlake_gt2_info __initconst = {
 	BDW_FEATURES,
+	GEN9_DEFAULT_PAGE_SIZES, \
 	.is_alpha_support = 1,
 	.platform = INTEL_CANNONLAKE,
 	.gen = 10,
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index ed3407b078e7..1785c63ad797 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -174,6 +174,9 @@ struct drm_i915_private *mock_gem_device(void)
 
 	mkwrite_device_info(i915)->gen = -1;
 
+	mkwrite_device_info(i915)->page_sizes =
+		I915_GTT_PAGE_SIZE_4K;
+
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
 
-- 
2.13.5

* [PATCH 05/21] drm/i915: push set_pages down to the callers
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (3 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 04/21] drm/i915: introduce page_sizes field to dev_info Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 06/21] drm/i915: introduce page_size members Matthew Auld
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Each backend is now responsible for calling __i915_gem_object_set_pages
upon successfully gathering its backing storage. This eliminates the
inconsistency between the async and sync paths, which stands out even
more when we start throwing around an sg_mask in a later patch.
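
In sketch form, the new backend contract looks like this (a
hypothetical backend with a made-up gather helper; locking and error
paths trimmed):

	static int example_get_pages(struct drm_i915_gem_object *obj)
	{
		struct sg_table *pages;

		/* gather the backing storage; this helper is hypothetical */
		pages = example_gather_backing_storage(obj);
		if (IS_ERR(pages))
			return PTR_ERR(pages);

		/* each backend now registers its own pages */
		__i915_gem_object_set_pages(obj, pages);

		return 0;
	}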

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c                  | 45 +++++++++++++-----------
 drivers/gpu/drm/i915/i915_gem_dmabuf.c           | 15 +++++---
 drivers/gpu/drm/i915/i915_gem_internal.c         | 15 ++++----
 drivers/gpu/drm/i915/i915_gem_object.h           |  2 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c           | 16 ++++++---
 drivers/gpu/drm/i915/i915_gem_userptr.c          | 12 +++----
 drivers/gpu/drm/i915/selftests/huge_gem_object.c | 14 ++++----
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c    | 12 ++++---
 8 files changed, 77 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a6a0a7ee2df8..8a483a995db0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -162,8 +162,7 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 	return 0;
 }
 
-static struct sg_table *
-i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 {
 	struct address_space *mapping = obj->base.filp->f_mapping;
 	drm_dma_handle_t *phys;
@@ -171,9 +170,10 @@ i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 	struct scatterlist *sg;
 	char *vaddr;
 	int i;
+	int err;
 
 	if (WARN_ON(i915_gem_object_needs_bit17_swizzle(obj)))
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
 	/* Always aligning to the object size, allows a single allocation
 	 * to handle all possible callers, and given typical object sizes,
@@ -183,7 +183,7 @@ i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 			     roundup_pow_of_two(obj->base.size),
 			     roundup_pow_of_two(obj->base.size));
 	if (!phys)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	vaddr = phys->vaddr;
 	for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
@@ -192,7 +192,7 @@ i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 
 		page = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(page)) {
-			st = ERR_CAST(page);
+			err = PTR_ERR(page);
 			goto err_phys;
 		}
 
@@ -209,13 +209,13 @@ i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (!st) {
-		st = ERR_PTR(-ENOMEM);
+		err = -ENOMEM;
 		goto err_phys;
 	}
 
 	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
 		kfree(st);
-		st = ERR_PTR(-ENOMEM);
+		err = -ENOMEM;
 		goto err_phys;
 	}
 
@@ -227,11 +227,15 @@ i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 	sg_dma_len(sg) = obj->base.size;
 
 	obj->phys_handle = phys;
-	return st;
+
+	__i915_gem_object_set_pages(obj, st);
+
+	return 0;
 
 err_phys:
 	drm_pci_free(obj->base.dev, phys);
-	return st;
+
+	return err;
 }
 
 static void __start_cpu_write(struct drm_i915_gem_object *obj)
@@ -2292,8 +2296,7 @@ static bool i915_sg_trim(struct sg_table *orig_st)
 	return true;
 }
 
-static struct sg_table *
-i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
 	const unsigned long page_count = obj->base.size / PAGE_SIZE;
@@ -2317,12 +2320,12 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (st == NULL)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 rebuild_st:
 	if (sg_alloc_table(st, page_count, GFP_KERNEL)) {
 		kfree(st);
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 	}
 
 	/* Get the list of pages out of our struct file.  They'll be pinned
@@ -2430,7 +2433,9 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	if (i915_gem_object_needs_bit17_swizzle(obj))
 		i915_gem_object_do_bit_17_swizzle(obj, st);
 
-	return st;
+	__i915_gem_object_set_pages(obj, st);
+
+	return 0;
 
 err_sg:
 	sg_mark_end(sg);
@@ -2451,7 +2456,7 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	if (ret == -ENOSPC)
 		ret = -ENOMEM;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
@@ -2474,19 +2479,17 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 
 static int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
-	struct sg_table *pages;
+	int err;
 
 	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
 		DRM_DEBUG("Attempting to obtain a purgeable object\n");
 		return -EFAULT;
 	}
 
-	pages = obj->ops->get_pages(obj);
-	if (unlikely(IS_ERR(pages)))
-		return PTR_ERR(pages);
+	err = obj->ops->get_pages(obj);
+	GEM_BUG_ON(!err && IS_ERR_OR_NULL(obj->mm.pages));
 
-	__i915_gem_object_set_pages(obj, pages);
-	return 0;
+	return err;
 }
 
 /* Ensure that the associated pages are gathered from the backing storage
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 6176e589cf09..4c4dc85159fb 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -256,11 +256,18 @@ struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
 	return drm_gem_dmabuf_export(dev, &exp_info);
 }
 
-static struct sg_table *
-i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
 {
-	return dma_buf_map_attachment(obj->base.import_attach,
-				      DMA_BIDIRECTIONAL);
+	struct sg_table *pages;
+
+	pages = dma_buf_map_attachment(obj->base.import_attach,
+				       DMA_BIDIRECTIONAL);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
+
+	__i915_gem_object_set_pages(obj, pages);
+
+	return 0;
 }
 
 static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem_internal.c b/drivers/gpu/drm/i915/i915_gem_internal.c
index c1f64ddaf8aa..f59764da4254 100644
--- a/drivers/gpu/drm/i915/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/i915_gem_internal.c
@@ -44,8 +44,7 @@ static void internal_free_pages(struct sg_table *st)
 	kfree(st);
 }
 
-static struct sg_table *
-i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct sg_table *st;
@@ -78,12 +77,12 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 create_st:
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (!st)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	npages = obj->base.size / PAGE_SIZE;
 	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
 		kfree(st);
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 	}
 
 	sg = st->sgl;
@@ -132,13 +131,17 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	 * object are only valid whilst active and pinned.
 	 */
 	obj->mm.madv = I915_MADV_DONTNEED;
-	return st;
+
+	__i915_gem_object_set_pages(obj, st);
+
+	return 0;
 
 err:
 	sg_set_page(sg, NULL, 0, 0);
 	sg_mark_end(sg);
 	internal_free_pages(st);
-	return ERR_PTR(-ENOMEM);
+
+	return -ENOMEM;
 }
 
 static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index c30d8f808185..036e847b27f0 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -69,7 +69,7 @@ struct drm_i915_gem_object_ops {
 	 * being released or under memory pressure (where we attempt to
 	 * reap pages for the shrinker).
 	 */
-	struct sg_table *(*get_pages)(struct drm_i915_gem_object *);
+	int (*get_pages)(struct drm_i915_gem_object *);
 	void (*put_pages)(struct drm_i915_gem_object *, struct sg_table *);
 
 	int (*pwrite)(struct drm_i915_gem_object *,
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 507c9f0d8df1..537ecb224db0 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -539,12 +539,18 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 	return st;
 }
 
-static struct sg_table *
-i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
 {
-	return i915_pages_create_for_stolen(obj->base.dev,
-					    obj->stolen->start,
-					    obj->stolen->size);
+	struct sg_table *pages =
+		i915_pages_create_for_stolen(obj->base.dev,
+					     obj->stolen->start,
+					     obj->stolen->size);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
+
+	__i915_gem_object_set_pages(obj, pages);
+
+	return 0;
 }
 
 static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 2d4996de7331..70ad7489827d 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -434,6 +434,8 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 		return ERR_PTR(ret);
 	}
 
+	__i915_gem_object_set_pages(obj, st);
+
 	return st;
 }
 
@@ -521,7 +523,6 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
 							       npages);
 			if (!IS_ERR(pages)) {
-				__i915_gem_object_set_pages(obj, pages);
 				pinned = 0;
 				pages = NULL;
 			}
@@ -582,8 +583,7 @@ __i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
 	return ERR_PTR(-EAGAIN);
 }
 
-static struct sg_table *
-i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
+static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 {
 	const int num_pages = obj->base.size >> PAGE_SHIFT;
 	struct mm_struct *mm = obj->userptr.mm->mm;
@@ -612,9 +612,9 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 	if (obj->userptr.work) {
 		/* active flag should still be held for the pending work */
 		if (IS_ERR(obj->userptr.work))
-			return ERR_CAST(obj->userptr.work);
+			return PTR_ERR(obj->userptr.work);
 		else
-			return ERR_PTR(-EAGAIN);
+			return -EAGAIN;
 	}
 
 	pvec = NULL;
@@ -650,7 +650,7 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 		release_pages(pvec, pinned, 0);
 	kvfree(pvec);
 
-	return pages;
+	return PTR_ERR_OR_ZERO(pages);
 }
 
 static void
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
index c5c7e8efbdd3..41c15f3aa467 100644
--- a/drivers/gpu/drm/i915/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
@@ -37,8 +37,7 @@ static void huge_free_pages(struct drm_i915_gem_object *obj,
 	kfree(pages);
 }
 
-static struct sg_table *
-huge_get_pages(struct drm_i915_gem_object *obj)
+static int huge_get_pages(struct drm_i915_gem_object *obj)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
 	const unsigned long nreal = obj->scratch / PAGE_SIZE;
@@ -49,11 +48,11 @@ huge_get_pages(struct drm_i915_gem_object *obj)
 
 	pages = kmalloc(sizeof(*pages), GFP);
 	if (!pages)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	if (sg_alloc_table(pages, npages, GFP)) {
 		kfree(pages);
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 	}
 
 	sg = pages->sgl;
@@ -81,11 +80,14 @@ huge_get_pages(struct drm_i915_gem_object *obj)
 	if (i915_gem_gtt_prepare_pages(obj, pages))
 		goto err;
 
-	return pages;
+	__i915_gem_object_set_pages(obj, pages);
+
+	return 0;
 
 err:
 	huge_free_pages(obj, pages);
-	return ERR_PTR(-ENOMEM);
+
+	return -ENOMEM;
 #undef GFP
 }
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 6b132caffa18..aa1db375d59a 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -39,8 +39,7 @@ static void fake_free_pages(struct drm_i915_gem_object *obj,
 	kfree(pages);
 }
 
-static struct sg_table *
-fake_get_pages(struct drm_i915_gem_object *obj)
+static int fake_get_pages(struct drm_i915_gem_object *obj)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
 #define PFN_BIAS 0x1000
@@ -50,12 +49,12 @@ fake_get_pages(struct drm_i915_gem_object *obj)
 
 	pages = kmalloc(sizeof(*pages), GFP);
 	if (!pages)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	rem = round_up(obj->base.size, BIT(31)) >> 31;
 	if (sg_alloc_table(pages, rem, GFP)) {
 		kfree(pages);
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 	}
 
 	rem = obj->base.size;
@@ -72,7 +71,10 @@ fake_get_pages(struct drm_i915_gem_object *obj)
 	GEM_BUG_ON(rem);
 
 	obj->mm.madv = I915_MADV_DONTNEED;
-	return pages;
+
+	__i915_gem_object_set_pages(obj, pages);
+
+	return 0;
 #undef GFP
 }
 
-- 
2.13.5

* [PATCH 06/21] drm/i915: introduce page_size members
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (4 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 05/21] drm/i915: push set_pages down to the callers Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 21:31   ` Chris Wilson
  2017-09-29 16:10 ` [PATCH 07/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

In preparation for supporting huge gtt pages for the ppgtt, we introduce
page size members for gem objects.  We fill in the page sizes by
scanning the sg table.
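
As a worked example of the two masks (assuming a hypothetical platform
that advertises 4K, 64K and 2M): three sg entries of lengths 2M, 2M
and 64K give page_sizes.phys = I915_GTT_PAGE_SIZE_2M |
I915_GTT_PAGE_SIZE_64K, and the fitting loop then computes
page_sizes.sg = 2M | 64K | 4K, i.e. parts of the object may be mapped
with 2M or 64K GTT entries, with 4K always a valid fallback.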

v2: pass the sg_mask to set_pages

v3: calculate the sg_mask inline with populating the sg_table where
possible, and pass to set_pages along with the pages.

v4: bunch of improvements from Joonas

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h                  |  5 ++-
 drivers/gpu/drm/i915/i915_gem.c                  | 42 +++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_gem_dmabuf.c           |  9 ++++-
 drivers/gpu/drm/i915/i915_gem_internal.c         |  5 ++-
 drivers/gpu/drm/i915/i915_gem_object.h           | 17 ++++++++++
 drivers/gpu/drm/i915/i915_gem_stolen.c           |  2 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c          |  9 ++++-
 drivers/gpu/drm/i915/selftests/huge_gem_object.c |  2 +-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c    |  5 ++-
 9 files changed, 84 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 3011056da9ee..6423c55f095b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3095,6 +3095,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define USES_PPGTT(dev_priv)		(i915_modparams.enable_ppgtt)
 #define USES_FULL_PPGTT(dev_priv)	(i915_modparams.enable_ppgtt >= 2)
 #define USES_FULL_48BIT_PPGTT(dev_priv)	(i915_modparams.enable_ppgtt == 3)
+#define HAS_PAGE_SIZES(dev_priv, sizes) \
+	((sizes) && (((sizes) & ~(dev_priv)->info.page_sizes)) == 0)
 
 #define HAS_OVERLAY(dev_priv)		 ((dev_priv)->info.has_overlay)
 #define OVERLAY_NEEDS_PHYSICAL(dev_priv) \
@@ -3511,7 +3513,8 @@ i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
 				unsigned long n);
 
 void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
-				 struct sg_table *pages);
+				 struct sg_table *pages,
+				 unsigned int sg_mask);
 int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 
 static inline int __must_check
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8a483a995db0..1e4f2b3c33ab 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -228,7 +228,7 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 
 	obj->phys_handle = phys;
 
-	__i915_gem_object_set_pages(obj, st);
+	__i915_gem_object_set_pages(obj, st, sg->length);
 
 	return 0;
 
@@ -2266,6 +2266,8 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
 	if (!IS_ERR(pages))
 		obj->ops->put_pages(obj, pages);
 
+	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+
 unlock:
 	mutex_unlock(&obj->mm.lock);
 }
@@ -2308,6 +2310,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	struct page *page;
 	unsigned long last_pfn = 0;	/* suppress gcc warning */
 	unsigned int max_segment = i915_sg_segment_size();
+	unsigned int sg_mask;
 	gfp_t noreclaim;
 	int ret;
 
@@ -2339,6 +2342,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 
 	sg = st->sgl;
 	st->nents = 0;
+	sg_mask = 0;
 	for (i = 0; i < page_count; i++) {
 		const unsigned int shrink[] = {
 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE,
@@ -2391,8 +2395,10 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		if (!i ||
 		    sg->length >= max_segment ||
 		    page_to_pfn(page) != last_pfn + 1) {
-			if (i)
+			if (i) {
+				sg_mask |= sg->length;
 				sg = sg_next(sg);
+			}
 			st->nents++;
 			sg_set_page(sg, page, PAGE_SIZE, 0);
 		} else {
@@ -2403,8 +2409,10 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		/* Check that the i965g/gm workaround works. */
 		WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
 	}
-	if (sg) /* loop terminated early; short sg table */
+	if (sg) { /* loop terminated early; short sg table */
+		sg_mask |= sg->length;
 		sg_mark_end(sg);
+	}
 
 	/* Trim unused sg entries to avoid wasting memory. */
 	i915_sg_trim(st);
@@ -2433,7 +2441,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	if (i915_gem_object_needs_bit17_swizzle(obj))
 		i915_gem_object_do_bit_17_swizzle(obj, st);
 
-	__i915_gem_object_set_pages(obj, st);
+	__i915_gem_object_set_pages(obj, st, sg_mask);
 
 	return 0;
 
@@ -2460,8 +2468,13 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 }
 
 void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
-				 struct sg_table *pages)
+				 struct sg_table *pages,
+				 unsigned int sg_mask)
 {
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	unsigned long supported_page_sizes = INTEL_INFO(i915)->page_sizes;
+	int i;
+
 	lockdep_assert_held(&obj->mm.lock);
 
 	obj->mm.get_page.sg_pos = pages->sgl;
@@ -2475,6 +2488,25 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		__i915_gem_object_pin_pages(obj);
 		obj->mm.quirked = true;
 	}
+
+	GEM_BUG_ON(!sg_mask);
+	obj->mm.page_sizes.phys = sg_mask;
+
+	/*
+	 * Calculate the supported page-sizes which fit into the given sg_mask.
+	 * This will give us the page-sizes which we may be able to use
+	 * opportunistically when later inserting into the GTT. For example if
+	 * phys=2G, then in theory we should be able to use 1G, 2M, 64K or 4K
+	 * pages, although in practice this will depend on a number of other
+	 * factors.
+	 */
+	obj->mm.page_sizes.sg = 0;
+	for_each_set_bit(i, &supported_page_sizes, BITS_PER_LONG) {
+		if (obj->mm.page_sizes.phys & ~0u << i)
+			obj->mm.page_sizes.sg |= BIT(i);
+	}
+
+	GEM_BUG_ON(!HAS_PAGE_SIZES(i915, obj->mm.page_sizes.sg));
 }
 
 static int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 4c4dc85159fb..2f80b07d7f61 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -259,13 +259,20 @@ struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
 static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
+	struct scatterlist *sg;
+	unsigned int sg_mask;
+	int n;
 
 	pages = dma_buf_map_attachment(obj->base.import_attach,
 				       DMA_BIDIRECTIONAL);
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	__i915_gem_object_set_pages(obj, pages);
+	sg_mask = 0;
+	for_each_sg(pages->sgl, sg, pages->nents, n)
+		sg_mask |= sg->length;
+
+	__i915_gem_object_set_pages(obj, pages, sg_mask);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_internal.c b/drivers/gpu/drm/i915/i915_gem_internal.c
index f59764da4254..bdc23c4c8783 100644
--- a/drivers/gpu/drm/i915/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/i915_gem_internal.c
@@ -49,6 +49,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct sg_table *st;
 	struct scatterlist *sg;
+	unsigned int sg_mask;
 	unsigned int npages;
 	int max_order;
 	gfp_t gfp;
@@ -87,6 +88,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	sg = st->sgl;
 	st->nents = 0;
+	sg_mask = 0;
 
 	do {
 		int order = min(fls(npages) - 1, max_order);
@@ -104,6 +106,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 		} while (1);
 
 		sg_set_page(sg, page, PAGE_SIZE << order, 0);
+		sg_mask |= PAGE_SIZE << order;
 		st->nents++;
 
 		npages -= 1 << order;
@@ -132,7 +135,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	 */
 	obj->mm.madv = I915_MADV_DONTNEED;
 
-	__i915_gem_object_set_pages(obj, st);
+	__i915_gem_object_set_pages(obj, st, sg_mask);
 
 	return 0;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 036e847b27f0..110672952a1c 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -169,6 +169,23 @@ struct drm_i915_gem_object {
 		struct sg_table *pages;
 		void *mapping;
 
+		struct i915_page_sizes {
+			/**
+			 * The sg mask of the pages sg_table, i.e. the mask
+			 * of the lengths for each sg entry.
+			 */
+			unsigned int phys;
+
+			/**
+			 * The gtt page sizes we are allowed to use given the
+			 * sg mask and the supported page sizes. This will
+			 * express the smallest unit we can use for the whole
+			 * object, as well as the larger sizes we may be able
+			 * to use opportunistically.
+			 */
+			unsigned int sg;
+		} page_sizes;
+
 		struct i915_gem_object_page_iter {
 			struct scatterlist *sg_pos;
 			unsigned int sg_idx; /* in pages, but 32bit eek! */
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 537ecb224db0..54fd4cfa9d07 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -548,7 +548,7 @@ static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	__i915_gem_object_set_pages(obj, pages);
+	__i915_gem_object_set_pages(obj, pages, obj->stolen->size);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 70ad7489827d..ad5abca1f794 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -405,6 +405,9 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 {
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
+	struct scatterlist *sg;
+	unsigned int sg_mask;
+	int n;
 	int ret;
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
@@ -434,7 +437,11 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 		return ERR_PTR(ret);
 	}
 
-	__i915_gem_object_set_pages(obj, st);
+	sg_mask = 0;
+	for_each_sg(st->sgl, sg, num_pages, n)
+		sg_mask |= sg->length;
+
+	__i915_gem_object_set_pages(obj, st, sg_mask);
 
 	return st;
 }
diff --git a/drivers/gpu/drm/i915/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
index 41c15f3aa467..a2632df39173 100644
--- a/drivers/gpu/drm/i915/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/huge_gem_object.c
@@ -80,7 +80,7 @@ static int huge_get_pages(struct drm_i915_gem_object *obj)
 	if (i915_gem_gtt_prepare_pages(obj, pages))
 		goto err;
 
-	__i915_gem_object_set_pages(obj, pages);
+	__i915_gem_object_set_pages(obj, pages, PAGE_SIZE);
 
 	return 0;
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index aa1db375d59a..883bc19e3aaf 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -45,6 +45,7 @@ static int fake_get_pages(struct drm_i915_gem_object *obj)
 #define PFN_BIAS 0x1000
 	struct sg_table *pages;
 	struct scatterlist *sg;
+	unsigned int sg_mask;
 	typeof(obj->base.size) rem;
 
 	pages = kmalloc(sizeof(*pages), GFP);
@@ -57,6 +58,7 @@ static int fake_get_pages(struct drm_i915_gem_object *obj)
 		return -ENOMEM;
 	}
 
+	sg_mask = 0;
 	rem = obj->base.size;
 	for (sg = pages->sgl; sg; sg = sg_next(sg)) {
 		unsigned long len = min_t(typeof(rem), rem, BIT(31));
@@ -65,6 +67,7 @@ static int fake_get_pages(struct drm_i915_gem_object *obj)
 		sg_set_page(sg, pfn_to_page(PFN_BIAS), len, 0);
 		sg_dma_address(sg) = page_to_phys(sg_page(sg));
 		sg_dma_len(sg) = len;
+		sg_mask |= len;
 
 		rem -= len;
 	}
@@ -72,7 +75,7 @@ static int fake_get_pages(struct drm_i915_gem_object *obj)
 
 	obj->mm.madv = I915_MADV_DONTNEED;
 
-	__i915_gem_object_set_pages(obj, pages);
+	__i915_gem_object_set_pages(obj, pages, sg_mask);
 
 	return 0;
 #undef GFP
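
To make the derivation in __i915_gem_object_set_pages() concrete, here
is a standalone sketch (only BIT() and the size values mirror the
kernel's; the helper name and userspace scaffolding are illustrative):

  #include <stdio.h>

  #define BIT(n)  (1u << (n))
  #define SZ_4K   BIT(12)
  #define SZ_64K  BIT(16)
  #define SZ_2M   BIT(21)

  /* Sketch: derive the candidate gtt page sizes from the sg length mask. */
  static unsigned int derive_sg_sizes(unsigned int phys, unsigned int supported)
  {
          unsigned int sg = 0;
          int i;

          /*
           * A supported size 2^i is a candidate if some chunk length has a
           * bit at or above i, i.e. it fits into the largest sg chunk; the
           * per-chunk alignment checks happen later, at insert time.
           */
          for (i = 0; i < 32; i++) {
                  if ((supported & BIT(i)) && (phys & (~0u << i)))
                          sg |= BIT(i);
          }

          return sg;
  }

  int main(void)
  {
          /* chunks of 2M and 64K: 4K, 64K and 2M all remain candidates */
          printf("%#x\n", derive_sg_sizes(SZ_2M | SZ_64K, SZ_4K | SZ_64K | SZ_2M));
          return 0;
  }
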
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 07/21] drm/i915: introduce vm set_pages/clear_pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (5 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 06/21] drm/i915: introduce page_size members Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-10-02 12:10   ` Joonas Lahtinen
  2017-09-29 16:10 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Move the setting/clearing of the vma->pages to a vm operation. Doing so
neatens things up a little, but more importantly gives us a sane place
to also set/clear the vma->page_sizes, which we introduce later in
preparation for supporting huge pages.
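
At a glance, the new hooks look roughly like this (an abridged sketch,
not the full struct):

  struct i915_address_space {
          /* ... */
          int (*set_pages)(struct i915_vma *vma);    /* populate vma->pages */
          void (*clear_pages)(struct i915_vma *vma); /* release vma->pages */
  };

i915_vma_insert() calls set_pages() before reserving a node and
clear_pages() on its error path, while i915_vma_remove() performs the
final clear_pages(), so vma->pages now has exactly one owner.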

v2: remove redundant vma->pages check

v3: GEM_BUG_ON(vma->pages) following i915_vma_remove

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c       | 70 +++++++++++++++++++------------
 drivers/gpu/drm/i915/i915_gem_gtt.h       |  2 +
 drivers/gpu/drm/i915/i915_vma.c           | 27 +++++++-----
 drivers/gpu/drm/i915/selftests/mock_gtt.c | 11 ++---
 4 files changed, 66 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4c82ceb8d318..c534b74eee32 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -205,8 +205,6 @@ static int ppgtt_bind_vma(struct i915_vma *vma,
 			return ret;
 	}
 
-	vma->pages = vma->obj->mm.pages;
-
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
 	if (vma->obj->gt_ro)
@@ -222,6 +220,26 @@ static void ppgtt_unbind_vma(struct i915_vma *vma)
 	vma->vm->clear_range(vma->vm, vma->node.start, vma->size);
 }
 
+static int ppgtt_set_pages(struct i915_vma *vma)
+{
+	GEM_BUG_ON(vma->pages);
+
+	vma->pages = vma->obj->mm.pages;
+
+	return 0;
+}
+
+static void clear_pages(struct i915_vma *vma)
+{
+	GEM_BUG_ON(!vma->pages);
+
+	if (vma->pages != vma->obj->mm.pages) {
+		sg_free_table(vma->pages);
+		kfree(vma->pages);
+	}
+	vma->pages = NULL;
+}
+
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
 				  enum i915_cache_level level)
 {
@@ -1452,6 +1470,8 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 	ppgtt->base.cleanup = gen8_ppgtt_cleanup;
 	ppgtt->base.unbind_vma = ppgtt_unbind_vma;
 	ppgtt->base.bind_vma = ppgtt_bind_vma;
+	ppgtt->base.set_pages = ppgtt_set_pages;
+	ppgtt->base.clear_pages = clear_pages;
 	ppgtt->debug_dump = gen8_dump_ppgtt;
 
 	return 0;
@@ -1894,6 +1914,8 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 	ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
 	ppgtt->base.unbind_vma = ppgtt_unbind_vma;
 	ppgtt->base.bind_vma = ppgtt_bind_vma;
+	ppgtt->base.set_pages = ppgtt_set_pages;
+	ppgtt->base.clear_pages = clear_pages;
 	ppgtt->base.cleanup = gen6_ppgtt_cleanup;
 	ppgtt->debug_dump = gen6_dump_ppgtt;
 
@@ -2405,12 +2427,6 @@ static int ggtt_bind_vma(struct i915_vma *vma,
 	struct drm_i915_gem_object *obj = vma->obj;
 	u32 pte_flags;
 
-	if (unlikely(!vma->pages)) {
-		int ret = i915_get_ggtt_vma_pages(vma);
-		if (ret)
-			return ret;
-	}
-
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
 	if (obj->gt_ro)
@@ -2447,12 +2463,6 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 	u32 pte_flags;
 	int ret;
 
-	if (unlikely(!vma->pages)) {
-		ret = i915_get_ggtt_vma_pages(vma);
-		if (ret)
-			return ret;
-	}
-
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
 	if (vma->obj->gt_ro)
@@ -2467,7 +2477,7 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 							     vma->node.start,
 							     vma->size);
 			if (ret)
-				goto err_pages;
+				return ret;
 		}
 
 		appgtt->base.insert_entries(&appgtt->base, vma, cache_level,
@@ -2481,17 +2491,6 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 	}
 
 	return 0;
-
-err_pages:
-	if (!(vma->flags & (I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND))) {
-		if (vma->pages != vma->obj->mm.pages) {
-			GEM_BUG_ON(!vma->pages);
-			sg_free_table(vma->pages);
-			kfree(vma->pages);
-		}
-		vma->pages = NULL;
-	}
-	return ret;
 }
 
 static void aliasing_gtt_unbind_vma(struct i915_vma *vma)
@@ -2529,6 +2528,19 @@ void i915_gem_gtt_finish_pages(struct drm_i915_gem_object *obj,
 	dma_unmap_sg(kdev, pages->sgl, pages->nents, PCI_DMA_BIDIRECTIONAL);
 }
 
+static int ggtt_set_pages(struct i915_vma *vma)
+{
+	int ret;
+
+	GEM_BUG_ON(vma->pages);
+
+	ret = i915_get_ggtt_vma_pages(vma);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static void i915_gtt_color_adjust(const struct drm_mm_node *node,
 				  unsigned long color,
 				  u64 *start,
@@ -3151,6 +3163,8 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->base.cleanup = gen6_gmch_remove;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.insert_page = gen8_ggtt_insert_page;
 	ggtt->base.clear_range = nop_clear_range;
 	if (!USES_FULL_PPGTT(dev_priv) || intel_scanout_needs_vtd_wa(dev_priv))
@@ -3209,6 +3223,8 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->base.insert_entries = gen6_ggtt_insert_entries;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.cleanup = gen6_gmch_remove;
 
 	ggtt->invalidate = gen6_ggtt_invalidate;
@@ -3254,6 +3270,8 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->base.clear_range = i915_ggtt_clear_range;
 	ggtt->base.bind_vma = ggtt_bind_vma;
 	ggtt->base.unbind_vma = ggtt_unbind_vma;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.cleanup = i915_gmch_remove;
 
 	ggtt->invalidate = gmch_ggtt_invalidate;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 88e9724d5fc7..87f3ceaca5a8 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -334,6 +334,8 @@ struct i915_address_space {
 	int (*bind_vma)(struct i915_vma *vma,
 			enum i915_cache_level cache_level,
 			u32 flags);
+	int (*set_pages)(struct i915_vma *vma);
+	void (*clear_pages)(struct i915_vma *vma);
 
 	I915_SELFTEST_DECLARE(struct fault_attr fault_attr);
 };
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 02d1a5eacb00..49bf49571e47 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -266,6 +266,8 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 	if (bind_flags == 0)
 		return 0;
 
+	GEM_BUG_ON(!vma->pages);
+
 	trace_i915_vma_bind(vma, bind_flags);
 	ret = vma->vm->bind_vma(vma, cache_level, bind_flags);
 	if (ret)
@@ -471,25 +473,31 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 	if (ret)
 		return ret;
 
+	GEM_BUG_ON(vma->pages);
+
+	ret = vma->vm->set_pages(vma);
+	if (ret)
+		goto err_unpin;
+
 	if (flags & PIN_OFFSET_FIXED) {
 		u64 offset = flags & PIN_OFFSET_MASK;
 		if (!IS_ALIGNED(offset, alignment) ||
 		    range_overflows(offset, size, end)) {
 			ret = -EINVAL;
-			goto err_unpin;
+			goto err_clear;
 		}
 
 		ret = i915_gem_gtt_reserve(vma->vm, &vma->node,
 					   size, offset, obj->cache_level,
 					   flags);
 		if (ret)
-			goto err_unpin;
+			goto err_clear;
 	} else {
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
 		if (ret)
-			goto err_unpin;
+			goto err_clear;
 
 		GEM_BUG_ON(vma->node.start < start);
 		GEM_BUG_ON(vma->node.start + vma->node.size > end);
@@ -504,6 +512,8 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 
 	return 0;
 
+err_clear:
+	vma->vm->clear_pages(vma);
 err_unpin:
 	i915_gem_object_unpin_pages(obj);
 	return ret;
@@ -517,6 +527,8 @@ i915_vma_remove(struct i915_vma *vma)
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 	GEM_BUG_ON(vma->flags & (I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND));
 
+	vma->vm->clear_pages(vma);
+
 	drm_mm_remove_node(&vma->node);
 	list_move_tail(&vma->vm_link, &vma->vm->unbound_list);
 
@@ -569,8 +581,8 @@ int __i915_vma_do_pin(struct i915_vma *vma,
 
 err_remove:
 	if ((bound & I915_VMA_BIND_MASK) == 0) {
-		GEM_BUG_ON(vma->pages);
 		i915_vma_remove(vma);
+		GEM_BUG_ON(vma->pages);
 	}
 err_unpin:
 	__i915_vma_unpin(vma);
@@ -695,13 +707,6 @@ int i915_vma_unbind(struct i915_vma *vma)
 	}
 	vma->flags &= ~(I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND);
 
-	if (vma->pages != obj->mm.pages) {
-		GEM_BUG_ON(!vma->pages);
-		sg_free_table(vma->pages);
-		kfree(vma->pages);
-	}
-	vma->pages = NULL;
-
 	i915_vma_remove(vma);
 
 destroy:
diff --git a/drivers/gpu/drm/i915/selftests/mock_gtt.c b/drivers/gpu/drm/i915/selftests/mock_gtt.c
index f2118cf535a0..336e1afb250f 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gtt.c
@@ -43,7 +43,6 @@ static int mock_bind_ppgtt(struct i915_vma *vma,
 			   u32 flags)
 {
 	GEM_BUG_ON(flags & I915_VMA_GLOBAL_BIND);
-	vma->pages = vma->obj->mm.pages;
 	vma->flags |= I915_VMA_LOCAL_BIND;
 	return 0;
 }
@@ -84,6 +83,8 @@ mock_ppgtt(struct drm_i915_private *i915,
 	ppgtt->base.insert_entries = mock_insert_entries;
 	ppgtt->base.bind_vma = mock_bind_ppgtt;
 	ppgtt->base.unbind_vma = mock_unbind_ppgtt;
+	ppgtt->base.set_pages = ppgtt_set_pages;
+	ppgtt->base.clear_pages = clear_pages;
 	ppgtt->base.cleanup = mock_cleanup;
 
 	return ppgtt;
@@ -93,12 +94,6 @@ static int mock_bind_ggtt(struct i915_vma *vma,
 			  enum i915_cache_level cache_level,
 			  u32 flags)
 {
-	int err;
-
-	err = i915_get_ggtt_vma_pages(vma);
-	if (err)
-		return err;
-
 	vma->flags |= I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
 	return 0;
 }
@@ -124,6 +119,8 @@ void mock_init_ggtt(struct drm_i915_private *i915)
 	ggtt->base.insert_entries = mock_insert_entries;
 	ggtt->base.bind_vma = mock_bind_ggtt;
 	ggtt->base.unbind_vma = mock_unbind_ggtt;
+	ggtt->base.set_pages = ggtt_set_pages;
+	ggtt->base.clear_pages = clear_pages;
 	ggtt->base.cleanup = mock_cleanup;
 
 	i915_address_space_init(&ggtt->base, i915, "global");
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (6 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 07/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-10-02 12:16   ` Joonas Lahtinen
  2017-09-29 16:10 ` [PATCH 09/21] drm/i915: align 64K objects to 2M Matthew Auld
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

For the 48b PPGTT, try to align the vma start address to the required
page-size boundary, to guarantee that we actually use said page size in
the gtt. If we are dealing with multiple page sizes, we can't guarantee
anything, so we just align to the largest. For soft pinning and objects
which need to be tightly packed into the lower 32 bits, we don't force
any alignment.
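
The alignment choice itself is tiny; roughly (the helper name is
illustrative, the logic mirrors the hunk below):

  /* Sketch: derive the vma start alignment from its page-size mask. */
  static u64 huge_page_alignment(unsigned int sg_mask, u64 alignment)
  {
          /*
           * sg_mask may contain several sizes, e.g. 2M | 64K | 4K;
           * rounddown_pow_of_two() keeps only the largest, so the vma
           * start lands on the boundary the biggest page size needs.
           */
          return max_t(u64, alignment, rounddown_pow_of_two(sg_mask));
  }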

v2: various improvements suggested by Chris

v3: use set_pages and better placement of page_sizes

v4: prefer upper_32_bits()

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c |  8 ++++++++
 drivers/gpu/drm/i915/i915_vma.c     | 13 +++++++++++++
 drivers/gpu/drm/i915/i915_vma.h     |  1 +
 3 files changed, 22 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c534b74eee32..c989e3d24e37 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -226,6 +226,9 @@ static int ppgtt_set_pages(struct i915_vma *vma)
 
 	vma->pages = vma->obj->mm.pages;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
@@ -238,6 +241,8 @@ static void clear_pages(struct i915_vma *vma)
 		kfree(vma->pages);
 	}
 	vma->pages = NULL;
+
+	memset(&vma->page_sizes, 0, sizeof(struct i915_page_sizes));
 }
 
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
@@ -2538,6 +2543,9 @@ static int ggtt_set_pages(struct i915_vma *vma)
 	if (ret)
 		return ret;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 49bf49571e47..5067eab27829 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -493,6 +493,19 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (ret)
 			goto err_clear;
 	} else {
+		/*
+		 * We only support huge gtt pages through the 48b PPGTT,
+		 * however we also don't want to force any alignment for
+		 * objects which need to be tightly packed into the low 32bits.
+		 */
+		if (upper_32_bits(end) &&
+		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			u64 page_alignment =
+				rounddown_pow_of_two(vma->page_sizes.sg);
+
+			alignment = max(alignment, page_alignment);
+		}
+
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e811067c7724..c59ba76613a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -55,6 +55,7 @@ struct i915_vma {
 	void __iomem *iomap;
 	u64 size;
 	u64 display_alignment;
+	struct i915_page_sizes page_sizes;
 
 	u32 fence_size;
 	u32 fence_alignment;
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 09/21] drm/i915: align 64K objects to 2M
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (7 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-10-02 12:18   ` Joonas Lahtinen
  2017-09-29 16:10 ` [PATCH 10/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
                   ` (13 subsequent siblings)
  22 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

We can't mix 64K and 4K PTEs in the same page-table, so for now we
align 64K objects to 2M to avoid any potential mixing. This is
potentially wasteful, but in reality shouldn't be too bad, since it
only applies to the virtual address space of a 48b PPGTT.
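
As a worked example: a 192K object backed entirely by 64K pages has
page_sizes.sg == 64K, so the insert path picks a 2M start alignment via
rounddown_pow_of_two(64K | 2M) and pads the node out with
round_up(192K, 2M) == 2M, guaranteeing the backing page-table never has
to mix in 4K PTEs.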

v2: don't separate logically connected ops

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_vma.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 5067eab27829..cc06a7100c81 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -501,9 +501,18 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (upper_32_bits(end) &&
 		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
 			u64 page_alignment =
-				rounddown_pow_of_two(vma->page_sizes.sg);
+				rounddown_pow_of_two(vma->page_sizes.sg |
+						     I915_GTT_PAGE_SIZE_2M);
 
 			alignment = max(alignment, page_alignment);
+
+			/*
+			 * We can't mix 64K and 4K PTEs in the same page-table (2M
+			 * block), and so to avoid the ugliness and complexity of
+			 * coloring we opt for just aligning 64K objects to 2M.
+			 */
+			if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K)
+				size = round_up(size, I915_GTT_PAGE_SIZE_2M);
 		}
 
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 10/21] drm/i915: enable IPS bit for 64K pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (8 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 09/21] drm/i915: align 64K objects to 2M Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-10-02 12:21   ` Joonas Lahtinen
  2017-09-29 16:10 ` [PATCH 11/21] drm/i915: disable GTT cache for 2M pages Matthew Auld
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Before we can enable 64K pages through the IPS bit, we must first enable
it through MMIO, otherwise the page-walker will simply ignore it.
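
Note that the MMIO knob only arms the feature globally; the walker still
needs the per-PDE IPS bit (GEN8_PDE_IPS_64K, added later in this series)
before any page-table is actually treated as 64K. A sketch of the
pairing:

  /* Sketch: both halves must agree before 64K PTEs take effect. */
  I915_WRITE(GEN8_GAMW_ECO_DEV_RW_IA,
             I915_READ(GEN8_GAMW_ECO_DEV_RW_IA) |
             GAMW_ECO_ENABLE_64K_IPS_FIELD);  /* global enable, this patch */

  vaddr[idx.pde] |= GEN8_PDE_IPS_64K;         /* per-PDE opt-in, patch 14 */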

v2: add comment mentioning that 64K is BDW+

v3: move to more suitable home

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 17 +++++++++++++++++
 drivers/gpu/drm/i915/i915_reg.h     |  3 +++
 2 files changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c989e3d24e37..3115a8ec4376 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1988,6 +1988,23 @@ static void gtt_write_workarounds(struct drm_i915_private *dev_priv)
 		I915_WRITE(GEN8_L3_LRA_1_GPGPU, GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_SKL);
 	else if (IS_GEN9_LP(dev_priv))
 		I915_WRITE(GEN8_L3_LRA_1_GPGPU, GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_BXT);
+
+	/*
+	 * To support 64K PTEs we need to first enable the use of the
+	 * Intermediate-Page-Size (IPS) bit of the PDE field via some magical
+	 * mmio, otherwise the page-walker will simply ignore the IPS bit. This
+	 * shouldn't be needed after GEN10.
+	 *
+	 * 64K pages were first introduced from BDW+, although technically they
+	 * only *work* from gen9+. For pre-BDW we instead have the option for
+	 * 32K pages, but we don't currently have any support for it in our
+	 * driver.
+	 */
+	if (HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_64K) &&
+	    INTEL_GEN(dev_priv) <= 10)
+		I915_WRITE(GEN8_GAMW_ECO_DEV_RW_IA,
+			   I915_READ(GEN8_GAMW_ECO_DEV_RW_IA) |
+			   GAMW_ECO_ENABLE_64K_IPS_FIELD);
 }
 
 int i915_ppgtt_init_hw(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index ee0d4f14ac98..6c2f4c7bf141 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -2371,6 +2371,9 @@ enum i915_power_well_id {
 #define GEN9_GAMT_ECO_REG_RW_IA _MMIO(0x4ab0)
 #define   GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS	(1<<18)
 
+#define GEN8_GAMW_ECO_DEV_RW_IA _MMIO(0x4080)
+#define   GAMW_ECO_ENABLE_64K_IPS_FIELD 0xF
+
 #define GAMT_CHKN_BIT_REG	_MMIO(0x4ab8)
 #define   GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING	(1<<28)
 #define   GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT	(1<<24)
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 11/21] drm/i915: disable GTT cache for 2M pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (9 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 10/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-10-02 12:31   ` Joonas Lahtinen
  2017-09-29 16:10 ` [PATCH 12/21] drm/i915: support 2M pages for the 48b PPGTT Matthew Auld
                   ` (11 subsequent siblings)
  22 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

When SW enables the use of 2M/1G pages, it must disable the GTT cache.

v2: don't disable for Cherryview which doesn't even support 48b PPGTT!

v3: explicitly check that the system does support 2M/1G pages

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index c66af09e27a7..29c68c2a99ef 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -8483,10 +8483,11 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
 
 	/*
 	 * WaGttCachingOffByDefault:bdw
-	 * GTT cache may not work with big pages, so if those
-	 * are ever enabled GTT cache may need to be disabled.
+	 * The GTT cache must be disabled if the system is using 2M/1G pages.
 	 */
-	I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
+	I915_WRITE(HSW_GTT_CACHE_EN,
+		   HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M) ? 0 :
+		   GTT_CACHE_EN_ALL);
 
 	/* WaKVMNotificationOnConfigChange:bdw */
 	I915_WRITE(CHICKEN_PAR2_1, I915_READ(CHICKEN_PAR2_1)
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 12/21] drm/i915: support 2M pages for the 48b PPGTT
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (10 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 11/21] drm/i915: disable GTT cache for 2M pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 13/21] drm/i915: add support for 64K scratch page Matthew Auld
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Support inserting 2M gtt pages into the 48b PPGTT.

v2: sanity check sg->length against page_size

v3: don't recalculate rem on each loop
    whitespace breakup
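
Whether a chunk can be inserted as a 2M PDE boils down to four checks;
a sketch mirroring the conditions in gen8_ppgtt_insert_huge_entries()
below (the helper name is illustrative):

  /* Sketch: can this sg chunk start a 2M PDE at this gtt offset? */
  static bool can_use_2M(const struct i915_vma *vma,
                         dma_addr_t dma, u64 rem, u16 pte_index)
  {
          return vma->page_sizes.sg & I915_GTT_PAGE_SIZE_2M && /* backing */
                 IS_ALIGNED(dma, I915_GTT_PAGE_SIZE_2M) &&     /* phys align */
                 rem >= I915_GTT_PAGE_SIZE_2M &&               /* enough left */
                 !pte_index;                          /* 2M-aligned gtt offset */
  }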

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 76 +++++++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_gem_gtt.h |  2 +
 2 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 3115a8ec4376..0137ff8568eb 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1014,6 +1014,69 @@ static void gen8_ppgtt_insert_3lvl(struct i915_address_space *vm,
 				      cache_level);
 }
 
+static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
+					   struct i915_page_directory_pointer **pdps,
+					   struct sgt_dma *iter,
+					   enum i915_cache_level cache_level)
+{
+	const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level);
+	u64 start = vma->node.start;
+	dma_addr_t rem = iter->sg->length;
+
+	do {
+		struct gen8_insert_pte idx = gen8_insert_pte(start);
+		struct i915_page_directory_pointer *pdp = pdps[idx.pml4e];
+		struct i915_page_directory *pd = pdp->page_directory[idx.pdpe];
+		unsigned int page_size;
+		gen8_pte_t encode = pte_encode;
+		gen8_pte_t *vaddr;
+		u16 index, max;
+
+		if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_2M &&
+		    IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_2M) &&
+		    rem >= I915_GTT_PAGE_SIZE_2M && !idx.pte) {
+			index = idx.pde;
+			max = I915_PDES;
+			page_size = I915_GTT_PAGE_SIZE_2M;
+
+			encode |= GEN8_PDE_PS_2M;
+
+			vaddr = kmap_atomic_px(pd);
+		} else {
+			struct i915_page_table *pt = pd->page_table[idx.pde];
+
+			index = idx.pte;
+			max = GEN8_PTES;
+			page_size = I915_GTT_PAGE_SIZE;
+
+			vaddr = kmap_atomic_px(pt);
+		}
+
+		do {
+			GEM_BUG_ON(iter->sg->length < page_size);
+			vaddr[index++] = encode | iter->dma;
+
+			start += page_size;
+			iter->dma += page_size;
+			rem -= page_size;
+			if (iter->dma >= iter->max) {
+				iter->sg = __sg_next(iter->sg);
+				if (!iter->sg)
+					break;
+
+				rem = iter->sg->length;
+				iter->dma = sg_dma_address(iter->sg);
+				iter->max = iter->dma + rem;
+
+				if (unlikely(!IS_ALIGNED(iter->dma, page_size)))
+					break;
+			}
+		} while (rem >= page_size && index < max);
+
+		kunmap_atomic(vaddr);
+	} while (iter->sg);
+}
+
 static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 				   struct i915_vma *vma,
 				   enum i915_cache_level cache_level,
@@ -1026,11 +1089,16 @@ static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 		.max = iter.dma + iter.sg->length,
 	};
 	struct i915_page_directory_pointer **pdps = ppgtt->pml4.pdps;
-	struct gen8_insert_pte idx = gen8_insert_pte(vma->node.start);
 
-	while (gen8_ppgtt_insert_pte_entries(ppgtt, pdps[idx.pml4e++], &iter,
-					     &idx, cache_level))
-		GEM_BUG_ON(idx.pml4e >= GEN8_PML4ES_PER_PML4);
+	if (vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+		gen8_ppgtt_insert_huge_entries(vma, pdps, &iter, cache_level);
+	} else {
+		struct gen8_insert_pte idx = gen8_insert_pte(vma->node.start);
+
+		while (gen8_ppgtt_insert_pte_entries(ppgtt, pdps[idx.pml4e++],
+						     &iter, &idx, cache_level))
+			GEM_BUG_ON(idx.pml4e >= GEN8_PML4ES_PER_PML4);
+	}
 }
 
 static void gen8_free_page_tables(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 87f3ceaca5a8..e9e66abbe532 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -153,6 +153,8 @@ typedef u64 gen8_ppgtt_pml4e_t;
 #define GEN8_PPAT_GET_AGE(x) ((x) & (3 << 4))
 #define CHV_PPAT_GET_SNOOP(x) ((x) & (1 << 6))
 
+#define GEN8_PDE_PS_2M   BIT(7)
+
 struct sg_table;
 
 struct intel_rotation_info {
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 13/21] drm/i915: add support for 64K scratch page
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (11 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 12/21] drm/i915: support 2M pages for the 48b PPGTT Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 14/21] drm/i915: support 64K pages for the 48b PPGTT Matthew Auld
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Before we can fully enable 64K pages, we need to first support a 64K
scratch page if we intend to support the case where we have object sizes
< 2M, since any scratch PTE must also point to a 64K region.  Without
this our 64K usage is limited to objects which completely fill the
page-table, and therefore don't need any scratch.
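
The "every 16th entry" constraint is pure arithmetic: a page-table spans
2M with 512 4K-sized entries, and a 64K chunk covers 16 of them, so in
64K mode the walker only consults the slots at index 0, 16, 32, ...,
496, including any that point at scratch. A sketch:

  /* Sketch: which PTE slot serves a given gtt offset in 64K mode? */
  static unsigned int pte_slot_64K(u64 gtt_offset)
  {
          /* 512 x 4K entries span 2M; one 64K chunk consumes 16 of them */
          return (unsigned int)((gtt_offset & (SZ_2M - 1)) >> 12) & ~15u;
  }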

v2: add reminder about why 48b PPGTT

Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 64 ++++++++++++++++++++++++++++++-------
 drivers/gpu/drm/i915/i915_gem_gtt.h |  1 +
 2 files changed, 54 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 0137ff8568eb..37029a42c701 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -520,22 +520,63 @@ static void fill_page_dma_32(struct i915_address_space *vm,
 static int
 setup_scratch_page(struct i915_address_space *vm, gfp_t gfp)
 {
-	struct page *page;
+	struct page *page = NULL;
 	dma_addr_t addr;
+	int order;
 
-	page = alloc_page(gfp | __GFP_ZERO);
-	if (unlikely(!page))
-		return -ENOMEM;
+	/*
+	 * In order to utilize 64K pages for an object with a size < 2M, we will
+	 * need to support a 64K scratch page, given that every 16th entry for a
+	 * page-table operating in 64K mode must point to a properly aligned 64K
+	 * region, including any PTEs which happen to point to scratch.
+	 *
+	 * This is only relevant for the 48b PPGTT where we support
+	 * huge-gtt-pages, see also i915_vma_insert().
+	 *
+	 * TODO: we should really consider write-protecting the scratch-page and
+	 * sharing it between ppgtts.
+	 */
+	if (i915_vm_is_48bit(vm) &&
+	    HAS_PAGE_SIZES(vm->i915, I915_GTT_PAGE_SIZE_64K)) {
+		order = get_order(I915_GTT_PAGE_SIZE_64K);
+		page = alloc_pages(gfp | __GFP_ZERO, order);
+		if (page) {
+			addr = dma_map_page(vm->dma, page, 0,
+					    I915_GTT_PAGE_SIZE_64K,
+					    PCI_DMA_BIDIRECTIONAL);
+			if (unlikely(dma_mapping_error(vm->dma, addr))) {
+				__free_pages(page, order);
+				page = NULL;
+			}
 
-	addr = dma_map_page(vm->dma, page, 0, PAGE_SIZE,
-			    PCI_DMA_BIDIRECTIONAL);
-	if (unlikely(dma_mapping_error(vm->dma, addr))) {
-		__free_page(page);
-		return -ENOMEM;
+			if (!IS_ALIGNED(addr, I915_GTT_PAGE_SIZE_64K)) {
+				dma_unmap_page(vm->dma, addr,
+					       I915_GTT_PAGE_SIZE_64K,
+					       PCI_DMA_BIDIRECTIONAL);
+				__free_pages(page, order);
+				page = NULL;
+			}
+		}
+	}
+
+	if (!page) {
+		order = 0;
+		page = alloc_page(gfp | __GFP_ZERO);
+		if (unlikely(!page))
+			return -ENOMEM;
+
+		addr = dma_map_page(vm->dma, page, 0, PAGE_SIZE,
+				    PCI_DMA_BIDIRECTIONAL);
+		if (unlikely(dma_mapping_error(vm->dma, addr))) {
+			__free_page(page);
+			return -ENOMEM;
+		}
 	}
 
 	vm->scratch_page.page = page;
 	vm->scratch_page.daddr = addr;
+	vm->scratch_page.order = order;
+
 	return 0;
 }
 
@@ -543,8 +584,9 @@ static void cleanup_scratch_page(struct i915_address_space *vm)
 {
 	struct i915_page_dma *p = &vm->scratch_page;
 
-	dma_unmap_page(vm->dma, p->daddr, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-	__free_page(p->page);
+	dma_unmap_page(vm->dma, p->daddr, BIT(p->order) << PAGE_SHIFT,
+		       PCI_DMA_BIDIRECTIONAL);
+	__free_pages(p->page, p->order);
 }
 
 static struct i915_page_table *alloc_pt(struct i915_address_space *vm)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index e9e66abbe532..0a31dc369c28 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -214,6 +214,7 @@ struct i915_vma;
 
 struct i915_page_dma {
 	struct page *page;
+	int order;
 	union {
 		dma_addr_t daddr;
 
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 14/21] drm/i915: support 64K pages for the 48b PPGTT
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (12 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 13/21] drm/i915: add support for 64K scratch page Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 15/21] drm/i915: accurate page size tracking for the ppgtt Matthew Auld
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Support inserting 64K pages into the 48b PPGTT.

v2: check for 64K scratch

v3: we should only have to re-adjust maybe_64K at every sg interval
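
The 64K eligibility test is evaluated when starting a fresh page-table
and re-checked at every sg boundary, together with requiring that we
start at PTE index 0; its core condition is sketched here (the helper
name is illustrative, the logic mirrors the diff below):

  /* Sketch: can the rest of this page-table be filled in 64K mode? */
  static bool can_use_64K(const struct i915_vma *vma,
                          dma_addr_t dma, u64 rem, u16 index, u16 max)
  {
          return vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K &&
                 IS_ALIGNED(dma, I915_GTT_PAGE_SIZE_64K) &&
                 (IS_ALIGNED(rem, I915_GTT_PAGE_SIZE_64K) ||
                  rem >= (max - index) << PAGE_SHIFT);
  }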

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 31 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h |  7 +++++++
 2 files changed, 38 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 37029a42c701..629684c23a0e 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1070,6 +1070,7 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 		struct i915_page_directory_pointer *pdp = pdps[idx.pml4e];
 		struct i915_page_directory *pd = pdp->page_directory[idx.pdpe];
 		unsigned int page_size;
+		bool maybe_64K = false;
 		gen8_pte_t encode = pte_encode;
 		gen8_pte_t *vaddr;
 		u16 index, max;
@@ -1091,6 +1092,13 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 			max = GEN8_PTES;
 			page_size = I915_GTT_PAGE_SIZE;
 
+			if (!index &&
+			    vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K &&
+			    IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_64K) &&
+			    (IS_ALIGNED(rem, I915_GTT_PAGE_SIZE_64K) ||
+			     rem >= (max - index) << PAGE_SHIFT))
+				maybe_64K = true;
+
 			vaddr = kmap_atomic_px(pt);
 		}
 
@@ -1110,12 +1118,35 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 				iter->dma = sg_dma_address(iter->sg);
 				iter->max = iter->dma + rem;
 
+				if (maybe_64K && index < max &&
+				    !(IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_64K) &&
+				      (IS_ALIGNED(rem, I915_GTT_PAGE_SIZE_64K) ||
+				       rem >= (max - index) << PAGE_SHIFT)))
+					maybe_64K = false;
+
 				if (unlikely(!IS_ALIGNED(iter->dma, page_size)))
 					break;
 			}
 		} while (rem >= page_size && index < max);
 
 		kunmap_atomic(vaddr);
+
+		/*
+		 * Is it safe to mark the 2M block as 64K? Either we have
+		 * filled the whole page-table with 64K entries, or we have
+		 * filled part of it and reached the end of the sg table with
+		 * enough padding.
+		 */
+		if (maybe_64K &&
+		    (index == max ||
+		     (i915_vm_has_scratch_64K(vma->vm) &&
+		      !iter->sg && IS_ALIGNED(vma->node.start +
+					      vma->node.size,
+					      I915_GTT_PAGE_SIZE_2M)))) {
+			vaddr = kmap_atomic_px(pd);
+			vaddr[idx.pde] |= GEN8_PDE_IPS_64K;
+			kunmap_atomic(vaddr);
+		}
 	} while (iter->sg);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 0a31dc369c28..475e4cf042be 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -153,6 +153,7 @@ typedef u64 gen8_ppgtt_pml4e_t;
 #define GEN8_PPAT_GET_AGE(x) ((x) & (3 << 4))
 #define CHV_PPAT_GET_SNOOP(x) ((x) & (1 << 6))
 
+#define GEN8_PDE_IPS_64K BIT(11)
 #define GEN8_PDE_PS_2M   BIT(7)
 
 struct sg_table;
@@ -351,6 +352,12 @@ i915_vm_is_48bit(const struct i915_address_space *vm)
 	return (vm->total - 1) >> 32;
 }
 
+static inline bool
+i915_vm_has_scratch_64K(struct i915_address_space *vm)
+{
+	return vm->scratch_page.order == get_order(I915_GTT_PAGE_SIZE_64K);
+}
+
 /* The Graphics Translation Table is the way in which GEN hardware translates a
  * Graphics Virtual Address into a Physical Address. In addition to the normal
  * collateral associated with any va->pa translations GEN hardware also has a
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 15/21] drm/i915: accurate page size tracking for the ppgtt
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (13 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 14/21] drm/i915: support 64K pages for the 48b PPGTT Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 16/21] drm/i915/debugfs: include some gtt page size metrics Matthew Auld
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Now that we support multiple page sizes for the ppgtt, it would be
useful to track the real usage for debugging purposes.
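
With this in place a debugging assert can check what the gtt actually
used, rather than what the backing store merely allowed, e.g.
(illustrative only):

  /* Sketch: did this bound vma really get 2M gtt entries? */
  GEM_BUG_ON(!(vma->page_sizes.gtt & I915_GTT_PAGE_SIZE_2M));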

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c    | 11 +++++++++++
 drivers/gpu/drm/i915/i915_gem_object.h | 10 ++++++++++
 2 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 629684c23a0e..d1f281874668 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1054,6 +1054,8 @@ static void gen8_ppgtt_insert_3lvl(struct i915_address_space *vm,
 
 	gen8_ppgtt_insert_pte_entries(ppgtt, &ppgtt->pdp, &iter, &idx,
 				      cache_level);
+
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
 }
 
 static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
@@ -1146,7 +1148,10 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 			vaddr = kmap_atomic_px(pd);
 			vaddr[idx.pde] |= GEN8_PDE_IPS_64K;
 			kunmap_atomic(vaddr);
+			page_size = I915_GTT_PAGE_SIZE_64K;
 		}
+
+		vma->page_sizes.gtt |= page_size;
 	} while (iter->sg);
 }
 
@@ -1171,6 +1176,8 @@ static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 		while (gen8_ppgtt_insert_pte_entries(ppgtt, pdps[idx.pml4e++],
 						     &iter, &idx, cache_level))
 			GEM_BUG_ON(idx.pml4e >= GEN8_PML4ES_PER_PML4);
+
+		vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
 	}
 }
 
@@ -1892,6 +1899,8 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 		}
 	} while (1);
 	kunmap_atomic(vaddr);
+
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
 }
 
 static int gen6_alloc_va_range(struct i915_address_space *vm,
@@ -2599,6 +2608,8 @@ static int ggtt_bind_vma(struct i915_vma *vma,
 	vma->vm->insert_entries(vma->vm, vma, cache_level, pte_flags);
 	intel_runtime_pm_put(i915);
 
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
+
 	/*
 	 * Without aliasing PPGTT there's no difference between
 	 * GLOBAL/LOCAL_BIND, it's all the same ptes. Hence unconditionally
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 110672952a1c..e4e6dd93889d 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -169,6 +169,7 @@ struct drm_i915_gem_object {
 		struct sg_table *pages;
 		void *mapping;
 
+		/* TODO: whack some of this into the error state */
 		struct i915_page_sizes {
 			/**
 			 * The sg mask of the pages sg_table, i.e. the mask
@@ -184,6 +185,15 @@ struct drm_i915_gem_object {
 			 * to use opportunistically.
 			 */
 			unsigned int sg;
+
+			/**
+			 * The actual gtt page size usage. Since we can have
+			 * multiple vma associated with this object we need to
+			 * prevent any trampling of state, hence a copy of this
+			 * struct also lives in each vma, and so the gtt
+			 * value here should only be read/written through the vma.
+			 */
+			unsigned int gtt;
 		} page_sizes;
 
 		struct i915_gem_object_page_iter {
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 16/21] drm/i915/debugfs: include some gtt page size metrics
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (14 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 15/21] drm/i915: accurate page size tracking for the ppgtt Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 17/21] drm/i915/selftests: huge page tests Matthew Auld
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Good to know, mostly for debugging purposes.

v2: some improvements from Chris

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 61 ++++++++++++++++++++++++++++++++++---
 1 file changed, 57 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index b4a6ac60e7c6..7585dafee6e3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -118,6 +118,36 @@ static u64 i915_gem_obj_total_ggtt_size(struct drm_i915_gem_object *obj)
 	return size;
 }
 
+static const char *
+stringify_page_sizes(unsigned int page_sizes, char *buf, size_t len)
+{
+	size_t x = 0;
+
+	switch (page_sizes) {
+	case 0:
+		return "";
+	case I915_GTT_PAGE_SIZE_4K:
+		return "4K";
+	case I915_GTT_PAGE_SIZE_64K:
+		return "64K";
+	case I915_GTT_PAGE_SIZE_2M:
+		return "2M";
+	default:
+		if (!buf)
+			return "M";
+
+		if (page_sizes & I915_GTT_PAGE_SIZE_2M)
+			x += snprintf(buf + x, len - x, "2M, ");
+		if (page_sizes & I915_GTT_PAGE_SIZE_64K)
+			x += snprintf(buf + x, len - x, "64K, ");
+		if (page_sizes & I915_GTT_PAGE_SIZE_4K)
+			x += snprintf(buf + x, len - x, "4K, ");
+		buf[x-2] = '\0';
+
+		return buf;
+	}
+}
+
 static void
 describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 {
@@ -155,9 +185,10 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		if (!drm_mm_node_allocated(&vma->node))
 			continue;
 
-		seq_printf(m, " (%sgtt offset: %08llx, size: %08llx",
+		seq_printf(m, " (%sgtt offset: %08llx, size: %08llx, pages: %s",
 			   i915_vma_is_ggtt(vma) ? "g" : "pp",
-			   vma->node.start, vma->node.size);
+			   vma->node.start, vma->node.size,
+			   stringify_page_sizes(vma->page_sizes.gtt, NULL, 0));
 		if (i915_vma_is_ggtt(vma)) {
 			switch (vma->ggtt_view.type) {
 			case I915_GGTT_VIEW_NORMAL:
@@ -402,10 +433,12 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
 	struct drm_device *dev = &dev_priv->drm;
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	u32 count, mapped_count, purgeable_count, dpy_count;
-	u64 size, mapped_size, purgeable_size, dpy_size;
+	u32 count, mapped_count, purgeable_count, dpy_count, huge_count;
+	u64 size, mapped_size, purgeable_size, dpy_size, huge_size;
 	struct drm_i915_gem_object *obj;
+	unsigned int page_sizes = 0;
 	struct drm_file *file;
+	char buf[80];
 	int ret;
 
 	ret = mutex_lock_interruptible(&dev->struct_mutex);
@@ -419,6 +452,7 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 	size = count = 0;
 	mapped_size = mapped_count = 0;
 	purgeable_size = purgeable_count = 0;
+	huge_size = huge_count = 0;
 	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_link) {
 		size += obj->base.size;
 		++count;
@@ -432,6 +466,12 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 			mapped_count++;
 			mapped_size += obj->base.size;
 		}
+
+		if (obj->mm.page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			huge_count++;
+			huge_size += obj->base.size;
+			page_sizes |= obj->mm.page_sizes.sg;
+		}
 	}
 	seq_printf(m, "%u unbound objects, %llu bytes\n", count, size);
 
@@ -454,6 +494,12 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 			mapped_count++;
 			mapped_size += obj->base.size;
 		}
+
+		if (obj->mm.page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			huge_count++;
+			huge_size += obj->base.size;
+			page_sizes |= obj->mm.page_sizes.sg;
+		}
 	}
 	seq_printf(m, "%u bound objects, %llu bytes\n",
 		   count, size);
@@ -461,11 +507,18 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 		   purgeable_count, purgeable_size);
 	seq_printf(m, "%u mapped objects, %llu bytes\n",
 		   mapped_count, mapped_size);
+	seq_printf(m, "%u huge-paged objects (%s) %llu bytes\n",
+		   huge_count,
+		   stringify_page_sizes(page_sizes, buf, sizeof(buf)),
+		   huge_size);
 	seq_printf(m, "%u display objects (pinned), %llu bytes\n",
 		   dpy_count, dpy_size);
 
 	seq_printf(m, "%llu [%llu] gtt total\n",
 		   ggtt->base.total, ggtt->mappable_end);
+	seq_printf(m, "Supported page sizes: %s\n",
+		   stringify_page_sizes(INTEL_INFO(dev_priv)->page_sizes,
+					buf, sizeof(buf)));
 
 	seq_putc(m, '\n');
 	print_batch_pool_stats(m, dev_priv);
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 17/21] drm/i915/selftests: huge page tests
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (15 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 16/21] drm/i915/debugfs: include some gtt page size metrics Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:22   ` Chris Wilson
                     ` (2 more replies)
  2017-09-29 16:10 ` [PATCH 18/21] drm/i915/selftests: mix huge pages Matthew Auld
                   ` (5 subsequent siblings)
  22 siblings, 3 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

v2: mock test page support configurations and add MI_STORE_DWORD test

v3: run all mockable huge page tests on all platforms via the mock_device

v4: add pin_update regression test
    various improvements suggested by Chris

v5: fix issues reported by kbuild
    test single sg spanning multiple page sizes
    don't explode when running the live-tests through the appgtt

v6: lots of improvements from Chris

v7: run on each engine for igt_write_huge
    add simple tmpfs fallback test

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c                    |    1 +
 drivers/gpu/drm/i915/i915_gem_object.h             |    2 +
 drivers/gpu/drm/i915/selftests/huge_pages.c        | 1711 ++++++++++++++++++++
 .../gpu/drm/i915/selftests/i915_live_selftests.h   |    1 +
 .../gpu/drm/i915/selftests/i915_mock_selftests.h   |    1 +
 5 files changed, 1716 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/huge_pages.c

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1e4f2b3c33ab..769a94e666e6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5406,6 +5406,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 #include "selftests/scatterlist.c"
 #include "selftests/mock_gem_device.c"
 #include "selftests/huge_gem_object.c"
+#include "selftests/huge_pages.c"
 #include "selftests/i915_gem_object.c"
 #include "selftests/i915_gem_coherency.c"
 #endif
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index e4e6dd93889d..956c911c2cbf 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -196,6 +196,8 @@ struct drm_i915_gem_object {
 			unsigned int gtt;
 		} page_sizes;
 
+		I915_SELFTEST_DECLARE(unsigned int page_mask);
+
 		struct i915_gem_object_page_iter {
 			struct scatterlist *sg_pos;
 			unsigned int sg_idx; /* in pages, but 32bit eek! */
diff --git a/drivers/gpu/drm/i915/selftests/huge_pages.c b/drivers/gpu/drm/i915/selftests/huge_pages.c
new file mode 100644
index 000000000000..543b632c3fb6
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/huge_pages.c
@@ -0,0 +1,1711 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "../i915_selftest.h"
+
+#include <linux/prime_numbers.h>
+
+#include "mock_drm.h"
+
+static const unsigned int page_sizes[] = {
+	I915_GTT_PAGE_SIZE_2M,
+	I915_GTT_PAGE_SIZE_64K,
+	I915_GTT_PAGE_SIZE_4K,
+};
+
+static unsigned int get_largest_page_size(struct drm_i915_private *i915,
+					  size_t rem)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
+		unsigned int page_size = page_sizes[i];
+
+		if (HAS_PAGE_SIZES(i915, page_size) && rem >= page_size)
+			return page_size;
+	}
+
+	return 0;
+}
+
+static void huge_pages_free_pages(struct sg_table *st)
+{
+	struct scatterlist *sg;
+
+	for (sg = st->sgl; sg; sg = __sg_next(sg)) {
+		if (sg_page(sg))
+			__free_pages(sg_page(sg), get_order(sg->length));
+	}
+
+	sg_free_table(st);
+	kfree(st);
+}
+
+static int get_huge_pages(struct drm_i915_gem_object *obj)
+{
+#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
+	unsigned int page_mask = obj->mm.page_mask;
+	struct sg_table *st;
+	struct scatterlist *sg;
+	unsigned int sg_mask;
+	u64 rem;
+
+	st = kmalloc(sizeof(*st), GFP);
+	if (!st)
+		return -ENOMEM;
+
+	if (sg_alloc_table(st, obj->base.size >> PAGE_SHIFT, GFP)) {
+		kfree(st);
+		return -ENOMEM;
+	}
+
+	rem = obj->base.size;
+	sg = st->sgl;
+	st->nents = 0;
+	sg_mask = 0;
+
+	/*
+	 * Our goal here is simple: we want to greedily fill the object from
+	 * largest to smallest page-size, while ensuring that we use *every*
+	 * page-size as per the given page-mask.
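+	 *
+	 * For example, with a page mask of 2M | 4K and a 2M + 8K object we
+	 * expect one 2M chunk followed by two 4K chunks.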
+	 */
+	do {
+		unsigned int bit = ilog2(page_mask);
+		unsigned int page_size = BIT(bit);
+		int order = get_order(page_size);
+
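+		/*
+		 * Keep allocating chunks of the current page size while we
+		 * can still reserve room for one page of each smaller size
+		 * left in the mask -- (page_size-1) & page_mask is exactly
+		 * that reservation.
+		 */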
+		do {
+			struct page *page;
+
+			GEM_BUG_ON(order >= MAX_ORDER);
+			page = alloc_pages(GFP | __GFP_ZERO, order);
+			if (!page)
+				goto err;
+
+			sg_set_page(sg, page, page_size, 0);
+			sg_mask |= page_size;
+			st->nents++;
+
+			rem -= page_size;
+			if (!rem) {
+				sg_mark_end(sg);
+				break;
+			}
+
+			sg = __sg_next(sg);
+		} while ((rem - ((page_size-1) & page_mask)) >= page_size);
+
+		page_mask &= (page_size-1);
+	} while (page_mask);
+
+	if (i915_gem_gtt_prepare_pages(obj, st))
+		goto err;
+
+	obj->mm.madv = I915_MADV_DONTNEED;
+
+	GEM_BUG_ON(sg_mask != obj->mm.page_mask);
+	__i915_gem_object_set_pages(obj, st, sg_mask);
+
+	return 0;
+
+err:
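+	/*
+	 * Mark the failed entry as empty and as the last one, so that
+	 * huge_pages_free_pages() skips it and stops walking there.
+	 */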
+	sg_set_page(sg, NULL, 0, 0);
+	sg_mark_end(sg);
+	huge_pages_free_pages(st);
+
+	return -ENOMEM;
+}
+
+static void put_huge_pages(struct drm_i915_gem_object *obj,
+			   struct sg_table *pages)
+{
+	i915_gem_gtt_finish_pages(obj, pages);
+	huge_pages_free_pages(pages);
+
+	obj->mm.dirty = false;
+	obj->mm.madv = I915_MADV_WILLNEED;
+}
+
+static const struct drm_i915_gem_object_ops huge_page_ops = {
+	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
+		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.get_pages = get_huge_pages,
+	.put_pages = put_huge_pages,
+};
+
+static struct drm_i915_gem_object *
+huge_pages_object(struct drm_i915_private *i915,
+		  u64 size,
+		  unsigned int page_mask)
+{
+	struct drm_i915_gem_object *obj;
+
+	GEM_BUG_ON(!size);
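+	/* Size must be a multiple of the smallest page size in the mask */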
+	GEM_BUG_ON(!IS_ALIGNED(size, BIT(__ffs(page_mask))));
+
+	if (size >> PAGE_SHIFT > INT_MAX)
+		return ERR_PTR(-E2BIG);
+
+	if (overflows_type(size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc(i915);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, size);
+	i915_gem_object_init(obj, &huge_page_ops);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+	obj->cache_level = I915_CACHE_NONE;
+
+	obj->mm.page_mask = page_mask;
+
+	return obj;
+}
+
+static int fake_get_huge_pages(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	size_t max_len = rounddown_pow_of_two(UINT_MAX);
+	struct sg_table *st;
+	struct scatterlist *sg;
+	unsigned int sg_mask;
+	size_t rem;
+
+	st = kmalloc(sizeof(*st), GFP);
+	if (!st)
+		return -ENOMEM;
+
+	if (sg_alloc_table(st, obj->base.size >> PAGE_SHIFT, GFP)) {
+		kfree(st);
+		return -ENOMEM;
+	}
+
+	/* Use optimal page-sized chunks to fill in the sg table */
+	rem = obj->base.size;
+	sg = st->sgl;
+	st->nents = 0;
+	sg_mask = 0;
+	do {
+		unsigned int page_size = get_largest_page_size(i915, rem);
+		unsigned int len = min(page_size * (rem / page_size), max_len);
+
+		GEM_BUG_ON(!page_size);
+
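+		/*
+		 * There is no real backing store here; stashing the page size
+		 * as the (fake) dma address also guarantees the address is
+		 * suitably aligned for that page size.
+		 */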
+		sg->offset = 0;
+		sg->length = len;
+		sg_dma_len(sg) = len;
+		sg_dma_address(sg) = page_size;
+
+		sg_mask |= len;
+
+		st->nents++;
+
+		rem -= len;
+		if (!rem) {
+			sg_mark_end(sg);
+			break;
+		}
+
+		sg = sg_next(sg);
+	} while (1);
+
+	obj->mm.madv = I915_MADV_DONTNEED;
+
+	__i915_gem_object_set_pages(obj, st, sg_mask);
+
+	return 0;
+}
+
+static int fake_get_huge_pages_single(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct sg_table *st;
+	struct scatterlist *sg;
+	unsigned int page_size;
+
+	st = kmalloc(sizeof(*st), GFP);
+	if (!st)
+		return -ENOMEM;
+
+	if (sg_alloc_table(st, 1, GFP)) {
+		kfree(st);
+		return -ENOMEM;
+	}
+
+	sg = st->sgl;
+	st->nents = 1;
+
+	page_size = get_largest_page_size(i915, obj->base.size);
+	GEM_BUG_ON(!page_size);
+
+	sg->offset = 0;
+	sg->length = obj->base.size;
+	sg_dma_len(sg) = obj->base.size;
+	sg_dma_address(sg) = page_size;
+
+	obj->mm.madv = I915_MADV_DONTNEED;
+
+	__i915_gem_object_set_pages(obj, st, sg->length);
+
+	return 0;
+#undef GFP
+}
+
+static void fake_free_huge_pages(struct drm_i915_gem_object *obj,
+				 struct sg_table *pages)
+{
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+static void fake_put_huge_pages(struct drm_i915_gem_object *obj,
+				struct sg_table *pages)
+{
+	fake_free_huge_pages(obj, pages);
+	obj->mm.dirty = false;
+	obj->mm.madv = I915_MADV_WILLNEED;
+}
+
+static const struct drm_i915_gem_object_ops fake_ops = {
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
+	.get_pages = fake_get_huge_pages,
+	.put_pages = fake_put_huge_pages,
+};
+
+static const struct drm_i915_gem_object_ops fake_ops_single = {
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
+	.get_pages = fake_get_huge_pages_single,
+	.put_pages = fake_put_huge_pages,
+};
+
+static struct drm_i915_gem_object *
+fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
+{
+	struct drm_i915_gem_object *obj;
+
+	GEM_BUG_ON(!size);
+	GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE));
+
+	if (size >> PAGE_SHIFT > UINT_MAX)
+		return ERR_PTR(-E2BIG);
+
+	if (overflows_type(size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc(i915);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, size);
+
+	if (single)
+		i915_gem_object_init(obj, &fake_ops_single);
+	else
+		i915_gem_object_init(obj, &fake_ops);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+	obj->cache_level = I915_CACHE_NONE;
+
+	return obj;
+}
+
+static int igt_check_page_sizes(struct i915_vma *vma)
+{
+	struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
+	unsigned int supported = INTEL_INFO(i915)->page_sizes;
+	struct drm_i915_gem_object *obj = vma->obj;
+	int err = 0;
+
+	if (!HAS_PAGE_SIZES(i915, vma->page_sizes.sg)) {
+		pr_err("unsupported page_sizes.sg=%u, supported=%u\n",
+		       vma->page_sizes.sg & ~supported, supported);
+		err = -EINVAL;
+	}
+
+	if (!HAS_PAGE_SIZES(i915, vma->page_sizes.gtt)) {
+		pr_err("unsupported page_sizes.gtt=%u, supported=%u\n",
+		       vma->page_sizes.gtt & ~supported, supported);
+		err = -EINVAL;
+	}
+
+	if (vma->page_sizes.phys != obj->mm.page_sizes.phys) {
+		pr_err("vma->page_sizes.phys(%u) != obj->mm.page_sizes.phys(%u)\n",
+		       vma->page_sizes.phys, obj->mm.page_sizes.phys);
+		err = -EINVAL;
+	}
+
+	if (vma->page_sizes.sg != obj->mm.page_sizes.sg) {
+		pr_err("vma->page_sizes.sg(%u) != obj->mm.page_sizes.sg(%u)\n",
+		       vma->page_sizes.sg, obj->mm.page_sizes.sg);
+		err = -EINVAL;
+	}
+
+	if (obj->mm.page_sizes.gtt) {
+		pr_err("obj->page_sizes.gtt(%u) should never be set\n",
+		       obj->mm.page_sizes.gtt);
+		err = -EINVAL;
+	}
+
+	return err;
+}
+
+static int igt_mock_exhaust_device_supported_pages(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	unsigned int saved_mask = INTEL_INFO(i915)->page_sizes;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int i, j, single;
+	int err;
+
+	/*
+	 * Sanity check creating objects with every supported page-size
+	 * combination for our mock device.
+	 */
+
+	for (i = 1; i < BIT(ARRAY_SIZE(page_sizes)); i++) {
+		unsigned int combination = 0;
+
+		for (j = 0; j < ARRAY_SIZE(page_sizes); j++) {
+			if (i & BIT(j))
+				combination |= page_sizes[j];
+		}
+
+		mkwrite_device_info(i915)->page_sizes = combination;
+
+		for (single = 0; single <= 1; ++single) {
+			obj = fake_huge_pages_object(i915, combination, !!single);
+			if (IS_ERR(obj)) {
+				err = PTR_ERR(obj);
+				goto out_device;
+			}
+
+			if (obj->base.size != combination) {
+				pr_err("obj->base.size=%zu, expected=%u\n",
+				       obj->base.size, combination);
+				err = -EINVAL;
+				goto out_put;
+			}
+
+			vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+			if (IS_ERR(vma)) {
+				err = PTR_ERR(vma);
+				goto out_put;
+			}
+
+			err = i915_vma_pin(vma, 0, 0, PIN_USER);
+			if (err)
+				goto out_close;
+
+			err = igt_check_page_sizes(vma);
+
+			if (vma->page_sizes.sg != combination) {
+				pr_err("page_sizes.sg=%u, expected=%u\n",
+				       vma->page_sizes.sg, combination);
+				err = -EINVAL;
+			}
+
+			i915_vma_unpin(vma);
+			i915_vma_close(vma);
+
+			i915_gem_object_put(obj);
+
+			if (err)
+				goto out_device;
+		}
+	}
+
+	goto out_device;
+
+out_close:
+	i915_vma_close(vma);
+out_put:
+	i915_gem_object_put(obj);
+out_device:
+	mkwrite_device_info(i915)->page_sizes = saved_mask;
+
+	return err;
+}
+
+static int igt_mock_ppgtt_misaligned_dma(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	unsigned long supported = INTEL_INFO(i915)->page_sizes;
+	struct drm_i915_gem_object *obj;
+	int bit;
+	int err;
+
+	/*
+	 * Sanity check dma misalignment for huge pages -- the dma addresses we
+	 * insert into the paging structures need to always respect the page
+	 * size alignment.
+	 */
+
+	bit = ilog2(I915_GTT_PAGE_SIZE_64K);
+
+	for_each_set_bit_from(bit, &supported, BITS_PER_LONG) {
+		IGT_TIMEOUT(end_time);
+		unsigned int page_size = BIT(bit);
+		unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
+		unsigned int offset;
+		unsigned int size =
+			round_up(page_size, I915_GTT_PAGE_SIZE_2M) << 1;
+		struct i915_vma *vma;
+
+		obj = fake_huge_pages_object(i915, size, true);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		if (obj->base.size != size) {
+			pr_err("obj->base.size=%zu, expected=%u\n",
+			       obj->base.size, size);
+			err = -EINVAL;
+			goto out_put;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		/* Force the page size for this object */
+		obj->mm.page_sizes.sg = page_size;
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_unpin;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, flags);
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		err = igt_check_page_sizes(vma);
+
+		if (vma->page_sizes.gtt != page_size) {
+			pr_err("page_sizes.gtt=%u, expected %u\n",
+			       vma->page_sizes.gtt, page_size);
+			err = -EINVAL;
+		}
+
+		i915_vma_unpin(vma);
+
+		if (err) {
+			i915_vma_close(vma);
+			goto out_unpin;
+		}
+
+		/*
+		 * Try all the other valid offsets until the next
+		 * boundary -- should always fall back to using 4K
+		 * pages.
+		 */
+		for (offset = 4096; offset < page_size; offset += 4096) {
+			err = i915_vma_unbind(vma);
+			if (err) {
+				i915_vma_close(vma);
+				goto out_unpin;
+			}
+
+			err = i915_vma_pin(vma, 0, 0, flags | offset);
+			if (err) {
+				i915_vma_close(vma);
+				goto out_unpin;
+			}
+
+			err = igt_check_page_sizes(vma);
+
+			if (vma->page_sizes.gtt != I915_GTT_PAGE_SIZE_4K) {
+				pr_err("page_sizes.gtt=%u, expected %lu\n",
+				       vma->page_sizes.gtt, I915_GTT_PAGE_SIZE_4K);
+				err = -EINVAL;
+			}
+
+			i915_vma_unpin(vma);
+
+			if (err) {
+				i915_vma_close(vma);
+				goto out_unpin;
+			}
+
+			if (igt_timeout(end_time,
+					"%s timed out at offset %x with page-size %x\n",
+					__func__, offset, page_size))
+				break;
+		}
+
+		i915_vma_close(vma);
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static void close_object_list(struct list_head *objects,
+			      struct i915_hw_ppgtt *ppgtt)
+{
+	struct drm_i915_gem_object *obj, *on;
+
+	list_for_each_entry_safe(obj, on, objects, st_link) {
+		struct i915_vma *vma;
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (!IS_ERR(vma))
+			i915_vma_close(vma);
+
+		list_del(&obj->st_link);
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+}
+
+static int igt_mock_ppgtt_huge_fill(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	unsigned long max_pages = ppgtt->base.total >> PAGE_SHIFT;
+	unsigned long page_num;
+	bool single = false;
+	LIST_HEAD(objects);
+	IGT_TIMEOUT(end_time);
+	int err;
+
+	for_each_prime_number_from(page_num, 1, max_pages) {
+		struct drm_i915_gem_object *obj;
+		size_t size = page_num << PAGE_SHIFT;
+		struct i915_vma *vma;
+		unsigned int expected_gtt = 0;
+		int i;
+
+		obj = fake_huge_pages_object(i915, size, single);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			break;
+		}
+
+		if (obj->base.size != size) {
+			pr_err("obj->base.size=%zd, expected=%zd\n",
+			       obj->base.size, size);
+			i915_gem_object_put(obj);
+			err = -EINVAL;
+			break;
+		}
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err) {
+			i915_gem_object_put(obj);
+			break;
+		}
+
+		list_add(&obj->st_link, &objects);
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			break;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_USER);
+		if (err)
+			break;
+
+		err = igt_check_page_sizes(vma);
+		if (err) {
+			i915_vma_unpin(vma);
+			break;
+		}
+
+		/*
+		 * Figure out the expected gtt page size knowing that we go from
+		 * largest to smallest page size sg chunks, and that we align to
+		 * the largest page size.
+		 */
+		for (i = 0; i < ARRAY_SIZE(page_sizes); ++i) {
+			unsigned int page_size = page_sizes[i];
+
+			if (HAS_PAGE_SIZES(i915, page_size) &&
+			    size >= page_size) {
+				expected_gtt |= page_size;
+				size &= page_size-1;
+			}
+		}
+
+		GEM_BUG_ON(!expected_gtt);
+		GEM_BUG_ON(size);
+
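+		/*
+		 * 64K pages can't share a page-table with 4K entries, and
+		 * since any 64K remainder is always less than 2M it will
+		 * share its page-table with the trailing 4K chunk, so we
+		 * expect it to fall back to 4K.
+		 */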
+		if (expected_gtt & I915_GTT_PAGE_SIZE_4K)
+			expected_gtt &= ~I915_GTT_PAGE_SIZE_64K;
+
+		i915_vma_unpin(vma);
+
+		if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
+			if (!IS_ALIGNED(vma->node.start,
+					I915_GTT_PAGE_SIZE_2M)) {
+				pr_err("node.start(%llx) not aligned to 2M\n",
+				       vma->node.start);
+				err = -EINVAL;
+				break;
+			}
+
+			if (!IS_ALIGNED(vma->node.size,
+					I915_GTT_PAGE_SIZE_2M)) {
+				pr_err("node.size(%llx) not aligned to 2M\n",
+				       vma->node.size);
+				err = -EINVAL;
+				break;
+			}
+		}
+
+		if (vma->page_sizes.gtt != expected_gtt) {
+			pr_err("gtt=%u, expected=%u, size=%zd, single=%s\n",
+			       vma->page_sizes.gtt, expected_gtt,
+			       obj->base.size, yesno(!!single));
+			err = -EINVAL;
+			break;
+		}
+
+		if (igt_timeout(end_time,
+				"%s timed out at size %zd\n",
+				__func__, obj->base.size))
+			break;
+
+		single = !single;
+	}
+
+	close_object_list(&objects, ppgtt);
+
+	if (err == -ENOMEM || err == -ENOSPC)
+		err = 0;
+
+	return err;
+}
+
+static int igt_mock_ppgtt_64K(void *arg)
+{
+	struct i915_hw_ppgtt *ppgtt = arg;
+	struct drm_i915_private *i915 = ppgtt->base.i915;
+	struct drm_i915_gem_object *obj;
+	const struct object_info {
+		unsigned int size;
+		unsigned int gtt;
+		unsigned int offset;
+	} objects[] = {
+		/* Cases with forced padding/alignment */
+		{
+			.size = SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_64K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_64K + SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_64K - SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M,
+			.gtt = I915_GTT_PAGE_SIZE_64K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M - SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M + SZ_4K,
+			.gtt = I915_GTT_PAGE_SIZE_64K | I915_GTT_PAGE_SIZE_4K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M + SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_64K,
+			.offset = 0,
+		},
+		{
+			.size = SZ_2M - SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_64K,
+			.offset = 0,
+		},
+		/* Try without any forced padding/alignment */
+		{
+			.size = SZ_64K,
+			.offset = SZ_2M,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+		},
+		{
+			.size = SZ_128K,
+			.offset = SZ_2M - SZ_64K,
+			.gtt = I915_GTT_PAGE_SIZE_4K,
+		},
+	};
+	struct i915_vma *vma;
+	int i, single;
+	int err;
+
+	/*
+	 * Sanity check some of the trickiness with 64K pages -- either we can
+	 * safely mark the whole page-table (2M block) as 64K, or we have to
+	 * always fall back to 4K.
+	 */
+
+	if (!HAS_PAGE_SIZES(i915, I915_GTT_PAGE_SIZE_64K))
+		return 0;
+
+	for (i = 0; i < ARRAY_SIZE(objects); ++i) {
+		unsigned int size = objects[i].size;
+		unsigned int expected_gtt = objects[i].gtt;
+		unsigned int offset = objects[i].offset;
+		unsigned int flags = PIN_USER;
+
+		for (single = 0; single <= 1; single++) {
+			obj = fake_huge_pages_object(i915, size, !!single);
+			if (IS_ERR(obj))
+				return PTR_ERR(obj);
+
+			err = i915_gem_object_pin_pages(obj);
+			if (err)
+				goto out_object_put;
+
+			/*
+			 * Disable 2M pages -- We only want to use 64K/4K pages
+			 * for this test.
+			 */
+			obj->mm.page_sizes.sg &= ~I915_GTT_PAGE_SIZE_2M;
+
+			vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+			if (IS_ERR(vma)) {
+				err = PTR_ERR(vma);
+				goto out_object_unpin;
+			}
+
+			if (offset)
+				flags |= PIN_OFFSET_FIXED | offset;
+
+			err = i915_vma_pin(vma, 0, 0, flags);
+			if (err)
+				goto out_vma_close;
+
+			err = igt_check_page_sizes(vma);
+			if (err)
+				goto out_vma_unpin;
+
+			if (!offset && vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
+				if (!IS_ALIGNED(vma->node.start,
+						I915_GTT_PAGE_SIZE_2M)) {
+					pr_err("node.start(%llx) not aligned to 2M\n",
+					       vma->node.start);
+					err = -EINVAL;
+					goto out_vma_unpin;
+				}
+
+				if (!IS_ALIGNED(vma->node.size,
+						I915_GTT_PAGE_SIZE_2M)) {
+					pr_err("node.size(%llx) not aligned to 2M\n",
+					       vma->node.size);
+					err = -EINVAL;
+					goto out_vma_unpin;
+				}
+			}
+
+			if (vma->page_sizes.gtt != expected_gtt) {
+				pr_err("gtt=%u, expected=%u, i=%d, single=%s\n",
+				       vma->page_sizes.gtt, expected_gtt, i,
+				       yesno(!!single));
+				err = -EINVAL;
+				goto out_vma_unpin;
+			}
+
+			i915_vma_unpin(vma);
+			i915_vma_close(vma);
+
+			i915_gem_object_unpin_pages(obj);
+			i915_gem_object_put(obj);
+		}
+	}
+
+	return 0;
+
+out_vma_unpin:
+	i915_vma_unpin(vma);
+out_vma_close:
+	i915_vma_close(vma);
+out_object_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_object_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static struct i915_vma *
+gpu_write_dw(struct i915_vma *vma, u64 offset, u32 val)
+{
+	struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
+	const int gen = INTEL_GEN(vma->vm->i915);
+	unsigned int count = vma->size >> PAGE_SHIFT;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *batch;
+	unsigned int size;
+	u32 *cmd;
+	int n;
+	int err;
+
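+	/*
+	 * Reserve up to 4 dwords per MI_STORE_DWORD_IMM (gen8+ addressing),
+	 * plus one dword for the MI_BATCH_BUFFER_END terminator.
+	 */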
+	size = (1 + 4 * count) * sizeof(u32);
+	size = round_up(size, PAGE_SIZE);
+	obj = i915_gem_object_create_internal(i915, size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		goto err;
+	}
+
+	offset += vma->node.start;
+
+	for (n = 0; n < count; n++) {
+		if (gen >= 8) {
+			*cmd++ = MI_STORE_DWORD_IMM_GEN4;
+			*cmd++ = lower_32_bits(offset);
+			*cmd++ = upper_32_bits(offset);
+			*cmd++ = val;
+		} else if (gen >= 4) {
+			*cmd++ = MI_STORE_DWORD_IMM_GEN4 |
+				(gen < 6 ? 1 << 22 : 0);
+			*cmd++ = 0;
+			*cmd++ = offset;
+			*cmd++ = val;
+		} else {
+			*cmd++ = MI_STORE_DWORD_IMM | 1 << 22;
+			*cmd++ = offset;
+			*cmd++ = val;
+		}
+
+		offset += PAGE_SIZE;
+	}
+
+	*cmd = MI_BATCH_BUFFER_END;
+
+	i915_gem_object_unpin_map(obj);
+
+	err = i915_gem_object_set_to_gtt_domain(obj, false);
+	if (err)
+		goto err;
+
+	batch = i915_vma_instance(obj, vma->vm, NULL);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		goto err;
+	}
+
+	err = i915_vma_pin(batch, 0, 0, PIN_USER);
+	if (err)
+		goto err;
+
+	return batch;
+
+err:
+	i915_gem_object_put(obj);
+
+	return ERR_PTR(err);
+}
+
+static int gpu_write(struct i915_vma *vma,
+		     struct i915_gem_context *ctx,
+		     struct intel_engine_cs *engine,
+		     u32 dword,
+		     u32 value)
+{
+	struct drm_i915_gem_request *rq;
+	struct i915_vma *batch;
+	int flags = 0;
+	int err;
+
+	GEM_BUG_ON(!intel_engine_can_store_dword(engine));
+
+	err = i915_gem_object_set_to_gtt_domain(vma->obj, true);
+	if (err)
+		return err;
+
+	rq = i915_gem_request_alloc(engine, ctx);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	batch = gpu_write_dw(vma, dword * sizeof(u32), value);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		goto err_request;
+	}
+
+	i915_vma_move_to_active(batch, rq, 0);
+	i915_gem_object_set_active_reference(batch->obj);
+	i915_vma_unpin(batch);
+	i915_vma_close(batch);
+
+	err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
+	if (err)
+		goto err_request;
+
+	err = i915_switch_context(rq);
+	if (err)
+		goto err_request;
+
+	err = rq->engine->emit_bb_start(rq,
+					batch->node.start, batch->node.size,
+					flags);
+	if (err)
+		goto err_request;
+
+	i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
+
+	reservation_object_lock(vma->resv, NULL);
+	reservation_object_add_excl_fence(vma->resv, &rq->fence);
+	reservation_object_unlock(vma->resv);
+
+err_request:
+	__i915_add_request(rq, err == 0);
+
+	return err;
+}
+
+static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
+{
+	unsigned int needs_flush;
+	unsigned long n;
+	int err;
+
+	err = i915_gem_obj_prepare_shmem_read(obj, &needs_flush);
+	if (err)
+		return err;
+
+	for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
+		u32 *ptr = kmap_atomic(i915_gem_object_get_page(obj, n));
+
+		if (needs_flush & CLFLUSH_BEFORE)
+			drm_clflush_virt_range(ptr, PAGE_SIZE);
+
+		if (ptr[dword] != val) {
+			pr_err("n=%lu ptr[%u]=%u, val=%u\n",
+			       n, dword, ptr[dword], val);
+			kunmap_atomic(ptr);
+			err = -EINVAL;
+			break;
+		}
+
+		kunmap_atomic(ptr);
+	}
+
+	i915_gem_obj_finish_shmem_access(obj);
+
+	return err;
+}
+
+static int igt_write_huge(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_gem_context *ctx = i915->kernel_context;
+	struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct i915_vma *vma;
+	unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
+	unsigned int max_page_size;
+	struct intel_engine_cs *engine;
+	unsigned int id;
+	u32 max;
+	u32 num;
+	u64 size;
+	int err = 0;
+
+	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
+
+	size = obj->base.size;
+	if (obj->mm.page_sizes.sg & I915_GTT_PAGE_SIZE_64K)
+		size = round_up(size, I915_GTT_PAGE_SIZE_2M);
+
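+	/* rounddown_pow_of_two gives us the largest page size in the sg mask */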
+	max_page_size = rounddown_pow_of_two(obj->mm.page_sizes.sg);
+	max = (vm->total - size) / max_page_size;
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	for_each_engine(engine, i915, id) {
+		IGT_TIMEOUT(end_time);
+
+		if (!intel_engine_can_store_dword(engine)) {
+			pr_info("store-dword-imm not supported on engine=%u\n",
+				id);
+			continue;
+		}
+
+		/*
+		 * Try various offsets until we timeout -- we want to avoid
+		 * issues hidden by effectively always using offset = 0.
+		 */
+		for_each_prime_number_from(num, 0, max) {
+			u64 offset = num * max_page_size;
+			u32 dword;
+
+			err = i915_vma_unbind(vma);
+			if (err)
+				goto out_vma_close;
+
+			err = i915_vma_pin(vma, size, max_page_size, flags | offset);
+			if (err) {
+				/*
+				 * The ggtt may have some pages reserved so
+				 * refrain from erroring out.
+				 */
+				if (err == -ENOSPC && i915_is_ggtt(vm))
+					continue;
+
+				goto out_vma_close;
+			}
+
+			err = igt_check_page_sizes(vma);
+			if (err)
+				goto out_vma_unpin;
+
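+			/* Also vary which dword in the page we write to */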
+			dword = offset_in_page(num) / 4;
+
+			err = gpu_write(vma, ctx, engine, dword, num + 1);
+			if (err) {
+				pr_err("gpu-write failed at offset=%llx\n", offset);
+				goto out_vma_unpin;
+			}
+
+			err = cpu_check(obj, dword, num + 1);
+			if (err) {
+				pr_err("cpu-check failed at offset=%llx\n", offset);
+				goto out_vma_unpin;
+			}
+
+			i915_vma_unpin(vma);
+
+			if (num > 0 &&
+			    igt_timeout(end_time,
+					"%s timed out on engine=%u at offset=%llx\n",
+					__func__, id, offset))
+				break;
+		}
+	}
+
+out_vma_unpin:
+	if (i915_vma_is_pinned(vma))
+		i915_vma_unpin(vma);
+out_vma_close:
+	i915_vma_close(vma);
+
+	return err;
+}
+
+static int igt_ppgtt_exhaust_huge(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	unsigned long supported = INTEL_INFO(i915)->page_sizes;
+	static unsigned int pages[ARRAY_SIZE(page_sizes)];
+	struct drm_i915_gem_object *obj;
+	unsigned int size_mask;
+	unsigned int page_mask;
+	int n, i;
+	int err;
+
+	/*
+	 * Sanity check creating objects with a varying mix of page sizes --
+	 * ensuring that our writes land in the right place.
+	 */
+
+	n = 0;
+	for_each_set_bit(i, &supported, BITS_PER_LONG)
+		pages[n++] = BIT(i);
+
+	for (size_mask = 2; size_mask < BIT(n); size_mask++) {
+		unsigned int size = 0;
+
+		for (i = 0; i < n; i++) {
+			if (size_mask & BIT(i))
+				size |= pages[i];
+		}
+
+		/*
+		 * For our page mask we want to enumerate all the page-size
+		 * combinations which will fit into our chosen object size.
+		 */
+		for (page_mask = 2; page_mask <= size_mask; page_mask++) {
+			unsigned int page_sizes = 0;
+
+			for (i = 0; i < n; i++) {
+				if (page_mask & BIT(i))
+					page_sizes |= pages[i];
+			}
+
+			/*
+			 * Ensure that we can actually fill the given object
+			 * with our chosen page mask.
+			 */
+			if (!IS_ALIGNED(size, BIT(__ffs(page_sizes))))
+				continue;
+
+			obj = huge_pages_object(i915, size, page_sizes);
+			if (IS_ERR(obj)) {
+				err = PTR_ERR(obj);
+				goto out_device;
+			}
+
+			err = i915_gem_object_pin_pages(obj);
+			if (err) {
+				i915_gem_object_put(obj);
+
+				if (err == -ENOMEM) {
+					pr_info("unable to get pages, size=%u, pages=%u\n",
+						size, page_sizes);
+					err = 0;
+					break;
+				}
+
+				pr_err("pin_pages failed, size=%u, pages=%u\n",
+				       size, page_sizes);
+
+				goto out_device;
+			}
+
+			/* Force the page-size for the gtt insertion */
+			obj->mm.page_sizes.sg = page_sizes;
+
+			err = igt_write_huge(obj);
+			if (err) {
+				pr_err("exhaust write-huge failed with size=%u\n",
+				       size);
+				goto out_unpin;
+			}
+
+			i915_gem_object_unpin_pages(obj);
+			i915_gem_object_put(obj);
+		}
+	}
+
+	goto out_device;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+	i915_gem_object_put(obj);
+out_device:
+	mkwrite_device_info(i915)->page_sizes = supported;
+
+	return err;
+}
+
+static int igt_ppgtt_internal_huge(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	static const unsigned int sizes[] = {
+		SZ_64K,
+		SZ_128K,
+		SZ_256K,
+		SZ_512K,
+		SZ_1M,
+		SZ_2M,
+	};
+	int i;
+	int err;
+
+	/*
+	 * Sanity check that the HW uses huge pages correctly through the
+	 * internal backing store -- ensure that our writes land in the right
+	 * place.
+	 */
+
+	for (i = 0; i < ARRAY_SIZE(sizes); ++i) {
+		unsigned int size = sizes[i];
+
+		obj = i915_gem_object_create_internal(i915, size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_64K) {
+			pr_info("internal unable to allocate huge-page(s) with size=%u\n",
+				size);
+			goto out_unpin;
+		}
+
+		err = igt_write_huge(obj);
+		if (err) {
+			pr_err("internal write-huge failed with size=%u\n",
+			       size);
+			goto out_unpin;
+		}
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static inline bool igt_can_allocate_thp(struct drm_i915_private *i915)
+{
+	return i915->mm.gemfs && has_transparent_hugepage();
+}
+
+static int igt_ppgtt_gemfs_huge(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	static const unsigned int sizes[] = {
+		SZ_2M,
+		SZ_4M,
+		SZ_8M,
+		SZ_16M,
+		SZ_32M,
+	};
+	int i;
+	int err;
+
+	/*
+	 * Sanity check that the HW uses huge pages correctly through gemfs --
+	 * ensure that our writes land in the right place.
+	 */
+
+	if (!igt_can_allocate_thp(i915)) {
+		pr_info("missing THP support, skipping\n");
+		return 0;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(sizes); ++i) {
+		unsigned int size = sizes[i];
+
+		obj = i915_gem_object_create(i915, size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			goto out_put;
+
+		if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_2M) {
+			pr_info("finishing test early, gemfs unable to allocate huge-page(s) with size=%u\n",
+				size);
+			goto out_unpin;
+		}
+
+		err = igt_write_huge(obj);
+		if (err) {
+			pr_err("gemfs write-huge failed with size=%u\n",
+			       size);
+			goto out_unpin;
+		}
+
+		i915_gem_object_unpin_pages(obj);
+		i915_gem_object_put(obj);
+	}
+
+	return 0;
+
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_ppgtt_pin_update(void *arg)
+{
+	struct drm_i915_private *dev_priv = arg;
+	unsigned long supported = INTEL_INFO(dev_priv)->page_sizes;
+	struct i915_hw_ppgtt *ppgtt = dev_priv->kernel_context->ppgtt;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
+	int first, last;
+	int err;
+
+	/*
+	 * Make sure there's no funny business when doing a PIN_UPDATE -- in the
+	 * past we had a subtle issue where we could incorrectly perform multiple
+	 * va-range allocations on the same object when doing a PIN_UPDATE, which
+	 * resulted in some pretty nasty bugs, though only when using
+	 * huge-gtt-pages.
+	 */
+
+	if (!USES_FULL_48BIT_PPGTT(dev_priv)) {
+		pr_info("48b PPGTT not supported, skipping\n");
+		return 0;
+	}
+
+	first = ilog2(I915_GTT_PAGE_SIZE_64K);
+	last = ilog2(I915_GTT_PAGE_SIZE_2M);
+
+	for_each_set_bit_from(first, &supported, last + 1) {
+		unsigned int page_size = BIT(first);
+
+		obj = i915_gem_object_create_internal(dev_priv, page_size);
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_put;
+		}
+
+		err = i915_vma_pin(vma, SZ_2M, 0, flags);
+		if (err)
+			goto out_close;
+
+		if (vma->page_sizes.sg < page_size) {
+			pr_info("Unable to allocate page-size %x, finishing test early\n",
+				page_size);
+			goto out_unpin;
+		}
+
+		err = igt_check_page_sizes(vma);
+		if (err)
+			goto out_unpin;
+
+		if (vma->page_sizes.gtt != page_size) {
+			dma_addr_t addr = i915_gem_object_get_dma_address(obj, 0);
+
+			/*
+			 * The only valid reason for this to ever fail would be
+			 * if the dma-mapper screwed us over when we did the
+			 * dma_map_sg(), since it has the final say over the dma
+			 * address.
+			 */
+			if (IS_ALIGNED(addr, page_size)) {
+				pr_err("page_sizes.gtt=%u, expected=%u\n",
+				       vma->page_sizes.gtt, page_size);
+				err = -EINVAL;
+			} else {
+				pr_info("dma address misaligned, finishing test early\n");
+			}
+
+			goto out_unpin;
+		}
+
+		err = i915_vma_bind(vma, I915_CACHE_NONE, PIN_UPDATE);
+		if (err)
+			goto out_unpin;
+
+		i915_vma_unpin(vma);
+		i915_vma_close(vma);
+
+		i915_gem_object_put(obj);
+	}
+
+	obj = i915_gem_object_create_internal(dev_priv, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = i915_vma_instance(obj, &ppgtt->base, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto out_put;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, flags);
+	if (err)
+		goto out_close;
+
+	/*
+	 * Make sure we don't end up with the pde still pointing to the 2M page
+	 * while the pt we just filled in is left dangling -- we can check this
+	 * by writing to the first page, where the write would then land in the
+	 * now stale 2M page.
+	 */
+
+	err = gpu_write(vma, dev_priv->kernel_context, dev_priv->engine[RCS],
+			0, 0xdeadbeaf);
+	if (err)
+		goto out_unpin;
+
+	err = cpu_check(obj, 0, 0xdeadbeaf);
+
+out_unpin:
+	i915_vma_unpin(vma);
+out_close:
+	i915_vma_close(vma);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_tmpfs_fallback(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct vfsmount *gemfs = i915->mm.gemfs;
+	struct i915_gem_context *ctx = i915->kernel_context;
+	struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	u32 *vaddr;
+	int err = 0;
+
+	/*
+	 * Make sure that we don't burst into a ball of flames upon falling back
+	 * to tmpfs, which we rely on in the off-chance that we encounter a
+	 * failure when setting up gemfs.
+	 */
+
+	i915->mm.gemfs = NULL;
+
+	obj = i915_gem_object_create(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto out_restore;
+	}
+
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto out_put;
+	}
+	*vaddr = 0xdeadbeaf;
+
+	i915_gem_object_unpin_map(obj);
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto out_put;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER);
+	if (err)
+		goto out_close;
+
+	err = igt_check_page_sizes(vma);
+
+	i915_vma_unpin(vma);
+out_close:
+	i915_vma_close(vma);
+out_put:
+	i915_gem_object_put(obj);
+out_restore:
+	i915->mm.gemfs = gemfs;
+
+	return err;
+}
+
+static int igt_shrink_thp(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_gem_context *ctx = i915->kernel_context;
+	struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	unsigned int flags = PIN_USER;
+	int err;
+
+	/*
+	 * Sanity check shrinking huge-paged object -- make sure nothing blows
+	 * up.
+	 */
+
+	if (!igt_can_allocate_thp(i915)) {
+		pr_info("missing THP support, skipping\n");
+		return 0;
+	}
+
+	obj = i915_gem_object_create(i915, SZ_2M);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto out_put;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, flags);
+	if (err)
+		goto out_close;
+
+	if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_2M) {
+		pr_info("failed to allocate THP, finishing test early\n");
+		goto out_unpin;
+	}
+
+	err = igt_check_page_sizes(vma);
+	if (err)
+		goto out_unpin;
+
+	err = gpu_write(vma, i915->kernel_context, i915->engine[RCS], 0,
+			0xdeadbeaf);
+	if (err)
+		goto out_unpin;
+
+	i915_vma_unpin(vma);
+
+	/*
+	 * Now that the pages are *unpinned*, shrink-all should invoke
+	 * shmem to truncate our pages.
+	 */
+	i915_gem_shrink_all(i915);
+	if (!IS_ERR_OR_NULL(obj->mm.pages)) {
+		pr_err("shrink-all didn't truncate the pages\n");
+		err = -EINVAL;
+		goto out_close;
+	}
+
+	if (obj->mm.page_sizes.sg || obj->mm.page_sizes.phys) {
+		pr_err("residual page-size bits left\n");
+		err = -EINVAL;
+		goto out_close;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, flags);
+	if (err)
+		goto out_close;
+
+	err = cpu_check(obj, 0, 0xdeadbeaf);
+
+out_unpin:
+	i915_vma_unpin(vma);
+out_close:
+	i915_vma_close(vma);
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+int i915_gem_huge_page_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_mock_exhaust_device_supported_pages),
+		SUBTEST(igt_mock_ppgtt_misaligned_dma),
+		SUBTEST(igt_mock_ppgtt_huge_fill),
+		SUBTEST(igt_mock_ppgtt_64K),
+	};
+	int saved_ppgtt = i915_modparams.enable_ppgtt;
+	struct drm_i915_private *dev_priv;
+	struct pci_dev *pdev;
+	struct i915_hw_ppgtt *ppgtt;
+	int err;
+
+	dev_priv = mock_gem_device();
+	if (!dev_priv)
+		return -ENOMEM;
+
+	/* Pretend to be a device which supports the 48b PPGTT */
+	i915_modparams.enable_ppgtt = 3;
+
+	pdev = dev_priv->drm.pdev;
+	dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(39));
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ppgtt = i915_ppgtt_create(dev_priv, ERR_PTR(-ENODEV), "mock");
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto out_unlock;
+	}
+
+	if (!i915_vm_is_48bit(&ppgtt->base)) {
+		pr_err("failed to create 48b PPGTT\n");
+		err = -EINVAL;
+		goto out_close;
+	}
+
+	/* If we ever hit this then it's time to mock the 64K scratch */
+	if (!i915_vm_has_scratch_64K(&ppgtt->base)) {
+		pr_err("PPGTT missing 64K scratch page\n");
+		err = -EINVAL;
+		goto out_close;
+	}
+
+	err = i915_subtests(tests, ppgtt);
+
+out_close:
+	i915_ppgtt_close(&ppgtt->base);
+	i915_ppgtt_put(ppgtt);
+
+out_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	i915_modparams.enable_ppgtt = saved_ppgtt;
+
+	drm_dev_unref(&dev_priv->drm);
+
+	return err;
+}
+
+int i915_gem_huge_page_live_selftests(struct drm_i915_private *dev_priv)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_ppgtt_gemfs_huge),
+		SUBTEST(igt_ppgtt_internal_huge),
+		SUBTEST(igt_ppgtt_exhaust_huge),
+		SUBTEST(igt_ppgtt_pin_update),
+		SUBTEST(igt_tmpfs_fallback),
+		SUBTEST(igt_shrink_thp),
+	};
+	int err;
+
+	if (!USES_PPGTT(dev_priv)) {
+		pr_info("PPGTT not supported, skipping live-selftests\n");
+		return 0;
+	}
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	err = i915_subtests(tests, dev_priv);
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 18b174d855ca..64acd7eccc5c 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -15,5 +15,6 @@ selftest(objects, i915_gem_object_live_selftests)
 selftest(dmabuf, i915_gem_dmabuf_live_selftests)
 selftest(coherency, i915_gem_coherency_live_selftests)
 selftest(gtt, i915_gem_gtt_live_selftests)
+selftest(hugepages, i915_gem_huge_page_live_selftests)
 selftest(contexts, i915_gem_context_live_selftests)
 selftest(hangcheck, intel_hangcheck_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index fc74687501ba..9961b44f76ed 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -21,3 +21,4 @@ selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
 selftest(vma, i915_vma_mock_selftests)
 selftest(evict, i915_gem_evict_mock_selftests)
 selftest(gtt, i915_gem_gtt_mock_selftests)
+selftest(hugepages, i915_gem_huge_page_mock_selftests)
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 18/21] drm/i915/selftests: mix huge pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (16 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 17/21] drm/i915/selftests: huge page tests Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 19/21] drm/i915: disable platform support for vGPU huge gtt pages Matthew Auld
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Try to mix sg page sizes for 4K, 64K and 2M pages.

v2: s/BIT(x) >> 12/BIT(x) >> PAGE_SHIFT/

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/selftests/scatterlist.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
index 1cc5d2931753..cd6d2a16071f 100644
--- a/drivers/gpu/drm/i915/selftests/scatterlist.c
+++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
@@ -189,6 +189,20 @@ static unsigned int random(unsigned long n,
 	return 1 + (prandom_u32_state(rnd) % 1024);
 }
 
+static unsigned int random_page_size_pages(unsigned long n,
+					   unsigned long count,
+					   struct rnd_state *rnd)
+{
+	/* 4K, 64K, 2M */
+	static unsigned int page_count[] = {
+		BIT(12) >> PAGE_SHIFT,
+		BIT(16) >> PAGE_SHIFT,
+		BIT(21) >> PAGE_SHIFT,
+	};
+
+	return page_count[(prandom_u32_state(rnd) % 3)];
+}
+
 static inline bool page_contiguous(struct page *first,
 				   struct page *last,
 				   unsigned long npages)
@@ -252,6 +266,7 @@ static const npages_fn_t npages_funcs[] = {
 	grow,
 	shrink,
 	random,
+	random_page_size_pages,
 	NULL,
 };
 
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 19/21] drm/i915: disable platform support for vGPU huge gtt pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (17 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 18/21] drm/i915/selftests: mix huge pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 20/21] drm/i915: enable platform support for 64K pages Matthew Auld
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

Currently gvt gtt handling doesn't support huge page entries, so disable
them for now.

v2: remove useless 48b PPGTT check

Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 769a94e666e6..a48f6f1c150b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4817,6 +4817,15 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 
 	mutex_lock(&dev_priv->drm.struct_mutex);
 
+	/*
+	 * We need to fall back to 4K pages since gvt gtt handling doesn't
+	 * support huge page entries - we will need to check whether the
+	 * hypervisor mm can support huge guest pages, or just do emulation in gvt.
+	 */
+	if (intel_vgpu_active(dev_priv))
+		mkwrite_device_info(dev_priv)->page_sizes =
+			I915_GTT_PAGE_SIZE_4K;
+
 	dev_priv->mm.unordered_timeline = dma_fence_context_alloc(1);
 
 	if (!i915_modparams.enable_execlists) {
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 20/21] drm/i915: enable platform support for 64K pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (18 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 19/21] drm/i915: disable platform support for vGPU huge gtt pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:10 ` [PATCH 21/21] drm/i915: enable platform support for 2M pages Matthew Auld
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

For gen9+, enable platform-level support for 64K pages. Also enable for
mock testing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_pci.c                  | 3 ++-
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index 84cdab3c5f09..9da69cb55302 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -434,7 +434,8 @@ static const struct intel_device_info intel_cherryview_info __initconst = {
 };
 
 #define GEN9_DEFAULT_PAGE_SIZES \
-	.page_sizes = I915_GTT_PAGE_SIZE_4K
+	.page_sizes = I915_GTT_PAGE_SIZE_4K | \
+		      I915_GTT_PAGE_SIZE_64K
 
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 1785c63ad797..8a5a42ea1c98 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -175,7 +175,8 @@ struct drm_i915_private *mock_gem_device(void)
 	mkwrite_device_info(i915)->gen = -1;
 
 	mkwrite_device_info(i915)->page_sizes =
-		I915_GTT_PAGE_SIZE_4K;
+		I915_GTT_PAGE_SIZE_4K |
+		I915_GTT_PAGE_SIZE_64K;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 21/21] drm/i915: enable platform support for 2M pages
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (19 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 20/21] drm/i915: enable platform support for 64K pages Matthew Auld
@ 2017-09-29 16:10 ` Matthew Auld
  2017-09-29 16:59 ` ✓ Fi.CI.BAT: success for huge gtt pages (rev10) Patchwork
  2017-09-29 18:32 ` ✓ Fi.CI.IGT: " Patchwork
  22 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-09-29 16:10 UTC (permalink / raw)
  To: intel-gfx

For gen8+ platforms which support the 48b PPGTT, enable platform-level
support for 2M pages. Also enable for mock testing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_pci.c                  | 6 ++++--
 drivers/gpu/drm/i915/selftests/mock_gem_device.c | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index 9da69cb55302..ad07f7075f29 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -374,7 +374,8 @@ static const struct intel_device_info intel_haswell_gt3_info __initconst = {
 #define BDW_FEATURES \
 	HSW_FEATURES, \
 	BDW_COLORS, \
-	GEN_DEFAULT_PAGE_SIZES, \
+	.page_sizes = I915_GTT_PAGE_SIZE_4K | \
+		      I915_GTT_PAGE_SIZE_2M, \
 	.has_logical_ring_contexts = 1, \
 	.has_full_48bit_ppgtt = 1, \
 	.has_64bit_reloc = 1, \
@@ -435,7 +436,8 @@ static const struct intel_device_info intel_cherryview_info __initconst = {
 
 #define GEN9_DEFAULT_PAGE_SIZES \
 	.page_sizes = I915_GTT_PAGE_SIZE_4K | \
-		      I915_GTT_PAGE_SIZE_64K
+		      I915_GTT_PAGE_SIZE_64K | \
+		      I915_GTT_PAGE_SIZE_2M
 
 #define SKL_PLATFORM \
 	BDW_FEATURES, \
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 8a5a42ea1c98..39556a5979d4 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -176,7 +176,8 @@ struct drm_i915_private *mock_gem_device(void)
 
 	mkwrite_device_info(i915)->page_sizes =
 		I915_GTT_PAGE_SIZE_4K |
-		I915_GTT_PAGE_SIZE_64K;
+		I915_GTT_PAGE_SIZE_64K |
+		I915_GTT_PAGE_SIZE_2M;
 
 	spin_lock_init(&i915->mm.object_stat_lock);
 	mock_uncore_init(i915);
-- 
2.13.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH 03/21] drm/i915/gemfs: enable THP
  2017-09-29 16:10 ` [PATCH 03/21] drm/i915/gemfs: enable THP Matthew Auld
@ 2017-09-29 16:18   ` Chris Wilson
  2017-10-02 12:05   ` Joonas Lahtinen
  1 sibling, 0 replies; 47+ messages in thread
From: Chris Wilson @ 2017-09-29 16:18 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-09-29 17:10:14)
> Enable transparent-huge-pages through gemfs by mounting with
> huge=within_size.
> 
> v2: sprinkle within_size comment
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>

It's the interface that exists, not necessarily the one that we want!

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 17/21] drm/i915/selftests: huge page tests
  2017-09-29 16:10 ` [PATCH 17/21] drm/i915/selftests: huge page tests Matthew Auld
@ 2017-09-29 16:22   ` Chris Wilson
  2017-10-02 22:11   ` kbuild test robot
  2017-10-03  9:49   ` Chris Wilson
  2 siblings, 0 replies; 47+ messages in thread
From: Chris Wilson @ 2017-09-29 16:22 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-09-29 17:10:28)
> v2: mock test page support configurations and add MI_STORE_DWORD test
> 
> v3: run all mockable huge page tests on all platforms via the mock_device
> 
> v4: add pin_update regression test
>     various improvements suggested by Chris
> 
> v5: fix issues reported by kbuild
>     test single sg spanning multiple page sizes
>     don't explode when running the live-tests through the appgtt
> 
> v6: lots of improvements from Chris
> 
> v7: run on each engine for igt_write_huge
>     add simple tmpfs fallback test
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>

Looks like it should exercise huge pages on the HW and catch the bugs
hit during development and the most likely bugs in code to come. A good
foundation for future testing.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* ✓ Fi.CI.BAT: success for huge gtt pages (rev10)
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (20 preceding siblings ...)
  2017-09-29 16:10 ` [PATCH 21/21] drm/i915: enable platform support for 2M pages Matthew Auld
@ 2017-09-29 16:59 ` Patchwork
  2017-09-29 18:32 ` ✓ Fi.CI.IGT: " Patchwork
  22 siblings, 0 replies; 47+ messages in thread
From: Patchwork @ 2017-09-29 16:59 UTC (permalink / raw)
  To: Matthew Auld; +Cc: intel-gfx

== Series Details ==

Series: huge gtt pages (rev10)
URL   : https://patchwork.freedesktop.org/series/25118/
State : success

== Summary ==

Series 25118v10 huge gtt pages
https://patchwork.freedesktop.org/api/1.0/series/25118/revisions/10/mbox/

Test chamelium:
        Subgroup hdmi-crc-fast:
                dmesg-warn -> PASS       (fi-skl-6700k) fdo#103019
Test kms_busy:
        Subgroup basic-flip-c:
                incomplete -> PASS       (fi-bxt-j4205) fdo#102035
Test drv_module_reload:
        Subgroup basic-reload-inject:
                dmesg-warn -> PASS       (fi-glk-1) fdo#102777

fdo#103019 https://bugs.freedesktop.org/show_bug.cgi?id=103019
fdo#102035 https://bugs.freedesktop.org/show_bug.cgi?id=102035
fdo#102777 https://bugs.freedesktop.org/show_bug.cgi?id=102777

fi-bdw-5557u     total:289  pass:268  dwarn:0   dfail:0   fail:0   skip:21  time:440s
fi-bdw-gvtdvm    total:289  pass:265  dwarn:0   dfail:0   fail:0   skip:24  time:478s
fi-blb-e6850     total:289  pass:224  dwarn:1   dfail:0   fail:0   skip:64  time:422s
fi-bsw-n3050     total:289  pass:243  dwarn:0   dfail:0   fail:0   skip:46  time:518s
fi-bwr-2160      total:289  pass:184  dwarn:0   dfail:0   fail:0   skip:105 time:273s
fi-bxt-dsi       total:289  pass:259  dwarn:0   dfail:0   fail:0   skip:30  time:494s
fi-bxt-j4205     total:289  pass:260  dwarn:0   dfail:0   fail:0   skip:29  time:491s
fi-byt-j1900     total:289  pass:254  dwarn:1   dfail:0   fail:0   skip:34  time:493s
fi-byt-n2820     total:289  pass:250  dwarn:1   dfail:0   fail:0   skip:38  time:484s
fi-cfl-s         total:289  pass:256  dwarn:1   dfail:0   fail:0   skip:32  time:562s
fi-cnl-y         total:289  pass:261  dwarn:1   dfail:0   fail:0   skip:27  time:620s
fi-elk-e7500     total:289  pass:230  dwarn:0   dfail:0   fail:0   skip:59  time:415s
fi-glk-1         total:289  pass:260  dwarn:0   dfail:0   fail:0   skip:29  time:563s
fi-hsw-4770      total:289  pass:263  dwarn:0   dfail:0   fail:0   skip:26  time:427s
fi-hsw-4770r     total:289  pass:263  dwarn:0   dfail:0   fail:0   skip:26  time:405s
fi-ilk-650       total:289  pass:229  dwarn:0   dfail:0   fail:0   skip:60  time:422s
fi-ivb-3520m     total:289  pass:261  dwarn:0   dfail:0   fail:0   skip:28  time:479s
fi-ivb-3770      total:289  pass:261  dwarn:0   dfail:0   fail:0   skip:28  time:464s
fi-kbl-7500u     total:289  pass:264  dwarn:1   dfail:0   fail:0   skip:24  time:472s
fi-kbl-7560u     total:289  pass:270  dwarn:0   dfail:0   fail:0   skip:19  time:578s
fi-kbl-r         total:289  pass:262  dwarn:0   dfail:0   fail:0   skip:27  time:584s
fi-pnv-d510      total:289  pass:223  dwarn:1   dfail:0   fail:0   skip:65  time:545s
fi-skl-6260u     total:289  pass:269  dwarn:0   dfail:0   fail:0   skip:20  time:448s
fi-skl-6700k     total:289  pass:265  dwarn:0   dfail:0   fail:0   skip:24  time:746s
fi-skl-6770hq    total:289  pass:269  dwarn:0   dfail:0   fail:0   skip:20  time:486s
fi-skl-gvtdvm    total:289  pass:266  dwarn:0   dfail:0   fail:0   skip:23  time:472s
fi-snb-2520m     total:289  pass:251  dwarn:0   dfail:0   fail:0   skip:38  time:567s
fi-snb-2600      total:289  pass:250  dwarn:0   dfail:0   fail:0   skip:39  time:414s

e650b9cdaff616445f38986379b1c0c4b03a4282 drm-tip: 2017y-09m-29d-11h-52m-41s UTC integration manifest
a77d94a46de1 drm/i915: enable platform support for 2M pages
68643d0491f6 drm/i915: enable platform support for 64K pages
a00440b4fdae drm/i915: disable platform support for vGPU huge gtt pages
3d9378a2a2d8 drm/i915/selftests: mix huge pages
3b6099261b40 drm/i915/selftests: huge page tests
c6421ef4a78d drm/i915/debugfs: include some gtt page size metrics
27ded7fd5d58 drm/i915: accurate page size tracking for the ppgtt
7ee07e9107e9 drm/i915: support 64K pages for the 48b PPGTT
dcc09bd932c4 drm/i915: add support for 64K scratch page
ec05bb497244 drm/i915: support 2M pages for the 48b PPGTT
f9166244d1d0 drm/i915: disable GTT cache for 2M pages
f016928feecb drm/i915: enable IPS bit for 64K pages
e22e5d73c75b drm/i915: align 64K objects to 2M
2546667e0698 drm/i915: align the vma start to the largest gtt page size
b0c1afbe2322 drm/i915: introduce vm set_pages/clear_pages
772e4e6d28e9 drm/i915: introduce page_size members
9ce0c18adb5e drm/i915: push set_pages down to the callers
d2de8abcd38f drm/i915: introduce page_sizes field to dev_info
075a6b9c83a6 drm/i915/gemfs: enable THP
b1499a7d8c54 drm/i915: introduce simple gemfs
8fa2dc2d076f mm/shmem: introduce shmem_file_setup_with_mnt

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_5862/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* ✓ Fi.CI.IGT: success for huge gtt pages (rev10)
  2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
                   ` (21 preceding siblings ...)
  2017-09-29 16:59 ` ✓ Fi.CI.BAT: success for huge gtt pages (rev10) Patchwork
@ 2017-09-29 18:32 ` Patchwork
  22 siblings, 0 replies; 47+ messages in thread
From: Patchwork @ 2017-09-29 18:32 UTC (permalink / raw)
  To: Matthew Auld; +Cc: intel-gfx

== Series Details ==

Series: huge gtt pages (rev10)
URL   : https://patchwork.freedesktop.org/series/25118/
State : success

== Summary ==

Test kms_setmode:
        Subgroup basic:
                fail       -> PASS       (shard-hsw) fdo#99912
Test perf:
        Subgroup polling:
                fail       -> PASS       (shard-hsw) fdo#102252

fdo#99912 https://bugs.freedesktop.org/show_bug.cgi?id=99912
fdo#102252 https://bugs.freedesktop.org/show_bug.cgi?id=102252

shard-hsw        total:2381 pass:1318 dwarn:3   dfail:0   fail:8   skip:1052 time:9926s

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_5862/shards.html

* Re: [PATCH 06/21] drm/i915: introduce page_size members
  2017-09-29 16:10 ` [PATCH 06/21] drm/i915: introduce page_size members Matthew Auld
@ 2017-09-29 21:31   ` Chris Wilson
  2017-10-03 16:32     ` Chris Wilson
  0 siblings, 1 reply; 47+ messages in thread
From: Chris Wilson @ 2017-09-29 21:31 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-09-29 17:10:17)
> diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> index 70ad7489827d..ad5abca1f794 100644
> --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> @@ -405,6 +405,9 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>  {
>         unsigned int max_segment = i915_sg_segment_size();
>         struct sg_table *st;
> +       struct scatterlist *sg;
> +       unsigned int sg_mask;
> +       int n;
>         int ret;
>  
>         st = kmalloc(sizeof(*st), GFP_KERNEL);
> @@ -434,7 +437,11 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>                 return ERR_PTR(ret);
>         }
>  
> -       __i915_gem_object_set_pages(obj, st);
> +       sg_mask = 0;
> +       for_each_sg(st->sgl, sg, num_pages, n)
> +               sg_mask |= sg->length;

No workie as num_pages != nents.
-Chris
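
One way to address this, sketched under the assumption that st->nents
holds the populated entry count after sg_alloc_table_from_pages() has
coalesced contiguous pages, is to bound the walk by entries rather than
pages:

	sg_mask = 0;
	for_each_sg(st->sgl, sg, st->nents, n)
		sg_mask |= sg->length;

for_each_sg() then visits exactly the populated scatterlist entries, so
a coalesced multi-page segment is counted once.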

* Re: [PATCH 02/21] drm/i915: introduce simple gemfs
  2017-09-29 16:10   ` Matthew Auld
  (?)
@ 2017-10-02 11:55   ` Joonas Lahtinen
  -1 siblings, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 11:55 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx
  Cc: Chris Wilson, Dave Hansen, Kirill A . Shutemov, Hugh Dickins, linux-mm

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> Not a full-blown gemfs, just our very own tmpfs kernel mount. Doing so
> moves us away from the shmemfs shm_mnt, and gives us the much-needed
> flexibility to do things like set our own mount options, namely huge=,
> which should allow us to enable the use of transparent-huge-pages for
> our shmem-backed objects.
> 
> v2: various improvements suggested by Joonas
> 
> v3: move gemfs instance to i915.mm and simplify now that we have
> file_setup_with_mnt
> 
> v4: fallback to tmpfs shm_mnt upon failure to setup gemfs
> 
> v5: make tmpfs fallback kinder
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: linux-mm@kvack.org

<SNIP>

> @@ -4251,6 +4252,29 @@ static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
>  	.pwrite = i915_gem_object_pwrite_gtt,
>  };
>  
> +static int i915_gem_object_create_shmem(struct drm_device *dev,
> +					struct drm_gem_object *obj,
> +					size_t size)
> +{
> +	struct drm_i915_private *i915 = to_i915(dev);
> +	struct file *filp;
> +
> +	drm_gem_private_object_init(dev, obj, size);
> +
> +	if (i915->mm.gemfs)
> +		filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size,
> +						 VM_NORESERVE);
> +	else
> +		filp = shmem_file_setup("i915", size, VM_NORESERVE);

Put that VM_NORESERVE into a 'flags' variable.
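
A minimal sketch of the suggested shape (not the final patch):

	unsigned long flags = VM_NORESERVE;
	struct file *filp;

	drm_gem_private_object_init(dev, obj, size);

	if (i915->mm.gemfs)
		filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size,
						 flags);
	else
		filp = shmem_file_setup("i915", size, flags);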

> @@ -4915,6 +4939,9 @@ i915_gem_load_init(struct drm_i915_private *dev_priv)
>  
>  	spin_lock_init(&dev_priv->fb_tracking.lock);
>  
> +	if (i915_gemfs_init(dev_priv))
> +		DRM_NOTE("Unable to create a private tmpfs mountpoint, hugepage support will be disabled.\n");

s/mountpoint/mount/, as we're not creating a directory. Maybe also
include the returned error for easier post-mortem?
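
A sketch of that suggestion, assuming i915_gemfs_init() returns a
negative errno on failure:

	err = i915_gemfs_init(dev_priv);
	if (err)
		DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled (%d).\n", err);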

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation


* Re: [PATCH 03/21] drm/i915/gemfs: enable THP
  2017-09-29 16:10 ` [PATCH 03/21] drm/i915/gemfs: enable THP Matthew Auld
  2017-09-29 16:18   ` Chris Wilson
@ 2017-10-02 12:05   ` Joonas Lahtinen
  1 sibling, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 12:05 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> Enable transparent-huge-pages through gemfs by mounting with
> huge=within_size.
> 
> v2: sprinkle within_size comment
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_gemfs.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gemfs.c b/drivers/gpu/drm/i915/i915_gemfs.c
> index 168d0bd98f60..b1b4d13d8f97 100644
> --- a/drivers/gpu/drm/i915/i915_gemfs.c
> +++ b/drivers/gpu/drm/i915/i915_gemfs.c
> @@ -24,6 +24,7 @@
>  
>  #include <linux/fs.h>
>  #include <linux/mount.h>
> +#include <linux/pagemap.h>
>  
>  #include "i915_drv.h"
>  #include "i915_gemfs.h"
> @@ -41,6 +42,26 @@ int i915_gemfs_init(struct drm_i915_private *i915)
>  	if (IS_ERR(gemfs))
>  		return PTR_ERR(gemfs);
>  
> +	/*
> +	 * Enable huge-pages for objects that are at least HPAGE_PMD_SIZE, most
> +	 * likely 2M. Note that within_size may overallocate huge-pages, if say
> +	 * we allocate an object of size 2M + 4K, but under memory pressure

Maybe append "we may get 2M + 2M" after "2M + 4K", to complete the
sentence.
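
With that appended, the comment might read (the quoted hunk is
truncated, so the tail is left elided):

	/*
	 * Enable huge-pages for objects that are at least HPAGE_PMD_SIZE, most
	 * likely 2M. Note that within_size may overallocate huge-pages, if say
	 * we allocate an object of size 2M + 4K, we may get 2M + 2M, but under
	 * memory pressure ...
	 */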

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 07/21] drm/i915: introduce vm set_pages/clear_pages
  2017-09-29 16:10 ` [PATCH 07/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
@ 2017-10-02 12:10   ` Joonas Lahtinen
  0 siblings, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 12:10 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> Move the setting/clearing of the vma->pages to a vm operation. Doing so
> neatens things up a little, but more importantly gives us a sane place
> to also set/clear the vma->pages_sizes, which we introduce later in
> preparation for supporting huge-pages.
> 
> v2: remove redundant vma->pages check
> 
> v3: GEM_BUG_ON(vma->pages) following i915_vma_remove
> 
> Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
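
For reference, the shape being introduced here, as a sketch (the
callback names match the patch 08 hunks later in this thread; their
exact placement on the address space struct is assumed):

	struct i915_address_space {
		...
		int (*set_pages)(struct i915_vma *vma);
		void (*clear_pages)(struct i915_vma *vma);
	};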

* Re: [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-09-29 16:10 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
@ 2017-10-02 12:16   ` Joonas Lahtinen
  0 siblings, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 12:16 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> For the 48b PPGTT try to align the vma start address to the required
> page size boundary to guarantee we use said page size in the gtt. If we
> are dealing with multiple page sizes, we can't guarantee anything and
> just align to the largest. For soft pinning and objects which need to be
> tightly packed into the lower 32bits we don't force any alignment.
> 
> v2: various improvements suggested by Chris
> 
> v3: use set_pages and better placement of page_sizes
> 
> v4: prefer upper_32_bits()
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> @@ -238,6 +241,8 @@ static void clear_pages(struct i915_vma *vma)
>  		kfree(vma->pages);
>  	}
>  	vma->pages = NULL;
> +
> +	memset(&vma->page_sizes, 0, sizeof(struct i915_page_sizes));

sizeof(vma->page_sizes)

> @@ -2538,6 +2543,9 @@ static int ggtt_set_pages(struct i915_vma *vma)
>  	if (ret)
>  		return ret;
>  
> +	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
> +	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;

Hmm, are we not able to assign vma->page_sizes =
vma->obj->mm.page_sizes?

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 09/21] drm/i915: align 64K objects to 2M
  2017-09-29 16:10 ` [PATCH 09/21] drm/i915: align 64K objects to 2M Matthew Auld
@ 2017-10-02 12:18   ` Joonas Lahtinen
  0 siblings, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 12:18 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> We can't mix 64K and 4K pte's in the same page-table, so for now we
> align 64K objects to 2M to avoid any potential mixing. This is
> potentially wasteful but in reality shouldn't be too bad since this only
> applies to the virtual address space of a 48b PPGTT.
> 
> v2: don't separate logically connected ops
> 
> Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +++ b/drivers/gpu/drm/i915/i915_vma.c
> @@ -501,9 +501,18 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
>  		if (upper_32_bits(end) &&
>  		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {

Here. See below.

>  			u64 page_alignment =
> -				rounddown_pow_of_two(vma->page_sizes.sg);
> +				rounddown_pow_of_two(vma->page_sizes.sg |
> +						     I915_GTT_PAGE_SIZE_2M);
>  
>  			alignment = max(alignment, page_alignment);
> +
> +			/*
> +			 * We can't mix 64K and 4K PTEs in the same page-table (2M
> +			 * block), and so to avoid the ugliness and complexity of
> +			 * coloring we opt for just aligning 64K objects to 2M.
> +			 */

This is about size; the alignment is happening above. I'd lift the
comment there.

> +			if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K)
> +				size = round_up(size, I915_GTT_PAGE_SIZE_2M);
>  		}
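
Lifted as suggested, the hunk might read (a sketch only, reflowing the
quoted lines):

		if (upper_32_bits(end) &&
		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
			/*
			 * We can't mix 64K and 4K PTEs in the same page-table
			 * (2M block), and so to avoid the ugliness and
			 * complexity of coloring we opt for just aligning 64K
			 * objects to 2M.
			 */
			u64 page_alignment =
				rounddown_pow_of_two(vma->page_sizes.sg |
						     I915_GTT_PAGE_SIZE_2M);

			alignment = max(alignment, page_alignment);

			if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K)
				size = round_up(size, I915_GTT_PAGE_SIZE_2M);
		}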

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 10/21] drm/i915: enable IPS bit for 64K pages
  2017-09-29 16:10 ` [PATCH 10/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
@ 2017-10-02 12:21   ` Joonas Lahtinen
  0 siblings, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 12:21 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> Before we can enable 64K pages through the IPS bit, we must first enable
> it through MMIO, otherwise the page-walker will simply ignore it.
> 
> v2: add comment mentioning that 64K is BDW+
> 
> v3: move to more suitable home
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
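
For context, the kind of MMIO enable being reviewed, as a sketch (the
register and bit names are assumed from the posted series, not quoted
in this thread):

	if (HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_64K))
		I915_WRITE(GEN8_GAMW_ECO_DEV_RW_IA,
			   I915_READ(GEN8_GAMW_ECO_DEV_RW_IA) |
			   GAMW_ECO_ENABLE_64K_IPS_FIELD);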

* Re: [PATCH 11/21] drm/i915: disable GTT cache for 2M pages
  2017-09-29 16:10 ` [PATCH 11/21] drm/i915: disable GTT cache for 2M pages Matthew Auld
@ 2017-10-02 12:31   ` Joonas Lahtinen
  2017-10-02 13:52     ` Matthew Auld
  2017-10-09 12:07     ` Ville Syrjälä
  0 siblings, 2 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-02 12:31 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> When SW enables the use of 2M/1G pages, it must disable the GTT cache.
> 
> v2: don't disable for Cherryview which doesn't even support 48b PPGTT!
> 
> v3: explicitly check that the system does support 2M/1G pages
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

<SNIP>

> @@ -8483,10 +8483,11 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
>  
>  	/*
>  	 * WaGttCachingOffByDefault:bdw
> -	 * GTT cache may not work with big pages, so if those
> -	 * are ever enabled GTT cache may need to be disabled.
> +	 * The GTT cache must be disabled if the system is using 2M/1G pages.
>  	 */
> -	I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
> +	I915_WRITE(HSW_GTT_CACHE_EN,
> +		   HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M) ? 0 :
> +		   GTT_CACHE_EN_ALL);

Umm, this is mixing a known W/A with decision logic.

	bool can_use_gtt_cache = !HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M);

	/* WaGttCachingOffByDefault:bdw */
	I915_WRITE(HSW_GTT_CACHE_EN, can_use_gtt_cache ? GTT_CACHE_EN_ALL : 0);

The big question is: if everyone else has GTT caching enabled by
default, should we not actively be disabling it on other code paths?
I'm also feeling 'init_clock_gating' is not the best home for the code.

We can find a new home later, but do separate the decision logic from
the W/A.

Also, do 1G pages imply 2M page support? It's a bit on the theoretical
side of things, of course.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 11/21] drm/i915: disable GTT cache for 2M pages
  2017-10-02 12:31   ` Joonas Lahtinen
@ 2017-10-02 13:52     ` Matthew Auld
  2017-10-03 12:32       ` Joonas Lahtinen
  2017-10-09 12:07     ` Ville Syrjälä
  1 sibling, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-10-02 13:52 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: Intel Graphics Development, Matthew Auld

On 2 October 2017 at 13:31, Joonas Lahtinen
<joonas.lahtinen@linux.intel.com> wrote:
> On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
>> When SW enables the use of 2M/1G pages, it must disable the GTT cache.
>>
>> v2: don't disable for Cherryview which doesn't even support 48b PPGTT!
>>
>> v3: explicitly check that the system does support 2M/1G pages
>>
>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
>> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
>
> <SNIP>
>
>> @@ -8483,10 +8483,11 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
>>
>>       /*
>>        * WaGttCachingOffByDefault:bdw
>> -      * GTT cache may not work with big pages, so if those
>> -      * are ever enabled GTT cache may need to be disabled.
>> +      * The GTT cache must be disabled if the system is using 2M/1G pages.
>>        */
>> -     I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
>> +     I915_WRITE(HSW_GTT_CACHE_EN,
>> +                HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M) ? 0 :
>> +                GTT_CACHE_EN_ALL);
>
> Umm, this is mixing a known W/A with decision logic.
>
>         bool can_use_gtt_cache = !HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M);
>
>         /* WaGttCachingOffByDefault:bdw */
>         I915_WRITE(HSW_GTT_CACHE_EN, can_use_gtt_cache ? GTT_CACHE_EN_ALL : 0);
>
> The big question is: if everyone else has GTT caching enabled by
> default, should we not actively be disabling it on other code paths?

AFAIK it's *disabled* by default; bdw and chv are the only platforms
which seek to enable it. I don't know why other platforms don't also
follow suit...

That WA is literally just: "WA to enable the caching if off by
default both at driver init and Resume".

> I'm also feeling 'init_clock_gating' is not the best home for the code.
>
> We can find a new home later, but do separate the decision logic from
> the W/A.
>
> Also, do 1G pages imply 2M page support? It's a bit on the theoretical
> side of things, of course.

I just assume not, since it doesn't explicitly state that anywhere.

>
> Regards, Joonas
> --
> Joonas Lahtinen
> Open Source Technology Center
> Intel Corporation

* Re: [PATCH 17/21] drm/i915/selftests: huge page tests
  2017-09-29 16:10 ` [PATCH 17/21] drm/i915/selftests: huge page tests Matthew Auld
  2017-09-29 16:22   ` Chris Wilson
@ 2017-10-02 22:11   ` kbuild test robot
  2017-10-03  9:49   ` Chris Wilson
  2 siblings, 0 replies; 47+ messages in thread
From: kbuild test robot @ 2017-10-02 22:11 UTC (permalink / raw)
  To: Matthew Auld; +Cc: intel-gfx, kbuild-all

Hi Matthew,

[auto build test ERROR on drm-intel/for-linux-next]
[cannot apply to v4.14-rc3]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Matthew-Auld/huge-gtt-pages/20171002-212557
base:   git://anongit.freedesktop.org/drm-intel for-linux-next
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

>> ERROR: "__udivdi3" [drivers/gpu/drm/i915/i915.ko] undefined!

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 61110 bytes --]


* Re: [PATCH 17/21] drm/i915/selftests: huge page tests
  2017-09-29 16:10 ` [PATCH 17/21] drm/i915/selftests: huge page tests Matthew Auld
  2017-09-29 16:22   ` Chris Wilson
  2017-10-02 22:11   ` kbuild test robot
@ 2017-10-03  9:49   ` Chris Wilson
  2 siblings, 0 replies; 47+ messages in thread
From: Chris Wilson @ 2017-10-03  9:49 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-09-29 17:10:28)
> +static unsigned int get_largest_page_size(struct drm_i915_private *i915,
> +                                         size_t rem)

Grr, I dislike size_t in the kernel. It's not tied to the physical
address size or the virtual address size (either dma's or our own).
In most cases you really do mean unsigned long, but often there's an
explicit type you could use. Note that they need u64 math protection (or
at least use the macros for handling longs).
-Chris
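
The kbuild __udivdi3 failure above is the 32-bit symptom of the same
issue: an open-coded division on a 64-bit type. A minimal sketch of the
usual remedy, assuming div_u64() from <linux/math64.h> (the helper is
not quoted from the patch itself):

	/* rem / page_size emits a call to __udivdi3 on 32-bit; instead: */
	u64 pages = div_u64(rem, page_size);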

* Re: [PATCH 11/21] drm/i915: disable GTT cache for 2M pages
  2017-10-02 13:52     ` Matthew Auld
@ 2017-10-03 12:32       ` Joonas Lahtinen
  0 siblings, 0 replies; 47+ messages in thread
From: Joonas Lahtinen @ 2017-10-03 12:32 UTC (permalink / raw)
  To: Mika Kuoppala; +Cc: Intel Graphics Development, Matthew Auld

+ Mika for more comments as the original reviewer

On Mon, 2017-10-02 at 14:52 +0100, Matthew Auld wrote:
> On 2 October 2017 at 13:31, Joonas Lahtinen
> <joonas.lahtinen@linux.intel.com> wrote:
> > On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> > > When SW enables the use of 2M/1G pages, it must disable the GTT cache.
> > > 
> > > v2: don't disable for Cherryview which doesn't even support 48b PPGTT!
> > > 
> > > v3: explicitly check that the system does support 2M/1G pages
> > > 
> > > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> > > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> > > Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> > 
> > <SNIP>
> > 
> > > @@ -8483,10 +8483,11 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
> > > 
> > >       /*
> > >        * WaGttCachingOffByDefault:bdw
> > > -      * GTT cache may not work with big pages, so if those
> > > -      * are ever enabled GTT cache may need to be disabled.
> > > +      * The GTT cache must be disabled if the system is using 2M/1G pages.
> > >        */
> > > -     I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
> > > +     I915_WRITE(HSW_GTT_CACHE_EN,
> > > +                HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M) ? 0 :
> > > +                GTT_CACHE_EN_ALL);
> > 
> > Umm, this is mixing a known W/A with decision logic.
> > 
> >         bool can_use_gtt_cache = !HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M);
> > 
> >         /* WaGttCachingOffByDefault:bdw */
> >         I915_WRITE(HSW_GTT_CACHE_EN, can_use_gtt_cache ? GTT_CACHE_EN_ALL : 0);
> > 
> > The big question is: if everyone else has GTT caching enabled by
> > default, should we not actively be disabling it on other code paths?
> 
> AFAIK it's *disabled* by default; bdw and chv are the only platforms
> which seek to enable it. I don't know why other platforms don't also
> follow suit...
> 
> That WA is literally just: "WA to enable the caching if off by
> default both at driver init and Resume".
> 
> > I'm also feeling 'init_clock_gating' is not the best home for the code.
> > 
> > We can find a new home later, but do separate the decision logic from
> > the W/A.
> > 
> > Also, do 1G pages imply 2M page support? It's a bit on the theoretical
> > side of things, of course.
> 
> I just assume not, since it doesn't explicitly state that anywhere.
> 
> > 
> > Regards, Joonas
> > --
> > Joonas Lahtinen
> > Open Source Technology Center
> > Intel Corporation
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 06/21] drm/i915: introduce page_size members
  2017-09-29 21:31   ` Chris Wilson
@ 2017-10-03 16:32     ` Chris Wilson
  0 siblings, 0 replies; 47+ messages in thread
From: Chris Wilson @ 2017-10-03 16:32 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Chris Wilson (2017-09-29 22:31:27)
> Quoting Matthew Auld (2017-09-29 17:10:17)
> > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > index 70ad7489827d..ad5abca1f794 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > @@ -405,6 +405,9 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
> >  {
> >         unsigned int max_segment = i915_sg_segment_size();
> >         struct sg_table *st;
> > +       struct scatterlist *sg;
> > +       unsigned int sg_mask;
> > +       int n;
> >         int ret;
> >  
> >         st = kmalloc(sizeof(*st), GFP_KERNEL);
> > @@ -434,7 +437,11 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
> >                 return ERR_PTR(ret);
> >         }
> >  
> > -       __i915_gem_object_set_pages(obj, st);
> > +       sg_mask = 0;
> > +       for_each_sg(st->sgl, sg, num_pages, n)
> > +               sg_mask |= sg->length;
> 
> No workie as num_pages != nents.

If we do something like
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 5ea7e1fbd0fd..c92d89ec9d5a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2896,6 +2896,21 @@ static inline struct scatterlist *__sg_next(struct scatterlist *sg)
             (((__iter).curr += PAGE_SIZE) >= (__iter).max) ?           \
             (__iter) = __sgt_iter(__sg_next((__iter).sgp), false), 0 : 0)
 
+static inline unsigned int i915_sg_page_sizes(struct scatterlist *sg)
+{
+       unsigned int page_sizes;
+
+       page_sizes = 0;
+       while (sg) {
+               GEM_BUG_ON(sg->offset);
+               GEM_BUG_ON(!IS_ALIGNED(sg->length, PAGE_SIZE));
+               page_sizes |= sg->length;
+               sg = __sg_next(sg);
+       }
+
+       return page_sizes;
+}
+
 static inline unsigned int i915_sg_segment_size(void)
 {
        unsigned int size = swiotlb_max_segment();

Then we can just write sg_mask = i915_sg_page_sizes(sg); for when we
don't compute them inline. For popular interfaces (userptr being one of
them) we should look at computing page_sizes inline.
-Chris
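
At a call site like userptr that would then be (a sketch; this assumes
the three-argument __i915_gem_object_set_pages() that this patch moves
to):

	sg_mask = i915_sg_page_sizes(st->sgl);

	__i915_gem_object_set_pages(obj, st, sg_mask);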

* Re: [PATCH 11/21] drm/i915: disable GTT cache for 2M pages
  2017-10-02 12:31   ` Joonas Lahtinen
  2017-10-02 13:52     ` Matthew Auld
@ 2017-10-09 12:07     ` Ville Syrjälä
  2017-10-09 13:04       ` Matthew Auld
  1 sibling, 1 reply; 47+ messages in thread
From: Ville Syrjälä @ 2017-10-09 12:07 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx, Matthew Auld

On Mon, Oct 02, 2017 at 03:31:44PM +0300, Joonas Lahtinen wrote:
> On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> > When SW enables the use of 2M/1G pages, it must disable the GTT cache.
> > 
> > v2: don't disable for Cherryview which doesn't even support 48b PPGTT!
> > 
> > v3: explicitly check that the system does support 2M/1G pages
> > 
> > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> > Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> 
> <SNIP>
> 
> > @@ -8483,10 +8483,11 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
> >  
> >  	/*
> >  	 * WaGttCachingOffByDefault:bdw
> > -	 * GTT cache may not work with big pages, so if those
> > -	 * are ever enabled GTT cache may need to be disabled.
> > +	 * The GTT cache must be disabled if the system is using 2M/1G pages.
> >  	 */
> > -	I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
> > +	I915_WRITE(HSW_GTT_CACHE_EN,
> > +		   HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M) ? 0 :
> > +		   GTT_CACHE_EN_ALL);
> 
> Umm, this is mixing a known W/A with decision logic.
> 
> 	bool can_use_gtt_cache = !HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M);
> 
> 	/* WaGttCachingOffByDefault:bdw */
> 	I915_WRITE(HSW_GTT_CACHE_EN, can_use_gtt_cache ? GTT_CACHE_EN_ALL : 0);
> 
> The big question is: if everyone else has GTT caching enabled by
> default, should we not actively be disabling it on other code paths?

HSW has it enabled by default. BDW B0 disabled it by default, even
though the spec seems to suggest that the big pages vs. GTT cache issue
may have been fixed properly already on BDW. Did anyone actually test
whether big pages work with the GTT cache enabled?

Bspec is a bit of a mess these days, so it's kinda hard to see what the
situation is with SKL+, but it looks like the default is still 0x0. So
it seems to me that someone should explicitly enable the GTT cache on
SKL+. Enabling it did produce a slight performance increase on BDW/CHV
IIRC.

-- 
Ville Syrjälä
Intel OTC

* Re: [PATCH 11/21] drm/i915: disable GTT cache for 2M pages
  2017-10-09 12:07     ` Ville Syrjälä
@ 2017-10-09 13:04       ` Matthew Auld
  0 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-10-09 13:04 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: Intel Graphics Development, Matthew Auld

On 9 October 2017 at 13:07, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> On Mon, Oct 02, 2017 at 03:31:44PM +0300, Joonas Lahtinen wrote:
>> On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
>> > When SW enables the use of 2M/1G pages, it must disable the GTT cache.
>> >
>> > v2: don't disable for Cherryview which doesn't even support 48b PPGTT!
>> >
>> > v3: explicitly check that the system does support 2M/1G pages
>> >
>> > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> > Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
>> > Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
>>
>> <SNIP>
>>
>> > @@ -8483,10 +8483,11 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
>> >
>> >     /*
>> >      * WaGttCachingOffByDefault:bdw
>> > -    * GTT cache may not work with big pages, so if those
>> > -    * are ever enabled GTT cache may need to be disabled.
>> > +    * The GTT cache must be disabled if the system is using 2M/1G pages.
>> >      */
>> > -   I915_WRITE(HSW_GTT_CACHE_EN, GTT_CACHE_EN_ALL);
>> > +   I915_WRITE(HSW_GTT_CACHE_EN,
>> > +              HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M) ? 0 :
>> > +              GTT_CACHE_EN_ALL);
>>
>> Umm, this is mixing a known W/A with decision logic.
>>
>>       bool can_use_gtt_cache = !HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_2M);
>>
>>       /* WaGttCachingOffByDefault:bdw */
>>       I915_WRITE(HSW_GTT_CACHE_EN, can_use_gtt_cache ? GTT_CACHE_EN_ALL : 0);
>>
>> The big question is: if everyone else has GTT caching enabled by
>> default, should we not actively be disabling it on other code paths?
>
> HSW has it enabled by default. BDW B0 disabled it by default, even
> though the spec seems to suggest that the big pages vs. GTT cache issue
> may have been fixed properly already on BDW. Did anyone actually test
> whether big pages work with the GTT cache enabled?

Back when I was originally testing 2M pages on my BDW machine, I was
seeing lots of nasty visual corruption, and so only stumbled on
disabling the GTT cache afterwards. The bit in the spec which caught my
eye at the time is [1].

It might be worth making sure the same holds true for SKL+...

[1] https://gfxspecs.intel.com/Predator/Home/Index/423

* Re: [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-10-06 14:50 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
@ 2017-10-06 22:10   ` Chris Wilson
  0 siblings, 0 replies; 47+ messages in thread
From: Chris Wilson @ 2017-10-06 22:10 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-10-06 15:50:28)
> For the 48b PPGTT try to align the vma start address to the required
> page size boundary to guarantee we use said page size in the gtt. If we
> are dealing with multiple page sizes, we can't guarantee anything and
> just align to the largest. For soft pinning and objects which need to be
> tightly packed into the lower 32bits we don't force any alignment.
> 
> v2: various improvements suggested by Chris
> 
> v3: use set_pages and better placement of page_sizes
> 
> v4: prefer upper_32_bits()
> 
> v5: assign vma->page_sizes = vma->obj->page_sizes directly
>     prefer sizeof(vma->page_sizes)
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> ---
> diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
> index 49bf49571e47..5067eab27829 100644
> --- a/drivers/gpu/drm/i915/i915_vma.c
> +++ b/drivers/gpu/drm/i915/i915_vma.c
> @@ -493,6 +493,19 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
>                 if (ret)
>                         goto err_clear;
>         } else {
> +               /*
> +                * We only support huge gtt pages through the 48b PPGTT,
> +                * however we also don't want to force any alignment for
> +                * objects which need to be tightly packed into the low 32bits.
> +                */
> +               if (upper_32_bits(end) &&

Bah, this assumed PIN_ZONE_4G behaviour and forgot about 4G GGTT. :|

Insert a sly
		      !i915_vma_is_ggtt(vma) &&
here. Or use upper_32_bits(end-1). Hmm. Atm we have the pervasive
assumption that GGTT is capped at 4G, so we could use end-1 with a
comment.

The theory about not wanting to waste space in the low 4G is theory no
more!
-Chris
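
Either variant of the suggested guard, as a sketch:

		if (upper_32_bits(end) && !i915_vma_is_ggtt(vma) &&
		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {

or, leaning on the GGTT-capped-at-4G assumption:

		if (upper_32_bits(end - 1) &&
		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {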

* [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-10-06 14:50 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-10-06 14:50 ` Matthew Auld
  2017-10-06 22:10   ` Chris Wilson
  0 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-10-06 14:50 UTC (permalink / raw)
  To: intel-gfx

For the 48b PPGTT try to align the vma start address to the required
page size boundary to guarantee we use said page size in the gtt. If we
are dealing with multiple page sizes, we can't guarantee anything and
just align to the largest. For soft pinning and objects which need to be
tightly packed into the lower 32bits we don't force any alignment.

v2: various improvements suggested by Chris

v3: use set_pages and better placement of page_sizes

v4: prefer upper_32_bits()

v5: assign vma->page_sizes = vma->obj->page_sizes directly
    prefer sizeof(vma->page_sizes)

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c |  6 ++++++
 drivers/gpu/drm/i915/i915_vma.c     | 13 +++++++++++++
 drivers/gpu/drm/i915/i915_vma.h     |  1 +
 3 files changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c534b74eee32..fb7ac66814ab 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -226,6 +226,8 @@ static int ppgtt_set_pages(struct i915_vma *vma)
 
 	vma->pages = vma->obj->mm.pages;
 
+	vma->page_sizes = vma->obj->mm.page_sizes;
+
 	return 0;
 }
 
@@ -238,6 +240,8 @@ static void clear_pages(struct i915_vma *vma)
 		kfree(vma->pages);
 	}
 	vma->pages = NULL;
+
+	memset(&vma->page_sizes, 0, sizeof(vma->page_sizes));
 }
 
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
@@ -2538,6 +2542,8 @@ static int ggtt_set_pages(struct i915_vma *vma)
 	if (ret)
 		return ret;
 
+	vma->page_sizes = vma->obj->mm.page_sizes;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 49bf49571e47..5067eab27829 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -493,6 +493,19 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (ret)
 			goto err_clear;
 	} else {
+		/*
+		 * We only support huge gtt pages through the 48b PPGTT,
+		 * however we also don't want to force any alignment for
+		 * objects which need to be tightly packed into the low 32bits.
+		 */
+		if (upper_32_bits(end) &&
+		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			u64 page_alignment =
+				rounddown_pow_of_two(vma->page_sizes.sg);
+
+			alignment = max(alignment, page_alignment);
+		}
+
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e811067c7724..c59ba76613a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -55,6 +55,7 @@ struct i915_vma {
 	void __iomem *iomap;
 	u64 size;
 	u64 display_alignment;
+	struct i915_page_sizes page_sizes;
 
 	u32 fence_size;
 	u32 fence_alignment;
-- 
2.13.5


* [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-10-05 15:18 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-10-05 15:19 ` Matthew Auld
  0 siblings, 0 replies; 47+ messages in thread
From: Matthew Auld @ 2017-10-05 15:19 UTC (permalink / raw)
  To: intel-gfx

For the 48b PPGTT try to align the vma start address to the required
page size boundary to guarantee we use said page size in the gtt. If we
are dealing with multiple page sizes, we can't guarantee anything and
just align to the largest. For soft pinning and objects which need to be
tightly packed into the lower 32bits we don't force any alignment.

v2: various improvements suggested by Chris

v3: use set_pages and better placement of page_sizes

v4: prefer upper_32_bits()

v5: assign vma->page_sizes = vma->obj->page_sizes directly
    prefer sizeof(vma->page_sizes)

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c |  6 ++++++
 drivers/gpu/drm/i915/i915_vma.c     | 13 +++++++++++++
 drivers/gpu/drm/i915/i915_vma.h     |  1 +
 3 files changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c534b74eee32..fb7ac66814ab 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -226,6 +226,8 @@ static int ppgtt_set_pages(struct i915_vma *vma)
 
 	vma->pages = vma->obj->mm.pages;
 
+	vma->page_sizes = vma->obj->mm.page_sizes;
+
 	return 0;
 }
 
@@ -238,6 +240,8 @@ static void clear_pages(struct i915_vma *vma)
 		kfree(vma->pages);
 	}
 	vma->pages = NULL;
+
+	memset(&vma->page_sizes, 0, sizeof(vma->page_sizes));
 }
 
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
@@ -2538,6 +2542,8 @@ static int ggtt_set_pages(struct i915_vma *vma)
 	if (ret)
 		return ret;
 
+	vma->page_sizes = vma->obj->mm.page_sizes;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 49bf49571e47..5067eab27829 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -493,6 +493,19 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (ret)
 			goto err_clear;
 	} else {
+		/*
+		 * We only support huge gtt pages through the 48b PPGTT,
+		 * however we also don't want to force any alignment for
+		 * objects which need to be tightly packed into the low 32bits.
+		 */
+		if (upper_32_bits(end) &&
+		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			u64 page_alignment =
+				rounddown_pow_of_two(vma->page_sizes.sg);
+
+			alignment = max(alignment, page_alignment);
+		}
+
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e811067c7724..c59ba76613a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -55,6 +55,7 @@ struct i915_vma {
 	void __iomem *iomap;
 	u64 size;
 	u64 display_alignment;
+	struct i915_page_sizes page_sizes;
 
 	u32 fence_size;
 	u32 fence_alignment;
-- 
2.13.5


* Re: [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-09-22 17:32 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
@ 2017-09-23  8:46   ` Chris Wilson
  0 siblings, 0 replies; 47+ messages in thread
From: Chris Wilson @ 2017-09-23  8:46 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2017-09-22 18:32:39)
> For the 48b PPGTT try to align the vma start address to the required
> page size boundary to guarantee we use said page size in the gtt. If we
> are dealing with multiple page sizes, we can't guarantee anything and
> just align to the largest. For soft pinning and objects which need to be
> tightly packed into the lower 32bits we don't force any alignment.
> 
> v2: various improvements suggested by Chris
> 
> v3: use set_pages and better placement of page_sizes
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>

>         } else {
> +               /* We only support huge gtt pages through the 48b PPGTT,
> +                * however we also don't want to force any alignment for
> +                * objects which need to be tightly packed into the low 32bits.
> +                */
> +               if (end > (1ULL << 32) &&

gcc prefers
if (upper_32_bits(end) && vma->page_sizes.sg)

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris

* [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size
  2017-09-22 17:32 [PATCH 00/21] huge gtt pages Matthew Auld
@ 2017-09-22 17:32 ` Matthew Auld
  2017-09-23  8:46   ` Chris Wilson
  0 siblings, 1 reply; 47+ messages in thread
From: Matthew Auld @ 2017-09-22 17:32 UTC (permalink / raw)
  To: intel-gfx

For the 48b PPGTT try to align the vma start address to the required
page size boundary to guarantee we use said page size in the gtt. If we
are dealing with multiple page sizes, we can't guarantee anything and
just align to the largest. For soft pinning and objects which need to be
tightly packed into the lower 32bits we don't force any alignment.

v2: various improvements suggested by Chris

v3: use set_pages and better placement of page_sizes

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c |  8 ++++++++
 drivers/gpu/drm/i915/i915_vma.c     | 12 ++++++++++++
 drivers/gpu/drm/i915/i915_vma.h     |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 5e0ac4c5a81c..a3897676e0bc 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -226,6 +226,9 @@ static int ppgtt_set_pages(struct i915_vma *vma)
 
 	vma->pages = vma->obj->mm.pages;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
@@ -238,6 +241,8 @@ static void clear_pages(struct i915_vma *vma)
 		kfree(vma->pages);
 	}
 	vma->pages = NULL;
+
+	memset(&vma->page_sizes, 0, sizeof(struct i915_page_sizes));
 }
 
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
@@ -2540,6 +2545,9 @@ static int ggtt_set_pages(struct i915_vma *vma)
 	if (ret)
 		return ret;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 49bf49571e47..102c2f184486 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -493,6 +493,18 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (ret)
 			goto err_clear;
 	} else {
+		/* We only support huge gtt pages through the 48b PPGTT,
+		 * however we also don't want to force any alignment for
+		 * objects which need to be tightly packed into the low 32bits.
+		 */
+		if (end > (1ULL << 32) &&
+		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			u64 page_alignment =
+				rounddown_pow_of_two(vma->page_sizes.sg);
+
+			alignment = max(alignment, page_alignment);
+		}
+
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e811067c7724..c59ba76613a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -55,6 +55,7 @@ struct i915_vma {
 	void __iomem *iomap;
 	u64 size;
 	u64 display_alignment;
+	struct i915_page_sizes page_sizes;
 
 	u32 fence_size;
 	u32 fence_alignment;
-- 
2.13.5


end of thread, other threads:[~2017-10-09 13:05 UTC | newest]

Thread overview: 47+ messages
2017-09-29 16:10 [PATCH 00/21] huge gtt pages Matthew Auld
2017-09-29 16:10 ` [PATCH 01/21] mm/shmem: introduce shmem_file_setup_with_mnt Matthew Auld
2017-09-29 16:10 ` [PATCH 02/21] drm/i915: introduce simple gemfs Matthew Auld
2017-09-29 16:10   ` Matthew Auld
2017-10-02 11:55   ` Joonas Lahtinen
2017-09-29 16:10 ` [PATCH 03/21] drm/i915/gemfs: enable THP Matthew Auld
2017-09-29 16:18   ` Chris Wilson
2017-10-02 12:05   ` Joonas Lahtinen
2017-09-29 16:10 ` [PATCH 04/21] drm/i915: introduce page_sizes field to dev_info Matthew Auld
2017-09-29 16:10 ` [PATCH 05/21] drm/i915: push set_pages down to the callers Matthew Auld
2017-09-29 16:10 ` [PATCH 06/21] drm/i915: introduce page_size members Matthew Auld
2017-09-29 21:31   ` Chris Wilson
2017-10-03 16:32     ` Chris Wilson
2017-09-29 16:10 ` [PATCH 07/21] drm/i915: introduce vm set_pages/clear_pages Matthew Auld
2017-10-02 12:10   ` Joonas Lahtinen
2017-09-29 16:10 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
2017-10-02 12:16   ` Joonas Lahtinen
2017-09-29 16:10 ` [PATCH 09/21] drm/i915: align 64K objects to 2M Matthew Auld
2017-10-02 12:18   ` Joonas Lahtinen
2017-09-29 16:10 ` [PATCH 10/21] drm/i915: enable IPS bit for 64K pages Matthew Auld
2017-10-02 12:21   ` Joonas Lahtinen
2017-09-29 16:10 ` [PATCH 11/21] drm/i915: disable GTT cache for 2M pages Matthew Auld
2017-10-02 12:31   ` Joonas Lahtinen
2017-10-02 13:52     ` Matthew Auld
2017-10-03 12:32       ` Joonas Lahtinen
2017-10-09 12:07     ` Ville Syrjälä
2017-10-09 13:04       ` Matthew Auld
2017-09-29 16:10 ` [PATCH 12/21] drm/i915: support 2M pages for the 48b PPGTT Matthew Auld
2017-09-29 16:10 ` [PATCH 13/21] drm/i915: add support for 64K scratch page Matthew Auld
2017-09-29 16:10 ` [PATCH 14/21] drm/i915: support 64K pages for the 48b PPGTT Matthew Auld
2017-09-29 16:10 ` [PATCH 15/21] drm/i915: accurate page size tracking for the ppgtt Matthew Auld
2017-09-29 16:10 ` [PATCH 16/21] drm/i915/debugfs: include some gtt page size metrics Matthew Auld
2017-09-29 16:10 ` [PATCH 17/21] drm/i915/selftests: huge page tests Matthew Auld
2017-09-29 16:22   ` Chris Wilson
2017-10-02 22:11   ` kbuild test robot
2017-10-03  9:49   ` Chris Wilson
2017-09-29 16:10 ` [PATCH 18/21] drm/i915/selftests: mix huge pages Matthew Auld
2017-09-29 16:10 ` [PATCH 19/21] drm/i915: disable platform support for vGPU huge gtt pages Matthew Auld
2017-09-29 16:10 ` [PATCH 20/21] drm/i915: enable platform support for 64K pages Matthew Auld
2017-09-29 16:10 ` [PATCH 21/21] drm/i915: enable platform support for 2M pages Matthew Auld
2017-09-29 16:59 ` ✓ Fi.CI.BAT: success for huge gtt pages (rev10) Patchwork
2017-09-29 18:32 ` ✓ Fi.CI.IGT: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2017-10-06 14:50 [PATCH 00/21] huge gtt pages Matthew Auld
2017-10-06 14:50 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
2017-10-06 22:10   ` Chris Wilson
2017-10-05 15:18 [PATCH 00/21] huge gtt pages Matthew Auld
2017-10-05 15:19 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
2017-09-22 17:32 [PATCH 00/21] huge gtt pages Matthew Auld
2017-09-22 17:32 ` [PATCH 08/21] drm/i915: align the vma start to the largest gtt page size Matthew Auld
2017-09-23  8:46   ` Chris Wilson
