From: Matthew Auld <matthew.auld@intel.com>
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH 13/21] drm/i915: add support for 64K scratch page
Date: Fri,  6 Oct 2017 15:50:33 +0100
Message-ID: <20171006145041.21673-14-matthew.auld@intel.com>
In-Reply-To: <20171006145041.21673-1-matthew.auld@intel.com>

Before we can fully enable 64K pages, we first need to support a 64K
scratch page if we intend to handle objects with sizes < 2M, since any
scratch PTE must also point to a 64K region. Without this, our 64K
usage is limited to objects which completely fill the page-table and
therefore don't need any scratch.
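
For illustration only, and not part of the patch: below is a minimal
user-space C sketch of the same try-64K-then-fall-back-to-4K pattern.
alloc_scratch(), SZ_4K/SZ_64K and the order values are hypothetical
stand-ins rather than the i915 helpers; the sketch only shows the shape
of the fallback and the order-to-size arithmetic that the cleanup path
relies on.

#include <stdio.h>
#include <stdlib.h>

#define SZ_4K	(4u << 10)
#define SZ_64K	(64u << 10)

/*
 * Hypothetical illustration, not i915 code: prefer a 64K-aligned 64K
 * "scratch" allocation, mirroring the requirement that a 64K scratch
 * PTE must point at a 64K-aligned 64K region, and fall back to a
 * single 4K page if that fails.
 */
static void *alloc_scratch(unsigned int *order)
{
	void *p = aligned_alloc(SZ_64K, SZ_64K);

	if (p) {
		*order = 4; /* get_order(64K) with 4K pages: 64K/4K = 16 = 2^4 */
		return p;
	}

	*order = 0; /* fallback: a single 4K page */
	return aligned_alloc(SZ_4K, SZ_4K);
}

int main(void)
{
	unsigned int order;
	void *scratch = alloc_scratch(&order);

	if (!scratch)
		return 1;

	/* Size backing the entry, mirroring BIT(order) << PAGE_SHIFT. */
	printf("scratch=%p order=%u size=%u\n",
	       scratch, order, (1u << order) * SZ_4K);

	free(scratch);
	return 0;
}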

v2: add reminder about why 48b PPGTT

Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 64 ++++++++++++++++++++++++++++++-------
 drivers/gpu/drm/i915/i915_gem_gtt.h |  1 +
 2 files changed, 54 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 79ba485c5d42..7eae6ab8c5fd 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -519,22 +519,63 @@ static void fill_page_dma_32(struct i915_address_space *vm,
 static int
 setup_scratch_page(struct i915_address_space *vm, gfp_t gfp)
 {
-	struct page *page;
+	struct page *page = NULL;
 	dma_addr_t addr;
+	int order;
 
-	page = alloc_page(gfp | __GFP_ZERO);
-	if (unlikely(!page))
-		return -ENOMEM;
+	/*
+	 * In order to utilize 64K pages for an object with a size < 2M, we will
+	 * need to support a 64K scratch page, given that every 16th entry for a
+	 * page-table operating in 64K mode must point to a properly aligned 64K
+	 * region, including any PTEs which happen to point to scratch.
+	 *
+	 * This is only relevant for the 48b PPGTT where we support
+	 * huge-gtt-pages, see also i915_vma_insert().
+	 *
+	 * TODO: we should really consider write-protecting the scratch-page and
+	 * sharing between ppgtt
+	 */
+	if (i915_vm_is_48bit(vm) &&
+	    HAS_PAGE_SIZES(vm->i915, I915_GTT_PAGE_SIZE_64K)) {
+		order = get_order(I915_GTT_PAGE_SIZE_64K);
+		page = alloc_pages(gfp | __GFP_ZERO, order);
+		if (page) {
+			addr = dma_map_page(vm->dma, page, 0,
+					    I915_GTT_PAGE_SIZE_64K,
+					    PCI_DMA_BIDIRECTIONAL);
+			if (unlikely(dma_mapping_error(vm->dma, addr))) {
+				__free_pages(page, order);
+				page = NULL;
+			}
 
-	addr = dma_map_page(vm->dma, page, 0, PAGE_SIZE,
-			    PCI_DMA_BIDIRECTIONAL);
-	if (unlikely(dma_mapping_error(vm->dma, addr))) {
-		__free_page(page);
-		return -ENOMEM;
+			if (!IS_ALIGNED(addr, I915_GTT_PAGE_SIZE_64K)) {
+				dma_unmap_page(vm->dma, addr,
+					       I915_GTT_PAGE_SIZE_64K,
+					       PCI_DMA_BIDIRECTIONAL);
+				__free_pages(page, order);
+				page = NULL;
+			}
+		}
+	}
+
+	if (!page) {
+		order = 0;
+		page = alloc_page(gfp | __GFP_ZERO);
+		if (unlikely(!page))
+			return -ENOMEM;
+
+		addr = dma_map_page(vm->dma, page, 0, PAGE_SIZE,
+				    PCI_DMA_BIDIRECTIONAL);
+		if (unlikely(dma_mapping_error(vm->dma, addr))) {
+			__free_page(page);
+			return -ENOMEM;
+		}
 	}
 
 	vm->scratch_page.page = page;
 	vm->scratch_page.daddr = addr;
+	vm->scratch_page.order = order;
+
 	return 0;
 }
 
@@ -542,8 +583,9 @@ static void cleanup_scratch_page(struct i915_address_space *vm)
 {
 	struct i915_page_dma *p = &vm->scratch_page;
 
-	dma_unmap_page(vm->dma, p->daddr, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-	__free_page(p->page);
+	dma_unmap_page(vm->dma, p->daddr, BIT(p->order) << PAGE_SHIFT,
+		       PCI_DMA_BIDIRECTIONAL);
+	__free_pages(p->page, p->order);
 }
 
 static struct i915_page_table *alloc_pt(struct i915_address_space *vm)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index b9d7036c3665..e9de3f05b0c9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -215,6 +215,7 @@ struct i915_vma;
 
 struct i915_page_dma {
 	struct page *page;
+	int order;
 	union {
 		dma_addr_t daddr;
 
-- 
2.13.5
