intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking
@ 2023-04-04 20:06 Thomas Hellström
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 1/3] drm/ttm/pool: Fix ttm_pool_alloc error path Thomas Hellström
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: Thomas Hellström @ 2023-04-04 20:06 UTC (permalink / raw)
  To: dri-devel
  Cc: Thomas Hellström, intel-gfx, intel-xe, Christian Koenig,
	Matthew Auld

I have collected the patches from v1 of the TTM shrinker series that
are, from my point of view, uncontroversial, some of them corrected
after the initial submission, plus one patch added from the Xe RFC
("drm/ttm: Don't print error message if eviction was interrupted").
It would be nice to have these reviewed and merged while the rest is
being reworked.

v2:
- Simplify __ttm_pool_free().
- Fix the TTM_TT_FLAG bit numbers.
- Keep all allocation orders for TTM pages at or below PMD order

v3:
- Rename __ttm_pool_free() to ttm_pool_free_range(). Document.
- Compile-fix.

Thomas Hellström (3):
  drm/ttm/pool: Fix ttm_pool_alloc error path
  drm/ttm: Reduce the number of used allocation orders for TTM pages
  drm/ttm: Make the call to ttm_tt_populate() interruptible when
    faulting

 drivers/gpu/drm/ttm/ttm_bo_vm.c |  13 +++-
 drivers/gpu/drm/ttm/ttm_pool.c  | 111 ++++++++++++++++++++------------
 2 files changed, 80 insertions(+), 44 deletions(-)

-- 
2.39.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [Intel-gfx] [PATCH RESEND v3 1/3] drm/ttm/pool: Fix ttm_pool_alloc error path
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
@ 2023-04-04 20:06 ` Thomas Hellström
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages Thomas Hellström
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2023-04-04 20:06 UTC (permalink / raw)
  To: dri-devel
  Cc: Thomas Hellström, intel-gfx, intel-xe, Huang Rui,
	Matthew Auld, Dave Airlie, Christian König

When hitting an error, the error path failed to unmap DMA mappings
and could call set_pages_wb() on already uncached pages.

Fix this by introducing a common ttm_pool_free_range() function that
does the right thing.

v2:
- Simplify that common function (Christian König)
v3:
- Rename that common function to ttm_pool_free_range() (Christian König)

Fixes: d099fc8f540a ("drm/ttm: new TT backend allocation pool v3")
Cc: Christian König <christian.koenig@amd.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_pool.c | 81 +++++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index aa116a7bbae3..dfce896c4bae 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -367,6 +367,43 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 	return 0;
 }
 
+/**
+ * ttm_pool_free_range() - Free a range of TTM pages
+ * @pool: The pool used for allocating.
+ * @tt: The struct ttm_tt holding the page pointers.
+ * @caching: The page caching mode used by the range.
+ * @start_page: index for first page to free.
+ * @end_page: index for last page to free + 1.
+ *
+ * During allocation the ttm_tt page-vector may be populated with ranges of
+ * pages with different attributes if allocation hit an error without being
+ * able to completely fulfill the allocation. This function can be used
+ * to free these individual ranges.
+ */
+static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
+				enum ttm_caching caching,
+				pgoff_t start_page, pgoff_t end_page)
+{
+	struct page **pages = tt->pages;
+	unsigned int order;
+	pgoff_t i, nr;
+
+	for (i = start_page; i < end_page; i += nr, pages += nr) {
+		struct ttm_pool_type *pt = NULL;
+
+		order = ttm_pool_page_order(pool, *pages);
+		nr = (1UL << order);
+		if (tt->dma_address)
+			ttm_pool_unmap(pool, tt->dma_address[i], nr);
+
+		pt = ttm_pool_select_type(pool, caching, order);
+		if (pt)
+			ttm_pool_type_give(pt, *pages);
+		else
+			ttm_pool_free_page(pool, caching, order, *pages);
+	}
+}
+
 /**
  * ttm_pool_alloc - Fill a ttm_tt object
  *
@@ -382,12 +419,14 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		   struct ttm_operation_ctx *ctx)
 {
-	unsigned long num_pages = tt->num_pages;
+	pgoff_t num_pages = tt->num_pages;
 	dma_addr_t *dma_addr = tt->dma_address;
 	struct page **caching = tt->pages;
 	struct page **pages = tt->pages;
+	enum ttm_caching page_caching;
 	gfp_t gfp_flags = GFP_USER;
-	unsigned int i, order;
+	pgoff_t caching_divide;
+	unsigned int order;
 	struct page *p;
 	int r;
 
@@ -410,6 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	     order = min_t(unsigned int, order, __fls(num_pages))) {
 		struct ttm_pool_type *pt;
 
+		page_caching = tt->caching;
 		pt = ttm_pool_select_type(pool, tt->caching, order);
 		p = pt ? ttm_pool_type_take(pt) : NULL;
 		if (p) {
@@ -418,6 +458,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 			if (r)
 				goto error_free_page;
 
+			caching = pages;
 			do {
 				r = ttm_pool_page_allocated(pool, order, p,
 							    &dma_addr,
@@ -426,14 +467,15 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 				if (r)
 					goto error_free_page;
 
+				caching = pages;
 				if (num_pages < (1 << order))
 					break;
 
 				p = ttm_pool_type_take(pt);
 			} while (p);
-			caching = pages;
 		}
 
+		page_caching = ttm_cached;
 		while (num_pages >= (1 << order) &&
 		       (p = ttm_pool_alloc_page(pool, gfp_flags, order))) {
 
@@ -442,6 +484,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 							   tt->caching);
 				if (r)
 					goto error_free_page;
+				caching = pages;
 			}
 			r = ttm_pool_page_allocated(pool, order, p, &dma_addr,
 						    &num_pages, &pages);
@@ -468,15 +511,13 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	return 0;
 
 error_free_page:
-	ttm_pool_free_page(pool, tt->caching, order, p);
+	ttm_pool_free_page(pool, page_caching, order, p);
 
 error_free_all:
 	num_pages = tt->num_pages - num_pages;
-	for (i = 0; i < num_pages; ) {
-		order = ttm_pool_page_order(pool, tt->pages[i]);
-		ttm_pool_free_page(pool, tt->caching, order, tt->pages[i]);
-		i += 1 << order;
-	}
+	caching_divide = caching - tt->pages;
+	ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide);
+	ttm_pool_free_range(pool, tt, ttm_cached, caching_divide, num_pages);
 
 	return r;
 }
@@ -492,27 +533,7 @@ EXPORT_SYMBOL(ttm_pool_alloc);
  */
 void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 {
-	unsigned int i;
-
-	for (i = 0; i < tt->num_pages; ) {
-		struct page *p = tt->pages[i];
-		unsigned int order, num_pages;
-		struct ttm_pool_type *pt;
-
-		order = ttm_pool_page_order(pool, p);
-		num_pages = 1ULL << order;
-		if (tt->dma_address)
-			ttm_pool_unmap(pool, tt->dma_address[i], num_pages);
-
-		pt = ttm_pool_select_type(pool, tt->caching, order);
-		if (pt)
-			ttm_pool_type_give(pt, tt->pages[i]);
-		else
-			ttm_pool_free_page(pool, tt->caching, order,
-					   tt->pages[i]);
-
-		i += num_pages;
-	}
+	ttm_pool_free_range(pool, tt, tt->caching, 0, tt->num_pages);
 
 	while (atomic_long_read(&allocated_pages) > page_pool_size)
 		ttm_pool_shrink();
-- 
2.39.2



* [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 1/3] drm/ttm/pool: Fix ttm_pool_alloc error path Thomas Hellström
@ 2023-04-04 20:06 ` Thomas Hellström
  2023-04-11  9:51   ` Daniel Vetter
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 3/3] drm/ttm: Make the call to ttm_tt_populate() interruptible when faulting Thomas Hellström
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2023-04-04 20:06 UTC (permalink / raw)
  To: dri-devel
  Cc: Thomas Hellström, intel-gfx, intel-xe, Christian König,
	Matthew Auld

When swapping out, we will split multi-order pages both in order to
move them to the swap-cache and to be able to return memory to the
system as soon as possible on a page-by-page basis.
Reduce the maximum page order to the system PMD order, as we can then
be nicer to the system and avoid splitting gigantic pages.

Looking forward to when we might be able to swap out PMD-size folios
without splitting, this will also be a benefit.

v2:
- Include all orders up to the PMD size (Christian König)
v3:
- Avoid compilation errors for architectures with special PFN_SHIFTs

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index dfce896c4bae..18c342a919a2 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -47,6 +47,11 @@
 
 #include "ttm_module.h"
 
+#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
+#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
+/* Some architectures have a weird PMD_SHIFT */
+#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
+
 /**
  * struct ttm_pool_dma - Helper object for coherent DMA mappings
  *
@@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
 
 static atomic_long_t allocated_pages;
 
-static struct ttm_pool_type global_write_combined[MAX_ORDER];
-static struct ttm_pool_type global_uncached[MAX_ORDER];
+static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
+static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
 
-static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
-static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
+static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
+static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
 
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
@@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	else
 		gfp_flags |= GFP_HIGHUSER;
 
-	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
+	for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
 	     num_pages;
 	     order = min_t(unsigned int, order, __fls(num_pages))) {
 		struct ttm_pool_type *pt;
@@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 
 	if (use_dma_alloc) {
 		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
-			for (j = 0; j < MAX_ORDER; ++j)
+			for (j = 0; j < TTM_DIM_ORDER; ++j)
 				ttm_pool_type_init(&pool->caching[i].orders[j],
 						   pool, i, j);
 	}
@@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
 
 	if (pool->use_dma_alloc) {
 		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
-			for (j = 0; j < MAX_ORDER; ++j)
+			for (j = 0; j < TTM_DIM_ORDER; ++j)
 				ttm_pool_type_fini(&pool->caching[i].orders[j]);
 	}
 
@@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
 	unsigned int i;
 
 	seq_puts(m, "\t ");
-	for (i = 0; i < MAX_ORDER; ++i)
+	for (i = 0; i < TTM_DIM_ORDER; ++i)
 		seq_printf(m, " ---%2u---", i);
 	seq_puts(m, "\n");
 }
@@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_ORDER; ++i)
+	for (i = 0; i < TTM_DIM_ORDER; ++i)
 		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
 	seq_puts(m, "\n");
 }
@@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 {
 	unsigned int i;
 
+	BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
+	BUILD_BUG_ON(TTM_DIM_ORDER < 1);
+
 	if (!page_pool_size)
 		page_pool_size = num_pages;
 
 	spin_lock_init(&shrinker_lock);
 	INIT_LIST_HEAD(&shrinker_list);
 
-	for (i = 0; i < MAX_ORDER; ++i) {
+	for (i = 0; i < TTM_DIM_ORDER; ++i) {
 		ttm_pool_type_init(&global_write_combined[i], NULL,
 				   ttm_write_combined, i);
 		ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
@@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_ORDER; ++i) {
+	for (i = 0; i < TTM_DIM_ORDER; ++i) {
 		ttm_pool_type_fini(&global_write_combined[i]);
 		ttm_pool_type_fini(&global_uncached[i]);
 
-- 
2.39.2



* [Intel-gfx] [PATCH RESEND v3 3/3] drm/ttm: Make the call to ttm_tt_populate() interruptible when faulting
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 1/3] drm/ttm/pool: Fix ttm_pool_alloc error path Thomas Hellström
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages Thomas Hellström
@ 2023-04-04 20:06 ` Thomas Hellström
  2023-04-04 23:13 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3) Patchwork
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2023-04-04 20:06 UTC (permalink / raw)
  To: dri-devel
  Cc: Thomas Hellström, intel-gfx, intel-xe, Christian Koenig,
	Matthew Auld

When swapping in, or under memory pressure, ttm_tt_populate() may sleep
for a substantial amount of time. Allow interrupts during the sleep.
This will also allow us to inject -EINTR errors during swap-in in
upcoming patches.

Also avoid returning VM_FAULT_OOM, since that will confuse the core
mm, making it print a confusing message and retry the fault.
Return VM_FAULT_SIGBUS under OOM conditions as well.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index ca7744b852f5..4bca6b54520a 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -218,14 +218,21 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 	prot = ttm_io_prot(bo, bo->resource, prot);
 	if (!bo->resource->bus.is_iomem) {
 		struct ttm_operation_ctx ctx = {
-			.interruptible = false,
+			.interruptible = true,
 			.no_wait_gpu = false,
 			.force_alloc = true
 		};
 
 		ttm = bo->ttm;
-		if (ttm_tt_populate(bdev, bo->ttm, &ctx))
-			return VM_FAULT_OOM;
+		err = ttm_tt_populate(bdev, bo->ttm, &ctx);
+		if (err) {
+			if (err == -EINTR || err == -ERESTARTSYS ||
+			    err == -EAGAIN)
+				return VM_FAULT_NOPAGE;
+
+			pr_debug("TTM fault hit %pe.\n", ERR_PTR(err));
+			return VM_FAULT_SIGBUS;
+		}
 	} else {
 		/* Iomem should not be marked encrypted */
 		prot = pgprot_decrypted(prot);
-- 
2.39.2



* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
                   ` (2 preceding siblings ...)
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 3/3] drm/ttm: Make the call to ttm_tt_populate() interruptible when faulting Thomas Hellström
@ 2023-04-04 23:13 ` Patchwork
  2023-04-04 23:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2023-04-04 23:13 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
URL   : https://patchwork.freedesktop.org/series/114774/
State : warning

== Summary ==

Error: dim checkpatch failed
20b987c8b612 drm/ttm/pool: Fix ttm_pool_alloc error path
48e572a2078c drm/ttm: Reduce the number of used allocation orders for TTM pages
-:38: WARNING:CONSTANT_COMPARISON: Comparisons should place the constant on the right side of the test
#38: FILE: drivers/gpu/drm/ttm/ttm_pool.c:53:
+#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)

total: 0 errors, 1 warnings, 0 checks, 91 lines checked
ae0dffefba8a drm/ttm: Make the call to ttm_tt_populate() interruptible when faulting




* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
                   ` (3 preceding siblings ...)
  2023-04-04 23:13 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3) Patchwork
@ 2023-04-04 23:13 ` Patchwork
  2023-04-04 23:23 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2023-04-04 23:13 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
URL   : https://patchwork.freedesktop.org/series/114774/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+./arch/x86/include/asm/bitops.h:117:1: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:148:1: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:150:9: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:154:26: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:156:16: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:156:9: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:174:1: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:176:9: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:180:35: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:182:16: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:182:9: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:186:1: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:188:9: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:192:35: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:195:16: warning: unreplaced symbol 'oldbit'
+./arch/x86/include/asm/bitops.h:195:9: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:237:1: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:239:9: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:66:1: warning: unreplaced symbol 'return'
+./arch/x86/include/asm/bitops.h:92:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:100:17: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:100:23: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:100:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:105:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:107:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:108:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:109:9: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:111:10: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:111:14: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:111:20: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:112:17: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:112:23: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:112:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:121:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:128:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:166:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:168:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:169:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:170:9: warning: unreplaced symbol 'val'
+./include/asm-generic/bitops/generic-non-atomic.h:172:19: warning: unreplaced symbol 'val'
+./include/asm-generic/bitops/generic-non-atomic.h:172:25: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:172:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:28:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:30:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:31:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:33:10: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:33:16: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:37:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:39:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:40:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:42:10: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:42:16: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:55:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:57:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:58:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:60:10: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:60:15: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:73:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:75:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:76:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:77:9: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:79:10: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:79:14: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:79:20: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:80:17: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:80:23: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:80:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:93:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/generic-non-atomic.h:95:9: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/generic-non-atomic.h:96:9: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:97:9: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:99:10: warning: unreplaced symbol 'p'
+./include/asm-generic/bitops/generic-non-atomic.h:99:14: warning: unreplaced symbol 'old'
+./include/asm-generic/bitops/generic-non-atomic.h:99:21: warning: unreplaced symbol 'mask'
+./include/asm-generic/bitops/instrumented-non-atomic.h:100:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:112:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:115:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:127:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:130:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:139:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:142:9: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:26:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:42:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:58:1: warning: unreplaced symbol 'return'
+./include/asm-generic/bitops/instrumented-non-atomic.h:97:1: warning: unreplaced symbol 'return'




* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
                   ` (4 preceding siblings ...)
  2023-04-04 23:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2023-04-04 23:23 ` Patchwork
  2023-04-05  9:24 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  2023-04-05 12:32 ` [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Christian König
  7 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2023-04-04 23:23 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 4444 bytes --]

== Series Details ==

Series: drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
URL   : https://patchwork.freedesktop.org/series/114774/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_12966 -> Patchwork_114774v3
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/index.html

Participating hosts (37 -> 36)
------------------------------

  Missing    (1): fi-snb-2520m 

Known issues
------------

  Here are the changes found in Patchwork_114774v3 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@reset:
    - bat-rpls-1:         [PASS][1] -> [ABORT][2] ([i915#4983])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/bat-rpls-1/igt@i915_selftest@live@reset.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/bat-rpls-1/igt@i915_selftest@live@reset.html

  * igt@i915_suspend@basic-s3-without-i915:
    - bat-rpls-2:         [PASS][3] -> [ABORT][4] ([i915#7978])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/bat-rpls-2/igt@i915_suspend@basic-s3-without-i915.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/bat-rpls-2/igt@i915_suspend@basic-s3-without-i915.html

  * igt@kms_chamelium_hpd@common-hpd-after-suspend:
    - bat-dg2-11:         NOTRUN -> [SKIP][5] ([i915#7828])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/bat-dg2-11/igt@kms_chamelium_hpd@common-hpd-after-suspend.html

  
#### Possible fixes ####

  * igt@gem_exec_suspend@basic-s3@smem:
    - fi-skl-6600u:       [FAIL][6] ([fdo#103375]) -> [PASS][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/fi-skl-6600u/igt@gem_exec_suspend@basic-s3@smem.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/fi-skl-6600u/igt@gem_exec_suspend@basic-s3@smem.html

  * igt@i915_selftest@live@gt_lrc:
    - bat-dg2-11:         [INCOMPLETE][8] ([i915#7609] / [i915#7913]) -> [PASS][9]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/bat-dg2-11/igt@i915_selftest@live@gt_lrc.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/bat-dg2-11/igt@i915_selftest@live@gt_lrc.html

  * igt@kms_pipe_crc_basic@suspend-read-crc@pipe-c-hdmi-a-1:
    - fi-rkl-11600:       [FAIL][10] ([fdo#103375]) -> [PASS][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/fi-rkl-11600/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-c-hdmi-a-1.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/fi-rkl-11600/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-c-hdmi-a-1.html

  
#### Warnings ####

  * igt@i915_selftest@live@slpc:
    - bat-rpls-2:         [DMESG-FAIL][12] ([i915#6367] / [i915#7913] / [i915#7996]) -> [DMESG-FAIL][13] ([i915#6367] / [i915#7913])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/bat-rpls-2/igt@i915_selftest@live@slpc.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/bat-rpls-2/igt@i915_selftest@live@slpc.html

  
  [fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
  [i915#4983]: https://gitlab.freedesktop.org/drm/intel/issues/4983
  [i915#6367]: https://gitlab.freedesktop.org/drm/intel/issues/6367
  [i915#7609]: https://gitlab.freedesktop.org/drm/intel/issues/7609
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7913]: https://gitlab.freedesktop.org/drm/intel/issues/7913
  [i915#7978]: https://gitlab.freedesktop.org/drm/intel/issues/7978
  [i915#7996]: https://gitlab.freedesktop.org/drm/intel/issues/7996


Build changes
-------------

  * Linux: CI_DRM_12966 -> Patchwork_114774v3

  CI-20190529: 20190529
  CI_DRM_12966: 202141796dba6058f9f7623c0ee48ff4ebcc2607 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7236: bac5a4cc31b3212a205219a6cbc45a173d30d04b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_114774v3: 202141796dba6058f9f7623c0ee48ff4ebcc2607 @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

a42f1c7dda48 drm/ttm: Make the call to ttm_tt_populate() interruptible when faulting
8ee4de7f70fa drm/ttm: Reduce the number of used allocation orders for TTM pages
f0f151642ee9 drm/ttm/pool: Fix ttm_pool_alloc error path

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/index.html



* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
                   ` (5 preceding siblings ...)
  2023-04-04 23:23 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2023-04-05  9:24 ` Patchwork
  2023-04-05 12:32 ` [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Christian König
  7 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2023-04-05  9:24 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 18655 bytes --]

== Series Details ==

Series: drm/ttm: Small fixes / cleanups in prep for shrinking (rev3)
URL   : https://patchwork.freedesktop.org/series/114774/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_12966_full -> Patchwork_114774v3_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (7 -> 7)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_114774v3_full:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@perf@stress-open-close@0-rcs0:
    - {shard-tglu}:       [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-tglu-3/igt@perf@stress-open-close@0-rcs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-tglu-6/igt@perf@stress-open-close@0-rcs0.html

  
Known issues
------------

  Here are the changes found in Patchwork_114774v3_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-glk:          NOTRUN -> [FAIL][3] ([i915#2842])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk1/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-glk:          NOTRUN -> [SKIP][4] ([fdo#109271] / [i915#2190])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk1/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@massive:
    - shard-apl:          NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#4613])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@gem_lmem_swapping@massive.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-apl:          [PASS][6] -> [ABORT][7] ([i915#5566])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-apl6/igt@gen9_exec_parse@allowed-single.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_pm_rpm@system-suspend-devices:
    - shard-snb:          NOTRUN -> [SKIP][8] ([fdo#109271]) +24 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-snb2/igt@i915_pm_rpm@system-suspend-devices.html

  * igt@kms_ccs@pipe-b-bad-aux-stride-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#3886]) +4 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_ccs@pipe-b-bad-aux-stride-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-c-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs:
    - shard-glk:          NOTRUN -> [SKIP][10] ([fdo#109271] / [i915#3886]) +2 similar issues
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk3/igt@kms_ccs@pipe-c-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs.html

  * igt@kms_chamelium_color@ctm-0-75:
    - shard-apl:          NOTRUN -> [SKIP][11] ([fdo#109271]) +79 similar issues
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_chamelium_color@ctm-0-75.html

  * igt@kms_content_protection@legacy@pipe-a-dp-1:
    - shard-apl:          NOTRUN -> [TIMEOUT][12] ([i915#7173])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_content_protection@legacy@pipe-a-dp-1.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-glk:          [PASS][13] -> [FAIL][14] ([i915#2346]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-glk2/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk6/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-apl:          [PASS][15] -> [ABORT][16] ([i915#180])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-apl3/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@b-dp1:
    - shard-apl:          [PASS][17] -> [DMESG-WARN][18] ([i915#180])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-apl3/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-mmap-gtt:
    - shard-glk:          NOTRUN -> [SKIP][19] ([fdo#109271]) +49 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk1/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-mmap-gtt.html

  * igt@kms_plane_alpha_blend@alpha-basic@pipe-c-hdmi-a-1:
    - shard-glk:          NOTRUN -> [FAIL][20] ([i915#7862]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk1/igt@kms_plane_alpha_blend@alpha-basic@pipe-c-hdmi-a-1.html

  * igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-dp-1:
    - shard-apl:          NOTRUN -> [FAIL][21] ([i915#4573]) +1 similar issue
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-dp-1.html

  * igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-fully-sf:
    - shard-glk:          NOTRUN -> [SKIP][22] ([fdo#109271] / [i915#658])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk1/igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-sf:
    - shard-apl:          NOTRUN -> [SKIP][23] ([fdo#109271] / [i915#658])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-sf.html

  
#### Possible fixes ####

  * igt@drm_fdinfo@most-busy-idle-check-all@rcs0:
    - {shard-rkl}:        [FAIL][24] ([i915#7742]) -> [PASS][25]
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-rkl-6/igt@drm_fdinfo@most-busy-idle-check-all@rcs0.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-rkl-3/igt@drm_fdinfo@most-busy-idle-check-all@rcs0.html

  * igt@gem_barrier_race@remote-request@rcs0:
    - {shard-dg1}:        [ABORT][26] ([i915#8234]) -> [PASS][27]
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-dg1-15/igt@gem_barrier_race@remote-request@rcs0.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-dg1-15/igt@gem_barrier_race@remote-request@rcs0.html
    - shard-apl:          [ABORT][28] ([i915#8211] / [i915#8234]) -> [PASS][29]
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-apl2/igt@gem_barrier_race@remote-request@rcs0.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl6/igt@gem_barrier_race@remote-request@rcs0.html
    - shard-glk:          [ABORT][30] ([i915#8211]) -> [PASS][31]
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-glk4/igt@gem_barrier_race@remote-request@rcs0.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk3/igt@gem_barrier_race@remote-request@rcs0.html

  * igt@gem_ctx_exec@basic-nohangcheck:
    - {shard-tglu}:       [FAIL][32] ([i915#6268]) -> [PASS][33]
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-tglu-3/igt@gem_ctx_exec@basic-nohangcheck.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-tglu-6/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_eio@in-flight-contexts-immediate:
    - shard-apl:          [TIMEOUT][34] ([i915#3063]) -> [PASS][35]
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-apl6/igt@gem_eio@in-flight-contexts-immediate.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl6/igt@gem_eio@in-flight-contexts-immediate.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - {shard-rkl}:        [FAIL][36] ([i915#2842]) -> [PASS][37] +2 similar issues
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-rkl-6/igt@gem_exec_fair@basic-pace@rcs0.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-rkl-3/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-glk:          [ABORT][38] ([i915#5566]) -> [PASS][39]
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-glk3/igt@gen9_exec_parse@allowed-single.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk1/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-snb:          [ABORT][40] ([i915#4528]) -> [PASS][41]
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-snb7/igt@i915_module_load@reload-with-fault-injection.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-snb2/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_pm_dc@dc6-dpms:
    - {shard-tglu}:       [FAIL][42] ([i915#3989] / [i915#454]) -> [PASS][43]
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-tglu-5/igt@i915_pm_dc@dc6-dpms.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-tglu-2/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a:
    - {shard-dg1}:        [SKIP][44] ([i915#1937]) -> [PASS][45]
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-dg1-15/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-dg1-14/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a.html

  * igt@i915_pm_rc6_residency@rc6-idle@bcs0:
    - {shard-dg1}:        [FAIL][46] ([i915#3591]) -> [PASS][47]
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-dg1-14/igt@i915_pm_rc6_residency@rc6-idle@bcs0.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-dg1-14/igt@i915_pm_rc6_residency@rc6-idle@bcs0.html

  * igt@i915_pm_rpm@modeset-lpsp-stress:
    - {shard-rkl}:        [SKIP][48] ([i915#1397]) -> [PASS][49]
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-rkl-2/igt@i915_pm_rpm@modeset-lpsp-stress.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-rkl-7/igt@i915_pm_rpm@modeset-lpsp-stress.html

  * igt@i915_pm_rps@reset:
    - shard-snb:          [DMESG-FAIL][50] ([i915#8319]) -> [PASS][51]
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-snb5/igt@i915_pm_rps@reset.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-snb2/igt@i915_pm_rps@reset.html

  * igt@i915_suspend@fence-restore-untiled:
    - shard-snb:          [DMESG-WARN][52] ([i915#5090]) -> [PASS][53]
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-snb4/igt@i915_suspend@fence-restore-untiled.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-snb2/igt@i915_suspend@fence-restore-untiled.html

  * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
    - shard-glk:          [FAIL][54] ([i915#72]) -> [PASS][55]
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-glk2/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-glk6/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html

  
#### Warnings ####

  * igt@kms_content_protection@atomic@pipe-a-dp-1:
    - shard-apl:          [FAIL][56] ([i915#7173]) -> [TIMEOUT][57] ([i915#7173])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12966/shard-apl2/igt@kms_content_protection@atomic@pipe-a-dp-1.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/shard-apl7/igt@kms_content_protection@atomic@pipe-a-dp-1.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1937]: https://gitlab.freedesktop.org/drm/intel/issues/1937
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2433]: https://gitlab.freedesktop.org/drm/intel/issues/2433
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#3063]: https://gitlab.freedesktop.org/drm/intel/issues/3063
  [i915#315]: https://gitlab.freedesktop.org/drm/intel/issues/315
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
  [i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3591]: https://gitlab.freedesktop.org/drm/intel/issues/3591
  [i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#3989]: https://gitlab.freedesktop.org/drm/intel/issues/3989
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4258]: https://gitlab.freedesktop.org/drm/intel/issues/4258
  [i915#426]: https://gitlab.freedesktop.org/drm/intel/issues/426
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4349]: https://gitlab.freedesktop.org/drm/intel/issues/4349
  [i915#4528]: https://gitlab.freedesktop.org/drm/intel/issues/4528
  [i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#4573]: https://gitlab.freedesktop.org/drm/intel/issues/4573
  [i915#4579]: https://gitlab.freedesktop.org/drm/intel/issues/4579
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4816]: https://gitlab.freedesktop.org/drm/intel/issues/4816
  [i915#4833]: https://gitlab.freedesktop.org/drm/intel/issues/4833
  [i915#4852]: https://gitlab.freedesktop.org/drm/intel/issues/4852
  [i915#4854]: https://gitlab.freedesktop.org/drm/intel/issues/4854
  [i915#4880]: https://gitlab.freedesktop.org/drm/intel/issues/4880
  [i915#5090]: https://gitlab.freedesktop.org/drm/intel/issues/5090
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5289]: https://gitlab.freedesktop.org/drm/intel/issues/5289
  [i915#5325]: https://gitlab.freedesktop.org/drm/intel/issues/5325
  [i915#5461]: https://gitlab.freedesktop.org/drm/intel/issues/5461
  [i915#5563]: https://gitlab.freedesktop.org/drm/intel/issues/5563
  [i915#5566]: https://gitlab.freedesktop.org/drm/intel/issues/5566
  [i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6268]: https://gitlab.freedesktop.org/drm/intel/issues/6268
  [i915#6301]: https://gitlab.freedesktop.org/drm/intel/issues/6301
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#7116]: https://gitlab.freedesktop.org/drm/intel/issues/7116
  [i915#7173]: https://gitlab.freedesktop.org/drm/intel/issues/7173
  [i915#72]: https://gitlab.freedesktop.org/drm/intel/issues/72
  [i915#7561]: https://gitlab.freedesktop.org/drm/intel/issues/7561
  [i915#7711]: https://gitlab.freedesktop.org/drm/intel/issues/7711
  [i915#7742]: https://gitlab.freedesktop.org/drm/intel/issues/7742
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7862]: https://gitlab.freedesktop.org/drm/intel/issues/7862
  [i915#8011]: https://gitlab.freedesktop.org/drm/intel/issues/8011
  [i915#8150]: https://gitlab.freedesktop.org/drm/intel/issues/8150
  [i915#8211]: https://gitlab.freedesktop.org/drm/intel/issues/8211
  [i915#8234]: https://gitlab.freedesktop.org/drm/intel/issues/8234
  [i915#8292]: https://gitlab.freedesktop.org/drm/intel/issues/8292
  [i915#8319]: https://gitlab.freedesktop.org/drm/intel/issues/8319


Build changes
-------------

  * Linux: CI_DRM_12966 -> Patchwork_114774v3

  CI-20190529: 20190529
  CI_DRM_12966: 202141796dba6058f9f7623c0ee48ff4ebcc2607 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7236: bac5a4cc31b3212a205219a6cbc45a173d30d04b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_114774v3: 202141796dba6058f9f7623c0ee48ff4ebcc2607 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114774v3/index.html



* Re: [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking
  2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
                   ` (6 preceding siblings ...)
  2023-04-05  9:24 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
@ 2023-04-05 12:32 ` Christian König
  2023-04-05 12:36   ` Thomas Hellström
  7 siblings, 1 reply; 20+ messages in thread
From: Christian König @ 2023-04-05 12:32 UTC (permalink / raw)
  To: Thomas Hellström, dri-devel; +Cc: intel-gfx, Matthew Auld, intel-xe

On 04.04.23 at 22:06, Thomas Hellström wrote:
> I collected the, from my POV, uncontroversial patches from V1 of the TTM
> shrinker series, some corrected after the initial patch submission, one
> patch added from the Xe RFC ("drm/ttm: Don't print error message if
> eviction was interrupted"). It would be nice to have these reviewed and
> merged while reworking the rest.
>
> v2:
> - Simplify __ttm_pool_free().
> - Fix the TTM_TT_FLAG bit numbers.
> - Keep all allocation orders for TTM pages at or below PMD order
>
> v3:
> - Rename __tm_pool_free() to ttm_pool_free_range(). Document.
> - Compile-fix.

Reviewed-by: Christian König <christian.koenig@amd.com> for the series.

>
> Thomas Hellström (3):
>    drm/ttm/pool: Fix ttm_pool_alloc error path
>    drm/ttm: Reduce the number of used allocation orders for TTM pages
>    drm/ttm: Make the call to ttm_tt_populate() interruptible when
>      faulting
>
>   drivers/gpu/drm/ttm/ttm_bo_vm.c |  13 +++-
>   drivers/gpu/drm/ttm/ttm_pool.c  | 111 ++++++++++++++++++++------------
>   2 files changed, 80 insertions(+), 44 deletions(-)
>



* Re: [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking
  2023-04-05 12:32 ` [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Christian König
@ 2023-04-05 12:36   ` Thomas Hellström
  0 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2023-04-05 12:36 UTC (permalink / raw)
  To: Christian König, dri-devel; +Cc: intel-gfx, Matthew Auld, intel-xe


On 4/5/23 14:32, Christian König wrote:
> On 04.04.23 at 22:06, Thomas Hellström wrote:
>> I collected the, from my POV, uncontroversial patches from V1 of the TTM
>> shrinker series, some corrected after the initial patch submission, one
>> patch added from the Xe RFC ("drm/ttm: Don't print error message if
>> eviction was interrupted"). It would be nice to have these reviewed and
>> merged while reworking the rest.
>>
>> v2:
>> - Simplify __ttm_pool_free().
>> - Fix the TTM_TT_FLAG bit numbers.
>> - Keep all allocation orders for TTM pages at or below PMD order
>>
>> v3:
>> - Rename __tm_pool_free() to ttm_pool_free_range(). Document.
>> - Compile-fix.
>
> Reviewed-by: Christian König <christian.koenig@amd.com> for the series.

Thanks, Christian.

/Thomas


>
>>
>> Thomas Hellström (3):
>>    drm/ttm/pool: Fix ttm_pool_alloc error path
>>    drm/ttm: Reduce the number of used allocation orders for TTM pages
>>    drm/ttm: Make the call to ttm_tt_populate() interruptible when
>>      faulting
>>
>>   drivers/gpu/drm/ttm/ttm_bo_vm.c |  13 +++-
>>   drivers/gpu/drm/ttm/ttm_pool.c  | 111 ++++++++++++++++++++------------
>>   2 files changed, 80 insertions(+), 44 deletions(-)
>>
>


* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages Thomas Hellström
@ 2023-04-11  9:51   ` Daniel Vetter
  2023-04-11 12:11     ` Christian König
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-11  9:51 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: intel-gfx, Matthew Auld, intel-xe, dri-devel, Christian König

On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
> When swapping out, we will split multi-order pages both in order to
> move them to the swap-cache and to be able to return memory to the
> swap cache as soon as possible on a page-by-page basis.
> Reduce the page max order to the system PMD size, as we can then be nicer
> to the system and avoid splitting gigantic pages.
> 
> Looking forward to when we might be able to swap out PMD size folios
> without splitting, this will also be a benefit.
> 
> v2:
> - Include all orders up to the PMD size (Christian König)
> v3:
> - Avoid compilation errors for architectures with special PFN_SHIFTs
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Reviewed-by: Christian König <christian.koenig@amd.com>

Apparently this fails on ppc build testing. Please supply a build fix asap
(or I guess we need to revert). I'm kinda not clear why this only showed
up when I merged the drm-misc-next PR into drm-next ...
-Daniel

> ---
>  drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
>  1 file changed, 19 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index dfce896c4bae..18c342a919a2 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -47,6 +47,11 @@
>  
>  #include "ttm_module.h"
>  
> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
> +/* Some architectures have a weird PMD_SHIFT */
> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
> +
>  /**
>   * struct ttm_pool_dma - Helper object for coherent DMA mappings
>   *
> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
>  
>  static atomic_long_t allocated_pages;
>  
> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
> -static struct ttm_pool_type global_uncached[MAX_ORDER];
> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
>  
> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
>  
>  static spinlock_t shrinker_lock;
>  static struct list_head shrinker_list;
> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>  	else
>  		gfp_flags |= GFP_HIGHUSER;
>  
> -	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
> +	for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
>  	     num_pages;
>  	     order = min_t(unsigned int, order, __fls(num_pages))) {
>  		struct ttm_pool_type *pt;
> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
>  
>  	if (use_dma_alloc) {
>  		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> -			for (j = 0; j < MAX_ORDER; ++j)
> +			for (j = 0; j < TTM_DIM_ORDER; ++j)
>  				ttm_pool_type_init(&pool->caching[i].orders[j],
>  						   pool, i, j);
>  	}
> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
>  
>  	if (pool->use_dma_alloc) {
>  		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> -			for (j = 0; j < MAX_ORDER; ++j)
> +			for (j = 0; j < TTM_DIM_ORDER; ++j)
>  				ttm_pool_type_fini(&pool->caching[i].orders[j]);
>  	}
>  
> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
>  	unsigned int i;
>  
>  	seq_puts(m, "\t ");
> -	for (i = 0; i < MAX_ORDER; ++i)
> +	for (i = 0; i < TTM_DIM_ORDER; ++i)
>  		seq_printf(m, " ---%2u---", i);
>  	seq_puts(m, "\n");
>  }
> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
>  {
>  	unsigned int i;
>  
> -	for (i = 0; i < MAX_ORDER; ++i)
> +	for (i = 0; i < TTM_DIM_ORDER; ++i)
>  		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
>  	seq_puts(m, "\n");
>  }
> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
>  {
>  	unsigned int i;
>  
> +	BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
> +	BUILD_BUG_ON(TTM_DIM_ORDER < 1);
> +
>  	if (!page_pool_size)
>  		page_pool_size = num_pages;
>  
>  	spin_lock_init(&shrinker_lock);
>  	INIT_LIST_HEAD(&shrinker_list);
>  
> -	for (i = 0; i < MAX_ORDER; ++i) {
> +	for (i = 0; i < TTM_DIM_ORDER; ++i) {
>  		ttm_pool_type_init(&global_write_combined[i], NULL,
>  				   ttm_write_combined, i);
>  		ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
>  {
>  	unsigned int i;
>  
> -	for (i = 0; i < MAX_ORDER; ++i) {
> +	for (i = 0; i < TTM_DIM_ORDER; ++i) {
>  		ttm_pool_type_fini(&global_write_combined[i]);
>  		ttm_pool_type_fini(&global_uncached[i]);
>  
> -- 
> 2.39.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-11  9:51   ` Daniel Vetter
@ 2023-04-11 12:11     ` Christian König
  2023-04-11 13:45       ` Daniel Vetter
  0 siblings, 1 reply; 20+ messages in thread
From: Christian König @ 2023-04-11 12:11 UTC (permalink / raw)
  To: Daniel Vetter, Thomas Hellström
  Cc: intel-gfx, intel-xe, dri-devel, Matthew Auld

On 11.04.23 at 11:51, Daniel Vetter wrote:
> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
>> When swapping out, we will split multi-order pages both in order to
>> move them to the swap-cache and to be able to return memory to the
>> swap cache as soon as possible on a page-by-page basis.
>> Reduce the page max order to the system PMD size, as we can then be nicer
>> to the system and avoid splitting gigantic pages.
>>
>> Looking forward to when we might be able to swap out PMD size folios
>> without splitting, this will also be a benefit.
>>
>> v2:
>> - Include all orders up to the PMD size (Christian König)
>> v3:
>> - Avoid compilation errors for architectures with special PFN_SHIFTs
>>
>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
> Apparently this fails on ppc build testing. Please supply a build fix asap
> (or I guess we need to revert). I'm kinda not clear why this only showed
> up when I merged the drm-misc-next PR into drm-next ...

I'm really wondering this as well. It looks like PMD_SHIFT isn't a
constant on this particular platform.

But from what I can find in the upstream 6.2 kernel, PMD_SHIFT always
seems to be a constant.

So how exactly can that break here?

Christian.

> -Daniel
>
>> ---
>>   drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
>>   1 file changed, 19 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>> index dfce896c4bae..18c342a919a2 100644
>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>> @@ -47,6 +47,11 @@
>>   
>>   #include "ttm_module.h"
>>   
>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
>> +/* Some architectures have a weird PMD_SHIFT */
>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
>> +
>>   /**
>>    * struct ttm_pool_dma - Helper object for coherent DMA mappings
>>    *
>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
>>   
>>   static atomic_long_t allocated_pages;
>>   
>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
>> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
>>   
>> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
>> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
>> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
>>   
>>   static spinlock_t shrinker_lock;
>>   static struct list_head shrinker_list;
>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>   	else
>>   		gfp_flags |= GFP_HIGHUSER;
>>   
>> -	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
>> +	for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
>>   	     num_pages;
>>   	     order = min_t(unsigned int, order, __fls(num_pages))) {
>>   		struct ttm_pool_type *pt;
>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
>>   
>>   	if (use_dma_alloc) {
>>   		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>> -			for (j = 0; j < MAX_ORDER; ++j)
>> +			for (j = 0; j < TTM_DIM_ORDER; ++j)
>>   				ttm_pool_type_init(&pool->caching[i].orders[j],
>>   						   pool, i, j);
>>   	}
>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
>>   
>>   	if (pool->use_dma_alloc) {
>>   		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>> -			for (j = 0; j < MAX_ORDER; ++j)
>> +			for (j = 0; j < TTM_DIM_ORDER; ++j)
>>   				ttm_pool_type_fini(&pool->caching[i].orders[j]);
>>   	}
>>   
>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
>>   	unsigned int i;
>>   
>>   	seq_puts(m, "\t ");
>> -	for (i = 0; i < MAX_ORDER; ++i)
>> +	for (i = 0; i < TTM_DIM_ORDER; ++i)
>>   		seq_printf(m, " ---%2u---", i);
>>   	seq_puts(m, "\n");
>>   }
>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
>>   {
>>   	unsigned int i;
>>   
>> -	for (i = 0; i < MAX_ORDER; ++i)
>> +	for (i = 0; i < TTM_DIM_ORDER; ++i)
>>   		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
>>   	seq_puts(m, "\n");
>>   }
>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
>>   {
>>   	unsigned int i;
>>   
>> +	BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
>> +	BUILD_BUG_ON(TTM_DIM_ORDER < 1);
>> +
>>   	if (!page_pool_size)
>>   		page_pool_size = num_pages;
>>   
>>   	spin_lock_init(&shrinker_lock);
>>   	INIT_LIST_HEAD(&shrinker_list);
>>   
>> -	for (i = 0; i < MAX_ORDER; ++i) {
>> +	for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>   		ttm_pool_type_init(&global_write_combined[i], NULL,
>>   				   ttm_write_combined, i);
>>   		ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
>>   {
>>   	unsigned int i;
>>   
>> -	for (i = 0; i < MAX_ORDER; ++i) {
>> +	for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>   		ttm_pool_type_fini(&global_write_combined[i]);
>>   		ttm_pool_type_fini(&global_uncached[i]);
>>   
>> -- 
>> 2.39.2
>>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-11 12:11     ` Christian König
@ 2023-04-11 13:45       ` Daniel Vetter
  2023-04-12  9:08         ` Daniel Vetter
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-11 13:45 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, intel-gfx, dri-devel, Matthew Auld,
	Daniel Vetter, intel-xe

On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
> > On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
> > > When swapping out, we will split multi-order pages both in order to
> > > move them to the swap-cache and to be able to return memory to the
> > > swap cache as soon as possible on a page-by-page basis.
> > > Reduce the page max order to the system PMD size, as we can then be nicer
> > > to the system and avoid splitting gigantic pages.
> > > 
> > > Looking forward to when we might be able to swap out PMD size folios
> > > without splitting, this will also be a benefit.
> > > 
> > > v2:
> > > - Include all orders up to the PMD size (Christian König)
> > > v3:
> > > - Avoid compilation errors for architectures with special PFN_SHIFTs
> > > 
> > > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Reviewed-by: Christian König <christian.koenig@amd.com>
> > Apparently this fails on ppc build testing. Please supply build fix asap
> > (or I guess we need to revert). I'm kinda not clear why this only showed
> > up when I merged the drm-misc-next pr into drm-next ...
> 
> I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
> on this particular platform.
> 
> But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
> to be a constant.
> 
> So how exactly can that here break?

There's some in-flight patches to rework MAX_ORDER and other things in
linux-next, maybe it's recent? If you check out linux-next then you need
to reapply the patch (since sfr reverted it).
-Daniel

> 
> Christian.
> 
> > -Daniel
> > 
> > > ---
> > >   drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
> > >   1 file changed, 19 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > index dfce896c4bae..18c342a919a2 100644
> > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > @@ -47,6 +47,11 @@
> > >   #include "ttm_module.h"
> > > +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
> > > +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
> > > +/* Some architectures have a weird PMD_SHIFT */
> > > +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
> > > +
> > >   /**
> > >    * struct ttm_pool_dma - Helper object for coherent DMA mappings
> > >    *
> > > @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
> > >   static atomic_long_t allocated_pages;
> > > -static struct ttm_pool_type global_write_combined[MAX_ORDER];
> > > -static struct ttm_pool_type global_uncached[MAX_ORDER];
> > > +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
> > > +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
> > > -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
> > > -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
> > > +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
> > > +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
> > >   static spinlock_t shrinker_lock;
> > >   static struct list_head shrinker_list;
> > > @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> > >   	else
> > >   		gfp_flags |= GFP_HIGHUSER;
> > > -	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
> > > +	for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
> > >   	     num_pages;
> > >   	     order = min_t(unsigned int, order, __fls(num_pages))) {
> > >   		struct ttm_pool_type *pt;
> > > @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
> > >   	if (use_dma_alloc) {
> > >   		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> > > -			for (j = 0; j < MAX_ORDER; ++j)
> > > +			for (j = 0; j < TTM_DIM_ORDER; ++j)
> > >   				ttm_pool_type_init(&pool->caching[i].orders[j],
> > >   						   pool, i, j);
> > >   	}
> > > @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
> > >   	if (pool->use_dma_alloc) {
> > >   		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> > > -			for (j = 0; j < MAX_ORDER; ++j)
> > > +			for (j = 0; j < TTM_DIM_ORDER; ++j)
> > >   				ttm_pool_type_fini(&pool->caching[i].orders[j]);
> > >   	}
> > > @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
> > >   	unsigned int i;
> > >   	seq_puts(m, "\t ");
> > > -	for (i = 0; i < MAX_ORDER; ++i)
> > > +	for (i = 0; i < TTM_DIM_ORDER; ++i)
> > >   		seq_printf(m, " ---%2u---", i);
> > >   	seq_puts(m, "\n");
> > >   }
> > > @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
> > >   {
> > >   	unsigned int i;
> > > -	for (i = 0; i < MAX_ORDER; ++i)
> > > +	for (i = 0; i < TTM_DIM_ORDER; ++i)
> > >   		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
> > >   	seq_puts(m, "\n");
> > >   }
> > > @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
> > >   {
> > >   	unsigned int i;
> > > +	BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
> > > +	BUILD_BUG_ON(TTM_DIM_ORDER < 1);
> > > +
> > >   	if (!page_pool_size)
> > >   		page_pool_size = num_pages;
> > >   	spin_lock_init(&shrinker_lock);
> > >   	INIT_LIST_HEAD(&shrinker_list);
> > > -	for (i = 0; i < MAX_ORDER; ++i) {
> > > +	for (i = 0; i < TTM_DIM_ORDER; ++i) {
> > >   		ttm_pool_type_init(&global_write_combined[i], NULL,
> > >   				   ttm_write_combined, i);
> > >   		ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
> > > @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
> > >   {
> > >   	unsigned int i;
> > > -	for (i = 0; i < MAX_ORDER; ++i) {
> > > +	for (i = 0; i < TTM_DIM_ORDER; ++i) {
> > >   		ttm_pool_type_fini(&global_write_combined[i]);
> > >   		ttm_pool_type_fini(&global_uncached[i]);
> > > -- 
> > > 2.39.2
> > > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-11 13:45       ` Daniel Vetter
@ 2023-04-12  9:08         ` Daniel Vetter
  2023-04-12 14:17           ` Christian König
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-12  9:08 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, intel-gfx, intel-xe, dri-devel, Matthew Auld

On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
> > Am 11.04.23 um 11:51 schrieb Daniel Vetter:
> > > On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
> > > > When swapping out, we will split multi-order pages both in order to
> > > > move them to the swap-cache and to be able to return memory to the
> > > > swap cache as soon as possible on a page-by-page basis.
> > > > Reduce the page max order to the system PMD size, as we can then be nicer
> > > > to the system and avoid splitting gigantic pages.
> > > >
> > > > Looking forward to when we might be able to swap out PMD size folios
> > > > without splitting, this will also be a benefit.
> > > >
> > > > v2:
> > > > - Include all orders up to the PMD size (Christian König)
> > > > v3:
> > > > - Avoid compilation errors for architectures with special PFN_SHIFTs
> > > >
> > > > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > Reviewed-by: Christian König <christian.koenig@amd.com>
> > > Apparently this fails on ppc build testing. Please supply build fix asap
> > > (or I guess we need to revert). I'm kinda not clear why this only showed
> > > up when I merged the drm-misc-next pr into drm-next ...
> >
> > I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
> > on this particular platform.
> >
> > But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
> > to be a constant.
> >
> > So how exactly can that here break?
>
> There's some in-flight patches to rework MAX_ORDER and other things in
> linux-next, maybe it's recent? If you check out linux-next then you need
> to reapply the patch (since sfr reverted it).

So I looked and on ppc64 PMD_SHIFT is defined in terms of
PTE_INDEX_SIZE, which is defined (for book3s) in terms of the variable
__pte_index_size. This is in 6.3 already and seems pretty old.

So revert? Or fixup patch to make this work on ppc?


> > > > ---
> > > >   drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
> > > >   1 file changed, 19 insertions(+), 11 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > index dfce896c4bae..18c342a919a2 100644
> > > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > @@ -47,6 +47,11 @@
> > > >   #include "ttm_module.h"
> > > > +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
> > > > +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
> > > > +/* Some architectures have a weird PMD_SHIFT */
> > > > +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
> > > > +
> > > >   /**
> > > >    * struct ttm_pool_dma - Helper object for coherent DMA mappings
> > > >    *
> > > > @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
> > > >   static atomic_long_t allocated_pages;
> > > > -static struct ttm_pool_type global_write_combined[MAX_ORDER];
> > > > -static struct ttm_pool_type global_uncached[MAX_ORDER];
> > > > +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
> > > > +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
> > > > -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
> > > > -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
> > > > +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
> > > > +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
> > > >   static spinlock_t shrinker_lock;
> > > >   static struct list_head shrinker_list;
> > > > @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> > > >           else
> > > >                   gfp_flags |= GFP_HIGHUSER;
> > > > - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
> > > > + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
> > > >                num_pages;
> > > >                order = min_t(unsigned int, order, __fls(num_pages))) {
> > > >                   struct ttm_pool_type *pt;
> > > > @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
> > > >           if (use_dma_alloc) {
> > > >                   for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> > > > -                 for (j = 0; j < MAX_ORDER; ++j)
> > > > +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
> > > >                                   ttm_pool_type_init(&pool->caching[i].orders[j],
> > > >                                                      pool, i, j);
> > > >           }
> > > > @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
> > > >           if (pool->use_dma_alloc) {
> > > >                   for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> > > > -                 for (j = 0; j < MAX_ORDER; ++j)
> > > > +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
> > > >                                   ttm_pool_type_fini(&pool->caching[i].orders[j]);
> > > >           }
> > > > @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
> > > >           unsigned int i;
> > > >           seq_puts(m, "\t ");
> > > > - for (i = 0; i < MAX_ORDER; ++i)
> > > > + for (i = 0; i < TTM_DIM_ORDER; ++i)
> > > >                   seq_printf(m, " ---%2u---", i);
> > > >           seq_puts(m, "\n");
> > > >   }
> > > > @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
> > > >   {
> > > >           unsigned int i;
> > > > - for (i = 0; i < MAX_ORDER; ++i)
> > > > + for (i = 0; i < TTM_DIM_ORDER; ++i)
> > > >                   seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
> > > >           seq_puts(m, "\n");
> > > >   }
> > > > @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
> > > >   {
> > > >           unsigned int i;
> > > > + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
> > > > + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
> > > > +
> > > >           if (!page_pool_size)
> > > >                   page_pool_size = num_pages;
> > > >           spin_lock_init(&shrinker_lock);
> > > >           INIT_LIST_HEAD(&shrinker_list);
> > > > - for (i = 0; i < MAX_ORDER; ++i) {
> > > > + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> > > >                   ttm_pool_type_init(&global_write_combined[i], NULL,
> > > >                                      ttm_write_combined, i);
> > > >                   ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
> > > > @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
> > > >   {
> > > >           unsigned int i;
> > > > - for (i = 0; i < MAX_ORDER; ++i) {
> > > > + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> > > >                   ttm_pool_type_fini(&global_write_combined[i]);
> > > >                   ttm_pool_type_fini(&global_uncached[i]);
> > > > --
> > > > 2.39.2
> > > >
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-12  9:08         ` Daniel Vetter
@ 2023-04-12 14:17           ` Christian König
  2023-04-13  8:48             ` Daniel Vetter
  0 siblings, 1 reply; 20+ messages in thread
From: Christian König @ 2023-04-12 14:17 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thomas Hellström, intel-gfx, intel-xe, dri-devel, Matthew Auld

Am 12.04.23 um 11:08 schrieb Daniel Vetter:
> On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
>>> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
>>>> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
>>>>> When swapping out, we will split multi-order pages both in order to
>>>>> move them to the swap-cache and to be able to return memory to the
>>>>> swap cache as soon as possible on a page-by-page basis.
>>>>> Reduce the page max order to the system PMD size, as we can then be nicer
>>>>> to the system and avoid splitting gigantic pages.
>>>>>
>>>>> Looking forward to when we might be able to swap out PMD size folios
>>>>> without splitting, this will also be a benefit.
>>>>>
>>>>> v2:
>>>>> - Include all orders up to the PMD size (Christian König)
>>>>> v3:
>>>>> - Avoid compilation errors for architectures with special PFN_SHIFTs
>>>>>
>>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>> Apparently this fails on ppc build testing. Please supply build fix asap
>>>> (or I guess we need to revert). I'm kinda not clear why this only showed
>>>> up when I merged the drm-misc-next pr into drm-next ...
>>> I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
>>> on this particular platform.
>>>
>>> But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
>>> to be a constant.
>>>
>>> So how exactly can that here break?
>> There's some in-flight patches to rework MAX_ORDER and other things in
>> linux-next, maybe it's recent? If you check out linux-next then you need
>> to reapply the patch (since sfr reverted it).
> So I looked and on ppc64 PMD_SHIFT is defined in terms of
> PTE_INDEX_SIZE, which is defined (for book3s) in terms of the variable
> __pte_index_size. This is in 6.3 already and seems pretty old.

Ah! I missed that one, thanks.

> So revert? Or fixup patch to make this work on ppc?

I think for now just revert or change it so that we check if PMD_SHIFT 
is a constant.

Thomas do you have any quick solution?

Christian.

>
>
>>>>> ---
>>>>>    drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
>>>>>    1 file changed, 19 insertions(+), 11 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>> index dfce896c4bae..18c342a919a2 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>> @@ -47,6 +47,11 @@
>>>>>    #include "ttm_module.h"
>>>>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
>>>>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
>>>>> +/* Some architectures have a weird PMD_SHIFT */
>>>>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
>>>>> +
>>>>>    /**
>>>>>     * struct ttm_pool_dma - Helper object for coherent DMA mappings
>>>>>     *
>>>>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
>>>>>    static atomic_long_t allocated_pages;
>>>>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
>>>>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
>>>>> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
>>>>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
>>>>> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
>>>>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
>>>>> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
>>>>> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
>>>>>    static spinlock_t shrinker_lock;
>>>>>    static struct list_head shrinker_list;
>>>>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>>>>            else
>>>>>                    gfp_flags |= GFP_HIGHUSER;
>>>>> - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
>>>>> + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
>>>>>                 num_pages;
>>>>>                 order = min_t(unsigned int, order, __fls(num_pages))) {
>>>>>                    struct ttm_pool_type *pt;
>>>>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
>>>>>            if (use_dma_alloc) {
>>>>>                    for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>                                    ttm_pool_type_init(&pool->caching[i].orders[j],
>>>>>                                                       pool, i, j);
>>>>>            }
>>>>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
>>>>>            if (pool->use_dma_alloc) {
>>>>>                    for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>                                    ttm_pool_type_fini(&pool->caching[i].orders[j]);
>>>>>            }
>>>>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
>>>>>            unsigned int i;
>>>>>            seq_puts(m, "\t ");
>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>                    seq_printf(m, " ---%2u---", i);
>>>>>            seq_puts(m, "\n");
>>>>>    }
>>>>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
>>>>>    {
>>>>>            unsigned int i;
>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>                    seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
>>>>>            seq_puts(m, "\n");
>>>>>    }
>>>>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
>>>>>    {
>>>>>            unsigned int i;
>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
>>>>> +
>>>>>            if (!page_pool_size)
>>>>>                    page_pool_size = num_pages;
>>>>>            spin_lock_init(&shrinker_lock);
>>>>>            INIT_LIST_HEAD(&shrinker_list);
>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>                    ttm_pool_type_init(&global_write_combined[i], NULL,
>>>>>                                       ttm_write_combined, i);
>>>>>                    ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
>>>>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
>>>>>    {
>>>>>            unsigned int i;
>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>                    ttm_pool_type_fini(&global_write_combined[i]);
>>>>>                    ttm_pool_type_fini(&global_uncached[i]);
>>>>> --
>>>>> 2.39.2
>>>>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
>
>



* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-12 14:17           ` Christian König
@ 2023-04-13  8:48             ` Daniel Vetter
  2023-04-13  9:45               ` Christian König
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-13  8:48 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, intel-gfx, intel-xe, dri-devel, Matthew Auld

On Wed, 12 Apr 2023 at 16:18, Christian König <christian.koenig@amd.com> wrote:
>
> Am 12.04.23 um 11:08 schrieb Daniel Vetter:
> > On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
> >> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
> >>> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
> >>>> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
> >>>>> When swapping out, we will split multi-order pages both in order to
> >>>>> move them to the swap-cache and to be able to return memory to the
> >>>>> swap cache as soon as possible on a page-by-page basis.
> >>>>> Reduce the page max order to the system PMD size, as we can then be nicer
> >>>>> to the system and avoid splitting gigantic pages.
> >>>>>
> >>>>> Looking forward to when we might be able to swap out PMD size folios
> >>>>> without splitting, this will also be a benefit.
> >>>>>
> >>>>> v2:
> >>>>> - Include all orders up to the PMD size (Christian König)
> >>>>> v3:
> >>>>> - Avoid compilation errors for architectures with special PFN_SHIFTs
> >>>>>
> >>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
> >>>> Apparently this fails on ppc build testing. Please supply build fix asap
> >>>> (or I guess we need to revert). I'm kinda not clear why this only showed
> >>>> up when I merged the drm-misc-next pr into drm-next ...
> >>> I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
> >>> on this particular platform.
> >>>
> >>> But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
> >>> to be a constant.
> >>>
> >>> So how exactly can that here break?
> >> There's some in-flight patches to rework MAX_ORDER and other things in
> >> linux-next, maybe it's recent? If you check out linux-next then you need
> >> to reapply the patch (since sfr reverted it).
> > So I looked and on ppc64 PMD_SHIFT is defined in terms of
> > PTE_INDEX_SIZE, which is defined (for book3s) in terms of the variable
> > __pte_index_size. This is in 6.3 already and seems pretty old.
>
> Ah! I missed that one, thanks.
>
> > So revert? Or fixup patch to make this work on ppc?
>
> I think for now just revert or change it so that we check if PMD_SHIFT
> is a constant.
>
> Thomas do you have any quick solution?

I guess Thomas is on vacations. Can you pls do the revert and push it
to drm-misc-next-fixes so this won't get lost?

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

preemptively for that. Normally I think we could wait a bit more but
it's really close to merge window PR and I don't like handing too many
open things to Dave when he's back :-)
-Daniel

>
> Christian.
>
> >
> >
> >>>>> ---
> >>>>>    drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
> >>>>>    1 file changed, 19 insertions(+), 11 deletions(-)
> >>>>>
> >>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> >>>>> index dfce896c4bae..18c342a919a2 100644
> >>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> >>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> >>>>> @@ -47,6 +47,11 @@
> >>>>>    #include "ttm_module.h"
> >>>>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
> >>>>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
> >>>>> +/* Some architectures have a weird PMD_SHIFT */
> >>>>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
> >>>>> +
> >>>>>    /**
> >>>>>     * struct ttm_pool_dma - Helper object for coherent DMA mappings
> >>>>>     *
> >>>>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
> >>>>>    static atomic_long_t allocated_pages;
> >>>>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
> >>>>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
> >>>>> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
> >>>>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
> >>>>> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
> >>>>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
> >>>>> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
> >>>>> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
> >>>>>    static spinlock_t shrinker_lock;
> >>>>>    static struct list_head shrinker_list;
> >>>>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> >>>>>            else
> >>>>>                    gfp_flags |= GFP_HIGHUSER;
> >>>>> - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
> >>>>> + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
> >>>>>                 num_pages;
> >>>>>                 order = min_t(unsigned int, order, __fls(num_pages))) {
> >>>>>                    struct ttm_pool_type *pt;
> >>>>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
> >>>>>            if (use_dma_alloc) {
> >>>>>                    for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> >>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
> >>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
> >>>>>                                    ttm_pool_type_init(&pool->caching[i].orders[j],
> >>>>>                                                       pool, i, j);
> >>>>>            }
> >>>>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
> >>>>>            if (pool->use_dma_alloc) {
> >>>>>                    for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> >>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
> >>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
> >>>>>                                    ttm_pool_type_fini(&pool->caching[i].orders[j]);
> >>>>>            }
> >>>>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
> >>>>>            unsigned int i;
> >>>>>            seq_puts(m, "\t ");
> >>>>> - for (i = 0; i < MAX_ORDER; ++i)
> >>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
> >>>>>                    seq_printf(m, " ---%2u---", i);
> >>>>>            seq_puts(m, "\n");
> >>>>>    }
> >>>>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
> >>>>>    {
> >>>>>            unsigned int i;
> >>>>> - for (i = 0; i < MAX_ORDER; ++i)
> >>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
> >>>>>                    seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
> >>>>>            seq_puts(m, "\n");
> >>>>>    }
> >>>>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
> >>>>>    {
> >>>>>            unsigned int i;
> >>>>> + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
> >>>>> + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
> >>>>> +
> >>>>>            if (!page_pool_size)
> >>>>>                    page_pool_size = num_pages;
> >>>>>            spin_lock_init(&shrinker_lock);
> >>>>>            INIT_LIST_HEAD(&shrinker_list);
> >>>>> - for (i = 0; i < MAX_ORDER; ++i) {
> >>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> >>>>>                    ttm_pool_type_init(&global_write_combined[i], NULL,
> >>>>>                                       ttm_write_combined, i);
> >>>>>                    ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
> >>>>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
> >>>>>    {
> >>>>>            unsigned int i;
> >>>>> - for (i = 0; i < MAX_ORDER; ++i) {
> >>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> >>>>>                    ttm_pool_type_fini(&global_write_combined[i]);
> >>>>>                    ttm_pool_type_fini(&global_uncached[i]);
> >>>>> --
> >>>>> 2.39.2
> >>>>>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
> >
> >
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-13  8:48             ` Daniel Vetter
@ 2023-04-13  9:45               ` Christian König
  2023-04-13 13:13                 ` Daniel Vetter
  0 siblings, 1 reply; 20+ messages in thread
From: Christian König @ 2023-04-13  9:45 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thomas Hellström, intel-gfx, intel-xe, dri-devel, Matthew Auld

Am 13.04.23 um 10:48 schrieb Daniel Vetter:
> On Wed, 12 Apr 2023 at 16:18, Christian König <christian.koenig@amd.com> wrote:
>> Am 12.04.23 um 11:08 schrieb Daniel Vetter:
>>> On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
>>>> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
>>>>> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
>>>>>> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
>>>>>>> When swapping out, we will split multi-order pages both in order to
>>>>>>> move them to the swap-cache and to be able to return memory to the
>>>>>>> swap cache as soon as possible on a page-by-page basis.
>>>>>>> Reduce the page max order to the system PMD size, as we can then be nicer
>>>>>>> to the system and avoid splitting gigantic pages.
>>>>>>>
>>>>>>> Looking forward to when we might be able to swap out PMD size folios
>>>>>>> without splitting, this will also be a benefit.
>>>>>>>
>>>>>>> v2:
>>>>>>> - Include all orders up to the PMD size (Christian König)
>>>>>>> v3:
>>>>>>> - Avoid compilation errors for architectures with special PFN_SHIFTs
>>>>>>>
>>>>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>>> Apparently this fails on ppc build testing. Please supply build fix asap
>>>>>> (or I guess we need to revert). I'm kinda not clear why this only showed
>>>>>> up when I merged the drm-misc-next pr into drm-next ...
>>>>> I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
>>>>> on this particular platform.
>>>>>
>>>>> But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
>>>>> to be a constant.
>>>>>
>>>>> So how exactly can that here break?
>>>> There's some in-flight patches to rework MAX_ORDER and other things in
>>>> linux-next, maybe it's recent? If you check out linux-next then you need
>>>> to reapply the patch (since sfr reverted it).
>>> So I looked and on ppc64 PMD_SHIFT is defined in terms of
>>> PTE_INDEX_SIZE, which is defined (for book3s) in terms of the variable
>>> __pte_index_size. This is in 6.3 already and seems pretty old.
>> Ah! I missed that one, thanks.
>>
>>> So revert? Or fixup patch to make this work on ppc?
>> I think for now just revert or change it so that we check if PMD_SHIFT
>> is a constant.
>>
>> Thomas do you have any quick solution?
> I guess Thomas is on vacations. Can you pls do the revert and push it
> to drm-misc-next-fixes so this won't get lost?

The offending patch hasn't shown up in drm-misc-next-fixes or 
drm-misc-fixes yet. Looks like the branches are lagging behind.

I can revert it on drm-misc-next, but I'm not 100% sure that will then 
get picked up in time.

Christian.

>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> preemptively for that. Normally I think we could wait a bit more but
> it's really close to merge window PR and I don't like handing too many
> open things to Dave when he's back :-)
> -Daniel
>
>> Christian.
>>
>>>
>>>>>>> ---
>>>>>>>     drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
>>>>>>>     1 file changed, 19 insertions(+), 11 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>> index dfce896c4bae..18c342a919a2 100644
>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>> @@ -47,6 +47,11 @@
>>>>>>>     #include "ttm_module.h"
>>>>>>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
>>>>>>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
>>>>>>> +/* Some architectures have a weird PMD_SHIFT */
>>>>>>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
>>>>>>> +
>>>>>>>     /**
>>>>>>>      * struct ttm_pool_dma - Helper object for coherent DMA mappings
>>>>>>>      *
>>>>>>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
>>>>>>>     static atomic_long_t allocated_pages;
>>>>>>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
>>>>>>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
>>>>>>> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
>>>>>>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
>>>>>>> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
>>>>>>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
>>>>>>> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
>>>>>>> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
>>>>>>>     static spinlock_t shrinker_lock;
>>>>>>>     static struct list_head shrinker_list;
>>>>>>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>>>>>>             else
>>>>>>>                     gfp_flags |= GFP_HIGHUSER;
>>>>>>> - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
>>>>>>> + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
>>>>>>>                  num_pages;
>>>>>>>                  order = min_t(unsigned int, order, __fls(num_pages))) {
>>>>>>>                     struct ttm_pool_type *pt;
>>>>>>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
>>>>>>>             if (use_dma_alloc) {
>>>>>>>                     for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>>>                                     ttm_pool_type_init(&pool->caching[i].orders[j],
>>>>>>>                                                        pool, i, j);
>>>>>>>             }
>>>>>>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
>>>>>>>             if (pool->use_dma_alloc) {
>>>>>>>                     for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>>>                                     ttm_pool_type_fini(&pool->caching[i].orders[j]);
>>>>>>>             }
>>>>>>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
>>>>>>>             unsigned int i;
>>>>>>>             seq_puts(m, "\t ");
>>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>>>                     seq_printf(m, " ---%2u---", i);
>>>>>>>             seq_puts(m, "\n");
>>>>>>>     }
>>>>>>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
>>>>>>>     {
>>>>>>>             unsigned int i;
>>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>>>                     seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
>>>>>>>             seq_puts(m, "\n");
>>>>>>>     }
>>>>>>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
>>>>>>>     {
>>>>>>>             unsigned int i;
>>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
>>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
>>>>>>> +
>>>>>>>             if (!page_pool_size)
>>>>>>>                     page_pool_size = num_pages;
>>>>>>>             spin_lock_init(&shrinker_lock);
>>>>>>>             INIT_LIST_HEAD(&shrinker_list);
>>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>>>                     ttm_pool_type_init(&global_write_combined[i], NULL,
>>>>>>>                                        ttm_write_combined, i);
>>>>>>>                     ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
>>>>>>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
>>>>>>>     {
>>>>>>>             unsigned int i;
>>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>>>                     ttm_pool_type_fini(&global_write_combined[i]);
>>>>>>>                     ttm_pool_type_fini(&global_uncached[i]);
>>>>>>> --
>>>>>>> 2.39.2
>>>>>>>
>>>> --
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>>>
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-13  9:45               ` Christian König
@ 2023-04-13 13:13                 ` Daniel Vetter
  2023-04-14 10:11                   ` Christian König
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-13 13:13 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, intel-gfx, intel-xe, dri-devel, Matthew Auld

On Thu, 13 Apr 2023 at 11:46, Christian König <christian.koenig@amd.com> wrote:
>
> Am 13.04.23 um 10:48 schrieb Daniel Vetter:
> > On Wed, 12 Apr 2023 at 16:18, Christian König <christian.koenig@amd.com> wrote:
> >> Am 12.04.23 um 11:08 schrieb Daniel Vetter:
> >>> On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
> >>>> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
> >>>>> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
> >>>>>> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
> >>>>>>> When swapping out, we will split multi-order pages both in order to
> >>>>>>> move them to the swap-cache and to be able to return memory to the
> >>>>>>> swap cache as soon as possible on a page-by-page basis.
> >>>>>>> Reduce the page max order to the system PMD size, as we can then be nicer
> >>>>>>> to the system and avoid splitting gigantic pages.
> >>>>>>>
> >>>>>>> Looking forward to when we might be able to swap out PMD size folios
> >>>>>>> without splitting, this will also be a benefit.
> >>>>>>>
> >>>>>>> v2:
> >>>>>>> - Include all orders up to the PMD size (Christian König)
> >>>>>>> v3:
> >>>>>>> - Avoid compilation errors for architectures with special PFN_SHIFTs
> >>>>>>>
> >>>>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >>>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
> >>>>>> Apparently this fails on ppc build testing. Please supply build fix asap
> >>>>>> (or I guess we need to revert). I'm kinda not clear why this only showed
> >>>>>> up when I merged the drm-misc-next pr into drm-next ...
> >>>>> I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
> >>>>> on this particular platform.
> >>>>>
> >>>>> But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
> >>>>> to be a constant.
> >>>>>
> >>>>> So how exactly can that here break?
> >>>> There's some in-flight patches to rework MAX_ORDER and other things in
> >>>> linux-next, maybe it's recent? If you check out linux-next then you need
> >>>> to reapply the patch (since sfr reverted it).
> >>> So I looked and on ppc64 PMD_SHIFT is defined in terms of
> >>> PTE_INDEX_SIZE, which is defined (for book3s) in terms of the variable
> >>> __pte_index_size. This is in 6.3 already and seems pretty old.
> >> Ah! I missed that one, thanks.
> >>
> >>> So revert? Or fixup patch to make this work on ppc?
> >> I think for now just revert or change it so that we check if PMD_SHIFT
> >> is a constant.
> >>
> >> Thomas do you have any quick solution?
> > I guess Thomas is on vacations. Can you pls do the revert and push it
> > to drm-misc-next-fixes so this won't get lost?
>
> The offending patch hasn't shown up in drm-misc-next-fixes or
> drm-misc-fixes yet. Looks like the branches are lagging behind.
>
> I can revert it on drm-misc-next, but I'm not 100% sure that will then
> get picked up in time.

It's there now, Maarten forwarded drm-misc-next-fixes this morning.
That's why I pinged here again, trees are ready to land the revert :-)
-Daniel

>
> Christian.
>
> >
> > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >
> > preemptively for that. Normally I think we could wait a bit more but
> > it's really close to merge window PR and I don't like handing too many
> > open things to Dave when he's back :-)
> > -Daniel
> >
> >> Christian.
> >>
> >>>
> >>>>>>> ---
> >>>>>>>     drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
> >>>>>>>     1 file changed, 19 insertions(+), 11 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> >>>>>>> index dfce896c4bae..18c342a919a2 100644
> >>>>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> >>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> >>>>>>> @@ -47,6 +47,11 @@
> >>>>>>>     #include "ttm_module.h"
> >>>>>>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
> >>>>>>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
> >>>>>>> +/* Some architectures have a weird PMD_SHIFT */
> >>>>>>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
> >>>>>>> +
> >>>>>>>     /**
> >>>>>>>      * struct ttm_pool_dma - Helper object for coherent DMA mappings
> >>>>>>>      *
> >>>>>>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
> >>>>>>>     static atomic_long_t allocated_pages;
> >>>>>>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
> >>>>>>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
> >>>>>>> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
> >>>>>>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
> >>>>>>> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
> >>>>>>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
> >>>>>>> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
> >>>>>>> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
> >>>>>>>     static spinlock_t shrinker_lock;
> >>>>>>>     static struct list_head shrinker_list;
> >>>>>>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> >>>>>>>             else
> >>>>>>>                     gfp_flags |= GFP_HIGHUSER;
> >>>>>>> - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
> >>>>>>> + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
> >>>>>>>                  num_pages;
> >>>>>>>                  order = min_t(unsigned int, order, __fls(num_pages))) {
> >>>>>>>                     struct ttm_pool_type *pt;
> >>>>>>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
> >>>>>>>             if (use_dma_alloc) {
> >>>>>>>                     for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> >>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
> >>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
> >>>>>>>                                     ttm_pool_type_init(&pool->caching[i].orders[j],
> >>>>>>>                                                        pool, i, j);
> >>>>>>>             }
> >>>>>>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
> >>>>>>>             if (pool->use_dma_alloc) {
> >>>>>>>                     for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
> >>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
> >>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
> >>>>>>>                                     ttm_pool_type_fini(&pool->caching[i].orders[j]);
> >>>>>>>             }
> >>>>>>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
> >>>>>>>             unsigned int i;
> >>>>>>>             seq_puts(m, "\t ");
> >>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
> >>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
> >>>>>>>                     seq_printf(m, " ---%2u---", i);
> >>>>>>>             seq_puts(m, "\n");
> >>>>>>>     }
> >>>>>>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
> >>>>>>>     {
> >>>>>>>             unsigned int i;
> >>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
> >>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
> >>>>>>>                     seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
> >>>>>>>             seq_puts(m, "\n");
> >>>>>>>     }
> >>>>>>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
> >>>>>>>     {
> >>>>>>>             unsigned int i;
> >>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
> >>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
> >>>>>>> +
> >>>>>>>             if (!page_pool_size)
> >>>>>>>                     page_pool_size = num_pages;
> >>>>>>>             spin_lock_init(&shrinker_lock);
> >>>>>>>             INIT_LIST_HEAD(&shrinker_list);
> >>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
> >>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> >>>>>>>                     ttm_pool_type_init(&global_write_combined[i], NULL,
> >>>>>>>                                        ttm_write_combined, i);
> >>>>>>>                     ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
> >>>>>>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
> >>>>>>>     {
> >>>>>>>             unsigned int i;
> >>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
> >>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> >>>>>>>                     ttm_pool_type_fini(&global_write_combined[i]);
> >>>>>>>                     ttm_pool_type_fini(&global_uncached[i]);
> >>>>>>> --
> >>>>>>> 2.39.2
> >>>>>>>
> >>>> --
> >>>> Daniel Vetter
> >>>> Software Engineer, Intel Corporation
> >>>> http://blog.ffwll.ch
> >>>
> >
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-13 13:13                 ` Daniel Vetter
@ 2023-04-14 10:11                   ` Christian König
  2023-04-17  8:02                     ` Thomas Hellström
  0 siblings, 1 reply; 20+ messages in thread
From: Christian König @ 2023-04-14 10:11 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thomas Hellström, intel-gfx, intel-xe, dri-devel, Matthew Auld

Am 13.04.23 um 15:13 schrieb Daniel Vetter:
> On Thu, 13 Apr 2023 at 11:46, Christian König <christian.koenig@amd.com> wrote:
>> Am 13.04.23 um 10:48 schrieb Daniel Vetter:
>>> On Wed, 12 Apr 2023 at 16:18, Christian König <christian.koenig@amd.com> wrote:
>>>> Am 12.04.23 um 11:08 schrieb Daniel Vetter:
>>>>> On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
>>>>>> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
>>>>>>> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
>>>>>>>> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
>>>>>>>>> When swapping out, we will split multi-order pages both in order to
>>>>>>>>> move them to the swap-cache and to be able to return memory to the
>>>>>>>>> swap cache as soon as possible on a page-by-page basis.
>>>>>>>>> Reduce the page max order to the system PMD size, as we can then be nicer
>>>>>>>>> to the system and avoid splitting gigantic pages.
>>>>>>>>>
>>>>>>>>> Looking forward to when we might be able to swap out PMD size folios
>>>>>>>>> without splitting, this will also be a benefit.
>>>>>>>>>
>>>>>>>>> v2:
>>>>>>>>> - Include all orders up to the PMD size (Christian König)
>>>>>>>>> v3:
>>>>>>>>> - Avoid compilation errors for architectures with special PFN_SHIFTs
>>>>>>>>>
>>>>>>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>>>>> Apparently this fails on ppc build testing. Please supply build fix asap
>>>>>>>> (or I guess we need to revert). I'm kinda not clear why this only showed
>>>>>>>> up when I merged the drm-misc-next pr into drm-next ...
>>>>>>> I'm really wondering this as well. It looks like PMD_SHIFT isn't a constant
>>>>>>> on this particular platform.
>>>>>>>
>>>>>>> But from what I can find in the upstream 6.2 kernel PMD_SHIFT always seems
>>>>>>> to be a constant.
>>>>>>>
>>>>>>> So how exactly can that here break?
>>>>>> There's some in-flight patches to rework MAX_ORDER and other things in
>>>>>> linux-next, maybe it's recent? If you check out linux-next then you need
>>>>>> to reapply the patch (since sfr reverted it).
>>>>> So I looked and on ppc64 PMD_SHIFT is defined in terms of
>>>>> PTE_INDEX_SIZE, which is defined (for book3s) in terms of the variable
>>>>> __pte_index_size. This is in 6.3 already and seems pretty old.
>>>> Ah! I missed that one, thanks.
>>>>
>>>>> So revert? Or fixup patch to make this work on ppc?
>>>> I think for now just revert or change it so that we check if PMD_SHIFT
>>>> is a constant.
>>>>
>>>> Thomas do you have any quick solution?
>>> I guess Thomas is on vacations. Can you pls do the revert and push it
>>> to drm-misc-next-fixes so this won't get lost?
>> The offending patch hasn't shown up in drm-misc-next-fixes or
>> drm-misc-fixes yet. Looks like the branches are lagging behind.
>>
>> I can revert it on drm-misc-next, but I'm not 100% sure that will then
>> get picked up in time.
> It's there now, Maarten forwarded drm-misc-next-fixes this morning.
> That's why I pinged here again, trees are ready to land the revert :-)

Just pushed it.

Christian.

> -Daniel
>
>> Christian.
>>
>>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>
>>> preemptively for that. Normally I think we could wait a bit more but
>>> it's really close to merge window PR and I don't like handing too many
>>> open things to Dave when he's back :-)
>>> -Daniel
>>>
>>>> Christian.
>>>>
>>>>>>>>> ---
>>>>>>>>>      drivers/gpu/drm/ttm/ttm_pool.c | 30 +++++++++++++++++++-----------
>>>>>>>>>      1 file changed, 19 insertions(+), 11 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>>>> index dfce896c4bae..18c342a919a2 100644
>>>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>>>> @@ -47,6 +47,11 @@
>>>>>>>>>      #include "ttm_module.h"
>>>>>>>>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
>>>>>>>>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
>>>>>>>>> +/* Some architectures have a weird PMD_SHIFT */
>>>>>>>>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? __TTM_DIM_ORDER : MAX_ORDER)
>>>>>>>>> +
>>>>>>>>>      /**
>>>>>>>>>       * struct ttm_pool_dma - Helper object for coherent DMA mappings
>>>>>>>>>       *
>>>>>>>>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
>>>>>>>>>      static atomic_long_t allocated_pages;
>>>>>>>>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
>>>>>>>>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
>>>>>>>>> +static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
>>>>>>>>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
>>>>>>>>> -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
>>>>>>>>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
>>>>>>>>> +static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
>>>>>>>>> +static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
>>>>>>>>>      static spinlock_t shrinker_lock;
>>>>>>>>>      static struct list_head shrinker_list;
>>>>>>>>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>>>>>>>>              else
>>>>>>>>>                      gfp_flags |= GFP_HIGHUSER;
>>>>>>>>> - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
>>>>>>>>> + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
>>>>>>>>>                   num_pages;
>>>>>>>>>                   order = min_t(unsigned int, order, __fls(num_pages))) {
>>>>>>>>>                      struct ttm_pool_type *pt;
>>>>>>>>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
>>>>>>>>>              if (use_dma_alloc) {
>>>>>>>>>                      for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>>>>>                                      ttm_pool_type_init(&pool->caching[i].orders[j],
>>>>>>>>>                                                         pool, i, j);
>>>>>>>>>              }
>>>>>>>>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
>>>>>>>>>              if (pool->use_dma_alloc) {
>>>>>>>>>                      for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>>>>>                                      ttm_pool_type_fini(&pool->caching[i].orders[j]);
>>>>>>>>>              }
>>>>>>>>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
>>>>>>>>>              unsigned int i;
>>>>>>>>>              seq_puts(m, "\t ");
>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>>>>>                      seq_printf(m, " ---%2u---", i);
>>>>>>>>>              seq_puts(m, "\n");
>>>>>>>>>      }
>>>>>>>>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
>>>>>>>>>      {
>>>>>>>>>              unsigned int i;
>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>>>>>                      seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
>>>>>>>>>              seq_puts(m, "\n");
>>>>>>>>>      }
>>>>>>>>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
>>>>>>>>>      {
>>>>>>>>>              unsigned int i;
>>>>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
>>>>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
>>>>>>>>> +
>>>>>>>>>              if (!page_pool_size)
>>>>>>>>>                      page_pool_size = num_pages;
>>>>>>>>>              spin_lock_init(&shrinker_lock);
>>>>>>>>>              INIT_LIST_HEAD(&shrinker_list);
>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>>>>>                      ttm_pool_type_init(&global_write_combined[i], NULL,
>>>>>>>>>                                         ttm_write_combined, i);
>>>>>>>>>                      ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
>>>>>>>>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
>>>>>>>>>      {
>>>>>>>>>              unsigned int i;
>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>>>>>                      ttm_pool_type_fini(&global_write_combined[i]);
>>>>>>>>>                      ttm_pool_type_fini(&global_uncached[i]);
>>>>>>>>> --
>>>>>>>>> 2.39.2
>>>>>>>>>
>>>>>> --
>>>>>> Daniel Vetter
>>>>>> Software Engineer, Intel Corporation
>>>>>> http://blog.ffwll.ch
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages
  2023-04-14 10:11                   ` Christian König
@ 2023-04-17  8:02                     ` Thomas Hellström
  0 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2023-04-17  8:02 UTC (permalink / raw)
  To: Christian König, Daniel Vetter
  Cc: intel-gfx, intel-xe, dri-devel, Matthew Auld

Hi

On 4/14/23 12:11, Christian König wrote:
> Am 13.04.23 um 15:13 schrieb Daniel Vetter:
>> On Thu, 13 Apr 2023 at 11:46, Christian König 
>> <christian.koenig@amd.com> wrote:
>>> Am 13.04.23 um 10:48 schrieb Daniel Vetter:
>>>> On Wed, 12 Apr 2023 at 16:18, Christian König 
>>>> <christian.koenig@amd.com> wrote:
>>>>> Am 12.04.23 um 11:08 schrieb Daniel Vetter:
>>>>>> On Tue, 11 Apr 2023 at 15:45, Daniel Vetter <daniel@ffwll.ch> wrote:
>>>>>>> On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
>>>>>>>> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
>>>>>>>>> On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
>>>>>>>>>> When swapping out, we will split multi-order pages both in 
>>>>>>>>>> order to
>>>>>>>>>> move them to the swap-cache and to be able to return memory 
>>>>>>>>>> to the
>>>>>>>>>> swap cache as soon as possible on a page-by-page basis.
>>>>>>>>>> Reduce the page max order to the system PMD size, as we can 
>>>>>>>>>> then be nicer
>>>>>>>>>> to the system and avoid splitting gigantic pages.
>>>>>>>>>>
>>>>>>>>>> Looking forward to when we might be able to swap out PMD size 
>>>>>>>>>> folios
>>>>>>>>>> without splitting, this will also be a benefit.
>>>>>>>>>>
>>>>>>>>>> v2:
>>>>>>>>>> - Include all orders up to the PMD size (Christian König)
>>>>>>>>>> v3:
>>>>>>>>>> - Avoid compilation errors for architectures with special 
>>>>>>>>>> PFN_SHIFTs
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Thomas Hellström 
>>>>>>>>>> <thomas.hellstrom@linux.intel.com>
>>>>>>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>>>>>> Apparently this fails on ppc build testing. Please supply 
>>>>>>>>> build fix asap
>>>>>>>>> (or I guess we need to revert). I'm kinda not clear why this 
>>>>>>>>> only showed
>>>>>>>>> up when I merged the drm-misc-next pr into drm-next ...
>>>>>>>> I'm really wondering this as well. It looks like PMD_SHIFT 
>>>>>>>> isn't a constant
>>>>>>>> on this particular platform.
>>>>>>>>
>>>>>>>> But from what I can find in the upstream 6.2 kernel PMD_SHIFT 
>>>>>>>> always seems
>>>>>>>> to be a constant.
>>>>>>>>
>>>>>>>> So how exactly can that here break?
>>>>>>> There's some in-flight patches to rework MAX_ORDER and other 
>>>>>>> things in
>>>>>>> linux-next, maybe it's recent? If you check out linux-next then 
>>>>>>> you need
>>>>>>> to reapply the patch (since sfr reverted it).
>>>>>> So I looked and on ppc64 PMD_SHIFT is defined in terms of
>>>>>> PTE_INDEX_SIZE, which is defined (for book3s) in terms of the 
>>>>>> variable
>>>>>> __pte_index_size. This is in 6.3 already and seems pretty old.
>>>>> Ah! I missed that one, thanks.
>>>>>
>>>>>> So revert? Or fixup patch to make this work on ppc?
>>>>> I think for now just revert or change it so that we check if 
>>>>> PMD_SHIFT
>>>>> is a constant.
>>>>>
>>>>> Thomas do you have any quick solution?
>>>> I guess Thomas is on vacations. Can you pls do the revert and push it
>>>> to drm-misc-next-fixes so this won't get lost?
>>> The offending patch hasn't shown up in drm-misc-next-fixes or
>>> drm-misc-fixes yet. Looks like the branches are lagging behind.
>>>
>>> I can revert it on drm-misc-next, but I'm not 100% sure that will then
>>> get picked up in time.
>> It's there now, Maarten forwarded drm-misc-next-fixes this morning.
>> That's why I pinged here again, trees are ready to land the revert :-)
>
> Just pushed it.
>
> Christian.

Thanks for fixing this. (I was on vacation). I got a "BUILD SUCCESS" for 
this series based on drm-misc-next so I didn't think anything weird 
would show up.

Thanks,

Thomas

>
>> -Daniel
>>
>>> Christian.
>>>
>>>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>>
>>>> preemptively for that. Normally I think we could wait a bit more but
>>>> it's really close to merge window PR and I don't like handing too many
>>>> open things to Dave when he's back :-)
>>>> -Daniel
>>>>
>>>>> Christian.
>>>>>
>>>>>>>>>> ---
>>>>>>>>>>      drivers/gpu/drm/ttm/ttm_pool.c | 30 
>>>>>>>>>> +++++++++++++++++++-----------
>>>>>>>>>>      1 file changed, 19 insertions(+), 11 deletions(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c 
>>>>>>>>>> b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>>>>> index dfce896c4bae..18c342a919a2 100644
>>>>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>>>>>>>>> @@ -47,6 +47,11 @@
>>>>>>>>>>      #include "ttm_module.h"
>>>>>>>>>> +#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
>>>>>>>>>> +#define __TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
>>>>>>>>>> +/* Some architectures have a weird PMD_SHIFT */
>>>>>>>>>> +#define TTM_DIM_ORDER (__TTM_DIM_ORDER <= MAX_ORDER ? 
>>>>>>>>>> __TTM_DIM_ORDER : MAX_ORDER)
>>>>>>>>>> +
>>>>>>>>>>      /**
>>>>>>>>>>       * struct ttm_pool_dma - Helper object for coherent DMA 
>>>>>>>>>> mappings
>>>>>>>>>>       *
>>>>>>>>>> @@ -65,11 +70,11 @@ module_param(page_pool_size, ulong, 0644);
>>>>>>>>>>      static atomic_long_t allocated_pages;
>>>>>>>>>> -static struct ttm_pool_type global_write_combined[MAX_ORDER];
>>>>>>>>>> -static struct ttm_pool_type global_uncached[MAX_ORDER];
>>>>>>>>>> +static struct ttm_pool_type 
>>>>>>>>>> global_write_combined[TTM_DIM_ORDER];
>>>>>>>>>> +static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
>>>>>>>>>> -static struct ttm_pool_type 
>>>>>>>>>> global_dma32_write_combined[MAX_ORDER];
>>>>>>>>>> -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
>>>>>>>>>> +static struct ttm_pool_type 
>>>>>>>>>> global_dma32_write_combined[TTM_DIM_ORDER];
>>>>>>>>>> +static struct ttm_pool_type 
>>>>>>>>>> global_dma32_uncached[TTM_DIM_ORDER];
>>>>>>>>>>      static spinlock_t shrinker_lock;
>>>>>>>>>>      static struct list_head shrinker_list;
>>>>>>>>>> @@ -444,7 +449,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>>>>>>>>>              else
>>>>>>>>>>                      gfp_flags |= GFP_HIGHUSER;
>>>>>>>>>> - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
>>>>>>>>>> + for (order = min_t(unsigned int, TTM_MAX_ORDER, __fls(num_pages));
>>>>>>>>>>                   num_pages;
>>>>>>>>>>                   order = min_t(unsigned int, order, __fls(num_pages))) {
>>>>>>>>>>                      struct ttm_pool_type *pt;
>>>>>>>>>> @@ -563,7 +568,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
>>>>>>>>>>              if (use_dma_alloc) {
>>>>>>>>>>                      for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>>>>>> ttm_pool_type_init(&pool->caching[i].orders[j],
>>>>>>>>>>                                                         pool, i, j);
>>>>>>>>>>              }
>>>>>>>>>> @@ -583,7 +588,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
>>>>>>>>>>              if (pool->use_dma_alloc) {
>>>>>>>>>>                      for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
>>>>>>>>>> -                 for (j = 0; j < MAX_ORDER; ++j)
>>>>>>>>>> +                 for (j = 0; j < TTM_DIM_ORDER; ++j)
>>>>>>>>>> ttm_pool_type_fini(&pool->caching[i].orders[j]);
>>>>>>>>>>              }
>>>>>>>>>> @@ -637,7 +642,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
>>>>>>>>>>              unsigned int i;
>>>>>>>>>>              seq_puts(m, "\t ");
>>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>>>>>>                      seq_printf(m, " ---%2u---", i);
>>>>>>>>>>              seq_puts(m, "\n");
>>>>>>>>>>      }
>>>>>>>>>> @@ -648,7 +653,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
>>>>>>>>>>      {
>>>>>>>>>>              unsigned int i;
>>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i)
>>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i)
>>>>>>>>>>                      seq_printf(m, " %8u", 
>>>>>>>>>> ttm_pool_type_count(&pt[i]));
>>>>>>>>>>              seq_puts(m, "\n");
>>>>>>>>>>      }
>>>>>>>>>> @@ -751,13 +756,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
>>>>>>>>>>      {
>>>>>>>>>>              unsigned int i;
>>>>>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER > MAX_ORDER);
>>>>>>>>>> + BUILD_BUG_ON(TTM_DIM_ORDER < 1);
>>>>>>>>>> +
>>>>>>>>>>              if (!page_pool_size)
>>>>>>>>>>                      page_pool_size = num_pages;
>>>>>>>>>>              spin_lock_init(&shrinker_lock);
>>>>>>>>>>              INIT_LIST_HEAD(&shrinker_list);
>>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>>>>>> ttm_pool_type_init(&global_write_combined[i], NULL,
>>>>>>>>>> ttm_write_combined, i);
>>>>>>>>>> ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
>>>>>>>>>> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
>>>>>>>>>>      {
>>>>>>>>>>              unsigned int i;
>>>>>>>>>> - for (i = 0; i < MAX_ORDER; ++i) {
>>>>>>>>>> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
>>>>>>>>>> ttm_pool_type_fini(&global_write_combined[i]);
>>>>>>>>>> ttm_pool_type_fini(&global_uncached[i]);
>>>>>>>>>> -- 
>>>>>>>>>> 2.39.2
>>>>>>>>>>
>>>>>>> -- 
>>>>>>> Daniel Vetter
>>>>>>> Software Engineer, Intel Corporation
>>>>>>> http://blog.ffwll.ch
>>
>


end of thread, other threads:[~2023-04-17  8:02 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-04 20:06 [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Thomas Hellström
2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 1/3] drm/ttm/pool: Fix ttm_pool_alloc error path Thomas Hellström
2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 2/3] drm/ttm: Reduce the number of used allocation orders for TTM pages Thomas Hellström
2023-04-11  9:51   ` Daniel Vetter
2023-04-11 12:11     ` Christian König
2023-04-11 13:45       ` Daniel Vetter
2023-04-12  9:08         ` Daniel Vetter
2023-04-12 14:17           ` Christian König
2023-04-13  8:48             ` Daniel Vetter
2023-04-13  9:45               ` Christian König
2023-04-13 13:13                 ` Daniel Vetter
2023-04-14 10:11                   ` Christian König
2023-04-17  8:02                     ` Thomas Hellström
2023-04-04 20:06 ` [Intel-gfx] [PATCH RESEND v3 3/3] drm/ttm: Make the call to ttm_tt_populate() interruptible when faulting Thomas Hellström
2023-04-04 23:13 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/ttm: Small fixes / cleanups in prep for shrinking (rev3) Patchwork
2023-04-04 23:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2023-04-04 23:23 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2023-04-05  9:24 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2023-04-05 12:32 ` [Intel-gfx] [PATCH RESEND v3 0/3] drm/ttm: Small fixes / cleanups in prep for shrinking Christian König
2023-04-05 12:36   ` Thomas Hellström
