* [PATCH 1/2] drm/i915: Disable gpu relocations
From: Daniel Vetter @ 2021-08-03 12:48 UTC
  To: Intel Graphics Development
  Cc: DRI Development, Daniel Vetter, Dave Airlie, Maarten Lankhorst,
	Daniel Vetter, Jon Bloomfield, Chris Wilson, Joonas Lahtinen,
	Thomas Hellström, Matthew Auld, Lionel Landwerlin,
	Jason Ekstrand

Media userspace was the last userspace still using GPU relocations, and
it has now been converted too:

https://github.com/intel/media-driver/commit/144020c37770083974bedf59902b70b8f444c799

This means there is no longer any reason to make relocations faster than
they were for the first 9 years of gem. The faster path was added in

commit 7dd4f6729f9243bd7046c6f04c107a456bda38eb
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Jun 16 15:05:24 2017 +0100

    drm/i915: Async GPU relocation processing

Furthermore there are fairly strong indications that it is buggy, since
the change to use it by default as the only option had to be reverted:

commit ad5d95e4d538737ed3fa25493777decf264a3011
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Sep 8 15:41:17 2020 +1000

    Revert "drm/i915/gem: Async GPU relocations only"

This patch just disables gpu relocations, leaving the garbage collection
to later patches and, more importantly, a much less confusing diff. Also,
given how many headaches this code has caused in the past, letting it
soak for a bit seems justified.

Acked-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 43 ++++++++-----------
 1 file changed, 18 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 25ba2765d27d..e4dc4c3b4df3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1588,7 +1588,7 @@ static int __reloc_entry_gpu(struct i915_execbuffer *eb,
 	return true;
 }
 
-static int reloc_entry_gpu(struct i915_execbuffer *eb,
+static int __maybe_unused reloc_entry_gpu(struct i915_execbuffer *eb,
 			    struct i915_vma *vma,
 			    u64 offset,
 			    u64 target_addr)
@@ -1610,32 +1610,25 @@ relocate_entry(struct i915_vma *vma,
 {
 	u64 target_addr = relocation_target(reloc, target);
 	u64 offset = reloc->offset;
-	int reloc_gpu = reloc_entry_gpu(eb, vma, offset, target_addr);
-
-	if (reloc_gpu < 0)
-		return reloc_gpu;
-
-	if (!reloc_gpu) {
-		bool wide = eb->reloc_cache.use_64bit_reloc;
-		void *vaddr;
+	bool wide = eb->reloc_cache.use_64bit_reloc;
+	void *vaddr;
 
 repeat:
-		vaddr = reloc_vaddr(vma->obj, eb,
-				    offset >> PAGE_SHIFT);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
-
-		GEM_BUG_ON(!IS_ALIGNED(offset, sizeof(u32)));
-		clflush_write32(vaddr + offset_in_page(offset),
-				lower_32_bits(target_addr),
-				eb->reloc_cache.vaddr);
-
-		if (wide) {
-			offset += sizeof(u32);
-			target_addr >>= 32;
-			wide = false;
-			goto repeat;
-		}
+	vaddr = reloc_vaddr(vma->obj, eb,
+			    offset >> PAGE_SHIFT);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	GEM_BUG_ON(!IS_ALIGNED(offset, sizeof(u32)));
+	clflush_write32(vaddr + offset_in_page(offset),
+			lower_32_bits(target_addr),
+			eb->reloc_cache.vaddr);
+
+	if (wide) {
+		offset += sizeof(u32);
+		target_addr >>= 32;
+		wide = false;
+		goto repeat;
 	}
 
 	return target->node.start | UPDATE;
-- 
2.32.0
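
A note on the CPU path retained above: it maps the relevant batch page
and writes the relocated address directly, split into two 32-bit stores
when use_64bit_reloc is set. A standalone model of that write, as an
illustrative sketch only (the kernel code additionally handles clflush
bookkeeping and page-boundary crossings via reloc_vaddr()):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    static void apply_reloc_cpu(void *batch_map, uint64_t offset,
                                uint64_t target_addr, bool use_64bit)
    {
        /* the low 32 bits always land at the relocation offset */
        uint32_t lo = (uint32_t)target_addr;

        memcpy((char *)batch_map + offset, &lo, sizeof(lo));

        if (use_64bit) {
            /* gen8+ addresses are 64 bit: a second store for the high
             * half, mirroring the "wide"/repeat loop above */
            uint32_t hi = (uint32_t)(target_addr >> 32);

            memcpy((char *)batch_map + offset + 4, &hi, sizeof(hi));
        }
    }

The goto-based loop in relocate_entry() reuses the same mapping and flush
logic for both halves, which also copes with the second store landing on
a different page.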


* [PATCH 2/2] drm/i915: delete gpu reloc code
From: Daniel Vetter @ 2021-08-03 12:48 UTC
  To: Intel Graphics Development
  Cc: DRI Development, Daniel Vetter, Daniel Vetter, Jon Bloomfield,
	Chris Wilson, Maarten Lankhorst, Joonas Lahtinen,
	Thomas Hellström, Matthew Auld, Lionel Landwerlin,
	Dave Airlie, Jason Ekstrand

The gpu relocation code is already disabled; this just garbage collects
it all.

v2: Rebase over s/GEN/GRAPHICS_VER/

v3: Also ditch eb.reloc_pool and eb.reloc_context (Maarten)

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 360 +-----------------
 .../drm/i915/selftests/i915_live_selftests.h  |   1 -
 2 files changed, 1 insertion(+), 360 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index e4dc4c3b4df3..98e25efffb59 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -277,16 +277,8 @@ struct i915_execbuffer {
 		bool has_llc : 1;
 		bool has_fence : 1;
 		bool needs_unfenced : 1;
-
-		struct i915_request *rq;
-		u32 *rq_cmd;
-		unsigned int rq_size;
-		struct intel_gt_buffer_pool_node *pool;
 	} reloc_cache;
 
-	struct intel_gt_buffer_pool_node *reloc_pool; /** relocation pool for -EDEADLK handling */
-	struct intel_context *reloc_context;
-
 	u64 invalid_flags; /** Set of execobj.flags that are invalid */
 
 	u64 batch_len; /** Length of batch within object */
@@ -1035,8 +1027,6 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
 
 static void eb_destroy(const struct i915_execbuffer *eb)
 {
-	GEM_BUG_ON(eb->reloc_cache.rq);
-
 	if (eb->lut_size > 0)
 		kfree(eb->buckets);
 }
@@ -1048,14 +1038,6 @@ relocation_target(const struct drm_i915_gem_relocation_entry *reloc,
 	return gen8_canonical_addr((int)reloc->delta + target->node.start);
 }
 
-static void reloc_cache_clear(struct reloc_cache *cache)
-{
-	cache->rq = NULL;
-	cache->rq_cmd = NULL;
-	cache->pool = NULL;
-	cache->rq_size = 0;
-}
-
 static void reloc_cache_init(struct reloc_cache *cache,
 			     struct drm_i915_private *i915)
 {
@@ -1068,7 +1050,6 @@ static void reloc_cache_init(struct reloc_cache *cache,
 	cache->has_fence = cache->graphics_ver < 4;
 	cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment;
 	cache->node.flags = 0;
-	reloc_cache_clear(cache);
 }
 
 static inline void *unmask_page(unsigned long p)
@@ -1090,48 +1071,10 @@ static inline struct i915_ggtt *cache_to_ggtt(struct reloc_cache *cache)
 	return &i915->ggtt;
 }
 
-static void reloc_cache_put_pool(struct i915_execbuffer *eb, struct reloc_cache *cache)
-{
-	if (!cache->pool)
-		return;
-
-	/*
-	 * This is a bit nasty, normally we keep objects locked until the end
-	 * of execbuffer, but we already submit this, and have to unlock before
-	 * dropping the reference. Fortunately we can only hold 1 pool node at
-	 * a time, so this should be harmless.
-	 */
-	i915_gem_ww_unlock_single(cache->pool->obj);
-	intel_gt_buffer_pool_put(cache->pool);
-	cache->pool = NULL;
-}
-
-static void reloc_gpu_flush(struct i915_execbuffer *eb, struct reloc_cache *cache)
-{
-	struct drm_i915_gem_object *obj = cache->rq->batch->obj;
-
-	GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32));
-	cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
-
-	i915_gem_object_flush_map(obj);
-	i915_gem_object_unpin_map(obj);
-
-	intel_gt_chipset_flush(cache->rq->engine->gt);
-
-	i915_request_add(cache->rq);
-	reloc_cache_put_pool(eb, cache);
-	reloc_cache_clear(cache);
-
-	eb->reloc_pool = NULL;
-}
-
 static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer *eb)
 {
 	void *vaddr;
 
-	if (cache->rq)
-		reloc_gpu_flush(eb, cache);
-
 	if (!cache->vaddr)
 		return;
 
@@ -1313,295 +1256,6 @@ static void clflush_write32(u32 *addr, u32 value, unsigned int flushes)
 		*addr = value;
 }
 
-static int reloc_move_to_gpu(struct i915_request *rq, struct i915_vma *vma)
-{
-	struct drm_i915_gem_object *obj = vma->obj;
-	int err;
-
-	assert_vma_held(vma);
-
-	if (obj->cache_dirty & ~obj->cache_coherent)
-		i915_gem_clflush_object(obj, 0);
-	obj->write_domain = 0;
-
-	err = i915_request_await_object(rq, vma->obj, true);
-	if (err == 0)
-		err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
-
-	return err;
-}
-
-static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
-			     struct intel_engine_cs *engine,
-			     struct i915_vma *vma,
-			     unsigned int len)
-{
-	struct reloc_cache *cache = &eb->reloc_cache;
-	struct intel_gt_buffer_pool_node *pool = eb->reloc_pool;
-	struct i915_request *rq;
-	struct i915_vma *batch;
-	u32 *cmd;
-	int err;
-
-	if (!pool) {
-		pool = intel_gt_get_buffer_pool(engine->gt, PAGE_SIZE,
-						cache->has_llc ?
-						I915_MAP_WB :
-						I915_MAP_WC);
-		if (IS_ERR(pool))
-			return PTR_ERR(pool);
-	}
-	eb->reloc_pool = NULL;
-
-	err = i915_gem_object_lock(pool->obj, &eb->ww);
-	if (err)
-		goto err_pool;
-
-	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
-	if (IS_ERR(cmd)) {
-		err = PTR_ERR(cmd);
-		goto err_pool;
-	}
-	intel_gt_buffer_pool_mark_used(pool);
-
-	memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
-
-	batch = i915_vma_instance(pool->obj, vma->vm, NULL);
-	if (IS_ERR(batch)) {
-		err = PTR_ERR(batch);
-		goto err_unmap;
-	}
-
-	err = i915_vma_pin_ww(batch, &eb->ww, 0, 0, PIN_USER | PIN_NONBLOCK);
-	if (err)
-		goto err_unmap;
-
-	if (engine == eb->context->engine) {
-		rq = i915_request_create(eb->context);
-	} else {
-		struct intel_context *ce = eb->reloc_context;
-
-		if (!ce) {
-			ce = intel_context_create(engine);
-			if (IS_ERR(ce)) {
-				err = PTR_ERR(ce);
-				goto err_unpin;
-			}
-
-			i915_vm_put(ce->vm);
-			ce->vm = i915_vm_get(eb->context->vm);
-			eb->reloc_context = ce;
-		}
-
-		err = intel_context_pin_ww(ce, &eb->ww);
-		if (err)
-			goto err_unpin;
-
-		rq = i915_request_create(ce);
-		intel_context_unpin(ce);
-	}
-	if (IS_ERR(rq)) {
-		err = PTR_ERR(rq);
-		goto err_unpin;
-	}
-
-	err = intel_gt_buffer_pool_mark_active(pool, rq);
-	if (err)
-		goto err_request;
-
-	err = reloc_move_to_gpu(rq, vma);
-	if (err)
-		goto err_request;
-
-	err = eb->engine->emit_bb_start(rq,
-					batch->node.start, PAGE_SIZE,
-					cache->graphics_ver > 5 ? 0 : I915_DISPATCH_SECURE);
-	if (err)
-		goto skip_request;
-
-	assert_vma_held(batch);
-	err = i915_request_await_object(rq, batch->obj, false);
-	if (err == 0)
-		err = i915_vma_move_to_active(batch, rq, 0);
-	if (err)
-		goto skip_request;
-
-	rq->batch = batch;
-	i915_vma_unpin(batch);
-
-	cache->rq = rq;
-	cache->rq_cmd = cmd;
-	cache->rq_size = 0;
-	cache->pool = pool;
-
-	/* Return with batch mapping (cmd) still pinned */
-	return 0;
-
-skip_request:
-	i915_request_set_error_once(rq, err);
-err_request:
-	i915_request_add(rq);
-err_unpin:
-	i915_vma_unpin(batch);
-err_unmap:
-	i915_gem_object_unpin_map(pool->obj);
-err_pool:
-	eb->reloc_pool = pool;
-	return err;
-}
-
-static bool reloc_can_use_engine(const struct intel_engine_cs *engine)
-{
-	return engine->class != VIDEO_DECODE_CLASS || GRAPHICS_VER(engine->i915) != 6;
-}
-
-static u32 *reloc_gpu(struct i915_execbuffer *eb,
-		      struct i915_vma *vma,
-		      unsigned int len)
-{
-	struct reloc_cache *cache = &eb->reloc_cache;
-	u32 *cmd;
-
-	if (cache->rq_size > PAGE_SIZE/sizeof(u32) - (len + 1))
-		reloc_gpu_flush(eb, cache);
-
-	if (unlikely(!cache->rq)) {
-		int err;
-		struct intel_engine_cs *engine = eb->engine;
-
-		/* If we need to copy for the cmdparser, we will stall anyway */
-		if (eb_use_cmdparser(eb))
-			return ERR_PTR(-EWOULDBLOCK);
-
-		if (!reloc_can_use_engine(engine)) {
-			engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
-			if (!engine)
-				return ERR_PTR(-ENODEV);
-		}
-
-		err = __reloc_gpu_alloc(eb, engine, vma, len);
-		if (unlikely(err))
-			return ERR_PTR(err);
-	}
-
-	cmd = cache->rq_cmd + cache->rq_size;
-	cache->rq_size += len;
-
-	return cmd;
-}
-
-static inline bool use_reloc_gpu(struct i915_vma *vma)
-{
-	if (DBG_FORCE_RELOC == FORCE_GPU_RELOC)
-		return true;
-
-	if (DBG_FORCE_RELOC)
-		return false;
-
-	return !dma_resv_test_signaled(vma->resv, true);
-}
-
-static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
-{
-	struct page *page;
-	unsigned long addr;
-
-	GEM_BUG_ON(vma->pages != vma->obj->mm.pages);
-
-	page = i915_gem_object_get_page(vma->obj, offset >> PAGE_SHIFT);
-	addr = PFN_PHYS(page_to_pfn(page));
-	GEM_BUG_ON(overflows_type(addr, u32)); /* expected dma32 */
-
-	return addr + offset_in_page(offset);
-}
-
-static int __reloc_entry_gpu(struct i915_execbuffer *eb,
-			      struct i915_vma *vma,
-			      u64 offset,
-			      u64 target_addr)
-{
-	const unsigned int ver = eb->reloc_cache.graphics_ver;
-	unsigned int len;
-	u32 *batch;
-	u64 addr;
-
-	if (ver >= 8)
-		len = offset & 7 ? 8 : 5;
-	else if (ver >= 4)
-		len = 4;
-	else
-		len = 3;
-
-	batch = reloc_gpu(eb, vma, len);
-	if (batch == ERR_PTR(-EDEADLK))
-		return -EDEADLK;
-	else if (IS_ERR(batch))
-		return false;
-
-	addr = gen8_canonical_addr(vma->node.start + offset);
-	if (ver >= 8) {
-		if (offset & 7) {
-			*batch++ = MI_STORE_DWORD_IMM_GEN4;
-			*batch++ = lower_32_bits(addr);
-			*batch++ = upper_32_bits(addr);
-			*batch++ = lower_32_bits(target_addr);
-
-			addr = gen8_canonical_addr(addr + 4);
-
-			*batch++ = MI_STORE_DWORD_IMM_GEN4;
-			*batch++ = lower_32_bits(addr);
-			*batch++ = upper_32_bits(addr);
-			*batch++ = upper_32_bits(target_addr);
-		} else {
-			*batch++ = (MI_STORE_DWORD_IMM_GEN4 | (1 << 21)) + 1;
-			*batch++ = lower_32_bits(addr);
-			*batch++ = upper_32_bits(addr);
-			*batch++ = lower_32_bits(target_addr);
-			*batch++ = upper_32_bits(target_addr);
-		}
-	} else if (ver >= 6) {
-		*batch++ = MI_STORE_DWORD_IMM_GEN4;
-		*batch++ = 0;
-		*batch++ = addr;
-		*batch++ = target_addr;
-	} else if (IS_I965G(eb->i915)) {
-		*batch++ = MI_STORE_DWORD_IMM_GEN4;
-		*batch++ = 0;
-		*batch++ = vma_phys_addr(vma, offset);
-		*batch++ = target_addr;
-	} else if (ver >= 4) {
-		*batch++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
-		*batch++ = 0;
-		*batch++ = addr;
-		*batch++ = target_addr;
-	} else if (ver >= 3 &&
-		   !(IS_I915G(eb->i915) || IS_I915GM(eb->i915))) {
-		*batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
-		*batch++ = addr;
-		*batch++ = target_addr;
-	} else {
-		*batch++ = MI_STORE_DWORD_IMM;
-		*batch++ = vma_phys_addr(vma, offset);
-		*batch++ = target_addr;
-	}
-
-	return true;
-}
-
-static int __maybe_unused reloc_entry_gpu(struct i915_execbuffer *eb,
-			    struct i915_vma *vma,
-			    u64 offset,
-			    u64 target_addr)
-{
-	if (eb->reloc_cache.vaddr)
-		return false;
-
-	if (!use_reloc_gpu(vma))
-		return false;
-
-	return __reloc_entry_gpu(eb, vma, offset, target_addr);
-}
-
 static u64
 relocate_entry(struct i915_vma *vma,
 	       const struct drm_i915_gem_relocation_entry *reloc,
@@ -3166,8 +2820,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	eb.exec = exec;
 	eb.vma = (struct eb_vma *)(exec + args->buffer_count + 1);
 	eb.vma[0].vma = NULL;
-	eb.reloc_pool = eb.batch_pool = NULL;
-	eb.reloc_context = NULL;
+	eb.batch_pool = NULL;
 
 	eb.invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
 	reloc_cache_init(&eb.reloc_cache, eb.i915);
@@ -3265,9 +2918,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 
 	batch = eb.batch->vma;
 
-	/* All GPU relocation batches must be submitted prior to the user rq */
-	GEM_BUG_ON(eb.reloc_cache.rq);
-
 	/* Allocate a request for this batch buffer nice and early. */
 	eb.request = i915_request_create(eb.context);
 	if (IS_ERR(eb.request)) {
@@ -3358,10 +3008,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 
 	if (eb.batch_pool)
 		intel_gt_buffer_pool_put(eb.batch_pool);
-	if (eb.reloc_pool)
-		intel_gt_buffer_pool_put(eb.reloc_pool);
-	if (eb.reloc_context)
-		intel_context_put(eb.reloc_context);
 err_engine:
 	eb_put_engine(&eb);
 err_context:
@@ -3475,7 +3121,3 @@ end:;
 	kvfree(exec2_list);
 	return err;
 }
-
-#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
-#include "selftests/i915_gem_execbuffer.c"
-#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index e2fd1b61af71..c0386fb4e286 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -38,7 +38,6 @@ selftest(gem, i915_gem_live_selftests)
 selftest(evict, i915_gem_evict_live_selftests)
 selftest(hugepages, i915_gem_huge_page_live_selftests)
 selftest(gem_contexts, i915_gem_context_live_selftests)
-selftest(gem_execbuf, i915_gem_execbuffer_live_selftests)
 selftest(client, i915_gem_client_blt_live_selftests)
 selftest(gem_migrate, i915_gem_migrate_live_selftests)
 selftest(reset, intel_reset_live_selftests)
-- 
2.32.0
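
For the record, the mechanism deleted above avoided CPU writes by
building a small batch in a buffer-pool object and letting the GPU store
the addresses itself. Condensed from the removed __reloc_entry_gpu(), the
gen8+ qword-aligned case emitted a single five-dword command (comments
added here; bit 21 selects the store-qword variant and the +1 extends the
command length for the extra data dword):

    *batch++ = (MI_STORE_DWORD_IMM_GEN4 | (1 << 21)) + 1;
    *batch++ = lower_32_bits(addr);        /* destination, low 32 bits  */
    *batch++ = upper_32_bits(addr);        /* destination, high 32 bits */
    *batch++ = lower_32_bits(target_addr); /* relocated value, low      */
    *batch++ = upper_32_bits(target_addr); /* relocated value, high     */

Everything else being shed is the infrastructure around that store:
request creation, buffer-pool and -EDEADLK rollback handling, and the
fallback to a copy engine where the engine could not issue the command
(gen6 video decode; see the removed reloc_can_use_engine()).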


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/2] drm/i915: Disable gpu relocations
From: Patchwork @ 2021-08-03 15:10 UTC
  To: Daniel Vetter; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/2] drm/i915: Disable gpu relocations
URL   : https://patchwork.freedesktop.org/series/93340/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
42973b3c2f93 drm/i915: Disable gpu relocations
-:12: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#12: 
https://github.com/intel/media-driver/commit/144020c37770083974bedf59902b70b8f444c799

-:17: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 7dd4f6729f92 ("drm/i915: Async GPU relocation processing")'
#17: 
commit 7dd4f6729f9243bd7046c6f04c107a456bda38eb

-:26: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit ad5d95e4d538 ("Revert "drm/i915/gem: Async GPU relocations only"")'
#26: 
commit ad5d95e4d538737ed3fa25493777decf264a3011

-:61: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#61: FILE: drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:1592:
+static int __maybe_unused reloc_entry_gpu(struct i915_execbuffer *eb,
 			    struct i915_vma *vma,

-:113: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 2 errors, 2 warnings, 1 checks, 57 lines checked
2f0e03310b89 drm/i915: delete gpu reloc code
-:475: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Daniel Vetter <daniel.vetter@ffwll.ch>' != 'Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>'

total: 0 errors, 1 warnings, 0 checks, 426 lines checked



* [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/2] drm/i915: Disable gpu relocations
From: Patchwork @ 2021-08-03 15:38 UTC
  To: Daniel Vetter; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/2] drm/i915: Disable gpu relocations
URL   : https://patchwork.freedesktop.org/series/93340/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10442 -> Patchwork_20765
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/index.html

Known issues
------------

  Here are the changes found in Patchwork_20765 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_suspend@basic-s3:
    - fi-tgl-1115g4:      [PASS][1] -> [FAIL][2] ([i915#1888])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s3.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s3.html

  
  [i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888


Participating hosts (37 -> 33)
------------------------------

  Missing    (4): fi-bdw-samus fi-bsw-cyan bat-jsl-1 fi-hsw-4200u 


Build changes
-------------

  * Linux: CI_DRM_10442 -> Patchwork_20765

  CI-20190529: 20190529
  CI_DRM_10442: d3816ffe379da79a69188424318fe2b5d458347b @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6159: 6135b9cc319ed965e3aafb5b2ae2abf4762a06b2 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20765: 2f0e03310b89b36519b276927a8a0b9db50f92e9 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

2f0e03310b89 drm/i915: delete gpu reloc code
42973b3c2f93 drm/i915: Disable gpu relocations

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/index.html


* Re: [PATCH 2/2] drm/i915: delete gpu reloc code
From: Jason Ekstrand @ 2021-08-03 15:47 UTC
  To: Daniel Vetter
  Cc: Intel Graphics Development, DRI Development, Daniel Vetter,
	Jon Bloomfield, Chris Wilson, Maarten Lankhorst, Joonas Lahtinen,
	Thomas Hellström, Matthew Auld, Lionel Landwerlin,
	Dave Airlie

Both are

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

On Tue, Aug 3, 2021 at 7:49 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> [snip: full patch quoted in the parent message above]


* Re: [Intel-gfx] [PATCH 2/2] drm/i915: delete gpu reloc code
@ 2021-08-03 15:47     ` Jason Ekstrand
  0 siblings, 0 replies; 11+ messages in thread
From: Jason Ekstrand @ 2021-08-03 15:47 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Intel Graphics Development, DRI Development, Daniel Vetter,
	Jon Bloomfield, Chris Wilson, Maarten Lankhorst, Joonas Lahtinen,
	Thomas Hellström, Matthew Auld, Lionel Landwerlin,
	Dave Airlie

Both are

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

On Tue, Aug 3, 2021 at 7:49 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>
> It's already removed, this just garbage collects it all.
>
> v2: Rebase over s/GEN/GRAPHICS_VER/
>
> v3: Also ditch eb.reloc_pool and eb.reloc_context (Maarten)
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Jason Ekstrand <jason@jlekstrand.net>
> ---
>  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 360 +-----------------
>  .../drm/i915/selftests/i915_live_selftests.h  |   1 -
>  2 files changed, 1 insertion(+), 360 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index e4dc4c3b4df3..98e25efffb59 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -277,16 +277,8 @@ struct i915_execbuffer {
>                 bool has_llc : 1;
>                 bool has_fence : 1;
>                 bool needs_unfenced : 1;
> -
> -               struct i915_request *rq;
> -               u32 *rq_cmd;
> -               unsigned int rq_size;
> -               struct intel_gt_buffer_pool_node *pool;
>         } reloc_cache;
>
> -       struct intel_gt_buffer_pool_node *reloc_pool; /** relocation pool for -EDEADLK handling */
> -       struct intel_context *reloc_context;
> -
>         u64 invalid_flags; /** Set of execobj.flags that are invalid */
>
>         u64 batch_len; /** Length of batch within object */
> @@ -1035,8 +1027,6 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>
>  static void eb_destroy(const struct i915_execbuffer *eb)
>  {
> -       GEM_BUG_ON(eb->reloc_cache.rq);
> -
>         if (eb->lut_size > 0)
>                 kfree(eb->buckets);
>  }
> @@ -1048,14 +1038,6 @@ relocation_target(const struct drm_i915_gem_relocation_entry *reloc,
>         return gen8_canonical_addr((int)reloc->delta + target->node.start);
>  }
>
> -static void reloc_cache_clear(struct reloc_cache *cache)
> -{
> -       cache->rq = NULL;
> -       cache->rq_cmd = NULL;
> -       cache->pool = NULL;
> -       cache->rq_size = 0;
> -}
> -
>  static void reloc_cache_init(struct reloc_cache *cache,
>                              struct drm_i915_private *i915)
>  {
> @@ -1068,7 +1050,6 @@ static void reloc_cache_init(struct reloc_cache *cache,
>         cache->has_fence = cache->graphics_ver < 4;
>         cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment;
>         cache->node.flags = 0;
> -       reloc_cache_clear(cache);
>  }
>
>  static inline void *unmask_page(unsigned long p)
> @@ -1090,48 +1071,10 @@ static inline struct i915_ggtt *cache_to_ggtt(struct reloc_cache *cache)
>         return &i915->ggtt;
>  }
>
> -static void reloc_cache_put_pool(struct i915_execbuffer *eb, struct reloc_cache *cache)
> -{
> -       if (!cache->pool)
> -               return;
> -
> -       /*
> -        * This is a bit nasty, normally we keep objects locked until the end
> -        * of execbuffer, but we already submit this, and have to unlock before
> -        * dropping the reference. Fortunately we can only hold 1 pool node at
> -        * a time, so this should be harmless.
> -        */
> -       i915_gem_ww_unlock_single(cache->pool->obj);
> -       intel_gt_buffer_pool_put(cache->pool);
> -       cache->pool = NULL;
> -}
> -
> -static void reloc_gpu_flush(struct i915_execbuffer *eb, struct reloc_cache *cache)
> -{
> -       struct drm_i915_gem_object *obj = cache->rq->batch->obj;
> -
> -       GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32));
> -       cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
> -
> -       i915_gem_object_flush_map(obj);
> -       i915_gem_object_unpin_map(obj);
> -
> -       intel_gt_chipset_flush(cache->rq->engine->gt);
> -
> -       i915_request_add(cache->rq);
> -       reloc_cache_put_pool(eb, cache);
> -       reloc_cache_clear(cache);
> -
> -       eb->reloc_pool = NULL;
> -}
> -
>  static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer *eb)
>  {
>         void *vaddr;
>
> -       if (cache->rq)
> -               reloc_gpu_flush(eb, cache);
> -
>         if (!cache->vaddr)
>                 return;
>
> @@ -1313,295 +1256,6 @@ static void clflush_write32(u32 *addr, u32 value, unsigned int flushes)
>                 *addr = value;
>  }
>
> -static int reloc_move_to_gpu(struct i915_request *rq, struct i915_vma *vma)
> -{
> -       struct drm_i915_gem_object *obj = vma->obj;
> -       int err;
> -
> -       assert_vma_held(vma);
> -
> -       if (obj->cache_dirty & ~obj->cache_coherent)
> -               i915_gem_clflush_object(obj, 0);
> -       obj->write_domain = 0;
> -
> -       err = i915_request_await_object(rq, vma->obj, true);
> -       if (err == 0)
> -               err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
> -
> -       return err;
> -}
> -
> -static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
> -                            struct intel_engine_cs *engine,
> -                            struct i915_vma *vma,
> -                            unsigned int len)
> -{
> -       struct reloc_cache *cache = &eb->reloc_cache;
> -       struct intel_gt_buffer_pool_node *pool = eb->reloc_pool;
> -       struct i915_request *rq;
> -       struct i915_vma *batch;
> -       u32 *cmd;
> -       int err;
> -
> -       if (!pool) {
> -               pool = intel_gt_get_buffer_pool(engine->gt, PAGE_SIZE,
> -                                               cache->has_llc ?
> -                                               I915_MAP_WB :
> -                                               I915_MAP_WC);
> -               if (IS_ERR(pool))
> -                       return PTR_ERR(pool);
> -       }
> -       eb->reloc_pool = NULL;
> -
> -       err = i915_gem_object_lock(pool->obj, &eb->ww);
> -       if (err)
> -               goto err_pool;
> -
> -       cmd = i915_gem_object_pin_map(pool->obj, pool->type);
> -       if (IS_ERR(cmd)) {
> -               err = PTR_ERR(cmd);
> -               goto err_pool;
> -       }
> -       intel_gt_buffer_pool_mark_used(pool);
> -
> -       memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
> -
> -       batch = i915_vma_instance(pool->obj, vma->vm, NULL);
> -       if (IS_ERR(batch)) {
> -               err = PTR_ERR(batch);
> -               goto err_unmap;
> -       }
> -
> -       err = i915_vma_pin_ww(batch, &eb->ww, 0, 0, PIN_USER | PIN_NONBLOCK);
> -       if (err)
> -               goto err_unmap;
> -
> -       if (engine == eb->context->engine) {
> -               rq = i915_request_create(eb->context);
> -       } else {
> -               struct intel_context *ce = eb->reloc_context;
> -
> -               if (!ce) {
> -                       ce = intel_context_create(engine);
> -                       if (IS_ERR(ce)) {
> -                               err = PTR_ERR(ce);
> -                               goto err_unpin;
> -                       }
> -
> -                       i915_vm_put(ce->vm);
> -                       ce->vm = i915_vm_get(eb->context->vm);
> -                       eb->reloc_context = ce;
> -               }
> -
> -               err = intel_context_pin_ww(ce, &eb->ww);
> -               if (err)
> -                       goto err_unpin;
> -
> -               rq = i915_request_create(ce);
> -               intel_context_unpin(ce);
> -       }
> -       if (IS_ERR(rq)) {
> -               err = PTR_ERR(rq);
> -               goto err_unpin;
> -       }
> -
> -       err = intel_gt_buffer_pool_mark_active(pool, rq);
> -       if (err)
> -               goto err_request;
> -
> -       err = reloc_move_to_gpu(rq, vma);
> -       if (err)
> -               goto err_request;
> -
> -       err = eb->engine->emit_bb_start(rq,
> -                                       batch->node.start, PAGE_SIZE,
> -                                       cache->graphics_ver > 5 ? 0 : I915_DISPATCH_SECURE);
> -       if (err)
> -               goto skip_request;
> -
> -       assert_vma_held(batch);
> -       err = i915_request_await_object(rq, batch->obj, false);
> -       if (err == 0)
> -               err = i915_vma_move_to_active(batch, rq, 0);
> -       if (err)
> -               goto skip_request;
> -
> -       rq->batch = batch;
> -       i915_vma_unpin(batch);
> -
> -       cache->rq = rq;
> -       cache->rq_cmd = cmd;
> -       cache->rq_size = 0;
> -       cache->pool = pool;
> -
> -       /* Return with batch mapping (cmd) still pinned */
> -       return 0;
> -
> -skip_request:
> -       i915_request_set_error_once(rq, err);
> -err_request:
> -       i915_request_add(rq);
> -err_unpin:
> -       i915_vma_unpin(batch);
> -err_unmap:
> -       i915_gem_object_unpin_map(pool->obj);
> -err_pool:
> -       eb->reloc_pool = pool;
> -       return err;
> -}
> -
> -static bool reloc_can_use_engine(const struct intel_engine_cs *engine)
> -{
> -       return engine->class != VIDEO_DECODE_CLASS || GRAPHICS_VER(engine->i915) != 6;
> -}
> -
> -static u32 *reloc_gpu(struct i915_execbuffer *eb,
> -                     struct i915_vma *vma,
> -                     unsigned int len)
> -{
> -       struct reloc_cache *cache = &eb->reloc_cache;
> -       u32 *cmd;
> -
> -       if (cache->rq_size > PAGE_SIZE/sizeof(u32) - (len + 1))
> -               reloc_gpu_flush(eb, cache);
> -
> -       if (unlikely(!cache->rq)) {
> -               int err;
> -               struct intel_engine_cs *engine = eb->engine;
> -
> -               /* If we need to copy for the cmdparser, we will stall anyway */
> -               if (eb_use_cmdparser(eb))
> -                       return ERR_PTR(-EWOULDBLOCK);
> -
> -               if (!reloc_can_use_engine(engine)) {
> -                       engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
> -                       if (!engine)
> -                               return ERR_PTR(-ENODEV);
> -               }
> -
> -               err = __reloc_gpu_alloc(eb, engine, vma, len);
> -               if (unlikely(err))
> -                       return ERR_PTR(err);
> -       }
> -
> -       cmd = cache->rq_cmd + cache->rq_size;
> -       cache->rq_size += len;
> -
> -       return cmd;
> -}
> -
> -static inline bool use_reloc_gpu(struct i915_vma *vma)
> -{
> -       if (DBG_FORCE_RELOC == FORCE_GPU_RELOC)
> -               return true;
> -
> -       if (DBG_FORCE_RELOC)
> -               return false;
> -
> -       return !dma_resv_test_signaled(vma->resv, true);
> -}
> -
> -static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> -{
> -       struct page *page;
> -       unsigned long addr;
> -
> -       GEM_BUG_ON(vma->pages != vma->obj->mm.pages);
> -
> -       page = i915_gem_object_get_page(vma->obj, offset >> PAGE_SHIFT);
> -       addr = PFN_PHYS(page_to_pfn(page));
> -       GEM_BUG_ON(overflows_type(addr, u32)); /* expected dma32 */
> -
> -       return addr + offset_in_page(offset);
> -}
> -
> -static int __reloc_entry_gpu(struct i915_execbuffer *eb,
> -                             struct i915_vma *vma,
> -                             u64 offset,
> -                             u64 target_addr)
> -{
> -       const unsigned int ver = eb->reloc_cache.graphics_ver;
> -       unsigned int len;
> -       u32 *batch;
> -       u64 addr;
> -
> -       if (ver >= 8)
> -               len = offset & 7 ? 8 : 5;
> -       else if (ver >= 4)
> -               len = 4;
> -       else
> -               len = 3;
> -
> -       batch = reloc_gpu(eb, vma, len);
> -       if (batch == ERR_PTR(-EDEADLK))
> -               return -EDEADLK;
> -       else if (IS_ERR(batch))
> -               return false;
> -
> -       addr = gen8_canonical_addr(vma->node.start + offset);
> -       if (ver >= 8) {
> -               if (offset & 7) {
> -                       *batch++ = MI_STORE_DWORD_IMM_GEN4;
> -                       *batch++ = lower_32_bits(addr);
> -                       *batch++ = upper_32_bits(addr);
> -                       *batch++ = lower_32_bits(target_addr);
> -
> -                       addr = gen8_canonical_addr(addr + 4);
> -
> -                       *batch++ = MI_STORE_DWORD_IMM_GEN4;
> -                       *batch++ = lower_32_bits(addr);
> -                       *batch++ = upper_32_bits(addr);
> -                       *batch++ = upper_32_bits(target_addr);
> -               } else {
> -                       *batch++ = (MI_STORE_DWORD_IMM_GEN4 | (1 << 21)) + 1;
> -                       *batch++ = lower_32_bits(addr);
> -                       *batch++ = upper_32_bits(addr);
> -                       *batch++ = lower_32_bits(target_addr);
> -                       *batch++ = upper_32_bits(target_addr);
> -               }
> -       } else if (ver >= 6) {
> -               *batch++ = MI_STORE_DWORD_IMM_GEN4;
> -               *batch++ = 0;
> -               *batch++ = addr;
> -               *batch++ = target_addr;
> -       } else if (IS_I965G(eb->i915)) {
> -               *batch++ = MI_STORE_DWORD_IMM_GEN4;
> -               *batch++ = 0;
> -               *batch++ = vma_phys_addr(vma, offset);
> -               *batch++ = target_addr;
> -       } else if (ver >= 4) {
> -               *batch++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
> -               *batch++ = 0;
> -               *batch++ = addr;
> -               *batch++ = target_addr;
> -       } else if (ver >= 3 &&
> -                  !(IS_I915G(eb->i915) || IS_I915GM(eb->i915))) {
> -               *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
> -               *batch++ = addr;
> -               *batch++ = target_addr;
> -       } else {
> -               *batch++ = MI_STORE_DWORD_IMM;
> -               *batch++ = vma_phys_addr(vma, offset);
> -               *batch++ = target_addr;
> -       }
> -
> -       return true;
> -}
> -
> -static int __maybe_unused reloc_entry_gpu(struct i915_execbuffer *eb,
> -                           struct i915_vma *vma,
> -                           u64 offset,
> -                           u64 target_addr)
> -{
> -       if (eb->reloc_cache.vaddr)
> -               return false;
> -
> -       if (!use_reloc_gpu(vma))
> -               return false;
> -
> -       return __reloc_entry_gpu(eb, vma, offset, target_addr);
> -}
> -
>  static u64
>  relocate_entry(struct i915_vma *vma,
>                const struct drm_i915_gem_relocation_entry *reloc,
> @@ -3166,8 +2820,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>         eb.exec = exec;
>         eb.vma = (struct eb_vma *)(exec + args->buffer_count + 1);
>         eb.vma[0].vma = NULL;
> -       eb.reloc_pool = eb.batch_pool = NULL;
> -       eb.reloc_context = NULL;
> +       eb.batch_pool = NULL;
>
>         eb.invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
>         reloc_cache_init(&eb.reloc_cache, eb.i915);
> @@ -3265,9 +2918,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>
>         batch = eb.batch->vma;
>
> -       /* All GPU relocation batches must be submitted prior to the user rq */
> -       GEM_BUG_ON(eb.reloc_cache.rq);
> -
>         /* Allocate a request for this batch buffer nice and early. */
>         eb.request = i915_request_create(eb.context);
>         if (IS_ERR(eb.request)) {
> @@ -3358,10 +3008,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>
>         if (eb.batch_pool)
>                 intel_gt_buffer_pool_put(eb.batch_pool);
> -       if (eb.reloc_pool)
> -               intel_gt_buffer_pool_put(eb.reloc_pool);
> -       if (eb.reloc_context)
> -               intel_context_put(eb.reloc_context);
>  err_engine:
>         eb_put_engine(&eb);
>  err_context:
> @@ -3475,7 +3121,3 @@ end:;
>         kvfree(exec2_list);
>         return err;
>  }
> -
> -#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> -#include "selftests/i915_gem_execbuffer.c"
> -#endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> index e2fd1b61af71..c0386fb4e286 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> @@ -38,7 +38,6 @@ selftest(gem, i915_gem_live_selftests)
>  selftest(evict, i915_gem_evict_live_selftests)
>  selftest(hugepages, i915_gem_huge_page_live_selftests)
>  selftest(gem_contexts, i915_gem_context_live_selftests)
> -selftest(gem_execbuf, i915_gem_execbuffer_live_selftests)
>  selftest(client, i915_gem_client_blt_live_selftests)
>  selftest(gem_migrate, i915_gem_migrate_live_selftests)
>  selftest(reset, intel_reset_live_selftests)
> --
> 2.32.0
>


* [Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/2] drm/i915: Disable gpu relocations
  2021-08-03 12:48 ` [Intel-gfx] " Daniel Vetter
@ 2021-08-04 10:56 ` Patchwork
  -1 siblings, 0 replies; 11+ messages in thread
From: Patchwork @ 2021-08-04 10:56 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx


== Series Details ==

Series: series starting with [1/2] drm/i915: Disable gpu relocations
URL   : https://patchwork.freedesktop.org/series/93340/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10442_full -> Patchwork_20765_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_20765_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_create@create-massive:
    - shard-snb:          NOTRUN -> [DMESG-WARN][1] ([i915#3002])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-snb5/igt@gem_create@create-massive.html

  * igt@gem_ctx_persistence@legacy-engines-mixed:
    - shard-snb:          NOTRUN -> [SKIP][2] ([fdo#109271] / [i915#1099]) +2 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-snb6/igt@gem_ctx_persistence@legacy-engines-mixed.html

  * igt@gem_exec_endless@dispatch@bcs0:
    - shard-skl:          NOTRUN -> [SKIP][3] ([fdo#109271]) +6 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl9/igt@gem_exec_endless@dispatch@bcs0.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-apl:          NOTRUN -> [FAIL][4] ([i915#2846])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl2/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglb:         [PASS][5] -> [FAIL][6] ([i915#2842]) +2 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-tglb2/igt@gem_exec_fair@basic-flow@rcs0.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb3/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-glk:          [PASS][7] -> [FAIL][8] ([i915#2842]) +2 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-glk5/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-glk6/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][9] ([i915#2842])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][10] ([i915#2842])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb4/igt@gem_exec_fair@basic-pace@vcs1.html
    - shard-kbl:          [PASS][11] -> [FAIL][12] ([i915#2842])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl4/igt@gem_exec_fair@basic-pace@vcs1.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl3/igt@gem_exec_fair@basic-pace@vcs1.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-kbl:          [PASS][13] -> [SKIP][14] ([fdo#109271]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl4/igt@gem_exec_fair@basic-pace@vecs0.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl3/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][15] ([fdo#109271] / [i915#2190])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl3/igt@gem_huc_copy@huc-copy.html

  * igt@gem_render_copy@x-tiled-to-vebox-yf-tiled:
    - shard-kbl:          NOTRUN -> [SKIP][16] ([fdo#109271]) +94 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@gem_render_copy@x-tiled-to-vebox-yf-tiled.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-apl:          NOTRUN -> [SKIP][17] ([fdo#109271] / [i915#3323])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl2/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-snb:          NOTRUN -> [FAIL][18] ([i915#2724])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-snb2/igt@gem_userptr_blits@vma-merge.html

  * igt@i915_module_load@reload:
    - shard-skl:          [PASS][19] -> [DMESG-WARN][20] ([i915#1982]) +1 similar issue
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl6/igt@i915_module_load@reload.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl10/igt@i915_module_load@reload.html

  * igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][21] ([fdo#109271] / [i915#3886]) +10 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl7/igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][22] ([i915#3689] / [i915#3886])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb1/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][23] ([i915#3689])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb1/igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_ccs.html

  * igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#3886]) +2 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-d-bad-pixel-format-y_tiled_ccs:
    - shard-snb:          NOTRUN -> [SKIP][25] ([fdo#109271]) +278 similar issues
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-snb6/igt@kms_ccs@pipe-d-bad-pixel-format-y_tiled_ccs.html

  * igt@kms_cdclk@mode-transition:
    - shard-apl:          NOTRUN -> [SKIP][26] ([fdo#109271]) +217 similar issues
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl8/igt@kms_cdclk@mode-transition.html

  * igt@kms_chamelium@hdmi-edid-change-during-suspend:
    - shard-apl:          NOTRUN -> [SKIP][27] ([fdo#109271] / [fdo#111827]) +19 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl8/igt@kms_chamelium@hdmi-edid-change-during-suspend.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-75:
    - shard-kbl:          NOTRUN -> [SKIP][28] ([fdo#109271] / [fdo#111827]) +6 similar issues
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl3/igt@kms_color_chamelium@pipe-a-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-c-gamma:
    - shard-tglb:         NOTRUN -> [SKIP][29] ([fdo#109284] / [fdo#111827])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb1/igt@kms_color_chamelium@pipe-c-gamma.html

  * igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes:
    - shard-snb:          NOTRUN -> [SKIP][30] ([fdo#109271] / [fdo#111827]) +12 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-snb6/igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes.html

  * igt@kms_content_protection@atomic:
    - shard-apl:          NOTRUN -> [TIMEOUT][31] ([i915#1319]) +1 similar issue
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl7/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@legacy:
    - shard-kbl:          NOTRUN -> [TIMEOUT][32] ([i915#1319])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl3/igt@kms_content_protection@legacy.html
    - shard-tglb:         NOTRUN -> [SKIP][33] ([fdo#111828])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb5/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@uevent:
    - shard-kbl:          NOTRUN -> [FAIL][34] ([i915#2105])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-skl:          [PASS][35] -> [FAIL][36] ([i915#2346])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl7/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl5/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a1:
    - shard-glk:          [PASS][37] -> [FAIL][38] ([i915#79]) +1 similar issue
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-glk7/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a1.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-glk7/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile:
    - shard-tglb:         NOTRUN -> [SKIP][39] ([i915#2587])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb1/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt:
    - shard-tglb:         NOTRUN -> [SKIP][40] ([fdo#111825]) +3 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d:
    - shard-apl:          NOTRUN -> [SKIP][41] ([fdo#109271] / [i915#533])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl3/igt@kms_pipe_crc_basic@read-crc-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
    - shard-apl:          NOTRUN -> [FAIL][42] ([fdo#108145] / [i915#265]) +1 similar issue
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl8/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][43] ([i915#265])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl7/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
    - shard-skl:          [PASS][44] -> [FAIL][45] ([fdo#108145] / [i915#265])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl8/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl3/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-basic:
    - shard-kbl:          NOTRUN -> [FAIL][46] ([fdo#108145] / [i915#265])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@kms_plane_alpha_blend@pipe-b-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
    - shard-kbl:          NOTRUN -> [FAIL][47] ([i915#265]) +1 similar issue
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4:
    - shard-apl:          NOTRUN -> [SKIP][48] ([fdo#109271] / [i915#658]) +3 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
    - shard-kbl:          NOTRUN -> [SKIP][49] ([fdo#109271] / [i915#658]) +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html

  * igt@kms_psr@psr2_primary_page_flip:
    - shard-iclb:         [PASS][50] -> [SKIP][51] ([fdo#109441]) +1 similar issue
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb5/igt@kms_psr@psr2_primary_page_flip.html

  * igt@kms_vblank@pipe-b-ts-continuation-suspend:
    - shard-kbl:          [PASS][52] -> [DMESG-WARN][53] ([i915#180]) +1 similar issue
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl2/igt@kms_vblank@pipe-b-ts-continuation-suspend.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl4/igt@kms_vblank@pipe-b-ts-continuation-suspend.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
    - shard-apl:          [PASS][54] -> [DMESG-WARN][55] ([i915#180]) +1 similar issue
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl3/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl1/igt@kms_vblank@pipe-c-ts-continuation-suspend.html

  * igt@nouveau_crc@pipe-a-ctx-flip-skip-current-frame:
    - shard-tglb:         NOTRUN -> [SKIP][56] ([i915#2530])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb1/igt@nouveau_crc@pipe-a-ctx-flip-skip-current-frame.html

  * igt@perf@polling-parameterized:
    - shard-glk:          [PASS][57] -> [FAIL][58] ([i915#1542])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-glk7/igt@perf@polling-parameterized.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-glk6/igt@perf@polling-parameterized.html

  * igt@sysfs_clients@sema-10:
    - shard-apl:          NOTRUN -> [SKIP][59] ([fdo#109271] / [i915#2994])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl3/igt@sysfs_clients@sema-10.html

  * igt@sysfs_clients@split-10:
    - shard-skl:          NOTRUN -> [SKIP][60] ([fdo#109271] / [i915#2994])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl8/igt@sysfs_clients@split-10.html

  * igt@sysfs_heartbeat_interval@mixed@vcs0:
    - shard-skl:          [PASS][61] -> [FAIL][62] ([i915#1731])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl9/igt@sysfs_heartbeat_interval@mixed@vcs0.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl8/igt@sysfs_heartbeat_interval@mixed@vcs0.html

  
#### Possible fixes ####

  * igt@fbdev@read:
    - {shard-rkl}:        [SKIP][63] ([i915#2582]) -> [PASS][64]
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@fbdev@read.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@fbdev@read.html

  * igt@gem_ctx_persistence@legacy-engines-hang@render:
    - shard-kbl:          [FAIL][65] ([i915#2410]) -> [PASS][66]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl4/igt@gem_ctx_persistence@legacy-engines-hang@render.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl4/igt@gem_ctx_persistence@legacy-engines-hang@render.html

  * igt@gem_eio@in-flight-contexts-10ms:
    - {shard-rkl}:        [TIMEOUT][67] ([i915#3063]) -> [PASS][68]
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-5/igt@gem_eio@in-flight-contexts-10ms.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-5/igt@gem_eio@in-flight-contexts-10ms.html

  * igt@gem_eio@reset-stress:
    - {shard-rkl}:        [FAIL][69] ([i915#3115]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@gem_eio@reset-stress.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@gem_eio@reset-stress.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-apl:          [SKIP][71] ([fdo#109271]) -> [PASS][72]
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl8/igt@gem_exec_fair@basic-none-share@rcs0.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl8/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-kbl:          [FAIL][73] ([i915#2842]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl7/igt@gem_exec_fair@basic-none@vcs0.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl4/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [FAIL][75] ([i915#2842]) -> [PASS][76]
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-glk7/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-glk7/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-iclb:         [FAIL][77] ([i915#2842]) -> [PASS][78]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb8/igt@gem_exec_fair@basic-pace@rcs0.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb4/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - {shard-rkl}:        [FAIL][79] ([i915#2842]) -> [PASS][80] +1 similar issue
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-5/igt@gem_exec_fair@basic-throttle@rcs0.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-5/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-tglb:         [SKIP][81] ([i915#2190]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-tglb6/igt@gem_huc_copy@huc-copy.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-tglb7/igt@gem_huc_copy@huc-copy.html

  * igt@gem_mmap_gtt@cpuset-medium-copy-xy:
    - shard-iclb:         [FAIL][83] ([i915#307]) -> [PASS][84] +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb8/igt@gem_mmap_gtt@cpuset-medium-copy-xy.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb5/igt@gem_mmap_gtt@cpuset-medium-copy-xy.html

  * igt@gem_mmap_offset@clear:
    - {shard-rkl}:        [FAIL][85] ([i915#3160]) -> [PASS][86]
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@gem_mmap_offset@clear.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-1/igt@gem_mmap_offset@clear.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [FAIL][87] ([i915#454]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb6/igt@i915_pm_dc@dc6-psr.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb1/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_rpm@reg-read-ioctl:
    - {shard-rkl}:        [SKIP][89] ([i915#3844] / [i915#579]) -> [PASS][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-6/igt@i915_pm_rpm@reg-read-ioctl.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-5/igt@i915_pm_rpm@reg-read-ioctl.html

  * igt@i915_suspend@fence-restore-untiled:
    - shard-apl:          [DMESG-WARN][91] ([i915#180]) -> [PASS][92] +1 similar issue
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl6/igt@i915_suspend@fence-restore-untiled.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl7/igt@i915_suspend@fence-restore-untiled.html

  * igt@kms_big_fb@linear-8bpp-rotate-180:
    - {shard-rkl}:        [SKIP][93] ([i915#3638]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-2/igt@kms_big_fb@linear-8bpp-rotate-180.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_big_fb@linear-8bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - {shard-rkl}:        [SKIP][95] ([i915#3721]) -> [PASS][96] +1 similar issue
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc:
    - {shard-rkl}:        [FAIL][97] ([i915#3678]) -> [PASS][98] +4 similar issues
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_color@pipe-a-ctm-0-5:
    - {shard-rkl}:        [SKIP][99] ([i915#1149] / [i915#1849]) -> [PASS][100] +2 similar issues
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_color@pipe-a-ctm-0-5.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_color@pipe-a-ctm-0-5.html

  * igt@kms_cursor_crc@pipe-a-cursor-128x42-sliding:
    - {shard-rkl}:        [SKIP][101] ([fdo#112022]) -> [PASS][102] +7 similar issues
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_cursor_crc@pipe-a-cursor-128x42-sliding.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_cursor_crc@pipe-a-cursor-128x42-sliding.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-kbl:          [DMESG-WARN][103] ([i915#180]) -> [PASS][104] +4 similar issues
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl7/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl3/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions:
    - {shard-rkl}:        [SKIP][105] ([fdo#111825]) -> [PASS][106] +1 similar issue
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-2/igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions.html

  * igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-untiled:
    - {shard-rkl}:        [SKIP][107] ([fdo#111314]) -> [PASS][108] +4 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-2/igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-untiled.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-untiled.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-kbl:          [INCOMPLETE][109] ([i915#155] / [i915#180] / [i915#636]) -> [PASS][110]
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl4/igt@kms_fbcon_fbt@fbc-suspend.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl2/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
    - {shard-rkl}:        [FAIL][111] ([i915#79]) -> [PASS][112] +1 similar issue
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-6/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1:
    - shard-skl:          [FAIL][113] ([i915#2122]) -> [PASS][114] +1 similar issue
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl4/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl2/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html

  * igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-wc:
    - {shard-rkl}:        [SKIP][115] ([i915#1849]) -> [PASS][116] +22 similar issues
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-wc.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-1p-rte:
    - shard-skl:          [DMESG-WARN][117] ([i915#1982]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl8/igt@kms_frontbuffer_tracking@psr-1p-rte.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl3/igt@kms_frontbuffer_tracking@psr-1p-rte.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [FAIL][119] ([i915#1188]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl9/igt@kms_hdr@bpc-switch-suspend.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl8/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
    - shard-skl:          [FAIL][121] ([fdo#108145] / [i915#265]) -> [PASS][122] +1 similar issue
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-skl8/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-skl1/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html

  * igt@kms_plane_multiple@atomic-pipe-a-tiling-y:
    - {shard-rkl}:        [SKIP][123] ([i915#1849] / [i915#3558]) -> [PASS][124]
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-2/igt@kms_plane_multiple@atomic-pipe-a-tiling-y.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_plane_multiple@atomic-pipe-a-tiling-y.html

  * igt@kms_psr@no_drrs:
    - {shard-rkl}:        [SKIP][125] ([i915#1072]) -> [PASS][126]
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_psr@no_drrs.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_psr@no_drrs.html

  * igt@kms_psr@psr2_sprite_mmap_gtt:
    - shard-iclb:         [SKIP][127] ([fdo#109441]) -> [PASS][128] +1 similar issue
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb7/igt@kms_psr@psr2_sprite_mmap_gtt.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb2/igt@kms_psr@psr2_sprite_mmap_gtt.html

  * igt@kms_vblank@pipe-c-wait-busy:
    - {shard-rkl}:        [SKIP][129] ([i915#1845]) -> [PASS][130] +8 similar issues
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-1/igt@kms_vblank@pipe-c-wait-busy.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-6/igt@kms_vblank@pipe-c-wait-busy.html

  * igt@perf@blocking:
    - {shard-rkl}:        [FAIL][131] ([i915#1542]) -> [PASS][132] +1 similar issue
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-rkl-2/igt@perf@blocking.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-rkl-1/igt@perf@blocking.html

  
#### Warnings ####

  * igt@kms_content_protection@lic:
    - shard-apl:          [TIMEOUT][133] ([i915#1319]) -> [DMESG-FAIL][134] ([fdo#110321])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl3/igt@kms_content_protection@lic.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-apl1/igt@kms_content_protection@lic.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4:
    - shard-iclb:         [SKIP][135] ([i915#2920]) -> [SKIP][136] ([i915#658]) +2 similar issues
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb5/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5:
    - shard-iclb:         [SKIP][137] ([i915#658]) -> [SKIP][138] ([i915#2920]) +1 similar issue
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-iclb7/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5.html

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][139], [FAIL][140], [FAIL][141], [FAIL][142], [FAIL][143], [FAIL][144]) ([i915#180] / [i915#1814] / [i915#2505] / [i915#3002] / [i915#3363] / [i915#92]) -> ([FAIL][145], [FAIL][146], [FAIL][147], [FAIL][148]) ([i915#1814] / [i915#3002] / [i915#3363] / [i915#602])
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl7/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl6/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl4/igt@runner@aborted.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl4/igt@runner@aborted.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl7/igt@runner@aborted.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-kbl7/igt@runner@aborted.html
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl4/igt@runner@aborted.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl1/igt@runner@aborted.html
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl4/igt@runner@aborted.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/shard-kbl4/igt@runner@aborted.html
    - shard-apl:          ([FAIL][149], [FAIL][150], [FAIL][151], [FAIL][152], [FAIL][153]) ([i915#1610] / [i915#180] / [i915#1814] / [i915#3002] / [i915#3363]) -> ([FAIL][154], [FAIL][155], [FAIL][156]) ([fdo#109271] / [i915#1814] / [i915#3002] / [i915#3363])
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl2/igt@runner@aborted.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl1/igt@runner@aborted.html
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl8/igt@runner@aborted.html
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10442/shard-apl8/igt@runner@aborted.html

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20765/index.html



* Re: [PATCH 2/2] drm/i915: delete gpu reloc code
  2021-08-03 15:47     ` [Intel-gfx] " Jason Ekstrand
@ 2021-08-04 22:26       ` Daniel Vetter
  -1 siblings, 0 replies; 11+ messages in thread
From: Daniel Vetter @ 2021-08-04 22:26 UTC (permalink / raw)
  To: Jason Ekstrand
  Cc: Daniel Vetter, Intel Graphics Development, DRI Development,
	Daniel Vetter, Jon Bloomfield, Chris Wilson, Maarten Lankhorst,
	Joonas Lahtinen, Thomas Hellström, Matthew Auld,
	Lionel Landwerlin, Dave Airlie

On Tue, Aug 03, 2021 at 10:47:10AM -0500, Jason Ekstrand wrote:
> Both are
> 
> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

CI is happy, so it looks like you got all the igt changes indeed. Both are
pushed, thanks for reviewing.
-Daniel

> 
> On Tue, Aug 3, 2021 at 7:49 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> >
> > It's already removed, this just garbage collects it all.
> >
> > v2: Rebase over s/GEN/GRAPHICS_VER/
> >
> > v3: Also ditch eb.reloc_pool and eb.reloc_context (Maarten)
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> > Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
> > Cc: Dave Airlie <airlied@redhat.com>
> > Cc: Jason Ekstrand <jason@jlekstrand.net>
> > ---
> >  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 360 +-----------------
> >  .../drm/i915/selftests/i915_live_selftests.h  |   1 -
> >  2 files changed, 1 insertion(+), 360 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > index e4dc4c3b4df3..98e25efffb59 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > @@ -277,16 +277,8 @@ struct i915_execbuffer {
> >                 bool has_llc : 1;
> >                 bool has_fence : 1;
> >                 bool needs_unfenced : 1;
> > -
> > -               struct i915_request *rq;
> > -               u32 *rq_cmd;
> > -               unsigned int rq_size;
> > -               struct intel_gt_buffer_pool_node *pool;
> >         } reloc_cache;
> >
> > -       struct intel_gt_buffer_pool_node *reloc_pool; /** relocation pool for -EDEADLK handling */
> > -       struct intel_context *reloc_context;
> > -
> >         u64 invalid_flags; /** Set of execobj.flags that are invalid */
> >
> >         u64 batch_len; /** Length of batch within object */
> > @@ -1035,8 +1027,6 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
> >
> >  static void eb_destroy(const struct i915_execbuffer *eb)
> >  {
> > -       GEM_BUG_ON(eb->reloc_cache.rq);
> > -
> >         if (eb->lut_size > 0)
> >                 kfree(eb->buckets);
> >  }
> > @@ -1048,14 +1038,6 @@ relocation_target(const struct drm_i915_gem_relocation_entry *reloc,
> >         return gen8_canonical_addr((int)reloc->delta + target->node.start);
> >  }
> >
> > -static void reloc_cache_clear(struct reloc_cache *cache)
> > -{
> > -       cache->rq = NULL;
> > -       cache->rq_cmd = NULL;
> > -       cache->pool = NULL;
> > -       cache->rq_size = 0;
> > -}
> > -
> >  static void reloc_cache_init(struct reloc_cache *cache,
> >                              struct drm_i915_private *i915)
> >  {
> > @@ -1068,7 +1050,6 @@ static void reloc_cache_init(struct reloc_cache *cache,
> >         cache->has_fence = cache->graphics_ver < 4;
> >         cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment;
> >         cache->node.flags = 0;
> > -       reloc_cache_clear(cache);
> >  }
> >
> >  static inline void *unmask_page(unsigned long p)
> > @@ -1090,48 +1071,10 @@ static inline struct i915_ggtt *cache_to_ggtt(struct reloc_cache *cache)
> >         return &i915->ggtt;
> >  }
> >
> > -static void reloc_cache_put_pool(struct i915_execbuffer *eb, struct reloc_cache *cache)
> > -{
> > -       if (!cache->pool)
> > -               return;
> > -
> > -       /*
> > -        * This is a bit nasty, normally we keep objects locked until the end
> > -        * of execbuffer, but we already submit this, and have to unlock before
> > -        * dropping the reference. Fortunately we can only hold 1 pool node at
> > -        * a time, so this should be harmless.
> > -        */
> > -       i915_gem_ww_unlock_single(cache->pool->obj);
> > -       intel_gt_buffer_pool_put(cache->pool);
> > -       cache->pool = NULL;
> > -}
> > -
> > -static void reloc_gpu_flush(struct i915_execbuffer *eb, struct reloc_cache *cache)
> > -{
> > -       struct drm_i915_gem_object *obj = cache->rq->batch->obj;
> > -
> > -       GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32));
> > -       cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
> > -
> > -       i915_gem_object_flush_map(obj);
> > -       i915_gem_object_unpin_map(obj);
> > -
> > -       intel_gt_chipset_flush(cache->rq->engine->gt);
> > -
> > -       i915_request_add(cache->rq);
> > -       reloc_cache_put_pool(eb, cache);
> > -       reloc_cache_clear(cache);
> > -
> > -       eb->reloc_pool = NULL;
> > -}
> > -
> >  static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer *eb)
> >  {
> >         void *vaddr;
> >
> > -       if (cache->rq)
> > -               reloc_gpu_flush(eb, cache);
> > -
> >         if (!cache->vaddr)
> >                 return;
> >
> > @@ -1313,295 +1256,6 @@ static void clflush_write32(u32 *addr, u32 value, unsigned int flushes)
> >                 *addr = value;
> >  }
> >
> > -static int reloc_move_to_gpu(struct i915_request *rq, struct i915_vma *vma)
> > -{
> > -       struct drm_i915_gem_object *obj = vma->obj;
> > -       int err;
> > -
> > -       assert_vma_held(vma);
> > -
> > -       if (obj->cache_dirty & ~obj->cache_coherent)
> > -               i915_gem_clflush_object(obj, 0);
> > -       obj->write_domain = 0;
> > -
> > -       err = i915_request_await_object(rq, vma->obj, true);
> > -       if (err == 0)
> > -               err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
> > -
> > -       return err;
> > -}
> > -
> > -static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
> > -                            struct intel_engine_cs *engine,
> > -                            struct i915_vma *vma,
> > -                            unsigned int len)
> > -{
> > -       struct reloc_cache *cache = &eb->reloc_cache;
> > -       struct intel_gt_buffer_pool_node *pool = eb->reloc_pool;
> > -       struct i915_request *rq;
> > -       struct i915_vma *batch;
> > -       u32 *cmd;
> > -       int err;
> > -
> > -       if (!pool) {
> > -               pool = intel_gt_get_buffer_pool(engine->gt, PAGE_SIZE,
> > -                                               cache->has_llc ?
> > -                                               I915_MAP_WB :
> > -                                               I915_MAP_WC);
> > -               if (IS_ERR(pool))
> > -                       return PTR_ERR(pool);
> > -       }
> > -       eb->reloc_pool = NULL;
> > -
> > -       err = i915_gem_object_lock(pool->obj, &eb->ww);
> > -       if (err)
> > -               goto err_pool;
> > -
> > -       cmd = i915_gem_object_pin_map(pool->obj, pool->type);
> > -       if (IS_ERR(cmd)) {
> > -               err = PTR_ERR(cmd);
> > -               goto err_pool;
> > -       }
> > -       intel_gt_buffer_pool_mark_used(pool);
> > -
> > -       memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
> > -
> > -       batch = i915_vma_instance(pool->obj, vma->vm, NULL);
> > -       if (IS_ERR(batch)) {
> > -               err = PTR_ERR(batch);
> > -               goto err_unmap;
> > -       }
> > -
> > -       err = i915_vma_pin_ww(batch, &eb->ww, 0, 0, PIN_USER | PIN_NONBLOCK);
> > -       if (err)
> > -               goto err_unmap;
> > -
> > -       if (engine == eb->context->engine) {
> > -               rq = i915_request_create(eb->context);
> > -       } else {
> > -               struct intel_context *ce = eb->reloc_context;
> > -
> > -               if (!ce) {
> > -                       ce = intel_context_create(engine);
> > -                       if (IS_ERR(ce)) {
> > -                               err = PTR_ERR(ce);
> > -                               goto err_unpin;
> > -                       }
> > -
> > -                       i915_vm_put(ce->vm);
> > -                       ce->vm = i915_vm_get(eb->context->vm);
> > -                       eb->reloc_context = ce;
> > -               }
> > -
> > -               err = intel_context_pin_ww(ce, &eb->ww);
> > -               if (err)
> > -                       goto err_unpin;
> > -
> > -               rq = i915_request_create(ce);
> > -               intel_context_unpin(ce);
> > -       }
> > -       if (IS_ERR(rq)) {
> > -               err = PTR_ERR(rq);
> > -               goto err_unpin;
> > -       }
> > -
> > -       err = intel_gt_buffer_pool_mark_active(pool, rq);
> > -       if (err)
> > -               goto err_request;
> > -
> > -       err = reloc_move_to_gpu(rq, vma);
> > -       if (err)
> > -               goto err_request;
> > -
> > -       err = eb->engine->emit_bb_start(rq,
> > -                                       batch->node.start, PAGE_SIZE,
> > -                                       cache->graphics_ver > 5 ? 0 : I915_DISPATCH_SECURE);
> > -       if (err)
> > -               goto skip_request;
> > -
> > -       assert_vma_held(batch);
> > -       err = i915_request_await_object(rq, batch->obj, false);
> > -       if (err == 0)
> > -               err = i915_vma_move_to_active(batch, rq, 0);
> > -       if (err)
> > -               goto skip_request;
> > -
> > -       rq->batch = batch;
> > -       i915_vma_unpin(batch);
> > -
> > -       cache->rq = rq;
> > -       cache->rq_cmd = cmd;
> > -       cache->rq_size = 0;
> > -       cache->pool = pool;
> > -
> > -       /* Return with batch mapping (cmd) still pinned */
> > -       return 0;
> > -
> > -skip_request:
> > -       i915_request_set_error_once(rq, err);
> > -err_request:
> > -       i915_request_add(rq);
> > -err_unpin:
> > -       i915_vma_unpin(batch);
> > -err_unmap:
> > -       i915_gem_object_unpin_map(pool->obj);
> > -err_pool:
> > -       eb->reloc_pool = pool;
> > -       return err;
> > -}
> > -
> > -static bool reloc_can_use_engine(const struct intel_engine_cs *engine)
> > -{
> > -       return engine->class != VIDEO_DECODE_CLASS || GRAPHICS_VER(engine->i915) != 6;
> > -}
> > -
> > -static u32 *reloc_gpu(struct i915_execbuffer *eb,
> > -                     struct i915_vma *vma,
> > -                     unsigned int len)
> > -{
> > -       struct reloc_cache *cache = &eb->reloc_cache;
> > -       u32 *cmd;
> > -
> > -       if (cache->rq_size > PAGE_SIZE/sizeof(u32) - (len + 1))
> > -               reloc_gpu_flush(eb, cache);
> > -
> > -       if (unlikely(!cache->rq)) {
> > -               int err;
> > -               struct intel_engine_cs *engine = eb->engine;
> > -
> > -               /* If we need to copy for the cmdparser, we will stall anyway */
> > -               if (eb_use_cmdparser(eb))
> > -                       return ERR_PTR(-EWOULDBLOCK);
> > -
> > -               if (!reloc_can_use_engine(engine)) {
> > -                       engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
> > -                       if (!engine)
> > -                               return ERR_PTR(-ENODEV);
> > -               }
> > -
> > -               err = __reloc_gpu_alloc(eb, engine, vma, len);
> > -               if (unlikely(err))
> > -                       return ERR_PTR(err);
> > -       }
> > -
> > -       cmd = cache->rq_cmd + cache->rq_size;
> > -       cache->rq_size += len;
> > -
> > -       return cmd;
> > -}
> > -
> > -static inline bool use_reloc_gpu(struct i915_vma *vma)
> > -{
> > -       if (DBG_FORCE_RELOC == FORCE_GPU_RELOC)
> > -               return true;
> > -
> > -       if (DBG_FORCE_RELOC)
> > -               return false;
> > -
> > -       return !dma_resv_test_signaled(vma->resv, true);
> > -}
> > -
> > -static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> > -{
> > -       struct page *page;
> > -       unsigned long addr;
> > -
> > -       GEM_BUG_ON(vma->pages != vma->obj->mm.pages);
> > -
> > -       page = i915_gem_object_get_page(vma->obj, offset >> PAGE_SHIFT);
> > -       addr = PFN_PHYS(page_to_pfn(page));
> > -       GEM_BUG_ON(overflows_type(addr, u32)); /* expected dma32 */
> > -
> > -       return addr + offset_in_page(offset);
> > -}
> > -
> > -static int __reloc_entry_gpu(struct i915_execbuffer *eb,
> > -                             struct i915_vma *vma,
> > -                             u64 offset,
> > -                             u64 target_addr)
> > -{
> > -       const unsigned int ver = eb->reloc_cache.graphics_ver;
> > -       unsigned int len;
> > -       u32 *batch;
> > -       u64 addr;
> > -
> > -       if (ver >= 8)
> > -               len = offset & 7 ? 8 : 5;
> > -       else if (ver >= 4)
> > -               len = 4;
> > -       else
> > -               len = 3;
> > -
> > -       batch = reloc_gpu(eb, vma, len);
> > -       if (batch == ERR_PTR(-EDEADLK))
> > -               return -EDEADLK;
> > -       else if (IS_ERR(batch))
> > -               return false;
> > -
> > -       addr = gen8_canonical_addr(vma->node.start + offset);
> > -       if (ver >= 8) {
> > -               if (offset & 7) {
> > -                       *batch++ = MI_STORE_DWORD_IMM_GEN4;
> > -                       *batch++ = lower_32_bits(addr);
> > -                       *batch++ = upper_32_bits(addr);
> > -                       *batch++ = lower_32_bits(target_addr);
> > -
> > -                       addr = gen8_canonical_addr(addr + 4);
> > -
> > -                       *batch++ = MI_STORE_DWORD_IMM_GEN4;
> > -                       *batch++ = lower_32_bits(addr);
> > -                       *batch++ = upper_32_bits(addr);
> > -                       *batch++ = upper_32_bits(target_addr);
> > -               } else {
> > -                       *batch++ = (MI_STORE_DWORD_IMM_GEN4 | (1 << 21)) + 1;
> > -                       *batch++ = lower_32_bits(addr);
> > -                       *batch++ = upper_32_bits(addr);
> > -                       *batch++ = lower_32_bits(target_addr);
> > -                       *batch++ = upper_32_bits(target_addr);
> > -               }
> > -       } else if (ver >= 6) {
> > -               *batch++ = MI_STORE_DWORD_IMM_GEN4;
> > -               *batch++ = 0;
> > -               *batch++ = addr;
> > -               *batch++ = target_addr;
> > -       } else if (IS_I965G(eb->i915)) {
> > -               *batch++ = MI_STORE_DWORD_IMM_GEN4;
> > -               *batch++ = 0;
> > -               *batch++ = vma_phys_addr(vma, offset);
> > -               *batch++ = target_addr;
> > -       } else if (ver >= 4) {
> > -               *batch++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
> > -               *batch++ = 0;
> > -               *batch++ = addr;
> > -               *batch++ = target_addr;
> > -       } else if (ver >= 3 &&
> > -                  !(IS_I915G(eb->i915) || IS_I915GM(eb->i915))) {
> > -               *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
> > -               *batch++ = addr;
> > -               *batch++ = target_addr;
> > -       } else {
> > -               *batch++ = MI_STORE_DWORD_IMM;
> > -               *batch++ = vma_phys_addr(vma, offset);
> > -               *batch++ = target_addr;
> > -       }
> > -
> > -       return true;
> > -}
> > -
> > -static int __maybe_unused reloc_entry_gpu(struct i915_execbuffer *eb,
> > -                           struct i915_vma *vma,
> > -                           u64 offset,
> > -                           u64 target_addr)
> > -{
> > -       if (eb->reloc_cache.vaddr)
> > -               return false;
> > -
> > -       if (!use_reloc_gpu(vma))
> > -               return false;
> > -
> > -       return __reloc_entry_gpu(eb, vma, offset, target_addr);
> > -}
> > -
> >  static u64
> >  relocate_entry(struct i915_vma *vma,
> >                const struct drm_i915_gem_relocation_entry *reloc,
> > @@ -3166,8 +2820,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >         eb.exec = exec;
> >         eb.vma = (struct eb_vma *)(exec + args->buffer_count + 1);
> >         eb.vma[0].vma = NULL;
> > -       eb.reloc_pool = eb.batch_pool = NULL;
> > -       eb.reloc_context = NULL;
> > +       eb.batch_pool = NULL;
> >
> >         eb.invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
> >         reloc_cache_init(&eb.reloc_cache, eb.i915);
> > @@ -3265,9 +2918,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >
> >         batch = eb.batch->vma;
> >
> > -       /* All GPU relocation batches must be submitted prior to the user rq */
> > -       GEM_BUG_ON(eb.reloc_cache.rq);
> > -
> >         /* Allocate a request for this batch buffer nice and early. */
> >         eb.request = i915_request_create(eb.context);
> >         if (IS_ERR(eb.request)) {
> > @@ -3358,10 +3008,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
> >
> >         if (eb.batch_pool)
> >                 intel_gt_buffer_pool_put(eb.batch_pool);
> > -       if (eb.reloc_pool)
> > -               intel_gt_buffer_pool_put(eb.reloc_pool);
> > -       if (eb.reloc_context)
> > -               intel_context_put(eb.reloc_context);
> >  err_engine:
> >         eb_put_engine(&eb);
> >  err_context:
> > @@ -3475,7 +3121,3 @@ end:;
> >         kvfree(exec2_list);
> >         return err;
> >  }
> > -
> > -#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> > -#include "selftests/i915_gem_execbuffer.c"
> > -#endif
> > diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> > index e2fd1b61af71..c0386fb4e286 100644
> > --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> > +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> > @@ -38,7 +38,6 @@ selftest(gem, i915_gem_live_selftests)
> >  selftest(evict, i915_gem_evict_live_selftests)
> >  selftest(hugepages, i915_gem_huge_page_live_selftests)
> >  selftest(gem_contexts, i915_gem_context_live_selftests)
> > -selftest(gem_execbuf, i915_gem_execbuffer_live_selftests)
> >  selftest(client, i915_gem_client_blt_live_selftests)
> >  selftest(gem_migrate, i915_gem_migrate_live_selftests)
> >  selftest(reset, intel_reset_live_selftests)
> > --
> > 2.32.0
> >

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Thread overview: 11+ messages
2021-08-03 12:48 [PATCH 1/2] drm/i915: Disable gpu relocations Daniel Vetter
2021-08-03 12:48 ` [Intel-gfx] " Daniel Vetter
2021-08-03 12:48 ` [PATCH 2/2] drm/i915: delete gpu reloc code Daniel Vetter
2021-08-03 12:48   ` [Intel-gfx] " Daniel Vetter
2021-08-03 15:47   ` Jason Ekstrand
2021-08-03 15:47     ` [Intel-gfx] " Jason Ekstrand
2021-08-04 22:26     ` Daniel Vetter
2021-08-04 22:26       ` [Intel-gfx] " Daniel Vetter
2021-08-03 15:10 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/2] drm/i915: Disable gpu relocations Patchwork
2021-08-03 15:38 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-08-04 10:56 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
