* [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings
@ 2019-10-14  9:07 Chris Wilson
  2019-10-14  9:07 ` [PATCH 02/15] drm/i915/gem: Distinguish each object type Chris Wilson
                   ` (14 more replies)
  0 siblings, 15 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

Just a parameter rename,

drivers/gpu/drm/i915/display/intel_display.c:14425: warning: Function parameter or member '_new_plane_state' not described in 'intel_prepare_plane_fb'
drivers/gpu/drm/i915/display/intel_display.c:14425: warning: Excess function parameter 'new_state' description in 'intel_prepare_plane_fb'
drivers/gpu/drm/i915/display/intel_display.c:14534: warning: Function parameter or member '_old_plane_state' not described in 'intel_cleanup_plane_fb'
drivers/gpu/drm/i915/display/intel_display.c:14534: warning: Excess function parameter 'old_state' description in 'intel_cleanup_plane_fb'

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index a146ec02a0c1..3cf39fc153b3 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14410,7 +14410,7 @@ static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj)
 /**
  * intel_prepare_plane_fb - Prepare fb for usage on plane
  * @plane: drm plane to prepare for
- * @new_state: the plane state being prepared
+ * @_new_plane_state: the plane state being prepared
  *
  * Prepares a framebuffer for usage on a display plane.  Generally this
  * involves pinning the underlying object and updating the frontbuffer tracking
@@ -14524,7 +14524,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 /**
  * intel_cleanup_plane_fb - Cleans up an fb after plane use
  * @plane: drm plane to clean up for
- * @old_state: the state from the previous modeset
+ * @_old_plane_state: the state from the previous modeset
  *
  * Cleans up a framebuffer that has just been removed from a plane.
  */
-- 
2.23.0


* [PATCH 02/15] drm/i915/gem: Distinguish each object type
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:07 ` [PATCH 03/15] drm/i915/execlists: Assert tasklet is locked for process_csb() Chris Wilson
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

Separate each object class into its own lockdep class to avoid
cross-contamination between paths (e.g. userptr!).
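
For illustration only (not part of the patch): a minimal sketch of the
lock_class_key pattern this relies on. Giving each allocation site its
own static key makes lockdep treat otherwise-identical mutexes as
distinct classes; all demo_* names below are hypothetical.

#include <linux/mutex.h>
#include <linux/slab.h>

struct demo_object {
	struct mutex lock;
};

static void demo_object_init(struct demo_object *obj,
			     struct lock_class_key *key)
{
	/* same lock name everywhere; the key picks the lockdep class */
	__mutex_init(&obj->lock, "demo_object->lock", key);
}

static struct demo_object *demo_create_shmem_like(void)
{
	/* one static key per object type/call-site => one lockdep class */
	static struct lock_class_key lock_class;
	struct demo_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		demo_object_init(obj, &lock_class);
	return obj;
}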

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c           | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c         | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_object.c           | 5 +++--
 drivers/gpu/drm/i915/gem/i915_gem_object.h           | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c            | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c           | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c          | 3 ++-
 drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c | 3 ++-
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c      | 8 +++++---
 drivers/gpu/drm/i915/gvt/dmabuf.c                    | 3 ++-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c        | 3 ++-
 drivers/gpu/drm/i915/selftests/mock_region.c         | 3 ++-
 12 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 96ce95c8ac5a..eaea49d08eb5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -256,6 +256,7 @@ static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
 struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 					     struct dma_buf *dma_buf)
 {
+	static struct lock_class_key lock_class;
 	struct dma_buf_attachment *attach;
 	struct drm_i915_gem_object *obj;
 	int ret;
@@ -287,7 +288,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 	}
 
 	drm_gem_private_object_init(dev, &obj->base, dma_buf->size);
-	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops);
+	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class);
 	obj->base.import_attach = attach;
 	obj->base.resv = dma_buf->resv;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index 5ae694c24df4..9cfb0e41ff06 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -164,6 +164,7 @@ struct drm_i915_gem_object *
 i915_gem_object_create_internal(struct drm_i915_private *i915,
 				phys_addr_t size)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	unsigned int cache_level;
 
@@ -178,7 +179,7 @@ i915_gem_object_create_internal(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &i915_gem_object_internal_ops);
+	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class);
 
 	/*
 	 * Mark the object as volatile, such that the pages are marked as
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index dbf9be9a79f4..a50296cce0d8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -47,9 +47,10 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
 }
 
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
-			  const struct drm_i915_gem_object_ops *ops)
+			  const struct drm_i915_gem_object_ops *ops,
+			  struct lock_class_key *key)
 {
-	mutex_init(&obj->mm.lock);
+	__mutex_init(&obj->mm.lock, "obj->mm.lock", key);
 
 	spin_lock_init(&obj->vma.lock);
 	INIT_LIST_HEAD(&obj->vma.list);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index c5e14c9c805c..b0245585b4d5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -23,7 +23,8 @@ struct drm_i915_gem_object *i915_gem_object_alloc(void);
 void i915_gem_object_free(struct drm_i915_gem_object *obj);
 
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
-			  const struct drm_i915_gem_object_ops *ops);
+			  const struct drm_i915_gem_object_ops *ops,
+			  struct lock_class_key *key);
 struct drm_i915_gem_object *
 i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size);
 struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 4c4954e8ce0a..f36f7d658380 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -458,6 +458,7 @@ static int create_shmem(struct drm_i915_private *i915,
 struct drm_i915_gem_object *
 i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	struct address_space *mapping;
 	unsigned int cache_level;
@@ -494,7 +495,7 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size)
 	mapping_set_gfp_mask(mapping, mask);
 	GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
 
-	i915_gem_object_init(obj, &i915_gem_shmem_ops);
+	i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class);
 
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index c76260ce13e3..a1f0cf2519b3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -551,6 +551,7 @@ static struct drm_i915_gem_object *
 _i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
 			       struct drm_mm_node *stolen)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	unsigned int cache_level;
 	int err = -ENOMEM;
@@ -560,7 +561,7 @@ _i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
 		goto err;
 
 	drm_gem_private_object_init(&dev_priv->drm, &obj->base, stolen->size);
-	i915_gem_object_init(obj, &i915_gem_object_stolen_ops);
+	i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class);
 
 	obj->stolen = stolen;
 	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 4f970474013f..1e045c337044 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -725,6 +725,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 		       void *data,
 		       struct drm_file *file)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_i915_gem_userptr *args = data;
 	struct drm_i915_gem_object *obj;
@@ -769,7 +770,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 		return -ENOMEM;
 
 	drm_gem_private_object_init(dev, &obj->base, args->user_size);
-	i915_gem_object_init(obj, &i915_gem_userptr_ops);
+	i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class);
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
index 3c5d17b2b670..892d12db6c49 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
@@ -96,6 +96,7 @@ huge_gem_object(struct drm_i915_private *i915,
 		phys_addr_t phys_size,
 		dma_addr_t dma_size)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	unsigned int cache_level;
 
@@ -111,7 +112,7 @@ huge_gem_object(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, dma_size);
-	i915_gem_object_init(obj, &huge_ops);
+	i915_gem_object_init(obj, &huge_ops, &lock_class);
 
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index f27772f6779a..dac8344507c1 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -149,6 +149,7 @@ huge_pages_object(struct drm_i915_private *i915,
 		  u64 size,
 		  unsigned int page_mask)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 
 	GEM_BUG_ON(!size);
@@ -165,7 +166,7 @@ huge_pages_object(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &huge_page_ops);
+	i915_gem_object_init(obj, &huge_page_ops, &lock_class);
 
 	i915_gem_object_set_volatile(obj);
 
@@ -295,6 +296,7 @@ static const struct drm_i915_gem_object_ops fake_ops_single = {
 static struct drm_i915_gem_object *
 fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 
 	GEM_BUG_ON(!size);
@@ -313,9 +315,9 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
 
 	if (single)
-		i915_gem_object_init(obj, &fake_ops_single);
+		i915_gem_object_init(obj, &fake_ops_single, &lock_class);
 	else
-		i915_gem_object_init(obj, &fake_ops);
+		i915_gem_object_init(obj, &fake_ops, &lock_class);
 
 	i915_gem_object_set_volatile(obj);
 
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
index 13044c027f27..a816aef6142b 100644
--- a/drivers/gpu/drm/i915/gvt/dmabuf.c
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
@@ -152,6 +152,7 @@ static const struct drm_i915_gem_object_ops intel_vgpu_gem_ops = {
 static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
 		struct intel_vgpu_fb_info *info)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_i915_gem_object *obj;
 
@@ -161,7 +162,7 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
 
 	drm_gem_private_object_init(dev, &obj->base,
 		roundup(info->size, PAGE_SIZE));
-	i915_gem_object_init(obj, &intel_vgpu_gem_ops);
+	i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class);
 
 	obj->read_domains = I915_GEM_DOMAIN_GTT;
 	obj->write_domain = 0;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index ebe735df6504..a1cb072e4a1b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -104,6 +104,7 @@ static const struct drm_i915_gem_object_ops fake_ops = {
 static struct drm_i915_gem_object *
 fake_dma_object(struct drm_i915_private *i915, u64 size)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 
 	GEM_BUG_ON(!size);
@@ -117,7 +118,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size)
 		goto err;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &fake_ops);
+	i915_gem_object_init(obj, &fake_ops, &lock_class);
 
 	i915_gem_object_set_volatile(obj);
 
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 7b0c99ddc2d5..b2ad41c27e67 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -19,6 +19,7 @@ mock_object_create(struct intel_memory_region *mem,
 		   resource_size_t size,
 		   unsigned int flags)
 {
+	static struct lock_class_key lock_class;
 	struct drm_i915_private *i915 = mem->i915;
 	struct drm_i915_gem_object *obj;
 
@@ -30,7 +31,7 @@ mock_object_create(struct intel_memory_region *mem,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &mock_region_obj_ops);
+	i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class);
 
 	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
 
-- 
2.23.0


* [PATCH 03/15] drm/i915/execlists: Assert tasklet is locked for process_csb()
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
  2019-10-14  9:07 ` [PATCH 02/15] drm/i915/gem: Distinguish each object type Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:07 ` [PATCH 04/15] drm/i915/execlists: Clear semaphore immediately upon ELSP promotion Chris Wilson
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

We rely on the tasklet being the only path allowed to call into
process_csb(), so assert that it is locked when we do. As the tasklet
uses a simple bitlock, there is no strong lockdep checking, so we must
make do with a plain assertion that the tasklet is running and assume
that we are the tasklet!
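
For reference, a hedged sketch of why only a plain bit test is
available: the tasklet "lock" of this era is just TASKLET_STATE_RUN
taken with test_and_set_bit(), which lockdep never sees. The demo_*
helpers below merely restate that idea and are not part of the patch.

#include <linux/interrupt.h>

/* how the bitlock is taken (mirrors tasklet_trylock on SMP) */
static inline bool demo_tasklet_trylock(struct tasklet_struct *t)
{
	return !test_and_set_bit(TASKLET_STATE_RUN, &t->state);
}

/* hence the strongest assertion we can make from inside the tasklet */
static inline bool demo_tasklet_is_locked(const struct tasklet_struct *t)
{
	return test_bit(TASKLET_STATE_RUN, &t->state);
}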

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 1 +
 drivers/gpu/drm/i915/i915_gem.h     | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 16b878d35814..fc4be76b3070 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1843,6 +1843,7 @@ static void process_csb(struct intel_engine_cs *engine)
 	u8 head, tail;
 
 	GEM_BUG_ON(USES_GUC_SUBMISSION(engine->i915));
+	GEM_BUG_ON(!tasklet_is_locked(&execlists->tasklet));
 
 	/*
 	 * Note that csb_write, csb_status may be either in HWSP or mmio.
diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
index db20d2b0842b..f6f9675848b8 100644
--- a/drivers/gpu/drm/i915/i915_gem.h
+++ b/drivers/gpu/drm/i915/i915_gem.h
@@ -86,6 +86,11 @@ static inline void tasklet_lock(struct tasklet_struct *t)
 		cpu_relax();
 }
 
+static inline bool tasklet_is_locked(const struct tasklet_struct *t)
+{
+	return test_bit(TASKLET_STATE_RUN, &t->state);
+}
+
 static inline void __tasklet_disable_sync_once(struct tasklet_struct *t)
 {
 	if (!atomic_fetch_inc(&t->count))
-- 
2.23.0


* [PATCH 04/15] drm/i915/execlists: Clear semaphore immediately upon ELSP promotion
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
  2019-10-14  9:07 ` [PATCH 02/15] drm/i915/gem: Distinguish each object type Chris Wilson
  2019-10-14  9:07 ` [PATCH 03/15] drm/i915/execlists: Assert tasklet is locked for process_csb() Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:07 ` [PATCH 05/15] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

There is no significance to our delay before clearing the semaphore the
engine is waiting on, so release it as soon as we acknowledge the CS
update following our preemption request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index fc4be76b3070..ce59f868ce51 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1904,6 +1904,9 @@ static void process_csb(struct intel_engine_cs *engine)
 		else
 			promote = gen8_csb_parse(execlists, buf + 2 * head);
 		if (promote) {
+			if (!inject_preempt_hang(execlists))
+				ring_set_paused(engine, 0);
+
 			/* cancel old inflight, prepare for switch */
 			trace_ports(execlists, "preempted", execlists->active);
 			while (*execlists->active)
@@ -1920,9 +1923,6 @@ static void process_csb(struct intel_engine_cs *engine)
 			if (enable_timeslice(execlists))
 				mod_timer(&execlists->timer, jiffies + 1);
 
-			if (!inject_preempt_hang(execlists))
-				ring_set_paused(engine, 0);
-
 			WRITE_ONCE(execlists->pending[0], NULL);
 		} else {
 			GEM_BUG_ON(!*execlists->active);
-- 
2.23.0


* [PATCH 05/15] drm/i915/execlists: Tweak virtual unsubmission
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (2 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 04/15] drm/i915/execlists: Clear semaphore immediately upon ELSP promotion Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:07 ` [PATCH 06/15] drm/i915/selftests: Check known register values within the context Chris Wilson
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
overtaking each other on preemption") we have restricted requests to run
on their chosen engine across preemption events. We can take this
restriction into account to know that we will want to resubmit those
requests onto the same physical engine, and so can short-circuit the
virtual engine selection process and keep the request on the same
engine during unwind.

References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
 drivers/gpu/drm/i915/i915_request.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index ce59f868ce51..0fbd72a9ab56 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -847,7 +847,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 	list_for_each_entry_safe_reverse(rq, rn,
 					 &engine->active.requests,
 					 sched.link) {
-		struct intel_engine_cs *owner;
 
 		if (i915_request_completed(rq))
 			continue; /* XXX */
@@ -862,8 +861,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 		 * engine so that it can be moved across onto another physical
 		 * engine as load dictates.
 		 */
-		owner = rq->hw_context->engine;
-		if (likely(owner == engine)) {
+		if (likely(rq->execution_mask == engine->mask)) {
 			GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
 			if (rq_prio(rq) != prio) {
 				prio = rq_prio(rq);
@@ -874,6 +872,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 			list_move(&rq->sched.link, pl);
 			active = rq;
 		} else {
+			struct intel_engine_cs *owner = rq->hw_context->engine;
+
 			/*
 			 * Decouple the virtual breadcrumb before moving it
 			 * back to the virtual engine -- we don't want the
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 437f9fc6282e..b8a54572a4f8 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -649,6 +649,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->gem_context = ce->gem_context;
 	rq->engine = ce->engine;
 	rq->ring = ce->ring;
+	rq->execution_mask = ce->engine->mask;
 
 	rcu_assign_pointer(rq->timeline, tl);
 	rq->hwsp_seqno = tl->hwsp_seqno;
@@ -671,7 +672,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->batch = NULL;
 	rq->capture_list = NULL;
 	rq->flags = 0;
-	rq->execution_mask = ALL_ENGINES;
 
 	INIT_LIST_HEAD(&rq->execute_cb);
 
-- 
2.23.0


* [PATCH 06/15] drm/i915/selftests: Check known register values within the context
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (3 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 05/15] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:59   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 07/15] drm/i915/selftests: Check that GPR are cleared for new contexts Chris Wilson
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

Check the logical ring context by asserting that the registers hold the
expected values during execution. (It's a bit chicken-and-egg, as how
could we manage to execute our request at all if the registers were not
being updated? Still, it's nice to verify that the HW is working as
expected.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 126 +++++++++++++++++++++++++
 1 file changed, 126 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index a691e429ca01..0aa36b1b2389 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -2599,10 +2599,136 @@ static int live_lrc_layout(void *arg)
 	return err;
 }
 
+static int __live_lrc_state(struct i915_gem_context *fixme,
+			    struct intel_engine_cs *engine,
+			    struct i915_vma *scratch)
+{
+	struct intel_context *ce;
+	struct i915_request *rq;
+	enum {
+		RING_START_IDX = 0,
+		RING_TAIL_IDX,
+		MAX_IDX
+	};
+	u32 expected[MAX_IDX];
+	u32 *cs;
+	int err;
+	int n;
+
+	ce = intel_context_create(fixme, engine);
+	if (IS_ERR(ce))
+		return PTR_ERR(ce);
+
+	err = intel_context_pin(ce);
+	if (err)
+		goto err_put;
+
+	rq = i915_request_create(ce);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto err_unpin;
+	}
+
+	cs = intel_ring_begin(rq, 4 * MAX_IDX);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		i915_request_add(rq);
+		goto err_unpin;
+	}
+
+	*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+	*cs++ = i915_mmio_reg_offset(RING_START(engine->mmio_base));
+	*cs++ = i915_ggtt_offset(scratch) + RING_START_IDX * sizeof(u32);
+	*cs++ = 0;
+
+	expected[RING_START_IDX] = i915_ggtt_offset(ce->ring->vma);
+
+	*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+	*cs++ = i915_mmio_reg_offset(RING_TAIL(engine->mmio_base));
+	*cs++ = i915_ggtt_offset(scratch) + RING_TAIL_IDX * sizeof(u32);
+	*cs++ = 0;
+
+	i915_request_get(rq);
+	i915_request_add(rq);
+
+	intel_engine_flush_submission(engine);
+	expected[RING_TAIL_IDX] = ce->ring->tail;
+
+	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
+		err = -ETIME;
+		goto err_rq;
+	}
+
+	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		goto err_rq;
+	}
+
+	for (n = 0; n < MAX_IDX; n++) {
+		if (cs[n] != expected[n]) {
+			pr_err("%s: Stored register[%d] value[0x%x] did not match expected[0x%x]\n",
+			       engine->name, n, cs[n], expected[n]);
+			err = -EINVAL;
+			break;
+		}
+	}
+
+	i915_gem_object_unpin_map(scratch->obj);
+
+err_rq:
+	i915_request_put(rq);
+err_unpin:
+	intel_context_unpin(ce);
+err_put:
+	intel_context_put(ce);
+	return err;
+}
+
+static int live_lrc_state(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct intel_engine_cs *engine;
+	struct i915_gem_context *fixme;
+	struct i915_vma *scratch;
+	enum intel_engine_id id;
+	int err = 0;
+
+	/*
+	 * Check the live register state matches what we expect for this
+	 * intel_context.
+	 */
+
+	fixme = kernel_context(gt->i915);
+	if (!fixme)
+		return -ENOMEM;
+
+	scratch = create_scratch(gt);
+	if (IS_ERR(scratch)) {
+		err = PTR_ERR(scratch);
+		goto out_close;
+	}
+
+	for_each_engine(engine, gt->i915, id) {
+		err = __live_lrc_state(fixme, engine, scratch);
+		if (err)
+			break;
+	}
+
+	if (igt_flush_test(gt->i915))
+		err = -EIO;
+
+	i915_vma_unpin_and_release(&scratch, 0);
+out_close:
+	kernel_context_close(fixme);
+	return err;
+}
+
 int intel_lrc_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(live_lrc_layout),
+		SUBTEST(live_lrc_state),
 	};
 
 	if (!HAS_LOGICAL_RING_CONTEXTS(i915))
-- 
2.23.0


* [PATCH 07/15] drm/i915/selftests: Check that GPR are cleared for new contexts
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (4 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 06/15] drm/i915/selftests: Check known register values within the context Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 10:08   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 08/15] drm/i915: Expose engine properties via sysfs Chris Wilson
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

We want the general purpose registers to be clear in all new contexts so
that we can be confident that no information is leaked from one to the
next.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 185 ++++++++++++++++++++++---
 1 file changed, 166 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 0aa36b1b2389..1276da059dc6 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -19,6 +19,9 @@
 #include "gem/selftests/igt_gem_utils.h"
 #include "gem/selftests/mock_context.h"
 
+#define CS_GPR(engine, n) ((engine)->mmio_base + 0x600 + (n) * 4)
+#define NUM_GPR_DW (16 * 2) /* each GPR is 2 dwords */
+
 static struct i915_vma *create_scratch(struct intel_gt *gt)
 {
 	struct drm_i915_gem_object *obj;
@@ -2107,16 +2110,14 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
 				    struct intel_engine_cs **siblings,
 				    unsigned int nsibling)
 {
-#define CS_GPR(engine, n) ((engine)->mmio_base + 0x600 + (n) * 4)
-
 	struct i915_request *last = NULL;
 	struct i915_gem_context *ctx;
 	struct intel_context *ve;
 	struct i915_vma *scratch;
 	struct igt_live_test t;
-	const int num_gpr = 16 * 2; /* each GPR is 2 dwords */
 	unsigned int n;
 	int err = 0;
+	u32 *cs;
 
 	ctx = kernel_context(i915);
 	if (!ctx)
@@ -2142,10 +2143,9 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
 	if (err)
 		goto out_unpin;
 
-	for (n = 0; n < num_gpr; n++) {
+	for (n = 0; n < NUM_GPR_DW; n++) {
 		struct intel_engine_cs *engine = siblings[n % nsibling];
 		struct i915_request *rq;
-		u32 *cs;
 
 		rq = i915_request_create(ve);
 		if (IS_ERR(rq)) {
@@ -2169,7 +2169,7 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
 		*cs++ = 0;
 
 		*cs++ = MI_LOAD_REGISTER_IMM(1);
-		*cs++ = CS_GPR(engine, (n + 1) % num_gpr);
+		*cs++ = CS_GPR(engine, (n + 1) % NUM_GPR_DW);
 		*cs++ = n + 1;
 
 		*cs++ = MI_NOOP;
@@ -2182,21 +2182,26 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
 
 	if (i915_request_wait(last, 0, HZ / 5) < 0) {
 		err = -ETIME;
-	} else {
-		u32 *map = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+		goto out_end;
+	}
 
-		for (n = 0; n < num_gpr; n++) {
-			if (map[n] != n) {
-				pr_err("Incorrect value[%d] found for GPR[%d]\n",
-				       map[n], n);
-				err = -EINVAL;
-				break;
-			}
-		}
+	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		goto out_end;
+	}
 
-		i915_gem_object_unpin_map(scratch->obj);
+	for (n = 0; n < NUM_GPR_DW; n++) {
+		if (cs[n] != n) {
+			pr_err("Incorrect value[%d] found for GPR[%d]\n",
+			       cs[n], n);
+			err = -EINVAL;
+			break;
+		}
 	}
 
+	i915_gem_object_unpin_map(scratch->obj);
+
 out_end:
 	if (igt_live_test_end(&t))
 		err = -EIO;
@@ -2210,8 +2215,6 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
 out_close:
 	kernel_context_close(ctx);
 	return err;
-
-#undef CS_GPR
 }
 
 static int live_virtual_preserved(void *arg)
@@ -2724,11 +2727,155 @@ static int live_lrc_state(void *arg)
 	return err;
 }
 
+static int gpr_make_dirty(struct intel_engine_cs *engine)
+{
+	struct i915_request *rq;
+	u32 *cs;
+	int n;
+
+	rq = i915_request_create(engine->kernel_context);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	cs = intel_ring_begin(rq, 2 * NUM_GPR_DW + 2);
+	if (IS_ERR(cs)) {
+		i915_request_add(rq);
+		return PTR_ERR(cs);
+	}
+
+	*cs++ = MI_LOAD_REGISTER_IMM(NUM_GPR_DW);
+	for (n = 0; n < NUM_GPR_DW; n++) {
+		*cs++ = CS_GPR(engine, n);
+		*cs++ = STACK_MAGIC;
+	}
+	*cs++ = MI_NOOP;
+
+	intel_ring_advance(rq, cs);
+	i915_request_add(rq);
+
+	return 0;
+}
+
+static int __live_gpr_clear(struct i915_gem_context *fixme,
+			    struct intel_engine_cs *engine,
+			    struct i915_vma *scratch)
+{
+	struct intel_context *ce;
+	struct i915_request *rq;
+	u32 *cs;
+	int err;
+	int n;
+
+	if (INTEL_GEN(engine->i915) < 9 && engine->class != RENDER_CLASS)
+		return 0; /* GPR only on rcs0 for gen8 */
+
+	err = gpr_make_dirty(engine);
+	if (err)
+		return err;
+
+	ce = intel_context_create(fixme, engine);
+	if (IS_ERR(ce))
+		return PTR_ERR(ce);
+
+	rq = intel_context_create_request(ce);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto err_put;
+	}
+
+	cs = intel_ring_begin(rq, 4 * NUM_GPR_DW);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		i915_request_add(rq);
+		goto err_put;
+	}
+
+	for (n = 0; n < NUM_GPR_DW; n++) {
+		*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+		*cs++ = CS_GPR(engine, n);
+		*cs++ = i915_ggtt_offset(scratch) + n * sizeof(u32);
+		*cs++ = 0;
+	}
+
+	i915_request_get(rq);
+	i915_request_add(rq);
+
+	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
+		err = -ETIME;
+		goto err_rq;
+	}
+
+	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	if (IS_ERR(cs)) {
+		err = PTR_ERR(cs);
+		goto err_rq;
+	}
+
+	for (n = 0; n < NUM_GPR_DW; n++) {
+		if (cs[n]) {
+			pr_err("%s: GPR[%d].%s was not zero, found 0x%08x!\n",
+			       engine->name,
+			       n / 2, n & 1 ? "udw" : "ldw",
+			       cs[n]);
+			err = -EINVAL;
+			break;
+		}
+	}
+
+	i915_gem_object_unpin_map(scratch->obj);
+
+err_rq:
+	i915_request_put(rq);
+err_put:
+	intel_context_put(ce);
+	return err;
+}
+
+static int live_gpr_clear(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct intel_engine_cs *engine;
+	struct i915_gem_context *fixme;
+	struct i915_vma *scratch;
+	enum intel_engine_id id;
+	int err = 0;
+
+	/*
+	 * Check that GPR registers are cleared in new contexts as we need
+	 * to avoid leaking any information from previous contexts.
+	 */
+
+	fixme = kernel_context(gt->i915);
+	if (!fixme)
+		return -ENOMEM;
+
+	scratch = create_scratch(gt);
+	if (IS_ERR(scratch)) {
+		err = PTR_ERR(scratch);
+		goto out_close;
+	}
+
+	for_each_engine(engine, gt->i915, id) {
+		err = __live_gpr_clear(fixme, engine, scratch);
+		if (err)
+			break;
+	}
+
+	if (igt_flush_test(gt->i915))
+		err = -EIO;
+
+	i915_vma_unpin_and_release(&scratch, 0);
+out_close:
+	kernel_context_close(fixme);
+	return err;
+}
+
 int intel_lrc_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(live_lrc_layout),
 		SUBTEST(live_lrc_state),
+		SUBTEST(live_gpr_clear),
 	};
 
 	if (!HAS_LOGICAL_RING_CONTEXTS(i915))
-- 
2.23.0


* [PATCH 08/15] drm/i915: Expose engine properties via sysfs
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (5 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 07/15] drm/i915/selftests: Check that GPR are cleared for new contexts Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 10:17   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 09/15] drm/i915/execlists: Force preemption Chris Wilson
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

Preliminary stub to add engines underneath /sys/class/drm/cardN/, so
that we can expose properties on each engine to the sysadmin.

To start with, we have basic analogues of the i915_query ioctl so that
we can pretty-print engine discovery from the shell, and flesh out the
directory structure. Later we will add writable sysadmin properties such
as per-engine timeout controls.

An example tree of the engine properties on Braswell:
    /sys/class/drm/card0
    └── engine
        ├── bcs0
        │   ├── capabilities
        │   ├── class
        │   ├── instance
        │   ├── known_capabilities
        │   └── name
        ├── rcs0
        │   ├── capabilities
        │   ├── class
        │   ├── instance
        │   ├── known_capabilities
        │   └── name
        ├── vcs0
        │   ├── capabilities
        │   ├── class
        │   ├── instance
        │   ├── known_capabilities
        │   └── name
        └── vecs0
            ├── capabilities
            ├── class
            ├── instance
            ├── known_capabilities
            └── name
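
For readers unfamiliar with the sysfs plumbing, a minimal sketch of the
kobject + kobj_attribute pattern used to build such a tree (all demo_*
names are illustrative, not taken from the patch):

#include <linux/kobject.h>
#include <linux/sysfs.h>

static ssize_t demo_class_show(struct kobject *kobj,
			       struct kobj_attribute *attr, char *buf)
{
	/* a real implementation would report engine->uabi_class here */
	return sprintf(buf, "%d\n", 0);
}

static struct kobj_attribute demo_class_attr =
__ATTR(class, 0444, demo_class_show, NULL);

static int demo_add_engine_dir(struct kobject *parent)
{
	/* creates e.g. .../engine/rcs0/ with a read-only 'class' file */
	struct kobject *dir = kobject_create_and_add("rcs0", parent);

	if (!dir)
		return -ENOMEM;

	return sysfs_create_file(dir, &demo_class_attr.attr);
}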

v2: Include stringified capabilities
v3: Include all known capabilities for futureproofing.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/i915/Makefile                |   3 +-
 drivers/gpu/drm/i915/gt/intel_engine_sysfs.c | 223 +++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_engine_sysfs.h |  14 ++
 drivers/gpu/drm/i915/i915_sysfs.c            |   3 +
 4 files changed, 242 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_sysfs.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index e791d9323b51..cd9a10ba2516 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -78,8 +78,9 @@ gt-y += \
 	gt/intel_breadcrumbs.o \
 	gt/intel_context.o \
 	gt/intel_engine_cs.o \
-	gt/intel_engine_pool.o \
 	gt/intel_engine_pm.o \
+	gt/intel_engine_pool.o \
+	gt/intel_engine_sysfs.o \
 	gt/intel_engine_user.o \
 	gt/intel_gt.o \
 	gt/intel_gt_irq.o \
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
new file mode 100644
index 000000000000..4e403a1b069a
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
@@ -0,0 +1,223 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include <linux/kobject.h>
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "intel_engine.h"
+#include "intel_engine_sysfs.h"
+
+struct kobj_engine {
+	struct kobject base;
+	struct intel_engine_cs *engine;
+};
+
+static struct intel_engine_cs *kobj_to_engine(struct kobject *kobj)
+{
+	return container_of(kobj, struct kobj_engine, base)->engine;
+}
+
+static ssize_t
+name_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%s\n", kobj_to_engine(kobj)->name);
+}
+
+static struct kobj_attribute name_attr =
+__ATTR(name, 0444, name_show, NULL);
+
+static ssize_t
+class_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", kobj_to_engine(kobj)->uabi_class);
+}
+
+static struct kobj_attribute class_attr =
+__ATTR(class, 0444, class_show, NULL);
+
+static ssize_t
+inst_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", kobj_to_engine(kobj)->uabi_instance);
+}
+
+static struct kobj_attribute inst_attr =
+__ATTR(instance, 0444, inst_show, NULL);
+
+static const char * const vcs_caps[] = {
+	[ilog2(I915_VIDEO_CLASS_CAPABILITY_HEVC)] = "hevc",
+	[ilog2(I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC)] = "sfc",
+};
+static const char * const vecs_caps[] = {
+	[ilog2(I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC)] = "sfc",
+};
+
+static ssize_t repr_trim(char *buf, ssize_t len)
+{
+	/* Trim off the trailing space and replace with a newline */
+	if (len > PAGE_SIZE)
+		len = PAGE_SIZE;
+	if (len > 0)
+		buf[len - 1] = '\n';
+
+	return len;
+}
+
+static ssize_t
+caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	struct intel_engine_cs *engine = kobj_to_engine(kobj);
+	const char * const *repr;
+	int num_repr, n;
+	ssize_t len;
+
+	switch (engine->class) {
+	case VIDEO_DECODE_CLASS:
+		repr = vcs_caps;
+		num_repr = ARRAY_SIZE(vcs_caps);
+		break;
+
+	case VIDEO_ENHANCEMENT_CLASS:
+		repr = vecs_caps;
+		num_repr = ARRAY_SIZE(vecs_caps);
+		break;
+
+	default:
+		repr = NULL;
+		num_repr = 0;
+		break;
+	}
+
+	len = 0;
+	for_each_set_bit(n,
+			 (unsigned long *)&engine->uabi_capabilities,
+			 BITS_PER_TYPE(typeof(engine->uabi_capabilities))) {
+		if (GEM_WARN_ON(n >= num_repr || !repr[n]))
+			len += snprintf(buf + len, PAGE_SIZE - len,
+					"[%x] ", n);
+		else
+			len += snprintf(buf + len, PAGE_SIZE - len,
+					"%s ", repr[n]);
+		if (GEM_WARN_ON(len >= PAGE_SIZE))
+			break;
+	}
+	return repr_trim(buf, len);
+}
+
+static struct kobj_attribute caps_attr =
+__ATTR(capabilities, 0444, caps_show, NULL);
+
+static ssize_t
+all_caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	struct intel_engine_cs *engine = kobj_to_engine(kobj);
+	const char * const *repr;
+	int num_repr, n;
+	ssize_t len;
+
+	switch (engine->class) {
+	case VIDEO_DECODE_CLASS:
+		repr = vcs_caps;
+		num_repr = ARRAY_SIZE(vcs_caps);
+		break;
+
+	case VIDEO_ENHANCEMENT_CLASS:
+		repr = vecs_caps;
+		num_repr = ARRAY_SIZE(vecs_caps);
+		break;
+
+	default:
+		repr = NULL;
+		num_repr = 0;
+		break;
+	}
+
+	len = 0;
+	for (n = 0; n < num_repr; n++) {
+		if (!repr[n])
+			continue;
+
+		len += snprintf(buf + len, PAGE_SIZE - len, "%s ", repr[n]);
+		if (GEM_WARN_ON(len >= PAGE_SIZE))
+			break;
+	}
+	return repr_trim(buf, len);
+}
+
+static struct kobj_attribute all_caps_attr =
+__ATTR(known_capabilities, 0444, all_caps_show, NULL);
+
+static void kobj_engine_release(struct kobject *kobj)
+{
+	kfree(kobj);
+}
+
+static struct kobj_type kobj_engine_type = {
+	.release = kobj_engine_release,
+	.sysfs_ops = &kobj_sysfs_ops
+};
+
+static struct kobject *
+kobj_engine(struct kobject *dir, struct intel_engine_cs *engine)
+{
+	struct kobj_engine *ke;
+
+	ke = kzalloc(sizeof(*ke), GFP_KERNEL);
+	if (!ke)
+		return NULL;
+
+	kobject_init(&ke->base, &kobj_engine_type);
+	ke->engine = engine;
+
+	if (kobject_add(&ke->base, dir, "%s", engine->name)) {
+		kobject_put(&ke->base);
+		return NULL;
+	}
+
+	/* xfer ownership to sysfs tree */
+	return &ke->base;
+}
+
+void intel_engines_add_sysfs(struct drm_i915_private *i915)
+{
+	static const struct attribute *files[] = {
+		&name_attr.attr,
+		&class_attr.attr,
+		&inst_attr.attr,
+		&caps_attr.attr,
+		&all_caps_attr.attr,
+		NULL
+	};
+
+	struct device *kdev = i915->drm.primary->kdev;
+	struct intel_engine_cs *engine;
+	struct kobject *dir;
+
+	dir = kobject_create_and_add("engine", &kdev->kobj);
+	if (!dir)
+		return;
+
+	for_each_uabi_engine(engine, i915) {
+		struct kobject *kobj;
+
+		kobj = kobj_engine(dir, engine);
+		if (!kobj)
+			goto err_engine;
+
+		if (sysfs_create_files(kobj, files))
+			goto err_object;
+
+		if (0) {
+err_object:
+			kobject_put(kobj);
+err_engine:
+			dev_err(kdev, "Failed to add sysfs engine '%s'\n",
+				engine->name);
+			break;
+		}
+	}
+}
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.h b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.h
new file mode 100644
index 000000000000..ef44a745b70a
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef INTEL_ENGINE_SYSFS_H
+#define INTEL_ENGINE_SYSFS_H
+
+struct drm_i915_private;
+
+void intel_engines_add_sysfs(struct drm_i915_private *i915);
+
+#endif /* INTEL_ENGINE_SYSFS_H */
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index bf039b8ba593..7b665f69f301 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -30,6 +30,7 @@
 #include <linux/stat.h>
 #include <linux/sysfs.h>
 
+#include "gt/intel_engine_sysfs.h"
 #include "gt/intel_rc6.h"
 
 #include "i915_drv.h"
@@ -616,6 +617,8 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 		DRM_ERROR("RPS sysfs setup failed\n");
 
 	i915_setup_error_capture(kdev);
+
+	intel_engines_add_sysfs(dev_priv);
 }
 
 void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
-- 
2.23.0


* [PATCH 09/15] drm/i915/execlists: Force preemption
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (6 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 08/15] drm/i915: Expose engine properties via sysfs Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:07 ` [PATCH 10/15] drm/i915/gt: Introduce barrier pulses along engines Chris Wilson
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

If the preempted context takes too long to relinquish control, e.g. it
is stuck inside a shader with arbitration disabled, evict that context
with an engine reset. This ensures that preemptions are reasonably
responsive, providing a tighter QoS for the more important context at
the cost of flagging unresponsive contexts more frequently (i.e. instead
of using an ~10s hangcheck, we now evict at ~100ms). The challenge lies
in picking a timeout that can reasonably be serviced by the HW for
typical workloads, balancing the existing clients against the need for
responsiveness.

Note that coupled with timeslicing, this will lead to rapid GPU "hang"
detection with multiple active contexts vying for GPU time.

The preempt timeout can be adjusted per-engine using,

	/sys/class/drm/card?/engine/*/preempt_timeout_ms
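
A simplified sketch of the flow implemented below, for orientation only
(demo_* names are not from the patch): arm a timer when a preemption is
submitted, and if the CS has not acknowledged the switch before it
fires, the submission tasklet escalates to an engine reset.

#include <linux/timer.h>
#include <linux/jiffies.h>

static void demo_arm_preempt_timer(struct timer_list *t, unsigned long ms)
{
	if (ms)
		mod_timer(t, jiffies + msecs_to_jiffies(ms));
	else
		del_timer(t);
}

static bool demo_preempt_timed_out(const struct timer_list *t)
{
	/* armed but no longer pending => the timeout fired unanswered */
	return READ_ONCE(t->expires) && !timer_pending(t);
}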

v2: Couple in sysfs control of preemption timeout

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/Kconfig.profile         |  15 +++
 drivers/gpu/drm/i915/gt/intel_engine_cs.c    |   2 +
 drivers/gpu/drm/i915/gt/intel_engine_sysfs.c |  33 +++++
 drivers/gpu/drm/i915/gt/intel_engine_types.h |   9 ++
 drivers/gpu/drm/i915/gt/intel_lrc.c          | 127 +++++++++++++++++--
 drivers/gpu/drm/i915/gt/selftest_lrc.c       |  98 ++++++++++++++
 drivers/gpu/drm/i915/i915_params.h           |   2 +-
 7 files changed, 277 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig.profile b/drivers/gpu/drm/i915/Kconfig.profile
index 48df8889a88a..8fceea85937b 100644
--- a/drivers/gpu/drm/i915/Kconfig.profile
+++ b/drivers/gpu/drm/i915/Kconfig.profile
@@ -25,3 +25,18 @@ config DRM_I915_SPIN_REQUEST
 	  May be 0 to disable the initial spin. In practice, we estimate
 	  the cost of enabling the interrupt (if currently disabled) to be
 	  a few microseconds.
+
+config DRM_I915_PREEMPT_TIMEOUT
+	int "Preempt timeout (ms)"
+	default 100 # milliseconds
+	help
+	  How long to wait (in milliseconds) for a preemption event to occur
+	  when submitting a new context via execlists. If the current context
+	  does not hit an arbitration point and yield to HW before the timer
+	  expires, the HW will be reset to allow the more important context
+	  to execute.
+
+	  This is adjustable via
+	  /sys/class/drm/card?/engine/*/preempt_timeout_ms
+
+	  May be 0 to disable the timeout.
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index c9d639c6becb..1eb51147839a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -304,6 +304,8 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
 	engine->instance = info->instance;
 	__sprint_engine_name(engine);
 
+	engine->props.preempt_timeout = CONFIG_DRM_I915_PREEMPT_TIMEOUT;
+
 	/*
 	 * To be overridden by the backend on setup. However to facilitate
 	 * cleanup on error during setup, we always provide the destroy vfunc.
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
index 4e403a1b069a..b9eb9e4cba0a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
@@ -151,6 +151,34 @@ all_caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
 static struct kobj_attribute all_caps_attr =
 __ATTR(known_capabilities, 0444, all_caps_show, NULL);
 
+static ssize_t
+preempt_timeout_show(struct kobject *kobj, struct kobj_attribute *attr,
+		     char *buf)
+{
+	struct intel_engine_cs *engine = kobj_to_engine(kobj);
+
+	return sprintf(buf, "%lu\n", engine->props.preempt_timeout);
+}
+
+static ssize_t
+preempt_timeout_store(struct kobject *kobj, struct kobj_attribute *attr,
+		      const char *buf, size_t count)
+{
+	struct intel_engine_cs *engine = kobj_to_engine(kobj);
+	unsigned long timeout;
+	int err;
+
+	err = kstrtoul(buf, 0, &timeout);
+	if (err)
+		return err;
+
+	WRITE_ONCE(engine->props.preempt_timeout, timeout);
+	return count;
+}
+
+static struct kobj_attribute preempt_timeout_attr =
+__ATTR(preempt_timeout_ms, 0644, preempt_timeout_show, preempt_timeout_store);
+
 static void kobj_engine_release(struct kobject *kobj)
 {
 	kfree(kobj);
@@ -211,6 +239,11 @@ void intel_engines_add_sysfs(struct drm_i915_private *i915)
 		if (sysfs_create_files(kobj, files))
 			goto err_object;
 
+		if (CONFIG_DRM_I915_PREEMPT_TIMEOUT &&
+		    intel_engine_has_preemption(engine) &&
+		    sysfs_create_file(kobj, &preempt_timeout_attr.attr))
+			goto err_engine;
+
 		if (0) {
 err_object:
 			kobject_put(kobj);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 6199064f332b..6af9b0096975 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -173,6 +173,11 @@ struct intel_engine_execlists {
 	 */
 	struct timer_list timer;
 
+	/**
+	 * @preempt: reset the current context if it fails to give way
+	 */
+	struct timer_list preempt;
+
 	/**
 	 * @default_priolist: priority list for I915_PRIORITY_NORMAL
 	 */
@@ -541,6 +546,10 @@ struct intel_engine_cs {
 		 */
 		ktime_t total;
 	} stats;
+
+	struct {
+		unsigned long preempt_timeout;
+	} props;
 };
 
 static inline bool
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 0fbd72a9ab56..e16ede75412b 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -235,6 +235,20 @@ static void execlists_init_reg_state(u32 *reg_state,
 				     const struct intel_ring *ring,
 				     bool close);
 
+static void cancel_timer(struct timer_list *t)
+{
+	if (!READ_ONCE(t->expires))
+		return;
+
+	del_timer(t);
+	WRITE_ONCE(t->expires, 0);
+}
+
+static inline bool timer_expired(const struct timer_list *t)
+{
+	return READ_ONCE(t->expires) && !timer_pending(t);
+}
+
 static void __context_pin_acquire(struct intel_context *ce)
 {
 	mutex_acquire(&ce->pin_mutex.dep_map, 2, 0, _RET_IP_);
@@ -1373,6 +1387,46 @@ static void record_preemption(struct intel_engine_execlists *execlists)
 	(void)I915_SELFTEST_ONLY(execlists->preempt_hang.count++);
 }
 
+static unsigned long active_preempt_timeout(struct intel_engine_cs *engine)
+{
+	struct i915_request *rq;
+
+	rq = last_active(&engine->execlists);
+	if (!rq)
+		return 0;
+
+	return READ_ONCE(engine->props.preempt_timeout);
+}
+
+static void set_preempt_timeout(struct intel_engine_cs *engine)
+{
+	struct timer_list *t = &engine->execlists.preempt;
+	unsigned long timeout;
+
+	if (!CONFIG_DRM_I915_PREEMPT_TIMEOUT)
+		return;
+
+	if (!intel_engine_has_preemption(engine))
+		return;
+
+	timeout = active_preempt_timeout(engine);
+	if (!timeout) {
+		cancel_timer(t);
+		return;
+	}
+
+	timeout = msecs_to_jiffies_timeout(timeout);
+	/*
+	 * Paranoia to make sure the compiler computes the timeout before
+	 * loading 'jiffies' as jiffies is volatile and may be updated in
+	 * the background by a timer tick. All to reduce the complexity
+	 * of the addition and reduce the risk of losing a jiffie.
+	 */
+	barrier();
+
+	mod_timer(t, jiffies + timeout);
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -1739,6 +1793,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 
 		memset(port + 1, 0, (last_port - port) * sizeof(*port));
 		execlists_submit_ports(engine);
+
+		set_preempt_timeout(engine);
 	} else {
 skip_submit:
 		ring_set_paused(engine, 0);
@@ -1971,6 +2027,43 @@ static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
 	}
 }
 
+static noinline void preempt_reset(struct intel_engine_cs *engine)
+{
+	const unsigned int bit = I915_RESET_ENGINE + engine->id;
+	unsigned long *lock = &engine->gt->reset.flags;
+
+	if (i915_modparams.reset < 3)
+		return;
+
+	if (test_and_set_bit(bit, lock))
+		return;
+
+	/* Mark this tasklet as disabled to avoid waiting for it to complete */
+	tasklet_disable_nosync(&engine->execlists.tasklet);
+
+	GEM_TRACE("%s: preempt timeout %lu+%ums\n",
+		  engine->name,
+		  engine->props.preempt_timeout,
+		  jiffies_to_msecs(jiffies - engine->execlists.preempt.expires));
+	intel_engine_reset(engine, "preemption time out");
+
+	tasklet_enable(&engine->execlists.tasklet);
+	clear_and_wake_up_bit(bit, lock);
+}
+
+static bool preempt_timeout(const struct intel_engine_cs *const engine)
+{
+	const struct timer_list *t = &engine->execlists.preempt;
+
+	if (!CONFIG_DRM_I915_PREEMPT_TIMEOUT)
+		return false;
+
+	if (!timer_expired(t))
+		return false;
+
+	return READ_ONCE(engine->execlists.pending[0]);
+}
+
 /*
  * Check the unread Context Status Buffers and manage the submission of new
  * contexts to the ELSP accordingly.
@@ -1978,23 +2071,39 @@ static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
 static void execlists_submission_tasklet(unsigned long data)
 {
 	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
-	unsigned long flags;
+	bool timeout = preempt_timeout(engine);
 
 	process_csb(engine);
-	if (!READ_ONCE(engine->execlists.pending[0])) {
+	if (!READ_ONCE(engine->execlists.pending[0]) || timeout) {
+		unsigned long flags;
+
 		spin_lock_irqsave(&engine->active.lock, flags);
 		__execlists_submission_tasklet(engine);
 		spin_unlock_irqrestore(&engine->active.lock, flags);
+
+		/* Recheck after serialising with direct-submission */
+		if (timeout && preempt_timeout(engine))
+			preempt_reset(engine);
 	}
 }
 
-static void execlists_submission_timer(struct timer_list *timer)
+static void __execlists_kick(struct intel_engine_execlists *execlists)
 {
-	struct intel_engine_cs *engine =
-		from_timer(engine, timer, execlists.timer);
-
 	/* Kick the tasklet for some interrupt coalescing and reset handling */
-	tasklet_hi_schedule(&engine->execlists.tasklet);
+	tasklet_hi_schedule(&execlists->tasklet);
+}
+
+#define execlists_kick(t, member) \
+	__execlists_kick(container_of(t, struct intel_engine_execlists, member))
+
+static void execlists_timeslice(struct timer_list *timer)
+{
+	execlists_kick(timer, timer);
+}
+
+static void execlists_preempt(struct timer_list *timer)
+{
+	execlists_kick(timer, preempt);
 }
 
 static void queue_request(struct intel_engine_cs *engine,
@@ -3417,6 +3526,7 @@ gen12_emit_fini_breadcrumb_rcs(struct i915_request *request, u32 *cs)
 static void execlists_park(struct intel_engine_cs *engine)
 {
 	del_timer(&engine->execlists.timer);
+	del_timer(&engine->execlists.preempt);
 }
 
 void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
@@ -3534,7 +3644,8 @@ int intel_execlists_submission_setup(struct intel_engine_cs *engine)
 {
 	tasklet_init(&engine->execlists.tasklet,
 		     execlists_submission_tasklet, (unsigned long)engine);
-	timer_setup(&engine->execlists.timer, execlists_submission_timer, 0);
+	timer_setup(&engine->execlists.timer, execlists_timeslice, 0);
+	timer_setup(&engine->execlists.preempt, execlists_preempt, 0);
 
 	logical_ring_default_vfuncs(engine);
 	logical_ring_default_irqs(engine);
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 1276da059dc6..e7a86f60cf82 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -1553,6 +1553,103 @@ static int live_preempt_hang(void *arg)
 	return err;
 }
 
+static int live_preempt_timeout(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct i915_gem_context *ctx_hi, *ctx_lo;
+	struct igt_spinner spin_lo;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	int err = -ENOMEM;
+
+	/*
+	 * Check that we force preemption to occur by cancelling the previous
+	 * context if it refuses to yield the GPU.
+	 */
+
+	if (!HAS_LOGICAL_RING_PREEMPTION(i915))
+		return 0;
+
+	if (!intel_has_reset_engine(&i915->gt))
+		return 0;
+
+	if (igt_spinner_init(&spin_lo, &i915->gt))
+		return -ENOMEM;
+
+	ctx_hi = kernel_context(i915);
+	if (!ctx_hi)
+		goto err_spin_lo;
+	ctx_hi->sched.priority =
+		I915_USER_PRIORITY(I915_CONTEXT_MAX_USER_PRIORITY);
+
+	ctx_lo = kernel_context(i915);
+	if (!ctx_lo)
+		goto err_ctx_hi;
+	ctx_lo->sched.priority =
+		I915_USER_PRIORITY(I915_CONTEXT_MIN_USER_PRIORITY);
+
+	for_each_engine(engine, i915, id) {
+		unsigned long saved_timeout;
+		struct i915_request *rq;
+
+		if (!intel_engine_has_preemption(engine))
+			continue;
+
+		rq = spinner_create_request(&spin_lo, ctx_lo, engine,
+					    MI_NOOP); /* preemption disabled */
+		if (IS_ERR(rq)) {
+			err = PTR_ERR(rq);
+			goto err_ctx_lo;
+		}
+
+		i915_request_add(rq);
+		if (!igt_wait_for_spinner(&spin_lo, rq)) {
+			intel_gt_set_wedged(&i915->gt);
+			err = -EIO;
+			goto err_ctx_lo;
+		}
+
+		rq = igt_request_alloc(ctx_hi, engine);
+		if (IS_ERR(rq)) {
+			igt_spinner_end(&spin_lo);
+			err = PTR_ERR(rq);
+			goto err_ctx_lo;
+		}
+
+		/* Flush the previous CS ack before changing timeouts */
+		while (READ_ONCE(engine->execlists.pending[0]))
+			cpu_relax();
+
+		saved_timeout = engine->props.preempt_timeout;
+		engine->props.preempt_timeout = 1; /* in ms, -> 1 jiffie */
+
+		i915_request_get(rq);
+		i915_request_add(rq);
+
+		intel_engine_flush_submission(engine);
+		engine->props.preempt_timeout = saved_timeout;
+
+		if (i915_request_wait(rq, 0, HZ / 10) < 0) {
+			intel_gt_set_wedged(&i915->gt);
+			i915_request_put(rq);
+			err = -ETIME;
+			goto err_ctx_lo;
+		}
+
+		igt_spinner_end(&spin_lo);
+		i915_request_put(rq);
+	}
+
+	err = 0;
+err_ctx_lo:
+	kernel_context_close(ctx_lo);
+err_ctx_hi:
+	kernel_context_close(ctx_hi);
+err_spin_lo:
+	igt_spinner_fini(&spin_lo);
+	return err;
+}
+
 static int random_range(struct rnd_state *rnd, int min, int max)
 {
 	return i915_prandom_u32_max_state(max - min, rnd) + min;
@@ -2456,6 +2553,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_suppress_wait_preempt),
 		SUBTEST(live_chain_preempt),
 		SUBTEST(live_preempt_hang),
+		SUBTEST(live_preempt_timeout),
 		SUBTEST(live_preempt_smoke),
 		SUBTEST(live_virtual_engine),
 		SUBTEST(live_virtual_mask),
diff --git a/drivers/gpu/drm/i915/i915_params.h b/drivers/gpu/drm/i915/i915_params.h
index d29ade3b7de6..56058978bb27 100644
--- a/drivers/gpu/drm/i915/i915_params.h
+++ b/drivers/gpu/drm/i915/i915_params.h
@@ -61,7 +61,7 @@ struct drm_printer;
 	param(char *, dmc_firmware_path, NULL) \
 	param(int, mmio_debug, -IS_ENABLED(CONFIG_DRM_I915_DEBUG_MMIO)) \
 	param(int, edp_vswing, 0) \
-	param(int, reset, 2) \
+	param(int, reset, 3) \
 	param(unsigned int, inject_load_failure, 0) \
 	param(int, fastboot, -1) \
 	param(int, enable_dpcd_backlight, 0) \
-- 
2.23.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 10/15] drm/i915/gt: Introduce barrier pulses along engines
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (7 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 09/15] drm/i915/execlists: Force preemption Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 11:03   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out Chris Wilson
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

To flush idle barriers, and even inflight requests, we want to send a
preemptive 'pulse' along an engine. We use a no-op request on the
pinned kernel_context, submitted at high priority, so that it either runs
promptly or kicks the stuck requests off the engine. We can use this to
ensure idle barriers are
immediately flushed, as part of a context cancellation mechanism, or as
part of a heartbeat mechanism to detect and reset a stuck GPU.
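
Purely as an illustration of the intended use (not part of this patch), a
caller wanting to flush idle barriers everywhere might loop over the
engines roughly like this, assuming it already holds a struct
drm_i915_private pointer:

	struct intel_engine_cs *engine;
	enum intel_engine_id id;
	int err = 0;

	for_each_engine(engine, i915, id) {
		/* no-op on parked engines; -ENODEV without preemption */
		err = intel_engine_pulse(engine);
		if (err)
			break;
	}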

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/Makefile                 |  1 +
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 56 +++++++++++++++++++
 .../gpu/drm/i915/gt/intel_engine_heartbeat.h  | 14 +++++
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |  2 +-
 drivers/gpu/drm/i915/i915_priolist_types.h    |  1 +
 5 files changed, 73 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index cd9a10ba2516..cfab7c8585b3 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -78,6 +78,7 @@ gt-y += \
 	gt/intel_breadcrumbs.o \
 	gt/intel_context.o \
 	gt/intel_engine_cs.o \
+	gt/intel_engine_heartbeat.o \
 	gt/intel_engine_pm.o \
 	gt/intel_engine_pool.o \
 	gt/intel_engine_sysfs.o \
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
new file mode 100644
index 000000000000..2fc413f9d506
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "i915_request.h"
+
+#include "intel_context.h"
+#include "intel_engine_heartbeat.h"
+#include "intel_engine_pm.h"
+#include "intel_engine.h"
+#include "intel_gt.h"
+
+static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
+{
+	engine->wakeref_serial = READ_ONCE(engine->serial) + 1;
+	i915_request_add_active_barriers(rq);
+}
+
+int intel_engine_pulse(struct intel_engine_cs *engine)
+{
+	struct i915_sched_attr attr = { .priority = I915_PRIORITY_BARRIER };
+	struct intel_context *ce = engine->kernel_context;
+	struct i915_request *rq;
+	int err = 0;
+
+	if (!intel_engine_has_preemption(engine))
+		return -ENODEV;
+
+	if (!intel_engine_pm_get_if_awake(engine))
+		return 0;
+
+	if (mutex_lock_interruptible(&ce->timeline->mutex))
+		goto out_rpm;
+
+	intel_context_enter(ce);
+	rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
+	intel_context_exit(ce);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto out_unlock;
+	}
+
+	rq->flags |= I915_REQUEST_SENTINEL;
+	idle_pulse(engine, rq);
+
+	__i915_request_commit(rq);
+	__i915_request_queue(rq, &attr);
+
+out_unlock:
+	mutex_unlock(&ce->timeline->mutex);
+out_rpm:
+	intel_engine_pm_put(engine);
+	return err;
+}
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
new file mode 100644
index 000000000000..b950451b5998
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef INTEL_ENGINE_HEARTBEAT_H
+#define INTEL_ENGINE_HEARTBEAT_H
+
+struct intel_engine_cs;
+
+int intel_engine_pulse(struct intel_engine_cs *engine);
+
+#endif /* INTEL_ENGINE_HEARTBEAT_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 67eb6183648a..7d76611d9df1 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -111,7 +111,7 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 	i915_request_add_active_barriers(rq);
 
 	/* Install ourselves as a preemption barrier */
-	rq->sched.attr.priority = I915_PRIORITY_UNPREEMPTABLE;
+	rq->sched.attr.priority = I915_PRIORITY_BARRIER;
 	__i915_request_commit(rq);
 
 	/* Release our exclusive hold on the engine */
diff --git a/drivers/gpu/drm/i915/i915_priolist_types.h b/drivers/gpu/drm/i915/i915_priolist_types.h
index 21037a2e2038..ae8bb3cb627e 100644
--- a/drivers/gpu/drm/i915/i915_priolist_types.h
+++ b/drivers/gpu/drm/i915/i915_priolist_types.h
@@ -39,6 +39,7 @@ enum {
  * active request.
  */
 #define I915_PRIORITY_UNPREEMPTABLE INT_MAX
+#define I915_PRIORITY_BARRIER INT_MAX
 
 #define __NO_PREEMPTION (I915_PRIORITY_WAIT)
 
-- 
2.23.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (8 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 10/15] drm/i915/gt: Introduce barrier pulses along engines Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 12:00   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close Chris Wilson
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

On schedule-out (CS completion) of a banned context, scrub the context
image so that we do not replay the active payload. The intent is that we
skip banned payloads on request submission so that timeline
advancement continues in the background. However, if we are returning
to a preempted request, i915_request_skip() is ineffective and instead we
need to patch up the context image so that it continues from the start
of the next request.

v2: Fixup cancellation so that we only scrub the payload of the active
request and do not short-circuit the breadcrumbs (which might cause
other contexts to execute out of order).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c    | 129 ++++++++----
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 273 +++++++++++++++++++++++++
 2 files changed, 361 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index e16ede75412b..b76b35194114 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -234,6 +234,9 @@ static void execlists_init_reg_state(u32 *reg_state,
 				     const struct intel_engine_cs *engine,
 				     const struct intel_ring *ring,
 				     bool close);
+static void
+__execlists_update_reg_state(const struct intel_context *ce,
+			     const struct intel_engine_cs *engine);
 
 static void cancel_timer(struct timer_list *t)
 {
@@ -270,6 +273,31 @@ static void mark_eio(struct i915_request *rq)
 	i915_request_mark_complete(rq);
 }
 
+static struct i915_request *active_request(struct i915_request *rq)
+{
+	const struct intel_context * const ce = rq->hw_context;
+	struct i915_request *active = NULL;
+	struct list_head *list;
+
+	if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
+		return rq;
+
+	rcu_read_lock();
+	list = &rcu_dereference(rq->timeline)->requests;
+	list_for_each_entry_from_reverse(rq, list, link) {
+		if (i915_request_completed(rq))
+			break;
+
+		if (rq->hw_context != ce)
+			break;
+
+		active = rq;
+	}
+	rcu_read_unlock();
+
+	return active;
+}
+
 static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
 {
 	return (i915_ggtt_offset(engine->status_page.vma) +
@@ -991,6 +1019,56 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 		tasklet_schedule(&ve->base.execlists.tasklet);
 }
 
+static void restore_default_state(struct intel_context *ce,
+				  struct intel_engine_cs *engine)
+{
+	u32 *regs = ce->lrc_reg_state;
+
+	if (engine->pinned_default_state)
+		memcpy(regs, /* skip restoring the vanilla PPHWSP */
+		       engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
+		       engine->context_size - PAGE_SIZE);
+
+	execlists_init_reg_state(regs, ce, engine, ce->ring, false);
+}
+
+static void cancel_active(struct i915_request *rq,
+			  struct intel_engine_cs *engine)
+{
+	struct intel_context * const ce = rq->hw_context;
+
+	/*
+	 * The executing context has been cancelled. Fixup the context so that
+	 * it will be marked as incomplete [-EIO] upon resubmission and not
+	 * execute any user payloads. We preserve the breadcrumbs and
+	 * semaphores of the incomplete requests so that inter-timeline
+	 * dependencies (i.e other timelines) remain correctly ordered.
+	 *
+	 * See __i915_request_submit() for applying -EIO and removing the
+	 * payload on resubmission.
+	 */
+	GEM_TRACE("%s(%s): { rq=%llx:%lld }\n",
+		  __func__, engine->name, rq->fence.context, rq->fence.seqno);
+	__context_pin_acquire(ce);
+
+	/* On resubmission of the active request, payload will be scrubbed */
+	rq = active_request(rq);
+	if (rq)
+		ce->ring->head = intel_ring_wrap(ce->ring, rq->head);
+	else
+		ce->ring->head = ce->ring->tail;
+	intel_ring_update_space(ce->ring);
+
+	/* Scrub the context image to prevent replaying the previous batch */
+	restore_default_state(ce, engine);
+	__execlists_update_reg_state(ce, engine);
+
+	/* We've switched away, so this should be a no-op, but intent matters */
+	ce->lrc_desc |= CTX_DESC_FORCE_RESTORE;
+
+	__context_pin_release(ce);
+}
+
 static inline void
 __execlists_schedule_out(struct i915_request *rq,
 			 struct intel_engine_cs * const engine)
@@ -1001,6 +1079,9 @@ __execlists_schedule_out(struct i915_request *rq,
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
 	intel_gt_pm_put(engine->gt);
 
+	if (unlikely(i915_gem_context_is_banned(ce->gem_context)))
+		cancel_active(rq, engine);
+
 	/*
 	 * If this is part of a virtual engine, its next request may
 	 * have been blocked waiting for access to the active context.
@@ -1395,6 +1476,10 @@ static unsigned long active_preempt_timeout(struct intel_engine_cs *engine)
 	if (!rq)
 		return 0;
 
+	/* Force a fast reset for terminated contexts (ignoring sysfs!) */
+	if (unlikely(i915_gem_context_is_banned(rq->gem_context)))
+		return 1;
+
 	return READ_ONCE(engine->props.preempt_timeout);
 }
 
@@ -2810,29 +2895,6 @@ static void reset_csb_pointers(struct intel_engine_cs *engine)
 			       &execlists->csb_status[reset_value]);
 }
 
-static struct i915_request *active_request(struct i915_request *rq)
-{
-	const struct intel_context * const ce = rq->hw_context;
-	struct i915_request *active = NULL;
-	struct list_head *list;
-
-	if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
-		return rq;
-
-	list = &i915_request_active_timeline(rq)->requests;
-	list_for_each_entry_from_reverse(rq, list, link) {
-		if (i915_request_completed(rq))
-			break;
-
-		if (rq->hw_context != ce)
-			break;
-
-		active = rq;
-	}
-
-	return active;
-}
-
 static void __execlists_reset_reg_state(const struct intel_context *ce,
 					const struct intel_engine_cs *engine)
 {
@@ -2849,7 +2911,6 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct intel_context *ce;
 	struct i915_request *rq;
-	u32 *regs;
 
 	mb(); /* paranoia: read the CSB pointers from after the reset */
 	clflush(execlists->csb_write);
@@ -2928,13 +2989,7 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	 * to recreate its own state.
 	 */
 	GEM_BUG_ON(!intel_context_is_pinned(ce));
-	regs = ce->lrc_reg_state;
-	if (engine->pinned_default_state) {
-		memcpy(regs, /* skip restoring the vanilla PPHWSP */
-		       engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
-		       engine->context_size - PAGE_SIZE);
-	}
-	execlists_init_reg_state(regs, ce, engine, ce->ring, false);
+	restore_default_state(ce, engine);
 
 out_replay:
 	GEM_TRACE("%s replay {head:%04x, tail:%04x\n",
@@ -4571,16 +4626,8 @@ void intel_lr_context_reset(struct intel_engine_cs *engine,
 	 * future request will be after userspace has had the opportunity
 	 * to recreate its own state.
 	 */
-	if (scrub) {
-		u32 *regs = ce->lrc_reg_state;
-
-		if (engine->pinned_default_state) {
-			memcpy(regs, /* skip restoring the vanilla PPHWSP */
-			       engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
-			       engine->context_size - PAGE_SIZE);
-		}
-		execlists_init_reg_state(regs, ce, engine, ce->ring, false);
-	}
+	if (scrub)
+		restore_default_state(ce, engine);
 
 	/* Rerun the request; its payload has been neutered (if guilty). */
 	ce->ring->head = head;
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index e7a86f60cf82..dd3e7b47b76d 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -7,6 +7,7 @@
 #include <linux/prime_numbers.h>
 
 #include "gem/i915_gem_pm.h"
+#include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_reset.h"
 
 #include "i915_selftest.h"
@@ -1016,6 +1017,277 @@ static int live_nopreempt(void *arg)
 	goto err_client_b;
 }
 
+struct live_preempt_cancel {
+	struct intel_engine_cs *engine;
+	struct preempt_client a, b;
+};
+
+static int __cancel_active0(struct live_preempt_cancel *arg)
+{
+	struct i915_request *rq;
+	struct igt_live_test t;
+	int err;
+
+	/* Preempt cancel of ELSP0 */
+	GEM_TRACE("%s(%s)\n", __func__, arg->engine->name);
+
+	if (igt_live_test_begin(&t, arg->engine->i915,
+				__func__, arg->engine->name))
+		return -EIO;
+
+	clear_bit(CONTEXT_BANNED, &arg->a.ctx->flags);
+	rq = spinner_create_request(&arg->a.spin,
+				    arg->a.ctx, arg->engine,
+				    MI_ARB_CHECK);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	i915_request_get(rq);
+	i915_request_add(rq);
+	if (!igt_wait_for_spinner(&arg->a.spin, rq)) {
+		err = -EIO;
+		goto out;
+	}
+
+	i915_gem_context_set_banned(arg->a.ctx);
+	err = intel_engine_pulse(arg->engine);
+	if (err)
+		goto out;
+
+	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
+		err = -EIO;
+		goto out;
+	}
+
+	if (rq->fence.error != -EIO) {
+		pr_err("Cancelled inflight0 request did not report -EIO\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+out:
+	i915_request_put(rq);
+	if (igt_live_test_end(&t))
+		err = -EIO;
+	return err;
+}
+
+static int __cancel_active1(struct live_preempt_cancel *arg)
+{
+	struct i915_request *rq[2] = {};
+	struct igt_live_test t;
+	int err;
+
+	/* Preempt cancel of ELSP1 */
+	GEM_TRACE("%s(%s)\n", __func__, arg->engine->name);
+
+	if (igt_live_test_begin(&t, arg->engine->i915,
+				__func__, arg->engine->name))
+		return -EIO;
+
+	clear_bit(CONTEXT_BANNED, &arg->a.ctx->flags);
+	rq[0] = spinner_create_request(&arg->a.spin,
+				       arg->a.ctx, arg->engine,
+				       MI_NOOP); /* no preemption */
+	if (IS_ERR(rq[0]))
+		return PTR_ERR(rq[0]);
+
+	i915_request_get(rq[0]);
+	i915_request_add(rq[0]);
+	if (!igt_wait_for_spinner(&arg->a.spin, rq[0])) {
+		err = -EIO;
+		goto out;
+	}
+
+	clear_bit(CONTEXT_BANNED, &arg->b.ctx->flags);
+	rq[1] = spinner_create_request(&arg->b.spin,
+				       arg->b.ctx, arg->engine,
+				       MI_ARB_CHECK);
+	if (IS_ERR(rq[1])) {
+		err = PTR_ERR(rq[1]);
+		goto out;
+	}
+
+	i915_request_get(rq[1]);
+	err = i915_request_await_dma_fence(rq[1], &rq[0]->fence);
+	i915_request_add(rq[1]);
+	if (err)
+		goto out;
+
+	i915_gem_context_set_banned(arg->b.ctx);
+	err = intel_engine_pulse(arg->engine);
+	if (err)
+		goto out;
+
+	igt_spinner_end(&arg->a.spin);
+	if (i915_request_wait(rq[1], 0, HZ / 5) < 0) {
+		err = -EIO;
+		goto out;
+	}
+
+	if (rq[0]->fence.error != 0) {
+		pr_err("Normal inflight0 request did not complete\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (rq[1]->fence.error != -EIO) {
+		pr_err("Cancelled inflight1 request did not report -EIO\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+out:
+	i915_request_put(rq[1]);
+	i915_request_put(rq[0]);
+	if (igt_live_test_end(&t))
+		err = -EIO;
+	return err;
+}
+
+static int __cancel_queued(struct live_preempt_cancel *arg)
+{
+	struct i915_request *rq[3] = {};
+	struct igt_live_test t;
+	int err;
+
+	/* Full ELSP and one in the wings */
+	GEM_TRACE("%s(%s)\n", __func__, arg->engine->name);
+
+	if (igt_live_test_begin(&t, arg->engine->i915,
+				__func__, arg->engine->name))
+		return -EIO;
+
+	clear_bit(CONTEXT_BANNED, &arg->a.ctx->flags);
+	rq[0] = spinner_create_request(&arg->a.spin,
+				       arg->a.ctx, arg->engine,
+				       MI_ARB_CHECK);
+	if (IS_ERR(rq[0]))
+		return PTR_ERR(rq[0]);
+
+	i915_request_get(rq[0]);
+	i915_request_add(rq[0]);
+	if (!igt_wait_for_spinner(&arg->a.spin, rq[0])) {
+		err = -EIO;
+		goto out;
+	}
+
+	clear_bit(CONTEXT_BANNED, &arg->b.ctx->flags);
+	rq[1] = igt_request_alloc(arg->b.ctx, arg->engine);
+	if (IS_ERR(rq[1])) {
+		err = PTR_ERR(rq[1]);
+		goto out;
+	}
+
+	i915_request_get(rq[1]);
+	err = i915_request_await_dma_fence(rq[1], &rq[0]->fence);
+	i915_request_add(rq[1]);
+	if (err)
+		goto out;
+
+	rq[2] = spinner_create_request(&arg->b.spin,
+				       arg->a.ctx, arg->engine,
+				       MI_ARB_CHECK);
+	if (IS_ERR(rq[2])) {
+		err = PTR_ERR(rq[2]);
+		goto out;
+	}
+
+	i915_request_get(rq[2]);
+	err = i915_request_await_dma_fence(rq[2], &rq[1]->fence);
+	i915_request_add(rq[2]);
+	if (err)
+		goto out;
+
+	i915_gem_context_set_banned(arg->a.ctx);
+	err = intel_engine_pulse(arg->engine);
+	if (err)
+		goto out;
+
+	if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
+		err = -EIO;
+		goto out;
+	}
+
+	if (rq[0]->fence.error != -EIO) {
+		pr_err("Cancelled inflight0 request did not report -EIO\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (rq[1]->fence.error != 0) {
+		pr_err("Normal inflight1 request did not complete\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (rq[2]->fence.error != -EIO) {
+		pr_err("Cancelled queued request did not report -EIO\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+out:
+	i915_request_put(rq[2]);
+	i915_request_put(rq[1]);
+	i915_request_put(rq[0]);
+	if (igt_live_test_end(&t))
+		err = -EIO;
+	return err;
+}
+
+static int live_preempt_cancel(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct live_preempt_cancel data;
+	enum intel_engine_id id;
+	int err = -ENOMEM;
+
+	/*
+	 * To cancel an inflight context, we need to first remove it from the
+	 * GPU. That sounds like preemption! Plus a little bit of bookkeeping.
+	 */
+
+	if (!HAS_LOGICAL_RING_PREEMPTION(i915))
+		return 0;
+
+	if (preempt_client_init(i915, &data.a))
+		return -ENOMEM;
+	if (preempt_client_init(i915, &data.b))
+		goto err_client_a;
+
+	for_each_engine(data.engine, i915, id) {
+		if (!intel_engine_has_preemption(data.engine))
+			continue;
+
+		err = __cancel_active0(&data);
+		if (err)
+			goto err_wedged;
+
+		err = __cancel_active1(&data);
+		if (err)
+			goto err_wedged;
+
+		err = __cancel_queued(&data);
+		if (err)
+			goto err_wedged;
+	}
+
+	err = 0;
+err_client_b:
+	preempt_client_fini(&data.b);
+err_client_a:
+	preempt_client_fini(&data.a);
+	return err;
+
+err_wedged:
+	GEM_TRACE_DUMP();
+	igt_spinner_end(&data.b.spin);
+	igt_spinner_end(&data.a.spin);
+	intel_gt_set_wedged(&i915->gt);
+	goto err_client_b;
+}
+
 static int live_suppress_self_preempt(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -2549,6 +2821,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_preempt),
 		SUBTEST(live_late_preempt),
 		SUBTEST(live_nopreempt),
+		SUBTEST(live_preempt_cancel),
 		SUBTEST(live_suppress_self_preempt),
 		SUBTEST(live_suppress_wait_preempt),
 		SUBTEST(live_chain_preempt),
-- 
2.23.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (9 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 12:11   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 13/15] drm/i915: Replace hangcheck by heartbeats Chris Wilson
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

Normally, we rely on our hangcheck to prevent persistent batches from
hogging the GPU. However, if the user disables hangcheck, this mechanism
breaks down. Despite our insistence that this is unsafe, the users are
equally insistent that they want to use endless batches and will disable
the hangcheck mechanism. We are looking at perhaps replacing hangcheck
with a softer mechanism, that sends a pulse down the engine to check if
it is well. We can use the same preemptive pulse to flush an active
persistent context off the GPU upon context close, preventing resources
being lost and unkillable requests remaining on the GPU after process
termination. To avoid changing the ABI and accidentally breaking
existing userspace, we make the persistence of a context explicit and
enable it by default (matching the current ABI). Userspace can opt out of
persistent mode (forcing requests to be cancelled when the context is
closed, whether explicitly or by process termination) via a context
parameter. To facilitate existing use-cases of disabling hangcheck, if the
modparam is disabled (i915.enable_hangcheck=0), we disable persistence
mode by default. (Note, one of the outcomes of supporting endless mode
will be the removal of hangchecking, at which point opting into persistent
mode will be mandatory, or perhaps the default, controlled by cgroups.)
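
For reference, a userspace client that wants its work cancelled on context
closure would opt out along these lines (a sketch only; fd and ctx_id come
from earlier device/context setup, error handling is elided, and libdrm's
drmIoctl() is assumed):

	struct drm_i915_gem_context_param p = {
		.ctx_id = ctx_id,
		.param = I915_CONTEXT_PARAM_PERSISTENCE,
		.value = 0, /* do not persist past context close */
	};

	drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);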

v2: Check for hangchecking at context termination, so that we are not
left with undying contexts from a crafty user.
v3: Force context termination even if forced-preemption is disabled.

Testcase: igt/gem_ctx_persistence
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   | 182 ++++++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_context.h   |  15 ++
 .../gpu/drm/i915/gem/i915_gem_context_types.h |   1 +
 .../gpu/drm/i915/gem/selftests/mock_context.c |   2 +
 include/uapi/drm/i915_drm.h                   |  15 ++
 5 files changed, 215 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 5d8221c7ba83..70b72456e2c4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -70,6 +70,7 @@
 #include <drm/i915_drm.h>
 
 #include "gt/intel_lrc_reg.h"
+#include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_engine_user.h"
 
 #include "i915_gem_context.h"
@@ -269,6 +270,128 @@ void i915_gem_context_release(struct kref *ref)
 		schedule_work(&gc->free_work);
 }
 
+static inline struct i915_gem_engines *
+__context_engines_static(const struct i915_gem_context *ctx)
+{
+	return rcu_dereference_protected(ctx->engines, true);
+}
+
+static bool __reset_engine(struct intel_engine_cs *engine)
+{
+	struct intel_gt *gt = engine->gt;
+	bool success = false;
+
+	if (!intel_has_reset_engine(gt))
+		return false;
+
+	if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
+			      &gt->reset.flags)) {
+		success = intel_engine_reset(engine, NULL) == 0;
+		clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
+				      &gt->reset.flags);
+	}
+
+	return success;
+}
+
+static void __reset_context(struct i915_gem_context *ctx,
+			    struct intel_engine_cs *engine)
+{
+	intel_gt_handle_error(engine->gt, engine->mask, 0,
+			      "context closure in %s", ctx->name);
+}
+
+static bool __cancel_engine(struct intel_engine_cs *engine)
+{
+	/*
+	 * Send a "high priority pulse" down the engine to cause the
+	 * current request to be momentarily preempted. (If it fails to
+	 * be preempted, it will be reset). As we have marked our context
+	 * as banned, any incomplete request, including any running, will
+	 * be skipped following the preemption.
+	 */
+	if (CONFIG_DRM_I915_PREEMPT_TIMEOUT && !intel_engine_pulse(engine))
+		return true;
+
+	/* If we are unable to send a pulse, try resetting this engine. */
+	return __reset_engine(engine);
+}
+
+static struct intel_engine_cs *
+active_engine(struct dma_fence *fence, struct intel_context *ce)
+{
+	struct i915_request *rq = to_request(fence);
+	struct intel_engine_cs *engine, *locked;
+
+	/*
+	 * Serialise with __i915_request_submit() so that it sees
+	 * is-banned?, or we know the request is already inflight.
+	 */
+	locked = READ_ONCE(rq->engine);
+	spin_lock_irq(&locked->active.lock);
+	while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
+		spin_unlock(&locked->active.lock);
+		spin_lock(&engine->active.lock);
+		locked = engine;
+	}
+
+	engine = NULL;
+	if (i915_request_is_active(rq) && !rq->fence.error)
+		engine = rq->engine;
+
+	spin_unlock_irq(&locked->active.lock);
+
+	return engine;
+}
+
+static void kill_context(struct i915_gem_context *ctx)
+{
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+
+	/*
+	 * If we are already banned, it was due to a guilty request causing
+	 * a reset and the entire context being evicted from the GPU.
+	 */
+	if (i915_gem_context_is_banned(ctx))
+		return;
+
+	i915_gem_context_set_banned(ctx);
+
+	/*
+	 * Map the user's engine back to the actual engines; one virtual
+	 * engine will be mapped to multiple engines, and using ctx->engine[]
+	 * the same engine may have multiple instances in the user's map.
+	 * However, we only care about pending requests, so only include
+	 * engines on which there are incomplete requests.
+	 */
+	for_each_gem_engine(ce, __context_engines_static(ctx), it) {
+		struct intel_engine_cs *engine;
+		struct dma_fence *fence;
+
+		if (!ce->timeline)
+			continue;
+
+		fence = i915_active_fence_get(&ce->timeline->last_request);
+		if (!fence)
+			continue;
+
+		/* Check with the backend if the request is still inflight */
+		engine = active_engine(fence, ce);
+
+		/* First attempt to gracefully cancel the context */
+		if (engine && !__cancel_engine(engine))
+			/*
+			 * If we are unable to send a preemptive pulse to bump
+			 * the context from the GPU, we have to resort to a full
+			 * reset. We hope the collateral damage is worth it.
+			 */
+			__reset_context(ctx, engine);
+
+		dma_fence_put(fence);
+	}
+}
+
 static void context_close(struct i915_gem_context *ctx)
 {
 	struct i915_address_space *vm;
@@ -291,9 +414,47 @@ static void context_close(struct i915_gem_context *ctx)
 	lut_close(ctx);
 
 	mutex_unlock(&ctx->mutex);
+
+	/*
+	 * If the user has disabled hangchecking, we can not be sure that
+	 * the batches will ever complete after the context is closed,
+	 * keeping the context and all resources pinned forever. So in this
+	 * case we opt to forcibly kill off all remaining requests on
+	 * context close.
+	 */
+	if (!i915_gem_context_is_persistent(ctx) ||
+	    !i915_modparams.enable_hangcheck)
+		kill_context(ctx);
+
 	i915_gem_context_put(ctx);
 }
 
+static int __context_set_persistence(struct i915_gem_context *ctx, bool state)
+{
+	if (i915_gem_context_is_persistent(ctx) == state)
+		return 0;
+
+	if (state) {
+		/*
+		 * Only contexts that are short-lived [that will expire or be
+		 * reset] are allowed to survive past termination. We require
+		 * hangcheck to ensure that the persistent requests are healthy.
+		 */
+		if (!i915_modparams.enable_hangcheck)
+			return -EINVAL;
+
+		i915_gem_context_set_persistence(ctx);
+	} else {
+		/* To cancel a context we use "preempt-to-idle" */
+		if (!(ctx->i915->caps.scheduler & I915_SCHEDULER_CAP_PREEMPTION))
+			return -ENODEV;
+
+		i915_gem_context_clear_persistence(ctx);
+	}
+
+	return 0;
+}
+
 static struct i915_gem_context *
 __create_context(struct drm_i915_private *i915)
 {
@@ -328,6 +489,7 @@ __create_context(struct drm_i915_private *i915)
 
 	i915_gem_context_set_bannable(ctx);
 	i915_gem_context_set_recoverable(ctx);
+	__context_set_persistence(ctx, true /* cgroup hook? */);
 
 	for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
 		ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
@@ -484,6 +646,7 @@ i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio)
 		return ctx;
 
 	i915_gem_context_clear_bannable(ctx);
+	i915_gem_context_set_persistence(ctx);
 	ctx->sched.priority = I915_USER_PRIORITY(prio);
 
 	GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
@@ -1594,6 +1757,16 @@ get_engines(struct i915_gem_context *ctx,
 	return err;
 }
 
+static int
+set_persistence(struct i915_gem_context *ctx,
+		const struct drm_i915_gem_context_param *args)
+{
+	if (args->size)
+		return -EINVAL;
+
+	return __context_set_persistence(ctx, args->value);
+}
+
 static int ctx_setparam(struct drm_i915_file_private *fpriv,
 			struct i915_gem_context *ctx,
 			struct drm_i915_gem_context_param *args)
@@ -1671,6 +1844,10 @@ static int ctx_setparam(struct drm_i915_file_private *fpriv,
 		ret = set_engines(ctx, args);
 		break;
 
+	case I915_CONTEXT_PARAM_PERSISTENCE:
+		ret = set_persistence(ctx, args);
+		break;
+
 	case I915_CONTEXT_PARAM_BAN_PERIOD:
 	default:
 		ret = -EINVAL;
@@ -2123,6 +2300,11 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
 		ret = get_engines(ctx, args);
 		break;
 
+	case I915_CONTEXT_PARAM_PERSISTENCE:
+		args->size = 0;
+		args->value = i915_gem_context_is_persistent(ctx);
+		break;
+
 	case I915_CONTEXT_PARAM_BAN_PERIOD:
 	default:
 		ret = -EINVAL;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h b/drivers/gpu/drm/i915/gem/i915_gem_context.h
index 9234586830d1..2eec035382a2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
@@ -76,6 +76,21 @@ static inline void i915_gem_context_clear_recoverable(struct i915_gem_context *c
 	clear_bit(UCONTEXT_RECOVERABLE, &ctx->user_flags);
 }
 
+static inline bool i915_gem_context_is_persistent(const struct i915_gem_context *ctx)
+{
+	return test_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
+}
+
+static inline void i915_gem_context_set_persistence(struct i915_gem_context *ctx)
+{
+	set_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
+}
+
+static inline void i915_gem_context_clear_persistence(struct i915_gem_context *ctx)
+{
+	clear_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
+}
+
 static inline bool i915_gem_context_is_banned(const struct i915_gem_context *ctx)
 {
 	return test_bit(CONTEXT_BANNED, &ctx->flags);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index ab8e1367dfc8..a3ecd19f2303 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -137,6 +137,7 @@ struct i915_gem_context {
 #define UCONTEXT_NO_ERROR_CAPTURE	1
 #define UCONTEXT_BANNABLE		2
 #define UCONTEXT_RECOVERABLE		3
+#define UCONTEXT_PERSISTENCE		4
 
 	/**
 	 * @flags: small set of booleans
diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_context.c b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
index 74ddd682c9cd..29b8984f0e47 100644
--- a/drivers/gpu/drm/i915/gem/selftests/mock_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
@@ -22,6 +22,8 @@ mock_context(struct drm_i915_private *i915,
 	INIT_LIST_HEAD(&ctx->link);
 	ctx->i915 = i915;
 
+	i915_gem_context_set_persistence(ctx);
+
 	mutex_init(&ctx->engines_mutex);
 	e = default_engines(ctx);
 	if (IS_ERR(e))
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 30c542144016..eb9e704d717a 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1565,6 +1565,21 @@ struct drm_i915_gem_context_param {
  *   i915_context_engines_bond (I915_CONTEXT_ENGINES_EXT_BOND)
  */
 #define I915_CONTEXT_PARAM_ENGINES	0xa
+
+/*
+ * I915_CONTEXT_PARAM_PERSISTENCE:
+ *
+ * Allow the context and active rendering to survive the process until
+ * completion. Persistence allows fire-and-forget clients to queue up a
+ * bunch of work, hand the output over to a display server and then quit.
+ * If the context is not marked as persistent, upon closing (either via
+ * an explicit DRM_I915_GEM_CONTEXT_DESTROY or implicitly from file closure
+ * or process termination), the context and any outstanding requests will be
+ * cancelled (and exported fences for cancelled requests marked as -EIO).
+ *
+ * By default, new contexts allow persistence.
+ */
+#define I915_CONTEXT_PARAM_PERSISTENCE	0xb
 /* Must be kept compact -- no holes and well documented */
 
 	__u64 value;
-- 
2.23.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 13/15] drm/i915: Replace hangcheck by heartbeats
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (10 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 12:13   ` Tvrtko Ursulin
  2019-10-14  9:07 ` [PATCH 14/15] drm/i915: Flush idle barriers when waiting Chris Wilson
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

Replace sampling the engine state every so often with a periodic
heartbeat request to measure the health of an engine. This is coupled
with forced preemption to allow long-running requests to survive so
long as they do not block other users.

The heartbeat interval can be adjusted per-engine using,

	/sys/class/drm/card?/engine/*/heartbeat_interval_ms
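
As a sketch of the expected usage (the card and engine directory names
here are illustrative, not taken from this patch), a tool could adjust the
interval with plain sysfs file I/O:

	/* needs <fcntl.h> and <unistd.h>; writing 0 disables the heartbeat */
	int fd = open("/sys/class/drm/card0/engine/rcs0/heartbeat_interval_ms",
		      O_WRONLY);
	if (fd >= 0) {
		write(fd, "2500", 4);	/* interval in milliseconds */
		close(fd);
	}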

v2: Couple in sysfs controls

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
---
 drivers/gpu/drm/i915/Kconfig.profile          |  14 +
 drivers/gpu/drm/i915/Makefile                 |   1 -
 drivers/gpu/drm/i915/display/intel_display.c  |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |   1 -
 drivers/gpu/drm/i915/gem/i915_gem_pm.c        |   2 -
 drivers/gpu/drm/i915/gt/intel_engine.h        |  32 --
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  11 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 128 +++++++
 .../gpu/drm/i915/gt/intel_engine_heartbeat.h  |   5 +
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |   5 +-
 drivers/gpu/drm/i915/gt/intel_engine_sysfs.c  |  32 ++
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  17 +-
 drivers/gpu/drm/i915/gt/intel_gt.c            |   1 -
 drivers/gpu/drm/i915/gt/intel_gt.h            |   4 -
 drivers/gpu/drm/i915/gt/intel_gt_pm.c         |   1 -
 drivers/gpu/drm/i915/gt/intel_gt_types.h      |   9 -
 drivers/gpu/drm/i915/gt/intel_hangcheck.c     | 361 ------------------
 drivers/gpu/drm/i915/gt/intel_reset.c         |   3 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   4 -
 drivers/gpu/drm/i915/i915_debugfs.c           |  87 -----
 drivers/gpu/drm/i915/i915_drv.c               |   3 -
 drivers/gpu/drm/i915/i915_drv.h               |   1 -
 drivers/gpu/drm/i915/i915_gpu_error.c         |  33 +-
 drivers/gpu/drm/i915/i915_gpu_error.h         |   2 -
 drivers/gpu/drm/i915/i915_priolist_types.h    |   6 +
 25 files changed, 210 insertions(+), 555 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/gt/intel_hangcheck.c

diff --git a/drivers/gpu/drm/i915/Kconfig.profile b/drivers/gpu/drm/i915/Kconfig.profile
index 8fceea85937b..4bed293e1fb4 100644
--- a/drivers/gpu/drm/i915/Kconfig.profile
+++ b/drivers/gpu/drm/i915/Kconfig.profile
@@ -40,3 +40,17 @@ config DRM_I915_PREEMPT_TIMEOUT
 	  /sys/class/drm/card?/engine/*/preempt_timeout_ms
 
 	  May be 0 to disable the timeout.
+
+config DRM_I915_HEARTBEAT_INTERVAL
+	int "Interval between heartbeat pulses (ms)"
+	default 2500 # milliseconds
+	help
+	  The driver sends a periodic heartbeat down all active engines to
+	  check the health of the GPU and undertake regular house-keeping of
+	  internal driver state.
+
+	  This is adjustable via
+	  /sys/class/drm/card?/engine/*/heartbeat_interval_ms
+
+	  May be 0 to disable heartbeats and therefore disable automatic GPU
+	  hang detection.
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index cfab7c8585b3..59d356cc406c 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -88,7 +88,6 @@ gt-y += \
 	gt/intel_gt_pm.o \
 	gt/intel_gt_pm_irq.o \
 	gt/intel_gt_requests.o \
-	gt/intel_hangcheck.o \
 	gt/intel_lrc.o \
 	gt/intel_rc6.o \
 	gt/intel_renderstate.o \
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 3cf39fc153b3..a2ef4bcf9242 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14401,7 +14401,7 @@ static void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
 static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj)
 {
 	struct i915_sched_attr attr = {
-		.priority = I915_PRIORITY_DISPLAY,
+		.priority = I915_USER_PRIORITY(I915_PRIORITY_DISPLAY),
 	};
 
 	i915_gem_object_wait_priority(obj, 0, &attr);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index b0245585b4d5..817ea0725e43 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -461,6 +461,5 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
 int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 				  unsigned int flags,
 				  const struct i915_sched_attr *attr);
-#define I915_PRIORITY_DISPLAY I915_USER_PRIORITY(I915_PRIORITY_MAX)
 
 #endif
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 7987b54fb1f5..0e97520cb1bb 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -100,8 +100,6 @@ void i915_gem_suspend(struct drm_i915_private *i915)
 	intel_gt_suspend(&i915->gt);
 	intel_uc_suspend(&i915->gt.uc);
 
-	cancel_delayed_work_sync(&i915->gt.hangcheck.work);
-
 	i915_gem_drain_freed_objects(i915);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index 93ea367fe624..8ad57eace351 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -89,38 +89,6 @@ struct drm_printer;
 /* seqno size is actually only a uint32, but since we plan to use MI_FLUSH_DW to
  * do the writes, and that must have qw aligned offsets, simply pretend it's 8b.
  */
-enum intel_engine_hangcheck_action {
-	ENGINE_IDLE = 0,
-	ENGINE_WAIT,
-	ENGINE_ACTIVE_SEQNO,
-	ENGINE_ACTIVE_HEAD,
-	ENGINE_ACTIVE_SUBUNITS,
-	ENGINE_WAIT_KICK,
-	ENGINE_DEAD,
-};
-
-static inline const char *
-hangcheck_action_to_str(const enum intel_engine_hangcheck_action a)
-{
-	switch (a) {
-	case ENGINE_IDLE:
-		return "idle";
-	case ENGINE_WAIT:
-		return "wait";
-	case ENGINE_ACTIVE_SEQNO:
-		return "active seqno";
-	case ENGINE_ACTIVE_HEAD:
-		return "active head";
-	case ENGINE_ACTIVE_SUBUNITS:
-		return "active subunits";
-	case ENGINE_WAIT_KICK:
-		return "wait kick";
-	case ENGINE_DEAD:
-		return "dead";
-	}
-
-	return "unknown";
-}
 
 static inline unsigned int
 execlists_num_ports(const struct intel_engine_execlists * const execlists)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 1eb51147839a..d829ad340ca0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -305,6 +305,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
 	__sprint_engine_name(engine);
 
 	engine->props.preempt_timeout = CONFIG_DRM_I915_PREEMPT_TIMEOUT;
+	engine->props.heartbeat_interval = CONFIG_DRM_I915_HEARTBEAT_INTERVAL;
 
 	/*
 	 * To be overridden by the backend on setup. However to facilitate
@@ -599,7 +600,6 @@ static int intel_engine_setup_common(struct intel_engine_cs *engine)
 	intel_engine_init_active(engine, ENGINE_PHYSICAL);
 	intel_engine_init_breadcrumbs(engine);
 	intel_engine_init_execlists(engine);
-	intel_engine_init_hangcheck(engine);
 	intel_engine_init_cmd_parser(engine);
 	intel_engine_init__pm(engine);
 
@@ -1432,8 +1432,13 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 		drm_printf(m, "*** WEDGED ***\n");
 
 	drm_printf(m, "\tAwake? %d\n", atomic_read(&engine->wakeref.count));
-	drm_printf(m, "\tHangcheck: %d ms ago\n",
-		   jiffies_to_msecs(jiffies - engine->hangcheck.action_timestamp));
+
+	rcu_read_lock();
+	rq = READ_ONCE(engine->heartbeat.systole);
+	if (rq)
+		drm_printf(m, "\tHeartbeat: %d ms ago\n",
+			   jiffies_to_msecs(jiffies - rq->emitted_jiffies));
+	rcu_read_unlock();
 	drm_printf(m, "\tReset count: %d (global %d)\n",
 		   i915_reset_engine_count(error, engine),
 		   i915_reset_count(error));
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 2fc413f9d506..dc741e86fb41 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -11,6 +11,27 @@
 #include "intel_engine_pm.h"
 #include "intel_engine.h"
 #include "intel_gt.h"
+#include "intel_reset.h"
+
+/*
+ * While the engine is active, we send a periodic pulse along the engine
+ * to check on its health and to flush any idle-barriers. If that request
+ * is stuck, and we fail to preempt it, we declare the engine hung and
+ * issue a reset -- in the hope that it restores progress.
+ */
+
+static void next_heartbeat(struct intel_engine_cs *engine)
+{
+	long delay;
+
+	delay = READ_ONCE(engine->props.heartbeat_interval);
+	if (!delay)
+		return;
+
+	delay = msecs_to_jiffies_timeout(delay);
+	schedule_delayed_work(&engine->heartbeat.work,
+			      round_jiffies_up_relative(delay));
+}
 
 static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
 {
@@ -18,6 +39,113 @@ static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
 	i915_request_add_active_barriers(rq);
 }
 
+static void show_heartbeat(const struct i915_request *rq,
+			   struct intel_engine_cs *engine)
+{
+	struct drm_printer p = drm_debug_printer("heartbeat");
+
+	intel_engine_dump(engine, &p,
+			  "%s heartbeat {prio:%d} not ticking\n",
+			  engine->name,
+			  rq->sched.attr.priority);
+}
+
+static void heartbeat(struct work_struct *wrk)
+{
+	struct i915_sched_attr attr = {
+		.priority = I915_USER_PRIORITY(I915_PRIORITY_MIN),
+	};
+	struct intel_engine_cs *engine =
+		container_of(wrk, typeof(*engine), heartbeat.work.work);
+	struct intel_context *ce = engine->kernel_context;
+	struct i915_request *rq;
+
+	if (!intel_engine_pm_get_if_awake(engine))
+		return;
+
+	rq = engine->heartbeat.systole;
+	if (rq && i915_request_completed(rq)) {
+		i915_request_put(rq);
+		engine->heartbeat.systole = NULL;
+	}
+
+	if (intel_gt_is_wedged(engine->gt))
+		goto out;
+
+	if (engine->heartbeat.systole) {
+		if (engine->schedule &&
+		    rq->sched.attr.priority < I915_PRIORITY_BARRIER) {
+			/*
+			 * Gradually raise the priority of the heartbeat to
+			 * give high priority work [which presumably desires
+			 * low latency and no jitter] the chance to naturally
+			 * complete before being preempted.
+			 */
+			attr.priority = 0;
+			if (rq->sched.attr.priority >= attr.priority)
+				attr.priority = I915_USER_PRIORITY(I915_PRIORITY_HEARTBEAT);
+			if (rq->sched.attr.priority >= attr.priority)
+				attr.priority = I915_PRIORITY_BARRIER;
+
+			local_bh_disable();
+			engine->schedule(rq, &attr);
+			local_bh_enable();
+		} else {
+			if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
+				show_heartbeat(rq, engine);
+
+			intel_gt_handle_error(engine->gt, engine->mask,
+					      I915_ERROR_CAPTURE,
+					      "stopped heartbeat on %s",
+					      engine->name);
+		}
+		goto out;
+	}
+
+	if (engine->wakeref_serial == engine->serial)
+		goto out;
+
+	mutex_lock(&ce->timeline->mutex);
+
+	intel_context_enter(ce);
+	rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
+	intel_context_exit(ce);
+	if (IS_ERR(rq))
+		goto unlock;
+
+	idle_pulse(engine, rq);
+	if (i915_modparams.enable_hangcheck)
+		engine->heartbeat.systole = i915_request_get(rq);
+
+	__i915_request_commit(rq);
+	__i915_request_queue(rq, &attr);
+
+unlock:
+	mutex_unlock(&ce->timeline->mutex);
+out:
+	next_heartbeat(engine);
+	intel_engine_pm_put(engine);
+}
+
+void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine)
+{
+	if (!CONFIG_DRM_I915_HEARTBEAT_INTERVAL)
+		return;
+
+	next_heartbeat(engine);
+}
+
+void intel_engine_park_heartbeat(struct intel_engine_cs *engine)
+{
+	cancel_delayed_work(&engine->heartbeat.work);
+	i915_request_put(fetch_and_zero(&engine->heartbeat.systole));
+}
+
+void intel_engine_init_heartbeat(struct intel_engine_cs *engine)
+{
+	INIT_DELAYED_WORK(&engine->heartbeat.work, heartbeat);
+}
+
 int intel_engine_pulse(struct intel_engine_cs *engine)
 {
 	struct i915_sched_attr attr = { .priority = I915_PRIORITY_BARRIER };
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
index b950451b5998..39391004554d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
@@ -9,6 +9,11 @@
 
 struct intel_engine_cs;
 
+void intel_engine_init_heartbeat(struct intel_engine_cs *engine);
+
+void intel_engine_park_heartbeat(struct intel_engine_cs *engine);
+void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine);
+
 int intel_engine_pulse(struct intel_engine_cs *engine);
 
 #endif /* INTEL_ENGINE_HEARTBEAT_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 7d76611d9df1..6fbfa2162e54 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -7,6 +7,7 @@
 #include "i915_drv.h"
 
 #include "intel_engine.h"
+#include "intel_engine_heartbeat.h"
 #include "intel_engine_pm.h"
 #include "intel_engine_pool.h"
 #include "intel_gt.h"
@@ -34,7 +35,7 @@ static int __engine_unpark(struct intel_wakeref *wf)
 	if (engine->unpark)
 		engine->unpark(engine);
 
-	intel_engine_init_hangcheck(engine);
+	intel_engine_unpark_heartbeat(engine);
 	return 0;
 }
 
@@ -158,6 +159,7 @@ static int __engine_park(struct intel_wakeref *wf)
 
 	call_idle_barriers(engine); /* cleanup after wedging */
 
+	intel_engine_park_heartbeat(engine);
 	intel_engine_disarm_breadcrumbs(engine);
 	intel_engine_pool_park(&engine->pool);
 
@@ -188,6 +190,7 @@ void intel_engine_init__pm(struct intel_engine_cs *engine)
 	struct intel_runtime_pm *rpm = engine->uncore->rpm;
 
 	intel_wakeref_init(&engine->wakeref, rpm, &wf_ops);
+	intel_engine_init_heartbeat(engine);
 }
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
index b9eb9e4cba0a..d7adbcb9c6e5 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
@@ -179,6 +179,35 @@ preempt_timeout_store(struct kobject *kobj, struct kobj_attribute *attr,
 static struct kobj_attribute preempt_timeout_attr =
 __ATTR(preempt_timeout_ms, 0644, preempt_timeout_show, preempt_timeout_store);
 
+static ssize_t
+heartbeat_interval_show(struct kobject *kobj, struct kobj_attribute *attr,
+			char *buf)
+{
+	struct intel_engine_cs *engine = kobj_to_engine(kobj);
+
+	return sprintf(buf, "%lu\n", engine->props.heartbeat_interval);
+}
+
+static ssize_t
+heartbeat_interval_store(struct kobject *kobj, struct kobj_attribute *attr,
+			 const char *buf, size_t count)
+{
+	struct intel_engine_cs *engine = kobj_to_engine(kobj);
+	unsigned long delay;
+	int err;
+
+	err = kstrtoul(buf, 0, &delay);
+	if (err)
+		return err;
+
+	WRITE_ONCE(engine->props.heartbeat_interval, delay);
+	return count;
+}
+
+static struct kobj_attribute heartbeat_interval_attr =
+__ATTR(heartbeat_interval_ms, 0644,
+       heartbeat_interval_show, heartbeat_interval_store);
+
 static void kobj_engine_release(struct kobject *kobj)
 {
 	kfree(kobj);
@@ -218,6 +247,9 @@ void intel_engines_add_sysfs(struct drm_i915_private *i915)
 		&inst_attr.attr,
 		&caps_attr.attr,
 		&all_caps_attr.attr,
+#if CONFIG_DRM_I915_HEARTBEAT_INTERVAL
+		&heartbeat_interval_attr.attr,
+#endif
 		NULL
 	};
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 6af9b0096975..ad3be2fbd71a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -15,6 +15,7 @@
 #include <linux/rbtree.h>
 #include <linux/timer.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>
 
 #include "i915_gem.h"
 #include "i915_pmu.h"
@@ -76,14 +77,6 @@ struct intel_instdone {
 	u32 row[I915_MAX_SLICES][I915_MAX_SUBSLICES];
 };
 
-struct intel_engine_hangcheck {
-	u64 acthd;
-	u32 last_ring;
-	u32 last_head;
-	unsigned long action_timestamp;
-	struct intel_instdone instdone;
-};
-
 struct intel_ring {
 	struct kref ref;
 	struct i915_vma *vma;
@@ -330,6 +323,11 @@ struct intel_engine_cs {
 
 	intel_engine_mask_t saturated; /* submitting semaphores too late? */
 
+	struct {
+		struct delayed_work work;
+		struct i915_request *systole;
+	} heartbeat;
+
 	unsigned long serial;
 
 	unsigned long wakeref_serial;
@@ -480,8 +478,6 @@ struct intel_engine_cs {
 	/* status_notifier: list of callbacks for context-switch changes */
 	struct atomic_notifier_head context_status_notifier;
 
-	struct intel_engine_hangcheck hangcheck;
-
 #define I915_ENGINE_NEEDS_CMD_PARSER BIT(0)
 #define I915_ENGINE_SUPPORTS_STATS   BIT(1)
 #define I915_ENGINE_HAS_PREEMPTION   BIT(2)
@@ -549,6 +545,7 @@ struct intel_engine_cs {
 
 	struct {
 		unsigned long preempt_timeout;
+		unsigned long heartbeat_interval;
 	} props;
 };
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index b3619a2a5d0e..f3e1925987e1 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -22,7 +22,6 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
 	INIT_LIST_HEAD(&gt->closed_vma);
 	spin_lock_init(&gt->closed_lock);
 
-	intel_gt_init_hangcheck(gt);
 	intel_gt_init_reset(gt);
 	intel_gt_init_requests(gt);
 	intel_gt_pm_init_early(gt);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
index e6ab0bff0efb..5b6effed3713 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt.h
@@ -46,8 +46,6 @@ void intel_gt_clear_error_registers(struct intel_gt *gt,
 void intel_gt_flush_ggtt_writes(struct intel_gt *gt);
 void intel_gt_chipset_flush(struct intel_gt *gt);
 
-void intel_gt_init_hangcheck(struct intel_gt *gt);
-
 static inline u32 intel_gt_scratch_offset(const struct intel_gt *gt,
 					  enum intel_gt_scratch_field field)
 {
@@ -59,6 +57,4 @@ static inline bool intel_gt_is_wedged(struct intel_gt *gt)
 	return __intel_reset_failed(&gt->reset);
 }
 
-void intel_gt_queue_hangcheck(struct intel_gt *gt);
-
 #endif /* __INTEL_GT_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index 87e34e0b6427..85af0d16f869 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -52,7 +52,6 @@ static int __gt_unpark(struct intel_wakeref *wf)
 
 	i915_pmu_gt_unparked(i915);
 
-	intel_gt_queue_hangcheck(gt);
 	intel_gt_unpark_requests(gt);
 
 	pm_notify(gt, INTEL_GT_UNPARK);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index be4b263621c8..a2e8c8a952e7 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -26,14 +26,6 @@ struct i915_ggtt;
 struct intel_engine_cs;
 struct intel_uncore;
 
-struct intel_hangcheck {
-	/* For hangcheck timer */
-#define DRM_I915_HANGCHECK_PERIOD 1500 /* in ms */
-#define DRM_I915_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_I915_HANGCHECK_PERIOD)
-
-	struct delayed_work work;
-};
-
 struct intel_gt {
 	struct drm_i915_private *i915;
 	struct intel_uncore *uncore;
@@ -67,7 +59,6 @@ struct intel_gt {
 	struct list_head closed_vma;
 	spinlock_t closed_lock; /* guards the list of closed_vma */
 
-	struct intel_hangcheck hangcheck;
 	struct intel_reset reset;
 
 	/**
diff --git a/drivers/gpu/drm/i915/gt/intel_hangcheck.c b/drivers/gpu/drm/i915/gt/intel_hangcheck.c
deleted file mode 100644
index c14dbeb3ccc3..000000000000
--- a/drivers/gpu/drm/i915/gt/intel_hangcheck.c
+++ /dev/null
@@ -1,361 +0,0 @@
-/*
- * Copyright © 2016 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#include "i915_drv.h"
-#include "intel_engine.h"
-#include "intel_gt.h"
-#include "intel_reset.h"
-
-struct hangcheck {
-	u64 acthd;
-	u32 ring;
-	u32 head;
-	enum intel_engine_hangcheck_action action;
-	unsigned long action_timestamp;
-	int deadlock;
-	struct intel_instdone instdone;
-	bool wedged:1;
-	bool stalled:1;
-};
-
-static bool instdone_unchanged(u32 current_instdone, u32 *old_instdone)
-{
-	u32 tmp = current_instdone | *old_instdone;
-	bool unchanged;
-
-	unchanged = tmp == *old_instdone;
-	*old_instdone |= tmp;
-
-	return unchanged;
-}
-
-static bool subunits_stuck(struct intel_engine_cs *engine)
-{
-	struct drm_i915_private *dev_priv = engine->i915;
-	const struct sseu_dev_info *sseu = &RUNTIME_INFO(dev_priv)->sseu;
-	struct intel_instdone instdone;
-	struct intel_instdone *accu_instdone = &engine->hangcheck.instdone;
-	bool stuck;
-	int slice;
-	int subslice;
-
-	intel_engine_get_instdone(engine, &instdone);
-
-	/* There might be unstable subunit states even when
-	 * actual head is not moving. Filter out the unstable ones by
-	 * accumulating the undone -> done transitions and only
-	 * consider those as progress.
-	 */
-	stuck = instdone_unchanged(instdone.instdone,
-				   &accu_instdone->instdone);
-	stuck &= instdone_unchanged(instdone.slice_common,
-				    &accu_instdone->slice_common);
-
-	for_each_instdone_slice_subslice(dev_priv, sseu, slice, subslice) {
-		stuck &= instdone_unchanged(instdone.sampler[slice][subslice],
-					    &accu_instdone->sampler[slice][subslice]);
-		stuck &= instdone_unchanged(instdone.row[slice][subslice],
-					    &accu_instdone->row[slice][subslice]);
-	}
-
-	return stuck;
-}
-
-static enum intel_engine_hangcheck_action
-head_stuck(struct intel_engine_cs *engine, u64 acthd)
-{
-	if (acthd != engine->hangcheck.acthd) {
-
-		/* Clear subunit states on head movement */
-		memset(&engine->hangcheck.instdone, 0,
-		       sizeof(engine->hangcheck.instdone));
-
-		return ENGINE_ACTIVE_HEAD;
-	}
-
-	if (!subunits_stuck(engine))
-		return ENGINE_ACTIVE_SUBUNITS;
-
-	return ENGINE_DEAD;
-}
-
-static enum intel_engine_hangcheck_action
-engine_stuck(struct intel_engine_cs *engine, u64 acthd)
-{
-	enum intel_engine_hangcheck_action ha;
-	u32 tmp;
-
-	ha = head_stuck(engine, acthd);
-	if (ha != ENGINE_DEAD)
-		return ha;
-
-	if (IS_GEN(engine->i915, 2))
-		return ENGINE_DEAD;
-
-	/* Is the chip hanging on a WAIT_FOR_EVENT?
-	 * If so we can simply poke the RB_WAIT bit
-	 * and break the hang. This should work on
-	 * all but the second generation chipsets.
-	 */
-	tmp = ENGINE_READ(engine, RING_CTL);
-	if (tmp & RING_WAIT) {
-		intel_gt_handle_error(engine->gt, engine->mask, 0,
-				      "stuck wait on %s", engine->name);
-		ENGINE_WRITE(engine, RING_CTL, tmp);
-		return ENGINE_WAIT_KICK;
-	}
-
-	return ENGINE_DEAD;
-}
-
-static void hangcheck_load_sample(struct intel_engine_cs *engine,
-				  struct hangcheck *hc)
-{
-	hc->acthd = intel_engine_get_active_head(engine);
-	hc->ring = ENGINE_READ(engine, RING_START);
-	hc->head = ENGINE_READ(engine, RING_HEAD);
-}
-
-static void hangcheck_store_sample(struct intel_engine_cs *engine,
-				   const struct hangcheck *hc)
-{
-	engine->hangcheck.acthd = hc->acthd;
-	engine->hangcheck.last_ring = hc->ring;
-	engine->hangcheck.last_head = hc->head;
-}
-
-static enum intel_engine_hangcheck_action
-hangcheck_get_action(struct intel_engine_cs *engine,
-		     const struct hangcheck *hc)
-{
-	if (intel_engine_is_idle(engine))
-		return ENGINE_IDLE;
-
-	if (engine->hangcheck.last_ring != hc->ring)
-		return ENGINE_ACTIVE_SEQNO;
-
-	if (engine->hangcheck.last_head != hc->head)
-		return ENGINE_ACTIVE_SEQNO;
-
-	return engine_stuck(engine, hc->acthd);
-}
-
-static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
-					struct hangcheck *hc)
-{
-	unsigned long timeout = I915_ENGINE_DEAD_TIMEOUT;
-
-	hc->action = hangcheck_get_action(engine, hc);
-
-	/* We always increment the progress
-	 * if the engine is busy and still processing
-	 * the same request, so that no single request
-	 * can run indefinitely (such as a chain of
-	 * batches). The only time we do not increment
-	 * the hangcheck score on this ring, if this
-	 * engine is in a legitimate wait for another
-	 * engine. In that case the waiting engine is a
-	 * victim and we want to be sure we catch the
-	 * right culprit. Then every time we do kick
-	 * the ring, make it as a progress as the seqno
-	 * advancement might ensure and if not, it
-	 * will catch the hanging engine.
-	 */
-
-	switch (hc->action) {
-	case ENGINE_IDLE:
-	case ENGINE_ACTIVE_SEQNO:
-		/* Clear head and subunit states on seqno movement */
-		hc->acthd = 0;
-
-		memset(&engine->hangcheck.instdone, 0,
-		       sizeof(engine->hangcheck.instdone));
-
-		/* Intentional fall through */
-	case ENGINE_WAIT_KICK:
-	case ENGINE_WAIT:
-		engine->hangcheck.action_timestamp = jiffies;
-		break;
-
-	case ENGINE_ACTIVE_HEAD:
-	case ENGINE_ACTIVE_SUBUNITS:
-		/*
-		 * Seqno stuck with still active engine gets leeway,
-		 * in hopes that it is just a long shader.
-		 */
-		timeout = I915_SEQNO_DEAD_TIMEOUT;
-		break;
-
-	case ENGINE_DEAD:
-		break;
-
-	default:
-		MISSING_CASE(hc->action);
-	}
-
-	hc->stalled = time_after(jiffies,
-				 engine->hangcheck.action_timestamp + timeout);
-	hc->wedged = time_after(jiffies,
-				 engine->hangcheck.action_timestamp +
-				 I915_ENGINE_WEDGED_TIMEOUT);
-}
-
-static void hangcheck_declare_hang(struct intel_gt *gt,
-				   intel_engine_mask_t hung,
-				   intel_engine_mask_t stuck)
-{
-	struct intel_engine_cs *engine;
-	intel_engine_mask_t tmp;
-	char msg[80];
-	int len;
-
-	/* If some rings hung but others were still busy, only
-	 * blame the hanging rings in the synopsis.
-	 */
-	if (stuck != hung)
-		hung &= ~stuck;
-	len = scnprintf(msg, sizeof(msg),
-			"%s on ", stuck == hung ? "no progress" : "hang");
-	for_each_engine_masked(engine, gt->i915, hung, tmp)
-		len += scnprintf(msg + len, sizeof(msg) - len,
-				 "%s, ", engine->name);
-	msg[len-2] = '\0';
-
-	return intel_gt_handle_error(gt, hung, I915_ERROR_CAPTURE, "%s", msg);
-}
-
-/*
- * This is called when the chip hasn't reported back with completed
- * batchbuffers in a long time. We keep track per ring seqno progress and
- * if there are no progress, hangcheck score for that ring is increased.
- * Further, acthd is inspected to see if the ring is stuck. On stuck case
- * we kick the ring. If we see no progress on three subsequent calls
- * we assume chip is wedged and try to fix it by resetting the chip.
- */
-static void hangcheck_elapsed(struct work_struct *work)
-{
-	struct intel_gt *gt =
-		container_of(work, typeof(*gt), hangcheck.work.work);
-	intel_engine_mask_t hung = 0, stuck = 0, wedged = 0;
-	struct intel_engine_cs *engine;
-	enum intel_engine_id id;
-	intel_wakeref_t wakeref;
-
-	if (!i915_modparams.enable_hangcheck)
-		return;
-
-	if (!READ_ONCE(gt->awake))
-		return;
-
-	if (intel_gt_is_wedged(gt))
-		return;
-
-	wakeref = intel_runtime_pm_get_if_in_use(gt->uncore->rpm);
-	if (!wakeref)
-		return;
-
-	/* As enabling the GPU requires fairly extensive mmio access,
-	 * periodically arm the mmio checker to see if we are triggering
-	 * any invalid access.
-	 */
-	intel_uncore_arm_unclaimed_mmio_detection(gt->uncore);
-
-	for_each_engine(engine, gt->i915, id) {
-		struct hangcheck hc;
-
-		intel_engine_breadcrumbs_irq(engine);
-
-		hangcheck_load_sample(engine, &hc);
-		hangcheck_accumulate_sample(engine, &hc);
-		hangcheck_store_sample(engine, &hc);
-
-		if (hc.stalled) {
-			hung |= engine->mask;
-			if (hc.action != ENGINE_DEAD)
-				stuck |= engine->mask;
-		}
-
-		if (hc.wedged)
-			wedged |= engine->mask;
-	}
-
-	if (GEM_SHOW_DEBUG() && (hung | stuck)) {
-		struct drm_printer p = drm_debug_printer("hangcheck");
-
-		for_each_engine(engine, gt->i915, id) {
-			if (intel_engine_is_idle(engine))
-				continue;
-
-			intel_engine_dump(engine, &p, "%s\n", engine->name);
-		}
-	}
-
-	if (wedged) {
-		dev_err(gt->i915->drm.dev,
-			"GPU recovery timed out,"
-			" cancelling all in-flight rendering.\n");
-		GEM_TRACE_DUMP();
-		intel_gt_set_wedged(gt);
-	}
-
-	if (hung)
-		hangcheck_declare_hang(gt, hung, stuck);
-
-	intel_runtime_pm_put(gt->uncore->rpm, wakeref);
-
-	/* Reset timer in case GPU hangs without another request being added */
-	intel_gt_queue_hangcheck(gt);
-}
-
-void intel_gt_queue_hangcheck(struct intel_gt *gt)
-{
-	unsigned long delay;
-
-	if (unlikely(!i915_modparams.enable_hangcheck))
-		return;
-
-	/*
-	 * Don't continually defer the hangcheck so that it is always run at
-	 * least once after work has been scheduled on any ring. Otherwise,
-	 * we will ignore a hung ring if a second ring is kept busy.
-	 */
-
-	delay = round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES);
-	queue_delayed_work(system_long_wq, &gt->hangcheck.work, delay);
-}
-
-void intel_engine_init_hangcheck(struct intel_engine_cs *engine)
-{
-	memset(&engine->hangcheck, 0, sizeof(engine->hangcheck));
-	engine->hangcheck.action_timestamp = jiffies;
-}
-
-void intel_gt_init_hangcheck(struct intel_gt *gt)
-{
-	INIT_DELAYED_WORK(&gt->hangcheck.work, hangcheck_elapsed);
-}
-
-#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
-#include "selftest_hangcheck.c"
-#endif
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 77445d100ca8..9d6451f9a855 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -1024,8 +1024,6 @@ void intel_gt_reset(struct intel_gt *gt,
 	if (ret)
 		goto taint;
 
-	intel_gt_queue_hangcheck(gt);
-
 finish:
 	reset_finish(gt, awake);
 unlock:
@@ -1353,4 +1351,5 @@ void __intel_fini_wedge(struct intel_wedge_me *w)
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_reset.c"
+#include "selftest_hangcheck.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 569a4105d49e..570546eda5e8 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -1686,7 +1686,6 @@ int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
 	};
 	struct intel_gt *gt = &i915->gt;
 	intel_wakeref_t wakeref;
-	bool saved_hangcheck;
 	int err;
 
 	if (!intel_has_gpu_reset(gt))
@@ -1696,12 +1695,9 @@ int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
 		return -EIO; /* we're long past hope of a successful reset */
 
 	wakeref = intel_runtime_pm_get(gt->uncore->rpm);
-	saved_hangcheck = fetch_and_zero(&i915_modparams.enable_hangcheck);
-	drain_delayed_work(&gt->hangcheck.work); /* flush param */
 
 	err = intel_gt_live_subtests(tests, gt);
 
-	i915_modparams.enable_hangcheck = saved_hangcheck;
 	intel_runtime_pm_put(gt->uncore->rpm, wakeref);
 
 	return err;
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index a541b6ae534f..2fae6e592b09 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1011,92 +1011,6 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 	return ret;
 }
 
-static void i915_instdone_info(struct drm_i915_private *dev_priv,
-			       struct seq_file *m,
-			       struct intel_instdone *instdone)
-{
-	const struct sseu_dev_info *sseu = &RUNTIME_INFO(dev_priv)->sseu;
-	int slice;
-	int subslice;
-
-	seq_printf(m, "\t\tINSTDONE: 0x%08x\n",
-		   instdone->instdone);
-
-	if (INTEL_GEN(dev_priv) <= 3)
-		return;
-
-	seq_printf(m, "\t\tSC_INSTDONE: 0x%08x\n",
-		   instdone->slice_common);
-
-	if (INTEL_GEN(dev_priv) <= 6)
-		return;
-
-	for_each_instdone_slice_subslice(dev_priv, sseu, slice, subslice)
-		seq_printf(m, "\t\tSAMPLER_INSTDONE[%d][%d]: 0x%08x\n",
-			   slice, subslice, instdone->sampler[slice][subslice]);
-
-	for_each_instdone_slice_subslice(dev_priv, sseu, slice, subslice)
-		seq_printf(m, "\t\tROW_INSTDONE[%d][%d]: 0x%08x\n",
-			   slice, subslice, instdone->row[slice][subslice]);
-}
-
-static int i915_hangcheck_info(struct seq_file *m, void *unused)
-{
-	struct drm_i915_private *i915 = node_to_i915(m->private);
-	struct intel_gt *gt = &i915->gt;
-	struct intel_engine_cs *engine;
-	intel_wakeref_t wakeref;
-	enum intel_engine_id id;
-
-	seq_printf(m, "Reset flags: %lx\n", gt->reset.flags);
-	if (test_bit(I915_WEDGED, &gt->reset.flags))
-		seq_puts(m, "\tWedged\n");
-	if (test_bit(I915_RESET_BACKOFF, &gt->reset.flags))
-		seq_puts(m, "\tDevice (global) reset in progress\n");
-
-	if (!i915_modparams.enable_hangcheck) {
-		seq_puts(m, "Hangcheck disabled\n");
-		return 0;
-	}
-
-	if (timer_pending(&gt->hangcheck.work.timer))
-		seq_printf(m, "Hangcheck active, timer fires in %dms\n",
-			   jiffies_to_msecs(gt->hangcheck.work.timer.expires -
-					    jiffies));
-	else if (delayed_work_pending(&gt->hangcheck.work))
-		seq_puts(m, "Hangcheck active, work pending\n");
-	else
-		seq_puts(m, "Hangcheck inactive\n");
-
-	seq_printf(m, "GT active? %s\n", yesno(gt->awake));
-
-	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-		for_each_engine(engine, i915, id) {
-			struct intel_instdone instdone;
-
-			seq_printf(m, "%s: %d ms ago\n",
-				   engine->name,
-				   jiffies_to_msecs(jiffies -
-						    engine->hangcheck.action_timestamp));
-
-			seq_printf(m, "\tACTHD = 0x%08llx [current 0x%08llx]\n",
-				   (long long)engine->hangcheck.acthd,
-				   intel_engine_get_active_head(engine));
-
-			intel_engine_get_instdone(engine, &instdone);
-
-			seq_puts(m, "\tinstdone read =\n");
-			i915_instdone_info(i915, m, &instdone);
-
-			seq_puts(m, "\tinstdone accu =\n");
-			i915_instdone_info(i915, m,
-					   &engine->hangcheck.instdone);
-		}
-	}
-
-	return 0;
-}
-
 static int ironlake_drpc_info(struct seq_file *m)
 {
 	struct drm_i915_private *i915 = node_to_i915(m->private);
@@ -4339,7 +4253,6 @@ static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_guc_stage_pool", i915_guc_stage_pool, 0},
 	{"i915_huc_load_status", i915_huc_load_status_info, 0},
 	{"i915_frequency_info", i915_frequency_info, 0},
-	{"i915_hangcheck_info", i915_hangcheck_info, 0},
 	{"i915_drpc_info", i915_drpc_info, 0},
 	{"i915_ring_freq_table", i915_ring_freq_table, 0},
 	{"i915_frontbuffer_tracking", i915_frontbuffer_tracking, 0},
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index f02a34722217..1dae43ed4c48 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1546,10 +1546,7 @@ void i915_driver_remove(struct drm_i915_private *i915)
 
 	i915_driver_modeset_remove(i915);
 
-	/* Free error state after interrupts are fully disabled. */
-	cancel_delayed_work_sync(&i915->gt.hangcheck.work);
 	i915_reset_error_state(i915);
-
 	i915_gem_driver_remove(i915);
 
 	intel_power_domains_driver_remove(i915);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c46b339064c0..19f5b0608a56 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1847,7 +1847,6 @@ void i915_driver_remove(struct drm_i915_private *i915);
 int i915_resume_switcheroo(struct drm_i915_private *i915);
 int i915_suspend_switcheroo(struct drm_i915_private *i915, pm_message_t state);
 
-void intel_engine_init_hangcheck(struct intel_engine_cs *engine);
 int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
 
 static inline bool intel_gvt_active(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 5cf4eed5add8..47239df653f2 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -534,10 +534,6 @@ static void error_print_engine(struct drm_i915_error_state_buf *m,
 	}
 	err_printf(m, "  ring->head: 0x%08x\n", ee->cpu_ring_head);
 	err_printf(m, "  ring->tail: 0x%08x\n", ee->cpu_ring_tail);
-	err_printf(m, "  hangcheck timestamp: %dms (%lu%s)\n",
-		   jiffies_to_msecs(ee->hangcheck_timestamp - epoch),
-		   ee->hangcheck_timestamp,
-		   ee->hangcheck_timestamp == epoch ? "; epoch" : "");
 	err_printf(m, "  engine reset count: %u\n", ee->reset_count);
 
 	for (n = 0; n < ee->num_ports; n++) {
@@ -679,11 +675,8 @@ static void __err_print_to_sgl(struct drm_i915_error_state_buf *m,
 	ts = ktime_to_timespec64(error->uptime);
 	err_printf(m, "Uptime: %lld s %ld us\n",
 		   (s64)ts.tv_sec, ts.tv_nsec / NSEC_PER_USEC);
-	err_printf(m, "Epoch: %lu jiffies (%u HZ)\n", error->epoch, HZ);
-	err_printf(m, "Capture: %lu jiffies; %d ms ago, %d ms after epoch\n",
-		   error->capture,
-		   jiffies_to_msecs(jiffies - error->capture),
-		   jiffies_to_msecs(error->capture - error->epoch));
+	err_printf(m, "Capture: %lu jiffies; %d ms ago\n",
+		   error->capture, jiffies_to_msecs(jiffies - error->capture));
 
 	for (ee = error->engine; ee; ee = ee->next)
 		err_printf(m, "Active process (on ring %s): %s [%d]\n",
@@ -742,7 +735,7 @@ static void __err_print_to_sgl(struct drm_i915_error_state_buf *m,
 		err_printf(m, "GTT_CACHE_EN: 0x%08x\n", error->gtt_cache);
 
 	for (ee = error->engine; ee; ee = ee->next)
-		error_print_engine(m, ee, error->epoch);
+		error_print_engine(m, ee, error->capture);
 
 	for (ee = error->engine; ee; ee = ee->next) {
 		const struct drm_i915_error_object *obj;
@@ -770,7 +763,7 @@ static void __err_print_to_sgl(struct drm_i915_error_state_buf *m,
 			for (j = 0; j < ee->num_requests; j++)
 				error_print_request(m, " ",
 						    &ee->requests[j],
-						    error->epoch);
+						    error->capture);
 		}
 
 		print_error_obj(m, ee->engine, "ringbuffer", ee->ringbuffer);
@@ -1144,8 +1137,6 @@ static void error_record_engine_registers(struct i915_gpu_state *error,
 	}
 
 	ee->idle = intel_engine_is_idle(engine);
-	if (!ee->idle)
-		ee->hangcheck_timestamp = engine->hangcheck.action_timestamp;
 	ee->reset_count = i915_reset_engine_count(&dev_priv->gpu_error,
 						  engine);
 
@@ -1657,20 +1648,6 @@ static void capture_params(struct i915_gpu_state *error)
 	i915_params_copy(&error->params, &i915_modparams);
 }
 
-static unsigned long capture_find_epoch(const struct i915_gpu_state *error)
-{
-	const struct drm_i915_error_engine *ee;
-	unsigned long epoch = error->capture;
-
-	for (ee = error->engine; ee; ee = ee->next) {
-		if (ee->hangcheck_timestamp &&
-		    time_before(ee->hangcheck_timestamp, epoch))
-			epoch = ee->hangcheck_timestamp;
-	}
-
-	return epoch;
-}
-
 static void capture_finish(struct i915_gpu_state *error)
 {
 	struct i915_ggtt *ggtt = &error->i915->ggtt;
@@ -1722,8 +1699,6 @@ i915_capture_gpu_state(struct drm_i915_private *i915)
 	error->overlay = intel_overlay_capture_error_state(i915);
 	error->display = intel_display_capture_error_state(i915);
 
-	error->epoch = capture_find_epoch(error);
-
 	capture_finish(error);
 	compress_fini(&compress);
 
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h
index 7f1cd0b1fef7..4dc36d6ee3a2 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.h
+++ b/drivers/gpu/drm/i915/i915_gpu_error.h
@@ -34,7 +34,6 @@ struct i915_gpu_state {
 	ktime_t boottime;
 	ktime_t uptime;
 	unsigned long capture;
-	unsigned long epoch;
 
 	struct drm_i915_private *i915;
 
@@ -86,7 +85,6 @@ struct i915_gpu_state {
 
 		/* Software tracked state */
 		bool idle;
-		unsigned long hangcheck_timestamp;
 		int num_requests;
 		u32 reset_count;
 
diff --git a/drivers/gpu/drm/i915/i915_priolist_types.h b/drivers/gpu/drm/i915/i915_priolist_types.h
index ae8bb3cb627e..732aad148881 100644
--- a/drivers/gpu/drm/i915/i915_priolist_types.h
+++ b/drivers/gpu/drm/i915/i915_priolist_types.h
@@ -16,6 +16,12 @@ enum {
 	I915_PRIORITY_MIN = I915_CONTEXT_MIN_USER_PRIORITY - 1,
 	I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY,
 	I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1,
+
+	/* A preemptive pulse used to monitor the health of each engine */
+	I915_PRIORITY_HEARTBEAT,
+
+	/* Interactive workload, scheduled for immediate pageflipping */
+	I915_PRIORITY_DISPLAY,
 };
 
 #define I915_USER_PRIORITY_SHIFT 2
-- 
2.23.0


* [PATCH 14/15] drm/i915: Flush idle barriers when waiting
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (11 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 13/15] drm/i915: Replace hangcheck by heartbeats Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14  9:07 ` [PATCH 15/15] drm/i915/execlist: Trim immediate timeslice expiry Chris Wilson
  2019-10-14 16:15 ` ✗ Fi.CI.BUILD: failure for series starting with [01/15] drm/i915/display: Squelch kerneldoc warnings Patchwork
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

If, while waiting upon an i915_active, we find it still holds an idle
barrier, attempt to flush that barrier by emitting a pulse using the
kernel context.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 14 +++++++++++++
 .../gpu/drm/i915/gt/intel_engine_heartbeat.h  |  1 +
 drivers/gpu/drm/i915/i915_active.c            | 21 +++++++++++++++++--
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index dc741e86fb41..f02f660479fa 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -182,3 +182,17 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 	intel_engine_pm_put(engine);
 	return err;
 }
+
+int intel_engine_flush_barriers(struct intel_engine_cs *engine)
+{
+	struct i915_request *rq;
+
+	rq = i915_request_create(engine->kernel_context);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	idle_pulse(engine, rq);
+	i915_request_add(rq);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
index 39391004554d..0c1ad0fc091d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
@@ -15,5 +15,6 @@ void intel_engine_park_heartbeat(struct intel_engine_cs *engine);
 void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine);
 
 int intel_engine_pulse(struct intel_engine_cs *engine);
+int intel_engine_flush_barriers(struct intel_engine_cs *engine);
 
 #endif /* INTEL_ENGINE_HEARTBEAT_H */
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index aa37c07004b9..98d5fe1c7e19 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -6,6 +6,7 @@
 
 #include <linux/debugobjects.h>
 
+#include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_engine_pm.h"
 
 #include "i915_drv.h"
@@ -435,6 +436,21 @@ static void enable_signaling(struct i915_active_fence *active)
 	dma_fence_put(fence);
 }
 
+static int flush_barrier(struct active_node *it)
+{
+	struct intel_engine_cs *engine;
+
+	if (!is_barrier(&it->base))
+		return 0;
+
+	engine = __barrier_to_engine(it);
+	smp_rmb(); /* serialise with add_active_barriers */
+	if (!is_barrier(&it->base))
+		return 0;
+
+	return intel_engine_flush_barriers(engine);
+}
+
 int i915_active_wait(struct i915_active *ref)
 {
 	struct active_node *it, *n;
@@ -448,8 +464,9 @@ int i915_active_wait(struct i915_active *ref)
 	/* Flush lazy signals */
 	enable_signaling(&ref->excl);
 	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
-		if (is_barrier(&it->base)) /* unconnected idle barrier */
-			continue;
+		err = flush_barrier(it); /* unconnected idle barrier? */
+		if (err)
+			break;
 
 		enable_signaling(&it->base);
 	}
-- 
2.23.0


* [PATCH 15/15] drm/i915/execlist: Trim immediate timeslice expiry
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (12 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 14/15] drm/i915: Flush idle barriers when waiting Chris Wilson
@ 2019-10-14  9:07 ` Chris Wilson
  2019-10-14 16:15 ` ✗ Fi.CI.BUILD: failure for series starting with [01/15] drm/i915/display: Squelch kerneldoc warnings Patchwork
  14 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14  9:07 UTC (permalink / raw)
  To: intel-gfx

We start timeslicing immediately upon receipt of a request that may be
put into the second ELSP slot. The idea behind this was that, since we
did not install the timer while the second ELSP slot was empty, we had
no idea how long ELSP[0] had been running, and so giving the newcomer
an immediate chance on the GPU seemed fair. However, this causes us
extra busy work that we can avoid if we simply wait a jiffy for the
first timeslice to expire as normal.
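
The timer_expired()/cancel_timer() helpers used here are assumed to
behave roughly as below (a sketch of the intended semantics only, not
part of this patch's diff):

static inline bool timer_expired(const struct timer_list *t)
{
	/* armed at some point (expires != 0) and no longer pending */
	return READ_ONCE(t->expires) && !timer_pending(t);
}

static inline void cancel_timer(struct timer_list *t)
{
	if (!READ_ONCE(t->expires))
		return;

	del_timer(t);
	WRITE_ONCE(t->expires, 0); /* mark as disarmed for timer_expired() */
}

With those semantics, dequeue only grants a timeslice once the timer
has actually fired, rather than whenever it merely happens not to be
queued, and process_csb() disarms the timer whenever timeslicing is not
needed.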

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index b76b35194114..fd1803aa8c21 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1608,7 +1608,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			last->hw_context->lrc_desc |= CTX_DESC_FORCE_RESTORE;
 			last = NULL;
 		} else if (need_timeslice(engine, last) &&
-			   !timer_pending(&engine->execlists.timer)) {
+			   timer_expired(&engine->execlists.timer)) {
 			GEM_TRACE("%s: expired last=%llx:%lld, prio=%d, hint=%d\n",
 				  engine->name,
 				  last->fence.context,
@@ -2063,6 +2063,8 @@ static void process_csb(struct intel_engine_cs *engine)
 
 			if (enable_timeslice(execlists))
 				mod_timer(&execlists->timer, jiffies + 1);
+			else
+				cancel_timer(&execlists->timer);
 
 			WRITE_ONCE(execlists->pending[0], NULL);
 		} else {
-- 
2.23.0


* Re: [PATCH 06/15] drm/i915/selftests: Check known register values within the context
  2019-10-14  9:07 ` [PATCH 06/15] drm/i915/selftests: Check known register values within the context Chris Wilson
@ 2019-10-14  9:59   ` Tvrtko Ursulin
  2019-10-14 10:06     ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14  9:59 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> Check the logical ring context by asserting that the registers hold
> expected start during execution. (It's a bit chicken-and-egg for how
> could we manage to execute our request if the registers were not being
> updated. Still, it's nice to verify that the HW is working as expected.)
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/selftest_lrc.c | 126 +++++++++++++++++++++++++
>   1 file changed, 126 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index a691e429ca01..0aa36b1b2389 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -2599,10 +2599,136 @@ static int live_lrc_layout(void *arg)
>   	return err;
>   }
>   
> +static int __live_lrc_state(struct i915_gem_context *fixme,
> +			    struct intel_engine_cs *engine,
> +			    struct i915_vma *scratch)
> +{
> +	struct intel_context *ce;
> +	struct i915_request *rq;
> +	enum {
> +		RING_START_IDX = 0,
> +		RING_TAIL_IDX,
> +		MAX_IDX
> +	};
> +	u32 expected[MAX_IDX];
> +	u32 *cs;
> +	int err;
> +	int n;
> +
> +	ce = intel_context_create(fixme, engine);

Calling the context "fixme" IMO just makes the code less readable.

> +	if (IS_ERR(ce))
> +		return PTR_ERR(ce);
> +
> +	err = intel_context_pin(ce);
> +	if (err)
> +		goto err_put;
> +
> +	rq = i915_request_create(ce);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto err_unpin;
> +	}
> +
> +	cs = intel_ring_begin(rq, 4 * MAX_IDX);
> +	if (IS_ERR(cs)) {
> +		err = PTR_ERR(cs);
> +		i915_request_add(rq);
> +		goto err_unpin;
> +	}
> +
> +	*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
> +	*cs++ = i915_mmio_reg_offset(RING_START(engine->mmio_base));
> +	*cs++ = i915_ggtt_offset(scratch) + RING_START_IDX * sizeof(u32);
> +	*cs++ = 0;
> +
> +	expected[RING_START_IDX] = i915_ggtt_offset(ce->ring->vma);
> +
> +	*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
> +	*cs++ = i915_mmio_reg_offset(RING_TAIL(engine->mmio_base));
> +	*cs++ = i915_ggtt_offset(scratch) + RING_TAIL_IDX * sizeof(u32);
> +	*cs++ = 0;
> +
> +	i915_request_get(rq);
> +	i915_request_add(rq);
> +
> +	intel_engine_flush_submission(engine);
> +	expected[RING_TAIL_IDX] = ce->ring->tail;
> +
> +	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
> +		err = -ETIME;
> +		goto err_rq;
> +	}
> +
> +	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
> +	if (IS_ERR(cs)) {
> +		err = PTR_ERR(cs);
> +		goto err_rq;
> +	}
> +
> +	for (n = 0; n < MAX_IDX; n++) {
> +		if (cs[n] != expected[n]) {
> +			pr_err("%s: Stored register[%d] value[0x%x] did not match expected[0x%x]\n",
> +			       engine->name, n, cs[n], expected[n]);
> +			err = -EINVAL;
> +			break;
> +		}
> +	}
> +
> +	i915_gem_object_unpin_map(scratch->obj);
> +
> +err_rq:
> +	i915_request_put(rq);
> +err_unpin:
> +	intel_context_unpin(ce);
> +err_put:
> +	intel_context_put(ce);
> +	return err;
> +}
> +
> +static int live_lrc_state(void *arg)
> +{
> +	struct intel_gt *gt = arg;
> +	struct intel_engine_cs *engine;
> +	struct i915_gem_context *fixme;
> +	struct i915_vma *scratch;
> +	enum intel_engine_id id;
> +	int err = 0;
> +
> +	/*
> +	 * Check the live register state matches what we expect for this
> +	 * intel_context.
> +	 */
> +
> +	fixme = kernel_context(gt->i915);
> +	if (!fixme)
> +		return -ENOMEM;
> +
> +	scratch = create_scratch(gt);
> +	if (IS_ERR(scratch)) {
> +		err = PTR_ERR(scratch);
> +		goto out_close;
> +	}
> +
> +	for_each_engine(engine, gt->i915, id) {
> +		err = __live_lrc_state(fixme, engine, scratch);
> +		if (err)
> +			break;
> +	}
> +
> +	if (igt_flush_test(gt->i915))
> +		err = -EIO;
> +
> +	i915_vma_unpin_and_release(&scratch, 0);
> +out_close:
> +	kernel_context_close(fixme);
> +	return err;
> +}
> +
>   int intel_lrc_live_selftests(struct drm_i915_private *i915)
>   {
>   	static const struct i915_subtest tests[] = {
>   		SUBTEST(live_lrc_layout),
> +		SUBTEST(live_lrc_state),
>   	};
>   
>   	if (!HAS_LOGICAL_RING_CONTEXTS(i915))
> 

I don't know... I guess it has some extra value compared to basic MI_NOOP
tests.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [PATCH 06/15] drm/i915/selftests: Check known register values within the context
  2019-10-14  9:59   ` Tvrtko Ursulin
@ 2019-10-14 10:06     ` Chris Wilson
  0 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 10:06 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 10:59:44)
> 
> On 14/10/2019 10:07, Chris Wilson wrote:
> > +static int __live_lrc_state(struct i915_gem_context *fixme,
> > +                         struct intel_engine_cs *engine,
> > +                         struct i915_vma *scratch)
> > +{
> > +     struct intel_context *ce;
> > +     struct i915_request *rq;
> > +     enum {
> > +             RING_START_IDX = 0,
> > +             RING_TAIL_IDX,
> > +             MAX_IDX
> > +     };
> > +     u32 expected[MAX_IDX];
> > +     u32 *cs;
> > +     int err;
> > +     int n;
> > +
> > +     ce = intel_context_create(fixme, engine);
> 
> Calling the context fixme imo just makes the code less readable.

If you review some other patches, I wouldn't need a GEM context here :-p
Vacations!

> >   int intel_lrc_live_selftests(struct drm_i915_private *i915)
> >   {
> >       static const struct i915_subtest tests[] = {
> >               SUBTEST(live_lrc_layout),
> > +             SUBTEST(live_lrc_state),
> >       };
> >   
> >       if (!HAS_LOGICAL_RING_CONTEXTS(i915))
> > 
> 
> I don't know.. guess it has some extra value compared to basic MI_NOOP 
> tests.

Yeah, it's very much chicken and egg. I like the idea of confirming that
the lrc we execute is the same as we set up - but that's kind of implied
by successful execution (or so one would assume). I regard it as mostly
a placeholder, a seed of a test idea, and plan to add more state checks
as soon as I think of some more reliable checks. Or anyone else for that
matter. :) I did think that verifying the vm matches the ce->vm would be
a good addition.
-Chris

* Re: [PATCH 07/15] drm/i915/selftests: Check that GPR are cleared for new contexts
  2019-10-14  9:07 ` [PATCH 07/15] drm/i915/selftests: Check that GPR are cleared for new contexts Chris Wilson
@ 2019-10-14 10:08   ` Tvrtko Ursulin
  0 siblings, 0 replies; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 10:08 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> We want the general purpose registers to be clear in all new contexts so
> that we can be confident that no information is leaked from one to the
> next.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/selftest_lrc.c | 185 ++++++++++++++++++++++---
>   1 file changed, 166 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index 0aa36b1b2389..1276da059dc6 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -19,6 +19,9 @@
>   #include "gem/selftests/igt_gem_utils.h"
>   #include "gem/selftests/mock_context.h"
>   
> +#define CS_GPR(engine, n) ((engine)->mmio_base + 0x600 + (n) * 4)
> +#define NUM_GPR_DW (16 * 2) /* each GPR is 2 dwords */
> +
>   static struct i915_vma *create_scratch(struct intel_gt *gt)
>   {
>   	struct drm_i915_gem_object *obj;
> @@ -2107,16 +2110,14 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
>   				    struct intel_engine_cs **siblings,
>   				    unsigned int nsibling)
>   {
> -#define CS_GPR(engine, n) ((engine)->mmio_base + 0x600 + (n) * 4)
> -
>   	struct i915_request *last = NULL;
>   	struct i915_gem_context *ctx;
>   	struct intel_context *ve;
>   	struct i915_vma *scratch;
>   	struct igt_live_test t;
> -	const int num_gpr = 16 * 2; /* each GPR is 2 dwords */
>   	unsigned int n;
>   	int err = 0;
> +	u32 *cs;
>   
>   	ctx = kernel_context(i915);
>   	if (!ctx)
> @@ -2142,10 +2143,9 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
>   	if (err)
>   		goto out_unpin;
>   
> -	for (n = 0; n < num_gpr; n++) {
> +	for (n = 0; n < NUM_GPR_DW; n++) {
>   		struct intel_engine_cs *engine = siblings[n % nsibling];
>   		struct i915_request *rq;
> -		u32 *cs;
>   
>   		rq = i915_request_create(ve);
>   		if (IS_ERR(rq)) {
> @@ -2169,7 +2169,7 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
>   		*cs++ = 0;
>   
>   		*cs++ = MI_LOAD_REGISTER_IMM(1);
> -		*cs++ = CS_GPR(engine, (n + 1) % num_gpr);
> +		*cs++ = CS_GPR(engine, (n + 1) % NUM_GPR_DW);
>   		*cs++ = n + 1;
>   
>   		*cs++ = MI_NOOP;
> @@ -2182,21 +2182,26 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
>   
>   	if (i915_request_wait(last, 0, HZ / 5) < 0) {
>   		err = -ETIME;
> -	} else {
> -		u32 *map = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
> +		goto out_end;
> +	}
>   
> -		for (n = 0; n < num_gpr; n++) {
> -			if (map[n] != n) {
> -				pr_err("Incorrect value[%d] found for GPR[%d]\n",
> -				       map[n], n);
> -				err = -EINVAL;
> -				break;
> -			}
> -		}
> +	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
> +	if (IS_ERR(cs)) {
> +		err = PTR_ERR(cs);
> +		goto out_end;
> +	}
>   
> -		i915_gem_object_unpin_map(scratch->obj);
> +	for (n = 0; n < NUM_GPR_DW; n++) {
> +		if (cs[n] != n) {
> +			pr_err("Incorrect value[%d] found for GPR[%d]\n",
> +			       cs[n], n);
> +			err = -EINVAL;
> +			break;
> +		}
>   	}

Grrrrumble.

>   
> +	i915_gem_object_unpin_map(scratch->obj);
> +
>   out_end:
>   	if (igt_live_test_end(&t))
>   		err = -EIO;
> @@ -2210,8 +2215,6 @@ static int preserved_virtual_engine(struct drm_i915_private *i915,
>   out_close:
>   	kernel_context_close(ctx);
>   	return err;
> -
> -#undef CS_GPR
>   }
>   
>   static int live_virtual_preserved(void *arg)
> @@ -2724,11 +2727,155 @@ static int live_lrc_state(void *arg)
>   	return err;
>   }
>   
> +static int gpr_make_dirty(struct intel_engine_cs *engine)
> +{
> +	struct i915_request *rq;
> +	u32 *cs;
> +	int n;
> +
> +	rq = i915_request_create(engine->kernel_context);
> +	if (IS_ERR(rq))
> +		return PTR_ERR(rq);
> +
> +	cs = intel_ring_begin(rq, 2 * NUM_GPR_DW + 2);
> +	if (IS_ERR(cs)) {
> +		i915_request_add(rq);
> +		return PTR_ERR(cs);
> +	}
> +
> +	*cs++ = MI_LOAD_REGISTER_IMM(NUM_GPR_DW);
> +	for (n = 0; n < NUM_GPR_DW; n++) {
> +		*cs++ = CS_GPR(engine, n);
> +		*cs++ = STACK_MAGIC;
> +	}
> +	*cs++ = MI_NOOP;
> +
> +	intel_ring_advance(rq, cs);
> +	i915_request_add(rq);
> +
> +	return 0;
> +}
> +
> +static int __live_gpr_clear(struct i915_gem_context *fixme,
> +			    struct intel_engine_cs *engine,
> +			    struct i915_vma *scratch)
> +{
> +	struct intel_context *ce;
> +	struct i915_request *rq;
> +	u32 *cs;
> +	int err;
> +	int n;
> +
> +	if (INTEL_GEN(engine->i915) < 9 && engine->class != RENDER_CLASS)
> +		return 0; /* GPR only on rcs0 for gen8 */
> +
> +	err = gpr_make_dirty(engine);
> +	if (err)
> +		return err;
> +
> +	ce = intel_context_create(fixme, engine);
> +	if (IS_ERR(ce))
> +		return PTR_ERR(ce);
> +
> +	rq = intel_context_create_request(ce);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto err_put;
> +	}
> +
> +	cs = intel_ring_begin(rq, 4 * NUM_GPR_DW);
> +	if (IS_ERR(cs)) {
> +		err = PTR_ERR(cs);
> +		i915_request_add(rq);
> +		goto err_put;
> +	}
> +
> +	for (n = 0; n < NUM_GPR_DW; n++) {
> +		*cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
> +		*cs++ = CS_GPR(engine, n);
> +		*cs++ = i915_ggtt_offset(scratch) + n * sizeof(u32);
> +		*cs++ = 0;
> +	}
> +
> +	i915_request_get(rq);
> +	i915_request_add(rq);
> +
> +	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
> +		err = -ETIME;
> +		goto err_rq;
> +	}
> +
> +	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
> +	if (IS_ERR(cs)) {
> +		err = PTR_ERR(cs);
> +		goto err_rq;
> +	}
> +
> +	for (n = 0; n < NUM_GPR_DW; n++) {
> +		if (cs[n]) {
> +			pr_err("%s: GPR[%d].%s was not zero, found 0x%08x!\n",
> +			       engine->name,
> +			       n / 2, n & 1 ? "udw" : "ldw",
> +			       cs[n]);
> +			err = -EINVAL;
> +			break;
> +		}
> +	}
> +
> +	i915_gem_object_unpin_map(scratch->obj);
> +
> +err_rq:
> +	i915_request_put(rq);
> +err_put:
> +	intel_context_put(ce);
> +	return err;
> +}
> +
> +static int live_gpr_clear(void *arg)
> +{
> +	struct intel_gt *gt = arg;
> +	struct intel_engine_cs *engine;
> +	struct i915_gem_context *fixme;
> +	struct i915_vma *scratch;
> +	enum intel_engine_id id;
> +	int err = 0;
> +
> +	/*
> +	 * Check that GPR registers are cleared in new contexts as we need
> +	 * to avoid leaking any information from previous contexts.
> +	 */
> +
> +	fixme = kernel_context(gt->i915);
> +	if (!fixme)
> +		return -ENOMEM;
> +
> +	scratch = create_scratch(gt);
> +	if (IS_ERR(scratch)) {
> +		err = PTR_ERR(scratch);
> +		goto out_close;
> +	}
> +
> +	for_each_engine(engine, gt->i915, id) {
> +		err = __live_gpr_clear(fixme, engine, scratch);
> +		if (err)
> +			break;
> +	}
> +
> +	if (igt_flush_test(gt->i915))
> +		err = -EIO;
> +
> +	i915_vma_unpin_and_release(&scratch, 0);
> +out_close:
> +	kernel_context_close(fixme);
> +	return err;
> +}
> +
>   int intel_lrc_live_selftests(struct drm_i915_private *i915)
>   {
>   	static const struct i915_subtest tests[] = {
>   		SUBTEST(live_lrc_layout),
>   		SUBTEST(live_lrc_state),
> +		SUBTEST(live_gpr_clear),
>   	};
>   
>   	if (!HAS_LOGICAL_RING_CONTEXTS(i915))
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [PATCH 08/15] drm/i915: Expose engine properties via sysfs
  2019-10-14  9:07 ` [PATCH 08/15] drm/i915: Expose engine properties via sysfs Chris Wilson
@ 2019-10-14 10:17   ` Tvrtko Ursulin
  2019-10-14 10:27     ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 10:17 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> Preliminary stub to add engines underneath /sys/class/drm/cardN/, so
> that we can expose properties on each engine to the sysadmin.
> 
> To start with we have basic analogues of the i915_query ioctl so that we
> can pretty print engine discovery from the shell, and flesh out the
> directory structure. Later we will add writeable sysadmin properties such
> as per-engine timeout controls.
> 
> An example tree of the engine properties on Braswell:
>      /sys/class/drm/card0
>      └── engine
>          ├── bcs0
>          │   ├── capabilities
>          │   ├── class
>          │   ├── instance
>          │   ├── known_capabilities
>          │   └── name
>          ├── rcs0
>          │   ├── capabilities
>          │   ├── class
>          │   ├── instance
>          │   ├── known_capabilities
>          │   └── name
>          ├── vcs0
>          │   ├── capabilities
>          │   ├── class
>          │   ├── instance
>          │   ├── known_capabilities
>          │   └── name
>          └── vecs0
>              ├── capabilities
>              ├── class
>              ├── instance
>              ├── known_capabilities
>              └── name
> 
> v2: Include stringified capabilities
> v3: Include all known capabilities for futureproofing.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>   drivers/gpu/drm/i915/Makefile                |   3 +-
>   drivers/gpu/drm/i915/gt/intel_engine_sysfs.c | 223 +++++++++++++++++++
>   drivers/gpu/drm/i915/gt/intel_engine_sysfs.h |  14 ++
>   drivers/gpu/drm/i915/i915_sysfs.c            |   3 +
>   4 files changed, 242 insertions(+), 1 deletion(-)
>   create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
>   create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_sysfs.h
> 
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index e791d9323b51..cd9a10ba2516 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -78,8 +78,9 @@ gt-y += \
>   	gt/intel_breadcrumbs.o \
>   	gt/intel_context.o \
>   	gt/intel_engine_cs.o \
> -	gt/intel_engine_pool.o \
>   	gt/intel_engine_pm.o \
> +	gt/intel_engine_pool.o \
> +	gt/intel_engine_sysfs.o \
>   	gt/intel_engine_user.o \
>   	gt/intel_gt.o \
>   	gt/intel_gt_irq.o \
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
> new file mode 100644
> index 000000000000..4e403a1b069a
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
> @@ -0,0 +1,223 @@
> +/*
> + * SPDX-License-Identifier: MIT
> + *
> + * Copyright © 2019 Intel Corporation
> + */
> +
> +#include <linux/kobject.h>
> +#include <linux/sysfs.h>
> +
> +#include "i915_drv.h"
> +#include "intel_engine.h"
> +#include "intel_engine_sysfs.h"
> +
> +struct kobj_engine {
> +	struct kobject base;
> +	struct intel_engine_cs *engine;
> +};
> +
> +static struct intel_engine_cs *kobj_to_engine(struct kobject *kobj)
> +{
> +	return container_of(kobj, struct kobj_engine, base)->engine;
> +}
> +
> +static ssize_t
> +name_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "%s\n", kobj_to_engine(kobj)->name);
> +}
> +
> +static struct kobj_attribute name_attr =
> +__ATTR(name, 0444, name_show, NULL);
> +
> +static ssize_t
> +class_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "%d\n", kobj_to_engine(kobj)->uabi_class);
> +}
> +
> +static struct kobj_attribute class_attr =
> +__ATTR(class, 0444, class_show, NULL);
> +
> +static ssize_t
> +inst_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "%d\n", kobj_to_engine(kobj)->uabi_instance);
> +}
> +
> +static struct kobj_attribute inst_attr =
> +__ATTR(instance, 0444, inst_show, NULL);
> +
> +static const char * const vcs_caps[] = {
> +	[ilog2(I915_VIDEO_CLASS_CAPABILITY_HEVC)] = "hevc",
> +	[ilog2(I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC)] = "sfc",
> +};
> +static const char * const vecs_caps[] = {
> +	[ilog2(I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC)] = "sfc",
> +};
> +
> +static ssize_t repr_trim(char *buf, ssize_t len)
> +{
> +	/* Trim off the trailing space and replace with a newline */
> +	if (len > PAGE_SIZE)
> +		len = PAGE_SIZE;
> +	if (len > 0)
> +		buf[len - 1] = '\n';
> +
> +	return len;
> +}
> +
> +static ssize_t
> +caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	struct intel_engine_cs *engine = kobj_to_engine(kobj);
> +	const char * const *repr;
> +	int num_repr, n;
> +	ssize_t len;
> +
> +	switch (engine->class) {
> +	case VIDEO_DECODE_CLASS:
> +		repr = vcs_caps;
> +		num_repr = ARRAY_SIZE(vcs_caps);
> +		break;
> +
> +	case VIDEO_ENHANCEMENT_CLASS:
> +		repr = vecs_caps;
> +		num_repr = ARRAY_SIZE(vecs_caps);
> +		break;
> +
> +	default:
> +		repr = NULL;
> +		num_repr = 0;
> +		break;
> +	}
> +
> +	len = 0;
> +	for_each_set_bit(n,
> +			 (unsigned long *)&engine->uabi_capabilities,
> +			 BITS_PER_TYPE(typeof(engine->uabi_capabilities))) {
> +		if (GEM_WARN_ON(n >= num_repr || !repr[n]))
> +			len += snprintf(buf + len, PAGE_SIZE - len,
> +					"[%x] ", n);
> +		else
> +			len += snprintf(buf + len, PAGE_SIZE - len,
> +					"%s ", repr[n]);
> +		if (GEM_WARN_ON(len >= PAGE_SIZE))
> +			break;
> +	}
> +	return repr_trim(buf, len);
> +}
> +
> +static struct kobj_attribute caps_attr =
> +__ATTR(capabilities, 0444, caps_show, NULL);
> +
> +static ssize_t
> +all_caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	struct intel_engine_cs *engine = kobj_to_engine(kobj);
> +	const char * const *repr;
> +	int num_repr, n;
> +	ssize_t len;
> +
> +	switch (engine->class) {
> +	case VIDEO_DECODE_CLASS:
> +		repr = vcs_caps;
> +		num_repr = ARRAY_SIZE(vcs_caps);
> +		break;
> +
> +	case VIDEO_ENHANCEMENT_CLASS:
> +		repr = vecs_caps;
> +		num_repr = ARRAY_SIZE(vecs_caps);
> +		break;
> +
> +	default:
> +		repr = NULL;
> +		num_repr = 0;
> +		break;
> +	}
> +
> +	len = 0;
> +	for (n = 0; n < num_repr; n++) {
> +		if (!repr[n])
> +			continue;
> +
> +		len += snprintf(buf + len, PAGE_SIZE - len, "%s ", repr[n]);
> +		if (GEM_WARN_ON(len >= PAGE_SIZE))
> +			break;
> +	}
> +	return repr_trim(buf, len);
> +}

Would it be worth consolidating like:

__caps_show(engine, buf, len, mask, warn_unknown)
{
...
	switch(engine->class...
...
	for_each_set_bit(...mask...) {
		if (n >= repr || !repr[n]) {
			if (warn_unknown)
				GEM_WARN_ON(...);
			else
				len += snprinf...
		} else {
			len += snprintf...
		}
	}
...
}

caps_show()
{
...
	len = __caps_show(engine, buf, len, engine->uabi_capabilities, true);
...
}

all_caps_show()
{
...
	len = __caps_show(engine, buf, len, ~0, false);
...
}

Regards,

Tvrtko

> +
> +static struct kobj_attribute all_caps_attr =
> +__ATTR(known_capabilities, 0444, all_caps_show, NULL);
> +
> +static void kobj_engine_release(struct kobject *kobj)
> +{
> +	kfree(kobj);
> +}
> +
> +static struct kobj_type kobj_engine_type = {
> +	.release = kobj_engine_release,
> +	.sysfs_ops = &kobj_sysfs_ops
> +};
> +
> +static struct kobject *
> +kobj_engine(struct kobject *dir, struct intel_engine_cs *engine)
> +{
> +	struct kobj_engine *ke;
> +
> +	ke = kzalloc(sizeof(*ke), GFP_KERNEL);
> +	if (!ke)
> +		return NULL;
> +
> +	kobject_init(&ke->base, &kobj_engine_type);
> +	ke->engine = engine;
> +
> +	if (kobject_add(&ke->base, dir, "%s", engine->name)) {
> +		kobject_put(&ke->base);
> +		return NULL;
> +	}
> +
> +	/* xfer ownership to sysfs tree */
> +	return &ke->base;
> +}
> +
> +void intel_engines_add_sysfs(struct drm_i915_private *i915)
> +{
> +	static const struct attribute *files[] = {
> +		&name_attr.attr,
> +		&class_attr.attr,
> +		&inst_attr.attr,
> +		&caps_attr.attr,
> +		&all_caps_attr.attr,
> +		NULL
> +	};
> +
> +	struct device *kdev = i915->drm.primary->kdev;
> +	struct intel_engine_cs *engine;
> +	struct kobject *dir;
> +
> +	dir = kobject_create_and_add("engine", &kdev->kobj);
> +	if (!dir)
> +		return;
> +
> +	for_each_uabi_engine(engine, i915) {
> +		struct kobject *kobj;
> +
> +		kobj = kobj_engine(dir, engine);
> +		if (!kobj)
> +			goto err_engine;
> +
> +		if (sysfs_create_files(kobj, files))
> +			goto err_object;
> +
> +		if (0) {
> +err_object:
> +			kobject_put(kobj);
> +err_engine:
> +			dev_err(kdev, "Failed to add sysfs engine '%s'\n",
> +				engine->name);
> +			break;
> +		}
> +	}
> +}
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.h b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.h
> new file mode 100644
> index 000000000000..ef44a745b70a
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.h
> @@ -0,0 +1,14 @@
> +/*
> + * SPDX-License-Identifier: MIT
> + *
> + * Copyright © 2019 Intel Corporation
> + */
> +
> +#ifndef INTEL_ENGINE_SYSFS_H
> +#define INTEL_ENGINE_SYSFS_H
> +
> +struct drm_i915_private;
> +
> +void intel_engines_add_sysfs(struct drm_i915_private *i915);
> +
> +#endif /* INTEL_ENGINE_SYSFS_H */
> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> index bf039b8ba593..7b665f69f301 100644
> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> @@ -30,6 +30,7 @@
>   #include <linux/stat.h>
>   #include <linux/sysfs.h>
>   
> +#include "gt/intel_engine_sysfs.h"
>   #include "gt/intel_rc6.h"
>   
>   #include "i915_drv.h"
> @@ -616,6 +617,8 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
>   		DRM_ERROR("RPS sysfs setup failed\n");
>   
>   	i915_setup_error_capture(kdev);
> +
> +	intel_engines_add_sysfs(dev_priv);
>   }
>   
>   void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
> 

* Re: [PATCH 08/15] drm/i915: Expose engine properties via sysfs
  2019-10-14 10:17   ` Tvrtko Ursulin
@ 2019-10-14 10:27     ` Chris Wilson
  0 siblings, 0 replies; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 10:27 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 11:17:54)
> 
> On 14/10/2019 10:07, Chris Wilson wrote:
> > +static ssize_t
> > +caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> > +{
> > +     struct intel_engine_cs *engine = kobj_to_engine(kobj);
> > +     const char * const *repr;
> > +     int num_repr, n;
> > +     ssize_t len;
> > +
> > +     switch (engine->class) {
> > +     case VIDEO_DECODE_CLASS:
> > +             repr = vcs_caps;
> > +             num_repr = ARRAY_SIZE(vcs_caps);
> > +             break;
> > +
> > +     case VIDEO_ENHANCEMENT_CLASS:
> > +             repr = vecs_caps;
> > +             num_repr = ARRAY_SIZE(vecs_caps);
> > +             break;
> > +
> > +     default:
> > +             repr = NULL;
> > +             num_repr = 0;
> > +             break;
> > +     }
> > +
> > +     len = 0;
> > +     for_each_set_bit(n,
> > +                      (unsigned long *)&engine->uabi_capabilities,
> > +                      BITS_PER_TYPE(typeof(engine->uabi_capabilities))) {
> > +             if (GEM_WARN_ON(n >= num_repr || !repr[n]))
> > +                     len += snprintf(buf + len, PAGE_SIZE - len,
> > +                                     "[%x] ", n);
> > +             else
> > +                     len += snprintf(buf + len, PAGE_SIZE - len,
> > +                                     "%s ", repr[n]);
> > +             if (GEM_WARN_ON(len >= PAGE_SIZE))
> > +                     break;
> > +     }
> > +     return repr_trim(buf, len);
> > +}
> > +
> > +static struct kobj_attribute caps_attr =
> > +__ATTR(capabilities, 0444, caps_show, NULL);
> > +
> > +static ssize_t
> > +all_caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> > +{
> > +     struct intel_engine_cs *engine = kobj_to_engine(kobj);
> > +     const char * const *repr;
> > +     int num_repr, n;
> > +     ssize_t len;
> > +
> > +     switch (engine->class) {
> > +     case VIDEO_DECODE_CLASS:
> > +             repr = vcs_caps;
> > +             num_repr = ARRAY_SIZE(vcs_caps);
> > +             break;
> > +
> > +     case VIDEO_ENHANCEMENT_CLASS:
> > +             repr = vecs_caps;
> > +             num_repr = ARRAY_SIZE(vecs_caps);
> > +             break;
> > +
> > +     default:
> > +             repr = NULL;
> > +             num_repr = 0;
> > +             break;
> > +     }
> > +
> > +     len = 0;
> > +     for (n = 0; n < num_repr; n++) {
> > +             if (!repr[n])
> > +                     continue;
> > +
> > +             len += snprintf(buf + len, PAGE_SIZE - len, "%s ", repr[n]);
> > +             if (GEM_WARN_ON(len >= PAGE_SIZE))
> > +                     break;
> > +     }
> > +     return repr_trim(buf, len);
> > +}
> 
> Would it be worth consolidating like:
> 
> __caps_show(engine, buf, len, mask, warn_unknown)
> {
> ...
>         switch(engine->class...
> ...
>         for_each_set_bit(...mask...) {
>                 if (n >= repr || !repr[n]) {
>                         if (warn_unknown)
>                                 GEM_WARN_ON(...);
>                         else
>                                 len += snprinf...
>                 } else {
>                         len += snprintf...
>                 }
>         }
> ...
> }
> 
> caps_show()
> {
> ...
>         len = __caps_show(engine, buf, len, engine->uabi_capabilities, true);
> ...
> }
> 
> all_caps_show()
> {
> ...
>         len = __caps_show(engine, buf, len, ~0, false);
> ...
> }

I thought it would look ugly and distract from the intent. One is to
list the bits, the other the array of strings.

But there is a certain appeal to having just one setup and loop.
-Chris
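
A rough cut of that single setup and loop, reusing repr_trim(),
kobj_to_engine() and the vcs_caps/vecs_caps tables from the patch above
(the __caps_show() name and signature below are only illustrative):

static ssize_t __caps_show(struct intel_engine_cs *engine,
			   unsigned long caps, char *buf, bool show_unknown)
{
	const char * const *repr;
	int num_repr, n;
	ssize_t len;

	switch (engine->class) {
	case VIDEO_DECODE_CLASS:
		repr = vcs_caps;
		num_repr = ARRAY_SIZE(vcs_caps);
		break;

	case VIDEO_ENHANCEMENT_CLASS:
		repr = vecs_caps;
		num_repr = ARRAY_SIZE(vecs_caps);
		break;

	default:
		repr = NULL;
		num_repr = 0;
		break;
	}

	len = 0;
	for_each_set_bit(n, &caps, BITS_PER_LONG) {
		if (n >= num_repr || !repr[n]) {
			/* warn about unknown bits only when listing real caps */
			if (GEM_WARN_ON(show_unknown))
				len += snprintf(buf + len, PAGE_SIZE - len,
						"[%x] ", n);
		} else {
			len += snprintf(buf + len, PAGE_SIZE - len,
					"%s ", repr[n]);
		}
		if (GEM_WARN_ON(len >= PAGE_SIZE))
			break;
	}
	return repr_trim(buf, len);
}

static ssize_t
caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
	struct intel_engine_cs *engine = kobj_to_engine(kobj);

	return __caps_show(engine, engine->uabi_capabilities, buf, true);
}

static ssize_t
all_caps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
	struct intel_engine_cs *engine = kobj_to_engine(kobj);

	return __caps_show(engine, ~0, buf, false);
}

caps_show() keeps the warning and the "[%x]" fallback for unknown set
bits, while all_caps_show() simply skips holes in the table, matching the
behaviour of the two originals.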
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 10/15] drm/i915/gt: Introduce barrier pulses along engines
  2019-10-14  9:07 ` [PATCH 10/15] drm/i915/gt: Introduce barrier pulses along engines Chris Wilson
@ 2019-10-14 11:03   ` Tvrtko Ursulin
  0 siblings, 0 replies; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 11:03 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> To flush idle barriers, and even inflight requests, we want to send a
> preemptive 'pulse' along an engine. We use a no-op request along the
> pinned kernel_context at high priority so that it should run or else
> kick off the stuck requests. We can use this to ensure idle barriers are
> immediately flushed, as part of a context cancellation mechanism, or as
> part of a heartbeat mechanism to detect and reset a stuck GPU.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/Makefile                 |  1 +
>   .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 56 +++++++++++++++++++
>   .../gpu/drm/i915/gt/intel_engine_heartbeat.h  | 14 +++++
>   drivers/gpu/drm/i915/gt/intel_engine_pm.c     |  2 +-
>   drivers/gpu/drm/i915/i915_priolist_types.h    |  1 +
>   5 files changed, 73 insertions(+), 1 deletion(-)
>   create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
>   create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
> 
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index cd9a10ba2516..cfab7c8585b3 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -78,6 +78,7 @@ gt-y += \
>   	gt/intel_breadcrumbs.o \
>   	gt/intel_context.o \
>   	gt/intel_engine_cs.o \
> +	gt/intel_engine_heartbeat.o \
>   	gt/intel_engine_pm.o \
>   	gt/intel_engine_pool.o \
>   	gt/intel_engine_sysfs.o \
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> new file mode 100644
> index 000000000000..2fc413f9d506
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> @@ -0,0 +1,56 @@
> +/*
> + * SPDX-License-Identifier: MIT
> + *
> + * Copyright © 2019 Intel Corporation
> + */
> +
> +#include "i915_request.h"
> +
> +#include "intel_context.h"
> +#include "intel_engine_heartbeat.h"
> +#include "intel_engine_pm.h"
> +#include "intel_engine.h"
> +#include "intel_gt.h"
> +
> +static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
> +{
> +	engine->wakeref_serial = READ_ONCE(engine->serial) + 1;
> +	i915_request_add_active_barriers(rq);
> +}
> +
> +int intel_engine_pulse(struct intel_engine_cs *engine)
> +{
> +	struct i915_sched_attr attr = { .priority = I915_PRIORITY_BARRIER };
> +	struct intel_context *ce = engine->kernel_context;
> +	struct i915_request *rq;
> +	int err = 0;
> +
> +	if (!intel_engine_has_preemption(engine))
> +		return -ENODEV;
> +
> +	if (!intel_engine_pm_get_if_awake(engine))
> +		return 0;
> +
> +	if (mutex_lock_interruptible(&ce->timeline->mutex))
> +		goto out_rpm;
> +
> +	intel_context_enter(ce);
> +	rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
> +	intel_context_exit(ce);
> +	if (IS_ERR(rq)) {
> +		err = PTR_ERR(rq);
> +		goto out_unlock;
> +	}
> +
> +	rq->flags |= I915_REQUEST_SENTINEL;
> +	idle_pulse(engine, rq);
> +
> +	__i915_request_commit(rq);
> +	__i915_request_queue(rq, &attr);
> +
> +out_unlock:
> +	mutex_unlock(&ce->timeline->mutex);
> +out_rpm:
> +	intel_engine_pm_put(engine);
> +	return err;
> +}
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
> new file mode 100644
> index 000000000000..b950451b5998
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
> @@ -0,0 +1,14 @@
> +/*
> + * SPDX-License-Identifier: MIT
> + *
> + * Copyright © 2019 Intel Corporation
> + */
> +
> +#ifndef INTEL_ENGINE_HEARTBEAT_H
> +#define INTEL_ENGINE_HEARTBEAT_H
> +
> +struct intel_engine_cs;
> +
> +int intel_engine_pulse(struct intel_engine_cs *engine);
> +
> +#endif /* INTEL_ENGINE_HEARTBEAT_H */
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> index 67eb6183648a..7d76611d9df1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> @@ -111,7 +111,7 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
>   	i915_request_add_active_barriers(rq);
>   
>   	/* Install ourselves as a preemption barrier */
> -	rq->sched.attr.priority = I915_PRIORITY_UNPREEMPTABLE;
> +	rq->sched.attr.priority = I915_PRIORITY_BARRIER;
>   	__i915_request_commit(rq);
>   
>   	/* Release our exclusive hold on the engine */
> diff --git a/drivers/gpu/drm/i915/i915_priolist_types.h b/drivers/gpu/drm/i915/i915_priolist_types.h
> index 21037a2e2038..ae8bb3cb627e 100644
> --- a/drivers/gpu/drm/i915/i915_priolist_types.h
> +++ b/drivers/gpu/drm/i915/i915_priolist_types.h
> @@ -39,6 +39,7 @@ enum {
>    * active request.
>    */
>   #define I915_PRIORITY_UNPREEMPTABLE INT_MAX
> +#define I915_PRIORITY_BARRIER INT_MAX
>   
>   #define __NO_PREEMPTION (I915_PRIORITY_WAIT)
>   
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
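
As a usage sketch, flushing the idle barriers across the whole device
would then just iterate the pulse (assuming intel_engine_pulse() from
this patch and the usual for_each_engine() iterator; the helper name here
is made up):

static int pulse_all_engines(struct drm_i915_private *i915)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;
	int err;

	for_each_engine(engine, i915, id) {
		err = intel_engine_pulse(engine);
		if (err == -ENODEV) /* pulse needs preemption; skip engines without it */
			continue;
		if (err)
			return err;
	}

	return 0;
}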
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14  9:07 ` [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out Chris Wilson
@ 2019-10-14 12:00   ` Tvrtko Ursulin
  2019-10-14 12:06     ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 12:00 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> On schedule-out (CS completion) of a banned context, scrub the context
> image so that we do not replay the active payload. The intent is that we
> skip banned payloads on request submission so that the timeline
> advancement continues on in the background. However, if we are returning
> to a preempted request, i915_request_skip() is ineffective and instead we
> need to patch up the context image so that it continues from the start
> of the next request.
> 
> v2: Fixup cancellation so that we only scrub the payload of the active
> request and do not short-circuit the breadcrumbs (which might cause
> other contexts to execute out of order).
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_lrc.c    | 129 ++++++++----
>   drivers/gpu/drm/i915/gt/selftest_lrc.c | 273 +++++++++++++++++++++++++
>   2 files changed, 361 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index e16ede75412b..b76b35194114 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -234,6 +234,9 @@ static void execlists_init_reg_state(u32 *reg_state,
>   				     const struct intel_engine_cs *engine,
>   				     const struct intel_ring *ring,
>   				     bool close);
> +static void
> +__execlists_update_reg_state(const struct intel_context *ce,
> +			     const struct intel_engine_cs *engine);
>   
>   static void cancel_timer(struct timer_list *t)
>   {
> @@ -270,6 +273,31 @@ static void mark_eio(struct i915_request *rq)
>   	i915_request_mark_complete(rq);
>   }
>   
> +static struct i915_request *active_request(struct i915_request *rq)
> +{
> +	const struct intel_context * const ce = rq->hw_context;
> +	struct i915_request *active = NULL;
> +	struct list_head *list;
> +
> +	if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
> +		return rq;
> +
> +	rcu_read_lock();
> +	list = &rcu_dereference(rq->timeline)->requests;
> +	list_for_each_entry_from_reverse(rq, list, link) {
> +		if (i915_request_completed(rq))
> +			break;
> +
> +		if (rq->hw_context != ce)
> +			break;
> +
> +		active = rq;
> +	}
> +	rcu_read_unlock();
> +
> +	return active;
> +}
> +
>   static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
>   {
>   	return (i915_ggtt_offset(engine->status_page.vma) +
> @@ -991,6 +1019,56 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
>   		tasklet_schedule(&ve->base.execlists.tasklet);
>   }
>   
> +static void restore_default_state(struct intel_context *ce,
> +				  struct intel_engine_cs *engine)
> +{
> +	u32 *regs = ce->lrc_reg_state;
> +
> +	if (engine->pinned_default_state)
> +		memcpy(regs, /* skip restoring the vanilla PPHWSP */
> +		       engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
> +		       engine->context_size - PAGE_SIZE);
> +
> +	execlists_init_reg_state(regs, ce, engine, ce->ring, false);
> +}
> +
> +static void cancel_active(struct i915_request *rq,
> +			  struct intel_engine_cs *engine)
> +{
> +	struct intel_context * const ce = rq->hw_context;
> +
> +	/*
> +	 * The executing context has been cancelled. Fixup the context so that
> +	 * it will be marked as incomplete [-EIO] upon resubmission and not
> +	 * execute any user payloads. We preserve the breadcrumbs and
> +	 * semaphores of the incomplete requests so that inter-timeline
> +	 * dependencies (i.e other timelines) remain correctly ordered.
> +	 *
> +	 * See __i915_request_submit() for applying -EIO and removing the
> +	 * payload on resubmission.
> +	 */
> +	GEM_TRACE("%s(%s): { rq=%llx:%lld }\n",
> +		  __func__, engine->name, rq->fence.context, rq->fence.seqno);
> +	__context_pin_acquire(ce);
> +
> +	/* On resubmission of the active request, payload will be scrubbed */
> +	rq = active_request(rq);
> +	if (rq)
> +		ce->ring->head = intel_ring_wrap(ce->ring, rq->head);

Without this change, where would the head be pointing after 
schedule_out? Somewhere in the middle of the active request? And with 
this change it is rewound to the start of it?

Regards,

Tvrtko

> +	else
> +		ce->ring->head = ce->ring->tail;
> +	intel_ring_update_space(ce->ring);
> +
> +	/* Scrub the context image to prevent replaying the previous batch */
> +	restore_default_state(ce, engine);
> +	__execlists_update_reg_state(ce, engine);
> +
> +	/* We've switched away, so this should be a no-op, but intent matters */
> +	ce->lrc_desc |= CTX_DESC_FORCE_RESTORE;
> +
> +	__context_pin_release(ce);
> +}
> +
>   static inline void
>   __execlists_schedule_out(struct i915_request *rq,
>   			 struct intel_engine_cs * const engine)
> @@ -1001,6 +1079,9 @@ __execlists_schedule_out(struct i915_request *rq,
>   	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
>   	intel_gt_pm_put(engine->gt);
>   
> +	if (unlikely(i915_gem_context_is_banned(ce->gem_context)))
> +		cancel_active(rq, engine);
> +
>   	/*
>   	 * If this is part of a virtual engine, its next request may
>   	 * have been blocked waiting for access to the active context.
> @@ -1395,6 +1476,10 @@ static unsigned long active_preempt_timeout(struct intel_engine_cs *engine)
>   	if (!rq)
>   		return 0;
>   
> +	/* Force a fast reset for terminated contexts (ignoring sysfs!) */
> +	if (unlikely(i915_gem_context_is_banned(rq->gem_context)))
> +		return 1;
> +
>   	return READ_ONCE(engine->props.preempt_timeout);
>   }
>   
> @@ -2810,29 +2895,6 @@ static void reset_csb_pointers(struct intel_engine_cs *engine)
>   			       &execlists->csb_status[reset_value]);
>   }
>   
> -static struct i915_request *active_request(struct i915_request *rq)
> -{
> -	const struct intel_context * const ce = rq->hw_context;
> -	struct i915_request *active = NULL;
> -	struct list_head *list;
> -
> -	if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
> -		return rq;
> -
> -	list = &i915_request_active_timeline(rq)->requests;
> -	list_for_each_entry_from_reverse(rq, list, link) {
> -		if (i915_request_completed(rq))
> -			break;
> -
> -		if (rq->hw_context != ce)
> -			break;
> -
> -		active = rq;
> -	}
> -
> -	return active;
> -}
> -
>   static void __execlists_reset_reg_state(const struct intel_context *ce,
>   					const struct intel_engine_cs *engine)
>   {
> @@ -2849,7 +2911,6 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
>   	struct intel_engine_execlists * const execlists = &engine->execlists;
>   	struct intel_context *ce;
>   	struct i915_request *rq;
> -	u32 *regs;
>   
>   	mb(); /* paranoia: read the CSB pointers from after the reset */
>   	clflush(execlists->csb_write);
> @@ -2928,13 +2989,7 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
>   	 * to recreate its own state.
>   	 */
>   	GEM_BUG_ON(!intel_context_is_pinned(ce));
> -	regs = ce->lrc_reg_state;
> -	if (engine->pinned_default_state) {
> -		memcpy(regs, /* skip restoring the vanilla PPHWSP */
> -		       engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
> -		       engine->context_size - PAGE_SIZE);
> -	}
> -	execlists_init_reg_state(regs, ce, engine, ce->ring, false);
> +	restore_default_state(ce, engine);
>   
>   out_replay:
>   	GEM_TRACE("%s replay {head:%04x, tail:%04x\n",
> @@ -4571,16 +4626,8 @@ void intel_lr_context_reset(struct intel_engine_cs *engine,
>   	 * future request will be after userspace has had the opportunity
>   	 * to recreate its own state.
>   	 */
> -	if (scrub) {
> -		u32 *regs = ce->lrc_reg_state;
> -
> -		if (engine->pinned_default_state) {
> -			memcpy(regs, /* skip restoring the vanilla PPHWSP */
> -			       engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
> -			       engine->context_size - PAGE_SIZE);
> -		}
> -		execlists_init_reg_state(regs, ce, engine, ce->ring, false);
> -	}
> +	if (scrub)
> +		restore_default_state(ce, engine);
>   
>   	/* Rerun the request; its payload has been neutered (if guilty). */
>   	ce->ring->head = head;
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index e7a86f60cf82..dd3e7b47b76d 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -7,6 +7,7 @@
>   #include <linux/prime_numbers.h>
>   
>   #include "gem/i915_gem_pm.h"
> +#include "gt/intel_engine_heartbeat.h"
>   #include "gt/intel_reset.h"
>   
>   #include "i915_selftest.h"
> @@ -1016,6 +1017,277 @@ static int live_nopreempt(void *arg)
>   	goto err_client_b;
>   }
>   
> +struct live_preempt_cancel {
> +	struct intel_engine_cs *engine;
> +	struct preempt_client a, b;
> +};
> +
> +static int __cancel_active0(struct live_preempt_cancel *arg)
> +{
> +	struct i915_request *rq;
> +	struct igt_live_test t;
> +	int err;
> +
> +	/* Preempt cancel of ELSP0 */
> +	GEM_TRACE("%s(%s)\n", __func__, arg->engine->name);
> +
> +	if (igt_live_test_begin(&t, arg->engine->i915,
> +				__func__, arg->engine->name))
> +		return -EIO;
> +
> +	clear_bit(CONTEXT_BANNED, &arg->a.ctx->flags);
> +	rq = spinner_create_request(&arg->a.spin,
> +				    arg->a.ctx, arg->engine,
> +				    MI_ARB_CHECK);
> +	if (IS_ERR(rq))
> +		return PTR_ERR(rq);
> +
> +	i915_request_get(rq);
> +	i915_request_add(rq);
> +	if (!igt_wait_for_spinner(&arg->a.spin, rq)) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	i915_gem_context_set_banned(arg->a.ctx);
> +	err = intel_engine_pulse(arg->engine);
> +	if (err)
> +		goto out;
> +
> +	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	if (rq->fence.error != -EIO) {
> +		pr_err("Cancelled inflight0 request did not report -EIO\n");
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +out:
> +	i915_request_put(rq);
> +	if (igt_live_test_end(&t))
> +		err = -EIO;
> +	return err;
> +}
> +
> +static int __cancel_active1(struct live_preempt_cancel *arg)
> +{
> +	struct i915_request *rq[2] = {};
> +	struct igt_live_test t;
> +	int err;
> +
> +	/* Preempt cancel of ELSP1 */
> +	GEM_TRACE("%s(%s)\n", __func__, arg->engine->name);
> +
> +	if (igt_live_test_begin(&t, arg->engine->i915,
> +				__func__, arg->engine->name))
> +		return -EIO;
> +
> +	clear_bit(CONTEXT_BANNED, &arg->a.ctx->flags);
> +	rq[0] = spinner_create_request(&arg->a.spin,
> +				       arg->a.ctx, arg->engine,
> +				       MI_NOOP); /* no preemption */
> +	if (IS_ERR(rq[0]))
> +		return PTR_ERR(rq[0]);
> +
> +	i915_request_get(rq[0]);
> +	i915_request_add(rq[0]);
> +	if (!igt_wait_for_spinner(&arg->a.spin, rq[0])) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	clear_bit(CONTEXT_BANNED, &arg->b.ctx->flags);
> +	rq[1] = spinner_create_request(&arg->b.spin,
> +				       arg->b.ctx, arg->engine,
> +				       MI_ARB_CHECK);
> +	if (IS_ERR(rq[1])) {
> +		err = PTR_ERR(rq[1]);
> +		goto out;
> +	}
> +
> +	i915_request_get(rq[1]);
> +	err = i915_request_await_dma_fence(rq[1], &rq[0]->fence);
> +	i915_request_add(rq[1]);
> +	if (err)
> +		goto out;
> +
> +	i915_gem_context_set_banned(arg->b.ctx);
> +	err = intel_engine_pulse(arg->engine);
> +	if (err)
> +		goto out;
> +
> +	igt_spinner_end(&arg->a.spin);
> +	if (i915_request_wait(rq[1], 0, HZ / 5) < 0) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	if (rq[0]->fence.error != 0) {
> +		pr_err("Normal inflight0 request did not complete\n");
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (rq[1]->fence.error != -EIO) {
> +		pr_err("Cancelled inflight1 request did not report -EIO\n");
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +out:
> +	i915_request_put(rq[1]);
> +	i915_request_put(rq[0]);
> +	if (igt_live_test_end(&t))
> +		err = -EIO;
> +	return err;
> +}
> +
> +static int __cancel_queued(struct live_preempt_cancel *arg)
> +{
> +	struct i915_request *rq[3] = {};
> +	struct igt_live_test t;
> +	int err;
> +
> +	/* Full ELSP and one in the wings */
> +	GEM_TRACE("%s(%s)\n", __func__, arg->engine->name);
> +
> +	if (igt_live_test_begin(&t, arg->engine->i915,
> +				__func__, arg->engine->name))
> +		return -EIO;
> +
> +	clear_bit(CONTEXT_BANNED, &arg->a.ctx->flags);
> +	rq[0] = spinner_create_request(&arg->a.spin,
> +				       arg->a.ctx, arg->engine,
> +				       MI_ARB_CHECK);
> +	if (IS_ERR(rq[0]))
> +		return PTR_ERR(rq[0]);
> +
> +	i915_request_get(rq[0]);
> +	i915_request_add(rq[0]);
> +	if (!igt_wait_for_spinner(&arg->a.spin, rq[0])) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	clear_bit(CONTEXT_BANNED, &arg->b.ctx->flags);
> +	rq[1] = igt_request_alloc(arg->b.ctx, arg->engine);
> +	if (IS_ERR(rq[1])) {
> +		err = PTR_ERR(rq[1]);
> +		goto out;
> +	}
> +
> +	i915_request_get(rq[1]);
> +	err = i915_request_await_dma_fence(rq[1], &rq[0]->fence);
> +	i915_request_add(rq[1]);
> +	if (err)
> +		goto out;
> +
> +	rq[2] = spinner_create_request(&arg->b.spin,
> +				       arg->a.ctx, arg->engine,
> +				       MI_ARB_CHECK);
> +	if (IS_ERR(rq[2])) {
> +		err = PTR_ERR(rq[2]);
> +		goto out;
> +	}
> +
> +	i915_request_get(rq[2]);
> +	err = i915_request_await_dma_fence(rq[2], &rq[1]->fence);
> +	i915_request_add(rq[2]);
> +	if (err)
> +		goto out;
> +
> +	i915_gem_context_set_banned(arg->a.ctx);
> +	err = intel_engine_pulse(arg->engine);
> +	if (err)
> +		goto out;
> +
> +	if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
> +		err = -EIO;
> +		goto out;
> +	}
> +
> +	if (rq[0]->fence.error != -EIO) {
> +		pr_err("Cancelled inflight0 request did not report -EIO\n");
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (rq[1]->fence.error != 0) {
> +		pr_err("Normal inflight1 request did not complete\n");
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (rq[2]->fence.error != -EIO) {
> +		pr_err("Cancelled queued request did not report -EIO\n");
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +out:
> +	i915_request_put(rq[2]);
> +	i915_request_put(rq[1]);
> +	i915_request_put(rq[0]);
> +	if (igt_live_test_end(&t))
> +		err = -EIO;
> +	return err;
> +}
> +
> +static int live_preempt_cancel(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct live_preempt_cancel data;
> +	enum intel_engine_id id;
> +	int err = -ENOMEM;
> +
> +	/*
> +	 * To cancel an inflight context, we need to first remove it from the
> +	 * GPU. That sounds like preemption! Plus a little bit of bookkeeping.
> +	 */
> +
> +	if (!HAS_LOGICAL_RING_PREEMPTION(i915))
> +		return 0;
> +
> +	if (preempt_client_init(i915, &data.a))
> +		return -ENOMEM;
> +	if (preempt_client_init(i915, &data.b))
> +		goto err_client_a;
> +
> +	for_each_engine(data.engine, i915, id) {
> +		if (!intel_engine_has_preemption(data.engine))
> +			continue;
> +
> +		err = __cancel_active0(&data);
> +		if (err)
> +			goto err_wedged;
> +
> +		err = __cancel_active1(&data);
> +		if (err)
> +			goto err_wedged;
> +
> +		err = __cancel_queued(&data);
> +		if (err)
> +			goto err_wedged;
> +	}
> +
> +	err = 0;
> +err_client_b:
> +	preempt_client_fini(&data.b);
> +err_client_a:
> +	preempt_client_fini(&data.a);
> +	return err;
> +
> +err_wedged:
> +	GEM_TRACE_DUMP();
> +	igt_spinner_end(&data.b.spin);
> +	igt_spinner_end(&data.a.spin);
> +	intel_gt_set_wedged(&i915->gt);
> +	goto err_client_b;
> +}
> +
>   static int live_suppress_self_preempt(void *arg)
>   {
>   	struct drm_i915_private *i915 = arg;
> @@ -2549,6 +2821,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
>   		SUBTEST(live_preempt),
>   		SUBTEST(live_late_preempt),
>   		SUBTEST(live_nopreempt),
> +		SUBTEST(live_preempt_cancel),
>   		SUBTEST(live_suppress_self_preempt),
>   		SUBTEST(live_suppress_wait_preempt),
>   		SUBTEST(live_chain_preempt),
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 12:00   ` Tvrtko Ursulin
@ 2019-10-14 12:06     ` Chris Wilson
  2019-10-14 12:25       ` Tvrtko Ursulin
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 12:06 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
> 
> On 14/10/2019 10:07, Chris Wilson wrote:
> > On schedule-out (CS completion) of a banned context, scrub the context
> > image so that we do not replay the active payload. The intent is that we
> > skip banned payloads on request submission so that the timeline
> > advancement continues on in the background. However, if we are returning
> > to a preempted request, i915_request_skip() is ineffective and instead we
> > need to patch up the context image so that it continues from the start
> > of the next request.
> > 
> > v2: Fixup cancellation so that we only scrub the payload of the active
> > request and do not short-circuit the breadcrumbs (which might cause
> > other contexts to execute out of order).
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gt/intel_lrc.c    | 129 ++++++++----
> >   drivers/gpu/drm/i915/gt/selftest_lrc.c | 273 +++++++++++++++++++++++++
> >   2 files changed, 361 insertions(+), 41 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index e16ede75412b..b76b35194114 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -234,6 +234,9 @@ static void execlists_init_reg_state(u32 *reg_state,
> >                                    const struct intel_engine_cs *engine,
> >                                    const struct intel_ring *ring,
> >                                    bool close);
> > +static void
> > +__execlists_update_reg_state(const struct intel_context *ce,
> > +                          const struct intel_engine_cs *engine);
> >   
> >   static void cancel_timer(struct timer_list *t)
> >   {
> > @@ -270,6 +273,31 @@ static void mark_eio(struct i915_request *rq)
> >       i915_request_mark_complete(rq);
> >   }
> >   
> > +static struct i915_request *active_request(struct i915_request *rq)
> > +{
> > +     const struct intel_context * const ce = rq->hw_context;
> > +     struct i915_request *active = NULL;
> > +     struct list_head *list;
> > +
> > +     if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
> > +             return rq;
> > +
> > +     rcu_read_lock();
> > +     list = &rcu_dereference(rq->timeline)->requests;
> > +     list_for_each_entry_from_reverse(rq, list, link) {
> > +             if (i915_request_completed(rq))
> > +                     break;
> > +
> > +             if (rq->hw_context != ce)
> > +                     break;
> > +
> > +             active = rq;
> > +     }
> > +     rcu_read_unlock();
> > +
> > +     return active;
> > +}
> > +
> >   static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
> >   {
> >       return (i915_ggtt_offset(engine->status_page.vma) +
> > @@ -991,6 +1019,56 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> >               tasklet_schedule(&ve->base.execlists.tasklet);
> >   }
> >   
> > +static void restore_default_state(struct intel_context *ce,
> > +                               struct intel_engine_cs *engine)
> > +{
> > +     u32 *regs = ce->lrc_reg_state;
> > +
> > +     if (engine->pinned_default_state)
> > +             memcpy(regs, /* skip restoring the vanilla PPHWSP */
> > +                    engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
> > +                    engine->context_size - PAGE_SIZE);
> > +
> > +     execlists_init_reg_state(regs, ce, engine, ce->ring, false);
> > +}
> > +
> > +static void cancel_active(struct i915_request *rq,
> > +                       struct intel_engine_cs *engine)
> > +{
> > +     struct intel_context * const ce = rq->hw_context;
> > +
> > +     /*
> > +      * The executing context has been cancelled. Fixup the context so that
> > +      * it will be marked as incomplete [-EIO] upon resubmission and not
> > +      * execute any user payloads. We preserve the breadcrumbs and
> > +      * semaphores of the incomplete requests so that inter-timeline
> > +      * dependencies (i.e other timelines) remain correctly ordered.
> > +      *
> > +      * See __i915_request_submit() for applying -EIO and removing the
> > +      * payload on resubmission.
> > +      */
> > +     GEM_TRACE("%s(%s): { rq=%llx:%lld }\n",
> > +               __func__, engine->name, rq->fence.context, rq->fence.seqno);
> > +     __context_pin_acquire(ce);
> > +
> > +     /* On resubmission of the active request, payload will be scrubbed */
> > +     rq = active_request(rq);
> > +     if (rq)
> > +             ce->ring->head = intel_ring_wrap(ce->ring, rq->head);
> 
> Without this change, where would the head be pointing after 
> schedule_out? Somewhere in the middle of the active request? And with 
> this change it is rewound to the start of it?

RING_HEAD could be anywhere between rq->head and rq->tail. We could just
leave it be, but it would be better if we reset it if it was past
rq->postfix (as we rewrite those instructions). It's just simpler if we
reset it back to the start, and scrub the request.
-Chris
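
For reference, the rewind itself is cheap; intel_ring_wrap() is assumed
here to be the usual power-of-two mask on the ring size:

static inline u32 intel_ring_wrap(const struct intel_ring *ring, u32 pos)
{
	/* ring->size is a power of two, so wrapping is a simple mask */
	return pos & (ring->size - 1);
}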
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close
  2019-10-14  9:07 ` [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close Chris Wilson
@ 2019-10-14 12:11   ` Tvrtko Ursulin
  2019-10-14 12:21     ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 12:11 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> Normally, we rely on our hangcheck to prevent persistent batches from
> hogging the GPU. However, if the user disables hangcheck, this mechanism
> breaks down. Despite our insistence that this is unsafe, the users are
> equally insistent that they want to use endless batches and will disable
> the hangcheck mechanism. We are looking at perhaps replacing hangcheck
> with a softer mechanism that sends a pulse down the engine to check if
> it is well. We can use the same preemptive pulse to flush an active
> persistent context off the GPU upon context close, preventing resources
> being lost and unkillable requests remaining on the GPU after process
> termination. To avoid changing the ABI and accidentally breaking
> existing userspace, we make the persistence of a context explicit and
> enable it by default (matching current ABI). Userspace can opt out of
> persistent mode (forcing requests to be cancelled when the context is
> closed by process termination or explicitly) by a context parameter. To
> facilitate existing use-cases of disabling hangcheck, if the modparam is
> disabled (i915.enable_hangcheck=0), we disable persistence mode by
> default.  (Note, one of the outcomes for supporting endless mode will be
> the removal of hangchecking, at which point opting into persistent mode
> will be mandatory, or perhaps the default, controlled by cgroups.)
> 
> v2: Check for hangchecking at context termination, so that we are not
> left with undying contexts from a crafty user.
> v3: Force context termination even if forced-preemption is disabled.
> 
> Testcase: igt/gem_ctx_persistence
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Michał Winiarski <michal.winiarski@intel.com>
> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_context.c   | 182 ++++++++++++++++++
>   drivers/gpu/drm/i915/gem/i915_gem_context.h   |  15 ++
>   .../gpu/drm/i915/gem/i915_gem_context_types.h |   1 +
>   .../gpu/drm/i915/gem/selftests/mock_context.c |   2 +
>   include/uapi/drm/i915_drm.h                   |  15 ++
>   5 files changed, 215 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 5d8221c7ba83..70b72456e2c4 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -70,6 +70,7 @@
>   #include <drm/i915_drm.h>
>   
>   #include "gt/intel_lrc_reg.h"
> +#include "gt/intel_engine_heartbeat.h"
>   #include "gt/intel_engine_user.h"
>   
>   #include "i915_gem_context.h"
> @@ -269,6 +270,128 @@ void i915_gem_context_release(struct kref *ref)
>   		schedule_work(&gc->free_work);
>   }
>   
> +static inline struct i915_gem_engines *
> +__context_engines_static(const struct i915_gem_context *ctx)
> +{
> +	return rcu_dereference_protected(ctx->engines, true);
> +}
> +
> +static bool __reset_engine(struct intel_engine_cs *engine)
> +{
> +	struct intel_gt *gt = engine->gt;
> +	bool success = false;
> +
> +	if (!intel_has_reset_engine(gt))
> +		return false;
> +
> +	if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> +			      &gt->reset.flags)) {
> +		success = intel_engine_reset(engine, NULL) == 0;
> +		clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
> +				      &gt->reset.flags);
> +	}
> +
> +	return success;
> +}
> +
> +static void __reset_context(struct i915_gem_context *ctx,
> +			    struct intel_engine_cs *engine)
> +{
> +	intel_gt_handle_error(engine->gt, engine->mask, 0,
> +			      "context closure in %s", ctx->name);
> +}
> +
> +static bool __cancel_engine(struct intel_engine_cs *engine)
> +{
> +	/*
> +	 * Send a "high priority pulse" down the engine to cause the
> +	 * current request to be momentarily preempted. (If it fails to
> +	 * be preempted, it will be reset). As we have marked our context
> +	 * as banned, any incomplete request, including any running, will
> +	 * be skipped following the preemption.
> +	 */
> +	if (CONFIG_DRM_I915_PREEMPT_TIMEOUT && !intel_engine_pulse(engine))
> +		return true;

Maybe I lost the train of thought here... But why not try the pulse even 
if forced preemption is not compiled in? There is a chance it may preempt 
normally, no?

Hm, or from the other angle, why bother with preemption and not just 
reset? What is the value in letting the closed context complete when, if 
it is preemptible, we will cancel all outstanding work anyway?

> +
> +	/* If we are unable to send a pulse, try resseting this engine. */

Typo in resetting.

Regards,

Tvrtko

> +	return __reset_engine(engine);
> +}
> +
> +static struct intel_engine_cs *
> +active_engine(struct dma_fence *fence, struct intel_context *ce)
> +{
> +	struct i915_request *rq = to_request(fence);
> +	struct intel_engine_cs *engine, *locked;
> +
> +	/*
> +	 * Serialise with __i915_request_submit() so that it sees
> +	 * is-banned?, or we know the request is already inflight.
> +	 */
> +	locked = READ_ONCE(rq->engine);
> +	spin_lock_irq(&locked->active.lock);
> +	while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
> +		spin_unlock(&locked->active.lock);
> +		spin_lock(&engine->active.lock);
> +		locked = engine;
> +	}
> +
> +	engine = NULL;
> +	if (i915_request_is_active(rq) && !rq->fence.error)
> +		engine = rq->engine;
> +
> +	spin_unlock_irq(&locked->active.lock);
> +
> +	return engine;
> +}
> +
> +static void kill_context(struct i915_gem_context *ctx)
> +{
> +	struct i915_gem_engines_iter it;
> +	struct intel_context *ce;
> +
> +	/*
> +	 * If we are already banned, it was due to a guilty request causing
> +	 * a reset and the entire context being evicted from the GPU.
> +	 */
> +	if (i915_gem_context_is_banned(ctx))
> +		return;
> +
> +	i915_gem_context_set_banned(ctx);
> +
> +	/*
> +	 * Map the user's engine back to the actual engines; one virtual
> +	 * engine will be mapped to multiple engines, and using ctx->engine[]
> +	 * the same engine may have multiple instances in the user's map.
> +	 * However, we only care about pending requests, so only include
> +	 * engines on which there are incomplete requests.
> +	 */
> +	for_each_gem_engine(ce, __context_engines_static(ctx), it) {
> +		struct intel_engine_cs *engine;
> +		struct dma_fence *fence;
> +
> +		if (!ce->timeline)
> +			continue;
> +
> +		fence = i915_active_fence_get(&ce->timeline->last_request);
> +		if (!fence)
> +			continue;
> +
> +		/* Check with the backend if the request is still inflight */
> +		engine = active_engine(fence, ce);
> +
> +		/* First attempt to gracefully cancel the context */
> +		if (engine && !__cancel_engine(engine))
> +			/*
> +			 * If we are unable to send a preemptive pulse to bump
> +			 * the context from the GPU, we have to resort to a full
> +			 * reset. We hope the collateral damage is worth it.
> +			 */
> +			__reset_context(ctx, engine);
> +
> +		dma_fence_put(fence);
> +	}
> +}
> +
>   static void context_close(struct i915_gem_context *ctx)
>   {
>   	struct i915_address_space *vm;
> @@ -291,9 +414,47 @@ static void context_close(struct i915_gem_context *ctx)
>   	lut_close(ctx);
>   
>   	mutex_unlock(&ctx->mutex);
> +
> +	/*
> +	 * If the user has disabled hangchecking, we can not be sure that
> +	 * the batches will ever complete after the context is closed,
> +	 * keeping the context and all resources pinned forever. So in this
> +	 * case we opt to forcibly kill off all remaining requests on
> +	 * context close.
> +	 */
> +	if (!i915_gem_context_is_persistent(ctx) ||
> +	    !i915_modparams.enable_hangcheck)
> +		kill_context(ctx);
> +
>   	i915_gem_context_put(ctx);
>   }
>   
> +static int __context_set_persistence(struct i915_gem_context *ctx, bool state)
> +{
> +	if (i915_gem_context_is_persistent(ctx) == state)
> +		return 0;
> +
> +	if (state) {
> +		/*
> +		 * Only contexts that are short-lived [that will expire or be
> +		 * reset] are allowed to survive past termination. We require
> +		 * hangcheck to ensure that the persistent requests are healthy.
> +		 */
> +		if (!i915_modparams.enable_hangcheck)
> +			return -EINVAL;
> +
> +		i915_gem_context_set_persistence(ctx);
> +	} else {
> +		/* To cancel a context we use "preempt-to-idle" */
> +		if (!(ctx->i915->caps.scheduler & I915_SCHEDULER_CAP_PREEMPTION))
> +			return -ENODEV;
> +
> +		i915_gem_context_clear_persistence(ctx);
> +	}
> +
> +	return 0;
> +}
> +
>   static struct i915_gem_context *
>   __create_context(struct drm_i915_private *i915)
>   {
> @@ -328,6 +489,7 @@ __create_context(struct drm_i915_private *i915)
>   
>   	i915_gem_context_set_bannable(ctx);
>   	i915_gem_context_set_recoverable(ctx);
> +	__context_set_persistence(ctx, true /* cgroup hook? */);
>   
>   	for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
>   		ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
> @@ -484,6 +646,7 @@ i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio)
>   		return ctx;
>   
>   	i915_gem_context_clear_bannable(ctx);
> +	i915_gem_context_set_persistence(ctx);
>   	ctx->sched.priority = I915_USER_PRIORITY(prio);
>   
>   	GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
> @@ -1594,6 +1757,16 @@ get_engines(struct i915_gem_context *ctx,
>   	return err;
>   }
>   
> +static int
> +set_persistence(struct i915_gem_context *ctx,
> +		const struct drm_i915_gem_context_param *args)
> +{
> +	if (args->size)
> +		return -EINVAL;
> +
> +	return __context_set_persistence(ctx, args->value);
> +}
> +
>   static int ctx_setparam(struct drm_i915_file_private *fpriv,
>   			struct i915_gem_context *ctx,
>   			struct drm_i915_gem_context_param *args)
> @@ -1671,6 +1844,10 @@ static int ctx_setparam(struct drm_i915_file_private *fpriv,
>   		ret = set_engines(ctx, args);
>   		break;
>   
> +	case I915_CONTEXT_PARAM_PERSISTENCE:
> +		ret = set_persistence(ctx, args);
> +		break;
> +
>   	case I915_CONTEXT_PARAM_BAN_PERIOD:
>   	default:
>   		ret = -EINVAL;
> @@ -2123,6 +2300,11 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
>   		ret = get_engines(ctx, args);
>   		break;
>   
> +	case I915_CONTEXT_PARAM_PERSISTENCE:
> +		args->size = 0;
> +		args->value = i915_gem_context_is_persistent(ctx);
> +		break;
> +
>   	case I915_CONTEXT_PARAM_BAN_PERIOD:
>   	default:
>   		ret = -EINVAL;
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h b/drivers/gpu/drm/i915/gem/i915_gem_context.h
> index 9234586830d1..2eec035382a2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
> @@ -76,6 +76,21 @@ static inline void i915_gem_context_clear_recoverable(struct i915_gem_context *c
>   	clear_bit(UCONTEXT_RECOVERABLE, &ctx->user_flags);
>   }
>   
> +static inline bool i915_gem_context_is_persistent(const struct i915_gem_context *ctx)
> +{
> +	return test_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
> +}
> +
> +static inline void i915_gem_context_set_persistence(struct i915_gem_context *ctx)
> +{
> +	set_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
> +}
> +
> +static inline void i915_gem_context_clear_persistence(struct i915_gem_context *ctx)
> +{
> +	clear_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
> +}
> +
>   static inline bool i915_gem_context_is_banned(const struct i915_gem_context *ctx)
>   {
>   	return test_bit(CONTEXT_BANNED, &ctx->flags);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index ab8e1367dfc8..a3ecd19f2303 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -137,6 +137,7 @@ struct i915_gem_context {
>   #define UCONTEXT_NO_ERROR_CAPTURE	1
>   #define UCONTEXT_BANNABLE		2
>   #define UCONTEXT_RECOVERABLE		3
> +#define UCONTEXT_PERSISTENCE		4
>   
>   	/**
>   	 * @flags: small set of booleans
> diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_context.c b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> index 74ddd682c9cd..29b8984f0e47 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> @@ -22,6 +22,8 @@ mock_context(struct drm_i915_private *i915,
>   	INIT_LIST_HEAD(&ctx->link);
>   	ctx->i915 = i915;
>   
> +	i915_gem_context_set_persistence(ctx);
> +
>   	mutex_init(&ctx->engines_mutex);
>   	e = default_engines(ctx);
>   	if (IS_ERR(e))
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index 30c542144016..eb9e704d717a 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -1565,6 +1565,21 @@ struct drm_i915_gem_context_param {
>    *   i915_context_engines_bond (I915_CONTEXT_ENGINES_EXT_BOND)
>    */
>   #define I915_CONTEXT_PARAM_ENGINES	0xa
> +
> +/*
> + * I915_CONTEXT_PARAM_PERSISTENCE:
> + *
> + * Allow the context and active rendering to survive the process until
> + * completion. Persistence allows fire-and-forget clients to queue up a
> + * bunch of work, hand the output over to a display server and then quit.
> + * If the context is not marked as persistent, upon closing (either via
> + * an explicit DRM_I915_GEM_CONTEXT_DESTROY or implicitly from file closure
> + * or process termination), the context and any outstanding requests will be
> + * cancelled (and exported fences for cancelled requests marked as -EIO).
> + *
> + * By default, new contexts allow persistence.
> + */
> +#define I915_CONTEXT_PARAM_PERSISTENCE	0xb
>   /* Must be kept compact -- no holes and well documented */
>   
>   	__u64 value;
> 
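
For completeness, opting out from userspace would presumably be a single
SETPARAM call once the uapi above lands (a sketch only; per the patch it
returns -ENODEV without preemption support, and turning persistence back
on returns -EINVAL while hangcheck is disabled):

#include <xf86drm.h>
#include <drm/i915_drm.h>

/* mark ctx_id as non-persistent: its requests are cancelled on close */
static int context_set_nonpersistent(int fd, __u32 ctx_id)
{
	struct drm_i915_gem_context_param arg = {
		.ctx_id = ctx_id,
		.param = I915_CONTEXT_PARAM_PERSISTENCE,
		.value = 0,
	};

	return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
}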
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 13/15] drm/i915: Replace hangcheck by heartbeats
  2019-10-14  9:07 ` [PATCH 13/15] drm/i915: Replace hangcheck by heartbeats Chris Wilson
@ 2019-10-14 12:13   ` Tvrtko Ursulin
  0 siblings, 0 replies; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 12:13 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 10:07, Chris Wilson wrote:
> Replace sampling the engine state every so often with a periodic
> heartbeat request to measure the health of an engine. This is coupled
> with the forced-preemption to allow long running requests to survive so
> long as they do not block other users.
> 
> The heartbeat interval can be adjusted per-engine using,
> 
> 	/sys/class/drm/card?/engine/*/heartbeat_interval_ms
> 
> v2: Couple in sysfs controls
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
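
As a usage note, the new attribute behaves like any other sysfs file;
something along these lines (illustrative only, the card/engine names
depend on the system):

#include <fcntl.h>
#include <unistd.h>

/* stretch the heartbeat on rcs0 to 5s; writing 0 disables hang detection */
static void set_heartbeat_interval_ms(void)
{
	int fd = open("/sys/class/drm/card0/engine/rcs0/heartbeat_interval_ms",
		      O_WRONLY);

	if (fd >= 0) {
		write(fd, "5000", 4);
		close(fd);
	}
}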

> ---
>   drivers/gpu/drm/i915/Kconfig.profile          |  14 +
>   drivers/gpu/drm/i915/Makefile                 |   1 -
>   drivers/gpu/drm/i915/display/intel_display.c  |   2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    |   1 -
>   drivers/gpu/drm/i915/gem/i915_gem_pm.c        |   2 -
>   drivers/gpu/drm/i915/gt/intel_engine.h        |  32 --
>   drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  11 +-
>   .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 128 +++++++
>   .../gpu/drm/i915/gt/intel_engine_heartbeat.h  |   5 +
>   drivers/gpu/drm/i915/gt/intel_engine_pm.c     |   5 +-
>   drivers/gpu/drm/i915/gt/intel_engine_sysfs.c  |  32 ++
>   drivers/gpu/drm/i915/gt/intel_engine_types.h  |  17 +-
>   drivers/gpu/drm/i915/gt/intel_gt.c            |   1 -
>   drivers/gpu/drm/i915/gt/intel_gt.h            |   4 -
>   drivers/gpu/drm/i915/gt/intel_gt_pm.c         |   1 -
>   drivers/gpu/drm/i915/gt/intel_gt_types.h      |   9 -
>   drivers/gpu/drm/i915/gt/intel_hangcheck.c     | 361 ------------------
>   drivers/gpu/drm/i915/gt/intel_reset.c         |   3 +-
>   drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   4 -
>   drivers/gpu/drm/i915/i915_debugfs.c           |  87 -----
>   drivers/gpu/drm/i915/i915_drv.c               |   3 -
>   drivers/gpu/drm/i915/i915_drv.h               |   1 -
>   drivers/gpu/drm/i915/i915_gpu_error.c         |  33 +-
>   drivers/gpu/drm/i915/i915_gpu_error.h         |   2 -
>   drivers/gpu/drm/i915/i915_priolist_types.h    |   6 +
>   25 files changed, 210 insertions(+), 555 deletions(-)
>   delete mode 100644 drivers/gpu/drm/i915/gt/intel_hangcheck.c
> 
> diff --git a/drivers/gpu/drm/i915/Kconfig.profile b/drivers/gpu/drm/i915/Kconfig.profile
> index 8fceea85937b..4bed293e1fb4 100644
> --- a/drivers/gpu/drm/i915/Kconfig.profile
> +++ b/drivers/gpu/drm/i915/Kconfig.profile
> @@ -40,3 +40,17 @@ config DRM_I915_PREEMPT_TIMEOUT
>   	  /sys/class/drm/card?/engine/*/preempt_timeout_ms
>   
>   	  May be 0 to disable the timeout.
> +
> +config DRM_I915_HEARTBEAT_INTERVAL
> +	int "Interval between heartbeat pulses (ms)"
> +	default 2500 # milliseconds
> +	help
> +	  The driver sends a periodic heartbeat down all active engines to
> +	  check the health of the GPU and undertake regular house-keeping of
> +	  internal driver state.
> +
> +	  This is adjustable via
> +	  /sys/class/drm/card?/engine/*/heartbeat_interval_ms
> +
> +	  May be 0 to disable heartbeats and therefore disable automatic GPU
> +	  hang detection.
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index cfab7c8585b3..59d356cc406c 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -88,7 +88,6 @@ gt-y += \
>   	gt/intel_gt_pm.o \
>   	gt/intel_gt_pm_irq.o \
>   	gt/intel_gt_requests.o \
> -	gt/intel_hangcheck.o \
>   	gt/intel_lrc.o \
>   	gt/intel_rc6.o \
>   	gt/intel_renderstate.o \
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 3cf39fc153b3..a2ef4bcf9242 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -14401,7 +14401,7 @@ static void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
>   static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj)
>   {
>   	struct i915_sched_attr attr = {
> -		.priority = I915_PRIORITY_DISPLAY,
> +		.priority = I915_USER_PRIORITY(I915_PRIORITY_DISPLAY),
>   	};
>   
>   	i915_gem_object_wait_priority(obj, 0, &attr);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index b0245585b4d5..817ea0725e43 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -461,6 +461,5 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
>   int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>   				  unsigned int flags,
>   				  const struct i915_sched_attr *attr);
> -#define I915_PRIORITY_DISPLAY I915_USER_PRIORITY(I915_PRIORITY_MAX)
>   
>   #endif
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
> index 7987b54fb1f5..0e97520cb1bb 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
> @@ -100,8 +100,6 @@ void i915_gem_suspend(struct drm_i915_private *i915)
>   	intel_gt_suspend(&i915->gt);
>   	intel_uc_suspend(&i915->gt.uc);
>   
> -	cancel_delayed_work_sync(&i915->gt.hangcheck.work);
> -
>   	i915_gem_drain_freed_objects(i915);
>   }
>   
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> index 93ea367fe624..8ad57eace351 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> @@ -89,38 +89,6 @@ struct drm_printer;
>   /* seqno size is actually only a uint32, but since we plan to use MI_FLUSH_DW to
>    * do the writes, and that must have qw aligned offsets, simply pretend it's 8b.
>    */
> -enum intel_engine_hangcheck_action {
> -	ENGINE_IDLE = 0,
> -	ENGINE_WAIT,
> -	ENGINE_ACTIVE_SEQNO,
> -	ENGINE_ACTIVE_HEAD,
> -	ENGINE_ACTIVE_SUBUNITS,
> -	ENGINE_WAIT_KICK,
> -	ENGINE_DEAD,
> -};
> -
> -static inline const char *
> -hangcheck_action_to_str(const enum intel_engine_hangcheck_action a)
> -{
> -	switch (a) {
> -	case ENGINE_IDLE:
> -		return "idle";
> -	case ENGINE_WAIT:
> -		return "wait";
> -	case ENGINE_ACTIVE_SEQNO:
> -		return "active seqno";
> -	case ENGINE_ACTIVE_HEAD:
> -		return "active head";
> -	case ENGINE_ACTIVE_SUBUNITS:
> -		return "active subunits";
> -	case ENGINE_WAIT_KICK:
> -		return "wait kick";
> -	case ENGINE_DEAD:
> -		return "dead";
> -	}
> -
> -	return "unknown";
> -}
>   
>   static inline unsigned int
>   execlists_num_ports(const struct intel_engine_execlists * const execlists)
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> index 1eb51147839a..d829ad340ca0 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> @@ -305,6 +305,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
>   	__sprint_engine_name(engine);
>   
>   	engine->props.preempt_timeout = CONFIG_DRM_I915_PREEMPT_TIMEOUT;
> +	engine->props.heartbeat_interval = CONFIG_DRM_I915_HEARTBEAT_INTERVAL;
>   
>   	/*
>   	 * To be overridden by the backend on setup. However to facilitate
> @@ -599,7 +600,6 @@ static int intel_engine_setup_common(struct intel_engine_cs *engine)
>   	intel_engine_init_active(engine, ENGINE_PHYSICAL);
>   	intel_engine_init_breadcrumbs(engine);
>   	intel_engine_init_execlists(engine);
> -	intel_engine_init_hangcheck(engine);
>   	intel_engine_init_cmd_parser(engine);
>   	intel_engine_init__pm(engine);
>   
> @@ -1432,8 +1432,13 @@ void intel_engine_dump(struct intel_engine_cs *engine,
>   		drm_printf(m, "*** WEDGED ***\n");
>   
>   	drm_printf(m, "\tAwake? %d\n", atomic_read(&engine->wakeref.count));
> -	drm_printf(m, "\tHangcheck: %d ms ago\n",
> -		   jiffies_to_msecs(jiffies - engine->hangcheck.action_timestamp));
> +
> +	rcu_read_lock();
> +	rq = READ_ONCE(engine->heartbeat.systole);
> +	if (rq)
> +		drm_printf(m, "\tHeartbeat: %d ms ago\n",
> +			   jiffies_to_msecs(jiffies - rq->emitted_jiffies));
> +	rcu_read_unlock();
>   	drm_printf(m, "\tReset count: %d (global %d)\n",
>   		   i915_reset_engine_count(error, engine),
>   		   i915_reset_count(error));
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> index 2fc413f9d506..dc741e86fb41 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> @@ -11,6 +11,27 @@
>   #include "intel_engine_pm.h"
>   #include "intel_engine.h"
>   #include "intel_gt.h"
> +#include "intel_reset.h"
> +
> +/*
> + * While the engine is active, we send a periodic pulse along the engine
> + * to check on its health and to flush any idle-barriers. If that request
> + * is stuck, and we fail to preempt it, we declare the engine hung and
> + * issue a reset -- in the hope that restores progress.
> + */
> +
> +static void next_heartbeat(struct intel_engine_cs *engine)
> +{
> +	long delay;
> +
> +	delay = READ_ONCE(engine->props.heartbeat_interval);
> +	if (!delay)
> +		return;
> +
> +	delay = msecs_to_jiffies_timeout(delay);
> +	schedule_delayed_work(&engine->heartbeat.work,
> +			      round_jiffies_up_relative(delay));
> +}
>   
>   static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
>   {
> @@ -18,6 +39,113 @@ static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
>   	i915_request_add_active_barriers(rq);
>   }
>   
> +static void show_heartbeat(const struct i915_request *rq,
> +			   struct intel_engine_cs *engine)
> +{
> +	struct drm_printer p = drm_debug_printer("heartbeat");
> +
> +	intel_engine_dump(engine, &p,
> +			  "%s heartbeat {prio:%d} not ticking\n",
> +			  engine->name,
> +			  rq->sched.attr.priority);
> +}
> +
> +static void heartbeat(struct work_struct *wrk)
> +{
> +	struct i915_sched_attr attr = {
> +		.priority = I915_USER_PRIORITY(I915_PRIORITY_MIN),
> +	};
> +	struct intel_engine_cs *engine =
> +		container_of(wrk, typeof(*engine), heartbeat.work.work);
> +	struct intel_context *ce = engine->kernel_context;
> +	struct i915_request *rq;
> +
> +	if (!intel_engine_pm_get_if_awake(engine))
> +		return;
> +
> +	rq = engine->heartbeat.systole;
> +	if (rq && i915_request_completed(rq)) {
> +		i915_request_put(rq);
> +		engine->heartbeat.systole = NULL;
> +	}
> +
> +	if (intel_gt_is_wedged(engine->gt))
> +		goto out;
> +
> +	if (engine->heartbeat.systole) {
> +		if (engine->schedule &&
> +		    rq->sched.attr.priority < I915_PRIORITY_BARRIER) {
> +			/*
> +			 * Gradually raise the priority of the heartbeat to
> +			 * give high priority work [which presumably desires
> +			 * low latency and no jitter] the chance to naturally
> +			 * complete before being preempted.
> +			 */
> +			attr.priority = 0;
> +			if (rq->sched.attr.priority >= attr.priority)
> +				attr.priority = I915_USER_PRIORITY(I915_PRIORITY_HEARTBEAT);
> +			if (rq->sched.attr.priority >= attr.priority)
> +				attr.priority = I915_PRIORITY_BARRIER;
> +
> +			local_bh_disable();
> +			engine->schedule(rq, &attr);
> +			local_bh_enable();
> +		} else {
> +			if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
> +				show_heartbeat(rq, engine);
> +
> +			intel_gt_handle_error(engine->gt, engine->mask,
> +					      I915_ERROR_CAPTURE,
> +					      "stopped heartbeat on %s",
> +					      engine->name);
> +		}
> +		goto out;
> +	}
> +
> +	if (engine->wakeref_serial == engine->serial)
> +		goto out;
> +
> +	mutex_lock(&ce->timeline->mutex);
> +
> +	intel_context_enter(ce);
> +	rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
> +	intel_context_exit(ce);
> +	if (IS_ERR(rq))
> +		goto unlock;
> +
> +	idle_pulse(engine, rq);
> +	if (i915_modparams.enable_hangcheck)
> +		engine->heartbeat.systole = i915_request_get(rq);
> +
> +	__i915_request_commit(rq);
> +	__i915_request_queue(rq, &attr);
> +
> +unlock:
> +	mutex_unlock(&ce->timeline->mutex);
> +out:
> +	next_heartbeat(engine);
> +	intel_engine_pm_put(engine);
> +}
> +
> +void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine)
> +{
> +	if (!CONFIG_DRM_I915_HEARTBEAT_INTERVAL)
> +		return;
> +
> +	next_heartbeat(engine);
> +}
> +
> +void intel_engine_park_heartbeat(struct intel_engine_cs *engine)
> +{
> +	cancel_delayed_work(&engine->heartbeat.work);
> +	i915_request_put(fetch_and_zero(&engine->heartbeat.systole));
> +}
> +
> +void intel_engine_init_heartbeat(struct intel_engine_cs *engine)
> +{
> +	INIT_DELAYED_WORK(&engine->heartbeat.work, heartbeat);
> +}
> +
>   int intel_engine_pulse(struct intel_engine_cs *engine)
>   {
>   	struct i915_sched_attr attr = { .priority = I915_PRIORITY_BARRIER };
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
> index b950451b5998..39391004554d 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
> @@ -9,6 +9,11 @@
>   
>   struct intel_engine_cs;
>   
> +void intel_engine_init_heartbeat(struct intel_engine_cs *engine);
> +
> +void intel_engine_park_heartbeat(struct intel_engine_cs *engine);
> +void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine);
> +
>   int intel_engine_pulse(struct intel_engine_cs *engine);
>   
>   #endif /* INTEL_ENGINE_HEARTBEAT_H */
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> index 7d76611d9df1..6fbfa2162e54 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> @@ -7,6 +7,7 @@
>   #include "i915_drv.h"
>   
>   #include "intel_engine.h"
> +#include "intel_engine_heartbeat.h"
>   #include "intel_engine_pm.h"
>   #include "intel_engine_pool.h"
>   #include "intel_gt.h"
> @@ -34,7 +35,7 @@ static int __engine_unpark(struct intel_wakeref *wf)
>   	if (engine->unpark)
>   		engine->unpark(engine);
>   
> -	intel_engine_init_hangcheck(engine);
> +	intel_engine_unpark_heartbeat(engine);
>   	return 0;
>   }
>   
> @@ -158,6 +159,7 @@ static int __engine_park(struct intel_wakeref *wf)
>   
>   	call_idle_barriers(engine); /* cleanup after wedging */
>   
> +	intel_engine_park_heartbeat(engine);
>   	intel_engine_disarm_breadcrumbs(engine);
>   	intel_engine_pool_park(&engine->pool);
>   
> @@ -188,6 +190,7 @@ void intel_engine_init__pm(struct intel_engine_cs *engine)
>   	struct intel_runtime_pm *rpm = engine->uncore->rpm;
>   
>   	intel_wakeref_init(&engine->wakeref, rpm, &wf_ops);
> +	intel_engine_init_heartbeat(engine);
>   }
>   
>   #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
> index b9eb9e4cba0a..d7adbcb9c6e5 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_sysfs.c
> @@ -179,6 +179,35 @@ preempt_timeout_store(struct kobject *kobj, struct kobj_attribute *attr,
>   static struct kobj_attribute preempt_timeout_attr =
>   __ATTR(preempt_timeout_ms, 0644, preempt_timeout_show, preempt_timeout_store);
>   
> +static ssize_t
> +heartbeat_interval_show(struct kobject *kobj, struct kobj_attribute *attr,
> +			char *buf)
> +{
> +	struct intel_engine_cs *engine = kobj_to_engine(kobj);
> +
> +	return sprintf(buf, "%lu\n", engine->props.heartbeat_interval);
> +}
> +
> +static ssize_t
> +heartbeat_interval_store(struct kobject *kobj, struct kobj_attribute *attr,
> +			 const char *buf, size_t count)
> +{
> +	struct intel_engine_cs *engine = kobj_to_engine(kobj);
> +	unsigned long delay;
> +	int err;
> +
> +	err = kstrtoul(buf, 0, &delay);
> +	if (err)
> +		return err;
> +
> +	WRITE_ONCE(engine->props.heartbeat_interval, delay);
> +	return count;
> +}
> +
> +static struct kobj_attribute heartbeat_interval_attr =
> +__ATTR(heartbeat_interval_ms, 0644,
> +       heartbeat_interval_show, heartbeat_interval_store);
> +
>   static void kobj_engine_release(struct kobject *kobj)
>   {
>   	kfree(kobj);
> @@ -218,6 +247,9 @@ void intel_engines_add_sysfs(struct drm_i915_private *i915)
>   		&inst_attr.attr,
>   		&caps_attr.attr,
>   		&all_caps_attr.attr,
> +#if CONFIG_DRM_I915_HEARTBEAT_INTERVAL
> +		&heartbeat_interval_attr.attr,
> +#endif
>   		NULL
>   	};
>   
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> index 6af9b0096975..ad3be2fbd71a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> @@ -15,6 +15,7 @@
>   #include <linux/rbtree.h>
>   #include <linux/timer.h>
>   #include <linux/types.h>
> +#include <linux/workqueue.h>
>   
>   #include "i915_gem.h"
>   #include "i915_pmu.h"
> @@ -76,14 +77,6 @@ struct intel_instdone {
>   	u32 row[I915_MAX_SLICES][I915_MAX_SUBSLICES];
>   };
>   
> -struct intel_engine_hangcheck {
> -	u64 acthd;
> -	u32 last_ring;
> -	u32 last_head;
> -	unsigned long action_timestamp;
> -	struct intel_instdone instdone;
> -};
> -
>   struct intel_ring {
>   	struct kref ref;
>   	struct i915_vma *vma;
> @@ -330,6 +323,11 @@ struct intel_engine_cs {
>   
>   	intel_engine_mask_t saturated; /* submitting semaphores too late? */
>   
> +	struct {
> +		struct delayed_work work;
> +		struct i915_request *systole;
> +	} heartbeat;
> +
>   	unsigned long serial;
>   
>   	unsigned long wakeref_serial;
> @@ -480,8 +478,6 @@ struct intel_engine_cs {
>   	/* status_notifier: list of callbacks for context-switch changes */
>   	struct atomic_notifier_head context_status_notifier;
>   
> -	struct intel_engine_hangcheck hangcheck;
> -
>   #define I915_ENGINE_NEEDS_CMD_PARSER BIT(0)
>   #define I915_ENGINE_SUPPORTS_STATS   BIT(1)
>   #define I915_ENGINE_HAS_PREEMPTION   BIT(2)
> @@ -549,6 +545,7 @@ struct intel_engine_cs {
>   
>   	struct {
>   		unsigned long preempt_timeout;
> +		unsigned long heartbeat_interval;
>   	} props;
>   };
>   
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
> index b3619a2a5d0e..f3e1925987e1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt.c
> @@ -22,7 +22,6 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
>   	INIT_LIST_HEAD(&gt->closed_vma);
>   	spin_lock_init(&gt->closed_lock);
>   
> -	intel_gt_init_hangcheck(gt);
>   	intel_gt_init_reset(gt);
>   	intel_gt_init_requests(gt);
>   	intel_gt_pm_init_early(gt);
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
> index e6ab0bff0efb..5b6effed3713 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt.h
> @@ -46,8 +46,6 @@ void intel_gt_clear_error_registers(struct intel_gt *gt,
>   void intel_gt_flush_ggtt_writes(struct intel_gt *gt);
>   void intel_gt_chipset_flush(struct intel_gt *gt);
>   
> -void intel_gt_init_hangcheck(struct intel_gt *gt);
> -
>   static inline u32 intel_gt_scratch_offset(const struct intel_gt *gt,
>   					  enum intel_gt_scratch_field field)
>   {
> @@ -59,6 +57,4 @@ static inline bool intel_gt_is_wedged(struct intel_gt *gt)
>   	return __intel_reset_failed(&gt->reset);
>   }
>   
> -void intel_gt_queue_hangcheck(struct intel_gt *gt);
> -
>   #endif /* __INTEL_GT_H__ */
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> index 87e34e0b6427..85af0d16f869 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> @@ -52,7 +52,6 @@ static int __gt_unpark(struct intel_wakeref *wf)
>   
>   	i915_pmu_gt_unparked(i915);
>   
> -	intel_gt_queue_hangcheck(gt);
>   	intel_gt_unpark_requests(gt);
>   
>   	pm_notify(gt, INTEL_GT_UNPARK);
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
> index be4b263621c8..a2e8c8a952e7 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
> @@ -26,14 +26,6 @@ struct i915_ggtt;
>   struct intel_engine_cs;
>   struct intel_uncore;
>   
> -struct intel_hangcheck {
> -	/* For hangcheck timer */
> -#define DRM_I915_HANGCHECK_PERIOD 1500 /* in ms */
> -#define DRM_I915_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_I915_HANGCHECK_PERIOD)
> -
> -	struct delayed_work work;
> -};
> -
>   struct intel_gt {
>   	struct drm_i915_private *i915;
>   	struct intel_uncore *uncore;
> @@ -67,7 +59,6 @@ struct intel_gt {
>   	struct list_head closed_vma;
>   	spinlock_t closed_lock; /* guards the list of closed_vma */
>   
> -	struct intel_hangcheck hangcheck;
>   	struct intel_reset reset;
>   
>   	/**
> diff --git a/drivers/gpu/drm/i915/gt/intel_hangcheck.c b/drivers/gpu/drm/i915/gt/intel_hangcheck.c
> deleted file mode 100644
> index c14dbeb3ccc3..000000000000
> --- a/drivers/gpu/drm/i915/gt/intel_hangcheck.c
> +++ /dev/null
> @@ -1,361 +0,0 @@
> -/*
> - * Copyright © 2016 Intel Corporation
> - *
> - * Permission is hereby granted, free of charge, to any person obtaining a
> - * copy of this software and associated documentation files (the "Software"),
> - * to deal in the Software without restriction, including without limitation
> - * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> - * and/or sell copies of the Software, and to permit persons to whom the
> - * Software is furnished to do so, subject to the following conditions:
> - *
> - * The above copyright notice and this permission notice (including the next
> - * paragraph) shall be included in all copies or substantial portions of the
> - * Software.
> - *
> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> - * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> - * IN THE SOFTWARE.
> - *
> - */
> -
> -#include "i915_drv.h"
> -#include "intel_engine.h"
> -#include "intel_gt.h"
> -#include "intel_reset.h"
> -
> -struct hangcheck {
> -	u64 acthd;
> -	u32 ring;
> -	u32 head;
> -	enum intel_engine_hangcheck_action action;
> -	unsigned long action_timestamp;
> -	int deadlock;
> -	struct intel_instdone instdone;
> -	bool wedged:1;
> -	bool stalled:1;
> -};
> -
> -static bool instdone_unchanged(u32 current_instdone, u32 *old_instdone)
> -{
> -	u32 tmp = current_instdone | *old_instdone;
> -	bool unchanged;
> -
> -	unchanged = tmp == *old_instdone;
> -	*old_instdone |= tmp;
> -
> -	return unchanged;
> -}
> -
> -static bool subunits_stuck(struct intel_engine_cs *engine)
> -{
> -	struct drm_i915_private *dev_priv = engine->i915;
> -	const struct sseu_dev_info *sseu = &RUNTIME_INFO(dev_priv)->sseu;
> -	struct intel_instdone instdone;
> -	struct intel_instdone *accu_instdone = &engine->hangcheck.instdone;
> -	bool stuck;
> -	int slice;
> -	int subslice;
> -
> -	intel_engine_get_instdone(engine, &instdone);
> -
> -	/* There might be unstable subunit states even when
> -	 * actual head is not moving. Filter out the unstable ones by
> -	 * accumulating the undone -> done transitions and only
> -	 * consider those as progress.
> -	 */
> -	stuck = instdone_unchanged(instdone.instdone,
> -				   &accu_instdone->instdone);
> -	stuck &= instdone_unchanged(instdone.slice_common,
> -				    &accu_instdone->slice_common);
> -
> -	for_each_instdone_slice_subslice(dev_priv, sseu, slice, subslice) {
> -		stuck &= instdone_unchanged(instdone.sampler[slice][subslice],
> -					    &accu_instdone->sampler[slice][subslice]);
> -		stuck &= instdone_unchanged(instdone.row[slice][subslice],
> -					    &accu_instdone->row[slice][subslice]);
> -	}
> -
> -	return stuck;
> -}
> -
> -static enum intel_engine_hangcheck_action
> -head_stuck(struct intel_engine_cs *engine, u64 acthd)
> -{
> -	if (acthd != engine->hangcheck.acthd) {
> -
> -		/* Clear subunit states on head movement */
> -		memset(&engine->hangcheck.instdone, 0,
> -		       sizeof(engine->hangcheck.instdone));
> -
> -		return ENGINE_ACTIVE_HEAD;
> -	}
> -
> -	if (!subunits_stuck(engine))
> -		return ENGINE_ACTIVE_SUBUNITS;
> -
> -	return ENGINE_DEAD;
> -}
> -
> -static enum intel_engine_hangcheck_action
> -engine_stuck(struct intel_engine_cs *engine, u64 acthd)
> -{
> -	enum intel_engine_hangcheck_action ha;
> -	u32 tmp;
> -
> -	ha = head_stuck(engine, acthd);
> -	if (ha != ENGINE_DEAD)
> -		return ha;
> -
> -	if (IS_GEN(engine->i915, 2))
> -		return ENGINE_DEAD;
> -
> -	/* Is the chip hanging on a WAIT_FOR_EVENT?
> -	 * If so we can simply poke the RB_WAIT bit
> -	 * and break the hang. This should work on
> -	 * all but the second generation chipsets.
> -	 */
> -	tmp = ENGINE_READ(engine, RING_CTL);
> -	if (tmp & RING_WAIT) {
> -		intel_gt_handle_error(engine->gt, engine->mask, 0,
> -				      "stuck wait on %s", engine->name);
> -		ENGINE_WRITE(engine, RING_CTL, tmp);
> -		return ENGINE_WAIT_KICK;
> -	}
> -
> -	return ENGINE_DEAD;
> -}
> -
> -static void hangcheck_load_sample(struct intel_engine_cs *engine,
> -				  struct hangcheck *hc)
> -{
> -	hc->acthd = intel_engine_get_active_head(engine);
> -	hc->ring = ENGINE_READ(engine, RING_START);
> -	hc->head = ENGINE_READ(engine, RING_HEAD);
> -}
> -
> -static void hangcheck_store_sample(struct intel_engine_cs *engine,
> -				   const struct hangcheck *hc)
> -{
> -	engine->hangcheck.acthd = hc->acthd;
> -	engine->hangcheck.last_ring = hc->ring;
> -	engine->hangcheck.last_head = hc->head;
> -}
> -
> -static enum intel_engine_hangcheck_action
> -hangcheck_get_action(struct intel_engine_cs *engine,
> -		     const struct hangcheck *hc)
> -{
> -	if (intel_engine_is_idle(engine))
> -		return ENGINE_IDLE;
> -
> -	if (engine->hangcheck.last_ring != hc->ring)
> -		return ENGINE_ACTIVE_SEQNO;
> -
> -	if (engine->hangcheck.last_head != hc->head)
> -		return ENGINE_ACTIVE_SEQNO;
> -
> -	return engine_stuck(engine, hc->acthd);
> -}
> -
> -static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
> -					struct hangcheck *hc)
> -{
> -	unsigned long timeout = I915_ENGINE_DEAD_TIMEOUT;
> -
> -	hc->action = hangcheck_get_action(engine, hc);
> -
> -	/* We always increment the progress
> -	 * if the engine is busy and still processing
> -	 * the same request, so that no single request
> -	 * can run indefinitely (such as a chain of
> -	 * batches). The only time we do not increment
> -	 * the hangcheck score on this ring, if this
> -	 * engine is in a legitimate wait for another
> -	 * engine. In that case the waiting engine is a
> -	 * victim and we want to be sure we catch the
> -	 * right culprit. Then every time we do kick
> -	 * the ring, make it as a progress as the seqno
> -	 * advancement might ensure and if not, it
> -	 * will catch the hanging engine.
> -	 */
> -
> -	switch (hc->action) {
> -	case ENGINE_IDLE:
> -	case ENGINE_ACTIVE_SEQNO:
> -		/* Clear head and subunit states on seqno movement */
> -		hc->acthd = 0;
> -
> -		memset(&engine->hangcheck.instdone, 0,
> -		       sizeof(engine->hangcheck.instdone));
> -
> -		/* Intentional fall through */
> -	case ENGINE_WAIT_KICK:
> -	case ENGINE_WAIT:
> -		engine->hangcheck.action_timestamp = jiffies;
> -		break;
> -
> -	case ENGINE_ACTIVE_HEAD:
> -	case ENGINE_ACTIVE_SUBUNITS:
> -		/*
> -		 * Seqno stuck with still active engine gets leeway,
> -		 * in hopes that it is just a long shader.
> -		 */
> -		timeout = I915_SEQNO_DEAD_TIMEOUT;
> -		break;
> -
> -	case ENGINE_DEAD:
> -		break;
> -
> -	default:
> -		MISSING_CASE(hc->action);
> -	}
> -
> -	hc->stalled = time_after(jiffies,
> -				 engine->hangcheck.action_timestamp + timeout);
> -	hc->wedged = time_after(jiffies,
> -				 engine->hangcheck.action_timestamp +
> -				 I915_ENGINE_WEDGED_TIMEOUT);
> -}
> -
> -static void hangcheck_declare_hang(struct intel_gt *gt,
> -				   intel_engine_mask_t hung,
> -				   intel_engine_mask_t stuck)
> -{
> -	struct intel_engine_cs *engine;
> -	intel_engine_mask_t tmp;
> -	char msg[80];
> -	int len;
> -
> -	/* If some rings hung but others were still busy, only
> -	 * blame the hanging rings in the synopsis.
> -	 */
> -	if (stuck != hung)
> -		hung &= ~stuck;
> -	len = scnprintf(msg, sizeof(msg),
> -			"%s on ", stuck == hung ? "no progress" : "hang");
> -	for_each_engine_masked(engine, gt->i915, hung, tmp)
> -		len += scnprintf(msg + len, sizeof(msg) - len,
> -				 "%s, ", engine->name);
> -	msg[len-2] = '\0';
> -
> -	return intel_gt_handle_error(gt, hung, I915_ERROR_CAPTURE, "%s", msg);
> -}
> -
> -/*
> - * This is called when the chip hasn't reported back with completed
> - * batchbuffers in a long time. We keep track per ring seqno progress and
> - * if there are no progress, hangcheck score for that ring is increased.
> - * Further, acthd is inspected to see if the ring is stuck. On stuck case
> - * we kick the ring. If we see no progress on three subsequent calls
> - * we assume chip is wedged and try to fix it by resetting the chip.
> - */
> -static void hangcheck_elapsed(struct work_struct *work)
> -{
> -	struct intel_gt *gt =
> -		container_of(work, typeof(*gt), hangcheck.work.work);
> -	intel_engine_mask_t hung = 0, stuck = 0, wedged = 0;
> -	struct intel_engine_cs *engine;
> -	enum intel_engine_id id;
> -	intel_wakeref_t wakeref;
> -
> -	if (!i915_modparams.enable_hangcheck)
> -		return;
> -
> -	if (!READ_ONCE(gt->awake))
> -		return;
> -
> -	if (intel_gt_is_wedged(gt))
> -		return;
> -
> -	wakeref = intel_runtime_pm_get_if_in_use(gt->uncore->rpm);
> -	if (!wakeref)
> -		return;
> -
> -	/* As enabling the GPU requires fairly extensive mmio access,
> -	 * periodically arm the mmio checker to see if we are triggering
> -	 * any invalid access.
> -	 */
> -	intel_uncore_arm_unclaimed_mmio_detection(gt->uncore);
> -
> -	for_each_engine(engine, gt->i915, id) {
> -		struct hangcheck hc;
> -
> -		intel_engine_breadcrumbs_irq(engine);
> -
> -		hangcheck_load_sample(engine, &hc);
> -		hangcheck_accumulate_sample(engine, &hc);
> -		hangcheck_store_sample(engine, &hc);
> -
> -		if (hc.stalled) {
> -			hung |= engine->mask;
> -			if (hc.action != ENGINE_DEAD)
> -				stuck |= engine->mask;
> -		}
> -
> -		if (hc.wedged)
> -			wedged |= engine->mask;
> -	}
> -
> -	if (GEM_SHOW_DEBUG() && (hung | stuck)) {
> -		struct drm_printer p = drm_debug_printer("hangcheck");
> -
> -		for_each_engine(engine, gt->i915, id) {
> -			if (intel_engine_is_idle(engine))
> -				continue;
> -
> -			intel_engine_dump(engine, &p, "%s\n", engine->name);
> -		}
> -	}
> -
> -	if (wedged) {
> -		dev_err(gt->i915->drm.dev,
> -			"GPU recovery timed out,"
> -			" cancelling all in-flight rendering.\n");
> -		GEM_TRACE_DUMP();
> -		intel_gt_set_wedged(gt);
> -	}
> -
> -	if (hung)
> -		hangcheck_declare_hang(gt, hung, stuck);
> -
> -	intel_runtime_pm_put(gt->uncore->rpm, wakeref);
> -
> -	/* Reset timer in case GPU hangs without another request being added */
> -	intel_gt_queue_hangcheck(gt);
> -}
> -
> -void intel_gt_queue_hangcheck(struct intel_gt *gt)
> -{
> -	unsigned long delay;
> -
> -	if (unlikely(!i915_modparams.enable_hangcheck))
> -		return;
> -
> -	/*
> -	 * Don't continually defer the hangcheck so that it is always run at
> -	 * least once after work has been scheduled on any ring. Otherwise,
> -	 * we will ignore a hung ring if a second ring is kept busy.
> -	 */
> -
> -	delay = round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES);
> -	queue_delayed_work(system_long_wq, &gt->hangcheck.work, delay);
> -}
> -
> -void intel_engine_init_hangcheck(struct intel_engine_cs *engine)
> -{
> -	memset(&engine->hangcheck, 0, sizeof(engine->hangcheck));
> -	engine->hangcheck.action_timestamp = jiffies;
> -}
> -
> -void intel_gt_init_hangcheck(struct intel_gt *gt)
> -{
> -	INIT_DELAYED_WORK(&gt->hangcheck.work, hangcheck_elapsed);
> -}
> -
> -#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> -#include "selftest_hangcheck.c"
> -#endif
> diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
> index 77445d100ca8..9d6451f9a855 100644
> --- a/drivers/gpu/drm/i915/gt/intel_reset.c
> +++ b/drivers/gpu/drm/i915/gt/intel_reset.c
> @@ -1024,8 +1024,6 @@ void intel_gt_reset(struct intel_gt *gt,
>   	if (ret)
>   		goto taint;
>   
> -	intel_gt_queue_hangcheck(gt);
> -
>   finish:
>   	reset_finish(gt, awake);
>   unlock:
> @@ -1353,4 +1351,5 @@ void __intel_fini_wedge(struct intel_wedge_me *w)
>   
>   #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>   #include "selftest_reset.c"
> +#include "selftest_hangcheck.c"
>   #endif
> diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> index 569a4105d49e..570546eda5e8 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> @@ -1686,7 +1686,6 @@ int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
>   	};
>   	struct intel_gt *gt = &i915->gt;
>   	intel_wakeref_t wakeref;
> -	bool saved_hangcheck;
>   	int err;
>   
>   	if (!intel_has_gpu_reset(gt))
> @@ -1696,12 +1695,9 @@ int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
>   		return -EIO; /* we're long past hope of a successful reset */
>   
>   	wakeref = intel_runtime_pm_get(gt->uncore->rpm);
> -	saved_hangcheck = fetch_and_zero(&i915_modparams.enable_hangcheck);
> -	drain_delayed_work(&gt->hangcheck.work); /* flush param */
>   
>   	err = intel_gt_live_subtests(tests, gt);
>   
> -	i915_modparams.enable_hangcheck = saved_hangcheck;
>   	intel_runtime_pm_put(gt->uncore->rpm, wakeref);
>   
>   	return err;
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index a541b6ae534f..2fae6e592b09 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -1011,92 +1011,6 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
>   	return ret;
>   }
>   
> -static void i915_instdone_info(struct drm_i915_private *dev_priv,
> -			       struct seq_file *m,
> -			       struct intel_instdone *instdone)
> -{
> -	const struct sseu_dev_info *sseu = &RUNTIME_INFO(dev_priv)->sseu;
> -	int slice;
> -	int subslice;
> -
> -	seq_printf(m, "\t\tINSTDONE: 0x%08x\n",
> -		   instdone->instdone);
> -
> -	if (INTEL_GEN(dev_priv) <= 3)
> -		return;
> -
> -	seq_printf(m, "\t\tSC_INSTDONE: 0x%08x\n",
> -		   instdone->slice_common);
> -
> -	if (INTEL_GEN(dev_priv) <= 6)
> -		return;
> -
> -	for_each_instdone_slice_subslice(dev_priv, sseu, slice, subslice)
> -		seq_printf(m, "\t\tSAMPLER_INSTDONE[%d][%d]: 0x%08x\n",
> -			   slice, subslice, instdone->sampler[slice][subslice]);
> -
> -	for_each_instdone_slice_subslice(dev_priv, sseu, slice, subslice)
> -		seq_printf(m, "\t\tROW_INSTDONE[%d][%d]: 0x%08x\n",
> -			   slice, subslice, instdone->row[slice][subslice]);
> -}
> -
> -static int i915_hangcheck_info(struct seq_file *m, void *unused)
> -{
> -	struct drm_i915_private *i915 = node_to_i915(m->private);
> -	struct intel_gt *gt = &i915->gt;
> -	struct intel_engine_cs *engine;
> -	intel_wakeref_t wakeref;
> -	enum intel_engine_id id;
> -
> -	seq_printf(m, "Reset flags: %lx\n", gt->reset.flags);
> -	if (test_bit(I915_WEDGED, &gt->reset.flags))
> -		seq_puts(m, "\tWedged\n");
> -	if (test_bit(I915_RESET_BACKOFF, &gt->reset.flags))
> -		seq_puts(m, "\tDevice (global) reset in progress\n");
> -
> -	if (!i915_modparams.enable_hangcheck) {
> -		seq_puts(m, "Hangcheck disabled\n");
> -		return 0;
> -	}
> -
> -	if (timer_pending(&gt->hangcheck.work.timer))
> -		seq_printf(m, "Hangcheck active, timer fires in %dms\n",
> -			   jiffies_to_msecs(gt->hangcheck.work.timer.expires -
> -					    jiffies));
> -	else if (delayed_work_pending(&gt->hangcheck.work))
> -		seq_puts(m, "Hangcheck active, work pending\n");
> -	else
> -		seq_puts(m, "Hangcheck inactive\n");
> -
> -	seq_printf(m, "GT active? %s\n", yesno(gt->awake));
> -
> -	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
> -		for_each_engine(engine, i915, id) {
> -			struct intel_instdone instdone;
> -
> -			seq_printf(m, "%s: %d ms ago\n",
> -				   engine->name,
> -				   jiffies_to_msecs(jiffies -
> -						    engine->hangcheck.action_timestamp));
> -
> -			seq_printf(m, "\tACTHD = 0x%08llx [current 0x%08llx]\n",
> -				   (long long)engine->hangcheck.acthd,
> -				   intel_engine_get_active_head(engine));
> -
> -			intel_engine_get_instdone(engine, &instdone);
> -
> -			seq_puts(m, "\tinstdone read =\n");
> -			i915_instdone_info(i915, m, &instdone);
> -
> -			seq_puts(m, "\tinstdone accu =\n");
> -			i915_instdone_info(i915, m,
> -					   &engine->hangcheck.instdone);
> -		}
> -	}
> -
> -	return 0;
> -}
> -
>   static int ironlake_drpc_info(struct seq_file *m)
>   {
>   	struct drm_i915_private *i915 = node_to_i915(m->private);
> @@ -4339,7 +4253,6 @@ static const struct drm_info_list i915_debugfs_list[] = {
>   	{"i915_guc_stage_pool", i915_guc_stage_pool, 0},
>   	{"i915_huc_load_status", i915_huc_load_status_info, 0},
>   	{"i915_frequency_info", i915_frequency_info, 0},
> -	{"i915_hangcheck_info", i915_hangcheck_info, 0},
>   	{"i915_drpc_info", i915_drpc_info, 0},
>   	{"i915_ring_freq_table", i915_ring_freq_table, 0},
>   	{"i915_frontbuffer_tracking", i915_frontbuffer_tracking, 0},
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index f02a34722217..1dae43ed4c48 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -1546,10 +1546,7 @@ void i915_driver_remove(struct drm_i915_private *i915)
>   
>   	i915_driver_modeset_remove(i915);
>   
> -	/* Free error state after interrupts are fully disabled. */
> -	cancel_delayed_work_sync(&i915->gt.hangcheck.work);
>   	i915_reset_error_state(i915);
> -
>   	i915_gem_driver_remove(i915);
>   
>   	intel_power_domains_driver_remove(i915);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index c46b339064c0..19f5b0608a56 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1847,7 +1847,6 @@ void i915_driver_remove(struct drm_i915_private *i915);
>   int i915_resume_switcheroo(struct drm_i915_private *i915);
>   int i915_suspend_switcheroo(struct drm_i915_private *i915, pm_message_t state);
>   
> -void intel_engine_init_hangcheck(struct intel_engine_cs *engine);
>   int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
>   
>   static inline bool intel_gvt_active(struct drm_i915_private *dev_priv)
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 5cf4eed5add8..47239df653f2 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -534,10 +534,6 @@ static void error_print_engine(struct drm_i915_error_state_buf *m,
>   	}
>   	err_printf(m, "  ring->head: 0x%08x\n", ee->cpu_ring_head);
>   	err_printf(m, "  ring->tail: 0x%08x\n", ee->cpu_ring_tail);
> -	err_printf(m, "  hangcheck timestamp: %dms (%lu%s)\n",
> -		   jiffies_to_msecs(ee->hangcheck_timestamp - epoch),
> -		   ee->hangcheck_timestamp,
> -		   ee->hangcheck_timestamp == epoch ? "; epoch" : "");
>   	err_printf(m, "  engine reset count: %u\n", ee->reset_count);
>   
>   	for (n = 0; n < ee->num_ports; n++) {
> @@ -679,11 +675,8 @@ static void __err_print_to_sgl(struct drm_i915_error_state_buf *m,
>   	ts = ktime_to_timespec64(error->uptime);
>   	err_printf(m, "Uptime: %lld s %ld us\n",
>   		   (s64)ts.tv_sec, ts.tv_nsec / NSEC_PER_USEC);
> -	err_printf(m, "Epoch: %lu jiffies (%u HZ)\n", error->epoch, HZ);
> -	err_printf(m, "Capture: %lu jiffies; %d ms ago, %d ms after epoch\n",
> -		   error->capture,
> -		   jiffies_to_msecs(jiffies - error->capture),
> -		   jiffies_to_msecs(error->capture - error->epoch));
> +	err_printf(m, "Capture: %lu jiffies; %d ms ago\n",
> +		   error->capture, jiffies_to_msecs(jiffies - error->capture));
>   
>   	for (ee = error->engine; ee; ee = ee->next)
>   		err_printf(m, "Active process (on ring %s): %s [%d]\n",
> @@ -742,7 +735,7 @@ static void __err_print_to_sgl(struct drm_i915_error_state_buf *m,
>   		err_printf(m, "GTT_CACHE_EN: 0x%08x\n", error->gtt_cache);
>   
>   	for (ee = error->engine; ee; ee = ee->next)
> -		error_print_engine(m, ee, error->epoch);
> +		error_print_engine(m, ee, error->capture);
>   
>   	for (ee = error->engine; ee; ee = ee->next) {
>   		const struct drm_i915_error_object *obj;
> @@ -770,7 +763,7 @@ static void __err_print_to_sgl(struct drm_i915_error_state_buf *m,
>   			for (j = 0; j < ee->num_requests; j++)
>   				error_print_request(m, " ",
>   						    &ee->requests[j],
> -						    error->epoch);
> +						    error->capture);
>   		}
>   
>   		print_error_obj(m, ee->engine, "ringbuffer", ee->ringbuffer);
> @@ -1144,8 +1137,6 @@ static void error_record_engine_registers(struct i915_gpu_state *error,
>   	}
>   
>   	ee->idle = intel_engine_is_idle(engine);
> -	if (!ee->idle)
> -		ee->hangcheck_timestamp = engine->hangcheck.action_timestamp;
>   	ee->reset_count = i915_reset_engine_count(&dev_priv->gpu_error,
>   						  engine);
>   
> @@ -1657,20 +1648,6 @@ static void capture_params(struct i915_gpu_state *error)
>   	i915_params_copy(&error->params, &i915_modparams);
>   }
>   
> -static unsigned long capture_find_epoch(const struct i915_gpu_state *error)
> -{
> -	const struct drm_i915_error_engine *ee;
> -	unsigned long epoch = error->capture;
> -
> -	for (ee = error->engine; ee; ee = ee->next) {
> -		if (ee->hangcheck_timestamp &&
> -		    time_before(ee->hangcheck_timestamp, epoch))
> -			epoch = ee->hangcheck_timestamp;
> -	}
> -
> -	return epoch;
> -}
> -
>   static void capture_finish(struct i915_gpu_state *error)
>   {
>   	struct i915_ggtt *ggtt = &error->i915->ggtt;
> @@ -1722,8 +1699,6 @@ i915_capture_gpu_state(struct drm_i915_private *i915)
>   	error->overlay = intel_overlay_capture_error_state(i915);
>   	error->display = intel_display_capture_error_state(i915);
>   
> -	error->epoch = capture_find_epoch(error);
> -
>   	capture_finish(error);
>   	compress_fini(&compress);
>   
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h
> index 7f1cd0b1fef7..4dc36d6ee3a2 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.h
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.h
> @@ -34,7 +34,6 @@ struct i915_gpu_state {
>   	ktime_t boottime;
>   	ktime_t uptime;
>   	unsigned long capture;
> -	unsigned long epoch;
>   
>   	struct drm_i915_private *i915;
>   
> @@ -86,7 +85,6 @@ struct i915_gpu_state {
>   
>   		/* Software tracked state */
>   		bool idle;
> -		unsigned long hangcheck_timestamp;
>   		int num_requests;
>   		u32 reset_count;
>   
> diff --git a/drivers/gpu/drm/i915/i915_priolist_types.h b/drivers/gpu/drm/i915/i915_priolist_types.h
> index ae8bb3cb627e..732aad148881 100644
> --- a/drivers/gpu/drm/i915/i915_priolist_types.h
> +++ b/drivers/gpu/drm/i915/i915_priolist_types.h
> @@ -16,6 +16,12 @@ enum {
>   	I915_PRIORITY_MIN = I915_CONTEXT_MIN_USER_PRIORITY - 1,
>   	I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY,
>   	I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1,
> +
> +	/* A preemptive pulse used to monitor the health of each engine */
> +	I915_PRIORITY_HEARTBEAT,
> +
> +	/* Interactive workload, scheduled for immediate pageflipping */
> +	I915_PRIORITY_DISPLAY,
>   };
>   
>   #define I915_USER_PRIORITY_SHIFT 2
> 

* Re: [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close
  2019-10-14 12:11   ` Tvrtko Ursulin
@ 2019-10-14 12:21     ` Chris Wilson
  2019-10-14 13:10       ` Tvrtko Ursulin
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 12:21 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 13:11:46)
> 
> On 14/10/2019 10:07, Chris Wilson wrote:
> > Normally, we rely on our hangcheck to prevent persistent batches from
> > hogging the GPU. However, if the user disables hangcheck, this mechanism
> > breaks down. Despite our insistence that this is unsafe, the users are
> > equally insistent that they want to use endless batches and will disable
> > the hangcheck mechanism. We are looking at perhaps replacing hangcheck
> > with a softer mechanism, that sends a pulse down the engine to check if
> > it is well. We can use the same preemptive pulse to flush an active
> > persistent context off the GPU upon context close, preventing resources
> > being lost and unkillable requests remaining on the GPU after process
> > termination. To avoid changing the ABI and accidentally breaking
> > existing userspace, we make the persistence of a context explicit and
> > enable it by default (matching current ABI). Userspace can opt out of
> > persistent mode (forcing requests to be cancelled when the context is
> > closed by process termination or explicitly) by a context parameter. To
> > facilitate existing use-cases of disabling hangcheck, if the modparam is
> > disabled (i915.enable_hangcheck=0), we disable persistence mode by
> > default.  (Note, one of the outcomes for supporting endless mode will be
> > the removal of hangchecking, at which point opting into persistent mode
> > will be mandatory, or maybe the default perhaps controlled by cgroups.)
> > 
> > v2: Check for hangchecking at context termination, so that we are not
> > left with undying contexts from a crafty user.
> > v3: Force context termination even if forced-preemption is disabled.
> > 
> > Testcase: igt/gem_ctx_persistence
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> > Cc: Michał Winiarski <michal.winiarski@intel.com>
> > Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> > Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gem/i915_gem_context.c   | 182 ++++++++++++++++++
> >   drivers/gpu/drm/i915/gem/i915_gem_context.h   |  15 ++
> >   .../gpu/drm/i915/gem/i915_gem_context_types.h |   1 +
> >   .../gpu/drm/i915/gem/selftests/mock_context.c |   2 +
> >   include/uapi/drm/i915_drm.h                   |  15 ++
> >   5 files changed, 215 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > index 5d8221c7ba83..70b72456e2c4 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > @@ -70,6 +70,7 @@
> >   #include <drm/i915_drm.h>
> >   
> >   #include "gt/intel_lrc_reg.h"
> > +#include "gt/intel_engine_heartbeat.h"
> >   #include "gt/intel_engine_user.h"
> >   
> >   #include "i915_gem_context.h"
> > @@ -269,6 +270,128 @@ void i915_gem_context_release(struct kref *ref)
> >               schedule_work(&gc->free_work);
> >   }
> >   
> > +static inline struct i915_gem_engines *
> > +__context_engines_static(const struct i915_gem_context *ctx)
> > +{
> > +     return rcu_dereference_protected(ctx->engines, true);
> > +}
> > +
> > +static bool __reset_engine(struct intel_engine_cs *engine)
> > +{
> > +     struct intel_gt *gt = engine->gt;
> > +     bool success = false;
> > +
> > +     if (!intel_has_reset_engine(gt))
> > +             return false;
> > +
> > +     if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > +                           &gt->reset.flags)) {
> > +             success = intel_engine_reset(engine, NULL) == 0;
> > +             clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
> > +                                   &gt->reset.flags);
> > +     }
> > +
> > +     return success;
> > +}
> > +
> > +static void __reset_context(struct i915_gem_context *ctx,
> > +                         struct intel_engine_cs *engine)
> > +{
> > +     intel_gt_handle_error(engine->gt, engine->mask, 0,
> > +                           "context closure in %s", ctx->name);
> > +}
> > +
> > +static bool __cancel_engine(struct intel_engine_cs *engine)
> > +{
> > +     /*
> > +      * Send a "high priority pulse" down the engine to cause the
> > +      * current request to be momentarily preempted. (If it fails to
> > +      * be preempted, it will be reset). As we have marked our context
> > +      * as banned, any incomplete request, including any running, will
> > +      * be skipped following the preemption.
> > +      */
> > +     if (CONFIG_DRM_I915_PREEMPT_TIMEOUT && !intel_engine_pulse(engine))
> > +             return true;
> 
> Maybe I lost the train of thought here.. But why not even try with the 
> pulse even if forced preemption is not compiled in? There is a chance it 
> may preempt normally, no?

If there is no reset-on-preemption failure and no hangchecking, there is
no reset and we are left with the denial-of-service that we are seeking
to close.
 
> Hm, or from the other angle, why bother with preemption and not just 
> reset? What is the value in letting the closed context complete if at 
> the same time, if it is preemptable, we will cancel all outstanding work 
> anyway?

The reset is the elephant gun; it is likely to cause collateral damage.
So we try with a bit of finesse first.
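
A rough sketch of that order of escalation (illustrative only; kill_engine()
is a made-up name here, but __cancel_engine() and __reset_context() are the
helpers quoted above, and the context has already been marked banned by this
point):

        static void kill_engine(struct i915_gem_context *ctx,
                                struct intel_engine_cs *engine)
        {
                /*
                 * Finesse: a high priority pulse preempts the running
                 * request; as the context is banned, its incomplete
                 * requests are then skipped on resubmission.
                 */
                if (__cancel_engine(engine))
                        return;

                /*
                 * Elephant gun: full engine reset, which may take
                 * innocent bystanders with it.
                 */
                __reset_context(ctx, engine);
        }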
-Chris

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 12:06     ` Chris Wilson
@ 2019-10-14 12:25       ` Tvrtko Ursulin
  2019-10-14 12:34         ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 12:25 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 13:06, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
>>
>> On 14/10/2019 10:07, Chris Wilson wrote:
>>> On schedule-out (CS completion) of a banned context, scrub the context
>>> image so that we do not replay the active payload. The intent is that we
>>> skip banned payloads on request submission so that the timeline
>>> advancement continues on in the background. However, if we are returning
>>> to a preempted request, i915_request_skip() is ineffective and instead we
>>> need to patch up the context image so that it continues from the start
>>> of the next request.
>>>
>>> v2: Fixup cancellation so that we only scrub the payload of the active
>>> request and do not short-circuit the breadcrumbs (which might cause
>>> other contexts to execute out of order).
>>>
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> ---
>>>    drivers/gpu/drm/i915/gt/intel_lrc.c    | 129 ++++++++----
>>>    drivers/gpu/drm/i915/gt/selftest_lrc.c | 273 +++++++++++++++++++++++++
>>>    2 files changed, 361 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> index e16ede75412b..b76b35194114 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> @@ -234,6 +234,9 @@ static void execlists_init_reg_state(u32 *reg_state,
>>>                                     const struct intel_engine_cs *engine,
>>>                                     const struct intel_ring *ring,
>>>                                     bool close);
>>> +static void
>>> +__execlists_update_reg_state(const struct intel_context *ce,
>>> +                          const struct intel_engine_cs *engine);
>>>    
>>>    static void cancel_timer(struct timer_list *t)
>>>    {
>>> @@ -270,6 +273,31 @@ static void mark_eio(struct i915_request *rq)
>>>        i915_request_mark_complete(rq);
>>>    }
>>>    
>>> +static struct i915_request *active_request(struct i915_request *rq)
>>> +{
>>> +     const struct intel_context * const ce = rq->hw_context;
>>> +     struct i915_request *active = NULL;
>>> +     struct list_head *list;
>>> +
>>> +     if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
>>> +             return rq;
>>> +
>>> +     rcu_read_lock();
>>> +     list = &rcu_dereference(rq->timeline)->requests;
>>> +     list_for_each_entry_from_reverse(rq, list, link) {
>>> +             if (i915_request_completed(rq))
>>> +                     break;
>>> +
>>> +             if (rq->hw_context != ce)
>>> +                     break;
>>> +
>>> +             active = rq;
>>> +     }
>>> +     rcu_read_unlock();
>>> +
>>> +     return active;
>>> +}
>>> +
>>>    static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
>>>    {
>>>        return (i915_ggtt_offset(engine->status_page.vma) +
>>> @@ -991,6 +1019,56 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
>>>                tasklet_schedule(&ve->base.execlists.tasklet);
>>>    }
>>>    
>>> +static void restore_default_state(struct intel_context *ce,
>>> +                               struct intel_engine_cs *engine)
>>> +{
>>> +     u32 *regs = ce->lrc_reg_state;
>>> +
>>> +     if (engine->pinned_default_state)
>>> +             memcpy(regs, /* skip restoring the vanilla PPHWSP */
>>> +                    engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
>>> +                    engine->context_size - PAGE_SIZE);
>>> +
>>> +     execlists_init_reg_state(regs, ce, engine, ce->ring, false);
>>> +}
>>> +
>>> +static void cancel_active(struct i915_request *rq,
>>> +                       struct intel_engine_cs *engine)
>>> +{
>>> +     struct intel_context * const ce = rq->hw_context;
>>> +
>>> +     /*
>>> +      * The executing context has been cancelled. Fixup the context so that
>>> +      * it will be marked as incomplete [-EIO] upon resubmission and not

(read below first)

... and not misleadingly say "Fixup the context so that it will be 
marked as incomplete" because there is nothing in this function which 
does that. It mostly happens by virtue of the context already being 
marked as banned somewhere else. This comment should just explain the 
decision to rewind the ring->head for more determinism. It can still 
mention canceling of user payload and -EIO. It just needs to be clear 
about the single extra thing achieved here by the head rewind and context edit.

>>> +      * execute any user payloads. We preserve the breadcrumbs and
>>> +      * semaphores of the incomplete requests so that inter-timeline
>>> +      * dependencies (i.e other timelines) remain correctly ordered.
>>> +      *
>>> +      * See __i915_request_submit() for applying -EIO and removing the
>>> +      * payload on resubmission.
>>> +      */
>>> +     GEM_TRACE("%s(%s): { rq=%llx:%lld }\n",
>>> +               __func__, engine->name, rq->fence.context, rq->fence.seqno);
>>> +     __context_pin_acquire(ce);
>>> +
>>> +     /* On resubmission of the active request, payload will be scrubbed */
>>> +     rq = active_request(rq);
>>> +     if (rq)
>>> +             ce->ring->head = intel_ring_wrap(ce->ring, rq->head);
>>
>> Without this change, where would the head be pointing after
>> schedule_out? Somewhere in the middle of the active request? And with
>> this change it is rewound to the start of it?
> 
> RING_HEAD could be anywhere between rq->head and rq->tail. We could just
> leave it be, but it would be better if we reset it if it was past
> rq->postfix (as we rewrite those instructions). It's just simpler if we
> reset it back to the start, and scrub the request.

Okay, now I follow. I suggest going back up ^^^ and making the comment 
clear...

Regards,

Tvrtko

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 12:25       ` Tvrtko Ursulin
@ 2019-10-14 12:34         ` Chris Wilson
  2019-10-14 13:13           ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 12:34 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 13:25:58)
> 
> On 14/10/2019 13:06, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
> >>
> >> On 14/10/2019 10:07, Chris Wilson wrote:
> >>> +static void cancel_active(struct i915_request *rq,
> >>> +                       struct intel_engine_cs *engine)
> >>> +{
> >>> +     struct intel_context * const ce = rq->hw_context;
> >>> +
> >>> +     /*
> >>> +      * The executing context has been cancelled. Fixup the context so that
> >>> +      * it will be marked as incomplete [-EIO] upon resubmission and not
> 
> (read below first)
> 
> ... and not misleadingly say "Fixup the context so that it will be 
> marked as incomplete" because there is nothing in this function which 
> does that. It mostly happens by virtue of the context already being 
> marked as banned somewhere else. This comment should just explain the 
> decision to rewind the ring->head for more determinism. It can still 
> mention canceling of user payload and -EIO. It just needs to be clear 
> about the single extra thing achieved here by the head rewind and context edit.

I thought I was clear: "upon resubmission". So use a more active voice to
reduce ambiguity, gotcha.
-Chris

* Re: [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close
  2019-10-14 12:21     ` Chris Wilson
@ 2019-10-14 13:10       ` Tvrtko Ursulin
  2019-10-14 13:34         ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 13:10 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 13:21, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 13:11:46)
>>
>> On 14/10/2019 10:07, Chris Wilson wrote:
>>> Normally, we rely on our hangcheck to prevent persistent batches from
>>> hogging the GPU. However, if the user disables hangcheck, this mechanism
>>> breaks down. Despite our insistence that this is unsafe, the users are
>>> equally insistent that they want to use endless batches and will disable
>>> the hangcheck mechanism. We are looking at perhaps replacing hangcheck
>>> with a softer mechanism, that sends a pulse down the engine to check if
>>> it is well. We can use the same preemptive pulse to flush an active
>>> persistent context off the GPU upon context close, preventing resources
>>> being lost and unkillable requests remaining on the GPU after process
>>> termination. To avoid changing the ABI and accidentally breaking
>>> existing userspace, we make the persistence of a context explicit and
>>> enable it by default (matching current ABI). Userspace can opt out of
>>> persistent mode (forcing requests to be cancelled when the context is
>>> closed by process termination or explicitly) by a context parameter. To
>>> facilitate existing use-cases of disabling hangcheck, if the modparam is
>>> disabled (i915.enable_hangcheck=0), we disable persistence mode by
>>> default.  (Note, one of the outcomes for supporting endless mode will be
>>> the removal of hangchecking, at which point opting into persistent mode
>>> will be mandatory, or maybe the default perhaps controlled by cgroups.)
>>>
>>> v2: Check for hangchecking at context termination, so that we are not
>>> left with undying contexts from a crafty user.
>>> v3: Force context termination even if forced-preemption is disabled.
>>>
>>> Testcase: igt/gem_ctx_persistence
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>>> Cc: Michał Winiarski <michal.winiarski@intel.com>
>>> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
>>> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
>>> ---
>>>    drivers/gpu/drm/i915/gem/i915_gem_context.c   | 182 ++++++++++++++++++
>>>    drivers/gpu/drm/i915/gem/i915_gem_context.h   |  15 ++
>>>    .../gpu/drm/i915/gem/i915_gem_context_types.h |   1 +
>>>    .../gpu/drm/i915/gem/selftests/mock_context.c |   2 +
>>>    include/uapi/drm/i915_drm.h                   |  15 ++
>>>    5 files changed, 215 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>> index 5d8221c7ba83..70b72456e2c4 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>> @@ -70,6 +70,7 @@
>>>    #include <drm/i915_drm.h>
>>>    
>>>    #include "gt/intel_lrc_reg.h"
>>> +#include "gt/intel_engine_heartbeat.h"
>>>    #include "gt/intel_engine_user.h"
>>>    
>>>    #include "i915_gem_context.h"
>>> @@ -269,6 +270,128 @@ void i915_gem_context_release(struct kref *ref)
>>>                schedule_work(&gc->free_work);
>>>    }
>>>    
>>> +static inline struct i915_gem_engines *
>>> +__context_engines_static(const struct i915_gem_context *ctx)
>>> +{
>>> +     return rcu_dereference_protected(ctx->engines, true);
>>> +}
>>> +
>>> +static bool __reset_engine(struct intel_engine_cs *engine)
>>> +{
>>> +     struct intel_gt *gt = engine->gt;
>>> +     bool success = false;
>>> +
>>> +     if (!intel_has_reset_engine(gt))
>>> +             return false;
>>> +
>>> +     if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
>>> +                           &gt->reset.flags)) {
>>> +             success = intel_engine_reset(engine, NULL) == 0;
>>> +             clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
>>> +                                   &gt->reset.flags);
>>> +     }
>>> +
>>> +     return success;
>>> +}
>>> +
>>> +static void __reset_context(struct i915_gem_context *ctx,
>>> +                         struct intel_engine_cs *engine)
>>> +{
>>> +     intel_gt_handle_error(engine->gt, engine->mask, 0,
>>> +                           "context closure in %s", ctx->name);
>>> +}
>>> +
>>> +static bool __cancel_engine(struct intel_engine_cs *engine)
>>> +{
>>> +     /*
>>> +      * Send a "high priority pulse" down the engine to cause the
>>> +      * current request to be momentarily preempted. (If it fails to
>>> +      * be preempted, it will be reset). As we have marked our context
>>> +      * as banned, any incomplete request, including any running, will
>>> +      * be skipped following the preemption.
>>> +      */
>>> +     if (CONFIG_DRM_I915_PREEMPT_TIMEOUT && !intel_engine_pulse(engine))
>>> +             return true;
>>
>> Maybe I lost the train of thought here.. But why not even try with the
>> pulse even if forced preemption is not compiled in? There is a chance it
>> may preempt normally, no?
> 
> If there is no reset-on-preemption failure and no hangchecking, there is
> no reset and we are left with the denial-of-service that we are seeking
> to close.

Because there is no mechanism to send a pulse, see if it managed to 
preempt, and if it did not, come back later and reset?

>> Hm, or from the other angle, why bother with preemption and not just
>> reset? What is the value in letting the closed context complete if at
>> the same time, if it is preemptable, we will cancel all outstanding work
>> anyway?
> 
> The reset is the elephant gun; it is likely to cause collateral damage.
> So we try with a bit of finesse first.

How so? Isn't our per-engine reset supposed to be fast and reliable? But 
yes, I have no complaints about trying preemption first, just trying to 
connect all the dots.

Regards,

Tvrtko

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 12:34         ` Chris Wilson
@ 2019-10-14 13:13           ` Chris Wilson
  2019-10-14 13:19             ` Tvrtko Ursulin
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 13:13 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Chris Wilson (2019-10-14 13:34:35)
> Quoting Tvrtko Ursulin (2019-10-14 13:25:58)
> > 
> > On 14/10/2019 13:06, Chris Wilson wrote:
> > > Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
> > >>
> > >> On 14/10/2019 10:07, Chris Wilson wrote:
> > >>> +static void cancel_active(struct i915_request *rq,
> > >>> +                       struct intel_engine_cs *engine)
> > >>> +{
> > >>> +     struct intel_context * const ce = rq->hw_context;
> > >>> +
> > >>> +     /*
> > >>> +      * The executing context has been cancelled. Fixup the context so that
> > >>> +      * it will be marked as incomplete [-EIO] upon resubmission and not
> > 
> > (read below first)
> > 
> > ... and not misleadingly say "Fixup the context so that it will be 
> > marked as incomplete" because there is nothing in this function which 
> > does that. It mostly happens by the virtual of context already being 
> > marked as banned somewhere else. This comment should just explain the 
> > decision to rewind the ring->head for more determinism. It can still 
> > mention canceling of user payload and -EIO. Just needs to be clear of 
> > the single extra thing achieved here by the head rewind and context edit.
> 
> I thought I was clear: "upon resubmission". So use a more active voice to
> reduce ambiguity, gotcha.

        /*
         * The executing context has been cancelled. We want to prevent
         * further execution along this context and propagate the error on
         * to anything depending on its results.
         *
         * In __i915_request_submit(), we apply the -EIO and remove the
         * requests payload for any banned requests. But first, we must
         * rewind the context back to the start of the incomplete request so
         * that we don't jump back into the middle of the batch.
         *
         * We preserve the breadcrumbs and semaphores of the incomplete
         * requests so that inter-timeline dependencies (i.e other timelines)
         * remain correctly ordered.
         */
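
For completeness, the submit-time behaviour that comment points at is
essentially the following (illustrative only, not the full
__i915_request_submit(); the exact banned-context check may differ in this
series):

        /* refuse to execute a banned context's payload on (re)submission */
        if (i915_gem_context_is_banned(rq->gem_context))
                i915_request_skip(rq, -EIO); /* drop the payload, complete with -EIO */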

-Chris

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 13:13           ` Chris Wilson
@ 2019-10-14 13:19             ` Tvrtko Ursulin
  2019-10-14 13:23               ` Chris Wilson
  0 siblings, 1 reply; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 13:19 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 14:13, Chris Wilson wrote:
> Quoting Chris Wilson (2019-10-14 13:34:35)
>> Quoting Tvrtko Ursulin (2019-10-14 13:25:58)
>>>
>>> On 14/10/2019 13:06, Chris Wilson wrote:
>>>> Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
>>>>>
>>>>> On 14/10/2019 10:07, Chris Wilson wrote:
>>>>>> +static void cancel_active(struct i915_request *rq,
>>>>>> +                       struct intel_engine_cs *engine)
>>>>>> +{
>>>>>> +     struct intel_context * const ce = rq->hw_context;
>>>>>> +
>>>>>> +     /*
>>>>>> +      * The executing context has been cancelled. Fixup the context so that
>>>>>> +      * it will be marked as incomplete [-EIO] upon resubmission and not
>>>
>>> (read below first)
>>>
>>> ... and not misleadingly say "Fixup the context so that it will be
>>> marked as incomplete" because there is nothing in this function which
>>> does that. It mostly happens by virtue of the context already being
>>> marked as banned somewhere else. This comment should just explain the
>>> decision to rewind the ring->head for more determinism. It can still
>>> mention canceling of user payload and -EIO. It just needs to be clear
>>> about the single extra thing achieved here by the head rewind and context edit.
>>
>> I thought I was clear: "upon resubmission". So use a more active voice to
>> reduce ambiguity, gotcha.
> 
>          /*
>           * The executing context has been cancelled. We want to prevent
>           * further execution along this context and propagate the error on
>           * to anything depending on its results.
>           *
>           * In __i915_request_submit(), we apply the -EIO and remove the
>           * requests payload for any banned requests. But first, we must
>           * rewind the context back to the start of the incomplete request so
>           * that we don't jump back into the middle of the batch.
>           *
>           * We preserve the breadcrumbs and semaphores of the incomplete
>           * requests so that inter-timeline dependencies (i.e other timelines)
>           * remain correctly ordered.
>           */
> 

Sounds good.

Btw.. does this work? :) If the context was preempted in the middle of a 
batch buffer there must be some other state saved (other than RING_HEAD) 
which on context restore enables it to continue from the right place 
*within* the batch. Is this code zapping that state as well so the GPU will 
fully forget it was inside the batch?

Regards,

Tvrtko

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 13:19             ` Tvrtko Ursulin
@ 2019-10-14 13:23               ` Chris Wilson
  2019-10-14 13:38                 ` Tvrtko Ursulin
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 13:23 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 14:19:19)
> 
> On 14/10/2019 14:13, Chris Wilson wrote:
> > Quoting Chris Wilson (2019-10-14 13:34:35)
> >> Quoting Tvrtko Ursulin (2019-10-14 13:25:58)
> >>>
> >>> On 14/10/2019 13:06, Chris Wilson wrote:
> >>>> Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
> >>>>>
> >>>>> On 14/10/2019 10:07, Chris Wilson wrote:
> >>>>>> +static void cancel_active(struct i915_request *rq,
> >>>>>> +                       struct intel_engine_cs *engine)
> >>>>>> +{
> >>>>>> +     struct intel_context * const ce = rq->hw_context;
> >>>>>> +
> >>>>>> +     /*
> >>>>>> +      * The executing context has been cancelled. Fixup the context so that
> >>>>>> +      * it will be marked as incomplete [-EIO] upon resubmission and not
> >>>
> >>> (read below first)
> >>>
> >>> ... and not misleadingly say "Fixup the context so that it will be
> >>> marked as incomplete" because there is nothing in this function which
> >>> does that. It mostly happens by virtue of the context already being
> >>> marked as banned somewhere else. This comment should just explain the
> >>> decision to rewind the ring->head for more determinism. It can still
> >>> mention canceling of user payload and -EIO. Just needs to be clear about
> >>> the single extra thing achieved here by the head rewind and context edit.
> >>
> >> I thought I was clear: "upon resubmission". So use a more active voice to
> >> reduce ambiguity, gotcha.
> > 
> >          /*
> >           * The executing context has been cancelled. We want to prevent
> >           * further execution along this context and propagate the error on
> >           * to anything depending on its results.
> >           *
> >           * In __i915_request_submit(), we apply the -EIO and remove the
> >           * requests' payload for any banned requests. But first, we must
> >           * rewind the context back to the start of the incomplete request so
> >           * that we don't jump back into the middle of the batch.
> >           *
> >           * We preserve the breadcrumbs and semaphores of the incomplete
> >           * requests so that inter-timeline dependencies (i.e. other timelines)
> >           * remain correctly ordered.
> >           */
> > 
> 
> Sounds good.
> 
> Btw.. does this work? :) If context was preempted in the middle of a 
> batch buffer there must be some other state saved (other than RING_HEAD) 
> which on context restore enables it to continue from the right place 
> *within* the batch. Is this code zapping that state as well so GPU will 
> fully forget it was inside the batch?

Yes. We are resetting the context image back to vanilla, and then
restoring the ring registers to restart from this request. The selftests
are using spinning batches to simulate the banned context.
-Chris
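
A minimal sketch of the scrub-and-rewind step described above; the helper
names here (intel_ring_wrap(), restore_default_state(),
__execlists_update_reg_state()) are assumptions inferred from the discussion
rather than the exact code in the patch:

static void cancel_active(struct i915_request *rq,
                          struct intel_engine_cs *engine)
{
        struct intel_context * const ce = rq->hw_context;

        /*
         * Rewind the ring so that, when this context is resubmitted, it
         * restarts at the beginning of the incomplete request instead of
         * jumping back into the middle of the batch.
         */
        ce->ring->head = intel_ring_wrap(ce->ring, rq->head);

        /* Scrub the saved context image back to its vanilla defaults ... */
        restore_default_state(ce, engine);

        /* ... and point the saved ring registers at the rewound request. */
        __execlists_update_reg_state(ce, engine);
}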
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close
  2019-10-14 13:10       ` Tvrtko Ursulin
@ 2019-10-14 13:34         ` Chris Wilson
  2019-10-14 16:06           ` Tvrtko Ursulin
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Wilson @ 2019-10-14 13:34 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2019-10-14 14:10:30)
> 
> On 14/10/2019 13:21, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-10-14 13:11:46)
> >>
> >> On 14/10/2019 10:07, Chris Wilson wrote:
> >>> Normally, we rely on our hangcheck to prevent persistent batches from
> >>> hogging the GPU. However, if the user disables hangcheck, this mechanism
> >>> breaks down. Despite our insistence that this is unsafe, the users are
> >>> equally insistent that they want to use endless batches and will disable
> >>> the hangcheck mechanism. We are looking at perhaps replacing hangcheck
> >>> with a softer mechanism, that sends a pulse down the engine to check if
> >>> it is well. We can use the same preemptive pulse to flush an active
> >>> persistent context off the GPU upon context close, preventing resources
> >>> being lost and unkillable requests remaining on the GPU after process
> >>> termination. To avoid changing the ABI and accidentally breaking
> >>> existing userspace, we make the persistence of a context explicit and
> >>> enable it by default (matching current ABI). Userspace can opt out of
> >>> persistent mode (forcing requests to be cancelled when the context is
> >>> closed by process termination or explicitly) by a context parameter. To
> >>> facilitate existing use-cases of disabling hangcheck, if the modparam is
> >>> disabled (i915.enable_hangcheck=0), we disable persistence mode by
> >>> default.  (Note, one of the outcomes for supporting endless mode will be
> >>> the removal of hangchecking, at which point opting into persistent mode
> >>> will be mandatory, or maybe the default perhaps controlled by cgroups.)
> >>>
> >>> v2: Check for hangchecking at context termination, so that we are not
> >>> left with undying contexts from a crafty user.
> >>> v3: Force context termination even if forced-preemption is disabled.
> >>>
> >>> Testcase: igt/gem_ctx_persistence
> >>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >>> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> >>> Cc: Michał Winiarski <michal.winiarski@intel.com>
> >>> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> >>> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
> >>> ---
> >>>    drivers/gpu/drm/i915/gem/i915_gem_context.c   | 182 ++++++++++++++++++
> >>>    drivers/gpu/drm/i915/gem/i915_gem_context.h   |  15 ++
> >>>    .../gpu/drm/i915/gem/i915_gem_context_types.h |   1 +
> >>>    .../gpu/drm/i915/gem/selftests/mock_context.c |   2 +
> >>>    include/uapi/drm/i915_drm.h                   |  15 ++
> >>>    5 files changed, 215 insertions(+)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>> index 5d8221c7ba83..70b72456e2c4 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>> @@ -70,6 +70,7 @@
> >>>    #include <drm/i915_drm.h>
> >>>    
> >>>    #include "gt/intel_lrc_reg.h"
> >>> +#include "gt/intel_engine_heartbeat.h"
> >>>    #include "gt/intel_engine_user.h"
> >>>    
> >>>    #include "i915_gem_context.h"
> >>> @@ -269,6 +270,128 @@ void i915_gem_context_release(struct kref *ref)
> >>>                schedule_work(&gc->free_work);
> >>>    }
> >>>    
> >>> +static inline struct i915_gem_engines *
> >>> +__context_engines_static(const struct i915_gem_context *ctx)
> >>> +{
> >>> +     return rcu_dereference_protected(ctx->engines, true);
> >>> +}
> >>> +
> >>> +static bool __reset_engine(struct intel_engine_cs *engine)
> >>> +{
> >>> +     struct intel_gt *gt = engine->gt;
> >>> +     bool success = false;
> >>> +
> >>> +     if (!intel_has_reset_engine(gt))
> >>> +             return false;
> >>> +
> >>> +     if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> >>> +                           &gt->reset.flags)) {
> >>> +             success = intel_engine_reset(engine, NULL) == 0;
> >>> +             clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
> >>> +                                   &gt->reset.flags);
> >>> +     }
> >>> +
> >>> +     return success;
> >>> +}
> >>> +
> >>> +static void __reset_context(struct i915_gem_context *ctx,
> >>> +                         struct intel_engine_cs *engine)
> >>> +{
> >>> +     intel_gt_handle_error(engine->gt, engine->mask, 0,
> >>> +                           "context closure in %s", ctx->name);
> >>> +}
> >>> +
> >>> +static bool __cancel_engine(struct intel_engine_cs *engine)
> >>> +{
> >>> +     /*
> >>> +      * Send a "high priority pulse" down the engine to cause the
> >>> +      * current request to be momentarily preempted. (If it fails to
> >>> +      * be preempted, it will be reset). As we have marked our context
> >>> +      * as banned, any incomplete request, including any running, will
> >>> +      * be skipped following the preemption.
> >>> +      */
> >>> +     if (CONFIG_DRM_I915_PREEMPT_TIMEOUT && !intel_engine_pulse(engine))
> >>> +             return true;
> >>
> >> Maybe I lost the train of thought here.. But why not even try with the
> >> pulse even if forced preemption is not compiled in? There is a chance it
> >> may preempt normally, no?
> > 
> > If there is no reset-on-preemption failure and no hangchecking, there is
> > no reset and we are left with the denial-of-service that we are seeking
> > to close.
> 
> Because there is no mechanism to send a pulse, see if it managed to 
> preempt, but if it did not, come back later and reset?

What are you going to preempt with? The mechanism you describe is what
the pulse + forced-preempt is meant to be handling. (I was going to use
a 2 birds with one stone allegory for the various features all pulling
together, but it's more like a flock with a grenade.)

> >> Hm, or from the other angle, why bother with preemption and not just
> >> reset? What is the value in letting the closed context complete if at
> >> the same time, if it is preemptable, we will cancel all outstanding work
> >> anyway?
> > 
> > The reset is the elephant gun; it is likely to cause collateral damage.
> > So we try with a bit of finesse first.
> 
> How so? Isn't our per-engine reset supposed to be fast and reliable? But 
> yes, I have no complaints of trying preemption first, just trying to 
> connect all the dots.

Fast and reliable! Even if it were, we still have the challenge of ensuring we
reset the right context. But in terms of being fast and reliable, we
have actually talked about using this type of preempt mechanism to avoid
taking a risk with the reset! :)
-Chris
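
A rough sketch of the close-time flow under discussion: ban the context, try
the high-priority pulse first, and fall back to an engine reset only if
preemption cannot help. active_engine() and intel_context_set_banned() are
assumed helper names, not verbatim from the patch:

static void kill_context(struct i915_gem_context *ctx)
{
        struct i915_gem_engines_iter it;
        struct intel_context *ce;

        for_each_gem_engine(ce, __context_engines_static(ctx), it) {
                struct intel_engine_cs *engine;

                /* Ban the context so any incomplete request is skipped */
                intel_context_set_banned(ce);

                /* Find the engine, if any, currently executing this context */
                engine = active_engine(ce);

                /* Prefer the preemption pulse; reset only as a last resort */
                if (engine && !__cancel_engine(engine))
                        __reset_context(ctx, engine);
        }
}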
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
  2019-10-14 13:23               ` Chris Wilson
@ 2019-10-14 13:38                 ` Tvrtko Ursulin
  0 siblings, 0 replies; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 13:38 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 14:23, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 14:19:19)
>>
>> On 14/10/2019 14:13, Chris Wilson wrote:
>>> Quoting Chris Wilson (2019-10-14 13:34:35)
>>>> Quoting Tvrtko Ursulin (2019-10-14 13:25:58)
>>>>>
>>>>> On 14/10/2019 13:06, Chris Wilson wrote:
>>>>>> Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
>>>>>>>
>>>>>>> On 14/10/2019 10:07, Chris Wilson wrote:
>>>>>>>> +static void cancel_active(struct i915_request *rq,
>>>>>>>> +                       struct intel_engine_cs *engine)
>>>>>>>> +{
>>>>>>>> +     struct intel_context * const ce = rq->hw_context;
>>>>>>>> +
>>>>>>>> +     /*
>>>>>>>> +      * The executing context has been cancelled. Fixup the context so that
>>>>>>>> +      * it will be marked as incomplete [-EIO] upon resubmission and not
>>>>>
>>>>> (read below first)
>>>>>
>>>>> ... and not misleadingly say "Fixup the context so that it will be
>>>>> marked as incomplete" because there is nothing in this function which
>>>>> does that. It mostly happens by virtue of the context already being
>>>>> marked as banned somewhere else. This comment should just explain the
>>>>> decision to rewind the ring->head for more determinism. It can still
>>>>> mention canceling of user payload and -EIO. Just needs to be clear about
>>>>> the single extra thing achieved here by the head rewind and context edit.
>>>>
>>>> I thought I was clear: "upon resubmission". So use a more active voice to
>>>> reduce ambiguity, gotcha.
>>>
>>>           /*
>>>            * The executing context has been cancelled. We want to prevent
>>>            * further execution along this context and propagate the error on
>>>            * to anything depending on its results.
>>>            *
>>>            * In __i915_request_submit(), we apply the -EIO and remove the
>>>            * requests' payload for any banned requests. But first, we must
>>>            * rewind the context back to the start of the incomplete request so
>>>            * that we don't jump back into the middle of the batch.
>>>            *
>>>            * We preserve the breadcrumbs and semaphores of the incomplete
>>>            * requests so that inter-timeline dependencies (i.e. other timelines)
>>>            * remain correctly ordered.
>>>            */
>>>
>>
>> Sounds good.
>>
>> Btw.. does this work? :) If context was preempted in the middle of a
>> batch buffer there must be some other state saved (other than RING_HEAD)
>> which on context restore enables it to continue from the right place
>> *within* the batch. Is this code zapping that state as well so GPU will
>> fully forget it was inside the batch?
> 
> Yes. We are resetting the context image back to vanilla, and then
> restoring the ring registers to restart from this request. The selftests
> are using spinning batches to simulate the banned context.

Okay, in that case:

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close
  2019-10-14 13:34         ` Chris Wilson
@ 2019-10-14 16:06           ` Tvrtko Ursulin
  0 siblings, 0 replies; 36+ messages in thread
From: Tvrtko Ursulin @ 2019-10-14 16:06 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 14/10/2019 14:34, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 14:10:30)
>>
>> On 14/10/2019 13:21, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2019-10-14 13:11:46)
>>>>
>>>> On 14/10/2019 10:07, Chris Wilson wrote:
>>>>> Normally, we rely on our hangcheck to prevent persistent batches from
>>>>> hogging the GPU. However, if the user disables hangcheck, this mechanism
>>>>> breaks down. Despite our insistence that this is unsafe, the users are
>>>>> equally insistent that they want to use endless batches and will disable
>>>>> the hangcheck mechanism. We are looking at perhaps replacing hangcheck
>>>>> with a softer mechanism, that sends a pulse down the engine to check if
>>>>> it is well. We can use the same preemptive pulse to flush an active
>>>>> persistent context off the GPU upon context close, preventing resources
>>>>> being lost and unkillable requests remaining on the GPU after process
>>>>> termination. To avoid changing the ABI and accidentally breaking
>>>>> existing userspace, we make the persistence of a context explicit and
>>>>> enable it by default (matching current ABI). Userspace can opt out of
>>>>> persistent mode (forcing requests to be cancelled when the context is
>>>>> closed by process termination or explicitly) by a context parameter. To
>>>>> facilitate existing use-cases of disabling hangcheck, if the modparam is
>>>>> disabled (i915.enable_hangcheck=0), we disable persistence mode by
>>>>> default.  (Note, one of the outcomes for supporting endless mode will be
>>>>> the removal of hangchecking, at which point opting into persistent mode
>>>>> will be mandatory, or maybe the default perhaps controlled by cgroups.)
>>>>>
>>>>> v2: Check for hangchecking at context termination, so that we are not
>>>>> left with undying contexts from a crafty user.
>>>>> v3: Force context termination even if forced-preemption is disabled.
>>>>>
>>>>> Testcase: igt/gem_ctx_persistence
>>>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>>>> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>>>>> Cc: Michał Winiarski <michal.winiarski@intel.com>
>>>>> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
>>>>> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
>>>>> ---
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_context.c   | 182 ++++++++++++++++++
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_context.h   |  15 ++
>>>>>     .../gpu/drm/i915/gem/i915_gem_context_types.h |   1 +
>>>>>     .../gpu/drm/i915/gem/selftests/mock_context.c |   2 +
>>>>>     include/uapi/drm/i915_drm.h                   |  15 ++
>>>>>     5 files changed, 215 insertions(+)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>> index 5d8221c7ba83..70b72456e2c4 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>> @@ -70,6 +70,7 @@
>>>>>     #include <drm/i915_drm.h>
>>>>>     
>>>>>     #include "gt/intel_lrc_reg.h"
>>>>> +#include "gt/intel_engine_heartbeat.h"
>>>>>     #include "gt/intel_engine_user.h"
>>>>>     
>>>>>     #include "i915_gem_context.h"
>>>>> @@ -269,6 +270,128 @@ void i915_gem_context_release(struct kref *ref)
>>>>>                 schedule_work(&gc->free_work);
>>>>>     }
>>>>>     
>>>>> +static inline struct i915_gem_engines *
>>>>> +__context_engines_static(const struct i915_gem_context *ctx)
>>>>> +{
>>>>> +     return rcu_dereference_protected(ctx->engines, true);
>>>>> +}
>>>>> +
>>>>> +static bool __reset_engine(struct intel_engine_cs *engine)
>>>>> +{
>>>>> +     struct intel_gt *gt = engine->gt;
>>>>> +     bool success = false;
>>>>> +
>>>>> +     if (!intel_has_reset_engine(gt))
>>>>> +             return false;
>>>>> +
>>>>> +     if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
>>>>> +                           &gt->reset.flags)) {
>>>>> +             success = intel_engine_reset(engine, NULL) == 0;
>>>>> +             clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
>>>>> +                                   &gt->reset.flags);
>>>>> +     }
>>>>> +
>>>>> +     return success;
>>>>> +}
>>>>> +
>>>>> +static void __reset_context(struct i915_gem_context *ctx,
>>>>> +                         struct intel_engine_cs *engine)
>>>>> +{
>>>>> +     intel_gt_handle_error(engine->gt, engine->mask, 0,
>>>>> +                           "context closure in %s", ctx->name);
>>>>> +}
>>>>> +
>>>>> +static bool __cancel_engine(struct intel_engine_cs *engine)
>>>>> +{
>>>>> +     /*
>>>>> +      * Send a "high priority pulse" down the engine to cause the
>>>>> +      * current request to be momentarily preempted. (If it fails to
>>>>> +      * be preempted, it will be reset). As we have marked our context
>>>>> +      * as banned, any incomplete request, including any running, will
>>>>> +      * be skipped following the preemption.
>>>>> +      */
>>>>> +     if (CONFIG_DRM_I915_PREEMPT_TIMEOUT && !intel_engine_pulse(engine))
>>>>> +             return true;
>>>>
>>>> Maybe I lost the train of thought here.. But why not even try with the
>>>> pulse even if forced preemption is not compiled in? There is a chance it
>>>> may preempt normally, no?
>>>
>>> If there is no reset-on-preemption failure and no hangchecking, there is
>>> no reset and we are left with the denial-of-service that we are seeking
>>> to close.
>>
>> Because there is no mechanism to send a pulse, see if it managed to
>> preempt, but if it did not, come back later and reset?
> 
> What are you going to preempt with? The mechanism you describe is what
> the pulse + forced-preempt is meant to be handling. (I was going to use
> a 2 birds with one stone allegory for the various features all pulling
> together, but it's more like a flock with a grenade.)

I meant try to preempt with an idle pulse first. If it doesn't let go, then 
go for a reset. It has a chance of working even without forced 
preemption, no?

>>>> Hm, or from the other angle, why bother with preemption and not just
>>>> reset? What is the value in letting the closed context complete if at
>>>> the same time, if it is preemptable, we will cancel all outstanding work
>>>> anyway?
>>>
>>> The reset is the elephant gun; it is likely to cause collateral damage.
>>> So we try with a bit of finesse first.
>>
>> How so? Isn't our per-engine reset supposed to be fast and reliable? But
>> yes, I have no complaints of trying preemption first, just trying to
>> connect all the dots.
> 
> Fast and reliable! Even if it were, we still have the challenge of ensuring we
> reset the right context. But in terms of being fast and reliable, we
> have actually talked about using this type of preempt mechanism to avoid
> taking a risk with the reset! :)

:) Ok, makes sense. I did not look into how reset victims are picked in 
detail. So it was just a question of whether we want to try to send an idle pulse first 
in any case.

Regards,

Tvrtko
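
For completeness, opting out of persistence from userspace, as described in
the commit message quoted above, would look roughly like this sketch; the
exact parameter name and numbering are whatever the final uapi header defines:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/i915_drm.h>

/* Ask the kernel to cancel this context's outstanding work when the
 * context is closed, instead of letting it persist (proposed uapi). */
static int set_non_persistent(int fd, uint32_t ctx_id)
{
        struct drm_i915_gem_context_param p = {
                .ctx_id = ctx_id,
                .param = I915_CONTEXT_PARAM_PERSISTENCE,
                .value = 0,
        };

        return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
}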
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

* ✗ Fi.CI.BUILD: failure for series starting with [01/15] drm/i915/display: Squelch kerneldoc warnings
  2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
                   ` (13 preceding siblings ...)
  2019-10-14  9:07 ` [PATCH 15/15] drm/i915/execlist: Trim immediate timeslice expiry Chris Wilson
@ 2019-10-14 16:15 ` Patchwork
  14 siblings, 0 replies; 36+ messages in thread
From: Patchwork @ 2019-10-14 16:15 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/15] drm/i915/display: Squelch kerneldoc warnings
URL   : https://patchwork.freedesktop.org/series/67970/
State : failure

== Summary ==

Applying: drm/i915/display: Squelch kerneldoc warnings
Using index info to reconstruct a base tree...
M	drivers/gpu/drm/i915/display/intel_display.c
Falling back to patching base and 3-way merge...
No changes -- Patch already applied.
Applying: drm/i915/gem: Distinguish each object type
Applying: drm/i915/execlists: Assert tasklet is locked for process_csb()
Applying: drm/i915/execlists: Clear semaphore immediately upon ELSP promotion
Applying: drm/i915/execlists: Tweak virtual unsubmission
Using index info to reconstruct a base tree...
M	drivers/gpu/drm/i915/gt/intel_lrc.c
M	drivers/gpu/drm/i915/i915_request.c
Falling back to patching base and 3-way merge...
No changes -- Patch already applied.
Applying: drm/i915/selftests: Check known register values within the context
Using index info to reconstruct a base tree...
M	drivers/gpu/drm/i915/gt/selftest_lrc.c
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/gt/selftest_lrc.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/gt/selftest_lrc.c
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch' to see the failed patch
Patch failed at 0006 drm/i915/selftests: Check known register values within the context
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2019-10-14 16:15 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-14  9:07 [PATCH 01/15] drm/i915/display: Squelch kerneldoc warnings Chris Wilson
2019-10-14  9:07 ` [PATCH 02/15] drm/i915/gem: Distinguish each object type Chris Wilson
2019-10-14  9:07 ` [PATCH 03/15] drm/i915/execlists: Assert tasklet is locked for process_csb() Chris Wilson
2019-10-14  9:07 ` [PATCH 04/15] drm/i915/execlists: Clear semaphore immediately upon ELSP promotion Chris Wilson
2019-10-14  9:07 ` [PATCH 05/15] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
2019-10-14  9:07 ` [PATCH 06/15] drm/i915/selftests: Check known register values within the context Chris Wilson
2019-10-14  9:59   ` Tvrtko Ursulin
2019-10-14 10:06     ` Chris Wilson
2019-10-14  9:07 ` [PATCH 07/15] drm/i915/selftests: Check that GPR are cleared for new contexts Chris Wilson
2019-10-14 10:08   ` Tvrtko Ursulin
2019-10-14  9:07 ` [PATCH 08/15] drm/i915: Expose engine properties via sysfs Chris Wilson
2019-10-14 10:17   ` Tvrtko Ursulin
2019-10-14 10:27     ` Chris Wilson
2019-10-14  9:07 ` [PATCH 09/15] drm/i915/execlists: Force preemption Chris Wilson
2019-10-14  9:07 ` [PATCH 10/15] drm/i915/gt: Introduce barrier pulses along engines Chris Wilson
2019-10-14 11:03   ` Tvrtko Ursulin
2019-10-14  9:07 ` [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out Chris Wilson
2019-10-14 12:00   ` Tvrtko Ursulin
2019-10-14 12:06     ` Chris Wilson
2019-10-14 12:25       ` Tvrtko Ursulin
2019-10-14 12:34         ` Chris Wilson
2019-10-14 13:13           ` Chris Wilson
2019-10-14 13:19             ` Tvrtko Ursulin
2019-10-14 13:23               ` Chris Wilson
2019-10-14 13:38                 ` Tvrtko Ursulin
2019-10-14  9:07 ` [PATCH 12/15] drm/i915/gem: Cancel non-persistent contexts on close Chris Wilson
2019-10-14 12:11   ` Tvrtko Ursulin
2019-10-14 12:21     ` Chris Wilson
2019-10-14 13:10       ` Tvrtko Ursulin
2019-10-14 13:34         ` Chris Wilson
2019-10-14 16:06           ` Tvrtko Ursulin
2019-10-14  9:07 ` [PATCH 13/15] drm/i915: Replace hangcheck by heartbeats Chris Wilson
2019-10-14 12:13   ` Tvrtko Ursulin
2019-10-14  9:07 ` [PATCH 14/15] drm/i915: Flush idle barriers when waiting Chris Wilson
2019-10-14  9:07 ` [PATCH 15/15] drm/i915/execlist: Trim immediate timeslice expiry Chris Wilson
2019-10-14 16:15 ` ✗ Fi.CI.BUILD: failure for series starting with [01/15] drm/i915/display: Squelch kerneldoc warnings Patchwork
