* [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock!
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC
  To: intel-gfx

New rebased version. It now includes the conversion to take a ww
argument in the set_pages() callback, which completes the rework; a
short sketch of the resulting locking pattern follows below.
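
A rough sketch of the dma_resv/ww transaction pattern the series
converges on, hand-written here for illustration (note that
i915_gem_object_pin_pages() only gains the ww argument in the final
patches of this series):

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true); /* true = interruptible */
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = i915_gem_object_pin_pages(obj);
	if (err == -EDEADLK) {
		/* drop all locks held by this ctx, then sleep on the
		 * contended object before retrying the whole sequence
		 */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);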

Maarten Lankhorst (68):
  drm/i915: Do not share hwsp across contexts any more, v7.
  drm/i915: Pin timeline map after first timeline pin, v3.
  drm/i915: Move cmd parser pinning to execbuffer
  drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
  drm/i915: Ensure we hold the object mutex in pin correctly.
  drm/i915: Add gem object locking to madvise.
  drm/i915: Move HAS_STRUCT_PAGE to obj->flags
  drm/i915: Rework struct phys attachment handling
  drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
  drm/i915: make lockdep slightly happier about execbuf.
  drm/i915: Disable userptr pread/pwrite support.
  drm/i915: No longer allow exporting userptr through dma-buf
  drm/i915: Reject more ioctls for userptr, v2.
  drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
  drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
  drm/i915: Fix userptr so we do not have to worry about obj->mm.lock,
    v7.
  drm/i915: Flatten obj->mm.lock
  drm/i915: Populate logical context during first pin.
  drm/i915: Make ring submission compatible with obj->mm.lock removal,
    v2.
  drm/i915: Handle ww locking in init_status_page
  drm/i915: Rework clflush to work correctly without obj->mm.lock.
  drm/i915: Pass ww ctx to intel_pin_to_display_plane
  drm/i915: Add object locking to vm_fault_cpu
  drm/i915: Move pinning to inside engine_wa_list_verify()
  drm/i915: Take reservation lock around i915_vma_pin.
  drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3.
  drm/i915: Make __engine_unpark() compatible with ww locking.
  drm/i915: Take obj lock around set_domain ioctl
  drm/i915: Defer pin calls in buffer pool until first use by caller.
  drm/i915: Fix pread/pwrite to work with new locking rules.
  drm/i915: Fix workarounds selftest, part 1
  drm/i915: Add igt_spinner_pin() to allow for ww locking around
    spinner.
  drm/i915: Add ww locking around vm_access()
  drm/i915: Increase ww locking for perf.
  drm/i915: Lock ww in ucode objects correctly
  drm/i915: Add ww locking to dma-buf ops.
  drm/i915: Add missing ww lock in intel_dsb_prepare.
  drm/i915: Fix ww locking in shmem_create_from_object
  drm/i915: Use a single page table lock for each gtt.
  drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock
    removal.
  drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
  drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
  drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare object blit tests for obj->mm.lock
    removal.
  drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
  drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
  drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
  drm/i915/selftests: Prepare execlists and lrc selftests for
    obj->mm.lock removal
  drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
  drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
  drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
  drm/i915/selftests: Prepare i915_request tests for obj->mm.lock
    removal
  drm/i915/selftests: Prepare memory region tests for obj->mm.lock
    removal
  drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
  drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
  drm/i915: Finally remove obj->mm.lock.
  drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
  drm/i915: Move gt_revoke() slightly
  drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning.
  drm/i915: Fix pin_map in scheduler selftests
  drm/i915: Add ww parameter to get_pages() callback
  drm/i915: Add ww context to prepare_(read/write)
  drm/i915: Pass ww ctx to pin_map
  drm/i915: Pass ww ctx to i915_gem_object_pin_pages

Thomas Hellström (1):
  drm/i915: Prepare for obj->mm.lock removal, v2.

 drivers/gpu/drm/i915/Makefile                 |   1 -
 drivers/gpu/drm/i915/display/intel_display.c  |  71 +-
 drivers/gpu/drm/i915/display/intel_display.h  |   2 +-
 drivers/gpu/drm/i915/display/intel_dsb.c      |   2 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c    |   2 +-
 drivers/gpu/drm/i915/display/intel_overlay.c  |  34 +-
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c   |  15 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  67 +-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  94 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 240 ++++-
 drivers/gpu/drm/i915/gem/i915_gem_fence.c     |  95 --
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   9 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c      |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  48 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   9 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 110 ++-
 .../gpu/drm/i915/gem/i915_gem_object_blt.c    |  10 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  26 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 134 ++-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 110 +--
 drivers/gpu/drm/i915/gem/i915_gem_pm.c        |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_region.c    |   7 +-
 drivers/gpu/drm/i915/gem/i915_gem_region.h    |   7 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  42 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  |  39 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.h  |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c    |  17 +-
 drivers/gpu/drm/i915/gem/i915_gem_tiling.c    |   2 -
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 903 +++++++-----------
 .../drm/i915/gem/selftests/huge_gem_object.c  |   7 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  49 +-
 .../i915/gem/selftests/i915_gem_client_blt.c  |   8 +-
 .../i915/gem/selftests/i915_gem_coherency.c   |  20 +-
 .../drm/i915/gem/selftests/i915_gem_context.c |  22 +-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |   4 +-
 .../i915/gem/selftests/i915_gem_execbuffer.c  |   2 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |  21 +-
 .../drm/i915/gem/selftests/i915_gem_object.c  |   2 +-
 .../i915/gem/selftests/i915_gem_object_blt.c  |   6 +-
 .../drm/i915/gem/selftests/i915_gem_phys.c    |  10 +-
 .../drm/i915/gem/selftests/igt_gem_utils.c    |   2 +-
 drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
 drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
 drivers/gpu/drm/i915/gt/gen7_renderclear.c    |   2 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  13 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  40 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |   6 +-
 .../drm/i915/gt/intel_execlists_submission.c  |  26 +-
 drivers/gpu/drm/i915/gt/intel_ggtt.c          |  10 +-
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.c    |  47 +-
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.h    |   5 +
 .../drm/i915/gt/intel_gt_buffer_pool_types.h  |   1 +
 drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
 drivers/gpu/drm/i915/gt/intel_gtt.c           |  54 +-
 drivers/gpu/drm/i915/gt/intel_gtt.h           |   8 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  53 +-
 drivers/gpu/drm/i915/gt/intel_ppgtt.c         |   3 +-
 drivers/gpu/drm/i915/gt/intel_renderstate.c   |   4 +-
 drivers/gpu/drm/i915/gt/intel_reset.c         |   5 +-
 drivers/gpu/drm/i915/gt/intel_ring.c          |   2 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c   | 184 ++--
 drivers/gpu/drm/i915/gt/intel_timeline.c      | 427 ++-------
 drivers/gpu/drm/i915/gt/intel_timeline.h      |   3 +
 .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |  12 +-
 drivers/gpu/drm/i915/gt/mock_engine.c         |  22 +-
 drivers/gpu/drm/i915/gt/selftest_context.c    |   4 +-
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   9 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  23 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   8 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  20 +-
 drivers/gpu/drm/i915/gt/selftest_mocs.c       |   5 +-
 .../drm/i915/gt/selftest_ring_submission.c    |   4 +-
 drivers/gpu/drm/i915/gt/selftest_rps.c        |  10 +-
 drivers/gpu/drm/i915/gt/selftest_timeline.c   | 174 ++--
 .../gpu/drm/i915/gt/selftest_workarounds.c    | 103 +-
 drivers/gpu/drm/i915/gt/shmem_utils.c         |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_log.c    |   4 +-
 drivers/gpu/drm/i915/gt/uc/intel_huc.c        |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c      |   2 +-
 drivers/gpu/drm/i915/gvt/cmd_parser.c         |   4 +-
 drivers/gpu/drm/i915/gvt/dmabuf.c             |   5 +-
 drivers/gpu/drm/i915/i915_active.c            |  20 +-
 drivers/gpu/drm/i915/i915_cmd_parser.c        | 104 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |   4 +-
 drivers/gpu/drm/i915/i915_drv.h               |  18 +-
 drivers/gpu/drm/i915/i915_gem.c               | 239 ++---
 drivers/gpu/drm/i915/i915_gem_gtt.c           |   2 +-
 drivers/gpu/drm/i915/i915_memcpy.c            |   2 +-
 drivers/gpu/drm/i915/i915_memcpy.h            |   2 +-
 drivers/gpu/drm/i915/i915_perf.c              |  60 +-
 drivers/gpu/drm/i915/i915_request.c           |   4 -
 drivers/gpu/drm/i915/i915_request.h           |  31 +-
 drivers/gpu/drm/i915/i915_selftest.h          |   2 +
 drivers/gpu/drm/i915/i915_vma.c               |  33 +-
 drivers/gpu/drm/i915/i915_vma.h               |  20 +-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c |  97 +-
 drivers/gpu/drm/i915/selftests/i915_request.c |  10 +-
 .../gpu/drm/i915/selftests/i915_scheduler.c   |   2 +-
 drivers/gpu/drm/i915/selftests/igt_spinner.c  | 136 ++-
 drivers/gpu/drm/i915/selftests/igt_spinner.h  |   5 +
 .../drm/i915/selftests/intel_memory_region.c  |  18 +-
 drivers/gpu/drm/i915/selftests/mock_region.c  |   4 +-
 104 files changed, 2324 insertions(+), 2087 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/gem/i915_gem_fence.c

-- 
2.30.1


* [Intel-gfx] [PATCH v8 01/69] drm/i915: Do not share hwsp across contexts any more, v7.
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC
  To: intel-gfx; +Cc: Thomas Hellström

Instead of sharing pages with breadcrumbs, give each timeline a
single page. This way, unrelated timelines no longer share locks
during command submission.

As an additional benefit, seqno wraparound no longer requires
i915_vma_pin, which means we no longer need to worry about a
potential -EDEADLK at a point where we are ready to submit.
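
For reference on why wraparound needs this care: the driver's own
seqno comparison (this helper already exists in i915_request.h) is
wrap-safe, but the MI_SEMAPHORE_WAIT poll is a plain unsigned
greater-or-equal compare, so a waiter armed before a wrap would
never see the wrapped value pass:

	static inline bool i915_seqno_passed(u32 seq1, u32 seq2)
	{
		/* signed delta handles wraparound, unlike the HW poll */
		return (s32)(seq1 - seq2) >= 0;
	}

Hence, on wraparound the timeline now moves to a fresh seqno slot in
its own page instead of reusing one a semaphore may still be watching.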

Changes since v1:
- Fix an erroneous i915_vma_acquire that should have been an
  i915_vma_release (ickle).
- Add an extra check for completion in intel_timeline_read_hwsp().
Changes since v2:
- Fix inconsistent indentation in hwsp_alloc() (kbuild).
- Memset the entire cacheline to 0.
Changes since v3:
- Do the same in intel_timeline_reset_seqno(), and clflush for good
  measure.
Changes since v4:
- Use refcounting on the timeline, instead of relying on i915_active.
- Fix waiting on kernel requests.
Changes since v5:
- Bump the number of slots to the maximum (256) for best wraparound
  coverage; see the note below.
- Add hwsp_offset to i915_request to fix a potential wraparound hang.
- Ensure the timeline wrap test works with the changes.
- Assign hwsp in intel_timeline_read_hwsp() within the RCU lock to
  fix a hang.
Changes since v6:
- Rename i915_request_active_offset to i915_request_active_seqno(),
  and expand the function's documentation (tvrtko).
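
Note on the 256-slot figure above: with TIMELINE_SEQNO_BYTES = 8, a
page holds 4096 / 8 = 512 slots, and the MI_FLUSH_DW workaround in
this patch skips every slot whose offset has bit 5 set, i.e. half of
them, leaving 512 / 2 = 256 usable seqno slots per page. This matches
the (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2 bound used in
live_hwsp_read() below.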

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> #v1
Reported-by: kernel test robot <lkp@intel.com>
---
 drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
 drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  13 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 +
 drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
 drivers/gpu/drm/i915/gt/intel_timeline.c      | 422 ++++--------------
 .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   5 +-
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  83 ++--
 drivers/gpu/drm/i915/i915_request.c           |   4 -
 drivers/gpu/drm/i915/i915_request.h           |  31 +-
 11 files changed, 175 insertions(+), 415 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
index b491a64919c8..9646200d2792 100644
--- a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
@@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
 				   int flush, int post)
 {
 	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
-	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
+	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
 
 	*cs++ = MI_FLUSH;
 
diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
index ce38d1bcaba3..b388ceeeb1c9 100644
--- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
@@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 		 PIPE_CONTROL_DC_FLUSH_ENABLE |
 		 PIPE_CONTROL_QW_WRITE |
 		 PIPE_CONTROL_CS_STALL);
-	*cs++ = i915_request_active_timeline(rq)->hwsp_offset |
+	*cs++ = i915_request_active_seqno(rq) |
 		PIPE_CONTROL_GLOBAL_GTT;
 	*cs++ = rq->fence.seqno;
 
@@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 		 PIPE_CONTROL_QW_WRITE |
 		 PIPE_CONTROL_GLOBAL_GTT_IVB |
 		 PIPE_CONTROL_CS_STALL);
-	*cs++ = i915_request_active_timeline(rq)->hwsp_offset;
+	*cs++ = i915_request_active_seqno(rq);
 	*cs++ = rq->fence.seqno;
 
 	*cs++ = MI_USER_INTERRUPT;
@@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
 {
 	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
-	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
+	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
 
 	*cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
 	*cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
@@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
 	int i;
 
 	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
-	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
+	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
 
 	*cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
 		MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index cac80af7ad1c..6b9c34d3ac8d 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -338,15 +338,14 @@ static u32 preempt_address(struct intel_engine_cs *engine)
 
 static u32 hwsp_offset(const struct i915_request *rq)
 {
-	const struct intel_timeline_cacheline *cl;
+	const struct intel_timeline *tl;
 
-	/* Before the request is executed, the timeline/cachline is fixed */
+	/* Before the request is executed, the timeline is fixed */
+	tl = rcu_dereference_protected(rq->timeline,
+				       !i915_request_signaled(rq));
 
-	cl = rcu_dereference_protected(rq->hwsp_cacheline, 1);
-	if (cl)
-		return cl->ggtt_offset;
-
-	return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
+	/* See the comment in i915_request_active_seqno(). */
+	return page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno);
 }
 
 int gen8_emit_init_breadcrumb(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index b4df275cba79..e6cefd00b4a1 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -752,6 +752,7 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
 	frame->rq.engine = engine;
 	frame->rq.context = ce;
 	rcu_assign_pointer(frame->rq.timeline, ce->timeline);
+	frame->rq.hwsp_seqno = ce->timeline->hwsp_seqno;
 
 	frame->ring.vaddr = frame->cs;
 	frame->ring.size = sizeof(frame->cs);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index 626af37c7790..3f6db8357309 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -45,10 +45,6 @@ struct intel_gt {
 	struct intel_gt_timelines {
 		spinlock_t lock; /* protects active_list */
 		struct list_head active_list;
-
-		/* Pack multiple timelines' seqnos into the same page */
-		spinlock_t hwsp_lock;
-		struct list_head hwsp_free_list;
 	} timelines;
 
 	struct intel_gt_requests {
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 491b8df174c2..efe2030cfe5e 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -11,21 +11,9 @@
 #include "intel_ring.h"
 #include "intel_timeline.h"
 
-#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
-#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))
+#define TIMELINE_SEQNO_BYTES 8
 
-#define CACHELINE_BITS 6
-#define CACHELINE_FREE CACHELINE_BITS
-
-struct intel_timeline_hwsp {
-	struct intel_gt *gt;
-	struct intel_gt_timelines *gt_timelines;
-	struct list_head free_link;
-	struct i915_vma *vma;
-	u64 free_bitmap;
-};
-
-static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
+static struct i915_vma *hwsp_alloc(struct intel_gt *gt)
 {
 	struct drm_i915_private *i915 = gt->i915;
 	struct drm_i915_gem_object *obj;
@@ -44,220 +32,59 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
 	return vma;
 }
 
-static struct i915_vma *
-hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
-{
-	struct intel_gt_timelines *gt = &timeline->gt->timelines;
-	struct intel_timeline_hwsp *hwsp;
-
-	BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
-
-	spin_lock_irq(&gt->hwsp_lock);
-
-	/* hwsp_free_list only contains HWSP that have available cachelines */
-	hwsp = list_first_entry_or_null(&gt->hwsp_free_list,
-					typeof(*hwsp), free_link);
-	if (!hwsp) {
-		struct i915_vma *vma;
-
-		spin_unlock_irq(&gt->hwsp_lock);
-
-		hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL);
-		if (!hwsp)
-			return ERR_PTR(-ENOMEM);
-
-		vma = __hwsp_alloc(timeline->gt);
-		if (IS_ERR(vma)) {
-			kfree(hwsp);
-			return vma;
-		}
-
-		GT_TRACE(timeline->gt, "new HWSP allocated\n");
-
-		vma->private = hwsp;
-		hwsp->gt = timeline->gt;
-		hwsp->vma = vma;
-		hwsp->free_bitmap = ~0ull;
-		hwsp->gt_timelines = gt;
-
-		spin_lock_irq(&gt->hwsp_lock);
-		list_add(&hwsp->free_link, &gt->hwsp_free_list);
-	}
-
-	GEM_BUG_ON(!hwsp->free_bitmap);
-	*cacheline = __ffs64(hwsp->free_bitmap);
-	hwsp->free_bitmap &= ~BIT_ULL(*cacheline);
-	if (!hwsp->free_bitmap)
-		list_del(&hwsp->free_link);
-
-	spin_unlock_irq(&gt->hwsp_lock);
-
-	GEM_BUG_ON(hwsp->vma->private != hwsp);
-	return hwsp->vma;
-}
-
-static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
-{
-	struct intel_gt_timelines *gt = hwsp->gt_timelines;
-	unsigned long flags;
-
-	spin_lock_irqsave(&gt->hwsp_lock, flags);
-
-	/* As a cacheline becomes available, publish the HWSP on the freelist */
-	if (!hwsp->free_bitmap)
-		list_add_tail(&hwsp->free_link, &gt->hwsp_free_list);
-
-	GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap));
-	hwsp->free_bitmap |= BIT_ULL(cacheline);
-
-	/* And if no one is left using it, give the page back to the system */
-	if (hwsp->free_bitmap == ~0ull) {
-		i915_vma_put(hwsp->vma);
-		list_del(&hwsp->free_link);
-		kfree(hwsp);
-	}
-
-	spin_unlock_irqrestore(&gt->hwsp_lock, flags);
-}
-
-static void __rcu_cacheline_free(struct rcu_head *rcu)
-{
-	struct intel_timeline_cacheline *cl =
-		container_of(rcu, typeof(*cl), rcu);
-
-	/* Must wait until after all *rq->hwsp are complete before removing */
-	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
-	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
-
-	i915_active_fini(&cl->active);
-	kfree(cl);
-}
-
-static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
-{
-	GEM_BUG_ON(!i915_active_is_idle(&cl->active));
-	call_rcu(&cl->rcu, __rcu_cacheline_free);
-}
-
 __i915_active_call
-static void __cacheline_retire(struct i915_active *active)
+static void __timeline_retire(struct i915_active *active)
 {
-	struct intel_timeline_cacheline *cl =
-		container_of(active, typeof(*cl), active);
+	struct intel_timeline *tl =
+		container_of(active, typeof(*tl), active);
 
-	i915_vma_unpin(cl->hwsp->vma);
-	if (ptr_test_bit(cl->vaddr, CACHELINE_FREE))
-		__idle_cacheline_free(cl);
+	i915_vma_unpin(tl->hwsp_ggtt);
+	intel_timeline_put(tl);
 }
 
-static int __cacheline_active(struct i915_active *active)
+static int __timeline_active(struct i915_active *active)
 {
-	struct intel_timeline_cacheline *cl =
-		container_of(active, typeof(*cl), active);
+	struct intel_timeline *tl =
+		container_of(active, typeof(*tl), active);
 
-	__i915_vma_pin(cl->hwsp->vma);
+	__i915_vma_pin(tl->hwsp_ggtt);
+	intel_timeline_get(tl);
 	return 0;
 }
 
-static struct intel_timeline_cacheline *
-cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
-{
-	struct intel_timeline_cacheline *cl;
-	void *vaddr;
-
-	GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));
-
-	cl = kmalloc(sizeof(*cl), GFP_KERNEL);
-	if (!cl)
-		return ERR_PTR(-ENOMEM);
-
-	vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB);
-	if (IS_ERR(vaddr)) {
-		kfree(cl);
-		return ERR_CAST(vaddr);
-	}
-
-	cl->hwsp = hwsp;
-	cl->vaddr = page_pack_bits(vaddr, cacheline);
-
-	i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
-
-	return cl;
-}
-
-static void cacheline_acquire(struct intel_timeline_cacheline *cl,
-			      u32 ggtt_offset)
-{
-	if (!cl)
-		return;
-
-	cl->ggtt_offset = ggtt_offset;
-	i915_active_acquire(&cl->active);
-}
-
-static void cacheline_release(struct intel_timeline_cacheline *cl)
-{
-	if (cl)
-		i915_active_release(&cl->active);
-}
-
-static void cacheline_free(struct intel_timeline_cacheline *cl)
-{
-	if (!i915_active_acquire_if_busy(&cl->active)) {
-		__idle_cacheline_free(cl);
-		return;
-	}
-
-	GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
-	cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);
-
-	i915_active_release(&cl->active);
-}
-
 static int intel_timeline_init(struct intel_timeline *timeline,
 			       struct intel_gt *gt,
 			       struct i915_vma *hwsp,
 			       unsigned int offset)
 {
 	void *vaddr;
+	u32 *seqno;
 
 	kref_init(&timeline->kref);
 	atomic_set(&timeline->pin_count, 0);
 
 	timeline->gt = gt;
 
-	timeline->has_initial_breadcrumb = !hwsp;
-	timeline->hwsp_cacheline = NULL;
-
-	if (!hwsp) {
-		struct intel_timeline_cacheline *cl;
-		unsigned int cacheline;
-
-		hwsp = hwsp_alloc(timeline, &cacheline);
+	if (hwsp) {
+		timeline->hwsp_offset = offset;
+		timeline->hwsp_ggtt = i915_vma_get(hwsp);
+	} else {
+		timeline->has_initial_breadcrumb = true;
+		hwsp = hwsp_alloc(gt);
 		if (IS_ERR(hwsp))
 			return PTR_ERR(hwsp);
-
-		cl = cacheline_alloc(hwsp->private, cacheline);
-		if (IS_ERR(cl)) {
-			__idle_hwsp_free(hwsp->private, cacheline);
-			return PTR_ERR(cl);
-		}
-
-		timeline->hwsp_cacheline = cl;
-		timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
-
-		vaddr = page_mask_bits(cl->vaddr);
-	} else {
-		timeline->hwsp_offset = offset;
-		vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		timeline->hwsp_ggtt = hwsp;
 	}
 
-	timeline->hwsp_seqno =
-		memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES);
+	vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	timeline->hwsp_map = vaddr;
+	seqno = vaddr + timeline->hwsp_offset;
+	WRITE_ONCE(*seqno, 0);
+	timeline->hwsp_seqno = seqno;
 
-	timeline->hwsp_ggtt = i915_vma_get(hwsp);
 	GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
 
 	timeline->fence_context = dma_fence_context_alloc(1);
@@ -268,6 +95,7 @@ static int intel_timeline_init(struct intel_timeline *timeline,
 	INIT_LIST_HEAD(&timeline->requests);
 
 	i915_syncmap_init(&timeline->sync);
+	i915_active_init(&timeline->active, __timeline_active, __timeline_retire);
 
 	return 0;
 }
@@ -278,23 +106,18 @@ void intel_gt_init_timelines(struct intel_gt *gt)
 
 	spin_lock_init(&timelines->lock);
 	INIT_LIST_HEAD(&timelines->active_list);
-
-	spin_lock_init(&timelines->hwsp_lock);
-	INIT_LIST_HEAD(&timelines->hwsp_free_list);
 }
 
-static void intel_timeline_fini(struct intel_timeline *timeline)
+static void intel_timeline_fini(struct rcu_head *rcu)
 {
-	GEM_BUG_ON(atomic_read(&timeline->pin_count));
-	GEM_BUG_ON(!list_empty(&timeline->requests));
-	GEM_BUG_ON(timeline->retire);
+	struct intel_timeline *timeline =
+		container_of(rcu, struct intel_timeline, rcu);
 
-	if (timeline->hwsp_cacheline)
-		cacheline_free(timeline->hwsp_cacheline);
-	else
-		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
+	i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
 
 	i915_vma_put(timeline->hwsp_ggtt);
+	i915_active_fini(&timeline->active);
+	kfree(timeline);
 }
 
 struct intel_timeline *
@@ -360,9 +183,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 	GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
 		 tl->fence_context, tl->hwsp_offset);
 
-	cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
+	i915_active_acquire(&tl->active);
 	if (atomic_fetch_inc(&tl->pin_count)) {
-		cacheline_release(tl->hwsp_cacheline);
+		i915_active_release(&tl->active);
 		__i915_vma_unpin(tl->hwsp_ggtt);
 	}
 
@@ -371,9 +194,13 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 
 void intel_timeline_reset_seqno(const struct intel_timeline *tl)
 {
+	u32 *hwsp_seqno = (u32 *)tl->hwsp_seqno;
 	/* Must be pinned to be writable, and no requests in flight. */
 	GEM_BUG_ON(!atomic_read(&tl->pin_count));
-	WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
+
+	memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
+	WRITE_ONCE(*hwsp_seqno, tl->seqno);
+	clflush(hwsp_seqno);
 }
 
 void intel_timeline_enter(struct intel_timeline *tl)
@@ -449,106 +276,23 @@ static u32 timeline_advance(struct intel_timeline *tl)
 	return tl->seqno += 1 + tl->has_initial_breadcrumb;
 }
 
-static void timeline_rollback(struct intel_timeline *tl)
-{
-	tl->seqno -= 1 + tl->has_initial_breadcrumb;
-}
-
 static noinline int
 __intel_timeline_get_seqno(struct intel_timeline *tl,
-			   struct i915_request *rq,
 			   u32 *seqno)
 {
-	struct intel_timeline_cacheline *cl;
-	unsigned int cacheline;
-	struct i915_vma *vma;
-	void *vaddr;
-	int err;
-
-	might_lock(&tl->gt->ggtt->vm.mutex);
-	GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context);
-
-	/*
-	 * If there is an outstanding GPU reference to this cacheline,
-	 * such as it being sampled by a HW semaphore on another timeline,
-	 * we cannot wraparound our seqno value (the HW semaphore does
-	 * a strict greater-than-or-equals compare, not i915_seqno_passed).
-	 * So if the cacheline is still busy, we must detach ourselves
-	 * from it and leave it inflight alongside its users.
-	 *
-	 * However, if nobody is watching and we can guarantee that nobody
-	 * will, we could simply reuse the same cacheline.
-	 *
-	 * if (i915_active_request_is_signaled(&tl->last_request) &&
-	 *     i915_active_is_signaled(&tl->hwsp_cacheline->active))
-	 *	return 0;
-	 *
-	 * That seems unlikely for a busy timeline that needed to wrap in
-	 * the first place, so just replace the cacheline.
-	 */
-
-	vma = hwsp_alloc(tl, &cacheline);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto err_rollback;
-	}
-
-	err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
-	if (err) {
-		__idle_hwsp_free(vma->private, cacheline);
-		goto err_rollback;
-	}
+	u32 next_ofs = offset_in_page(tl->hwsp_offset + TIMELINE_SEQNO_BYTES);
 
-	cl = cacheline_alloc(vma->private, cacheline);
-	if (IS_ERR(cl)) {
-		err = PTR_ERR(cl);
-		__idle_hwsp_free(vma->private, cacheline);
-		goto err_unpin;
-	}
-	GEM_BUG_ON(cl->hwsp->vma != vma);
-
-	/*
-	 * Attach the old cacheline to the current request, so that we only
-	 * free it after the current request is retired, which ensures that
-	 * all writes into the cacheline from previous requests are complete.
-	 */
-	err = i915_active_ref(&tl->hwsp_cacheline->active,
-			      tl->fence_context,
-			      &rq->fence);
-	if (err)
-		goto err_cacheline;
+	/* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
+	if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
+		next_ofs = offset_in_page(next_ofs + BIT(5));
 
-	cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */
-	cacheline_free(tl->hwsp_cacheline);
-
-	i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */
-	i915_vma_put(tl->hwsp_ggtt);
-
-	tl->hwsp_ggtt = i915_vma_get(vma);
-
-	vaddr = page_mask_bits(cl->vaddr);
-	tl->hwsp_offset = cacheline * CACHELINE_BYTES;
-	tl->hwsp_seqno =
-		memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES);
-
-	tl->hwsp_offset += i915_ggtt_offset(vma);
-	GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
-		 tl->fence_context, tl->hwsp_offset);
-
-	cacheline_acquire(cl, tl->hwsp_offset);
-	tl->hwsp_cacheline = cl;
+	tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + next_ofs;
+	tl->hwsp_seqno = tl->hwsp_map + next_ofs;
+	intel_timeline_reset_seqno(tl);
 
 	*seqno = timeline_advance(tl);
 	GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno));
 	return 0;
-
-err_cacheline:
-	cacheline_free(cl);
-err_unpin:
-	i915_vma_unpin(vma);
-err_rollback:
-	timeline_rollback(tl);
-	return err;
 }
 
 int intel_timeline_get_seqno(struct intel_timeline *tl,
@@ -558,51 +302,52 @@ int intel_timeline_get_seqno(struct intel_timeline *tl,
 	*seqno = timeline_advance(tl);
 
 	/* Replace the HWSP on wraparound for HW semaphores */
-	if (unlikely(!*seqno && tl->hwsp_cacheline))
-		return __intel_timeline_get_seqno(tl, rq, seqno);
+	if (unlikely(!*seqno && tl->has_initial_breadcrumb))
+		return __intel_timeline_get_seqno(tl, seqno);
 
 	return 0;
 }
 
-static int cacheline_ref(struct intel_timeline_cacheline *cl,
-			 struct i915_request *rq)
-{
-	return i915_active_add_request(&cl->active, rq);
-}
-
 int intel_timeline_read_hwsp(struct i915_request *from,
 			     struct i915_request *to,
 			     u32 *hwsp)
 {
-	struct intel_timeline_cacheline *cl;
+	struct intel_timeline *tl;
 	int err;
 
-	GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline));
-
 	rcu_read_lock();
-	cl = rcu_dereference(from->hwsp_cacheline);
-	if (i915_request_signaled(from)) /* confirm cacheline is valid */
-		goto unlock;
-	if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
-		goto unlock; /* seqno wrapped and completed! */
-	if (unlikely(__i915_request_is_complete(from)))
-		goto release;
+	tl = rcu_dereference(from->timeline);
+	if (i915_request_signaled(from) ||
+	    !i915_active_acquire_if_busy(&tl->active))
+		tl = NULL;
+
+	if (tl) {
+		/* hwsp_offset may wraparound, so use from->hwsp_seqno */
+		*hwsp = i915_ggtt_offset(tl->hwsp_ggtt) +
+			offset_in_page(from->hwsp_seqno);
+	}
+
+	/* ensure we wait on the right request; if it completed, skip */
+	if (tl && __i915_request_is_complete(from)) {
+		i915_active_release(&tl->active);
+		tl = NULL;
+	}
 	rcu_read_unlock();
 
-	err = cacheline_ref(cl, to);
-	if (err)
+	if (!tl)
+		return 1;
+
+	/* Can't do semaphore waits on kernel context */
+	if (!tl->has_initial_breadcrumb) {
+		err = -EINVAL;
 		goto out;
+	}
+
+	err = i915_active_add_request(&tl->active, to);
 
-	*hwsp = cl->ggtt_offset;
 out:
-	i915_active_release(&cl->active);
+	i915_active_release(&tl->active);
 	return err;
-
-release:
-	i915_active_release(&cl->active);
-unlock:
-	rcu_read_unlock();
-	return 1;
 }
 
 void intel_timeline_unpin(struct intel_timeline *tl)
@@ -611,8 +356,7 @@ void intel_timeline_unpin(struct intel_timeline *tl)
 	if (!atomic_dec_and_test(&tl->pin_count))
 		return;
 
-	cacheline_release(tl->hwsp_cacheline);
-
+	i915_active_release(&tl->active);
 	__i915_vma_unpin(tl->hwsp_ggtt);
 }
 
@@ -621,8 +365,11 @@ void __intel_timeline_free(struct kref *kref)
 	struct intel_timeline *timeline =
 		container_of(kref, typeof(*timeline), kref);
 
-	intel_timeline_fini(timeline);
-	kfree_rcu(timeline, rcu);
+	GEM_BUG_ON(atomic_read(&timeline->pin_count));
+	GEM_BUG_ON(!list_empty(&timeline->requests));
+	GEM_BUG_ON(timeline->retire);
+
+	call_rcu(&timeline->rcu, intel_timeline_fini);
 }
 
 void intel_gt_fini_timelines(struct intel_gt *gt)
@@ -630,7 +377,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt)
 	struct intel_gt_timelines *timelines = &gt->timelines;
 
 	GEM_BUG_ON(!list_empty(&timelines->active_list));
-	GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
 }
 
 void intel_gt_show_timelines(struct intel_gt *gt,
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index 9f677c9b7d06..74e67dbf89c5 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -17,7 +17,6 @@
 struct i915_vma;
 struct i915_syncmap;
 struct intel_gt;
-struct intel_timeline_hwsp;
 
 struct intel_timeline {
 	u64 fence_context;
@@ -44,12 +43,11 @@ struct intel_timeline {
 	atomic_t pin_count;
 	atomic_t active_count;
 
+	void *hwsp_map;
 	const u32 *hwsp_seqno;
 	struct i915_vma *hwsp_ggtt;
 	u32 hwsp_offset;
 
-	struct intel_timeline_cacheline *hwsp_cacheline;
-
 	bool has_initial_breadcrumb;
 
 	/**
@@ -66,6 +64,8 @@ struct intel_timeline {
 	 */
 	struct i915_active_fence last_request;
 
+	struct i915_active active;
+
 	/** A chain of completed timelines ready for early retirement. */
 	struct intel_timeline *retire;
 
@@ -89,15 +89,4 @@ struct intel_timeline {
 	struct rcu_head rcu;
 };
 
-struct intel_timeline_cacheline {
-	struct i915_active active;
-
-	struct intel_timeline_hwsp *hwsp;
-	void *vaddr;
-
-	u32 ggtt_offset;
-
-	struct rcu_head rcu;
-};
-
 #endif /* __I915_TIMELINE_TYPES_H__ */
diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
index 84d883de30ee..7e466ae114f8 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
@@ -41,6 +41,9 @@ static int perf_end(struct intel_gt *gt)
 
 static int write_timestamp(struct i915_request *rq, int slot)
 {
+	struct intel_timeline *tl =
+		rcu_dereference_protected(rq->timeline,
+					  !i915_request_signaled(rq));
 	u32 cmd;
 	u32 *cs;
 
@@ -53,7 +56,7 @@ static int write_timestamp(struct i915_request *rq, int slot)
 		cmd++;
 	*cs++ = cmd;
 	*cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base));
-	*cs++ = i915_request_timeline(rq)->hwsp_offset + slot * sizeof(u32);
+	*cs++ = tl->hwsp_offset + slot * sizeof(u32);
 	*cs++ = 0;
 
 	intel_ring_advance(rq, cs);
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index d283dce5b4ac..a4c78062e92b 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -665,7 +665,7 @@ static int live_hwsp_wrap(void *arg)
 	if (IS_ERR(tl))
 		return PTR_ERR(tl);
 
-	if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
+	if (!tl->has_initial_breadcrumb)
 		goto out_free;
 
 	err = intel_timeline_pin(tl, NULL);
@@ -832,12 +832,26 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
 	return 0;
 }
 
+static void switch_tl_lock(struct i915_request *from, struct i915_request *to)
+{
+	/* some light mutex juggling required; think co-routines */
+
+	if (from) {
+		lockdep_unpin_lock(&from->context->timeline->mutex, from->cookie);
+		mutex_unlock(&from->context->timeline->mutex);
+	}
+
+	if (to) {
+		mutex_lock(&to->context->timeline->mutex);
+		to->cookie = lockdep_pin_lock(&to->context->timeline->mutex);
+	}
+}
+
 static int create_watcher(struct hwsp_watcher *w,
 			  struct intel_engine_cs *engine,
 			  int ringsz)
 {
 	struct intel_context *ce;
-	struct intel_timeline *tl;
 
 	ce = intel_context_create(engine);
 	if (IS_ERR(ce))
@@ -850,11 +864,8 @@ static int create_watcher(struct hwsp_watcher *w,
 		return PTR_ERR(w->rq);
 
 	w->addr = i915_ggtt_offset(w->vma);
-	tl = w->rq->context->timeline;
 
-	/* some light mutex juggling required; think co-routines */
-	lockdep_unpin_lock(&tl->mutex, w->rq->cookie);
-	mutex_unlock(&tl->mutex);
+	switch_tl_lock(w->rq, NULL);
 
 	return 0;
 }
@@ -863,15 +874,13 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
 			 bool (*op)(u32 hwsp, u32 seqno))
 {
 	struct i915_request *rq = fetch_and_zero(&w->rq);
-	struct intel_timeline *tl = rq->context->timeline;
 	u32 offset, end;
 	int err;
 
 	GEM_BUG_ON(w->addr - i915_ggtt_offset(w->vma) > w->vma->size);
 
 	i915_request_get(rq);
-	mutex_lock(&tl->mutex);
-	rq->cookie = lockdep_pin_lock(&tl->mutex);
+	switch_tl_lock(NULL, rq);
 	i915_request_add(rq);
 
 	if (i915_request_wait(rq, 0, HZ) < 0) {
@@ -900,10 +909,7 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
 static void cleanup_watcher(struct hwsp_watcher *w)
 {
 	if (w->rq) {
-		struct intel_timeline *tl = w->rq->context->timeline;
-
-		mutex_lock(&tl->mutex);
-		w->rq->cookie = lockdep_pin_lock(&tl->mutex);
+		switch_tl_lock(NULL, w->rq);
 
 		i915_request_add(w->rq);
 	}
@@ -941,7 +947,7 @@ static struct i915_request *wrap_timeline(struct i915_request *rq)
 	}
 
 	i915_request_put(rq);
-	rq = intel_context_create_request(ce);
+	rq = i915_request_create(ce);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -976,7 +982,7 @@ static int live_hwsp_read(void *arg)
 	if (IS_ERR(tl))
 		return PTR_ERR(tl);
 
-	if (!tl->hwsp_cacheline)
+	if (!tl->has_initial_breadcrumb)
 		goto out_free;
 
 	for (i = 0; i < ARRAY_SIZE(watcher); i++) {
@@ -998,7 +1004,7 @@ static int live_hwsp_read(void *arg)
 		do {
 			struct i915_sw_fence *submit;
 			struct i915_request *rq;
-			u32 hwsp;
+			u32 hwsp, dummy;
 
 			submit = heap_fence_create(GFP_KERNEL);
 			if (!submit) {
@@ -1016,14 +1022,26 @@ static int live_hwsp_read(void *arg)
 				goto out;
 			}
 
-			/* Skip to the end, saving 30 minutes of nops */
-			tl->seqno = -10u + 2 * (count & 3);
-			WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 			ce->timeline = intel_timeline_get(tl);
 
-			rq = intel_context_create_request(ce);
+			/* Ensure timeline is mapped, done during first pin */
+			err = intel_context_pin(ce);
+			if (err) {
+				intel_context_put(ce);
+				goto out;
+			}
+
+			/*
+			 * Start at a new wrap, and set seqno right before another wrap,
+			 * saving 30 minutes of nops
+			 */
+			tl->seqno = -12u + 2 * (count & 3);
+			__intel_timeline_get_seqno(tl, &dummy);
+
+			rq = i915_request_create(ce);
 			if (IS_ERR(rq)) {
 				err = PTR_ERR(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
@@ -1033,32 +1051,35 @@ static int live_hwsp_read(void *arg)
 							    GFP_KERNEL);
 			if (err < 0) {
 				i915_request_add(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
 
-			mutex_lock(&watcher[0].rq->context->timeline->mutex);
+			switch_tl_lock(rq, watcher[0].rq);
 			err = intel_timeline_read_hwsp(rq, watcher[0].rq, &hwsp);
 			if (err == 0)
 				err = emit_read_hwsp(watcher[0].rq, /* before */
 						     rq->fence.seqno, hwsp,
 						     &watcher[0].addr);
-			mutex_unlock(&watcher[0].rq->context->timeline->mutex);
+			switch_tl_lock(watcher[0].rq, rq);
 			if (err) {
 				i915_request_add(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
 
-			mutex_lock(&watcher[1].rq->context->timeline->mutex);
+			switch_tl_lock(rq, watcher[1].rq);
 			err = intel_timeline_read_hwsp(rq, watcher[1].rq, &hwsp);
 			if (err == 0)
 				err = emit_read_hwsp(watcher[1].rq, /* after */
 						     rq->fence.seqno, hwsp,
 						     &watcher[1].addr);
-			mutex_unlock(&watcher[1].rq->context->timeline->mutex);
+			switch_tl_lock(watcher[1].rq, rq);
 			if (err) {
 				i915_request_add(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
@@ -1067,6 +1088,7 @@ static int live_hwsp_read(void *arg)
 			i915_request_add(rq);
 
 			rq = wrap_timeline(rq);
+			intel_context_unpin(ce);
 			intel_context_put(ce);
 			if (IS_ERR(rq)) {
 				err = PTR_ERR(rq);
@@ -1106,8 +1128,8 @@ static int live_hwsp_read(void *arg)
 			    3 * watcher[1].rq->ring->size)
 				break;
 
-		} while (!__igt_timeout(end_time, NULL));
-		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, 0xdeadbeef);
+		} while (!__igt_timeout(end_time, NULL) &&
+			 count < (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2);
 
 		pr_info("%s: simulated %lu wraps\n", engine->name, count);
 		err = check_watcher(&watcher[1], "after", cmp_gte);
@@ -1152,9 +1174,7 @@ static int live_hwsp_rollover_kernel(void *arg)
 		}
 
 		GEM_BUG_ON(i915_active_fence_isset(&tl->last_request));
-		tl->seqno = 0;
-		timeline_rollback(tl);
-		timeline_rollback(tl);
+		tl->seqno = -2u;
 		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 
 		for (i = 0; i < ARRAY_SIZE(rq); i++) {
@@ -1234,11 +1254,10 @@ static int live_hwsp_rollover_user(void *arg)
 			goto out;
 
 		tl = ce->timeline;
-		if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
+		if (!tl->has_initial_breadcrumb)
 			goto out;
 
-		timeline_rollback(tl);
-		timeline_rollback(tl);
+		tl->seqno = -4u;
 		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 
 		for (i = 0; i < ARRAY_SIZE(rq); i++) {
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index e7b4c4bc41a6..59d942910558 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -794,7 +794,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->fence.seqno = seqno;
 
 	RCU_INIT_POINTER(rq->timeline, tl);
-	RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
 	rq->hwsp_seqno = tl->hwsp_seqno;
 	GEM_BUG_ON(__i915_request_is_complete(rq));
 
@@ -1039,9 +1038,6 @@ emit_semaphore_wait(struct i915_request *to,
 	if (i915_request_has_initial_breadcrumb(to))
 		goto await_fence;
 
-	if (!rcu_access_pointer(from->hwsp_cacheline))
-		goto await_fence;
-
 	/*
 	 * If this or its dependents are waiting on an external fence
 	 * that may fail catastrophically, then we want to avoid using
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index dd10a6db3d21..38062495b66f 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -239,16 +239,6 @@ struct i915_request {
 	 */
 	const u32 *hwsp_seqno;
 
-	/*
-	 * If we need to access the timeline's seqno for this request in
-	 * another request, we need to keep a read reference to this associated
-	 * cacheline, so that we do not free and recycle it before the foreign
-	 * observers have completed. Hence, we keep a pointer to the cacheline
-	 * inside the timeline's HWSP vma, but it is only valid while this
-	 * request has not completed and guarded by the timeline mutex.
-	 */
-	struct intel_timeline_cacheline __rcu *hwsp_cacheline;
-
 	/** Position in the ring of the start of the request */
 	u32 head;
 
@@ -650,4 +640,25 @@ static inline bool i915_request_use_semaphores(const struct i915_request *rq)
 	return intel_engine_has_semaphores(rq->engine);
 }
 
+static inline u32
+i915_request_active_seqno(const struct i915_request *rq)
+{
+	u32 hwsp_phys_base =
+		page_mask_bits(i915_request_active_timeline(rq)->hwsp_offset);
+	u32 hwsp_relative_offset = offset_in_page(rq->hwsp_seqno);
+
+	/*
+	 * Because of wraparound, we cannot simply take tl->hwsp_offset,
+	 * but instead use the fact that the offset within the page is
+	 * the same for the vaddr and for hwsp_offset. Take the top bits
+	 * from tl->hwsp_offset and add the page offset of rq->hwsp_seqno.
+	 *
+	 * As rq->hwsp_seqno is rewritten when signaled, this only works
+	 * while the request is not yet signaled; at that point the
+	 * offset is no longer needed.
+	 */
+
+	return hwsp_phys_base + hwsp_relative_offset;
+}
+
 #endif /* I915_REQUEST_H */
-- 
2.30.1


* [Intel-gfx] [PATCH v8 02/69] drm/i915: Pin timeline map after first timeline pin, v3.
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC
  To: intel-gfx; +Cc: Thomas Hellström

We're starting to require the reservation lock for pinning, so defer
mapping the HWSP until we actually hold that lock (see the distilled
sketch below).

Update the selftests to handle this correctly, and ensure pin is
called in live_hwsp_rollover_user() and mock_hwsp_freelist().
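
The core of the change, distilled from the diff below and trimmed to
the relevant lines for illustration: the HWSP map is no longer created
at timeline init, but lazily on the first pin, where the caller
already holds the object's reservation lock through the ww context:

	if (atomic_add_unless(&tl->pin_count, 1, 0))
		return 0;		/* already pinned and mapped */

	if (!tl->hwsp_map) {		/* first pin: safe to map now */
		err = intel_timeline_pin_map(tl);
		if (err)
			return err;
	}

	err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);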

Changes since v1:
- Fix NULL + XX arithmetic; use casts (kbuild).
Changes since v2:
- Clear entire cacheline when pinning.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_timeline.c    | 40 +++++++++----
 drivers/gpu/drm/i915/gt/intel_timeline.h    |  2 +
 drivers/gpu/drm/i915/gt/mock_engine.c       | 22 ++++++-
 drivers/gpu/drm/i915/gt/selftest_timeline.c | 63 +++++++++++----------
 drivers/gpu/drm/i915/i915_selftest.h        |  2 +
 5 files changed, 84 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index efe2030cfe5e..032e1d1b4c5e 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -52,14 +52,29 @@ static int __timeline_active(struct i915_active *active)
 	return 0;
 }
 
+I915_SELFTEST_EXPORT int
+intel_timeline_pin_map(struct intel_timeline *timeline)
+{
+	struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
+	u32 ofs = offset_in_page(timeline->hwsp_offset);
+	void *vaddr;
+
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	timeline->hwsp_map = vaddr;
+	timeline->hwsp_seqno = memset(vaddr + ofs, 0, CACHELINE_BYTES);
+	clflush(vaddr + ofs);
+
+	return 0;
+}
+
 static int intel_timeline_init(struct intel_timeline *timeline,
 			       struct intel_gt *gt,
 			       struct i915_vma *hwsp,
 			       unsigned int offset)
 {
-	void *vaddr;
-	u32 *seqno;
-
 	kref_init(&timeline->kref);
 	atomic_set(&timeline->pin_count, 0);
 
@@ -76,14 +91,8 @@ static int intel_timeline_init(struct intel_timeline *timeline,
 		timeline->hwsp_ggtt = hwsp;
 	}
 
-	vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	timeline->hwsp_map = vaddr;
-	seqno = vaddr + timeline->hwsp_offset;
-	WRITE_ONCE(*seqno, 0);
-	timeline->hwsp_seqno = seqno;
+	timeline->hwsp_map = NULL;
+	timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset;
 
 	GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
 
@@ -113,7 +122,8 @@ static void intel_timeline_fini(struct rcu_head *rcu)
 	struct intel_timeline *timeline =
 		container_of(rcu, struct intel_timeline, rcu);
 
-	i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
+	if (timeline->hwsp_map)
+		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
 
 	i915_vma_put(timeline->hwsp_ggtt);
 	i915_active_fini(&timeline->active);
@@ -173,6 +183,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 	if (atomic_add_unless(&tl->pin_count, 1, 0))
 		return 0;
 
+	if (!tl->hwsp_map) {
+		err = intel_timeline_pin_map(tl);
+		if (err)
+			return err;
+	}
+
 	err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
 	if (err)
 		return err;
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
index b1f81d947f8d..57308c4d664a 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
@@ -98,4 +98,6 @@ intel_timeline_is_last(const struct intel_timeline *tl,
 	return list_is_last_rcu(&rq->link, &tl->requests);
 }
 
+I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
+
 #endif
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index 5662f7c2f719..42fd86658ee7 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -13,9 +13,20 @@
 #include "mock_engine.h"
 #include "selftests/mock_request.h"
 
-static void mock_timeline_pin(struct intel_timeline *tl)
+static int mock_timeline_pin(struct intel_timeline *tl)
 {
+	int err;
+
+	if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
+		return -EBUSY;
+
+	err = intel_timeline_pin_map(tl);
+	i915_gem_object_unlock(tl->hwsp_ggtt->obj);
+	if (err)
+		return err;
+
 	atomic_inc(&tl->pin_count);
+	return 0;
 }
 
 static void mock_timeline_unpin(struct intel_timeline *tl)
@@ -133,6 +144,8 @@ static void mock_context_destroy(struct kref *ref)
 
 static int mock_context_alloc(struct intel_context *ce)
 {
+	int err;
+
 	ce->ring = mock_ring(ce->engine);
 	if (!ce->ring)
 		return -ENOMEM;
@@ -143,7 +156,12 @@ static int mock_context_alloc(struct intel_context *ce)
 		return PTR_ERR(ce->timeline);
 	}
 
-	mock_timeline_pin(ce->timeline);
+	err = mock_timeline_pin(ce->timeline);
+	if (err) {
+		intel_timeline_put(ce->timeline);
+		ce->timeline = NULL;
+		return err;
+	}
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index a4c78062e92b..31b492eb2982 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -34,7 +34,7 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
 {
 	unsigned long address = (unsigned long)page_address(hwsp_page(tl));
 
-	return (address + tl->hwsp_offset) / CACHELINE_BYTES;
+	return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;
 }
 
 #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
@@ -58,6 +58,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state,
 	tl = xchg(&state->history[idx], tl);
 	if (tl) {
 		radix_tree_delete(&state->cachelines, hwsp_cacheline(tl));
+		intel_timeline_unpin(tl);
 		intel_timeline_put(tl);
 	}
 }
@@ -77,6 +78,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
 		if (IS_ERR(tl))
 			return PTR_ERR(tl);
 
+		err = intel_timeline_pin(tl, NULL);
+		if (err) {
+			intel_timeline_put(tl);
+			return err;
+		}
+
 		cacheline = hwsp_cacheline(tl);
 		err = radix_tree_insert(&state->cachelines, cacheline, tl);
 		if (err) {
@@ -84,6 +91,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
 				pr_err("HWSP cacheline %lu already used; duplicate allocation!\n",
 				       cacheline);
 			}
+			intel_timeline_unpin(tl);
 			intel_timeline_put(tl);
 			return err;
 		}
@@ -451,7 +459,7 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value)
 }
 
 static struct i915_request *
-tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
+checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
 {
 	struct i915_request *rq;
 	int err;
@@ -462,6 +470,13 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
 		goto out;
 	}
 
+	if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
+		pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
+		       *tl->hwsp_seqno, tl->seqno);
+		intel_timeline_unpin(tl);
+		return ERR_PTR(-EINVAL);
+	}
+
 	rq = intel_engine_create_kernel_request(engine);
 	if (IS_ERR(rq))
 		goto out_unpin;
@@ -483,25 +498,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
 	return rq;
 }
 
-static struct intel_timeline *
-checked_intel_timeline_create(struct intel_gt *gt)
-{
-	struct intel_timeline *tl;
-
-	tl = intel_timeline_create(gt);
-	if (IS_ERR(tl))
-		return tl;
-
-	if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
-		pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
-		       *tl->hwsp_seqno, tl->seqno);
-		intel_timeline_put(tl);
-		return ERR_PTR(-EINVAL);
-	}
-
-	return tl;
-}
-
 static int live_hwsp_engine(void *arg)
 {
 #define NUM_TIMELINES 4096
@@ -534,13 +530,13 @@ static int live_hwsp_engine(void *arg)
 			struct intel_timeline *tl;
 			struct i915_request *rq;
 
-			tl = checked_intel_timeline_create(gt);
+			tl = intel_timeline_create(gt);
 			if (IS_ERR(tl)) {
 				err = PTR_ERR(tl);
 				break;
 			}
 
-			rq = tl_write(tl, engine, count);
+			rq = checked_tl_write(tl, engine, count);
 			if (IS_ERR(rq)) {
 				intel_timeline_put(tl);
 				err = PTR_ERR(rq);
@@ -607,14 +603,14 @@ static int live_hwsp_alternate(void *arg)
 			if (!intel_engine_can_store_dword(engine))
 				continue;
 
-			tl = checked_intel_timeline_create(gt);
+			tl = intel_timeline_create(gt);
 			if (IS_ERR(tl)) {
 				err = PTR_ERR(tl);
 				goto out;
 			}
 
 			intel_engine_pm_get(engine);
-			rq = tl_write(tl, engine, count);
+			rq = checked_tl_write(tl, engine, count);
 			intel_engine_pm_put(engine);
 			if (IS_ERR(rq)) {
 				intel_timeline_put(tl);
@@ -1257,6 +1253,10 @@ static int live_hwsp_rollover_user(void *arg)
 		if (!tl->has_initial_breadcrumb)
 			goto out;
 
+		err = intel_context_pin(ce);
+		if (err)
+			goto out;
+
 		tl->seqno = -4u;
 		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 
@@ -1266,7 +1266,7 @@ static int live_hwsp_rollover_user(void *arg)
 			this = intel_context_create_request(ce);
 			if (IS_ERR(this)) {
 				err = PTR_ERR(this);
-				goto out;
+				goto out_unpin;
 			}
 
 			pr_debug("%s: create fence.seqnp:%d\n",
@@ -1285,17 +1285,18 @@ static int live_hwsp_rollover_user(void *arg)
 		if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
 			pr_err("Wait for timeline wrap timed out!\n");
 			err = -EIO;
-			goto out;
+			goto out_unpin;
 		}
 
 		for (i = 0; i < ARRAY_SIZE(rq); i++) {
 			if (!i915_request_completed(rq[i])) {
 				pr_err("Pre-wrap request not completed!\n");
 				err = -EINVAL;
-				goto out;
+				goto out_unpin;
 			}
 		}
-
+out_unpin:
+		intel_context_unpin(ce);
 out:
 		for (i = 0; i < ARRAY_SIZE(rq); i++)
 			i915_request_put(rq[i]);
@@ -1337,13 +1338,13 @@ static int live_hwsp_recycle(void *arg)
 			struct intel_timeline *tl;
 			struct i915_request *rq;
 
-			tl = checked_intel_timeline_create(gt);
+			tl = intel_timeline_create(gt);
 			if (IS_ERR(tl)) {
 				err = PTR_ERR(tl);
 				break;
 			}
 
-			rq = tl_write(tl, engine, count);
+			rq = checked_tl_write(tl, engine, count);
 			if (IS_ERR(rq)) {
 				intel_timeline_put(tl);
 				err = PTR_ERR(rq);
diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
index d53d207ab6eb..f54de0499be7 100644
--- a/drivers/gpu/drm/i915/i915_selftest.h
+++ b/drivers/gpu/drm/i915/i915_selftest.h
@@ -107,6 +107,7 @@ int __i915_subtests(const char *caller,
 
 #define I915_SELFTEST_DECLARE(x) x
 #define I915_SELFTEST_ONLY(x) unlikely(x)
+#define I915_SELFTEST_EXPORT
 
 #else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
 
@@ -116,6 +117,7 @@ static inline int i915_perf_selftests(struct pci_dev *pdev) { return 0; }
 
 #define I915_SELFTEST_DECLARE(x)
 #define I915_SELFTEST_ONLY(x) 0
+#define I915_SELFTEST_EXPORT static
 
 #endif
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 03/69] drm/i915: Move cmd parser pinning to execbuffer
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 01/69] drm/i915: Do not share hwsp across contexts any more, v7 Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 02/69] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 04/69] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2 Maarten Lankhorst
                   ` (70 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to get rid of allocations in the cmd parser because it needs
to be callable from a signaling context. As a first step, move all
pinning to execbuf, where we already hold all the locks.

Allocate the jump_whitelist in the execbuffer path, and add annotations
around intel_engine_cmd_parser() to ensure the command parser is only
called without allocating memory or taking any locks it's not supposed
to.

Because i915_gem_object_get_page() may also allocate memory, add an
allocation-free path to i915_gem_object_get_sg() and walk the sg list
manually. It should be similarly fast.
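
For reference, the shape of the allocation-free walk, condensed from
the copy_batch() hunk below (clflush handling dropped for brevity):

  sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false);
  while (remain) {
          unsigned long sg_max = sg->length >> PAGE_SHIFT;

          for (; remain && sg_ofs < sg_max; sg_ofs++) {
                  unsigned long len = min(remain, PAGE_SIZE - x);
                  void *map;

                  /* no radix-tree insertion, so no allocation here */
                  map = kmap_atomic(nth_page(sg_page(sg), sg_ofs));
                  memcpy(ptr, map + x, len);
                  kunmap_atomic(map);

                  ptr += len;
                  remain -= len;
                  x = 0;
          }

          sg_ofs = 0;
          sg = sg_next(sg);
  }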

This has the added benefit that all memory allocation errors are caught
before the point of no return, so -ENOMEM can be returned safely to the
execbuf submitter.
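
A condensed view of the resulting annotation in __eb_parse() (see the
hunk below): everything between the begin/end markers runs in dma-fence
signalling context, so lockdep will flag any allocation or forbidden
lock taken there:

  cookie = dma_fence_begin_signalling();
  ret = intel_engine_cmd_parser(pw->engine, pw->batch,
                                pw->batch_offset, pw->batch_length,
                                pw->shadow, pw->jump_whitelist,
                                pw->shadow_map, pw->batch_map);
  dma_fence_end_signalling(cookie);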

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  74 ++++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  21 +++-
 drivers/gpu/drm/i915/gt/intel_ggtt.c          |   2 +-
 drivers/gpu/drm/i915/i915_cmd_parser.c        | 104 ++++++++----------
 drivers/gpu/drm/i915/i915_drv.h               |   7 +-
 drivers/gpu/drm/i915/i915_memcpy.c            |   2 +-
 drivers/gpu/drm/i915/i915_memcpy.h            |   2 +-
 8 files changed, 142 insertions(+), 80 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index fe170186dd42..3981f8ef3fcb 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -28,6 +28,7 @@
 #include "i915_sw_fence_work.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
+#include "i915_memcpy.h"
 
 struct eb_vma {
 	struct i915_vma *vma;
@@ -2267,24 +2268,45 @@ struct eb_parse_work {
 	struct i915_vma *trampoline;
 	unsigned long batch_offset;
 	unsigned long batch_length;
+	unsigned long *jump_whitelist;
+	const void *batch_map;
+	void *shadow_map;
 };
 
 static int __eb_parse(struct dma_fence_work *work)
 {
 	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
+	int ret;
+	bool cookie;
 
-	return intel_engine_cmd_parser(pw->engine,
-				       pw->batch,
-				       pw->batch_offset,
-				       pw->batch_length,
-				       pw->shadow,
-				       pw->trampoline);
+	cookie = dma_fence_begin_signalling();
+	ret = intel_engine_cmd_parser(pw->engine,
+				      pw->batch,
+				      pw->batch_offset,
+				      pw->batch_length,
+				      pw->shadow,
+				      pw->jump_whitelist,
+				      pw->shadow_map,
+				      pw->batch_map);
+	dma_fence_end_signalling(cookie);
+
+	return ret;
 }
 
 static void __eb_parse_release(struct dma_fence_work *work)
 {
 	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
 
+	if (!IS_ERR_OR_NULL(pw->jump_whitelist))
+		kfree(pw->jump_whitelist);
+
+	if (pw->batch_map)
+		i915_gem_object_unpin_map(pw->batch->obj);
+	else
+		i915_gem_object_unpin_pages(pw->batch->obj);
+
+	i915_gem_object_unpin_map(pw->shadow->obj);
+
 	if (pw->trampoline)
 		i915_active_release(&pw->trampoline->active);
 	i915_active_release(&pw->shadow->active);
@@ -2334,6 +2356,8 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 			     struct i915_vma *trampoline)
 {
 	struct eb_parse_work *pw;
+	struct drm_i915_gem_object *batch = eb->batch->vma->obj;
+	bool needs_clflush;
 	int err;
 
 	GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset));
@@ -2357,6 +2381,34 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 			goto err_shadow;
 	}
 
+	pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_WB);
+	if (IS_ERR(pw->shadow_map)) {
+		err = PTR_ERR(pw->shadow_map);
+		goto err_trampoline;
+	}
+
+	needs_clflush =
+		!(batch->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
+
+	pw->batch_map = ERR_PTR(-ENODEV);
+	if (needs_clflush && i915_has_memcpy_from_wc())
+		pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC);
+
+	if (IS_ERR(pw->batch_map)) {
+		err = i915_gem_object_pin_pages(batch);
+		if (err)
+			goto err_unmap_shadow;
+		pw->batch_map = NULL;
+	}
+
+	pw->jump_whitelist =
+		intel_engine_cmd_parser_alloc_jump_whitelist(eb->batch_len,
+							     trampoline);
+	if (IS_ERR(pw->jump_whitelist)) {
+		err = PTR_ERR(pw->jump_whitelist);
+		goto err_unmap_batch;
+	}
+
 	dma_fence_work_init(&pw->base, &eb_parse_ops);
 
 	pw->engine = eb->engine;
@@ -2396,6 +2448,16 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 	dma_fence_work_commit_imm(&pw->base);
 	return err;
 
+err_unmap_batch:
+	if (pw->batch_map)
+		i915_gem_object_unpin_map(batch);
+	else
+		i915_gem_object_unpin_pages(batch);
+err_unmap_shadow:
+	i915_gem_object_unpin_map(shadow->obj);
+err_trampoline:
+	if (trampoline)
+		i915_active_release(&trampoline->active);
 err_shadow:
 	i915_active_release(&shadow->active);
 err_batch:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 366d23afbb1a..dc949404843a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -325,22 +325,22 @@ struct scatterlist *
 __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 			 struct i915_gem_object_page_iter *iter,
 			 unsigned int n,
-			 unsigned int *offset);
+			 unsigned int *offset, bool allow_alloc);
 
 static inline struct scatterlist *
 i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 		       unsigned int n,
-		       unsigned int *offset)
+		       unsigned int *offset, bool allow_alloc)
 {
-	return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset);
+	return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, allow_alloc);
 }
 
 static inline struct scatterlist *
 i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj,
 			   unsigned int n,
-			   unsigned int *offset)
+			   unsigned int *offset, bool allow_alloc)
 {
-	return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset);
+	return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, allow_alloc);
 }
 
 struct page *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 43028f3539a6..d44b72dd13fe 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -448,7 +448,8 @@ struct scatterlist *
 __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 			 struct i915_gem_object_page_iter *iter,
 			 unsigned int n,
-			 unsigned int *offset)
+			 unsigned int *offset,
+			 bool allow_alloc)
 {
 	const bool dma = iter == &obj->mm.get_dma_page;
 	struct scatterlist *sg;
@@ -470,6 +471,9 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 	if (n < READ_ONCE(iter->sg_idx))
 		goto lookup;
 
+	if (!allow_alloc)
+		goto manual_lookup;
+
 	mutex_lock(&iter->lock);
 
 	/* We prefer to reuse the last sg so that repeated lookup of this
@@ -519,7 +523,16 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 	if (unlikely(n < idx)) /* insertion completed by another thread */
 		goto lookup;
 
-	/* In case we failed to insert the entry into the radixtree, we need
+	goto manual_walk;
+
+manual_lookup:
+	idx = 0;
+	sg = obj->mm.pages->sgl;
+	count = __sg_page_count(sg);
+
+manual_walk:
+	/*
+	 * In case we failed to insert the entry into the radixtree, we need
 	 * to look beyond the current sg.
 	 */
 	while (idx + count <= n) {
@@ -566,7 +579,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n)
 
 	GEM_BUG_ON(!i915_gem_object_has_struct_page(obj));
 
-	sg = i915_gem_object_get_sg(obj, n, &offset);
+	sg = i915_gem_object_get_sg(obj, n, &offset, true);
 	return nth_page(sg_page(sg), offset);
 }
 
@@ -592,7 +605,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
 	struct scatterlist *sg;
 	unsigned int offset;
 
-	sg = i915_gem_object_get_sg_dma(obj, n, &offset);
+	sg = i915_gem_object_get_sg_dma(obj, n, &offset, true);
 
 	if (len)
 		*len = sg_dma_len(sg) - (offset << PAGE_SHIFT);
diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
index b0b8ded834f0..c8eb034c806a 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
@@ -1434,7 +1434,7 @@ intel_partial_pages(const struct i915_ggtt_view *view,
 	if (ret)
 		goto err_sg_alloc;
 
-	iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset);
+	iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset, true);
 	GEM_BUG_ON(!iter);
 
 	sg = st->sgl;
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index 5f86f5b2caf6..e6f1e93abbbb 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -1144,38 +1144,20 @@ find_reg(const struct intel_engine_cs *engine, u32 addr)
 /* Returns a vmap'd pointer to dst_obj, which the caller must unmap */
 static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
 		       struct drm_i915_gem_object *src_obj,
-		       unsigned long offset, unsigned long length)
+		       unsigned long offset, unsigned long length,
+		       void *dst, const void *src)
 {
-	bool needs_clflush;
-	void *dst, *src;
-	int ret;
-
-	dst = i915_gem_object_pin_map(dst_obj, I915_MAP_WB);
-	if (IS_ERR(dst))
-		return dst;
-
-	ret = i915_gem_object_pin_pages(src_obj);
-	if (ret) {
-		i915_gem_object_unpin_map(dst_obj);
-		return ERR_PTR(ret);
-	}
-
-	needs_clflush =
+	bool needs_clflush =
 		!(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
 
-	src = ERR_PTR(-ENODEV);
-	if (needs_clflush && i915_has_memcpy_from_wc()) {
-		src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
-		if (!IS_ERR(src)) {
-			i915_unaligned_memcpy_from_wc(dst,
-						      src + offset,
-						      length);
-			i915_gem_object_unpin_map(src_obj);
-		}
-	}
-	if (IS_ERR(src)) {
-		unsigned long x, n, remain;
+	if (src) {
+		GEM_BUG_ON(!needs_clflush);
+		i915_unaligned_memcpy_from_wc(dst, src + offset, length);
+	} else {
+		struct scatterlist *sg;
 		void *ptr;
+		unsigned int x, sg_ofs;
+		unsigned long remain;
 
 		/*
 		 * We can avoid clflushing partial cachelines before the write
@@ -1192,23 +1174,31 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
 
 		ptr = dst;
 		x = offset_in_page(offset);
-		for (n = offset >> PAGE_SHIFT; remain; n++) {
-			int len = min(remain, PAGE_SIZE - x);
-
-			src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
-			if (needs_clflush)
-				drm_clflush_virt_range(src + x, len);
-			memcpy(ptr, src + x, len);
-			kunmap_atomic(src);
-
-			ptr += len;
-			remain -= len;
-			x = 0;
+		sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false);
+
+		while (remain) {
+			unsigned long sg_max = sg->length >> PAGE_SHIFT;
+
+			for (; remain && sg_ofs < sg_max; sg_ofs++) {
+				unsigned long len = min(remain, PAGE_SIZE - x);
+				void *map;
+
+				map = kmap_atomic(nth_page(sg_page(sg), sg_ofs));
+				if (needs_clflush)
+					drm_clflush_virt_range(map + x, len);
+				memcpy(ptr, map + x, len);
+				kunmap_atomic(map);
+
+				ptr += len;
+				remain -= len;
+				x = 0;
+			}
+
+			sg_ofs = 0;
+			sg = sg_next(sg);
 		}
 	}
 
-	i915_gem_object_unpin_pages(src_obj);
-
 	memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
 
 	/* dst_obj is returned with vmap pinned */
@@ -1370,9 +1360,6 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
 	if (target_cmd_index == offset)
 		return 0;
 
-	if (IS_ERR(jump_whitelist))
-		return PTR_ERR(jump_whitelist);
-
 	if (!test_bit(target_cmd_index, jump_whitelist)) {
 		DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
 			  jump_target);
@@ -1382,10 +1369,14 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
 	return 0;
 }
 
-static unsigned long *alloc_whitelist(u32 batch_length)
+unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+							    bool trampoline)
 {
 	unsigned long *jmp;
 
+	if (trampoline)
+		return NULL;
+
 	/*
 	 * We expect batch_length to be less than 256KiB for known users,
 	 * i.e. we need at most an 8KiB bitmap allocation which should be
@@ -1423,14 +1414,16 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 			    unsigned long batch_offset,
 			    unsigned long batch_length,
 			    struct i915_vma *shadow,
-			    bool trampoline)
+			    unsigned long *jump_whitelist,
+			    void *shadow_map,
+			    const void *batch_map)
 {
 	u32 *cmd, *batch_end, offset = 0;
 	struct drm_i915_cmd_descriptor default_desc = noop_desc;
 	const struct drm_i915_cmd_descriptor *desc = &default_desc;
-	unsigned long *jump_whitelist;
 	u64 batch_addr, shadow_addr;
 	int ret = 0;
+	bool trampoline = !jump_whitelist;
 
 	GEM_BUG_ON(!IS_ALIGNED(batch_offset, sizeof(*cmd)));
 	GEM_BUG_ON(!IS_ALIGNED(batch_length, sizeof(*cmd)));
@@ -1438,16 +1431,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 				     batch->size));
 	GEM_BUG_ON(!batch_length);
 
-	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length);
-	if (IS_ERR(cmd)) {
-		DRM_DEBUG("CMD: Failed to copy batch\n");
-		return PTR_ERR(cmd);
-	}
-
-	jump_whitelist = NULL;
-	if (!trampoline)
-		/* Defer failure until attempted use */
-		jump_whitelist = alloc_whitelist(batch_length);
+	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length,
+			 shadow_map, batch_map);
 
 	shadow_addr = gen8_canonical_addr(shadow->node.start);
 	batch_addr = gen8_canonical_addr(batch->node.start + batch_offset);
@@ -1548,9 +1533,6 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 
 	i915_gem_object_flush_map(shadow->obj);
 
-	if (!IS_ERR_OR_NULL(jump_whitelist))
-		kfree(jump_whitelist);
-	i915_gem_object_unpin_map(shadow->obj);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1d45d7492d10..09318340e693 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1950,12 +1950,17 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
 int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
 int intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
 void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
+unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+							    bool trampoline);
+
 int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 			    struct i915_vma *batch,
 			    unsigned long batch_offset,
 			    unsigned long batch_length,
 			    struct i915_vma *shadow,
-			    bool trampoline);
+			    unsigned long *jump_whitelist,
+			    void *shadow_map,
+			    const void *batch_map);
 #define I915_CMD_PARSER_TRAMPOLINE_SIZE 8
 
 /* intel_device_info.c */
diff --git a/drivers/gpu/drm/i915/i915_memcpy.c b/drivers/gpu/drm/i915/i915_memcpy.c
index 7b3b83bd5ab8..1b021a4902de 100644
--- a/drivers/gpu/drm/i915/i915_memcpy.c
+++ b/drivers/gpu/drm/i915/i915_memcpy.c
@@ -135,7 +135,7 @@ bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len)
  * accepts that its arguments may not be aligned, but are valid for the
  * potential 16-byte read past the end.
  */
-void i915_unaligned_memcpy_from_wc(void *dst, void *src, unsigned long len)
+void i915_unaligned_memcpy_from_wc(void *dst, const void *src, unsigned long len)
 {
 	unsigned long addr;
 
diff --git a/drivers/gpu/drm/i915/i915_memcpy.h b/drivers/gpu/drm/i915/i915_memcpy.h
index e36d30edd987..3df063a3293b 100644
--- a/drivers/gpu/drm/i915/i915_memcpy.h
+++ b/drivers/gpu/drm/i915/i915_memcpy.h
@@ -13,7 +13,7 @@ struct drm_i915_private;
 void i915_memcpy_init_early(struct drm_i915_private *i915);
 
 bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len);
-void i915_unaligned_memcpy_from_wc(void *dst, void *src, unsigned long len);
+void i915_unaligned_memcpy_from_wc(void *dst, const void *src, unsigned long len);
 
 /* The movntdqa instructions used for memcpy-from-wc require 16-byte alignment,
  * as well as SSE4.1 support. i915_memcpy_from_wc() will report if it cannot
-- 
2.30.1


* [Intel-gfx] [PATCH v8 04/69] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (2 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 03/69] drm/i915: Move cmd parser pinning to execbuffer Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 05/69] drm/i915: Ensure we hold the object mutex in pin correctly Maarten Lankhorst
                   ` (69 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

i915_vma_pin may fail with -EDEADLK when we start locking page tables,
so ensure we handle this correctly.
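
For reference, a minimal sketch of the caller-side retry loop that
-EDEADLK feeds into, using the existing i915_gem_ww_ctx helpers (obj,
vma and flags are placeholders):

  struct i915_gem_ww_ctx ww;
  int err;

  i915_gem_ww_ctx_init(&ww, true);
retry:
  err = i915_gem_object_lock(obj, &ww);
  if (!err)
          err = i915_vma_pin_ww(vma, &ww, 0, 0, flags);
  if (err == -EDEADLK) {
          /* drop all held locks, wait for the contended one, retry */
          err = i915_gem_ww_ctx_backoff(&ww);
          if (!err)
                  goto retry;
  }
  i915_gem_ww_ctx_fini(&ww);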

Changes since v1:
- Drop -EDEADLK todo, this commit handles it.
- Change eb_pin_vma from sort-of-bool + -EDEADLK to a proper int. (Matt)

Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 35 +++++++++++++------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 3981f8ef3fcb..1938dd739454 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -420,13 +420,14 @@ static u64 eb_pin_flags(const struct drm_i915_gem_exec_object2 *entry,
 	return pin_flags;
 }
 
-static inline bool
+static inline int
 eb_pin_vma(struct i915_execbuffer *eb,
 	   const struct drm_i915_gem_exec_object2 *entry,
 	   struct eb_vma *ev)
 {
 	struct i915_vma *vma = ev->vma;
 	u64 pin_flags;
+	int err;
 
 	if (vma->node.size)
 		pin_flags = vma->node.start;
@@ -438,24 +439,29 @@ eb_pin_vma(struct i915_execbuffer *eb,
 		pin_flags |= PIN_GLOBAL;
 
 	/* Attempt to reuse the current location if available */
-	/* TODO: Add -EDEADLK handling here */
-	if (unlikely(i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags))) {
+	err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags);
+	if (err == -EDEADLK)
+		return err;
+
+	if (unlikely(err)) {
 		if (entry->flags & EXEC_OBJECT_PINNED)
-			return false;
+			return err;
 
 		/* Failing that pick any _free_ space if suitable */
-		if (unlikely(i915_vma_pin_ww(vma, &eb->ww,
+		err = i915_vma_pin_ww(vma, &eb->ww,
 					     entry->pad_to_size,
 					     entry->alignment,
 					     eb_pin_flags(entry, ev->flags) |
-					     PIN_USER | PIN_NOEVICT)))
-			return false;
+					     PIN_USER | PIN_NOEVICT);
+		if (unlikely(err))
+			return err;
 	}
 
 	if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) {
-		if (unlikely(i915_vma_pin_fence(vma))) {
+		err = i915_vma_pin_fence(vma);
+		if (unlikely(err)) {
 			i915_vma_unpin(vma);
-			return false;
+			return err;
 		}
 
 		if (vma->fence)
@@ -463,7 +469,10 @@ eb_pin_vma(struct i915_execbuffer *eb,
 	}
 
 	ev->flags |= __EXEC_OBJECT_HAS_PIN;
-	return !eb_vma_misplaced(entry, vma, ev->flags);
+	if (eb_vma_misplaced(entry, vma, ev->flags))
+		return -EBADSLT;
+
+	return 0;
 }
 
 static inline void
@@ -899,7 +908,11 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
 		if (err)
 			return err;
 
-		if (eb_pin_vma(eb, entry, ev)) {
+		err = eb_pin_vma(eb, entry, ev);
+		if (err == -EDEADLK)
+			return err;
+
+		if (!err) {
 			if (entry->offset != vma->node.start) {
 				entry->offset = vma->node.start | UPDATE;
 				eb->args->flags |= __EXEC_HAS_RELOC;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 05/69] drm/i915: Ensure we hold the object mutex in pin correctly.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (3 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 04/69] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2 Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 06/69] drm/i915: Add gem object locking to madvise Maarten Lankhorst
                   ` (68 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Currently there are a lot of places where we hold the gem object lock
but have not yet been converted to the ww dance. Complain loudly about
those places.

i915_vma_pin must not be called with the object lock held, so that the
ww dance remains possible, while i915_vma_pin_ww must.
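
To illustrate what the new asserts catch, a sketch (assuming vma->resv
is the object's reservation lock):

  /* fine: ww path, reservation held across the pin */
  err = i915_gem_object_lock(obj, &ww);
  if (!err)
          err = i915_vma_pin_ww(vma, &ww, 0, 0, flags);

  /* now warns under CONFIG_LOCKDEP: plain pin while the resv is held */
  i915_gem_object_lock(obj, NULL);
  err = i915_vma_pin(vma, 0, 0, flags);
  i915_gem_object_unlock(obj);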

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #irc
---
 drivers/gpu/drm/i915/gt/intel_renderstate.c |  2 +-
 drivers/gpu/drm/i915/i915_vma.c             | 11 ++++++++++-
 drivers/gpu/drm/i915/i915_vma.h             |  3 +++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_renderstate.c b/drivers/gpu/drm/i915/gt/intel_renderstate.c
index 0f7c0a148b80..b03e197b1d99 100644
--- a/drivers/gpu/drm/i915/gt/intel_renderstate.c
+++ b/drivers/gpu/drm/i915/gt/intel_renderstate.c
@@ -176,7 +176,7 @@ int intel_renderstate_init(struct intel_renderstate *so,
 	if (err)
 		goto err_context;
 
-	err = i915_vma_pin(so->vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
+	err = i915_vma_pin_ww(so->vma, &so->ww, 0, 0, PIN_GLOBAL | PIN_HIGH);
 	if (err)
 		goto err_context;
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index caa9b041616b..7310893086f7 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -865,6 +865,8 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 #ifdef CONFIG_PROVE_LOCKING
 	if (debug_locks && lockdep_is_held(&vma->vm->i915->drm.struct_mutex))
 		WARN_ON(!ww);
+	if (debug_locks && ww && vma->resv)
+		assert_vma_held(vma);
 #endif
 
 	BUILD_BUG_ON(PIN_GLOBAL != I915_VMA_GLOBAL_BIND);
@@ -1020,8 +1022,15 @@ int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 
 	GEM_BUG_ON(!i915_vma_is_ggtt(vma));
 
+#ifdef CONFIG_LOCKDEP
+	WARN_ON(!ww && vma->resv && dma_resv_held(vma->resv));
+#endif
+
 	do {
-		err = i915_vma_pin_ww(vma, ww, 0, align, flags | PIN_GLOBAL);
+		if (ww)
+			err = i915_vma_pin_ww(vma, ww, 0, align, flags | PIN_GLOBAL);
+		else
+			err = i915_vma_pin(vma, 0, align, flags | PIN_GLOBAL);
 		if (err != -ENOSPC) {
 			if (!err) {
 				err = i915_vma_wait_for_bind(vma);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index a64adc8c883b..3c914c9de9a9 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -243,6 +243,9 @@ i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 static inline int __must_check
 i915_vma_pin(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 {
+#ifdef CONFIG_LOCKDEP
+	WARN_ON_ONCE(vma->resv && dma_resv_held(vma->resv));
+#endif
 	return i915_vma_pin_ww(vma, NULL, size, alignment, flags);
 }
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 06/69] drm/i915: Add gem object locking to madvise.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (4 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 05/69] drm/i915: Ensure we hold the object mutex in pin correctly Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 07/69] drm/i915: Move HAS_STRUCT_PAGE to obj->flags Maarten Lankhorst
                   ` (67 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Madvise doesn't need the full ww dance; it only needs the object lock
to check whether pages are bound.
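
The resulting lock nesting, in sketch form (mirroring the hunk below):

  err = i915_gem_object_lock_interruptible(obj, NULL); /* outer: object lock */
  if (err)
          goto out;

  err = mutex_lock_interruptible(&obj->mm.lock);       /* inner: mm.lock */
  if (err)
          goto out_ww;

  /* madv bookkeeping happens under both locks */

  mutex_unlock(&obj->mm.lock);
out_ww:
  i915_gem_object_unlock(obj);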

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #irc
---
 drivers/gpu/drm/i915/i915_gem.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b2e3b5cfccb4..daf6a742a766 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -941,10 +941,14 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
+	err = i915_gem_object_lock_interruptible(obj, NULL);
 	if (err)
 		goto out;
 
+	err = mutex_lock_interruptible(&obj->mm.lock);
+	if (err)
+		goto out_ww;
+
 	if (i915_gem_object_has_pages(obj) &&
 	    i915_gem_object_is_tiled(obj) &&
 	    i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -989,6 +993,8 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	args->retained = obj->mm.madv != __I915_MADV_PURGED;
 	mutex_unlock(&obj->mm.lock);
 
+out_ww:
+	i915_gem_object_unlock(obj);
 out:
 	i915_gem_object_put(obj);
 	return err;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 07/69] drm/i915: Move HAS_STRUCT_PAGE to obj->flags
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (5 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 06/69] drm/i915: Add gem object locking to madvise Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 08/69] drm/i915: Rework struct phys attachment handling Maarten Lankhorst
                   ` (66 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We want to stop swapping the ops structure when attaching phys pages,
so we need to kill off HAS_STRUCT_PAGE in ops->flags and move it into
the bo's own flags.

This removes a potential race where the wrong obj->ops is dereferenced
without the ww mutex held.
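
Creation-side sketch of the new per-bo flag (my_ops stands in for any
ops table that used to carry HAS_STRUCT_PAGE):

  drm_gem_private_object_init(&i915->drm, &obj->base, size);
  i915_gem_object_init(obj, &my_ops, &lock_class,
                       I915_BO_ALLOC_STRUCT_PAGE);

  /* queries now read the bo, not the ops table */
  if (i915_gem_object_has_struct_page(obj))
          page = i915_gem_object_get_page(obj, 0);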

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c           |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c         |  6 +++---
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c             |  4 ++--
 drivers/gpu/drm/i915/gem/i915_gem_mman.c             |  7 +++----
 drivers/gpu/drm/i915/gem/i915_gem_object.c           |  4 +++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h           |  5 +++--
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h     | 10 ++++++----
 drivers/gpu/drm/i915/gem/i915_gem_pages.c            |  5 ++---
 drivers/gpu/drm/i915/gem/i915_gem_phys.c             |  2 ++
 drivers/gpu/drm/i915/gem/i915_gem_region.c           |  4 +---
 drivers/gpu/drm/i915/gem/i915_gem_region.h           |  3 +--
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c            |  8 ++++----
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c           |  4 ++--
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c          |  6 +++---
 drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c |  4 ++--
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c      | 10 +++++-----
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c   | 11 ++++-------
 drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c   | 12 ++++++++++++
 drivers/gpu/drm/i915/gvt/dmabuf.c                    |  2 +-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c        |  2 +-
 drivers/gpu/drm/i915/selftests/mock_region.c         |  4 ++--
 21 files changed, 63 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index d804b0003e0d..c7100a83b8ea 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -261,7 +261,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 	}
 
 	drm_gem_private_object_init(dev, &obj->base, dma_buf->size);
-	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class, 0);
 	obj->base.import_attach = attach;
 	obj->base.resv = dma_buf->resv;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ad22f42541bd..21cc40897ca8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -138,8 +138,7 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
 
 static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
 	.name = "i915_gem_object_internal",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 	.get_pages = i915_gem_object_get_pages_internal,
 	.put_pages = i915_gem_object_put_pages_internal,
 };
@@ -178,7 +177,8 @@ i915_gem_object_create_internal(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	/*
 	 * Mark the object as volatile, such that the pages are marked as
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 194f35342710..ce1c83c13d05 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -40,13 +40,13 @@ int __i915_gem_lmem_object_init(struct intel_memory_region *mem,
 	struct drm_i915_private *i915 = mem->i915;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class, flags);
 
 	obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT;
 
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
 
-	i915_gem_object_init_memory_region(obj, mem, flags);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index ec28a6cde49b..c0034d811e50 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -251,7 +251,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 		goto out;
 
 	iomap = -1;
-	if (!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE)) {
+	if (!i915_gem_object_has_struct_page(obj)) {
 		iomap = obj->mm.region->iomap.base;
 		iomap -= obj->mm.region->region.start;
 	}
@@ -653,9 +653,8 @@ __assign_mmap_offset(struct drm_file *file,
 	}
 
 	if (mmap_type != I915_MMAP_TYPE_GTT &&
-	    !i915_gem_object_type_has(obj,
-				      I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-				      I915_GEM_OBJECT_HAS_IOMEM)) {
+	    !i915_gem_object_has_struct_page(obj) &&
+	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) {
 		err = -ENODEV;
 		goto out;
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 6cdff5fc5882..6083b9c14be6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -60,7 +60,7 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
 
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
-			  struct lock_class_key *key)
+			  struct lock_class_key *key, unsigned flags)
 {
 	__mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key);
 
@@ -78,6 +78,8 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	init_rcu_head(&obj->rcu);
 
 	obj->ops = ops;
+	GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS);
+	obj->flags = flags;
 
 	obj->mm.madv = I915_MADV_WILLNEED;
 	INIT_RADIX_TREE(&obj->mm.get_page.radix, GFP_KERNEL | __GFP_NOWARN);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index dc949404843a..f706812280d6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -49,7 +49,8 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj);
 
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
-			  struct lock_class_key *key);
+			  struct lock_class_key *key,
+			  unsigned alloc_flags);
 struct drm_i915_gem_object *
 i915_gem_object_create_shmem(struct drm_i915_private *i915,
 			     resource_size_t size);
@@ -241,7 +242,7 @@ i915_gem_object_type_has(const struct drm_i915_gem_object *obj,
 static inline bool
 i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj)
 {
-	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE);
+	return obj->flags & I915_BO_ALLOC_STRUCT_PAGE;
 }
 
 static inline bool
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 0a1fdbac882e..0320508b66b3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -30,7 +30,6 @@ struct i915_lut_handle {
 
 struct drm_i915_gem_object_ops {
 	unsigned int flags;
-#define I915_GEM_OBJECT_HAS_STRUCT_PAGE	BIT(0)
 #define I915_GEM_OBJECT_HAS_IOMEM	BIT(1)
 #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
 #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
@@ -171,9 +170,12 @@ struct drm_i915_gem_object {
 	unsigned long flags;
 #define I915_BO_ALLOC_CONTIGUOUS BIT(0)
 #define I915_BO_ALLOC_VOLATILE   BIT(1)
-#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE)
-#define I915_BO_READONLY         BIT(2)
-#define I915_TILING_QUIRK_BIT    3 /* unknown swizzling; do not release! */
+#define I915_BO_ALLOC_STRUCT_PAGE BIT(2)
+#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
+			     I915_BO_ALLOC_VOLATILE | \
+			     I915_BO_ALLOC_STRUCT_PAGE)
+#define I915_BO_READONLY         BIT(3)
+#define I915_TILING_QUIRK_BIT    4 /* unknown swizzling; do not release! */
 
 	/*
 	 * Is the object to be mapped as read-only to the GPU
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index d44b72dd13fe..bf61b88a2113 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -333,13 +333,12 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 			      enum i915_map_type type)
 {
 	enum i915_map_type has_type;
-	unsigned int flags;
 	bool pinned;
 	void *ptr;
 	int err;
 
-	flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
-	if (!i915_gem_object_type_has(obj, flags))
+	if (!i915_gem_object_has_struct_page(obj) &&
+	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return ERR_PTR(-ENXIO);
 
 	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 01fe89afe8c0..d1bf543d111a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -240,6 +240,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	pages = __i915_gem_object_unset_pages(obj);
 
 	obj->ops = &i915_gem_phys_ops;
+	obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
 
 	err = ____i915_gem_object_get_pages(obj);
 	if (err)
@@ -258,6 +259,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 err_xfer:
 	obj->ops = &i915_gem_shmem_ops;
+	obj->flags |= I915_BO_ALLOC_STRUCT_PAGE;
 	if (!IS_ERR_OR_NULL(pages)) {
 		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index 77dfa908f156..6a84fb6dde24 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -106,13 +106,11 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
 }
 
 void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
-					struct intel_memory_region *mem,
-					unsigned long flags)
+					struct intel_memory_region *mem)
 {
 	INIT_LIST_HEAD(&obj->mm.blocks);
 	obj->mm.region = intel_memory_region_get(mem);
 
-	obj->flags |= flags;
 	if (obj->base.size <= mem->min_page_size)
 		obj->flags |= I915_BO_ALLOC_CONTIGUOUS;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
index f2ff6f8bff74..ebddc86d78f7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -17,8 +17,7 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages);
 
 void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
-					struct intel_memory_region *mem,
-					unsigned long flags);
+					struct intel_memory_region *mem);
 void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj);
 
 struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 680b370a8ef3..bb82b3bc8830 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -430,8 +430,7 @@ static void shmem_release(struct drm_i915_gem_object *obj)
 
 const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.name = "i915_gem_object_shmem",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 
 	.get_pages = shmem_get_pages,
 	.put_pages = shmem_put_pages,
@@ -491,7 +490,8 @@ static int shmem_object_init(struct intel_memory_region *mem,
 	mapping_set_gfp_mask(mapping, mask);
 	GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
 
-	i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
@@ -515,7 +515,7 @@ static int shmem_object_init(struct intel_memory_region *mem,
 
 	i915_gem_object_set_cache_coherency(obj, cache_level);
 
-	i915_gem_object_init_memory_region(obj, mem, 0);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index c5f85296a45f..7cdb32d881d9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -630,7 +630,7 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 	int err;
 
 	drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size);
-	i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, 0);
 
 	obj->stolen = stolen;
 	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
@@ -641,7 +641,7 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 	if (err)
 		return err;
 
-	i915_gem_object_init_memory_region(obj, mem, 0);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 3e4785c2dfa2..0f9024c62c06 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -702,8 +702,7 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 
 static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE |
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
 		 I915_GEM_OBJECT_NO_MMAP |
 		 I915_GEM_OBJECT_ASYNC_CANCEL,
 	.get_pages = i915_gem_userptr_get_pages,
@@ -796,7 +795,8 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 		return -ENOMEM;
 
 	drm_gem_private_object_init(dev, &obj->base, args->user_size);
-	i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
index 2fb501a78a85..0c8ecfdf5405 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
@@ -89,7 +89,6 @@ static void huge_put_pages(struct drm_i915_gem_object *obj,
 
 static const struct drm_i915_gem_object_ops huge_ops = {
 	.name = "huge-gem",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
 	.get_pages = huge_get_pages,
 	.put_pages = huge_put_pages,
 };
@@ -115,7 +114,8 @@ huge_gem_object(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, dma_size);
-	i915_gem_object_init(obj, &huge_ops, &lock_class);
+	i915_gem_object_init(obj, &huge_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 10ee24b252dd..515dbc468175 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -140,8 +140,7 @@ static void put_huge_pages(struct drm_i915_gem_object *obj,
 
 static const struct drm_i915_gem_object_ops huge_page_ops = {
 	.name = "huge-gem",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 	.get_pages = get_huge_pages,
 	.put_pages = put_huge_pages,
 };
@@ -168,7 +167,8 @@ huge_pages_object(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &huge_page_ops, &lock_class);
+	i915_gem_object_init(obj, &huge_page_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	i915_gem_object_set_volatile(obj);
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
@@ -317,9 +317,9 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
 
 	if (single)
-		i915_gem_object_init(obj, &fake_ops_single, &lock_class);
+		i915_gem_object_init(obj, &fake_ops_single, &lock_class, 0);
 	else
-		i915_gem_object_init(obj, &fake_ops, &lock_class);
+		i915_gem_object_init(obj, &fake_ops, &lock_class, 0);
 
 	i915_gem_object_set_volatile(obj);
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 39293d98f34d..49f17708c143 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -819,9 +819,8 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
 		return false;
 
 	if (type != I915_MMAP_TYPE_GTT &&
-	    !i915_gem_object_type_has(obj,
-				      I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-				      I915_GEM_OBJECT_HAS_IOMEM))
+	    !i915_gem_object_has_struct_page(obj) &&
+	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return false;
 
 	return true;
@@ -961,10 +960,8 @@ static const char *repr_mmap_type(enum i915_mmap_type type)
 
 static bool can_access(const struct drm_i915_gem_object *obj)
 {
-	unsigned int flags =
-		I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
-
-	return i915_gem_object_type_has(obj, flags);
+	return i915_gem_object_has_struct_page(obj) ||
+	       i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM);
 }
 
 static int __igt_mmap_access(struct drm_i915_private *i915,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
index b62d02cb9579..338dada88359 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
@@ -25,12 +25,24 @@ static int mock_phys_object(void *arg)
 		goto out;
 	}
 
+	if (!i915_gem_object_has_struct_page(obj)) {
+		err = -EINVAL;
+		pr_err("shmem has no struct page\n");
+		goto out_obj;
+	}
+
 	err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
 	if (err) {
 		pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
 		goto out_obj;
 	}
 
+	if (i915_gem_object_has_struct_page(obj)) {
+		err = -EINVAL;
+		pr_err("shmem has a struct page\n");
+		goto out_obj;
+	}
+
 	if (obj->ops != &i915_gem_phys_ops) {
 		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
 		err = -EINVAL;
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
index c3eb3838fe88..d4f883f35b95 100644
--- a/drivers/gpu/drm/i915/gvt/dmabuf.c
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
@@ -218,7 +218,7 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
 
 	drm_gem_private_object_init(dev, &obj->base,
 		roundup(info->size, PAGE_SIZE));
-	i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class);
+	i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class, 0);
 	i915_gem_object_set_readonly(obj);
 
 	obj->read_domains = I915_GEM_DOMAIN_GTT;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index c1adea8765a9..5be6dcf4357e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -121,7 +121,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size)
 		goto err;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &fake_ops, &lock_class);
+	i915_gem_object_init(obj, &fake_ops, &lock_class, 0);
 
 	i915_gem_object_set_volatile(obj);
 
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 3c6021415274..5d2d010a1e22 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -27,13 +27,13 @@ static int mock_object_init(struct intel_memory_region *mem,
 		return -E2BIG;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class);
+	i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class, flags);
 
 	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
 
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
 
-	i915_gem_object_init_memory_region(obj, mem, flags);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return 0;
 }
-- 
2.30.1


* [Intel-gfx] [PATCH v8 08/69] drm/i915: Rework struct phys attachment handling
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (6 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 07/69] drm/i915: Move HAS_STRUCT_PAGE to obj->flags Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 09/69] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2 Maarten Lankhorst
                   ` (65 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Instead of creating a separate object type, teach the shmem type to
cope with its struct page backing being cleared. This lets obj->ops
stay fixed, ensuring we never run into a race from exchanging obj->ops
with other function pointers.
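
The resulting dispatch inside the unchanged shmem ops table, shown in
isolation (condensed from the hunks below):

  static int
  shmem_pread(struct drm_i915_gem_object *obj,
              const struct drm_i915_gem_pread *arg)
  {
          /* phys-backed, but obj->ops stays i915_gem_shmem_ops */
          if (!i915_gem_object_has_struct_page(obj))
                  return i915_gem_object_pread_phys(obj, arg);

          return -ENODEV; /* regular shmem uses the generic pread paths */
  }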

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |   8 ++
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 102 +++++++++---------
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  22 +++-
 .../drm/i915/gem/selftests/i915_gem_phys.c    |   6 --
 4 files changed, 78 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index f706812280d6..75e8734f50d2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -63,7 +63,15 @@ void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages,
 				     bool needs_clflush);
 
+int i915_gem_object_pwrite_phys(struct drm_i915_gem_object *obj,
+				const struct drm_i915_gem_pwrite *args);
+int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
+			       const struct drm_i915_gem_pread *args);
+
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align);
+void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
+				    struct sg_table *pages);
+
 
 void i915_gem_flush_free_objects(struct drm_i915_private *i915);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index d1bf543d111a..ed283e168f2f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -76,6 +76,8 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 
 	intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
 
+	/* We're no longer struct page backed */
+	obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
 	__i915_gem_object_set_pages(obj, st, sg->length);
 
 	return 0;
@@ -89,7 +91,7 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 	return -ENOMEM;
 }
 
-static void
+void
 i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 			       struct sg_table *pages)
 {
@@ -134,9 +136,8 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 			  vaddr, dma);
 }
 
-static int
-phys_pwrite(struct drm_i915_gem_object *obj,
-	    const struct drm_i915_gem_pwrite *args)
+int i915_gem_object_pwrite_phys(struct drm_i915_gem_object *obj,
+				const struct drm_i915_gem_pwrite *args)
 {
 	void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
 	char __user *user_data = u64_to_user_ptr(args->data_ptr);
@@ -165,9 +166,8 @@ phys_pwrite(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
-static int
-phys_pread(struct drm_i915_gem_object *obj,
-	   const struct drm_i915_gem_pread *args)
+int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
+			       const struct drm_i915_gem_pread *args)
 {
 	void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
 	char __user *user_data = u64_to_user_ptr(args->data_ptr);
@@ -186,86 +186,82 @@ phys_pread(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
-static void phys_release(struct drm_i915_gem_object *obj)
+static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj)
 {
-	fput(obj->base.filp);
-}
+	struct sg_table *pages;
+	int err;
 
-static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
-	.name = "i915_gem_object_phys",
-	.get_pages = i915_gem_object_get_pages_phys,
-	.put_pages = i915_gem_object_put_pages_phys,
+	pages = __i915_gem_object_unset_pages(obj);
+
+	err = i915_gem_object_get_pages_phys(obj);
+	if (err)
+		goto err_xfer;
 
-	.pread  = phys_pread,
-	.pwrite = phys_pwrite,
+	/* Perma-pin (until release) the physical set of pages */
+	__i915_gem_object_pin_pages(obj);
 
-	.release = phys_release,
-};
+	if (!IS_ERR_OR_NULL(pages))
+		i915_gem_shmem_ops.put_pages(obj, pages);
+
+	i915_gem_object_release_memory_region(obj);
+	return 0;
+
+err_xfer:
+	if (!IS_ERR_OR_NULL(pages)) {
+		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
+
+		__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
+	}
+	return err;
+}
 
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 {
-	struct sg_table *pages;
 	int err;
 
 	if (align > obj->base.size)
 		return -EINVAL;
 
-	if (obj->ops == &i915_gem_phys_ops)
-		return 0;
-
 	if (!i915_gem_object_is_shmem(obj))
 		return -EINVAL;
 
+	if (!i915_gem_object_has_struct_page(obj))
+		return 0;
+
 	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
 	if (err)
 		return err;
 
 	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 
+	if (unlikely(!i915_gem_object_has_struct_page(obj)))
+		goto out;
+
 	if (obj->mm.madv != I915_MADV_WILLNEED) {
 		err = -EFAULT;
-		goto err_unlock;
+		goto out;
 	}
 
 	if (i915_gem_object_has_tiling_quirk(obj)) {
 		err = -EFAULT;
-		goto err_unlock;
+		goto out;
 	}
 
-	if (obj->mm.mapping) {
+	if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) {
 		err = -EBUSY;
-		goto err_unlock;
+		goto out;
 	}
 
-	pages = __i915_gem_object_unset_pages(obj);
-
-	obj->ops = &i915_gem_phys_ops;
-	obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
-
-	err = ____i915_gem_object_get_pages(obj);
-	if (err)
-		goto err_xfer;
-
-	/* Perma-pin (until release) the physical set of pages */
-	__i915_gem_object_pin_pages(obj);
-
-	if (!IS_ERR_OR_NULL(pages))
-		i915_gem_shmem_ops.put_pages(obj, pages);
-
-	i915_gem_object_release_memory_region(obj);
-
-	mutex_unlock(&obj->mm.lock);
-	return 0;
+	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
+		drm_dbg(obj->base.dev,
+			"Attempting to obtain a purgeable object\n");
+		err = -EFAULT;
+		goto out;
+	}
 
-err_xfer:
-	obj->ops = &i915_gem_shmem_ops;
-	obj->flags |= I915_BO_ALLOC_STRUCT_PAGE;
-	if (!IS_ERR_OR_NULL(pages)) {
-		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
+	err = i915_gem_object_shmem_to_phys(obj);
 
-		__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
-	}
-err_unlock:
+out:
 	mutex_unlock(&obj->mm.lock);
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index bb82b3bc8830..c9820c19c5f2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -303,6 +303,11 @@ shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
 	struct pagevec pvec;
 	struct page *page;
 
+	if (unlikely(!i915_gem_object_has_struct_page(obj))) {
+		i915_gem_object_put_pages_phys(obj, pages);
+		return;
+	}
+
 	__i915_gem_object_release_shmem(obj, pages, true);
 
 	i915_gem_gtt_finish_pages(obj, pages);
@@ -343,6 +348,9 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
 	/* Caller already validated user args */
 	GEM_BUG_ON(!access_ok(user_data, arg->size));
 
+	if (!i915_gem_object_has_struct_page(obj))
+		return i915_gem_object_pwrite_phys(obj, arg);
+
 	/*
 	 * Before we instantiate/pin the backing store for our use, we
 	 * can prepopulate the shmemfs filp efficiently using a write into
@@ -421,9 +429,20 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
+static int
+shmem_pread(struct drm_i915_gem_object *obj,
+	    const struct drm_i915_gem_pread *arg)
+{
+	if (!i915_gem_object_has_struct_page(obj))
+		return i915_gem_object_pread_phys(obj, arg);
+
+	return -ENODEV;
+}
+
 static void shmem_release(struct drm_i915_gem_object *obj)
 {
-	i915_gem_object_release_memory_region(obj);
+	if (obj->flags & I915_BO_ALLOC_STRUCT_PAGE)
+		i915_gem_object_release_memory_region(obj);
 
 	fput(obj->base.filp);
 }
@@ -438,6 +457,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.writeback = shmem_writeback,
 
 	.pwrite = shmem_pwrite,
+	.pread = shmem_pread,
 
 	.release = shmem_release,
 };
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
index 338dada88359..238af7bd84f6 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
@@ -38,12 +38,6 @@ static int mock_phys_object(void *arg)
 	}
 
 	if (i915_gem_object_has_struct_page(obj)) {
-		err = -EINVAL;
-		pr_err("shmem has a struct page\n");
-		goto out_obj;
-	}
-
-	if (obj->ops != &i915_gem_phys_ops) {
 		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
 		err = -EINVAL;
 		goto out_obj;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 09/69] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (7 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 08/69] drm/i915: Rework struct phys attachment handling Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 10/69] drm/i915: make lockdep slightly happier about execbuf Maarten Lankhorst
                   ` (64 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Simply add i915_gem_object_lock; we may start passing ww to
get_pages() in the future, but that won't be the case here.
We override shmem's get_pages() handling by calling
i915_gem_object_get_pages_phys(), so no ww context is needed.

Changes since v1:
- Call the shmem put_pages() directly, as the callback would
  go down the phys free path.
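
For reference, the locking order this patch establishes in
i915_gem_object_attach_phys() is the object's ww lock first, then the
nested obj->mm.lock, unwound in reverse. A minimal sketch of that shape
(validation and the shmem-to-phys transfer elided; see the diff below
for the full function):

	err = i915_gem_object_lock_interruptible(obj, NULL);
	if (err)
		return err;

	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
	if (err)
		goto err_unlock;

	/* ... validate, then i915_gem_object_shmem_to_phys() ... */

	mutex_unlock(&obj->mm.lock);
err_unlock:
	i915_gem_object_unlock(obj);
	return err;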

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h |  3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c   | 12 ++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c  | 17 ++++++++++-------
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 75e8734f50d2..c8eb0df904f7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -69,10 +69,11 @@ int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
 			       const struct drm_i915_gem_pread *args);
 
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align);
+void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj,
+				     struct sg_table *pages);
 void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 				    struct sg_table *pages);
 
-
 void i915_gem_flush_free_objects(struct drm_i915_private *i915);
 
 struct sg_table *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index ed283e168f2f..06c481ff79d8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -201,7 +201,7 @@ static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj)
 	__i915_gem_object_pin_pages(obj);
 
 	if (!IS_ERR_OR_NULL(pages))
-		i915_gem_shmem_ops.put_pages(obj, pages);
+		i915_gem_object_put_pages_shmem(obj, pages);
 
 	i915_gem_object_release_memory_region(obj);
 	return 0;
@@ -232,7 +232,13 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = i915_gem_object_lock_interruptible(obj, NULL);
+	if (err)
+		return err;
+
+	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	if (err)
+		goto err_unlock;
 
 	if (unlikely(!i915_gem_object_has_struct_page(obj)))
 		goto out;
@@ -263,6 +269,8 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 out:
 	mutex_unlock(&obj->mm.lock);
+err_unlock:
+	i915_gem_object_unlock(obj);
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index c9820c19c5f2..59fb16a82270 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -296,18 +296,12 @@ __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 	__start_cpu_write(obj);
 }
 
-static void
-shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
+void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj, struct sg_table *pages)
 {
 	struct sgt_iter sgt_iter;
 	struct pagevec pvec;
 	struct page *page;
 
-	if (unlikely(!i915_gem_object_has_struct_page(obj))) {
-		i915_gem_object_put_pages_phys(obj, pages);
-		return;
-	}
-
 	__i915_gem_object_release_shmem(obj, pages, true);
 
 	i915_gem_gtt_finish_pages(obj, pages);
@@ -336,6 +330,15 @@ shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
 	kfree(pages);
 }
 
+static void
+shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
+{
+	if (likely(i915_gem_object_has_struct_page(obj)))
+		i915_gem_object_put_pages_shmem(obj, pages);
+	else
+		i915_gem_object_put_pages_phys(obj, pages);
+}
+
 static int
 shmem_pwrite(struct drm_i915_gem_object *obj,
 	     const struct drm_i915_gem_pwrite *arg)
-- 
2.30.1


* [Intel-gfx] [PATCH v8 10/69] drm/i915: make lockdep slightly happier about execbuf.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (8 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 09/69] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2 Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 11/69] drm/i915: Disable userptr pread/pwrite support Maarten Lankhorst
                   ` (63 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

As soon as we install fences, we should stop allocating memory
in order to prevent any potential deadlocks.

This is required later on, when we start adding support for
dma-fence annotations.
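
Concretely, the dma_resv slot is now reserved before the point of no
return: eb_validate_vmas() reserves the shared-fence slot while
allocation and backoff are still possible, and i915_vma_move_to_active()
is then told not to reserve again. The two halves, lifted from the diff
below:

	/* during validation, while we can still allocate and backoff: */
	if (!(ev->flags & EXEC_OBJECT_WRITE)) {
		err = dma_resv_reserve_shared(vma->resv, 1);
		if (err)
			return err;
	}

	/* once fences are about to be installed, no more allocations: */
	err = i915_vma_move_to_active(vma, eb->request,
				      flags | __EXEC_OBJECT_NO_RESERVE);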

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 24 ++++++++++++++-----
 drivers/gpu/drm/i915/i915_active.c            | 20 ++++++++--------
 drivers/gpu/drm/i915/i915_vma.c               |  8 ++++---
 drivers/gpu/drm/i915/i915_vma.h               |  3 +++
 4 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 1938dd739454..b5056bd80464 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -50,11 +50,12 @@ enum {
 #define DBG_FORCE_RELOC 0 /* choose one of the above! */
 };
 
-#define __EXEC_OBJECT_HAS_PIN		BIT(31)
-#define __EXEC_OBJECT_HAS_FENCE		BIT(30)
-#define __EXEC_OBJECT_NEEDS_MAP		BIT(29)
-#define __EXEC_OBJECT_NEEDS_BIAS	BIT(28)
-#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 28) /* all of the above */
+/* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
+#define __EXEC_OBJECT_HAS_PIN		BIT(30)
+#define __EXEC_OBJECT_HAS_FENCE		BIT(29)
+#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
+#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
+#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
 #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
 
 #define __EXEC_HAS_RELOC	BIT(31)
@@ -928,6 +929,12 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
 			}
 		}
 
+		if (!(ev->flags & EXEC_OBJECT_WRITE)) {
+			err = dma_resv_reserve_shared(vma->resv, 1);
+			if (err)
+				return err;
+		}
+
 		GEM_BUG_ON(drm_mm_node_allocated(&vma->node) &&
 			   eb_vma_misplaced(&eb->exec[i], vma, ev->flags));
 	}
@@ -2188,7 +2195,8 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 		}
 
 		if (err == 0)
-			err = i915_vma_move_to_active(vma, eb->request, flags);
+			err = i915_vma_move_to_active(vma, eb->request,
+						      flags | __EXEC_OBJECT_NO_RESERVE);
 	}
 
 	if (unlikely(err))
@@ -2440,6 +2448,10 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 	if (err)
 		goto err_commit;
 
+	err = dma_resv_reserve_shared(shadow->resv, 1);
+	if (err)
+		goto err_commit;
+
 	/* Wait for all writes (and relocs) into the batch to complete */
 	err = i915_sw_fence_await_reservation(&pw->base.chain,
 					      pw->batch->resv, NULL, false,
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 3bc616cc1ad2..cf9a3d384971 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -293,18 +293,13 @@ static struct active_node *__active_lookup(struct i915_active *ref, u64 idx)
 static struct i915_active_fence *
 active_instance(struct i915_active *ref, u64 idx)
 {
-	struct active_node *node, *prealloc;
+	struct active_node *node;
 	struct rb_node **p, *parent;
 
 	node = __active_lookup(ref, idx);
 	if (likely(node))
 		return &node->base;
 
-	/* Preallocate a replacement, just in case */
-	prealloc = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
-	if (!prealloc)
-		return NULL;
-
 	spin_lock_irq(&ref->tree_lock);
 	GEM_BUG_ON(i915_active_is_idle(ref));
 
@@ -314,10 +309,8 @@ active_instance(struct i915_active *ref, u64 idx)
 		parent = *p;
 
 		node = rb_entry(parent, struct active_node, node);
-		if (node->timeline == idx) {
-			kmem_cache_free(global.slab_cache, prealloc);
+		if (node->timeline == idx)
 			goto out;
-		}
 
 		if (node->timeline < idx)
 			p = &parent->rb_right;
@@ -325,7 +318,14 @@ active_instance(struct i915_active *ref, u64 idx)
 			p = &parent->rb_left;
 	}
 
-	node = prealloc;
+	/*
+	 * XXX: We should preallocate this before i915_active_ref() is ever
+	 *  called, but we cannot call into fs_reclaim() anyway, so use GFP_ATOMIC.
+	 */
+	node = kmem_cache_alloc(global.slab_cache, GFP_ATOMIC);
+	if (!node)
+		goto out;
+
 	__i915_active_fence_init(&node->base, NULL, node_retire);
 	node->ref = ref;
 	node->timeline = idx;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 7310893086f7..1ffda2aaa7a0 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -1247,9 +1247,11 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 		obj->write_domain = I915_GEM_DOMAIN_RENDER;
 		obj->read_domains = 0;
 	} else {
-		err = dma_resv_reserve_shared(vma->resv, 1);
-		if (unlikely(err))
-			return err;
+		if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
+			err = dma_resv_reserve_shared(vma->resv, 1);
+			if (unlikely(err))
+				return err;
+		}
 
 		dma_resv_add_shared_fence(vma->resv, &rq->fence);
 		obj->write_domain = 0;
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 3c914c9de9a9..6b48f5c42488 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -52,6 +52,9 @@ static inline bool i915_vma_is_active(const struct i915_vma *vma)
 	return !i915_active_is_idle(&vma->active);
 }
 
+/* do not reserve memory to prevent deadlocks */
+#define __EXEC_OBJECT_NO_RESERVE BIT(31)
+
 int __must_check __i915_vma_move_to_active(struct i915_vma *vma,
 					   struct i915_request *rq);
 int __must_check i915_vma_move_to_active(struct i915_vma *vma,
-- 
2.30.1


* [Intel-gfx] [PATCH v8 11/69] drm/i915: Disable userptr pread/pwrite support.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (9 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 10/69] drm/i915: make lockdep slightly happier about execbuf Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 12/69] drm/i915: No longer allow exporting userptr through dma-buf Maarten Lankhorst
                   ` (62 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Userptr should not need the kernel for a userspace memcpy; userspace
can simply call memcpy directly.

Specifically, disable i915_gem_pwrite_ioctl() and i915_gem_pread_ioctl().
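
From the userspace side, the replacement looks like this (a hedged
sketch using libdrm's drmIoctl(); fd, handle, data, src and size are
assumptions, with the BO created from src via the userptr ioctl):

	/* Old path: ask the kernel to copy into the userptr BO... */
	struct drm_i915_gem_pwrite pw = {
		.handle   = handle,
		.offset   = 0,
		.size     = size,
		.data_ptr = (uintptr_t)data,
	};
	drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &pw); /* now -EINVAL */

	/* New path: the backing store is the user's own memory. */
	memcpy(src, data, size);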

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

-- Still needs an ack from relevant userspace that it won't break, but should be good.
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem.c             |  5 +++++
 2 files changed, 25 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 0f9024c62c06..5a19699c2d7e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -700,6 +700,24 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 	return i915_gem_userptr_init__mmu_notifier(obj, 0);
 }
 
+static int
+i915_gem_userptr_pwrite(struct drm_i915_gem_object *obj,
+			const struct drm_i915_gem_pwrite *args)
+{
+	drm_dbg(obj->base.dev, "pwrite to userptr no longer allowed\n");
+
+	return -EINVAL;
+}
+
+static int
+i915_gem_userptr_pread(struct drm_i915_gem_object *obj,
+		       const struct drm_i915_gem_pread *args)
+{
+	drm_dbg(obj->base.dev, "pread from userptr no longer allowed\n");
+
+	return -EINVAL;
+}
+
 static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
 	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
@@ -708,6 +726,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
 	.dmabuf_export = i915_gem_userptr_dmabuf_export,
+	.pwrite = i915_gem_userptr_pwrite,
+	.pread = i915_gem_userptr_pread,
 	.release = i915_gem_userptr_release,
 };
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index daf6a742a766..22be1e7bf2dd 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -396,6 +396,11 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 	}
 
 	trace_i915_gem_object_pread(obj, args->offset, args->size);
+	ret = -ENODEV;
+	if (obj->ops->pread)
+		ret = obj->ops->pread(obj, args);
+	if (ret != -ENODEV)
+		goto out;
 
 	ret = i915_gem_object_wait(obj,
 				   I915_WAIT_INTERRUPTIBLE,
-- 
2.30.1


* [Intel-gfx] [PATCH v8 12/69] drm/i915: No longer allow exporting userptr through dma-buf
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (10 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 11/69] drm/i915: Disable userptr pread/pwrite support Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 13/69] drm/i915: Reject more ioctls for userptr, v2 Maarten Lankhorst
                   ` (61 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

It doesn't make sense to export a memory address; we will prevent
access to different address spaces this way when we rework userptr
handling, so it is best to disable it explicitly.
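
From userspace this means PRIME export of a userptr BO now fails up
front (a hedged sketch; fd and handle are assumptions):

	struct drm_prime_handle prime = {
		.handle = handle,	/* a userptr object */
		.flags  = DRM_CLOEXEC,
	};
	/* the dmabuf_export hook now rejects userptr: -EINVAL */
	ret = drmIoctl(fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);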

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 5a19699c2d7e..0c30ca52dee3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -694,10 +694,9 @@ i915_gem_userptr_release(struct drm_i915_gem_object *obj)
 static int
 i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 {
-	if (obj->userptr.mmu_object)
-		return 0;
+	drm_dbg(obj->base.dev, "Exporting userptr no longer allowed\n");
 
-	return i915_gem_userptr_init__mmu_notifier(obj, 0);
+	return -EINVAL;
 }
 
 static int
-- 
2.30.1


* [Intel-gfx] [PATCH v8 13/69] drm/i915: Reject more ioctls for userptr, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (11 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 12/69] drm/i915: No longer allow exporting userptr through dma-buf Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 14/69] drm/i915: Reject UNSYNCHRONIZED " Maarten Lankhorst
                   ` (60 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

There are a couple of ioctls related to tiling and cache placement
that make no sense for userptr; reject those (the resulting behavior
is sketched below, after the changelog):
- i915_gem_set_tiling_ioctl()
    Tiling should always be linear for userptr. Changing placement will
    fail with -ENXIO.
- i915_gem_set_caching_ioctl()
    Userptr memory should always be cached. Changing caching mode will
    fail with -ENXIO.
- i915_gem_set_domain_ioctl()
    Still temporarily allowed to work as intended; it's used to check
    userptr validity. With the reworked userptr code, it will keep
    working for this use case.

This, together with the previous changes, has been tested against
beignet using its own unit tests, and against intel-video-compute
using piglit's opencl tests.

Changes since v1:
- set_domain was apparently used in iris for checking userptr validity,
  keep it working as intended.
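
A hedged sketch of the resulting behavior on a userptr BO, matching the
list above (fd and handle are assumptions):

	struct drm_i915_gem_set_tiling tile = {
		.handle = handle, .tiling_mode = I915_TILING_X, .stride = 4096,
	};
	drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_TILING, &tile);   /* -ENXIO */

	struct drm_i915_gem_caching cache = {
		.handle = handle, .caching = I915_CACHING_CACHED,
	};
	drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &cache); /* 0, allowed */

	struct drm_i915_gem_set_domain dom = {
		.handle = handle, .read_domains = I915_GEM_DOMAIN_CPU,
	};
	drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &dom);    /* still works */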

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
---
 drivers/gpu/drm/i915/display/intel_display.c |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c   | 12 ++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_object.h   |  6 ++++++
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c  |  3 ++-
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 5bfc06c46e28..f0fa4cb6135e 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -11902,7 +11902,7 @@ static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
-	if (obj->userptr.mm) {
+	if (i915_gem_object_is_userptr(obj)) {
 		drm_dbg(&i915->drm,
 			"attempting to use a userptr for a framebuffer, denied\n");
 		return -EINVAL;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 0478b069c202..2f4980bf742e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -287,7 +287,14 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
 	 * not allowed to be changed by userspace.
 	 */
 	if (i915_gem_object_is_proxy(obj)) {
-		ret = -ENXIO;
+		/*
+		 * Silently allow cached for userptr; the vulkan driver
+		 * sets all objects to cached
+		 */
+		if (!i915_gem_object_is_userptr(obj) ||
+		    args->caching != I915_CACHING_CACHED)
+			ret = -ENXIO;
+
 		goto out;
 	}
 
@@ -467,7 +474,8 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	 * tracking for that backing storage. The proxy object is always
 	 * considered to be outside of any cache domain.
 	 */
-	if (i915_gem_object_is_proxy(obj)) {
+	if (i915_gem_object_is_proxy(obj) &&
+	    !i915_gem_object_is_userptr(obj)) {
 		err = -ENXIO;
 		goto out;
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index c8eb0df904f7..1008c6c47809 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -573,6 +573,12 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 					      enum fb_op_origin origin);
 
+static inline bool
+i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
+{
+	return obj->userptr.mm;
+}
+
 static inline void
 i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 				  enum fb_op_origin origin)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 0c30ca52dee3..c89cf911fb29 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -721,7 +721,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
 	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
 		 I915_GEM_OBJECT_NO_MMAP |
-		 I915_GEM_OBJECT_ASYNC_CANCEL,
+		 I915_GEM_OBJECT_ASYNC_CANCEL |
+		 I915_GEM_OBJECT_IS_PROXY,
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
 	.dmabuf_export = i915_gem_userptr_dmabuf_export,
-- 
2.30.1


* [Intel-gfx] [PATCH v8 14/69] drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (12 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 13/69] drm/i915: Reject more ioctls for userptr, v2 Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 15/69] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
                   ` (59 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dave Airlie, Thomas Hellström

We should not allow this any more, as it will break with the new userptr
implementation. It could still be made to work, but there's no point in
doing so.

Inspection of the beignet opencl driver shows that it's only used
when normal userptr is not available, which means that on new kernels
you will need CONFIG_I915_USERPTR.
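
For illustration, the now-rejected flag at creation time (a hedged
sketch; fd, ptr and size are assumptions, with ptr page-aligned):

	struct drm_i915_gem_userptr arg = {
		.user_ptr  = (uintptr_t)ptr,
		.user_size = size,
		.flags     = I915_USERPTR_UNSYNCHRONIZED,
	};
	/* previously needed CAP_SYS_ADMIN to succeed; now always -ENODEV */
	ret = drmIoctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &arg);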

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index c89cf911fb29..80bc10b4ac74 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -224,7 +224,7 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
 	struct i915_mmu_object *mo;
 
 	if (flags & I915_USERPTR_UNSYNCHRONIZED)
-		return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
+		return -ENODEV;
 
 	if (GEM_WARN_ON(!obj->userptr.mm))
 		return -EINVAL;
@@ -274,13 +274,7 @@ static int
 i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
 				    unsigned flags)
 {
-	if ((flags & I915_USERPTR_UNSYNCHRONIZED) == 0)
-		return -ENODEV;
-
-	if (!capable(CAP_SYS_ADMIN))
-		return -EPERM;
-
-	return 0;
+	return -ENODEV;
 }
 
 static void
-- 
2.30.1


* [Intel-gfx] [PATCH v8 15/69] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (13 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 14/69] drm/i915: Reject UNSYNCHRONIZED " Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7 Maarten Lankhorst
                   ` (58 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Now that unsynchronized mappings are removed, the only time userptr
works is when the MMU notifier is enabled. Put all of the userptr
code behind an MMU notifier ifdef.
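
The compiled-out side uses the usual static-inline stub pattern, as in
this helper taken from the diff below:

	static inline bool
	i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
	{
	#ifdef CONFIG_MMU_NOTIFIER
		return obj->userptr.mm;
	#else
		return false;
	#endif
	}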

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  4 ++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  2 +
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 58 +++++++------------
 drivers/gpu/drm/i915/i915_drv.h               |  2 +
 5 files changed, 31 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index b5056bd80464..c72440c10876 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1964,8 +1964,10 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 		err = 0;
 	}
 
+#ifdef CONFIG_MMU_NOTIFIER
 	if (!err)
 		flush_workqueue(eb->i915->mm.userptr_wq);
+#endif
 
 err_relock:
 	i915_gem_ww_ctx_init(&eb->ww, true);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 1008c6c47809..69509dbd01c7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -576,7 +576,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 static inline bool
 i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
 {
+#ifdef CONFIG_MMU_NOTIFIER
 	return obj->userptr.mm;
+#else
+	return false;
+#endif
 }
 
 static inline void
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 0320508b66b3..414322619781 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -296,6 +296,7 @@ struct drm_i915_gem_object {
 	unsigned long *bit_17;
 
 	union {
+#ifdef CONFIG_MMU_NOTIFIER
 		struct i915_gem_userptr {
 			uintptr_t ptr;
 
@@ -303,6 +304,7 @@ struct drm_i915_gem_object {
 			struct i915_mmu_object *mmu_object;
 			struct work_struct *work;
 		} userptr;
+#endif
 
 		struct drm_mm_node *stolen;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 80bc10b4ac74..b466ab2def4d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -15,6 +15,8 @@
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
 
+#if defined(CONFIG_MMU_NOTIFIER)
+
 struct i915_mm_struct {
 	struct mm_struct *mm;
 	struct drm_i915_private *i915;
@@ -24,7 +26,6 @@ struct i915_mm_struct {
 	struct rcu_work work;
 };
 
-#if defined(CONFIG_MMU_NOTIFIER)
 #include <linux/interval_tree.h>
 
 struct i915_mmu_notifier {
@@ -217,15 +218,11 @@ i915_mmu_notifier_find(struct i915_mm_struct *mm)
 }
 
 static int
-i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
-				    unsigned flags)
+i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
 {
 	struct i915_mmu_notifier *mn;
 	struct i915_mmu_object *mo;
 
-	if (flags & I915_USERPTR_UNSYNCHRONIZED)
-		return -ENODEV;
-
 	if (GEM_WARN_ON(!obj->userptr.mm))
 		return -EINVAL;
 
@@ -258,32 +255,6 @@ i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
 	kfree(mn);
 }
 
-#else
-
-static void
-__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
-{
-}
-
-static void
-i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
-{
-}
-
-static int
-i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
-				    unsigned flags)
-{
-	return -ENODEV;
-}
-
-static void
-i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
-		       struct mm_struct *mm)
-{
-}
-
-#endif
 
 static struct i915_mm_struct *
 __i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
@@ -725,6 +696,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.release = i915_gem_userptr_release,
 };
 
+#endif
+
 /*
  * Creates a new mm object that wraps some normal memory from the process
  * context - user memory.
@@ -765,12 +738,12 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 		       void *data,
 		       struct drm_file *file)
 {
-	static struct lock_class_key lock_class;
+	static struct lock_class_key __maybe_unused lock_class;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_i915_gem_userptr *args = data;
-	struct drm_i915_gem_object *obj;
-	int ret;
-	u32 handle;
+	struct drm_i915_gem_object __maybe_unused *obj;
+	int __maybe_unused ret;
+	u32 __maybe_unused handle;
 
 	if (!HAS_LLC(dev_priv) && !HAS_SNOOP(dev_priv)) {
 		/* We cannot support coherent userptr objects on hw without
@@ -795,6 +768,9 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	if (!access_ok((char __user *)(unsigned long)args->user_ptr, args->user_size))
 		return -EFAULT;
 
+	if (args->flags & I915_USERPTR_UNSYNCHRONIZED)
+		return -ENODEV;
+
 	if (args->flags & I915_USERPTR_READ_ONLY) {
 		/*
 		 * On almost all of the older hw, we cannot tell the GPU that
@@ -804,6 +780,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 			return -ENODEV;
 	}
 
+#ifdef CONFIG_MMU_NOTIFIER
 	obj = i915_gem_object_alloc();
 	if (obj == NULL)
 		return -ENOMEM;
@@ -825,7 +802,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	 */
 	ret = i915_gem_userptr_init__mm_struct(obj);
 	if (ret == 0)
-		ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags);
+		ret = i915_gem_userptr_init__mmu_notifier(obj);
 	if (ret == 0)
 		ret = drm_gem_handle_create(file, &obj->base, &handle);
 
@@ -836,10 +813,14 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 
 	args->handle = handle;
 	return 0;
+#else
+	return -ENODEV;
+#endif
 }
 
 int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 {
+#ifdef CONFIG_MMU_NOTIFIER
 	spin_lock_init(&dev_priv->mm_lock);
 	hash_init(dev_priv->mm_structs);
 
@@ -849,11 +830,14 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 				0);
 	if (!dev_priv->mm.userptr_wq)
 		return -ENOMEM;
+#endif
 
 	return 0;
 }
 
 void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
 {
+#ifdef CONFIG_MMU_NOTIFIER
 	destroy_workqueue(dev_priv->mm.userptr_wq);
+#endif
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 09318340e693..fc41cf6442a9 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -556,12 +556,14 @@ struct i915_gem_mm {
 	struct notifier_block vmap_notifier;
 	struct shrinker shrinker;
 
+#ifdef CONFIG_MMU_NOTIFIER
 	/**
 	 * Workqueue to fault in userptr pages, flushed by the execbuf
 	 * when required but otherwise left to userspace to try again
 	 * on EAGAIN.
 	 */
 	struct workqueue_struct *userptr_wq;
+#endif
 
 	/* shrinker accounting, also useful for userland debugging */
 	u64 shrink_memory;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (14 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 15/69] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 17:24   ` Thomas Hellström (Intel)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 17/69] drm/i915: Flatten obj->mm.lock Maarten Lankhorst
                   ` (57 subsequent siblings)
  73 siblings, 1 reply; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dave Airlie


Instead of doing what we do currently, which will never work with
PROVE_LOCKING, do the same as AMD does, and something similar to the
relocation slowpath. While all locks are dropped, we acquire the
pages for pinning. Once the locks are taken, we transfer those
pages to the bo in .get_pages(). As a final check before installing
the fences, we ensure that the mmu notifier was not called; if it was,
we return -EAGAIN to userspace to signal it has to start over (see the
sketch after the changelog).

Changes since v1:
- Unbinding is done in submit_init only. submit_begin() removed.
- MMU_NOTFIER -> MMU_NOTIFIER
Changes since v2:
- Make i915->mm.notifier a spinlock.
Changes since v3:
- Add WARN_ON if there are any page references left, should have been 0.
- Return 0 on success in submit_init(), bug from spinlock conversion.
- Release pvec outside of notifier_lock (Thomas).
Changes since v4:
- Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
- Actually check all invalidations in eb_move_to_gpu. (Thomas)
- Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
Changes since v5:
- Clarify why check on PF_EXITING is (temporarily) required.
Changes since v6:
- Ensure userptr validity is checked in set_domain through a special path.
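
A hedged sketch of that flow, compressing the patch's
i915_gem_object_userptr_submit_init()/_done() helpers into their
essential steps (the pin call and error handling are simplified;
FOLL_WRITE is unconditional here only for brevity):

	/* 1) all ww locks dropped: snapshot the notifier, pin the pages */
	obj->userptr.notifier_seq =
		mmu_interval_read_begin(&obj->userptr.notifier);
	pinned = pin_user_pages_fast(obj->userptr.ptr, num_pages,
				     FOLL_WRITE, pvec);

	/* 2) with the ww locks held, .get_pages() adopts the pinned pages */

	/* 3) just before installing fences, under i915->mm.notifier_lock: */
	if (mmu_interval_read_retry(&obj->userptr.notifier,
				    obj->userptr.notifier_seq))
		return -EAGAIN; /* invalidated meanwhile; userspace restarts */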

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  18 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 101 ++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  38 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 796 ++++++------------
 drivers/gpu/drm/i915/i915_drv.h               |   9 +-
 drivers/gpu/drm/i915/i915_gem.c               |   5 +-
 8 files changed, 395 insertions(+), 584 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2f4980bf742e..76cb9f5c66aa 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -468,14 +468,28 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
+	if (i915_gem_object_is_userptr(obj)) {
+		/*
+		 * Try to grab userptr pages, iris uses set_domain to check
+		 * userptr validity
+		 */
+		err = i915_gem_object_userptr_validate(obj);
+		if (!err)
+			err = i915_gem_object_wait(obj,
+						   I915_WAIT_INTERRUPTIBLE |
+						   I915_WAIT_PRIORITY |
+						   (write_domain ? I915_WAIT_ALL : 0),
+						   MAX_SCHEDULE_TIMEOUT);
+		goto out;
+	}
+
 	/*
 	 * Proxy objects do not control access to the backing storage, ergo
 	 * they cannot be used as a means to manipulate the cache domain
 	 * tracking for that backing storage. The proxy object is always
 	 * considered to be outside of any cache domain.
 	 */
-	if (i915_gem_object_is_proxy(obj) &&
-	    !i915_gem_object_is_userptr(obj)) {
+	if (i915_gem_object_is_proxy(obj)) {
 		err = -ENXIO;
 		goto out;
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index c72440c10876..64d0e5fccece 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -53,14 +53,16 @@ enum {
 /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
 #define __EXEC_OBJECT_HAS_PIN		BIT(30)
 #define __EXEC_OBJECT_HAS_FENCE		BIT(29)
-#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
-#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
-#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
+#define __EXEC_OBJECT_USERPTR_INIT	BIT(28)
+#define __EXEC_OBJECT_NEEDS_MAP		BIT(27)
+#define __EXEC_OBJECT_NEEDS_BIAS	BIT(26)
+#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 26) /* all of the above + */
 #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
 
 #define __EXEC_HAS_RELOC	BIT(31)
 #define __EXEC_ENGINE_PINNED	BIT(30)
-#define __EXEC_INTERNAL_FLAGS	(~0u << 30)
+#define __EXEC_USERPTR_USED	BIT(29)
+#define __EXEC_INTERNAL_FLAGS	(~0u << 29)
 #define UPDATE			PIN_OFFSET_FIXED
 
 #define BATCH_OFFSET_BIAS (256*1024)
@@ -864,6 +866,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
 		}
 
 		eb_add_vma(eb, i, batch, vma);
+
+		if (i915_gem_object_is_userptr(vma->obj)) {
+			err = i915_gem_object_userptr_submit_init(vma->obj);
+			if (err) {
+				if (i + 1 < eb->buffer_count) {
+					/*
+					 * Execbuffer code expects last vma entry to be NULL,
+					 * since we already initialized this entry,
+					 * set the next value to NULL or we mess up
+					 * cleanup handling.
+					 */
+					eb->vma[i + 1].vma = NULL;
+				}
+
+				return err;
+			}
+
+			eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
+			eb->args->flags |= __EXEC_USERPTR_USED;
+		}
 	}
 
 	if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
@@ -965,7 +987,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
 	}
 }
 
-static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
+static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
 {
 	const unsigned int count = eb->buffer_count;
 	unsigned int i;
@@ -979,6 +1001,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
 
 		eb_unreserve_vma(ev);
 
+		if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
+			ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
+			i915_gem_object_userptr_submit_fini(vma->obj);
+		}
+
 		if (final)
 			i915_vma_put(vma);
 	}
@@ -1909,6 +1936,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
 	return 0;
 }
 
+static int eb_reinit_userptr(struct i915_execbuffer *eb)
+{
+	const unsigned int count = eb->buffer_count;
+	unsigned int i;
+	int ret;
+
+	if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
+		return 0;
+
+	for (i = 0; i < count; i++) {
+		struct eb_vma *ev = &eb->vma[i];
+
+		if (!i915_gem_object_is_userptr(ev->vma->obj))
+			continue;
+
+		ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
+		if (ret)
+			return ret;
+
+		ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
+	}
+
+	return 0;
+}
+
 static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 					   struct i915_request *rq)
 {
@@ -1923,7 +1975,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 	}
 
 	/* We may process another execbuffer during the unlock... */
-	eb_release_vmas(eb, false);
+	eb_release_vmas(eb, false, true);
 	i915_gem_ww_ctx_fini(&eb->ww);
 
 	if (rq) {
@@ -1964,10 +2016,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 		err = 0;
 	}
 
-#ifdef CONFIG_MMU_NOTIFIER
 	if (!err)
-		flush_workqueue(eb->i915->mm.userptr_wq);
-#endif
+		err = eb_reinit_userptr(eb);
 
 err_relock:
 	i915_gem_ww_ctx_init(&eb->ww, true);
@@ -2029,7 +2079,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 
 err:
 	if (err == -EDEADLK) {
-		eb_release_vmas(eb, false);
+		eb_release_vmas(eb, false, false);
 		err = i915_gem_ww_ctx_backoff(&eb->ww);
 		if (!err)
 			goto repeat_validate;
@@ -2126,7 +2176,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
 
 err:
 	if (err == -EDEADLK) {
-		eb_release_vmas(eb, false);
+		eb_release_vmas(eb, false, false);
 		err = i915_gem_ww_ctx_backoff(&eb->ww);
 		if (!err)
 			goto retry;
@@ -2201,6 +2251,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 						      flags | __EXEC_OBJECT_NO_RESERVE);
 	}
 
+#ifdef CONFIG_MMU_NOTIFIER
+	if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
+		spin_lock(&eb->i915->mm.notifier_lock);
+
+		/*
+		 * count is always at least 1, otherwise __EXEC_USERPTR_USED
+		 * could not have been set
+		 */
+		for (i = 0; i < count; i++) {
+			struct eb_vma *ev = &eb->vma[i];
+			struct drm_i915_gem_object *obj = ev->vma->obj;
+
+			if (!i915_gem_object_is_userptr(obj))
+				continue;
+
+			err = i915_gem_object_userptr_submit_done(obj);
+			if (err)
+				break;
+		}
+
+		spin_unlock(&eb->i915->mm.notifier_lock);
+	}
+#endif
+
 	if (unlikely(err))
 		goto err_skip;
 
@@ -3345,7 +3419,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 
 	err = eb_lookup_vmas(&eb);
 	if (err) {
-		eb_release_vmas(&eb, true);
+		eb_release_vmas(&eb, true, true);
 		goto err_engine;
 	}
 
@@ -3417,6 +3491,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 
 	trace_i915_request_queue(eb.request, eb.batch_flags);
 	err = eb_submit(&eb, batch);
+
 err_request:
 	i915_request_get(eb.request);
 	err = eb_request_add(&eb, err);
@@ -3437,7 +3512,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	i915_request_put(eb.request);
 
 err_vma:
-	eb_release_vmas(&eb, true);
+	eb_release_vmas(&eb, true, true);
 	if (eb.trampoline)
 		i915_vma_unpin(eb.trampoline);
 	WARN_ON(err == -EDEADLK);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 69509dbd01c7..b5af9c440ac5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -59,6 +59,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
 				       const void *data, resource_size_t size);
 
 extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
+
 void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages,
 				     bool needs_clflush);
@@ -278,12 +279,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
 	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
 }
 
-static inline bool
-i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
-{
-	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
-}
-
 static inline bool
 i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
 {
@@ -573,16 +568,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 					      enum fb_op_origin origin);
 
-static inline bool
-i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
-{
-#ifdef CONFIG_MMU_NOTIFIER
-	return obj->userptr.mm;
-#else
-	return false;
-#endif
-}
-
 static inline void
 i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 				  enum fb_op_origin origin)
@@ -603,4 +588,25 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
 
 bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
 
+#ifdef CONFIG_MMU_NOTIFIER
+static inline bool
+i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
+{
+	return obj->userptr.notifier.mm;
+}
+
+int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
+int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
+void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
+int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj);
+#else
+static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
+
+static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
+static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
+static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
+static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
+
+#endif
+
 #endif
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 414322619781..4c0a34231623 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -7,6 +7,8 @@
 #ifndef __I915_GEM_OBJECT_TYPES_H__
 #define __I915_GEM_OBJECT_TYPES_H__
 
+#include <linux/mmu_notifier.h>
+
 #include <drm/drm_gem.h>
 #include <uapi/drm/i915_drm.h>
 
@@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops {
 #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
 #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
 #define I915_GEM_OBJECT_NO_MMAP		BIT(4)
-#define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(5)
 
 	/* Interface between the GEM object and its backing storage.
 	 * get_pages() is called once prior to the use of the associated set
@@ -299,10 +300,11 @@ struct drm_i915_gem_object {
 #ifdef CONFIG_MMU_NOTIFIER
 		struct i915_gem_userptr {
 			uintptr_t ptr;
+			unsigned long notifier_seq;
 
-			struct i915_mm_struct *mm;
-			struct i915_mmu_object *mmu_object;
-			struct work_struct *work;
+			struct mmu_interval_notifier notifier;
+			struct page **pvec;
+			int page_ref;
 		} userptr;
 #endif
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index bf61b88a2113..e7d7650072c5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -226,7 +226,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	 * get_pages backends we should be better able to handle the
 	 * cancellation of the async task in a more uniform manner.
 	 */
-	if (!pages && !i915_gem_object_needs_async_cancel(obj))
+	if (!pages)
 		pages = ERR_PTR(-EINVAL);
 
 	if (!IS_ERR(pages))
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index b466ab2def4d..1e42fbc68697 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -2,10 +2,39 @@
  * SPDX-License-Identifier: MIT
  *
  * Copyright © 2012-2014 Intel Corporation
+ *
+  * Based on amdgpu_mn, which bears the following notice:
+ *
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ */
+/*
+ * Authors:
+ *    Christian König <christian.koenig@amd.com>
  */
 
 #include <linux/mmu_context.h>
-#include <linux/mmu_notifier.h>
 #include <linux/mempolicy.h>
 #include <linux/swap.h>
 #include <linux/sched/mm.h>
@@ -15,373 +44,121 @@
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
 
-#if defined(CONFIG_MMU_NOTIFIER)
-
-struct i915_mm_struct {
-	struct mm_struct *mm;
-	struct drm_i915_private *i915;
-	struct i915_mmu_notifier *mn;
-	struct hlist_node node;
-	struct kref kref;
-	struct rcu_work work;
-};
-
-#include <linux/interval_tree.h>
-
-struct i915_mmu_notifier {
-	spinlock_t lock;
-	struct hlist_node node;
-	struct mmu_notifier mn;
-	struct rb_root_cached objects;
-	struct i915_mm_struct *mm;
-};
-
-struct i915_mmu_object {
-	struct i915_mmu_notifier *mn;
-	struct drm_i915_gem_object *obj;
-	struct interval_tree_node it;
-};
+#ifdef CONFIG_MMU_NOTIFIER
 
-static void add_object(struct i915_mmu_object *mo)
+/**
+ * i915_gem_userptr_invalidate - callback to notify about mm change
+ *
+ * @mni: the range (mm) is about to update
+ * @range: details on the invalidation
+ * @cur_seq: Value to pass to mmu_interval_set_seq()
+ *
+ * Block for operations on BOs to finish and mark pages as accessed and
+ * potentially dirty.
+ */
+static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
+					const struct mmu_notifier_range *range,
+					unsigned long cur_seq)
 {
-	GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb));
-	interval_tree_insert(&mo->it, &mo->mn->objects);
-}
+	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	long r;
 
-static void del_object(struct i915_mmu_object *mo)
-{
-	if (RB_EMPTY_NODE(&mo->it.rb))
-		return;
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-	interval_tree_remove(&mo->it, &mo->mn->objects);
-	RB_CLEAR_NODE(&mo->it.rb);
-}
+	spin_lock(&i915->mm.notifier_lock);
 
-static void
-__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
-{
-	struct i915_mmu_object *mo = obj->userptr.mmu_object;
+	mmu_interval_set_seq(mni, cur_seq);
+
+	spin_unlock(&i915->mm.notifier_lock);
 
 	/*
-	 * During mm_invalidate_range we need to cancel any userptr that
-	 * overlaps the range being invalidated. Doing so requires the
-	 * struct_mutex, and that risks recursion. In order to cause
-	 * recursion, the user must alias the userptr address space with
-	 * a GTT mmapping (possible with a MAP_FIXED) - then when we have
-	 * to invalidate that mmaping, mm_invalidate_range is called with
-	 * the userptr address *and* the struct_mutex held.  To prevent that
-	 * we set a flag under the i915_mmu_notifier spinlock to indicate
-	 * whether this object is valid.
+	 * We don't wait when the process is exiting. This is valid
+	 * because the object will be cleaned up anyway.
+	 *
+	 * This is also temporarily required as a hack, because we
+	 * cannot currently force non-consistent batch buffers to preempt
+	 * and reschedule by waiting on it, hanging processes on exit.
 	 */
-	if (!mo)
-		return;
-
-	spin_lock(&mo->mn->lock);
-	if (value)
-		add_object(mo);
-	else
-		del_object(mo);
-	spin_unlock(&mo->mn->lock);
-}
-
-static int
-userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
-				  const struct mmu_notifier_range *range)
-{
-	struct i915_mmu_notifier *mn =
-		container_of(_mn, struct i915_mmu_notifier, mn);
-	struct interval_tree_node *it;
-	unsigned long end;
-	int ret = 0;
-
-	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
-		return 0;
-
-	/* interval ranges are inclusive, but invalidate range is exclusive */
-	end = range->end - 1;
-
-	spin_lock(&mn->lock);
-	it = interval_tree_iter_first(&mn->objects, range->start, end);
-	while (it) {
-		struct drm_i915_gem_object *obj;
-
-		if (!mmu_notifier_range_blockable(range)) {
-			ret = -EAGAIN;
-			break;
-		}
-
-		/*
-		 * The mmu_object is released late when destroying the
-		 * GEM object so it is entirely possible to gain a
-		 * reference on an object in the process of being freed
-		 * since our serialisation is via the spinlock and not
-		 * the struct_mutex - and consequently use it after it
-		 * is freed and then double free it. To prevent that
-		 * use-after-free we only acquire a reference on the
-		 * object if it is not in the process of being destroyed.
-		 */
-		obj = container_of(it, struct i915_mmu_object, it)->obj;
-		if (!kref_get_unless_zero(&obj->base.refcount)) {
-			it = interval_tree_iter_next(it, range->start, end);
-			continue;
-		}
-		spin_unlock(&mn->lock);
-
-		ret = i915_gem_object_unbind(obj,
-					     I915_GEM_OBJECT_UNBIND_ACTIVE |
-					     I915_GEM_OBJECT_UNBIND_BARRIER);
-		if (ret == 0)
-			ret = __i915_gem_object_put_pages(obj);
-		i915_gem_object_put(obj);
-		if (ret)
-			return ret;
+	if (current->flags & PF_EXITING)
+		return true;
 
-		spin_lock(&mn->lock);
-
-		/*
-		 * As we do not (yet) protect the mmu from concurrent insertion
-		 * over this range, there is no guarantee that this search will
-		 * terminate given a pathologic workload.
-		 */
-		it = interval_tree_iter_first(&mn->objects, range->start, end);
-	}
-	spin_unlock(&mn->lock);
-
-	return ret;
+	/* we will unbind on next submission, still have userptr pins */
+	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
+				      MAX_SCHEDULE_TIMEOUT);
+	if (r <= 0)
+		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
 
+	return true;
 }
 
-static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
-	.invalidate_range_start = userptr_mn_invalidate_range_start,
+static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = {
+	.invalidate = i915_gem_userptr_invalidate,
 };
 
-static struct i915_mmu_notifier *
-i915_mmu_notifier_create(struct i915_mm_struct *mm)
-{
-	struct i915_mmu_notifier *mn;
-
-	mn = kmalloc(sizeof(*mn), GFP_KERNEL);
-	if (mn == NULL)
-		return ERR_PTR(-ENOMEM);
-
-	spin_lock_init(&mn->lock);
-	mn->mn.ops = &i915_gem_userptr_notifier;
-	mn->objects = RB_ROOT_CACHED;
-	mn->mm = mm;
-
-	return mn;
-}
-
-static void
-i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
-{
-	struct i915_mmu_object *mo;
-
-	mo = fetch_and_zero(&obj->userptr.mmu_object);
-	if (!mo)
-		return;
-
-	spin_lock(&mo->mn->lock);
-	del_object(mo);
-	spin_unlock(&mo->mn->lock);
-	kfree(mo);
-}
-
-static struct i915_mmu_notifier *
-i915_mmu_notifier_find(struct i915_mm_struct *mm)
-{
-	struct i915_mmu_notifier *mn, *old;
-	int err;
-
-	mn = READ_ONCE(mm->mn);
-	if (likely(mn))
-		return mn;
-
-	mn = i915_mmu_notifier_create(mm);
-	if (IS_ERR(mn))
-		return mn;
-
-	err = mmu_notifier_register(&mn->mn, mm->mm);
-	if (err) {
-		kfree(mn);
-		return ERR_PTR(err);
-	}
-
-	old = cmpxchg(&mm->mn, NULL, mn);
-	if (old) {
-		mmu_notifier_unregister(&mn->mn, mm->mm);
-		kfree(mn);
-		mn = old;
-	}
-
-	return mn;
-}
-
 static int
 i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
 {
-	struct i915_mmu_notifier *mn;
-	struct i915_mmu_object *mo;
-
-	if (GEM_WARN_ON(!obj->userptr.mm))
-		return -EINVAL;
-
-	mn = i915_mmu_notifier_find(obj->userptr.mm);
-	if (IS_ERR(mn))
-		return PTR_ERR(mn);
-
-	mo = kzalloc(sizeof(*mo), GFP_KERNEL);
-	if (!mo)
-		return -ENOMEM;
-
-	mo->mn = mn;
-	mo->obj = obj;
-	mo->it.start = obj->userptr.ptr;
-	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
-	RB_CLEAR_NODE(&mo->it.rb);
-
-	obj->userptr.mmu_object = mo;
-	return 0;
-}
-
-static void
-i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
-		       struct mm_struct *mm)
-{
-	if (mn == NULL)
-		return;
-
-	mmu_notifier_unregister(&mn->mn, mm);
-	kfree(mn);
-}
-
-
-static struct i915_mm_struct *
-__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
-{
-	struct i915_mm_struct *it, *mm = NULL;
-
-	rcu_read_lock();
-	hash_for_each_possible_rcu(i915->mm_structs,
-				   it, node,
-				   (unsigned long)real)
-		if (it->mm == real && kref_get_unless_zero(&it->kref)) {
-			mm = it;
-			break;
-		}
-	rcu_read_unlock();
-
-	return mm;
+	return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm,
+					    obj->userptr.ptr, obj->base.size,
+					    &i915_gem_userptr_notifier_ops);
 }
 
-static int
-i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
+static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct i915_mm_struct *mm, *new;
-	int ret = 0;
-
-	/* During release of the GEM object we hold the struct_mutex. This
-	 * precludes us from calling mmput() at that time as that may be
-	 * the last reference and so call exit_mmap(). exit_mmap() will
-	 * attempt to reap the vma, and if we were holding a GTT mmap
-	 * would then call drm_gem_vm_close() and attempt to reacquire
-	 * the struct mutex. So in order to avoid that recursion, we have
-	 * to defer releasing the mm reference until after we drop the
-	 * struct_mutex, i.e. we need to schedule a worker to do the clean
-	 * up.
-	 */
-	mm = __i915_mm_struct_find(i915, current->mm);
-	if (mm)
-		goto out;
+	struct page **pvec = NULL;
 
-	new = kmalloc(sizeof(*mm), GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	kref_init(&new->kref);
-	new->i915 = to_i915(obj->base.dev);
-	new->mm = current->mm;
-	new->mn = NULL;
-
-	spin_lock(&i915->mm_lock);
-	mm = __i915_mm_struct_find(i915, current->mm);
-	if (!mm) {
-		hash_add_rcu(i915->mm_structs,
-			     &new->node,
-			     (unsigned long)new->mm);
-		mmgrab(current->mm);
-		mm = new;
+	spin_lock(&i915->mm.notifier_lock);
+	if (!--obj->userptr.page_ref) {
+		pvec = obj->userptr.pvec;
+		obj->userptr.pvec = NULL;
 	}
-	spin_unlock(&i915->mm_lock);
-	if (mm != new)
-		kfree(new);
+	GEM_BUG_ON(obj->userptr.page_ref < 0);
+	spin_unlock(&i915->mm.notifier_lock);
 
-out:
-	obj->userptr.mm = mm;
-	return ret;
-}
-
-static void
-__i915_mm_struct_free__worker(struct work_struct *work)
-{
-	struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work);
-
-	i915_mmu_notifier_free(mm->mn, mm->mm);
-	mmdrop(mm->mm);
-	kfree(mm);
-}
-
-static void
-__i915_mm_struct_free(struct kref *kref)
-{
-	struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
-
-	spin_lock(&mm->i915->mm_lock);
-	hash_del_rcu(&mm->node);
-	spin_unlock(&mm->i915->mm_lock);
-
-	INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker);
-	queue_rcu_work(system_wq, &mm->work);
-}
-
-static void
-i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
-{
-	if (obj->userptr.mm == NULL)
-		return;
+	if (pvec) {
+		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
 
-	kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free);
-	obj->userptr.mm = NULL;
+		unpin_user_pages(pvec, num_pages);
+		kfree(pvec);
+	}
 }
 
-struct get_pages_work {
-	struct work_struct work;
-	struct drm_i915_gem_object *obj;
-	struct task_struct *task;
-};
-
-static struct sg_table *
-__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
-			       struct page **pvec, unsigned long num_pages)
+static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 {
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
 	unsigned int sg_page_sizes;
 	struct scatterlist *sg;
+	struct page **pvec;
 	int ret;
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (!st)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
+
+	spin_lock(&i915->mm.notifier_lock);
+	if (GEM_WARN_ON(!obj->userptr.page_ref)) {
+		spin_unlock(&i915->mm.notifier_lock);
+		ret = -EFAULT;
+		goto err_free;
+	}
+
+	obj->userptr.page_ref++;
+	pvec = obj->userptr.pvec;
+	spin_unlock(&i915->mm.notifier_lock);
 
 alloc_table:
 	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
 					 num_pages << PAGE_SHIFT, max_segment,
 					 NULL, 0, GFP_KERNEL);
 	if (IS_ERR(sg)) {
-		kfree(st);
-		return ERR_CAST(sg);
+		ret = PTR_ERR(sg);
+		goto err;
 	}
 
 	ret = i915_gem_gtt_prepare_pages(obj, st);
@@ -393,203 +170,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 			goto alloc_table;
 		}
 
-		kfree(st);
-		return ERR_PTR(ret);
+		goto err;
 	}
 
 	sg_page_sizes = i915_sg_page_sizes(st->sgl);
 
 	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
 
-	return st;
-}
-
-static void
-__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
-{
-	struct get_pages_work *work = container_of(_work, typeof(*work), work);
-	struct drm_i915_gem_object *obj = work->obj;
-	const unsigned long npages = obj->base.size >> PAGE_SHIFT;
-	unsigned long pinned;
-	struct page **pvec;
-	int ret;
-
-	ret = -ENOMEM;
-	pinned = 0;
-
-	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (pvec != NULL) {
-		struct mm_struct *mm = obj->userptr.mm->mm;
-		unsigned int flags = 0;
-		int locked = 0;
-
-		if (!i915_gem_object_is_readonly(obj))
-			flags |= FOLL_WRITE;
-
-		ret = -EFAULT;
-		if (mmget_not_zero(mm)) {
-			while (pinned < npages) {
-				if (!locked) {
-					mmap_read_lock(mm);
-					locked = 1;
-				}
-				ret = pin_user_pages_remote
-					(mm,
-					 obj->userptr.ptr + pinned * PAGE_SIZE,
-					 npages - pinned,
-					 flags,
-					 pvec + pinned, NULL, &locked);
-				if (ret < 0)
-					break;
-
-				pinned += ret;
-			}
-			if (locked)
-				mmap_read_unlock(mm);
-			mmput(mm);
-		}
-	}
-
-	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
-	if (obj->userptr.work == &work->work) {
-		struct sg_table *pages = ERR_PTR(ret);
-
-		if (pinned == npages) {
-			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
-							       npages);
-			if (!IS_ERR(pages)) {
-				pinned = 0;
-				pages = NULL;
-			}
-		}
-
-		obj->userptr.work = ERR_CAST(pages);
-		if (IS_ERR(pages))
-			__i915_gem_userptr_set_active(obj, false);
-	}
-	mutex_unlock(&obj->mm.lock);
-
-	unpin_user_pages(pvec, pinned);
-	kvfree(pvec);
-
-	i915_gem_object_put(obj);
-	put_task_struct(work->task);
-	kfree(work);
-}
-
-static struct sg_table *
-__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
-{
-	struct get_pages_work *work;
-
-	/* Spawn a worker so that we can acquire the
-	 * user pages without holding our mutex. Access
-	 * to the user pages requires mmap_lock, and we have
-	 * a strict lock ordering of mmap_lock, struct_mutex -
-	 * we already hold struct_mutex here and so cannot
-	 * call gup without encountering a lock inversion.
-	 *
-	 * Userspace will keep on repeating the operation
-	 * (thanks to EAGAIN) until either we hit the fast
-	 * path or the worker completes. If the worker is
-	 * cancelled or superseded, the task is still run
-	 * but the results ignored. (This leads to
-	 * complications that we may have a stray object
-	 * refcount that we need to be wary of when
-	 * checking for existing objects during creation.)
-	 * If the worker encounters an error, it reports
-	 * that error back to this function through
-	 * obj->userptr.work = ERR_PTR.
-	 */
-	work = kmalloc(sizeof(*work), GFP_KERNEL);
-	if (work == NULL)
-		return ERR_PTR(-ENOMEM);
-
-	obj->userptr.work = &work->work;
-
-	work->obj = i915_gem_object_get(obj);
-
-	work->task = current;
-	get_task_struct(work->task);
-
-	INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
-	queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
-
-	return ERR_PTR(-EAGAIN);
-}
-
-static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
-{
-	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
-	struct mm_struct *mm = obj->userptr.mm->mm;
-	struct page **pvec;
-	struct sg_table *pages;
-	bool active;
-	int pinned;
-	unsigned int gup_flags = 0;
-
-	/* If userspace should engineer that these pages are replaced in
-	 * the vma between us binding this page into the GTT and completion
-	 * of rendering... Their loss. If they change the mapping of their
-	 * pages they need to create a new bo to point to the new vma.
-	 *
-	 * However, that still leaves open the possibility of the vma
-	 * being copied upon fork. Which falls under the same userspace
-	 * synchronisation issue as a regular bo, except that this time
-	 * the process may not be expecting that a particular piece of
-	 * memory is tied to the GPU.
-	 *
-	 * Fortunately, we can hook into the mmu_notifier in order to
-	 * discard the page references prior to anything nasty happening
-	 * to the vma (discard or cloning) which should prevent the more
-	 * egregious cases from causing harm.
-	 */
-
-	if (obj->userptr.work) {
-		/* active flag should still be held for the pending work */
-		if (IS_ERR(obj->userptr.work))
-			return PTR_ERR(obj->userptr.work);
-		else
-			return -EAGAIN;
-	}
-
-	pvec = NULL;
-	pinned = 0;
-
-	if (mm == current->mm) {
-		pvec = kvmalloc_array(num_pages, sizeof(struct page *),
-				      GFP_KERNEL |
-				      __GFP_NORETRY |
-				      __GFP_NOWARN);
-		if (pvec) {
-			/* defer to worker if malloc fails */
-			if (!i915_gem_object_is_readonly(obj))
-				gup_flags |= FOLL_WRITE;
-			pinned = pin_user_pages_fast_only(obj->userptr.ptr,
-							  num_pages, gup_flags,
-							  pvec);
-		}
-	}
-
-	active = false;
-	if (pinned < 0) {
-		pages = ERR_PTR(pinned);
-		pinned = 0;
-	} else if (pinned < num_pages) {
-		pages = __i915_gem_userptr_get_pages_schedule(obj);
-		active = pages == ERR_PTR(-EAGAIN);
-	} else {
-		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
-		active = !IS_ERR(pages);
-	}
-	if (active)
-		__i915_gem_userptr_set_active(obj, true);
-
-	if (IS_ERR(pages))
-		unpin_user_pages(pvec, pinned);
-	kvfree(pvec);
+	return 0;
 
-	return PTR_ERR_OR_ZERO(pages);
+err:
+	i915_gem_object_userptr_drop_ref(obj);
+err_free:
+	kfree(st);
+	return ret;
 }
 
 static void
@@ -599,9 +193,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 	struct sgt_iter sgt_iter;
 	struct page *page;
 
-	/* Cancel any inflight work and force them to restart their gup */
-	obj->userptr.work = NULL;
-	__i915_gem_userptr_set_active(obj, false);
 	if (!pages)
 		return;
 
@@ -641,19 +232,161 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 		}
 
 		mark_page_accessed(page);
-		unpin_user_page(page);
 	}
 	obj->mm.dirty = false;
 
 	sg_free_table(pages);
 	kfree(pages);
+
+	i915_gem_object_userptr_drop_ref(obj);
+}
+
+static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
+{
+	struct sg_table *pages;
+	int err;
+
+	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
+	if (err)
+		return err;
+
+	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
+		return -EBUSY;
+
+	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+
+	pages = __i915_gem_object_unset_pages(obj);
+	if (!IS_ERR_OR_NULL(pages))
+		i915_gem_userptr_put_pages(obj, pages);
+
+	if (get_pages)
+		err = ____i915_gem_object_get_pages(obj);
+	mutex_unlock(&obj->mm.lock);
+
+	return err;
+}
+
+int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
+	struct page **pvec;
+	unsigned int gup_flags = 0;
+	unsigned long notifier_seq;
+	int pinned, ret;
+
+	if (obj->userptr.notifier.mm != current->mm)
+		return -EFAULT;
+
+	ret = i915_gem_object_lock_interruptible(obj, NULL);
+	if (ret)
+		return ret;
+
+	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
+	ret = i915_gem_object_userptr_unbind(obj, false);
+	i915_gem_object_unlock(obj);
+	if (ret)
+		return ret;
+
+	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
+
+	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
+	if (!pvec)
+		return -ENOMEM;
+
+	if (!i915_gem_object_is_readonly(obj))
+		gup_flags |= FOLL_WRITE;
+
+	pinned = ret = 0;
+	while (pinned < num_pages) {
+		ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
+					  num_pages - pinned, gup_flags,
+					  &pvec[pinned]);
+		if (ret < 0)
+			goto out;
+
+		pinned += ret;
+	}
+	ret = 0;
+
+	spin_lock(&i915->mm.notifier_lock);
+
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+		!obj->userptr.page_ref ? notifier_seq :
+		obj->userptr.notifier_seq)) {
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	if (!obj->userptr.page_ref++) {
+		obj->userptr.pvec = pvec;
+		obj->userptr.notifier_seq = notifier_seq;
+
+		pvec = NULL;
+	}
+
+out_unlock:
+	spin_unlock(&i915->mm.notifier_lock);
+
+out:
+	if (pvec) {
+		unpin_user_pages(pvec, pinned);
+		kvfree(pvec);
+	}
+
+	return ret;
+}
+
+int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
+{
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+				    obj->userptr.notifier_seq)) {
+		/* We collided with the mmu notifier, need to retry */
+
+		return -EAGAIN;
+	}
+
+	return 0;
+}
+
+void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj)
+{
+	i915_gem_object_userptr_drop_ref(obj);
+}
+
+int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj)
+{
+	int err;
+
+	err = i915_gem_object_userptr_submit_init(obj);
+	if (err)
+		return err;
+
+	err = i915_gem_object_lock_interruptible(obj, NULL);
+	if (!err) {
+		/*
+		 * Since we only check validity, not use the pages,
+		 * it doesn't matter if we collide with the mmu notifier,
+		 * and -EAGAIN handling is not required.
+		 */
+		err = i915_gem_object_pin_pages(obj);
+		if (!err)
+			i915_gem_object_unpin_pages(obj);
+
+		i915_gem_object_unlock(obj);
+	}
+
+	i915_gem_object_userptr_submit_fini(obj);
+	return err;
 }
 
 static void
 i915_gem_userptr_release(struct drm_i915_gem_object *obj)
 {
-	i915_gem_userptr_release__mmu_notifier(obj);
-	i915_gem_userptr_release__mm_struct(obj);
+	GEM_WARN_ON(obj->userptr.page_ref);
+
+	mmu_interval_notifier_remove(&obj->userptr.notifier);
+	obj->userptr.notifier.mm = NULL;
 }
 
 static int
@@ -686,7 +419,6 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
 	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
 		 I915_GEM_OBJECT_NO_MMAP |
-		 I915_GEM_OBJECT_ASYNC_CANCEL |
 		 I915_GEM_OBJECT_IS_PROXY,
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
@@ -793,6 +525,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
 
 	obj->userptr.ptr = args->user_ptr;
+	obj->userptr.notifier_seq = ULONG_MAX;
 	if (args->flags & I915_USERPTR_READ_ONLY)
 		i915_gem_object_set_readonly(obj);
 
@@ -800,9 +533,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	 * at binding. This means that we need to hook into the mmu_notifier
 	 * in order to detect if the mmu is destroyed.
 	 */
-	ret = i915_gem_userptr_init__mm_struct(obj);
-	if (ret == 0)
-		ret = i915_gem_userptr_init__mmu_notifier(obj);
+	ret = i915_gem_userptr_init__mmu_notifier(obj);
 	if (ret == 0)
 		ret = drm_gem_handle_create(file, &obj->base, &handle);
 
@@ -821,15 +552,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 {
 #ifdef CONFIG_MMU_NOTIFIER
-	spin_lock_init(&dev_priv->mm_lock);
-	hash_init(dev_priv->mm_structs);
-
-	dev_priv->mm.userptr_wq =
-		alloc_workqueue("i915-userptr-acquire",
-				WQ_HIGHPRI | WQ_UNBOUND,
-				0);
-	if (!dev_priv->mm.userptr_wq)
-		return -ENOMEM;
+	spin_lock_init(&dev_priv->mm.notifier_lock);
 #endif
 
 	return 0;
@@ -837,7 +560,4 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 
 void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
 {
-#ifdef CONFIG_MMU_NOTIFIER
-	destroy_workqueue(dev_priv->mm.userptr_wq);
-#endif
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fc41cf6442a9..72927d356c1a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -558,11 +558,10 @@ struct i915_gem_mm {
 
 #ifdef CONFIG_MMU_NOTIFIER
 	/**
-	 * Workqueue to fault in userptr pages, flushed by the execbuf
-	 * when required but otherwise left to userspace to try again
-	 * on EAGAIN.
+	 * notifier_lock for mmu notifiers, memory may not be allocated
+	 * while holding this lock.
 	 */
-	struct workqueue_struct *userptr_wq;
+	spinlock_t notifier_lock;
 #endif
 
 	/* shrinker accounting, also useful for userland debugging */
@@ -942,8 +941,6 @@ struct drm_i915_private {
 	struct i915_ggtt ggtt; /* VM representing the global address space */
 
 	struct i915_gem_mm mm;
-	DECLARE_HASHTABLE(mm_structs, 7);
-	spinlock_t mm_lock;
 
 	/* Kernel Modesetting */
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 22be1e7bf2dd..6288cd5d898e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1053,10 +1053,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 err_unlock:
 	i915_gem_drain_workqueue(dev_priv);
 
-	if (ret != -EIO) {
+	if (ret != -EIO)
 		intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
-		i915_gem_cleanup_userptr(dev_priv);
-	}
 
 	if (ret == -EIO) {
 		/*
@@ -1113,7 +1111,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
 	intel_wa_list_free(&dev_priv->gt_wa_list);
 
 	intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
-	i915_gem_cleanup_userptr(dev_priv);
 
 	i915_gem_drain_freed_objects(dev_priv);
 
-- 
2.30.1
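
For reference, the core of the userptr rework above is the
mmu_interval_notifier begin/retry pattern from the core mm. A condensed
sketch of that pattern — pin_pages() here is only a stand-in for the
pin_user_pages_fast() loop in the patch:

	unsigned long seq;
	int err;

	seq = mmu_interval_read_begin(&obj->userptr.notifier);

	/* pin the user pages without holding any spinlock */
	err = pin_pages(obj);

	spin_lock(&i915->mm.notifier_lock);
	if (!err && mmu_interval_read_retry(&obj->userptr.notifier, seq)) {
		/* an invalidate ran while we were pinning: try again */
		err = -EAGAIN;
	} else if (!err) {
		/* pages are consistent with the notifier state */
		obj->userptr.notifier_seq = seq;
	}
	spin_unlock(&i915->mm.notifier_lock);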



* [Intel-gfx] [PATCH v8 17/69] drm/i915: Flatten obj->mm.lock
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (15 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7 Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 18/69] drm/i915: Populate logical context during first pin Maarten Lankhorst
                   ` (56 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

With userptr fixed, there is no longer any need for the separate
lockdep classes, and we can remove all the lockdep tricks we used. A
trylock in the shrinker is now all we need to flatten the locking
hierarchy.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c   |  5 +--
 drivers/gpu/drm/i915/gem/i915_gem_object.h   | 20 ++----------
 drivers/gpu/drm/i915/gem/i915_gem_pages.c    | 34 ++++++++++----------
 drivers/gpu/drm/i915/gem/i915_gem_phys.c     |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 10 +++---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c  |  2 +-
 6 files changed, 27 insertions(+), 46 deletions(-)
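
A condensed sketch of the resulting shrinker path (assembled from the
diff below; the writeback and accounting details are elided):

	if (unsafe_drop_pages(obj, shrink) &&
	    mutex_trylock(&obj->mm.lock)) {
		/* never blocks: a contended lock just skips this object */
		if (!__i915_gem_object_put_pages_locked(obj))
			count += obj->base.size >> PAGE_SHIFT;
		mutex_unlock(&obj->mm.lock);
	}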

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 6083b9c14be6..821cb40f8d73 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -62,7 +62,7 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
 			  struct lock_class_key *key, unsigned flags)
 {
-	__mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key);
+	mutex_init(&obj->mm.lock);
 
 	spin_lock_init(&obj->vma.lock);
 	INIT_LIST_HEAD(&obj->vma.list);
@@ -86,9 +86,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	mutex_init(&obj->mm.get_page.lock);
 	INIT_RADIX_TREE(&obj->mm.get_dma_page.radix, GFP_KERNEL | __GFP_NOWARN);
 	mutex_init(&obj->mm.get_dma_page.lock);
-
-	if (IS_ENABLED(CONFIG_LOCKDEP) && i915_gem_object_is_shrinkable(obj))
-		fs_reclaim_taints_mutex(&obj->mm.lock);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index b5af9c440ac5..a0e1c4ff0de4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -372,27 +372,10 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 
-enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
-	I915_MM_NORMAL = 0,
-	/*
-	 * Only used by struct_mutex, when called "recursively" from
-	 * direct-reclaim-esque. Safe because there is only every one
-	 * struct_mutex in the entire system.
-	 */
-	I915_MM_SHRINKER = 1,
-	/*
-	 * Used for obj->mm.lock when allocating pages. Safe because the object
-	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
-	 * it. As soon as the object has pages, obj->mm.lock nests within
-	 * fs_reclaim.
-	 */
-	I915_MM_GET_PAGES = 1,
-};
-
 static inline int __must_check
 i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	might_lock(&obj->mm.lock);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
@@ -436,6 +419,7 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 }
 
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
+int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index e7d7650072c5..e947d4c0da1f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -114,7 +114,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
 	int err;
 
-	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
 		return err;
 
@@ -196,21 +196,13 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	return pages;
 }
 
-int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
+int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
-	int err;
 
 	if (i915_gem_object_has_pinned_pages(obj))
 		return -EBUSY;
 
-	/* May be called by shrinker from within get_pages() (on another bo) */
-	mutex_lock(&obj->mm.lock);
-	if (unlikely(atomic_read(&obj->mm.pages_pin_count))) {
-		err = -EBUSY;
-		goto unlock;
-	}
-
 	i915_gem_object_release_mmap_offset(obj);
 
 	/*
@@ -226,14 +218,22 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	 * get_pages backends we should be better able to handle the
 	 * cancellation of the async task in a more uniform manner.
 	 */
-	if (!pages)
-		pages = ERR_PTR(-EINVAL);
-
-	if (!IS_ERR(pages))
+	if (!IS_ERR_OR_NULL(pages))
 		obj->ops->put_pages(obj, pages);
 
-	err = 0;
-unlock:
+	return 0;
+}
+
+int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
+{
+	int err;
+
+	if (i915_gem_object_has_pinned_pages(obj))
+		return -EBUSY;
+
+	/* May be called by shrinker from within get_pages() (on another bo) */
+	mutex_lock(&obj->mm.lock);
+	err = __i915_gem_object_put_pages_locked(obj);
 	mutex_unlock(&obj->mm.lock);
 
 	return err;
@@ -341,7 +341,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return ERR_PTR(-ENXIO);
 
-	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
 		return ERR_PTR(err);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 06c481ff79d8..44329c435cf1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -236,7 +236,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
 		goto err_unlock;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index b64a0788381f..3052ef5ad89d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -49,9 +49,9 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
 		flags = I915_GEM_OBJECT_UNBIND_TEST;
 
 	if (i915_gem_object_unbind(obj, flags) == 0)
-		__i915_gem_object_put_pages(obj);
+		return true;
 
-	return !i915_gem_object_has_pages(obj);
+	return false;
 }
 
 static void try_to_writeback(struct drm_i915_gem_object *obj,
@@ -200,10 +200,10 @@ i915_gem_shrink(struct drm_i915_private *i915,
 
 			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 
-			if (unsafe_drop_pages(obj, shrink)) {
+			if (unsafe_drop_pages(obj, shrink) &&
+			    mutex_trylock(&obj->mm.lock)) {
 				/* May arrive from get_pages on another bo */
-				mutex_lock(&obj->mm.lock);
-				if (!i915_gem_object_has_pages(obj)) {
+				if (!__i915_gem_object_put_pages_locked(obj)) {
 					try_to_writeback(obj, shrink);
 					count += obj->base.size >> PAGE_SHIFT;
 				}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 1e42fbc68697..503325e74eff 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -253,7 +253,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
 		return -EBUSY;
 
-	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	mutex_lock(&obj->mm.lock);
 
 	pages = __i915_gem_object_unset_pages(obj);
 	if (!IS_ERR_OR_NULL(pages))
-- 
2.30.1


* [Intel-gfx] [PATCH v8 18/69] drm/i915: Populate logical context during first pin.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (16 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 17/69] drm/i915: Flatten obj->mm.lock Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 19/69] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2 Maarten Lankhorst
                   ` (55 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

This allows us to remove the pin_map call from state allocation, which
saves us a few retry loops. We won't need the mapping until first pin
anyway.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../drm/i915/gt/intel_execlists_submission.c  | 26 ++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)
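
The deferred initialisation boils down to a one-time test-and-set at
first pin. The pin path is already serialised, so the non-atomic
__test_and_set_bit() is sufficient; condensed from the diff below:

	if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) {
		/* first pin: write the default register state once */
		lrc_init_state(ce, engine, *vaddr);

		__i915_gem_object_flush_map(ce->state->obj, 0,
					    engine->context_size);
	}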

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 85ff5fe861b4..ca6a85537274 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2206,11 +2206,31 @@ static void execlists_preempt(struct timer_list *timer)
 	execlists_kick(timer, preempt);
 }
 
+static int
+__execlists_context_pre_pin(struct intel_context *ce,
+			    struct intel_engine_cs *engine,
+			    struct i915_gem_ww_ctx *ww, void **vaddr)
+{
+	int err;
+
+	err = lrc_pre_pin(ce, engine, ww, vaddr);
+	if (err)
+		return err;
+
+	if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) {
+		lrc_init_state(ce, engine, *vaddr);
+
+		 __i915_gem_object_flush_map(ce->state->obj, 0, engine->context_size);
+	}
+
+	return 0;
+}
+
 static int execlists_context_pre_pin(struct intel_context *ce,
 				     struct i915_gem_ww_ctx *ww,
 				     void **vaddr)
 {
-	return lrc_pre_pin(ce, ce->engine, ww, vaddr);
+	return __execlists_context_pre_pin(ce, ce->engine, ww, vaddr);
 }
 
 static int execlists_context_pin(struct intel_context *ce, void *vaddr)
@@ -3088,8 +3108,8 @@ static int virtual_context_pre_pin(struct intel_context *ce,
 {
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 
-	/* Note: we must use a real engine class for setting up reg state */
-	return lrc_pre_pin(ce, ve->siblings[0], ww, vaddr);
+	 /* Note: we must use a real engine class for setting up reg state */
+	return __execlists_context_pre_pin(ce, ve->siblings[0], ww, vaddr);
 }
 
 static int virtual_context_pin(struct intel_context *ce, void *vaddr)
-- 
2.30.1


* [Intel-gfx] [PATCH v8 19/69] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (17 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 18/69] drm/i915: Populate logical context during first pin Maarten Lankhorst
@ 2021-03-11 13:41 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 20/69] drm/i915: Handle ww locking in init_status_page Maarten Lankhorst
                   ` (54 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, Dan Carpenter

We map the initial context during first pin.

This allows us to remove the pin_map call from state allocation, which
saves us a few retry loops. We won't need the mapping until first pin
anyway.

intel_ring_submission_setup() is also reworked slightly to do all
pinning in a single ww loop; see the sketch below.

Changes since v1:
- Handle -EDEADLK backoff in intel_ring_submission_setup() better.
- Handle smatch errors reported by Dan and testbot.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gt/intel_ring_submission.c   | 184 +++++++++++-------
 1 file changed, 118 insertions(+), 66 deletions(-)
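
The single ww loop follows the pattern used throughout this series; a
minimal sketch, where obj_a/obj_b are placeholders for the hwsp, ring
and wa-batch objects locked in the diff below:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, false);
retry:
	err = i915_gem_object_lock(obj_a, &ww);
	if (!err)
		err = i915_gem_object_lock(obj_b, &ww);
	if (!err) {
		/* everything locked: pin, map, initialise, ... */
	}
	if (err == -EDEADLK) {
		/* drops all locks, waits on the contended object */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);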

diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index 282089d64789..f8ad891ad635 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -436,6 +436,26 @@ static void ring_context_destroy(struct kref *ref)
 	intel_context_free(ce);
 }
 
+static int ring_context_init_default_state(struct intel_context *ce,
+					   struct i915_gem_ww_ctx *ww)
+{
+	struct drm_i915_gem_object *obj = ce->state->obj;
+	void *vaddr;
+
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	shmem_read(ce->engine->default_state, 0,
+		   vaddr, ce->engine->context_size);
+
+	i915_gem_object_flush_map(obj);
+	__i915_gem_object_release_map(obj);
+
+	__set_bit(CONTEXT_VALID_BIT, &ce->flags);
+	return 0;
+}
+
 static int ring_context_pre_pin(struct intel_context *ce,
 				struct i915_gem_ww_ctx *ww,
 				void **unused)
@@ -443,6 +463,13 @@ static int ring_context_pre_pin(struct intel_context *ce,
 	struct i915_address_space *vm;
 	int err = 0;
 
+	if (ce->engine->default_state &&
+	    !test_bit(CONTEXT_VALID_BIT, &ce->flags)) {
+		err = ring_context_init_default_state(ce, ww);
+		if (err)
+			return err;
+	}
+
 	vm = vm_alias(ce->vm);
 	if (vm)
 		err = gen6_ppgtt_pin(i915_vm_to_ppgtt((vm)), ww);
@@ -498,22 +525,6 @@ alloc_context_vma(struct intel_engine_cs *engine)
 	if (IS_IVYBRIDGE(i915))
 		i915_gem_object_set_cache_coherency(obj, I915_CACHE_L3_LLC);
 
-	if (engine->default_state) {
-		void *vaddr;
-
-		vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
-		if (IS_ERR(vaddr)) {
-			err = PTR_ERR(vaddr);
-			goto err_obj;
-		}
-
-		shmem_read(engine->default_state, 0,
-			   vaddr, engine->context_size);
-
-		i915_gem_object_flush_map(obj);
-		__i915_gem_object_release_map(obj);
-	}
-
 	vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
 	if (IS_ERR(vma)) {
 		err = PTR_ERR(vma);
@@ -545,8 +556,6 @@ static int ring_context_alloc(struct intel_context *ce)
 			return PTR_ERR(vma);
 
 		ce->state = vma;
-		if (engine->default_state)
-			__set_bit(CONTEXT_VALID_BIT, &ce->flags);
 	}
 
 	return 0;
@@ -1151,37 +1160,15 @@ static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine,
 	return gen7_setup_clear_gpr_bb(engine, vma);
 }
 
-static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
+static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine,
+				   struct i915_gem_ww_ctx *ww,
+				   struct i915_vma *vma)
 {
-	struct drm_i915_gem_object *obj;
-	struct i915_vma *vma;
-	int size;
 	int err;
 
-	size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
-	if (size <= 0)
-		return size;
-
-	size = ALIGN(size, PAGE_SIZE);
-	obj = i915_gem_object_create_internal(engine->i915, size);
-	if (IS_ERR(obj))
-		return PTR_ERR(obj);
-
-	vma = i915_vma_instance(obj, engine->gt->vm, NULL);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto err_obj;
-	}
-
-	vma->private = intel_context_create(engine); /* dummy residuals */
-	if (IS_ERR(vma->private)) {
-		err = PTR_ERR(vma->private);
-		goto err_obj;
-	}
-
-	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
+	err = i915_vma_pin_ww(vma, ww, 0, 0, PIN_USER | PIN_HIGH);
 	if (err)
-		goto err_private;
+		return err;
 
 	err = i915_vma_sync(vma);
 	if (err)
@@ -1196,17 +1183,53 @@ static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
 
 err_unpin:
 	i915_vma_unpin(vma);
-err_private:
-	intel_context_put(vma->private);
-err_obj:
-	i915_gem_object_put(obj);
 	return err;
 }
 
+static struct i915_vma *gen7_ctx_vma(struct intel_engine_cs *engine)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int size, err;
+
+	if (!IS_GEN(engine->i915, 7) || engine->class != RENDER_CLASS)
+		return 0;
+
+	err = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
+	if (err < 0)
+		return ERR_PTR(err);
+	if (!err)
+		return NULL;
+
+	size = ALIGN(err, PAGE_SIZE);
+
+	obj = i915_gem_object_create_internal(engine->i915, size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	vma = i915_vma_instance(obj, engine->gt->vm, NULL);
+	if (IS_ERR(vma)) {
+		i915_gem_object_put(obj);
+		return ERR_CAST(vma);
+	}
+
+	vma->private = intel_context_create(engine); /* dummy residuals */
+	if (IS_ERR(vma->private)) {
+		err = PTR_ERR(vma->private);
+		vma->private = NULL;
+		i915_gem_object_put(obj);
+		return ERR_PTR(err);
+	}
+
+	return vma;
+}
+
 int intel_ring_submission_setup(struct intel_engine_cs *engine)
 {
+	struct i915_gem_ww_ctx ww;
 	struct intel_timeline *timeline;
 	struct intel_ring *ring;
+	struct i915_vma *gen7_wa_vma;
 	int err;
 
 	setup_common(engine);
@@ -1237,43 +1260,72 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)
 	}
 	GEM_BUG_ON(timeline->has_initial_breadcrumb);
 
-	err = intel_timeline_pin(timeline, NULL);
-	if (err)
-		goto err_timeline;
-
 	ring = intel_engine_create_ring(engine, SZ_16K);
 	if (IS_ERR(ring)) {
 		err = PTR_ERR(ring);
-		goto err_timeline_unpin;
+		goto err_timeline;
 	}
 
-	err = intel_ring_pin(ring, NULL);
-	if (err)
-		goto err_ring;
-
 	GEM_BUG_ON(engine->legacy.ring);
 	engine->legacy.ring = ring;
 	engine->legacy.timeline = timeline;
 
-	GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
+	gen7_wa_vma = gen7_ctx_vma(engine);
+	if (IS_ERR(gen7_wa_vma)) {
+		err = PTR_ERR(gen7_wa_vma);
+		goto err_ring;
+	}
 
-	if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) {
-		err = gen7_ctx_switch_bb_init(engine);
+	i915_gem_ww_ctx_init(&ww, false);
+
+retry:
+	err = i915_gem_object_lock(timeline->hwsp_ggtt->obj, &ww);
+	if (!err && gen7_wa_vma)
+		err = i915_gem_object_lock(gen7_wa_vma->obj, &ww);
+	if (!err && engine->legacy.ring->vma->obj)
+		err = i915_gem_object_lock(engine->legacy.ring->vma->obj, &ww);
+	if (!err)
+		err = intel_timeline_pin(timeline, &ww);
+	if (!err) {
+		err = intel_ring_pin(ring, &ww);
 		if (err)
-			goto err_ring_unpin;
+			intel_timeline_unpin(timeline);
+	}
+	if (err)
+		goto out;
+
+	GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
+
+	if (gen7_wa_vma) {
+		err = gen7_ctx_switch_bb_init(engine, &ww, gen7_wa_vma);
+		if (err) {
+			intel_ring_unpin(ring);
+			intel_timeline_unpin(timeline);
+		}
 	}
 
+out:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (err)
+		goto err_gen7_put;
+
 	/* Finally, take ownership and responsibility for cleanup! */
 	engine->release = ring_release;
 
 	return 0;
 
-err_ring_unpin:
-	intel_ring_unpin(ring);
+err_gen7_put:
+	if (gen7_wa_vma) {
+		intel_context_put(gen7_wa_vma->private);
+		i915_gem_object_put(gen7_wa_vma->obj);
+	}
 err_ring:
 	intel_ring_put(ring);
-err_timeline_unpin:
-	intel_timeline_unpin(timeline);
 err_timeline:
 	intel_timeline_put(timeline);
 err:
-- 
2.30.1


* [Intel-gfx] [PATCH v8 20/69] drm/i915: Handle ww locking in init_status_page
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (18 preceding siblings ...)
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 19/69] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2 Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 21/69] drm/i915: Rework clflush to work correctly without obj->mm.lock Maarten Lankhorst
                   ` (53 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Try to pin to the ggtt first, and use a full ww loop to handle
eviction correctly.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 37 +++++++++++++++--------
 1 file changed, 24 insertions(+), 13 deletions(-)
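
Note the unwind idiom in the diff below: success and failure both fall
through the cleanup labels, with each step guarded by the error code
rather than by a separate label per failure point. Roughly:

	/* success and failure both fall through the labels */
err_unpin:
	if (ret)
		i915_vma_unpin(vma);
err_put:
	if (ret)
		i915_gem_object_put(obj);
	return ret;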

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index e6cefd00b4a1..859c79b0d6ee 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -602,6 +602,7 @@ static void cleanup_status_page(struct intel_engine_cs *engine)
 }
 
 static int pin_ggtt_status_page(struct intel_engine_cs *engine,
+				struct i915_gem_ww_ctx *ww,
 				struct i915_vma *vma)
 {
 	unsigned int flags;
@@ -622,12 +623,13 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine,
 	else
 		flags = PIN_HIGH;
 
-	return i915_ggtt_pin(vma, NULL, 0, flags);
+	return i915_ggtt_pin(vma, ww, 0, flags);
 }
 
 static int init_status_page(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
 	struct i915_vma *vma;
 	void *vaddr;
 	int ret;
@@ -653,30 +655,39 @@ static int init_status_page(struct intel_engine_cs *engine)
 	vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
-		goto err;
+		goto err_put;
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(obj, &ww);
+	if (!ret && !HWS_NEEDS_PHYSICAL(engine->i915))
+		ret = pin_ggtt_status_page(engine, &ww, vma);
+	if (ret)
+		goto err;
+
 	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		ret = PTR_ERR(vaddr);
-		goto err;
+		goto err_unpin;
 	}
 
 	engine->status_page.addr = memset(vaddr, 0, PAGE_SIZE);
 	engine->status_page.vma = vma;
 
-	if (!HWS_NEEDS_PHYSICAL(engine->i915)) {
-		ret = pin_ggtt_status_page(engine, vma);
-		if (ret)
-			goto err_unpin;
-	}
-
-	return 0;
-
 err_unpin:
-	i915_gem_object_unpin_map(obj);
+	if (ret)
+		i915_vma_unpin(vma);
 err:
-	i915_gem_object_put(obj);
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+err_put:
+	if (ret)
+		i915_gem_object_put(obj);
 	return ret;
 }
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 21/69] drm/i915: Rework clflush to work correctly without obj->mm.lock.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (19 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 20/69] drm/i915: Handle ww locking in init_status_page Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 22/69] drm/i915: Pass ww ctx to intel_pin_to_display_plane Maarten Lankhorst
                   ` (52 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Pin the pages in the caller, not in the work itself. This should also
work better for dma-fence annotations.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)
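
Assembled rather than as a diff, the resulting split looks like this:
the work callback runs in the dma-fence signalling path and must not
pin or allocate, so the pages are pinned when the work is created and
only dropped again from the release callback:

static int clflush_work(struct dma_fence_work *base)
{
	struct clflush *clflush = container_of(base, typeof(*clflush), base);

	/* signalling path: only touch resources acquired at creation */
	__do_clflush(clflush->obj);

	return 0;
}

static void clflush_release(struct dma_fence_work *base)
{
	struct clflush *clflush = container_of(base, typeof(*clflush), base);

	/* drop the pin taken in clflush_work_create() */
	i915_gem_object_unpin_pages(clflush->obj);
	i915_gem_object_put(clflush->obj);
}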

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
index a28f8c912a3e..e4c24558eaa8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
@@ -27,15 +27,8 @@ static void __do_clflush(struct drm_i915_gem_object *obj)
 static int clflush_work(struct dma_fence_work *base)
 {
 	struct clflush *clflush = container_of(base, typeof(*clflush), base);
-	struct drm_i915_gem_object *obj = clflush->obj;
-	int err;
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
-
-	__do_clflush(obj);
-	i915_gem_object_unpin_pages(obj);
+	__do_clflush(clflush->obj);
 
 	return 0;
 }
@@ -44,6 +37,7 @@ static void clflush_release(struct dma_fence_work *base)
 {
 	struct clflush *clflush = container_of(base, typeof(*clflush), base);
 
+	i915_gem_object_unpin_pages(clflush->obj);
 	i915_gem_object_put(clflush->obj);
 }
 
@@ -61,6 +55,11 @@ static struct clflush *clflush_work_create(struct drm_i915_gem_object *obj)
 	if (!clflush)
 		return NULL;
 
+	if (__i915_gem_object_get_pages(obj) < 0) {
+		kfree(clflush);
+		return NULL;
+	}
+
 	dma_fence_work_init(&clflush->base, &clflush_ops);
 	clflush->obj = i915_gem_object_get(obj); /* obj <-> clflush cycle */
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 22/69] drm/i915: Pass ww ctx to intel_pin_to_display_plane
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (20 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 21/69] drm/i915: Rework clflush to work correctly without obj->mm.lock Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 23/69] drm/i915: Add object locking to vm_fault_cpu Maarten Lankhorst
                   ` (51 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Instead of taking the object lock multiple times, lock the object once
and perform the ww dance around attach_phys and pin_pages.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  | 69 ++++++++++++-------
 drivers/gpu/drm/i915/display/intel_display.h  |  2 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c    |  2 +-
 drivers/gpu/drm/i915/display/intel_overlay.c  | 34 +++++++--
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    | 30 ++------
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  1 +
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 10 +--
 .../drm/i915/gem/selftests/i915_gem_phys.c    |  2 +
 8 files changed, 86 insertions(+), 64 deletions(-)
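
The convention this moves to is that callees such as
i915_gem_object_attach_phys() no longer take the object lock
themselves; the caller owns the ww transaction and callees merely
assert it. A minimal sketch of a callee in this style —
attach_phys_locked() and do_work() are hypothetical names, only the
assert_object_held() idiom is from the patch:

static int attach_phys_locked(struct drm_i915_gem_object *obj, int align)
{
	/* lockdep-checked: caller must hold the object's dma-resv lock */
	assert_object_held(obj);

	if (align > obj->base.size)
		return -EINVAL;

	return do_work(obj, align);
}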

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index f0fa4cb6135e..acfd50248f7b 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -1091,6 +1091,7 @@ static bool intel_plane_uses_fence(const struct intel_plane_state *plane_state)
 
 struct i915_vma *
 intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
+			   bool phys_cursor,
 			   const struct i915_ggtt_view *view,
 			   bool uses_fence,
 			   unsigned long *out_flags)
@@ -1099,14 +1100,19 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
 	intel_wakeref_t wakeref;
+	struct i915_gem_ww_ctx ww;
 	struct i915_vma *vma;
 	unsigned int pinctl;
 	u32 alignment;
+	int ret;
 
 	if (drm_WARN_ON(dev, !i915_gem_object_is_framebuffer(obj)))
 		return ERR_PTR(-EINVAL);
 
-	alignment = intel_surf_alignment(fb, 0);
+	if (phys_cursor)
+		alignment = intel_cursor_alignment(dev_priv);
+	else
+		alignment = intel_surf_alignment(fb, 0);
 	if (drm_WARN_ON(dev, alignment && !is_power_of_2(alignment)))
 		return ERR_PTR(-EINVAL);
 
@@ -1141,14 +1147,26 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	if (HAS_GMCH(dev_priv))
 		pinctl |= PIN_MAPPABLE;
 
-	vma = i915_gem_object_pin_to_display_plane(obj,
-						   alignment, view, pinctl);
-	if (IS_ERR(vma))
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(obj, &ww);
+	if (!ret && phys_cursor)
+		ret = i915_gem_object_attach_phys(obj, alignment);
+	if (!ret)
+		ret = i915_gem_object_pin_pages(obj);
+	if (ret)
 		goto err;
 
-	if (uses_fence && i915_vma_is_map_and_fenceable(vma)) {
-		int ret;
+	if (!ret) {
+		vma = i915_gem_object_pin_to_display_plane(obj, &ww, alignment,
+							   view, pinctl);
+		if (IS_ERR(vma)) {
+			ret = PTR_ERR(vma);
+			goto err_unpin;
+		}
+	}
 
+	if (uses_fence && i915_vma_is_map_and_fenceable(vma)) {
 		/*
 		 * Install a fence for tiled scan-out. Pre-i965 always needs a
 		 * fence, whereas 965+ only requires a fence if using
@@ -1169,16 +1187,28 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 		ret = i915_vma_pin_fence(vma);
 		if (ret != 0 && INTEL_GEN(dev_priv) < 4) {
 			i915_vma_unpin(vma);
-			vma = ERR_PTR(ret);
-			goto err;
+			goto err_unpin;
 		}
+		ret = 0;
 
-		if (ret == 0 && vma->fence)
+		if (vma->fence)
 			*out_flags |= PLANE_HAS_FENCE;
 	}
 
 	i915_vma_get(vma);
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
 err:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (ret)
+		vma = ERR_PTR(ret);
+
 	atomic_dec(&dev_priv->gpu_error.pending_fb_pin);
 	intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
 	return vma;
@@ -11333,19 +11363,11 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	struct drm_framebuffer *fb = plane_state->hw.fb;
 	struct i915_vma *vma;
+	bool phys_cursor =
+		plane->id == PLANE_CURSOR &&
+		INTEL_INFO(dev_priv)->display.cursor_needs_physical;
 
-	if (plane->id == PLANE_CURSOR &&
-	    INTEL_INFO(dev_priv)->display.cursor_needs_physical) {
-		struct drm_i915_gem_object *obj = intel_fb_obj(fb);
-		const int align = intel_cursor_alignment(dev_priv);
-		int err;
-
-		err = i915_gem_object_attach_phys(obj, align);
-		if (err)
-			return err;
-	}
-
-	vma = intel_pin_and_fence_fb_obj(fb,
+	vma = intel_pin_and_fence_fb_obj(fb, phys_cursor,
 					 &plane_state->view,
 					 intel_plane_uses_fence(plane_state),
 					 &plane_state->flags);
@@ -11434,13 +11456,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
 	if (!obj)
 		return 0;
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		return ret;
 
 	ret = intel_plane_pin_fb(new_plane_state);
-
-	i915_gem_object_unpin_pages(obj);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h
index 431770eeadb4..f056e19cf559 100644
--- a/drivers/gpu/drm/i915/display/intel_display.h
+++ b/drivers/gpu/drm/i915/display/intel_display.h
@@ -573,7 +573,7 @@ void intel_release_load_detect_pipe(struct drm_connector *connector,
 				    struct intel_load_detect_pipe *old,
 				    struct drm_modeset_acquire_ctx *ctx);
 struct i915_vma *
-intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
+intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, bool phys_cursor,
 			   const struct i915_ggtt_view *view,
 			   bool uses_fence,
 			   unsigned long *out_flags);
diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c
index 07db8e83f98e..ccd00e65a5fe 100644
--- a/drivers/gpu/drm/i915/display/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/display/intel_fbdev.c
@@ -211,7 +211,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	 * This also validates that any existing fb inherited from the
 	 * BIOS is suitable for own access.
 	 */
-	vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base,
+	vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base, false,
 					 &view, false, &flags);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index ef8f44f5e751..4b77a23451dd 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -755,6 +755,32 @@ static u32 overlay_cmd_reg(struct drm_intel_overlay_put_image *params)
 	return cmd;
 }
 
+static struct i915_vma *intel_overlay_pin_fb(struct drm_i915_gem_object *new_bo)
+{
+	struct i915_gem_ww_ctx ww;
+	struct i915_vma *vma;
+	int ret;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(new_bo, &ww);
+	if (!ret) {
+		vma = i915_gem_object_pin_to_display_plane(new_bo, &ww, 0,
+							   NULL, PIN_MAPPABLE);
+		ret = PTR_ERR_OR_ZERO(vma);
+	}
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return vma;
+}
+
 static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 				      struct drm_i915_gem_object *new_bo,
 				      struct drm_intel_overlay_put_image *params)
@@ -776,12 +802,10 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 
 	atomic_inc(&dev_priv->gpu_error.pending_fb_pin);
 
-	vma = i915_gem_object_pin_to_display_plane(new_bo,
-						   0, NULL, PIN_MAPPABLE);
-	if (IS_ERR(vma)) {
-		ret = PTR_ERR(vma);
+	vma = intel_overlay_pin_fb(new_bo);
+	if (IS_ERR(vma))
 		goto out_pin_section;
-	}
+
 	i915_gem_object_flush_frontbuffer(new_bo, ORIGIN_DIRTYFB);
 
 	if (!overlay->active) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 76cb9f5c66aa..41dae0d83dbb 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -318,12 +318,12 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
  */
 struct i915_vma *
 i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
+				     struct i915_gem_ww_ctx *ww,
 				     u32 alignment,
 				     const struct i915_ggtt_view *view,
 				     unsigned int flags)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct i915_gem_ww_ctx ww;
 	struct i915_vma *vma;
 	int ret;
 
@@ -331,11 +331,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
 		return ERR_PTR(-EINVAL);
 
-	i915_gem_ww_ctx_init(&ww, true);
-retry:
-	ret = i915_gem_object_lock(obj, &ww);
-	if (ret)
-		goto err;
 	/*
 	 * The display engine is not coherent with the LLC cache on gen6.  As
 	 * a result, we make sure that the pinning that is about to occur is
@@ -350,7 +345,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 					      HAS_WT(i915) ?
 					      I915_CACHE_WT : I915_CACHE_NONE);
 	if (ret)
-		goto err;
+		return ERR_PTR(ret);
 
 	/*
 	 * As the user may map the buffer once pinned in the display plane
@@ -363,33 +358,20 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	vma = ERR_PTR(-ENOSPC);
 	if ((flags & PIN_MAPPABLE) == 0 &&
 	    (!view || view->type == I915_GGTT_VIEW_NORMAL))
-		vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0, alignment,
+		vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0, alignment,
 						  flags | PIN_MAPPABLE |
 						  PIN_NONBLOCK);
 	if (IS_ERR(vma) && vma != ERR_PTR(-EDEADLK))
-		vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0,
+		vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0,
 						  alignment, flags);
-	if (IS_ERR(vma)) {
-		ret = PTR_ERR(vma);
-		goto err;
-	}
+	if (IS_ERR(vma))
+		return vma;
 
 	vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
 	i915_vma_mark_scanout(vma);
 
 	i915_gem_object_flush_if_display_locked(obj);
 
-err:
-	if (ret == -EDEADLK) {
-		ret = i915_gem_ww_ctx_backoff(&ww);
-		if (!ret)
-			goto retry;
-	}
-	i915_gem_ww_ctx_fini(&ww);
-
-	if (ret)
-		return ERR_PTR(ret);
-
 	return vma;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index a0e1c4ff0de4..fef0d62f3eb7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -510,6 +510,7 @@ void i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj,
 				       bool write);
 struct i915_vma * __must_check
 i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
+				     struct i915_gem_ww_ctx *ww,
 				     u32 alignment,
 				     const struct i915_ggtt_view *view,
 				     unsigned int flags);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 44329c435cf1..92297362fad8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -219,6 +219,8 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 {
 	int err;
 
+	assert_object_held(obj);
+
 	if (align > obj->base.size)
 		return -EINVAL;
 
@@ -232,13 +234,9 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		return err;
-
 	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
-		goto err_unlock;
+		return err;
 
 	if (unlikely(!i915_gem_object_has_struct_page(obj)))
 		goto out;
@@ -269,8 +267,6 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 out:
 	mutex_unlock(&obj->mm.lock);
-err_unlock:
-	i915_gem_object_unlock(obj);
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
index 238af7bd84f6..4d7580762acc 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
@@ -31,7 +31,9 @@ static int mock_phys_object(void *arg)
 		goto out_obj;
 	}
 
+	i915_gem_object_lock(obj, NULL);
 	err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
+	i915_gem_object_unlock(obj);
 	if (err) {
 		pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
 		goto out_obj;
-- 
2.30.1

* [Intel-gfx] [PATCH v8 23/69] drm/i915: Add object locking to vm_fault_cpu
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (21 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 22/69] drm/i915: Pass ww ctx to intel_pin_to_display_plane Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 24/69] drm/i915: Move pinning to inside engine_wa_list_verify() Maarten Lankhorst
                   ` (50 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Take a simple object lock so that we hold the ww lock around
(un)pin_pages as needed.
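
As an illustrative sketch (not literally the diff below), the fault
handler now has this shape; returning VM_FAULT_NOPAGE when the
interruptible lock is interrupted makes the kernel retry the fault
instead of failing it:

	if (i915_gem_object_lock_interruptible(obj, NULL))
		return VM_FAULT_NOPAGE;

	err = i915_gem_object_pin_pages(obj);
	if (!err) {
		/* ... insert the CPU PTEs while the pages are pinned ... */
		i915_gem_object_unpin_pages(obj);
	}
	i915_gem_object_unlock(obj);
	return i915_error_to_vmf_fault(err);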

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index c0034d811e50..163208a6260d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -246,6 +246,9 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 		     area->vm_flags & VM_WRITE))
 		return VM_FAULT_SIGBUS;
 
+	if (i915_gem_object_lock_interruptible(obj, NULL))
+		return VM_FAULT_NOPAGE;
+
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
 		goto out;
@@ -269,6 +272,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 	i915_gem_object_unpin_pages(obj);
 
 out:
+	i915_gem_object_unlock(obj);
 	return i915_error_to_vmf_fault(err);
 }
 
-- 
2.30.1

* [Intel-gfx] [PATCH v8 24/69] drm/i915: Move pinning to inside engine_wa_list_verify()
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (22 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 23/69] drm/i915: Add object locking to vm_fault_cpu Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 25/69] drm/i915: Take reservation lock around i915_vma_pin Maarten Lankhorst
                   ` (49 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

This should be done as part of the ww loop, in order to remove an
i915_vma_pin() call that needs the ww lock held.

Only i915_ggtt_pin() callers remain now.
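
Roughly (a sketch, assuming the ww transaction that already exists in
engine_wa_list_verify()), the pin now sits inside the loop, and the
unwind mirrors the acquisition order:

	err = i915_vma_pin_ww(vma, &ww, 0, 0,
			      i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
	if (err)
		goto err_unpin;

	/* ... create and emit the request ... */

	/* common unwind, reached on success as well as on error */
err_vma:
	i915_vma_unpin(vma);
err_unpin:
	intel_context_unpin(ce);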

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gtt.c            | 14 +++++++++++++-
 drivers/gpu/drm/i915/gt/intel_gtt.h            |  3 +++
 drivers/gpu/drm/i915/gt/intel_workarounds.c    | 10 ++++++++--
 drivers/gpu/drm/i915/gt/selftest_execlists.c   |  5 +++--
 drivers/gpu/drm/i915/gt/selftest_lrc.c         |  2 +-
 drivers/gpu/drm/i915/gt/selftest_mocs.c        |  3 ++-
 drivers/gpu/drm/i915/gt/selftest_workarounds.c |  6 +++---
 7 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index d34770ae4c9a..1b532a2791ea 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -427,7 +427,6 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size)
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
-	int err;
 
 	obj = i915_gem_object_create_internal(vm->i915, PAGE_ALIGN(size));
 	if (IS_ERR(obj))
@@ -441,6 +440,19 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size)
 		return vma;
 	}
 
+	return vma;
+}
+
+struct i915_vma *
+__vm_create_scratch_for_read_pinned(struct i915_address_space *vm, unsigned long size)
+{
+	struct i915_vma *vma;
+	int err;
+
+	vma = __vm_create_scratch_for_read(vm, size);
+	if (IS_ERR(vma))
+		return vma;
+
 	err = i915_vma_pin(vma, 0, 0,
 			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
 	if (err) {
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 24b5808df16d..784c4372b405 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -581,6 +581,9 @@ void i915_vm_free_pt_stash(struct i915_address_space *vm,
 struct i915_vma *
 __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size);
 
+struct i915_vma *
+__vm_create_scratch_for_read_pinned(struct i915_address_space *vm, unsigned long size);
+
 static inline struct sgt_dma {
 	struct scatterlist *sg;
 	dma_addr_t dma, max;
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 3b4a7da60f0b..bb2357119792 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -2211,10 +2211,15 @@ static int engine_wa_list_verify(struct intel_context *ce,
 	if (err)
 		goto err_pm;
 
+	err = i915_vma_pin_ww(vma, &ww, 0, 0,
+			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
+	if (err)
+		goto err_unpin;
+
 	rq = i915_request_create(ce);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
-		goto err_unpin;
+		goto err_vma;
 	}
 
 	err = i915_request_await_object(rq, vma->obj, true);
@@ -2255,6 +2260,8 @@ static int engine_wa_list_verify(struct intel_context *ce,
 
 err_rq:
 	i915_request_put(rq);
+err_vma:
+	i915_vma_unpin(vma);
 err_unpin:
 	intel_context_unpin(ce);
 err_pm:
@@ -2265,7 +2272,6 @@ static int engine_wa_list_verify(struct intel_context *ce,
 	}
 	i915_gem_ww_ctx_fini(&ww);
 	intel_engine_pm_put(ce->engine);
-	i915_vma_unpin(vma);
 	i915_vma_put(vma);
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index f625c29023ea..a6e77a161b70 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -4165,8 +4165,9 @@ static int preserved_virtual_engine(struct intel_gt *gt,
 	int err = 0;
 	u32 *cs;
 
-	scratch = __vm_create_scratch_for_read(&siblings[0]->gt->ggtt->vm,
-					       PAGE_SIZE);
+	scratch =
+		__vm_create_scratch_for_read_pinned(&siblings[0]->gt->ggtt->vm,
+						    PAGE_SIZE);
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 279091e41b41..1f7a120606e6 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -27,7 +27,7 @@
 
 static struct i915_vma *create_scratch(struct intel_gt *gt)
 {
-	return __vm_create_scratch_for_read(&gt->ggtt->vm, PAGE_SIZE);
+	return __vm_create_scratch_for_read_pinned(&gt->ggtt->vm, PAGE_SIZE);
 }
 
 static bool is_active(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index 44609d1c7780..01dd050d4161 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -74,7 +74,8 @@ static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
 	if (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS))
 		arg->mocs = &arg->table;
 
-	arg->scratch = __vm_create_scratch_for_read(&gt->ggtt->vm, PAGE_SIZE);
+	arg->scratch =
+		__vm_create_scratch_for_read_pinned(&gt->ggtt->vm, PAGE_SIZE);
 	if (IS_ERR(arg->scratch))
 		return PTR_ERR(arg->scratch);
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index e5ee6136c81f..de6136bd10ac 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -463,7 +463,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 	u32 *cs, *results;
 
 	sz = (2 * ARRAY_SIZE(values) + 1) * sizeof(u32);
-	scratch = __vm_create_scratch_for_read(ce->vm, sz);
+	scratch = __vm_create_scratch_for_read_pinned(ce->vm, sz);
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);
 
@@ -1003,14 +1003,14 @@ static int live_isolated_whitelist(void *arg)
 
 	for (i = 0; i < ARRAY_SIZE(client); i++) {
 		client[i].scratch[0] =
-			__vm_create_scratch_for_read(gt->vm, 4096);
+			__vm_create_scratch_for_read_pinned(gt->vm, 4096);
 		if (IS_ERR(client[i].scratch[0])) {
 			err = PTR_ERR(client[i].scratch[0]);
 			goto err;
 		}
 
 		client[i].scratch[1] =
-			__vm_create_scratch_for_read(gt->vm, 4096);
+			__vm_create_scratch_for_read_pinned(gt->vm, 4096);
 		if (IS_ERR(client[i].scratch[1])) {
 			err = PTR_ERR(client[i].scratch[1]);
 			i915_vma_unpin_and_release(&client[i].scratch[0], 0);
-- 
2.30.1

* [Intel-gfx] [PATCH v8 25/69] drm/i915: Take reservation lock around i915_vma_pin.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (23 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 24/69] drm/i915: Move pinning to inside engine_wa_list_verify() Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 26/69] drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3 Maarten Lankhorst
                   ` (48 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We previously complained when ww == NULL.

i915_vma_pin() is now only used in selftests to pin an object, and
ww locking is now fixed, so take the reservation lock internally.
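
For reference (a sketch, not new code beyond this patch), this is the
standard ww transaction shape that i915_vma_pin() now open-codes;
-EDEADLK means a contended lock was hit, and i915_gem_ww_ctx_backoff()
drops all held locks so the whole sequence can be retried in order:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true); /* true: interruptible waits */
retry:
	err = i915_gem_object_lock(vma->obj, &ww);
	if (!err)
		err = i915_vma_pin_ww(vma, &ww, size, alignment, flags);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);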

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../i915/gem/selftests/i915_gem_coherency.c   | 12 ++++-------
 drivers/gpu/drm/i915/i915_gem.c               |  6 +++++-
 drivers/gpu/drm/i915/i915_vma.c               |  4 +---
 drivers/gpu/drm/i915/i915_vma.h               | 20 +++++++++++++++----
 4 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index b5dbf15570fc..3eec385d43bb 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -218,15 +218,13 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
 	u32 *cs;
 	int err;
 
+	vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
 	i915_gem_object_lock(ctx->obj, NULL);
 	i915_gem_object_set_to_gtt_domain(ctx->obj, false);
 
-	vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto out_unlock;
-	}
-
 	rq = intel_engine_create_kernel_request(ctx->engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
@@ -265,9 +263,7 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
 	i915_request_add(rq);
 out_unpin:
 	i915_vma_unpin(vma);
-out_unlock:
 	i915_gem_object_unlock(ctx->obj);
-
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6288cd5d898e..edb3a32b062f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -906,7 +906,11 @@ i915_gem_object_ggtt_pin_ww(struct drm_i915_gem_object *obj,
 			return ERR_PTR(ret);
 	}
 
-	ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL);
+	if (ww)
+		ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL);
+	else
+		ret = i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL);
+
 	if (ret)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 1ffda2aaa7a0..265e3a3079e2 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -863,9 +863,7 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 	int err;
 
 #ifdef CONFIG_PROVE_LOCKING
-	if (debug_locks && lockdep_is_held(&vma->vm->i915->drm.struct_mutex))
-		WARN_ON(!ww);
-	if (debug_locks && ww && vma->resv)
+	if (debug_locks && !WARN_ON(!ww) && vma->resv)
 		assert_vma_held(vma);
 #endif
 
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 6b48f5c42488..8df784a026d2 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -246,10 +246,22 @@ i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 static inline int __must_check
 i915_vma_pin(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 {
-#ifdef CONFIG_LOCKDEP
-	WARN_ON_ONCE(vma->resv && dma_resv_held(vma->resv));
-#endif
-	return i915_vma_pin_ww(vma, NULL, size, alignment, flags);
+	struct i915_gem_ww_ctx ww;
+	int err;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(vma->obj, &ww);
+	if (!err)
+		err = i915_vma_pin_ww(vma, &ww, size, alignment, flags);
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	return err;
 }
 
 int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
-- 
2.30.1

* [Intel-gfx] [PATCH v8 26/69] drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (24 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 25/69] drm/i915: Take reservation lock around i915_vma_pin Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 27/69] drm/i915: Make __engine_unpark() compatible with ww locking Maarten Lankhorst
                   ` (47 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Make creation separate from pinning, so that the lock is taken only
once and the mapping is pinned with the lock held, as sketched below.

Changes since v1:
- Rebase on top of upstream changes.
Changes since v2:
- Fully clear wa_ctx on error.
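
Sketched, the new flow is to allocate first and then do all pinning
and mapping under a single ww transaction:

	err = lrc_create_wa_ctx(engine); /* allocation only, no pin */
	if (err)
		return;

	/* inside the ww transaction: */
	err = i915_gem_object_lock(wa_ctx->vma->obj, &ww);
	if (!err)
		err = i915_ggtt_pin(wa_ctx->vma, &ww, 0, PIN_HIGH);
	if (!err)
		batch = i915_gem_object_pin_map(wa_ctx->vma->obj, I915_MAP_WB);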

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 49 ++++++++++++++++++++++-------
 1 file changed, 38 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 8508b8d701c1..a2b916d27a39 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1421,7 +1421,7 @@ gen10_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
 
 #define CTX_WA_BB_SIZE (PAGE_SIZE)
 
-static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
+static int lrc_create_wa_ctx(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
@@ -1437,10 +1437,6 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
 		goto err;
 	}
 
-	err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
-	if (err)
-		goto err;
-
 	engine->wa_ctx.vma = vma;
 	return 0;
 
@@ -1452,9 +1448,6 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
 void lrc_fini_wa_ctx(struct intel_engine_cs *engine)
 {
 	i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0);
-
-	/* Called on error unwind, clear all flags to prevent further use */
-	memset(&engine->wa_ctx, 0, sizeof(engine->wa_ctx));
 }
 
 typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch);
@@ -1466,6 +1459,7 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
 		&wa_ctx->indirect_ctx, &wa_ctx->per_ctx
 	};
 	wa_bb_func_t wa_bb_fn[ARRAY_SIZE(wa_bb)];
+	struct i915_gem_ww_ctx ww;
 	void *batch, *batch_ptr;
 	unsigned int i;
 	int err;
@@ -1494,7 +1488,7 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
 		return;
 	}
 
-	err = lrc_setup_wa_ctx(engine);
+	err = lrc_create_wa_ctx(engine);
 	if (err) {
 		/*
 		 * We continue even if we fail to initialize WA batch
@@ -1507,7 +1501,22 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
 		return;
 	}
 
+	if (!engine->wa_ctx.vma)
+		return;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(wa_ctx->vma->obj, &ww);
+	if (!err)
+		err = i915_ggtt_pin(wa_ctx->vma, &ww, 0, PIN_HIGH);
+	if (err)
+		goto err;
+
 	batch = i915_gem_object_pin_map(wa_ctx->vma->obj, I915_MAP_WB);
+	if (IS_ERR(batch)) {
+		err = PTR_ERR(batch);
+		goto err_unpin;
+	}
 
 	/*
 	 * Emit the two workaround batch buffers, recording the offset from the
@@ -1532,8 +1541,26 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
 	__i915_gem_object_release_map(wa_ctx->vma->obj);
 
 	/* Verify that we can handle failure to setup the wa_ctx */
-	if (err || i915_inject_probe_error(engine->i915, -ENODEV))
-		lrc_fini_wa_ctx(engine);
+	if (!err)
+		err = i915_inject_probe_error(engine->i915, -ENODEV);
+
+err_unpin:
+	if (err)
+		i915_vma_unpin(wa_ctx->vma);
+err:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	if (err) {
+		i915_vma_put(engine->wa_ctx.vma);
+
+		/* Clear all flags to prevent further use */
+		memset(wa_ctx, 0, sizeof(*wa_ctx));
+	}
 }
 
 static void st_runtime_underflow(struct intel_context_stats *stats, s32 dt)
-- 
2.30.1

* [Intel-gfx] [PATCH v8 27/69] drm/i915: Make __engine_unpark() compatible with ww locking.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (25 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 26/69] drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3 Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 28/69] drm/i915: Take obj lock around set_domain ioctl Maarten Lankhorst
                   ` (46 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Take the ww lock around engine_unpark. Because of the many places
where rpm is used, I chose the safest option and used a trylock to
opportunistically take this lock for __engine_unpark.
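
Sketch of the opportunistic pattern; skipping the poison when the lock
is contended is safe because the redzone is only a debugging aid:

	if (!i915_gem_object_trylock(obj))
		return; /* contended: skip the debug poison */

	map = i915_gem_object_pin_map(obj, type);
	if (!IS_ERR(map)) {
		memset(map, CONTEXT_REDZONE, obj->base.size);
		i915_gem_object_flush_map(obj);
		i915_gem_object_unpin_map(obj);
	}
	i915_gem_object_unlock(obj);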

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 27d9d17b35cb..bddc5c98fb04 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -27,12 +27,16 @@ static void dbg_poison_ce(struct intel_context *ce)
 		int type = i915_coherent_map_type(ce->engine->i915);
 		void *map;
 
+		if (!i915_gem_object_trylock(obj))
+			return;
+
 		map = i915_gem_object_pin_map(obj, type);
 		if (!IS_ERR(map)) {
 			memset(map, CONTEXT_REDZONE, obj->base.size);
 			i915_gem_object_flush_map(obj);
 			i915_gem_object_unpin_map(obj);
 		}
+		i915_gem_object_unlock(obj);
 	}
 }
 
-- 
2.30.1

* [Intel-gfx] [PATCH v8 28/69] drm/i915: Take obj lock around set_domain ioctl
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (26 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 27/69] drm/i915: Make __engine_unpark() compatible with ww locking Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 29/69] drm/i915: Defer pin calls in buffer pool until first use by caller Maarten Lankhorst
                   ` (45 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to lock the object to move it to the correct domain;
add the missing lock.
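
The new ordering in the ioctl, sketched; the lock must be held before
pinning pages, and it is dropped again before the potentially long
i915_gem_object_wait():

	err = i915_gem_object_lock_interruptible(obj, NULL);
	if (err)
		goto out;

	err = i915_gem_object_pin_pages(obj); /* needs the object lock */
	if (err)
		goto out_unlock;

	/* ... flush and set the read/write domains ... */

	i915_gem_object_unpin_pages(obj);
out_unlock:
	i915_gem_object_unlock(obj);
	/* ... i915_gem_object_wait() then runs unlocked ... */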

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 41 ++++++++++------------
 1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 41dae0d83dbb..e3537922183b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -456,13 +456,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 		 * userptr validity
 		 */
 		err = i915_gem_object_userptr_validate(obj);
-		if (!err)
-			err = i915_gem_object_wait(obj,
-						   I915_WAIT_INTERRUPTIBLE |
-						   I915_WAIT_PRIORITY |
-						   (write_domain ? I915_WAIT_ALL : 0),
-						   MAX_SCHEDULE_TIMEOUT);
-		goto out;
+		goto out_wait;
 	}
 
 	/*
@@ -476,6 +470,10 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 		goto out;
 	}
 
+	err = i915_gem_object_lock_interruptible(obj, NULL);
+	if (err)
+		goto out;
+
 	/*
 	 * Flush and acquire obj->pages so that we are coherent through
 	 * direct access in memory with previous cached writes through
@@ -487,7 +485,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	 */
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	/*
 	 * Already in the desired write domain? Nothing for us to do!
@@ -500,10 +498,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	 * without having to further check the requested write_domain.
 	 */
 	if (READ_ONCE(obj->write_domain) == read_domains)
-		goto out_wait;
-
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
 		goto out_unpin;
 
 	if (read_domains & I915_GEM_DOMAIN_WC)
@@ -513,19 +507,22 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	else
 		i915_gem_object_set_to_cpu_domain(obj, write_domain);
 
-	i915_gem_object_unlock(obj);
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
 
+out_unlock:
+	i915_gem_object_unlock(obj);
 out_wait:
-	err = i915_gem_object_wait(obj,
-				   I915_WAIT_INTERRUPTIBLE |
-				   I915_WAIT_PRIORITY |
-				   (write_domain ? I915_WAIT_ALL : 0),
-				   MAX_SCHEDULE_TIMEOUT);
-	if (write_domain)
-		i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
+	if (!err) {
+		err = i915_gem_object_wait(obj,
+					  I915_WAIT_INTERRUPTIBLE |
+					  I915_WAIT_PRIORITY |
+					  (write_domain ? I915_WAIT_ALL : 0),
+					  MAX_SCHEDULE_TIMEOUT);
+		if (write_domain)
+			i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
+	}
 
-out_unpin:
-	i915_gem_object_unpin_pages(obj);
 out:
 	i915_gem_object_put(obj);
 	return err;
-- 
2.30.1

* [Intel-gfx] [PATCH v8 29/69] drm/i915: Defer pin calls in buffer pool until first use by caller.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (27 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 28/69] drm/i915: Take obj lock around set_domain ioctl Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 30/69] drm/i915: Fix pread/pwrite to work with new locking rules Maarten Lankhorst
                   ` (44 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to take the object lock to pin pages, so wait until the
callers have done so before making the object unshrinkable.
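
Sketch of the resulting calling convention for pool users: pin the
node's pages while holding the object lock (here via pin_map under the
caller's ww context), then mark the node so pool_retire() knows to
unpin it again:

	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
	if (IS_ERR(cmd)) {
		err = PTR_ERR(cmd);
		goto err_pool;
	}

	/* we pinned the pool, mark it as such */
	intel_gt_buffer_pool_mark_used(pool);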

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +
 .../gpu/drm/i915/gem/i915_gem_object_blt.c    |  6 +++
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.c    | 47 +++++++++----------
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.h    |  5 ++
 .../drm/i915/gt/intel_gt_buffer_pool_types.h  |  1 +
 5 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 64d0e5fccece..97b0d1134b66 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1335,6 +1335,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 		err = PTR_ERR(cmd);
 		goto err_pool;
 	}
+	intel_gt_buffer_pool_mark_used(pool);
 
 	memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
 
@@ -2630,6 +2631,7 @@ static int eb_parse(struct i915_execbuffer *eb)
 		err = PTR_ERR(shadow);
 		goto err;
 	}
+	intel_gt_buffer_pool_mark_used(pool);
 	i915_gem_object_set_readonly(shadow->obj);
 	shadow->private = pool;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
index d6dac21fce0b..df8e8c18c6c9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
@@ -55,6 +55,9 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
 	if (unlikely(err))
 		goto out_put;
 
+	/* we pinned the pool, mark it as such */
+	intel_gt_buffer_pool_mark_used(pool);
+
 	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
@@ -277,6 +280,9 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
 	if (unlikely(err))
 		goto out_put;
 
+	/* we pinned the pool, mark it as such */
+	intel_gt_buffer_pool_mark_used(pool);
+
 	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
index 06d84cf09570..c59468107598 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
@@ -98,28 +98,6 @@ static void pool_free_work(struct work_struct *wrk)
 				      round_jiffies_up_relative(HZ));
 }
 
-static int pool_active(struct i915_active *ref)
-{
-	struct intel_gt_buffer_pool_node *node =
-		container_of(ref, typeof(*node), active);
-	struct dma_resv *resv = node->obj->base.resv;
-	int err;
-
-	if (dma_resv_trylock(resv)) {
-		dma_resv_add_excl_fence(resv, NULL);
-		dma_resv_unlock(resv);
-	}
-
-	err = i915_gem_object_pin_pages(node->obj);
-	if (err)
-		return err;
-
-	/* Hide this pinned object from the shrinker until retired */
-	i915_gem_object_make_unshrinkable(node->obj);
-
-	return 0;
-}
-
 __i915_active_call
 static void pool_retire(struct i915_active *ref)
 {
@@ -129,10 +107,13 @@ static void pool_retire(struct i915_active *ref)
 	struct list_head *list = bucket_for_size(pool, node->obj->base.size);
 	unsigned long flags;
 
-	i915_gem_object_unpin_pages(node->obj);
+	if (node->pinned) {
+		i915_gem_object_unpin_pages(node->obj);
 
-	/* Return this object to the shrinker pool */
-	i915_gem_object_make_purgeable(node->obj);
+		/* Return this object to the shrinker pool */
+		i915_gem_object_make_purgeable(node->obj);
+		node->pinned = false;
+	}
 
 	GEM_BUG_ON(node->age);
 	spin_lock_irqsave(&pool->lock, flags);
@@ -144,6 +125,19 @@ static void pool_retire(struct i915_active *ref)
 			      round_jiffies_up_relative(HZ));
 }
 
+void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node)
+{
+	assert_object_held(node->obj);
+
+	if (node->pinned)
+		return;
+
+	__i915_gem_object_pin_pages(node->obj);
+	/* Hide this pinned object from the shrinker until retired */
+	i915_gem_object_make_unshrinkable(node->obj);
+	node->pinned = true;
+}
+
 static struct intel_gt_buffer_pool_node *
 node_create(struct intel_gt_buffer_pool *pool, size_t sz,
 	    enum i915_map_type type)
@@ -159,7 +153,8 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz,
 
 	node->age = 0;
 	node->pool = pool;
-	i915_active_init(&node->active, pool_active, pool_retire);
+	node->pinned = false;
+	i915_active_init(&node->active, NULL, pool_retire);
 
 	obj = i915_gem_object_create_internal(gt->i915, sz);
 	if (IS_ERR(obj)) {
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
index 6068f8f1762e..487b8a5520f1 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
@@ -18,10 +18,15 @@ struct intel_gt_buffer_pool_node *
 intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size,
 			 enum i915_map_type type);
 
+void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node);
+
 static inline int
 intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node,
 				 struct i915_request *rq)
 {
+	/* did we call mark_used? */
+	GEM_WARN_ON(!node->pinned);
+
 	return i915_active_add_request(&node->active, rq);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
index c49b84fe5164..df1d75d08cd2 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
@@ -30,6 +30,7 @@ struct intel_gt_buffer_pool_node {
 	};
 	unsigned long age;
 	enum i915_map_type type;
+	u32 pinned;
 };
 
 #endif /* INTEL_GT_BUFFER_POOL_TYPES_H */
-- 
2.30.1

* [Intel-gfx] [PATCH v8 30/69] drm/i915: Fix pread/pwrite to work with new locking rules.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (28 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 29/69] drm/i915: Defer pin calls in buffer pool until first use by caller Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 31/69] drm/i915: Fix workarounds selftest, part 1 Maarten Lankhorst
                   ` (43 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We are removing obj->mm.lock, and need to take the reservation lock
before we can pin pages. Move the page pinning into the helper, and
merge the gtt pwrite/pread preparation and cleanup paths.

The fence lock is also removed; it will conflict with fence annotations,
because of memory allocations done when pagefaulting inside copy_*_user.
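
Both GTT paths now reduce to the same sketch, with all ww locking,
domain flushing and pinning hidden inside the helper:

	vma = i915_gem_gtt_prepare(obj, &node, write);
	if (IS_ERR(vma)) {
		ret = PTR_ERR(vma);
		goto out_rpm;
	}

	/* ... copy to/from the aperture or the fallback node ... */

	i915_gem_gtt_cleanup(obj, &node, vma);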

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/Makefile              |   1 -
 drivers/gpu/drm/i915/gem/i915_gem_fence.c  |  95 ---------
 drivers/gpu/drm/i915/gem/i915_gem_object.h |   5 -
 drivers/gpu/drm/i915/i915_gem.c            | 215 +++++++++++----------
 4 files changed, 112 insertions(+), 204 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/gem/i915_gem_fence.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index bc6138880c67..a1d6d468e65d 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -140,7 +140,6 @@ gem-y += \
 	gem/i915_gem_dmabuf.o \
 	gem/i915_gem_domain.o \
 	gem/i915_gem_execbuffer.o \
-	gem/i915_gem_fence.o \
 	gem/i915_gem_internal.o \
 	gem/i915_gem_object.o \
 	gem/i915_gem_object_blt.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_fence.c b/drivers/gpu/drm/i915/gem/i915_gem_fence.c
deleted file mode 100644
index 8ab842c80f99..000000000000
--- a/drivers/gpu/drm/i915/gem/i915_gem_fence.c
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * SPDX-License-Identifier: MIT
- *
- * Copyright © 2019 Intel Corporation
- */
-
-#include "i915_drv.h"
-#include "i915_gem_object.h"
-
-struct stub_fence {
-	struct dma_fence dma;
-	struct i915_sw_fence chain;
-};
-
-static int __i915_sw_fence_call
-stub_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
-{
-	struct stub_fence *stub = container_of(fence, typeof(*stub), chain);
-
-	switch (state) {
-	case FENCE_COMPLETE:
-		dma_fence_signal(&stub->dma);
-		break;
-
-	case FENCE_FREE:
-		dma_fence_put(&stub->dma);
-		break;
-	}
-
-	return NOTIFY_DONE;
-}
-
-static const char *stub_driver_name(struct dma_fence *fence)
-{
-	return DRIVER_NAME;
-}
-
-static const char *stub_timeline_name(struct dma_fence *fence)
-{
-	return "object";
-}
-
-static void stub_release(struct dma_fence *fence)
-{
-	struct stub_fence *stub = container_of(fence, typeof(*stub), dma);
-
-	i915_sw_fence_fini(&stub->chain);
-
-	BUILD_BUG_ON(offsetof(typeof(*stub), dma));
-	dma_fence_free(&stub->dma);
-}
-
-static const struct dma_fence_ops stub_fence_ops = {
-	.get_driver_name = stub_driver_name,
-	.get_timeline_name = stub_timeline_name,
-	.release = stub_release,
-};
-
-struct dma_fence *
-i915_gem_object_lock_fence(struct drm_i915_gem_object *obj)
-{
-	struct stub_fence *stub;
-
-	assert_object_held(obj);
-
-	stub = kmalloc(sizeof(*stub), GFP_KERNEL);
-	if (!stub)
-		return NULL;
-
-	i915_sw_fence_init(&stub->chain, stub_notify);
-	dma_fence_init(&stub->dma, &stub_fence_ops, &stub->chain.wait.lock,
-		       0, 0);
-
-	if (i915_sw_fence_await_reservation(&stub->chain,
-					    obj->base.resv, NULL, true,
-					    i915_fence_timeout(to_i915(obj->base.dev)),
-					    I915_FENCE_GFP) < 0)
-		goto err;
-
-	dma_resv_add_excl_fence(obj->base.resv, &stub->dma);
-
-	return &stub->dma;
-
-err:
-	stub_release(&stub->dma);
-	return NULL;
-}
-
-void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
-				  struct dma_fence *fence)
-{
-	struct stub_fence *stub = container_of(fence, typeof(*stub), dma);
-
-	i915_sw_fence_commit(&stub->chain);
-}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index fef0d62f3eb7..6c3f75adb53c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -189,11 +189,6 @@ static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj)
 	dma_resv_unlock(obj->base.resv);
 }
 
-struct dma_fence *
-i915_gem_object_lock_fence(struct drm_i915_gem_object *obj);
-void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
-				  struct dma_fence *fence);
-
 static inline void
 i915_gem_object_set_readonly(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index edb3a32b062f..7f6165816872 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -204,7 +204,6 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 {
 	unsigned int needs_clflush;
 	unsigned int idx, offset;
-	struct dma_fence *fence;
 	char __user *user_data;
 	u64 remain;
 	int ret;
@@ -213,19 +212,17 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
+	ret = i915_gem_object_pin_pages(obj);
+	if (ret)
+		goto err_unlock;
+
 	ret = i915_gem_object_prepare_read(obj, &needs_clflush);
-	if (ret) {
-		i915_gem_object_unlock(obj);
-		return ret;
-	}
+	if (ret)
+		goto err_unpin;
 
-	fence = i915_gem_object_lock_fence(obj);
 	i915_gem_object_finish_access(obj);
 	i915_gem_object_unlock(obj);
 
-	if (!fence)
-		return -ENOMEM;
-
 	remain = args->size;
 	user_data = u64_to_user_ptr(args->data_ptr);
 	offset = offset_in_page(args->offset);
@@ -243,7 +240,13 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 		offset = 0;
 	}
 
-	i915_gem_object_unlock_fence(obj, fence);
+	i915_gem_object_unpin_pages(obj);
+	return ret;
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
+err_unlock:
+	i915_gem_object_unlock(obj);
 	return ret;
 }
 
@@ -271,48 +274,99 @@ gtt_user_read(struct io_mapping *mapping,
 	return unwritten;
 }
 
-static int
-i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
-		   const struct drm_i915_gem_pread *args)
+static struct i915_vma *i915_gem_gtt_prepare(struct drm_i915_gem_object *obj,
+					     struct drm_mm_node *node,
+					     bool write)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct i915_ggtt *ggtt = &i915->ggtt;
-	intel_wakeref_t wakeref;
-	struct drm_mm_node node;
-	struct dma_fence *fence;
-	void __user *user_data;
 	struct i915_vma *vma;
-	u64 remain, offset;
+	struct i915_gem_ww_ctx ww;
 	int ret;
 
-	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
 	vma = ERR_PTR(-ENODEV);
+	ret = i915_gem_object_lock(obj, &ww);
+	if (ret)
+		goto err_ww;
+
+	i915_gem_object_set_to_gtt_domain(obj, write);
 	if (!i915_gem_object_is_tiled(obj))
-		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
-					       PIN_MAPPABLE |
-					       PIN_NONBLOCK /* NOWARN */ |
-					       PIN_NOEVICT);
-	if (!IS_ERR(vma)) {
-		node.start = i915_ggtt_offset(vma);
-		node.flags = 0;
+		vma = i915_gem_object_ggtt_pin_ww(obj, &ww, NULL, 0, 0,
+						  PIN_MAPPABLE |
+						  PIN_NONBLOCK /* NOWARN */ |
+						  PIN_NOEVICT);
+	if (vma == ERR_PTR(-EDEADLK)) {
+		ret = -EDEADLK;
+		goto err_ww;
+	} else if (!IS_ERR(vma)) {
+		node->start = i915_ggtt_offset(vma);
+		node->flags = 0;
 	} else {
-		ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
+		ret = insert_mappable_node(ggtt, node, PAGE_SIZE);
 		if (ret)
-			goto out_rpm;
-		GEM_BUG_ON(!drm_mm_node_allocated(&node));
+			goto err_ww;
+		GEM_BUG_ON(!drm_mm_node_allocated(node));
+		vma = NULL;
 	}
 
-	ret = i915_gem_object_lock_interruptible(obj, NULL);
-	if (ret)
-		goto out_unpin;
+	ret = i915_gem_object_pin_pages(obj);
+	if (ret) {
+		if (drm_mm_node_allocated(node)) {
+			ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
+			remove_mappable_node(ggtt, node);
+		} else {
+			i915_vma_unpin(vma);
+		}
+	}
+
+err_ww:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 
-	i915_gem_object_set_to_gtt_domain(obj, false);
+	return ret ? ERR_PTR(ret) : vma;
+}
 
-	fence = i915_gem_object_lock_fence(obj);
-	i915_gem_object_unlock(obj);
-	if (!fence) {
-		ret = -ENOMEM;
-		goto out_unpin;
+static void i915_gem_gtt_cleanup(struct drm_i915_gem_object *obj,
+				 struct drm_mm_node *node,
+				 struct i915_vma *vma)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	i915_gem_object_unpin_pages(obj);
+	if (drm_mm_node_allocated(node)) {
+		ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
+		remove_mappable_node(ggtt, node);
+	} else {
+		i915_vma_unpin(vma);
+	}
+}
+
+static int
+i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
+		   const struct drm_i915_gem_pread *args)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	intel_wakeref_t wakeref;
+	struct drm_mm_node node;
+	void __user *user_data;
+	struct i915_vma *vma;
+	u64 remain, offset;
+	int ret = 0;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	vma = i915_gem_gtt_prepare(obj, &node, false);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto out_rpm;
 	}
 
 	user_data = u64_to_user_ptr(args->data_ptr);
@@ -349,14 +403,7 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
 		offset += page_length;
 	}
 
-	i915_gem_object_unlock_fence(obj, fence);
-out_unpin:
-	if (drm_mm_node_allocated(&node)) {
-		ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
-		remove_mappable_node(ggtt, &node);
-	} else {
-		i915_vma_unpin(vma);
-	}
+	i915_gem_gtt_cleanup(obj, &node, vma);
 out_rpm:
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 	return ret;
@@ -414,15 +461,10 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto out;
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		goto out;
-
 	ret = i915_gem_shmem_pread(obj, args);
 	if (ret == -EFAULT || ret == -ENODEV)
 		ret = i915_gem_gtt_pread(obj, args);
 
-	i915_gem_object_unpin_pages(obj);
 out:
 	i915_gem_object_put(obj);
 	return ret;
@@ -470,11 +512,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 	struct intel_runtime_pm *rpm = &i915->runtime_pm;
 	intel_wakeref_t wakeref;
 	struct drm_mm_node node;
-	struct dma_fence *fence;
 	struct i915_vma *vma;
 	u64 remain, offset;
 	void __user *user_data;
-	int ret;
+	int ret = 0;
 
 	if (i915_gem_object_has_struct_page(obj)) {
 		/*
@@ -492,33 +533,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 		wakeref = intel_runtime_pm_get(rpm);
 	}
 
-	vma = ERR_PTR(-ENODEV);
-	if (!i915_gem_object_is_tiled(obj))
-		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
-					       PIN_MAPPABLE |
-					       PIN_NONBLOCK /* NOWARN */ |
-					       PIN_NOEVICT);
-	if (!IS_ERR(vma)) {
-		node.start = i915_ggtt_offset(vma);
-		node.flags = 0;
-	} else {
-		ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
-		if (ret)
-			goto out_rpm;
-		GEM_BUG_ON(!drm_mm_node_allocated(&node));
-	}
-
-	ret = i915_gem_object_lock_interruptible(obj, NULL);
-	if (ret)
-		goto out_unpin;
-
-	i915_gem_object_set_to_gtt_domain(obj, true);
-
-	fence = i915_gem_object_lock_fence(obj);
-	i915_gem_object_unlock(obj);
-	if (!fence) {
-		ret = -ENOMEM;
-		goto out_unpin;
+	vma = i915_gem_gtt_prepare(obj, &node, true);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto out_rpm;
 	}
 
 	i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
@@ -567,14 +585,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 	intel_gt_flush_ggtt_writes(ggtt->vm.gt);
 	i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
 
-	i915_gem_object_unlock_fence(obj, fence);
-out_unpin:
-	if (drm_mm_node_allocated(&node)) {
-		ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
-		remove_mappable_node(ggtt, &node);
-	} else {
-		i915_vma_unpin(vma);
-	}
+	i915_gem_gtt_cleanup(obj, &node, vma);
 out_rpm:
 	intel_runtime_pm_put(rpm, wakeref);
 	return ret;
@@ -614,7 +625,6 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	unsigned int partial_cacheline_write;
 	unsigned int needs_clflush;
 	unsigned int offset, idx;
-	struct dma_fence *fence;
 	void __user *user_data;
 	u64 remain;
 	int ret;
@@ -623,19 +633,17 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
+	ret = i915_gem_object_pin_pages(obj);
+	if (ret)
+		goto err_unlock;
+
 	ret = i915_gem_object_prepare_write(obj, &needs_clflush);
-	if (ret) {
-		i915_gem_object_unlock(obj);
-		return ret;
-	}
+	if (ret)
+		goto err_unpin;
 
-	fence = i915_gem_object_lock_fence(obj);
 	i915_gem_object_finish_access(obj);
 	i915_gem_object_unlock(obj);
 
-	if (!fence)
-		return -ENOMEM;
-
 	/* If we don't overwrite a cacheline completely we need to be
 	 * careful to have up-to-date data by first clflushing. Don't
 	 * overcomplicate things and flush the entire patch.
@@ -663,8 +671,14 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	}
 
 	i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
-	i915_gem_object_unlock_fence(obj, fence);
 
+	i915_gem_object_unpin_pages(obj);
+	return ret;
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
+err_unlock:
+	i915_gem_object_unlock(obj);
 	return ret;
 }
 
@@ -721,10 +735,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto err;
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		goto err;
-
 	ret = -EFAULT;
 	/* We can only do the GTT pwrite on untiled buffers, as otherwise
 	 * it would end up going through the fenced access, and we'll get
@@ -745,7 +755,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 			ret = i915_gem_shmem_pwrite(obj, args);
 	}
 
-	i915_gem_object_unpin_pages(obj);
 err:
 	i915_gem_object_put(obj);
 	return ret;
-- 
2.30.1

* [Intel-gfx] [PATCH v8 31/69] drm/i915: Fix workarounds selftest, part 1
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (29 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 30/69] drm/i915: Fix pread/pwrite to work with new locking rules Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 32/69] drm/i915: Prepare for obj->mm.lock removal, v2 Maarten Lankhorst
                   ` (42 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

pin_map needs the ww lock, so ensure we lock and pin both objects
before submission.
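
Sketch of how the selftest now acquires both objects and the context
in one ww transaction before building the request:

	i915_gem_ww_ctx_init(&ww, false);
retry:
	err = i915_gem_object_lock(scratch->obj, &ww);
	if (!err)
		err = i915_gem_object_lock(batch->obj, &ww);
	if (!err)
		err = intel_context_pin_ww(ce, &ww);
	if (err == -EDEADLK) {
		/* both locks are dropped together before retrying */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}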

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  3 +
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 12 +++
 .../gpu/drm/i915/gt/selftest_workarounds.c    | 95 +++++++++++++------
 3 files changed, 80 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 6c3f75adb53c..983f2d4b2a85 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -437,6 +437,9 @@ void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 void *__must_check i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 					   enum i915_map_type type);
 
+void *__must_check i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
+						    enum i915_map_type type);
+
 void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
 				 unsigned long offset,
 				 unsigned long size);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index e947d4c0da1f..a24617af3c93 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -400,6 +400,18 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	goto out_unlock;
 }
 
+void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
+				       enum i915_map_type type)
+{
+	void *ret;
+
+	i915_gem_object_lock(obj, NULL);
+	ret = i915_gem_object_pin_map(obj, type);
+	i915_gem_object_unlock(obj);
+
+	return ret;
+}
+
 void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
 				 unsigned long offset,
 				 unsigned long size)
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index de6136bd10ac..a508614b2fd5 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -103,15 +103,13 @@ read_nonprivs(struct intel_context *ce, struct i915_vma *result)
 	int err;
 	int i;
 
-	rq = intel_context_create_request(ce);
+	rq = i915_request_create(ce);
 	if (IS_ERR(rq))
 		return rq;
 
-	i915_vma_lock(result);
 	err = i915_request_await_object(rq, result->obj, true);
 	if (err == 0)
 		err = i915_vma_move_to_active(result, rq, EXEC_OBJECT_WRITE);
-	i915_vma_unlock(result);
 	if (err)
 		goto err_rq;
 
@@ -176,10 +174,11 @@ static int check_whitelist(struct intel_context *ce)
 	u32 *vaddr;
 	int i;
 
-	result = __vm_create_scratch_for_read(&engine->gt->ggtt->vm, PAGE_SIZE);
+	result = __vm_create_scratch_for_read_pinned(&engine->gt->ggtt->vm, PAGE_SIZE);
 	if (IS_ERR(result))
 		return PTR_ERR(result);
 
+	i915_gem_object_lock(result->obj, NULL);
 	vaddr = i915_gem_object_pin_map(result->obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
@@ -219,6 +218,8 @@ static int check_whitelist(struct intel_context *ce)
 out_map:
 	i915_gem_object_unpin_map(result->obj);
 out_put:
+	i915_vma_unpin(result);
+	i915_gem_object_unlock(result->obj);
 	i915_vma_put(result);
 	return err;
 }
@@ -279,10 +280,14 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine,
 	if (IS_ERR(ce))
 		return PTR_ERR(ce);
 
-	err = igt_spinner_init(&spin, engine->gt);
+	err = intel_context_pin(ce);
 	if (err)
 		goto out_ctx;
 
+	err = igt_spinner_init(&spin, engine->gt);
+	if (err)
+		goto out_unpin;
+
 	err = check_whitelist(ce);
 	if (err) {
 		pr_err("Invalid whitelist *before* %s reset!\n", name);
@@ -315,6 +320,13 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine,
 		err = PTR_ERR(tmp);
 		goto out_spin;
 	}
+	err = intel_context_pin(tmp);
+	if (err) {
+		intel_context_put(tmp);
+		goto out_spin;
+	}
+
+	intel_context_unpin(ce);
 	intel_context_put(ce);
 	ce = tmp;
 
@@ -327,6 +339,8 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine,
 
 out_spin:
 	igt_spinner_fini(&spin);
+out_unpin:
+	intel_context_unpin(ce);
 out_ctx:
 	intel_context_put(ce);
 	return err;
@@ -475,6 +489,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 
 	for (i = 0; i < engine->whitelist.count; i++) {
 		u32 reg = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
+		struct i915_gem_ww_ctx ww;
 		u64 addr = scratch->node.start;
 		struct i915_request *rq;
 		u32 srm, lrm, rsvd;
@@ -490,6 +505,29 @@ static int check_dirty_whitelist(struct intel_context *ce)
 
 		ro_reg = ro_register(reg);
 
+		i915_gem_ww_ctx_init(&ww, false);
+retry:
+		cs = NULL;
+		err = i915_gem_object_lock(scratch->obj, &ww);
+		if (!err)
+			err = i915_gem_object_lock(batch->obj, &ww);
+		if (!err)
+			err = intel_context_pin_ww(ce, &ww);
+		if (err)
+			goto out;
+
+		cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+		if (IS_ERR(cs)) {
+			err = PTR_ERR(cs);
+			goto out_ctx;
+		}
+
+		results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+		if (IS_ERR(results)) {
+			err = PTR_ERR(results);
+			goto out_unmap_batch;
+		}
+
 		/* Clear non priv flags */
 		reg &= RING_FORCE_TO_NONPRIV_ADDRESS_MASK;
 
@@ -501,12 +539,6 @@ static int check_dirty_whitelist(struct intel_context *ce)
 		pr_debug("%s: Writing garbage to %x\n",
 			 engine->name, reg);
 
-		cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
-		if (IS_ERR(cs)) {
-			err = PTR_ERR(cs);
-			goto out_batch;
-		}
-
 		/* SRM original */
 		*cs++ = srm;
 		*cs++ = reg;
@@ -553,11 +585,12 @@ static int check_dirty_whitelist(struct intel_context *ce)
 		i915_gem_object_flush_map(batch->obj);
 		i915_gem_object_unpin_map(batch->obj);
 		intel_gt_chipset_flush(engine->gt);
+		cs = NULL;
 
-		rq = intel_context_create_request(ce);
+		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_batch;
+			goto out_unmap_scratch;
 		}
 
 		if (engine->emit_init_breadcrumb) { /* Be nice if we hang */
@@ -566,20 +599,16 @@ static int check_dirty_whitelist(struct intel_context *ce)
 				goto err_request;
 		}
 
-		i915_vma_lock(batch);
 		err = i915_request_await_object(rq, batch->obj, false);
 		if (err == 0)
 			err = i915_vma_move_to_active(batch, rq, 0);
-		i915_vma_unlock(batch);
 		if (err)
 			goto err_request;
 
-		i915_vma_lock(scratch);
 		err = i915_request_await_object(rq, scratch->obj, true);
 		if (err == 0)
 			err = i915_vma_move_to_active(scratch, rq,
 						      EXEC_OBJECT_WRITE);
-		i915_vma_unlock(scratch);
 		if (err)
 			goto err_request;
 
@@ -595,13 +624,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 			pr_err("%s: Futzing %x timedout; cancelling test\n",
 			       engine->name, reg);
 			intel_gt_set_wedged(engine->gt);
-			goto out_batch;
-		}
-
-		results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
-		if (IS_ERR(results)) {
-			err = PTR_ERR(results);
-			goto out_batch;
+			goto out_unmap_scratch;
 		}
 
 		GEM_BUG_ON(values[ARRAY_SIZE(values) - 1] != 0xffffffff);
@@ -612,7 +635,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 				pr_err("%s: Unable to write to whitelisted register %x\n",
 				       engine->name, reg);
 				err = -EINVAL;
-				goto out_unpin;
+				goto out_unmap_scratch;
 			}
 		} else {
 			rsvd = 0;
@@ -678,15 +701,27 @@ static int check_dirty_whitelist(struct intel_context *ce)
 
 			err = -EINVAL;
 		}
-out_unpin:
+out_unmap_scratch:
 		i915_gem_object_unpin_map(scratch->obj);
+out_unmap_batch:
+		if (cs)
+			i915_gem_object_unpin_map(batch->obj);
+out_ctx:
+		intel_context_unpin(ce);
+out:
+		if (err == -EDEADLK) {
+			err = i915_gem_ww_ctx_backoff(&ww);
+			if (!err)
+				goto retry;
+		}
+		i915_gem_ww_ctx_fini(&ww);
 		if (err)
 			break;
 	}
 
 	if (igt_flush_test(engine->i915))
 		err = -EIO;
-out_batch:
+
 	i915_vma_unpin_and_release(&batch, 0);
 out_scratch:
 	i915_vma_unpin_and_release(&scratch, 0);
@@ -820,7 +855,7 @@ static int scrub_whitelisted_registers(struct intel_context *ce)
 	if (IS_ERR(batch))
 		return PTR_ERR(batch);
 
-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_batch;
@@ -955,11 +990,11 @@ check_whitelisted_registers(struct intel_engine_cs *engine,
 	u32 *a, *b;
 	int i, err;
 
-	a = i915_gem_object_pin_map(A->obj, I915_MAP_WB);
+	a = i915_gem_object_pin_map_unlocked(A->obj, I915_MAP_WB);
 	if (IS_ERR(a))
 		return PTR_ERR(a);
 
-	b = i915_gem_object_pin_map(B->obj, I915_MAP_WB);
+	b = i915_gem_object_pin_map_unlocked(B->obj, I915_MAP_WB);
 	if (IS_ERR(b)) {
 		err = PTR_ERR(b);
 		goto err_a;
-- 
2.30.1

* [Intel-gfx] [PATCH v8 32/69] drm/i915: Prepare for obj->mm.lock removal, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (30 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 31/69] drm/i915: Fix workarounds selftest, part 1 Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 33/69] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner Maarten Lankhorst
                   ` (41 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, Thomas Hellström

From: Thomas Hellström <thomas.hellstrom@intel.com>

Stolen objects need to take the object lock, and we may call put_pages
when the refcount drops to 0; ensure all calls are handled correctly.
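
As a caller-side sketch (hedged; the helper added in the diff below is
the authoritative version): the regular path takes the object lock
explicitly, while the final-put path is exempt, because list lookups use
kref_get_unless_zero() and therefore cannot race with a zero refcount:

	/* Regular path: put_pages under the object lock (sketch). */
	i915_gem_object_lock(obj, NULL);
	__i915_gem_object_put_pages(obj);
	i915_gem_object_unlock(obj);

	/*
	 * Final put: the refcount has already dropped to 0, nobody else
	 * can look the object up, so assert_object_held_shared() skips
	 * the lockdep check.
	 */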

Changes since v1:
- Rebase on top of upstream changes.

Idea-from: Thomas Hellström <thomas.hellstrom@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h | 14 ++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_pages.c  | 14 ++++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 12 +++++++-----
 3 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 983f2d4b2a85..74de195b57de 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -144,6 +144,20 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
 
 #define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv)
 
+/*
+ * If more than one potential simultaneous locker, assert held.
+ */
+static inline void assert_object_held_shared(struct drm_i915_gem_object *obj)
+{
+	/*
+	 * Note mm list lookup is protected by
+	 * kref_get_unless_zero().
+	 */
+	if (IS_ENABLED(CONFIG_LOCKDEP) &&
+	    kref_read(&obj->base.refcount) > 0)
+		lockdep_assert_held(&obj->mm.lock);
+}
+
 static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj,
 					 struct i915_gem_ww_ctx *ww,
 					 bool intr)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index a24617af3c93..2d0065fa6e80 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -19,7 +19,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 	bool shrinkable;
 	int i;
 
-	lockdep_assert_held(&obj->mm.lock);
+	assert_object_held_shared(obj);
 
 	if (i915_gem_object_is_volatile(obj))
 		obj->mm.madv = I915_MADV_DONTNEED;
@@ -70,6 +70,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		struct list_head *list;
 		unsigned long flags;
 
+		lockdep_assert_held(&obj->mm.lock);
 		spin_lock_irqsave(&i915->mm.obj_lock, flags);
 
 		i915->mm.shrink_count++;
@@ -91,6 +92,8 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	int err;
 
+	assert_object_held_shared(obj);
+
 	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
 		drm_dbg(&i915->drm,
 			"Attempting to obtain a purgeable object\n");
@@ -118,6 +121,8 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	if (err)
 		return err;
 
+	assert_object_held_shared(obj);
+
 	if (unlikely(!i915_gem_object_has_pages(obj))) {
 		GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
@@ -145,7 +150,7 @@ void i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 /* Try to discard unwanted pages */
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj)
 {
-	lockdep_assert_held(&obj->mm.lock);
+	assert_object_held_shared(obj);
 	GEM_BUG_ON(i915_gem_object_has_pages(obj));
 
 	if (obj->ops->writeback)
@@ -176,6 +181,8 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
 
+	assert_object_held_shared(obj);
+
 	pages = fetch_and_zero(&obj->mm.pages);
 	if (IS_ERR_OR_NULL(pages))
 		return pages;
@@ -203,6 +210,9 @@ int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
 	if (i915_gem_object_has_pinned_pages(obj))
 		return -EBUSY;
 
+	/* May be called by shrinker from within get_pages() (on another bo) */
+	assert_object_held_shared(obj);
+
 	i915_gem_object_release_mmap_offset(obj);
 
 	/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 7cdb32d881d9..b0597de206de 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -637,13 +637,15 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 	cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
 	i915_gem_object_set_cache_coherency(obj, cache_level);
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
+	if (WARN_ON(!i915_gem_object_trylock(obj)))
+		return -EBUSY;
 
-	i915_gem_object_init_memory_region(obj, mem);
+	err = i915_gem_object_pin_pages(obj);
+	if (!err)
+		i915_gem_object_init_memory_region(obj, mem);
+	i915_gem_object_unlock(obj);
 
-	return 0;
+	return err;
 }
 
 static int _i915_gem_object_stolen_init(struct intel_memory_region *mem,
-- 
2.30.1


* [Intel-gfx] [PATCH v8 33/69] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (31 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 32/69] drm/i915: Prepare for obj->mm.lock removal, v2 Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 34/69] drm/i915: Add ww locking around vm_access() Maarten Lankhorst
                   ` (40 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

By default, we assume that pinning is done inside
igt_spinner_create_request() to keep existing selftests working, but
allow for manual pinning when passing a ww context.
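
A rough usage sketch of the manual mode, assuming the caller owns the
ww transaction (igt_spinner_pin() and the ww helpers as used in this
series):

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = igt_spinner_pin(&spin, ce, &ww);
	/* ... lock and pin further objects in the same transaction ... */
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);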

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/igt_spinner.c | 136 ++++++++++++-------
 drivers/gpu/drm/i915/selftests/igt_spinner.h |   5 +
 2 files changed, 95 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.c b/drivers/gpu/drm/i915/selftests/igt_spinner.c
index 0e6c1ea0082a..243cb4b9a2ee 100644
--- a/drivers/gpu/drm/i915/selftests/igt_spinner.c
+++ b/drivers/gpu/drm/i915/selftests/igt_spinner.c
@@ -12,8 +12,6 @@
 
 int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt)
 {
-	unsigned int mode;
-	void *vaddr;
 	int err;
 
 	memset(spin, 0, sizeof(*spin));
@@ -24,6 +22,7 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt)
 		err = PTR_ERR(spin->hws);
 		goto err;
 	}
+	i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC);
 
 	spin->obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE);
 	if (IS_ERR(spin->obj)) {
@@ -31,34 +30,83 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt)
 		goto err_hws;
 	}
 
-	i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC);
-	vaddr = i915_gem_object_pin_map(spin->hws, I915_MAP_WB);
-	if (IS_ERR(vaddr)) {
-		err = PTR_ERR(vaddr);
-		goto err_obj;
-	}
-	spin->seqno = memset(vaddr, 0xff, PAGE_SIZE);
-
-	mode = i915_coherent_map_type(gt->i915);
-	vaddr = i915_gem_object_pin_map(spin->obj, mode);
-	if (IS_ERR(vaddr)) {
-		err = PTR_ERR(vaddr);
-		goto err_unpin_hws;
-	}
-	spin->batch = vaddr;
-
 	return 0;
 
-err_unpin_hws:
-	i915_gem_object_unpin_map(spin->hws);
-err_obj:
-	i915_gem_object_put(spin->obj);
 err_hws:
 	i915_gem_object_put(spin->hws);
 err:
 	return err;
 }
 
+static void *igt_spinner_pin_obj(struct intel_context *ce,
+				 struct i915_gem_ww_ctx *ww,
+				 struct drm_i915_gem_object *obj,
+				 unsigned int mode, struct i915_vma **vma)
+{
+	void *vaddr;
+	int ret;
+
+	*vma = i915_vma_instance(obj, ce->vm, NULL);
+	if (IS_ERR(*vma))
+		return ERR_CAST(*vma);
+
+	ret = i915_gem_object_lock(obj, ww);
+	if (ret)
+		return ERR_PTR(ret);
+
+	vaddr = i915_gem_object_pin_map(obj, mode);
+
+	if (!ww)
+		i915_gem_object_unlock(obj);
+
+	if (IS_ERR(vaddr))
+		return vaddr;
+
+	if (ww)
+		ret = i915_vma_pin_ww(*vma, ww, 0, 0, PIN_USER);
+	else
+		ret = i915_vma_pin(*vma, 0, 0, PIN_USER);
+
+	if (ret) {
+		i915_gem_object_unpin_map(obj);
+		return ERR_PTR(ret);
+	}
+
+	return vaddr;
+}
+
+int igt_spinner_pin(struct igt_spinner *spin,
+		    struct intel_context *ce,
+		    struct i915_gem_ww_ctx *ww)
+{
+	void *vaddr;
+
+	if (spin->ce && WARN_ON(spin->ce != ce))
+		return -ENODEV;
+	spin->ce = ce;
+
+	if (!spin->seqno) {
+		vaddr = igt_spinner_pin_obj(ce, ww, spin->hws, I915_MAP_WB, &spin->hws_vma);
+		if (IS_ERR(vaddr))
+			return PTR_ERR(vaddr);
+
+		spin->seqno = memset(vaddr, 0xff, PAGE_SIZE);
+	}
+
+	if (!spin->batch) {
+		unsigned int mode =
+			i915_coherent_map_type(spin->gt->i915);
+
+		vaddr = igt_spinner_pin_obj(ce, ww, spin->obj, mode, &spin->batch_vma);
+		if (IS_ERR(vaddr))
+			return PTR_ERR(vaddr);
+
+		spin->batch = vaddr;
+	}
+
+	return 0;
+}
+
 static unsigned int seqno_offset(u64 fence)
 {
 	return offset_in_page(sizeof(u32) * fence);
@@ -103,27 +151,18 @@ igt_spinner_create_request(struct igt_spinner *spin,
 	if (!intel_engine_can_store_dword(ce->engine))
 		return ERR_PTR(-ENODEV);
 
-	vma = i915_vma_instance(spin->obj, ce->vm, NULL);
-	if (IS_ERR(vma))
-		return ERR_CAST(vma);
-
-	hws = i915_vma_instance(spin->hws, ce->vm, NULL);
-	if (IS_ERR(hws))
-		return ERR_CAST(hws);
+	if (!spin->batch) {
+		err = igt_spinner_pin(spin, ce, NULL);
+		if (err)
+			return ERR_PTR(err);
+	}
 
-	err = i915_vma_pin(vma, 0, 0, PIN_USER);
-	if (err)
-		return ERR_PTR(err);
-
-	err = i915_vma_pin(hws, 0, 0, PIN_USER);
-	if (err)
-		goto unpin_vma;
+	hws = spin->hws_vma;
+	vma = spin->batch_vma;
 
 	rq = intel_context_create_request(ce);
-	if (IS_ERR(rq)) {
-		err = PTR_ERR(rq);
-		goto unpin_hws;
-	}
+	if (IS_ERR(rq))
+		return ERR_CAST(rq);
 
 	err = move_to_active(vma, rq, 0);
 	if (err)
@@ -186,10 +225,6 @@ igt_spinner_create_request(struct igt_spinner *spin,
 		i915_request_set_error_once(rq, err);
 		i915_request_add(rq);
 	}
-unpin_hws:
-	i915_vma_unpin(hws);
-unpin_vma:
-	i915_vma_unpin(vma);
 	return err ? ERR_PTR(err) : rq;
 }
 
@@ -203,6 +238,9 @@ hws_seqno(const struct igt_spinner *spin, const struct i915_request *rq)
 
 void igt_spinner_end(struct igt_spinner *spin)
 {
+	if (!spin->batch)
+		return;
+
 	*spin->batch = MI_BATCH_BUFFER_END;
 	intel_gt_chipset_flush(spin->gt);
 }
@@ -211,10 +249,16 @@ void igt_spinner_fini(struct igt_spinner *spin)
 {
 	igt_spinner_end(spin);
 
-	i915_gem_object_unpin_map(spin->obj);
+	if (spin->batch) {
+		i915_vma_unpin(spin->batch_vma);
+		i915_gem_object_unpin_map(spin->obj);
+	}
 	i915_gem_object_put(spin->obj);
 
-	i915_gem_object_unpin_map(spin->hws);
+	if (spin->seqno) {
+		i915_vma_unpin(spin->hws_vma);
+		i915_gem_object_unpin_map(spin->hws);
+	}
 	i915_gem_object_put(spin->hws);
 }
 
diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.h b/drivers/gpu/drm/i915/selftests/igt_spinner.h
index ec62c9ef320b..fbe5b1625b05 100644
--- a/drivers/gpu/drm/i915/selftests/igt_spinner.h
+++ b/drivers/gpu/drm/i915/selftests/igt_spinner.h
@@ -20,11 +20,16 @@ struct igt_spinner {
 	struct intel_gt *gt;
 	struct drm_i915_gem_object *hws;
 	struct drm_i915_gem_object *obj;
+	struct intel_context *ce;
+	struct i915_vma *hws_vma, *batch_vma;
 	u32 *batch;
 	void *seqno;
 };
 
 int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt);
+int igt_spinner_pin(struct igt_spinner *spin,
+		    struct intel_context *ce,
+		    struct i915_gem_ww_ctx *ww);
 void igt_spinner_fini(struct igt_spinner *spin);
 
 struct i915_request *
-- 
2.30.1


* [Intel-gfx] [PATCH v8 34/69] drm/i915: Add ww locking around vm_access()
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (32 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 33/69] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 35/69] drm/i915: Increase ww locking for perf Maarten Lankhorst
                   ` (39 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

i915_gem_object_pin_map() potentially needs a ww context, so ensure we
have one that we can back off and retry on -EDEADLK.
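
The shape added here is the series-wide transaction pattern; as a
distilled sketch (interruptible transaction, helper names as used
throughout the series):

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err) {
		/* pin_map, access and unpin under the lock */
	}
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);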

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 163208a6260d..2561a2f1e54f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -421,7 +421,9 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 {
 	struct i915_mmap_offset *mmo = area->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
+	struct i915_gem_ww_ctx ww;
 	void *vaddr;
+	int err = 0;
 
 	if (i915_gem_object_is_readonly(obj) && write)
 		return -EACCES;
@@ -430,10 +432,18 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 	if (addr >= obj->base.size)
 		return -EINVAL;
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (err)
+		goto out;
+
 	/* As this is primarily for debugging, let's focus on simplicity */
 	vaddr = i915_gem_object_pin_map(obj, I915_MAP_FORCE_WC);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto out;
+	}
 
 	if (write) {
 		memcpy(vaddr + addr, buf, len);
@@ -443,6 +453,16 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 	}
 
 	i915_gem_object_unpin_map(obj);
+out:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	if (err)
+		return err;
 
 	return len;
 }
-- 
2.30.1


* [Intel-gfx] [PATCH v8 35/69] drm/i915: Increase ww locking for perf.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (33 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 34/69] drm/i915: Add ww locking around vm_access() Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 36/69] drm/i915: Lock ww in ucode objects correctly Maarten Lankhorst
                   ` (38 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to lock a few more objects, some only temporarily; add ww
locking where needed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 56 ++++++++++++++++++++++++--------
 1 file changed, 43 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index c15bead2dac7..d13d1d9d4039 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1576,7 +1576,7 @@ static int alloc_oa_buffer(struct i915_perf_stream *stream)
 	stream->oa_buffer.vma = vma;
 
 	stream->oa_buffer.vaddr =
-		i915_gem_object_pin_map(bo, I915_MAP_WB);
+		i915_gem_object_pin_map_unlocked(bo, I915_MAP_WB);
 	if (IS_ERR(stream->oa_buffer.vaddr)) {
 		ret = PTR_ERR(stream->oa_buffer.vaddr);
 		goto err_unpin;
@@ -1630,6 +1630,7 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 	const u32 base = stream->engine->mmio_base;
 #define CS_GPR(x) GEN8_RING_CS_GPR(base, x)
 	u32 *batch, *ts0, *cs, *jump;
+	struct i915_gem_ww_ctx ww;
 	int ret, i;
 	enum {
 		START_TS,
@@ -1647,15 +1648,21 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 		return PTR_ERR(bo);
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(bo, &ww);
+	if (ret)
+		goto out_ww;
+
 	/*
 	 * We pin in GGTT because we jump into this buffer now because
 	 * multiple OA config BOs will have a jump to this address and it
 	 * needs to be fixed during the lifetime of the i915/perf stream.
 	 */
-	vma = i915_gem_object_ggtt_pin(bo, NULL, 0, 0, PIN_HIGH);
+	vma = i915_gem_object_ggtt_pin_ww(bo, &ww, NULL, 0, 0, PIN_HIGH);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
-		goto err_unref;
+		goto out_ww;
 	}
 
 	batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB);
@@ -1789,12 +1796,19 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 	__i915_gem_object_release_map(bo);
 
 	stream->noa_wait = vma;
-	return 0;
+	goto out_ww;
 
 err_unpin:
 	i915_vma_unpin_and_release(&vma, 0);
-err_unref:
-	i915_gem_object_put(bo);
+out_ww:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (ret)
+		i915_gem_object_put(bo);
 	return ret;
 }
 
@@ -1837,6 +1851,7 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_oa_config_bo *oa_bo;
+	struct i915_gem_ww_ctx ww;
 	size_t config_length = 0;
 	u32 *cs;
 	int err;
@@ -1857,10 +1872,16 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 		goto err_free;
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (err)
+		goto out_ww;
+
 	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
-		goto err_oa_bo;
+		goto out_ww;
 	}
 
 	cs = write_cs_mi_lri(cs,
@@ -1888,19 +1909,28 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 				       NULL);
 	if (IS_ERR(oa_bo->vma)) {
 		err = PTR_ERR(oa_bo->vma);
-		goto err_oa_bo;
+		goto out_ww;
 	}
 
 	oa_bo->oa_config = i915_oa_config_get(oa_config);
 	llist_add(&oa_bo->node, &stream->oa_config_bos);
 
-	return oa_bo;
+out_ww:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 
-err_oa_bo:
-	i915_gem_object_put(obj);
+	if (err)
+		i915_gem_object_put(obj);
 err_free:
-	kfree(oa_bo);
-	return ERR_PTR(err);
+	if (err) {
+		kfree(oa_bo);
+		return ERR_PTR(err);
+	}
+	return oa_bo;
 }
 
 static struct i915_vma *
-- 
2.30.1


* [Intel-gfx] [PATCH v8 36/69] drm/i915: Lock ww in ucode objects correctly
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (34 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 35/69] drm/i915: Increase ww locking for perf Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 37/69] drm/i915: Add ww locking to dma-buf ops Maarten Lankhorst
                   ` (37 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

In the ucode functions, the calls are done before userspace runs, when
debugging using debugfs, or when creating semi-permanent mappings; we
can safely use the unlocked versions that do the ww dance for us.

Because there is no pin_pages_unlocked yet, add it as a convenience
function.

This removes possible lockdep splats about missing resv lock for ucode.
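
On the caller side this reduces to a one-liner; a minimal sketch (the
uc_fw case, matching the intel_uc_fw.c hunk below):

	err = i915_gem_object_pin_pages_unlocked(uc_fw->obj);
	if (err)
		return err;
	/* pages stay pinned; they are unpinned on fini as before */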

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h |  2 ++
 drivers/gpu/drm/i915/gem/i915_gem_pages.c  | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc.c     |  2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_log.c |  4 ++--
 drivers/gpu/drm/i915/gt/uc/intel_huc.c     |  2 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c   |  2 +-
 6 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 74de195b57de..5fffa6f07560 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -392,6 +392,8 @@ i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 	return __i915_gem_object_get_pages(obj);
 }
 
+int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj);
+
 static inline bool
 i915_gem_object_has_pages(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 2d0065fa6e80..5b8af8f83ee3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -139,6 +139,26 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	return err;
 }
 
+int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
+{
+	struct i915_gem_ww_ctx ww;
+	int err;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj);
+
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	return err;
+}
+
 /* Immediately discard the backing storage */
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 4545e90e3bf1..78305b2ec89d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -682,7 +682,7 @@ int intel_guc_allocate_and_map_vma(struct intel_guc *guc, u32 size,
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
-	vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		i915_vma_unpin_and_release(&vma, 0);
 		return PTR_ERR(vaddr);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
index c92f2c056db4..c36d5eb5bbb9 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
@@ -335,7 +335,7 @@ static int guc_log_map(struct intel_guc_log *log)
 	 * buffer pages, so that we can directly get the data
 	 * (up-to-date) from memory.
 	 */
-	vaddr = i915_gem_object_pin_map(log->vma->obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(log->vma->obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -744,7 +744,7 @@ int intel_guc_log_dump(struct intel_guc_log *log, struct drm_printer *p,
 	if (!obj)
 		return 0;
 
-	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		DRM_DEBUG("Failed to pin object\n");
 		drm_puts(p, "(log data unaccessible)\n");
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
index 65eeb44b397d..2126dd81ac38 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
@@ -82,7 +82,7 @@ static int intel_huc_rsa_data_create(struct intel_huc *huc)
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
-	vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		i915_vma_unpin_and_release(&vma, 0);
 		return PTR_ERR(vaddr);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
index 984fa79e0fa7..df647c9a8d56 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
@@ -539,7 +539,7 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw)
 	if (!intel_uc_fw_is_available(uc_fw))
 		return -ENOEXEC;
 
-	err = i915_gem_object_pin_pages(uc_fw->obj);
+	err = i915_gem_object_pin_pages_unlocked(uc_fw->obj);
 	if (err) {
 		DRM_DEBUG_DRIVER("%s fw pin-pages err=%d\n",
 				 intel_uc_fw_type_repr(uc_fw->type), err);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 37/69] drm/i915: Add ww locking to dma-buf ops.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (35 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 36/69] drm/i915: Lock ww in ucode objects correctly Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 38/69] drm/i915: Add missing ww lock in intel_dsb_prepare Maarten Lankhorst
                   ` (36 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

vmap is using pin_map, but needs ww locking; switch it to
pin_map_unlocked to correctly lock the mapping.

Also add ww locking to begin/end cpu access.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 60 ++++++++++++----------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index c7100a83b8ea..7636c2644ccf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -82,7 +82,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 	void *vaddr;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -123,42 +123,48 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 	bool write = (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE);
+	struct i915_gem_ww_ctx ww;
 	int err;
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
-
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		goto out;
-
-	i915_gem_object_set_to_cpu_domain(obj, write);
-	i915_gem_object_unlock(obj);
-
-out:
-	i915_gem_object_unpin_pages(obj);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj);
+	if (!err) {
+		i915_gem_object_set_to_cpu_domain(obj, write);
+		i915_gem_object_unpin_pages(obj);
+	}
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 	return err;
 }
 
 static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction direction)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	struct i915_gem_ww_ctx ww;
 	int err;
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
-
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		goto out;
-
-	i915_gem_object_set_to_gtt_domain(obj, false);
-	i915_gem_object_unlock(obj);
-
-out:
-	i915_gem_object_unpin_pages(obj);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj);
+	if (!err) {
+		i915_gem_object_set_to_gtt_domain(obj, false);
+		i915_gem_object_unpin_pages(obj);
+	}
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 	return err;
 }
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 38/69] drm/i915: Add missing ww lock in intel_dsb_prepare.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (36 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 37/69] drm/i915: Add ww locking to dma-buf ops Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 39/69] drm/i915: Fix ww locking in shmem_create_from_object Maarten Lankhorst
                   ` (35 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Because of the long lifetime of the mapping, we cannot wrap this in a
simple, limited ww transaction. Just use the unlocked version of
pin_map, because we will likely release the mapping much later, from a
different thread.
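
A minimal sketch of the intended lifetime (names as in the hunk below):
the mapping is created once, without an outer ww transaction, and is
torn down much later through the vma release path:

	buf = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WC);
	if (IS_ERR(buf))
		return PTR_ERR(buf);
	/* DSB commands are written to buf over the mapping's lifetime */
	i915_vma_unpin_and_release(&vma, I915_VMA_RELEASE_MAP);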

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_dsb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dsb.c b/drivers/gpu/drm/i915/display/intel_dsb.c
index 566fa72427b3..857126822a88 100644
--- a/drivers/gpu/drm/i915/display/intel_dsb.c
+++ b/drivers/gpu/drm/i915/display/intel_dsb.c
@@ -293,7 +293,7 @@ void intel_dsb_prepare(struct intel_crtc_state *crtc_state)
 		goto out;
 	}
 
-	buf = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
+	buf = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WC);
 	if (IS_ERR(buf)) {
 		drm_err(&i915->drm, "Command buffer creation failed\n");
 		i915_vma_unpin_and_release(&vma, I915_VMA_RELEASE_MAP);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 39/69] drm/i915: Fix ww locking in shmem_create_from_object
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (37 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 38/69] drm/i915: Add missing ww lock in intel_dsb_prepare Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 40/69] drm/i915: Use a single page table lock for each gtt Maarten Lankhorst
                   ` (34 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Quick fix: just use the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index a4d8fc9e2374..f8f02aab842b 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -39,7 +39,7 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 		return file;
 	}
 
-	ptr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(ptr))
 		return ERR_CAST(ptr);
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 40/69] drm/i915: Use a single page table lock for each gtt.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (38 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 39/69] drm/i915: Fix ww locking in shmem_create_from_object Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 41/69] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal Maarten Lankhorst
                   ` (33 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We may create page table objects on the fly, but we may need to wait
on them with the ww lock held. Instead of waiting on a freed object's
lock, use the same reservation lock for every page-table object, so
that -EDEADLK handling keeps working. This ensures that
i915_vma_pin_ww() can lock the page tables when required.
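
The key trick, as a sketch (see alloc_pt_dma() below): every page-table
object created for a vm aliases one vm-wide reservation object, so
locking any one of them, or vm->scratch[0], locks them all:

	obj = i915_gem_object_create_internal(vm->i915, sz);
	if (!IS_ERR(obj))
		obj->base.resv = &vm->resv; /* one lock for all pd objects */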

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_ggtt.c  |  8 +++++-
 drivers/gpu/drm/i915/gt/intel_gtt.c   | 38 ++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/gt/intel_gtt.h   |  5 ++++
 drivers/gpu/drm/i915/gt/intel_ppgtt.c |  3 ++-
 drivers/gpu/drm/i915/i915_vma.c       |  5 ++++
 5 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
index c8eb034c806a..c2fc49495029 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
@@ -656,7 +656,9 @@ static int init_aliasing_ppgtt(struct i915_ggtt *ggtt)
 	if (err)
 		goto err_ppgtt;
 
+	i915_gem_object_lock(ppgtt->vm.scratch[0], NULL);
 	err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash);
+	i915_gem_object_unlock(ppgtt->vm.scratch[0]);
 	if (err)
 		goto err_stash;
 
@@ -743,6 +745,7 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
 
 	mutex_unlock(&ggtt->vm.mutex);
 	i915_address_space_fini(&ggtt->vm);
+	dma_resv_fini(&ggtt->vm.resv);
 
 	arch_phys_wc_del(ggtt->mtrr);
 
@@ -1129,6 +1132,7 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
 	ggtt->vm.gt = gt;
 	ggtt->vm.i915 = i915;
 	ggtt->vm.dma = i915->drm.dev;
+	dma_resv_init(&ggtt->vm.resv);
 
 	if (INTEL_GEN(i915) <= 5)
 		ret = i915_gmch_probe(ggtt);
@@ -1136,8 +1140,10 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
 		ret = gen6_gmch_probe(ggtt);
 	else
 		ret = gen8_gmch_probe(ggtt);
-	if (ret)
+	if (ret) {
+		dma_resv_fini(&ggtt->vm.resv);
 		return ret;
+	}
 
 	if ((ggtt->vm.total - 1) >> 32) {
 		drm_err(&i915->drm,
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 1b532a2791ea..994e4ea28903 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -13,16 +13,36 @@
 
 struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz)
 {
+	struct drm_i915_gem_object *obj;
+
 	if (I915_SELFTEST_ONLY(should_fail(&vm->fault_attr, 1)))
 		i915_gem_shrink_all(vm->i915);
 
-	return i915_gem_object_create_internal(vm->i915, sz);
+	obj = i915_gem_object_create_internal(vm->i915, sz);
+	/* ensure all dma objects have the same reservation class */
+	if (!IS_ERR(obj))
+		obj->base.resv = &vm->resv;
+	return obj;
 }
 
 int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
 {
 	int err;
 
+	i915_gem_object_lock(obj, NULL);
+	err = i915_gem_object_pin_pages(obj);
+	i915_gem_object_unlock(obj);
+	if (err)
+		return err;
+
+	i915_gem_object_make_unshrinkable(obj);
+	return 0;
+}
+
+int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
+{
+	int err;
+
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
 		return err;
@@ -56,6 +76,20 @@ void __i915_vm_close(struct i915_address_space *vm)
 	mutex_unlock(&vm->mutex);
 }
 
+/* lock the vm into the current ww, if we lock one, we lock all */
+int i915_vm_lock_objects(struct i915_address_space *vm,
+			 struct i915_gem_ww_ctx *ww)
+{
+	if (vm->scratch[0]->base.resv == &vm->resv) {
+		return i915_gem_object_lock(vm->scratch[0], ww);
+	} else {
+		struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+
+		/* We borrowed the scratch page from ggtt, take the top level object */
+		return i915_gem_object_lock(ppgtt->pd->pt.base, ww);
+	}
+}
+
 void i915_address_space_fini(struct i915_address_space *vm)
 {
 	drm_mm_takedown(&vm->mm);
@@ -69,6 +103,7 @@ static void __i915_vm_release(struct work_struct *work)
 
 	vm->cleanup(vm);
 	i915_address_space_fini(vm);
+	dma_resv_fini(&vm->resv);
 
 	kfree(vm);
 }
@@ -98,6 +133,7 @@ void i915_address_space_init(struct i915_address_space *vm, int subclass)
 	mutex_init(&vm->mutex);
 	lockdep_set_subclass(&vm->mutex, subclass);
 	fs_reclaim_taints_mutex(&vm->mutex);
+	dma_resv_init(&vm->resv);
 
 	GEM_BUG_ON(!vm->total);
 	drm_mm_init(&vm->mm, 0, vm->total);
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 784c4372b405..e67e34e17913 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -242,6 +242,7 @@ struct i915_address_space {
 	atomic_t open;
 
 	struct mutex mutex; /* protects vma and our lists */
+	struct dma_resv resv; /* reservation lock for all pd objects, and buffer pool */
 #define VM_CLASS_GGTT 0
 #define VM_CLASS_PPGTT 1
 
@@ -351,6 +352,9 @@ struct i915_ppgtt {
 
 #define i915_is_ggtt(vm) ((vm)->is_ggtt)
 
+int __must_check
+i915_vm_lock_objects(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww);
+
 static inline bool
 i915_vm_is_4lvl(const struct i915_address_space *vm)
 {
@@ -527,6 +531,7 @@ struct i915_page_directory *alloc_pd(struct i915_address_space *vm);
 struct i915_page_directory *__alloc_pd(int npde);
 
 int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj);
+int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj);
 
 void free_px(struct i915_address_space *vm,
 	     struct i915_page_table *pt, int lvl);
diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
index a91955af50a6..014ae8ac4480 100644
--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
@@ -266,7 +266,7 @@ int i915_vm_pin_pt_stash(struct i915_address_space *vm,
 
 	for (n = 0; n < ARRAY_SIZE(stash->pt); n++) {
 		for (pt = stash->pt[n]; pt; pt = pt->stash) {
-			err = pin_pt_dma(vm, pt->base);
+			err = pin_pt_dma_locked(vm, pt->base);
 			if (err)
 				return err;
 		}
@@ -308,6 +308,7 @@ void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt)
 	ppgtt->vm.dma = i915->drm.dev;
 	ppgtt->vm.total = BIT_ULL(INTEL_INFO(i915)->ppgtt_size);
 
+	dma_resv_init(&ppgtt->vm.resv);
 	i915_address_space_init(&ppgtt->vm, VM_CLASS_PPGTT);
 
 	ppgtt->vm.vma_ops.bind_vma    = ppgtt_bind_vma;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 265e3a3079e2..c5b9f30ac0a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -884,6 +884,11 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 		wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm);
 
 	if (flags & vma->vm->bind_async_flags) {
+		/* lock VM */
+		err = i915_vm_lock_objects(vma->vm, ww);
+		if (err)
+			goto err_rpm;
+
 		work = i915_vma_work();
 		if (!work) {
 			err = -ENOMEM;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 41/69] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (39 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 40/69] drm/i915: Use a single page table lock for each gtt Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 42/69] drm/i915/selftests: Prepare client blit " Maarten Lankhorst
                   ` (32 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion; just convert a bunch of calls to the
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/selftests/huge_pages.c   | 28 ++++++++++++++-----
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 515dbc468175..d85ca79ac433 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -585,7 +585,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 			goto out_put;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err)
 			goto out_put;
 
@@ -649,15 +649,19 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 				break;
 		}
 
+		i915_gem_object_lock(obj, NULL);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		i915_gem_object_put(obj);
 	}
 
 	return 0;
 
 out_unpin:
+	i915_gem_object_lock(obj, NULL);
 	i915_gem_object_unpin_pages(obj);
+	i915_gem_object_unlock(obj);
 out_put:
 	i915_gem_object_put(obj);
 
@@ -671,8 +675,10 @@ static void close_object_list(struct list_head *objects,
 
 	list_for_each_entry_safe(obj, on, objects, st_link) {
 		list_del(&obj->st_link);
+		i915_gem_object_lock(obj, NULL);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		i915_gem_object_put(obj);
 	}
 }
@@ -709,7 +715,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
 			break;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			break;
@@ -885,7 +891,7 @@ static int igt_mock_ppgtt_64K(void *arg)
 			if (IS_ERR(obj))
 				return PTR_ERR(obj);
 
-			err = i915_gem_object_pin_pages(obj);
+			err = i915_gem_object_pin_pages_unlocked(obj);
 			if (err)
 				goto out_object_put;
 
@@ -939,8 +945,10 @@ static int igt_mock_ppgtt_64K(void *arg)
 			}
 
 			i915_vma_unpin(vma);
+			i915_gem_object_lock(obj, NULL);
 			i915_gem_object_unpin_pages(obj);
 			__i915_gem_object_put_pages(obj);
+			i915_gem_object_unlock(obj);
 			i915_gem_object_put(obj);
 		}
 	}
@@ -950,7 +958,9 @@ static int igt_mock_ppgtt_64K(void *arg)
 out_vma_unpin:
 	i915_vma_unpin(vma);
 out_object_unpin:
+	i915_gem_object_lock(obj, NULL);
 	i915_gem_object_unpin_pages(obj);
+	i915_gem_object_unlock(obj);
 out_object_put:
 	i915_gem_object_put(obj);
 
@@ -1012,7 +1022,7 @@ static int __cpu_check_vmap(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 	if (err)
 		return err;
 
-	ptr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
 
@@ -1292,7 +1302,7 @@ static int igt_ppgtt_smoke_huge(void *arg)
 			return err;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			if (err == -ENXIO || err == -E2BIG) {
 				i915_gem_object_put(obj);
@@ -1315,8 +1325,10 @@ static int igt_ppgtt_smoke_huge(void *arg)
 			       __func__, size, i);
 		}
 out_unpin:
+		i915_gem_object_lock(obj, NULL);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 out_put:
 		i915_gem_object_put(obj);
 
@@ -1390,7 +1402,7 @@ static int igt_ppgtt_sanity_check(void *arg)
 				return err;
 			}
 
-			err = i915_gem_object_pin_pages(obj);
+			err = i915_gem_object_pin_pages_unlocked(obj);
 			if (err) {
 				i915_gem_object_put(obj);
 				goto out;
@@ -1404,8 +1416,10 @@ static int igt_ppgtt_sanity_check(void *arg)
 
 			err = igt_write_huge(ctx, obj);
 
+			i915_gem_object_lock(obj, NULL);
 			i915_gem_object_unpin_pages(obj);
 			__i915_gem_object_put_pages(obj);
+			i915_gem_object_unlock(obj);
 			i915_gem_object_put(obj);
 
 			if (err) {
@@ -1450,7 +1464,7 @@ static int igt_tmpfs_fallback(void *arg)
 		goto out_restore;
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto out_put;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 42/69] drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (40 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 41/69] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 43/69] drm/i915/selftests: Prepare coherency tests " Maarten Lankhorst
                   ` (31 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion; just convert a bunch of calls to the
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
index 175581724d44..baec7bd1fa53 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
@@ -52,7 +52,7 @@ static int __igt_client_fill(struct intel_engine_cs *engine)
 			goto err_flush;
 		}
 
-		vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put;
@@ -171,7 +171,7 @@ static int prepare_blit(const struct tiled_blits *t,
 	u32 src_pitch, dst_pitch;
 	u32 cmd, *cs;
 
-	cs = i915_gem_object_pin_map(batch, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch, I915_MAP_WC);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
@@ -391,7 +391,7 @@ static int verify_buffer(const struct tiled_blits *t,
 	y = i915_prandom_u32_max_state(t->height, prng);
 	p = y * t->width + x;
 
-	vaddr = i915_gem_object_pin_map(buf->vma->obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(buf->vma->obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -578,7 +578,7 @@ static int tiled_blits_prepare(struct tiled_blits *t,
 	int err;
 	int i;
 
-	map = i915_gem_object_pin_map(t->scratch.vma->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(t->scratch.vma->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 43/69] drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (41 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 42/69] drm/i915/selftests: Prepare client blit " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 44/69] drm/i915/selftests: Prepare context " Maarten Lankhorst
                   ` (30 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion; just convert a bunch of calls to the
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 3eec385d43bb..8f2e447bd503 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -174,7 +174,7 @@ static int wc_set(struct context *ctx, unsigned long offset, u32 v)
 	if (err)
 		return err;
 
-	map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
@@ -201,7 +201,7 @@ static int wc_get(struct context *ctx, unsigned long offset, u32 *v)
 	if (err)
 		return err;
 
-	map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 44/69] drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (42 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 43/69] drm/i915/selftests: Prepare coherency tests " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 45/69] drm/i915/selftests: Prepare dma-buf " Maarten Lankhorst
                   ` (29 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion; just convert a bunch of calls to the
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index df949320f2b5..82d5d37e9b66 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -1092,7 +1092,7 @@ __read_slice_count(struct intel_context *ce,
 	if (ret < 0)
 		return ret;
 
-	buf = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	buf = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(buf)) {
 		ret = PTR_ERR(buf);
 		return ret;
@@ -1509,7 +1509,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out;
@@ -1620,7 +1620,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 		if (err)
 			goto out_vm;
 
-		cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+		cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 		if (IS_ERR(cmd)) {
 			err = PTR_ERR(cmd);
 			goto out;
@@ -1656,7 +1656,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 		if (err)
 			goto out_vm;
 
-		cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+		cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 		if (IS_ERR(cmd)) {
 			err = PTR_ERR(cmd);
 			goto out;
@@ -1715,7 +1715,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	}
 	i915_request_put(rq);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_vm;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 45/69] drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (43 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 44/69] drm/i915/selftests: Prepare context " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 46/69] drm/i915/selftests: Prepare execbuf " Maarten Lankhorst
                   ` (28 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use pin_pages_unlocked() where we don't have a lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index b6d43880b0c1..dd74bc09ec88 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -194,7 +194,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 
 	dma_buf_put(dmabuf);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("i915_gem_object_pin_pages failed with err=%d\n", err);
 		goto out_obj;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 46/69] drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (44 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 45/69] drm/i915/selftests: Prepare dma-buf " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 47/69] drm/i915/selftests: Prepare mman testcases " Maarten Lankhorst
                   ` (27 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Also quite simple: a single call needs to use the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
index e1d50a5a1477..4df505e4c53a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
@@ -116,7 +116,7 @@ static int igt_gpu_reloc(void *arg)
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);
 
-	map = i915_gem_object_pin_map(scratch, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(scratch, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		err = PTR_ERR(map);
 		goto err_scratch;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 47/69] drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (45 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 46/69] drm/i915/selftests: Prepare execbuf " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 48/69] drm/i915/selftests: Prepare object tests " Maarten Lankhorst
                   ` (26 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Ensure we hold the lock around put_pages, and use the unlocked wrappers
for pinning pages and mappings.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 49f17708c143..090e77c2a6dc 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -306,7 +306,7 @@ static int igt_partial_tiling(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
@@ -443,7 +443,7 @@ static int igt_smoke_tiling(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
@@ -782,7 +782,7 @@ static int wc_set(struct drm_i915_gem_object *obj)
 {
 	void *vaddr;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -798,7 +798,7 @@ static int wc_check(struct drm_i915_gem_object *obj)
 	void *vaddr;
 	int err = 0;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -1300,7 +1300,9 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
 	}
 
 	if (type != I915_MMAP_TYPE_GTT) {
+		i915_gem_object_lock(obj, NULL);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		if (i915_gem_object_has_pages(obj)) {
 			pr_err("Failed to put-pages object!\n");
 			err = -EINVAL;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 48/69] drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (46 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 47/69] drm/i915/selftests: Prepare mman testcases " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 49/69] drm/i915/selftests: Prepare object blit " Maarten Lankhorst
                   ` (25 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Convert a single pin_pages call to use the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
index bf853c40ec65..740ee8086a27 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
@@ -47,7 +47,7 @@ static int igt_gem_huge(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 49/69] drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (47 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 48/69] drm/i915/selftests: Prepare object tests " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 50/69] drm/i915/selftests: Prepare igt_gem_utils " Maarten Lankhorst
                   ` (24 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use the unlocked versions where we're not holding the ww lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
index c4c04fb97d14..8c335d1a8406 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
@@ -262,7 +262,7 @@ static int igt_fill_blt_thread(void *arg)
 			goto err_flush;
 		}
 
-		vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put;
@@ -380,7 +380,7 @@ static int igt_copy_blt_thread(void *arg)
 			goto err_flush;
 		}
 
-		vaddr = i915_gem_object_pin_map(src, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(src, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put_src;
@@ -400,7 +400,7 @@ static int igt_copy_blt_thread(void *arg)
 			goto err_put_src;
 		}
 
-		vaddr = i915_gem_object_pin_map(dst, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(dst, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put_dst;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 50/69] drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (48 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 49/69] drm/i915/selftests: Prepare object blit " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 51/69] drm/i915/selftests: Prepare context selftest " Maarten Lankhorst
                   ` (23 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

igt_emit_store_dw needs to use the unlocked version, as it's not
holding a lock. This fixes igt_gpu_fill_dw(), which is used by
some other selftests.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
index b7e064667d39..ba8c06778b6c 100644
--- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
+++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
@@ -56,7 +56,7 @@ igt_emit_store_dw(struct i915_vma *vma,
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 51/69] drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (49 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 50/69] drm/i915/selftests: Prepare igt_gem_utils " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 52/69] drm/i915/selftests: Prepare hangcheck " Maarten Lankhorst
                   ` (22 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Only a single call needs to be converted to the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_context.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
index a02fd70644e2..b9bdd1d23243 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -87,8 +87,8 @@ static int __live_context_size(struct intel_engine_cs *engine)
 	if (err)
 		goto err;
 
-	vaddr = i915_gem_object_pin_map(ce->state->obj,
-					i915_coherent_map_type(engine->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(ce->state->obj,
+						 i915_coherent_map_type(engine->i915));
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		intel_context_unpin(ce);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 52/69] drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (50 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 51/69] drm/i915/selftests: Prepare context selftest " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 53/69] drm/i915/selftests: Prepare execlists and lrc selftests " Maarten Lankhorst
                   ` (21 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Convert a few calls to use the unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index cdb0ceff3be1..608a67f9c631 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -61,15 +61,15 @@ static int hang_init(struct hang *h, struct intel_gt *gt)
 	}
 
 	i915_gem_object_set_cache_coherency(h->hws, I915_CACHE_LLC);
-	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(h->hws, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
 	}
 	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
 
-	vaddr = i915_gem_object_pin_map(h->obj,
-					i915_coherent_map_type(gt->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(h->obj,
+						 i915_coherent_map_type(gt->i915));
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_unpin_hws;
@@ -130,7 +130,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
 		return ERR_CAST(obj);
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, i915_coherent_map_type(gt->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(obj, i915_coherent_map_type(gt->i915));
 	if (IS_ERR(vaddr)) {
 		i915_gem_object_put(obj);
 		i915_vm_put(vm);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 53/69] drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (51 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 52/69] drm/i915/selftests: Prepare hangcheck " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 54/69] drm/i915/selftests: Prepare mocs tests " Maarten Lankhorst
                   ` (20 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Convert the pin_map calls to the unlocked versions where needed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_execlists.c | 18 +++++++++---------
 drivers/gpu/drm/i915/gt/selftest_lrc.c       | 16 ++++++++--------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index a6e77a161b70..e97825447dca 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -982,7 +982,7 @@ static int live_timeslice_preempt(void *arg)
 		goto err_obj;
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
@@ -1289,7 +1289,7 @@ static int live_timeslice_queue(void *arg)
 		goto err_obj;
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
@@ -1531,7 +1531,7 @@ static int live_busywait_preempt(void *arg)
 		goto err_ctx_lo;
 	}
 
-	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		err = PTR_ERR(map);
 		goto err_obj;
@@ -2691,7 +2691,7 @@ static int create_gang(struct intel_engine_cs *engine,
 	if (err)
 		goto err_obj;
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_obj;
@@ -2970,7 +2970,7 @@ static int live_preempt_gang(void *arg)
 		 * it will terminate the next lowest spinner until there
 		 * are no more spinners and the gang is complete.
 		 */
-		cs = i915_gem_object_pin_map(rq->batch->obj, I915_MAP_WC);
+		cs = i915_gem_object_pin_map_unlocked(rq->batch->obj, I915_MAP_WC);
 		if (!IS_ERR(cs)) {
 			*cs = 0;
 			i915_gem_object_unpin_map(rq->batch->obj);
@@ -3035,7 +3035,7 @@ create_gpr_user(struct intel_engine_cs *engine,
 		return ERR_PTR(err);
 	}
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(vma);
 		return ERR_CAST(cs);
@@ -3239,7 +3239,7 @@ static int live_preempt_user(void *arg)
 	if (IS_ERR(global))
 		return PTR_ERR(global);
 
-	result = i915_gem_object_pin_map(global->obj, I915_MAP_WC);
+	result = i915_gem_object_pin_map_unlocked(global->obj, I915_MAP_WC);
 	if (IS_ERR(result)) {
 		i915_vma_unpin_and_release(&global, 0);
 		return PTR_ERR(result);
@@ -3626,7 +3626,7 @@ static int live_preempt_smoke(void *arg)
 		goto err_free;
 	}
 
-	cs = i915_gem_object_pin_map(smoke.batch, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(smoke.batch, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_batch;
@@ -4231,7 +4231,7 @@ static int preserved_virtual_engine(struct intel_gt *gt,
 		goto out_end;
 	}
 
-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto out_end;
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 1f7a120606e6..5726943d7ff0 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -627,7 +627,7 @@ static int __live_lrc_gpr(struct intel_engine_cs *engine,
 		goto err_rq;
 	}
 
-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_rq;
@@ -923,7 +923,7 @@ store_context(struct intel_context *ce,
 	if (IS_ERR(batch))
 		return batch;
 
-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(batch);
 		return ERR_CAST(cs);
@@ -1138,7 +1138,7 @@ load_context(struct intel_context *ce,
 	if (IS_ERR(batch))
 		return batch;
 
-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(batch);
 		return ERR_CAST(cs);
@@ -1277,29 +1277,29 @@ static int compare_isolation(struct intel_engine_cs *engine,
 	u32 *defaults;
 	int err = 0;
 
-	A[0] = i915_gem_object_pin_map(ref[0]->obj, I915_MAP_WC);
+	A[0] = i915_gem_object_pin_map_unlocked(ref[0]->obj, I915_MAP_WC);
 	if (IS_ERR(A[0]))
 		return PTR_ERR(A[0]);
 
-	A[1] = i915_gem_object_pin_map(ref[1]->obj, I915_MAP_WC);
+	A[1] = i915_gem_object_pin_map_unlocked(ref[1]->obj, I915_MAP_WC);
 	if (IS_ERR(A[1])) {
 		err = PTR_ERR(A[1]);
 		goto err_A0;
 	}
 
-	B[0] = i915_gem_object_pin_map(result[0]->obj, I915_MAP_WC);
+	B[0] = i915_gem_object_pin_map_unlocked(result[0]->obj, I915_MAP_WC);
 	if (IS_ERR(B[0])) {
 		err = PTR_ERR(B[0]);
 		goto err_A1;
 	}
 
-	B[1] = i915_gem_object_pin_map(result[1]->obj, I915_MAP_WC);
+	B[1] = i915_gem_object_pin_map_unlocked(result[1]->obj, I915_MAP_WC);
 	if (IS_ERR(B[1])) {
 		err = PTR_ERR(B[1]);
 		goto err_B0;
 	}
 
-	lrc = i915_gem_object_pin_map(ce->state->obj,
+	lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
 				      i915_coherent_map_type(engine->i915));
 	if (IS_ERR(lrc)) {
 		err = PTR_ERR(lrc);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 54/69] drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (52 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 53/69] drm/i915/selftests: Prepare execlists and lrc selftests " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 55/69] drm/i915/selftests: Prepare ring submission " Maarten Lankhorst
                   ` (19 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use pin_map_unlocked when we're not holding locks.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_mocs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index 01dd050d4161..e55a887d11e2 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -79,7 +79,7 @@ static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
 	if (IS_ERR(arg->scratch))
 		return PTR_ERR(arg->scratch);
 
-	arg->vaddr = i915_gem_object_pin_map(arg->scratch->obj, I915_MAP_WB);
+	arg->vaddr = i915_gem_object_pin_map_unlocked(arg->scratch->obj, I915_MAP_WB);
 	if (IS_ERR(arg->vaddr)) {
 		err = PTR_ERR(arg->vaddr);
 		goto err_scratch;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 55/69] drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (53 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 54/69] drm/i915/selftests: Prepare mocs tests " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 56/69] drm/i915/selftests: Prepare timeline tests " Maarten Lankhorst
                   ` (18 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use unlocked versions when the ww lock is not held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_ring_submission.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_ring_submission.c b/drivers/gpu/drm/i915/gt/selftest_ring_submission.c
index 6cd9f6bc240c..c12e74171b63 100644
--- a/drivers/gpu/drm/i915/gt/selftest_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/selftest_ring_submission.c
@@ -35,7 +35,7 @@ static struct i915_vma *create_wally(struct intel_engine_cs *engine)
 		return ERR_PTR(err);
 	}
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_gem_object_put(obj);
 		return ERR_CAST(cs);
@@ -212,7 +212,7 @@ static int __live_ctx_switch_wa(struct intel_engine_cs *engine)
 	if (IS_ERR(bb))
 		return PTR_ERR(bb);
 
-	result = i915_gem_object_pin_map(bb->obj, I915_MAP_WC);
+	result = i915_gem_object_pin_map_unlocked(bb->obj, I915_MAP_WC);
 	if (IS_ERR(result)) {
 		intel_context_put(bb->private);
 		i915_vma_unpin_and_release(&bb, 0);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 56/69] drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (54 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 55/69] drm/i915/selftests: Prepare ring submission " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 57/69] drm/i915/selftests: Prepare i915_request " Maarten Lankhorst
                   ` (17 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We can no longer call intel_timeline_pin with a null argument,
so add a ww loop that locks the backing object.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_timeline.c | 30 +++++++++++++++++----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index 31b492eb2982..d20f9301a459 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -37,6 +37,26 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
 	return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;
 }
 
+static int selftest_tl_pin(struct intel_timeline *tl)
+{
+	struct i915_gem_ww_ctx ww;
+	int err;
+
+	i915_gem_ww_ctx_init(&ww, false);
+retry:
+	err = i915_gem_object_lock(tl->hwsp_ggtt->obj, &ww);
+	if (!err)
+		err = intel_timeline_pin(tl, &ww);
+
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	return err;
+}
+
 #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
 
 struct mock_hwsp_freelist {
@@ -78,7 +98,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
 		if (IS_ERR(tl))
 			return PTR_ERR(tl);
 
-		err = intel_timeline_pin(tl, NULL);
+		err = selftest_tl_pin(tl);
 		if (err) {
 			intel_timeline_put(tl);
 			return err;
@@ -464,7 +484,7 @@ checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32
 	struct i915_request *rq;
 	int err;
 
-	err = intel_timeline_pin(tl, NULL);
+	err = selftest_tl_pin(tl);
 	if (err) {
 		rq = ERR_PTR(err);
 		goto out;
@@ -664,7 +684,7 @@ static int live_hwsp_wrap(void *arg)
 	if (!tl->has_initial_breadcrumb)
 		goto out_free;
 
-	err = intel_timeline_pin(tl, NULL);
+	err = selftest_tl_pin(tl);
 	if (err)
 		goto out_free;
 
@@ -811,13 +831,13 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	w->map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	w->map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(w->map)) {
 		i915_gem_object_put(obj);
 		return PTR_ERR(w->map);
 	}
 
-	vma = i915_gem_object_ggtt_pin_ww(obj, NULL, NULL, 0, 0, 0);
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
 	if (IS_ERR(vma)) {
 		i915_gem_object_put(obj);
 		return PTR_ERR(vma);
-- 
2.30.1


* [Intel-gfx] [PATCH v8 57/69] drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (55 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 56/69] drm/i915/selftests: Prepare timeline tests " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 58/69] drm/i915/selftests: Prepare memory region " Maarten Lankhorst
                   ` (16 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

A straightforward conversion, using the unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_request.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index 8035ea7565ed..a27cc504f839 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -619,7 +619,7 @@ static struct i915_vma *empty_batch(struct drm_i915_private *i915)
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;
@@ -781,7 +781,7 @@ static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
 	if (err)
 		goto err;
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;
@@ -816,7 +816,7 @@ static int recursive_batch_resolve(struct i915_vma *batch)
 {
 	u32 *cmd;
 
-	cmd = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cmd))
 		return PTR_ERR(cmd);
 
@@ -1069,8 +1069,8 @@ static int live_sequential_engines(void *arg)
 		if (!request[idx])
 			break;
 
-		cmd = i915_gem_object_pin_map(request[idx]->batch->obj,
-					      I915_MAP_WC);
+		cmd = i915_gem_object_pin_map_unlocked(request[idx]->batch->obj,
+						       I915_MAP_WC);
 		if (!IS_ERR(cmd)) {
 			*cmd = MI_BATCH_BUFFER_END;
 
-- 
2.30.1


* [Intel-gfx] [PATCH v8 58/69] drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (56 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 57/69] drm/i915/selftests: Prepare i915_request " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 59/69] drm/i915/selftests: Prepare cs engine " Maarten Lankhorst
                   ` (15 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use the unlocked variants for pin_map and pin_pages, and take the
object lock around unpinning/putting pages.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../drm/i915/selftests/intel_memory_region.c   | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 3e583139f767..759ac7e26e13 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -31,10 +31,12 @@ static void close_objects(struct intel_memory_region *mem,
 	struct drm_i915_gem_object *obj, *on;
 
 	list_for_each_entry_safe(obj, on, objects, st_link) {
+		i915_gem_object_lock(obj, NULL);
 		if (i915_gem_object_has_pinned_pages(obj))
 			i915_gem_object_unpin_pages(obj);
 		/* No polluting the memory region between tests */
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		list_del(&obj->st_link);
 		i915_gem_object_put(obj);
 	}
@@ -69,7 +71,7 @@ static int igt_mock_fill(void *arg)
 			break;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			break;
@@ -109,7 +111,7 @@ igt_object_create(struct intel_memory_region *mem,
 	if (IS_ERR(obj))
 		return obj;
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto put;
 
@@ -123,8 +125,10 @@ igt_object_create(struct intel_memory_region *mem,
 
 static void igt_object_release(struct drm_i915_gem_object *obj)
 {
+	i915_gem_object_lock(obj, NULL);
 	i915_gem_object_unpin_pages(obj);
 	__i915_gem_object_put_pages(obj);
+	i915_gem_object_unlock(obj);
 	list_del(&obj->st_link);
 	i915_gem_object_put(obj);
 }
@@ -509,7 +513,7 @@ static int igt_cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 	if (err)
 		return err;
 
-	ptr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
 
@@ -614,7 +618,7 @@ static int igt_lmem_create(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto out_put;
 
@@ -653,7 +657,7 @@ static int igt_lmem_write_gpu(void *arg)
 		goto out_file;
 	}
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto out_put;
 
@@ -725,7 +729,7 @@ static int igt_lmem_write_cpu(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto out_put;
@@ -828,7 +832,7 @@ create_region_for_mapping(struct intel_memory_region *mr, u64 size, u32 type,
 		return obj;
 	}
 
-	addr = i915_gem_object_pin_map(obj, type);
+	addr = i915_gem_object_pin_map_unlocked(obj, type);
 	if (IS_ERR(addr)) {
 		i915_gem_object_put(obj);
 		if (PTR_ERR(addr) == -ENXIO)
-- 
2.30.1


* [Intel-gfx] [PATCH v8 59/69] drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (57 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 58/69] drm/i915/selftests: Prepare memory region " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 60/69] drm/i915/selftests: Prepare gtt " Maarten Lankhorst
                   ` (14 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Same as the other tests: use pin_map_unlocked.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
index 7e466ae114f8..b32814a1f20b 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
@@ -75,7 +75,7 @@ static struct i915_vma *create_empty_batch(struct intel_context *ce)
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_put;
@@ -211,7 +211,7 @@ static struct i915_vma *create_nop_batch(struct intel_context *ce)
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_put;
-- 
2.30.1


* [Intel-gfx] [PATCH v8 60/69] drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (58 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 59/69] drm/i915/selftests: Prepare cs engine " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 61/69] drm/i915: Finally remove obj->mm.lock Maarten Lankhorst
                   ` (13 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to lock the global gtt dma_resv; use i915_vm_lock_objects
to handle this correctly, and add ww handling where required.

Add the object lock around unpin/put pages, and use the unlocked
versions of pin_pages and pin_map where required.
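
All three hunks below follow the same ww retry idiom; schematically,
with do_pt_stash_work() as a hypothetical stand-in for the allocation
done under the lock:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, false);
retry:
	/* Lock the objects backing the vm's page tables in this context. */
	err = i915_vm_lock_objects(vm, &ww);
	if (!err)
		err = do_pt_stash_work(vm); /* hypothetical: alloc/pin pt stash */
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);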

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 92 ++++++++++++++-----
 1 file changed, 67 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 5be6dcf4357e..2e4f06eaacc1 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -130,7 +130,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size)
 	obj->cache_level = I915_CACHE_NONE;
 
 	/* Preallocate the "backing storage" */
-	if (i915_gem_object_pin_pages(obj))
+	if (i915_gem_object_pin_pages_unlocked(obj))
 		goto err_obj;
 
 	i915_gem_object_unpin_pages(obj);
@@ -146,6 +146,7 @@ static int igt_ppgtt_alloc(void *arg)
 {
 	struct drm_i915_private *dev_priv = arg;
 	struct i915_ppgtt *ppgtt;
+	struct i915_gem_ww_ctx ww;
 	u64 size, last, limit;
 	int err = 0;
 
@@ -171,6 +172,12 @@ static int igt_ppgtt_alloc(void *arg)
 	limit = totalram_pages() << PAGE_SHIFT;
 	limit = min(ppgtt->vm.total, limit);
 
+	i915_gem_ww_ctx_init(&ww, false);
+retry:
+	err = i915_vm_lock_objects(&ppgtt->vm, &ww);
+	if (err)
+		goto err_ppgtt_cleanup;
+
 	/* Check we can allocate the entire range */
 	for (size = 4096; size <= limit; size <<= 2) {
 		struct i915_vm_pt_stash stash = {};
@@ -215,6 +222,13 @@ static int igt_ppgtt_alloc(void *arg)
 	}
 
 err_ppgtt_cleanup:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
 	i915_vm_put(&ppgtt->vm);
 	return err;
 }
@@ -276,7 +290,7 @@ static int lowlevel_hole(struct i915_address_space *vm,
 
 		GEM_BUG_ON(obj->base.size != BIT_ULL(size));
 
-		if (i915_gem_object_pin_pages(obj)) {
+		if (i915_gem_object_pin_pages_unlocked(obj)) {
 			i915_gem_object_put(obj);
 			kfree(order);
 			break;
@@ -297,20 +311,36 @@ static int lowlevel_hole(struct i915_address_space *vm,
 
 			if (vm->allocate_va_range) {
 				struct i915_vm_pt_stash stash = {};
+				struct i915_gem_ww_ctx ww;
+				int err;
+
+				i915_gem_ww_ctx_init(&ww, false);
+retry:
+				err = i915_vm_lock_objects(vm, &ww);
+				if (err)
+					goto alloc_vm_end;
 
+				err = -ENOMEM;
 				if (i915_vm_alloc_pt_stash(vm, &stash,
 							   BIT_ULL(size)))
-					break;
-
-				if (i915_vm_pin_pt_stash(vm, &stash)) {
-					i915_vm_free_pt_stash(vm, &stash);
-					break;
-				}
+					goto alloc_vm_end;
 
-				vm->allocate_va_range(vm, &stash,
-						      addr, BIT_ULL(size));
+				err = i915_vm_pin_pt_stash(vm, &stash);
+				if (!err)
+					vm->allocate_va_range(vm, &stash,
+							      addr, BIT_ULL(size));
 
 				i915_vm_free_pt_stash(vm, &stash);
+alloc_vm_end:
+				if (err == -EDEADLK) {
+					err = i915_gem_ww_ctx_backoff(&ww);
+					if (!err)
+						goto retry;
+				}
+				i915_gem_ww_ctx_fini(&ww);
+
+				if (err)
+					break;
 			}
 
 			mock_vma->pages = obj->mm.pages;
@@ -1166,7 +1196,7 @@ static int igt_ggtt_page(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto out_free;
 
@@ -1333,7 +1363,7 @@ static int igt_gtt_reserve(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1385,7 +1415,7 @@ static int igt_gtt_reserve(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1549,7 +1579,7 @@ static int igt_gtt_insert(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1658,7 +1688,7 @@ static int igt_gtt_insert(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1829,7 +1859,7 @@ static int igt_cs_tlb(void *arg)
 		goto out_vm;
 	}
 
-	batch = i915_gem_object_pin_map(bbe, I915_MAP_WC);
+	batch = i915_gem_object_pin_map_unlocked(bbe, I915_MAP_WC);
 	if (IS_ERR(batch)) {
 		err = PTR_ERR(batch);
 		goto out_put_bbe;
@@ -1845,7 +1875,7 @@ static int igt_cs_tlb(void *arg)
 	}
 
 	/* Track the execution of each request by writing into different slot */
-	batch = i915_gem_object_pin_map(act, I915_MAP_WC);
+	batch = i915_gem_object_pin_map_unlocked(act, I915_MAP_WC);
 	if (IS_ERR(batch)) {
 		err = PTR_ERR(batch);
 		goto out_put_act;
@@ -1892,7 +1922,7 @@ static int igt_cs_tlb(void *arg)
 		goto out_put_out;
 	GEM_BUG_ON(vma->node.start != vm->total - PAGE_SIZE);
 
-	result = i915_gem_object_pin_map(out, I915_MAP_WB);
+	result = i915_gem_object_pin_map_unlocked(out, I915_MAP_WB);
 	if (IS_ERR(result)) {
 		err = PTR_ERR(result);
 		goto out_put_out;
@@ -1908,6 +1938,7 @@ static int igt_cs_tlb(void *arg)
 		while (!__igt_timeout(end_time, NULL)) {
 			struct i915_vm_pt_stash stash = {};
 			struct i915_request *rq;
+			struct i915_gem_ww_ctx ww;
 			u64 offset;
 
 			offset = igt_random_offset(&prng,
@@ -1926,19 +1957,30 @@ static int igt_cs_tlb(void *arg)
 			if (err)
 				goto end;
 
+			i915_gem_ww_ctx_init(&ww, false);
+retry:
+			err = i915_vm_lock_objects(vm, &ww);
+			if (err)
+				goto end_ww;
+
 			err = i915_vm_alloc_pt_stash(vm, &stash, chunk_size);
 			if (err)
-				goto end;
+				goto end_ww;
 
 			err = i915_vm_pin_pt_stash(vm, &stash);
-			if (err) {
-				i915_vm_free_pt_stash(vm, &stash);
-				goto end;
-			}
-
-			vm->allocate_va_range(vm, &stash, offset, chunk_size);
+			if (!err)
+				vm->allocate_va_range(vm, &stash, offset, chunk_size);
 
 			i915_vm_free_pt_stash(vm, &stash);
+end_ww:
+			if (err == -EDEADLK) {
+				err = i915_gem_ww_ctx_backoff(&ww);
+				if (!err)
+					goto retry;
+			}
+			i915_gem_ww_ctx_fini(&ww);
+			if (err)
+				goto end;
 
 			/* Prime the TLB with the dummy pages */
 			for (i = 0; i < count; i++) {
-- 
2.30.1


* [Intel-gfx] [PATCH v8 61/69] drm/i915: Finally remove obj->mm.lock.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (59 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 60/69] drm/i915/selftests: Prepare gtt " Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 62/69] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
                   ` (12 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

With all callers and selftests fixed to use ww locking, we can now
finally remove this lock.
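
After this, the pin interfaces assert that the dma-resv lock is held
instead of taking obj->mm.lock themselves. A caller that relied on the
implicit locking now does, roughly:

	/* Single object: a plain dma-resv lock, no ww transaction needed. */
	i915_gem_object_lock(obj, NULL);
	err = i915_gem_object_pin_pages(obj);
	i915_gem_object_unlock(obj);

or uses the _unlocked wrappers, or a full ww context when multiple
objects are involved. i915_gem_shrink() also gains a ww argument so
that, when called from inside a ww transaction, it can lock victim
objects with the caller's acquire context instead of trylocking them.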

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  2 -
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  5 +--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 43 ++++---------------
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 34 ++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_pm.c        |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  | 37 +++++++++++-----
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.h  |  4 +-
 drivers/gpu/drm/i915/gem/i915_gem_tiling.c    |  2 -
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  3 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  4 +-
 drivers/gpu/drm/i915/i915_gem.c               |  6 ---
 drivers/gpu/drm/i915/i915_gem_gtt.c           |  2 +-
 14 files changed, 55 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 821cb40f8d73..ea74cbca95be 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -62,8 +62,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
 			  struct lock_class_key *key, unsigned flags)
 {
-	mutex_init(&obj->mm.lock);
-
 	spin_lock_init(&obj->vma.lock);
 	INIT_LIST_HEAD(&obj->vma.list);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 5fffa6f07560..7a252dc4237f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -155,7 +155,7 @@ static inline void assert_object_held_shared(struct drm_i915_gem_object *obj)
 	 */
 	if (IS_ENABLED(CONFIG_LOCKDEP) &&
 	    kref_read(&obj->base.refcount) > 0)
-		lockdep_assert_held(&obj->mm.lock);
+		assert_object_held(obj);
 }
 
 static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj,
@@ -384,7 +384,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 static inline int __must_check
 i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	might_lock(&obj->mm.lock);
+	assert_object_held(obj);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
@@ -430,7 +430,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 }
 
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
-int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 4c0a34231623..a5bc42c7087a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -222,7 +222,6 @@ struct drm_i915_gem_object {
 		 * Protects the pages and their use. Do not use directly, but
 		 * instead go through the pin/unpin interfaces.
 		 */
-		struct mutex lock;
 		atomic_t pages_pin_count;
 		atomic_t shrink_pin;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 5b8af8f83ee3..aed8a37ccdc9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -70,7 +70,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		struct list_head *list;
 		unsigned long flags;
 
-		lockdep_assert_held(&obj->mm.lock);
+		assert_object_held(obj);
 		spin_lock_irqsave(&i915->mm.obj_lock, flags);
 
 		i915->mm.shrink_count++;
@@ -117,9 +117,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
 	int err;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		return err;
+	assert_object_held(obj);
 
 	assert_object_held_shared(obj);
 
@@ -128,15 +126,13 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 
 		err = ____i915_gem_object_get_pages(obj);
 		if (err)
-			goto unlock;
+			return err;
 
 		smp_mb__before_atomic();
 	}
 	atomic_inc(&obj->mm.pages_pin_count);
 
-unlock:
-	mutex_unlock(&obj->mm.lock);
-	return err;
+	return 0;
 }
 
 int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
@@ -223,7 +219,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	return pages;
 }
 
-int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
+int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
 
@@ -254,21 +250,6 @@ int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
-{
-	int err;
-
-	if (i915_gem_object_has_pinned_pages(obj))
-		return -EBUSY;
-
-	/* May be called by shrinker from within get_pages() (on another bo) */
-	mutex_lock(&obj->mm.lock);
-	err = __i915_gem_object_put_pages_locked(obj);
-	mutex_unlock(&obj->mm.lock);
-
-	return err;
-}
-
 /* The 'mapping' part of i915_gem_object_pin_map() below */
 static void *i915_gem_object_map_page(struct drm_i915_gem_object *obj,
 				      enum i915_map_type type)
@@ -371,9 +352,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return ERR_PTR(-ENXIO);
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		return ERR_PTR(err);
+	assert_object_held(obj);
 
 	pinned = !(type & I915_MAP_OVERRIDE);
 	type &= ~I915_MAP_OVERRIDE;
@@ -383,10 +362,8 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 			GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
 			err = ____i915_gem_object_get_pages(obj);
-			if (err) {
-				ptr = ERR_PTR(err);
-				goto out_unlock;
-			}
+			if (err)
+				return ERR_PTR(err);
 
 			smp_mb__before_atomic();
 		}
@@ -421,13 +398,11 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 		obj->mm.mapping = page_pack_bits(ptr, type);
 	}
 
-out_unlock:
-	mutex_unlock(&obj->mm.lock);
 	return ptr;
 
 err_unpin:
 	atomic_dec(&obj->mm.pages_pin_count);
-	goto out_unlock;
+	return ptr;
 }
 
 void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 92297362fad8..81dc2bf59bc3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -234,40 +234,22 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		return err;
-
-	if (unlikely(!i915_gem_object_has_struct_page(obj)))
-		goto out;
-
-	if (obj->mm.madv != I915_MADV_WILLNEED) {
-		err = -EFAULT;
-		goto out;
-	}
+	if (obj->mm.madv != I915_MADV_WILLNEED)
+		return -EFAULT;
 
-	if (i915_gem_object_has_tiling_quirk(obj)) {
-		err = -EFAULT;
-		goto out;
-	}
+	if (i915_gem_object_has_tiling_quirk(obj))
+		return -EFAULT;
 
-	if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) {
-		err = -EBUSY;
-		goto out;
-	}
+	if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj))
+		return -EBUSY;
 
 	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
 		drm_dbg(obj->base.dev,
 			"Attempting to obtain a purgeable object\n");
-		err = -EFAULT;
-		goto out;
+		return -EFAULT;
 	}
 
-	err = i915_gem_object_shmem_to_phys(obj);
-
-out:
-	mutex_unlock(&obj->mm.lock);
-	return err;
+	return i915_gem_object_shmem_to_phys(obj);
 }
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 000e1cd8e920..8b9d7d14c4bd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -116,7 +116,7 @@ int i915_gem_freeze_late(struct drm_i915_private *i915)
 	 */
 
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-		i915_gem_shrink(i915, -1UL, NULL, ~0);
+		i915_gem_shrink(NULL, i915, -1UL, NULL, ~0);
 	i915_gem_drain_freed_objects(i915);
 
 	wbinvd_on_all_cpus();
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 59fb16a82270..a9bfa66c8da1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -99,7 +99,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
 				goto err_sg;
 			}
 
-			i915_gem_shrink(i915, 2 * page_count, NULL, *s++);
+			i915_gem_shrink(NULL, i915, 2 * page_count, NULL, *s++);
 
 			/*
 			 * We've tried hard to allocate the memory by reaping
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 3052ef5ad89d..a82e10fb58e3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -94,7 +94,8 @@ static void try_to_writeback(struct drm_i915_gem_object *obj,
  * The number of pages of backing storage actually released.
  */
 unsigned long
-i915_gem_shrink(struct drm_i915_private *i915,
+i915_gem_shrink(struct i915_gem_ww_ctx *ww,
+		struct drm_i915_private *i915,
 		unsigned long target,
 		unsigned long *nr_scanned,
 		unsigned int shrink)
@@ -113,6 +114,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
 	intel_wakeref_t wakeref = 0;
 	unsigned long count = 0;
 	unsigned long scanned = 0;
+	int err;
 
 	trace_i915_gem_shrink(i915, target, shrink);
 
@@ -200,25 +202,40 @@ i915_gem_shrink(struct drm_i915_private *i915,
 
 			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 
-			if (unsafe_drop_pages(obj, shrink) &&
-			    mutex_trylock(&obj->mm.lock)) {
+			err = 0;
+			if (unsafe_drop_pages(obj, shrink)) {
 				/* May arrive from get_pages on another bo */
-				if (!__i915_gem_object_put_pages_locked(obj)) {
+				if (!ww) {
+					if (!i915_gem_object_trylock(obj))
+						goto skip;
+				} else {
+					err = i915_gem_object_lock(obj, ww);
+					if (err)
+						goto skip;
+				}
+
+				if (!__i915_gem_object_put_pages(obj)) {
 					try_to_writeback(obj, shrink);
 					count += obj->base.size >> PAGE_SHIFT;
 				}
-				mutex_unlock(&obj->mm.lock);
+				if (!ww)
+					i915_gem_object_unlock(obj);
 			}
 
 			dma_resv_prune(obj->base.resv);
 
 			scanned += obj->base.size >> PAGE_SHIFT;
+skip:
 			i915_gem_object_put(obj);
 
 			spin_lock_irqsave(&i915->mm.obj_lock, flags);
+			if (err)
+				break;
 		}
 		list_splice_tail(&still_in_list, phase->list);
 		spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+		if (err)
+			return err;
 	}
 
 	if (shrink & I915_SHRINK_BOUND)
@@ -249,7 +266,7 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *i915)
 	unsigned long freed = 0;
 
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-		freed = i915_gem_shrink(i915, -1UL, NULL,
+		freed = i915_gem_shrink(NULL, i915, -1UL, NULL,
 					I915_SHRINK_BOUND |
 					I915_SHRINK_UNBOUND);
 	}
@@ -295,7 +312,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 
 	sc->nr_scanned = 0;
 
-	freed = i915_gem_shrink(i915,
+	freed = i915_gem_shrink(NULL, i915,
 				sc->nr_to_scan,
 				&sc->nr_scanned,
 				I915_SHRINK_BOUND |
@@ -304,7 +321,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		intel_wakeref_t wakeref;
 
 		with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-			freed += i915_gem_shrink(i915,
+			freed += i915_gem_shrink(NULL, i915,
 						 sc->nr_to_scan - sc->nr_scanned,
 						 &sc->nr_scanned,
 						 I915_SHRINK_ACTIVE |
@@ -329,7 +346,7 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 
 	freed_pages = 0;
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-		freed_pages += i915_gem_shrink(i915, -1UL, NULL,
+		freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL,
 					       I915_SHRINK_BOUND |
 					       I915_SHRINK_UNBOUND |
 					       I915_SHRINK_WRITEBACK);
@@ -367,7 +384,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
 	intel_wakeref_t wakeref;
 
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-		freed_pages += i915_gem_shrink(i915, -1UL, NULL,
+		freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL,
 					       I915_SHRINK_BOUND |
 					       I915_SHRINK_UNBOUND |
 					       I915_SHRINK_VMAPS);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
index a25754a51ac3..17ad82ea961f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
@@ -9,10 +9,12 @@
 #include <linux/bits.h>
 
 struct drm_i915_private;
+struct i915_gem_ww_ctx;
 struct mutex;
 
 /* i915_gem_shrinker.c */
-unsigned long i915_gem_shrink(struct drm_i915_private *i915,
+unsigned long i915_gem_shrink(struct i915_gem_ww_ctx *ww,
+			      struct drm_i915_private *i915,
 			      unsigned long target,
 			      unsigned long *nr_scanned,
 			      unsigned flags);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
index d589d3d81085..9e8945013090 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
@@ -265,7 +265,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
 	 * pages to prevent them being swapped out and causing corruption
 	 * due to the change in swizzling.
 	 */
-	mutex_lock(&obj->mm.lock);
 	if (i915_gem_object_has_pages(obj) &&
 	    obj->mm.madv == I915_MADV_WILLNEED &&
 	    i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -280,7 +279,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
 			i915_gem_object_set_tiling_quirk(obj);
 		}
 	}
-	mutex_unlock(&obj->mm.lock);
 
 	spin_lock(&obj->vma.lock);
 	for_each_ggtt_vma(vma, obj) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 503325e74eff..3babecd14b47 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -253,7 +253,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
 		return -EBUSY;
 
-	mutex_lock(&obj->mm.lock);
+	assert_object_held(obj);
 
 	pages = __i915_gem_object_unset_pages(obj);
 	if (!IS_ERR_OR_NULL(pages))
@@ -261,7 +261,6 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 
 	if (get_pages)
 		err = ____i915_gem_object_get_pages(obj);
-	mutex_unlock(&obj->mm.lock);
 
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 51133b8fabb4..b00c828f90a7 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -904,10 +904,10 @@ i915_drop_caches_set(void *data, u64 val)
 
 	fs_reclaim_acquire(GFP_KERNEL);
 	if (val & DROP_BOUND)
-		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
+		i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
 
 	if (val & DROP_UNBOUND)
-		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
+		i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
 
 	if (val & DROP_SHRINK_ALL)
 		i915_gem_shrink_all(i915);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7f6165816872..81c7f2b5a585 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -963,10 +963,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	if (err)
 		goto out;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		goto out_ww;
-
 	if (i915_gem_object_has_pages(obj) &&
 	    i915_gem_object_is_tiled(obj) &&
 	    i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -1009,9 +1005,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 		i915_gem_object_truncate(obj);
 
 	args->retained = obj->mm.madv != __I915_MADV_PURGED;
-	mutex_unlock(&obj->mm.lock);
 
-out_ww:
 	i915_gem_object_unlock(obj);
 out:
 	i915_gem_object_put(obj);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 486c9953e5b6..36489be4896b 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -44,7 +44,7 @@ int i915_gem_gtt_prepare_pages(struct drm_i915_gem_object *obj,
 		 * the DMA remapper, i915_gem_shrink will return 0.
 		 */
 		GEM_BUG_ON(obj->mm.pages == pages);
-	} while (i915_gem_shrink(to_i915(obj->base.dev),
+	} while (i915_gem_shrink(NULL, to_i915(obj->base.dev),
 				 obj->base.size >> PAGE_SHIFT, NULL,
 				 I915_SHRINK_BOUND |
 				 I915_SHRINK_UNBOUND));
-- 
2.30.1

* [Intel-gfx] [PATCH v8 62/69] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (60 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 61/69] drm/i915: Finally remove obj->mm.lock Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 63/69] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
                   ` (11 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, Dan Carpenter

Instead of forcibly unbinding and rebinding every time, check whether
the notifier seqcount is still valid while the pages are bound. This
way we only rebind userptr when we actually need to, and avoid stalls.
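
For reference, the fastpath follows the usual mmu_interval seqcount
pattern; a minimal sketch (the exact locking lives in the diff below):

	unsigned long seq;

	seq = mmu_interval_read_begin(&obj->userptr.notifier);
	/* ... pin the user pages without holding locks ... */
	spin_lock(&i915->mm.notifier_lock);
	if (!mmu_interval_read_retry(&obj->userptr.notifier, seq))
		obj->userptr.notifier_seq = seq; /* pages still valid */
	spin_unlock(&i915->mm.notifier_lock);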

Changes since v1:
- Missing mutex_unlock, reported by kbuild.

Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 27 ++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 3babecd14b47..09b4219eab5d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -281,12 +281,33 @@ int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
 	if (ret)
 		return ret;
 
-	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
-	ret = i915_gem_object_userptr_unbind(obj, false);
+	/* optimistically try to preserve current pages while unlocked */
+	if (i915_gem_object_has_pages(obj) &&
+	    !mmu_interval_check_retry(&obj->userptr.notifier,
+				      obj->userptr.notifier_seq)) {
+		spin_lock(&i915->mm.notifier_lock);
+		if (obj->userptr.pvec &&
+		    !mmu_interval_read_retry(&obj->userptr.notifier,
+					     obj->userptr.notifier_seq)) {
+			obj->userptr.page_ref++;
+
+			/* We can keep using the current binding, this is the fastpath */
+			ret = 1;
+		}
+		spin_unlock(&i915->mm.notifier_lock);
+	}
+
+	if (!ret) {
+		/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
+		ret = i915_gem_object_userptr_unbind(obj, false);
+	}
 	i915_gem_object_unlock(obj);
-	if (ret)
+	if (ret < 0)
 		return ret;
 
+	if (ret > 0)
+		return 0;
+
 	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
 
 	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
-- 
2.30.1

* [Intel-gfx] [PATCH v8 63/69] drm/i915: Move gt_revoke() slightly
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (61 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 62/69] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 64/69] drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning Maarten Lankhorst
                   ` (10 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We get a lockdep splat while the reset mutex is held, because the
mutex can be taken from fence_wait. This conflicts with our mmu
notifier, since we recurse between the reset mutex and
mmap lock -> mmu notifier.

Break the recursion by calling revoke_mmaps() before taking the reset
mutex.

The reset code still needs fixing, as taking mmap locks during reset
is not allowed.
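
Schematically, the recursion being broken looks roughly like this (an
approximation of the lockdep chain, not literal lockdep output):

	reset.mutex -> gt_revoke() -> revoke_mmaps() -> mmap lock
	mmap lock -> mmu notifier invalidate -> fence wait -> reset.mutex

Moving gt_revoke() before mutex_lock(&gt->reset.mutex) removes the
first edge.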

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_reset.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 990cb4adbb9a..447f589750c2 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -970,8 +970,6 @@ static int do_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask)
 {
 	int err, i;
 
-	gt_revoke(gt);
-
 	err = __intel_gt_reset(gt, ALL_ENGINES);
 	for (i = 0; err && i < RESET_MAX_RETRIES; i++) {
 		msleep(10 * (i + 1));
@@ -1026,6 +1024,9 @@ void intel_gt_reset(struct intel_gt *gt,
 
 	might_sleep();
 	GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &gt->reset.flags));
+
+	gt_revoke(gt);
+
 	mutex_lock(&gt->reset.mutex);
 
 	/* Clear any previous failed attempts at recovery. Time to try again. */
-- 
2.30.1

* [Intel-gfx] [PATCH v8 64/69] drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning.
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (62 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 63/69] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 65/69] drm/i915: Fix pin_map in scheduler selftests Maarten Lankhorst
                   ` (9 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx

In reloc_iomap() we swallow -EDEADLK, but the error must be propagated
so the ww backoff handling can run. Add the missing check to make bsw
pass again.
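
For context, -EDEADLK is consumed by the ww backoff loop used
throughout this series; a minimal sketch of that pattern, where
do_work() is a placeholder for the locking/pinning being attempted:

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = do_work(obj, &ww); /* must pass -EDEADLK up */
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);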

Testcase: gem_exec_fence.basic-await

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 97b0d1134b66..df4f124dc61f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1209,6 +1209,8 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
 							  PIN_MAPPABLE |
 							  PIN_NONBLOCK /* NOWARN */ |
 							  PIN_NOEVICT);
+		if (vma == ERR_PTR(-EDEADLK))
+			return vma;
 		if (IS_ERR(vma)) {
 			memset(&cache->node, 0, sizeof(cache->node));
 			mutex_lock(&ggtt->vm.mutex);
-- 
2.30.1

* [Intel-gfx] [PATCH v8 65/69] drm/i915: Fix pin_map in scheduler selftests
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (63 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 64/69] drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 66/69] drm/i915: Add ww parameter to get_pages() callback Maarten Lankhorst
                   ` (8 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_scheduler.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_scheduler.c b/drivers/gpu/drm/i915/selftests/i915_scheduler.c
index f54bdbeaa48b..4c306e40c416 100644
--- a/drivers/gpu/drm/i915/selftests/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/selftests/i915_scheduler.c
@@ -645,7 +645,7 @@ static int __igt_schedule_cycle(struct drm_i915_private *i915,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	time = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	time = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(time)) {
 		err = PTR_ERR(time);
 		goto out_obj;
-- 
2.30.1

* [Intel-gfx] [PATCH v8 66/69] drm/i915: Add ww parameter to get_pages() callback
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (64 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 65/69] drm/i915: Fix pin_map in scheduler selftests Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 67/69] drm/i915: Add ww context to prepare_(read/write) Maarten Lankhorst
                   ` (7 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx

We will need this to support eviction with lmem, so
explicitly pass ww as a parameter.
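
As a sketch of the intended use (a hypothetical lmem backend, not part
of this patch; lmem_alloc_pages() and region_evict() are made-up
placeholders):

	static int lmem_get_pages(struct drm_i915_gem_object *obj,
				  struct i915_gem_ww_ctx *ww)
	{
		int err = lmem_alloc_pages(obj); /* placeholder allocator */

		if (err == -ENOSPC)
			/*
			 * Evict victims under the caller's ww context, so
			 * locking them can return -EDEADLK for backoff.
			 */
			err = region_evict(obj->mm.region, ww);

		return err;
	}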

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c           | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c         | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h     | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c            | 2 +-
 drivers/gpu/drm/i915/gem/i915_gem_region.c           | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_region.h           | 4 +++-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c            | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c           | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c          | 3 ++-
 drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c | 3 ++-
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c      | 9 ++++++---
 drivers/gpu/drm/i915/gvt/dmabuf.c                    | 3 ++-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c        | 3 ++-
 13 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 7636c2644ccf..5821524e391c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -199,7 +199,8 @@ struct dma_buf *i915_gem_prime_export(struct drm_gem_object *gem_obj, int flags)
 	return drm_gem_dmabuf_export(gem_obj->dev, &exp_info);
 }
 
-static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj,
+					    struct i915_gem_ww_ctx *ww)
 {
 	struct sg_table *pages;
 	unsigned int sg_page_sizes;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index 21cc40897ca8..90777fb5f5e0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -30,7 +30,8 @@ static void internal_free_pages(struct sg_table *st)
 	kfree(st);
 }
 
-static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj,
+					      struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct sg_table *st;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index a5bc42c7087a..280f54a75ab1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -50,7 +50,8 @@ struct drm_i915_gem_object_ops {
 	 * being released or under memory pressure (where we attempt to
 	 * reap pages for the shrinker).
 	 */
-	int (*get_pages)(struct drm_i915_gem_object *obj);
+	int (*get_pages)(struct drm_i915_gem_object *obj,
+			 struct i915_gem_ww_ctx *ww);
 	void (*put_pages)(struct drm_i915_gem_object *obj,
 			  struct sg_table *pages);
 	void (*truncate)(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index aed8a37ccdc9..58e222030e10 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -100,7 +100,7 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 		return -EFAULT;
 	}
 
-	err = obj->ops->get_pages(obj);
+	err = obj->ops->get_pages(obj, NULL);
 	GEM_BUG_ON(!err && !i915_gem_object_has_pages(obj));
 
 	return err;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index 6a84fb6dde24..6cb8b70c19bf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -20,7 +20,8 @@ i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
 }
 
 int
-i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
+i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj,
+				struct i915_gem_ww_ctx *ww)
 {
 	const u64 max_segment = i915_sg_segment_size();
 	struct intel_memory_region *mem = obj->mm.region;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
index ebddc86d78f7..c6f250aac925 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -9,10 +9,12 @@
 #include <linux/types.h>
 
 struct intel_memory_region;
+struct i915_gem_ww_ctx;
 struct drm_i915_gem_object;
 struct sg_table;
 
-int i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj);
+int i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj,
+				    struct i915_gem_ww_ctx *ww);
 void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index a9bfa66c8da1..3f80a017959a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -25,7 +25,8 @@ static void check_release_pagevec(struct pagevec *pvec)
 	cond_resched();
 }
 
-static int shmem_get_pages(struct drm_i915_gem_object *obj)
+static int shmem_get_pages(struct drm_i915_gem_object *obj,
+			   struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct intel_memory_region *mem = obj->mm.region;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index b0597de206de..5b732b0fe5ce 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -568,7 +568,8 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 	return st;
 }
 
-static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
+static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj,
+					    struct i915_gem_ww_ctx *ww)
 {
 	struct sg_table *pages =
 		i915_pages_create_for_stolen(obj->base.dev,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 09b4219eab5d..693d0dbe9ed2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -126,7 +126,8 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
 	}
 }
 
-static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
+static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj,
+				      struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
index 0c8ecfdf5405..6ce237a5e38d 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
@@ -25,7 +25,8 @@ static void huge_free_pages(struct drm_i915_gem_object *obj,
 	kfree(pages);
 }
 
-static int huge_get_pages(struct drm_i915_gem_object *obj)
+static int huge_get_pages(struct drm_i915_gem_object *obj,
+			  struct i915_gem_ww_ctx *ww)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL)
 	const unsigned long nreal = obj->scratch / PAGE_SIZE;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index d85ca79ac433..80eeb59aae67 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -56,7 +56,8 @@ static void huge_pages_free_pages(struct sg_table *st)
 	kfree(st);
 }
 
-static int get_huge_pages(struct drm_i915_gem_object *obj)
+static int get_huge_pages(struct drm_i915_gem_object *obj,
+			  struct i915_gem_ww_ctx *ww)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
 	unsigned int page_mask = obj->mm.page_mask;
@@ -179,7 +180,8 @@ huge_pages_object(struct drm_i915_private *i915,
 	return obj;
 }
 
-static int fake_get_huge_pages(struct drm_i915_gem_object *obj)
+static int fake_get_huge_pages(struct drm_i915_gem_object *obj,
+			       struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	const u64 max_len = rounddown_pow_of_two(UINT_MAX);
@@ -234,7 +236,8 @@ static int fake_get_huge_pages(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-static int fake_get_huge_pages_single(struct drm_i915_gem_object *obj)
+static int fake_get_huge_pages_single(struct drm_i915_gem_object *obj,
+				      struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct sg_table *st;
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
index d4f883f35b95..609257aaf711 100644
--- a/drivers/gpu/drm/i915/gvt/dmabuf.c
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
@@ -55,7 +55,8 @@ static void vgpu_unpin_dma_address(struct intel_vgpu *vgpu,
 }
 
 static int vgpu_gem_get_pages(
-		struct drm_i915_gem_object *obj)
+		struct drm_i915_gem_object *obj,
+		struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
 	struct intel_vgpu *vgpu;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 2e4f06eaacc1..fc92fed7a04a 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -50,7 +50,8 @@ static void fake_free_pages(struct drm_i915_gem_object *obj,
 	kfree(pages);
 }
 
-static int fake_get_pages(struct drm_i915_gem_object *obj)
+static int fake_get_pages(struct drm_i915_gem_object *obj,
+			  struct i915_gem_ww_ctx *ww)
 {
 #define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)
 #define PFN_BIAS 0x1000
-- 
2.30.1

* [Intel-gfx] [PATCH v8 67/69] drm/i915: Add ww context to prepare_(read/write)
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (65 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 66/69] drm/i915: Add ww parameter to get_pages() callback Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 68/69] drm/i915: Pass ww ctx to pin_map Maarten Lankhorst
                   ` (6 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx

This will allow us to pass the ww context explicitly to pin_pages once
it starts taking one.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c              | 2 ++
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c          | 7 ++++---
 drivers/gpu/drm/i915/gem/i915_gem_object.h              | 2 ++
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c         | 2 +-
 drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 4 ++--
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c   | 4 ++--
 drivers/gpu/drm/i915/i915_gem.c                         | 4 ++--
 7 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index e3537922183b..a5b3a21faf9c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -534,6 +534,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
  * flush the object from the CPU cache.
  */
 int i915_gem_object_prepare_read(struct drm_i915_gem_object *obj,
+				 struct i915_gem_ww_ctx *ww,
 				 unsigned int *needs_clflush)
 {
 	int ret;
@@ -578,6 +579,7 @@ int i915_gem_object_prepare_read(struct drm_i915_gem_object *obj,
 }
 
 int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj,
+				  struct i915_gem_ww_ctx *ww,
 				  unsigned int *needs_clflush)
 {
 	int ret;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index df4f124dc61f..74667be619b1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1147,9 +1147,10 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
 }
 
 static void *reloc_kmap(struct drm_i915_gem_object *obj,
-			struct reloc_cache *cache,
+			struct i915_execbuffer *eb,
 			unsigned long pageno)
 {
+	struct reloc_cache *cache = &eb->reloc_cache;
 	void *vaddr;
 	struct page *page;
 
@@ -1159,7 +1160,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
 		unsigned int flushes;
 		int err;
 
-		err = i915_gem_object_prepare_write(obj, &flushes);
+		err = i915_gem_object_prepare_write(obj, &eb->ww, &flushes);
 		if (err)
 			return ERR_PTR(err);
 
@@ -1259,7 +1260,7 @@ static void *reloc_vaddr(struct drm_i915_gem_object *obj,
 		if ((cache->vaddr & KMAP) == 0)
 			vaddr = reloc_iomap(obj, eb, page);
 		if (!vaddr)
-			vaddr = reloc_kmap(obj, cache, page);
+			vaddr = reloc_kmap(obj, eb, page);
 	}
 
 	return vaddr;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 7a252dc4237f..1a8ec4035112 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -480,8 +480,10 @@ static inline void i915_gem_object_unpin_map(struct drm_i915_gem_object *obj)
 void __i915_gem_object_release_map(struct drm_i915_gem_object *obj);
 
 int i915_gem_object_prepare_read(struct drm_i915_gem_object *obj,
+				 struct i915_gem_ww_ctx *ww,
 				 unsigned int *needs_clflush);
 int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj,
+				  struct i915_gem_ww_ctx *ww,
 				  unsigned int *needs_clflush);
 #define CLFLUSH_BEFORE	BIT(0)
 #define CLFLUSH_AFTER	BIT(1)
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 80eeb59aae67..8b07bb77bb86 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -987,7 +987,7 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 	int err;
 
 	i915_gem_object_lock(obj, NULL);
-	err = i915_gem_object_prepare_read(obj, &needs_flush);
+	err = i915_gem_object_prepare_read(obj, NULL, &needs_flush);
 	if (err)
 		goto err_unlock;
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 8f2e447bd503..8f5b1e44d534 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -29,7 +29,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 	int err;
 
 	i915_gem_object_lock(ctx->obj, NULL);
-	err = i915_gem_object_prepare_write(ctx->obj, &needs_clflush);
+	err = i915_gem_object_prepare_write(ctx->obj, NULL, &needs_clflush);
 	if (err)
 		goto out;
 
@@ -62,7 +62,7 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
 	int err;
 
 	i915_gem_object_lock(ctx->obj, NULL);
-	err = i915_gem_object_prepare_read(ctx->obj, &needs_clflush);
+	err = i915_gem_object_prepare_read(ctx->obj, NULL, &needs_clflush);
 	if (err)
 		goto out;
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index 82d5d37e9b66..af5f29a8a7f2 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -462,7 +462,7 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
 	int err;
 
 	i915_gem_object_lock(obj, NULL);
-	err = i915_gem_object_prepare_write(obj, &need_flush);
+	err = i915_gem_object_prepare_write(obj, NULL, &need_flush);
 	if (err)
 		goto out;
 
@@ -492,7 +492,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
 	int err;
 
 	i915_gem_object_lock(obj, NULL);
-	err = i915_gem_object_prepare_read(obj, &needs_flush);
+	err = i915_gem_object_prepare_read(obj, NULL, &needs_flush);
 	if (err)
 		goto out_unlock;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 81c7f2b5a585..a935c05809d5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -216,7 +216,7 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 	if (ret)
 		goto err_unlock;
 
-	ret = i915_gem_object_prepare_read(obj, &needs_clflush);
+	ret = i915_gem_object_prepare_read(obj, NULL, &needs_clflush);
 	if (ret)
 		goto err_unpin;
 
@@ -637,7 +637,7 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	if (ret)
 		goto err_unlock;
 
-	ret = i915_gem_object_prepare_write(obj, &needs_clflush);
+	ret = i915_gem_object_prepare_write(obj, NULL, &needs_clflush);
 	if (ret)
 		goto err_unpin;
 
-- 
2.30.1

* [Intel-gfx] [PATCH v8 68/69] drm/i915: Pass ww ctx to pin_map
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (66 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 67/69] drm/i915: Add ww context to prepare_(read/write) Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 69/69] drm/i915: Pass ww ctx to i915_gem_object_pin_pages Maarten Lankhorst
                   ` (5 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx

This will allow us to pass the ww context explicitly to pin_pages once
it starts taking one.

This also lets us finally kill off the implicit retrieval of ww from
the obj.
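
Callers that already hold a ww context now simply forward it; a sketch
mirroring the hunks below:

	vaddr = i915_gem_object_pin_map(obj, &eb->ww, I915_MAP_WB);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr); /* may be -EDEADLK; caller backs off */

while i915_gem_object_pin_map_unlocked() grows its own
init/lock/backoff/fini loop for callers without a ww context.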

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  7 ++++---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  1 +
 .../gpu/drm/i915/gem/i915_gem_object_blt.c    |  4 ++--
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 21 +++++++++++++++----
 .../drm/i915/gem/selftests/i915_gem_context.c |  8 ++++---
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  2 +-
 drivers/gpu/drm/i915/gt/gen7_renderclear.c    |  2 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  2 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |  2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  4 ++--
 drivers/gpu/drm/i915/gt/intel_renderstate.c   |  2 +-
 drivers/gpu/drm/i915/gt/intel_ring.c          |  2 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c   |  2 +-
 drivers/gpu/drm/i915/gt/intel_timeline.c      |  7 ++++---
 drivers/gpu/drm/i915/gt/intel_timeline.h      |  3 ++-
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |  2 +-
 drivers/gpu/drm/i915/gt/mock_engine.c         |  2 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  2 +-
 drivers/gpu/drm/i915/gt/selftest_rps.c        | 10 ++++-----
 .../gpu/drm/i915/gt/selftest_workarounds.c    |  6 +++---
 drivers/gpu/drm/i915/gvt/cmd_parser.c         |  4 ++--
 drivers/gpu/drm/i915/i915_perf.c              |  4 ++--
 drivers/gpu/drm/i915/selftests/igt_spinner.c  |  2 +-
 24 files changed, 60 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 74667be619b1..3d50f2d17d3c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1333,7 +1333,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 	if (err)
 		goto err_pool;
 
-	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
+	cmd = i915_gem_object_pin_map(pool->obj, &eb->ww, pool->type);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err_pool;
@@ -2482,7 +2482,8 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 			goto err_shadow;
 	}
 
-	pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_WB);
+	pw->shadow_map = i915_gem_object_pin_map(shadow->obj, &eb->ww,
+						 I915_MAP_WB);
 	if (IS_ERR(pw->shadow_map)) {
 		err = PTR_ERR(pw->shadow_map);
 		goto err_trampoline;
@@ -2493,7 +2494,7 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 
 	pw->batch_map = ERR_PTR(-ENODEV);
 	if (needs_clflush && i915_has_memcpy_from_wc())
-		pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC);
+		pw->batch_map = i915_gem_object_pin_map(batch, &eb->ww, I915_MAP_WC);
 
 	if (IS_ERR(pw->batch_map)) {
 		err = i915_gem_object_pin_pages(batch);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 2561a2f1e54f..edac8ee3be9a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -439,7 +439,7 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 		goto out;
 
 	/* As this is primarily for debugging, let's focus on simplicity */
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_FORCE_WC);
+	vaddr = i915_gem_object_pin_map(obj, &ww, I915_MAP_FORCE_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto out;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 1a8ec4035112..9bd9b47dcc8d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -450,6 +450,7 @@ void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
  * ERR_PTR() on error.
  */
 void *__must_check i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
+					   struct i915_gem_ww_ctx *ww,
 					   enum i915_map_type type);
 
 void *__must_check i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
index df8e8c18c6c9..fae18622d2da 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
@@ -58,7 +58,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
 	/* we pinned the pool, mark it as such */
 	intel_gt_buffer_pool_mark_used(pool);
 
-	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
+	cmd = i915_gem_object_pin_map(pool->obj, ww, pool->type);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_unpin;
@@ -283,7 +283,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
 	/* we pinned the pool, mark it as such */
 	intel_gt_buffer_pool_mark_used(pool);
 
-	cmd = i915_gem_object_pin_map(pool->obj, pool->type);
+	cmd = i915_gem_object_pin_map(pool->obj, ww, pool->type);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_unpin;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 58e222030e10..232832398457 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -341,6 +341,7 @@ static void *i915_gem_object_map_pfn(struct drm_i915_gem_object *obj,
 
 /* get, pin, and map the pages of the object into kernel space */
 void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
+			      struct i915_gem_ww_ctx *ww,
 			      enum i915_map_type type)
 {
 	enum i915_map_type has_type;
@@ -408,13 +409,25 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
 				       enum i915_map_type type)
 {
+	struct i915_gem_ww_ctx ww;
 	void *ret;
+	int err;
 
-	i915_gem_object_lock(obj, NULL);
-	ret = i915_gem_object_pin_map(obj, type);
-	i915_gem_object_unlock(obj);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		ret = i915_gem_object_pin_map(obj, &ww, type);
+	if (IS_ERR(ret))
+		err = PTR_ERR(ret);
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 
-	return ret;
+	return err ? ERR_PTR(err) : ret;
 }
 
 void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index af5f29a8a7f2..74c53b5ae96e 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -893,13 +893,15 @@ static int igt_shared_ctx_exec(void *arg)
 	return err;
 }
 
-static int rpcs_query_batch(struct drm_i915_gem_object *rpcs, struct i915_vma *vma)
+static int rpcs_query_batch(struct drm_i915_gem_object *rpcs,
+			    struct i915_gem_ww_ctx *ww,
+			    struct i915_vma *vma)
 {
 	u32 *cmd;
 
 	GEM_BUG_ON(INTEL_GEN(vma->vm->i915) < 8);
 
-	cmd = i915_gem_object_pin_map(rpcs, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map(rpcs, ww, I915_MAP_WB);
 	if (IS_ERR(cmd))
 		return PTR_ERR(cmd);
 
@@ -963,7 +965,7 @@ emit_rpcs_query(struct drm_i915_gem_object *obj,
 	if (err)
 		goto err_vma;
 
-	err = rpcs_query_batch(rpcs, vma);
+	err = rpcs_query_batch(rpcs, &ww, vma);
 	if (err)
 		goto err_batch;
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index dd74bc09ec88..3edf5a1cc0c0 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -120,7 +120,7 @@ static int igt_dmabuf_import(void *arg)
 	}
 
 	if (0) { /* Can not yet map dmabuf */
-		obj_map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		obj_map = i915_gem_object_pin_map(obj, NULL, I915_MAP_WB);
 		if (IS_ERR(obj_map)) {
 			err = PTR_ERR(obj_map);
 			pr_err("i915_gem_object_pin_map failed with err=%d\n", err);
diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
index de575fdb033f..c0b0044cb52a 100644
--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
@@ -436,7 +436,7 @@ int gen7_setup_clear_gpr_bb(struct intel_engine_cs * const engine,
 
 	GEM_BUG_ON(vma->obj->base.size < bv.size);
 
-	batch = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
+	batch = i915_gem_object_pin_map(vma->obj, NULL, I915_MAP_WC);
 	if (IS_ERR(batch))
 		return PTR_ERR(batch);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 859c79b0d6ee..b2a62bcd9cc3 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -666,7 +666,7 @@ static int init_status_page(struct intel_engine_cs *engine)
 	if (ret)
 		goto err;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, &ww, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		ret = PTR_ERR(vaddr);
 		goto err_unpin;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index bddc5c98fb04..c042755c4ca2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -30,7 +30,7 @@ static void dbg_poison_ce(struct intel_context *ce)
 		if (!i915_gem_object_trylock(obj))
 			return;
 
-		map = i915_gem_object_pin_map(obj, type);
+		map = i915_gem_object_pin_map(obj, NULL, type);
 		if (!IS_ERR(map)) {
 			memset(map, CONTEXT_REDZONE, obj->base.size);
 			i915_gem_object_flush_map(obj);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index a2b916d27a39..63e544ce4396 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -902,7 +902,7 @@ lrc_pre_pin(struct intel_context *ce,
 	GEM_BUG_ON(!ce->state);
 	GEM_BUG_ON(!i915_vma_is_pinned(ce->state));
 
-	*vaddr = i915_gem_object_pin_map(ce->state->obj,
+	*vaddr = i915_gem_object_pin_map(ce->state->obj, ww,
 					 i915_coherent_map_type(ce->engine->i915) |
 					 I915_MAP_OVERRIDE);
 
@@ -1512,7 +1512,7 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
 	if (err)
 		goto err;
 
-	batch = i915_gem_object_pin_map(wa_ctx->vma->obj, I915_MAP_WB);
+	batch = i915_gem_object_pin_map(wa_ctx->vma->obj, &ww, I915_MAP_WB);
 	if (IS_ERR(batch)) {
 		err = PTR_ERR(batch);
 		goto err_unpin;
diff --git a/drivers/gpu/drm/i915/gt/intel_renderstate.c b/drivers/gpu/drm/i915/gt/intel_renderstate.c
index b03e197b1d99..69d4856a2b11 100644
--- a/drivers/gpu/drm/i915/gt/intel_renderstate.c
+++ b/drivers/gpu/drm/i915/gt/intel_renderstate.c
@@ -53,7 +53,7 @@ static int render_state_setup(struct intel_renderstate *so,
 	int ret = -EINVAL;
 	u32 *d;
 
-	d = i915_gem_object_pin_map(so->vma->obj, I915_MAP_WB);
+	d = i915_gem_object_pin_map(so->vma->obj, &so->ww, I915_MAP_WB);
 	if (IS_ERR(d))
 		return PTR_ERR(d);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
index aee0a77c77e0..7a768ab76765 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring.c
@@ -54,7 +54,7 @@ int intel_ring_pin(struct intel_ring *ring, struct i915_gem_ww_ctx *ww)
 	if (i915_vma_is_map_and_fenceable(vma))
 		addr = (void __force *)i915_vma_pin_iomap(vma);
 	else
-		addr = i915_gem_object_pin_map(vma->obj,
+		addr = i915_gem_object_pin_map(vma->obj, ww,
 					       i915_coherent_map_type(vma->vm->i915));
 	if (IS_ERR(addr)) {
 		ret = PTR_ERR(addr);
diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index f8ad891ad635..d23e02a35269 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -442,7 +442,7 @@ static int ring_context_init_default_state(struct intel_context *ce,
 	struct drm_i915_gem_object *obj = ce->state->obj;
 	void *vaddr;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, ww, I915_MAP_WB);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 032e1d1b4c5e..b8f502dce8c7 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -53,13 +53,14 @@ static int __timeline_active(struct i915_active *active)
 }
 
 I915_SELFTEST_EXPORT int
-intel_timeline_pin_map(struct intel_timeline *timeline)
+intel_timeline_pin_map(struct intel_timeline *timeline,
+		       struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
 	u32 ofs = offset_in_page(timeline->hwsp_offset);
 	void *vaddr;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, ww, I915_MAP_WB);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -184,7 +185,7 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 		return 0;
 
 	if (!tl->hwsp_map) {
-		err = intel_timeline_pin_map(tl);
+		err = intel_timeline_pin_map(tl, ww);
 		if (err)
 			return err;
 	}
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
index 57308c4d664a..dad5a60e556b 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
@@ -98,6 +98,7 @@ intel_timeline_is_last(const struct intel_timeline *tl,
 	return list_is_last_rcu(&rq->link, &tl->requests);
 }
 
-I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
+I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl,
+						 struct i915_gem_ww_ctx *ww));
 
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index bb2357119792..72c6da608cd2 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -2241,7 +2241,7 @@ static int engine_wa_list_verify(struct intel_context *ce,
 		goto err_rq;
 	}
 
-	results = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
+	results = i915_gem_object_pin_map(vma->obj, NULL, I915_MAP_WB);
 	if (IS_ERR(results)) {
 		err = PTR_ERR(results);
 		goto err_rq;
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index 42fd86658ee7..193b355e45d6 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -20,7 +20,7 @@ static int mock_timeline_pin(struct intel_timeline *tl)
 	if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
 		return -EBUSY;
 
-	err = intel_timeline_pin_map(tl);
+	err = intel_timeline_pin_map(tl, NULL);
 	i915_gem_object_unlock(tl->hwsp_ggtt->obj);
 	if (err)
 		return err;
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 5726943d7ff0..60deea51ecc9 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -425,7 +425,7 @@ static int __live_lrc_state(struct intel_engine_cs *engine,
 		goto err_rq;
 	}
 
-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map(scratch->obj, NULL, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_rq;
diff --git a/drivers/gpu/drm/i915/gt/selftest_rps.c b/drivers/gpu/drm/i915/gt/selftest_rps.c
index 967641fee42a..53f6d2e02506 100644
--- a/drivers/gpu/drm/i915/gt/selftest_rps.c
+++ b/drivers/gpu/drm/i915/gt/selftest_rps.c
@@ -83,17 +83,17 @@ create_spin_counter(struct intel_engine_cs *engine,
 
 	err = i915_vma_pin(vma, 0, 0, PIN_USER);
 	if (err)
-		goto err_unlock;
-
-	i915_vma_lock(vma);
+		goto err_put;
 
-	base = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	base = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(base)) {
 		err = PTR_ERR(base);
 		goto err_unpin;
 	}
 	cs = base;
 
+	i915_vma_lock(vma);
+
 	*cs++ = MI_LOAD_REGISTER_IMM(__NGPR__ * 2);
 	for (i = 0; i < __NGPR__; i++) {
 		*cs++ = i915_mmio_reg_offset(CS_GPR(i));
@@ -137,8 +137,6 @@ create_spin_counter(struct intel_engine_cs *engine,
 
 err_unpin:
 	i915_vma_unpin(vma);
-err_unlock:
-	i915_vma_unlock(vma);
 err_put:
 	i915_gem_object_put(obj);
 	return ERR_PTR(err);
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index a508614b2fd5..5e74e65550e8 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -179,7 +179,7 @@ static int check_whitelist(struct intel_context *ce)
 		return PTR_ERR(result);
 
 	i915_gem_object_lock(result->obj, NULL);
-	vaddr = i915_gem_object_pin_map(result->obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(result->obj, NULL, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto out_put;
@@ -516,13 +516,13 @@ static int check_dirty_whitelist(struct intel_context *ce)
 		if (err)
 			goto out;
 
-		cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+		cs = i915_gem_object_pin_map(batch->obj, &ww, I915_MAP_WC);
 		if (IS_ERR(cs)) {
 			err = PTR_ERR(cs);
 			goto out_ctx;
 		}
 
-		results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+		results = i915_gem_object_pin_map(scratch->obj, &ww, I915_MAP_WB);
 		if (IS_ERR(results)) {
 			err = PTR_ERR(results);
 			goto out_unmap_batch;
diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
index ec6ea11d747f..d50396d51f9f 100644
--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
@@ -1933,7 +1933,7 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 		goto err_free_bb;
 	}
 
-	bb->va = i915_gem_object_pin_map(bb->obj, I915_MAP_WB);
+	bb->va = i915_gem_object_pin_map_unlocked(bb->obj, I915_MAP_WB);
 	if (IS_ERR(bb->va)) {
 		ret = PTR_ERR(bb->va);
 		goto err_free_obj;
@@ -3006,7 +3006,7 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 		return PTR_ERR(obj);
 
 	/* get the va of the shadow batch buffer */
-	map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(map)) {
 		gvt_vgpu_err("failed to vmap shadow indirect ctx\n");
 		ret = PTR_ERR(map);
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index d13d1d9d4039..5bdea158d445 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1665,7 +1665,7 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 		goto out_ww;
 	}
 
-	batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB);
+	batch = cs = i915_gem_object_pin_map(bo, &ww, I915_MAP_WB);
 	if (IS_ERR(batch)) {
 		ret = PTR_ERR(batch);
 		goto err_unpin;
@@ -1878,7 +1878,7 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 	if (err)
 		goto out_ww;
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map(obj, &ww, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto out_ww;
diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.c b/drivers/gpu/drm/i915/selftests/igt_spinner.c
index 243cb4b9a2ee..ccddbfcf09f1 100644
--- a/drivers/gpu/drm/i915/selftests/igt_spinner.c
+++ b/drivers/gpu/drm/i915/selftests/igt_spinner.c
@@ -54,7 +54,7 @@ static void *igt_spinner_pin_obj(struct intel_context *ce,
 	if (ret)
 		return ERR_PTR(ret);
 
-	vaddr = i915_gem_object_pin_map(obj, mode);
+	vaddr = i915_gem_object_pin_map(obj, ww, mode);
 
 	if (!ww)
 		i915_gem_object_unlock(obj);
-- 
2.30.1

* [Intel-gfx] [PATCH v8 69/69] drm/i915: Pass ww ctx to i915_gem_object_pin_pages
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (67 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 68/69] drm/i915: Pass ww ctx to pin_map Maarten Lankhorst
@ 2021-03-11 13:42 ` Maarten Lankhorst
  2021-03-11 14:27 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev16) Patchwork
                   ` (4 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-11 13:42 UTC (permalink / raw)
  To: intel-gfx

This is the final part of passing the ww ctx to the get_pages()
callbacks. Now we no longer have to retrieve the ww ctx implicitly
via get_ww_ctx.
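
The resulting caller-side pattern, mirroring the dma-buf hunks below
(a sketch; the surrounding ww backoff loop is elided):

	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = i915_gem_object_pin_pages(obj, &ww);
	if (!err) {
		/* ... use the pages ... */
		i915_gem_object_unpin_pages(obj);
	}

Any -EDEADLK from the lock or the pin unwinds to the ww backoff loop.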

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c   |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  6 +++---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    | 21 ++++++++++++-------
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      | 19 +++++++++++------
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 11 ++++++----
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 14 +++++++------
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 ++--
 drivers/gpu/drm/i915/gt/intel_gtt.c           |  4 ++--
 drivers/gpu/drm/i915/i915_gem.c               |  6 +++---
 drivers/gpu/drm/i915/i915_vma.c               |  7 ++++---
 13 files changed, 60 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index acfd50248f7b..0a21393412c9 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -1153,7 +1153,7 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	if (!ret && phys_cursor)
 		ret = i915_gem_object_attach_phys(obj, alignment);
 	if (!ret)
-		ret = i915_gem_object_pin_pages(obj);
+		ret = i915_gem_object_pin_pages(obj, &ww);
 	if (ret)
 		goto err;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
index e4c24558eaa8..109f5c8b802a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
@@ -55,7 +55,7 @@ static struct clflush *clflush_work_create(struct drm_i915_gem_object *obj)
 	if (!clflush)
 		return NULL;
 
-	if (__i915_gem_object_get_pages(obj) < 0) {
+	if (__i915_gem_object_get_pages(obj, NULL) < 0) {
 		kfree(clflush);
 		return NULL;
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 5821524e391c..9e6d72a3e94b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -25,7 +25,7 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 	struct scatterlist *src, *dst;
 	int ret, i;
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, NULL);
 	if (ret)
 		goto err;
 
@@ -130,7 +130,7 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire
 retry:
 	err = i915_gem_object_lock(obj, &ww);
 	if (!err)
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages(obj, &ww);
 	if (!err) {
 		i915_gem_object_set_to_cpu_domain(obj, write);
 		i915_gem_object_unpin_pages(obj);
@@ -154,7 +154,7 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct
 retry:
 	err = i915_gem_object_lock(obj, &ww);
 	if (!err)
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages(obj, &ww);
 	if (!err) {
 		i915_gem_object_set_to_gtt_domain(obj, false);
 		i915_gem_object_unpin_pages(obj);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index a5b3a21faf9c..85d3d3f4a77e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -430,6 +430,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_gem_object *obj;
 	u32 read_domains = args->read_domains;
 	u32 write_domain = args->write_domain;
+	struct i915_gem_ww_ctx ww;
 	int err;
 
 	/* Only handle setting domains to types used by the CPU. */
@@ -470,7 +471,9 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 		goto out;
 	}
 
-	err = i915_gem_object_lock_interruptible(obj, NULL);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock_interruptible(obj, &ww);
 	if (err)
 		goto out;
 
@@ -483,9 +486,9 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	 * continue to assume that the obj remained out of the CPU cached
 	 * domain.
 	 */
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages(obj, &ww);
 	if (err)
-		goto out_unlock;
+		goto out;
 
 	/*
 	 * Already in the desired write domain? Nothing for us to do!
@@ -510,8 +513,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 out_unpin:
 	i915_gem_object_unpin_pages(obj);
 
-out_unlock:
-	i915_gem_object_unlock(obj);
 out_wait:
 	if (!err) {
 		err = i915_gem_object_wait(obj,
@@ -524,6 +525,12 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	}
 
 out:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 	i915_gem_object_put(obj);
 	return err;
 }
@@ -545,7 +552,7 @@ int i915_gem_object_prepare_read(struct drm_i915_gem_object *obj,
 
 	assert_object_held(obj);
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, ww);
 	if (ret)
 		return ret;
 
@@ -590,7 +597,7 @@ int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj,
 
 	assert_object_held(obj);
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, ww);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 3d50f2d17d3c..7e60e41d6c0c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2497,7 +2497,7 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 		pw->batch_map = i915_gem_object_pin_map(batch, &eb->ww, I915_MAP_WC);
 
 	if (IS_ERR(pw->batch_map)) {
-		err = i915_gem_object_pin_pages(batch);
+		err = i915_gem_object_pin_pages(batch, &eb->ww);
 		if (err)
 			goto err_unmap_shadow;
 		pw->batch_map = NULL;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index edac8ee3be9a..8690bf434407 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -239,6 +239,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 	struct i915_mmap_offset *mmo = area->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
 	resource_size_t iomap;
+	struct i915_gem_ww_ctx ww;
 	int err;
 
 	/* Sanity check that we allow writing into this object */
@@ -246,10 +247,11 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 		     area->vm_flags & VM_WRITE))
 		return VM_FAULT_SIGBUS;
 
-	if (i915_gem_object_lock_interruptible(obj, NULL))
-		return VM_FAULT_NOPAGE;
-
-	err = i915_gem_object_pin_pages(obj);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj, &ww);
 	if (err)
 		goto out;
 
@@ -272,7 +274,12 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 	i915_gem_object_unpin_pages(obj);
 
 out:
-	i915_gem_object_unlock(obj);
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 	return i915_error_to_vmf_fault(err);
 }
 
@@ -313,7 +320,7 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
 		goto err_rpm;
 	}
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, &ww);
 	if (ret)
 		goto err_rpm;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 9bd9b47dcc8d..64819b4e592a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -378,18 +378,21 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 				 struct sg_table *pages,
 				 unsigned int sg_page_sizes);
 
-int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
-int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
+int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj,
+				  struct i915_gem_ww_ctx *ww);
+int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj,
+				struct i915_gem_ww_ctx *ww);
 
 static inline int __must_check
-i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
+i915_gem_object_pin_pages(struct drm_i915_gem_object *obj,
+			  struct i915_gem_ww_ctx *ww)
 {
 	assert_object_held(obj);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
 
-	return __i915_gem_object_get_pages(obj);
+	return __i915_gem_object_get_pages(obj, ww);
 }
 
 int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 232832398457..94cc33ea483d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -87,7 +87,8 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 	}
 }
 
-int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
+int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj,
+				  struct i915_gem_ww_ctx *ww)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	int err;
@@ -100,7 +101,7 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 		return -EFAULT;
 	}
 
-	err = obj->ops->get_pages(obj, NULL);
+	err = obj->ops->get_pages(obj, ww);
 	GEM_BUG_ON(!err && !i915_gem_object_has_pages(obj));
 
 	return err;
@@ -113,7 +114,8 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
  * either as a result of memory pressure (reaping pages under the shrinker)
  * or as the object is itself released.
  */
-int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
+int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj,
+				struct i915_gem_ww_ctx *ww)
 {
 	int err;
 
@@ -124,7 +126,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	if (unlikely(!i915_gem_object_has_pages(obj))) {
 		GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
-		err = ____i915_gem_object_get_pages(obj);
+		err = ____i915_gem_object_get_pages(obj, ww);
 		if (err)
 			return err;
 
@@ -144,7 +146,7 @@ int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
 retry:
 	err = i915_gem_object_lock(obj, &ww);
 	if (!err)
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages(obj, &ww);
 
 	if (err == -EDEADLK) {
 		err = i915_gem_ww_ctx_backoff(&ww);
@@ -362,7 +364,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 		if (unlikely(!i915_gem_object_has_pages(obj))) {
 			GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
-			err = ____i915_gem_object_get_pages(obj);
+			err = ____i915_gem_object_get_pages(obj, ww);
 			if (err)
 				return ERR_PTR(err);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 5b732b0fe5ce..48b2258091c3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -641,7 +641,7 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 	if (WARN_ON(!i915_gem_object_trylock(obj)))
 		return -EBUSY;
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages(obj, NULL);
 	if (!err)
 		i915_gem_object_init_memory_region(obj, mem);
 	i915_gem_object_unlock(obj);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 693d0dbe9ed2..71c928c789b3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -261,7 +261,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 		i915_gem_userptr_put_pages(obj, pages);
 
 	if (get_pages)
-		err = ____i915_gem_object_get_pages(obj);
+		err = ____i915_gem_object_get_pages(obj, NULL);
 
 	return err;
 }
@@ -390,7 +390,7 @@ int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj)
 		 * it doesn't matter if we collide with the mmu notifier,
 		 * and -EAGAIN handling is not required.
 		 */
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages(obj, NULL);
 		if (!err)
 			i915_gem_object_unpin_pages(obj);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 994e4ea28903..38c1ba203071 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -30,7 +30,7 @@ int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
 	int err;
 
 	i915_gem_object_lock(obj, NULL);
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages(obj, NULL);
 	i915_gem_object_unlock(obj);
 	if (err)
 		return err;
@@ -43,7 +43,7 @@ int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object
 {
 	int err;
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages(obj, NULL);
 	if (err)
 		return err;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a935c05809d5..49f0e459ee64 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -212,7 +212,7 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, NULL);
 	if (ret)
 		goto err_unlock;
 
@@ -311,7 +311,7 @@ static struct i915_vma *i915_gem_gtt_prepare(struct drm_i915_gem_object *obj,
 		vma = NULL;
 	}
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, &ww);
 	if (ret) {
 		if (drm_mm_node_allocated(node)) {
 			ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
@@ -633,7 +633,7 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
-	ret = i915_gem_object_pin_pages(obj);
+	ret = i915_gem_object_pin_pages(obj, NULL);
 	if (ret)
 		goto err_unlock;
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index c5b9f30ac0a3..03291c032814 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -785,7 +785,8 @@ static bool try_qad_pin(struct i915_vma *vma, unsigned int flags)
 	return pinned;
 }
 
-static int vma_get_pages(struct i915_vma *vma)
+static int vma_get_pages(struct i915_vma *vma,
+			 struct i915_gem_ww_ctx *ww)
 {
 	int err = 0;
 
@@ -798,7 +799,7 @@ static int vma_get_pages(struct i915_vma *vma)
 
 	if (!atomic_read(&vma->pages_count)) {
 		if (vma->obj) {
-			err = i915_gem_object_pin_pages(vma->obj);
+			err = i915_gem_object_pin_pages(vma->obj, ww);
 			if (err)
 				goto unlock;
 		}
@@ -876,7 +877,7 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 	if (try_qad_pin(vma, flags & I915_VMA_BIND_MASK))
 		return 0;
 
-	err = vma_get_pages(vma);
+	err = vma_get_pages(vma, ww);
 	if (err)
 		return err;
 
-- 
2.30.1
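
For reference, every caller converted above follows the same ww transaction
shape: initialize an acquire context, lock the object through it, hand the
same context down to i915_gem_object_pin_pages() so that get_pages() may
lock further objects, and on -EDEADLK back off and retry. A minimal sketch
of that loop, with the actual work elided and error paths trimmed (function
names as in the diff; this is not a drop-in replacement for any hunk):

        struct i915_gem_ww_ctx ww;
        int err;

        i915_gem_ww_ctx_init(&ww, true);        /* true = interruptible waits */
retry:
        err = i915_gem_object_lock(obj, &ww);
        if (!err)
                /* ww is passed down so get_pages() can lock more objects */
                err = i915_gem_object_pin_pages(obj, &ww);
        if (!err) {
                /* ... use the object's pages ... */
                i915_gem_object_unpin_pages(obj);
        }
        if (err == -EDEADLK) {
                /* drop all locks held by ww, wait on the contended one, redo */
                err = i915_gem_ww_ctx_backoff(&ww);
                if (!err)
                        goto retry;
        }
        i915_gem_ww_ctx_fini(&ww);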

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev16)
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (68 preceding siblings ...)
  2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 69/69] drm/i915: Pass ww ctx to i915_gem_object_pin_pages Maarten Lankhorst
@ 2021-03-11 14:27 ` Patchwork
  2021-03-11 14:28 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Patchwork @ 2021-03-11 14:27 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev16)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
62d87d68daa6 drm/i915: Do not share hwsp across contexts any more, v7.
-:562: WARNING:CONSTANT_COMPARISON: Comparisons should place the constant on the right side of the test
#562: FILE: drivers/gpu/drm/i915/gt/intel_timeline.c:286:
+	if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))

total: 0 errors, 1 warnings, 0 checks, 954 lines checked
45179357f4a3 drm/i915: Pin timeline map after first timeline pin, v3.
-:16: WARNING:TYPO_SPELLING: 'arithmatic' may be misspelled - perhaps 'arithmetic'?
#16: 
- Fix NULL + XX arithmatic, use casts. (kbuild)
                ^^^^^^^^^^

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
05ca8fd3489b drm/i915: Move cmd parser pinning to execbuffer
c3e85904db1d drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
-:59: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#59: FILE: drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:452:
+		err = i915_vma_pin_ww(vma, &eb->ww,
 					     entry->pad_to_size,

total: 0 errors, 0 warnings, 1 checks, 75 lines checked
d3c0d6934682 drm/i915: Ensure we hold the object mutex in pin correctly.
513ee4798499 drm/i915: Add gem object locking to madvise.
98108b44d022 drm/i915: Move HAS_STRUCT_PAGE to obj->flags
-:110: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#110: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.c:63:
+			  struct lock_class_key *key, unsigned flags)

-:133: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#133: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:53:
+			  unsigned alloc_flags);

total: 0 errors, 2 warnings, 0 checks, 350 lines checked
3768448e58fc drm/i915: Rework struct phys attachment handling
94c02fbb4cec drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
2ff91be66b1e drm/i915: make lockdep slightly happier about execbuf.
4f5772ae4042 drm/i915: Disable userptr pread/pwrite support.
063d779f45a2 drm/i915: No longer allow exporting userptr through dma-buf
6cce6a791e3b drm/i915: Reject more ioctls for userptr, v2.
40c7ada6528a drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
c006fd23ae00 drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
b4e92512f008 drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
-:332: WARNING:LONG_LINE: line length of 121 exceeds 100 columns
#332: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:605:
+static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:333: WARNING:LONG_LINE: line length of 121 exceeds 100 columns
#333: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:606:
+static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:334: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#334: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:607:
+static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }

-:335: WARNING:LONG_LINE: line length of 118 exceeds 100 columns
#335: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:608:
+static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:394: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#394: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:2:
  * SPDX-License-Identifier: MIT

-:398: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#398: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:6:
+ *
+  * Based on amdgpu_mn, which bears the following notice:

-:399: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#399: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:7:
+  * Based on amdgpu_mn, which bears the following notice:
+ *

-:484: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#484: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:63:
+	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);

-:1172: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should be avoided
#1172: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:300:
+	pinned = ret = 0;

-:1187: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#1187: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:315:
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+		!obj->userptr.page_ref ? notifier_seq :

-:1334: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#1334: FILE: drivers/gpu/drm/i915/i915_drv.h:564:
+	spinlock_t notifier_lock;

total: 0 errors, 8 warnings, 3 checks, 1267 lines checked
3c7404bac792 drm/i915: Flatten obj->mm.lock
206f875cafc5 drm/i915: Populate logical context during first pin.
72e9bfd7ceea drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
598e9ba1dfed drm/i915: Handle ww locking in init_status_page
6bf1b9e277f3 drm/i915: Rework clflush to work correctly without obj->mm.lock.
01cd746af3c5 drm/i915: Pass ww ctx to intel_pin_to_display_plane
b2cecc1e4972 drm/i915: Add object locking to vm_fault_cpu
1092d212be31 drm/i915: Move pinning to inside engine_wa_list_verify()
-:72: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#72: FILE: drivers/gpu/drm/i915/gt/intel_workarounds.c:2215:
+	err = i915_vma_pin_ww(vma, &ww, 0, 0,
+			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);

total: 0 errors, 0 warnings, 1 checks, 118 lines checked
95ab8d6067d0 drm/i915: Take reservation lock around i915_vma_pin.
5e0e47dd8939 drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3.
0234dc4ff4ff drm/i915: Make __engine_unpark() compatible with ww locking.
-:10: WARNING:REPEATED_WORD: Possible repeated word: 'many'
#10: 
many many places where rpm is used, I chose the safest option

total: 0 errors, 1 warnings, 0 checks, 16 lines checked
38250623370a drm/i915: Take obj lock around set_domain ioctl
-:85: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#85: FILE: drivers/gpu/drm/i915/gem/i915_gem_domain.c:518:
+		err = i915_gem_object_wait(obj,
+					  I915_WAIT_INTERRUPTIBLE |

total: 0 errors, 0 warnings, 1 checks, 74 lines checked
ab50d5ae3aa1 drm/i915: Defer pin calls in buffer pool until first use by caller.
95dd9a5b76ad drm/i915: Fix pread/pwrite to work with new locking rules.
-:32: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#32: 
deleted file mode 100644

total: 0 errors, 1 warnings, 0 checks, 349 lines checked
b84204b94870 drm/i915: Fix workarounds selftest, part 1
9dd394f55435 drm/i915: Prepare for obj->mm.lock removal, v2.
-:135: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: "Thomas Hellström" <thomas.hellstrom@intel.com>' != 'Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>'

total: 0 errors, 1 warnings, 0 checks, 96 lines checked
594fb9bb3edf drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
aac3d8e69b16 drm/i915: Add ww locking around vm_access()
856988996488 drm/i915: Increase ww locking for perf.
13463c86de73 drm/i915: Lock ww in ucode objects correctly
7bf1238f430a drm/i915: Add ww locking to dma-buf ops.
2f5f9bca3704 drm/i915: Add missing ww lock in intel_dsb_prepare.
60f82fd9a7c9 drm/i915: Fix ww locking in shmem_create_from_object
25ec5fbd2244 drm/i915: Use a single page table lock for each gtt.
-:112: WARNING:UNNECESSARY_ELSE: else is not generally useful after a break or return
#112: FILE: drivers/gpu/drm/i915/gt/intel_gtt.c:85:
+		return i915_gem_object_lock(vm->scratch[0], ww);
+	} else {

total: 0 errors, 1 warnings, 0 checks, 154 lines checked
07742daa31f1 drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
a0edf1dedfcb drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
c518b61f2828 drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
f76bff23db43 drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
1f422bb004d2 drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
a2b35549e350 drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
1ad052fc5083 drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
ad8e58bc1d77 drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
78ac2ae0a6f3 drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
c60227da1d5e drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
ae02f93ac96e drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
d4b60b619e09 drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
a46ab6927106 drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
-:163: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#163: FILE: drivers/gpu/drm/i915/gt/selftest_lrc.c:1303:
+	lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
 				      i915_coherent_map_type(engine->i915));

total: 0 errors, 0 warnings, 1 checks, 130 lines checked
567f1fdb0fd3 drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
3e54a317b6a2 drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
0d29694fa958 drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
3eba970a2562 drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
b0a1c130a3e1 drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
bfc623e7aa5a drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
eccf40ac6dff drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
82341525cb30 drm/i915: Finally remove obj->mm.lock.
b54702917508 drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
6c9553a14119 drm/i915: Move gt_revoke() slightly
6a04f7a4fb3d drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning.
c4bcee10e1ff drm/i915: Fix pin_map in scheduler selftests
-:7: WARNING:COMMIT_MESSAGE: Missing commit description - Add an appropriate one

total: 0 errors, 1 warnings, 0 checks, 8 lines checked
17e1f6f13463 drm/i915: Add ww parameter to get_pages() callback
2a3925c85678 drm/i915: Add ww context to prepare_(read/write)
-:6: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#6: 
This will allow us to explicitly pass the ww to pin_pages, when it starts taking it.

total: 0 errors, 1 warnings, 0 checks, 107 lines checked
01ec61981f44 drm/i915: Pass ww ctx to pin_map
-:456: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should be avoided
#456: FILE: drivers/gpu/drm/i915/i915_perf.c:1668:
+	batch = cs = i915_gem_object_pin_map(bo, &ww, I915_MAP_WB);

total: 0 errors, 0 warnings, 1 checks, 337 lines checked
df3000218c20 drm/i915: Pass ww ctx to i915_gem_object_pin_pages



* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915: Remove obj->mm.lock! (rev16)
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (69 preceding siblings ...)
  2021-03-11 14:27 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev16) Patchwork
@ 2021-03-11 14:28 ` Patchwork
  2021-03-11 14:32 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
                   ` (2 subsequent siblings)
  73 siblings, 0 replies; 82+ messages in thread
From: Patchwork @ 2021-03-11 14:28 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev16)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/i915/gt/intel_ring_submission.c:1196:24: warning: Using plain integer as NULL pointer
+drivers/gpu/drm/i915/intel_wakeref.c:137:19: warning: context imbalance in 'wakeref_auto_timeout' - unexpected unlock
+drivers/gpu/drm/i915/selftests/i915_syncmap.c:80:54: warning: dubious: x | !y



* [Intel-gfx] ✗ Fi.CI.DOCS: warning for drm/i915: Remove obj->mm.lock! (rev16)
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (70 preceding siblings ...)
  2021-03-11 14:28 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-03-11 14:32 ` Patchwork
  2021-03-11 14:59 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  2021-03-16  9:10 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Remove obj->mm.lock! (rev17) Patchwork
  73 siblings, 0 replies; 82+ messages in thread
From: Patchwork @ 2021-03-11 14:32 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev16)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
./drivers/gpu/drm/i915/gem/i915_gem_shrinker.c:102: warning: Function parameter or member 'ww' not described in 'i915_gem_shrink'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1420: warning: Excess function parameter 'trampoline' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1420: warning: Function parameter or member 'jump_whitelist' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1420: warning: Function parameter or member 'shadow_map' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1420: warning: Function parameter or member 'batch_map' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1420: warning: Excess function parameter 'trampoline' description in 'intel_engine_cmd_parser'



* [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915: Remove obj->mm.lock! (rev16)
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (71 preceding siblings ...)
  2021-03-11 14:32 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
@ 2021-03-11 14:59 ` Patchwork
  2021-03-16  9:10 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Remove obj->mm.lock! (rev17) Patchwork
  73 siblings, 0 replies; 82+ messages in thread
From: Patchwork @ 2021-03-11 14:59 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx



== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev16)
URL   : https://patchwork.freedesktop.org/series/82337/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_9849 -> Patchwork_19780
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes introduced with Patchwork_19780 need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_19780, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19780:

### IGT changes ###

#### Possible regressions ####

  * igt@amdgpu/amd_prime@i915-to-amd:
    - fi-kbl-8809g:       [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-8809g/igt@amdgpu/amd_prime@i915-to-amd.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-8809g/igt@amdgpu/amd_prime@i915-to-amd.html

  * igt@prime_vgem@basic-userptr:
    - fi-tgl-u2:          [PASS][3] -> [SKIP][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-tgl-u2/igt@prime_vgem@basic-userptr.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-tgl-u2/igt@prime_vgem@basic-userptr.html
    - fi-cml-s:           [PASS][5] -> [SKIP][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-cml-s/igt@prime_vgem@basic-userptr.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-cml-s/igt@prime_vgem@basic-userptr.html
    - fi-icl-y:           [PASS][7] -> [SKIP][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-icl-y/igt@prime_vgem@basic-userptr.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-icl-y/igt@prime_vgem@basic-userptr.html
    - fi-cml-u2:          [PASS][9] -> [SKIP][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-cml-u2/igt@prime_vgem@basic-userptr.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-cml-u2/igt@prime_vgem@basic-userptr.html

  
#### Warnings ####

  * igt@runner@aborted:
    - fi-kbl-8809g:       [FAIL][11] ([i915#2947]) -> [FAIL][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-8809g/igt@runner@aborted.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-8809g/igt@runner@aborted.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@prime_vgem@basic-userptr:
    - {fi-ehl-2}:         [PASS][13] -> [SKIP][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-ehl-2/igt@prime_vgem@basic-userptr.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-ehl-2/igt@prime_vgem@basic-userptr.html
    - {fi-jsl-1}:         [PASS][15] -> [SKIP][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-jsl-1/igt@prime_vgem@basic-userptr.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-jsl-1/igt@prime_vgem@basic-userptr.html
    - {fi-ehl-1}:         [PASS][17] -> [SKIP][18]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-ehl-1/igt@prime_vgem@basic-userptr.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-ehl-1/igt@prime_vgem@basic-userptr.html
    - {fi-tgl-dsi}:       [PASS][19] -> [SKIP][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-tgl-dsi/igt@prime_vgem@basic-userptr.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-tgl-dsi/igt@prime_vgem@basic-userptr.html
    - {fi-rkl-11500t}:    [PASS][21] -> [SKIP][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-rkl-11500t/igt@prime_vgem@basic-userptr.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-rkl-11500t/igt@prime_vgem@basic-userptr.html

  
Known issues
------------

  Here are the changes found in Patchwork_19780 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_gttfill@basic:
    - fi-bsw-n3050:       NOTRUN -> [SKIP][23] ([fdo#109271])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-bsw-n3050/igt@gem_exec_gttfill@basic.html

  * igt@gem_exec_suspend@basic-s3:
    - fi-bsw-n3050:       NOTRUN -> [INCOMPLETE][24] ([i915#3159])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-bsw-n3050/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_linear_blits@basic:
    - fi-kbl-8809g:       [PASS][25] -> [TIMEOUT][26] ([i915#2502] / [i915#3145])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-8809g/igt@gem_linear_blits@basic.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-8809g/igt@gem_linear_blits@basic.html

  * igt@i915_selftest@live@hangcheck:
    - fi-icl-y:           [PASS][27] -> [INCOMPLETE][28] ([i915#2782] / [i915#926])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-icl-y/igt@i915_selftest@live@hangcheck.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-icl-y/igt@i915_selftest@live@hangcheck.html

  * igt@prime_vgem@basic-userptr:
    - fi-pnv-d510:        [PASS][29] -> [SKIP][30] ([fdo#109271])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-pnv-d510/igt@prime_vgem@basic-userptr.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-pnv-d510/igt@prime_vgem@basic-userptr.html
    - fi-cfl-8700k:       [PASS][31] -> [SKIP][32] ([fdo#109271])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-cfl-8700k/igt@prime_vgem@basic-userptr.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-cfl-8700k/igt@prime_vgem@basic-userptr.html
    - fi-skl-6600u:       [PASS][33] -> [SKIP][34] ([fdo#109271])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-skl-6600u/igt@prime_vgem@basic-userptr.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-skl-6600u/igt@prime_vgem@basic-userptr.html
    - fi-bsw-kefka:       [PASS][35] -> [SKIP][36] ([fdo#109271])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-bsw-kefka/igt@prime_vgem@basic-userptr.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-bsw-kefka/igt@prime_vgem@basic-userptr.html
    - fi-ilk-650:         [PASS][37] -> [SKIP][38] ([fdo#109271])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-ilk-650/igt@prime_vgem@basic-userptr.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-ilk-650/igt@prime_vgem@basic-userptr.html
    - fi-elk-e7500:       [PASS][39] -> [SKIP][40] ([fdo#109271])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-elk-e7500/igt@prime_vgem@basic-userptr.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-elk-e7500/igt@prime_vgem@basic-userptr.html
    - fi-kbl-7567u:       [PASS][41] -> [SKIP][42] ([fdo#109271])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-7567u/igt@prime_vgem@basic-userptr.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-7567u/igt@prime_vgem@basic-userptr.html
    - fi-skl-guc:         [PASS][43] -> [SKIP][44] ([fdo#109271])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-skl-guc/igt@prime_vgem@basic-userptr.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-skl-guc/igt@prime_vgem@basic-userptr.html
    - fi-cfl-guc:         [PASS][45] -> [SKIP][46] ([fdo#109271])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-cfl-guc/igt@prime_vgem@basic-userptr.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-cfl-guc/igt@prime_vgem@basic-userptr.html
    - fi-bxt-dsi:         [PASS][47] -> [SKIP][48] ([fdo#109271])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-bxt-dsi/igt@prime_vgem@basic-userptr.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-bxt-dsi/igt@prime_vgem@basic-userptr.html
    - fi-ivb-3770:        [PASS][49] -> [SKIP][50] ([fdo#109271])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-ivb-3770/igt@prime_vgem@basic-userptr.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-ivb-3770/igt@prime_vgem@basic-userptr.html
    - fi-skl-6700k2:      [PASS][51] -> [SKIP][52] ([fdo#109271])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-skl-6700k2/igt@prime_vgem@basic-userptr.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-skl-6700k2/igt@prime_vgem@basic-userptr.html
    - fi-byt-j1900:       [PASS][53] -> [SKIP][54] ([fdo#109271])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-byt-j1900/igt@prime_vgem@basic-userptr.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-byt-j1900/igt@prime_vgem@basic-userptr.html
    - fi-hsw-4770:        [PASS][55] -> [SKIP][56] ([fdo#109271])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-hsw-4770/igt@prime_vgem@basic-userptr.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-hsw-4770/igt@prime_vgem@basic-userptr.html
    - fi-kbl-7500u:       [PASS][57] -> [SKIP][58] ([fdo#109271])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-7500u/igt@prime_vgem@basic-userptr.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-7500u/igt@prime_vgem@basic-userptr.html
    - fi-kbl-soraka:      [PASS][59] -> [SKIP][60] ([fdo#109271])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-soraka/igt@prime_vgem@basic-userptr.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-soraka/igt@prime_vgem@basic-userptr.html
    - fi-kbl-guc:         [PASS][61] -> [SKIP][62] ([fdo#109271])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-guc/igt@prime_vgem@basic-userptr.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-guc/igt@prime_vgem@basic-userptr.html
    - fi-kbl-8809g:       [PASS][63] -> [SKIP][64] ([fdo#109271])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-8809g/igt@prime_vgem@basic-userptr.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-8809g/igt@prime_vgem@basic-userptr.html
    - fi-bdw-5557u:       [PASS][65] -> [SKIP][66] ([fdo#109271])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-bdw-5557u/igt@prime_vgem@basic-userptr.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-bdw-5557u/igt@prime_vgem@basic-userptr.html
    - fi-kbl-r:           [PASS][67] -> [SKIP][68] ([fdo#109271])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-r/igt@prime_vgem@basic-userptr.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-r/igt@prime_vgem@basic-userptr.html
    - fi-cfl-8109u:       [PASS][69] -> [SKIP][70] ([fdo#109271])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-cfl-8109u/igt@prime_vgem@basic-userptr.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-cfl-8109u/igt@prime_vgem@basic-userptr.html
    - fi-bsw-nick:        [PASS][71] -> [SKIP][72] ([fdo#109271])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-bsw-nick/igt@prime_vgem@basic-userptr.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-bsw-nick/igt@prime_vgem@basic-userptr.html
    - fi-glk-dsi:         [PASS][73] -> [SKIP][74] ([fdo#109271])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-glk-dsi/igt@prime_vgem@basic-userptr.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-glk-dsi/igt@prime_vgem@basic-userptr.html
    - fi-kbl-x1275:       [PASS][75] -> [SKIP][76] ([fdo#109271])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-x1275/igt@prime_vgem@basic-userptr.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-x1275/igt@prime_vgem@basic-userptr.html
    - fi-snb-2520m:       [PASS][77] -> [SKIP][78] ([fdo#109271])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-snb-2520m/igt@prime_vgem@basic-userptr.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-snb-2520m/igt@prime_vgem@basic-userptr.html

  * igt@runner@aborted:
    - fi-icl-y:           NOTRUN -> [FAIL][79] ([i915#2782])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-icl-y/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@gem_tiled_blits@basic:
    - fi-kbl-8809g:       [TIMEOUT][80] ([i915#2502] / [i915#3145]) -> [PASS][81]
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9849/fi-kbl-8809g/igt@gem_tiled_blits@basic.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/fi-kbl-8809g/igt@gem_tiled_blits@basic.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#2502]: https://gitlab.freedesktop.org/drm/intel/issues/2502
  [i915#2782]: https://gitlab.freedesktop.org/drm/intel/issues/2782
  [i915#2947]: https://gitlab.freedesktop.org/drm/intel/issues/2947
  [i915#3145]: https://gitlab.freedesktop.org/drm/intel/issues/3145
  [i915#3159]: https://gitlab.freedesktop.org/drm/intel/issues/3159
  [i915#3180]: https://gitlab.freedesktop.org/drm/intel/issues/3180
  [i915#926]: https://gitlab.freedesktop.org/drm/intel/issues/926


Participating hosts (45 -> 41)
------------------------------

  Additional (1): fi-bsw-n3050 
  Missing    (5): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9849 -> Patchwork_19780

  CI-20190529: 20190529
  CI_DRM_9849: 123ebf0379ca90c2f64bff73ff32c7c2140d2b9c @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6030: e11e4bfb91fec9af71c3909996c66e5666270e07 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19780: df3000218c205a604d53961223b1aa423b7d61b6 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

df3000218c20 drm/i915: Pass ww ctx to i915_gem_object_pin_pages
01ec61981f44 drm/i915: Pass ww ctx to pin_map
2a3925c85678 drm/i915: Add ww context to prepare_(read/write)
17e1f6f13463 drm/i915: Add ww parameter to get_pages() callback
c4bcee10e1ff drm/i915: Fix pin_map in scheduler selftests
6a04f7a4fb3d drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning.
6c9553a14119 drm/i915: Move gt_revoke() slightly
b54702917508 drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
82341525cb30 drm/i915: Finally remove obj->mm.lock.
eccf40ac6dff drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
bfc623e7aa5a drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
b0a1c130a3e1 drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
3eba970a2562 drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
0d29694fa958 drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
3e54a317b6a2 drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
567f1fdb0fd3 drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
a46ab6927106 drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
d4b60b619e09 drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
ae02f93ac96e drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
c60227da1d5e drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
78ac2ae0a6f3 drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
ad8e58bc1d77 drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
1ad052fc5083 drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
a2b35549e350 drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
1f422bb004d2 drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
f76bff23db43 drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
c518b61f2828 drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
a0edf1dedfcb drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
07742daa31f1 drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
25ec5fbd2244 drm/i915: Use a single page table lock for each gtt.
60f82fd9a7c9 drm/i915: Fix ww locking in shmem_create_from_object
2f5f9bca3704 drm/i915: Add missing ww lock in intel_dsb_prepare.
7bf1238f430a drm/i915: Add ww locking to dma-buf ops.
13463c86de73 drm/i915: Lock ww in ucode objects correctly
856988996488 drm/i915: Increase ww locking for perf.
aac3d8e69b16 drm/i915: Add ww locking around vm_access()
594fb9bb3edf drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
9dd394f55435 drm/i915: Prepare for obj->mm.lock removal, v2.
b84204b94870 drm/i915: Fix workarounds selftest, part 1
95dd9a5b76ad drm/i915: Fix pread/pwrite to work with new locking rules.
ab50d5ae3aa1 drm/i915: Defer pin calls in buffer pool until first use by caller.
38250623370a drm/i915: Take obj lock around set_domain ioctl
0234dc4ff4ff drm/i915: Make __engine_unpark() compatible with ww locking.
5e0e47dd8939 drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3.
95ab8d6067d0 drm/i915: Take reservation lock around i915_vma_pin.
1092d212be31 drm/i915: Move pinning to inside engine_wa_list_verify()
b2cecc1e4972 drm/i915: Add object locking to vm_fault_cpu
01cd746af3c5 drm/i915: Pass ww ctx to intel_pin_to_display_plane
6bf1b9e277f3 drm/i915: Rework clflush to work correctly without obj->mm.lock.
598e9ba1dfed drm/i915: Handle ww locking in init_status_page
72e9bfd7ceea drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
206f875cafc5 drm/i915: Populate logical context during first pin.
3c7404bac792 drm/i915: Flatten obj->mm.lock
b4e92512f008 drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
c006fd23ae00 drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
40c7ada6528a drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
6cce6a791e3b drm/i915: Reject more ioctls for userptr, v2.
063d779f45a2 drm/i915: No longer allow exporting userptr through dma-buf
4f5772ae4042 drm/i915: Disable userptr pread/pwrite support.
2ff91be66b1e drm/i915: make lockdep slightly happier about execbuf.
94c02fbb4cec drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
3768448e58fc drm/i915: Rework struct phys attachment handling
98108b44d022 drm/i915: Move HAS_STRUCT_PAGE to obj->flags
513ee4798499 drm/i915: Add gem object locking to madvise.
d3c0d6934682 drm/i915: Ensure we hold the object mutex in pin correctly.
c3e85904db1d drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
05ca8fd3489b drm/i915: Move cmd parser pinning to execbuffer
45179357f4a3 drm/i915: Pin timeline map after first timeline pin, v3.
62d87d68daa6 drm/i915: Do not share hwsp across contexts any more, v7.

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19780/index.html


* Re: [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7 Maarten Lankhorst
@ 2021-03-11 17:24   ` Thomas Hellström (Intel)
  2021-03-15 12:36     ` Maarten Lankhorst
  0 siblings, 1 reply; 82+ messages in thread
From: Thomas Hellström (Intel) @ 2021-03-11 17:24 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx; +Cc: Dave Airlie


Hi, Maarten,

On 3/11/21 2:41 PM, Maarten Lankhorst wrote:
> Instead of doing what we do currently, which will never work with
> PROVE_LOCKING, do the same as AMD does, and something similar to the
> relocation slowpath. When all locks are dropped, we acquire the
> pages for pinning. When the locks are taken, we transfer those
> pages in .get_pages() to the bo. As a final check before installing
> the fences, we ensure that the mmu notifier was not called; if it was,
> we return -EAGAIN to userspace to signal it has to start over.
>
> Changes since v1:
> - Unbinding is done in submit_init only. submit_begin() removed.
> - MMU_NOTFIER -> MMU_NOTIFIER
> Changes since v2:
> - Make i915->mm.notifier a spinlock.
> Changes since v3:
> - Add WARN_ON if there are any page references left, should have been 0.
> - Return 0 on success in submit_init(), bug from spinlock conversion.
> - Release pvec outside of notifier_lock (Thomas).
> Changes since v4:
> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
> Changes since v5:
> - Clarify why check on PF_EXITING is (temporarily) required.
> Changes since v6:
> - Ensure userptr validity is checked in set_domain through a special path.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Acked-by: Dave Airlie <airlied@redhat.com>

Mostly LGTM. Comments / suggestions below.
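
If I read the flow correctly, the lifecycle of a userptr object across an
execbuf submission ends up as follows (a rough sketch using the helpers
introduced by this patch, not the literal code; the quoted hunks below show
where each call actually sits):

        /* eb_lookup_vmas() / eb_reinit_userptr(): called while no ww or
         * object locks are held, acquires the user pages for pinning */
        err = i915_gem_object_userptr_submit_init(obj);

        /* under the ww locks, .get_pages() then adopts the pages
         * acquired above */

        /* eb_move_to_gpu(): final check before installing the fences,
         * done under i915->mm.notifier_lock; fails if the mmu notifier
         * fired in the meantime, so userspace gets -EAGAIN and restarts */
        err = i915_gem_object_userptr_submit_done(obj);

        /* eb_release_vmas(): undoes the pinning from submit_init() */
        i915_gem_object_userptr_submit_fini(obj);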

> ---
>   drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  18 +-
>   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 101 ++-
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    |  38 +-
>   .../gpu/drm/i915/gem/i915_gem_object_types.h  |  10 +-
>   drivers/gpu/drm/i915/gem/i915_gem_pages.c     |   2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 796 ++++++------------
>   drivers/gpu/drm/i915/i915_drv.h               |   9 +-
>   drivers/gpu/drm/i915/i915_gem.c               |   5 +-
>   8 files changed, 395 insertions(+), 584 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
> index 2f4980bf742e..76cb9f5c66aa 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
> @@ -468,14 +468,28 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
>   	if (!obj)
>   		return -ENOENT;
>   
> +	if (i915_gem_object_is_userptr(obj)) {
> +		/*
> +		 * Try to grab userptr pages, iris uses set_domain to check
> +		 * userptr validity
> +		 */
> +		err = i915_gem_object_userptr_validate(obj);
> +		if (!err)
> +			err = i915_gem_object_wait(obj,
> +						   I915_WAIT_INTERRUPTIBLE |
> +						   I915_WAIT_PRIORITY |
> +						   (write_domain ? I915_WAIT_ALL : 0),
> +						   MAX_SCHEDULE_TIMEOUT);
> +		goto out;
> +	}
> +
>   	/*
>   	 * Proxy objects do not control access to the backing storage, ergo
>   	 * they cannot be used as a means to manipulate the cache domain
>   	 * tracking for that backing storage. The proxy object is always
>   	 * considered to be outside of any cache domain.
>   	 */
> -	if (i915_gem_object_is_proxy(obj) &&
> -	    !i915_gem_object_is_userptr(obj)) {
> +	if (i915_gem_object_is_proxy(obj)) {
>   		err = -ENXIO;
>   		goto out;
>   	}
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index c72440c10876..64d0e5fccece 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -53,14 +53,16 @@ enum {
>   /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
>   #define __EXEC_OBJECT_HAS_PIN		BIT(30)
>   #define __EXEC_OBJECT_HAS_FENCE		BIT(29)
> -#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
> -#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
> -#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
> +#define __EXEC_OBJECT_USERPTR_INIT	BIT(28)
> +#define __EXEC_OBJECT_NEEDS_MAP		BIT(27)
> +#define __EXEC_OBJECT_NEEDS_BIAS	BIT(26)
> +#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 26) /* all of the above + */
>   #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
>   
>   #define __EXEC_HAS_RELOC	BIT(31)
>   #define __EXEC_ENGINE_PINNED	BIT(30)
> -#define __EXEC_INTERNAL_FLAGS	(~0u << 30)
> +#define __EXEC_USERPTR_USED	BIT(29)
> +#define __EXEC_INTERNAL_FLAGS	(~0u << 29)
>   #define UPDATE			PIN_OFFSET_FIXED
>   
>   #define BATCH_OFFSET_BIAS (256*1024)
> @@ -864,6 +866,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
>   		}
>   
>   		eb_add_vma(eb, i, batch, vma);
> +
> +		if (i915_gem_object_is_userptr(vma->obj)) {
> +			err = i915_gem_object_userptr_submit_init(vma->obj);
> +			if (err) {
> +				if (i + 1 < eb->buffer_count) {
> +					/*
> +					 * Execbuffer code expects last vma entry to be NULL,
> +					 * since we already initialized this entry,
> +					 * set the next value to NULL or we mess up
> +					 * cleanup handling.
> +					 */
> +					eb->vma[i + 1].vma = NULL;
> +				}
> +
> +				return err;
> +			}
> +
> +			eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
> +			eb->args->flags |= __EXEC_USERPTR_USED;
> +		}
>   	}
>   
>   	if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
> @@ -965,7 +987,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
>   	}
>   }
>   
> -static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
> +static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
>   {
>   	const unsigned int count = eb->buffer_count;
>   	unsigned int i;
> @@ -979,6 +1001,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>   
>   		eb_unreserve_vma(ev);
>   
> +		if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
> +			ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
> +			i915_gem_object_userptr_submit_fini(vma->obj);
> +		}
> +
>   		if (final)
>   			i915_vma_put(vma);
>   	}
> @@ -1909,6 +1936,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
>   	return 0;
>   }
>   
> +static int eb_reinit_userptr(struct i915_execbuffer *eb)
> +{
> +	const unsigned int count = eb->buffer_count;
> +	unsigned int i;
> +	int ret;
> +
> +	if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
> +		return 0;
> +
> +	for (i = 0; i < count; i++) {
> +		struct eb_vma *ev = &eb->vma[i];
> +
> +		if (!i915_gem_object_is_userptr(ev->vma->obj))
> +			continue;
> +
> +		ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
> +		if (ret)
> +			return ret;
> +
> +		ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
> +	}
> +
> +	return 0;
> +}
> +
>   static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>   					   struct i915_request *rq)
>   {
> @@ -1923,7 +1975,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>   	}
>   
>   	/* We may process another execbuffer during the unlock... */
> -	eb_release_vmas(eb, false);
> +	eb_release_vmas(eb, false, true);
>   	i915_gem_ww_ctx_fini(&eb->ww);
>   
>   	if (rq) {
> @@ -1964,10 +2016,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>   		err = 0;
>   	}
>   
> -#ifdef CONFIG_MMU_NOTIFIER
>   	if (!err)
> -		flush_workqueue(eb->i915->mm.userptr_wq);
> -#endif
> +		err = eb_reinit_userptr(eb);
>   
>   err_relock:
>   	i915_gem_ww_ctx_init(&eb->ww, true);
> @@ -2029,7 +2079,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>   
>   err:
>   	if (err == -EDEADLK) {
> -		eb_release_vmas(eb, false);
> +		eb_release_vmas(eb, false, false);
>   		err = i915_gem_ww_ctx_backoff(&eb->ww);
>   		if (!err)
>   			goto repeat_validate;
> @@ -2126,7 +2176,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
>   
>   err:
>   	if (err == -EDEADLK) {
> -		eb_release_vmas(eb, false);
> +		eb_release_vmas(eb, false, false);
>   		err = i915_gem_ww_ctx_backoff(&eb->ww);
>   		if (!err)
>   			goto retry;
> @@ -2201,6 +2251,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
>   						      flags | __EXEC_OBJECT_NO_RESERVE);
>   	}
>   
> +#ifdef CONFIG_MMU_NOTIFIER
> +	if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
> +		spin_lock(&eb->i915->mm.notifier_lock);
> +
> +		/*
> +		 * count is always at least 1, otherwise __EXEC_USERPTR_USED
> +		 * could not have been set
> +		 */
> +		for (i = 0; i < count; i++) {
> +			struct eb_vma *ev = &eb->vma[i];
> +			struct drm_i915_gem_object *obj = ev->vma->obj;
> +
> +			if (!i915_gem_object_is_userptr(obj))
> +				continue;
> +
> +			err = i915_gem_object_userptr_submit_done(obj);
> +			if (err)
> +				break;
> +		}
> +
> +		spin_unlock(&eb->i915->mm.notifier_lock);
> +	}
> +#endif
> +
>   	if (unlikely(err))
>   		goto err_skip;
>   
> @@ -3345,7 +3419,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>   
>   	err = eb_lookup_vmas(&eb);
>   	if (err) {
> -		eb_release_vmas(&eb, true);
> +		eb_release_vmas(&eb, true, true);
>   		goto err_engine;
>   	}
>   
> @@ -3417,6 +3491,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>   
>   	trace_i915_request_queue(eb.request, eb.batch_flags);
>   	err = eb_submit(&eb, batch);
> +
>   err_request:
>   	i915_request_get(eb.request);
>   	err = eb_request_add(&eb, err);
> @@ -3437,7 +3512,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>   	i915_request_put(eb.request);
>   
>   err_vma:
> -	eb_release_vmas(&eb, true);
> +	eb_release_vmas(&eb, true, true);
>   	if (eb.trampoline)
>   		i915_vma_unpin(eb.trampoline);
>   	WARN_ON(err == -EDEADLK);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index 69509dbd01c7..b5af9c440ac5 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -59,6 +59,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
>   				       const void *data, resource_size_t size);
>   
>   extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
> +
>   void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
>   				     struct sg_table *pages,
>   				     bool needs_clflush);
> @@ -278,12 +279,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
>   	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
>   }
>   
> -static inline bool
> -i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
> -{
> -	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
> -}
> -
>   static inline bool
>   i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
>   {
> @@ -573,16 +568,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>   void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
>   					      enum fb_op_origin origin);
>   
> -static inline bool
> -i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
> -{
> -#ifdef CONFIG_MMU_NOTIFIER
> -	return obj->userptr.mm;
> -#else
> -	return false;
> -#endif
> -}
> -
>   static inline void
>   i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>   				  enum fb_op_origin origin)
> @@ -603,4 +588,25 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
>   
>   bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
>   
> +#ifdef CONFIG_MMU_NOTIFIER
> +static inline bool
> +i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
> +{
> +	return obj->userptr.notifier.mm;
> +}
> +
> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
> +int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj);
> +#else
> +static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
> +
> +static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
> +static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
> +static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
> +static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
> +
> +#endif
> +
>   #endif
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> index 414322619781..4c0a34231623 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> @@ -7,6 +7,8 @@
>   #ifndef __I915_GEM_OBJECT_TYPES_H__
>   #define __I915_GEM_OBJECT_TYPES_H__
>   
> +#include <linux/mmu_notifier.h>
> +
>   #include <drm/drm_gem.h>
>   #include <uapi/drm/i915_drm.h>
>   
> @@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops {
>   #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
>   #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
>   #define I915_GEM_OBJECT_NO_MMAP		BIT(4)
> -#define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(5)
>   
>   	/* Interface between the GEM object and its backing storage.
>   	 * get_pages() is called once prior to the use of the associated set
> @@ -299,10 +300,11 @@ struct drm_i915_gem_object {
>   #ifdef CONFIG_MMU_NOTIFIER
>   		struct i915_gem_userptr {
>   			uintptr_t ptr;
> +			unsigned long notifier_seq;
>   
> -			struct i915_mm_struct *mm;
> -			struct i915_mmu_object *mmu_object;
> -			struct work_struct *work;
> +			struct mmu_interval_notifier notifier;
> +			struct page **pvec;
> +			int page_ref;
>   		} userptr;
>   #endif
>   
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
> index bf61b88a2113..e7d7650072c5 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
> @@ -226,7 +226,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
>   	 * get_pages backends we should be better able to handle the
>   	 * cancellation of the async task in a more uniform manner.
>   	 */
> -	if (!pages && !i915_gem_object_needs_async_cancel(obj))
> +	if (!pages)
>   		pages = ERR_PTR(-EINVAL);
>   
>   	if (!IS_ERR(pages))
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index b466ab2def4d..1e42fbc68697 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -2,10 +2,39 @@
>    * SPDX-License-Identifier: MIT
>    *
>    * Copyright © 2012-2014 Intel Corporation
> + *
> + * Based on amdgpu_mn, which bears the following notice:
> + *
> + * Copyright 2014 Advanced Micro Devices, Inc.
> + * All Rights Reserved.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + */
> +/*
> + * Authors:
> + *    Christian König <christian.koenig@amd.com>
>    */
>   
>   #include <linux/mmu_context.h>
> -#include <linux/mmu_notifier.h>
>   #include <linux/mempolicy.h>
>   #include <linux/swap.h>
>   #include <linux/sched/mm.h>
> @@ -15,373 +44,121 @@
>   #include "i915_gem_object.h"
>   #include "i915_scatterlist.h"
>   
> -#if defined(CONFIG_MMU_NOTIFIER)
> -
> -struct i915_mm_struct {
> -	struct mm_struct *mm;
> -	struct drm_i915_private *i915;
> -	struct i915_mmu_notifier *mn;
> -	struct hlist_node node;
> -	struct kref kref;
> -	struct rcu_work work;
> -};
> -
> -#include <linux/interval_tree.h>
> -
> -struct i915_mmu_notifier {
> -	spinlock_t lock;
> -	struct hlist_node node;
> -	struct mmu_notifier mn;
> -	struct rb_root_cached objects;
> -	struct i915_mm_struct *mm;
> -};
> -
> -struct i915_mmu_object {
> -	struct i915_mmu_notifier *mn;
> -	struct drm_i915_gem_object *obj;
> -	struct interval_tree_node it;
> -};
> +#ifdef CONFIG_MMU_NOTIFIER
>   
> -static void add_object(struct i915_mmu_object *mo)
> +/**
> + * i915_gem_userptr_invalidate - callback to notify about mm change
> + *
> + * @mni: the range (mm) is about to update
> + * @range: details on the invalidation
> + * @cur_seq: Value to pass to mmu_interval_set_seq()
> + *
> + * Block for operations on BOs to finish and mark pages as accessed and
> + * potentially dirty.
> + */
> +static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> +					const struct mmu_notifier_range *range,
> +					unsigned long cur_seq)
>   {
> -	GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb));
> -	interval_tree_insert(&mo->it, &mo->mn->objects);
> -}
> +	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	long r;
>   
> -static void del_object(struct i915_mmu_object *mo)
> -{
> -	if (RB_EMPTY_NODE(&mo->it.rb))
> -		return;
> +	if (!mmu_notifier_range_blockable(range))
> +		return false;
>   
> -	interval_tree_remove(&mo->it, &mo->mn->objects);
> -	RB_CLEAR_NODE(&mo->it.rb);
> -}
> +	spin_lock(&i915->mm.notifier_lock);
>   
> -static void
> -__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
> -{
> -	struct i915_mmu_object *mo = obj->userptr.mmu_object;
> +	mmu_interval_set_seq(mni, cur_seq);
> +
> +	spin_unlock(&i915->mm.notifier_lock);
>   
>   	/*
> -	 * During mm_invalidate_range we need to cancel any userptr that
> -	 * overlaps the range being invalidated. Doing so requires the
> -	 * struct_mutex, and that risks recursion. In order to cause
> -	 * recursion, the user must alias the userptr address space with
> -	 * a GTT mmapping (possible with a MAP_FIXED) - then when we have
> -	 * to invalidate that mmaping, mm_invalidate_range is called with
> -	 * the userptr address *and* the struct_mutex held.  To prevent that
> -	 * we set a flag under the i915_mmu_notifier spinlock to indicate
> -	 * whether this object is valid.
> +	 * We don't wait when the process is exiting. This is valid
> +	 * because the object will be cleaned up anyway.
> +	 *
> +	 * This is also temporarily required as a hack, because we
> +	 * cannot currently force non-consistent batch buffers to preempt
> +	 * and reschedule by waiting on it, hanging processes on exit.
>   	 */
> -	if (!mo)
> -		return;
> -
> -	spin_lock(&mo->mn->lock);
> -	if (value)
> -		add_object(mo);
> -	else
> -		del_object(mo);
> -	spin_unlock(&mo->mn->lock);
> -}
> -
> -static int
> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> -				  const struct mmu_notifier_range *range)
> -{
> -	struct i915_mmu_notifier *mn =
> -		container_of(_mn, struct i915_mmu_notifier, mn);
> -	struct interval_tree_node *it;
> -	unsigned long end;
> -	int ret = 0;
> -
> -	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
> -		return 0;
> -
> -	/* interval ranges are inclusive, but invalidate range is exclusive */
> -	end = range->end - 1;
> -
> -	spin_lock(&mn->lock);
> -	it = interval_tree_iter_first(&mn->objects, range->start, end);
> -	while (it) {
> -		struct drm_i915_gem_object *obj;
> -
> -		if (!mmu_notifier_range_blockable(range)) {
> -			ret = -EAGAIN;
> -			break;
> -		}
> -
> -		/*
> -		 * The mmu_object is released late when destroying the
> -		 * GEM object so it is entirely possible to gain a
> -		 * reference on an object in the process of being freed
> -		 * since our serialisation is via the spinlock and not
> -		 * the struct_mutex - and consequently use it after it
> -		 * is freed and then double free it. To prevent that
> -		 * use-after-free we only acquire a reference on the
> -		 * object if it is not in the process of being destroyed.
> -		 */
> -		obj = container_of(it, struct i915_mmu_object, it)->obj;
> -		if (!kref_get_unless_zero(&obj->base.refcount)) {
> -			it = interval_tree_iter_next(it, range->start, end);
> -			continue;
> -		}
> -		spin_unlock(&mn->lock);
> -
> -		ret = i915_gem_object_unbind(obj,
> -					     I915_GEM_OBJECT_UNBIND_ACTIVE |
> -					     I915_GEM_OBJECT_UNBIND_BARRIER);
> -		if (ret == 0)
> -			ret = __i915_gem_object_put_pages(obj);
> -		i915_gem_object_put(obj);
> -		if (ret)
> -			return ret;
> +	if (current->flags & PF_EXITING)
> +		return true;
>   
> -		spin_lock(&mn->lock);
> -
> -		/*
> -		 * As we do not (yet) protect the mmu from concurrent insertion
> -		 * over this range, there is no guarantee that this search will
> -		 * terminate given a pathologic workload.
> -		 */
> -		it = interval_tree_iter_first(&mn->objects, range->start, end);
> -	}
> -	spin_unlock(&mn->lock);
> -
> -	return ret;
> +	/* we will unbind on next submission, still have userptr pins */
> +	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> +				      MAX_SCHEDULE_TIMEOUT);
> +	if (r <= 0)
> +		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);

I think that since Linux 5.9, where fork() no longer sets up COW on
pinned pages, and we do in fact still pin pages, this fence wait should
be removed, together with the PF_EXITING special case: it does not
improve on anything, but creates hangs that only hangcheck / watchdog
can resolve. If future work stops pinning the pages, which is the
direction we're moving towards, let's re-add the wait when needed.
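
With both gone, the invalidate callback would reduce to something like
this (untested sketch):

static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
					const struct mmu_notifier_range *range,
					unsigned long cur_seq)
{
	struct drm_i915_gem_object *obj =
		container_of(mni, struct drm_i915_gem_object, userptr.notifier);
	struct drm_i915_private *i915 = to_i915(obj->base.dev);

	if (!mmu_notifier_range_blockable(range))
		return false;

	spin_lock(&i915->mm.notifier_lock);
	mmu_interval_set_seq(mni, cur_seq);
	spin_unlock(&i915->mm.notifier_lock);

	/*
	 * The pages stay pinned, so bumping the seq is enough to make
	 * submit_init() re-acquire them on the next execbuf; no
	 * dma_resv wait, no PF_EXITING special case.
	 */
	return true;
}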

>   
> +	return true;
>   }
>   
> -static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
> -	.invalidate_range_start = userptr_mn_invalidate_range_start,
> +static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = {
> +	.invalidate = i915_gem_userptr_invalidate,
>   };
>   
> -static struct i915_mmu_notifier *
> -i915_mmu_notifier_create(struct i915_mm_struct *mm)
> -{
> -	struct i915_mmu_notifier *mn;
> -
> -	mn = kmalloc(sizeof(*mn), GFP_KERNEL);
> -	if (mn == NULL)
> -		return ERR_PTR(-ENOMEM);
> -
> -	spin_lock_init(&mn->lock);
> -	mn->mn.ops = &i915_gem_userptr_notifier;
> -	mn->objects = RB_ROOT_CACHED;
> -	mn->mm = mm;
> -
> -	return mn;
> -}
> -
> -static void
> -i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
> -{
> -	struct i915_mmu_object *mo;
> -
> -	mo = fetch_and_zero(&obj->userptr.mmu_object);
> -	if (!mo)
> -		return;
> -
> -	spin_lock(&mo->mn->lock);
> -	del_object(mo);
> -	spin_unlock(&mo->mn->lock);
> -	kfree(mo);
> -}
> -
> -static struct i915_mmu_notifier *
> -i915_mmu_notifier_find(struct i915_mm_struct *mm)
> -{
> -	struct i915_mmu_notifier *mn, *old;
> -	int err;
> -
> -	mn = READ_ONCE(mm->mn);
> -	if (likely(mn))
> -		return mn;
> -
> -	mn = i915_mmu_notifier_create(mm);
> -	if (IS_ERR(mn))
> -		return mn;
> -
> -	err = mmu_notifier_register(&mn->mn, mm->mm);
> -	if (err) {
> -		kfree(mn);
> -		return ERR_PTR(err);
> -	}
> -
> -	old = cmpxchg(&mm->mn, NULL, mn);
> -	if (old) {
> -		mmu_notifier_unregister(&mn->mn, mm->mm);
> -		kfree(mn);
> -		mn = old;
> -	}
> -
> -	return mn;
> -}
> -
>   static int
>   i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
>   {
> -	struct i915_mmu_notifier *mn;
> -	struct i915_mmu_object *mo;
> -
> -	if (GEM_WARN_ON(!obj->userptr.mm))
> -		return -EINVAL;
> -
> -	mn = i915_mmu_notifier_find(obj->userptr.mm);
> -	if (IS_ERR(mn))
> -		return PTR_ERR(mn);
> -
> -	mo = kzalloc(sizeof(*mo), GFP_KERNEL);
> -	if (!mo)
> -		return -ENOMEM;
> -
> -	mo->mn = mn;
> -	mo->obj = obj;
> -	mo->it.start = obj->userptr.ptr;
> -	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
> -	RB_CLEAR_NODE(&mo->it.rb);
> -
> -	obj->userptr.mmu_object = mo;
> -	return 0;
> -}
> -
> -static void
> -i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
> -		       struct mm_struct *mm)
> -{
> -	if (mn == NULL)
> -		return;
> -
> -	mmu_notifier_unregister(&mn->mn, mm);
> -	kfree(mn);
> -}
> -
> -
> -static struct i915_mm_struct *
> -__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
> -{
> -	struct i915_mm_struct *it, *mm = NULL;
> -
> -	rcu_read_lock();
> -	hash_for_each_possible_rcu(i915->mm_structs,
> -				   it, node,
> -				   (unsigned long)real)
> -		if (it->mm == real && kref_get_unless_zero(&it->kref)) {
> -			mm = it;
> -			break;
> -		}
> -	rcu_read_unlock();
> -
> -	return mm;
> +	return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm,
> +					    obj->userptr.ptr, obj->base.size,
> +					    &i915_gem_userptr_notifier_ops);
>   }
>   
> -static int
> -i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
> +static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
>   {
>   	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> -	struct i915_mm_struct *mm, *new;
> -	int ret = 0;
> -
> -	/* During release of the GEM object we hold the struct_mutex. This
> -	 * precludes us from calling mmput() at that time as that may be
> -	 * the last reference and so call exit_mmap(). exit_mmap() will
> -	 * attempt to reap the vma, and if we were holding a GTT mmap
> -	 * would then call drm_gem_vm_close() and attempt to reacquire
> -	 * the struct mutex. So in order to avoid that recursion, we have
> -	 * to defer releasing the mm reference until after we drop the
> -	 * struct_mutex, i.e. we need to schedule a worker to do the clean
> -	 * up.
> -	 */
> -	mm = __i915_mm_struct_find(i915, current->mm);
> -	if (mm)
> -		goto out;
> +	struct page **pvec = NULL;
>   
> -	new = kmalloc(sizeof(*mm), GFP_KERNEL);
> -	if (!new)
> -		return -ENOMEM;
> -
> -	kref_init(&new->kref);
> -	new->i915 = to_i915(obj->base.dev);
> -	new->mm = current->mm;
> -	new->mn = NULL;
> -
> -	spin_lock(&i915->mm_lock);
> -	mm = __i915_mm_struct_find(i915, current->mm);
> -	if (!mm) {
> -		hash_add_rcu(i915->mm_structs,
> -			     &new->node,
> -			     (unsigned long)new->mm);
> -		mmgrab(current->mm);
> -		mm = new;
> +	spin_lock(&i915->mm.notifier_lock);
> +	if (!--obj->userptr.page_ref) {
> +		pvec = obj->userptr.pvec;
> +		obj->userptr.pvec = NULL;
>   	}
> -	spin_unlock(&i915->mm_lock);
> -	if (mm != new)
> -		kfree(new);
> +	GEM_BUG_ON(obj->userptr.page_ref < 0);
> +	spin_unlock(&i915->mm.notifier_lock);
>   
> -out:
> -	obj->userptr.mm = mm;
> -	return ret;
> -}
> -
> -static void
> -__i915_mm_struct_free__worker(struct work_struct *work)
> -{
> -	struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work);
> -
> -	i915_mmu_notifier_free(mm->mn, mm->mm);
> -	mmdrop(mm->mm);
> -	kfree(mm);
> -}
> -
> -static void
> -__i915_mm_struct_free(struct kref *kref)
> -{
> -	struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
> -
> -	spin_lock(&mm->i915->mm_lock);
> -	hash_del_rcu(&mm->node);
> -	spin_unlock(&mm->i915->mm_lock);
> -
> -	INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker);
> -	queue_rcu_work(system_wq, &mm->work);
> -}
> -
> -static void
> -i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
> -{
> -	if (obj->userptr.mm == NULL)
> -		return;
> +	if (pvec) {
> +		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>   
> -	kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free);
> -	obj->userptr.mm = NULL;
> +		unpin_user_pages(pvec, num_pages);
> +		kfree(pvec);

IIRC, CQ spotted that we should have a kvfree() here, right?
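
The pvec is allocated with kvmalloc_array() in submit_init(), so the
matching release would presumably be (untested):

	if (pvec) {
		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;

		unpin_user_pages(pvec, num_pages);
		kvfree(pvec); /* pairs with kvmalloc_array() */
	}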

> +	}
>   }
>   
> -struct get_pages_work {
> -	struct work_struct work;
> -	struct drm_i915_gem_object *obj;
> -	struct task_struct *task;
> -};
> -
> -static struct sg_table *
> -__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
> -			       struct page **pvec, unsigned long num_pages)
> +static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
>   {
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>   	unsigned int max_segment = i915_sg_segment_size();
>   	struct sg_table *st;
>   	unsigned int sg_page_sizes;
>   	struct scatterlist *sg;
> +	struct page **pvec;
>   	int ret;
>   
>   	st = kmalloc(sizeof(*st), GFP_KERNEL);
>   	if (!st)
> -		return ERR_PTR(-ENOMEM);
> +		return -ENOMEM;
> +
> +	spin_lock(&i915->mm.notifier_lock);
> +	if (GEM_WARN_ON(!obj->userptr.page_ref)) {
> +		spin_unlock(&i915->mm.notifier_lock);
> +		ret = -EFAULT;
> +		goto err_free;
> +	}
> +
> +	obj->userptr.page_ref++;
> +	pvec = obj->userptr.pvec;
> +	spin_unlock(&i915->mm.notifier_lock);
>   
>   alloc_table:
>   	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
>   					 num_pages << PAGE_SHIFT, max_segment,
>   					 NULL, 0, GFP_KERNEL);
>   	if (IS_ERR(sg)) {
> -		kfree(st);
> -		return ERR_CAST(sg);
> +		ret = PTR_ERR(sg);
> +		goto err;
>   	}
>   
>   	ret = i915_gem_gtt_prepare_pages(obj, st);
> @@ -393,203 +170,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>   			goto alloc_table;
>   		}
>   
> -		kfree(st);
> -		return ERR_PTR(ret);
> +		goto err;
>   	}
>   
>   	sg_page_sizes = i915_sg_page_sizes(st->sgl);
>   
>   	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
>   
> -	return st;
> -}
> -
> -static void
> -__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
> -{
> -	struct get_pages_work *work = container_of(_work, typeof(*work), work);
> -	struct drm_i915_gem_object *obj = work->obj;
> -	const unsigned long npages = obj->base.size >> PAGE_SHIFT;
> -	unsigned long pinned;
> -	struct page **pvec;
> -	int ret;
> -
> -	ret = -ENOMEM;
> -	pinned = 0;
> -
> -	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> -	if (pvec != NULL) {
> -		struct mm_struct *mm = obj->userptr.mm->mm;
> -		unsigned int flags = 0;
> -		int locked = 0;
> -
> -		if (!i915_gem_object_is_readonly(obj))
> -			flags |= FOLL_WRITE;
> -
> -		ret = -EFAULT;
> -		if (mmget_not_zero(mm)) {
> -			while (pinned < npages) {
> -				if (!locked) {
> -					mmap_read_lock(mm);
> -					locked = 1;
> -				}
> -				ret = pin_user_pages_remote
> -					(mm,
> -					 obj->userptr.ptr + pinned * PAGE_SIZE,
> -					 npages - pinned,
> -					 flags,
> -					 pvec + pinned, NULL, &locked);
> -				if (ret < 0)
> -					break;
> -
> -				pinned += ret;
> -			}
> -			if (locked)
> -				mmap_read_unlock(mm);
> -			mmput(mm);
> -		}
> -	}
> -
> -	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
> -	if (obj->userptr.work == &work->work) {
> -		struct sg_table *pages = ERR_PTR(ret);
> -
> -		if (pinned == npages) {
> -			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
> -							       npages);
> -			if (!IS_ERR(pages)) {
> -				pinned = 0;
> -				pages = NULL;
> -			}
> -		}
> -
> -		obj->userptr.work = ERR_CAST(pages);
> -		if (IS_ERR(pages))
> -			__i915_gem_userptr_set_active(obj, false);
> -	}
> -	mutex_unlock(&obj->mm.lock);
> -
> -	unpin_user_pages(pvec, pinned);
> -	kvfree(pvec);
> -
> -	i915_gem_object_put(obj);
> -	put_task_struct(work->task);
> -	kfree(work);
> -}
> -
> -static struct sg_table *
> -__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
> -{
> -	struct get_pages_work *work;
> -
> -	/* Spawn a worker so that we can acquire the
> -	 * user pages without holding our mutex. Access
> -	 * to the user pages requires mmap_lock, and we have
> -	 * a strict lock ordering of mmap_lock, struct_mutex -
> -	 * we already hold struct_mutex here and so cannot
> -	 * call gup without encountering a lock inversion.
> -	 *
> -	 * Userspace will keep on repeating the operation
> -	 * (thanks to EAGAIN) until either we hit the fast
> -	 * path or the worker completes. If the worker is
> -	 * cancelled or superseded, the task is still run
> -	 * but the results ignored. (This leads to
> -	 * complications that we may have a stray object
> -	 * refcount that we need to be wary of when
> -	 * checking for existing objects during creation.)
> -	 * If the worker encounters an error, it reports
> -	 * that error back to this function through
> -	 * obj->userptr.work = ERR_PTR.
> -	 */
> -	work = kmalloc(sizeof(*work), GFP_KERNEL);
> -	if (work == NULL)
> -		return ERR_PTR(-ENOMEM);
> -
> -	obj->userptr.work = &work->work;
> -
> -	work->obj = i915_gem_object_get(obj);
> -
> -	work->task = current;
> -	get_task_struct(work->task);
> -
> -	INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
> -	queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
> -
> -	return ERR_PTR(-EAGAIN);
> -}
> -
> -static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
> -{
> -	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
> -	struct mm_struct *mm = obj->userptr.mm->mm;
> -	struct page **pvec;
> -	struct sg_table *pages;
> -	bool active;
> -	int pinned;
> -	unsigned int gup_flags = 0;
> -
> -	/* If userspace should engineer that these pages are replaced in
> -	 * the vma between us binding this page into the GTT and completion
> -	 * of rendering... Their loss. If they change the mapping of their
> -	 * pages they need to create a new bo to point to the new vma.
> -	 *
> -	 * However, that still leaves open the possibility of the vma
> -	 * being copied upon fork. Which falls under the same userspace
> -	 * synchronisation issue as a regular bo, except that this time
> -	 * the process may not be expecting that a particular piece of
> -	 * memory is tied to the GPU.
> -	 *
> -	 * Fortunately, we can hook into the mmu_notifier in order to
> -	 * discard the page references prior to anything nasty happening
> -	 * to the vma (discard or cloning) which should prevent the more
> -	 * egregious cases from causing harm.
> -	 */
> -
> -	if (obj->userptr.work) {
> -		/* active flag should still be held for the pending work */
> -		if (IS_ERR(obj->userptr.work))
> -			return PTR_ERR(obj->userptr.work);
> -		else
> -			return -EAGAIN;
> -	}
> -
> -	pvec = NULL;
> -	pinned = 0;
> -
> -	if (mm == current->mm) {
> -		pvec = kvmalloc_array(num_pages, sizeof(struct page *),
> -				      GFP_KERNEL |
> -				      __GFP_NORETRY |
> -				      __GFP_NOWARN);
> -		if (pvec) {
> -			/* defer to worker if malloc fails */
> -			if (!i915_gem_object_is_readonly(obj))
> -				gup_flags |= FOLL_WRITE;
> -			pinned = pin_user_pages_fast_only(obj->userptr.ptr,
> -							  num_pages, gup_flags,
> -							  pvec);
> -		}
> -	}
> -
> -	active = false;
> -	if (pinned < 0) {
> -		pages = ERR_PTR(pinned);
> -		pinned = 0;
> -	} else if (pinned < num_pages) {
> -		pages = __i915_gem_userptr_get_pages_schedule(obj);
> -		active = pages == ERR_PTR(-EAGAIN);
> -	} else {
> -		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
> -		active = !IS_ERR(pages);
> -	}
> -	if (active)
> -		__i915_gem_userptr_set_active(obj, true);
> -
> -	if (IS_ERR(pages))
> -		unpin_user_pages(pvec, pinned);
> -	kvfree(pvec);
> +	return 0;
>   
> -	return PTR_ERR_OR_ZERO(pages);
> +err:
> +	i915_gem_object_userptr_drop_ref(obj);
> +err_free:
> +	kfree(st);
> +	return ret;
>   }
>   
>   static void
> @@ -599,9 +193,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>   	struct sgt_iter sgt_iter;
>   	struct page *page;
>   
> -	/* Cancel any inflight work and force them to restart their gup */
> -	obj->userptr.work = NULL;
> -	__i915_gem_userptr_set_active(obj, false);
>   	if (!pages)
>   		return;
>   
> @@ -641,19 +232,161 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>   		}
>   
>   		mark_page_accessed(page);
> -		unpin_user_page(page);
>   	}
>   	obj->mm.dirty = false;
>   
>   	sg_free_table(pages);
>   	kfree(pages);
> +
> +	i915_gem_object_userptr_drop_ref(obj);
> +}
> +
> +static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
> +{
> +	struct sg_table *pages;
> +	int err;
> +
> +	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
> +	if (err)
> +		return err;
> +
> +	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
> +		return -EBUSY;
> +
> +	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
> +
> +	pages = __i915_gem_object_unset_pages(obj);
> +	if (!IS_ERR_OR_NULL(pages))
> +		i915_gem_userptr_put_pages(obj, pages);
> +
> +	if (get_pages)
> +		err = ____i915_gem_object_get_pages(obj);
> +	mutex_unlock(&obj->mm.lock);
> +
> +	return err;
> +}
> +
> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
> +{
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
> +	struct page **pvec;
> +	unsigned int gup_flags = 0;
> +	unsigned long notifier_seq;
> +	int pinned, ret;
> +
> +	if (obj->userptr.notifier.mm != current->mm)
> +		return -EFAULT;
> +
> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
> +	if (ret)
> +		return ret;
> +
> +	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
> +	ret = i915_gem_object_userptr_unbind(obj, false);
> +	i915_gem_object_unlock(obj);
> +	if (ret)
> +		return ret;
> +
> +	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
> +
> +	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
> +	if (!pvec)
> +		return -ENOMEM;
> +
> +	if (!i915_gem_object_is_readonly(obj))
> +		gup_flags |= FOLL_WRITE;
> +
> +	pinned = ret = 0;
> +	while (pinned < num_pages) {
> +		ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
> +					  num_pages - pinned, gup_flags,
> +					  &pvec[pinned]);
> +		if (ret < 0)
> +			goto out;
> +
> +		pinned += ret;
> +	}
> +	ret = 0;
> +
> +	spin_lock(&i915->mm.notifier_lock);
I think we can improve a lot on the locking here by having the object
lock protect the object state, and only taking the driver-wide notifier
lock in execbuf / userptr_invalidate. If, in addition, we use an rwlock
as the notifier lock, taken in read mode in execbuf, any potential
global lock contention can be practically eliminated. But that's
perhaps for a future improvement.
> +
> +	if (mmu_interval_read_retry(&obj->userptr.notifier,
> +		!obj->userptr.page_ref ? notifier_seq :
> +		obj->userptr.notifier_seq)) {
> +		ret = -EAGAIN;
> +		goto out_unlock;
> +	}
> +
> +	if (!obj->userptr.page_ref++) {
> +		obj->userptr.pvec = pvec;
> +		obj->userptr.notifier_seq = notifier_seq;
> +
> +		pvec = NULL;
> +	}

In addition, if we could call get_pages() here to take the page_ref, we
could eliminate the extra page_ref and the use of
_userptr_submit_fini() altogether. That would of course require the
object lock, but we'd already be holding that for the object state as
above.
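
Something like this untested sketch, still under the object lock, with
i915_gem_object_pin_pages() standing in for the get_pages() call:

	if (!obj->userptr.page_ref) {
		obj->userptr.pvec = pvec;
		obj->userptr.notifier_seq = notifier_seq;
		pvec = NULL;
	}

	/* take the submit's reference as a regular pages pin, so the
	 * manual page_ref++ and submit_fini() are no longer needed */
	ret = i915_gem_object_pin_pages(obj);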

> +
> +out_unlock:
> +	spin_unlock(&i915->mm.notifier_lock);
> +
> +out:
> +	if (pvec) {
> +		unpin_user_pages(pvec, pinned);
> +		kvfree(pvec);
> +	}
> +
> +	return ret;
> +}
> +
> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
> +{
> +	if (mmu_interval_read_retry(&obj->userptr.notifier,
> +				    obj->userptr.notifier_seq)) {
> +		/* We collided with the mmu notifier, need to retry */
> +
> +		return -EAGAIN;
> +	}
> +
> +	return 0;
> +}
> +
> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj)
> +{
> +	i915_gem_object_userptr_drop_ref(obj);
> +}
> +
> +int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj)
> +{
> +	int err;
> +
> +	err = i915_gem_object_userptr_submit_init(obj);
> +	if (err)
> +		return err;
> +
> +	err = i915_gem_object_lock_interruptible(obj, NULL);
> +	if (!err) {
> +		/*
> +		 * Since we only check validity, not use the pages,
> +		 * it doesn't matter if we collide with the mmu notifier,
> +		 * and -EAGAIN handling is not required.
> +		 */
> +		err = i915_gem_object_pin_pages(obj);
> +		if (!err)
> +			i915_gem_object_unpin_pages(obj);
> +
> +		i915_gem_object_unlock(obj);
> +	}
> +
> +	i915_gem_object_userptr_submit_fini(obj);
> +	return err;
>   }
>   
>   static void
>   i915_gem_userptr_release(struct drm_i915_gem_object *obj)
>   {
> -	i915_gem_userptr_release__mmu_notifier(obj);
> -	i915_gem_userptr_release__mm_struct(obj);
> +	GEM_WARN_ON(obj->userptr.page_ref);
> +
> +	mmu_interval_notifier_remove(&obj->userptr.notifier);
> +	obj->userptr.notifier.mm = NULL;
>   }
>   
>   static int
> @@ -686,7 +419,6 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
>   	.name = "i915_gem_object_userptr",
>   	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
>   		 I915_GEM_OBJECT_NO_MMAP |
> -		 I915_GEM_OBJECT_ASYNC_CANCEL |
>   		 I915_GEM_OBJECT_IS_PROXY,
>   	.get_pages = i915_gem_userptr_get_pages,
>   	.put_pages = i915_gem_userptr_put_pages,
> @@ -793,6 +525,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>   	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
>   
>   	obj->userptr.ptr = args->user_ptr;
> +	obj->userptr.notifier_seq = ULONG_MAX;
>   	if (args->flags & I915_USERPTR_READ_ONLY)
>   		i915_gem_object_set_readonly(obj);
>   
> @@ -800,9 +533,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>   	 * at binding. This means that we need to hook into the mmu_notifier
>   	 * in order to detect if the mmu is destroyed.
>   	 */
> -	ret = i915_gem_userptr_init__mm_struct(obj);
> -	if (ret == 0)
> -		ret = i915_gem_userptr_init__mmu_notifier(obj);
> +	ret = i915_gem_userptr_init__mmu_notifier(obj);
>   	if (ret == 0)
>   		ret = drm_gem_handle_create(file, &obj->base, &handle);
>   
> @@ -821,15 +552,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>   int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
>   {
>   #ifdef CONFIG_MMU_NOTIFIER
> -	spin_lock_init(&dev_priv->mm_lock);
> -	hash_init(dev_priv->mm_structs);
> -
> -	dev_priv->mm.userptr_wq =
> -		alloc_workqueue("i915-userptr-acquire",
> -				WQ_HIGHPRI | WQ_UNBOUND,
> -				0);
> -	if (!dev_priv->mm.userptr_wq)
> -		return -ENOMEM;
> +	spin_lock_init(&dev_priv->mm.notifier_lock);
>   #endif
>   
>   	return 0;
> @@ -837,7 +560,4 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
>   
>   void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
>   {
> -#ifdef CONFIG_MMU_NOTIFIER
> -	destroy_workqueue(dev_priv->mm.userptr_wq);
> -#endif
>   }
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index fc41cf6442a9..72927d356c1a 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -558,11 +558,10 @@ struct i915_gem_mm {
>   
>   #ifdef CONFIG_MMU_NOTIFIER
>   	/**
> -	 * Workqueue to fault in userptr pages, flushed by the execbuf
> -	 * when required but otherwise left to userspace to try again
> -	 * on EAGAIN.
> +	 * notifier_lock for mmu notifiers, memory may not be allocated
> +	 * while holding this lock.
>   	 */
> -	struct workqueue_struct *userptr_wq;
> +	spinlock_t notifier_lock;
>   #endif
>   
>   	/* shrinker accounting, also useful for userland debugging */
> @@ -942,8 +941,6 @@ struct drm_i915_private {
>   	struct i915_ggtt ggtt; /* VM representing the global address space */
>   
>   	struct i915_gem_mm mm;
> -	DECLARE_HASHTABLE(mm_structs, 7);
> -	spinlock_t mm_lock;
>   
>   	/* Kernel Modesetting */
>   
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 22be1e7bf2dd..6288cd5d898e 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1053,10 +1053,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
>   err_unlock:
>   	i915_gem_drain_workqueue(dev_priv);
>   
> -	if (ret != -EIO) {
> +	if (ret != -EIO)
>   		intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
> -		i915_gem_cleanup_userptr(dev_priv);
> -	}
>   
>   	if (ret == -EIO) {
>   		/*
> @@ -1113,7 +1111,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
>   	intel_wa_list_free(&dev_priv->gt_wa_list);
>   
>   	intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
> -	i915_gem_cleanup_userptr(dev_priv);
>   
>   	i915_gem_drain_freed_objects(dev_priv);
>   
>

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [Intel-gfx] [PATCH v8 01/69] drm/i915: Do not share hwsp across contexts any more, v7.
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 01/69] drm/i915: Do not share hwsp across contexts any more, v7 Maarten Lankhorst
@ 2021-03-11 21:22   ` Jason Ekstrand
  2021-03-15 12:08     ` Maarten Lankhorst
  0 siblings, 1 reply; 82+ messages in thread
From: Jason Ekstrand @ 2021-03-11 21:22 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Intel GFX, Thomas Hellström

First off, I'm just here asking questions right now trying to start
getting my head around some of this stuff.  Feel free to ignore me or
tell me to go away if I'm being annoying. :-)

On Thu, Mar 11, 2021 at 7:49 AM Maarten Lankhorst
<maarten.lankhorst@linux.intel.com> wrote:
>
> Instead of sharing pages with breadcrumbs, give each timeline a
> single page. This allows unrelated timelines not to share locks
> any more during command submission.
>
> As an additional benefit, seqno wraparound no longer requires
> i915_vma_pin, which means we no longer need to worry about a
> potential -EDEADLK at a point where we are ready to submit.
>
> Changes since v1:
> - Fix erroneous i915_vma_acquire that should be a i915_vma_release (ickle).
> - Extra check for completion in intel_read_hwsp().
> Changes since v2:
> - Fix inconsistent indent in hwsp_alloc() (kbuild)
> - memset entire cacheline to 0.
> Changes since v3:
> - Do same in intel_timeline_reset_seqno(), and clflush for good measure.
> Changes since v4:
> - Use refcounting on timeline, instead of relying on i915_active.
> - Fix waiting on kernel requests.
> Changes since v5:
> - Bump amount of slots to maximum (256), for best wraparounds.
> - Add hwsp_offset to i915_request to fix potential wraparound hang.
> - Ensure timeline wrap test works with the changes.
> - Assign hwsp in intel_timeline_read_hwsp() within the rcu lock to
>   fix a hang.
> Changes since v6:
> - Rename i915_request_active_offset to i915_request_active_seqno(),
>   and elaborate the function. (tvrtko)
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> #v1
> Reported-by: kernel test robot <lkp@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
>  drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
>  drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  13 +-
>  drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 +
>  drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
>  drivers/gpu/drm/i915/gt/intel_timeline.c      | 422 ++++--------------
>  .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
>  drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   5 +-
>  drivers/gpu/drm/i915/gt/selftest_timeline.c   |  83 ++--
>  drivers/gpu/drm/i915/i915_request.c           |   4 -
>  drivers/gpu/drm/i915/i915_request.h           |  31 +-
>  11 files changed, 175 insertions(+), 415 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
> index b491a64919c8..9646200d2792 100644
> --- a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
> @@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
>                                    int flush, int post)
>  {
>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
> -       GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
> +       GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>
>         *cs++ = MI_FLUSH;
>
> diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
> index ce38d1bcaba3..b388ceeeb1c9 100644
> --- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
> @@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>                  PIPE_CONTROL_DC_FLUSH_ENABLE |
>                  PIPE_CONTROL_QW_WRITE |
>                  PIPE_CONTROL_CS_STALL);
> -       *cs++ = i915_request_active_timeline(rq)->hwsp_offset |
> +       *cs++ = i915_request_active_seqno(rq) |
>                 PIPE_CONTROL_GLOBAL_GTT;
>         *cs++ = rq->fence.seqno;
>
> @@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>                  PIPE_CONTROL_QW_WRITE |
>                  PIPE_CONTROL_GLOBAL_GTT_IVB |
>                  PIPE_CONTROL_CS_STALL);
> -       *cs++ = i915_request_active_timeline(rq)->hwsp_offset;
> +       *cs++ = i915_request_active_seqno(rq);
>         *cs++ = rq->fence.seqno;
>
>         *cs++ = MI_USER_INTERRUPT;
> @@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>  u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>  {
>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
> -       GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
> +       GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>
>         *cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
>         *cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
> @@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>         int i;
>
>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
> -       GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
> +       GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>
>         *cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
>                 MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
> diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
> index cac80af7ad1c..6b9c34d3ac8d 100644
> --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
> @@ -338,15 +338,14 @@ static u32 preempt_address(struct intel_engine_cs *engine)
>
>  static u32 hwsp_offset(const struct i915_request *rq)
>  {
> -       const struct intel_timeline_cacheline *cl;
> +       const struct intel_timeline *tl;
>
> -       /* Before the request is executed, the timeline/cachline is fixed */
> +       /* Before the request is executed, the timeline is fixed */
> +       tl = rcu_dereference_protected(rq->timeline,
> +                                      !i915_request_signaled(rq));

Why is Gen8+ different from Gen2/6 here?  In particular, why not use
i915_request_active_timeline(rq) or, better yet,
i915_request_active_seqno()?  The primary difference I see is that the
guard on the RCU dereference is different, but it's not immediately
obvious to me why that should vary between hardware generations.  Also,
i915_request_active_seqno() returns a u32, but that could be fixed.
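
i.e., something like this (assuming the u32 return type question can be
sorted out):

static u32 hwsp_offset(const struct i915_request *rq)
{
	/* reuse the common helper instead of open-coding the
	 * rcu_dereference_protected() */
	return i915_request_active_seqno(rq);
}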

>
> -       cl = rcu_dereference_protected(rq->hwsp_cacheline, 1);
> -       if (cl)
> -               return cl->ggtt_offset;
> -
> -       return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
> +       /* See the comment in i915_request_active_seqno(). */
> +       return page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno);
>  }
>
>  int gen8_emit_init_breadcrumb(struct i915_request *rq)
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> index b4df275cba79..e6cefd00b4a1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> @@ -752,6 +752,7 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
>         frame->rq.engine = engine;
>         frame->rq.context = ce;
>         rcu_assign_pointer(frame->rq.timeline, ce->timeline);
> +       frame->rq.hwsp_seqno = ce->timeline->hwsp_seqno;
>
>         frame->ring.vaddr = frame->cs;
>         frame->ring.size = sizeof(frame->cs);
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
> index 626af37c7790..3f6db8357309 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
> @@ -45,10 +45,6 @@ struct intel_gt {
>         struct intel_gt_timelines {
>                 spinlock_t lock; /* protects active_list */
>                 struct list_head active_list;
> -
> -               /* Pack multiple timelines' seqnos into the same page */
> -               spinlock_t hwsp_lock;
> -               struct list_head hwsp_free_list;
>         } timelines;
>
>         struct intel_gt_requests {
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
> index 491b8df174c2..efe2030cfe5e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
> @@ -11,21 +11,9 @@
>  #include "intel_ring.h"
>  #include "intel_timeline.h"
>
> -#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
> -#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))
> +#define TIMELINE_SEQNO_BYTES 8
>
> -#define CACHELINE_BITS 6
> -#define CACHELINE_FREE CACHELINE_BITS
> -
> -struct intel_timeline_hwsp {
> -       struct intel_gt *gt;
> -       struct intel_gt_timelines *gt_timelines;
> -       struct list_head free_link;
> -       struct i915_vma *vma;
> -       u64 free_bitmap;
> -};
> -
> -static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
> +static struct i915_vma *hwsp_alloc(struct intel_gt *gt)
>  {
>         struct drm_i915_private *i915 = gt->i915;
>         struct drm_i915_gem_object *obj;
> @@ -44,220 +32,59 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
>         return vma;
>  }
>
> -static struct i915_vma *
> -hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
> -{
> -       struct intel_gt_timelines *gt = &timeline->gt->timelines;
> -       struct intel_timeline_hwsp *hwsp;
> -
> -       BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
> -
> -       spin_lock_irq(&gt->hwsp_lock);
> -
> -       /* hwsp_free_list only contains HWSP that have available cachelines */
> -       hwsp = list_first_entry_or_null(&gt->hwsp_free_list,
> -                                       typeof(*hwsp), free_link);
> -       if (!hwsp) {
> -               struct i915_vma *vma;
> -
> -               spin_unlock_irq(&gt->hwsp_lock);
> -
> -               hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL);
> -               if (!hwsp)
> -                       return ERR_PTR(-ENOMEM);
> -
> -               vma = __hwsp_alloc(timeline->gt);
> -               if (IS_ERR(vma)) {
> -                       kfree(hwsp);
> -                       return vma;
> -               }
> -
> -               GT_TRACE(timeline->gt, "new HWSP allocated\n");
> -
> -               vma->private = hwsp;
> -               hwsp->gt = timeline->gt;
> -               hwsp->vma = vma;
> -               hwsp->free_bitmap = ~0ull;
> -               hwsp->gt_timelines = gt;
> -
> -               spin_lock_irq(&gt->hwsp_lock);
> -               list_add(&hwsp->free_link, &gt->hwsp_free_list);
> -       }
> -
> -       GEM_BUG_ON(!hwsp->free_bitmap);
> -       *cacheline = __ffs64(hwsp->free_bitmap);
> -       hwsp->free_bitmap &= ~BIT_ULL(*cacheline);
> -       if (!hwsp->free_bitmap)
> -               list_del(&hwsp->free_link);
> -
> -       spin_unlock_irq(&gt->hwsp_lock);
> -
> -       GEM_BUG_ON(hwsp->vma->private != hwsp);
> -       return hwsp->vma;
> -}
> -
> -static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
> -{
> -       struct intel_gt_timelines *gt = hwsp->gt_timelines;
> -       unsigned long flags;
> -
> -       spin_lock_irqsave(&gt->hwsp_lock, flags);
> -
> -       /* As a cacheline becomes available, publish the HWSP on the freelist */
> -       if (!hwsp->free_bitmap)
> -               list_add_tail(&hwsp->free_link, &gt->hwsp_free_list);
> -
> -       GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap));
> -       hwsp->free_bitmap |= BIT_ULL(cacheline);
> -
> -       /* And if no one is left using it, give the page back to the system */
> -       if (hwsp->free_bitmap == ~0ull) {
> -               i915_vma_put(hwsp->vma);
> -               list_del(&hwsp->free_link);
> -               kfree(hwsp);
> -       }
> -
> -       spin_unlock_irqrestore(&gt->hwsp_lock, flags);
> -}
> -
> -static void __rcu_cacheline_free(struct rcu_head *rcu)
> -{
> -       struct intel_timeline_cacheline *cl =
> -               container_of(rcu, typeof(*cl), rcu);
> -
> -       /* Must wait until after all *rq->hwsp are complete before removing */
> -       i915_gem_object_unpin_map(cl->hwsp->vma->obj);
> -       __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
> -
> -       i915_active_fini(&cl->active);
> -       kfree(cl);
> -}
> -
> -static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
> -{
> -       GEM_BUG_ON(!i915_active_is_idle(&cl->active));
> -       call_rcu(&cl->rcu, __rcu_cacheline_free);
> -}
> -
>  __i915_active_call
> -static void __cacheline_retire(struct i915_active *active)
> +static void __timeline_retire(struct i915_active *active)
>  {
> -       struct intel_timeline_cacheline *cl =
> -               container_of(active, typeof(*cl), active);
> +       struct intel_timeline *tl =
> +               container_of(active, typeof(*tl), active);
>
> -       i915_vma_unpin(cl->hwsp->vma);
> -       if (ptr_test_bit(cl->vaddr, CACHELINE_FREE))
> -               __idle_cacheline_free(cl);
> +       i915_vma_unpin(tl->hwsp_ggtt);
> +       intel_timeline_put(tl);
>  }
>
> -static int __cacheline_active(struct i915_active *active)
> +static int __timeline_active(struct i915_active *active)
>  {
> -       struct intel_timeline_cacheline *cl =
> -               container_of(active, typeof(*cl), active);
> +       struct intel_timeline *tl =
> +               container_of(active, typeof(*tl), active);
>
> -       __i915_vma_pin(cl->hwsp->vma);
> +       __i915_vma_pin(tl->hwsp_ggtt);
> +       intel_timeline_get(tl);
>         return 0;
>  }
>
> -static struct intel_timeline_cacheline *
> -cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
> -{
> -       struct intel_timeline_cacheline *cl;
> -       void *vaddr;
> -
> -       GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));
> -
> -       cl = kmalloc(sizeof(*cl), GFP_KERNEL);
> -       if (!cl)
> -               return ERR_PTR(-ENOMEM);
> -
> -       vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB);
> -       if (IS_ERR(vaddr)) {
> -               kfree(cl);
> -               return ERR_CAST(vaddr);
> -       }
> -
> -       cl->hwsp = hwsp;
> -       cl->vaddr = page_pack_bits(vaddr, cacheline);
> -
> -       i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
> -
> -       return cl;
> -}
> -
> -static void cacheline_acquire(struct intel_timeline_cacheline *cl,
> -                             u32 ggtt_offset)
> -{
> -       if (!cl)
> -               return;
> -
> -       cl->ggtt_offset = ggtt_offset;
> -       i915_active_acquire(&cl->active);
> -}
> -
> -static void cacheline_release(struct intel_timeline_cacheline *cl)
> -{
> -       if (cl)
> -               i915_active_release(&cl->active);
> -}
> -
> -static void cacheline_free(struct intel_timeline_cacheline *cl)
> -{
> -       if (!i915_active_acquire_if_busy(&cl->active)) {
> -               __idle_cacheline_free(cl);
> -               return;
> -       }
> -
> -       GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
> -       cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);
> -
> -       i915_active_release(&cl->active);
> -}
> -
>  static int intel_timeline_init(struct intel_timeline *timeline,
>                                struct intel_gt *gt,
>                                struct i915_vma *hwsp,
>                                unsigned int offset)
>  {
>         void *vaddr;
> +       u32 *seqno;
>
>         kref_init(&timeline->kref);
>         atomic_set(&timeline->pin_count, 0);
>
>         timeline->gt = gt;
>
> -       timeline->has_initial_breadcrumb = !hwsp;
> -       timeline->hwsp_cacheline = NULL;
> -
> -       if (!hwsp) {
> -               struct intel_timeline_cacheline *cl;
> -               unsigned int cacheline;
> -
> -               hwsp = hwsp_alloc(timeline, &cacheline);
> +       if (hwsp) {
> +               timeline->hwsp_offset = offset;
> +               timeline->hwsp_ggtt = i915_vma_get(hwsp);
> +       } else {
> +               timeline->has_initial_breadcrumb = true;
> +               hwsp = hwsp_alloc(gt);
>                 if (IS_ERR(hwsp))
>                         return PTR_ERR(hwsp);
> -
> -               cl = cacheline_alloc(hwsp->private, cacheline);
> -               if (IS_ERR(cl)) {
> -                       __idle_hwsp_free(hwsp->private, cacheline);
> -                       return PTR_ERR(cl);
> -               }
> -
> -               timeline->hwsp_cacheline = cl;
> -               timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
> -
> -               vaddr = page_mask_bits(cl->vaddr);
> -       } else {
> -               timeline->hwsp_offset = offset;
> -               vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
> -               if (IS_ERR(vaddr))
> -                       return PTR_ERR(vaddr);
> +               timeline->hwsp_ggtt = hwsp;
>         }
>
> -       timeline->hwsp_seqno =
> -               memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES);
> +       vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
> +       if (IS_ERR(vaddr))
> +               return PTR_ERR(vaddr);
> +
> +       timeline->hwsp_map = vaddr;
> +       seqno = vaddr + timeline->hwsp_offset;
> +       WRITE_ONCE(*seqno, 0);
> +       timeline->hwsp_seqno = seqno;
>
> -       timeline->hwsp_ggtt = i915_vma_get(hwsp);
>         GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
>
>         timeline->fence_context = dma_fence_context_alloc(1);
> @@ -268,6 +95,7 @@ static int intel_timeline_init(struct intel_timeline *timeline,
>         INIT_LIST_HEAD(&timeline->requests);
>
>         i915_syncmap_init(&timeline->sync);
> +       i915_active_init(&timeline->active, __timeline_active, __timeline_retire);
>
>         return 0;
>  }
> @@ -278,23 +106,18 @@ void intel_gt_init_timelines(struct intel_gt *gt)
>
>         spin_lock_init(&timelines->lock);
>         INIT_LIST_HEAD(&timelines->active_list);
> -
> -       spin_lock_init(&timelines->hwsp_lock);
> -       INIT_LIST_HEAD(&timelines->hwsp_free_list);
>  }
>
> -static void intel_timeline_fini(struct intel_timeline *timeline)
> +static void intel_timeline_fini(struct rcu_head *rcu)
>  {
> -       GEM_BUG_ON(atomic_read(&timeline->pin_count));
> -       GEM_BUG_ON(!list_empty(&timeline->requests));
> -       GEM_BUG_ON(timeline->retire);
> +       struct intel_timeline *timeline =
> +               container_of(rcu, struct intel_timeline, rcu);
>
> -       if (timeline->hwsp_cacheline)
> -               cacheline_free(timeline->hwsp_cacheline);
> -       else
> -               i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
> +       i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>
>         i915_vma_put(timeline->hwsp_ggtt);
> +       i915_active_fini(&timeline->active);
> +       kfree(timeline);
>  }
>
>  struct intel_timeline *
> @@ -360,9 +183,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>         GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
>                  tl->fence_context, tl->hwsp_offset);
>
> -       cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
> +       i915_active_acquire(&tl->active);
>         if (atomic_fetch_inc(&tl->pin_count)) {
> -               cacheline_release(tl->hwsp_cacheline);
> +               i915_active_release(&tl->active);
>                 __i915_vma_unpin(tl->hwsp_ggtt);
>         }
>
> @@ -371,9 +194,13 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>
>  void intel_timeline_reset_seqno(const struct intel_timeline *tl)
>  {
> +       u32 *hwsp_seqno = (u32 *)tl->hwsp_seqno;
>         /* Must be pinned to be writable, and no requests in flight. */
>         GEM_BUG_ON(!atomic_read(&tl->pin_count));
> -       WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
> +
> +       memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
> +       WRITE_ONCE(*hwsp_seqno, tl->seqno);
> +       clflush(hwsp_seqno);
>  }
>
>  void intel_timeline_enter(struct intel_timeline *tl)
> @@ -449,106 +276,23 @@ static u32 timeline_advance(struct intel_timeline *tl)
>         return tl->seqno += 1 + tl->has_initial_breadcrumb;
>  }
>
> -static void timeline_rollback(struct intel_timeline *tl)
> -{
> -       tl->seqno -= 1 + tl->has_initial_breadcrumb;
> -}
> -
>  static noinline int
>  __intel_timeline_get_seqno(struct intel_timeline *tl,
> -                          struct i915_request *rq,
>                            u32 *seqno)
>  {
> -       struct intel_timeline_cacheline *cl;
> -       unsigned int cacheline;
> -       struct i915_vma *vma;
> -       void *vaddr;
> -       int err;
> -
> -       might_lock(&tl->gt->ggtt->vm.mutex);
> -       GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context);
> -
> -       /*
> -        * If there is an outstanding GPU reference to this cacheline,
> -        * such as it being sampled by a HW semaphore on another timeline,
> -        * we cannot wraparound our seqno value (the HW semaphore does
> -        * a strict greater-than-or-equals compare, not i915_seqno_passed).
> -        * So if the cacheline is still busy, we must detach ourselves
> -        * from it and leave it inflight alongside its users.
> -        *
> -        * However, if nobody is watching and we can guarantee that nobody
> -        * will, we could simply reuse the same cacheline.
> -        *
> -        * if (i915_active_request_is_signaled(&tl->last_request) &&
> -        *     i915_active_is_signaled(&tl->hwsp_cacheline->active))
> -        *      return 0;
> -        *
> -        * That seems unlikely for a busy timeline that needed to wrap in
> -        * the first place, so just replace the cacheline.
> -        */
> -
> -       vma = hwsp_alloc(tl, &cacheline);
> -       if (IS_ERR(vma)) {
> -               err = PTR_ERR(vma);
> -               goto err_rollback;
> -       }
> -
> -       err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
> -       if (err) {
> -               __idle_hwsp_free(vma->private, cacheline);
> -               goto err_rollback;
> -       }
> +       u32 next_ofs = offset_in_page(tl->hwsp_offset + TIMELINE_SEQNO_BYTES);
>
> -       cl = cacheline_alloc(vma->private, cacheline);
> -       if (IS_ERR(cl)) {
> -               err = PTR_ERR(cl);
> -               __idle_hwsp_free(vma->private, cacheline);
> -               goto err_unpin;
> -       }
> -       GEM_BUG_ON(cl->hwsp->vma != vma);
> -
> -       /*
> -        * Attach the old cacheline to the current request, so that we only
> -        * free it after the current request is retired, which ensures that
> -        * all writes into the cacheline from previous requests are complete.
> -        */
> -       err = i915_active_ref(&tl->hwsp_cacheline->active,
> -                             tl->fence_context,
> -                             &rq->fence);
> -       if (err)
> -               goto err_cacheline;
> +       /* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
> +       if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
> +               next_ofs = offset_in_page(next_ofs + BIT(5));
>
> -       cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */
> -       cacheline_free(tl->hwsp_cacheline);
> -
> -       i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */
> -       i915_vma_put(tl->hwsp_ggtt);
> -
> -       tl->hwsp_ggtt = i915_vma_get(vma);
> -
> -       vaddr = page_mask_bits(cl->vaddr);
> -       tl->hwsp_offset = cacheline * CACHELINE_BYTES;
> -       tl->hwsp_seqno =
> -               memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES);
> -
> -       tl->hwsp_offset += i915_ggtt_offset(vma);
> -       GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
> -                tl->fence_context, tl->hwsp_offset);
> -
> -       cacheline_acquire(cl, tl->hwsp_offset);
> -       tl->hwsp_cacheline = cl;
> +       tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + next_ofs;
> +       tl->hwsp_seqno = tl->hwsp_map + next_ofs;
> +       intel_timeline_reset_seqno(tl);
>
>         *seqno = timeline_advance(tl);
>         GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno));
>         return 0;
> -
> -err_cacheline:
> -       cacheline_free(cl);
> -err_unpin:
> -       i915_vma_unpin(vma);
> -err_rollback:
> -       timeline_rollback(tl);
> -       return err;
>  }
>
>  int intel_timeline_get_seqno(struct intel_timeline *tl,
> @@ -558,51 +302,52 @@ int intel_timeline_get_seqno(struct intel_timeline *tl,
>         *seqno = timeline_advance(tl);
>
>         /* Replace the HWSP on wraparound for HW semaphores */
> -       if (unlikely(!*seqno && tl->hwsp_cacheline))
> -               return __intel_timeline_get_seqno(tl, rq, seqno);
> +       if (unlikely(!*seqno && tl->has_initial_breadcrumb))
> +               return __intel_timeline_get_seqno(tl, seqno);
>
>         return 0;
>  }
>
> -static int cacheline_ref(struct intel_timeline_cacheline *cl,
> -                        struct i915_request *rq)
> -{
> -       return i915_active_add_request(&cl->active, rq);
> -}
> -
>  int intel_timeline_read_hwsp(struct i915_request *from,
>                              struct i915_request *to,
>                              u32 *hwsp)
>  {
> -       struct intel_timeline_cacheline *cl;
> +       struct intel_timeline *tl;
>         int err;
>
> -       GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline));
> -
>         rcu_read_lock();
> -       cl = rcu_dereference(from->hwsp_cacheline);
> -       if (i915_request_signaled(from)) /* confirm cacheline is valid */
> -               goto unlock;
> -       if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
> -               goto unlock; /* seqno wrapped and completed! */
> -       if (unlikely(__i915_request_is_complete(from)))
> -               goto release;
> +       tl = rcu_dereference(from->timeline);
> +       if (i915_request_signaled(from) ||
> +           !i915_active_acquire_if_busy(&tl->active))
> +               tl = NULL;
> +
> +       if (tl) {
> +               /* hwsp_offset may wraparound, so use from->hwsp_seqno */
> +               *hwsp = i915_ggtt_offset(tl->hwsp_ggtt) +
> +                       offset_in_page(from->hwsp_seqno);
> +       }
> +
> +       /* ensure we wait on the right request, if not, we completed */
> +       if (tl && __i915_request_is_complete(from)) {
> +               i915_active_release(&tl->active);
> +               tl = NULL;
> +       }
>         rcu_read_unlock();
>
> -       err = cacheline_ref(cl, to);
> -       if (err)
> +       if (!tl)
> +               return 1;
> +
> +       /* Can't do semaphore waits on kernel context */
> +       if (!tl->has_initial_breadcrumb) {
> +               err = -EINVAL;
>                 goto out;
> +       }
> +
> +       err = i915_active_add_request(&tl->active, to);
>
> -       *hwsp = cl->ggtt_offset;
>  out:
> -       i915_active_release(&cl->active);
> +       i915_active_release(&tl->active);
>         return err;
> -
> -release:
> -       i915_active_release(&cl->active);
> -unlock:
> -       rcu_read_unlock();
> -       return 1;
>  }
>
>  void intel_timeline_unpin(struct intel_timeline *tl)
> @@ -611,8 +356,7 @@ void intel_timeline_unpin(struct intel_timeline *tl)
>         if (!atomic_dec_and_test(&tl->pin_count))
>                 return;
>
> -       cacheline_release(tl->hwsp_cacheline);
> -
> +       i915_active_release(&tl->active);
>         __i915_vma_unpin(tl->hwsp_ggtt);
>  }
>
> @@ -621,8 +365,11 @@ void __intel_timeline_free(struct kref *kref)
>         struct intel_timeline *timeline =
>                 container_of(kref, typeof(*timeline), kref);
>
> -       intel_timeline_fini(timeline);
> -       kfree_rcu(timeline, rcu);
> +       GEM_BUG_ON(atomic_read(&timeline->pin_count));
> +       GEM_BUG_ON(!list_empty(&timeline->requests));
> +       GEM_BUG_ON(timeline->retire);
> +
> +       call_rcu(&timeline->rcu, intel_timeline_fini);
>  }
>
>  void intel_gt_fini_timelines(struct intel_gt *gt)
> @@ -630,7 +377,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt)
>         struct intel_gt_timelines *timelines = &gt->timelines;
>
>         GEM_BUG_ON(!list_empty(&timelines->active_list));
> -       GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
>  }
>
>  void intel_gt_show_timelines(struct intel_gt *gt,
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
> index 9f677c9b7d06..74e67dbf89c5 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
> @@ -17,7 +17,6 @@
>  struct i915_vma;
>  struct i915_syncmap;
>  struct intel_gt;
> -struct intel_timeline_hwsp;
>
>  struct intel_timeline {
>         u64 fence_context;
> @@ -44,12 +43,11 @@ struct intel_timeline {
>         atomic_t pin_count;
>         atomic_t active_count;
>
> +       void *hwsp_map;
>         const u32 *hwsp_seqno;
>         struct i915_vma *hwsp_ggtt;
>         u32 hwsp_offset;
>
> -       struct intel_timeline_cacheline *hwsp_cacheline;
> -
>         bool has_initial_breadcrumb;
>
>         /**
> @@ -66,6 +64,8 @@ struct intel_timeline {
>          */
>         struct i915_active_fence last_request;
>
> +       struct i915_active active;
> +
>         /** A chain of completed timelines ready for early retirement. */
>         struct intel_timeline *retire;
>
> @@ -89,15 +89,4 @@ struct intel_timeline {
>         struct rcu_head rcu;
>  };
>
> -struct intel_timeline_cacheline {
> -       struct i915_active active;
> -
> -       struct intel_timeline_hwsp *hwsp;
> -       void *vaddr;
> -
> -       u32 ggtt_offset;
> -
> -       struct rcu_head rcu;
> -};
> -
>  #endif /* __I915_TIMELINE_TYPES_H__ */
> diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
> index 84d883de30ee..7e466ae114f8 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
> @@ -41,6 +41,9 @@ static int perf_end(struct intel_gt *gt)
>
>  static int write_timestamp(struct i915_request *rq, int slot)
>  {
> +       struct intel_timeline *tl =
> +               rcu_dereference_protected(rq->timeline,
> +                                         !i915_request_signaled(rq));
>         u32 cmd;
>         u32 *cs;
>
> @@ -53,7 +56,7 @@ static int write_timestamp(struct i915_request *rq, int slot)
>                 cmd++;
>         *cs++ = cmd;
>         *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base));
> -       *cs++ = i915_request_timeline(rq)->hwsp_offset + slot * sizeof(u32);
> +       *cs++ = tl->hwsp_offset + slot * sizeof(u32);
>         *cs++ = 0;
>
>         intel_ring_advance(rq, cs);
> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> index d283dce5b4ac..a4c78062e92b 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> @@ -665,7 +665,7 @@ static int live_hwsp_wrap(void *arg)
>         if (IS_ERR(tl))
>                 return PTR_ERR(tl);
>
> -       if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
> +       if (!tl->has_initial_breadcrumb)
>                 goto out_free;
>
>         err = intel_timeline_pin(tl, NULL);
> @@ -832,12 +832,26 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
>         return 0;
>  }
>
> +static void switch_tl_lock(struct i915_request *from, struct i915_request *to)
> +{
> +       /* some light mutex juggling required; think co-routines */
> +
> +       if (from) {
> +               lockdep_unpin_lock(&from->context->timeline->mutex, from->cookie);
> +               mutex_unlock(&from->context->timeline->mutex);
> +       }
> +
> +       if (to) {
> +               mutex_lock(&to->context->timeline->mutex);
> +               to->cookie = lockdep_pin_lock(&to->context->timeline->mutex);
> +       }
> +}
> +
>  static int create_watcher(struct hwsp_watcher *w,
>                           struct intel_engine_cs *engine,
>                           int ringsz)
>  {
>         struct intel_context *ce;
> -       struct intel_timeline *tl;
>
>         ce = intel_context_create(engine);
>         if (IS_ERR(ce))
> @@ -850,11 +864,8 @@ static int create_watcher(struct hwsp_watcher *w,
>                 return PTR_ERR(w->rq);
>
>         w->addr = i915_ggtt_offset(w->vma);
> -       tl = w->rq->context->timeline;
>
> -       /* some light mutex juggling required; think co-routines */
> -       lockdep_unpin_lock(&tl->mutex, w->rq->cookie);
> -       mutex_unlock(&tl->mutex);
> +       switch_tl_lock(w->rq, NULL);
>
>         return 0;
>  }
> @@ -863,15 +874,13 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
>                          bool (*op)(u32 hwsp, u32 seqno))
>  {
>         struct i915_request *rq = fetch_and_zero(&w->rq);
> -       struct intel_timeline *tl = rq->context->timeline;
>         u32 offset, end;
>         int err;
>
>         GEM_BUG_ON(w->addr - i915_ggtt_offset(w->vma) > w->vma->size);
>
>         i915_request_get(rq);
> -       mutex_lock(&tl->mutex);
> -       rq->cookie = lockdep_pin_lock(&tl->mutex);
> +       switch_tl_lock(NULL, rq);
>         i915_request_add(rq);
>
>         if (i915_request_wait(rq, 0, HZ) < 0) {
> @@ -900,10 +909,7 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
>  static void cleanup_watcher(struct hwsp_watcher *w)
>  {
>         if (w->rq) {
> -               struct intel_timeline *tl = w->rq->context->timeline;
> -
> -               mutex_lock(&tl->mutex);
> -               w->rq->cookie = lockdep_pin_lock(&tl->mutex);
> +               switch_tl_lock(NULL, w->rq);
>
>                 i915_request_add(w->rq);
>         }
> @@ -941,7 +947,7 @@ static struct i915_request *wrap_timeline(struct i915_request *rq)
>         }
>
>         i915_request_put(rq);
> -       rq = intel_context_create_request(ce);
> +       rq = i915_request_create(ce);
>         if (IS_ERR(rq))
>                 return rq;
>
> @@ -976,7 +982,7 @@ static int live_hwsp_read(void *arg)
>         if (IS_ERR(tl))
>                 return PTR_ERR(tl);
>
> -       if (!tl->hwsp_cacheline)
> +       if (!tl->has_initial_breadcrumb)
>                 goto out_free;
>
>         for (i = 0; i < ARRAY_SIZE(watcher); i++) {
> @@ -998,7 +1004,7 @@ static int live_hwsp_read(void *arg)
>                 do {
>                         struct i915_sw_fence *submit;
>                         struct i915_request *rq;
> -                       u32 hwsp;
> +                       u32 hwsp, dummy;
>
>                         submit = heap_fence_create(GFP_KERNEL);
>                         if (!submit) {
> @@ -1016,14 +1022,26 @@ static int live_hwsp_read(void *arg)
>                                 goto out;
>                         }
>
> -                       /* Skip to the end, saving 30 minutes of nops */
> -                       tl->seqno = -10u + 2 * (count & 3);
> -                       WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>                         ce->timeline = intel_timeline_get(tl);
>
> -                       rq = intel_context_create_request(ce);
> +                       /* Ensure timeline is mapped, done during first pin */
> +                       err = intel_context_pin(ce);
> +                       if (err) {
> +                               intel_context_put(ce);
> +                               goto out;
> +                       }
> +
> +                       /*
> +                        * Start at a new wrap, and set seqno right before another wrap,
> +                        * saving 30 minutes of nops
> +                        */
> +                       tl->seqno = -12u + 2 * (count & 3);
> +                       __intel_timeline_get_seqno(tl, &dummy);
> +
> +                       rq = i915_request_create(ce);
>                         if (IS_ERR(rq)) {
>                                 err = PTR_ERR(rq);
> +                               intel_context_unpin(ce);
>                                 intel_context_put(ce);
>                                 goto out;
>                         }
> @@ -1033,32 +1051,35 @@ static int live_hwsp_read(void *arg)
>                                                             GFP_KERNEL);
>                         if (err < 0) {
>                                 i915_request_add(rq);
> +                               intel_context_unpin(ce);
>                                 intel_context_put(ce);
>                                 goto out;
>                         }
>
> -                       mutex_lock(&watcher[0].rq->context->timeline->mutex);
> +                       switch_tl_lock(rq, watcher[0].rq);
>                         err = intel_timeline_read_hwsp(rq, watcher[0].rq, &hwsp);
>                         if (err == 0)
>                                 err = emit_read_hwsp(watcher[0].rq, /* before */
>                                                      rq->fence.seqno, hwsp,
>                                                      &watcher[0].addr);
> -                       mutex_unlock(&watcher[0].rq->context->timeline->mutex);
> +                       switch_tl_lock(watcher[0].rq, rq);
>                         if (err) {
>                                 i915_request_add(rq);
> +                               intel_context_unpin(ce);
>                                 intel_context_put(ce);
>                                 goto out;
>                         }
>
> -                       mutex_lock(&watcher[1].rq->context->timeline->mutex);
> +                       switch_tl_lock(rq, watcher[1].rq);
>                         err = intel_timeline_read_hwsp(rq, watcher[1].rq, &hwsp);
>                         if (err == 0)
>                                 err = emit_read_hwsp(watcher[1].rq, /* after */
>                                                      rq->fence.seqno, hwsp,
>                                                      &watcher[1].addr);
> -                       mutex_unlock(&watcher[1].rq->context->timeline->mutex);
> +                       switch_tl_lock(watcher[1].rq, rq);
>                         if (err) {
>                                 i915_request_add(rq);
> +                               intel_context_unpin(ce);
>                                 intel_context_put(ce);
>                                 goto out;
>                         }
> @@ -1067,6 +1088,7 @@ static int live_hwsp_read(void *arg)
>                         i915_request_add(rq);
>
>                         rq = wrap_timeline(rq);
> +                       intel_context_unpin(ce);
>                         intel_context_put(ce);
>                         if (IS_ERR(rq)) {
>                                 err = PTR_ERR(rq);
> @@ -1106,8 +1128,8 @@ static int live_hwsp_read(void *arg)
>                             3 * watcher[1].rq->ring->size)
>                                 break;
>
> -               } while (!__igt_timeout(end_time, NULL));
> -               WRITE_ONCE(*(u32 *)tl->hwsp_seqno, 0xdeadbeef);
> +               } while (!__igt_timeout(end_time, NULL) &&
> +                        count < (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2);
>
>                 pr_info("%s: simulated %lu wraps\n", engine->name, count);
>                 err = check_watcher(&watcher[1], "after", cmp_gte);
> @@ -1152,9 +1174,7 @@ static int live_hwsp_rollover_kernel(void *arg)
>                 }
>
>                 GEM_BUG_ON(i915_active_fence_isset(&tl->last_request));
> -               tl->seqno = 0;
> -               timeline_rollback(tl);
> -               timeline_rollback(tl);
> +               tl->seqno = -2u;
>                 WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>
>                 for (i = 0; i < ARRAY_SIZE(rq); i++) {
> @@ -1234,11 +1254,10 @@ static int live_hwsp_rollover_user(void *arg)
>                         goto out;
>
>                 tl = ce->timeline;
> -               if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
> +               if (!tl->has_initial_breadcrumb)
>                         goto out;
>
> -               timeline_rollback(tl);
> -               timeline_rollback(tl);
> +               tl->seqno = -4u;
>                 WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>
>                 for (i = 0; i < ARRAY_SIZE(rq); i++) {
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index e7b4c4bc41a6..59d942910558 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -794,7 +794,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>         rq->fence.seqno = seqno;
>
>         RCU_INIT_POINTER(rq->timeline, tl);
> -       RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
>         rq->hwsp_seqno = tl->hwsp_seqno;
>         GEM_BUG_ON(__i915_request_is_complete(rq));
>
> @@ -1039,9 +1038,6 @@ emit_semaphore_wait(struct i915_request *to,
>         if (i915_request_has_initial_breadcrumb(to))
>                 goto await_fence;
>
> -       if (!rcu_access_pointer(from->hwsp_cacheline))
> -               goto await_fence;
> -
>         /*
>          * If this or its dependents are waiting on an external fence
>          * that may fail catastrophically, then we want to avoid using
> diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
> index dd10a6db3d21..38062495b66f 100644
> --- a/drivers/gpu/drm/i915/i915_request.h
> +++ b/drivers/gpu/drm/i915/i915_request.h
> @@ -239,16 +239,6 @@ struct i915_request {
>          */
>         const u32 *hwsp_seqno;
>
> -       /*
> -        * If we need to access the timeline's seqno for this request in
> -        * another request, we need to keep a read reference to this associated
> -        * cacheline, so that we do not free and recycle it before the foreign
> -        * observers have completed. Hence, we keep a pointer to the cacheline
> -        * inside the timeline's HWSP vma, but it is only valid while this
> -        * request has not completed and guarded by the timeline mutex.
> -        */
> -       struct intel_timeline_cacheline __rcu *hwsp_cacheline;
> -
>         /** Position in the ring of the start of the request */
>         u32 head;
>
> @@ -650,4 +640,25 @@ static inline bool i915_request_use_semaphores(const struct i915_request *rq)
>         return intel_engine_has_semaphores(rq->engine);
>  }
>
> +static inline u32
> +i915_request_active_seqno(const struct i915_request *rq)
> +{
> +       u32 hwsp_phys_base =
> +               page_mask_bits(i915_request_active_timeline(rq)->hwsp_offset);
> +       u32 hwsp_relative_offset = offset_in_page(rq->hwsp_seqno);
> +
> +       /*
> +        * Because of wraparound, we cannot simply take tl->hwsp_offset,
> +        * but instead use the fact that the seqno's offset within the
> +        * page is the same for vaddr as for hwsp_offset. Take the top
> +        * bits from tl->hwsp_offset and combine them with the relative
> +        * offset in rq->hwsp_seqno.
> +        *
> +        * As rq->hwsp_seqno is rewritten when signaled, this only works
> +        * while the request isn't signaled, but then you no longer need the offset.
> +        */
> +
> +       return hwsp_phys_base + hwsp_relative_offset;
> +}
> +
>  #endif /* I915_REQUEST_H */
> --
> 2.30.1
>

* Re: [Intel-gfx] [PATCH v8 02/69] drm/i915: Pin timeline map after first timeline pin, v3.
  2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 02/69] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
@ 2021-03-11 21:44   ` Jason Ekstrand
  2021-03-15 12:34     ` Maarten Lankhorst
  0 siblings, 1 reply; 82+ messages in thread
From: Jason Ekstrand @ 2021-03-11 21:44 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Thomas Hellström, Intel GFX

On Thu, Mar 11, 2021 at 7:49 AM Maarten Lankhorst
<maarten.lankhorst@linux.intel.com> wrote:
>
> We're starting to require the reservation lock for pinning,
> so wait until we have that.
>
> Update the selftests to handle this correctly, and ensure pin is
> called in live_hwsp_rollover_user() and mock_hwsp_freelist().
>
> Changes since v1:
> - Fix NULL + XX arithmetic, use casts. (kbuild)
> Changes since v2:
> - Clear entire cacheline when pinning.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Reported-by: kernel test robot <lkp@intel.com>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_timeline.c    | 40 +++++++++----
>  drivers/gpu/drm/i915/gt/intel_timeline.h    |  2 +
>  drivers/gpu/drm/i915/gt/mock_engine.c       | 22 ++++++-
>  drivers/gpu/drm/i915/gt/selftest_timeline.c | 63 +++++++++++----------
>  drivers/gpu/drm/i915/i915_selftest.h        |  2 +
>  5 files changed, 84 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
> index efe2030cfe5e..032e1d1b4c5e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
> @@ -52,14 +52,29 @@ static int __timeline_active(struct i915_active *active)
>         return 0;
>  }
>
> +I915_SELFTEST_EXPORT int
> +intel_timeline_pin_map(struct intel_timeline *timeline)
> +{
> +       struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
> +       u32 ofs = offset_in_page(timeline->hwsp_offset);
> +       void *vaddr;
> +
> +       vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
> +       if (IS_ERR(vaddr))
> +               return PTR_ERR(vaddr);
> +
> +       timeline->hwsp_map = vaddr;
> +       timeline->hwsp_seqno = memset(vaddr + ofs, 0, CACHELINE_BYTES);

What guarantees that hwsp_offset is cacheline-aligned?  From what I
saw in patch 1, it's incremented by 8 so only every 8th one is
actually CL-aligned.
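
To make the arithmetic concrete, a quick standalone sketch (the
constants are my assumptions from patch 1, not the i915 headers):

  #include <stdio.h>

  #define TIMELINE_SEQNO_BYTES 8   /* assumed, per patch 1 */
  #define CACHELINE_BYTES      64  /* assumed */

  int main(void)
  {
          unsigned int ofs;

          /* Walk the first two cachelines' worth of seqno slots. */
          for (ofs = 0; ofs < 2 * CACHELINE_BYTES; ofs += TIMELINE_SEQNO_BYTES)
                  printf("slot %3u: %s\n", ofs,
                         ofs % CACHELINE_BYTES ? "not CL-aligned" : "CL-aligned");
          return 0;
  }

Only offsets 0 and 64 come out aligned, so a CACHELINE_BYTES memset
from any other slot spills into the next cacheline.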

> +       clflush(vaddr + ofs);
> +
> +       return 0;
> +}
> +
>  static int intel_timeline_init(struct intel_timeline *timeline,
>                                struct intel_gt *gt,
>                                struct i915_vma *hwsp,
>                                unsigned int offset)
>  {
> -       void *vaddr;
> -       u32 *seqno;
> -
>         kref_init(&timeline->kref);
>         atomic_set(&timeline->pin_count, 0);
>
> @@ -76,14 +91,8 @@ static int intel_timeline_init(struct intel_timeline *timeline,
>                 timeline->hwsp_ggtt = hwsp;
>         }
>
> -       vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
> -       if (IS_ERR(vaddr))
> -               return PTR_ERR(vaddr);
> -
> -       timeline->hwsp_map = vaddr;
> -       seqno = vaddr + timeline->hwsp_offset;
> -       WRITE_ONCE(*seqno, 0);
> -       timeline->hwsp_seqno = seqno;
> +       timeline->hwsp_map = NULL;
> +       timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset;

Maybe uintptr_t instead of long?  I think they're always the same on
Linux but uintptr_t seems like the more "correct" type for this sort
of thing.
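
For illustration, the two casts side by side in a standalone sketch
(the offset_as_ptr helpers are made up, not driver code):

  #include <stdint.h>

  /* Stash a small GGTT offset in a pointer-sized field until the
   * real mapping exists, as the hunk above does with hwsp_seqno. */
  static inline const void *offset_as_ptr_uintptr(uint32_t ofs)
  {
          return (const void *)(uintptr_t)ofs;  /* suggested */
  }

  static inline const void *offset_as_ptr_long(uint32_t ofs)
  {
          return (const void *)(long)ofs;       /* as posted */
  }

On Linux they end up identical; uintptr_t just states the intent.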


>
>         GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
>
> @@ -113,7 +122,8 @@ static void intel_timeline_fini(struct rcu_head *rcu)
>         struct intel_timeline *timeline =
>                 container_of(rcu, struct intel_timeline, rcu);
>
> -       i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
> +       if (timeline->hwsp_map)
> +               i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>
>         i915_vma_put(timeline->hwsp_ggtt);
>         i915_active_fini(&timeline->active);
> @@ -173,6 +183,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>         if (atomic_add_unless(&tl->pin_count, 1, 0))
>                 return 0;
>
> +       if (!tl->hwsp_map) {
> +               err = intel_timeline_pin_map(tl);
> +               if (err)
> +                       return err;
> +       }
> +
>         err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
>         if (err)
>                 return err;
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
> index b1f81d947f8d..57308c4d664a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline.h
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
> @@ -98,4 +98,6 @@ intel_timeline_is_last(const struct intel_timeline *tl,
>         return list_is_last_rcu(&rq->link, &tl->requests);
>  }
>
> +I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
> +
>  #endif
> diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
> index 5662f7c2f719..42fd86658ee7 100644
> --- a/drivers/gpu/drm/i915/gt/mock_engine.c
> +++ b/drivers/gpu/drm/i915/gt/mock_engine.c
> @@ -13,9 +13,20 @@
>  #include "mock_engine.h"
>  #include "selftests/mock_request.h"
>
> -static void mock_timeline_pin(struct intel_timeline *tl)
> +static int mock_timeline_pin(struct intel_timeline *tl)
>  {
> +       int err;
> +
> +       if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
> +               return -EBUSY;
> +
> +       err = intel_timeline_pin_map(tl);
> +       i915_gem_object_unlock(tl->hwsp_ggtt->obj);
> +       if (err)
> +               return err;
> +
>         atomic_inc(&tl->pin_count);
> +       return 0;
>  }
>
>  static void mock_timeline_unpin(struct intel_timeline *tl)
> @@ -133,6 +144,8 @@ static void mock_context_destroy(struct kref *ref)
>
>  static int mock_context_alloc(struct intel_context *ce)
>  {
> +       int err;
> +
>         ce->ring = mock_ring(ce->engine);
>         if (!ce->ring)
>                 return -ENOMEM;
> @@ -143,7 +156,12 @@ static int mock_context_alloc(struct intel_context *ce)
>                 return PTR_ERR(ce->timeline);
>         }
>
> -       mock_timeline_pin(ce->timeline);
> +       err = mock_timeline_pin(ce->timeline);
> +       if (err) {
> +               intel_timeline_put(ce->timeline);
> +               ce->timeline = NULL;
> +               return err;
> +       }
>
>         return 0;
>  }
> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> index a4c78062e92b..31b492eb2982 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> @@ -34,7 +34,7 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
>  {
>         unsigned long address = (unsigned long)page_address(hwsp_page(tl));
>
> -       return (address + tl->hwsp_offset) / CACHELINE_BYTES;
> +       return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;

Does this belong in the previous commit?  I've got no clue what I'm
talking about here but it looks like it goes with the hwsp_offset
wrapping changes in 01/69.

--Jason

>  }
>
>  #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
> @@ -58,6 +58,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state,
>         tl = xchg(&state->history[idx], tl);
>         if (tl) {
>                 radix_tree_delete(&state->cachelines, hwsp_cacheline(tl));
> +               intel_timeline_unpin(tl);
>                 intel_timeline_put(tl);
>         }
>  }
> @@ -77,6 +78,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
>                 if (IS_ERR(tl))
>                         return PTR_ERR(tl);
>
> +               err = intel_timeline_pin(tl, NULL);
> +               if (err) {
> +                       intel_timeline_put(tl);
> +                       return err;
> +               }
> +
>                 cacheline = hwsp_cacheline(tl);
>                 err = radix_tree_insert(&state->cachelines, cacheline, tl);
>                 if (err) {
> @@ -84,6 +91,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
>                                 pr_err("HWSP cacheline %lu already used; duplicate allocation!\n",
>                                        cacheline);
>                         }
> +                       intel_timeline_unpin(tl);
>                         intel_timeline_put(tl);
>                         return err;
>                 }
> @@ -451,7 +459,7 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value)
>  }
>
>  static struct i915_request *
> -tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
> +checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>  {
>         struct i915_request *rq;
>         int err;
> @@ -462,6 +470,13 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>                 goto out;
>         }
>
> +       if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
> +               pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
> +                      *tl->hwsp_seqno, tl->seqno);
> +               intel_timeline_unpin(tl);
> +               return ERR_PTR(-EINVAL);
> +       }
> +
>         rq = intel_engine_create_kernel_request(engine);
>         if (IS_ERR(rq))
>                 goto out_unpin;
> @@ -483,25 +498,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>         return rq;
>  }
>
> -static struct intel_timeline *
> -checked_intel_timeline_create(struct intel_gt *gt)
> -{
> -       struct intel_timeline *tl;
> -
> -       tl = intel_timeline_create(gt);
> -       if (IS_ERR(tl))
> -               return tl;
> -
> -       if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
> -               pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
> -                      *tl->hwsp_seqno, tl->seqno);
> -               intel_timeline_put(tl);
> -               return ERR_PTR(-EINVAL);
> -       }
> -
> -       return tl;
> -}
> -
>  static int live_hwsp_engine(void *arg)
>  {
>  #define NUM_TIMELINES 4096
> @@ -534,13 +530,13 @@ static int live_hwsp_engine(void *arg)
>                         struct intel_timeline *tl;
>                         struct i915_request *rq;
>
> -                       tl = checked_intel_timeline_create(gt);
> +                       tl = intel_timeline_create(gt);
>                         if (IS_ERR(tl)) {
>                                 err = PTR_ERR(tl);
>                                 break;
>                         }
>
> -                       rq = tl_write(tl, engine, count);
> +                       rq = checked_tl_write(tl, engine, count);
>                         if (IS_ERR(rq)) {
>                                 intel_timeline_put(tl);
>                                 err = PTR_ERR(rq);
> @@ -607,14 +603,14 @@ static int live_hwsp_alternate(void *arg)
>                         if (!intel_engine_can_store_dword(engine))
>                                 continue;
>
> -                       tl = checked_intel_timeline_create(gt);
> +                       tl = intel_timeline_create(gt);
>                         if (IS_ERR(tl)) {
>                                 err = PTR_ERR(tl);
>                                 goto out;
>                         }
>
>                         intel_engine_pm_get(engine);
> -                       rq = tl_write(tl, engine, count);
> +                       rq = checked_tl_write(tl, engine, count);
>                         intel_engine_pm_put(engine);
>                         if (IS_ERR(rq)) {
>                                 intel_timeline_put(tl);
> @@ -1257,6 +1253,10 @@ static int live_hwsp_rollover_user(void *arg)
>                 if (!tl->has_initial_breadcrumb)
>                         goto out;
>
> +               err = intel_context_pin(ce);
> +               if (err)
> +                       goto out;
> +
>                 tl->seqno = -4u;
>                 WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>
> @@ -1266,7 +1266,7 @@ static int live_hwsp_rollover_user(void *arg)
>                         this = intel_context_create_request(ce);
>                         if (IS_ERR(this)) {
>                                 err = PTR_ERR(this);
> -                               goto out;
> +                               goto out_unpin;
>                         }
>
>                         pr_debug("%s: create fence.seqnp:%d\n",
> @@ -1285,17 +1285,18 @@ static int live_hwsp_rollover_user(void *arg)
>                 if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
>                         pr_err("Wait for timeline wrap timed out!\n");
>                         err = -EIO;
> -                       goto out;
> +                       goto out_unpin;
>                 }
>
>                 for (i = 0; i < ARRAY_SIZE(rq); i++) {
>                         if (!i915_request_completed(rq[i])) {
>                                 pr_err("Pre-wrap request not completed!\n");
>                                 err = -EINVAL;
> -                               goto out;
> +                               goto out_unpin;
>                         }
>                 }
> -
> +out_unpin:
> +               intel_context_unpin(ce);
>  out:
>                 for (i = 0; i < ARRAY_SIZE(rq); i++)
>                         i915_request_put(rq[i]);
> @@ -1337,13 +1338,13 @@ static int live_hwsp_recycle(void *arg)
>                         struct intel_timeline *tl;
>                         struct i915_request *rq;
>
> -                       tl = checked_intel_timeline_create(gt);
> +                       tl = intel_timeline_create(gt);
>                         if (IS_ERR(tl)) {
>                                 err = PTR_ERR(tl);
>                                 break;
>                         }
>
> -                       rq = tl_write(tl, engine, count);
> +                       rq = checked_tl_write(tl, engine, count);
>                         if (IS_ERR(rq)) {
>                                 intel_timeline_put(tl);
>                                 err = PTR_ERR(rq);
> diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
> index d53d207ab6eb..f54de0499be7 100644
> --- a/drivers/gpu/drm/i915/i915_selftest.h
> +++ b/drivers/gpu/drm/i915/i915_selftest.h
> @@ -107,6 +107,7 @@ int __i915_subtests(const char *caller,
>
>  #define I915_SELFTEST_DECLARE(x) x
>  #define I915_SELFTEST_ONLY(x) unlikely(x)
> +#define I915_SELFTEST_EXPORT
>
>  #else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
>
> @@ -116,6 +117,7 @@ static inline int i915_perf_selftests(struct pci_dev *pdev) { return 0; }
>
>  #define I915_SELFTEST_DECLARE(x)
>  #define I915_SELFTEST_ONLY(x) 0
> +#define I915_SELFTEST_EXPORT static
>
>  #endif
>
> --
> 2.30.1
>

* Re: [Intel-gfx] [PATCH v8 01/69] drm/i915: Do not share hwsp across contexts any more, v7.
  2021-03-11 21:22   ` Jason Ekstrand
@ 2021-03-15 12:08     ` Maarten Lankhorst
  0 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-15 12:08 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: Intel GFX, Thomas Hellström

On 2021-03-11 at 22:22, Jason Ekstrand wrote:
> First off, I'm just here asking questions right now trying to start
> getting my head around some of this stuff.  Feel free to ignore me or
> tell me to go away if I'm being annoying. :-)
>
> On Thu, Mar 11, 2021 at 7:49 AM Maarten Lankhorst
> <maarten.lankhorst@linux.intel.com> wrote:
>> Instead of sharing pages with breadcrumbs, give each timeline a
>> single page. This allows unrelated timelines not to share locks
>> any more during command submission.
>>
>> As an additional benefit, seqno wraparound no longer requires
>> i915_vma_pin, which means we no longer need to worry about a
>> potential -EDEADLK at a point where we are ready to submit.
>>
>> Changes since v1:
>> - Fix erroneous i915_vma_acquire that should be a i915_vma_release (ickle).
>> - Extra check for completion in intel_read_hwsp().
>> Changes since v2:
>> - Fix inconsistent indent in hwsp_alloc() (kbuild)
>> - memset entire cacheline to 0.
>> Changes since v3:
>> - Do same in intel_timeline_reset_seqno(), and clflush for good measure.
>> Changes since v4:
>> - Use refcounting on timeline, instead of relying on i915_active.
>> - Fix waiting on kernel requests.
>> Changes since v5:
>> - Bump amount of slots to maximum (256), for best wraparounds.
>> - Add hwsp_offset to i915_request to fix potential wraparound hang.
>> - Ensure timeline wrap test works with the changes.
>> - Assign hwsp in intel_timeline_read_hwsp() within the rcu lock to
>>   fix a hang.
>> Changes since v6:
>> - Rename i915_request_active_offset to i915_request_active_seqno(),
>>   and elaborate the function. (tvrtko)
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> #v1
>> Reported-by: kernel test robot <lkp@intel.com>
>> ---
>>  drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
>>  drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
>>  drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  13 +-
>>  drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 +
>>  drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
>>  drivers/gpu/drm/i915/gt/intel_timeline.c      | 422 ++++--------------
>>  .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
>>  drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   5 +-
>>  drivers/gpu/drm/i915/gt/selftest_timeline.c   |  83 ++--
>>  drivers/gpu/drm/i915/i915_request.c           |   4 -
>>  drivers/gpu/drm/i915/i915_request.h           |  31 +-
>>  11 files changed, 175 insertions(+), 415 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
>> index b491a64919c8..9646200d2792 100644
>> --- a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
>> @@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
>>                                    int flush, int post)
>>  {
>>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
>> -       GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
>> +       GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>>
>>         *cs++ = MI_FLUSH;
>>
>> diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
>> index ce38d1bcaba3..b388ceeeb1c9 100644
>> --- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
>> @@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>>                  PIPE_CONTROL_DC_FLUSH_ENABLE |
>>                  PIPE_CONTROL_QW_WRITE |
>>                  PIPE_CONTROL_CS_STALL);
>> -       *cs++ = i915_request_active_timeline(rq)->hwsp_offset |
>> +       *cs++ = i915_request_active_seqno(rq) |
>>                 PIPE_CONTROL_GLOBAL_GTT;
>>         *cs++ = rq->fence.seqno;
>>
>> @@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>>                  PIPE_CONTROL_QW_WRITE |
>>                  PIPE_CONTROL_GLOBAL_GTT_IVB |
>>                  PIPE_CONTROL_CS_STALL);
>> -       *cs++ = i915_request_active_timeline(rq)->hwsp_offset;
>> +       *cs++ = i915_request_active_seqno(rq);
>>         *cs++ = rq->fence.seqno;
>>
>>         *cs++ = MI_USER_INTERRUPT;
>> @@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>>  u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>>  {
>>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
>> -       GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
>> +       GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>>
>>         *cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
>>         *cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
>> @@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>>         int i;
>>
>>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
>> -       GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
>> +       GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>>
>>         *cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
>>                 MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
>> diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
>> index cac80af7ad1c..6b9c34d3ac8d 100644
>> --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
>> @@ -338,15 +338,14 @@ static u32 preempt_address(struct intel_engine_cs *engine)
>>
>>  static u32 hwsp_offset(const struct i915_request *rq)
>>  {
>> -       const struct intel_timeline_cacheline *cl;
>> +       const struct intel_timeline *tl;
>>
>> -       /* Before the request is executed, the timeline/cachline is fixed */
>> +       /* Before the request is executed, the timeline is fixed */
>> +       tl = rcu_dereference_protected(rq->timeline,
>> +                                      !i915_request_signaled(rq));
> Why is Gen8+ different from Gen2/6 here?  In particular, why not use
> i915_request_active_timeline(rq) or, better yet,
> i915_request_active_seqno()?  The primary difference I see is that the
> guard on the RCU is different but it's not immediately obvious to me
> why this should be different between hardware generations.  Also,
> i915_request_active_seqno() returns a u32, but that could be fixed.

The legacy rings use different locking, and it was splatting on PROVE_RCU.

I didn't want to weaken the legacy check, so I just put in a different
condition for gen8 that's slightly weaker, but still better than a bare '1'.
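
Roughly, the two guards side by side (a sketch; tl_mutex stands in
for the actual timeline mutex expression):

  /* Legacy rings: only valid while the caller holds the timeline
   * mutex, which lockdep can verify. */
  tl = rcu_dereference_protected(rq->timeline,
                                 lockdep_is_held(&tl_mutex));

  /* Gen8+: weaker claim -- the timeline cannot change because the
   * request has not been signaled yet. */
  tl = rcu_dereference_protected(rq->timeline,
                                 !i915_request_signaled(rq));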



* Re: [Intel-gfx] [PATCH v8 02/69] drm/i915: Pin timeline map after first timeline pin, v3.
  2021-03-11 21:44   ` Jason Ekstrand
@ 2021-03-15 12:34     ` Maarten Lankhorst
  0 siblings, 0 replies; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-15 12:34 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: Thomas Hellström, Intel GFX

On 2021-03-11 at 22:44, Jason Ekstrand wrote:
> On Thu, Mar 11, 2021 at 7:49 AM Maarten Lankhorst
> <maarten.lankhorst@linux.intel.com> wrote:
>> We're starting to require the reservation lock for pinning,
>> so wait until we have that.
>>
>> Update the selftests to handle this correctly, and ensure pin is
>> called in live_hwsp_rollover_user() and mock_hwsp_freelist().
>>
>> Changes since v1:
>> - Fix NULL + XX arithmetic, use casts. (kbuild)
>> Changes since v2:
>> - Clear entire cacheline when pinning.
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>> Reported-by: kernel test robot <lkp@intel.com>
>> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> ---
>>  drivers/gpu/drm/i915/gt/intel_timeline.c    | 40 +++++++++----
>>  drivers/gpu/drm/i915/gt/intel_timeline.h    |  2 +
>>  drivers/gpu/drm/i915/gt/mock_engine.c       | 22 ++++++-
>>  drivers/gpu/drm/i915/gt/selftest_timeline.c | 63 +++++++++++----------
>>  drivers/gpu/drm/i915/i915_selftest.h        |  2 +
>>  5 files changed, 84 insertions(+), 45 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
>> index efe2030cfe5e..032e1d1b4c5e 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
>> @@ -52,14 +52,29 @@ static int __timeline_active(struct i915_active *active)
>>         return 0;
>>  }
>>
>> +I915_SELFTEST_EXPORT int
>> +intel_timeline_pin_map(struct intel_timeline *timeline)
>> +{
>> +       struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
>> +       u32 ofs = offset_in_page(timeline->hwsp_offset);
>> +       void *vaddr;
>> +
>> +       vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
>> +       if (IS_ERR(vaddr))
>> +               return PTR_ERR(vaddr);
>> +
>> +       timeline->hwsp_map = vaddr;
>> +       timeline->hwsp_seqno = memset(vaddr + ofs, 0, CACHELINE_BYTES);
> What guarantees that hwsp_offset is cacheline-aligned?  From what I
> saw in patch 1, it's incremented by 8 so only every 8th one is
> actually CL-aligned.

Yeah, the name of the #define is wrong here. It was originally
cacheline aligned because the page was shared between different
contexts. This is no longer the case, but the name was kept. I think
TIMELINE_SEQNO_ALIGN would be a better fit; I will respin.
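
Something like this for the respin (just a sketch; the value is one
plausible choice, not final):

  /* Name the requirement for what it actually is now. */
  #define TIMELINE_SEQNO_ALIGN TIMELINE_SEQNO_BYTES  /* assumed */

  timeline->hwsp_seqno = memset(vaddr + ofs, 0, TIMELINE_SEQNO_ALIGN);
  clflush(vaddr + ofs);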

>
>> +       clflush(vaddr + ofs);
>> +
>> +       return 0;
>> +}
>> +
>>  static int intel_timeline_init(struct intel_timeline *timeline,
>>                                struct intel_gt *gt,
>>                                struct i915_vma *hwsp,
>>                                unsigned int offset)
>>  {
>> -       void *vaddr;
>> -       u32 *seqno;
>> -
>>         kref_init(&timeline->kref);
>>         atomic_set(&timeline->pin_count, 0);
>>
>> @@ -76,14 +91,8 @@ static int intel_timeline_init(struct intel_timeline *timeline,
>>                 timeline->hwsp_ggtt = hwsp;
>>         }
>>
>> -       vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
>> -       if (IS_ERR(vaddr))
>> -               return PTR_ERR(vaddr);
>> -
>> -       timeline->hwsp_map = vaddr;
>> -       seqno = vaddr + timeline->hwsp_offset;
>> -       WRITE_ONCE(*seqno, 0);
>> -       timeline->hwsp_seqno = seqno;
>> +       timeline->hwsp_map = NULL;
>> +       timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset;
> Maybe uintptr_t instead of long?  I think they're always the same on
> Linux but uintptr_t seems like the more "correct" type for this sort
> of thing.

I did a non-scientific comparison of (void *)(uintptr_t) vs
(void *)(long) in the kernel; the latter was used 4x as often.
In general, long is a pointer-sized type in the kernel. I think
uintptr_t would be more correct for userspace, though.


>>         GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
>>
>> @@ -113,7 +122,8 @@ static void intel_timeline_fini(struct rcu_head *rcu)
>>         struct intel_timeline *timeline =
>>                 container_of(rcu, struct intel_timeline, rcu);
>>
>> -       i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>> +       if (timeline->hwsp_map)
>> +               i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>>
>>         i915_vma_put(timeline->hwsp_ggtt);
>>         i915_active_fini(&timeline->active);
>> @@ -173,6 +183,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>>         if (atomic_add_unless(&tl->pin_count, 1, 0))
>>                 return 0;
>>
>> +       if (!tl->hwsp_map) {
>> +               err = intel_timeline_pin_map(tl);
>> +               if (err)
>> +                       return err;
>> +       }
>> +
>>         err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
>>         if (err)
>>                 return err;
>> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
>> index b1f81d947f8d..57308c4d664a 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_timeline.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
>> @@ -98,4 +98,6 @@ intel_timeline_is_last(const struct intel_timeline *tl,
>>         return list_is_last_rcu(&rq->link, &tl->requests);
>>  }
>>
>> +I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
>> +
>>  #endif
>> diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
>> index 5662f7c2f719..42fd86658ee7 100644
>> --- a/drivers/gpu/drm/i915/gt/mock_engine.c
>> +++ b/drivers/gpu/drm/i915/gt/mock_engine.c
>> @@ -13,9 +13,20 @@
>>  #include "mock_engine.h"
>>  #include "selftests/mock_request.h"
>>
>> -static void mock_timeline_pin(struct intel_timeline *tl)
>> +static int mock_timeline_pin(struct intel_timeline *tl)
>>  {
>> +       int err;
>> +
>> +       if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
>> +               return -EBUSY;
>> +
>> +       err = intel_timeline_pin_map(tl);
>> +       i915_gem_object_unlock(tl->hwsp_ggtt->obj);
>> +       if (err)
>> +               return err;
>> +
>>         atomic_inc(&tl->pin_count);
>> +       return 0;
>>  }
>>
>>  static void mock_timeline_unpin(struct intel_timeline *tl)
>> @@ -133,6 +144,8 @@ static void mock_context_destroy(struct kref *ref)
>>
>>  static int mock_context_alloc(struct intel_context *ce)
>>  {
>> +       int err;
>> +
>>         ce->ring = mock_ring(ce->engine);
>>         if (!ce->ring)
>>                 return -ENOMEM;
>> @@ -143,7 +156,12 @@ static int mock_context_alloc(struct intel_context *ce)
>>                 return PTR_ERR(ce->timeline);
>>         }
>>
>> -       mock_timeline_pin(ce->timeline);
>> +       err = mock_timeline_pin(ce->timeline);
>> +       if (err) {
>> +               intel_timeline_put(ce->timeline);
>> +               ce->timeline = NULL;
>> +               return err;
>> +       }
>>
>>         return 0;
>>  }
>> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
>> index a4c78062e92b..31b492eb2982 100644
>> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
>> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
>> @@ -34,7 +34,7 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
>>  {
>>         unsigned long address = (unsigned long)page_address(hwsp_page(tl));
>>
>> -       return (address + tl->hwsp_offset) / CACHELINE_BYTES;
>> +       return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;
> Does this belong in the previous commit?  I've got no clue what I'm
> talking about here but it looks like it goes with the hwsp_offset
> wrapping changes in 01/69.
Yeah likely. Will move.
>
> --Jason
>
>>  }
>>
>>  #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
>> @@ -58,6 +58,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state,
>>         tl = xchg(&state->history[idx], tl);
>>         if (tl) {
>>                 radix_tree_delete(&state->cachelines, hwsp_cacheline(tl));
>> +               intel_timeline_unpin(tl);
>>                 intel_timeline_put(tl);
>>         }
>>  }
>> @@ -77,6 +78,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
>>                 if (IS_ERR(tl))
>>                         return PTR_ERR(tl);
>>
>> +               err = intel_timeline_pin(tl, NULL);
>> +               if (err) {
>> +                       intel_timeline_put(tl);
>> +                       return err;
>> +               }
>> +
>>                 cacheline = hwsp_cacheline(tl);
>>                 err = radix_tree_insert(&state->cachelines, cacheline, tl);
>>                 if (err) {
>> @@ -84,6 +91,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
>>                                 pr_err("HWSP cacheline %lu already used; duplicate allocation!\n",
>>                                        cacheline);
>>                         }
>> +                       intel_timeline_unpin(tl);
>>                         intel_timeline_put(tl);
>>                         return err;
>>                 }
>> @@ -451,7 +459,7 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value)
>>  }
>>
>>  static struct i915_request *
>> -tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>> +checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>>  {
>>         struct i915_request *rq;
>>         int err;
>> @@ -462,6 +470,13 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>>                 goto out;
>>         }
>>
>> +       if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
>> +               pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
>> +                      *tl->hwsp_seqno, tl->seqno);
>> +               intel_timeline_unpin(tl);
>> +               return ERR_PTR(-EINVAL);
>> +       }
>> +
>>         rq = intel_engine_create_kernel_request(engine);
>>         if (IS_ERR(rq))
>>                 goto out_unpin;
>> @@ -483,25 +498,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>>         return rq;
>>  }
>>
>> -static struct intel_timeline *
>> -checked_intel_timeline_create(struct intel_gt *gt)
>> -{
>> -       struct intel_timeline *tl;
>> -
>> -       tl = intel_timeline_create(gt);
>> -       if (IS_ERR(tl))
>> -               return tl;
>> -
>> -       if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
>> -               pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
>> -                      *tl->hwsp_seqno, tl->seqno);
>> -               intel_timeline_put(tl);
>> -               return ERR_PTR(-EINVAL);
>> -       }
>> -
>> -       return tl;
>> -}
>> -
>>  static int live_hwsp_engine(void *arg)
>>  {
>>  #define NUM_TIMELINES 4096
>> @@ -534,13 +530,13 @@ static int live_hwsp_engine(void *arg)
>>                         struct intel_timeline *tl;
>>                         struct i915_request *rq;
>>
>> -                       tl = checked_intel_timeline_create(gt);
>> +                       tl = intel_timeline_create(gt);
>>                         if (IS_ERR(tl)) {
>>                                 err = PTR_ERR(tl);
>>                                 break;
>>                         }
>>
>> -                       rq = tl_write(tl, engine, count);
>> +                       rq = checked_tl_write(tl, engine, count);
>>                         if (IS_ERR(rq)) {
>>                                 intel_timeline_put(tl);
>>                                 err = PTR_ERR(rq);
>> @@ -607,14 +603,14 @@ static int live_hwsp_alternate(void *arg)
>>                         if (!intel_engine_can_store_dword(engine))
>>                                 continue;
>>
>> -                       tl = checked_intel_timeline_create(gt);
>> +                       tl = intel_timeline_create(gt);
>>                         if (IS_ERR(tl)) {
>>                                 err = PTR_ERR(tl);
>>                                 goto out;
>>                         }
>>
>>                         intel_engine_pm_get(engine);
>> -                       rq = tl_write(tl, engine, count);
>> +                       rq = checked_tl_write(tl, engine, count);
>>                         intel_engine_pm_put(engine);
>>                         if (IS_ERR(rq)) {
>>                                 intel_timeline_put(tl);
>> @@ -1257,6 +1253,10 @@ static int live_hwsp_rollover_user(void *arg)
>>                 if (!tl->has_initial_breadcrumb)
>>                         goto out;
>>
>> +               err = intel_context_pin(ce);
>> +               if (err)
>> +                       goto out;
>> +
>>                 tl->seqno = -4u;
>>                 WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>>
>> @@ -1266,7 +1266,7 @@ static int live_hwsp_rollover_user(void *arg)
>>                         this = intel_context_create_request(ce);
>>                         if (IS_ERR(this)) {
>>                                 err = PTR_ERR(this);
>> -                               goto out;
>> +                               goto out_unpin;
>>                         }
>>
>>                         pr_debug("%s: create fence.seqnp:%d\n",
>> @@ -1285,17 +1285,18 @@ static int live_hwsp_rollover_user(void *arg)
>>                 if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
>>                         pr_err("Wait for timeline wrap timed out!\n");
>>                         err = -EIO;
>> -                       goto out;
>> +                       goto out_unpin;
>>                 }
>>
>>                 for (i = 0; i < ARRAY_SIZE(rq); i++) {
>>                         if (!i915_request_completed(rq[i])) {
>>                                 pr_err("Pre-wrap request not completed!\n");
>>                                 err = -EINVAL;
>> -                               goto out;
>> +                               goto out_unpin;
>>                         }
>>                 }
>> -
>> +out_unpin:
>> +               intel_context_unpin(ce);
>>  out:
>>                 for (i = 0; i < ARRAY_SIZE(rq); i++)
>>                         i915_request_put(rq[i]);
>> @@ -1337,13 +1338,13 @@ static int live_hwsp_recycle(void *arg)
>>                         struct intel_timeline *tl;
>>                         struct i915_request *rq;
>>
>> -                       tl = checked_intel_timeline_create(gt);
>> +                       tl = intel_timeline_create(gt);
>>                         if (IS_ERR(tl)) {
>>                                 err = PTR_ERR(tl);
>>                                 break;
>>                         }
>>
>> -                       rq = tl_write(tl, engine, count);
>> +                       rq = checked_tl_write(tl, engine, count);
>>                         if (IS_ERR(rq)) {
>>                                 intel_timeline_put(tl);
>>                                 err = PTR_ERR(rq);
>> diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
>> index d53d207ab6eb..f54de0499be7 100644
>> --- a/drivers/gpu/drm/i915/i915_selftest.h
>> +++ b/drivers/gpu/drm/i915/i915_selftest.h
>> @@ -107,6 +107,7 @@ int __i915_subtests(const char *caller,
>>
>>  #define I915_SELFTEST_DECLARE(x) x
>>  #define I915_SELFTEST_ONLY(x) unlikely(x)
>> +#define I915_SELFTEST_EXPORT
>>
>>  #else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
>>
>> @@ -116,6 +117,7 @@ static inline int i915_perf_selftests(struct pci_dev *pdev) { return 0; }
>>
>>  #define I915_SELFTEST_DECLARE(x)
>>  #define I915_SELFTEST_ONLY(x) 0
>> +#define I915_SELFTEST_EXPORT static
>>
>>  #endif
>>
>> --
>> 2.30.1
>>


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
  2021-03-11 17:24   ` Thomas Hellström (Intel)
@ 2021-03-15 12:36     ` Maarten Lankhorst
  2021-03-16  8:47       ` Thomas Hellström (Intel)
  0 siblings, 1 reply; 82+ messages in thread
From: Maarten Lankhorst @ 2021-03-15 12:36 UTC (permalink / raw)
  To: Thomas Hellström (Intel), intel-gfx; +Cc: Dave Airlie

On 2021-03-11 at 18:24, Thomas Hellström (Intel) wrote:
>
> Hi, Maarten,
>
> On 3/11/21 2:41 PM, Maarten Lankhorst wrote:
>> Instead of doing what we do currently, which will never work with
>> PROVE_LOCKING, do the same as AMD does, and something similar to
>> relocation slowpath. When all locks are dropped, we acquire the
>> pages for pinning. When the locks are taken, we transfer those
>> pages in .get_pages() to the bo. As a final check before installing
>> the fences, we ensure that the mmu notifier was not called; if it is,
>> we return -EAGAIN to userspace to signal it has to start over.
>>
>> Changes since v1:
>> - Unbinding is done in submit_init only. submit_begin() removed.
>> - MMU_NOTFIER -> MMU_NOTIFIER
>> Changes since v2:
>> - Make i915->mm.notifier a spinlock.
>> Changes since v3:
>> - Add WARN_ON if there are any page references left, should have been 0.
>> - Return 0 on success in submit_init(), bug from spinlock conversion.
>> - Release pvec outside of notifier_lock (Thomas).
>> Changes since v4:
>> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
>> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
>> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>> Changes since v5:
>> - Clarify why check on PF_EXITING is (temporarily) required.
>> Changes since v6:
>> - Ensure userptr validity is checked in set_domain through a special path.
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>> Acked-by: Dave Airlie <airlied@redhat.com>
>
> Mostly LGTM. Comments / suggestions below.
>
>> ---
>>  drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  18 +-
>>  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 101 ++-
>>  drivers/gpu/drm/i915/gem/i915_gem_object.h    |  38 +-
>>  .../gpu/drm/i915/gem/i915_gem_object_types.h  |  10 +-
>>  drivers/gpu/drm/i915/gem/i915_gem_pages.c     |   2 +-
>>  drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 796 ++++++------------
>>  drivers/gpu/drm/i915/i915_drv.h               |   9 +-
>>  drivers/gpu/drm/i915/i915_gem.c               |   5 +-
>>  8 files changed, 395 insertions(+), 584 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
>> index 2f4980bf742e..76cb9f5c66aa 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
>> @@ -468,14 +468,28 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
>>  	if (!obj)
>>  		return -ENOENT;
>>  
>> +	if (i915_gem_object_is_userptr(obj)) {
>> +		/*
>> +		 * Try to grab userptr pages, iris uses set_domain to check
>> +		 * userptr validity
>> +		 */
>> +		err = i915_gem_object_userptr_validate(obj);
>> +		if (!err)
>> +			err = i915_gem_object_wait(obj,
>> +						   I915_WAIT_INTERRUPTIBLE |
>> +						   I915_WAIT_PRIORITY |
>> +						   (write_domain ? I915_WAIT_ALL : 0),
>> +						   MAX_SCHEDULE_TIMEOUT);
>> +		goto out;
>> +	}
>> +
>>  	/*
>>  	 * Proxy objects do not control access to the backing storage, ergo
>>  	 * they cannot be used as a means to manipulate the cache domain
>>  	 * tracking for that backing storage. The proxy object is always
>>  	 * considered to be outside of any cache domain.
>>  	 */
>> -	if (i915_gem_object_is_proxy(obj) &&
>> -	    !i915_gem_object_is_userptr(obj)) {
>> +	if (i915_gem_object_is_proxy(obj)) {
>>  		err = -ENXIO;
>>  		goto out;
>>  	}
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>> index c72440c10876..64d0e5fccece 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>> @@ -53,14 +53,16 @@ enum {
>>  /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
>>  #define __EXEC_OBJECT_HAS_PIN		BIT(30)
>>  #define __EXEC_OBJECT_HAS_FENCE		BIT(29)
>> -#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
>> -#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
>> -#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
>> +#define __EXEC_OBJECT_USERPTR_INIT	BIT(28)
>> +#define __EXEC_OBJECT_NEEDS_MAP		BIT(27)
>> +#define __EXEC_OBJECT_NEEDS_BIAS	BIT(26)
>> +#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 26) /* all of the above + */
>>  #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
>>  
>>  #define __EXEC_HAS_RELOC	BIT(31)
>>  #define __EXEC_ENGINE_PINNED	BIT(30)
>> -#define __EXEC_INTERNAL_FLAGS	(~0u << 30)
>> +#define __EXEC_USERPTR_USED	BIT(29)
>> +#define __EXEC_INTERNAL_FLAGS	(~0u << 29)
>>  #define UPDATE			PIN_OFFSET_FIXED
>>  
>>  #define BATCH_OFFSET_BIAS (256*1024)
>> @@ -864,6 +866,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
>>  		}
>>  
>>  		eb_add_vma(eb, i, batch, vma);
>> +
>> +		if (i915_gem_object_is_userptr(vma->obj)) {
>> +			err = i915_gem_object_userptr_submit_init(vma->obj);
>> +			if (err) {
>> +				if (i + 1 < eb->buffer_count) {
>> +					/*
>> +					 * Execbuffer code expects last vma entry to be NULL,
>> +					 * since we already initialized this entry,
>> +					 * set the next value to NULL or we mess up
>> +					 * cleanup handling.
>> +					 */
>> +					eb->vma[i + 1].vma = NULL;
>> +				}
>> +
>> +				return err;
>> +			}
>> +
>> +			eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
>> +			eb->args->flags |= __EXEC_USERPTR_USED;
>> +		}
>>  	}
>>  
>>  	if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
>> @@ -965,7 +987,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
>>  	}
>>  }
>>  
>> -static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>> +static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
>>  {
>>  	const unsigned int count = eb->buffer_count;
>>  	unsigned int i;
>> @@ -979,6 +1001,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>>  
>>  		eb_unreserve_vma(ev);
>>  
>> +		if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
>> +			ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
>> +			i915_gem_object_userptr_submit_fini(vma->obj);
>> +		}
>> +
>>  		if (final)
>>  			i915_vma_put(vma);
>>  	}
>> @@ -1909,6 +1936,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
>>  	return 0;
>>  }
>>  
>> +static int eb_reinit_userptr(struct i915_execbuffer *eb)
>> +{
>> +	const unsigned int count = eb->buffer_count;
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
>> +		return 0;
>> +
>> +	for (i = 0; i < count; i++) {
>> +		struct eb_vma *ev = &eb->vma[i];
>> +
>> +		if (!i915_gem_object_is_userptr(ev->vma->obj))
>> +			continue;
>> +
>> +		ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
>> +		if (ret)
>> +			return ret;
>> +
>> +		ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>  static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>  					   struct i915_request *rq)
>>  {
>> @@ -1923,7 +1975,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>  	}
>>  
>>  	/* We may process another execbuffer during the unlock... */
>> -	eb_release_vmas(eb, false);
>> +	eb_release_vmas(eb, false, true);
>>  	i915_gem_ww_ctx_fini(&eb->ww);
>>  
>>  	if (rq) {
>> @@ -1964,10 +2016,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>  		err = 0;
>>  	}
>>  
>> -#ifdef CONFIG_MMU_NOTIFIER
>>  	if (!err)
>> -		flush_workqueue(eb->i915->mm.userptr_wq);
>> -#endif
>> +		err = eb_reinit_userptr(eb);
>>  
>>  err_relock:
>>  	i915_gem_ww_ctx_init(&eb->ww, true);
>> @@ -2029,7 +2079,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>  
>>  err:
>>  	if (err == -EDEADLK) {
>> -		eb_release_vmas(eb, false);
>> +		eb_release_vmas(eb, false, false);
>>  		err = i915_gem_ww_ctx_backoff(&eb->ww);
>>  		if (!err)
>>  			goto repeat_validate;
>> @@ -2126,7 +2176,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
>>  
>>  err:
>>  	if (err == -EDEADLK) {
>> -		eb_release_vmas(eb, false);
>> +		eb_release_vmas(eb, false, false);
>>  		err = i915_gem_ww_ctx_backoff(&eb->ww);
>>  		if (!err)
>>  			goto retry;
>> @@ -2201,6 +2251,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
>>  						      flags | __EXEC_OBJECT_NO_RESERVE);
>>  	}
>>  
>> +#ifdef CONFIG_MMU_NOTIFIER
>> +	if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
>> +		spin_lock(&eb->i915->mm.notifier_lock);
>> +
>> +		/*
>> +		 * count is always at least 1, otherwise __EXEC_USERPTR_USED
>> +		 * could not have been set
>> +		 */
>> +		for (i = 0; i < count; i++) {
>> +			struct eb_vma *ev = &eb->vma[i];
>> +			struct drm_i915_gem_object *obj = ev->vma->obj;
>> +
>> +			if (!i915_gem_object_is_userptr(obj))
>> +				continue;
>> +
>> +			err = i915_gem_object_userptr_submit_done(obj);
>> +			if (err)
>> +				break;
>> +		}
>> +
>> +		spin_unlock(&eb->i915->mm.notifier_lock);
>> +	}
>> +#endif
>> +
>>  	if (unlikely(err))
>>  		goto err_skip;
>>  
>> @@ -3345,7 +3419,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>>  
>>  	err = eb_lookup_vmas(&eb);
>>  	if (err) {
>> -		eb_release_vmas(&eb, true);
>> +		eb_release_vmas(&eb, true, true);
>>  		goto err_engine;
>>  	}
>>  
>> @@ -3417,6 +3491,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>>  
>>  	trace_i915_request_queue(eb.request, eb.batch_flags);
>>  	err = eb_submit(&eb, batch);
>> +
>>  err_request:
>>  	i915_request_get(eb.request);
>>  	err = eb_request_add(&eb, err);
>> @@ -3437,7 +3512,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>>  	i915_request_put(eb.request);
>>  
>>  err_vma:
>> -	eb_release_vmas(&eb, true);
>> +	eb_release_vmas(&eb, true, true);
>>  	if (eb.trampoline)
>>  		i915_vma_unpin(eb.trampoline);
>>  	WARN_ON(err == -EDEADLK);
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>> index 69509dbd01c7..b5af9c440ac5 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>> @@ -59,6 +59,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
>>  				       const void *data, resource_size_t size);
>>  
>>  extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
>> +
>>  void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
>>  				     struct sg_table *pages,
>>  				     bool needs_clflush);
>> @@ -278,12 +279,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
>>  	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
>>  }
>>  
>> -static inline bool
>> -i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
>> -{
>> -	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
>> -}
>> -
>>  static inline bool
>>  i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
>>  {
>> @@ -573,16 +568,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>>  void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
>>  					      enum fb_op_origin origin);
>>  
>> -static inline bool
>> -i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
>> -{
>> -#ifdef CONFIG_MMU_NOTIFIER
>> -	return obj->userptr.mm;
>> -#else
>> -	return false;
>> -#endif
>> -}
>> -
>>  static inline void
>>  i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>>  				  enum fb_op_origin origin)
>> @@ -603,4 +588,25 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
>>  
>>  bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
>>  
>> +#ifdef CONFIG_MMU_NOTIFIER
>> +static inline bool
>> +i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
>> +{
>> +	return obj->userptr.notifier.mm;
>> +}
>> +
>> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
>> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
>> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
>> +int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj);
>> +#else
>> +static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
>> +
>> +static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
>> +static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
>> +static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
>> +static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
>> +
>> +#endif
>> +
>>  #endif
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>> index 414322619781..4c0a34231623 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>> @@ -7,6 +7,8 @@
>>  #ifndef __I915_GEM_OBJECT_TYPES_H__
>>  #define __I915_GEM_OBJECT_TYPES_H__
>>  
>> +#include <linux/mmu_notifier.h>
>> +
>>  #include <drm/drm_gem.h>
>>  #include <uapi/drm/i915_drm.h>
>>  
>> @@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops {
>>  #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
>>  #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
>>  #define I915_GEM_OBJECT_NO_MMAP		BIT(4)
>> -#define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(5)
>>  
>>  	/* Interface between the GEM object and its backing storage.
>>  	 * get_pages() is called once prior to the use of the associated set
>> @@ -299,10 +300,11 @@ struct drm_i915_gem_object {
>>  #ifdef CONFIG_MMU_NOTIFIER
>>  		struct i915_gem_userptr {
>>  			uintptr_t ptr;
>> +			unsigned long notifier_seq;
>>  
>> -			struct i915_mm_struct *mm;
>> -			struct i915_mmu_object *mmu_object;
>> -			struct work_struct *work;
>> +			struct mmu_interval_notifier notifier;
>> +			struct page **pvec;
>> +			int page_ref;
>>  		} userptr;
>>  #endif
>>  
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
>> index bf61b88a2113..e7d7650072c5 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
>> @@ -226,7 +226,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
>>  	 * get_pages backends we should be better able to handle the
>>  	 * cancellation of the async task in a more uniform manner.
>>  	 */
>> -	if (!pages && !i915_gem_object_needs_async_cancel(obj))
>> +	if (!pages)
>>  		pages = ERR_PTR(-EINVAL);
>>  
>>  	if (!IS_ERR(pages))
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>> index b466ab2def4d..1e42fbc68697 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>> @@ -2,10 +2,39 @@
>>   * SPDX-License-Identifier: MIT
>>   *
>>   * Copyright © 2012-2014 Intel Corporation
>> + *
>> +  * Based on amdgpu_mn, which bears the following notice:
>> + *
>> + * Copyright 2014 Advanced Micro Devices, Inc.
>> + * All Rights Reserved.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the
>> + * "Software"), to deal in the Software without restriction, including
>> + * without limitation the rights to use, copy, modify, merge, publish,
>> + * distribute, sub license, and/or sell copies of the Software, and to
>> + * permit persons to whom the Software is furnished to do so, subject to
>> + * the following conditions:
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
>> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
>> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
>> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + * The above copyright notice and this permission notice (including the
>> + * next paragraph) shall be included in all copies or substantial portions
>> + * of the Software.
>> + *
>> + */
>> +/*
>> + * Authors:
>> + *    Christian König <christian.koenig@amd.com>
>>   */
>>  
>>  #include <linux/mmu_context.h>
>> -#include <linux/mmu_notifier.h>
>>  #include <linux/mempolicy.h>
>>  #include <linux/swap.h>
>>  #include <linux/sched/mm.h>
>> @@ -15,373 +44,121 @@
>>  #include "i915_gem_object.h"
>>  #include "i915_scatterlist.h"
>>  
>> -#if defined(CONFIG_MMU_NOTIFIER)
>> -
>> -struct i915_mm_struct {
>> -	struct mm_struct *mm;
>> -	struct drm_i915_private *i915;
>> -	struct i915_mmu_notifier *mn;
>> -	struct hlist_node node;
>> -	struct kref kref;
>> -	struct rcu_work work;
>> -};
>> -
>> -#include <linux/interval_tree.h>
>> -
>> -struct i915_mmu_notifier {
>> -	spinlock_t lock;
>> -	struct hlist_node node;
>> -	struct mmu_notifier mn;
>> -	struct rb_root_cached objects;
>> -	struct i915_mm_struct *mm;
>> -};
>> -
>> -struct i915_mmu_object {
>> -	struct i915_mmu_notifier *mn;
>> -	struct drm_i915_gem_object *obj;
>> -	struct interval_tree_node it;
>> -};
>> +#ifdef CONFIG_MMU_NOTIFIER
>>  
>> -static void add_object(struct i915_mmu_object *mo)
>> +/**
>> + * i915_gem_userptr_invalidate - callback to notify about mm change
>> + *
>> + * @mni: the range (mm) is about to update
>> + * @range: details on the invalidation
>> + * @cur_seq: Value to pass to mmu_interval_set_seq()
>> + *
>> + * Block for operations on BOs to finish and mark pages as accessed and
>> + * potentially dirty.
>> + */
>> +static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>> +					const struct mmu_notifier_range *range,
>> +					unsigned long cur_seq)
>>  {
>> -	GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb));
>> -	interval_tree_insert(&mo->it, &mo->mn->objects);
>> -}
>> +	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);
>> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> +	long r;
>>  
>> -static void del_object(struct i915_mmu_object *mo)
>> -{
>> -	if (RB_EMPTY_NODE(&mo->it.rb))
>> -		return;
>> +	if (!mmu_notifier_range_blockable(range))
>> +		return false;
>>  
>> -	interval_tree_remove(&mo->it, &mo->mn->objects);
>> -	RB_CLEAR_NODE(&mo->it.rb);
>> -}
>> +	spin_lock(&i915->mm.notifier_lock);
>>  
>> -static void
>> -__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
>> -{
>> -	struct i915_mmu_object *mo = obj->userptr.mmu_object;
>> +	mmu_interval_set_seq(mni, cur_seq);
>> +
>> +	spin_unlock(&i915->mm.notifier_lock);
>>  
>>  	/*
>> -	 * During mm_invalidate_range we need to cancel any userptr that
>> -	 * overlaps the range being invalidated. Doing so requires the
>> -	 * struct_mutex, and that risks recursion. In order to cause
>> -	 * recursion, the user must alias the userptr address space with
>> -	 * a GTT mmapping (possible with a MAP_FIXED) - then when we have
>> -	 * to invalidate that mmaping, mm_invalidate_range is called with
>> -	 * the userptr address *and* the struct_mutex held.  To prevent that
>> -	 * we set a flag under the i915_mmu_notifier spinlock to indicate
>> -	 * whether this object is valid.
>> +	 * We don't wait when the process is exiting. This is valid
>> +	 * because the object will be cleaned up anyway.
>> +	 *
>> +	 * This is also temporarily required as a hack, because we
>> +	 * cannot currently force non-consistent batch buffers to preempt
>> +	 * and reschedule by waiting on it, hanging processes on exit.
>>  	 */
>> -	if (!mo)
>> -		return;
>> -
>> -	spin_lock(&mo->mn->lock);
>> -	if (value)
>> -		add_object(mo);
>> -	else
>> -		del_object(mo);
>> -	spin_unlock(&mo->mn->lock);
>> -}
>> -
>> -static int
>> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>> -				  const struct mmu_notifier_range *range)
>> -{
>> -	struct i915_mmu_notifier *mn =
>> -		container_of(_mn, struct i915_mmu_notifier, mn);
>> -	struct interval_tree_node *it;
>> -	unsigned long end;
>> -	int ret = 0;
>> -
>> -	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>> -		return 0;
>> -
>> -	/* interval ranges are inclusive, but invalidate range is exclusive */
>> -	end = range->end - 1;
>> -
>> -	spin_lock(&mn->lock);
>> -	it = interval_tree_iter_first(&mn->objects, range->start, end);
>> -	while (it) {
>> -		struct drm_i915_gem_object *obj;
>> -
>> -		if (!mmu_notifier_range_blockable(range)) {
>> -			ret = -EAGAIN;
>> -			break;
>> -		}
>> -
>> -		/*
>> -		 * The mmu_object is released late when destroying the
>> -		 * GEM object so it is entirely possible to gain a
>> -		 * reference on an object in the process of being freed
>> -		 * since our serialisation is via the spinlock and not
>> -		 * the struct_mutex - and consequently use it after it
>> -		 * is freed and then double free it. To prevent that
>> -		 * use-after-free we only acquire a reference on the
>> -		 * object if it is not in the process of being destroyed.
>> -		 */
>> -		obj = container_of(it, struct i915_mmu_object, it)->obj;
>> -		if (!kref_get_unless_zero(&obj->base.refcount)) {
>> -			it = interval_tree_iter_next(it, range->start, end);
>> -			continue;
>> -		}
>> -		spin_unlock(&mn->lock);
>> -
>> -		ret = i915_gem_object_unbind(obj,
>> -					     I915_GEM_OBJECT_UNBIND_ACTIVE |
>> -					     I915_GEM_OBJECT_UNBIND_BARRIER);
>> -		if (ret == 0)
>> -			ret = __i915_gem_object_put_pages(obj);
>> -		i915_gem_object_put(obj);
>> -		if (ret)
>> -			return ret;
>> +	if (current->flags & PF_EXITING)
>> +		return true;
>>  
>> -		spin_lock(&mn->lock);
>> -
>> -		/*
>> -		 * As we do not (yet) protect the mmu from concurrent insertion
>> -		 * over this range, there is no guarantee that this search will
>> -		 * terminate given a pathologic workload.
>> -		 */
>> -		it = interval_tree_iter_first(&mn->objects, range->start, end);
>> -	}
>> -	spin_unlock(&mn->lock);
>> -
>> -	return ret;
>> +	/* we will unbind on next submission, still have userptr pins */
>> +	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>> +				      MAX_SCHEDULE_TIMEOUT);
>> +	if (r <= 0)
>> +		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>
> I think that since Linux 5.9, where fork() no longer sets up COW on pinned pages, and since we do in fact still pin the pages, this fence wait should be removed together with the PF_EXITING special case: it does not improve on anything, but creates hangs that only hangcheck / watchdog can resolve. If, in future work, we no longer pin the pages, which is the direction we're moving towards, let's re-add it when needed.
>
>>  
>> +	return true;
>>  }
>>  
>> -static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
>> -	.invalidate_range_start = userptr_mn_invalidate_range_start,
>> +static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = {
>> +	.invalidate = i915_gem_userptr_invalidate,
>>  };
>>  
>> -static struct i915_mmu_notifier *
>> -i915_mmu_notifier_create(struct i915_mm_struct *mm)
>> -{
>> -	struct i915_mmu_notifier *mn;
>> -
>> -	mn = kmalloc(sizeof(*mn), GFP_KERNEL);
>> -	if (mn == NULL)
>> -		return ERR_PTR(-ENOMEM);
>> -
>> -	spin_lock_init(&mn->lock);
>> -	mn->mn.ops = &i915_gem_userptr_notifier;
>> -	mn->objects = RB_ROOT_CACHED;
>> -	mn->mm = mm;
>> -
>> -	return mn;
>> -}
>> -
>> -static void
>> -i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
>> -{
>> -	struct i915_mmu_object *mo;
>> -
>> -	mo = fetch_and_zero(&obj->userptr.mmu_object);
>> -	if (!mo)
>> -		return;
>> -
>> -	spin_lock(&mo->mn->lock);
>> -	del_object(mo);
>> -	spin_unlock(&mo->mn->lock);
>> -	kfree(mo);
>> -}
>> -
>> -static struct i915_mmu_notifier *
>> -i915_mmu_notifier_find(struct i915_mm_struct *mm)
>> -{
>> -	struct i915_mmu_notifier *mn, *old;
>> -	int err;
>> -
>> -	mn = READ_ONCE(mm->mn);
>> -	if (likely(mn))
>> -		return mn;
>> -
>> -	mn = i915_mmu_notifier_create(mm);
>> -	if (IS_ERR(mn))
>> -		return mn;
>> -
>> -	err = mmu_notifier_register(&mn->mn, mm->mm);
>> -	if (err) {
>> -		kfree(mn);
>> -		return ERR_PTR(err);
>> -	}
>> -
>> -	old = cmpxchg(&mm->mn, NULL, mn);
>> -	if (old) {
>> -		mmu_notifier_unregister(&mn->mn, mm->mm);
>> -		kfree(mn);
>> -		mn = old;
>> -	}
>> -
>> -	return mn;
>> -}
>> -
>>  static int
>>  i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
>>  {
>> -	struct i915_mmu_notifier *mn;
>> -	struct i915_mmu_object *mo;
>> -
>> -	if (GEM_WARN_ON(!obj->userptr.mm))
>> -		return -EINVAL;
>> -
>> -	mn = i915_mmu_notifier_find(obj->userptr.mm);
>> -	if (IS_ERR(mn))
>> -		return PTR_ERR(mn);
>> -
>> -	mo = kzalloc(sizeof(*mo), GFP_KERNEL);
>> -	if (!mo)
>> -		return -ENOMEM;
>> -
>> -	mo->mn = mn;
>> -	mo->obj = obj;
>> -	mo->it.start = obj->userptr.ptr;
>> -	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
>> -	RB_CLEAR_NODE(&mo->it.rb);
>> -
>> -	obj->userptr.mmu_object = mo;
>> -	return 0;
>> -}
>> -
>> -static void
>> -i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
>> -		       struct mm_struct *mm)
>> -{
>> -	if (mn == NULL)
>> -		return;
>> -
>> -	mmu_notifier_unregister(&mn->mn, mm);
>> -	kfree(mn);
>> -}
>> -
>> -
>> -static struct i915_mm_struct *
>> -__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
>> -{
>> -	struct i915_mm_struct *it, *mm = NULL;
>> -
>> -	rcu_read_lock();
>> -	hash_for_each_possible_rcu(i915->mm_structs,
>> -				   it, node,
>> -				   (unsigned long)real)
>> -		if (it->mm == real && kref_get_unless_zero(&it->kref)) {
>> -			mm = it;
>> -			break;
>> -		}
>> -	rcu_read_unlock();
>> -
>> -	return mm;
>> +	return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm,
>> +					    obj->userptr.ptr, obj->base.size,
>> +					    &i915_gem_userptr_notifier_ops);
>>  }
>>  
>> -static int
>> -i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
>> +static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
>>  {
>>  	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> -	struct i915_mm_struct *mm, *new;
>> -	int ret = 0;
>> -
>> -	/* During release of the GEM object we hold the struct_mutex. This
>> -	 * precludes us from calling mmput() at that time as that may be
>> -	 * the last reference and so call exit_mmap(). exit_mmap() will
>> -	 * attempt to reap the vma, and if we were holding a GTT mmap
>> -	 * would then call drm_gem_vm_close() and attempt to reacquire
>> -	 * the struct mutex. So in order to avoid that recursion, we have
>> -	 * to defer releasing the mm reference until after we drop the
>> -	 * struct_mutex, i.e. we need to schedule a worker to do the clean
>> -	 * up.
>> -	 */
>> -	mm = __i915_mm_struct_find(i915, current->mm);
>> -	if (mm)
>> -		goto out;
>> +	struct page **pvec = NULL;
>>  
>> -	new = kmalloc(sizeof(*mm), GFP_KERNEL);
>> -	if (!new)
>> -		return -ENOMEM;
>> -
>> -	kref_init(&new->kref);
>> -	new->i915 = to_i915(obj->base.dev);
>> -	new->mm = current->mm;
>> -	new->mn = NULL;
>> -
>> -	spin_lock(&i915->mm_lock);
>> -	mm = __i915_mm_struct_find(i915, current->mm);
>> -	if (!mm) {
>> -		hash_add_rcu(i915->mm_structs,
>> -			     &new->node,
>> -			     (unsigned long)new->mm);
>> -		mmgrab(current->mm);
>> -		mm = new;
>> +	spin_lock(&i915->mm.notifier_lock);
>> +	if (!--obj->userptr.page_ref) {
>> +		pvec = obj->userptr.pvec;
>> +		obj->userptr.pvec = NULL;
>>  	}
>> -	spin_unlock(&i915->mm_lock);
>> -	if (mm != new)
>> -		kfree(new);
>> +	GEM_BUG_ON(obj->userptr.page_ref < 0);
>> +	spin_unlock(&i915->mm.notifier_lock);
>>  
>> -out:
>> -	obj->userptr.mm = mm;
>> -	return ret;
>> -}
>> -
>> -static void
>> -__i915_mm_struct_free__worker(struct work_struct *work)
>> -{
>> -	struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work);
>> -
>> -	i915_mmu_notifier_free(mm->mn, mm->mm);
>> -	mmdrop(mm->mm);
>> -	kfree(mm);
>> -}
>> -
>> -static void
>> -__i915_mm_struct_free(struct kref *kref)
>> -{
>> -	struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
>> -
>> -	spin_lock(&mm->i915->mm_lock);
>> -	hash_del_rcu(&mm->node);
>> -	spin_unlock(&mm->i915->mm_lock);
>> -
>> -	INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker);
>> -	queue_rcu_work(system_wq, &mm->work);
>> -}
>> -
>> -static void
>> -i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
>> -{
>> -	if (obj->userptr.mm == NULL)
>> -		return;
>> +	if (pvec) {
>> +		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>  
>> -	kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free);
>> -	obj->userptr.mm = NULL;
>> +		unpin_user_pages(pvec, num_pages);
>> +		kfree(pvec);
>
> IIRC, CQ spotted that this should be a kvfree(), since the pvec is allocated with kvmalloc_array(), right?
>
>> +	}
>>  }
>>  
>> -struct get_pages_work {
>> -	struct work_struct work;
>> -	struct drm_i915_gem_object *obj;
>> -	struct task_struct *task;
>> -};
>> -
>> -static struct sg_table *
>> -__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>> -			       struct page **pvec, unsigned long num_pages)
>> +static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
>>  {
>> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> +	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>  	unsigned int max_segment = i915_sg_segment_size();
>>  	struct sg_table *st;
>>  	unsigned int sg_page_sizes;
>>  	struct scatterlist *sg;
>> +	struct page **pvec;
>>  	int ret;
>>  
>>  	st = kmalloc(sizeof(*st), GFP_KERNEL);
>>  	if (!st)
>> -		return ERR_PTR(-ENOMEM);
>> +		return -ENOMEM;
>> +
>> +	spin_lock(&i915->mm.notifier_lock);
>> +	if (GEM_WARN_ON(!obj->userptr.page_ref)) {
>> +		spin_unlock(&i915->mm.notifier_lock);
>> +		ret = -EFAULT;
>> +		goto err_free;
>> +	}
>> +
>> +	obj->userptr.page_ref++;
>> +	pvec = obj->userptr.pvec;
>> +	spin_unlock(&i915->mm.notifier_lock);
>>  
>>  alloc_table:
>>  	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
>>  					 num_pages << PAGE_SHIFT, max_segment,
>>  					 NULL, 0, GFP_KERNEL);
>>  	if (IS_ERR(sg)) {
>> -		kfree(st);
>> -		return ERR_CAST(sg);
>> +		ret = PTR_ERR(sg);
>> +		goto err;
>>  	}
>>  
>>  	ret = i915_gem_gtt_prepare_pages(obj, st);
>> @@ -393,203 +170,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>>  			goto alloc_table;
>>  		}
>>  
>> -		kfree(st);
>> -		return ERR_PTR(ret);
>> +		goto err;
>>  	}
>>  
>>  	sg_page_sizes = i915_sg_page_sizes(st->sgl);
>>  
>>  	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
>>  
>> -	return st;
>> -}
>> -
>> -static void
>> -__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
>> -{
>> -	struct get_pages_work *work = container_of(_work, typeof(*work), work);
>> -	struct drm_i915_gem_object *obj = work->obj;
>> -	const unsigned long npages = obj->base.size >> PAGE_SHIFT;
>> -	unsigned long pinned;
>> -	struct page **pvec;
>> -	int ret;
>> -
>> -	ret = -ENOMEM;
>> -	pinned = 0;
>> -
>> -	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
>> -	if (pvec != NULL) {
>> -		struct mm_struct *mm = obj->userptr.mm->mm;
>> -		unsigned int flags = 0;
>> -		int locked = 0;
>> -
>> -		if (!i915_gem_object_is_readonly(obj))
>> -			flags |= FOLL_WRITE;
>> -
>> -		ret = -EFAULT;
>> -		if (mmget_not_zero(mm)) {
>> -			while (pinned < npages) {
>> -				if (!locked) {
>> -					mmap_read_lock(mm);
>> -					locked = 1;
>> -				}
>> -				ret = pin_user_pages_remote
>> -					(mm,
>> -					 obj->userptr.ptr + pinned * PAGE_SIZE,
>> -					 npages - pinned,
>> -					 flags,
>> -					 pvec + pinned, NULL, &locked);
>> -				if (ret < 0)
>> -					break;
>> -
>> -				pinned += ret;
>> -			}
>> -			if (locked)
>> -				mmap_read_unlock(mm);
>> -			mmput(mm);
>> -		}
>> -	}
>> -
>> -	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
>> -	if (obj->userptr.work == &work->work) {
>> -		struct sg_table *pages = ERR_PTR(ret);
>> -
>> -		if (pinned == npages) {
>> -			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
>> -							       npages);
>> -			if (!IS_ERR(pages)) {
>> -				pinned = 0;
>> -				pages = NULL;
>> -			}
>> -		}
>> -
>> -		obj->userptr.work = ERR_CAST(pages);
>> -		if (IS_ERR(pages))
>> -			__i915_gem_userptr_set_active(obj, false);
>> -	}
>> -	mutex_unlock(&obj->mm.lock);
>> -
>> -	unpin_user_pages(pvec, pinned);
>> -	kvfree(pvec);
>> -
>> -	i915_gem_object_put(obj);
>> -	put_task_struct(work->task);
>> -	kfree(work);
>> -}
>> -
>> -static struct sg_table *
>> -__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
>> -{
>> -	struct get_pages_work *work;
>> -
>> -	/* Spawn a worker so that we can acquire the
>> -	 * user pages without holding our mutex. Access
>> -	 * to the user pages requires mmap_lock, and we have
>> -	 * a strict lock ordering of mmap_lock, struct_mutex -
>> -	 * we already hold struct_mutex here and so cannot
>> -	 * call gup without encountering a lock inversion.
>> -	 *
>> -	 * Userspace will keep on repeating the operation
>> -	 * (thanks to EAGAIN) until either we hit the fast
>> -	 * path or the worker completes. If the worker is
>> -	 * cancelled or superseded, the task is still run
>> -	 * but the results ignored. (This leads to
>> -	 * complications that we may have a stray object
>> -	 * refcount that we need to be wary of when
>> -	 * checking for existing objects during creation.)
>> -	 * If the worker encounters an error, it reports
>> -	 * that error back to this function through
>> -	 * obj->userptr.work = ERR_PTR.
>> -	 */
>> -	work = kmalloc(sizeof(*work), GFP_KERNEL);
>> -	if (work == NULL)
>> -		return ERR_PTR(-ENOMEM);
>> -
>> -	obj->userptr.work = &work->work;
>> -
>> -	work->obj = i915_gem_object_get(obj);
>> -
>> -	work->task = current;
>> -	get_task_struct(work->task);
>> -
>> -	INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
>> -	queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
>> -
>> -	return ERR_PTR(-EAGAIN);
>> -}
>> -
>> -static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
>> -{
>> -	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>> -	struct mm_struct *mm = obj->userptr.mm->mm;
>> -	struct page **pvec;
>> -	struct sg_table *pages;
>> -	bool active;
>> -	int pinned;
>> -	unsigned int gup_flags = 0;
>> -
>> -	/* If userspace should engineer that these pages are replaced in
>> -	 * the vma between us binding this page into the GTT and completion
>> -	 * of rendering... Their loss. If they change the mapping of their
>> -	 * pages they need to create a new bo to point to the new vma.
>> -	 *
>> -	 * However, that still leaves open the possibility of the vma
>> -	 * being copied upon fork. Which falls under the same userspace
>> -	 * synchronisation issue as a regular bo, except that this time
>> -	 * the process may not be expecting that a particular piece of
>> -	 * memory is tied to the GPU.
>> -	 *
>> -	 * Fortunately, we can hook into the mmu_notifier in order to
>> -	 * discard the page references prior to anything nasty happening
>> -	 * to the vma (discard or cloning) which should prevent the more
>> -	 * egregious cases from causing harm.
>> -	 */
>> -
>> -	if (obj->userptr.work) {
>> -		/* active flag should still be held for the pending work */
>> -		if (IS_ERR(obj->userptr.work))
>> -			return PTR_ERR(obj->userptr.work);
>> -		else
>> -			return -EAGAIN;
>> -	}
>> -
>> -	pvec = NULL;
>> -	pinned = 0;
>> -
>> -	if (mm == current->mm) {
>> -		pvec = kvmalloc_array(num_pages, sizeof(struct page *),
>> -				      GFP_KERNEL |
>> -				      __GFP_NORETRY |
>> -				      __GFP_NOWARN);
>> -		if (pvec) {
>> -			/* defer to worker if malloc fails */
>> -			if (!i915_gem_object_is_readonly(obj))
>> -				gup_flags |= FOLL_WRITE;
>> -			pinned = pin_user_pages_fast_only(obj->userptr.ptr,
>> -							  num_pages, gup_flags,
>> -							  pvec);
>> -		}
>> -	}
>> -
>> -	active = false;
>> -	if (pinned < 0) {
>> -		pages = ERR_PTR(pinned);
>> -		pinned = 0;
>> -	} else if (pinned < num_pages) {
>> -		pages = __i915_gem_userptr_get_pages_schedule(obj);
>> -		active = pages == ERR_PTR(-EAGAIN);
>> -	} else {
>> -		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
>> -		active = !IS_ERR(pages);
>> -	}
>> -	if (active)
>> -		__i915_gem_userptr_set_active(obj, true);
>> -
>> -	if (IS_ERR(pages))
>> -		unpin_user_pages(pvec, pinned);
>> -	kvfree(pvec);
>> +	return 0;
>>  
>> -	return PTR_ERR_OR_ZERO(pages);
>> +err:
>> +	i915_gem_object_userptr_drop_ref(obj);
>> +err_free:
>> +	kfree(st);
>> +	return ret;
>>  }
>>  
>>  static void
>> @@ -599,9 +193,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>>  	struct sgt_iter sgt_iter;
>>  	struct page *page;
>>  
>> -	/* Cancel any inflight work and force them to restart their gup */
>> -	obj->userptr.work = NULL;
>> -	__i915_gem_userptr_set_active(obj, false);
>>  	if (!pages)
>>  		return;
>>  
>> @@ -641,19 +232,161 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>>  		}
>>  
>>  		mark_page_accessed(page);
>> -		unpin_user_page(page);
>>  	}
>>  	obj->mm.dirty = false;
>>  
>>  	sg_free_table(pages);
>>  	kfree(pages);
>> +
>> +	i915_gem_object_userptr_drop_ref(obj);
>> +}
>> +
>> +static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
>> +{
>> +	struct sg_table *pages;
>> +	int err;
>> +
>> +	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
>> +	if (err)
>> +		return err;
>> +
>> +	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
>> +		return -EBUSY;
>> +
>> +	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
>> +
>> +	pages = __i915_gem_object_unset_pages(obj);
>> +	if (!IS_ERR_OR_NULL(pages))
>> +		i915_gem_userptr_put_pages(obj, pages);
>> +
>> +	if (get_pages)
>> +		err = ____i915_gem_object_get_pages(obj);
>> +	mutex_unlock(&obj->mm.lock);
>> +
>> +	return err;
>> +}
>> +
>> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
>> +{
>> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> +	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>> +	struct page **pvec;
>> +	unsigned int gup_flags = 0;
>> +	unsigned long notifier_seq;
>> +	int pinned, ret;
>> +
>> +	if (obj->userptr.notifier.mm != current->mm)
>> +		return -EFAULT;
>> +
>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>> +	if (ret)
>> +		return ret;
>> +
>> +	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
>> +	ret = i915_gem_object_userptr_unbind(obj, false);
>> +	i915_gem_object_unlock(obj);
>> +	if (ret)
>> +		return ret;
>> +
>> +	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
>> +
>> +	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
>> +	if (!pvec)
>> +		return -ENOMEM;
>> +
>> +	if (!i915_gem_object_is_readonly(obj))
>> +		gup_flags |= FOLL_WRITE;
>> +
>> +	pinned = ret = 0;
>> +	while (pinned < num_pages) {
>> +		ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
>> +					  num_pages - pinned, gup_flags,
>> +					  &pvec[pinned]);
>> +		if (ret < 0)
>> +			goto out;
>> +
>> +		pinned += ret;
>> +	}
>> +	ret = 0;
>> +
>> +	spin_lock(&i915->mm.notifier_lock);
> I think we can improve a lot on the locking here by having the object lock protect the object state, and by taking the driver-wide notifier lock only in execbuf / userptr_invalidate. If we additionally make the notifier lock an rwlock, taken in read mode in execbuf and in write mode from the invalidate callback, any potential global lock contention can be practically eliminated. But that's perhaps one for a future improvement.
>> +
>> +	if (mmu_interval_read_retry(&obj->userptr.notifier,
>> +		!obj->userptr.page_ref ? notifier_seq :
>> +		obj->userptr.notifier_seq)) {
>> +		ret = -EAGAIN;
>> +		goto out_unlock;
>> +	}
>> +
>> +	if (!obj->userptr.page_ref++) {
>> +		obj->userptr.pvec = pvec;
>> +		obj->userptr.notifier_seq = notifier_seq;
>> +
>> +		pvec = NULL;
>> +	}
>
> In addition, if we can call get_pages() here to take the page_ref, we can eliminate one page_ref and the use of _userptr_submit_fini(). That would of course require the object lock, but we'd already hold that for the object state as above.
>
I guess it could be optimized by not doing this dance.

Perhaps something like this?

i915_gem_object_set_pages(obj, pvec, seq)
{
	i915_gem_object_lock_interruptible(obj, NULL);

	/* Run the set_pages() function body directly, with pvec and the
	 * notifier seqno as arguments. It checks for collisions, and
	 * returns -EAGAIN if needed, or 0 if the notifier seqno was
	 * already set to the same value. */

	i915_gem_object_unlock(obj);
}
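
Fleshed out a bit, roughly (a sketch only: hypothetical name, page_ref
accounting and pvec ownership on the early-return paths elided, and it assumes
the invalidate side serializes against the object lock before bumping the
seqno):

	static int
	i915_gem_object_userptr_set_pages(struct drm_i915_gem_object *obj,
					  struct page **pvec,
					  unsigned long notifier_seq)
	{
		int err;

		err = i915_gem_object_lock_interruptible(obj, NULL);
		if (err)
			return err;

		if (obj->userptr.page_ref &&
		    obj->userptr.notifier_seq == notifier_seq) {
			/* Already set to the same seqno, nothing to do. */
			err = 0;
		} else if (mmu_interval_read_retry(&obj->userptr.notifier,
						   notifier_seq)) {
			/* Collided with the mmu notifier, caller must restart. */
			err = -EAGAIN;
		} else {
			/* Transfer pvec ownership to the object. */
			obj->userptr.pvec = pvec;
			obj->userptr.notifier_seq = notifier_seq;
			err = 0;
		}

		i915_gem_object_unlock(obj);
		return err;
	}

That would keep the object state under the ww lock as you suggest, and the
separate submit_fini() step could then go away.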


>> +
>> +out_unlock:
>> +	spin_unlock(&i915->mm.notifier_lock);
>> +
>> +out:
>> +	if (pvec) {
>> +		unpin_user_pages(pvec, pinned);
>> +		kvfree(pvec);
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
>> +{
>> +	if (mmu_interval_read_retry(&obj->userptr.notifier,
>> +				    obj->userptr.notifier_seq)) {
>> +		/* We collided with the mmu notifier, need to retry */
>> +
>> +		return -EAGAIN;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj)
>> +{
>> +	i915_gem_object_userptr_drop_ref(obj);
>> +}
>> +
>> +int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj)
>> +{
>> +	int err;
>> +
>> +	err = i915_gem_object_userptr_submit_init(obj);
>> +	if (err)
>> +		return err;
>> +
>> +	err = i915_gem_object_lock_interruptible(obj, NULL);
>> +	if (!err) {
>> +		/*
>> +		 * Since we only check validity, not use the pages,
>> +		 * it doesn't matter if we collide with the mmu notifier,
>> +		 * and -EAGAIN handling is not required.
>> +		 */
>> +		err = i915_gem_object_pin_pages(obj);
>> +		if (!err)
>> +			i915_gem_object_unpin_pages(obj);
>> +
>> +		i915_gem_object_unlock(obj);
>> +	}
>> +
>> +	i915_gem_object_userptr_submit_fini(obj);
>> +	return err;
>>  }
>>  
>>  static void
>>  i915_gem_userptr_release(struct drm_i915_gem_object *obj)
>>  {
>> -	i915_gem_userptr_release__mmu_notifier(obj);
>> -	i915_gem_userptr_release__mm_struct(obj);
>> +	GEM_WARN_ON(obj->userptr.page_ref);
>> +
>> +	mmu_interval_notifier_remove(&obj->userptr.notifier);
>> +	obj->userptr.notifier.mm = NULL;
>>  }
>>  
>>  static int
>> @@ -686,7 +419,6 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
>>  	.name = "i915_gem_object_userptr",
>>  	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
>>  		 I915_GEM_OBJECT_NO_MMAP |
>> -		 I915_GEM_OBJECT_ASYNC_CANCEL |
>>  		 I915_GEM_OBJECT_IS_PROXY,
>>  	.get_pages = i915_gem_userptr_get_pages,
>>  	.put_pages = i915_gem_userptr_put_pages,
>> @@ -793,6 +525,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>>  	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
>>  
>>  	obj->userptr.ptr = args->user_ptr;
>> +	obj->userptr.notifier_seq = ULONG_MAX;
>>  	if (args->flags & I915_USERPTR_READ_ONLY)
>>  		i915_gem_object_set_readonly(obj);
>>  
>> @@ -800,9 +533,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>>  	 * at binding. This means that we need to hook into the mmu_notifier
>>  	 * in order to detect if the mmu is destroyed.
>>  	 */
>> -	ret = i915_gem_userptr_init__mm_struct(obj);
>> -	if (ret == 0)
>> -		ret = i915_gem_userptr_init__mmu_notifier(obj);
>> +	ret = i915_gem_userptr_init__mmu_notifier(obj);
>>  	if (ret == 0)
>>  		ret = drm_gem_handle_create(file, &obj->base, &handle);
>>  
>> @@ -821,15 +552,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>>  int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
>>  {
>>  #ifdef CONFIG_MMU_NOTIFIER
>> -	spin_lock_init(&dev_priv->mm_lock);
>> -	hash_init(dev_priv->mm_structs);
>> -
>> -	dev_priv->mm.userptr_wq =
>> -		alloc_workqueue("i915-userptr-acquire",
>> -				WQ_HIGHPRI | WQ_UNBOUND,
>> -				0);
>> -	if (!dev_priv->mm.userptr_wq)
>> -		return -ENOMEM;
>> +	spin_lock_init(&dev_priv->mm.notifier_lock);
>>  #endif
>>  
>>  	return 0;
>> @@ -837,7 +560,4 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
>>  
>>  void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
>>  {
>> -#ifdef CONFIG_MMU_NOTIFIER
>> -	destroy_workqueue(dev_priv->mm.userptr_wq);
>> -#endif
>>  }
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index fc41cf6442a9..72927d356c1a 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -558,11 +558,10 @@ struct i915_gem_mm {
>>  
>>  #ifdef CONFIG_MMU_NOTIFIER
>>  	/**
>> -	 * Workqueue to fault in userptr pages, flushed by the execbuf
>> -	 * when required but otherwise left to userspace to try again
>> -	 * on EAGAIN.
>> +	 * notifier_lock for mmu notifiers, memory may not be allocated
>> +	 * while holding this lock.
>>  	 */
>> -	struct workqueue_struct *userptr_wq;
>> +	spinlock_t notifier_lock;
>>  #endif
>>  
>>  	/* shrinker accounting, also useful for userland debugging */
>> @@ -942,8 +941,6 @@ struct drm_i915_private {
>>  	struct i915_ggtt ggtt; /* VM representing the global address space */
>>  
>>  	struct i915_gem_mm mm;
>> -	DECLARE_HASHTABLE(mm_structs, 7);
>> -	spinlock_t mm_lock;
>>  
>>  	/* Kernel Modesetting */
>>  
>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> index 22be1e7bf2dd..6288cd5d898e 100644
>> --- a/drivers/gpu/drm/i915/i915_gem.c
>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>> @@ -1053,10 +1053,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
>>  err_unlock:
>>  	i915_gem_drain_workqueue(dev_priv);
>>  
>> -	if (ret != -EIO) {
>> +	if (ret != -EIO)
>>  		intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
>> -		i915_gem_cleanup_userptr(dev_priv);
>> -	}
>>  
>>  	if (ret == -EIO) {
>>  		/*
>> @@ -1113,7 +1111,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
>>  	intel_wa_list_free(&dev_priv->gt_wa_list);
>>  
>>  	intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
>> -	i915_gem_cleanup_userptr(dev_priv);
>>  
>>  	i915_gem_drain_freed_objects(dev_priv);
>>  
>>


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
  2021-03-15 12:36     ` Maarten Lankhorst
@ 2021-03-16  8:47       ` Thomas Hellström (Intel)
  0 siblings, 0 replies; 82+ messages in thread
From: Thomas Hellström (Intel) @ 2021-03-16  8:47 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx; +Cc: Dave Airlie


On 3/15/21 1:36 PM, Maarten Lankhorst wrote:
> On 2021-03-11 at 18:24, Thomas Hellström (Intel) wrote:
>> Hi, Maarten,
>>
>> On 3/11/21 2:41 PM, Maarten Lankhorst wrote:
>>> Instead of doing what we do currently, which will never work with
>>> PROVE_LOCKING, do the same as AMD does, and something similar to
>>> relocation slowpath. When all locks are dropped, we acquire the
>>> pages for pinning. When the locks are taken, we transfer those
>>> pages in .get_pages() to the bo. As a final check before installing
>>> the fences, we ensure that the mmu notifier was not called; if it is,
>>> we return -EAGAIN to userspace to signal it has to start over.
>>>
>>> Changes since v1:
>>> - Unbinding is done in submit_init only. submit_begin() removed.
>>> - MMU_NOTFIER -> MMU_NOTIFIER
>>> Changes since v2:
>>> - Make i915->mm.notifier a spinlock.
>>> Changes since v3:
>>> - Add WARN_ON if there are any page references left, should have been 0.
>>> - Return 0 on success in submit_init(), bug from spinlock conversion.
>>> - Release pvec outside of notifier_lock (Thomas).
>>> Changes since v4:
>>> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
>>> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
>>> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>>> Changes since v5:
>>> - Clarify why check on PF_EXITING is (temporarily) required.
>>> Changes since v6:
>>> - Ensure userptr validity is checked in set_domain through a special path.
>>>
>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Acked-by: Dave Airlie <airlied@redhat.com>
>> Mostly LGTM. Comments / suggestions below.
>>
>>> ---
>>>   drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  18 +-
>>>   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 101 ++-
>>>   drivers/gpu/drm/i915/gem/i915_gem_object.h    |  38 +-
>>>   .../gpu/drm/i915/gem/i915_gem_object_types.h  |  10 +-
>>>   drivers/gpu/drm/i915/gem/i915_gem_pages.c     |   2 +-
>>>   drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 796 ++++++------------
>>>   drivers/gpu/drm/i915/i915_drv.h               |   9 +-
>>>   drivers/gpu/drm/i915/i915_gem.c               |   5 +-
>>>   8 files changed, 395 insertions(+), 584 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
>>> index 2f4980bf742e..76cb9f5c66aa 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
>>> @@ -468,14 +468,28 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
>>>   	if (!obj)
>>>   		return -ENOENT;
>>>   
>>> +	if (i915_gem_object_is_userptr(obj)) {
>>> +		/*
>>> +		 * Try to grab userptr pages, iris uses set_domain to check
>>> +		 * userptr validity
>>> +		 */
>>> +		err = i915_gem_object_userptr_validate(obj);
>>> +		if (!err)
>>> +			err = i915_gem_object_wait(obj,
>>> +						   I915_WAIT_INTERRUPTIBLE |
>>> +						   I915_WAIT_PRIORITY |
>>> +						   (write_domain ? I915_WAIT_ALL : 0),
>>> +						   MAX_SCHEDULE_TIMEOUT);
>>> +		goto out;
>>> +	}
>>> +
>>>   	/*
>>>   	 * Proxy objects do not control access to the backing storage, ergo
>>>   	 * they cannot be used as a means to manipulate the cache domain
>>>   	 * tracking for that backing storage. The proxy object is always
>>>   	 * considered to be outside of any cache domain.
>>>   	 */
>>> -	if (i915_gem_object_is_proxy(obj) &&
>>> -	    !i915_gem_object_is_userptr(obj)) {
>>> +	if (i915_gem_object_is_proxy(obj)) {
>>>   		err = -ENXIO;
>>>   		goto out;
>>>   	}
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> index c72440c10876..64d0e5fccece 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> @@ -53,14 +53,16 @@ enum {
>>>   /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
>>>   #define __EXEC_OBJECT_HAS_PIN		BIT(30)
>>>   #define __EXEC_OBJECT_HAS_FENCE		BIT(29)
>>> -#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
>>> -#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
>>> -#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
>>> +#define __EXEC_OBJECT_USERPTR_INIT	BIT(28)
>>> +#define __EXEC_OBJECT_NEEDS_MAP		BIT(27)
>>> +#define __EXEC_OBJECT_NEEDS_BIAS	BIT(26)
>>> +#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 26) /* all of the above + */
>>>   #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
>>>   
>>>   #define __EXEC_HAS_RELOC	BIT(31)
>>>   #define __EXEC_ENGINE_PINNED	BIT(30)
>>> -#define __EXEC_INTERNAL_FLAGS	(~0u << 30)
>>> +#define __EXEC_USERPTR_USED	BIT(29)
>>> +#define __EXEC_INTERNAL_FLAGS	(~0u << 29)
>>>   #define UPDATE			PIN_OFFSET_FIXED
>>>   
>>>   #define BATCH_OFFSET_BIAS (256*1024)
>>> @@ -864,6 +866,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
>>>   		}
>>>   
>>>   		eb_add_vma(eb, i, batch, vma);
>>> +
>>> +		if (i915_gem_object_is_userptr(vma->obj)) {
>>> +			err = i915_gem_object_userptr_submit_init(vma->obj);
>>> +			if (err) {
>>> +				if (i + 1 < eb->buffer_count) {
>>> +					/*
>>> +				 * The execbuffer code expects the last vma
>>> +				 * entry to be NULL. Since we already
>>> +				 * initialized this entry, set the next one
>>> +				 * to NULL, or we will mess up cleanup
>>> +				 * handling.
>>> +					 */
>>> +					eb->vma[i + 1].vma = NULL;
>>> +				}
>>> +
>>> +				return err;
>>> +			}
>>> +
>>> +			eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
>>> +			eb->args->flags |= __EXEC_USERPTR_USED;
>>> +		}
>>>   	}
>>>   
>>>   	if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
>>> @@ -965,7 +987,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
>>>   	}
>>>   }
>>>   
>>> -static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>>> +static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
>>>   {
>>>   	const unsigned int count = eb->buffer_count;
>>>   	unsigned int i;
>>> @@ -979,6 +1001,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>>>   
>>>   		eb_unreserve_vma(ev);
>>>   
>>> +		if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
>>> +			ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
>>> +			i915_gem_object_userptr_submit_fini(vma->obj);
>>> +		}
>>> +
>>>   		if (final)
>>>   			i915_vma_put(vma);
>>>   	}
>>> @@ -1909,6 +1936,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
>>>   	return 0;
>>>   }
>>>   
>>> +static int eb_reinit_userptr(struct i915_execbuffer *eb)
>>> +{
>>> +	const unsigned int count = eb->buffer_count;
>>> +	unsigned int i;
>>> +	int ret;
>>> +
>>> +	if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
>>> +		return 0;
>>> +
>>> +	for (i = 0; i < count; i++) {
>>> +		struct eb_vma *ev = &eb->vma[i];
>>> +
>>> +		if (!i915_gem_object_is_userptr(ev->vma->obj))
>>> +			continue;
>>> +
>>> +		ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
>>> +		if (ret)
>>> +			return ret;
>>> +
>>> +		ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>>   static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>>   					   struct i915_request *rq)
>>>   {
>>> @@ -1923,7 +1975,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>>   	}
>>>   
>>>   	/* We may process another execbuffer during the unlock... */
>>> -	eb_release_vmas(eb, false);
>>> +	eb_release_vmas(eb, false, true);
>>>   	i915_gem_ww_ctx_fini(&eb->ww);
>>>   
>>>   	if (rq) {
>>> @@ -1964,10 +2016,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>>   		err = 0;
>>>   	}
>>>   
>>> -#ifdef CONFIG_MMU_NOTIFIER
>>>   	if (!err)
>>> -		flush_workqueue(eb->i915->mm.userptr_wq);
>>> -#endif
>>> +		err = eb_reinit_userptr(eb);
>>>   
>>>   err_relock:
>>>   	i915_gem_ww_ctx_init(&eb->ww, true);
>>> @@ -2029,7 +2079,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>>>   
>>>   err:
>>>   	if (err == -EDEADLK) {
>>> -		eb_release_vmas(eb, false);
>>> +		eb_release_vmas(eb, false, false);
>>>   		err = i915_gem_ww_ctx_backoff(&eb->ww);
>>>   		if (!err)
>>>   			goto repeat_validate;
>>> @@ -2126,7 +2176,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
>>>   
>>>   err:
>>>   	if (err == -EDEADLK) {
>>> -		eb_release_vmas(eb, false);
>>> +		eb_release_vmas(eb, false, false);
>>>   		err = i915_gem_ww_ctx_backoff(&eb->ww);
>>>   		if (!err)
>>>   			goto retry;
>>> @@ -2201,6 +2251,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
>>>   						      flags | __EXEC_OBJECT_NO_RESERVE);
>>>   	}
>>>   
>>> +#ifdef CONFIG_MMU_NOTIFIER
>>> +	if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
>>> +		spin_lock(&eb->i915->mm.notifier_lock);
>>> +
>>> +		/*
>>> +		 * count is always at least 1, otherwise __EXEC_USERPTR_USED
>>> +		 * could not have been set
>>> +		 */
>>> +		for (i = 0; i < count; i++) {
>>> +			struct eb_vma *ev = &eb->vma[i];
>>> +			struct drm_i915_gem_object *obj = ev->vma->obj;
>>> +
>>> +			if (!i915_gem_object_is_userptr(obj))
>>> +				continue;
>>> +
>>> +			err = i915_gem_object_userptr_submit_done(obj);
>>> +			if (err)
>>> +				break;
>>> +		}
>>> +
>>> +		spin_unlock(&eb->i915->mm.notifier_lock);
>>> +	}
>>> +#endif
>>> +
>>>   	if (unlikely(err))
>>>   		goto err_skip;
>>>   
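>> For reference, the submit_done() call in the loop above presumably reduces to a seqno recheck under the notifier lock, along these lines (a sketch; the function body is not part of this hunk):
>>
>> int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
>> {
>> 	/* Was the range invalidated between submit_init() and here? */
>> 	if (mmu_interval_read_retry(&obj->userptr.notifier,
>> 				    obj->userptr.notifier_seq))
>> 		return -EAGAIN;	/* caller restarts the submission */
>>
>> 	return 0;
>> }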
>>> @@ -3345,7 +3419,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>>>   
>>>   	err = eb_lookup_vmas(&eb);
>>>   	if (err) {
>>> -		eb_release_vmas(&eb, true);
>>> +		eb_release_vmas(&eb, true, true);
>>>   		goto err_engine;
>>>   	}
>>>   
>>> @@ -3417,6 +3491,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>>>   
>>>   	trace_i915_request_queue(eb.request, eb.batch_flags);
>>>   	err = eb_submit(&eb, batch);
>>> +
>>>   err_request:
>>>   	i915_request_get(eb.request);
>>>   	err = eb_request_add(&eb, err);
>>> @@ -3437,7 +3512,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>>>   	i915_request_put(eb.request);
>>>   
>>>   err_vma:
>>> -	eb_release_vmas(&eb, true);
>>> +	eb_release_vmas(&eb, true, true);
>>>   	if (eb.trampoline)
>>>   		i915_vma_unpin(eb.trampoline);
>>>   	WARN_ON(err == -EDEADLK);
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> index 69509dbd01c7..b5af9c440ac5 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> @@ -59,6 +59,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
>>>   				       const void *data, resource_size_t size);
>>>   
>>>   extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
>>> +
>>>   void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
>>>   				     struct sg_table *pages,
>>>   				     bool needs_clflush);
>>> @@ -278,12 +279,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
>>>   	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
>>>   }
>>>   
>>> -static inline bool
>>> -i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
>>> -{
>>> -	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
>>> -}
>>> -
>>>   static inline bool
>>>   i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
>>>   {
>>> @@ -573,16 +568,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>>>   void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
>>>   					      enum fb_op_origin origin);
>>>   
>>> -static inline bool
>>> -i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
>>> -{
>>> -#ifdef CONFIG_MMU_NOTIFIER
>>> -	return obj->userptr.mm;
>>> -#else
>>> -	return false;
>>> -#endif
>>> -}
>>> -
>>>   static inline void
>>>   i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>>>   				  enum fb_op_origin origin)
>>> @@ -603,4 +588,25 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
>>>   
>>>   bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
>>>   
>>> +#ifdef CONFIG_MMU_NOTIFIER
>>> +static inline bool
>>> +i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
>>> +{
>>> +	return obj->userptr.notifier.mm;
>>> +}
>>> +
>>> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
>>> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
>>> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
>>> +int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj);
>>> +#else
>>> +static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
>>> +
>>> +static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
>>> +static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
>>> +static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
>>> +static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
>>> +
>>> +#endif
>>> +
>>>   #endif
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> index 414322619781..4c0a34231623 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> @@ -7,6 +7,8 @@
>>>   #ifndef __I915_GEM_OBJECT_TYPES_H__
>>>   #define __I915_GEM_OBJECT_TYPES_H__
>>>   
>>> +#include <linux/mmu_notifier.h>
>>> +
>>>   #include <drm/drm_gem.h>
>>>   #include <uapi/drm/i915_drm.h>
>>>   
>>> @@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops {
>>>   #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
>>>   #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
>>>   #define I915_GEM_OBJECT_NO_MMAP		BIT(4)
>>> -#define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(5)
>>>   
>>>   	/* Interface between the GEM object and its backing storage.
>>>   	 * get_pages() is called once prior to the use of the associated set
>>> @@ -299,10 +300,11 @@ struct drm_i915_gem_object {
>>>   #ifdef CONFIG_MMU_NOTIFIER
>>>   		struct i915_gem_userptr {
>>>   			uintptr_t ptr;
>>> +			unsigned long notifier_seq;
>>>   
>>> -			struct i915_mm_struct *mm;
>>> -			struct i915_mmu_object *mmu_object;
>>> -			struct work_struct *work;
>>> +			struct mmu_interval_notifier notifier;
>>> +			struct page **pvec;
>>> +			int page_ref;
>>>   		} userptr;
>>>   #endif
>>>   
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
>>> index bf61b88a2113..e7d7650072c5 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
>>> @@ -226,7 +226,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
>>>   	 * get_pages backends we should be better able to handle the
>>>   	 * cancellation of the async task in a more uniform manner.
>>>   	 */
>>> -	if (!pages && !i915_gem_object_needs_async_cancel(obj))
>>> +	if (!pages)
>>>   		pages = ERR_PTR(-EINVAL);
>>>   
>>>   	if (!IS_ERR(pages))
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> index b466ab2def4d..1e42fbc68697 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> @@ -2,10 +2,39 @@
>>>    * SPDX-License-Identifier: MIT
>>>    *
>>>    * Copyright © 2012-2014 Intel Corporation
>>> + *
>>> + * Based on amdgpu_mn, which bears the following notice:
>>> + *
>>> + * Copyright 2014 Advanced Micro Devices, Inc.
>>> + * All Rights Reserved.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the
>>> + * "Software"), to deal in the Software without restriction, including
>>> + * without limitation the rights to use, copy, modify, merge, publish,
>>> + * distribute, sub license, and/or sell copies of the Software, and to
>>> + * permit persons to whom the Software is furnished to do so, subject to
>>> + * the following conditions:
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
>>> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
>>> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
>>> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + * The above copyright notice and this permission notice (including the
>>> + * next paragraph) shall be included in all copies or substantial portions
>>> + * of the Software.
>>> + *
>>> + */
>>> +/*
>>> + * Authors:
>>> + *    Christian König <christian.koenig@amd.com>
>>>    */
>>>   
>>>   #include <linux/mmu_context.h>
>>> -#include <linux/mmu_notifier.h>
>>>   #include <linux/mempolicy.h>
>>>   #include <linux/swap.h>
>>>   #include <linux/sched/mm.h>
>>> @@ -15,373 +44,121 @@
>>>   #include "i915_gem_object.h"
>>>   #include "i915_scatterlist.h"
>>>   
>>> -#if defined(CONFIG_MMU_NOTIFIER)
>>> -
>>> -struct i915_mm_struct {
>>> -	struct mm_struct *mm;
>>> -	struct drm_i915_private *i915;
>>> -	struct i915_mmu_notifier *mn;
>>> -	struct hlist_node node;
>>> -	struct kref kref;
>>> -	struct rcu_work work;
>>> -};
>>> -
>>> -#include <linux/interval_tree.h>
>>> -
>>> -struct i915_mmu_notifier {
>>> -	spinlock_t lock;
>>> -	struct hlist_node node;
>>> -	struct mmu_notifier mn;
>>> -	struct rb_root_cached objects;
>>> -	struct i915_mm_struct *mm;
>>> -};
>>> -
>>> -struct i915_mmu_object {
>>> -	struct i915_mmu_notifier *mn;
>>> -	struct drm_i915_gem_object *obj;
>>> -	struct interval_tree_node it;
>>> -};
>>> +#ifdef CONFIG_MMU_NOTIFIER
>>>   
>>> -static void add_object(struct i915_mmu_object *mo)
>>> +/**
>>> + * i915_gem_userptr_invalidate - callback to notify about mm change
>>> + *
>>> + * @mni: the range (mm) is about to update
>>> + * @range: details on the invalidation
>>> + * @cur_seq: Value to pass to mmu_interval_set_seq()
>>> + *
>>> + * Block for operations on BOs to finish and mark pages as accessed and
>>> + * potentially dirty.
>>> + */
>>> +static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>> +					const struct mmu_notifier_range *range,
>>> +					unsigned long cur_seq)
>>>   {
>>> -	GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb));
>>> -	interval_tree_insert(&mo->it, &mo->mn->objects);
>>> -}
>>> +	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);
>>> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>> +	long r;
>>>   
>>> -static void del_object(struct i915_mmu_object *mo)
>>> -{
>>> -	if (RB_EMPTY_NODE(&mo->it.rb))
>>> -		return;
>>> +	if (!mmu_notifier_range_blockable(range))
>>> +		return false;
>>>   
>>> -	interval_tree_remove(&mo->it, &mo->mn->objects);
>>> -	RB_CLEAR_NODE(&mo->it.rb);
>>> -}
>>> +	spin_lock(&i915->mm.notifier_lock);
>>>   
>>> -static void
>>> -__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
>>> -{
>>> -	struct i915_mmu_object *mo = obj->userptr.mmu_object;
>>> +	mmu_interval_set_seq(mni, cur_seq);
>>> +
>>> +	spin_unlock(&i915->mm.notifier_lock);
>>>   
>>>   	/*
>>> -	 * During mm_invalidate_range we need to cancel any userptr that
>>> -	 * overlaps the range being invalidated. Doing so requires the
>>> -	 * struct_mutex, and that risks recursion. In order to cause
>>> -	 * recursion, the user must alias the userptr address space with
>>> -	 * a GTT mmapping (possible with a MAP_FIXED) - then when we have
>>> -	 * to invalidate that mmaping, mm_invalidate_range is called with
>>> -	 * the userptr address *and* the struct_mutex held.  To prevent that
>>> -	 * we set a flag under the i915_mmu_notifier spinlock to indicate
>>> -	 * whether this object is valid.
>>> +	 * We don't wait when the process is exiting. This is valid
>>> +	 * because the object will be cleaned up anyway.
>>> +	 *
>>> +	 * This is also temporarily required as a hack, because we
>>> +	 * cannot currently force non-consistent batch buffers to
>>> +	 * preempt and reschedule; waiting on them would hang
>>> +	 * processes on exit.
>>>   	 */
>>> -	if (!mo)
>>> -		return;
>>> -
>>> -	spin_lock(&mo->mn->lock);
>>> -	if (value)
>>> -		add_object(mo);
>>> -	else
>>> -		del_object(mo);
>>> -	spin_unlock(&mo->mn->lock);
>>> -}
>>> -
>>> -static int
>>> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>> -				  const struct mmu_notifier_range *range)
>>> -{
>>> -	struct i915_mmu_notifier *mn =
>>> -		container_of(_mn, struct i915_mmu_notifier, mn);
>>> -	struct interval_tree_node *it;
>>> -	unsigned long end;
>>> -	int ret = 0;
>>> -
>>> -	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>>> -		return 0;
>>> -
>>> -	/* interval ranges are inclusive, but invalidate range is exclusive */
>>> -	end = range->end - 1;
>>> -
>>> -	spin_lock(&mn->lock);
>>> -	it = interval_tree_iter_first(&mn->objects, range->start, end);
>>> -	while (it) {
>>> -		struct drm_i915_gem_object *obj;
>>> -
>>> -		if (!mmu_notifier_range_blockable(range)) {
>>> -			ret = -EAGAIN;
>>> -			break;
>>> -		}
>>> -
>>> -		/*
>>> -		 * The mmu_object is released late when destroying the
>>> -		 * GEM object so it is entirely possible to gain a
>>> -		 * reference on an object in the process of being freed
>>> -		 * since our serialisation is via the spinlock and not
>>> -		 * the struct_mutex - and consequently use it after it
>>> -		 * is freed and then double free it. To prevent that
>>> -		 * use-after-free we only acquire a reference on the
>>> -		 * object if it is not in the process of being destroyed.
>>> -		 */
>>> -		obj = container_of(it, struct i915_mmu_object, it)->obj;
>>> -		if (!kref_get_unless_zero(&obj->base.refcount)) {
>>> -			it = interval_tree_iter_next(it, range->start, end);
>>> -			continue;
>>> -		}
>>> -		spin_unlock(&mn->lock);
>>> -
>>> -		ret = i915_gem_object_unbind(obj,
>>> -					     I915_GEM_OBJECT_UNBIND_ACTIVE |
>>> -					     I915_GEM_OBJECT_UNBIND_BARRIER);
>>> -		if (ret == 0)
>>> -			ret = __i915_gem_object_put_pages(obj);
>>> -		i915_gem_object_put(obj);
>>> -		if (ret)
>>> -			return ret;
>>> +	if (current->flags & PF_EXITING)
>>> +		return true;
>>>   
>>> -		spin_lock(&mn->lock);
>>> -
>>> -		/*
>>> -		 * As we do not (yet) protect the mmu from concurrent insertion
>>> -		 * over this range, there is no guarantee that this search will
>>> -		 * terminate given a pathologic workload.
>>> -		 */
>>> -		it = interval_tree_iter_first(&mn->objects, range->start, end);
>>> -	}
>>> -	spin_unlock(&mn->lock);
>>> -
>>> -	return ret;
>>> +	/* We will unbind on next submission; we still hold userptr page pins. */
>>> +	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>> +				      MAX_SCHEDULE_TIMEOUT);
>>> +	if (r <= 0)
>>> +		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>> I think that since Linux 5.9, where fork() no longer sets up COW on pinned pages (and we do in fact still pin the pages), this fence wait should be removed, together with the PF_EXITING special case. It does not improve anything, but it creates hangs that only hangcheck / watchdog can resolve. If future work stops pinning the pages, which is the direction we are moving in, let's re-add the wait when needed.
>>
>>>   
>>> +	return true;
>>>   }
>>>   
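>> To illustrate the suggestion above: with the wait and the PF_EXITING check dropped, the callback would reduce to roughly this (a sketch only):
>>
>> static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>> 					const struct mmu_notifier_range *range,
>> 					unsigned long cur_seq)
>> {
>> 	struct drm_i915_gem_object *obj =
>> 		container_of(mni, struct drm_i915_gem_object, userptr.notifier);
>> 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>
>> 	if (!mmu_notifier_range_blockable(range))
>> 		return false;
>>
>> 	/* Bump the seqno; the next submit_init() will then repin fresh pages. */
>> 	spin_lock(&i915->mm.notifier_lock);
>> 	mmu_interval_set_seq(mni, cur_seq);
>> 	spin_unlock(&i915->mm.notifier_lock);
>>
>> 	return true;
>> }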
>>> -static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
>>> -	.invalidate_range_start = userptr_mn_invalidate_range_start,
>>> +static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = {
>>> +	.invalidate = i915_gem_userptr_invalidate,
>>>   };
>>>   
>>> -static struct i915_mmu_notifier *
>>> -i915_mmu_notifier_create(struct i915_mm_struct *mm)
>>> -{
>>> -	struct i915_mmu_notifier *mn;
>>> -
>>> -	mn = kmalloc(sizeof(*mn), GFP_KERNEL);
>>> -	if (mn == NULL)
>>> -		return ERR_PTR(-ENOMEM);
>>> -
>>> -	spin_lock_init(&mn->lock);
>>> -	mn->mn.ops = &i915_gem_userptr_notifier;
>>> -	mn->objects = RB_ROOT_CACHED;
>>> -	mn->mm = mm;
>>> -
>>> -	return mn;
>>> -}
>>> -
>>> -static void
>>> -i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
>>> -{
>>> -	struct i915_mmu_object *mo;
>>> -
>>> -	mo = fetch_and_zero(&obj->userptr.mmu_object);
>>> -	if (!mo)
>>> -		return;
>>> -
>>> -	spin_lock(&mo->mn->lock);
>>> -	del_object(mo);
>>> -	spin_unlock(&mo->mn->lock);
>>> -	kfree(mo);
>>> -}
>>> -
>>> -static struct i915_mmu_notifier *
>>> -i915_mmu_notifier_find(struct i915_mm_struct *mm)
>>> -{
>>> -	struct i915_mmu_notifier *mn, *old;
>>> -	int err;
>>> -
>>> -	mn = READ_ONCE(mm->mn);
>>> -	if (likely(mn))
>>> -		return mn;
>>> -
>>> -	mn = i915_mmu_notifier_create(mm);
>>> -	if (IS_ERR(mn))
>>> -		return mn;
>>> -
>>> -	err = mmu_notifier_register(&mn->mn, mm->mm);
>>> -	if (err) {
>>> -		kfree(mn);
>>> -		return ERR_PTR(err);
>>> -	}
>>> -
>>> -	old = cmpxchg(&mm->mn, NULL, mn);
>>> -	if (old) {
>>> -		mmu_notifier_unregister(&mn->mn, mm->mm);
>>> -		kfree(mn);
>>> -		mn = old;
>>> -	}
>>> -
>>> -	return mn;
>>> -}
>>> -
>>>   static int
>>>   i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
>>>   {
>>> -	struct i915_mmu_notifier *mn;
>>> -	struct i915_mmu_object *mo;
>>> -
>>> -	if (GEM_WARN_ON(!obj->userptr.mm))
>>> -		return -EINVAL;
>>> -
>>> -	mn = i915_mmu_notifier_find(obj->userptr.mm);
>>> -	if (IS_ERR(mn))
>>> -		return PTR_ERR(mn);
>>> -
>>> -	mo = kzalloc(sizeof(*mo), GFP_KERNEL);
>>> -	if (!mo)
>>> -		return -ENOMEM;
>>> -
>>> -	mo->mn = mn;
>>> -	mo->obj = obj;
>>> -	mo->it.start = obj->userptr.ptr;
>>> -	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
>>> -	RB_CLEAR_NODE(&mo->it.rb);
>>> -
>>> -	obj->userptr.mmu_object = mo;
>>> -	return 0;
>>> -}
>>> -
>>> -static void
>>> -i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
>>> -		       struct mm_struct *mm)
>>> -{
>>> -	if (mn == NULL)
>>> -		return;
>>> -
>>> -	mmu_notifier_unregister(&mn->mn, mm);
>>> -	kfree(mn);
>>> -}
>>> -
>>> -
>>> -static struct i915_mm_struct *
>>> -__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
>>> -{
>>> -	struct i915_mm_struct *it, *mm = NULL;
>>> -
>>> -	rcu_read_lock();
>>> -	hash_for_each_possible_rcu(i915->mm_structs,
>>> -				   it, node,
>>> -				   (unsigned long)real)
>>> -		if (it->mm == real && kref_get_unless_zero(&it->kref)) {
>>> -			mm = it;
>>> -			break;
>>> -		}
>>> -	rcu_read_unlock();
>>> -
>>> -	return mm;
>>> +	return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm,
>>> +					    obj->userptr.ptr, obj->base.size,
>>> +					    &i915_gem_userptr_notifier_ops);
>>>   }
>>>   
>>> -static int
>>> -i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
>>> +static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
>>>   {
>>>   	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>> -	struct i915_mm_struct *mm, *new;
>>> -	int ret = 0;
>>> -
>>> -	/* During release of the GEM object we hold the struct_mutex. This
>>> -	 * precludes us from calling mmput() at that time as that may be
>>> -	 * the last reference and so call exit_mmap(). exit_mmap() will
>>> -	 * attempt to reap the vma, and if we were holding a GTT mmap
>>> -	 * would then call drm_gem_vm_close() and attempt to reacquire
>>> -	 * the struct mutex. So in order to avoid that recursion, we have
>>> -	 * to defer releasing the mm reference until after we drop the
>>> -	 * struct_mutex, i.e. we need to schedule a worker to do the clean
>>> -	 * up.
>>> -	 */
>>> -	mm = __i915_mm_struct_find(i915, current->mm);
>>> -	if (mm)
>>> -		goto out;
>>> +	struct page **pvec = NULL;
>>>   
>>> -	new = kmalloc(sizeof(*mm), GFP_KERNEL);
>>> -	if (!new)
>>> -		return -ENOMEM;
>>> -
>>> -	kref_init(&new->kref);
>>> -	new->i915 = to_i915(obj->base.dev);
>>> -	new->mm = current->mm;
>>> -	new->mn = NULL;
>>> -
>>> -	spin_lock(&i915->mm_lock);
>>> -	mm = __i915_mm_struct_find(i915, current->mm);
>>> -	if (!mm) {
>>> -		hash_add_rcu(i915->mm_structs,
>>> -			     &new->node,
>>> -			     (unsigned long)new->mm);
>>> -		mmgrab(current->mm);
>>> -		mm = new;
>>> +	spin_lock(&i915->mm.notifier_lock);
>>> +	if (!--obj->userptr.page_ref) {
>>> +		pvec = obj->userptr.pvec;
>>> +		obj->userptr.pvec = NULL;
>>>   	}
>>> -	spin_unlock(&i915->mm_lock);
>>> -	if (mm != new)
>>> -		kfree(new);
>>> +	GEM_BUG_ON(obj->userptr.page_ref < 0);
>>> +	spin_unlock(&i915->mm.notifier_lock);
>>>   
>>> -out:
>>> -	obj->userptr.mm = mm;
>>> -	return ret;
>>> -}
>>> -
>>> -static void
>>> -__i915_mm_struct_free__worker(struct work_struct *work)
>>> -{
>>> -	struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work);
>>> -
>>> -	i915_mmu_notifier_free(mm->mn, mm->mm);
>>> -	mmdrop(mm->mm);
>>> -	kfree(mm);
>>> -}
>>> -
>>> -static void
>>> -__i915_mm_struct_free(struct kref *kref)
>>> -{
>>> -	struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
>>> -
>>> -	spin_lock(&mm->i915->mm_lock);
>>> -	hash_del_rcu(&mm->node);
>>> -	spin_unlock(&mm->i915->mm_lock);
>>> -
>>> -	INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker);
>>> -	queue_rcu_work(system_wq, &mm->work);
>>> -}
>>> -
>>> -static void
>>> -i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
>>> -{
>>> -	if (obj->userptr.mm == NULL)
>>> -		return;
>>> +	if (pvec) {
>>> +		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>>   
>>> -	kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free);
>>> -	obj->userptr.mm = NULL;
>>> +		unpin_user_pages(pvec, num_pages);
>>> +		kfree(pvec);
>> IIRC, CQ spotted that this should be kvfree(), since pvec is allocated with kvmalloc_array(), right?
>>
>>> +	}
>>>   }
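>> I.e. the tail of the function would presumably become:
>>
>> 	if (pvec) {
>> 		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>
>> 		unpin_user_pages(pvec, num_pages);
>> 		kvfree(pvec);
>> 	}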
>>>   
>>> -struct get_pages_work {
>>> -	struct work_struct work;
>>> -	struct drm_i915_gem_object *obj;
>>> -	struct task_struct *task;
>>> -};
>>> -
>>> -static struct sg_table *
>>> -__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>>> -			       struct page **pvec, unsigned long num_pages)
>>> +static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
>>>   {
>>> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>> +	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>>   	unsigned int max_segment = i915_sg_segment_size();
>>>   	struct sg_table *st;
>>>   	unsigned int sg_page_sizes;
>>>   	struct scatterlist *sg;
>>> +	struct page **pvec;
>>>   	int ret;
>>>   
>>>   	st = kmalloc(sizeof(*st), GFP_KERNEL);
>>>   	if (!st)
>>> -		return ERR_PTR(-ENOMEM);
>>> +		return -ENOMEM;
>>> +
>>> +	spin_lock(&i915->mm.notifier_lock);
>>> +	if (GEM_WARN_ON(!obj->userptr.page_ref)) {
>>> +		spin_unlock(&i915->mm.notifier_lock);
>>> +		ret = -EFAULT;
>>> +		goto err_free;
>>> +	}
>>> +
>>> +	obj->userptr.page_ref++;
>>> +	pvec = obj->userptr.pvec;
>>> +	spin_unlock(&i915->mm.notifier_lock);
>>>   
>>>   alloc_table:
>>>   	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
>>>   					 num_pages << PAGE_SHIFT, max_segment,
>>>   					 NULL, 0, GFP_KERNEL);
>>>   	if (IS_ERR(sg)) {
>>> -		kfree(st);
>>> -		return ERR_CAST(sg);
>>> +		ret = PTR_ERR(sg);
>>> +		goto err;
>>>   	}
>>>   
>>>   	ret = i915_gem_gtt_prepare_pages(obj, st);
>>> @@ -393,203 +170,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>>>   			goto alloc_table;
>>>   		}
>>>   
>>> -		kfree(st);
>>> -		return ERR_PTR(ret);
>>> +		goto err;
>>>   	}
>>>   
>>>   	sg_page_sizes = i915_sg_page_sizes(st->sgl);
>>>   
>>>   	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
>>>   
>>> -	return st;
>>> -}
>>> -
>>> -static void
>>> -__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
>>> -{
>>> -	struct get_pages_work *work = container_of(_work, typeof(*work), work);
>>> -	struct drm_i915_gem_object *obj = work->obj;
>>> -	const unsigned long npages = obj->base.size >> PAGE_SHIFT;
>>> -	unsigned long pinned;
>>> -	struct page **pvec;
>>> -	int ret;
>>> -
>>> -	ret = -ENOMEM;
>>> -	pinned = 0;
>>> -
>>> -	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
>>> -	if (pvec != NULL) {
>>> -		struct mm_struct *mm = obj->userptr.mm->mm;
>>> -		unsigned int flags = 0;
>>> -		int locked = 0;
>>> -
>>> -		if (!i915_gem_object_is_readonly(obj))
>>> -			flags |= FOLL_WRITE;
>>> -
>>> -		ret = -EFAULT;
>>> -		if (mmget_not_zero(mm)) {
>>> -			while (pinned < npages) {
>>> -				if (!locked) {
>>> -					mmap_read_lock(mm);
>>> -					locked = 1;
>>> -				}
>>> -				ret = pin_user_pages_remote
>>> -					(mm,
>>> -					 obj->userptr.ptr + pinned * PAGE_SIZE,
>>> -					 npages - pinned,
>>> -					 flags,
>>> -					 pvec + pinned, NULL, &locked);
>>> -				if (ret < 0)
>>> -					break;
>>> -
>>> -				pinned += ret;
>>> -			}
>>> -			if (locked)
>>> -				mmap_read_unlock(mm);
>>> -			mmput(mm);
>>> -		}
>>> -	}
>>> -
>>> -	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
>>> -	if (obj->userptr.work == &work->work) {
>>> -		struct sg_table *pages = ERR_PTR(ret);
>>> -
>>> -		if (pinned == npages) {
>>> -			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
>>> -							       npages);
>>> -			if (!IS_ERR(pages)) {
>>> -				pinned = 0;
>>> -				pages = NULL;
>>> -			}
>>> -		}
>>> -
>>> -		obj->userptr.work = ERR_CAST(pages);
>>> -		if (IS_ERR(pages))
>>> -			__i915_gem_userptr_set_active(obj, false);
>>> -	}
>>> -	mutex_unlock(&obj->mm.lock);
>>> -
>>> -	unpin_user_pages(pvec, pinned);
>>> -	kvfree(pvec);
>>> -
>>> -	i915_gem_object_put(obj);
>>> -	put_task_struct(work->task);
>>> -	kfree(work);
>>> -}
>>> -
>>> -static struct sg_table *
>>> -__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
>>> -{
>>> -	struct get_pages_work *work;
>>> -
>>> -	/* Spawn a worker so that we can acquire the
>>> -	 * user pages without holding our mutex. Access
>>> -	 * to the user pages requires mmap_lock, and we have
>>> -	 * a strict lock ordering of mmap_lock, struct_mutex -
>>> -	 * we already hold struct_mutex here and so cannot
>>> -	 * call gup without encountering a lock inversion.
>>> -	 *
>>> -	 * Userspace will keep on repeating the operation
>>> -	 * (thanks to EAGAIN) until either we hit the fast
>>> -	 * path or the worker completes. If the worker is
>>> -	 * cancelled or superseded, the task is still run
>>> -	 * but the results ignored. (This leads to
>>> -	 * complications that we may have a stray object
>>> -	 * refcount that we need to be wary of when
>>> -	 * checking for existing objects during creation.)
>>> -	 * If the worker encounters an error, it reports
>>> -	 * that error back to this function through
>>> -	 * obj->userptr.work = ERR_PTR.
>>> -	 */
>>> -	work = kmalloc(sizeof(*work), GFP_KERNEL);
>>> -	if (work == NULL)
>>> -		return ERR_PTR(-ENOMEM);
>>> -
>>> -	obj->userptr.work = &work->work;
>>> -
>>> -	work->obj = i915_gem_object_get(obj);
>>> -
>>> -	work->task = current;
>>> -	get_task_struct(work->task);
>>> -
>>> -	INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
>>> -	queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
>>> -
>>> -	return ERR_PTR(-EAGAIN);
>>> -}
>>> -
>>> -static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
>>> -{
>>> -	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>> -	struct mm_struct *mm = obj->userptr.mm->mm;
>>> -	struct page **pvec;
>>> -	struct sg_table *pages;
>>> -	bool active;
>>> -	int pinned;
>>> -	unsigned int gup_flags = 0;
>>> -
>>> -	/* If userspace should engineer that these pages are replaced in
>>> -	 * the vma between us binding this page into the GTT and completion
>>> -	 * of rendering... Their loss. If they change the mapping of their
>>> -	 * pages they need to create a new bo to point to the new vma.
>>> -	 *
>>> -	 * However, that still leaves open the possibility of the vma
>>> -	 * being copied upon fork. Which falls under the same userspace
>>> -	 * synchronisation issue as a regular bo, except that this time
>>> -	 * the process may not be expecting that a particular piece of
>>> -	 * memory is tied to the GPU.
>>> -	 *
>>> -	 * Fortunately, we can hook into the mmu_notifier in order to
>>> -	 * discard the page references prior to anything nasty happening
>>> -	 * to the vma (discard or cloning) which should prevent the more
>>> -	 * egregious cases from causing harm.
>>> -	 */
>>> -
>>> -	if (obj->userptr.work) {
>>> -		/* active flag should still be held for the pending work */
>>> -		if (IS_ERR(obj->userptr.work))
>>> -			return PTR_ERR(obj->userptr.work);
>>> -		else
>>> -			return -EAGAIN;
>>> -	}
>>> -
>>> -	pvec = NULL;
>>> -	pinned = 0;
>>> -
>>> -	if (mm == current->mm) {
>>> -		pvec = kvmalloc_array(num_pages, sizeof(struct page *),
>>> -				      GFP_KERNEL |
>>> -				      __GFP_NORETRY |
>>> -				      __GFP_NOWARN);
>>> -		if (pvec) {
>>> -			/* defer to worker if malloc fails */
>>> -			if (!i915_gem_object_is_readonly(obj))
>>> -				gup_flags |= FOLL_WRITE;
>>> -			pinned = pin_user_pages_fast_only(obj->userptr.ptr,
>>> -							  num_pages, gup_flags,
>>> -							  pvec);
>>> -		}
>>> -	}
>>> -
>>> -	active = false;
>>> -	if (pinned < 0) {
>>> -		pages = ERR_PTR(pinned);
>>> -		pinned = 0;
>>> -	} else if (pinned < num_pages) {
>>> -		pages = __i915_gem_userptr_get_pages_schedule(obj);
>>> -		active = pages == ERR_PTR(-EAGAIN);
>>> -	} else {
>>> -		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
>>> -		active = !IS_ERR(pages);
>>> -	}
>>> -	if (active)
>>> -		__i915_gem_userptr_set_active(obj, true);
>>> -
>>> -	if (IS_ERR(pages))
>>> -		unpin_user_pages(pvec, pinned);
>>> -	kvfree(pvec);
>>> +	return 0;
>>>   
>>> -	return PTR_ERR_OR_ZERO(pages);
>>> +err:
>>> +	i915_gem_object_userptr_drop_ref(obj);
>>> +err_free:
>>> +	kfree(st);
>>> +	return ret;
>>>   }
>>>   
>>>   static void
>>> @@ -599,9 +193,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>>>   	struct sgt_iter sgt_iter;
>>>   	struct page *page;
>>>   
>>> -	/* Cancel any inflight work and force them to restart their gup */
>>> -	obj->userptr.work = NULL;
>>> -	__i915_gem_userptr_set_active(obj, false);
>>>   	if (!pages)
>>>   		return;
>>>   
>>> @@ -641,19 +232,161 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>>>   		}
>>>   
>>>   		mark_page_accessed(page);
>>> -		unpin_user_page(page);
>>>   	}
>>>   	obj->mm.dirty = false;
>>>   
>>>   	sg_free_table(pages);
>>>   	kfree(pages);
>>> +
>>> +	i915_gem_object_userptr_drop_ref(obj);
>>> +}
>>> +
>>> +static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
>>> +{
>>> +	struct sg_table *pages;
>>> +	int err;
>>> +
>>> +	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
>>> +	if (err)
>>> +		return err;
>>> +
>>> +	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
>>> +		return -EBUSY;
>>> +
>>> +	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
>>> +
>>> +	pages = __i915_gem_object_unset_pages(obj);
>>> +	if (!IS_ERR_OR_NULL(pages))
>>> +		i915_gem_userptr_put_pages(obj, pages);
>>> +
>>> +	if (get_pages)
>>> +		err = ____i915_gem_object_get_pages(obj);
>>> +	mutex_unlock(&obj->mm.lock);
>>> +
>>> +	return err;
>>> +}
>>> +
>>> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
>>> +{
>>> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>> +	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>>> +	struct page **pvec;
>>> +	unsigned int gup_flags = 0;
>>> +	unsigned long notifier_seq;
>>> +	int pinned, ret;
>>> +
>>> +	if (obj->userptr.notifier.mm != current->mm)
>>> +		return -EFAULT;
>>> +
>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
>>> +	ret = i915_gem_object_userptr_unbind(obj, false);
>>> +	i915_gem_object_unlock(obj);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
>>> +
>>> +	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
>>> +	if (!pvec)
>>> +		return -ENOMEM;
>>> +
>>> +	if (!i915_gem_object_is_readonly(obj))
>>> +		gup_flags |= FOLL_WRITE;
>>> +
>>> +	pinned = ret = 0;
>>> +	while (pinned < num_pages) {
>>> +		ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
>>> +					  num_pages - pinned, gup_flags,
>>> +					  &pvec[pinned]);
>>> +		if (ret < 0)
>>> +			goto out;
>>> +
>>> +		pinned += ret;
>>> +	}
>>> +	ret = 0;
>>> +
>>> +	spin_lock(&i915->mm.notifier_lock);
>> I think we can improve the locking here a lot by having the object lock protect the object state, and by taking the driver-wide notifier lock only in execbuf / userptr_invalidate. If, in addition, we use an rwlock as the notifier lock, taken in read mode in execbuf, any potential global lock contention can be practically eliminated. But that's perhaps for a future improvement.
>>> +
>>> +	if (mmu_interval_read_retry(&obj->userptr.notifier,
>>> +		!obj->userptr.page_ref ? notifier_seq :
>>> +		obj->userptr.notifier_seq)) {
>>> +		ret = -EAGAIN;
>>> +		goto out_unlock;
>>> +	}
>>> +
>>> +	if (!obj->userptr.page_ref++) {
>>> +		obj->userptr.pvec = pvec;
>>> +		obj->userptr.notifier_seq = notifier_seq;
>>> +
>>> +		pvec = NULL;
>>> +	}
>> In addition, if we can call get_pages() here to take the page_ref, we can eliminate one page_ref and the use of _userptr_submit_fini(). That would of course require the object lock, but we'd already hold that for the object state as above.
>>
> I guess it could be optimized by not doing this dance...
>
> Perhaps something like this?
>
> i915_gem_object_set_pages(pvec, seq)
> {
> 	i915_gem_object_lock_interruptible();
>
> 	/*
> 	 * Run the set_pages() function body directly, with pvec and the
> 	 * notifier seqno as arguments. It checks for collisions and
> 	 * returns -EAGAIN if needed, or 0 if the notifier seqno was
> 	 * already set to the same value.
> 	 */
>
> 	i915_gem_object_unlock();
> }

Yeah, I had something like the attached in mind. I think getting rid of 
submit_fini() would be helpful moving forward.
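
Spelled out, the helper you sketch above might look roughly like this
(only a sketch of the idea; the name, the exact -EAGAIN conditions and
the page_ref handling are illustrative, not a finished implementation):

static int i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
				     struct page **pvec,
				     unsigned long notifier_seq)
{
	int ret;

	ret = i915_gem_object_lock_interruptible(obj, NULL);
	if (ret)
		return ret;

	if (mmu_interval_read_retry(&obj->userptr.notifier, notifier_seq)) {
		/* Invalidated while the caller was pinning; retry. */
		ret = -EAGAIN;
	} else if (obj->userptr.pvec &&
		   obj->userptr.notifier_seq == notifier_seq) {
		/* Same seqno and pages already set up; nothing to do. */
		ret = 0;
	} else {
		obj->userptr.pvec = pvec;
		obj->userptr.notifier_seq = notifier_seq;
		ret = ____i915_gem_object_get_pages(obj);
	}

	i915_gem_object_unlock(obj);
	return ret;
}

Ownership of pvec on the early-return paths would of course need the
same care as in the attached patch.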

/Thomas



[-- Attachment #2: 0001-drm-i915-Simplify-userptr-locking.patch --]
[-- Type: text/x-patch, Size: 7995 bytes --]

From de14ad02b8c5100cc0834711f254dbb15561c60b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20Hellstr=C3=B6m?= <thomas.hellstrom@linux.intel.com>
Date: Mon, 8 Feb 2021 17:45:41 +0100
Subject: [PATCH] drm/i915: Simplify userptr locking
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Use an rwlock instead of spinlock for the global notifier lock
to reduce risk of contention in execbuf.

Protect object state with the object lock whenever possible, rather
than with the global notifier lock.

Don't take an explicit page_ref in userptr_submit_init() but rather
call get_pages() after obtaining the page list so that
get_pages() holds the page_ref. This means we don't need to call
userptr_submit_fini().

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  4 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 67 ++++++-------------
 drivers/gpu/drm/i915/i915_drv.h               |  2 +-
 3 files changed, 25 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 1ac272f4634a..5444e88baf4e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2261,7 +2261,7 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 
 #ifdef CONFIG_MMU_NOTIFIER
 	if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
-		spin_lock(&eb->i915->mm.notifier_lock);
+		read_lock(&eb->i915->mm.notifier_lock);
 
 		/*
 		 * count is always at least 1, otherwise __EXEC_USERPTR_USED
@@ -2279,7 +2279,7 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 				break;
 		}
 
-		spin_unlock(&eb->i915->mm.notifier_lock);
+		read_unlock(&eb->i915->mm.notifier_lock);
 	}
 #endif
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 44fc0df89784..7366b47a742c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -67,11 +67,11 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
 	if (!mmu_notifier_range_blockable(range))
 		return false;
 
-	spin_lock(&i915->mm.notifier_lock);
+	write_lock(&i915->mm.notifier_lock);
 
 	mmu_interval_set_seq(mni, cur_seq);
 
-	spin_unlock(&i915->mm.notifier_lock);
+	write_unlock(&i915->mm.notifier_lock);
 
 	/* During exit there's no need to wait */
 	if (current->flags & PF_EXITING)
@@ -100,16 +100,15 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
 
 static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct page **pvec = NULL;
 
-	spin_lock(&i915->mm.notifier_lock);
+	assert_object_held_shared(obj);
+
 	if (!--obj->userptr.page_ref) {
 		pvec = obj->userptr.pvec;
 		obj->userptr.pvec = NULL;
 	}
 	GEM_BUG_ON(obj->userptr.page_ref < 0);
-	spin_unlock(&i915->mm.notifier_lock);
 
 	if (pvec) {
 		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
@@ -121,7 +120,6 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
 
 static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
@@ -134,16 +132,13 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 	if (!st)
 		return -ENOMEM;
 
-	spin_lock(&i915->mm.notifier_lock);
-	if (GEM_WARN_ON(!obj->userptr.page_ref)) {
-		spin_unlock(&i915->mm.notifier_lock);
-		ret = -EFAULT;
+	if (!obj->userptr.page_ref) {
+		ret = -EAGAIN;
 		goto err_free;
 	}
 
 	obj->userptr.page_ref++;
 	pvec = obj->userptr.pvec;
-	spin_unlock(&i915->mm.notifier_lock);
 
 alloc_table:
 	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
@@ -234,7 +229,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 	i915_gem_object_userptr_drop_ref(obj);
 }
 
-static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
+static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
 	int err;
@@ -252,15 +247,11 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 	if (!IS_ERR_OR_NULL(pages))
 		i915_gem_userptr_put_pages(obj, pages);
 
-	if (get_pages)
-		err = ____i915_gem_object_get_pages(obj);
-
 	return err;
 }
 
 int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
 	struct page **pvec;
 	unsigned int gup_flags = 0;
@@ -270,39 +261,22 @@ int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
 	if (obj->userptr.notifier.mm != current->mm)
 		return -EFAULT;
 
+	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
+
 	ret = i915_gem_object_lock_interruptible(obj, NULL);
 	if (ret)
 		return ret;
 
-	/* optimistically try to preserve current pages while unlocked */
-	if (i915_gem_object_has_pages(obj) &&
-	    !mmu_interval_check_retry(&obj->userptr.notifier,
-				      obj->userptr.notifier_seq)) {
-		spin_lock(&i915->mm.notifier_lock);
-		if (obj->userptr.pvec &&
-		    !mmu_interval_read_retry(&obj->userptr.notifier,
-					     obj->userptr.notifier_seq)) {
-			obj->userptr.page_ref++;
-
-			/* We can keep using the current binding, this is the fastpath */
-			ret = 1;
-		}
-		spin_unlock(&i915->mm.notifier_lock);
+	if (notifier_seq == obj->userptr.notifier_seq && obj->userptr.pvec) {
+		i915_gem_object_unlock(obj);
+		return 0;
 	}
 
-	if (!ret) {
-		/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
-		ret = i915_gem_object_userptr_unbind(obj, false);
-	}
+	ret = i915_gem_object_userptr_unbind(obj);
 	i915_gem_object_unlock(obj);
-	if (ret < 0)
+	if (ret)
 		return ret;
 
-	if (ret > 0)
-		return 0;
-
-	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
-
 	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
 	if (!pvec)
 		return -ENOMEM;
@@ -322,7 +296,9 @@ int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
 	}
 	ret = 0;
 
-	spin_lock(&i915->mm.notifier_lock);
+	ret = i915_gem_object_lock_interruptible(obj, NULL);
+	if (ret)
+		goto out;
 
 	if (mmu_interval_read_retry(&obj->userptr.notifier,
 		!obj->userptr.page_ref ? notifier_seq :
@@ -334,12 +310,14 @@ int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
 	if (!obj->userptr.page_ref++) {
 		obj->userptr.pvec = pvec;
 		obj->userptr.notifier_seq = notifier_seq;
-
 		pvec = NULL;
+		ret = ____i915_gem_object_get_pages(obj);
 	}
 
+	obj->userptr.page_ref--;
+
 out_unlock:
-	spin_unlock(&i915->mm.notifier_lock);
+	i915_gem_object_unlock(obj);
 
 out:
 	if (pvec) {
@@ -364,7 +342,6 @@ int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
 
 void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj)
 {
-	i915_gem_object_userptr_drop_ref(obj);
 }
 
 static void
@@ -553,7 +530,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 {
 #ifdef CONFIG_MMU_NOTIFIER
-	spin_lock_init(&dev_priv->mm.notifier_lock);
+	rwlock_init(&dev_priv->mm.notifier_lock);
 #endif
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 320e0f66122d..e88f52013e1e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -595,7 +595,7 @@ struct i915_gem_mm {
 	 * notifier_lock for mmu notifiers, memory may not be allocated
 	 * while holding this lock.
 	 */
-	spinlock_t notifier_lock;
+	rwlock_t notifier_lock;
 #endif
 
 	/* shrinker accounting, also useful for userland debugging */
-- 
2.25.1



* [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Remove obj->mm.lock! (rev17)
  2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (72 preceding siblings ...)
  2021-03-11 14:59 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
@ 2021-03-16  9:10 ` Patchwork
  73 siblings, 0 replies; 82+ messages in thread
From: Patchwork @ 2021-03-16  9:10 UTC (permalink / raw)
  To: Thomas Hellström (Intel); +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev17)
URL   : https://patchwork.freedesktop.org/series/82337/
State : failure

== Summary ==

Applying: drm/i915: Do not share hwsp across contexts any more, v7.
Applying: drm/i915: Pin timeline map after first timeline pin, v3.
Applying: drm/i915: Move cmd parser pinning to execbuffer
Applying: drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
Applying: drm/i915: Ensure we hold the object mutex in pin correctly.
Applying: drm/i915: Add gem object locking to madvise.
Applying: drm/i915: Move HAS_STRUCT_PAGE to obj->flags
Applying: drm/i915: Rework struct phys attachment handling
Applying: drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
Applying: drm/i915: make lockdep slightly happier about execbuf.
Applying: drm/i915: Disable userptr pread/pwrite support.
Applying: drm/i915: No longer allow exporting userptr through dma-buf
Applying: drm/i915: Reject more ioctls for userptr, v2.
Applying: drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
Applying: drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
Applying: drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
Using index info to reconstruct a base tree...
M	drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
M	drivers/gpu/drm/i915/gem/i915_gem_userptr.c
M	drivers/gpu/drm/i915/i915_drv.h
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/i915_drv.h
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/i915_drv.h
Auto-merging drivers/gpu/drm/i915/gem/i915_gem_userptr.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/gem/i915_gem_userptr.c
Auto-merging drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0016 drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".



end of thread

Thread overview: 82+ messages
2021-03-11 13:41 [Intel-gfx] [PATCH v8 00/69] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 01/69] drm/i915: Do not share hwsp across contexts any more, v7 Maarten Lankhorst
2021-03-11 21:22   ` Jason Ekstrand
2021-03-15 12:08     ` Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 02/69] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
2021-03-11 21:44   ` Jason Ekstrand
2021-03-15 12:34     ` Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 03/69] drm/i915: Move cmd parser pinning to execbuffer Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 04/69] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2 Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 05/69] drm/i915: Ensure we hold the object mutex in pin correctly Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 06/69] drm/i915: Add gem object locking to madvise Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 07/69] drm/i915: Move HAS_STRUCT_PAGE to obj->flags Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 08/69] drm/i915: Rework struct phys attachment handling Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 09/69] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2 Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 10/69] drm/i915: make lockdep slightly happier about execbuf Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 11/69] drm/i915: Disable userptr pread/pwrite support Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 12/69] drm/i915: No longer allow exporting userptr through dma-buf Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 13/69] drm/i915: Reject more ioctls for userptr, v2 Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 14/69] drm/i915: Reject UNSYNCHRONIZED " Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 15/69] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 16/69] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7 Maarten Lankhorst
2021-03-11 17:24   ` Thomas Hellström (Intel)
2021-03-15 12:36     ` Maarten Lankhorst
2021-03-16  8:47       ` Thomas Hellström (Intel)
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 17/69] drm/i915: Flatten obj->mm.lock Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 18/69] drm/i915: Populate logical context during first pin Maarten Lankhorst
2021-03-11 13:41 ` [Intel-gfx] [PATCH v8 19/69] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2 Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 20/69] drm/i915: Handle ww locking in init_status_page Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 21/69] drm/i915: Rework clflush to work correctly without obj->mm.lock Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 22/69] drm/i915: Pass ww ctx to intel_pin_to_display_plane Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 23/69] drm/i915: Add object locking to vm_fault_cpu Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 24/69] drm/i915: Move pinning to inside engine_wa_list_verify() Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 25/69] drm/i915: Take reservation lock around i915_vma_pin Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 26/69] drm/i915: Make lrc_init_wa_ctx compatible with ww locking, v3 Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 27/69] drm/i915: Make __engine_unpark() compatible with ww locking Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 28/69] drm/i915: Take obj lock around set_domain ioctl Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 29/69] drm/i915: Defer pin calls in buffer pool until first use by caller Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 30/69] drm/i915: Fix pread/pwrite to work with new locking rules Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 31/69] drm/i915: Fix workarounds selftest, part 1 Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 32/69] drm/i915: Prepare for obj->mm.lock removal, v2 Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 33/69] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 34/69] drm/i915: Add ww locking around vm_access() Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 35/69] drm/i915: Increase ww locking for perf Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 36/69] drm/i915: Lock ww in ucode objects correctly Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 37/69] drm/i915: Add ww locking to dma-buf ops Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 38/69] drm/i915: Add missing ww lock in intel_dsb_prepare Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 39/69] drm/i915: Fix ww locking in shmem_create_from_object Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 40/69] drm/i915: Use a single page table lock for each gtt Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 41/69] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 42/69] drm/i915/selftests: Prepare client blit " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 43/69] drm/i915/selftests: Prepare coherency tests " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 44/69] drm/i915/selftests: Prepare context " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 45/69] drm/i915/selftests: Prepare dma-buf " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 46/69] drm/i915/selftests: Prepare execbuf " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 47/69] drm/i915/selftests: Prepare mman testcases " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 48/69] drm/i915/selftests: Prepare object tests " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 49/69] drm/i915/selftests: Prepare object blit " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 50/69] drm/i915/selftests: Prepare igt_gem_utils " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 51/69] drm/i915/selftests: Prepare context selftest " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 52/69] drm/i915/selftests: Prepare hangcheck " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 53/69] drm/i915/selftests: Prepare execlists and lrc selftests " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 54/69] drm/i915/selftests: Prepare mocs tests " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 55/69] drm/i915/selftests: Prepare ring submission " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 56/69] drm/i915/selftests: Prepare timeline tests " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 57/69] drm/i915/selftests: Prepare i915_request " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 58/69] drm/i915/selftests: Prepare memory region " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 59/69] drm/i915/selftests: Prepare cs engine " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 60/69] drm/i915/selftests: Prepare gtt " Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 61/69] drm/i915: Finally remove obj->mm.lock Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 62/69] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 63/69] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 64/69] drm/i915: Add missing -EDEADLK path in execbuffer ggtt pinning Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 65/69] drm/i915: Fix pin_map in scheduler selftests Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 66/69] drm/i915: Add ww parameter to get_pages() callback Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 67/69] drm/i915: Add ww context to prepare_(read/write) Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 68/69] drm/i915: Pass ww ctx to pin_map Maarten Lankhorst
2021-03-11 13:42 ` [Intel-gfx] [PATCH v8 69/69] drm/i915: Pass ww ctx to i915_gem_object_pin_pages Maarten Lankhorst
2021-03-11 14:27 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev16) Patchwork
2021-03-11 14:28 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-03-11 14:32 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2021-03-11 14:59 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2021-03-16  9:10 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Remove obj->mm.lock! (rev17) Patchwork
