* [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock!
@ 2021-01-05 15:34 Maarten Lankhorst
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6 Maarten Lankhorst
                   ` (70 more replies)
  0 siblings, 71 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:34 UTC (permalink / raw)
  To: intel-gfx

Rebased on top of latest drm-tip.

The userptr changes have been tested against beignet's selftests, and
against intel-compute-runtime with piglit. I added a few ERRs to check
whether the changed paths were hit, but none were returned and there
were no new test failures, so no regressions.

IGT needs fixes, as it still uses some of the deprecated calls.

Test-with: 20201215084825.101358-1-maarten.lankhorst@linux.intel.com

---

Finally there, just needs a lot of fixes!

A lot of places were making calls without holding any object lock;
with the removal of mm.lock we can no longer do this, so those call
sites have to be fixed.
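
For reference, the pattern those call sites are converted to looks
roughly like this. This is a minimal sketch using the i915 ww helpers;
some_operation() stands in for whatever used to rely on mm.lock:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = some_operation(obj); /* e.g. pinning the backing pages */
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);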

Phys page handling has to be redone: nothing protects the obj->ops
structure, so we remove the swapping of it and move HAS_STRUCT_PAGE to
obj->flags instead.

Userptr locking was inverted, which we previously tried to get around
with a workqueue. We correct the lock ordering and acquire the userptr
pages before taking any ww locks. This fits the locking hierarchy
better, as acquiring the pages may require mmap_sem.
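
In rough outline, the new ordering looks like this. The submit_init/
submit_done split matches the userptr patches later in this series;
the surrounding flow is illustrative only:

	/* 1: acquire the pages; may take mmap_sem, so no ww locks yet */
	err = i915_gem_object_userptr_submit_init(obj);
	if (err)
		return err;

	/* 2: take the ww locks and build the request as usual */

	/* 3: after submission, check the pages did not change under us */
	err = i915_gem_object_userptr_submit_done(obj);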

The previous version broke gem_exec_schedule@pi-shared/distinct-iova,
but now that we only unbind when required, this is fixed.

We also have to fix some of the dma work: the command parser and
clflush are reworked to put all memory allocations and pinning in the
preparation step, so that the async work can pass the fence
annotations.
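
As a sketch of the resulting shape, modelled on the clflush rework (the
ops layout follows i915_sw_fence_work.h; the body is illustrative):

	/*
	 * All pinning and allocation happens synchronously at submission;
	 * the worker may neither allocate nor take locks, so it only
	 * consumes what was set up beforehand.
	 */
	static void clflush_work(struct dma_fence_work *base)
	{
		struct clflush *clflush = container_of(base, typeof(*clflush), base);

		__do_clflush(clflush->obj); /* pages already pinned */
	}

	static const struct dma_fence_work_ops clflush_ops = {
		.name = "clflush",
		.work = clflush_work,
		.release = clflush_release,
	};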

In a few places, such as igt_spinner and execlists, we move part of
the initialization to the first pin, because we need the ww lock held
there and it is easier that way.

Finally we convert all selftests, and then remove obj->mm.lock!

Compared to the last version, I moved gt_revoke() slightly to fix a
lockdep splat between the reset mutex and the mmu notifier.

Maarten Lankhorst (62):
  drm/i915: Do not share hwsp across contexts any more, v6
  drm/i915: Pin timeline map after first timeline pin, v3.
  drm/i915: Move cmd parser pinning to execbuffer
  drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
  drm/i915: Ensure we hold the object mutex in pin correctly.
  drm/i915: Add gem object locking to madvise.
  drm/i915: Move HAS_STRUCT_PAGE to obj->flags
  drm/i915: Rework struct phys attachment handling
  drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
  drm/i915: make lockdep slightly happier about execbuf.
  drm/i915: Disable userptr pread/pwrite support.
  drm/i915: No longer allow exporting userptr through dma-buf
  drm/i915: Reject more ioctls for userptr
  drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
  drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
  drm/i915: Fix userptr so we do not have to worry about obj->mm.lock,
    v5.
  drm/i915: Flatten obj->mm.lock
  drm/i915: Populate logical context during first pin.
  drm/i915: Make ring submission compatible with obj->mm.lock removal,
    v2.
  drm/i915: Handle ww locking in init_status_page
  drm/i915: Rework clflush to work correctly without obj->mm.lock.
  drm/i915: Pass ww ctx to intel_pin_to_display_plane
  drm/i915: Add object locking to vm_fault_cpu
  drm/i915: Move pinning to inside engine_wa_list_verify()
  drm/i915: Take reservation lock around i915_vma_pin.
  drm/i915: Make lrc_init_wa_ctx compatible with ww locking.
  drm/i915: Make __engine_unpark() compatible with ww locking.
  drm/i915: Take obj lock around set_domain ioctl
  drm/i915: Defer pin calls in buffer pool until first use by caller.
  drm/i915: Fix pread/pwrite to work with new locking rules.
  drm/i915: Fix workarounds selftest, part 1
  drm/i915: Add igt_spinner_pin() to allow for ww locking around
    spinner.
  drm/i915: Add ww locking around vm_access()
  drm/i915: Increase ww locking for perf.
  drm/i915: Lock ww in ucode objects correctly
  drm/i915: Add ww locking to dma-buf ops.
  drm/i915: Add missing ww lock in intel_dsb_prepare.
  drm/i915: Fix ww locking in shmem_create_from_object
  drm/i915: Use a single page table lock for each gtt.
  drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock
    removal.
  drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
  drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
  drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
  drm/i915/selftests: Prepare object blit tests for obj->mm.lock
    removal.
  drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
  drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
  drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
  drm/i915/selftests: Prepare execlists and lrc selftests for
    obj->mm.lock removal
  drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
  drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
  drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
  drm/i915/selftests: Prepare i915_request tests for obj->mm.lock
    removal
  drm/i915/selftests: Prepare memory region tests for obj->mm.lock
    removal
  drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
  drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
  drm/i915: Finally remove obj->mm.lock.
  drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
  drm/i915: Move gt_revoke() slightly

Thomas Hellström (2):
  drm/i915: Prepare for obj->mm.lock removal
  drm/i915: Avoid some false positives in assert_object_held()

 drivers/gpu/drm/i915/Makefile                 |   1 -
 drivers/gpu/drm/i915/display/intel_display.c  |  71 +-
 drivers/gpu/drm/i915/display/intel_display.h  |   2 +-
 drivers/gpu/drm/i915/display/intel_dsb.c      |   2 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c    |   2 +-
 drivers/gpu/drm/i915/display/intel_overlay.c  |  34 +-
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c   |  15 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  62 +-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  52 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 228 ++++-
 drivers/gpu/drm/i915/gem/i915_gem_fence.c     |  95 --
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   6 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c      |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  35 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 102 +-
 .../gpu/drm/i915/gem/i915_gem_object_blt.c    |   6 +
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  21 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 109 ++-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 110 +--
 drivers/gpu/drm/i915/gem/i915_gem_region.c    |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_region.h    |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  39 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  |  39 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.h  |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c    |  14 +-
 drivers/gpu/drm/i915/gem/i915_gem_tiling.c    |   2 -
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 871 ++++++------------
 .../drm/i915/gem/selftests/huge_gem_object.c  |   4 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  38 +-
 .../i915/gem/selftests/i915_gem_client_blt.c  |   8 +-
 .../i915/gem/selftests/i915_gem_coherency.c   |  18 +-
 .../drm/i915/gem/selftests/i915_gem_context.c |  10 +-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |   2 +-
 .../i915/gem/selftests/i915_gem_execbuffer.c  |   2 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |  21 +-
 .../drm/i915/gem/selftests/i915_gem_object.c  |   2 +-
 .../i915/gem/selftests/i915_gem_object_blt.c  |   6 +-
 .../drm/i915/gem/selftests/i915_gem_phys.c    |  10 +-
 .../drm/i915/gem/selftests/igt_gem_utils.c    |   2 +-
 drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
 drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  12 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  38 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |   4 +
 .../drm/i915/gt/intel_execlists_submission.c  |  26 +-
 drivers/gpu/drm/i915/gt/intel_ggtt.c          |  10 +-
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.c    |  47 +-
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.h    |   5 +
 .../drm/i915/gt/intel_gt_buffer_pool_types.h  |   1 +
 drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
 drivers/gpu/drm/i915/gt/intel_gtt.c           |  52 +-
 drivers/gpu/drm/i915/gt/intel_gtt.h           |   8 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  32 +-
 drivers/gpu/drm/i915/gt/intel_ppgtt.c         |   3 +-
 drivers/gpu/drm/i915/gt/intel_renderstate.c   |   2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c         |   5 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c   | 184 ++--
 drivers/gpu/drm/i915/gt/intel_timeline.c      | 426 ++-------
 drivers/gpu/drm/i915/gt/intel_timeline.h      |   2 +
 .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |  10 +-
 drivers/gpu/drm/i915/gt/mock_engine.c         |  22 +-
 drivers/gpu/drm/i915/gt/selftest_context.c    |   4 +-
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   9 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  23 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   8 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  18 +-
 drivers/gpu/drm/i915/gt/selftest_mocs.c       |   5 +-
 .../drm/i915/gt/selftest_ring_submission.c    |   4 +-
 drivers/gpu/drm/i915/gt/selftest_timeline.c   | 174 ++--
 .../gpu/drm/i915/gt/selftest_workarounds.c    |  82 +-
 drivers/gpu/drm/i915/gt/shmem_utils.c         |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_log.c    |   4 +-
 drivers/gpu/drm/i915/gt/uc/intel_huc.c        |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c      |   2 +-
 drivers/gpu/drm/i915/gvt/dmabuf.c             |   2 +-
 drivers/gpu/drm/i915/i915_active.c            |  20 +-
 drivers/gpu/drm/i915/i915_cmd_parser.c        | 104 +--
 drivers/gpu/drm/i915/i915_debugfs.c           |   4 +-
 drivers/gpu/drm/i915/i915_drv.h               |  18 +-
 drivers/gpu/drm/i915/i915_gem.c               | 246 ++---
 drivers/gpu/drm/i915/i915_gem_gtt.c           |   2 +-
 drivers/gpu/drm/i915/i915_memcpy.c            |   2 +-
 drivers/gpu/drm/i915/i915_memcpy.h            |   2 +-
 drivers/gpu/drm/i915/i915_perf.c              |  56 +-
 drivers/gpu/drm/i915/i915_request.c           |   4 -
 drivers/gpu/drm/i915/i915_request.h           |  17 +-
 drivers/gpu/drm/i915/i915_selftest.h          |   2 +
 drivers/gpu/drm/i915/i915_vma.c               |  26 +-
 drivers/gpu/drm/i915/i915_vma.h               |  20 +-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c |  94 +-
 drivers/gpu/drm/i915/selftests/i915_request.c |  10 +-
 drivers/gpu/drm/i915/selftests/igt_spinner.c  | 136 ++-
 drivers/gpu/drm/i915/selftests/igt_spinner.h  |   5 +
 .../drm/i915/selftests/intel_memory_region.c  |  18 +-
 drivers/gpu/drm/i915/selftests/mock_region.c  |   4 +-
 98 files changed, 2107 insertions(+), 2010 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/gem/i915_gem_fence.c

-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
@ 2021-01-05 15:34 ` Maarten Lankhorst
  2021-01-14 12:06   ` Tvrtko Ursulin
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 02/64] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
                   ` (69 subsequent siblings)
  70 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:34 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Instead of sharing pages with breadcrumbs, give each timeline a
single page. This allows unrelated timelines not to share locks
any more during command submission.

As an additional benefit, seqno wraparound no longer requires
i915_vma_pin, which means we no longer need to worry about a
potential -EDEADLK at a point where we are ready to submit.
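
For sizing: assuming the 8-byte seqno slots introduced below, a 4KiB
page holds 512 raw slots, but the MI_FLUSH_DW workaround requires bit 5
of the address to be clear, so only the lower 32 bytes of each 64-byte
window are usable:

	4096 / 64 windows * (32 / 8) slots per window = 256 slots

which is where the "maximum (256)" slot count in the changelog below
comes from.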

Changes since v1:
- Fix erroneous i915_vma_acquire that should be a i915_vma_release (ickle).
- Extra check for completion in intel_read_hwsp().
Changes since v2:
- Fix inconsistent indent in hwsp_alloc() (kbuild)
- memset entire cacheline to 0.
Changes since v3:
- Do same in intel_timeline_reset_seqno(), and clflush for good measure.
Changes since v4:
- Use refcounting on timeline, instead of relying on i915_active.
- Fix waiting on kernel requests.
Changes since v5:
- Bump amount of slots to maximum (256), for best wraparounds.
- Add hwsp_offset to i915_request to fix potential wraparound hang.
- Ensure timeline wrap test works with the changes.
- Assign hwsp in intel_timeline_read_hwsp() within the rcu lock to
  fix a hang.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> #v1
Reported-by: kernel test robot <lkp@intel.com>
---
 drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
 drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  12 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 +
 drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
 drivers/gpu/drm/i915/gt/intel_timeline.c      | 422 ++++--------------
 .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   5 +-
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  83 ++--
 drivers/gpu/drm/i915/i915_request.c           |   4 -
 drivers/gpu/drm/i915/i915_request.h           |  17 +-
 11 files changed, 160 insertions(+), 415 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
index b491a64919c8..9646200d2792 100644
--- a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
@@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
 				   int flush, int post)
 {
 	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
-	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
+	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
 
 	*cs++ = MI_FLUSH;
 
diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
index ce38d1bcaba3..dd094e21bb51 100644
--- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
@@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 		 PIPE_CONTROL_DC_FLUSH_ENABLE |
 		 PIPE_CONTROL_QW_WRITE |
 		 PIPE_CONTROL_CS_STALL);
-	*cs++ = i915_request_active_timeline(rq)->hwsp_offset |
+	*cs++ = i915_request_active_offset(rq) |
 		PIPE_CONTROL_GLOBAL_GTT;
 	*cs++ = rq->fence.seqno;
 
@@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 		 PIPE_CONTROL_QW_WRITE |
 		 PIPE_CONTROL_GLOBAL_GTT_IVB |
 		 PIPE_CONTROL_CS_STALL);
-	*cs++ = i915_request_active_timeline(rq)->hwsp_offset;
+	*cs++ = i915_request_active_offset(rq);
 	*cs++ = rq->fence.seqno;
 
 	*cs++ = MI_USER_INTERRUPT;
@@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
 {
 	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
-	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
+	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
 
 	*cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
 	*cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
@@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
 	int i;
 
 	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
-	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
+	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
 
 	*cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
 		MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index 1972dd5dca00..33eaa0203b1d 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -338,15 +338,13 @@ static inline u32 preempt_address(struct intel_engine_cs *engine)
 
 static u32 hwsp_offset(const struct i915_request *rq)
 {
-	const struct intel_timeline_cacheline *cl;
+	const struct intel_timeline *tl;
 
-	/* Before the request is executed, the timeline/cachline is fixed */
+	/* Before the request is executed, the timeline is fixed */
+	tl = rcu_dereference_protected(rq->timeline,
+				       !i915_request_signaled(rq));
 
-	cl = rcu_dereference_protected(rq->hwsp_cacheline, 1);
-	if (cl)
-		return cl->ggtt_offset;
-
-	return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
+	return page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno);
 }
 
 int gen8_emit_init_breadcrumb(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 1847d3c2ea99..1db608d1f26f 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -754,6 +754,7 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
 	frame->rq.engine = engine;
 	frame->rq.context = ce;
 	rcu_assign_pointer(frame->rq.timeline, ce->timeline);
+	frame->rq.hwsp_seqno = ce->timeline->hwsp_seqno;
 
 	frame->ring.vaddr = frame->cs;
 	frame->ring.size = sizeof(frame->cs);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index a83d3e18254d..f7dab4fc926c 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -39,10 +39,6 @@ struct intel_gt {
 	struct intel_gt_timelines {
 		spinlock_t lock; /* protects active_list */
 		struct list_head active_list;
-
-		/* Pack multiple timelines' seqnos into the same page */
-		spinlock_t hwsp_lock;
-		struct list_head hwsp_free_list;
 	} timelines;
 
 	struct intel_gt_requests {
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 7fe05918a76e..3d43f15f34f3 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -12,21 +12,9 @@
 #include "intel_ring.h"
 #include "intel_timeline.h"
 
-#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
-#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))
+#define TIMELINE_SEQNO_BYTES 8
 
-#define CACHELINE_BITS 6
-#define CACHELINE_FREE CACHELINE_BITS
-
-struct intel_timeline_hwsp {
-	struct intel_gt *gt;
-	struct intel_gt_timelines *gt_timelines;
-	struct list_head free_link;
-	struct i915_vma *vma;
-	u64 free_bitmap;
-};
-
-static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
+static struct i915_vma *hwsp_alloc(struct intel_gt *gt)
 {
 	struct drm_i915_private *i915 = gt->i915;
 	struct drm_i915_gem_object *obj;
@@ -45,220 +33,59 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
 	return vma;
 }
 
-static struct i915_vma *
-hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
-{
-	struct intel_gt_timelines *gt = &timeline->gt->timelines;
-	struct intel_timeline_hwsp *hwsp;
-
-	BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
-
-	spin_lock_irq(&gt->hwsp_lock);
-
-	/* hwsp_free_list only contains HWSP that have available cachelines */
-	hwsp = list_first_entry_or_null(&gt->hwsp_free_list,
-					typeof(*hwsp), free_link);
-	if (!hwsp) {
-		struct i915_vma *vma;
-
-		spin_unlock_irq(&gt->hwsp_lock);
-
-		hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL);
-		if (!hwsp)
-			return ERR_PTR(-ENOMEM);
-
-		vma = __hwsp_alloc(timeline->gt);
-		if (IS_ERR(vma)) {
-			kfree(hwsp);
-			return vma;
-		}
-
-		GT_TRACE(timeline->gt, "new HWSP allocated\n");
-
-		vma->private = hwsp;
-		hwsp->gt = timeline->gt;
-		hwsp->vma = vma;
-		hwsp->free_bitmap = ~0ull;
-		hwsp->gt_timelines = gt;
-
-		spin_lock_irq(&gt->hwsp_lock);
-		list_add(&hwsp->free_link, &gt->hwsp_free_list);
-	}
-
-	GEM_BUG_ON(!hwsp->free_bitmap);
-	*cacheline = __ffs64(hwsp->free_bitmap);
-	hwsp->free_bitmap &= ~BIT_ULL(*cacheline);
-	if (!hwsp->free_bitmap)
-		list_del(&hwsp->free_link);
-
-	spin_unlock_irq(&gt->hwsp_lock);
-
-	GEM_BUG_ON(hwsp->vma->private != hwsp);
-	return hwsp->vma;
-}
-
-static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
-{
-	struct intel_gt_timelines *gt = hwsp->gt_timelines;
-	unsigned long flags;
-
-	spin_lock_irqsave(&gt->hwsp_lock, flags);
-
-	/* As a cacheline becomes available, publish the HWSP on the freelist */
-	if (!hwsp->free_bitmap)
-		list_add_tail(&hwsp->free_link, &gt->hwsp_free_list);
-
-	GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap));
-	hwsp->free_bitmap |= BIT_ULL(cacheline);
-
-	/* And if no one is left using it, give the page back to the system */
-	if (hwsp->free_bitmap == ~0ull) {
-		i915_vma_put(hwsp->vma);
-		list_del(&hwsp->free_link);
-		kfree(hwsp);
-	}
-
-	spin_unlock_irqrestore(&gt->hwsp_lock, flags);
-}
-
-static void __rcu_cacheline_free(struct rcu_head *rcu)
-{
-	struct intel_timeline_cacheline *cl =
-		container_of(rcu, typeof(*cl), rcu);
-
-	/* Must wait until after all *rq->hwsp are complete before removing */
-	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
-	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
-
-	i915_active_fini(&cl->active);
-	kfree(cl);
-}
-
-static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
-{
-	GEM_BUG_ON(!i915_active_is_idle(&cl->active));
-	call_rcu(&cl->rcu, __rcu_cacheline_free);
-}
-
 __i915_active_call
-static void __cacheline_retire(struct i915_active *active)
+static void __timeline_retire(struct i915_active *active)
 {
-	struct intel_timeline_cacheline *cl =
-		container_of(active, typeof(*cl), active);
+	struct intel_timeline *tl =
+		container_of(active, typeof(*tl), active);
 
-	i915_vma_unpin(cl->hwsp->vma);
-	if (ptr_test_bit(cl->vaddr, CACHELINE_FREE))
-		__idle_cacheline_free(cl);
+	i915_vma_unpin(tl->hwsp_ggtt);
+	intel_timeline_put(tl);
 }
 
-static int __cacheline_active(struct i915_active *active)
+static int __timeline_active(struct i915_active *active)
 {
-	struct intel_timeline_cacheline *cl =
-		container_of(active, typeof(*cl), active);
+	struct intel_timeline *tl =
+		container_of(active, typeof(*tl), active);
 
-	__i915_vma_pin(cl->hwsp->vma);
+	__i915_vma_pin(tl->hwsp_ggtt);
+	intel_timeline_get(tl);
 	return 0;
 }
 
-static struct intel_timeline_cacheline *
-cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
-{
-	struct intel_timeline_cacheline *cl;
-	void *vaddr;
-
-	GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));
-
-	cl = kmalloc(sizeof(*cl), GFP_KERNEL);
-	if (!cl)
-		return ERR_PTR(-ENOMEM);
-
-	vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB);
-	if (IS_ERR(vaddr)) {
-		kfree(cl);
-		return ERR_CAST(vaddr);
-	}
-
-	cl->hwsp = hwsp;
-	cl->vaddr = page_pack_bits(vaddr, cacheline);
-
-	i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
-
-	return cl;
-}
-
-static void cacheline_acquire(struct intel_timeline_cacheline *cl,
-			      u32 ggtt_offset)
-{
-	if (!cl)
-		return;
-
-	cl->ggtt_offset = ggtt_offset;
-	i915_active_acquire(&cl->active);
-}
-
-static void cacheline_release(struct intel_timeline_cacheline *cl)
-{
-	if (cl)
-		i915_active_release(&cl->active);
-}
-
-static void cacheline_free(struct intel_timeline_cacheline *cl)
-{
-	if (!i915_active_acquire_if_busy(&cl->active)) {
-		__idle_cacheline_free(cl);
-		return;
-	}
-
-	GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
-	cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);
-
-	i915_active_release(&cl->active);
-}
-
 static int intel_timeline_init(struct intel_timeline *timeline,
 			       struct intel_gt *gt,
 			       struct i915_vma *hwsp,
 			       unsigned int offset)
 {
 	void *vaddr;
+	u32 *seqno;
 
 	kref_init(&timeline->kref);
 	atomic_set(&timeline->pin_count, 0);
 
 	timeline->gt = gt;
 
-	timeline->has_initial_breadcrumb = !hwsp;
-	timeline->hwsp_cacheline = NULL;
-
-	if (!hwsp) {
-		struct intel_timeline_cacheline *cl;
-		unsigned int cacheline;
-
-		hwsp = hwsp_alloc(timeline, &cacheline);
+	if (hwsp) {
+		timeline->hwsp_offset = offset;
+		timeline->hwsp_ggtt = i915_vma_get(hwsp);
+	} else {
+		timeline->has_initial_breadcrumb = true;
+		hwsp = hwsp_alloc(gt);
 		if (IS_ERR(hwsp))
 			return PTR_ERR(hwsp);
-
-		cl = cacheline_alloc(hwsp->private, cacheline);
-		if (IS_ERR(cl)) {
-			__idle_hwsp_free(hwsp->private, cacheline);
-			return PTR_ERR(cl);
-		}
-
-		timeline->hwsp_cacheline = cl;
-		timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
-
-		vaddr = page_mask_bits(cl->vaddr);
-	} else {
-		timeline->hwsp_offset = offset;
-		vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		timeline->hwsp_ggtt = hwsp;
 	}
 
-	timeline->hwsp_seqno =
-		memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES);
+	vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	timeline->hwsp_map = vaddr;
+	seqno = vaddr + timeline->hwsp_offset;
+	WRITE_ONCE(*seqno, 0);
+	timeline->hwsp_seqno = seqno;
 
-	timeline->hwsp_ggtt = i915_vma_get(hwsp);
 	GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
 
 	timeline->fence_context = dma_fence_context_alloc(1);
@@ -269,6 +96,7 @@ static int intel_timeline_init(struct intel_timeline *timeline,
 	INIT_LIST_HEAD(&timeline->requests);
 
 	i915_syncmap_init(&timeline->sync);
+	i915_active_init(&timeline->active, __timeline_active, __timeline_retire);
 
 	return 0;
 }
@@ -279,23 +107,18 @@ void intel_gt_init_timelines(struct intel_gt *gt)
 
 	spin_lock_init(&timelines->lock);
 	INIT_LIST_HEAD(&timelines->active_list);
-
-	spin_lock_init(&timelines->hwsp_lock);
-	INIT_LIST_HEAD(&timelines->hwsp_free_list);
 }
 
-static void intel_timeline_fini(struct intel_timeline *timeline)
+static void intel_timeline_fini(struct rcu_head *rcu)
 {
-	GEM_BUG_ON(atomic_read(&timeline->pin_count));
-	GEM_BUG_ON(!list_empty(&timeline->requests));
-	GEM_BUG_ON(timeline->retire);
+	struct intel_timeline *timeline =
+		container_of(rcu, struct intel_timeline, rcu);
 
-	if (timeline->hwsp_cacheline)
-		cacheline_free(timeline->hwsp_cacheline);
-	else
-		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
+	i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
 
 	i915_vma_put(timeline->hwsp_ggtt);
+	i915_active_fini(&timeline->active);
+	kfree(timeline);
 }
 
 struct intel_timeline *
@@ -361,9 +184,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 	GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
 		 tl->fence_context, tl->hwsp_offset);
 
-	cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
+	i915_active_acquire(&tl->active);
 	if (atomic_fetch_inc(&tl->pin_count)) {
-		cacheline_release(tl->hwsp_cacheline);
+		i915_active_release(&tl->active);
 		__i915_vma_unpin(tl->hwsp_ggtt);
 	}
 
@@ -372,9 +195,13 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 
 void intel_timeline_reset_seqno(const struct intel_timeline *tl)
 {
+	u32 *hwsp_seqno = (u32 *)tl->hwsp_seqno;
 	/* Must be pinned to be writable, and no requests in flight. */
 	GEM_BUG_ON(!atomic_read(&tl->pin_count));
-	WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
+
+	memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
+	WRITE_ONCE(*hwsp_seqno, tl->seqno);
+	clflush(hwsp_seqno);
 }
 
 void intel_timeline_enter(struct intel_timeline *tl)
@@ -450,106 +277,23 @@ static u32 timeline_advance(struct intel_timeline *tl)
 	return tl->seqno += 1 + tl->has_initial_breadcrumb;
 }
 
-static void timeline_rollback(struct intel_timeline *tl)
-{
-	tl->seqno -= 1 + tl->has_initial_breadcrumb;
-}
-
 static noinline int
 __intel_timeline_get_seqno(struct intel_timeline *tl,
-			   struct i915_request *rq,
 			   u32 *seqno)
 {
-	struct intel_timeline_cacheline *cl;
-	unsigned int cacheline;
-	struct i915_vma *vma;
-	void *vaddr;
-	int err;
-
-	might_lock(&tl->gt->ggtt->vm.mutex);
-	GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context);
-
-	/*
-	 * If there is an outstanding GPU reference to this cacheline,
-	 * such as it being sampled by a HW semaphore on another timeline,
-	 * we cannot wraparound our seqno value (the HW semaphore does
-	 * a strict greater-than-or-equals compare, not i915_seqno_passed).
-	 * So if the cacheline is still busy, we must detach ourselves
-	 * from it and leave it inflight alongside its users.
-	 *
-	 * However, if nobody is watching and we can guarantee that nobody
-	 * will, we could simply reuse the same cacheline.
-	 *
-	 * if (i915_active_request_is_signaled(&tl->last_request) &&
-	 *     i915_active_is_signaled(&tl->hwsp_cacheline->active))
-	 *	return 0;
-	 *
-	 * That seems unlikely for a busy timeline that needed to wrap in
-	 * the first place, so just replace the cacheline.
-	 */
-
-	vma = hwsp_alloc(tl, &cacheline);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto err_rollback;
-	}
-
-	err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
-	if (err) {
-		__idle_hwsp_free(vma->private, cacheline);
-		goto err_rollback;
-	}
+	u32 next_ofs = offset_in_page(tl->hwsp_offset + TIMELINE_SEQNO_BYTES);
 
-	cl = cacheline_alloc(vma->private, cacheline);
-	if (IS_ERR(cl)) {
-		err = PTR_ERR(cl);
-		__idle_hwsp_free(vma->private, cacheline);
-		goto err_unpin;
-	}
-	GEM_BUG_ON(cl->hwsp->vma != vma);
-
-	/*
-	 * Attach the old cacheline to the current request, so that we only
-	 * free it after the current request is retired, which ensures that
-	 * all writes into the cacheline from previous requests are complete.
-	 */
-	err = i915_active_ref(&tl->hwsp_cacheline->active,
-			      tl->fence_context,
-			      &rq->fence);
-	if (err)
-		goto err_cacheline;
+	/* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
+	if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
+		next_ofs = offset_in_page(next_ofs + BIT(5));
 
-	cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */
-	cacheline_free(tl->hwsp_cacheline);
-
-	i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */
-	i915_vma_put(tl->hwsp_ggtt);
-
-	tl->hwsp_ggtt = i915_vma_get(vma);
-
-	vaddr = page_mask_bits(cl->vaddr);
-	tl->hwsp_offset = cacheline * CACHELINE_BYTES;
-	tl->hwsp_seqno =
-		memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES);
-
-	tl->hwsp_offset += i915_ggtt_offset(vma);
-	GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
-		 tl->fence_context, tl->hwsp_offset);
-
-	cacheline_acquire(cl, tl->hwsp_offset);
-	tl->hwsp_cacheline = cl;
+	tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + next_ofs;
+	tl->hwsp_seqno = tl->hwsp_map + next_ofs;
+	intel_timeline_reset_seqno(tl);
 
 	*seqno = timeline_advance(tl);
 	GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno));
 	return 0;
-
-err_cacheline:
-	cacheline_free(cl);
-err_unpin:
-	i915_vma_unpin(vma);
-err_rollback:
-	timeline_rollback(tl);
-	return err;
 }
 
 int intel_timeline_get_seqno(struct intel_timeline *tl,
@@ -559,51 +303,52 @@ int intel_timeline_get_seqno(struct intel_timeline *tl,
 	*seqno = timeline_advance(tl);
 
 	/* Replace the HWSP on wraparound for HW semaphores */
-	if (unlikely(!*seqno && tl->hwsp_cacheline))
-		return __intel_timeline_get_seqno(tl, rq, seqno);
+	if (unlikely(!*seqno && tl->has_initial_breadcrumb))
+		return __intel_timeline_get_seqno(tl, seqno);
 
 	return 0;
 }
 
-static int cacheline_ref(struct intel_timeline_cacheline *cl,
-			 struct i915_request *rq)
-{
-	return i915_active_add_request(&cl->active, rq);
-}
-
 int intel_timeline_read_hwsp(struct i915_request *from,
 			     struct i915_request *to,
 			     u32 *hwsp)
 {
-	struct intel_timeline_cacheline *cl;
+	struct intel_timeline *tl;
 	int err;
 
-	GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline));
-
 	rcu_read_lock();
-	cl = rcu_dereference(from->hwsp_cacheline);
-	if (i915_request_completed(from)) /* confirm cacheline is valid */
-		goto unlock;
-	if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
-		goto unlock; /* seqno wrapped and completed! */
-	if (unlikely(i915_request_completed(from)))
-		goto release;
+	tl = rcu_dereference(from->timeline);
+	if (i915_request_completed(from) ||
+	    !i915_active_acquire_if_busy(&tl->active))
+		tl = NULL;
+
+	if (tl) {
+		/* hwsp_offset may wraparound, so use from->hwsp_seqno */
+		*hwsp = i915_ggtt_offset(tl->hwsp_ggtt) +
+			offset_in_page(from->hwsp_seqno);
+	}
+
+	/* ensure we wait on the right request, if not, we completed */
+	if (tl && i915_request_completed(from)) {
+		i915_active_release(&tl->active);
+		tl = NULL;
+	}
 	rcu_read_unlock();
 
-	err = cacheline_ref(cl, to);
-	if (err)
+	if (!tl)
+		return 1;
+
+	/* Can't do semaphore waits on kernel context */
+	if (!tl->has_initial_breadcrumb) {
+		err = -EINVAL;
 		goto out;
+	}
+
+	err = i915_active_add_request(&tl->active, to);
 
-	*hwsp = cl->ggtt_offset;
 out:
-	i915_active_release(&cl->active);
+	i915_active_release(&tl->active);
 	return err;
-
-release:
-	i915_active_release(&cl->active);
-unlock:
-	rcu_read_unlock();
-	return 1;
 }
 
 void intel_timeline_unpin(struct intel_timeline *tl)
@@ -612,8 +357,7 @@ void intel_timeline_unpin(struct intel_timeline *tl)
 	if (!atomic_dec_and_test(&tl->pin_count))
 		return;
 
-	cacheline_release(tl->hwsp_cacheline);
-
+	i915_active_release(&tl->active);
 	__i915_vma_unpin(tl->hwsp_ggtt);
 }
 
@@ -622,8 +366,11 @@ void __intel_timeline_free(struct kref *kref)
 	struct intel_timeline *timeline =
 		container_of(kref, typeof(*timeline), kref);
 
-	intel_timeline_fini(timeline);
-	kfree_rcu(timeline, rcu);
+	GEM_BUG_ON(atomic_read(&timeline->pin_count));
+	GEM_BUG_ON(!list_empty(&timeline->requests));
+	GEM_BUG_ON(timeline->retire);
+
+	call_rcu(&timeline->rcu, intel_timeline_fini);
 }
 
 void intel_gt_fini_timelines(struct intel_gt *gt)
@@ -631,7 +378,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt)
 	struct intel_gt_timelines *timelines = &gt->timelines;
 
 	GEM_BUG_ON(!list_empty(&timelines->active_list));
-	GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
 }
 
 void intel_gt_show_timelines(struct intel_gt *gt,
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index e360f50706bf..9c1d767a867f 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -18,7 +18,6 @@
 struct i915_vma;
 struct i915_syncmap;
 struct intel_gt;
-struct intel_timeline_hwsp;
 
 struct intel_timeline {
 	u64 fence_context;
@@ -45,12 +44,11 @@ struct intel_timeline {
 	atomic_t pin_count;
 	atomic_t active_count;
 
+	void *hwsp_map;
 	const u32 *hwsp_seqno;
 	struct i915_vma *hwsp_ggtt;
 	u32 hwsp_offset;
 
-	struct intel_timeline_cacheline *hwsp_cacheline;
-
 	bool has_initial_breadcrumb;
 
 	/**
@@ -67,6 +65,8 @@ struct intel_timeline {
 	 */
 	struct i915_active_fence last_request;
 
+	struct i915_active active;
+
 	/** A chain of completed timelines ready for early retirement. */
 	struct intel_timeline *retire;
 
@@ -90,15 +90,4 @@ struct intel_timeline {
 	struct rcu_head rcu;
 };
 
-struct intel_timeline_cacheline {
-	struct i915_active active;
-
-	struct intel_timeline_hwsp *hwsp;
-	void *vaddr;
-
-	u32 ggtt_offset;
-
-	struct rcu_head rcu;
-};
-
 #endif /* __I915_TIMELINE_TYPES_H__ */
diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
index 439c8984f5fa..fd9414b0d1bd 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
@@ -42,6 +42,9 @@ static int perf_end(struct intel_gt *gt)
 
 static int write_timestamp(struct i915_request *rq, int slot)
 {
+	struct intel_timeline *tl =
+		rcu_dereference_protected(rq->timeline,
+					  !i915_request_signaled(rq));
 	u32 cmd;
 	u32 *cs;
 
@@ -54,7 +57,7 @@ static int write_timestamp(struct i915_request *rq, int slot)
 		cmd++;
 	*cs++ = cmd;
 	*cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base));
-	*cs++ = i915_request_timeline(rq)->hwsp_offset + slot * sizeof(u32);
+	*cs++ = tl->hwsp_offset + slot * sizeof(u32);
 	*cs++ = 0;
 
 	intel_ring_advance(rq, cs);
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index 6f3a3687ef0f..6e32e91cdab4 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -666,7 +666,7 @@ static int live_hwsp_wrap(void *arg)
 	if (IS_ERR(tl))
 		return PTR_ERR(tl);
 
-	if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
+	if (!tl->has_initial_breadcrumb)
 		goto out_free;
 
 	err = intel_timeline_pin(tl, NULL);
@@ -833,12 +833,26 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
 	return 0;
 }
 
+static void switch_tl_lock(struct i915_request *from, struct i915_request *to)
+{
+	/* some light mutex juggling required; think co-routines */
+
+	if (from) {
+		lockdep_unpin_lock(&from->context->timeline->mutex, from->cookie);
+		mutex_unlock(&from->context->timeline->mutex);
+	}
+
+	if (to) {
+		mutex_lock(&to->context->timeline->mutex);
+		to->cookie = lockdep_pin_lock(&to->context->timeline->mutex);
+	}
+}
+
 static int create_watcher(struct hwsp_watcher *w,
 			  struct intel_engine_cs *engine,
 			  int ringsz)
 {
 	struct intel_context *ce;
-	struct intel_timeline *tl;
 
 	ce = intel_context_create(engine);
 	if (IS_ERR(ce))
@@ -851,11 +865,8 @@ static int create_watcher(struct hwsp_watcher *w,
 		return PTR_ERR(w->rq);
 
 	w->addr = i915_ggtt_offset(w->vma);
-	tl = w->rq->context->timeline;
 
-	/* some light mutex juggling required; think co-routines */
-	lockdep_unpin_lock(&tl->mutex, w->rq->cookie);
-	mutex_unlock(&tl->mutex);
+	switch_tl_lock(w->rq, NULL);
 
 	return 0;
 }
@@ -864,15 +875,13 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
 			 bool (*op)(u32 hwsp, u32 seqno))
 {
 	struct i915_request *rq = fetch_and_zero(&w->rq);
-	struct intel_timeline *tl = rq->context->timeline;
 	u32 offset, end;
 	int err;
 
 	GEM_BUG_ON(w->addr - i915_ggtt_offset(w->vma) > w->vma->size);
 
 	i915_request_get(rq);
-	mutex_lock(&tl->mutex);
-	rq->cookie = lockdep_pin_lock(&tl->mutex);
+	switch_tl_lock(NULL, rq);
 	i915_request_add(rq);
 
 	if (i915_request_wait(rq, 0, HZ) < 0) {
@@ -901,10 +910,7 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
 static void cleanup_watcher(struct hwsp_watcher *w)
 {
 	if (w->rq) {
-		struct intel_timeline *tl = w->rq->context->timeline;
-
-		mutex_lock(&tl->mutex);
-		w->rq->cookie = lockdep_pin_lock(&tl->mutex);
+		switch_tl_lock(NULL, w->rq);
 
 		i915_request_add(w->rq);
 	}
@@ -942,7 +948,7 @@ static struct i915_request *wrap_timeline(struct i915_request *rq)
 	}
 
 	i915_request_put(rq);
-	rq = intel_context_create_request(ce);
+	rq = i915_request_create(ce);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -977,7 +983,7 @@ static int live_hwsp_read(void *arg)
 	if (IS_ERR(tl))
 		return PTR_ERR(tl);
 
-	if (!tl->hwsp_cacheline)
+	if (!tl->has_initial_breadcrumb)
 		goto out_free;
 
 	for (i = 0; i < ARRAY_SIZE(watcher); i++) {
@@ -999,7 +1005,7 @@ static int live_hwsp_read(void *arg)
 		do {
 			struct i915_sw_fence *submit;
 			struct i915_request *rq;
-			u32 hwsp;
+			u32 hwsp, dummy;
 
 			submit = heap_fence_create(GFP_KERNEL);
 			if (!submit) {
@@ -1017,14 +1023,26 @@ static int live_hwsp_read(void *arg)
 				goto out;
 			}
 
-			/* Skip to the end, saving 30 minutes of nops */
-			tl->seqno = -10u + 2 * (count & 3);
-			WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 			ce->timeline = intel_timeline_get(tl);
 
-			rq = intel_context_create_request(ce);
+			/* Ensure timeline is mapped, done during first pin */
+			err = intel_context_pin(ce);
+			if (err) {
+				intel_context_put(ce);
+				goto out;
+			}
+
+			/*
+			 * Start at a new wrap, and set seqno right before another wrap,
+			 * saving 30 minutes of nops
+			 */
+			tl->seqno = -12u + 2 * (count & 3);
+			__intel_timeline_get_seqno(tl, &dummy);
+
+			rq = i915_request_create(ce);
 			if (IS_ERR(rq)) {
 				err = PTR_ERR(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
@@ -1034,32 +1052,35 @@ static int live_hwsp_read(void *arg)
 							    GFP_KERNEL);
 			if (err < 0) {
 				i915_request_add(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
 
-			mutex_lock(&watcher[0].rq->context->timeline->mutex);
+			switch_tl_lock(rq, watcher[0].rq);
 			err = intel_timeline_read_hwsp(rq, watcher[0].rq, &hwsp);
 			if (err == 0)
 				err = emit_read_hwsp(watcher[0].rq, /* before */
 						     rq->fence.seqno, hwsp,
 						     &watcher[0].addr);
-			mutex_unlock(&watcher[0].rq->context->timeline->mutex);
+			switch_tl_lock(watcher[0].rq, rq);
 			if (err) {
 				i915_request_add(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
 
-			mutex_lock(&watcher[1].rq->context->timeline->mutex);
+			switch_tl_lock(rq, watcher[1].rq);
 			err = intel_timeline_read_hwsp(rq, watcher[1].rq, &hwsp);
 			if (err == 0)
 				err = emit_read_hwsp(watcher[1].rq, /* after */
 						     rq->fence.seqno, hwsp,
 						     &watcher[1].addr);
-			mutex_unlock(&watcher[1].rq->context->timeline->mutex);
+			switch_tl_lock(watcher[1].rq, rq);
 			if (err) {
 				i915_request_add(rq);
+				intel_context_unpin(ce);
 				intel_context_put(ce);
 				goto out;
 			}
@@ -1068,6 +1089,7 @@ static int live_hwsp_read(void *arg)
 			i915_request_add(rq);
 
 			rq = wrap_timeline(rq);
+			intel_context_unpin(ce);
 			intel_context_put(ce);
 			if (IS_ERR(rq)) {
 				err = PTR_ERR(rq);
@@ -1107,8 +1129,8 @@ static int live_hwsp_read(void *arg)
 			    3 * watcher[1].rq->ring->size)
 				break;
 
-		} while (!__igt_timeout(end_time, NULL));
-		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, 0xdeadbeef);
+		} while (!__igt_timeout(end_time, NULL) &&
+			 count < (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2);
 
 		pr_info("%s: simulated %lu wraps\n", engine->name, count);
 		err = check_watcher(&watcher[1], "after", cmp_gte);
@@ -1153,9 +1175,7 @@ static int live_hwsp_rollover_kernel(void *arg)
 		}
 
 		GEM_BUG_ON(i915_active_fence_isset(&tl->last_request));
-		tl->seqno = 0;
-		timeline_rollback(tl);
-		timeline_rollback(tl);
+		tl->seqno = -2u;
 		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 
 		for (i = 0; i < ARRAY_SIZE(rq); i++) {
@@ -1235,11 +1255,10 @@ static int live_hwsp_rollover_user(void *arg)
 			goto out;
 
 		tl = ce->timeline;
-		if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
+		if (!tl->has_initial_breadcrumb)
 			goto out;
 
-		timeline_rollback(tl);
-		timeline_rollback(tl);
+		tl->seqno = -4u;
 		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 
 		for (i = 0; i < ARRAY_SIZE(rq); i++) {
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index bfd82d8259f4..b661b3b5992e 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -851,7 +851,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->fence.seqno = seqno;
 
 	RCU_INIT_POINTER(rq->timeline, tl);
-	RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
 	rq->hwsp_seqno = tl->hwsp_seqno;
 	GEM_BUG_ON(i915_request_completed(rq));
 
@@ -1090,9 +1089,6 @@ emit_semaphore_wait(struct i915_request *to,
 	if (i915_request_has_initial_breadcrumb(to))
 		goto await_fence;
 
-	if (!rcu_access_pointer(from->hwsp_cacheline))
-		goto await_fence;
-
 	/*
 	 * If this or its dependents are waiting on an external fence
 	 * that may fail catastrophically, then we want to avoid using
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index a8c413203f72..c25b0afcc225 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -237,16 +237,6 @@ struct i915_request {
 	 */
 	const u32 *hwsp_seqno;
 
-	/*
-	 * If we need to access the timeline's seqno for this request in
-	 * another request, we need to keep a read reference to this associated
-	 * cacheline, so that we do not free and recycle it before the foreign
-	 * observers have completed. Hence, we keep a pointer to the cacheline
-	 * inside the timeline's HWSP vma, but it is only valid while this
-	 * request has not completed and guarded by the timeline mutex.
-	 */
-	struct intel_timeline_cacheline __rcu *hwsp_cacheline;
-
 	/** Position in the ring of the start of the request */
 	u32 head;
 
@@ -615,4 +605,11 @@ i915_request_active_timeline(const struct i915_request *rq)
 					 lockdep_is_held(&rq->engine->active.lock));
 }
 
+static inline u32
+i915_request_active_offset(const struct i915_request *rq)
+{
+	return page_mask_bits(i915_request_active_timeline(rq)->hwsp_offset) +
+	       offset_in_page(rq->hwsp_seqno);
+}
+
 #endif /* I915_REQUEST_H */
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 02/64] drm/i915: Pin timeline map after first timeline pin, v3.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6 Maarten Lankhorst
@ 2021-01-05 15:34 ` Maarten Lankhorst
  2021-01-18 10:28   ` Thomas Hellström (Intel)
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 03/64] drm/i915: Move cmd parser pinning to execbuffer Maarten Lankhorst
                   ` (68 subsequent siblings)
  70 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:34 UTC (permalink / raw)
  To: intel-gfx

We're starting to require the reservation lock for pinning, so wait
until we hold it before mapping the timeline's HWSP.

Update the selftests to handle this correctly, and ensure pin is
called in live_hwsp_rollover_user() and mock_hwsp_freelist().

Changes since v1:
- Fix NULL + XX arithmetic, use casts. (kbuild)
Changes since v2:
- Clear entire cacheline when pinning.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_timeline.c    | 40 +++++++++----
 drivers/gpu/drm/i915/gt/intel_timeline.h    |  2 +
 drivers/gpu/drm/i915/gt/mock_engine.c       | 22 ++++++-
 drivers/gpu/drm/i915/gt/selftest_timeline.c | 63 +++++++++++----------
 drivers/gpu/drm/i915/i915_selftest.h        |  2 +
 5 files changed, 84 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 3d43f15f34f3..7509af471341 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -53,14 +53,29 @@ static int __timeline_active(struct i915_active *active)
 	return 0;
 }
 
+I915_SELFTEST_EXPORT int
+intel_timeline_pin_map(struct intel_timeline *timeline)
+{
+	struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
+	u32 ofs = offset_in_page(timeline->hwsp_offset);
+	void *vaddr;
+
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	timeline->hwsp_map = vaddr;
+	timeline->hwsp_seqno = memset(vaddr + ofs, 0, CACHELINE_BYTES);
+	clflush(vaddr + ofs);
+
+	return 0;
+}
+
 static int intel_timeline_init(struct intel_timeline *timeline,
 			       struct intel_gt *gt,
 			       struct i915_vma *hwsp,
 			       unsigned int offset)
 {
-	void *vaddr;
-	u32 *seqno;
-
 	kref_init(&timeline->kref);
 	atomic_set(&timeline->pin_count, 0);
 
@@ -77,14 +92,8 @@ static int intel_timeline_init(struct intel_timeline *timeline,
 		timeline->hwsp_ggtt = hwsp;
 	}
 
-	vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	timeline->hwsp_map = vaddr;
-	seqno = vaddr + timeline->hwsp_offset;
-	WRITE_ONCE(*seqno, 0);
-	timeline->hwsp_seqno = seqno;
+	timeline->hwsp_map = NULL;
+	timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset;
 
 	GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
 
@@ -114,7 +123,8 @@ static void intel_timeline_fini(struct rcu_head *rcu)
 	struct intel_timeline *timeline =
 		container_of(rcu, struct intel_timeline, rcu);
 
-	i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
+	if (timeline->hwsp_map)
+		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
 
 	i915_vma_put(timeline->hwsp_ggtt);
 	i915_active_fini(&timeline->active);
@@ -174,6 +184,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
 	if (atomic_add_unless(&tl->pin_count, 1, 0))
 		return 0;
 
+	if (!tl->hwsp_map) {
+		err = intel_timeline_pin_map(tl);
+		if (err)
+			return err;
+	}
+
 	err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
 	if (err)
 		return err;
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
index f502a619843f..c50543d3ed5d 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
@@ -110,4 +110,6 @@ void intel_gt_show_timelines(struct intel_gt *gt,
 						  const char *prefix,
 						  int indent));
 
+I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
+
 #endif
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index 2f830017c51d..effbac877eec 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -32,9 +32,20 @@
 #include "mock_engine.h"
 #include "selftests/mock_request.h"
 
-static void mock_timeline_pin(struct intel_timeline *tl)
+static int mock_timeline_pin(struct intel_timeline *tl)
 {
+	int err;
+
+	if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
+		return -EBUSY;
+
+	err = intel_timeline_pin_map(tl);
+	i915_gem_object_unlock(tl->hwsp_ggtt->obj);
+	if (err)
+		return err;
+
 	atomic_inc(&tl->pin_count);
+	return 0;
 }
 
 static void mock_timeline_unpin(struct intel_timeline *tl)
@@ -152,6 +163,8 @@ static void mock_context_destroy(struct kref *ref)
 
 static int mock_context_alloc(struct intel_context *ce)
 {
+	int err;
+
 	ce->ring = mock_ring(ce->engine);
 	if (!ce->ring)
 		return -ENOMEM;
@@ -162,7 +175,12 @@ static int mock_context_alloc(struct intel_context *ce)
 		return PTR_ERR(ce->timeline);
 	}
 
-	mock_timeline_pin(ce->timeline);
+	err = mock_timeline_pin(ce->timeline);
+	if (err) {
+		intel_timeline_put(ce->timeline);
+		ce->timeline = NULL;
+		return err;
+	}
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index 6e32e91cdab4..2e2bea7ac7a5 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -35,7 +35,7 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
 {
 	unsigned long address = (unsigned long)page_address(hwsp_page(tl));
 
-	return (address + tl->hwsp_offset) / CACHELINE_BYTES;
+	return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;
 }
 
 #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
@@ -59,6 +59,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state,
 	tl = xchg(&state->history[idx], tl);
 	if (tl) {
 		radix_tree_delete(&state->cachelines, hwsp_cacheline(tl));
+		intel_timeline_unpin(tl);
 		intel_timeline_put(tl);
 	}
 }
@@ -78,6 +79,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
 		if (IS_ERR(tl))
 			return PTR_ERR(tl);
 
+		err = intel_timeline_pin(tl, NULL);
+		if (err) {
+			intel_timeline_put(tl);
+			return err;
+		}
+
 		cacheline = hwsp_cacheline(tl);
 		err = radix_tree_insert(&state->cachelines, cacheline, tl);
 		if (err) {
@@ -85,6 +92,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
 				pr_err("HWSP cacheline %lu already used; duplicate allocation!\n",
 				       cacheline);
 			}
+			intel_timeline_unpin(tl);
 			intel_timeline_put(tl);
 			return err;
 		}
@@ -452,7 +460,7 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value)
 }
 
 static struct i915_request *
-tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
+checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
 {
 	struct i915_request *rq;
 	int err;
@@ -463,6 +471,13 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
 		goto out;
 	}
 
+	if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
+		pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
+		       *tl->hwsp_seqno, tl->seqno);
+		intel_timeline_unpin(tl);
+		return ERR_PTR(-EINVAL);
+	}
+
 	rq = intel_engine_create_kernel_request(engine);
 	if (IS_ERR(rq))
 		goto out_unpin;
@@ -484,25 +499,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
 	return rq;
 }
 
-static struct intel_timeline *
-checked_intel_timeline_create(struct intel_gt *gt)
-{
-	struct intel_timeline *tl;
-
-	tl = intel_timeline_create(gt);
-	if (IS_ERR(tl))
-		return tl;
-
-	if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
-		pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
-		       *tl->hwsp_seqno, tl->seqno);
-		intel_timeline_put(tl);
-		return ERR_PTR(-EINVAL);
-	}
-
-	return tl;
-}
-
 static int live_hwsp_engine(void *arg)
 {
 #define NUM_TIMELINES 4096
@@ -535,13 +531,13 @@ static int live_hwsp_engine(void *arg)
 			struct intel_timeline *tl;
 			struct i915_request *rq;
 
-			tl = checked_intel_timeline_create(gt);
+			tl = intel_timeline_create(gt);
 			if (IS_ERR(tl)) {
 				err = PTR_ERR(tl);
 				break;
 			}
 
-			rq = tl_write(tl, engine, count);
+			rq = checked_tl_write(tl, engine, count);
 			if (IS_ERR(rq)) {
 				intel_timeline_put(tl);
 				err = PTR_ERR(rq);
@@ -608,14 +604,14 @@ static int live_hwsp_alternate(void *arg)
 			if (!intel_engine_can_store_dword(engine))
 				continue;
 
-			tl = checked_intel_timeline_create(gt);
+			tl = intel_timeline_create(gt);
 			if (IS_ERR(tl)) {
 				err = PTR_ERR(tl);
 				goto out;
 			}
 
 			intel_engine_pm_get(engine);
-			rq = tl_write(tl, engine, count);
+			rq = checked_tl_write(tl, engine, count);
 			intel_engine_pm_put(engine);
 			if (IS_ERR(rq)) {
 				intel_timeline_put(tl);
@@ -1258,6 +1254,10 @@ static int live_hwsp_rollover_user(void *arg)
 		if (!tl->has_initial_breadcrumb)
 			goto out;
 
+		err = intel_context_pin(ce);
+		if (err)
+			goto out;
+
 		tl->seqno = -4u;
 		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
 
@@ -1267,7 +1267,7 @@ static int live_hwsp_rollover_user(void *arg)
 			this = intel_context_create_request(ce);
 			if (IS_ERR(this)) {
 				err = PTR_ERR(this);
-				goto out;
+				goto out_unpin;
 			}
 
 			pr_debug("%s: create fence.seqnp:%d\n",
@@ -1286,17 +1286,18 @@ static int live_hwsp_rollover_user(void *arg)
 		if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
 			pr_err("Wait for timeline wrap timed out!\n");
 			err = -EIO;
-			goto out;
+			goto out_unpin;
 		}
 
 		for (i = 0; i < ARRAY_SIZE(rq); i++) {
 			if (!i915_request_completed(rq[i])) {
 				pr_err("Pre-wrap request not completed!\n");
 				err = -EINVAL;
-				goto out;
+				goto out_unpin;
 			}
 		}
-
+out_unpin:
+		intel_context_unpin(ce);
 out:
 		for (i = 0; i < ARRAY_SIZE(rq); i++)
 			i915_request_put(rq[i]);
@@ -1338,13 +1339,13 @@ static int live_hwsp_recycle(void *arg)
 			struct intel_timeline *tl;
 			struct i915_request *rq;
 
-			tl = checked_intel_timeline_create(gt);
+			tl = intel_timeline_create(gt);
 			if (IS_ERR(tl)) {
 				err = PTR_ERR(tl);
 				break;
 			}
 
-			rq = tl_write(tl, engine, count);
+			rq = checked_tl_write(tl, engine, count);
 			if (IS_ERR(rq)) {
 				intel_timeline_put(tl);
 				err = PTR_ERR(rq);
diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
index d53d207ab6eb..f54de0499be7 100644
--- a/drivers/gpu/drm/i915/i915_selftest.h
+++ b/drivers/gpu/drm/i915/i915_selftest.h
@@ -107,6 +107,7 @@ int __i915_subtests(const char *caller,
 
 #define I915_SELFTEST_DECLARE(x) x
 #define I915_SELFTEST_ONLY(x) unlikely(x)
+#define I915_SELFTEST_EXPORT
 
 #else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
 
@@ -116,6 +117,7 @@ static inline int i915_perf_selftests(struct pci_dev *pdev) { return 0; }
 
 #define I915_SELFTEST_DECLARE(x)
 #define I915_SELFTEST_ONLY(x) 0
+#define I915_SELFTEST_EXPORT static
 
 #endif
 
-- 
2.30.0.rc1
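
As an aside, a quick sketch of how the new I915_SELFTEST_EXPORT macro
above is meant to be used (the helper name below is made up for
illustration, not taken from this series): a function in driver code
that the selftests need to reach keeps external linkage when
CONFIG_DRM_I915_SELFTEST is enabled, and collapses to static linkage
otherwise:

	/* "my_tl_helper" is a hypothetical name; only the macro usage matters */
	I915_SELFTEST_EXPORT void my_tl_helper(struct intel_timeline *tl)
	{
		/* external linkage in selftest builds, static otherwise */
	}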

* [Intel-gfx] [PATCH v6 03/64] drm/i915: Move cmd parser pinning to execbuffer
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6 Maarten Lankhorst
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 02/64] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
@ 2021-01-05 15:34 ` Maarten Lankhorst
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 04/64] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2 Maarten Lankhorst
                   ` (67 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:34 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to get rid of allocations in the cmd parser because it needs
to be called from a signaling context. As a first step, move all
pinning into execbuf, where we already hold all the locks.

Allocate jump_whitelist in the execbuffer as well, and add annotations
around intel_engine_cmd_parser() to ensure the command parser is only
called without allocating any memory or taking any locks it's not
supposed to take.

Because i915_gem_object_get_page() may also allocate memory, add a
path to i915_gem_object_get_sg() that avoids memory allocations, and
walk the sg list manually there. It should be similarly fast.

This has the added benefit that all memory allocation errors are
caught before the point of no return, so -ENOMEM can be returned
safely to the execbuf submitter.
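
For reference, a minimal sketch of the fence-signalling annotation
pattern this relies on (do_parse_work() is a placeholder): lockdep
treats everything between begin and end as running in the dma-fence
signalling path, so memory allocations or forbidden locks taken in
between are flagged:

	bool cookie;
	int ret;

	cookie = dma_fence_begin_signalling();
	ret = do_parse_work(); /* must not allocate memory or block */
	dma_fence_end_signalling(cookie);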

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  74 ++++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  21 +++-
 drivers/gpu/drm/i915/gt/intel_ggtt.c          |   2 +-
 drivers/gpu/drm/i915/i915_cmd_parser.c        | 104 ++++++++----------
 drivers/gpu/drm/i915/i915_drv.h               |   7 +-
 drivers/gpu/drm/i915/i915_memcpy.c            |   2 +-
 drivers/gpu/drm/i915/i915_memcpy.h            |   2 +-
 8 files changed, 142 insertions(+), 80 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index cf9a6b4eb913..fe985e1f843d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -28,6 +28,7 @@
 #include "i915_sw_fence_work.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
+#include "i915_memcpy.h"
 
 struct eb_vma {
 	struct i915_vma *vma;
@@ -2274,24 +2275,45 @@ struct eb_parse_work {
 	struct i915_vma *trampoline;
 	unsigned long batch_offset;
 	unsigned long batch_length;
+	unsigned long *jump_whitelist;
+	const void *batch_map;
+	void *shadow_map;
 };
 
 static int __eb_parse(struct dma_fence_work *work)
 {
 	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
+	int ret;
+	bool cookie;
 
-	return intel_engine_cmd_parser(pw->engine,
-				       pw->batch,
-				       pw->batch_offset,
-				       pw->batch_length,
-				       pw->shadow,
-				       pw->trampoline);
+	cookie = dma_fence_begin_signalling();
+	ret = intel_engine_cmd_parser(pw->engine,
+				      pw->batch,
+				      pw->batch_offset,
+				      pw->batch_length,
+				      pw->shadow,
+				      pw->jump_whitelist,
+				      pw->shadow_map,
+				      pw->batch_map);
+	dma_fence_end_signalling(cookie);
+
+	return ret;
 }
 
 static void __eb_parse_release(struct dma_fence_work *work)
 {
 	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
 
+	if (!IS_ERR_OR_NULL(pw->jump_whitelist))
+		kfree(pw->jump_whitelist);
+
+	if (pw->batch_map)
+		i915_gem_object_unpin_map(pw->batch->obj);
+	else
+		i915_gem_object_unpin_pages(pw->batch->obj);
+
+	i915_gem_object_unpin_map(pw->shadow->obj);
+
 	if (pw->trampoline)
 		i915_active_release(&pw->trampoline->active);
 	i915_active_release(&pw->shadow->active);
@@ -2341,6 +2363,8 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 			     struct i915_vma *trampoline)
 {
 	struct eb_parse_work *pw;
+	struct drm_i915_gem_object *batch = eb->batch->vma->obj;
+	bool needs_clflush;
 	int err;
 
 	GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset));
@@ -2364,6 +2388,34 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 			goto err_shadow;
 	}
 
+	pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_FORCE_WB);
+	if (IS_ERR(pw->shadow_map)) {
+		err = PTR_ERR(pw->shadow_map);
+		goto err_trampoline;
+	}
+
+	needs_clflush =
+		!(batch->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
+
+	pw->batch_map = ERR_PTR(-ENODEV);
+	if (needs_clflush && i915_has_memcpy_from_wc())
+		pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC);
+
+	if (IS_ERR(pw->batch_map)) {
+		err = i915_gem_object_pin_pages(batch);
+		if (err)
+			goto err_unmap_shadow;
+		pw->batch_map = NULL;
+	}
+
+	pw->jump_whitelist =
+		intel_engine_cmd_parser_alloc_jump_whitelist(eb->batch_len,
+							     trampoline);
+	if (IS_ERR(pw->jump_whitelist)) {
+		err = PTR_ERR(pw->jump_whitelist);
+		goto err_unmap_batch;
+	}
+
 	dma_fence_work_init(&pw->base, &eb_parse_ops);
 
 	pw->engine = eb->engine;
@@ -2403,6 +2455,16 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 	dma_fence_work_commit_imm(&pw->base);
 	return err;
 
+err_unmap_batch:
+	if (pw->batch_map)
+		i915_gem_object_unpin_map(batch);
+	else
+		i915_gem_object_unpin_pages(batch);
+err_unmap_shadow:
+	i915_gem_object_unpin_map(shadow->obj);
+err_trampoline:
+	if (trampoline)
+		i915_active_release(&trampoline->active);
 err_shadow:
 	i915_active_release(&shadow->active);
 err_batch:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index be14486f63a7..99b18ba0c48d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -275,22 +275,22 @@ struct scatterlist *
 __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 			 struct i915_gem_object_page_iter *iter,
 			 unsigned int n,
-			 unsigned int *offset);
+			 unsigned int *offset, bool allow_alloc);
 
 static inline struct scatterlist *
 i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 		       unsigned int n,
-		       unsigned int *offset)
+		       unsigned int *offset, bool allow_alloc)
 {
-	return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset);
+	return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, allow_alloc);
 }
 
 static inline struct scatterlist *
 i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj,
 			   unsigned int n,
-			   unsigned int *offset)
+			   unsigned int *offset, bool allow_alloc)
 {
-	return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset);
+	return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, allow_alloc);
 }
 
 struct page *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 3db3c667c486..40f4e285594d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -445,7 +445,8 @@ struct scatterlist *
 __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 			 struct i915_gem_object_page_iter *iter,
 			 unsigned int n,
-			 unsigned int *offset)
+			 unsigned int *offset,
+			 bool allow_alloc)
 {
 	const bool dma = iter == &obj->mm.get_dma_page;
 	struct scatterlist *sg;
@@ -467,6 +468,9 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 	if (n < READ_ONCE(iter->sg_idx))
 		goto lookup;
 
+	if (!allow_alloc)
+		goto manual_lookup;
+
 	mutex_lock(&iter->lock);
 
 	/* We prefer to reuse the last sg so that repeated lookup of this
@@ -516,7 +520,16 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
 	if (unlikely(n < idx)) /* insertion completed by another thread */
 		goto lookup;
 
-	/* In case we failed to insert the entry into the radixtree, we need
+	goto manual_walk;
+
+manual_lookup:
+	idx = 0;
+	sg = obj->mm.pages->sgl;
+	count = __sg_page_count(sg);
+
+manual_walk:
+	/*
+	 * In case we failed to insert the entry into the radixtree, we need
 	 * to look beyond the current sg.
 	 */
 	while (idx + count <= n) {
@@ -563,7 +576,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n)
 
 	GEM_BUG_ON(!i915_gem_object_has_struct_page(obj));
 
-	sg = i915_gem_object_get_sg(obj, n, &offset);
+	sg = i915_gem_object_get_sg(obj, n, &offset, true);
 	return nth_page(sg_page(sg), offset);
 }
 
@@ -589,7 +602,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
 	struct scatterlist *sg;
 	unsigned int offset;
 
-	sg = i915_gem_object_get_sg_dma(obj, n, &offset);
+	sg = i915_gem_object_get_sg_dma(obj, n, &offset, true);
 
 	if (len)
 		*len = sg_dma_len(sg) - (offset << PAGE_SHIFT);
diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
index eece0844fbe9..28cff6fb3059 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
@@ -1397,7 +1397,7 @@ intel_partial_pages(const struct i915_ggtt_view *view,
 	if (ret)
 		goto err_sg_alloc;
 
-	iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset);
+	iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset, true);
 	GEM_BUG_ON(!iter);
 
 	sg = st->sgl;
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index 82d0f19e86df..72a1acaf5bb0 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -1137,38 +1137,20 @@ find_reg(const struct intel_engine_cs *engine, u32 addr)
 /* Returns a vmap'd pointer to dst_obj, which the caller must unmap */
 static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
 		       struct drm_i915_gem_object *src_obj,
-		       unsigned long offset, unsigned long length)
+		       unsigned long offset, unsigned long length,
+		       void *dst, const void *src)
 {
-	bool needs_clflush;
-	void *dst, *src;
-	int ret;
-
-	dst = i915_gem_object_pin_map(dst_obj, I915_MAP_FORCE_WB);
-	if (IS_ERR(dst))
-		return dst;
-
-	ret = i915_gem_object_pin_pages(src_obj);
-	if (ret) {
-		i915_gem_object_unpin_map(dst_obj);
-		return ERR_PTR(ret);
-	}
-
-	needs_clflush =
+	bool needs_clflush =
 		!(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
 
-	src = ERR_PTR(-ENODEV);
-	if (needs_clflush && i915_has_memcpy_from_wc()) {
-		src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
-		if (!IS_ERR(src)) {
-			i915_unaligned_memcpy_from_wc(dst,
-						      src + offset,
-						      length);
-			i915_gem_object_unpin_map(src_obj);
-		}
-	}
-	if (IS_ERR(src)) {
-		unsigned long x, n, remain;
+	if (src) {
+		GEM_BUG_ON(!needs_clflush);
+		i915_unaligned_memcpy_from_wc(dst, src + offset, length);
+	} else {
+		struct scatterlist *sg;
 		void *ptr;
+		unsigned int x, sg_ofs;
+		unsigned long remain;
 
 		/*
 		 * We can avoid clflushing partial cachelines before the write
@@ -1185,23 +1167,31 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
 
 		ptr = dst;
 		x = offset_in_page(offset);
-		for (n = offset >> PAGE_SHIFT; remain; n++) {
-			int len = min(remain, PAGE_SIZE - x);
-
-			src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
-			if (needs_clflush)
-				drm_clflush_virt_range(src + x, len);
-			memcpy(ptr, src + x, len);
-			kunmap_atomic(src);
-
-			ptr += len;
-			remain -= len;
-			x = 0;
+		sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false);
+
+		while (remain) {
+			unsigned long sg_max = sg->length >> PAGE_SHIFT;
+
+			for (; remain && sg_ofs < sg_max; sg_ofs++) {
+				unsigned long len = min(remain, PAGE_SIZE - x);
+				void *map;
+
+				map = kmap_atomic(nth_page(sg_page(sg), sg_ofs));
+				if (needs_clflush)
+					drm_clflush_virt_range(map + x, len);
+				memcpy(ptr, map + x, len);
+				kunmap_atomic(map);
+
+				ptr += len;
+				remain -= len;
+				x = 0;
+			}
+
+			sg_ofs = 0;
+			sg = sg_next(sg);
 		}
 	}
 
-	i915_gem_object_unpin_pages(src_obj);
-
 	memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
 
 	/* dst_obj is returned with vmap pinned */
@@ -1363,9 +1353,6 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
 	if (target_cmd_index == offset)
 		return 0;
 
-	if (IS_ERR(jump_whitelist))
-		return PTR_ERR(jump_whitelist);
-
 	if (!test_bit(target_cmd_index, jump_whitelist)) {
 		DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
 			  jump_target);
@@ -1375,10 +1362,14 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
 	return 0;
 }
 
-static unsigned long *alloc_whitelist(u32 batch_length)
+unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+							    bool trampoline)
 {
 	unsigned long *jmp;
 
+	if (trampoline)
+		return NULL;
+
 	/*
 	 * We expect batch_length to be less than 256KiB for known users,
 	 * i.e. we need at most an 8KiB bitmap allocation which should be
@@ -1416,14 +1407,16 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 			    unsigned long batch_offset,
 			    unsigned long batch_length,
 			    struct i915_vma *shadow,
-			    bool trampoline)
+			    unsigned long *jump_whitelist,
+			    void *shadow_map,
+			    const void *batch_map)
 {
 	u32 *cmd, *batch_end, offset = 0;
 	struct drm_i915_cmd_descriptor default_desc = noop_desc;
 	const struct drm_i915_cmd_descriptor *desc = &default_desc;
-	unsigned long *jump_whitelist;
 	u64 batch_addr, shadow_addr;
 	int ret = 0;
+	bool trampoline = !jump_whitelist;
 
 	GEM_BUG_ON(!IS_ALIGNED(batch_offset, sizeof(*cmd)));
 	GEM_BUG_ON(!IS_ALIGNED(batch_length, sizeof(*cmd)));
@@ -1431,16 +1424,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 				     batch->size));
 	GEM_BUG_ON(!batch_length);
 
-	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length);
-	if (IS_ERR(cmd)) {
-		DRM_DEBUG("CMD: Failed to copy batch\n");
-		return PTR_ERR(cmd);
-	}
-
-	jump_whitelist = NULL;
-	if (!trampoline)
-		/* Defer failure until attempted use */
-		jump_whitelist = alloc_whitelist(batch_length);
+	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length,
+			 shadow_map, batch_map);
 
 	shadow_addr = gen8_canonical_addr(shadow->node.start);
 	batch_addr = gen8_canonical_addr(batch->node.start + batch_offset);
@@ -1541,9 +1526,6 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 
 	i915_gem_object_flush_map(shadow->obj);
 
-	if (!IS_ERR_OR_NULL(jump_whitelist))
-		kfree(jump_whitelist);
-	i915_gem_object_unpin_map(shadow->obj);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 5e5bcef20e33..5310562da9cb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1960,12 +1960,17 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
 int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
 void intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
 void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
+unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+							    bool trampoline);
+
 int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 			    struct i915_vma *batch,
 			    unsigned long batch_offset,
 			    unsigned long batch_length,
 			    struct i915_vma *shadow,
-			    bool trampoline);
+			    unsigned long *jump_whitelist,
+			    void *shadow_map,
+			    const void *batch_map);
 #define I915_CMD_PARSER_TRAMPOLINE_SIZE 8
 
 /* intel_device_info.c */
diff --git a/drivers/gpu/drm/i915/i915_memcpy.c b/drivers/gpu/drm/i915/i915_memcpy.c
index 7b3b83bd5ab8..1b021a4902de 100644
--- a/drivers/gpu/drm/i915/i915_memcpy.c
+++ b/drivers/gpu/drm/i915/i915_memcpy.c
@@ -135,7 +135,7 @@ bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len)
  * accepts that its arguments may not be aligned, but are valid for the
  * potential 16-byte read past the end.
  */
-void i915_unaligned_memcpy_from_wc(void *dst, void *src, unsigned long len)
+void i915_unaligned_memcpy_from_wc(void *dst, const void *src, unsigned long len)
 {
 	unsigned long addr;
 
diff --git a/drivers/gpu/drm/i915/i915_memcpy.h b/drivers/gpu/drm/i915/i915_memcpy.h
index e36d30edd987..3df063a3293b 100644
--- a/drivers/gpu/drm/i915/i915_memcpy.h
+++ b/drivers/gpu/drm/i915/i915_memcpy.h
@@ -13,7 +13,7 @@ struct drm_i915_private;
 void i915_memcpy_init_early(struct drm_i915_private *i915);
 
 bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len);
-void i915_unaligned_memcpy_from_wc(void *dst, void *src, unsigned long len);
+void i915_unaligned_memcpy_from_wc(void *dst, const void *src, unsigned long len);
 
 /* The movntdqa instructions used for memcpy-from-wc require 16-byte alignment,
  * as well as SSE4.1 support. i915_memcpy_from_wc() will report if it cannot
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 04/64] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (2 preceding siblings ...)
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 03/64] drm/i915: Move cmd parser pinning to execbuffer Maarten Lankhorst
@ 2021-01-05 15:34 ` Maarten Lankhorst
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 05/64] drm/i915: Ensure we hold the object mutex in pin correctly Maarten Lankhorst
                   ` (66 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:34 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

i915_vma_pin may fail with -EDEADLK once we start locking page tables,
so ensure we handle it correctly (see the sketch after the changelog
below).

Changes since v1:
- Drop -EDEADLK todo, this commit handles it.
- Change eb_pin_vma from sort-of-bool + -EDEADLK to a proper int. (Matt)
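
A sketch of the usual i915 ww transaction that makes the -EDEADLK
return recoverable (standard i915 ww helpers; obj/vma assumed in
scope, error handling trimmed for brevity):

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_USER);
	if (err == -EDEADLK) {
		/* drop all held locks and try again from the top */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);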

Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 35 +++++++++++++------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index fe985e1f843d..d5666157dfa6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -420,13 +420,14 @@ static u64 eb_pin_flags(const struct drm_i915_gem_exec_object2 *entry,
 	return pin_flags;
 }
 
-static inline bool
+static inline int
 eb_pin_vma(struct i915_execbuffer *eb,
 	   const struct drm_i915_gem_exec_object2 *entry,
 	   struct eb_vma *ev)
 {
 	struct i915_vma *vma = ev->vma;
 	u64 pin_flags;
+	int err;
 
 	if (vma->node.size)
 		pin_flags = vma->node.start;
@@ -438,24 +439,29 @@ eb_pin_vma(struct i915_execbuffer *eb,
 		pin_flags |= PIN_GLOBAL;
 
 	/* Attempt to reuse the current location if available */
-	/* TODO: Add -EDEADLK handling here */
-	if (unlikely(i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags))) {
+	err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags);
+	if (err == -EDEADLK)
+		return err;
+
+	if (unlikely(err)) {
 		if (entry->flags & EXEC_OBJECT_PINNED)
-			return false;
+			return err;
 
 		/* Failing that pick any _free_ space if suitable */
-		if (unlikely(i915_vma_pin_ww(vma, &eb->ww,
+		err = i915_vma_pin_ww(vma, &eb->ww,
 					     entry->pad_to_size,
 					     entry->alignment,
 					     eb_pin_flags(entry, ev->flags) |
-					     PIN_USER | PIN_NOEVICT)))
-			return false;
+					     PIN_USER | PIN_NOEVICT);
+		if (unlikely(err))
+			return err;
 	}
 
 	if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) {
-		if (unlikely(i915_vma_pin_fence(vma))) {
+		err = i915_vma_pin_fence(vma);
+		if (unlikely(err)) {
 			i915_vma_unpin(vma);
-			return false;
+			return err;
 		}
 
 		if (vma->fence)
@@ -463,7 +469,10 @@ eb_pin_vma(struct i915_execbuffer *eb,
 	}
 
 	ev->flags |= __EXEC_OBJECT_HAS_PIN;
-	return !eb_vma_misplaced(entry, vma, ev->flags);
+	if (eb_vma_misplaced(entry, vma, ev->flags))
+		return -EBADSLT;
+
+	return 0;
 }
 
 static inline void
@@ -899,7 +908,11 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
 		if (err)
 			return err;
 
-		if (eb_pin_vma(eb, entry, ev)) {
+		err = eb_pin_vma(eb, entry, ev);
+		if (err == -EDEADLK)
+			return err;
+
+		if (!err) {
 			if (entry->offset != vma->node.start) {
 				entry->offset = vma->node.start | UPDATE;
 				eb->args->flags |= __EXEC_HAS_RELOC;
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 05/64] drm/i915: Ensure we hold the object mutex in pin correctly.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (3 preceding siblings ...)
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 04/64] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2 Maarten Lankhorst
@ 2021-01-05 15:34 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 06/64] drm/i915: Add gem object locking to madvise Maarten Lankhorst
                   ` (65 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:34 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Currently there are a lot of places where we hold the gem object lock
but haven't yet been converted to the ww dance. Complain loudly about
those places.

i915_vma_pin should be called without the obj lock held, so that it
can perform the ww dance itself, while i915_vma_pin_ww should be
called with the lock held.
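
In other words, the asserted calling convention is (a sketch of usage,
not new API):

	/* outside a ww transaction, the object must be unlocked: */
	err = i915_vma_pin(vma, 0, 0, PIN_USER);

	/* inside a ww transaction, the object must be locked: */
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_USER);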

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #irc
---
 drivers/gpu/drm/i915/gt/intel_renderstate.c |  2 +-
 drivers/gpu/drm/i915/i915_vma.c             | 11 ++++++++++-
 drivers/gpu/drm/i915/i915_vma.h             |  3 +++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_renderstate.c b/drivers/gpu/drm/i915/gt/intel_renderstate.c
index ca816ba22197..334c557673dd 100644
--- a/drivers/gpu/drm/i915/gt/intel_renderstate.c
+++ b/drivers/gpu/drm/i915/gt/intel_renderstate.c
@@ -197,7 +197,7 @@ int intel_renderstate_init(struct intel_renderstate *so,
 	if (err)
 		goto err_context;
 
-	err = i915_vma_pin(so->vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
+	err = i915_vma_pin_ww(so->vma, &so->ww, 0, 0, PIN_GLOBAL | PIN_HIGH);
 	if (err)
 		goto err_context;
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index caa9b041616b..7310893086f7 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -865,6 +865,8 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 #ifdef CONFIG_PROVE_LOCKING
 	if (debug_locks && lockdep_is_held(&vma->vm->i915->drm.struct_mutex))
 		WARN_ON(!ww);
+	if (debug_locks && ww && vma->resv)
+		assert_vma_held(vma);
 #endif
 
 	BUILD_BUG_ON(PIN_GLOBAL != I915_VMA_GLOBAL_BIND);
@@ -1020,8 +1022,15 @@ int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 
 	GEM_BUG_ON(!i915_vma_is_ggtt(vma));
 
+#ifdef CONFIG_LOCKDEP
+	WARN_ON(!ww && vma->resv && dma_resv_held(vma->resv));
+#endif
+
 	do {
-		err = i915_vma_pin_ww(vma, ww, 0, align, flags | PIN_GLOBAL);
+		if (ww)
+			err = i915_vma_pin_ww(vma, ww, 0, align, flags | PIN_GLOBAL);
+		else
+			err = i915_vma_pin(vma, 0, align, flags | PIN_GLOBAL);
 		if (err != -ENOSPC) {
 			if (!err) {
 				err = i915_vma_wait_for_bind(vma);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 5b3a3c653454..838bbbeb11cc 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -243,6 +243,9 @@ i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 static inline int __must_check
 i915_vma_pin(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 {
+#ifdef CONFIG_LOCKDEP
+	WARN_ON_ONCE(vma->resv && dma_resv_held(vma->resv));
+#endif
 	return i915_vma_pin_ww(vma, NULL, size, alignment, flags);
 }
 
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 06/64] drm/i915: Add gem object locking to madvise.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (4 preceding siblings ...)
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 05/64] drm/i915: Ensure we hold the object mutex in pin correctly Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 07/64] drm/i915: Move HAS_STRUCT_PAGE to obj->flags Maarten Lankhorst
                   ` (64 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

madvise doesn't need a full ww transaction; it only checks whether
pages are bound, so a plain object lock is enough (sketched below).
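
The resulting lock nesting in the ioctl, sketched (this mirrors the
diff below):

	err = i915_gem_object_lock_interruptible(obj, NULL);
	if (err)
		goto out;
	err = mutex_lock_interruptible(&obj->mm.lock);
	if (err)
		goto out_ww;
	/* ... inspect and update obj->mm state under both locks ... */
	mutex_unlock(&obj->mm.lock);
out_ww:
	i915_gem_object_unlock(obj);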

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #irc
---
 drivers/gpu/drm/i915/i915_gem.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 17a4636ee542..209d244e9879 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1051,10 +1051,14 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
+	err = i915_gem_object_lock_interruptible(obj, NULL);
 	if (err)
 		goto out;
 
+	err = mutex_lock_interruptible(&obj->mm.lock);
+	if (err)
+		goto out_ww;
+
 	if (i915_gem_object_has_pages(obj) &&
 	    i915_gem_object_is_tiled(obj) &&
 	    i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -1099,6 +1103,8 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	args->retained = obj->mm.madv != __I915_MADV_PURGED;
 	mutex_unlock(&obj->mm.lock);
 
+out_ww:
+	i915_gem_object_unlock(obj);
 out:
 	i915_gem_object_put(obj);
 	return err;
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 07/64] drm/i915: Move HAS_STRUCT_PAGE to obj->flags
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (5 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 06/64] drm/i915: Add gem object locking to madvise Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 08/64] drm/i915: Rework struct phys attachment handling Maarten Lankhorst
                   ` (63 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We want to stop swapping the ops structure when attaching phys pages,
so we need to kill off HAS_STRUCT_PAGE in ops->flags and put it in the
bo's own flags instead.

This removes a potential race of dereferencing the wrong obj->ops
without the ww mutex held.
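
The net effect on the query, sketched (both forms appear in the diff
below):

	bool has_page;

	/* before: the flag lived in the shared, swappable ops struct */
	has_page = i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE);

	/* after: the flag lives in the object itself, set once at init */
	has_page = obj->flags & I915_BO_ALLOC_STRUCT_PAGE;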

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c           |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c         |  6 +++---
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c             |  4 ++--
 drivers/gpu/drm/i915/gem/i915_gem_mman.c             |  7 +++----
 drivers/gpu/drm/i915/gem/i915_gem_object.c           |  4 +++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h           |  5 +++--
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h     |  8 +++++---
 drivers/gpu/drm/i915/gem/i915_gem_pages.c            |  5 ++---
 drivers/gpu/drm/i915/gem/i915_gem_phys.c             |  2 ++
 drivers/gpu/drm/i915/gem/i915_gem_region.c           |  4 +---
 drivers/gpu/drm/i915/gem/i915_gem_region.h           |  3 +--
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c            |  8 ++++----
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c           |  4 ++--
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c          |  6 +++---
 drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c |  4 ++--
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c      | 10 +++++-----
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c   | 11 ++++-------
 drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c   | 12 ++++++++++++
 drivers/gpu/drm/i915/gvt/dmabuf.c                    |  2 +-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c        |  2 +-
 drivers/gpu/drm/i915/selftests/mock_region.c         |  4 ++--
 21 files changed, 62 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 04e9c04545ad..36e3c2765f4c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -258,7 +258,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 	}
 
 	drm_gem_private_object_init(dev, &obj->base, dma_buf->size);
-	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class, 0);
 	obj->base.import_attach = attach;
 	obj->base.resv = dma_buf->resv;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ad22f42541bd..21cc40897ca8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -138,8 +138,7 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
 
 static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
 	.name = "i915_gem_object_internal",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 	.get_pages = i915_gem_object_get_pages_internal,
 	.put_pages = i915_gem_object_put_pages_internal,
 };
@@ -178,7 +177,8 @@ i915_gem_object_create_internal(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	/*
 	 * Mark the object as volatile, such that the pages are marked as
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 932ee21e6609..e953965f8263 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -45,13 +45,13 @@ __i915_gem_lmem_object_create(struct intel_memory_region *mem,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class, flags);
 
 	obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT;
 
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
 
-	i915_gem_object_init_memory_region(obj, mem, flags);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return obj;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index ec28a6cde49b..c0034d811e50 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -251,7 +251,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 		goto out;
 
 	iomap = -1;
-	if (!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE)) {
+	if (!i915_gem_object_has_struct_page(obj)) {
 		iomap = obj->mm.region->iomap.base;
 		iomap -= obj->mm.region->region.start;
 	}
@@ -653,9 +653,8 @@ __assign_mmap_offset(struct drm_file *file,
 	}
 
 	if (mmap_type != I915_MMAP_TYPE_GTT &&
-	    !i915_gem_object_type_has(obj,
-				      I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-				      I915_GEM_OBJECT_HAS_IOMEM)) {
+	    !i915_gem_object_has_struct_page(obj) &&
+	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) {
 		err = -ENODEV;
 		goto out;
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 00d24000b5e8..1393988bd5af 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -60,7 +60,7 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
 
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
-			  struct lock_class_key *key)
+			  struct lock_class_key *key, unsigned flags)
 {
 	__mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key);
 
@@ -78,6 +78,8 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	init_rcu_head(&obj->rcu);
 
 	obj->ops = ops;
+	GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS);
+	obj->flags = flags;
 
 	obj->mm.madv = I915_MADV_WILLNEED;
 	INIT_RADIX_TREE(&obj->mm.get_page.radix, GFP_KERNEL | __GFP_NOWARN);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 99b18ba0c48d..04c29ed93632 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -23,7 +23,8 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj);
 
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
-			  struct lock_class_key *key);
+			  struct lock_class_key *key,
+			  unsigned alloc_flags);
 struct drm_i915_gem_object *
 i915_gem_object_create_shmem(struct drm_i915_private *i915,
 			     resource_size_t size);
@@ -197,7 +198,7 @@ i915_gem_object_type_has(const struct drm_i915_gem_object *obj,
 static inline bool
 i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj)
 {
-	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE);
+	return obj->flags & I915_BO_ALLOC_STRUCT_PAGE;
 }
 
 static inline bool
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index e2d9b7e1e152..b53e44b06b09 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -30,7 +30,6 @@ struct i915_lut_handle {
 
 struct drm_i915_gem_object_ops {
 	unsigned int flags;
-#define I915_GEM_OBJECT_HAS_STRUCT_PAGE	BIT(0)
 #define I915_GEM_OBJECT_HAS_IOMEM	BIT(1)
 #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
 #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
@@ -165,8 +164,11 @@ struct drm_i915_gem_object {
 	unsigned long flags;
 #define I915_BO_ALLOC_CONTIGUOUS BIT(0)
 #define I915_BO_ALLOC_VOLATILE   BIT(1)
-#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE)
-#define I915_BO_READONLY         BIT(2)
+#define I915_BO_ALLOC_STRUCT_PAGE BIT(2)
+#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
+			     I915_BO_ALLOC_VOLATILE | \
+			     I915_BO_ALLOC_STRUCT_PAGE)
+#define I915_BO_READONLY         BIT(3)
 
 	/*
 	 * Is the object to be mapped as read-only to the GPU
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 40f4e285594d..63b50c3559f7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -330,13 +330,12 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 			      enum i915_map_type type)
 {
 	enum i915_map_type has_type;
-	unsigned int flags;
 	bool pinned;
 	void *ptr;
 	int err;
 
-	flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
-	if (!i915_gem_object_type_has(obj, flags))
+	if (!i915_gem_object_has_struct_page(obj) &&
+	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return ERR_PTR(-ENXIO);
 
 	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 3a4dfe2ef1da..965590d3a570 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -240,6 +240,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	pages = __i915_gem_object_unset_pages(obj);
 
 	obj->ops = &i915_gem_phys_ops;
+	obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
 
 	err = ____i915_gem_object_get_pages(obj);
 	if (err)
@@ -258,6 +259,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 err_xfer:
 	obj->ops = &i915_gem_shmem_ops;
+	obj->flags |= I915_BO_ALLOC_STRUCT_PAGE;
 	if (!IS_ERR_OR_NULL(pages)) {
 		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index 835bd01f2e5d..aa685033305a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -106,13 +106,11 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
 }
 
 void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
-					struct intel_memory_region *mem,
-					unsigned long flags)
+					struct intel_memory_region *mem)
 {
 	INIT_LIST_HEAD(&obj->mm.blocks);
 	obj->mm.region = intel_memory_region_get(mem);
 
-	obj->flags |= flags;
 	if (obj->base.size <= mem->min_page_size)
 		obj->flags |= I915_BO_ALLOC_CONTIGUOUS;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
index f2ff6f8bff74..ebddc86d78f7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -17,8 +17,7 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages);
 
 void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
-					struct intel_memory_region *mem,
-					unsigned long flags);
+					struct intel_memory_region *mem);
 void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj);
 
 struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 75e8b71c18b9..31c617a1115f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -430,8 +430,7 @@ static void shmem_release(struct drm_i915_gem_object *obj)
 
 const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.name = "i915_gem_object_shmem",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 
 	.get_pages = shmem_get_pages,
 	.put_pages = shmem_put_pages,
@@ -496,7 +495,8 @@ create_shmem(struct intel_memory_region *mem,
 	mapping_set_gfp_mask(mapping, mask);
 	GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
 
-	i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
@@ -520,7 +520,7 @@ create_shmem(struct intel_memory_region *mem,
 
 	i915_gem_object_set_cache_coherency(obj, cache_level);
 
-	i915_gem_object_init_memory_region(obj, mem, 0);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return obj;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 29bffc6afcc1..5372b888ba01 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -636,7 +636,7 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 		goto err;
 
 	drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size);
-	i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, 0);
 
 	obj->stolen = stolen;
 	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
@@ -647,7 +647,7 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 	if (err)
 		goto cleanup;
 
-	i915_gem_object_init_memory_region(obj, mem, 0);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return obj;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index f2eaed6aca3d..30edc5a0a54e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -702,8 +702,7 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 
 static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE |
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
 		 I915_GEM_OBJECT_NO_MMAP |
 		 I915_GEM_OBJECT_ASYNC_CANCEL,
 	.get_pages = i915_gem_userptr_get_pages,
@@ -810,7 +809,8 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 		return -ENOMEM;
 
 	drm_gem_private_object_init(dev, &obj->base, args->user_size);
-	i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class);
+	i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
index a768ec61e966..dfad86d74dd0 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
@@ -89,7 +89,6 @@ static void huge_put_pages(struct drm_i915_gem_object *obj,
 
 static const struct drm_i915_gem_object_ops huge_ops = {
 	.name = "huge-gem",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
 	.get_pages = huge_get_pages,
 	.put_pages = huge_put_pages,
 };
@@ -115,7 +114,8 @@ huge_gem_object(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, dma_size);
-	i915_gem_object_init(obj, &huge_ops, &lock_class);
+	i915_gem_object_init(obj, &huge_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index aacf4856ccb4..6c2241b7387b 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -140,8 +140,7 @@ static void put_huge_pages(struct drm_i915_gem_object *obj,
 
 static const struct drm_i915_gem_object_ops huge_page_ops = {
 	.name = "huge-gem",
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 	.get_pages = get_huge_pages,
 	.put_pages = put_huge_pages,
 };
@@ -168,7 +167,8 @@ huge_pages_object(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &huge_page_ops, &lock_class);
+	i915_gem_object_init(obj, &huge_page_ops, &lock_class,
+			     I915_BO_ALLOC_STRUCT_PAGE);
 
 	i915_gem_object_set_volatile(obj);
 
@@ -319,9 +319,9 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
 
 	if (single)
-		i915_gem_object_init(obj, &fake_ops_single, &lock_class);
+		i915_gem_object_init(obj, &fake_ops_single, &lock_class, 0);
 	else
-		i915_gem_object_init(obj, &fake_ops, &lock_class);
+		i915_gem_object_init(obj, &fake_ops, &lock_class, 0);
 
 	i915_gem_object_set_volatile(obj);
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index d429c7643ff2..44908c68e331 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -835,9 +835,8 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
 		return false;
 
 	if (type != I915_MMAP_TYPE_GTT &&
-	    !i915_gem_object_type_has(obj,
-				      I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-				      I915_GEM_OBJECT_HAS_IOMEM))
+	    !i915_gem_object_has_struct_page(obj) &&
+	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return false;
 
 	return true;
@@ -977,10 +976,8 @@ static const char *repr_mmap_type(enum i915_mmap_type type)
 
 static bool can_access(const struct drm_i915_gem_object *obj)
 {
-	unsigned int flags =
-		I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
-
-	return i915_gem_object_type_has(obj, flags);
+	return i915_gem_object_has_struct_page(obj) ||
+	       i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM);
 }
 
 static int __igt_mmap_access(struct drm_i915_private *i915,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
index 8cee68c6a6dc..fb6a17701310 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
@@ -25,12 +25,24 @@ static int mock_phys_object(void *arg)
 		goto out;
 	}
 
+	if (!i915_gem_object_has_struct_page(obj)) {
+		err = -EINVAL;
+		pr_err("shmem has no struct page\n");
+		goto out_obj;
+	}
+
 	err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
 	if (err) {
 		pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
 		goto out_obj;
 	}
 
+	if (i915_gem_object_has_struct_page(obj)) {
+		err = -EINVAL;
+		pr_err("shmem has a struct page\n");
+		goto out_obj;
+	}
+
 	if (obj->ops != &i915_gem_phys_ops) {
 		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
 		err = -EINVAL;
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
index c3eb3838fe88..d4f883f35b95 100644
--- a/drivers/gpu/drm/i915/gvt/dmabuf.c
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
@@ -218,7 +218,7 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
 
 	drm_gem_private_object_init(dev, &obj->base,
 		roundup(info->size, PAGE_SIZE));
-	i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class);
+	i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class, 0);
 	i915_gem_object_set_readonly(obj);
 
 	obj->read_domains = I915_GEM_DOMAIN_GTT;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 70e07e9b78c2..d4e5b2df684d 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -121,7 +121,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size)
 		goto err;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &fake_ops, &lock_class);
+	i915_gem_object_init(obj, &fake_ops, &lock_class, 0);
 
 	i915_gem_object_set_volatile(obj);
 
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 979d96f27c43..b046bd1a9ad3 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -32,13 +32,13 @@ mock_object_create(struct intel_memory_region *mem,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class);
+	i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class, flags);
 
 	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
 
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
 
-	i915_gem_object_init_memory_region(obj, mem, flags);
+	i915_gem_object_init_memory_region(obj, mem);
 
 	return obj;
 }
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 08/64] drm/i915: Rework struct phys attachment handling
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (6 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 07/64] drm/i915: Move HAS_STRUCT_PAGE to obj->flags Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 09/64] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2 Maarten Lankhorst
                   ` (62 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Instead of creating a separate object type, we change the shmem type
to clear its struct page backing. This ensures we never run into a
race from exchanging obj->ops with other function pointers, because
obj->ops is no longer swapped at all.
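
The pattern, sketched: instead of swapping obj->ops to a phys-specific
table, the shmem callbacks branch on whether the object still has
struct page backing (as the pread/pwrite/put_pages hooks in the diff
below do):

	static int
	shmem_pread(struct drm_i915_gem_object *obj,
		    const struct drm_i915_gem_pread *arg)
	{
		/* phys-backed objects take the phys path... */
		if (!i915_gem_object_has_struct_page(obj))
			return i915_gem_object_pread_phys(obj, arg);

		/* ...everything else falls back to the generic pread */
		return -ENODEV;
	}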

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |   8 ++
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 102 +++++++++---------
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  22 +++-
 .../drm/i915/gem/selftests/i915_gem_phys.c    |   6 --
 4 files changed, 78 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 04c29ed93632..aa55afc0c858 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -37,7 +37,15 @@ void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages,
 				     bool needs_clflush);
 
+int i915_gem_object_pwrite_phys(struct drm_i915_gem_object *obj,
+				const struct drm_i915_gem_pwrite *args);
+int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
+			       const struct drm_i915_gem_pread *args);
+
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align);
+void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
+				    struct sg_table *pages);
+
 
 void i915_gem_flush_free_objects(struct drm_i915_private *i915);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 965590d3a570..4bdd0429c08b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -76,6 +76,8 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 
 	intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
 
+	/* We're no longer struct page backed */
+	obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
 	__i915_gem_object_set_pages(obj, st, sg->length);
 
 	return 0;
@@ -89,7 +91,7 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 	return -ENOMEM;
 }
 
-static void
+void
 i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 			       struct sg_table *pages)
 {
@@ -134,9 +136,8 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 			  vaddr, dma);
 }
 
-static int
-phys_pwrite(struct drm_i915_gem_object *obj,
-	    const struct drm_i915_gem_pwrite *args)
+int i915_gem_object_pwrite_phys(struct drm_i915_gem_object *obj,
+				const struct drm_i915_gem_pwrite *args)
 {
 	void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
 	char __user *user_data = u64_to_user_ptr(args->data_ptr);
@@ -165,9 +166,8 @@ phys_pwrite(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
-static int
-phys_pread(struct drm_i915_gem_object *obj,
-	   const struct drm_i915_gem_pread *args)
+int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
+			       const struct drm_i915_gem_pread *args)
 {
 	void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
 	char __user *user_data = u64_to_user_ptr(args->data_ptr);
@@ -186,86 +186,82 @@ phys_pread(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
-static void phys_release(struct drm_i915_gem_object *obj)
+static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj)
 {
-	fput(obj->base.filp);
-}
+	struct sg_table *pages;
+	int err;
 
-static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
-	.name = "i915_gem_object_phys",
-	.get_pages = i915_gem_object_get_pages_phys,
-	.put_pages = i915_gem_object_put_pages_phys,
+	pages = __i915_gem_object_unset_pages(obj);
+
+	err = i915_gem_object_get_pages_phys(obj);
+	if (err)
+		goto err_xfer;
 
-	.pread  = phys_pread,
-	.pwrite = phys_pwrite,
+	/* Perma-pin (until release) the physical set of pages */
+	__i915_gem_object_pin_pages(obj);
 
-	.release = phys_release,
-};
+	if (!IS_ERR_OR_NULL(pages))
+		i915_gem_shmem_ops.put_pages(obj, pages);
+
+	i915_gem_object_release_memory_region(obj);
+	return 0;
+
+err_xfer:
+	if (!IS_ERR_OR_NULL(pages)) {
+		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
+
+		__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
+	}
+	return err;
+}
 
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 {
-	struct sg_table *pages;
 	int err;
 
 	if (align > obj->base.size)
 		return -EINVAL;
 
-	if (obj->ops == &i915_gem_phys_ops)
-		return 0;
-
 	if (obj->ops != &i915_gem_shmem_ops)
 		return -EINVAL;
 
+	if (!i915_gem_object_has_struct_page(obj))
+		return 0;
+
 	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
 	if (err)
 		return err;
 
 	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 
+	if (unlikely(!i915_gem_object_has_struct_page(obj)))
+		goto out;
+
 	if (obj->mm.madv != I915_MADV_WILLNEED) {
 		err = -EFAULT;
-		goto err_unlock;
+		goto out;
 	}
 
 	if (obj->mm.quirked) {
 		err = -EFAULT;
-		goto err_unlock;
+		goto out;
 	}
 
-	if (obj->mm.mapping) {
+	if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) {
 		err = -EBUSY;
-		goto err_unlock;
+		goto out;
 	}
 
-	pages = __i915_gem_object_unset_pages(obj);
-
-	obj->ops = &i915_gem_phys_ops;
-	obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
-
-	err = ____i915_gem_object_get_pages(obj);
-	if (err)
-		goto err_xfer;
-
-	/* Perma-pin (until release) the physical set of pages */
-	__i915_gem_object_pin_pages(obj);
-
-	if (!IS_ERR_OR_NULL(pages))
-		i915_gem_shmem_ops.put_pages(obj, pages);
-
-	i915_gem_object_release_memory_region(obj);
-
-	mutex_unlock(&obj->mm.lock);
-	return 0;
+	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
+		drm_dbg(obj->base.dev,
+			"Attempting to obtain a purgeable object\n");
+		err = -EFAULT;
+		goto out;
+	}
 
-err_xfer:
-	obj->ops = &i915_gem_shmem_ops;
-	obj->flags |= I915_BO_ALLOC_STRUCT_PAGE;
-	if (!IS_ERR_OR_NULL(pages)) {
-		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
+	err = i915_gem_object_shmem_to_phys(obj);
 
-		__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
-	}
-err_unlock:
+out:
 	mutex_unlock(&obj->mm.lock);
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 31c617a1115f..d590e0c3bd00 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -303,6 +303,11 @@ shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
 	struct pagevec pvec;
 	struct page *page;
 
+	if (unlikely(!i915_gem_object_has_struct_page(obj))) {
+		i915_gem_object_put_pages_phys(obj, pages);
+		return;
+	}
+
 	__i915_gem_object_release_shmem(obj, pages, true);
 
 	i915_gem_gtt_finish_pages(obj, pages);
@@ -343,6 +348,9 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
 	/* Caller already validated user args */
 	GEM_BUG_ON(!access_ok(user_data, arg->size));
 
+	if (!i915_gem_object_has_struct_page(obj))
+		return i915_gem_object_pwrite_phys(obj, arg);
+
 	/*
 	 * Before we instantiate/pin the backing store for our use, we
 	 * can prepopulate the shmemfs filp efficiently using a write into
@@ -421,9 +429,20 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
+static int
+shmem_pread(struct drm_i915_gem_object *obj,
+	    const struct drm_i915_gem_pread *arg)
+{
+	if (!i915_gem_object_has_struct_page(obj))
+		return i915_gem_object_pread_phys(obj, arg);
+
+	return -ENODEV;
+}
+
 static void shmem_release(struct drm_i915_gem_object *obj)
 {
-	i915_gem_object_release_memory_region(obj);
+	if (obj->flags & I915_BO_ALLOC_STRUCT_PAGE)
+		i915_gem_object_release_memory_region(obj);
 
 	fput(obj->base.filp);
 }
@@ -438,6 +457,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.writeback = shmem_writeback,
 
 	.pwrite = shmem_pwrite,
+	.pread = shmem_pread,
 
 	.release = shmem_release,
 };
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
index fb6a17701310..0cfa082047fe 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
@@ -38,12 +38,6 @@ static int mock_phys_object(void *arg)
 	}
 
 	if (i915_gem_object_has_struct_page(obj)) {
-		err = -EINVAL;
-		pr_err("shmem has a struct page\n");
-		goto out_obj;
-	}
-
-	if (obj->ops != &i915_gem_phys_ops) {
 		pr_err("i915_gem_object_attach_phys did not create a phys object\n");
 		err = -EINVAL;
 		goto out_obj;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 09/64] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (7 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 08/64] drm/i915: Rework struct phys attachment handling Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 10/64] drm/i915: make lockdep slightly happier about execbuf Maarten Lankhorst
                   ` (61 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

This is a simple addition of i915_gem_object_lock; we may start to
pass ww to get_pages() in the future, but that won't be the case here.
We override shmem's get_pages() handling by calling
i915_gem_object_get_pages_phys() directly, so no ww context is needed.
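
For reference, the locking order established here can be sketched as
follows (a simplified illustration of the diff below, with the error
paths abbreviated):

	/* take the object (ww) lock first, then the nested mm.lock */
	err = i915_gem_object_lock_interruptible(obj, NULL);
	if (err)
		return err;

	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
	if (err)
		goto err_unlock;

	/* ... transfer the shmem pages to the phys backing store ... */

	mutex_unlock(&obj->mm.lock);
err_unlock:
	i915_gem_object_unlock(obj);
	return err;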

Changes since v1:
- Call shmem put pages directly, the callback would
  go down the phys free path.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h |  3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c   | 12 ++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c  | 17 ++++++++++-------
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index aa55afc0c858..fa699d45e5a9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -43,10 +43,11 @@ int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
 			       const struct drm_i915_gem_pread *args);
 
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align);
+void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj,
+				     struct sg_table *pages);
 void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 				    struct sg_table *pages);
 
-
 void i915_gem_flush_free_objects(struct drm_i915_private *i915);
 
 struct sg_table *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 4bdd0429c08b..144e4940eede 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -201,7 +201,7 @@ static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj)
 	__i915_gem_object_pin_pages(obj);
 
 	if (!IS_ERR_OR_NULL(pages))
-		i915_gem_shmem_ops.put_pages(obj, pages);
+		i915_gem_object_put_pages_shmem(obj, pages);
 
 	i915_gem_object_release_memory_region(obj);
 	return 0;
@@ -232,7 +232,13 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = i915_gem_object_lock_interruptible(obj, NULL);
+	if (err)
+		return err;
+
+	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	if (err)
+		goto err_unlock;
 
 	if (unlikely(!i915_gem_object_has_struct_page(obj)))
 		goto out;
@@ -263,6 +269,8 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 out:
 	mutex_unlock(&obj->mm.lock);
+err_unlock:
+	i915_gem_object_unlock(obj);
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index d590e0c3bd00..7a59fd1ea4e5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -296,18 +296,12 @@ __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 	__start_cpu_write(obj);
 }
 
-static void
-shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
+void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj, struct sg_table *pages)
 {
 	struct sgt_iter sgt_iter;
 	struct pagevec pvec;
 	struct page *page;
 
-	if (unlikely(!i915_gem_object_has_struct_page(obj))) {
-		i915_gem_object_put_pages_phys(obj, pages);
-		return;
-	}
-
 	__i915_gem_object_release_shmem(obj, pages, true);
 
 	i915_gem_gtt_finish_pages(obj, pages);
@@ -336,6 +330,15 @@ shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
 	kfree(pages);
 }
 
+static void
+shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
+{
+	if (likely(i915_gem_object_has_struct_page(obj)))
+		i915_gem_object_put_pages_shmem(obj, pages);
+	else
+		i915_gem_object_put_pages_phys(obj, pages);
+}
+
 static int
 shmem_pwrite(struct drm_i915_gem_object *obj,
 	     const struct drm_i915_gem_pwrite *arg)
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 10/64] drm/i915: make lockdep slightly happier about execbuf.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (8 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 09/64] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2 Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 11/64] drm/i915: Disable userptr pread/pwrite support Maarten Lankhorst
                   ` (60 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

As soon as we install fences, we should stop allocating memory
in order to prevent any potential deadlocks.

This is required later on, when we start adding support for
dma-fence annotations.
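
The resulting rule can be sketched like this (illustrative only, using
the calls touched by this patch): reserve the shared-fence slot while
we may still allocate and fail, and only publish the fence once we
cannot:

	/* before any fence is installed: may allocate, may fail */
	err = dma_resv_reserve_shared(vma->resv, 1);
	if (err)
		return err;

	/* ... install fences; past this point we must not allocate ... */

	/* guaranteed not to allocate thanks to the reservation above */
	dma_resv_add_shared_fence(vma->resv, &rq->fence);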

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 24 ++++++++++++++-----
 drivers/gpu/drm/i915/i915_active.c            | 20 ++++++++--------
 drivers/gpu/drm/i915/i915_vma.c               |  8 ++++---
 drivers/gpu/drm/i915/i915_vma.h               |  3 +++
 4 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index d5666157dfa6..9c7480119d53 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -50,11 +50,12 @@ enum {
 #define DBG_FORCE_RELOC 0 /* choose one of the above! */
 };
 
-#define __EXEC_OBJECT_HAS_PIN		BIT(31)
-#define __EXEC_OBJECT_HAS_FENCE		BIT(30)
-#define __EXEC_OBJECT_NEEDS_MAP		BIT(29)
-#define __EXEC_OBJECT_NEEDS_BIAS	BIT(28)
-#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 28) /* all of the above */
+/* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
+#define __EXEC_OBJECT_HAS_PIN		BIT(30)
+#define __EXEC_OBJECT_HAS_FENCE		BIT(29)
+#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
+#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
+#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
 #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
 
 #define __EXEC_HAS_RELOC	BIT(31)
@@ -928,6 +929,12 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
 			}
 		}
 
+		if (!(ev->flags & EXEC_OBJECT_WRITE)) {
+			err = dma_resv_reserve_shared(vma->resv, 1);
+			if (err)
+				return err;
+		}
+
 		GEM_BUG_ON(drm_mm_node_allocated(&vma->node) &&
 			   eb_vma_misplaced(&eb->exec[i], vma, ev->flags));
 	}
@@ -2195,7 +2202,8 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 		}
 
 		if (err == 0)
-			err = i915_vma_move_to_active(vma, eb->request, flags);
+			err = i915_vma_move_to_active(vma, eb->request,
+						      flags | __EXEC_OBJECT_NO_RESERVE);
 	}
 
 	if (unlikely(err))
@@ -2447,6 +2455,10 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
 	if (err)
 		goto err_commit;
 
+	err = dma_resv_reserve_shared(shadow->resv, 1);
+	if (err)
+		goto err_commit;
+
 	/* Wait for all writes (and relocs) into the batch to complete */
 	err = i915_sw_fence_await_reservation(&pw->base.chain,
 					      pw->batch->resv, NULL, false,
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index ab4382841c6b..4e78de64b71b 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -293,18 +293,13 @@ static struct active_node *__active_lookup(struct i915_active *ref, u64 idx)
 static struct i915_active_fence *
 active_instance(struct i915_active *ref, u64 idx)
 {
-	struct active_node *node, *prealloc;
+	struct active_node *node;
 	struct rb_node **p, *parent;
 
 	node = __active_lookup(ref, idx);
 	if (likely(node))
 		return &node->base;
 
-	/* Preallocate a replacement, just in case */
-	prealloc = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
-	if (!prealloc)
-		return NULL;
-
 	spin_lock_irq(&ref->tree_lock);
 	GEM_BUG_ON(i915_active_is_idle(ref));
 
@@ -314,10 +309,8 @@ active_instance(struct i915_active *ref, u64 idx)
 		parent = *p;
 
 		node = rb_entry(parent, struct active_node, node);
-		if (node->timeline == idx) {
-			kmem_cache_free(global.slab_cache, prealloc);
+		if (node->timeline == idx)
 			goto out;
-		}
 
 		if (node->timeline < idx)
 			p = &parent->rb_right;
@@ -325,7 +318,14 @@ active_instance(struct i915_active *ref, u64 idx)
 			p = &parent->rb_left;
 	}
 
-	node = prealloc;
+	/*
+	 * XXX: We should preallocate this before i915_active_ref() is ever
+	 *  called, but we cannot call into fs_reclaim() anyway, so use GFP_ATOMIC.
+	 */
+	node = kmem_cache_alloc(global.slab_cache, GFP_ATOMIC);
+	if (!node)
+		goto out;
+
 	__i915_active_fence_init(&node->base, NULL, node_retire);
 	node->ref = ref;
 	node->timeline = idx;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 7310893086f7..1ffda2aaa7a0 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -1247,9 +1247,11 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 		obj->write_domain = I915_GEM_DOMAIN_RENDER;
 		obj->read_domains = 0;
 	} else {
-		err = dma_resv_reserve_shared(vma->resv, 1);
-		if (unlikely(err))
-			return err;
+		if (!(flags & __EXEC_OBJECT_NO_RESERVE)) {
+			err = dma_resv_reserve_shared(vma->resv, 1);
+			if (unlikely(err))
+				return err;
+		}
 
 		dma_resv_add_shared_fence(vma->resv, &rq->fence);
 		obj->write_domain = 0;
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 838bbbeb11cc..3c951d5428cf 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -52,6 +52,9 @@ static inline bool i915_vma_is_active(const struct i915_vma *vma)
 	return !i915_active_is_idle(&vma->active);
 }
 
+/* do not reserve memory to prevent deadlocks */
+#define __EXEC_OBJECT_NO_RESERVE BIT(31)
+
 int __must_check __i915_vma_move_to_active(struct i915_vma *vma,
 					   struct i915_request *rq);
 int __must_check i915_vma_move_to_active(struct i915_vma *vma,
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 11/64] drm/i915: Disable userptr pread/pwrite support.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (9 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 10/64] drm/i915: make lockdep slightly happier about execbuf Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 12/64] drm/i915: No longer allow exporting userptr through dma-buf Maarten Lankhorst
                   ` (59 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Userptr should not need the kernel for a userspace memcpy; userspace
should call memcpy directly.

Specifically, disable i915_gem_pwrite_ioctl() and i915_gem_pread_ioctl().
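
For illustration, a (hypothetical) userspace snippet that used to do

	struct drm_i915_gem_pwrite pw = {
		.handle = userptr_handle,
		.offset = offset,
		.size = len,
		.data_ptr = (uintptr_t)src,
	};
	ioctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &pw);

can simply write through the pointer it registered instead:

	memcpy((char *)userptr_base + offset, src, len);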

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

-- Still needs an ack from relevant userspace that it won't break, but should be good.
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem.c             |  5 +++++
 2 files changed, 25 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 30edc5a0a54e..8c3d1eb2f96a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -700,6 +700,24 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 	return i915_gem_userptr_init__mmu_notifier(obj, 0);
 }
 
+static int
+i915_gem_userptr_pwrite(struct drm_i915_gem_object *obj,
+			const struct drm_i915_gem_pwrite *args)
+{
+	drm_dbg(obj->base.dev, "pwrite to userptr no longer allowed\n");
+
+	return -EINVAL;
+}
+
+static int
+i915_gem_userptr_pread(struct drm_i915_gem_object *obj,
+		       const struct drm_i915_gem_pread *args)
+{
+	drm_dbg(obj->base.dev, "pread from userptr no longer allowed\n");
+
+	return -EINVAL;
+}
+
 static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
 	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
@@ -708,6 +726,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
 	.dmabuf_export = i915_gem_userptr_dmabuf_export,
+	.pwrite = i915_gem_userptr_pwrite,
+	.pread = i915_gem_userptr_pread,
 	.release = i915_gem_userptr_release,
 };
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 209d244e9879..84eb2a574813 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -502,6 +502,11 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 	}
 
 	trace_i915_gem_object_pread(obj, args->offset, args->size);
+	ret = -ENODEV;
+	if (obj->ops->pread)
+		ret = obj->ops->pread(obj, args);
+	if (ret != -ENODEV)
+		goto out;
 
 	ret = -ENODEV;
 	if (obj->ops->pread)
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 12/64] drm/i915: No longer allow exporting userptr through dma-buf
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (10 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 11/64] drm/i915: Disable userptr pread/pwrite support Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 13/64] drm/i915: Reject more ioctls for userptr Maarten Lankhorst
                   ` (58 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

It doesn't make sense to export a memory address; we will prevent
access to different address spaces this way when we rework userptr
handling, so it's best to disable it explicitly.
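
The user-visible effect (hypothetical libdrm-based sketch) is that a
prime export of a userptr handle now fails:

	int prime_fd;

	/* previously succeeded for userptr handles, now fails with -EINVAL */
	ret = drmPrimeHandleToFD(fd, userptr_handle, DRM_CLOEXEC, &prime_fd);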

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

-- Still needs an ack from relevant userspace that it won't break, but should be good.
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 8c3d1eb2f96a..44af6265948d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -694,10 +694,9 @@ i915_gem_userptr_release(struct drm_i915_gem_object *obj)
 static int
 i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 {
-	if (obj->userptr.mmu_object)
-		return 0;
+	drm_dbg(obj->base.dev, "Exporting userptr no longer allowed\n");
 
-	return i915_gem_userptr_init__mmu_notifier(obj, 0);
+	return -EINVAL;
 }
 
 static int
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 13/64] drm/i915: Reject more ioctls for userptr
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (11 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 12/64] drm/i915: No longer allow exporting userptr through dma-buf Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2 Maarten Lankhorst
                   ` (57 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

There are a couple of ioctls related to tiling and cache placement
that make no sense for userptr; reject those:
- i915_gem_set_tiling_ioctl()
    Tiling should always be linear for userptr. Changing placement will
    fail with -ENXIO.
- i915_gem_set_caching_ioctl()
    Userptr memory should always be cached. Changing will fail with
    -ENXIO.
- i915_gem_set_domain_ioctl()
    Changed to be equivalent to gem_wait, which is correct for the
    cached linear userptr pointers. This is required because we
    cannot grab a reference to the pages in the rework, but waiting
    for idle will do the same; see the sketch after this list.
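
As a sketch of the new set_domain behaviour (illustrative only), for a
userptr object the ioctl now degenerates into a wait for idle:

	struct drm_i915_gem_set_domain sd = {
		.handle = userptr_handle,
		.read_domains = I915_GEM_DOMAIN_CPU,
	};

	/* for userptr this now just waits for the object to go idle,
	 * instead of failing with -ENXIO like other proxy objects */
	ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd);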

These and the previous changes have been tested against beignet
using its own unit tests, and against intel-compute-runtime using
piglit's OpenCL tests.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

-- Still needs an ack from relevant userspace that it won't break, but should be good.
---
 drivers/gpu/drm/i915/display/intel_display.c | 2 +-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c   | 4 +++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h   | 6 ++++++
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c  | 3 ++-
 4 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 0189d379a55e..c8df42193287 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -16399,7 +16399,7 @@ static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
-	if (obj->userptr.mm) {
+	if (i915_gem_object_is_userptr(obj)) {
 		drm_dbg(&i915->drm,
 			"attempting to use a userptr for a framebuffer, denied\n");
 		return -EINVAL;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index fcce6909f201..c1d4bf62b3ea 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -528,7 +528,9 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	 * considered to be outside of any cache domain.
 	 */
 	if (i915_gem_object_is_proxy(obj)) {
-		err = -ENXIO;
+		/* silently allow userptr to complete */
+		if (!i915_gem_object_is_userptr(obj))
+			err = -ENXIO;
 		goto out;
 	}
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index fa699d45e5a9..2ba6d114182c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -534,6 +534,12 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 					      enum fb_op_origin origin);
 
+static inline bool
+i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
+{
+	return obj->userptr.mm;
+}
+
 static inline void
 i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 				  enum fb_op_origin origin)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 44af6265948d..64a946d5f753 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -721,7 +721,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
 	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
 		 I915_GEM_OBJECT_NO_MMAP |
-		 I915_GEM_OBJECT_ASYNC_CANCEL,
+		 I915_GEM_OBJECT_ASYNC_CANCEL |
+		 I915_GEM_OBJECT_IS_PROXY,
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
 	.dmabuf_export = i915_gem_userptr_dmabuf_export,
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (12 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 13/64] drm/i915: Reject more ioctls for userptr Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-11 20:50   ` Dave Airlie
  2021-01-18 10:34   ` Thomas Hellström
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 15/64] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
                   ` (56 subsequent siblings)
  70 siblings, 2 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx

We should not allow this any more, as it will break with the new
userptr implementation. It could still be made to work, but there's
no point in doing so.

Inspection of the beignet OpenCL driver shows that it is only used
when normal userptr is not available, which means that for new kernels
you will need CONFIG_I915_USERPTR.
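
A userspace fallback could look roughly like this (hypothetical
sketch, not taken from beignet itself):

	struct drm_i915_gem_userptr arg = {
		.user_ptr = (uintptr_t)ptr,
		.user_size = size,
		.flags = I915_USERPTR_UNSYNCHRONIZED,
	};

	if (ioctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &arg) < 0 && errno == ENODEV) {
		/* unsynchronized mode is gone; retry with a synchronized userptr */
		arg.flags = 0;
		ret = ioctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &arg);
	}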

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 64a946d5f753..241f865077b9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -224,7 +224,7 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
 	struct i915_mmu_object *mo;
 
 	if (flags & I915_USERPTR_UNSYNCHRONIZED)
-		return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
+		return -ENODEV;
 
 	if (GEM_WARN_ON(!obj->userptr.mm))
 		return -EINVAL;
@@ -274,13 +274,7 @@ static int
 i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
 				    unsigned flags)
 {
-	if ((flags & I915_USERPTR_UNSYNCHRONIZED) == 0)
-		return -ENODEV;
-
-	if (!capable(CAP_SYS_ADMIN))
-		return -EPERM;
-
-	return 0;
+	return -ENODEV;
 }
 
 static void
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 15/64] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (13 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2 Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-18 11:17   ` Thomas Hellström
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5 Maarten Lankhorst
                   ` (55 subsequent siblings)
  70 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx

Now that unsynchronized mappings are removed, the only time userptr
works is when the MMU notifier is enabled. Put all of the userptr
code behind an MMU notifier ifdef.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  4 ++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  2 +
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 58 +++++++------------
 drivers/gpu/drm/i915/i915_drv.h               |  2 +
 5 files changed, 31 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 9c7480119d53..77de94e45f55 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1971,8 +1971,10 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 		err = 0;
 	}
 
+#ifdef CONFIG_MMU_NOTIFIER
 	if (!err)
 		flush_workqueue(eb->i915->mm.userptr_wq);
+#endif
 
 err_relock:
 	i915_gem_ww_ctx_init(&eb->ww, true);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 2ba6d114182c..dc4f3175550d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -537,7 +537,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 static inline bool
 i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
 {
+#ifdef CONFIG_MMU_NOTIFIER
 	return obj->userptr.mm;
+#else
+	return false;
+#endif
 }
 
 static inline void
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index b53e44b06b09..6d3f451c15c6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -289,6 +289,7 @@ struct drm_i915_gem_object {
 	unsigned long *bit_17;
 
 	union {
+#ifdef CONFIG_MMU_NOTIFIER
 		struct i915_gem_userptr {
 			uintptr_t ptr;
 
@@ -296,6 +297,7 @@ struct drm_i915_gem_object {
 			struct i915_mmu_object *mmu_object;
 			struct work_struct *work;
 		} userptr;
+#endif
 
 		unsigned long scratch;
 		u64 encode;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 241f865077b9..1183b28c084b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -15,6 +15,8 @@
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
 
+#if defined(CONFIG_MMU_NOTIFIER)
+
 struct i915_mm_struct {
 	struct mm_struct *mm;
 	struct drm_i915_private *i915;
@@ -24,7 +26,6 @@ struct i915_mm_struct {
 	struct rcu_work work;
 };
 
-#if defined(CONFIG_MMU_NOTIFIER)
 #include <linux/interval_tree.h>
 
 struct i915_mmu_notifier {
@@ -217,15 +218,11 @@ i915_mmu_notifier_find(struct i915_mm_struct *mm)
 }
 
 static int
-i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
-				    unsigned flags)
+i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
 {
 	struct i915_mmu_notifier *mn;
 	struct i915_mmu_object *mo;
 
-	if (flags & I915_USERPTR_UNSYNCHRONIZED)
-		return -ENODEV;
-
 	if (GEM_WARN_ON(!obj->userptr.mm))
 		return -EINVAL;
 
@@ -258,32 +255,6 @@ i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
 	kfree(mn);
 }
 
-#else
-
-static void
-__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
-{
-}
-
-static void
-i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
-{
-}
-
-static int
-i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
-				    unsigned flags)
-{
-	return -ENODEV;
-}
-
-static void
-i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
-		       struct mm_struct *mm)
-{
-}
-
-#endif
 
 static struct i915_mm_struct *
 __i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
@@ -725,6 +696,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.release = i915_gem_userptr_release,
 };
 
+#endif
+
 /*
  * Creates a new mm object that wraps some normal memory from the process
  * context - user memory.
@@ -765,12 +738,12 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 		       void *data,
 		       struct drm_file *file)
 {
-	static struct lock_class_key lock_class;
+	static struct lock_class_key __maybe_unused lock_class;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_i915_gem_userptr *args = data;
-	struct drm_i915_gem_object *obj;
-	int ret;
-	u32 handle;
+	struct drm_i915_gem_object __maybe_unused *obj;
+	int __maybe_unused ret;
+	u32 __maybe_unused handle;
 
 	if (!HAS_LLC(dev_priv) && !HAS_SNOOP(dev_priv)) {
 		/* We cannot support coherent userptr objects on hw without
@@ -809,6 +782,9 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	if (!access_ok((char __user *)(unsigned long)args->user_ptr, args->user_size))
 		return -EFAULT;
 
+	if (args->flags & I915_USERPTR_UNSYNCHRONIZED)
+		return -ENODEV;
+
 	if (args->flags & I915_USERPTR_READ_ONLY) {
 		/*
 		 * On almost all of the older hw, we cannot tell the GPU that
@@ -818,6 +794,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 			return -ENODEV;
 	}
 
+#ifdef CONFIG_MMU_NOTIFIER
 	obj = i915_gem_object_alloc();
 	if (obj == NULL)
 		return -ENOMEM;
@@ -839,7 +816,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	 */
 	ret = i915_gem_userptr_init__mm_struct(obj);
 	if (ret == 0)
-		ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags);
+		ret = i915_gem_userptr_init__mmu_notifier(obj);
 	if (ret == 0)
 		ret = drm_gem_handle_create(file, &obj->base, &handle);
 
@@ -850,10 +827,14 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 
 	args->handle = handle;
 	return 0;
+#else
+	return -ENODEV;
+#endif
 }
 
 int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 {
+#ifdef CONFIG_MMU_NOTIFIER
 	spin_lock_init(&dev_priv->mm_lock);
 	hash_init(dev_priv->mm_structs);
 
@@ -863,11 +844,14 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 				0);
 	if (!dev_priv->mm.userptr_wq)
 		return -ENOMEM;
+#endif
 
 	return 0;
 }
 
 void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
 {
+#ifdef CONFIG_MMU_NOTIFIER
 	destroy_workqueue(dev_priv->mm.userptr_wq);
+#endif
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 5310562da9cb..028b7ff06f57 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -590,12 +590,14 @@ struct i915_gem_mm {
 	struct notifier_block vmap_notifier;
 	struct shrinker shrinker;
 
+#ifdef CONFIG_MMU_NOTIFIER
 	/**
 	 * Workqueue to fault in userptr pages, flushed by the execbuf
 	 * when required but otherwise left to userspace to try again
 	 * on EAGAIN.
 	 */
 	struct workqueue_struct *userptr_wq;
+#endif
 
 	/* shrinker accounting, also useful for userland debugging */
 	u64 shrink_memory;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (14 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 15/64] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-11 20:51   ` Dave Airlie
  2021-01-18 11:30   ` Thomas Hellström (Intel)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 17/64] drm/i915: Flatten obj->mm.lock Maarten Lankhorst
                   ` (54 subsequent siblings)
  70 siblings, 2 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx

Instead of doing what we do currently, which will never work with
PROVE_LOCKING, do the same as AMD does, and something similar to the
relocation slowpath: when all locks are dropped, we acquire the pages
for pinning; once the locks are taken, we transfer those pages to the
bo in .get_pages(). As a final check before installing the fences, we
ensure that the mmu notifier was not called; if it was, we return
-EAGAIN to userspace to signal that it has to start over.
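
The core of the scheme is the standard mmu_interval_notifier sequence
check; roughly (a simplified sketch of the pattern, not the complete
driver code):

	/* with no locks held: sample the notifier seq and pin the pages */
	obj->userptr.notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
	pinned = pin_user_pages_fast(obj->userptr.ptr, num_pages,
				     FOLL_LONGTERM | FOLL_WRITE, pvec);

	/* ... take the ww locks and build the request ... */

	/* just before installing the fences, under the notifier lock: */
	if (mmu_interval_read_retry(&obj->userptr.notifier,
				    obj->userptr.notifier_seq))
		return -EAGAIN;	/* range was invalidated, start over */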

Changes since v1:
- Unbinding is done in submit_init only. submit_begin() removed.
- MMU_NOTFIER -> MMU_NOTIFIER
Changes since v2:
- Make i915->mm.notifier a spinlock.
Changes since v3:
- Add WARN_ON if there are any page references left, should have been 0.
- Return 0 on success in submit_init(), bug from spinlock conversion.
- Release pvec outside of notifier_lock (Thomas).
Changes since v4:
- Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
- Actually check all invalidations in eb_move_to_gpu. (Thomas)
- Do not wait when process is exiting to fix gem_ctx_persistence.userptr.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 101 ++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  35 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 765 ++++++------------
 drivers/gpu/drm/i915/i915_drv.h               |   9 +-
 drivers/gpu/drm/i915/i915_gem.c               |   5 +-
 7 files changed, 344 insertions(+), 583 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 77de94e45f55..d4c065c1da06 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -53,14 +53,16 @@ enum {
 /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
 #define __EXEC_OBJECT_HAS_PIN		BIT(30)
 #define __EXEC_OBJECT_HAS_FENCE		BIT(29)
-#define __EXEC_OBJECT_NEEDS_MAP		BIT(28)
-#define __EXEC_OBJECT_NEEDS_BIAS	BIT(27)
-#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 27) /* all of the above + */
+#define __EXEC_OBJECT_USERPTR_INIT	BIT(28)
+#define __EXEC_OBJECT_NEEDS_MAP		BIT(27)
+#define __EXEC_OBJECT_NEEDS_BIAS	BIT(26)
+#define __EXEC_OBJECT_INTERNAL_FLAGS	(~0u << 26) /* all of the above + */
 #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
 
 #define __EXEC_HAS_RELOC	BIT(31)
 #define __EXEC_ENGINE_PINNED	BIT(30)
-#define __EXEC_INTERNAL_FLAGS	(~0u << 30)
+#define __EXEC_USERPTR_USED	BIT(29)
+#define __EXEC_INTERNAL_FLAGS	(~0u << 29)
 #define UPDATE			PIN_OFFSET_FIXED
 
 #define BATCH_OFFSET_BIAS (256*1024)
@@ -864,6 +866,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
 		}
 
 		eb_add_vma(eb, i, batch, vma);
+
+		if (i915_gem_object_is_userptr(vma->obj)) {
+			err = i915_gem_object_userptr_submit_init(vma->obj);
+			if (err) {
+				if (i + 1 < eb->buffer_count) {
+					/*
+					 * Execbuffer code expects last vma entry to be NULL,
+					 * since we already initialized this entry,
+					 * set the next value to NULL or we mess up
+					 * cleanup handling.
+					 */
+					eb->vma[i + 1].vma = NULL;
+				}
+
+				return err;
+			}
+
+			eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
+			eb->args->flags |= __EXEC_USERPTR_USED;
+		}
 	}
 
 	if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
@@ -965,7 +987,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
 	}
 }
 
-static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
+static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
 {
 	const unsigned int count = eb->buffer_count;
 	unsigned int i;
@@ -979,6 +1001,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
 
 		eb_unreserve_vma(ev);
 
+		if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
+			ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
+			i915_gem_object_userptr_submit_fini(vma->obj);
+		}
+
 		if (final)
 			i915_vma_put(vma);
 	}
@@ -1916,6 +1943,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
 	return 0;
 }
 
+static int eb_reinit_userptr(struct i915_execbuffer *eb)
+{
+	const unsigned int count = eb->buffer_count;
+	unsigned int i;
+	int ret;
+
+	if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
+		return 0;
+
+	for (i = 0; i < count; i++) {
+		struct eb_vma *ev = &eb->vma[i];
+
+		if (!i915_gem_object_is_userptr(ev->vma->obj))
+			continue;
+
+		ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
+		if (ret)
+			return ret;
+
+		ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
+	}
+
+	return 0;
+}
+
 static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 					   struct i915_request *rq)
 {
@@ -1930,7 +1982,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 	}
 
 	/* We may process another execbuffer during the unlock... */
-	eb_release_vmas(eb, false);
+	eb_release_vmas(eb, false, true);
 	i915_gem_ww_ctx_fini(&eb->ww);
 
 	if (rq) {
@@ -1971,10 +2023,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 		err = 0;
 	}
 
-#ifdef CONFIG_MMU_NOTIFIER
 	if (!err)
-		flush_workqueue(eb->i915->mm.userptr_wq);
-#endif
+		err = eb_reinit_userptr(eb);
 
 err_relock:
 	i915_gem_ww_ctx_init(&eb->ww, true);
@@ -2036,7 +2086,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
 
 err:
 	if (err == -EDEADLK) {
-		eb_release_vmas(eb, false);
+		eb_release_vmas(eb, false, false);
 		err = i915_gem_ww_ctx_backoff(&eb->ww);
 		if (!err)
 			goto repeat_validate;
@@ -2133,7 +2183,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
 
 err:
 	if (err == -EDEADLK) {
-		eb_release_vmas(eb, false);
+		eb_release_vmas(eb, false, false);
 		err = i915_gem_ww_ctx_backoff(&eb->ww);
 		if (!err)
 			goto retry;
@@ -2208,6 +2258,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 						      flags | __EXEC_OBJECT_NO_RESERVE);
 	}
 
+#ifdef CONFIG_MMU_NOTIFIER
+	if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
+		spin_lock(&eb->i915->mm.notifier_lock);
+
+		/*
+		 * count is always at least 1, otherwise __EXEC_USERPTR_USED
+		 * could not have been set
+		 */
+		for (i = 0; i < count; i++) {
+			struct eb_vma *ev = &eb->vma[i];
+			struct drm_i915_gem_object *obj = ev->vma->obj;
+
+			if (!i915_gem_object_is_userptr(obj))
+				continue;
+
+			err = i915_gem_object_userptr_submit_done(obj);
+			if (err)
+				break;
+		}
+
+		spin_unlock(&eb->i915->mm.notifier_lock);
+	}
+#endif
+
 	if (unlikely(err))
 		goto err_skip;
 
@@ -3351,7 +3425,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 
 	err = eb_lookup_vmas(&eb);
 	if (err) {
-		eb_release_vmas(&eb, true);
+		eb_release_vmas(&eb, true, true);
 		goto err_engine;
 	}
 
@@ -3423,6 +3497,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 
 	trace_i915_request_queue(eb.request, eb.batch_flags);
 	err = eb_submit(&eb, batch);
+
 err_request:
 	i915_request_get(eb.request);
 	err = eb_request_add(&eb, err);
@@ -3443,7 +3518,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	i915_request_put(eb.request);
 
 err_vma:
-	eb_release_vmas(&eb, true);
+	eb_release_vmas(&eb, true, true);
 	if (eb.trampoline)
 		i915_vma_unpin(eb.trampoline);
 	WARN_ON(err == -EDEADLK);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index dc4f3175550d..db4ddbfcee40 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -33,6 +33,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
 				       const void *data, resource_size_t size);
 
 extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
+
 void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 				     struct sg_table *pages,
 				     bool needs_clflush);
@@ -228,12 +229,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
 	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
 }
 
-static inline bool
-i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
-{
-	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
-}
-
 static inline bool
 i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
 {
@@ -534,16 +529,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 					      enum fb_op_origin origin);
 
-static inline bool
-i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
-{
-#ifdef CONFIG_MMU_NOTIFIER
-	return obj->userptr.mm;
-#else
-	return false;
-#endif
-}
-
 static inline void
 i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 				  enum fb_op_origin origin)
@@ -560,4 +545,22 @@ i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 		__i915_gem_object_invalidate_frontbuffer(obj, origin);
 }
 
+#ifdef CONFIG_MMU_NOTIFIER
+static inline bool
+i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
+{
+	return obj->userptr.notifier.mm;
+}
+
+int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
+int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
+void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
+#else
+static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
+
+static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
+static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
+static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
+#endif
+
 #endif
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 6d3f451c15c6..5234c1ed62d4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -7,6 +7,8 @@
 #ifndef __I915_GEM_OBJECT_TYPES_H__
 #define __I915_GEM_OBJECT_TYPES_H__
 
+#include <linux/mmu_notifier.h>
+
 #include <drm/drm_gem.h>
 #include <uapi/drm/i915_drm.h>
 
@@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops {
 #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
 #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
 #define I915_GEM_OBJECT_NO_MMAP		BIT(4)
-#define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(5)
 
 	/* Interface between the GEM object and its backing storage.
 	 * get_pages() is called once prior to the use of the associated set
@@ -292,10 +293,11 @@ struct drm_i915_gem_object {
 #ifdef CONFIG_MMU_NOTIFIER
 		struct i915_gem_userptr {
 			uintptr_t ptr;
+			unsigned long notifier_seq;
 
-			struct i915_mm_struct *mm;
-			struct i915_mmu_object *mmu_object;
-			struct work_struct *work;
+			struct mmu_interval_notifier notifier;
+			struct page **pvec;
+			int page_ref;
 		} userptr;
 #endif
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 63b50c3559f7..aefdbe36fe6f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -223,7 +223,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	 * get_pages backends we should be better able to handle the
 	 * cancellation of the async task in a more uniform manner.
 	 */
-	if (!pages && !i915_gem_object_needs_async_cancel(obj))
+	if (!pages)
 		pages = ERR_PTR(-EINVAL);
 
 	if (!IS_ERR(pages))
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 1183b28c084b..d07f84328edd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -2,10 +2,39 @@
  * SPDX-License-Identifier: MIT
  *
  * Copyright © 2012-2014 Intel Corporation
+ *
+ * Based on amdgpu_mn, which bears the following notice:
+ *
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ */
+/*
+ * Authors:
+ *    Christian König <christian.koenig@amd.com>
  */
 
 #include <linux/mmu_context.h>
-#include <linux/mmu_notifier.h>
 #include <linux/mempolicy.h>
 #include <linux/swap.h>
 #include <linux/sched/mm.h>
@@ -15,373 +44,114 @@
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
 
-#if defined(CONFIG_MMU_NOTIFIER)
-
-struct i915_mm_struct {
-	struct mm_struct *mm;
-	struct drm_i915_private *i915;
-	struct i915_mmu_notifier *mn;
-	struct hlist_node node;
-	struct kref kref;
-	struct rcu_work work;
-};
-
-#include <linux/interval_tree.h>
-
-struct i915_mmu_notifier {
-	spinlock_t lock;
-	struct hlist_node node;
-	struct mmu_notifier mn;
-	struct rb_root_cached objects;
-	struct i915_mm_struct *mm;
-};
-
-struct i915_mmu_object {
-	struct i915_mmu_notifier *mn;
-	struct drm_i915_gem_object *obj;
-	struct interval_tree_node it;
-};
-
-static void add_object(struct i915_mmu_object *mo)
-{
-	GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb));
-	interval_tree_insert(&mo->it, &mo->mn->objects);
-}
-
-static void del_object(struct i915_mmu_object *mo)
-{
-	if (RB_EMPTY_NODE(&mo->it.rb))
-		return;
-
-	interval_tree_remove(&mo->it, &mo->mn->objects);
-	RB_CLEAR_NODE(&mo->it.rb);
-}
+#ifdef CONFIG_MMU_NOTIFIER
 
-static void
-__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
+/**
+ * i915_gem_userptr_invalidate - callback to notify about mm change
+ *
+ * @mni: the range (mm) is about to update
+ * @range: details on the invalidation
+ * @cur_seq: Value to pass to mmu_interval_set_seq()
+ *
+ * Block for operations on BOs to finish and mark pages as accessed and
+ * potentially dirty.
+ */
+static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
+					const struct mmu_notifier_range *range,
+					unsigned long cur_seq)
 {
-	struct i915_mmu_object *mo = obj->userptr.mmu_object;
-
-	/*
-	 * During mm_invalidate_range we need to cancel any userptr that
-	 * overlaps the range being invalidated. Doing so requires the
-	 * struct_mutex, and that risks recursion. In order to cause
-	 * recursion, the user must alias the userptr address space with
-	 * a GTT mmapping (possible with a MAP_FIXED) - then when we have
-	 * to invalidate that mmaping, mm_invalidate_range is called with
-	 * the userptr address *and* the struct_mutex held.  To prevent that
-	 * we set a flag under the i915_mmu_notifier spinlock to indicate
-	 * whether this object is valid.
-	 */
-	if (!mo)
-		return;
+	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	long r;
 
-	spin_lock(&mo->mn->lock);
-	if (value)
-		add_object(mo);
-	else
-		del_object(mo);
-	spin_unlock(&mo->mn->lock);
-}
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-static int
-userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
-				  const struct mmu_notifier_range *range)
-{
-	struct i915_mmu_notifier *mn =
-		container_of(_mn, struct i915_mmu_notifier, mn);
-	struct interval_tree_node *it;
-	unsigned long end;
-	int ret = 0;
-
-	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
-		return 0;
-
-	/* interval ranges are inclusive, but invalidate range is exclusive */
-	end = range->end - 1;
-
-	spin_lock(&mn->lock);
-	it = interval_tree_iter_first(&mn->objects, range->start, end);
-	while (it) {
-		struct drm_i915_gem_object *obj;
-
-		if (!mmu_notifier_range_blockable(range)) {
-			ret = -EAGAIN;
-			break;
-		}
+	spin_lock(&i915->mm.notifier_lock);
 
-		/*
-		 * The mmu_object is released late when destroying the
-		 * GEM object so it is entirely possible to gain a
-		 * reference on an object in the process of being freed
-		 * since our serialisation is via the spinlock and not
-		 * the struct_mutex - and consequently use it after it
-		 * is freed and then double free it. To prevent that
-		 * use-after-free we only acquire a reference on the
-		 * object if it is not in the process of being destroyed.
-		 */
-		obj = container_of(it, struct i915_mmu_object, it)->obj;
-		if (!kref_get_unless_zero(&obj->base.refcount)) {
-			it = interval_tree_iter_next(it, range->start, end);
-			continue;
-		}
-		spin_unlock(&mn->lock);
+	mmu_interval_set_seq(mni, cur_seq);
 
-		ret = i915_gem_object_unbind(obj,
-					     I915_GEM_OBJECT_UNBIND_ACTIVE |
-					     I915_GEM_OBJECT_UNBIND_BARRIER);
-		if (ret == 0)
-			ret = __i915_gem_object_put_pages(obj);
-		i915_gem_object_put(obj);
-		if (ret)
-			return ret;
+	spin_unlock(&i915->mm.notifier_lock);
 
-		spin_lock(&mn->lock);
+	/* During exit there's no need to wait */
+	if (current->flags & PF_EXITING)
+		return true;
 
-		/*
-		 * As we do not (yet) protect the mmu from concurrent insertion
-		 * over this range, there is no guarantee that this search will
-		 * terminate given a pathologic workload.
-		 */
-		it = interval_tree_iter_first(&mn->objects, range->start, end);
-	}
-	spin_unlock(&mn->lock);
-
-	return ret;
+	/* we will unbind on next submission, still have userptr pins */
+	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
+				      MAX_SCHEDULE_TIMEOUT);
+	if (r <= 0)
+		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
 
+	return true;
 }
 
-static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
-	.invalidate_range_start = userptr_mn_invalidate_range_start,
+static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = {
+	.invalidate = i915_gem_userptr_invalidate,
 };
 
-static struct i915_mmu_notifier *
-i915_mmu_notifier_create(struct i915_mm_struct *mm)
-{
-	struct i915_mmu_notifier *mn;
-
-	mn = kmalloc(sizeof(*mn), GFP_KERNEL);
-	if (mn == NULL)
-		return ERR_PTR(-ENOMEM);
-
-	spin_lock_init(&mn->lock);
-	mn->mn.ops = &i915_gem_userptr_notifier;
-	mn->objects = RB_ROOT_CACHED;
-	mn->mm = mm;
-
-	return mn;
-}
-
-static void
-i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
-{
-	struct i915_mmu_object *mo;
-
-	mo = fetch_and_zero(&obj->userptr.mmu_object);
-	if (!mo)
-		return;
-
-	spin_lock(&mo->mn->lock);
-	del_object(mo);
-	spin_unlock(&mo->mn->lock);
-	kfree(mo);
-}
-
-static struct i915_mmu_notifier *
-i915_mmu_notifier_find(struct i915_mm_struct *mm)
-{
-	struct i915_mmu_notifier *mn, *old;
-	int err;
-
-	mn = READ_ONCE(mm->mn);
-	if (likely(mn))
-		return mn;
-
-	mn = i915_mmu_notifier_create(mm);
-	if (IS_ERR(mn))
-		return mn;
-
-	err = mmu_notifier_register(&mn->mn, mm->mm);
-	if (err) {
-		kfree(mn);
-		return ERR_PTR(err);
-	}
-
-	old = cmpxchg(&mm->mn, NULL, mn);
-	if (old) {
-		mmu_notifier_unregister(&mn->mn, mm->mm);
-		kfree(mn);
-		mn = old;
-	}
-
-	return mn;
-}
-
 static int
 i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
 {
-	struct i915_mmu_notifier *mn;
-	struct i915_mmu_object *mo;
-
-	if (GEM_WARN_ON(!obj->userptr.mm))
-		return -EINVAL;
-
-	mn = i915_mmu_notifier_find(obj->userptr.mm);
-	if (IS_ERR(mn))
-		return PTR_ERR(mn);
-
-	mo = kzalloc(sizeof(*mo), GFP_KERNEL);
-	if (!mo)
-		return -ENOMEM;
-
-	mo->mn = mn;
-	mo->obj = obj;
-	mo->it.start = obj->userptr.ptr;
-	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
-	RB_CLEAR_NODE(&mo->it.rb);
-
-	obj->userptr.mmu_object = mo;
-	return 0;
-}
-
-static void
-i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
-		       struct mm_struct *mm)
-{
-	if (mn == NULL)
-		return;
-
-	mmu_notifier_unregister(&mn->mn, mm);
-	kfree(mn);
+	return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm,
+					    obj->userptr.ptr, obj->base.size,
+					    &i915_gem_userptr_notifier_ops);
 }
 
-
-static struct i915_mm_struct *
-__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
-{
-	struct i915_mm_struct *it, *mm = NULL;
-
-	rcu_read_lock();
-	hash_for_each_possible_rcu(i915->mm_structs,
-				   it, node,
-				   (unsigned long)real)
-		if (it->mm == real && kref_get_unless_zero(&it->kref)) {
-			mm = it;
-			break;
-		}
-	rcu_read_unlock();
-
-	return mm;
-}
-
-static int
-i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
+static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct i915_mm_struct *mm, *new;
-	int ret = 0;
-
-	/* During release of the GEM object we hold the struct_mutex. This
-	 * precludes us from calling mmput() at that time as that may be
-	 * the last reference and so call exit_mmap(). exit_mmap() will
-	 * attempt to reap the vma, and if we were holding a GTT mmap
-	 * would then call drm_gem_vm_close() and attempt to reacquire
-	 * the struct mutex. So in order to avoid that recursion, we have
-	 * to defer releasing the mm reference until after we drop the
-	 * struct_mutex, i.e. we need to schedule a worker to do the clean
-	 * up.
-	 */
-	mm = __i915_mm_struct_find(i915, current->mm);
-	if (mm)
-		goto out;
+	struct page **pvec = NULL;
 
-	new = kmalloc(sizeof(*mm), GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	kref_init(&new->kref);
-	new->i915 = to_i915(obj->base.dev);
-	new->mm = current->mm;
-	new->mn = NULL;
-
-	spin_lock(&i915->mm_lock);
-	mm = __i915_mm_struct_find(i915, current->mm);
-	if (!mm) {
-		hash_add_rcu(i915->mm_structs,
-			     &new->node,
-			     (unsigned long)new->mm);
-		mmgrab(current->mm);
-		mm = new;
+	spin_lock(&i915->mm.notifier_lock);
+	if (!--obj->userptr.page_ref) {
+		pvec = obj->userptr.pvec;
+		obj->userptr.pvec = NULL;
 	}
-	spin_unlock(&i915->mm_lock);
-	if (mm != new)
-		kfree(new);
-
-out:
-	obj->userptr.mm = mm;
-	return ret;
-}
-
-static void
-__i915_mm_struct_free__worker(struct work_struct *work)
-{
-	struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work);
-
-	i915_mmu_notifier_free(mm->mn, mm->mm);
-	mmdrop(mm->mm);
-	kfree(mm);
-}
-
-static void
-__i915_mm_struct_free(struct kref *kref)
-{
-	struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
-
-	spin_lock(&mm->i915->mm_lock);
-	hash_del_rcu(&mm->node);
-	spin_unlock(&mm->i915->mm_lock);
+	GEM_BUG_ON(obj->userptr.page_ref < 0);
+	spin_unlock(&i915->mm.notifier_lock);
 
-	INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker);
-	queue_rcu_work(system_wq, &mm->work);
-}
-
-static void
-i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
-{
-	if (obj->userptr.mm == NULL)
-		return;
+	if (pvec) {
+		const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
 
-	kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free);
-	obj->userptr.mm = NULL;
+		unpin_user_pages(pvec, num_pages);
+		kvfree(pvec);
+	}
 }
 
-struct get_pages_work {
-	struct work_struct work;
-	struct drm_i915_gem_object *obj;
-	struct task_struct *task;
-};
-
-static struct sg_table *
-__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
-			       struct page **pvec, unsigned long num_pages)
+static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 {
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
 	unsigned int sg_page_sizes;
 	struct scatterlist *sg;
+	struct page **pvec;
 	int ret;
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (!st)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
+
+	spin_lock(&i915->mm.notifier_lock);
+	if (GEM_WARN_ON(!obj->userptr.page_ref)) {
+		spin_unlock(&i915->mm.notifier_lock);
+		ret = -EFAULT;
+		goto err_free;
+	}
+
+	obj->userptr.page_ref++;
+	pvec = obj->userptr.pvec;
+	spin_unlock(&i915->mm.notifier_lock);
 
 alloc_table:
 	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
 					 num_pages << PAGE_SHIFT, max_segment,
 					 NULL, 0, GFP_KERNEL);
 	if (IS_ERR(sg)) {
-		kfree(st);
-		return ERR_CAST(sg);
+		ret = PTR_ERR(sg);
+		goto err;
 	}
 
 	ret = i915_gem_gtt_prepare_pages(obj, st);
@@ -393,203 +163,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 			goto alloc_table;
 		}
 
-		kfree(st);
-		return ERR_PTR(ret);
+		goto err;
 	}
 
 	sg_page_sizes = i915_sg_page_sizes(st->sgl);
 
 	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
 
-	return st;
-}
-
-static void
-__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
-{
-	struct get_pages_work *work = container_of(_work, typeof(*work), work);
-	struct drm_i915_gem_object *obj = work->obj;
-	const unsigned long npages = obj->base.size >> PAGE_SHIFT;
-	unsigned long pinned;
-	struct page **pvec;
-	int ret;
-
-	ret = -ENOMEM;
-	pinned = 0;
-
-	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (pvec != NULL) {
-		struct mm_struct *mm = obj->userptr.mm->mm;
-		unsigned int flags = 0;
-		int locked = 0;
-
-		if (!i915_gem_object_is_readonly(obj))
-			flags |= FOLL_WRITE;
-
-		ret = -EFAULT;
-		if (mmget_not_zero(mm)) {
-			while (pinned < npages) {
-				if (!locked) {
-					mmap_read_lock(mm);
-					locked = 1;
-				}
-				ret = pin_user_pages_remote
-					(mm,
-					 obj->userptr.ptr + pinned * PAGE_SIZE,
-					 npages - pinned,
-					 flags,
-					 pvec + pinned, NULL, &locked);
-				if (ret < 0)
-					break;
-
-				pinned += ret;
-			}
-			if (locked)
-				mmap_read_unlock(mm);
-			mmput(mm);
-		}
-	}
-
-	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
-	if (obj->userptr.work == &work->work) {
-		struct sg_table *pages = ERR_PTR(ret);
-
-		if (pinned == npages) {
-			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
-							       npages);
-			if (!IS_ERR(pages)) {
-				pinned = 0;
-				pages = NULL;
-			}
-		}
-
-		obj->userptr.work = ERR_CAST(pages);
-		if (IS_ERR(pages))
-			__i915_gem_userptr_set_active(obj, false);
-	}
-	mutex_unlock(&obj->mm.lock);
-
-	unpin_user_pages(pvec, pinned);
-	kvfree(pvec);
-
-	i915_gem_object_put(obj);
-	put_task_struct(work->task);
-	kfree(work);
-}
-
-static struct sg_table *
-__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
-{
-	struct get_pages_work *work;
-
-	/* Spawn a worker so that we can acquire the
-	 * user pages without holding our mutex. Access
-	 * to the user pages requires mmap_lock, and we have
-	 * a strict lock ordering of mmap_lock, struct_mutex -
-	 * we already hold struct_mutex here and so cannot
-	 * call gup without encountering a lock inversion.
-	 *
-	 * Userspace will keep on repeating the operation
-	 * (thanks to EAGAIN) until either we hit the fast
-	 * path or the worker completes. If the worker is
-	 * cancelled or superseded, the task is still run
-	 * but the results ignored. (This leads to
-	 * complications that we may have a stray object
-	 * refcount that we need to be wary of when
-	 * checking for existing objects during creation.)
-	 * If the worker encounters an error, it reports
-	 * that error back to this function through
-	 * obj->userptr.work = ERR_PTR.
-	 */
-	work = kmalloc(sizeof(*work), GFP_KERNEL);
-	if (work == NULL)
-		return ERR_PTR(-ENOMEM);
-
-	obj->userptr.work = &work->work;
-
-	work->obj = i915_gem_object_get(obj);
-
-	work->task = current;
-	get_task_struct(work->task);
-
-	INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
-	queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
-
-	return ERR_PTR(-EAGAIN);
-}
-
-static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
-{
-	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
-	struct mm_struct *mm = obj->userptr.mm->mm;
-	struct page **pvec;
-	struct sg_table *pages;
-	bool active;
-	int pinned;
-	unsigned int gup_flags = 0;
-
-	/* If userspace should engineer that these pages are replaced in
-	 * the vma between us binding this page into the GTT and completion
-	 * of rendering... Their loss. If they change the mapping of their
-	 * pages they need to create a new bo to point to the new vma.
-	 *
-	 * However, that still leaves open the possibility of the vma
-	 * being copied upon fork. Which falls under the same userspace
-	 * synchronisation issue as a regular bo, except that this time
-	 * the process may not be expecting that a particular piece of
-	 * memory is tied to the GPU.
-	 *
-	 * Fortunately, we can hook into the mmu_notifier in order to
-	 * discard the page references prior to anything nasty happening
-	 * to the vma (discard or cloning) which should prevent the more
-	 * egregious cases from causing harm.
-	 */
-
-	if (obj->userptr.work) {
-		/* active flag should still be held for the pending work */
-		if (IS_ERR(obj->userptr.work))
-			return PTR_ERR(obj->userptr.work);
-		else
-			return -EAGAIN;
-	}
-
-	pvec = NULL;
-	pinned = 0;
-
-	if (mm == current->mm) {
-		pvec = kvmalloc_array(num_pages, sizeof(struct page *),
-				      GFP_KERNEL |
-				      __GFP_NORETRY |
-				      __GFP_NOWARN);
-		if (pvec) {
-			/* defer to worker if malloc fails */
-			if (!i915_gem_object_is_readonly(obj))
-				gup_flags |= FOLL_WRITE;
-			pinned = pin_user_pages_fast_only(obj->userptr.ptr,
-							  num_pages, gup_flags,
-							  pvec);
-		}
-	}
-
-	active = false;
-	if (pinned < 0) {
-		pages = ERR_PTR(pinned);
-		pinned = 0;
-	} else if (pinned < num_pages) {
-		pages = __i915_gem_userptr_get_pages_schedule(obj);
-		active = pages == ERR_PTR(-EAGAIN);
-	} else {
-		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
-		active = !IS_ERR(pages);
-	}
-	if (active)
-		__i915_gem_userptr_set_active(obj, true);
-
-	if (IS_ERR(pages))
-		unpin_user_pages(pvec, pinned);
-	kvfree(pvec);
+	return 0;
 
-	return PTR_ERR_OR_ZERO(pages);
+err:
+	i915_gem_object_userptr_drop_ref(obj);
+err_free:
+	kfree(st);
+	return ret;
 }
 
 static void
@@ -599,9 +186,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 	struct sgt_iter sgt_iter;
 	struct page *page;
 
-	/* Cancel any inflight work and force them to restart their gup */
-	obj->userptr.work = NULL;
-	__i915_gem_userptr_set_active(obj, false);
 	if (!pages)
 		return;
 
@@ -641,19 +225,135 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 		}
 
 		mark_page_accessed(page);
-		unpin_user_page(page);
 	}
 	obj->mm.dirty = false;
 
 	sg_free_table(pages);
 	kfree(pages);
+
+	i915_gem_object_userptr_drop_ref(obj);
+}
+
+static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
+{
+	struct sg_table *pages;
+	int err;
+
+	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
+	if (err)
+		return err;
+
+	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
+		return -EBUSY;
+
+	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+
+	pages = __i915_gem_object_unset_pages(obj);
+	if (!IS_ERR_OR_NULL(pages))
+		i915_gem_userptr_put_pages(obj, pages);
+
+	if (get_pages)
+		err = ____i915_gem_object_get_pages(obj);
+	mutex_unlock(&obj->mm.lock);
+
+	return err;
+}
+
+int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
+	struct page **pvec;
+	unsigned int gup_flags = 0;
+	unsigned long notifier_seq;
+	int pinned, ret;
+
+	if (obj->userptr.notifier.mm != current->mm)
+		return -EFAULT;
+
+	ret = i915_gem_object_lock_interruptible(obj, NULL);
+	if (ret)
+		return ret;
+
+	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
+	ret = i915_gem_object_userptr_unbind(obj, false);
+	i915_gem_object_unlock(obj);
+	if (ret)
+		return ret;
+
+	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
+
+	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
+	if (!pvec)
+		return -ENOMEM;
+
+	if (!i915_gem_object_is_readonly(obj))
+		gup_flags |= FOLL_WRITE;
+
+	pinned = ret = 0;
+	while (pinned < num_pages) {
+		ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
+					  num_pages - pinned, gup_flags,
+					  &pvec[pinned]);
+		if (ret < 0)
+			goto out;
+
+		pinned += ret;
+	}
+	ret = 0;
+
+	spin_lock(&i915->mm.notifier_lock);
+
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+		!obj->userptr.page_ref ? notifier_seq :
+		obj->userptr.notifier_seq)) {
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	if (!obj->userptr.page_ref++) {
+		obj->userptr.pvec = pvec;
+		obj->userptr.notifier_seq = notifier_seq;
+
+		pvec = NULL;
+	}
+
+out_unlock:
+	spin_unlock(&i915->mm.notifier_lock);
+
+out:
+	if (pvec) {
+		unpin_user_pages(pvec, pinned);
+		kvfree(pvec);
+	}
+
+	return ret;
+}
+
+int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
+{
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+				    obj->userptr.notifier_seq)) {
+		/* We collided with the mmu notifier, need to retry */
+		return -EAGAIN;
+	}
+
+	return 0;
+}
+
+void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj)
+{
+	i915_gem_object_userptr_drop_ref(obj);
 }
 
 static void
 i915_gem_userptr_release(struct drm_i915_gem_object *obj)
 {
-	i915_gem_userptr_release__mmu_notifier(obj);
-	i915_gem_userptr_release__mm_struct(obj);
+	GEM_WARN_ON(obj->userptr.page_ref);
+
+	mmu_interval_notifier_remove(&obj->userptr.notifier);
+	obj->userptr.notifier.mm = NULL;
 }
 
 static int
@@ -686,7 +386,6 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.name = "i915_gem_object_userptr",
 	.flags = I915_GEM_OBJECT_IS_SHRINKABLE |
 		 I915_GEM_OBJECT_NO_MMAP |
-		 I915_GEM_OBJECT_ASYNC_CANCEL |
 		 I915_GEM_OBJECT_IS_PROXY,
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
@@ -807,6 +506,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
 
 	obj->userptr.ptr = args->user_ptr;
+	obj->userptr.notifier_seq = ULONG_MAX;
 	if (args->flags & I915_USERPTR_READ_ONLY)
 		i915_gem_object_set_readonly(obj);
 
@@ -814,9 +514,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 	 * at binding. This means that we need to hook into the mmu_notifier
 	 * in order to detect if the mmu is destroyed.
 	 */
-	ret = i915_gem_userptr_init__mm_struct(obj);
-	if (ret == 0)
-		ret = i915_gem_userptr_init__mmu_notifier(obj);
+	ret = i915_gem_userptr_init__mmu_notifier(obj);
 	if (ret == 0)
 		ret = drm_gem_handle_create(file, &obj->base, &handle);
 
@@ -835,15 +533,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 {
 #ifdef CONFIG_MMU_NOTIFIER
-	spin_lock_init(&dev_priv->mm_lock);
-	hash_init(dev_priv->mm_structs);
-
-	dev_priv->mm.userptr_wq =
-		alloc_workqueue("i915-userptr-acquire",
-				WQ_HIGHPRI | WQ_UNBOUND,
-				0);
-	if (!dev_priv->mm.userptr_wq)
-		return -ENOMEM;
+	spin_lock_init(&dev_priv->mm.notifier_lock);
 #endif
 
 	return 0;
@@ -851,7 +541,4 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
 
 void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
 {
-#ifdef CONFIG_MMU_NOTIFIER
-	destroy_workqueue(dev_priv->mm.userptr_wq);
-#endif
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 028b7ff06f57..320e0f66122d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -592,11 +592,10 @@ struct i915_gem_mm {
 
 #ifdef CONFIG_MMU_NOTIFIER
 	/**
-	 * Workqueue to fault in userptr pages, flushed by the execbuf
-	 * when required but otherwise left to userspace to try again
-	 * on EAGAIN.
+	 * notifier_lock for mmu notifiers; memory may not be allocated
+	 * while holding this lock.
 	 */
-	struct workqueue_struct *userptr_wq;
+	spinlock_t notifier_lock;
 #endif
 
 	/* shrinker accounting, also useful for userland debugging */
@@ -976,8 +975,6 @@ struct drm_i915_private {
 	struct i915_ggtt ggtt; /* VM representing the global address space */
 
 	struct i915_gem_mm mm;
-	DECLARE_HASHTABLE(mm_structs, 7);
-	spinlock_t mm_lock;
 
 	/* Kernel Modesetting */
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 84eb2a574813..d138a8b18a96 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1163,10 +1163,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 err_unlock:
 	i915_gem_drain_workqueue(dev_priv);
 
-	if (ret != -EIO) {
+	if (ret != -EIO)
 		intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
-		i915_gem_cleanup_userptr(dev_priv);
-	}
 
 	if (ret == -EIO) {
 		/*
@@ -1223,7 +1221,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
 	intel_wa_list_free(&dev_priv->gt_wa_list);
 
 	intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
-	i915_gem_cleanup_userptr(dev_priv);
 
 	i915_gem_drain_freed_objects(dev_priv);
 
-- 
2.30.0.rc1
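
For reference, the new scheme is the standard mmu_interval_notifier
sequence-retry pattern. Below is a condensed sketch of what
i915_gem_object_userptr_submit_init() above does (the function name in
the sketch is made up, and the real code additionally loops until all
pages are pinned, honours read-only objects, and shares the pvec with
get_pages through page_ref):

static int userptr_acquire_sketch(struct drm_i915_gem_object *obj)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
	unsigned long seq;
	struct page **pvec;
	int pinned;

	/* 1. Sample the notifier sequence before touching user memory. */
	seq = mmu_interval_read_begin(&obj->userptr.notifier);

	pvec = kvmalloc_array(num_pages, sizeof(*pvec), GFP_KERNEL);
	if (!pvec)
		return -ENOMEM;

	/* 2. Pin the pages while holding no ww/reservation locks. */
	pinned = pin_user_pages_fast(obj->userptr.ptr, num_pages,
				     FOLL_WRITE, pvec);
	if (pinned != num_pages)
		goto err;

	/* 3. Publish only if no invalidation raced with the pin. */
	spin_lock(&i915->mm.notifier_lock);
	if (mmu_interval_read_retry(&obj->userptr.notifier, seq)) {
		spin_unlock(&i915->mm.notifier_lock);
		goto err;	/* caller restarts the submission */
	}
	obj->userptr.pvec = pvec;
	obj->userptr.notifier_seq = seq;
	obj->userptr.page_ref++;
	spin_unlock(&i915->mm.notifier_lock);
	return 0;

err:
	if (pinned > 0)
		unpin_user_pages(pvec, pinned);
	kvfree(pvec);
	return pinned < 0 ? pinned : -EAGAIN;
}
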



* [Intel-gfx] [PATCH v6 17/64] drm/i915: Flatten obj->mm.lock
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (15 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5 Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 18/64] drm/i915: Populate logical context during first pin Maarten Lankhorst
                   ` (53 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

With userptr fixed, there is no need for the separate lockdep classes
any more, and we can remove all of the lockdep tricks we used. A
trylock in the shrinker is now all we need to flatten the locking
hierarchy.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
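
A note on why a trylock suffices: the shrinker can be entered from
fs_reclaim inside get_pages() on another object, so with one flat lock
class it must not block on obj->mm.lock. Contended objects are simply
skipped and picked up on a later pass. An illustrative sketch
(shrink_one_sketch() is a made-up name; the real logic lives in
i915_gem_shrink() in the diff below):

static unsigned long shrink_one_sketch(struct drm_i915_gem_object *obj,
				       unsigned int shrink)
{
	unsigned long count = 0;

	if (!mutex_trylock(&obj->mm.lock))
		return 0;	/* contended: skip rather than risk deadlock */

	if (!__i915_gem_object_put_pages_locked(obj)) {
		try_to_writeback(obj, shrink);
		count = obj->base.size >> PAGE_SHIFT;
	}

	mutex_unlock(&obj->mm.lock);
	return count;
}
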
 drivers/gpu/drm/i915/gem/i915_gem_object.c   |  6 +---
 drivers/gpu/drm/i915/gem/i915_gem_object.h   | 20 ++----------
 drivers/gpu/drm/i915/gem/i915_gem_pages.c    | 34 ++++++++++----------
 drivers/gpu/drm/i915/gem/i915_gem_phys.c     |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 10 +++---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c  |  2 +-
 6 files changed, 27 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 1393988bd5af..028a556ab1a5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -62,7 +62,7 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
 			  struct lock_class_key *key, unsigned flags)
 {
-	__mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key);
+	mutex_init(&obj->mm.lock);
 
 	spin_lock_init(&obj->vma.lock);
 	INIT_LIST_HEAD(&obj->vma.list);
@@ -86,10 +86,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	mutex_init(&obj->mm.get_page.lock);
 	INIT_RADIX_TREE(&obj->mm.get_dma_page.radix, GFP_KERNEL | __GFP_NOWARN);
 	mutex_init(&obj->mm.get_dma_page.lock);
-
-	if (IS_ENABLED(CONFIG_LOCKDEP) && i915_gem_object_is_shrinkable(obj))
-		i915_gem_shrinker_taints_mutex(to_i915(obj->base.dev),
-					       &obj->mm.lock);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index db4ddbfcee40..aaa574545776 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -322,27 +322,10 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 
-enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
-	I915_MM_NORMAL = 0,
-	/*
-	 * Only used by struct_mutex, when called "recursively" from
-	 * direct-reclaim-esque. Safe because there is only every one
-	 * struct_mutex in the entire system.
-	 */
-	I915_MM_SHRINKER = 1,
-	/*
-	 * Used for obj->mm.lock when allocating pages. Safe because the object
-	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
-	 * it. As soon as the object has pages, obj->mm.lock nests within
-	 * fs_reclaim.
-	 */
-	I915_MM_GET_PAGES = 1,
-};
-
 static inline int __must_check
 i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	might_lock(&obj->mm.lock);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
@@ -386,6 +369,7 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 }
 
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
+int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index aefdbe36fe6f..ffc3c3c474b1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -111,7 +111,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
 	int err;
 
-	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
 		return err;
 
@@ -193,21 +193,13 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	return pages;
 }
 
-int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
+int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
-	int err;
 
 	if (i915_gem_object_has_pinned_pages(obj))
 		return -EBUSY;
 
-	/* May be called by shrinker from within get_pages() (on another bo) */
-	mutex_lock(&obj->mm.lock);
-	if (unlikely(atomic_read(&obj->mm.pages_pin_count))) {
-		err = -EBUSY;
-		goto unlock;
-	}
-
 	i915_gem_object_release_mmap_offset(obj);
 
 	/*
@@ -223,14 +215,22 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	 * get_pages backends we should be better able to handle the
 	 * cancellation of the async task in a more uniform manner.
 	 */
-	if (!pages)
-		pages = ERR_PTR(-EINVAL);
-
-	if (!IS_ERR(pages))
+	if (!IS_ERR_OR_NULL(pages))
 		obj->ops->put_pages(obj, pages);
 
-	err = 0;
-unlock:
+	return 0;
+}
+
+int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
+{
+	int err;
+
+	if (i915_gem_object_has_pinned_pages(obj))
+		return -EBUSY;
+
+	/* May be called by shrinker from within get_pages() (on another bo) */
+	mutex_lock(&obj->mm.lock);
+	err = __i915_gem_object_put_pages_locked(obj);
 	mutex_unlock(&obj->mm.lock);
 
 	return err;
@@ -338,7 +338,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return ERR_PTR(-ENXIO);
 
-	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
 		return ERR_PTR(err);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 144e4940eede..0d176bf06405 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -236,7 +236,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
 		goto err_unlock;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index c2dba1cd9532..11b8d4e69fc7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -49,9 +49,9 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
 		flags = I915_GEM_OBJECT_UNBIND_TEST;
 
 	if (i915_gem_object_unbind(obj, flags) == 0)
-		__i915_gem_object_put_pages(obj);
+		return true;
 
-	return !i915_gem_object_has_pages(obj);
+	return false;
 }
 
 static void try_to_writeback(struct drm_i915_gem_object *obj,
@@ -200,10 +200,10 @@ i915_gem_shrink(struct drm_i915_private *i915,
 
 			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 
-			if (unsafe_drop_pages(obj, shrink)) {
+			if (unsafe_drop_pages(obj, shrink) &&
+			    mutex_trylock(&obj->mm.lock)) {
 				/* May arrive from get_pages on another bo */
-				mutex_lock(&obj->mm.lock);
-				if (!i915_gem_object_has_pages(obj)) {
+				if (!__i915_gem_object_put_pages_locked(obj)) {
 					try_to_writeback(obj, shrink);
 					count += obj->base.size >> PAGE_SHIFT;
 				}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index d07f84328edd..b1c035c55708 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -246,7 +246,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
 		return -EBUSY;
 
-	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+	mutex_lock(&obj->mm.lock);
 
 	pages = __i915_gem_object_unset_pages(obj);
 	if (!IS_ERR_OR_NULL(pages))
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 18/64] drm/i915: Populate logical context during first pin.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (16 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 17/64] drm/i915: Flatten obj->mm.lock Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 19/64] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2 Maarten Lankhorst
                   ` (52 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

This allows us to remove pin_map from state allocation, which saves
us a few retry loops. We won't need the mapping until first pin,
anyway.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
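
The key detail is that pre-pin can run several times for a single pin
attempt, because a ww backoff restarts it; __test_and_set_bit() makes
the one-time state initialisation idempotent. A minimal sketch of the
pattern, condensed from the diff below:

	if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) {
		/* First pin only: later pins and -EDEADLK retries skip this. */
		lrc_init_state(ce, engine, *vaddr);
		__i915_gem_object_flush_map(ce->state->obj, 0,
					    engine->context_size);
	}
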
 .../drm/i915/gt/intel_execlists_submission.c  | 26 ++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index a5b442683c18..5f73611ed91b 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2489,11 +2489,31 @@ static void execlists_submit_request(struct i915_request *request)
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
+static int
+__execlists_context_pre_pin(struct intel_context *ce,
+			    struct intel_engine_cs *engine,
+			    struct i915_gem_ww_ctx *ww, void **vaddr)
+{
+	int err;
+
+	err = lrc_pre_pin(ce, engine, ww, vaddr);
+	if (err)
+		return err;
+
+	if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) {
+		lrc_init_state(ce, engine, *vaddr);
+
+		__i915_gem_object_flush_map(ce->state->obj, 0, engine->context_size);
+	}
+
+	return 0;
+}
+
 static int execlists_context_pre_pin(struct intel_context *ce,
 				     struct i915_gem_ww_ctx *ww,
 				     void **vaddr)
 {
-	return lrc_pre_pin(ce, ce->engine, ww, vaddr);
+	return __execlists_context_pre_pin(ce, ce->engine, ww, vaddr);
 }
 
 static int execlists_context_pin(struct intel_context *ce, void *vaddr)
@@ -3389,8 +3409,8 @@ static int virtual_context_pre_pin(struct intel_context *ce,
 {
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 
-	/* Note: we must use a real engine class for setting up reg state */
-	return lrc_pre_pin(ce, ve->siblings[0], ww, vaddr);
+	/* Note: we must use a real engine class for setting up reg state */
+	return __execlists_context_pre_pin(ce, ve->siblings[0], ww, vaddr);
 }
 
 static int virtual_context_pin(struct intel_context *ce, void *vaddr)
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 19/64] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (17 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 18/64] drm/i915: Populate logical context during first pin Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 20/64] drm/i915: Handle ww locking in init_status_page Maarten Lankhorst
                   ` (51 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, Dan Carpenter

We map the initial context during first pin.

This allows us to remove pin_map from state allocation, which saves
us a few retry loops. We won't need the mapping until first pin anyway.

intel_ring_submission_setup() is also reworked slightly to do all
pinning in a single ww loop.

Changes since v1:
- Handle -EDEADLK backoff in intel_ring_submission_setup() better.
- Handle smatch errors reported by Dan and testbot.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
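
The single ww loop has the following shape (a condensed fragment of
intel_ring_submission_setup() from the diff below; declarations, the
gen7 workaround vma and the final error unwinding are trimmed):

	i915_gem_ww_ctx_init(&ww, false);
retry:
	/* Lock every object involved up front, under one ww context. */
	err = i915_gem_object_lock(timeline->hwsp_ggtt->obj, &ww);
	if (!err)
		err = i915_gem_object_lock(ring->vma->obj, &ww);
	if (!err)
		err = intel_timeline_pin(timeline, &ww);
	if (!err) {
		err = intel_ring_pin(ring, &ww);
		if (err)
			intel_timeline_unpin(timeline); /* undo before backoff */
	}
	if (err == -EDEADLK) {
		/* Backoff drops all ww locks, then the whole dance restarts. */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
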
 .../gpu/drm/i915/gt/intel_ring_submission.c   | 184 +++++++++++-------
 1 file changed, 118 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index 4ea741f488a8..b44f7e64c986 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -511,6 +511,26 @@ static void ring_context_destroy(struct kref *ref)
 	intel_context_free(ce);
 }
 
+static int ring_context_init_default_state(struct intel_context *ce,
+					   struct i915_gem_ww_ctx *ww)
+{
+	struct drm_i915_gem_object *obj = ce->state->obj;
+	void *vaddr;
+
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	shmem_read(ce->engine->default_state, 0,
+		   vaddr, ce->engine->context_size);
+
+	i915_gem_object_flush_map(obj);
+	__i915_gem_object_release_map(obj);
+
+	__set_bit(CONTEXT_VALID_BIT, &ce->flags);
+	return 0;
+}
+
 static int ring_context_pre_pin(struct intel_context *ce,
 				struct i915_gem_ww_ctx *ww,
 				void **unused)
@@ -518,6 +538,13 @@ static int ring_context_pre_pin(struct intel_context *ce,
 	struct i915_address_space *vm;
 	int err = 0;
 
+	if (ce->engine->default_state &&
+	    !test_bit(CONTEXT_VALID_BIT, &ce->flags)) {
+		err = ring_context_init_default_state(ce, ww);
+		if (err)
+			return err;
+	}
+
 	vm = vm_alias(ce->vm);
 	if (vm)
 		err = gen6_ppgtt_pin(i915_vm_to_ppgtt((vm)), ww);
@@ -573,22 +600,6 @@ alloc_context_vma(struct intel_engine_cs *engine)
 	if (IS_IVYBRIDGE(i915))
 		i915_gem_object_set_cache_coherency(obj, I915_CACHE_L3_LLC);
 
-	if (engine->default_state) {
-		void *vaddr;
-
-		vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
-		if (IS_ERR(vaddr)) {
-			err = PTR_ERR(vaddr);
-			goto err_obj;
-		}
-
-		shmem_read(engine->default_state, 0,
-			   vaddr, engine->context_size);
-
-		i915_gem_object_flush_map(obj);
-		__i915_gem_object_release_map(obj);
-	}
-
 	vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
 	if (IS_ERR(vma)) {
 		err = PTR_ERR(vma);
@@ -620,8 +631,6 @@ static int ring_context_alloc(struct intel_context *ce)
 			return PTR_ERR(vma);
 
 		ce->state = vma;
-		if (engine->default_state)
-			__set_bit(CONTEXT_VALID_BIT, &ce->flags);
 	}
 
 	return 0;
@@ -1220,37 +1229,15 @@ static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine,
 	return gen7_setup_clear_gpr_bb(engine, vma);
 }
 
-static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
+static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine,
+				   struct i915_gem_ww_ctx *ww,
+				   struct i915_vma *vma)
 {
-	struct drm_i915_gem_object *obj;
-	struct i915_vma *vma;
-	int size;
 	int err;
 
-	size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
-	if (size <= 0)
-		return size;
-
-	size = ALIGN(size, PAGE_SIZE);
-	obj = i915_gem_object_create_internal(engine->i915, size);
-	if (IS_ERR(obj))
-		return PTR_ERR(obj);
-
-	vma = i915_vma_instance(obj, engine->gt->vm, NULL);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto err_obj;
-	}
-
-	vma->private = intel_context_create(engine); /* dummy residuals */
-	if (IS_ERR(vma->private)) {
-		err = PTR_ERR(vma->private);
-		goto err_obj;
-	}
-
-	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
+	err = i915_vma_pin_ww(vma, ww, 0, 0, PIN_USER | PIN_HIGH);
 	if (err)
-		goto err_private;
+		return err;
 
 	err = i915_vma_sync(vma);
 	if (err)
@@ -1265,17 +1252,53 @@ static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
 
 err_unpin:
 	i915_vma_unpin(vma);
-err_private:
-	intel_context_put(vma->private);
-err_obj:
-	i915_gem_object_put(obj);
 	return err;
 }
 
+static struct i915_vma *gen7_ctx_vma(struct intel_engine_cs *engine)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int size, err;
+
+	if (!IS_HASWELL(engine->i915) || engine->class != RENDER_CLASS)
+		return NULL;
+
+	err = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
+	if (err < 0)
+		return ERR_PTR(err);
+	if (!err)
+		return NULL;
+
+	size = ALIGN(err, PAGE_SIZE);
+
+	obj = i915_gem_object_create_internal(engine->i915, size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	vma = i915_vma_instance(obj, engine->gt->vm, NULL);
+	if (IS_ERR(vma)) {
+		i915_gem_object_put(obj);
+		return ERR_CAST(vma);
+	}
+
+	vma->private = intel_context_create(engine); /* dummy residuals */
+	if (IS_ERR(vma->private)) {
+		err = PTR_ERR(vma->private);
+		vma->private = NULL;
+		i915_gem_object_put(obj);
+		return ERR_PTR(err);
+	}
+
+	return vma;
+}
+
 int intel_ring_submission_setup(struct intel_engine_cs *engine)
 {
+	struct i915_gem_ww_ctx ww;
 	struct intel_timeline *timeline;
 	struct intel_ring *ring;
+	struct i915_vma *gen7_wa_vma;
 	int err;
 
 	setup_common(engine);
@@ -1306,43 +1329,72 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)
 	}
 	GEM_BUG_ON(timeline->has_initial_breadcrumb);
 
-	err = intel_timeline_pin(timeline, NULL);
-	if (err)
-		goto err_timeline;
-
 	ring = intel_engine_create_ring(engine, SZ_16K);
 	if (IS_ERR(ring)) {
 		err = PTR_ERR(ring);
-		goto err_timeline_unpin;
+		goto err_timeline;
 	}
 
-	err = intel_ring_pin(ring, NULL);
-	if (err)
-		goto err_ring;
-
 	GEM_BUG_ON(engine->legacy.ring);
 	engine->legacy.ring = ring;
 	engine->legacy.timeline = timeline;
 
-	GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
+	gen7_wa_vma = gen7_ctx_vma(engine);
+	if (IS_ERR(gen7_wa_vma)) {
+		err = PTR_ERR(gen7_wa_vma);
+		goto err_ring;
+	}
 
-	if (IS_HASWELL(engine->i915) && engine->class == RENDER_CLASS) {
-		err = gen7_ctx_switch_bb_init(engine);
+	i915_gem_ww_ctx_init(&ww, false);
+
+retry:
+	err = i915_gem_object_lock(timeline->hwsp_ggtt->obj, &ww);
+	if (!err && gen7_wa_vma)
+		err = i915_gem_object_lock(gen7_wa_vma->obj, &ww);
+	if (!err && engine->legacy.ring->vma->obj)
+		err = i915_gem_object_lock(engine->legacy.ring->vma->obj, &ww);
+	if (!err)
+		err = intel_timeline_pin(timeline, &ww);
+	if (!err) {
+		err = intel_ring_pin(ring, &ww);
 		if (err)
-			goto err_ring_unpin;
+			intel_timeline_unpin(timeline);
 	}
+	if (err)
+		goto out;
+
+	GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
+
+	if (gen7_wa_vma) {
+		err = gen7_ctx_switch_bb_init(engine, &ww, gen7_wa_vma);
+		if (err) {
+			intel_ring_unpin(ring);
+			intel_timeline_unpin(timeline);
+		}
+	}
+
+out:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (err)
+		goto err_gen7_put;
 
 	/* Finally, take ownership and responsibility for cleanup! */
 	engine->release = ring_release;
 
 	return 0;
 
-err_ring_unpin:
-	intel_ring_unpin(ring);
+err_gen7_put:
+	if (gen7_wa_vma) {
+		intel_context_put(gen7_wa_vma->private);
+		i915_gem_object_put(gen7_wa_vma->obj);
+	}
 err_ring:
 	intel_ring_put(ring);
-err_timeline_unpin:
-	intel_timeline_unpin(timeline);
 err_timeline:
 	intel_timeline_put(timeline);
 err:
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 20/64] drm/i915: Handle ww locking in init_status_page
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (18 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 19/64] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2 Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 21/64] drm/i915: Rework clflush to work correctly without obj->mm.lock Maarten Lankhorst
                   ` (50 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Try to pin to the GGTT first, and use a full ww loop to handle
eviction correctly.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
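
Why a full ww loop: pinning to the GGTT may have to evict other
objects, and eviction can require locking those objects, which is what
produces -EDEADLK. A minimal sketch of the dance used here (the map
and the error unwinding are trimmed; the boolean passed to
i915_gem_ww_ctx_init() selects interruptible waits):

	i915_gem_ww_ctx_init(&ww, true);
retry:
	ret = i915_gem_object_lock(obj, &ww);
	if (!ret && !HWS_NEEDS_PHYSICAL(engine->i915))
		ret = pin_ggtt_status_page(engine, &ww, vma);
	if (ret == -EDEADLK) {
		ret = i915_gem_ww_ctx_backoff(&ww);
		if (!ret)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
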
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 37 +++++++++++++++--------
 1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 1db608d1f26f..ace6fcbd1f38 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -618,6 +618,7 @@ static void cleanup_status_page(struct intel_engine_cs *engine)
 }
 
 static int pin_ggtt_status_page(struct intel_engine_cs *engine,
+				struct i915_gem_ww_ctx *ww,
 				struct i915_vma *vma)
 {
 	unsigned int flags;
@@ -638,12 +639,13 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine,
 	else
 		flags = PIN_HIGH;
 
-	return i915_ggtt_pin(vma, NULL, 0, flags);
+	return i915_ggtt_pin(vma, ww, 0, flags);
 }
 
 static int init_status_page(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
 	struct i915_vma *vma;
 	void *vaddr;
 	int ret;
@@ -669,30 +671,39 @@ static int init_status_page(struct intel_engine_cs *engine)
 	vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
-		goto err;
+		goto err_put;
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(obj, &ww);
+	if (!ret && !HWS_NEEDS_PHYSICAL(engine->i915))
+		ret = pin_ggtt_status_page(engine, &ww, vma);
+	if (ret)
+		goto err;
+
 	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		ret = PTR_ERR(vaddr);
-		goto err;
+		goto err_unpin;
 	}
 
 	engine->status_page.addr = memset(vaddr, 0, PAGE_SIZE);
 	engine->status_page.vma = vma;
 
-	if (!HWS_NEEDS_PHYSICAL(engine->i915)) {
-		ret = pin_ggtt_status_page(engine, vma);
-		if (ret)
-			goto err_unpin;
-	}
-
-	return 0;
-
 err_unpin:
-	i915_gem_object_unpin_map(obj);
+	if (ret)
+		i915_vma_unpin(vma);
 err:
-	i915_gem_object_put(obj);
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+err_put:
+	if (ret)
+		i915_gem_object_put(obj);
 	return ret;
 }
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 21/64] drm/i915: Rework clflush to work correctly without obj->mm.lock.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (19 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 20/64] drm/i915: Handle ww locking in init_status_page Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 22/64] drm/i915: Pass ww ctx to intel_pin_to_display_plane Maarten Lankhorst
                   ` (49 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Pin the pages in the caller, not in the work itself. This should also
work better for dma-fence annotations.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
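
The resulting lifecycle, spelled out: the pages are pinned once when
the work is created, the async callback can no longer fail or
allocate, and the pin is dropped together with the object reference
when the fence is released. These are the post-patch callbacks from
the diff below, with the reasoning added as comments:

static int clflush_work(struct dma_fence_work *base)
{
	struct clflush *clflush = container_of(base, typeof(*clflush), base);

	/* Pages were pinned by the caller; nothing here can fail. */
	__do_clflush(clflush->obj);

	return 0;
}

static void clflush_release(struct dma_fence_work *base)
{
	struct clflush *clflush = container_of(base, typeof(*clflush), base);

	/* Drop the page pin taken in clflush_work_create(). */
	i915_gem_object_unpin_pages(clflush->obj);
	i915_gem_object_put(clflush->obj);
}
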
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
index bc0223716906..daf9284ef1f5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
@@ -27,15 +27,8 @@ static void __do_clflush(struct drm_i915_gem_object *obj)
 static int clflush_work(struct dma_fence_work *base)
 {
 	struct clflush *clflush = container_of(base, typeof(*clflush), base);
-	struct drm_i915_gem_object *obj = clflush->obj;
-	int err;
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
-
-	__do_clflush(obj);
-	i915_gem_object_unpin_pages(obj);
+	__do_clflush(clflush->obj);
 
 	return 0;
 }
@@ -44,6 +37,7 @@ static void clflush_release(struct dma_fence_work *base)
 {
 	struct clflush *clflush = container_of(base, typeof(*clflush), base);
 
+	i915_gem_object_unpin_pages(clflush->obj);
 	i915_gem_object_put(clflush->obj);
 }
 
@@ -63,6 +57,11 @@ static struct clflush *clflush_work_create(struct drm_i915_gem_object *obj)
 	if (!clflush)
 		return NULL;
 
+	if (__i915_gem_object_get_pages(obj) < 0) {
+		kfree(clflush);
+		return NULL;
+	}
+
 	dma_fence_work_init(&clflush->base, &clflush_ops);
 	clflush->obj = i915_gem_object_get(obj); /* obj <-> clflush cycle */
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 22/64] drm/i915: Pass ww ctx to intel_pin_to_display_plane
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (20 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 21/64] drm/i915: Rework clflush to work correctly without obj->mm.lock Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 23/64] drm/i915: Add object locking to vm_fault_cpu Maarten Lankhorst
                   ` (48 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Instead of taking the object lock multiple times, lock the object
once, and perform the ww dance around attach_phys and pin_pages.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
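
The caller-side shape after this change (a condensed fragment of
intel_pin_and_fence_fb_obj() from the diff below; fencing and cleanup
are trimmed). Everything that can hit -EDEADLK now backs off and
retries as a single transaction:

	i915_gem_ww_ctx_init(&ww, true);
retry:
	ret = i915_gem_object_lock(obj, &ww);
	if (!ret && phys_cursor)
		ret = i915_gem_object_attach_phys(obj, alignment);
	if (!ret)
		ret = i915_gem_object_pin_pages(obj);
	if (!ret) {
		vma = i915_gem_object_pin_to_display_plane(obj, &ww, alignment,
							   view, pinctl);
		ret = PTR_ERR_OR_ZERO(vma);
	}
	if (ret == -EDEADLK) {
		ret = i915_gem_ww_ctx_backoff(&ww);
		if (!ret)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
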
 drivers/gpu/drm/i915/display/intel_display.c  | 69 ++++++++++++-------
 drivers/gpu/drm/i915/display/intel_display.h  |  2 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c    |  2 +-
 drivers/gpu/drm/i915/display/intel_overlay.c  | 34 +++++++--
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    | 30 ++------
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  1 +
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 10 +--
 .../drm/i915/gem/selftests/i915_gem_phys.c    |  2 +
 8 files changed, 86 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index c8df42193287..af0f419be94d 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -2170,6 +2170,7 @@ static bool intel_plane_uses_fence(const struct intel_plane_state *plane_state)
 
 struct i915_vma *
 intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
+			   bool phys_cursor,
 			   const struct i915_ggtt_view *view,
 			   bool uses_fence,
 			   unsigned long *out_flags)
@@ -2178,14 +2179,19 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
 	intel_wakeref_t wakeref;
+	struct i915_gem_ww_ctx ww;
 	struct i915_vma *vma;
 	unsigned int pinctl;
 	u32 alignment;
+	int ret;
 
 	if (drm_WARN_ON(dev, !i915_gem_object_is_framebuffer(obj)))
 		return ERR_PTR(-EINVAL);
 
-	alignment = intel_surf_alignment(fb, 0);
+	if (phys_cursor)
+		alignment = intel_cursor_alignment(dev_priv);
+	else
+		alignment = intel_surf_alignment(fb, 0);
 	if (drm_WARN_ON(dev, alignment && !is_power_of_2(alignment)))
 		return ERR_PTR(-EINVAL);
 
@@ -2220,14 +2226,26 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	if (HAS_GMCH(dev_priv))
 		pinctl |= PIN_MAPPABLE;
 
-	vma = i915_gem_object_pin_to_display_plane(obj,
-						   alignment, view, pinctl);
-	if (IS_ERR(vma))
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(obj, &ww);
+	if (!ret && phys_cursor)
+		ret = i915_gem_object_attach_phys(obj, alignment);
+	if (!ret)
+		ret = i915_gem_object_pin_pages(obj);
+	if (ret)
 		goto err;
 
-	if (uses_fence && i915_vma_is_map_and_fenceable(vma)) {
-		int ret;
+	if (!ret) {
+		vma = i915_gem_object_pin_to_display_plane(obj, &ww, alignment,
+							   view, pinctl);
+		if (IS_ERR(vma)) {
+			ret = PTR_ERR(vma);
+			goto err_unpin;
+		}
+	}
 
+	if (uses_fence && i915_vma_is_map_and_fenceable(vma)) {
 		/*
 		 * Install a fence for tiled scan-out. Pre-i965 always needs a
 		 * fence, whereas 965+ only requires a fence if using
@@ -2248,16 +2266,28 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 		ret = i915_vma_pin_fence(vma);
 		if (ret != 0 && INTEL_GEN(dev_priv) < 4) {
 			i915_gem_object_unpin_from_display_plane(vma);
-			vma = ERR_PTR(ret);
-			goto err;
+			goto err_unpin;
 		}
+		ret = 0;
 
-		if (ret == 0 && vma->fence)
+		if (vma->fence)
 			*out_flags |= PLANE_HAS_FENCE;
 	}
 
 	i915_vma_get(vma);
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
 err:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (ret)
+		vma = ERR_PTR(ret);
+
 	atomic_dec(&dev_priv->gpu_error.pending_fb_pin);
 	intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
 	return vma;
@@ -15598,19 +15628,11 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	struct drm_framebuffer *fb = plane_state->hw.fb;
 	struct i915_vma *vma;
+	bool phys_cursor =
+		plane->id == PLANE_CURSOR &&
+		INTEL_INFO(dev_priv)->display.cursor_needs_physical;
 
-	if (plane->id == PLANE_CURSOR &&
-	    INTEL_INFO(dev_priv)->display.cursor_needs_physical) {
-		struct drm_i915_gem_object *obj = intel_fb_obj(fb);
-		const int align = intel_cursor_alignment(dev_priv);
-		int err;
-
-		err = i915_gem_object_attach_phys(obj, align);
-		if (err)
-			return err;
-	}
-
-	vma = intel_pin_and_fence_fb_obj(fb,
+	vma = intel_pin_and_fence_fb_obj(fb, phys_cursor,
 					 &plane_state->view,
 					 intel_plane_uses_fence(plane_state),
 					 &plane_state->flags);
@@ -15706,13 +15728,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
 	if (!obj)
 		return 0;
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		return ret;
 
 	ret = intel_plane_pin_fb(new_plane_state);
-
-	i915_gem_object_unpin_pages(obj);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h
index 7ddbc00a0f41..97d494738060 100644
--- a/drivers/gpu/drm/i915/display/intel_display.h
+++ b/drivers/gpu/drm/i915/display/intel_display.h
@@ -571,7 +571,7 @@ void intel_release_load_detect_pipe(struct drm_connector *connector,
 				    struct intel_load_detect_pipe *old,
 				    struct drm_modeset_acquire_ctx *ctx);
 struct i915_vma *
-intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
+intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, bool phys_cursor,
 			   const struct i915_ggtt_view *view,
 			   bool uses_fence,
 			   unsigned long *out_flags);
diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c
index 842c04e63214..bdf44e923cc0 100644
--- a/drivers/gpu/drm/i915/display/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/display/intel_fbdev.c
@@ -211,7 +211,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	 * This also validates that any existing fb inherited from the
 	 * BIOS is suitable for own access.
 	 */
-	vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base,
+	vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base, false,
 					 &view, false, &flags);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index 6be5d8946c69..10434f5f3b00 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -756,6 +756,32 @@ static u32 overlay_cmd_reg(struct drm_intel_overlay_put_image *params)
 	return cmd;
 }
 
+static struct i915_vma *intel_overlay_pin_fb(struct drm_i915_gem_object *new_bo)
+{
+	struct i915_gem_ww_ctx ww;
+	struct i915_vma *vma;
+	int ret;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(new_bo, &ww);
+	if (!ret) {
+		vma = i915_gem_object_pin_to_display_plane(new_bo, &ww, 0,
+							   NULL, PIN_MAPPABLE);
+		ret = PTR_ERR_OR_ZERO(vma);
+	}
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return vma;
+}
+
 static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 				      struct drm_i915_gem_object *new_bo,
 				      struct drm_intel_overlay_put_image *params)
@@ -777,12 +803,10 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 
 	atomic_inc(&dev_priv->gpu_error.pending_fb_pin);
 
-	vma = i915_gem_object_pin_to_display_plane(new_bo,
-						   0, NULL, PIN_MAPPABLE);
-	if (IS_ERR(vma)) {
-		ret = PTR_ERR(vma);
+	vma = intel_overlay_pin_fb(new_bo);
+	if (IS_ERR(vma))
 		goto out_pin_section;
-	}
+
 	i915_gem_object_flush_frontbuffer(new_bo, ORIGIN_DIRTYFB);
 
 	if (!overlay->active) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index c1d4bf62b3ea..51a33c4f61d0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -313,12 +313,12 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
  */
 struct i915_vma *
 i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
+				     struct i915_gem_ww_ctx *ww,
 				     u32 alignment,
 				     const struct i915_ggtt_view *view,
 				     unsigned int flags)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct i915_gem_ww_ctx ww;
 	struct i915_vma *vma;
 	int ret;
 
@@ -326,11 +326,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
 		return ERR_PTR(-EINVAL);
 
-	i915_gem_ww_ctx_init(&ww, true);
-retry:
-	ret = i915_gem_object_lock(obj, &ww);
-	if (ret)
-		goto err;
 	/*
 	 * The display engine is not coherent with the LLC cache on gen6.  As
 	 * a result, we make sure that the pinning that is about to occur is
@@ -345,7 +340,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 					      HAS_WT(i915) ?
 					      I915_CACHE_WT : I915_CACHE_NONE);
 	if (ret)
-		goto err;
+		return ERR_PTR(ret);
 
 	/*
 	 * As the user may map the buffer once pinned in the display plane
@@ -358,32 +353,19 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	vma = ERR_PTR(-ENOSPC);
 	if ((flags & PIN_MAPPABLE) == 0 &&
 	    (!view || view->type == I915_GGTT_VIEW_NORMAL))
-		vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0, alignment,
+		vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0, alignment,
 						  flags | PIN_MAPPABLE |
 						  PIN_NONBLOCK);
 	if (IS_ERR(vma) && vma != ERR_PTR(-EDEADLK))
-		vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0,
+		vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0,
 						  alignment, flags);
-	if (IS_ERR(vma)) {
-		ret = PTR_ERR(vma);
-		goto err;
-	}
+	if (IS_ERR(vma))
+		return vma;
 
 	vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
 
 	i915_gem_object_flush_if_display_locked(obj);
 
-err:
-	if (ret == -EDEADLK) {
-		ret = i915_gem_ww_ctx_backoff(&ww);
-		if (!ret)
-			goto retry;
-	}
-	i915_gem_ww_ctx_fini(&ww);
-
-	if (ret)
-		return ERR_PTR(ret);
-
 	return vma;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index aaa574545776..0c357e91dce4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -472,6 +472,7 @@ int __must_check
 i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write);
 struct i915_vma * __must_check
 i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
+				     struct i915_gem_ww_ctx *ww,
 				     u32 alignment,
 				     const struct i915_ggtt_view *view,
 				     unsigned int flags);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 0d176bf06405..f317be5f5e34 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -219,6 +219,8 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 {
 	int err;
 
+	assert_object_held(obj);
+
 	if (align > obj->base.size)
 		return -EINVAL;
 
@@ -232,13 +234,9 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		return err;
-
 	err = mutex_lock_interruptible(&obj->mm.lock);
 	if (err)
-		goto err_unlock;
+		return err;
 
 	if (unlikely(!i915_gem_object_has_struct_page(obj)))
 		goto out;
@@ -269,8 +267,6 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 out:
 	mutex_unlock(&obj->mm.lock);
-err_unlock:
-	i915_gem_object_unlock(obj);
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
index 0cfa082047fe..3a6ce87f8b52 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c
@@ -31,7 +31,9 @@ static int mock_phys_object(void *arg)
 		goto out_obj;
 	}
 
+	i915_gem_object_lock(obj, NULL);
 	err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
+	i915_gem_object_unlock(obj);
 	if (err) {
 		pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
 		goto out_obj;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 23/64] drm/i915: Add object locking to vm_fault_cpu
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (21 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 22/64] drm/i915: Pass ww ctx to intel_pin_to_display_plane Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 24/64] drm/i915: Move pinning to inside engine_wa_list_verify() Maarten Lankhorst
                   ` (47 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Take the object lock so that we hold the ww lock around (un)pin_pages
as needed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
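
A sketch of the resulting fault path (fault_cpu_sketch() is a made-up
name; the vmf-to-object lookup and the PTE insertion are elided). Note
how a failed interruptible lock maps to VM_FAULT_NOPAGE, so contention
or a pending signal simply causes the fault to be retried instead of
raising SIGBUS:

static vm_fault_t fault_cpu_sketch(struct drm_i915_gem_object *obj)
{
	int err;

	if (i915_gem_object_lock_interruptible(obj, NULL))
		return VM_FAULT_NOPAGE;	/* the fault is simply retried */

	err = i915_gem_object_pin_pages(obj);
	if (!err) {
		/* ... insert the PTEs while the pages are pinned ... */
		i915_gem_object_unpin_pages(obj);
	}

	i915_gem_object_unlock(obj);
	return i915_error_to_vmf_fault(err);
}
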
 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index c0034d811e50..163208a6260d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -246,6 +246,9 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 		     area->vm_flags & VM_WRITE))
 		return VM_FAULT_SIGBUS;
 
+	if (i915_gem_object_lock_interruptible(obj, NULL))
+		return VM_FAULT_NOPAGE;
+
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
 		goto out;
@@ -269,6 +272,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 	i915_gem_object_unpin_pages(obj);
 
 out:
+	i915_gem_object_unlock(obj);
 	return i915_error_to_vmf_fault(err);
 }
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 24/64] drm/i915: Move pinning to inside engine_wa_list_verify()
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Pinning should be done as part of the caller's ww loop, in order to
remove an i915_vma_pin() call that needs the ww lock held.
__vm_create_scratch_for_read() therefore no longer pins; selftests that
want the old one-shot behaviour use the new
__vm_create_scratch_for_read_pinned() wrapper.

After this, only i915_ggtt_pin() callers remain to be converted.
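
For illustration, a caller that already holds a ww transaction now
creates the scratch unpinned and pins it under its own context, roughly
as below (a minimal sketch modelled on the engine_wa_list_verify()
conversion in this patch; error unwinding is trimmed):

	vma = __vm_create_scratch_for_read(vm, size);	/* allocation only */
	if (IS_ERR(vma))
		return PTR_ERR(vma);

	/* ... inside the caller's ww retry loop ... */
	err = i915_gem_object_lock(vma->obj, &ww);
	if (!err)
		err = i915_vma_pin_ww(vma, &ww, 0, 0,
				      i915_vma_is_ggtt(vma) ?
				      PIN_GLOBAL : PIN_USER);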

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gtt.c            | 14 +++++++++++++-
 drivers/gpu/drm/i915/gt/intel_gtt.h            |  3 +++
 drivers/gpu/drm/i915/gt/intel_workarounds.c    | 10 ++++++++--
 drivers/gpu/drm/i915/gt/selftest_execlists.c   |  5 +++--
 drivers/gpu/drm/i915/gt/selftest_lrc.c         |  2 +-
 drivers/gpu/drm/i915/gt/selftest_mocs.c        |  3 ++-
 drivers/gpu/drm/i915/gt/selftest_workarounds.c |  6 +++---
 7 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 04aa6601e984..444d9bacfafd 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -427,7 +427,6 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size)
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
-	int err;
 
 	obj = i915_gem_object_create_internal(vm->i915, PAGE_ALIGN(size));
 	if (IS_ERR(obj))
@@ -441,6 +440,19 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size)
 		return vma;
 	}
 
+	return vma;
+}
+
+struct i915_vma *
+__vm_create_scratch_for_read_pinned(struct i915_address_space *vm, unsigned long size)
+{
+	struct i915_vma *vma;
+	int err;
+
+	vma = __vm_create_scratch_for_read(vm, size);
+	if (IS_ERR(vma))
+		return vma;
+
 	err = i915_vma_pin(vma, 0, 0,
 			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
 	if (err) {
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 29c10fde8ce3..af90090c3d18 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -576,6 +576,9 @@ void i915_vm_free_pt_stash(struct i915_address_space *vm,
 struct i915_vma *
 __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size);
 
+struct i915_vma *
+__vm_create_scratch_for_read_pinned(struct i915_address_space *vm, unsigned long size);
+
 static inline struct sgt_dma {
 	struct scatterlist *sg;
 	dma_addr_t dma, max;
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index c21a9726326a..4a19f809b529 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -2205,10 +2205,15 @@ static int engine_wa_list_verify(struct intel_context *ce,
 	if (err)
 		goto err_pm;
 
+	err = i915_vma_pin_ww(vma, &ww, 0, 0,
+			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
+	if (err)
+		goto err_unpin;
+
 	rq = i915_request_create(ce);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
-		goto err_unpin;
+		goto err_vma;
 	}
 
 	err = i915_request_await_object(rq, vma->obj, true);
@@ -2249,6 +2254,8 @@ static int engine_wa_list_verify(struct intel_context *ce,
 
 err_rq:
 	i915_request_put(rq);
+err_vma:
+	i915_vma_unpin(vma);
 err_unpin:
 	intel_context_unpin(ce);
 err_pm:
@@ -2259,7 +2266,6 @@ static int engine_wa_list_verify(struct intel_context *ce,
 	}
 	i915_gem_ww_ctx_fini(&ww);
 	intel_engine_pm_put(ce->engine);
-	i915_vma_unpin(vma);
 	i915_vma_put(vma);
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index bfa7fd5c2c91..c9f3fef11f8f 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -4225,8 +4225,9 @@ static int preserved_virtual_engine(struct intel_gt *gt,
 	int err = 0;
 	u32 *cs;
 
-	scratch = __vm_create_scratch_for_read(&siblings[0]->gt->ggtt->vm,
-					       PAGE_SIZE);
+	scratch =
+		__vm_create_scratch_for_read_pinned(&siblings[0]->gt->ggtt->vm,
+						    PAGE_SIZE);
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 3485cb7c431d..1474300a1a9e 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -27,7 +27,7 @@
 
 static struct i915_vma *create_scratch(struct intel_gt *gt)
 {
-	return __vm_create_scratch_for_read(&gt->ggtt->vm, PAGE_SIZE);
+	return __vm_create_scratch_for_read_pinned(&gt->ggtt->vm, PAGE_SIZE);
 }
 
 static bool is_active(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index ca72894918ba..2dd063bc656d 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -75,7 +75,8 @@ static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
 	if (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS))
 		arg->mocs = table;
 
-	arg->scratch = __vm_create_scratch_for_read(&gt->ggtt->vm, PAGE_SIZE);
+	arg->scratch =
+		__vm_create_scratch_for_read_pinned(&gt->ggtt->vm, PAGE_SIZE);
 	if (IS_ERR(arg->scratch))
 		return PTR_ERR(arg->scratch);
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index 2070b91cb607..7ce7955c44d4 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -490,7 +490,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 	u32 *cs, *results;
 
 	sz = (2 * ARRAY_SIZE(values) + 1) * sizeof(u32);
-	scratch = __vm_create_scratch_for_read(ce->vm, sz);
+	scratch = __vm_create_scratch_for_read_pinned(ce->vm, sz);
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);
 
@@ -1030,14 +1030,14 @@ static int live_isolated_whitelist(void *arg)
 
 	for (i = 0; i < ARRAY_SIZE(client); i++) {
 		client[i].scratch[0] =
-			__vm_create_scratch_for_read(gt->vm, 4096);
+			__vm_create_scratch_for_read_pinned(gt->vm, 4096);
 		if (IS_ERR(client[i].scratch[0])) {
 			err = PTR_ERR(client[i].scratch[0]);
 			goto err;
 		}
 
 		client[i].scratch[1] =
-			__vm_create_scratch_for_read(gt->vm, 4096);
+			__vm_create_scratch_for_read_pinned(gt->vm, 4096);
 		if (IS_ERR(client[i].scratch[1])) {
 			err = PTR_ERR(client[i].scratch[1]);
 			i915_vma_unpin_and_release(&client[i].scratch[0], 0);
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 25/64] drm/i915: Take reservation lock around i915_vma_pin.
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We previously complained when ww == NULL.

i915_vma_pin() is now only used in selftests to pin an object, and ww
locking elsewhere is now fixed, so let it open its own ww transaction:
take the reservation lock, pin through i915_vma_pin_ww(), and back off
and retry on contention.
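
This makes the ww backoff/retry idiom that recurs throughout this
series the body of i915_vma_pin() itself. For reference, an annotated
paraphrase of the new inline (comments are mine, not from the patch):

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible waits */
retry:
	/* take the object's reservation lock as part of the transaction */
	err = i915_gem_object_lock(vma->obj, &ww);
	if (!err)
		err = i915_vma_pin_ww(vma, &ww, size, alignment, flags);
	if (err == -EDEADLK) {
		/* lost a lock-order race: drop all held locks and retry */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);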

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../i915/gem/selftests/i915_gem_coherency.c   | 14 +++++--------
 drivers/gpu/drm/i915/i915_gem.c               |  6 +++++-
 drivers/gpu/drm/i915/i915_vma.c               |  4 +---
 drivers/gpu/drm/i915/i915_vma.h               | 20 +++++++++++++++----
 4 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 1117d2a44518..654912abaeb4 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -200,16 +200,14 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
 	u32 *cs;
 	int err;
 
+	vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
 	i915_gem_object_lock(ctx->obj, NULL);
 	err = i915_gem_object_set_to_gtt_domain(ctx->obj, true);
 	if (err)
-		goto out_unlock;
-
-	vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto out_unlock;
-	}
+		goto out_unpin;
 
 	rq = intel_engine_create_kernel_request(ctx->engine);
 	if (IS_ERR(rq)) {
@@ -249,9 +247,7 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
 	i915_request_add(rq);
 out_unpin:
 	i915_vma_unpin(vma);
-out_unlock:
 	i915_gem_object_unlock(ctx->obj);
-
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d138a8b18a96..c5d0f3887d29 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1016,7 +1016,11 @@ i915_gem_object_ggtt_pin_ww(struct drm_i915_gem_object *obj,
 			return ERR_PTR(ret);
 	}
 
-	ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL);
+	if (ww)
+		ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL);
+	else
+		ret = i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL);
+
 	if (ret)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 1ffda2aaa7a0..265e3a3079e2 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -863,9 +863,7 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 	int err;
 
 #ifdef CONFIG_PROVE_LOCKING
-	if (debug_locks && lockdep_is_held(&vma->vm->i915->drm.struct_mutex))
-		WARN_ON(!ww);
-	if (debug_locks && ww && vma->resv)
+	if (debug_locks && !WARN_ON(!ww) && vma->resv)
 		assert_vma_held(vma);
 #endif
 
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 3c951d5428cf..cea6e7b8611b 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -246,10 +246,22 @@ i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 static inline int __must_check
 i915_vma_pin(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 {
-#ifdef CONFIG_LOCKDEP
-	WARN_ON_ONCE(vma->resv && dma_resv_held(vma->resv));
-#endif
-	return i915_vma_pin_ww(vma, NULL, size, alignment, flags);
+	struct i915_gem_ww_ctx ww;
+	int err;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(vma->obj, &ww);
+	if (!err)
+		err = i915_vma_pin_ww(vma, &ww, size, alignment, flags);
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	return err;
 }
 
 int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 26/64] drm/i915: Make lrc_init_wa_ctx compatible with ww locking.
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Make creation separate from pinning, so that the ww lock needs to be
taken only once, and the mapping is pinned with that lock held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 32 ++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 4e856947fb13..707981db189b 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1422,7 +1422,7 @@ gen10_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
 
 #define CTX_WA_BB_SIZE (PAGE_SIZE)
 
-static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
+static int lrc_create_wa_ctx(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
@@ -1438,10 +1438,6 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
 		goto err;
 	}
 
-	err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
-	if (err)
-		goto err;
-
 	engine->wa_ctx.vma = vma;
 	return 0;
 
@@ -1464,6 +1460,7 @@ int lrc_init_wa_ctx(struct intel_engine_cs *engine)
 		&wa_ctx->indirect_ctx, &wa_ctx->per_ctx
 	};
 	wa_bb_func_t wa_bb_fn[ARRAY_SIZE(wa_bb)];
+	struct i915_gem_ww_ctx ww;
 	void *batch, *batch_ptr;
 	unsigned int i;
 	int ret;
@@ -1492,13 +1489,21 @@ int lrc_init_wa_ctx(struct intel_engine_cs *engine)
 		return 0;
 	}
 
-	ret = lrc_setup_wa_ctx(engine);
+	ret = lrc_create_wa_ctx(engine);
 	if (ret) {
 		drm_dbg(&engine->i915->drm,
 			"Failed to setup context WA page: %d\n", ret);
 		return ret;
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(wa_ctx->vma->obj, &ww);
+	if (!ret)
+		ret = i915_ggtt_pin(wa_ctx->vma, &ww, 0, PIN_HIGH);
+	if (ret)
+		goto err;
+
 	batch = i915_gem_object_pin_map(wa_ctx->vma->obj, I915_MAP_WB);
 
 	/*
@@ -1522,8 +1527,21 @@ int lrc_init_wa_ctx(struct intel_engine_cs *engine)
 
 	__i915_gem_object_flush_map(wa_ctx->vma->obj, 0, batch_ptr - batch);
 	__i915_gem_object_release_map(wa_ctx->vma->obj);
+
 	if (ret)
-		lrc_fini_wa_ctx(engine);
+		i915_vma_unpin(wa_ctx->vma);
+err:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	if (ret && engine->wa_ctx.vma) {
+		i915_vma_put(engine->wa_ctx.vma);
+		engine->wa_ctx.vma = NULL;
+	}
 
 	return ret;
 }
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 27/64] drm/i915: Make __engine_unpark() compatible with ww locking.
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Take the ww lock around engine_unpark. Because of the many places
where rpm is used, I chose the safest option and used a trylock to
opportunistically take this lock for __engine_unpark; if the lock is
contended, the debug poisoning of the context is simply skipped.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 2843db731b7d..f81bdd747fc9 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -27,12 +27,16 @@ static void dbg_poison_ce(struct intel_context *ce)
 		int type = i915_coherent_map_type(ce->engine->i915);
 		void *map;
 
+		if (!i915_gem_object_trylock(obj))
+			return;
+
 		map = i915_gem_object_pin_map(obj, type);
 		if (!IS_ERR(map)) {
 			memset(map, CONTEXT_REDZONE, obj->base.size);
 			i915_gem_object_flush_map(obj);
 			i915_gem_object_unpin_map(obj);
 		}
+		i915_gem_object_unlock(obj);
 	}
 }
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 28/64] drm/i915: Take obj lock around set_domain ioctl
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to lock the object to move it to the correct domain; add the
missing lock, taking it before pinning the pages to match the new
locking rules.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 51a33c4f61d0..e62f9e8dd339 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -516,6 +516,10 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 		goto out;
 	}
 
+	err = i915_gem_object_lock_interruptible(obj, NULL);
+	if (err)
+		goto out;
+
 	/*
 	 * Flush and acquire obj->pages so that we are coherent through
 	 * direct access in memory with previous cached writes through
@@ -527,7 +531,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	 */
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	/*
 	 * Already in the desired write domain? Nothing for us to do!
@@ -542,10 +546,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	if (READ_ONCE(obj->write_domain) == read_domains)
 		goto out_unpin;
 
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		goto out_unpin;
-
 	if (read_domains & I915_GEM_DOMAIN_WC)
 		err = i915_gem_object_set_to_wc_domain(obj, write_domain);
 	else if (read_domains & I915_GEM_DOMAIN_GTT)
@@ -556,13 +556,15 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	/* And bump the LRU for this access */
 	i915_gem_object_bump_inactive_ggtt(obj);
 
+out_unpin:
+	i915_gem_object_unpin_pages(obj);
+
+out_unlock:
 	i915_gem_object_unlock(obj);
 
-	if (write_domain)
+	if (!err && write_domain)
 		i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
 
-out_unpin:
-	i915_gem_object_unpin_pages(obj);
 out:
 	i915_gem_object_put(obj);
 	return err;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 29/64] drm/i915: Defer pin calls in buffer pool until first use by caller.
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to take the object lock to pin pages, so defer making the pool
object unshrinkable until the callers have taken that lock and actually
pinned the buffer, which they now signal with
intel_gt_buffer_pool_mark_used().
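
The resulting calling convention, sketched from the updated call sites
in eb_parse() and the blt helpers (comments are mine):

	pool = intel_gt_get_buffer_pool(gt, size);
	...
	/* pin_map pins the pages; the object lock is held by the
	 * caller's ww transaction (asserted by mark_used) */
	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
	if (IS_ERR(cmd))
		goto err_pool;

	/* we pinned the pool; hide it from the shrinker until retire */
	intel_gt_buffer_pool_mark_used(pool);
	...
	/* mark_active now warns if mark_used was forgotten */
	err = intel_gt_buffer_pool_mark_active(pool, rq);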

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +
 .../gpu/drm/i915/gem/i915_gem_object_blt.c    |  6 +++
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.c    | 47 +++++++++----------
 .../gpu/drm/i915/gt/intel_gt_buffer_pool.h    |  5 ++
 .../drm/i915/gt/intel_gt_buffer_pool_types.h  |  1 +
 5 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index d4c065c1da06..1ac272f4634a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1342,6 +1342,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 		err = PTR_ERR(cmd);
 		goto err_pool;
 	}
+	intel_gt_buffer_pool_mark_used(pool);
 
 	memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
 
@@ -2636,6 +2637,7 @@ static int eb_parse(struct i915_execbuffer *eb)
 		err = PTR_ERR(shadow);
 		goto err;
 	}
+	intel_gt_buffer_pool_mark_used(pool);
 	i915_gem_object_set_readonly(shadow->obj);
 	shadow->private = pool;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
index 10cac9fac79b..13f681fc27db 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
@@ -55,6 +55,9 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
 	if (unlikely(err))
 		goto out_put;
 
+	/* we pinned the pool, mark it as such */
+	intel_gt_buffer_pool_mark_used(pool);
+
 	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
@@ -277,6 +280,9 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
 	if (unlikely(err))
 		goto out_put;
 
+	/* we pinned the pool, mark it as such */
+	intel_gt_buffer_pool_mark_used(pool);
+
 	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
index 104cb30e8c13..030759305196 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
@@ -98,28 +98,6 @@ static void pool_free_work(struct work_struct *wrk)
 				      round_jiffies_up_relative(HZ));
 }
 
-static int pool_active(struct i915_active *ref)
-{
-	struct intel_gt_buffer_pool_node *node =
-		container_of(ref, typeof(*node), active);
-	struct dma_resv *resv = node->obj->base.resv;
-	int err;
-
-	if (dma_resv_trylock(resv)) {
-		dma_resv_add_excl_fence(resv, NULL);
-		dma_resv_unlock(resv);
-	}
-
-	err = i915_gem_object_pin_pages(node->obj);
-	if (err)
-		return err;
-
-	/* Hide this pinned object from the shrinker until retired */
-	i915_gem_object_make_unshrinkable(node->obj);
-
-	return 0;
-}
-
 __i915_active_call
 static void pool_retire(struct i915_active *ref)
 {
@@ -129,10 +107,13 @@ static void pool_retire(struct i915_active *ref)
 	struct list_head *list = bucket_for_size(pool, node->obj->base.size);
 	unsigned long flags;
 
-	i915_gem_object_unpin_pages(node->obj);
+	if (node->pinned) {
+		i915_gem_object_unpin_pages(node->obj);
 
-	/* Return this object to the shrinker pool */
-	i915_gem_object_make_purgeable(node->obj);
+		/* Return this object to the shrinker pool */
+		i915_gem_object_make_purgeable(node->obj);
+		node->pinned = false;
+	}
 
 	GEM_BUG_ON(node->age);
 	spin_lock_irqsave(&pool->lock, flags);
@@ -144,6 +125,19 @@ static void pool_retire(struct i915_active *ref)
 			      round_jiffies_up_relative(HZ));
 }
 
+void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node)
+{
+	assert_object_held(node->obj);
+
+	if (node->pinned)
+		return;
+
+	__i915_gem_object_pin_pages(node->obj);
+	/* Hide this pinned object from the shrinker until retired */
+	i915_gem_object_make_unshrinkable(node->obj);
+	node->pinned = true;
+}
+
 static struct intel_gt_buffer_pool_node *
 node_create(struct intel_gt_buffer_pool *pool, size_t sz)
 {
@@ -158,7 +152,8 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz)
 
 	node->age = 0;
 	node->pool = pool;
-	i915_active_init(&node->active, pool_active, pool_retire);
+	node->pinned = false;
+	i915_active_init(&node->active, NULL, pool_retire);
 
 	obj = i915_gem_object_create_internal(gt->i915, sz);
 	if (IS_ERR(obj)) {
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
index 42cbac003e8a..9878ce9a07ab 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
@@ -17,10 +17,15 @@ struct i915_request;
 struct intel_gt_buffer_pool_node *
 intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size);
 
+void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node);
+
 static inline int
 intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node,
 				 struct i915_request *rq)
 {
+	/* did we call mark_used? */
+	GEM_WARN_ON(!node->pinned);
+
 	return i915_active_add_request(&node->active, rq);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
index bcf1658c9633..0401825e829d 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
@@ -31,6 +31,7 @@ struct intel_gt_buffer_pool_node {
 		struct rcu_head rcu;
 	};
 	unsigned long age;
+	bool pinned;
 };
 
 #endif /* INTEL_GT_BUFFER_POOL_TYPES_H */
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 30/64] drm/i915: Fix pread/pwrite to work with new locking rules.
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We are removing obj->mm.lock, and need to take the reservation lock
before we can pin pages. Move the page pinning into the helpers, and
merge the gtt pwrite/pread preparation and cleanup paths.

The lock fence is also removed; it would conflict with fence
annotations because of the memory allocations done when page faulting
inside copy_*_user.
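
With the shared prepare/cleanup pair, both gtt paths reduce to the same
shape (a sketch of the new structure; the copy loop is elided):

	vma = i915_gem_gtt_prepare(obj, &node, write);
	/* runs its own ww transaction: sets the GTT domain, pins a
	 * mappable vma (or inserts a scratch node) and pins the pages */
	if (IS_ERR(vma)) {
		ret = PTR_ERR(vma);
		goto out_rpm;
	}

	/* ... copy through the aperture; copy_*_user may fault here,
	 * which is why no lock or lock fence may still be held ... */

	i915_gem_gtt_cleanup(obj, &node, vma);	/* unpin pages, vma/node */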

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/Makefile              |   1 -
 drivers/gpu/drm/i915/gem/i915_gem_fence.c  |  95 ---------
 drivers/gpu/drm/i915/gem/i915_gem_object.h |   5 -
 drivers/gpu/drm/i915/i915_gem.c            | 224 +++++++++++----------
 4 files changed, 114 insertions(+), 211 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/gem/i915_gem_fence.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f1c7c3246226..350274698037 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -137,7 +137,6 @@ gem-y += \
 	gem/i915_gem_dmabuf.o \
 	gem/i915_gem_domain.o \
 	gem/i915_gem_execbuffer.o \
-	gem/i915_gem_fence.o \
 	gem/i915_gem_internal.o \
 	gem/i915_gem_object.o \
 	gem/i915_gem_object_blt.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_fence.c b/drivers/gpu/drm/i915/gem/i915_gem_fence.c
deleted file mode 100644
index 8ab842c80f99..000000000000
--- a/drivers/gpu/drm/i915/gem/i915_gem_fence.c
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * SPDX-License-Identifier: MIT
- *
- * Copyright © 2019 Intel Corporation
- */
-
-#include "i915_drv.h"
-#include "i915_gem_object.h"
-
-struct stub_fence {
-	struct dma_fence dma;
-	struct i915_sw_fence chain;
-};
-
-static int __i915_sw_fence_call
-stub_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
-{
-	struct stub_fence *stub = container_of(fence, typeof(*stub), chain);
-
-	switch (state) {
-	case FENCE_COMPLETE:
-		dma_fence_signal(&stub->dma);
-		break;
-
-	case FENCE_FREE:
-		dma_fence_put(&stub->dma);
-		break;
-	}
-
-	return NOTIFY_DONE;
-}
-
-static const char *stub_driver_name(struct dma_fence *fence)
-{
-	return DRIVER_NAME;
-}
-
-static const char *stub_timeline_name(struct dma_fence *fence)
-{
-	return "object";
-}
-
-static void stub_release(struct dma_fence *fence)
-{
-	struct stub_fence *stub = container_of(fence, typeof(*stub), dma);
-
-	i915_sw_fence_fini(&stub->chain);
-
-	BUILD_BUG_ON(offsetof(typeof(*stub), dma));
-	dma_fence_free(&stub->dma);
-}
-
-static const struct dma_fence_ops stub_fence_ops = {
-	.get_driver_name = stub_driver_name,
-	.get_timeline_name = stub_timeline_name,
-	.release = stub_release,
-};
-
-struct dma_fence *
-i915_gem_object_lock_fence(struct drm_i915_gem_object *obj)
-{
-	struct stub_fence *stub;
-
-	assert_object_held(obj);
-
-	stub = kmalloc(sizeof(*stub), GFP_KERNEL);
-	if (!stub)
-		return NULL;
-
-	i915_sw_fence_init(&stub->chain, stub_notify);
-	dma_fence_init(&stub->dma, &stub_fence_ops, &stub->chain.wait.lock,
-		       0, 0);
-
-	if (i915_sw_fence_await_reservation(&stub->chain,
-					    obj->base.resv, NULL, true,
-					    i915_fence_timeout(to_i915(obj->base.dev)),
-					    I915_FENCE_GFP) < 0)
-		goto err;
-
-	dma_resv_add_excl_fence(obj->base.resv, &stub->dma);
-
-	return &stub->dma;
-
-err:
-	stub_release(&stub->dma);
-	return NULL;
-}
-
-void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
-				  struct dma_fence *fence)
-{
-	struct stub_fence *stub = container_of(fence, typeof(*stub), dma);
-
-	i915_sw_fence_commit(&stub->chain);
-}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 0c357e91dce4..fbd01e25ca71 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -163,11 +163,6 @@ static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj)
 	dma_resv_unlock(obj->base.resv);
 }
 
-struct dma_fence *
-i915_gem_object_lock_fence(struct drm_i915_gem_object *obj);
-void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
-				  struct dma_fence *fence);
-
 static inline void
 i915_gem_object_set_readonly(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index c5d0f3887d29..dd811d0f96f2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -306,7 +306,6 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 {
 	unsigned int needs_clflush;
 	unsigned int idx, offset;
-	struct dma_fence *fence;
 	char __user *user_data;
 	u64 remain;
 	int ret;
@@ -315,19 +314,17 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
+	ret = i915_gem_object_pin_pages(obj);
+	if (ret)
+		goto err_unlock;
+
 	ret = i915_gem_object_prepare_read(obj, &needs_clflush);
-	if (ret) {
-		i915_gem_object_unlock(obj);
-		return ret;
-	}
+	if (ret)
+		goto err_unpin;
 
-	fence = i915_gem_object_lock_fence(obj);
 	i915_gem_object_finish_access(obj);
 	i915_gem_object_unlock(obj);
 
-	if (!fence)
-		return -ENOMEM;
-
 	remain = args->size;
 	user_data = u64_to_user_ptr(args->data_ptr);
 	offset = offset_in_page(args->offset);
@@ -345,7 +342,13 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 		offset = 0;
 	}
 
-	i915_gem_object_unlock_fence(obj, fence);
+	i915_gem_object_unpin_pages(obj);
+	return ret;
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
+err_unlock:
+	i915_gem_object_unlock(obj);
 	return ret;
 }
 
@@ -373,52 +376,102 @@ gtt_user_read(struct io_mapping *mapping,
 	return unwritten;
 }
 
-static int
-i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
-		   const struct drm_i915_gem_pread *args)
+static struct i915_vma *i915_gem_gtt_prepare(struct drm_i915_gem_object *obj,
+					     struct drm_mm_node *node,
+					     bool write)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct i915_ggtt *ggtt = &i915->ggtt;
-	intel_wakeref_t wakeref;
-	struct drm_mm_node node;
-	struct dma_fence *fence;
-	void __user *user_data;
 	struct i915_vma *vma;
-	u64 remain, offset;
+	struct i915_gem_ww_ctx ww;
 	int ret;
 
-	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
 	vma = ERR_PTR(-ENODEV);
+	ret = i915_gem_object_lock(obj, &ww);
+	if (ret)
+		goto err_ww;
+
+	ret = i915_gem_object_set_to_gtt_domain(obj, write);
+	if (ret)
+		goto err_ww;
+
 	if (!i915_gem_object_is_tiled(obj))
-		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
-					       PIN_MAPPABLE |
-					       PIN_NONBLOCK /* NOWARN */ |
-					       PIN_NOEVICT);
-	if (!IS_ERR(vma)) {
-		node.start = i915_ggtt_offset(vma);
-		node.flags = 0;
+		vma = i915_gem_object_ggtt_pin_ww(obj, &ww, NULL, 0, 0,
+						  PIN_MAPPABLE |
+						  PIN_NONBLOCK /* NOWARN */ |
+						  PIN_NOEVICT);
+	if (vma == ERR_PTR(-EDEADLK)) {
+		ret = -EDEADLK;
+		goto err_ww;
+	} else if (!IS_ERR(vma)) {
+		node->start = i915_ggtt_offset(vma);
+		node->flags = 0;
 	} else {
-		ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
+		ret = insert_mappable_node(ggtt, node, PAGE_SIZE);
 		if (ret)
-			goto out_rpm;
-		GEM_BUG_ON(!drm_mm_node_allocated(&node));
+			goto err_ww;
+		GEM_BUG_ON(!drm_mm_node_allocated(node));
+		vma = NULL;
 	}
 
-	ret = i915_gem_object_lock_interruptible(obj, NULL);
-	if (ret)
-		goto out_unpin;
-
-	ret = i915_gem_object_set_to_gtt_domain(obj, false);
+	ret = i915_gem_object_pin_pages(obj);
 	if (ret) {
-		i915_gem_object_unlock(obj);
-		goto out_unpin;
+		if (drm_mm_node_allocated(node)) {
+			ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
+			remove_mappable_node(ggtt, node);
+		} else {
+			i915_vma_unpin(vma);
+		}
 	}
 
-	fence = i915_gem_object_lock_fence(obj);
-	i915_gem_object_unlock(obj);
-	if (!fence) {
-		ret = -ENOMEM;
-		goto out_unpin;
+err_ww:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	return ret ? ERR_PTR(ret) : vma;
+}
+
+static void i915_gem_gtt_cleanup(struct drm_i915_gem_object *obj,
+				 struct drm_mm_node *node,
+				 struct i915_vma *vma)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	i915_gem_object_unpin_pages(obj);
+	if (drm_mm_node_allocated(node)) {
+		ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
+		remove_mappable_node(ggtt, node);
+	} else {
+		i915_vma_unpin(vma);
+	}
+}
+
+static int
+i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
+		   const struct drm_i915_gem_pread *args)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	intel_wakeref_t wakeref;
+	struct drm_mm_node node;
+	void __user *user_data;
+	struct i915_vma *vma;
+	u64 remain, offset;
+	int ret = 0;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	vma = i915_gem_gtt_prepare(obj, &node, false);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto out_rpm;
 	}
 
 	user_data = u64_to_user_ptr(args->data_ptr);
@@ -455,14 +508,7 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
 		offset += page_length;
 	}
 
-	i915_gem_object_unlock_fence(obj, fence);
-out_unpin:
-	if (drm_mm_node_allocated(&node)) {
-		ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
-		remove_mappable_node(ggtt, &node);
-	} else {
-		i915_vma_unpin(vma);
-	}
+	i915_gem_gtt_cleanup(obj, &node, vma);
 out_rpm:
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 	return ret;
@@ -520,15 +566,10 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto out;
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		goto out;
-
 	ret = i915_gem_shmem_pread(obj, args);
 	if (ret == -EFAULT || ret == -ENODEV)
 		ret = i915_gem_gtt_pread(obj, args);
 
-	i915_gem_object_unpin_pages(obj);
 out:
 	i915_gem_object_put(obj);
 	return ret;
@@ -576,11 +617,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 	struct intel_runtime_pm *rpm = &i915->runtime_pm;
 	intel_wakeref_t wakeref;
 	struct drm_mm_node node;
-	struct dma_fence *fence;
 	struct i915_vma *vma;
 	u64 remain, offset;
 	void __user *user_data;
-	int ret;
+	int ret = 0;
 
 	if (i915_gem_object_has_struct_page(obj)) {
 		/*
@@ -598,37 +638,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 		wakeref = intel_runtime_pm_get(rpm);
 	}
 
-	vma = ERR_PTR(-ENODEV);
-	if (!i915_gem_object_is_tiled(obj))
-		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
-					       PIN_MAPPABLE |
-					       PIN_NONBLOCK /* NOWARN */ |
-					       PIN_NOEVICT);
-	if (!IS_ERR(vma)) {
-		node.start = i915_ggtt_offset(vma);
-		node.flags = 0;
-	} else {
-		ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
-		if (ret)
-			goto out_rpm;
-		GEM_BUG_ON(!drm_mm_node_allocated(&node));
-	}
-
-	ret = i915_gem_object_lock_interruptible(obj, NULL);
-	if (ret)
-		goto out_unpin;
-
-	ret = i915_gem_object_set_to_gtt_domain(obj, true);
-	if (ret) {
-		i915_gem_object_unlock(obj);
-		goto out_unpin;
-	}
-
-	fence = i915_gem_object_lock_fence(obj);
-	i915_gem_object_unlock(obj);
-	if (!fence) {
-		ret = -ENOMEM;
-		goto out_unpin;
+	vma = i915_gem_gtt_prepare(obj, &node, true);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto out_rpm;
 	}
 
 	i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
@@ -677,14 +690,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 	intel_gt_flush_ggtt_writes(ggtt->vm.gt);
 	i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
 
-	i915_gem_object_unlock_fence(obj, fence);
-out_unpin:
-	if (drm_mm_node_allocated(&node)) {
-		ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
-		remove_mappable_node(ggtt, &node);
-	} else {
-		i915_vma_unpin(vma);
-	}
+	i915_gem_gtt_cleanup(obj, &node, vma);
 out_rpm:
 	intel_runtime_pm_put(rpm, wakeref);
 	return ret;
@@ -724,7 +730,6 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	unsigned int partial_cacheline_write;
 	unsigned int needs_clflush;
 	unsigned int offset, idx;
-	struct dma_fence *fence;
 	void __user *user_data;
 	u64 remain;
 	int ret;
@@ -733,19 +738,17 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
+	ret = i915_gem_object_pin_pages(obj);
+	if (ret)
+		goto err_unlock;
+
 	ret = i915_gem_object_prepare_write(obj, &needs_clflush);
-	if (ret) {
-		i915_gem_object_unlock(obj);
-		return ret;
-	}
+	if (ret)
+		goto err_unpin;
 
-	fence = i915_gem_object_lock_fence(obj);
 	i915_gem_object_finish_access(obj);
 	i915_gem_object_unlock(obj);
 
-	if (!fence)
-		return -ENOMEM;
-
 	/* If we don't overwrite a cacheline completely we need to be
 	 * careful to have up-to-date data by first clflushing. Don't
 	 * overcomplicate things and flush the entire patch.
@@ -773,8 +776,14 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	}
 
 	i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
-	i915_gem_object_unlock_fence(obj, fence);
 
+	i915_gem_object_unpin_pages(obj);
+	return ret;
+
+err_unpin:
+	i915_gem_object_unpin_pages(obj);
+err_unlock:
+	i915_gem_object_unlock(obj);
 	return ret;
 }
 
@@ -831,10 +840,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto err;
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		goto err;
-
 	ret = -EFAULT;
 	/* We can only do the GTT pwrite on untiled buffers, as otherwise
 	 * it would end up going through the fenced access, and we'll get
@@ -855,7 +860,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 			ret = i915_gem_shmem_pwrite(obj, args);
 	}
 
-	i915_gem_object_unpin_pages(obj);
 err:
 	i915_gem_object_put(obj);
 	return ret;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 31/64] drm/i915: Fix workarounds selftest, part 1
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

pin_map needs the ww lock, so ensure we pin both the batch and the
scratch objects inside a single ww transaction before submission.
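
Both objects have to be locked in the same transaction, so that a
lock-order conflict surfaces as -EDEADLK and is resolved by backing
off, instead of deadlocking (sketch of the loop added to
check_dirty_whitelist(); unwind paths trimmed):

	i915_gem_ww_ctx_init(&ww, false);
retry:
	err = i915_gem_object_lock(scratch->obj, &ww);
	if (!err)
		err = i915_gem_object_lock(batch->obj, &ww); /* same ctx */
	if (!err)
		err = intel_context_pin_ww(ce, &ww);
	if (err)
		goto out;

	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
	results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);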

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  3 +
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 12 +++
 .../gpu/drm/i915/gt/selftest_workarounds.c    | 76 ++++++++++++-------
 3 files changed, 64 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index fbd01e25ca71..a8e98acdfc04 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -395,6 +395,9 @@ enum i915_map_type {
 void *__must_check i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 					   enum i915_map_type type);
 
+void *__must_check i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
+						    enum i915_map_type type);
+
 void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
 				 unsigned long offset,
 				 unsigned long size);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index ffc3c3c474b1..e7e5898cb143 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -397,6 +397,18 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	goto out_unlock;
 }
 
+void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
+				       enum i915_map_type type)
+{
+	void *ret;
+
+	i915_gem_object_lock(obj, NULL);
+	ret = i915_gem_object_pin_map(obj, type);
+	i915_gem_object_unlock(obj);
+
+	return ret;
+}
+
 void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
 				 unsigned long offset,
 				 unsigned long size)
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index 7ce7955c44d4..5cc78eada097 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -112,7 +112,7 @@ read_nonprivs(struct intel_context *ce)
 
 	i915_gem_object_set_cache_coherency(result, I915_CACHE_LLC);
 
-	cs = i915_gem_object_pin_map(result, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(result, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_obj;
@@ -218,7 +218,7 @@ static int check_whitelist(struct intel_context *ce)
 	i915_gem_object_lock(results, NULL);
 	intel_wedge_on_timeout(&wedge, engine->gt, HZ / 5) /* safety net! */
 		err = i915_gem_object_set_to_cpu_domain(results, false);
-	i915_gem_object_unlock(results);
+
 	if (intel_gt_is_wedged(engine->gt))
 		err = -EIO;
 	if (err)
@@ -246,6 +246,7 @@ static int check_whitelist(struct intel_context *ce)
 
 	i915_gem_object_unpin_map(results);
 out_put:
+	i915_gem_object_unlock(results);
 	i915_gem_object_put(results);
 	return err;
 }
@@ -502,6 +503,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 
 	for (i = 0; i < engine->whitelist.count; i++) {
 		u32 reg = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
+		struct i915_gem_ww_ctx ww;
 		u64 addr = scratch->node.start;
 		struct i915_request *rq;
 		u32 srm, lrm, rsvd;
@@ -517,6 +519,29 @@ static int check_dirty_whitelist(struct intel_context *ce)
 
 		ro_reg = ro_register(reg);
 
+		i915_gem_ww_ctx_init(&ww, false);
+retry:
+		cs = NULL;
+		err = i915_gem_object_lock(scratch->obj, &ww);
+		if (!err)
+			err = i915_gem_object_lock(batch->obj, &ww);
+		if (!err)
+			err = intel_context_pin_ww(ce, &ww);
+		if (err)
+			goto out;
+
+		cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+		if (IS_ERR(cs)) {
+			err = PTR_ERR(cs);
+			goto out_ctx;
+		}
+
+		results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+		if (IS_ERR(results)) {
+			err = PTR_ERR(results);
+			goto out_unmap_batch;
+		}
+
 		/* Clear non priv flags */
 		reg &= RING_FORCE_TO_NONPRIV_ADDRESS_MASK;
 
@@ -528,12 +553,6 @@ static int check_dirty_whitelist(struct intel_context *ce)
 		pr_debug("%s: Writing garbage to %x\n",
 			 engine->name, reg);
 
-		cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
-		if (IS_ERR(cs)) {
-			err = PTR_ERR(cs);
-			goto out_batch;
-		}
-
 		/* SRM original */
 		*cs++ = srm;
 		*cs++ = reg;
@@ -580,11 +599,12 @@ static int check_dirty_whitelist(struct intel_context *ce)
 		i915_gem_object_flush_map(batch->obj);
 		i915_gem_object_unpin_map(batch->obj);
 		intel_gt_chipset_flush(engine->gt);
+		cs = NULL;
 
-		rq = intel_context_create_request(ce);
+		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_batch;
+			goto out_unmap_scratch;
 		}
 
 		if (engine->emit_init_breadcrumb) { /* Be nice if we hang */
@@ -593,20 +613,16 @@ static int check_dirty_whitelist(struct intel_context *ce)
 				goto err_request;
 		}
 
-		i915_vma_lock(batch);
 		err = i915_request_await_object(rq, batch->obj, false);
 		if (err == 0)
 			err = i915_vma_move_to_active(batch, rq, 0);
-		i915_vma_unlock(batch);
 		if (err)
 			goto err_request;
 
-		i915_vma_lock(scratch);
 		err = i915_request_await_object(rq, scratch->obj, true);
 		if (err == 0)
 			err = i915_vma_move_to_active(scratch, rq,
 						      EXEC_OBJECT_WRITE);
-		i915_vma_unlock(scratch);
 		if (err)
 			goto err_request;
 
@@ -622,13 +638,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 			pr_err("%s: Futzing %x timedout; cancelling test\n",
 			       engine->name, reg);
 			intel_gt_set_wedged(engine->gt);
-			goto out_batch;
-		}
-
-		results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
-		if (IS_ERR(results)) {
-			err = PTR_ERR(results);
-			goto out_batch;
+			goto out_unmap_scratch;
 		}
 
 		GEM_BUG_ON(values[ARRAY_SIZE(values) - 1] != 0xffffffff);
@@ -639,7 +649,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
 				pr_err("%s: Unable to write to whitelisted register %x\n",
 				       engine->name, reg);
 				err = -EINVAL;
-				goto out_unpin;
+				goto out_unmap_scratch;
 			}
 		} else {
 			rsvd = 0;
@@ -705,15 +715,27 @@ static int check_dirty_whitelist(struct intel_context *ce)
 
 			err = -EINVAL;
 		}
-out_unpin:
+out_unmap_scratch:
 		i915_gem_object_unpin_map(scratch->obj);
+out_unmap_batch:
+		if (cs)
+			i915_gem_object_unpin_map(batch->obj);
+out_ctx:
+		intel_context_unpin(ce);
+out:
+		if (err == -EDEADLK) {
+			err = i915_gem_ww_ctx_backoff(&ww);
+			if (!err)
+				goto retry;
+		}
+		i915_gem_ww_ctx_fini(&ww);
 		if (err)
 			break;
 	}
 
 	if (igt_flush_test(engine->i915))
 		err = -EIO;
-out_batch:
+
 	i915_vma_unpin_and_release(&batch, 0);
 out_scratch:
 	i915_vma_unpin_and_release(&scratch, 0);
@@ -847,7 +869,7 @@ static int scrub_whitelisted_registers(struct intel_context *ce)
 	if (IS_ERR(batch))
 		return PTR_ERR(batch);
 
-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_batch;
@@ -982,11 +1004,11 @@ check_whitelisted_registers(struct intel_engine_cs *engine,
 	u32 *a, *b;
 	int i, err;
 
-	a = i915_gem_object_pin_map(A->obj, I915_MAP_WB);
+	a = i915_gem_object_pin_map_unlocked(A->obj, I915_MAP_WB);
 	if (IS_ERR(a))
 		return PTR_ERR(a);
 
-	b = i915_gem_object_pin_map(B->obj, I915_MAP_WB);
+	b = i915_gem_object_pin_map_unlocked(B->obj, I915_MAP_WB);
 	if (IS_ERR(b)) {
 		err = PTR_ERR(b);
 		goto err_a;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 32/64] drm/i915: Prepare for obj->mm.lock removal
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, Thomas Hellström

From: Thomas Hellström <thomas.hellstrom@intel.com>

Stolen objects need to take the object lock when pinning pages, and we
may call put_pages when the refcount drops to 0; ensure all calls are
handled correctly. The new assert_object_held_shared() lets the final
put_pages run without the lock, since at zero refcount the object can
only be reached under kref_get_unless_zero().
Idea-from: Thomas Hellström <thomas.hellstrom@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h | 14 ++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_pages.c  | 14 ++++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 10 +++++++++-
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index a8e98acdfc04..105a49102827 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -118,6 +118,20 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
 
 #define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv)
 
+/*
+ * If more than one potential simultaneous locker, assert held.
+ */
+static inline void assert_object_held_shared(struct drm_i915_gem_object *obj)
+{
+	/*
+	 * Note mm list lookup is protected by
+	 * kref_get_unless_zero().
+	 */
+	if (IS_ENABLED(CONFIG_LOCKDEP) &&
+	    kref_read(&obj->base.refcount) > 0)
+		lockdep_assert_held(&obj->mm.lock);
+}
+
 static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj,
 					 struct i915_gem_ww_ctx *ww,
 					 bool intr)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index e7e5898cb143..b86bef4fb602 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -18,7 +18,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 	unsigned long supported = INTEL_INFO(i915)->page_sizes;
 	int i;
 
-	lockdep_assert_held(&obj->mm.lock);
+	assert_object_held_shared(obj);
 
 	if (i915_gem_object_is_volatile(obj))
 		obj->mm.madv = I915_MADV_DONTNEED;
@@ -67,6 +67,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		struct list_head *list;
 		unsigned long flags;
 
+		lockdep_assert_held(&obj->mm.lock);
 		spin_lock_irqsave(&i915->mm.obj_lock, flags);
 
 		i915->mm.shrink_count++;
@@ -88,6 +89,8 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	int err;
 
+	assert_object_held_shared(obj);
+
 	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
 		drm_dbg(&i915->drm,
 			"Attempting to obtain a purgeable object\n");
@@ -115,6 +118,8 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	if (err)
 		return err;
 
+	assert_object_held_shared(obj);
+
 	if (unlikely(!i915_gem_object_has_pages(obj))) {
 		GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
@@ -142,7 +147,7 @@ void i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 /* Try to discard unwanted pages */
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj)
 {
-	lockdep_assert_held(&obj->mm.lock);
+	assert_object_held_shared(obj);
 	GEM_BUG_ON(i915_gem_object_has_pages(obj));
 
 	if (obj->ops->writeback)
@@ -173,6 +178,8 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
 
+	assert_object_held_shared(obj);
+
 	pages = fetch_and_zero(&obj->mm.pages);
 	if (IS_ERR_OR_NULL(pages))
 		return pages;
@@ -200,6 +207,9 @@ int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
 	if (i915_gem_object_has_pinned_pages(obj))
 		return -EBUSY;
 
+	/* May be called by shrinker from within get_pages() (on another bo) */
+	assert_object_held_shared(obj);
+
 	i915_gem_object_release_mmap_offset(obj);
 
 	/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 5372b888ba01..ce9086d3a647 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -643,11 +643,19 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem,
 	cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
 	i915_gem_object_set_cache_coherency(obj, cache_level);
 
+	if (WARN_ON(!i915_gem_object_trylock(obj))) {
+		err = -EBUSY;
+		goto cleanup;
+	}
+
 	err = i915_gem_object_pin_pages(obj);
-	if (err)
+	if (err) {
+		i915_gem_object_unlock(obj);
 		goto cleanup;
+	}
 
 	i915_gem_object_init_memory_region(obj, mem);
+	i915_gem_object_unlock(obj);
 
 	return obj;
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 33/64] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

By default, we assume that igt_spinner_pin() is called from inside
igt_spinner_create_request() to keep existing selftests working, but
allow for manual pinning when passing a ww context.
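
The two modes side by side (a sketch; MI_ARB_CHECK stands in for
whatever arbitration command a given selftest passes):

	/* implicit: create_request pins on first use, with its own locking */
	rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);

	/* explicit: pin under the caller's ww transaction beforehand */
	err = igt_spinner_pin(&spin, ce, &ww);
	if (!err)
		rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);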

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/igt_spinner.c | 136 ++++++++++++-------
 drivers/gpu/drm/i915/selftests/igt_spinner.h |   5 +
 2 files changed, 95 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.c b/drivers/gpu/drm/i915/selftests/igt_spinner.c
index 83f6e5f31fb3..cfbbe415b57c 100644
--- a/drivers/gpu/drm/i915/selftests/igt_spinner.c
+++ b/drivers/gpu/drm/i915/selftests/igt_spinner.c
@@ -12,8 +12,6 @@
 
 int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt)
 {
-	unsigned int mode;
-	void *vaddr;
 	int err;
 
 	memset(spin, 0, sizeof(*spin));
@@ -24,6 +22,7 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt)
 		err = PTR_ERR(spin->hws);
 		goto err;
 	}
+	i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC);
 
 	spin->obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE);
 	if (IS_ERR(spin->obj)) {
@@ -31,34 +30,83 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt)
 		goto err_hws;
 	}
 
-	i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC);
-	vaddr = i915_gem_object_pin_map(spin->hws, I915_MAP_WB);
-	if (IS_ERR(vaddr)) {
-		err = PTR_ERR(vaddr);
-		goto err_obj;
-	}
-	spin->seqno = memset(vaddr, 0xff, PAGE_SIZE);
-
-	mode = i915_coherent_map_type(gt->i915);
-	vaddr = i915_gem_object_pin_map(spin->obj, mode);
-	if (IS_ERR(vaddr)) {
-		err = PTR_ERR(vaddr);
-		goto err_unpin_hws;
-	}
-	spin->batch = vaddr;
-
 	return 0;
 
-err_unpin_hws:
-	i915_gem_object_unpin_map(spin->hws);
-err_obj:
-	i915_gem_object_put(spin->obj);
 err_hws:
 	i915_gem_object_put(spin->hws);
 err:
 	return err;
 }
 
+static void *igt_spinner_pin_obj(struct intel_context *ce,
+				 struct i915_gem_ww_ctx *ww,
+				 struct drm_i915_gem_object *obj,
+				 unsigned int mode, struct i915_vma **vma)
+{
+	void *vaddr;
+	int ret;
+
+	*vma = i915_vma_instance(obj, ce->vm, NULL);
+	if (IS_ERR(*vma))
+		return ERR_CAST(*vma);
+
+	ret = i915_gem_object_lock(obj, ww);
+	if (ret)
+		return ERR_PTR(ret);
+
+	vaddr = i915_gem_object_pin_map(obj, mode);
+
+	if (!ww)
+		i915_gem_object_unlock(obj);
+
+	if (IS_ERR(vaddr))
+		return vaddr;
+
+	if (ww)
+		ret = i915_vma_pin_ww(*vma, ww, 0, 0, PIN_USER);
+	else
+		ret = i915_vma_pin(*vma, 0, 0, PIN_USER);
+
+	if (ret) {
+		i915_gem_object_unpin_map(obj);
+		return ERR_PTR(ret);
+	}
+
+	return vaddr;
+}
+
+int igt_spinner_pin(struct igt_spinner *spin,
+		    struct intel_context *ce,
+		    struct i915_gem_ww_ctx *ww)
+{
+	void *vaddr;
+
+	if (spin->ce && WARN_ON(spin->ce != ce))
+		return -ENODEV;
+	spin->ce = ce;
+
+	if (!spin->seqno) {
+		vaddr = igt_spinner_pin_obj(ce, ww, spin->hws, I915_MAP_WB, &spin->hws_vma);
+		if (IS_ERR(vaddr))
+			return PTR_ERR(vaddr);
+
+		spin->seqno = memset(vaddr, 0xff, PAGE_SIZE);
+	}
+
+	if (!spin->batch) {
+		unsigned int mode =
+			i915_coherent_map_type(spin->gt->i915);
+
+		vaddr = igt_spinner_pin_obj(ce, ww, spin->obj, mode, &spin->batch_vma);
+		if (IS_ERR(vaddr))
+			return PTR_ERR(vaddr);
+
+		spin->batch = vaddr;
+	}
+
+	return 0;
+}
+
 static unsigned int seqno_offset(u64 fence)
 {
 	return offset_in_page(sizeof(u32) * fence);
@@ -103,27 +151,18 @@ igt_spinner_create_request(struct igt_spinner *spin,
 	if (!intel_engine_can_store_dword(ce->engine))
 		return ERR_PTR(-ENODEV);
 
-	vma = i915_vma_instance(spin->obj, ce->vm, NULL);
-	if (IS_ERR(vma))
-		return ERR_CAST(vma);
-
-	hws = i915_vma_instance(spin->hws, ce->vm, NULL);
-	if (IS_ERR(hws))
-		return ERR_CAST(hws);
+	if (!spin->batch) {
+		err = igt_spinner_pin(spin, ce, NULL);
+		if (err)
+			return ERR_PTR(err);
+	}
 
-	err = i915_vma_pin(vma, 0, 0, PIN_USER);
-	if (err)
-		return ERR_PTR(err);
-
-	err = i915_vma_pin(hws, 0, 0, PIN_USER);
-	if (err)
-		goto unpin_vma;
+	hws = spin->hws_vma;
+	vma = spin->batch_vma;
 
 	rq = intel_context_create_request(ce);
-	if (IS_ERR(rq)) {
-		err = PTR_ERR(rq);
-		goto unpin_hws;
-	}
+	if (IS_ERR(rq))
+		return ERR_CAST(rq);
 
 	err = move_to_active(vma, rq, 0);
 	if (err)
@@ -186,10 +225,6 @@ igt_spinner_create_request(struct igt_spinner *spin,
 		i915_request_set_error_once(rq, err);
 		i915_request_add(rq);
 	}
-unpin_hws:
-	i915_vma_unpin(hws);
-unpin_vma:
-	i915_vma_unpin(vma);
 	return err ? ERR_PTR(err) : rq;
 }
 
@@ -203,6 +238,9 @@ hws_seqno(const struct igt_spinner *spin, const struct i915_request *rq)
 
 void igt_spinner_end(struct igt_spinner *spin)
 {
+	if (!spin->batch)
+		return;
+
 	*spin->batch = MI_BATCH_BUFFER_END;
 	intel_gt_chipset_flush(spin->gt);
 }
@@ -211,10 +249,16 @@ void igt_spinner_fini(struct igt_spinner *spin)
 {
 	igt_spinner_end(spin);
 
-	i915_gem_object_unpin_map(spin->obj);
+	if (spin->batch) {
+		i915_vma_unpin(spin->batch_vma);
+		i915_gem_object_unpin_map(spin->obj);
+	}
 	i915_gem_object_put(spin->obj);
 
-	i915_gem_object_unpin_map(spin->hws);
+	if (spin->seqno) {
+		i915_vma_unpin(spin->hws_vma);
+		i915_gem_object_unpin_map(spin->hws);
+	}
 	i915_gem_object_put(spin->hws);
 }
 
diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.h b/drivers/gpu/drm/i915/selftests/igt_spinner.h
index ec62c9ef320b..fbe5b1625b05 100644
--- a/drivers/gpu/drm/i915/selftests/igt_spinner.h
+++ b/drivers/gpu/drm/i915/selftests/igt_spinner.h
@@ -20,11 +20,16 @@ struct igt_spinner {
 	struct intel_gt *gt;
 	struct drm_i915_gem_object *hws;
 	struct drm_i915_gem_object *obj;
+	struct intel_context *ce;
+	struct i915_vma *hws_vma, *batch_vma;
 	u32 *batch;
 	void *seqno;
 };
 
 int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt);
+int igt_spinner_pin(struct igt_spinner *spin,
+		    struct intel_context *ce,
+		    struct i915_gem_ww_ctx *ww);
 void igt_spinner_fini(struct igt_spinner *spin);
 
 struct i915_request *
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 34/64] drm/i915: Add ww locking around vm_access()
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

i915_gem_object_pin_map() potentially needs a ww context, so ensure we
set one up that can be backed off and retried when we hit -EDEADLK.
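
The resulting pattern is the usual init/lock/backoff/fini dance used
throughout this series; roughly (a sketch, with the actual pin_map and
memcpy elided):

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true); /* interruptible */
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err) {
		/* pin_map, copy to/from the mapping, unpin */
	}
	if (err == -EDEADLK) {
		/* drops all locks taken under ww, then we try again */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);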

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 163208a6260d..2561a2f1e54f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -421,7 +421,9 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 {
 	struct i915_mmap_offset *mmo = area->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
+	struct i915_gem_ww_ctx ww;
 	void *vaddr;
+	int err = 0;
 
 	if (i915_gem_object_is_readonly(obj) && write)
 		return -EACCES;
@@ -430,10 +432,18 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 	if (addr >= obj->base.size)
 		return -EINVAL;
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (err)
+		goto out;
+
 	/* As this is primarily for debugging, let's focus on simplicity */
 	vaddr = i915_gem_object_pin_map(obj, I915_MAP_FORCE_WC);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto out;
+	}
 
 	if (write) {
 		memcpy(vaddr + addr, buf, len);
@@ -443,6 +453,16 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
 	}
 
 	i915_gem_object_unpin_map(obj);
+out:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
+	if (err)
+		return err;
 
 	return len;
 }
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 35/64] drm/i915: Increase ww locking for perf.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to lock a few more objects, some of them only temporarily;
add ww locking where needed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 56 ++++++++++++++++++++++++--------
 1 file changed, 43 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 112ba5f2ce90..aac614204fc0 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1589,7 +1589,7 @@ static int alloc_oa_buffer(struct i915_perf_stream *stream)
 	stream->oa_buffer.vma = vma;
 
 	stream->oa_buffer.vaddr =
-		i915_gem_object_pin_map(bo, I915_MAP_WB);
+		i915_gem_object_pin_map_unlocked(bo, I915_MAP_WB);
 	if (IS_ERR(stream->oa_buffer.vaddr)) {
 		ret = PTR_ERR(stream->oa_buffer.vaddr);
 		goto err_unpin;
@@ -1643,6 +1643,7 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 	const u32 base = stream->engine->mmio_base;
 #define CS_GPR(x) GEN8_RING_CS_GPR(base, x)
 	u32 *batch, *ts0, *cs, *jump;
+	struct i915_gem_ww_ctx ww;
 	int ret, i;
 	enum {
 		START_TS,
@@ -1660,15 +1661,21 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 		return PTR_ERR(bo);
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	ret = i915_gem_object_lock(bo, &ww);
+	if (ret)
+		goto out_ww;
+
 	/*
 	 * We pin in GGTT because we jump into this buffer now because
 	 * multiple OA config BOs will have a jump to this address and it
 	 * needs to be fixed during the lifetime of the i915/perf stream.
 	 */
-	vma = i915_gem_object_ggtt_pin(bo, NULL, 0, 0, PIN_HIGH);
+	vma = i915_gem_object_ggtt_pin_ww(bo, &ww, NULL, 0, 0, PIN_HIGH);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
-		goto err_unref;
+		goto out_ww;
 	}
 
 	batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB);
@@ -1802,12 +1809,19 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
 	__i915_gem_object_release_map(bo);
 
 	stream->noa_wait = vma;
-	return 0;
+	goto out_ww;
 
 err_unpin:
 	i915_vma_unpin_and_release(&vma, 0);
-err_unref:
-	i915_gem_object_put(bo);
+out_ww:
+	if (ret == -EDEADLK) {
+		ret = i915_gem_ww_ctx_backoff(&ww);
+		if (!ret)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	if (ret)
+		i915_gem_object_put(bo);
 	return ret;
 }
 
@@ -1850,6 +1864,7 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_oa_config_bo *oa_bo;
+	struct i915_gem_ww_ctx ww;
 	size_t config_length = 0;
 	u32 *cs;
 	int err;
@@ -1870,10 +1885,16 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 		goto err_free;
 	}
 
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (err)
+		goto out_ww;
+
 	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
-		goto err_oa_bo;
+		goto out_ww;
 	}
 
 	cs = write_cs_mi_lri(cs,
@@ -1901,19 +1922,28 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream,
 				       NULL);
 	if (IS_ERR(oa_bo->vma)) {
 		err = PTR_ERR(oa_bo->vma);
-		goto err_oa_bo;
+		goto out_ww;
 	}
 
 	oa_bo->oa_config = i915_oa_config_get(oa_config);
 	llist_add(&oa_bo->node, &stream->oa_config_bos);
 
-	return oa_bo;
+out_ww:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 
-err_oa_bo:
-	i915_gem_object_put(obj);
+	if (err)
+		i915_gem_object_put(obj);
 err_free:
-	kfree(oa_bo);
-	return ERR_PTR(err);
+	if (err) {
+		kfree(oa_bo);
+		return ERR_PTR(err);
+	}
+	return oa_bo;
 }
 
 static struct i915_vma *
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 36/64] drm/i915: Lock ww in ucode objects correctly
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

In the ucode functions, the calls are done before userspace runs,
when debugging via debugfs, or when creating semi-permanent mappings;
we can safely use the unlocked versions that do the ww dance for us.

Because there is no pin_pages_unlocked yet, add it as a convenience
function.

This removes possible lockdep splats about the missing resv lock for
ucode.
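
With the helper, a caller that holds no locks can simply do (a sketch;
"obj" stands in for any firmware object):

	/* takes the reservation lock, pins the pages, drops the lock */
	err = i915_gem_object_pin_pages_unlocked(obj);
	if (err)
		return err;

	/* ... use the pages; the pin keeps them resident ... */

	i915_gem_object_unpin_pages(obj);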

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h |  2 ++
 drivers/gpu/drm/i915/gem/i915_gem_pages.c  | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc.c     |  2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_log.c |  4 ++--
 drivers/gpu/drm/i915/gt/uc/intel_huc.c     |  2 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c   |  2 +-
 6 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 105a49102827..3fe41180cc81 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -342,6 +342,8 @@ i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 	return __i915_gem_object_get_pages(obj);
 }
 
+int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj);
+
 static inline bool
 i915_gem_object_has_pages(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index b86bef4fb602..8a6c0dd1bfdd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -136,6 +136,26 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	return err;
 }
 
+int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
+{
+	struct i915_gem_ww_ctx ww;
+	int err;
+
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj);
+
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	return err;
+}
+
 /* Immediately discard the backing storage */
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 2a343a977987..a65661eb5d5d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -694,7 +694,7 @@ int intel_guc_allocate_and_map_vma(struct intel_guc *guc, u32 size,
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
-	vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		i915_vma_unpin_and_release(&vma, 0);
 		return PTR_ERR(vaddr);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
index c92f2c056db4..c36d5eb5bbb9 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
@@ -335,7 +335,7 @@ static int guc_log_map(struct intel_guc_log *log)
 	 * buffer pages, so that we can directly get the data
 	 * (up-to-date) from memory.
 	 */
-	vaddr = i915_gem_object_pin_map(log->vma->obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(log->vma->obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -744,7 +744,7 @@ int intel_guc_log_dump(struct intel_guc_log *log, struct drm_printer *p,
 	if (!obj)
 		return 0;
 
-	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		DRM_DEBUG("Failed to pin object\n");
 		drm_puts(p, "(log data unaccessible)\n");
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
index 65eeb44b397d..2126dd81ac38 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
@@ -82,7 +82,7 @@ static int intel_huc_rsa_data_create(struct intel_huc *huc)
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
-	vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		i915_vma_unpin_and_release(&vma, 0);
 		return PTR_ERR(vaddr);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
index 602f1a0bc587..f40bd455830e 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
@@ -542,7 +542,7 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw)
 	if (!intel_uc_fw_is_available(uc_fw))
 		return -ENOEXEC;
 
-	err = i915_gem_object_pin_pages(uc_fw->obj);
+	err = i915_gem_object_pin_pages_unlocked(uc_fw->obj);
 	if (err) {
 		DRM_DEBUG_DRIVER("%s fw pin-pages err=%d\n",
 				 intel_uc_fw_type_repr(uc_fw->type), err);
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 37/64] drm/i915: Add ww locking to dma-buf ops.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

vmap was pinning its pages without holding the ww lock; switch it to
the unlocked pin_map variant to correctly lock the mapping.

Also add ww locking to begin/end cpu access.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 60 ++++++++++++----------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 36e3c2765f4c..c4b01e819786 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -82,7 +82,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 	void *vaddr;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -123,42 +123,48 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 	bool write = (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE);
+	struct i915_gem_ww_ctx ww;
 	int err;
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
-
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		goto out;
-
-	err = i915_gem_object_set_to_cpu_domain(obj, write);
-	i915_gem_object_unlock(obj);
-
-out:
-	i915_gem_object_unpin_pages(obj);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj);
+	if (!err) {
+		err = i915_gem_object_set_to_cpu_domain(obj, write);
+		i915_gem_object_unpin_pages(obj);
+	}
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 	return err;
 }
 
 static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction direction)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	struct i915_gem_ww_ctx ww;
 	int err;
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		return err;
-
-	err = i915_gem_object_lock_interruptible(obj, NULL);
-	if (err)
-		goto out;
-
-	err = i915_gem_object_set_to_gtt_domain(obj, false);
-	i915_gem_object_unlock(obj);
-
-out:
-	i915_gem_object_unpin_pages(obj);
+	i915_gem_ww_ctx_init(&ww, true);
+retry:
+	err = i915_gem_object_lock(obj, &ww);
+	if (!err)
+		err = i915_gem_object_pin_pages(obj);
+	if (!err) {
+		err = i915_gem_object_set_to_gtt_domain(obj, false);
+		i915_gem_object_unpin_pages(obj);
+	}
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
 	return err;
 }
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 38/64] drm/i915: Add missing ww lock in intel_dsb_prepare.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Because of the long lifetime of the mapping, we cannot wrap this in a
simple, limited ww transaction. Just use the unlocked version of
pin_map, because the mapping will likely be released much later, from
a different thread.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_dsb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dsb.c b/drivers/gpu/drm/i915/display/intel_dsb.c
index 566fa72427b3..857126822a88 100644
--- a/drivers/gpu/drm/i915/display/intel_dsb.c
+++ b/drivers/gpu/drm/i915/display/intel_dsb.c
@@ -293,7 +293,7 @@ void intel_dsb_prepare(struct intel_crtc_state *crtc_state)
 		goto out;
 	}
 
-	buf = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
+	buf = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WC);
 	if (IS_ERR(buf)) {
 		drm_err(&i915->drm, "Command buffer creation failed\n");
 		i915_vma_unpin_and_release(&vma, I915_VMA_RELEASE_MAP);
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 39/64] drm/i915: Fix ww locking in shmem_create_from_object
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Quick fix: just use the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 5982b62f913d..5d9bac7f39c4 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -39,7 +39,7 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 		return file;
 	}
 
-	ptr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(ptr))
 		return ERR_CAST(ptr);
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 40/64] drm/i915: Use a single page table lock for each gtt.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We may create page table objects on the fly, but we may need to
wait with the ww lock held. Instead of waiting on the lock of an
object that may already have been freed, ensure that all page table
objects of a vm share the same reservation lock, which keeps the
-EDEADLK handling working. This also ensures that i915_vma_pin_ww()
can lock the page tables when required.
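
Since every page table object of a vm now shares vm->resv, locking the
vm once covers all of them; roughly (a sketch using the helpers added
here, with the stash allocation omitted):

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, false);
retry:
	/* one lock protects every page table object of this vm */
	err = i915_vm_lock_objects(vm, &ww);
	if (!err)
		err = i915_vm_pin_pt_stash(vm, &stash);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);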

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_ggtt.c  |  8 +++++-
 drivers/gpu/drm/i915/gt/intel_gtt.c   | 38 ++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/gt/intel_gtt.h   |  5 ++++
 drivers/gpu/drm/i915/gt/intel_ppgtt.c |  3 ++-
 drivers/gpu/drm/i915/i915_vma.c       |  5 ++++
 5 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
index 28cff6fb3059..06a41c1dd0bf 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
@@ -624,7 +624,9 @@ static int init_aliasing_ppgtt(struct i915_ggtt *ggtt)
 	if (err)
 		goto err_ppgtt;
 
+	i915_gem_object_lock(ppgtt->vm.scratch[0], NULL);
 	err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash);
+	i915_gem_object_unlock(ppgtt->vm.scratch[0]);
 	if (err)
 		goto err_stash;
 
@@ -711,6 +713,7 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
 
 	mutex_unlock(&ggtt->vm.mutex);
 	i915_address_space_fini(&ggtt->vm);
+	dma_resv_fini(&ggtt->vm.resv);
 
 	arch_phys_wc_del(ggtt->mtrr);
 
@@ -1092,6 +1095,7 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
 	ggtt->vm.gt = gt;
 	ggtt->vm.i915 = i915;
 	ggtt->vm.dma = &i915->drm.pdev->dev;
+	dma_resv_init(&ggtt->vm.resv);
 
 	if (INTEL_GEN(i915) <= 5)
 		ret = i915_gmch_probe(ggtt);
@@ -1099,8 +1103,10 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
 		ret = gen6_gmch_probe(ggtt);
 	else
 		ret = gen8_gmch_probe(ggtt);
-	if (ret)
+	if (ret) {
+		dma_resv_fini(&ggtt->vm.resv);
 		return ret;
+	}
 
 	if ((ggtt->vm.total - 1) >> 32) {
 		drm_err(&i915->drm,
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 444d9bacfafd..941f8af016d6 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -13,16 +13,36 @@
 
 struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz)
 {
+	struct drm_i915_gem_object *obj;
+
 	if (I915_SELFTEST_ONLY(should_fail(&vm->fault_attr, 1)))
 		i915_gem_shrink_all(vm->i915);
 
-	return i915_gem_object_create_internal(vm->i915, sz);
+	obj = i915_gem_object_create_internal(vm->i915, sz);
+	/* ensure all dma objects have the same reservation class */
+	if (!IS_ERR(obj))
+		obj->base.resv = &vm->resv;
+	return obj;
 }
 
 int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
 {
 	int err;
 
+	i915_gem_object_lock(obj, NULL);
+	err = i915_gem_object_pin_pages(obj);
+	i915_gem_object_unlock(obj);
+	if (err)
+		return err;
+
+	i915_gem_object_make_unshrinkable(obj);
+	return 0;
+}
+
+int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
+{
+	int err;
+
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
 		return err;
@@ -56,6 +76,20 @@ void __i915_vm_close(struct i915_address_space *vm)
 	mutex_unlock(&vm->mutex);
 }
 
+/* lock the vm into the current ww, if we lock one, we lock all */
+int i915_vm_lock_objects(struct i915_address_space *vm,
+			 struct i915_gem_ww_ctx *ww)
+{
+	if (vm->scratch[0]->base.resv == &vm->resv) {
+		return i915_gem_object_lock(vm->scratch[0], ww);
+	} else {
+		struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+
+		/* We borrowed the scratch page from ggtt, take the top level object */
+		return i915_gem_object_lock(ppgtt->pd->pt.base, ww);
+	}
+}
+
 void i915_address_space_fini(struct i915_address_space *vm)
 {
 	drm_mm_takedown(&vm->mm);
@@ -69,6 +103,7 @@ static void __i915_vm_release(struct work_struct *work)
 
 	vm->cleanup(vm);
 	i915_address_space_fini(vm);
+	dma_resv_fini(&vm->resv);
 
 	kfree(vm);
 }
@@ -98,6 +133,7 @@ void i915_address_space_init(struct i915_address_space *vm, int subclass)
 	mutex_init(&vm->mutex);
 	lockdep_set_subclass(&vm->mutex, subclass);
 	i915_gem_shrinker_taints_mutex(vm->i915, &vm->mutex);
+	dma_resv_init(&vm->resv);
 
 	GEM_BUG_ON(!vm->total);
 	drm_mm_init(&vm->mm, 0, vm->total);
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index af90090c3d18..8f7c49efa190 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -238,6 +238,7 @@ struct i915_address_space {
 	atomic_t open;
 
 	struct mutex mutex; /* protects vma and our lists */
+	struct dma_resv resv; /* reservation lock for all pd objects, and buffer pool */
 #define VM_CLASS_GGTT 0
 #define VM_CLASS_PPGTT 1
 
@@ -346,6 +347,9 @@ struct i915_ppgtt {
 
 #define i915_is_ggtt(vm) ((vm)->is_ggtt)
 
+int __must_check
+i915_vm_lock_objects(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww);
+
 static inline bool
 i915_vm_is_4lvl(const struct i915_address_space *vm)
 {
@@ -522,6 +526,7 @@ struct i915_page_directory *alloc_pd(struct i915_address_space *vm);
 struct i915_page_directory *__alloc_pd(int npde);
 
 int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj);
+int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj);
 
 void free_px(struct i915_address_space *vm,
 	     struct i915_page_table *pt, int lvl);
diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
index 46d9aceda64c..f3ac47702aee 100644
--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
@@ -262,7 +262,7 @@ int i915_vm_pin_pt_stash(struct i915_address_space *vm,
 
 	for (n = 0; n < ARRAY_SIZE(stash->pt); n++) {
 		for (pt = stash->pt[n]; pt; pt = pt->stash) {
-			err = pin_pt_dma(vm, pt->base);
+			err = pin_pt_dma_locked(vm, pt->base);
 			if (err)
 				return err;
 		}
@@ -304,6 +304,7 @@ void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt)
 	ppgtt->vm.dma = &i915->drm.pdev->dev;
 	ppgtt->vm.total = BIT_ULL(INTEL_INFO(i915)->ppgtt_size);
 
+	dma_resv_init(&ppgtt->vm.resv);
 	i915_address_space_init(&ppgtt->vm, VM_CLASS_PPGTT);
 
 	ppgtt->vm.vma_ops.bind_vma    = ppgtt_bind_vma;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 265e3a3079e2..c5b9f30ac0a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -884,6 +884,11 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 		wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm);
 
 	if (flags & vma->vm->bind_async_flags) {
+		/* lock VM */
+		err = i915_vm_lock_objects(vma->vm, ww);
+		if (err)
+			goto err_rpm;
+
 		work = i915_vma_work();
 		if (!work) {
 			err = -ENOMEM;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 41/64] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion: just convert a bunch of calls to their
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../gpu/drm/i915/gem/selftests/huge_pages.c   | 28 ++++++++++++++-----
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 6c2241b7387b..dadd485bc52f 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -589,7 +589,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 			goto out_put;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err)
 			goto out_put;
 
@@ -653,15 +653,19 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 				break;
 		}
 
+		i915_gem_object_lock(obj, NULL);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		i915_gem_object_put(obj);
 	}
 
 	return 0;
 
 out_unpin:
+	i915_gem_object_lock(obj, NULL);
 	i915_gem_object_unpin_pages(obj);
+	i915_gem_object_unlock(obj);
 out_put:
 	i915_gem_object_put(obj);
 
@@ -675,8 +679,10 @@ static void close_object_list(struct list_head *objects,
 
 	list_for_each_entry_safe(obj, on, objects, st_link) {
 		list_del(&obj->st_link);
+		i915_gem_object_lock(obj, NULL);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		i915_gem_object_put(obj);
 	}
 }
@@ -713,7 +719,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
 			break;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			break;
@@ -889,7 +895,7 @@ static int igt_mock_ppgtt_64K(void *arg)
 			if (IS_ERR(obj))
 				return PTR_ERR(obj);
 
-			err = i915_gem_object_pin_pages(obj);
+			err = i915_gem_object_pin_pages_unlocked(obj);
 			if (err)
 				goto out_object_put;
 
@@ -943,8 +949,10 @@ static int igt_mock_ppgtt_64K(void *arg)
 			}
 
 			i915_vma_unpin(vma);
+			i915_gem_object_lock(obj, NULL);
 			i915_gem_object_unpin_pages(obj);
 			__i915_gem_object_put_pages(obj);
+			i915_gem_object_unlock(obj);
 			i915_gem_object_put(obj);
 		}
 	}
@@ -954,7 +962,9 @@ static int igt_mock_ppgtt_64K(void *arg)
 out_vma_unpin:
 	i915_vma_unpin(vma);
 out_object_unpin:
+	i915_gem_object_lock(obj, NULL);
 	i915_gem_object_unpin_pages(obj);
+	i915_gem_object_unlock(obj);
 out_object_put:
 	i915_gem_object_put(obj);
 
@@ -1024,7 +1034,7 @@ static int __cpu_check_vmap(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 	if (err)
 		return err;
 
-	ptr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
 
@@ -1304,7 +1314,7 @@ static int igt_ppgtt_smoke_huge(void *arg)
 			return err;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			if (err == -ENXIO || err == -E2BIG) {
 				i915_gem_object_put(obj);
@@ -1327,8 +1337,10 @@ static int igt_ppgtt_smoke_huge(void *arg)
 			       __func__, size, i);
 		}
 out_unpin:
+		i915_gem_object_lock(obj, NULL);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 out_put:
 		i915_gem_object_put(obj);
 
@@ -1402,7 +1414,7 @@ static int igt_ppgtt_sanity_check(void *arg)
 				return err;
 			}
 
-			err = i915_gem_object_pin_pages(obj);
+			err = i915_gem_object_pin_pages_unlocked(obj);
 			if (err) {
 				i915_gem_object_put(obj);
 				goto out;
@@ -1416,8 +1428,10 @@ static int igt_ppgtt_sanity_check(void *arg)
 
 			err = igt_write_huge(ctx, obj);
 
+			i915_gem_object_lock(obj, NULL);
 			i915_gem_object_unpin_pages(obj);
 			__i915_gem_object_put_pages(obj);
+			i915_gem_object_unlock(obj);
 			i915_gem_object_put(obj);
 
 			if (err) {
@@ -1462,7 +1476,7 @@ static int igt_tmpfs_fallback(void *arg)
 		goto out_restore;
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto out_put;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 42/64] drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion: just convert a bunch of calls to their
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
index 6a674a7994df..d36873885cc1 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
@@ -45,7 +45,7 @@ static int __igt_client_fill(struct intel_engine_cs *engine)
 			goto err_flush;
 		}
 
-		vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put;
@@ -157,7 +157,7 @@ static int prepare_blit(const struct tiled_blits *t,
 	u32 src_pitch, dst_pitch;
 	u32 cmd, *cs;
 
-	cs = i915_gem_object_pin_map(batch, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch, I915_MAP_WC);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
@@ -377,7 +377,7 @@ static int verify_buffer(const struct tiled_blits *t,
 	y = i915_prandom_u32_max_state(t->height, prng);
 	p = y * t->width + x;
 
-	vaddr = i915_gem_object_pin_map(buf->vma->obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(buf->vma->obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -564,7 +564,7 @@ static int tiled_blits_prepare(struct tiled_blits *t,
 	int err;
 	int i;
 
-	map = i915_gem_object_pin_map(t->scratch.vma->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(t->scratch.vma->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 43/64] drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion: just convert a bunch of calls to their
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 654912abaeb4..7da9f1a53ab5 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -160,7 +160,7 @@ static int wc_set(struct context *ctx, unsigned long offset, u32 v)
 	if (err)
 		return err;
 
-	map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
@@ -183,7 +183,7 @@ static int wc_get(struct context *ctx, unsigned long offset, u32 *v)
 	if (err)
 		return err;
 
-	map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 44/64] drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Straightforward conversion: just convert a bunch of calls to their
unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index d3f87dc4eda3..5fef592390cb 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -1094,7 +1094,7 @@ __read_slice_count(struct intel_context *ce,
 	if (ret < 0)
 		return ret;
 
-	buf = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	buf = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(buf)) {
 		ret = PTR_ERR(buf);
 		return ret;
@@ -1511,7 +1511,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out;
@@ -1622,7 +1622,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 		if (err)
 			goto out_vm;
 
-		cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 		if (IS_ERR(cmd)) {
 			err = PTR_ERR(cmd);
 			goto out;
@@ -1658,7 +1658,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 		if (err)
 			goto out_vm;
 
-		cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 		if (IS_ERR(cmd)) {
 			err = PTR_ERR(cmd);
 			goto out;
@@ -1715,7 +1715,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto out_vm;
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_vm;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 45/64] drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use pin_pages_unlocked() where we don't have a lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index b6d43880b0c1..dd74bc09ec88 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -194,7 +194,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 
 	dma_buf_put(dmabuf);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("i915_gem_object_pin_pages failed with err=%d\n", err);
 		goto out_obj;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 46/64] drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Also quite simple: a single call needs to use the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
index e1d50a5a1477..4df505e4c53a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
@@ -116,7 +116,7 @@ static int igt_gpu_reloc(void *arg)
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);
 
-	map = i915_gem_object_pin_map(scratch, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(scratch, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		err = PTR_ERR(map);
 		goto err_scratch;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 47/64] drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Ensure we hold the lock around put_pages, and use the unlocked wrappers
for pinning pages and mappings.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 44908c68e331..5cf6df49c333 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -322,7 +322,7 @@ static int igt_partial_tiling(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
@@ -459,7 +459,7 @@ static int igt_smoke_tiling(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
@@ -798,7 +798,7 @@ static int wc_set(struct drm_i915_gem_object *obj)
 {
 	void *vaddr;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -814,7 +814,7 @@ static int wc_check(struct drm_i915_gem_object *obj)
 	void *vaddr;
 	int err = 0;
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
 
@@ -1316,7 +1316,9 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
 	}
 
 	if (type != I915_MMAP_TYPE_GTT) {
+		i915_gem_object_lock(obj, NULL);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		if (i915_gem_object_has_pages(obj)) {
 			pr_err("Failed to put-pages object!\n");
 			err = -EINVAL;
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 48/64] drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Convert a single pin_pages call to use the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
index bf853c40ec65..740ee8086a27 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
@@ -47,7 +47,7 @@ static int igt_gem_huge(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 49/64] drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
@ 2021-01-05 15:35 ` Maarten Lankhorst
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use some unlocked versions where we're not holding the ww lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
index 23b6e11bbc3e..ee9496f3d11d 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
@@ -262,7 +262,7 @@ static int igt_fill_blt_thread(void *arg)
 			goto err_flush;
 		}
 
-		vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put;
@@ -380,7 +380,7 @@ static int igt_copy_blt_thread(void *arg)
 			goto err_flush;
 		}
 
-		vaddr = i915_gem_object_pin_map(src, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(src, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put_src;
@@ -400,7 +400,7 @@ static int igt_copy_blt_thread(void *arg)
 			goto err_put_src;
 		}
 
-		vaddr = i915_gem_object_pin_map(dst, I915_MAP_WB);
+		vaddr = i915_gem_object_pin_map_unlocked(dst, I915_MAP_WB);
 		if (IS_ERR(vaddr)) {
 			err = PTR_ERR(vaddr);
 			goto err_put_dst;
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 50/64] drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (48 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 49/64] drm/i915/selftests: Prepare object blit " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 51/64] drm/i915/selftests: Prepare context selftest " Maarten Lankhorst
                   ` (20 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

igt_emit_store_dw needs to use the unlocked version, as it is not
called with the object lock held. This fixes igt_gpu_fill_dw(),
which is used by several other selftests.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
index d6783061bc72..0b092c62bb34 100644
--- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
+++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
@@ -55,7 +55,7 @@ igt_emit_store_dw(struct i915_vma *vma,
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 51/64] drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (49 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 50/64] drm/i915/selftests: Prepare igt_gem_utils " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 52/64] drm/i915/selftests: Prepare hangcheck " Maarten Lankhorst
                   ` (19 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Only a single call needs to be converted to the unlocked version.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_context.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
index db738d400168..d1f19fd04769 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -88,8 +88,8 @@ static int __live_context_size(struct intel_engine_cs *engine)
 	if (err)
 		goto err;
 
-	vaddr = i915_gem_object_pin_map(ce->state->obj,
-					i915_coherent_map_type(engine->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(ce->state->obj,
+						 i915_coherent_map_type(engine->i915));
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		intel_context_unpin(ce);
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 52/64] drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (50 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 51/64] drm/i915/selftests: Prepare context selftest " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 53/64] drm/i915/selftests: Prepare execlists and lrc selftests " Maarten Lankhorst
                   ` (18 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Convert a few calls to use the unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index c28d1fcad673..403edf26a338 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -80,15 +80,15 @@ static int hang_init(struct hang *h, struct intel_gt *gt)
 	}
 
 	i915_gem_object_set_cache_coherency(h->hws, I915_CACHE_LLC);
-	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(h->hws, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
 	}
 	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
 
-	vaddr = i915_gem_object_pin_map(h->obj,
-					i915_coherent_map_type(gt->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(h->obj,
+						 i915_coherent_map_type(gt->i915));
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_unpin_hws;
@@ -149,7 +149,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
 		return ERR_CAST(obj);
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, i915_coherent_map_type(gt->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(obj, i915_coherent_map_type(gt->i915));
 	if (IS_ERR(vaddr)) {
 		i915_gem_object_put(obj);
 		i915_vm_put(vm);
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 53/64] drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (51 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 52/64] drm/i915/selftests: Prepare hangcheck " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 54/64] drm/i915/selftests: Prepare mocs tests " Maarten Lankhorst
                   ` (17 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Convert the normal pin_map calls to their unlocked versions where needed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_execlists.c | 18 +++++++++---------
 drivers/gpu/drm/i915/gt/selftest_lrc.c       | 16 ++++++++--------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index c9f3fef11f8f..e815cb2ca429 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -986,7 +986,7 @@ static int live_timeslice_preempt(void *arg)
 		goto err_obj;
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
@@ -1294,7 +1294,7 @@ static int live_timeslice_queue(void *arg)
 		goto err_obj;
 	}
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
@@ -1541,7 +1541,7 @@ static int live_busywait_preempt(void *arg)
 		goto err_ctx_lo;
 	}
 
-	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		err = PTR_ERR(map);
 		goto err_obj;
@@ -2732,7 +2732,7 @@ static int create_gang(struct intel_engine_cs *engine,
 	if (err)
 		goto err_obj;
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_obj;
@@ -3018,7 +3018,7 @@ static int live_preempt_gang(void *arg)
 		 * it will terminate the next lowest spinner until there
 		 * are no more spinners and the gang is complete.
 		 */
-		cs = i915_gem_object_pin_map(rq->batch->obj, I915_MAP_WC);
+		cs = i915_gem_object_pin_map_unlocked(rq->batch->obj, I915_MAP_WC);
 		if (!IS_ERR(cs)) {
 			*cs = 0;
 			i915_gem_object_unpin_map(rq->batch->obj);
@@ -3083,7 +3083,7 @@ create_gpr_user(struct intel_engine_cs *engine,
 		return ERR_PTR(err);
 	}
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(vma);
 		return ERR_CAST(cs);
@@ -3293,7 +3293,7 @@ static int live_preempt_user(void *arg)
 	if (IS_ERR(global))
 		return PTR_ERR(global);
 
-	result = i915_gem_object_pin_map(global->obj, I915_MAP_WC);
+	result = i915_gem_object_pin_map_unlocked(global->obj, I915_MAP_WC);
 	if (IS_ERR(result)) {
 		i915_vma_unpin_and_release(&global, 0);
 		return PTR_ERR(result);
@@ -3686,7 +3686,7 @@ static int live_preempt_smoke(void *arg)
 		goto err_free;
 	}
 
-	cs = i915_gem_object_pin_map(smoke.batch, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(smoke.batch, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_batch;
@@ -4291,7 +4291,7 @@ static int preserved_virtual_engine(struct intel_gt *gt,
 		goto out_end;
 	}
 
-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto out_end;
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 1474300a1a9e..196eebef52ac 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -625,7 +625,7 @@ static int __live_lrc_gpr(struct intel_engine_cs *engine,
 		goto err_rq;
 	}
 
-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_rq;
@@ -919,7 +919,7 @@ store_context(struct intel_context *ce, struct i915_vma *scratch)
 	if (IS_ERR(batch))
 		return batch;
 
-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(batch);
 		return ERR_CAST(cs);
@@ -1083,7 +1083,7 @@ static struct i915_vma *load_context(struct intel_context *ce, u32 poison)
 	if (IS_ERR(batch))
 		return batch;
 
-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(batch);
 		return ERR_CAST(cs);
@@ -1197,29 +1197,29 @@ static int compare_isolation(struct intel_engine_cs *engine,
 	u32 *defaults;
 	int err = 0;
 
-	A[0] = i915_gem_object_pin_map(ref[0]->obj, I915_MAP_WC);
+	A[0] = i915_gem_object_pin_map_unlocked(ref[0]->obj, I915_MAP_WC);
 	if (IS_ERR(A[0]))
 		return PTR_ERR(A[0]);
 
-	A[1] = i915_gem_object_pin_map(ref[1]->obj, I915_MAP_WC);
+	A[1] = i915_gem_object_pin_map_unlocked(ref[1]->obj, I915_MAP_WC);
 	if (IS_ERR(A[1])) {
 		err = PTR_ERR(A[1]);
 		goto err_A0;
 	}
 
-	B[0] = i915_gem_object_pin_map(result[0]->obj, I915_MAP_WC);
+	B[0] = i915_gem_object_pin_map_unlocked(result[0]->obj, I915_MAP_WC);
 	if (IS_ERR(B[0])) {
 		err = PTR_ERR(B[0]);
 		goto err_A1;
 	}
 
-	B[1] = i915_gem_object_pin_map(result[1]->obj, I915_MAP_WC);
+	B[1] = i915_gem_object_pin_map_unlocked(result[1]->obj, I915_MAP_WC);
 	if (IS_ERR(B[1])) {
 		err = PTR_ERR(B[1]);
 		goto err_B0;
 	}
 
-	lrc = i915_gem_object_pin_map(ce->state->obj,
-				      i915_coherent_map_type(engine->i915));
+	lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
+					       i915_coherent_map_type(engine->i915));
 	if (IS_ERR(lrc)) {
 		err = PTR_ERR(lrc);
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 54/64] drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (52 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 53/64] drm/i915/selftests: Prepare execlists and lrc selftests " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 55/64] drm/i915/selftests: Prepare ring submission " Maarten Lankhorst
                   ` (16 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use pin_map_unlocked where we're not holding the ww lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_mocs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index 2dd063bc656d..fc8004e8a763 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -80,7 +80,7 @@ static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
 	if (IS_ERR(arg->scratch))
 		return PTR_ERR(arg->scratch);
 
-	arg->vaddr = i915_gem_object_pin_map(arg->scratch->obj, I915_MAP_WB);
+	arg->vaddr = i915_gem_object_pin_map_unlocked(arg->scratch->obj, I915_MAP_WB);
 	if (IS_ERR(arg->vaddr)) {
 		err = PTR_ERR(arg->vaddr);
 		goto err_scratch;
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 55/64] drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (53 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 54/64] drm/i915/selftests: Prepare mocs tests " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 56/64] drm/i915/selftests: Prepare timeline tests " Maarten Lankhorst
                   ` (15 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use unlocked versions when the ww lock is not held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_ring_submission.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_ring_submission.c b/drivers/gpu/drm/i915/gt/selftest_ring_submission.c
index 3350e7c995bc..99609271c3a7 100644
--- a/drivers/gpu/drm/i915/gt/selftest_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/selftest_ring_submission.c
@@ -35,7 +35,7 @@ static struct i915_vma *create_wally(struct intel_engine_cs *engine)
 		return ERR_PTR(err);
 	}
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_gem_object_put(obj);
 		return ERR_CAST(cs);
@@ -212,7 +212,7 @@ static int __live_ctx_switch_wa(struct intel_engine_cs *engine)
 	if (IS_ERR(bb))
 		return PTR_ERR(bb);
 
-	result = i915_gem_object_pin_map(bb->obj, I915_MAP_WC);
+	result = i915_gem_object_pin_map_unlocked(bb->obj, I915_MAP_WC);
 	if (IS_ERR(result)) {
 		intel_context_put(bb->private);
 		i915_vma_unpin_and_release(&bb, 0);
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 56/64] drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (54 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 55/64] drm/i915/selftests: Prepare ring submission " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 57/64] drm/i915/selftests: Prepare i915_request " Maarten Lankhorst
                   ` (14 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We can no longer call intel_timeline_pin with a null argument,
so add a ww loop that locks the backing object.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_timeline.c | 30 +++++++++++++++++----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index 2e2bea7ac7a5..bed05ca5d670 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -38,6 +38,26 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
 	return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;
 }
 
+static int selftest_tl_pin(struct intel_timeline *tl)
+{
+	struct i915_gem_ww_ctx ww;
+	int err;
+
+	i915_gem_ww_ctx_init(&ww, false);
+retry:
+	err = i915_gem_object_lock(tl->hwsp_ggtt->obj, &ww);
+	if (!err)
+		err = intel_timeline_pin(tl, &ww);
+
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+	return err;
+}
+
 #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
 
 struct mock_hwsp_freelist {
@@ -79,7 +99,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
 		if (IS_ERR(tl))
 			return PTR_ERR(tl);
 
-		err = intel_timeline_pin(tl, NULL);
+		err = selftest_tl_pin(tl);
 		if (err) {
 			intel_timeline_put(tl);
 			return err;
@@ -465,7 +485,7 @@ checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32
 	struct i915_request *rq;
 	int err;
 
-	err = intel_timeline_pin(tl, NULL);
+	err = selftest_tl_pin(tl);
 	if (err) {
 		rq = ERR_PTR(err);
 		goto out;
@@ -665,7 +685,7 @@ static int live_hwsp_wrap(void *arg)
 	if (!tl->has_initial_breadcrumb)
 		goto out_free;
 
-	err = intel_timeline_pin(tl, NULL);
+	err = selftest_tl_pin(tl);
 	if (err)
 		goto out_free;
 
@@ -812,13 +832,13 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	w->map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	w->map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(w->map)) {
 		i915_gem_object_put(obj);
 		return PTR_ERR(w->map);
 	}
 
-	vma = i915_gem_object_ggtt_pin_ww(obj, NULL, NULL, 0, 0, 0);
+	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
 	if (IS_ERR(vma)) {
 		i915_gem_object_put(obj);
 		return PTR_ERR(vma);
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 57/64] drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (55 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 56/64] drm/i915/selftests: Prepare timeline tests " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 58/64] drm/i915/selftests: Prepare memory region " Maarten Lankhorst
                   ` (13 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

A straightforward conversion to the unlocked versions.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_request.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index d2a678a2497e..9a9e92a775c8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -620,7 +620,7 @@ static struct i915_vma *empty_batch(struct drm_i915_private *i915)
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;
@@ -782,7 +782,7 @@ static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
 	if (err)
 		goto err;
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;
@@ -817,7 +817,7 @@ static int recursive_batch_resolve(struct i915_vma *batch)
 {
 	u32 *cmd;
 
-	cmd = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cmd))
 		return PTR_ERR(cmd);
 
@@ -1070,8 +1070,8 @@ static int live_sequential_engines(void *arg)
 		if (!request[idx])
 			break;
 
-		cmd = i915_gem_object_pin_map(request[idx]->batch->obj,
-					      I915_MAP_WC);
+		cmd = i915_gem_object_pin_map_unlocked(request[idx]->batch->obj,
+						       I915_MAP_WC);
 		if (!IS_ERR(cmd)) {
 			*cmd = MI_BATCH_BUFFER_END;
 
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 58/64] drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (56 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 57/64] drm/i915/selftests: Prepare i915_request " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 59/64] drm/i915/selftests: Prepare cs engine " Maarten Lankhorst
                   ` (12 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

Use the unlocked variants of pin_map and pin_pages, and take the
object lock around unpinning/putting pages.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 .../drm/i915/selftests/intel_memory_region.c   | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 75839db63bea..3b88504d7061 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -31,10 +31,12 @@ static void close_objects(struct intel_memory_region *mem,
 	struct drm_i915_gem_object *obj, *on;
 
 	list_for_each_entry_safe(obj, on, objects, st_link) {
+		i915_gem_object_lock(obj, NULL);
 		if (i915_gem_object_has_pinned_pages(obj))
 			i915_gem_object_unpin_pages(obj);
 		/* No polluting the memory region between tests */
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		list_del(&obj->st_link);
 		i915_gem_object_put(obj);
 	}
@@ -69,7 +71,7 @@ static int igt_mock_fill(void *arg)
 			break;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			break;
@@ -109,7 +111,7 @@ igt_object_create(struct intel_memory_region *mem,
 	if (IS_ERR(obj))
 		return obj;
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto put;
 
@@ -123,8 +125,10 @@ igt_object_create(struct intel_memory_region *mem,
 
 static void igt_object_release(struct drm_i915_gem_object *obj)
 {
+	i915_gem_object_lock(obj, NULL);
 	i915_gem_object_unpin_pages(obj);
 	__i915_gem_object_put_pages(obj);
+	i915_gem_object_unlock(obj);
 	list_del(&obj->st_link);
 	i915_gem_object_put(obj);
 }
@@ -433,7 +437,7 @@ static int igt_cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 	if (err)
 		return err;
 
-	ptr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
 
@@ -538,7 +542,7 @@ static int igt_lmem_create(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto out_put;
 
@@ -577,7 +581,7 @@ static int igt_lmem_write_gpu(void *arg)
 		goto out_file;
 	}
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto out_put;
 
@@ -649,7 +653,7 @@ static int igt_lmem_write_cpu(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto out_put;
@@ -753,7 +757,7 @@ create_region_for_mapping(struct intel_memory_region *mr, u64 size, u32 type,
 		return obj;
 	}
 
-	addr = i915_gem_object_pin_map(obj, type);
+	addr = i915_gem_object_pin_map_unlocked(obj, type);
 	if (IS_ERR(addr)) {
 		i915_gem_object_put(obj);
 		if (PTR_ERR(addr) == -ENXIO)
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 59/64] drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (57 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 58/64] drm/i915/selftests: Prepare memory region " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 60/64] drm/i915/selftests: Prepare gtt " Maarten Lankhorst
                   ` (11 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

As in the other tests, use pin_map_unlocked.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
index fd9414b0d1bd..625ebb7ccc4e 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
@@ -76,7 +76,7 @@ static struct i915_vma *create_empty_batch(struct intel_context *ce)
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_put;
@@ -212,7 +212,7 @@ static struct i915_vma *create_nop_batch(struct intel_context *ce)
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_put;
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 60/64] drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (58 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 59/64] drm/i915/selftests: Prepare cs engine " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 61/64] drm/i915: Finally remove obj->mm.lock Maarten Lankhorst
                   ` (10 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

We need to lock the global gtt dma_resv; use i915_vm_lock_objects
to handle this correctly, and add ww handling where required.

Take the object lock around unpin/put pages, and use the unlocked
versions of pin_pages and pin_map where required.
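
Stripped of the test-specific work, the ww handling added below
follows the same retry idiom as elsewhere in the series; as a
sketch, where do_alloc_work() is a hypothetical placeholder for the
pt_stash allocation:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, false);
retry:
	err = i915_vm_lock_objects(vm, &ww);
	if (!err)
		err = do_alloc_work(vm);	/* hypothetical placeholder */

	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);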

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 92 ++++++++++++++-----
 1 file changed, 67 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index d4e5b2df684d..83d4bbff1939 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -130,7 +130,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size)
 	obj->cache_level = I915_CACHE_NONE;
 
 	/* Preallocate the "backing storage" */
-	if (i915_gem_object_pin_pages(obj))
+	if (i915_gem_object_pin_pages_unlocked(obj))
 		goto err_obj;
 
 	i915_gem_object_unpin_pages(obj);
@@ -146,6 +146,7 @@ static int igt_ppgtt_alloc(void *arg)
 {
 	struct drm_i915_private *dev_priv = arg;
 	struct i915_ppgtt *ppgtt;
+	struct i915_gem_ww_ctx ww;
 	u64 size, last, limit;
 	int err = 0;
 
@@ -171,6 +172,12 @@ static int igt_ppgtt_alloc(void *arg)
 	limit = totalram_pages() << PAGE_SHIFT;
 	limit = min(ppgtt->vm.total, limit);
 
+	i915_gem_ww_ctx_init(&ww, false);
+retry:
+	err = i915_vm_lock_objects(&ppgtt->vm, &ww);
+	if (err)
+		goto err_ppgtt_cleanup;
+
 	/* Check we can allocate the entire range */
 	for (size = 4096; size <= limit; size <<= 2) {
 		struct i915_vm_pt_stash stash = {};
@@ -215,6 +222,13 @@ static int igt_ppgtt_alloc(void *arg)
 	}
 
 err_ppgtt_cleanup:
+	if (err == -EDEADLK) {
+		err = i915_gem_ww_ctx_backoff(&ww);
+		if (!err)
+			goto retry;
+	}
+	i915_gem_ww_ctx_fini(&ww);
+
 	i915_vm_put(&ppgtt->vm);
 	return err;
 }
@@ -276,7 +290,7 @@ static int lowlevel_hole(struct i915_address_space *vm,
 
 		GEM_BUG_ON(obj->base.size != BIT_ULL(size));
 
-		if (i915_gem_object_pin_pages(obj)) {
+		if (i915_gem_object_pin_pages_unlocked(obj)) {
 			i915_gem_object_put(obj);
 			kfree(order);
 			break;
@@ -297,20 +311,36 @@ static int lowlevel_hole(struct i915_address_space *vm,
 
 			if (vm->allocate_va_range) {
 				struct i915_vm_pt_stash stash = {};
+				struct i915_gem_ww_ctx ww;
+				int err;
+
+				i915_gem_ww_ctx_init(&ww, false);
+retry:
+				err = i915_vm_lock_objects(vm, &ww);
+				if (err)
+					goto alloc_vm_end;
 
+				err = -ENOMEM;
 				if (i915_vm_alloc_pt_stash(vm, &stash,
 							   BIT_ULL(size)))
-					break;
-
-				if (i915_vm_pin_pt_stash(vm, &stash)) {
-					i915_vm_free_pt_stash(vm, &stash);
-					break;
-				}
+					goto alloc_vm_end;
 
-				vm->allocate_va_range(vm, &stash,
-						      addr, BIT_ULL(size));
+				err = i915_vm_pin_pt_stash(vm, &stash);
+				if (!err)
+					vm->allocate_va_range(vm, &stash,
+							      addr, BIT_ULL(size));
 
 				i915_vm_free_pt_stash(vm, &stash);
+alloc_vm_end:
+				if (err == -EDEADLK) {
+					err = i915_gem_ww_ctx_backoff(&ww);
+					if (!err)
+						goto retry;
+				}
+				i915_gem_ww_ctx_fini(&ww);
+
+				if (err)
+					break;
 			}
 
 			mock_vma->pages = obj->mm.pages;
@@ -1166,7 +1196,7 @@ static int igt_ggtt_page(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err)
 		goto out_free;
 
@@ -1333,7 +1363,7 @@ static int igt_gtt_reserve(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1385,7 +1415,7 @@ static int igt_gtt_reserve(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1549,7 +1579,7 @@ static int igt_gtt_insert(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1658,7 +1688,7 @@ static int igt_gtt_insert(void *arg)
 			goto out;
 		}
 
-		err = i915_gem_object_pin_pages(obj);
+		err = i915_gem_object_pin_pages_unlocked(obj);
 		if (err) {
 			i915_gem_object_put(obj);
 			goto out;
@@ -1829,7 +1859,7 @@ static int igt_cs_tlb(void *arg)
 		goto out_vm;
 	}
 
-	batch = i915_gem_object_pin_map(bbe, I915_MAP_WC);
+	batch = i915_gem_object_pin_map_unlocked(bbe, I915_MAP_WC);
 	if (IS_ERR(batch)) {
 		err = PTR_ERR(batch);
 		goto out_put_bbe;
@@ -1845,7 +1875,7 @@ static int igt_cs_tlb(void *arg)
 	}
 
 	/* Track the execution of each request by writing into different slot */
-	batch = i915_gem_object_pin_map(act, I915_MAP_WC);
+	batch = i915_gem_object_pin_map_unlocked(act, I915_MAP_WC);
 	if (IS_ERR(batch)) {
 		err = PTR_ERR(batch);
 		goto out_put_act;
@@ -1892,7 +1922,7 @@ static int igt_cs_tlb(void *arg)
 		goto out_put_out;
 	GEM_BUG_ON(vma->node.start != vm->total - PAGE_SIZE);
 
-	result = i915_gem_object_pin_map(out, I915_MAP_WB);
+	result = i915_gem_object_pin_map_unlocked(out, I915_MAP_WB);
 	if (IS_ERR(result)) {
 		err = PTR_ERR(result);
 		goto out_put_out;
@@ -1908,6 +1938,7 @@ static int igt_cs_tlb(void *arg)
 		while (!__igt_timeout(end_time, NULL)) {
 			struct i915_vm_pt_stash stash = {};
 			struct i915_request *rq;
+			struct i915_gem_ww_ctx ww;
 			u64 offset;
 
 			offset = igt_random_offset(&prng,
@@ -1926,19 +1957,30 @@ static int igt_cs_tlb(void *arg)
 			if (err)
 				goto end;
 
+			i915_gem_ww_ctx_init(&ww, false);
+retry:
+			err = i915_vm_lock_objects(vm, &ww);
+			if (err)
+				goto end_ww;
+
 			err = i915_vm_alloc_pt_stash(vm, &stash, chunk_size);
 			if (err)
-				goto end;
+				goto end_ww;
 
 			err = i915_vm_pin_pt_stash(vm, &stash);
-			if (err) {
-				i915_vm_free_pt_stash(vm, &stash);
-				goto end;
-			}
-
-			vm->allocate_va_range(vm, &stash, offset, chunk_size);
+			if (!err)
+				vm->allocate_va_range(vm, &stash, offset, chunk_size);
 
 			i915_vm_free_pt_stash(vm, &stash);
+end_ww:
+			if (err == -EDEADLK) {
+				err = i915_gem_ww_ctx_backoff(&ww);
+				if (!err)
+					goto retry;
+			}
+			i915_gem_ww_ctx_fini(&ww);
+			if (err)
+				goto end;
 
 			/* Prime the TLB with the dummy pages */
 			for (i = 0; i < count; i++) {
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 61/64] drm/i915: Finally remove obj->mm.lock.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (59 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 60/64] drm/i915/selftests: Prepare gtt " Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 62/64] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
                   ` (9 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström

With all callers and selftests fixed to use ww locking, we can now
finally remove this lock.
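
With the mutex gone, the dma_resv ww lock is what protects the page
state; a caller that previously relied on obj->mm.lock now takes
roughly this shape (a sketch of the convention, using helpers from
earlier in the series):

	i915_gem_object_lock(obj, NULL);
	err = i915_gem_object_pin_pages(obj);	/* asserts the lock is held */
	if (!err) {
		/* ... use the pages ... */
		i915_gem_object_unpin_pages(obj);
	}
	i915_gem_object_unlock(obj);

Where taking the dma_resv lock is not possible, such as in the
shrinker, i915_gem_object_trylock() is used instead, as seen in the
i915_gem_shrinker.c changes below.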

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  2 -
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  5 +--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 43 ++++---------------
 drivers/gpu/drm/i915/gem/i915_gem_phys.c      | 34 ++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  | 37 +++++++++++-----
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.h  |  4 +-
 drivers/gpu/drm/i915/gem/i915_gem_tiling.c    |  2 -
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  3 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  4 +-
 drivers/gpu/drm/i915/i915_gem.c               |  8 +---
 drivers/gpu/drm/i915/i915_gem_gtt.c           |  2 +-
 13 files changed, 55 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 028a556ab1a5..08d806bbf48e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -62,8 +62,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops,
 			  struct lock_class_key *key, unsigned flags)
 {
-	mutex_init(&obj->mm.lock);
-
 	spin_lock_init(&obj->vma.lock);
 	INIT_LIST_HEAD(&obj->vma.list);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 3fe41180cc81..748b03aeb3c6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -129,7 +129,7 @@ static inline void assert_object_held_shared(struct drm_i915_gem_object *obj)
 	 */
 	if (IS_ENABLED(CONFIG_LOCKDEP) &&
 	    kref_read(&obj->base.refcount) > 0)
-		lockdep_assert_held(&obj->mm.lock);
+		assert_object_held(obj);
 }
 
 static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj,
@@ -334,7 +334,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 static inline int __must_check
 i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	might_lock(&obj->mm.lock);
+	assert_object_held(obj);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
@@ -380,7 +380,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 }
 
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
-int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 5234c1ed62d4..b172e8cc53ab 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -209,7 +209,6 @@ struct drm_i915_gem_object {
 		 * Protects the pages and their use. Do not use directly, but
 		 * instead go through the pin/unpin interfaces.
 		 */
-		struct mutex lock;
 		atomic_t pages_pin_count;
 		atomic_t shrink_pin;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 8a6c0dd1bfdd..27d5261af949 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -67,7 +67,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		struct list_head *list;
 		unsigned long flags;
 
-		lockdep_assert_held(&obj->mm.lock);
+		assert_object_held(obj);
 		spin_lock_irqsave(&i915->mm.obj_lock, flags);
 
 		i915->mm.shrink_count++;
@@ -114,9 +114,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
 	int err;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		return err;
+	assert_object_held(obj);
 
 	assert_object_held_shared(obj);
 
@@ -125,15 +123,13 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 
 		err = ____i915_gem_object_get_pages(obj);
 		if (err)
-			goto unlock;
+			return err;
 
 		smp_mb__before_atomic();
 	}
 	atomic_inc(&obj->mm.pages_pin_count);
 
-unlock:
-	mutex_unlock(&obj->mm.lock);
-	return err;
+	return 0;
 }
 
 int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
@@ -220,7 +216,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	return pages;
 }
 
-int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
+int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
 
@@ -251,21 +247,6 @@ int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
-{
-	int err;
-
-	if (i915_gem_object_has_pinned_pages(obj))
-		return -EBUSY;
-
-	/* May be called by shrinker from within get_pages() (on another bo) */
-	mutex_lock(&obj->mm.lock);
-	err = __i915_gem_object_put_pages_locked(obj);
-	mutex_unlock(&obj->mm.lock);
-
-	return err;
-}
-
 /* The 'mapping' part of i915_gem_object_pin_map() below */
 static void *i915_gem_object_map_page(struct drm_i915_gem_object *obj,
 				      enum i915_map_type type)
@@ -368,9 +349,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	    !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
 		return ERR_PTR(-ENXIO);
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		return ERR_PTR(err);
+	assert_object_held(obj);
 
 	pinned = !(type & I915_MAP_OVERRIDE);
 	type &= ~I915_MAP_OVERRIDE;
@@ -380,10 +359,8 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 			GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
 
 			err = ____i915_gem_object_get_pages(obj);
-			if (err) {
-				ptr = ERR_PTR(err);
-				goto out_unlock;
-			}
+			if (err)
+				return ERR_PTR(err);
 
 			smp_mb__before_atomic();
 		}
@@ -418,13 +395,11 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 		obj->mm.mapping = page_pack_bits(ptr, type);
 	}
 
-out_unlock:
-	mutex_unlock(&obj->mm.lock);
 	return ptr;
 
 err_unpin:
 	atomic_dec(&obj->mm.pages_pin_count);
-	goto out_unlock;
+	return ptr;
 }
 
 void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index f317be5f5e34..435c3b54cf14 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -234,40 +234,22 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		return err;
-
-	if (unlikely(!i915_gem_object_has_struct_page(obj)))
-		goto out;
-
-	if (obj->mm.madv != I915_MADV_WILLNEED) {
-		err = -EFAULT;
-		goto out;
-	}
+	if (obj->mm.madv != I915_MADV_WILLNEED)
+		return -EFAULT;
 
-	if (obj->mm.quirked) {
-		err = -EFAULT;
-		goto out;
-	}
+	if (obj->mm.quirked)
+		return -EFAULT;
 
-	if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) {
-		err = -EBUSY;
-		goto out;
-	}
+	if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj))
+		return -EBUSY;
 
 	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
 		drm_dbg(obj->base.dev,
 			"Attempting to obtain a purgeable object\n");
-		err = -EFAULT;
-		goto out;
+		return -EFAULT;
 	}
 
-	err = i915_gem_object_shmem_to_phys(obj);
-
-out:
-	mutex_unlock(&obj->mm.lock);
-	return err;
+	return i915_gem_object_shmem_to_phys(obj);
 }
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 7a59fd1ea4e5..b4dd7a709800 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -99,7 +99,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
 				goto err_sg;
 			}
 
-			i915_gem_shrink(i915, 2 * page_count, NULL, *s++);
+			i915_gem_shrink(NULL, i915, 2 * page_count, NULL, *s++);
 
 			/*
 			 * We've tried hard to allocate the memory by reaping
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 11b8d4e69fc7..3e248d3bd869 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -94,7 +94,8 @@ static void try_to_writeback(struct drm_i915_gem_object *obj,
  * The number of pages of backing storage actually released.
  */
 unsigned long
-i915_gem_shrink(struct drm_i915_private *i915,
+i915_gem_shrink(struct i915_gem_ww_ctx *ww,
+		struct drm_i915_private *i915,
 		unsigned long target,
 		unsigned long *nr_scanned,
 		unsigned int shrink)
@@ -113,6 +114,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
 	intel_wakeref_t wakeref = 0;
 	unsigned long count = 0;
 	unsigned long scanned = 0;
+	int err;
 
 	trace_i915_gem_shrink(i915, target, shrink);
 
@@ -200,25 +202,40 @@ i915_gem_shrink(struct drm_i915_private *i915,
 
 			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 
-			if (unsafe_drop_pages(obj, shrink) &&
-			    mutex_trylock(&obj->mm.lock)) {
+			err = 0;
+			if (unsafe_drop_pages(obj, shrink)) {
 				/* May arrive from get_pages on another bo */
-				if (!__i915_gem_object_put_pages_locked(obj)) {
+				if (!ww) {
+					if (!i915_gem_object_trylock(obj))
+						goto skip;
+				} else {
+					err = i915_gem_object_lock(obj, ww);
+					if (err)
+						goto skip;
+				}
+
+				if (!__i915_gem_object_put_pages(obj)) {
 					try_to_writeback(obj, shrink);
 					count += obj->base.size >> PAGE_SHIFT;
 				}
-				mutex_unlock(&obj->mm.lock);
+				if (!ww)
+					i915_gem_object_unlock(obj);
 			}
 
 			dma_resv_prune(obj->base.resv);
 
 			scanned += obj->base.size >> PAGE_SHIFT;
+skip:
 			i915_gem_object_put(obj);
 
 			spin_lock_irqsave(&i915->mm.obj_lock, flags);
+			if (err)
+				break;
 		}
 		list_splice_tail(&still_in_list, phase->list);
 		spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+		if (err)
+			return err;
 	}
 
 	if (shrink & I915_SHRINK_BOUND)
@@ -249,7 +266,7 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *i915)
 	unsigned long freed = 0;
 
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-		freed = i915_gem_shrink(i915, -1UL, NULL,
+		freed = i915_gem_shrink(NULL, i915, -1UL, NULL,
 					I915_SHRINK_BOUND |
 					I915_SHRINK_UNBOUND);
 	}
@@ -295,7 +312,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 
 	sc->nr_scanned = 0;
 
-	freed = i915_gem_shrink(i915,
+	freed = i915_gem_shrink(NULL, i915,
 				sc->nr_to_scan,
 				&sc->nr_scanned,
 				I915_SHRINK_BOUND |
@@ -304,7 +321,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		intel_wakeref_t wakeref;
 
 		with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-			freed += i915_gem_shrink(i915,
+			freed += i915_gem_shrink(NULL, i915,
 						 sc->nr_to_scan - sc->nr_scanned,
 						 &sc->nr_scanned,
 						 I915_SHRINK_ACTIVE |
@@ -329,7 +346,7 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 
 	freed_pages = 0;
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-		freed_pages += i915_gem_shrink(i915, -1UL, NULL,
+		freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL,
 					       I915_SHRINK_BOUND |
 					       I915_SHRINK_UNBOUND |
 					       I915_SHRINK_WRITEBACK);
@@ -367,7 +384,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
 	intel_wakeref_t wakeref;
 
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-		freed_pages += i915_gem_shrink(i915, -1UL, NULL,
+		freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL,
 					       I915_SHRINK_BOUND |
 					       I915_SHRINK_UNBOUND |
 					       I915_SHRINK_VMAPS);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
index b397d7785789..8512470f6fd6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
@@ -9,10 +9,12 @@
 #include <linux/bits.h>
 
 struct drm_i915_private;
+struct i915_gem_ww_ctx;
 struct mutex;
 
 /* i915_gem_shrinker.c */
-unsigned long i915_gem_shrink(struct drm_i915_private *i915,
+unsigned long i915_gem_shrink(struct i915_gem_ww_ctx *ww,
+			      struct drm_i915_private *i915,
 			      unsigned long target,
 			      unsigned long *nr_scanned,
 			      unsigned flags);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
index ffcaee74a249..4523a14db86e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
@@ -265,7 +265,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
 	 * pages to prevent them being swapped out and causing corruption
 	 * due to the change in swizzling.
 	 */
-	mutex_lock(&obj->mm.lock);
 	if (i915_gem_object_has_pages(obj) &&
 	    obj->mm.madv == I915_MADV_WILLNEED &&
 	    i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -280,7 +279,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
 			obj->mm.quirked = true;
 		}
 	}
-	mutex_unlock(&obj->mm.lock);
 
 	spin_lock(&obj->vma.lock);
 	for_each_ggtt_vma(vma, obj) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index b1c035c55708..84c3e8dc1491 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -246,7 +246,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 	if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
 		return -EBUSY;
 
-	mutex_lock(&obj->mm.lock);
+	assert_object_held(obj);
 
 	pages = __i915_gem_object_unset_pages(obj);
 	if (!IS_ERR_OR_NULL(pages))
@@ -254,7 +254,6 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool
 
 	if (get_pages)
 		err = ____i915_gem_object_get_pages(obj);
-	mutex_unlock(&obj->mm.lock);
 
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index de8e0e44cfb6..dd01127ced0f 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1046,10 +1046,10 @@ i915_drop_caches_set(void *data, u64 val)
 
 	fs_reclaim_acquire(GFP_KERNEL);
 	if (val & DROP_BOUND)
-		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
+		i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
 
 	if (val & DROP_UNBOUND)
-		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
+		i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
 
 	if (val & DROP_SHRINK_ALL)
 		i915_gem_shrink_all(i915);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index dd811d0f96f2..42668a1bc4c4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1068,10 +1068,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	if (err)
 		goto out;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
-	if (err)
-		goto out_ww;
-
 	if (i915_gem_object_has_pages(obj) &&
 	    i915_gem_object_is_tiled(obj) &&
 	    i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@@ -1114,9 +1110,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 		i915_gem_object_truncate(obj);
 
 	args->retained = obj->mm.madv != __I915_MADV_PURGED;
-	mutex_unlock(&obj->mm.lock);
 
-out_ww:
 	i915_gem_object_unlock(obj);
 out:
 	i915_gem_object_put(obj);
@@ -1295,7 +1289,7 @@ int i915_gem_freeze_late(struct drm_i915_private *i915)
 
 	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
 
-	i915_gem_shrink(i915, -1UL, NULL, ~0);
+	i915_gem_shrink(NULL, i915, -1UL, NULL, ~0);
 	i915_gem_drain_freed_objects(i915);
 
 	list_for_each_entry(obj, &i915->mm.shrink_list, mm.link) {
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 3ee2f682eff6..1cf90d506ee2 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -44,7 +44,7 @@ int i915_gem_gtt_prepare_pages(struct drm_i915_gem_object *obj,
 		 * the DMA remapper, i915_gem_shrink will return 0.
 		 */
 		GEM_BUG_ON(obj->mm.pages == pages);
-	} while (i915_gem_shrink(to_i915(obj->base.dev),
+	} while (i915_gem_shrink(NULL, to_i915(obj->base.dev),
 				 obj->base.size >> PAGE_SHIFT, NULL,
 				 I915_SHRINK_BOUND |
 				 I915_SHRINK_UNBOUND));
-- 
2.30.0.rc1

* [Intel-gfx] [PATCH v6 62/64] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (60 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 61/64] drm/i915: Finally remove obj->mm.lock Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-18 10:58   ` Thomas Hellström
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
                   ` (8 subsequent siblings)
  70 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dan Carpenter

Instead of force-unbinding and rebinding every time, we check whether
the notifier seqcount is still valid when the pages are bound. This
way we only rebind the userptr when we actually need to, preventing
stalls.
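
The fastpath relies on the standard mmu_interval notifier protocol:
a sequence count sampled with mmu_interval_read_begin() remains
valid only as long as mmu_interval_read_retry() reports no
invalidation. In shorthand (a sketch of the generic pattern, not
the exact driver code):

	seq = mmu_interval_read_begin(&obj->userptr.notifier);
	/* ... (re)acquire or reuse the pages for the range ... */
	spin_lock(&i915->mm.notifier_lock);
	if (mmu_interval_read_retry(&obj->userptr.notifier, seq)) {
		/* invalidated meanwhile: drop the pages and retry */
	}
	spin_unlock(&i915->mm.notifier_lock);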

Changes since v1:
- Missing mutex_unlock, reported by kbuild.

Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 27 ++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 84c3e8dc1491..44fc0df89784 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -274,12 +274,33 @@ int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
 	if (ret)
 		return ret;
 
-	/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
-	ret = i915_gem_object_userptr_unbind(obj, false);
+	/* optimistically try to preserve current pages while unlocked */
+	if (i915_gem_object_has_pages(obj) &&
+	    !mmu_interval_check_retry(&obj->userptr.notifier,
+				      obj->userptr.notifier_seq)) {
+		spin_lock(&i915->mm.notifier_lock);
+		if (obj->userptr.pvec &&
+		    !mmu_interval_read_retry(&obj->userptr.notifier,
+					     obj->userptr.notifier_seq)) {
+			obj->userptr.page_ref++;
+
+			/* We can keep using the current binding, this is the fastpath */
+			ret = 1;
+		}
+		spin_unlock(&i915->mm.notifier_lock);
+	}
+
+	if (!ret) {
+		/* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
+		ret = i915_gem_object_userptr_unbind(obj, false);
+	}
 	i915_gem_object_unlock(obj);
-	if (ret)
+	if (ret < 0)
 		return ret;
 
+	if (ret > 0)
+		return 0;
+
 	notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
 
 	pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (61 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 62/64] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-18 11:11   ` Thomas Hellström
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 64/64] drm/i915: Avoid some false positives in assert_object_held() Maarten Lankhorst
                   ` (7 subsequent siblings)
  70 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx

We get a lockdep splat when the reset mutex is held, because the mutex
can also be taken from fence_wait. This conflicts with our mmu
notifier: we recurse from the reset mutex into the mmap lock and then
into the mmu notifier.

Break this recursion by revoking the mmaps (gt_revoke()) before taking
the reset mutex.
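
In effect, the acquisition order during reset changes from (simplified
sketch of the call flow):

	mutex_lock(&gt->reset.mutex);
	gt_revoke(gt);	/* -> mmap lock -> mmu notifier */

to:

	gt_revoke(gt);	/* -> mmap lock -> mmu notifier */
	mutex_lock(&gt->reset.mutex);

so the reset mutex is no longer held while the mmap lock and the mmu
notifier are taken.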

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_reset.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 9d177297db79..3c0807d9a86e 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -975,8 +975,6 @@ static int do_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask)
 {
 	int err, i;
 
-	gt_revoke(gt);
-
 	err = __intel_gt_reset(gt, ALL_ENGINES);
 	for (i = 0; err && i < RESET_MAX_RETRIES; i++) {
 		msleep(10 * (i + 1));
@@ -1031,6 +1029,9 @@ void intel_gt_reset(struct intel_gt *gt,
 
 	might_sleep();
 	GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &gt->reset.flags));
+
+	gt_revoke(gt);
+
 	mutex_lock(&gt->reset.mutex);
 
 	/* Clear any previous failed attempts at recovery. Time to try again. */
-- 
2.30.0.rc1


* [Intel-gfx] [PATCH v6 64/64] drm/i915: Avoid some false positives in assert_object_held()
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (62 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
@ 2021-01-05 15:35 ` Maarten Lankhorst
  2021-01-05 17:24 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev12) Patchwork
                   ` (6 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-05 15:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: Thomas Hellström, Matthew Auld

From: Thomas Hellström <thomas.hellstrom@intel.com>

In a ww transaction where we've already locked a reservation
object, assert_object_held() might not throw a splat even if
the object is unlocked. Improve on that situation by asserting
that the reservation object's ww mutex is indeed locked.
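
As an illustration (a sketch with two hypothetical objects a and b
of the same ww class, locked through the same ww transaction):

	i915_gem_object_lock(a, &ww);
	assert_object_held(a);	/* locked: fine */
	assert_object_held(b);	/* unlocked, but lockdep may stay silent */

With the additional ww_mutex_is_locked() check on the object's own
reservation lock, the second assert now triggers a WARN as expected.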

Signed-off-by: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 748b03aeb3c6..5a518373f6c5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -116,7 +116,14 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
 	__drm_gem_object_put(&obj->base);
 }
 
-#define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv)
+#ifdef CONFIG_LOCKDEP
+#define assert_object_held(obj) do {					\
+		dma_resv_assert_held((obj)->base.resv);			\
+		WARN_ON(!ww_mutex_is_locked(&(obj)->base.resv->lock)); \
+	} while (0)
+#else
+#define assert_object_held(obj) do { } while (0)
+#endif
 
 /*
  * If more than one potential simultaneous locker, assert held.
-- 
2.30.0.rc1


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev12)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (63 preceding siblings ...)
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 64/64] drm/i915: Avoid some false positives in assert_object_held() Maarten Lankhorst
@ 2021-01-05 17:24 ` Patchwork
  2021-01-05 17:26 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (5 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-05 17:24 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev12)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
ee36d7082777 drm/i915: Do not share hwsp across contexts any more, v6
-:558: WARNING:CONSTANT_COMPARISON: Comparisons should place the constant on the right side of the test
#558: FILE: drivers/gpu/drm/i915/gt/intel_timeline.c:287:
+	if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))

total: 0 errors, 1 warnings, 0 checks, 939 lines checked
77e847edcdef drm/i915: Pin timeline map after first timeline pin, v3.
-:13: WARNING:TYPO_SPELLING: 'arithmatic' may be misspelled - perhaps 'arithmetic'?
#13: 
- Fix NULL + XX arithmatic, use casts. (kbuild)
                ^^^^^^^^^^

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
ae7984bc7a2b drm/i915: Move cmd parser pinning to execbuffer
d32ce8b3991b drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
-:59: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#59: FILE: drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:452:
+		err = i915_vma_pin_ww(vma, &eb->ww,
 					     entry->pad_to_size,

total: 0 errors, 0 warnings, 1 checks, 75 lines checked
c4327984c5b8 drm/i915: Ensure we hold the object mutex in pin correctly.
7593f631b80e drm/i915: Add gem object locking to madvise.
c053c9e3806b drm/i915: Move HAS_STRUCT_PAGE to obj->flags
-:110: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#110: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.c:63:
+			  struct lock_class_key *key, unsigned flags)

-:133: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#133: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:27:
+			  unsigned alloc_flags);

total: 0 errors, 2 warnings, 0 checks, 348 lines checked
82ff4c5fefab drm/i915: Rework struct phys attachment handling
126e45a43613 drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
4de06eb09218 drm/i915: make lockdep slightly happier about execbuf.
904777e468af drm/i915: Disable userptr pread/pwrite support.
68bf4d09afd6 drm/i915: No longer allow exporting userptr through dma-buf
bffde4d35594 drm/i915: Reject more ioctls for userptr
0b63b8ded518 drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
bd2117b82199 drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
5a4c2a10b2b9 drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
-:291: WARNING:LONG_LINE: line length of 121 exceeds 100 columns
#291: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:561:
+static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:292: WARNING:LONG_LINE: line length of 121 exceeds 100 columns
#292: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:562:
+static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:293: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#293: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:563:
+static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }

-:351: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#351: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:2:
  * SPDX-License-Identifier: MIT

-:355: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#355: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:6:
+ *
+  * Based on amdgpu_mn, which bears the following notice:

-:356: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#356: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:7:
+  * Based on amdgpu_mn, which bears the following notice:
+ *

-:469: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#469: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:63:
+	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);

-:1123: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should be avoided
#1123: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:293:
+	pinned = ret = 0;

-:1138: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#1138: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:308:
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+		!obj->userptr.page_ref ? notifier_seq :

-:1259: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#1259: FILE: drivers/gpu/drm/i915/i915_drv.h:598:
+	spinlock_t notifier_lock;

total: 0 errors, 7 warnings, 3 checks, 1202 lines checked
1b10449ea891 drm/i915: Flatten obj->mm.lock
5b67b21fc107 drm/i915: Populate logical context during first pin.
ddf15e91d6e9 drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
ee7e02b69e96 drm/i915: Handle ww locking in init_status_page
1e9c89c0cdd5 drm/i915: Rework clflush to work correctly without obj->mm.lock.
30477a8c7f4a drm/i915: Pass ww ctx to intel_pin_to_display_plane
90cd582176d6 drm/i915: Add object locking to vm_fault_cpu
98f313e1b2c7 drm/i915: Move pinning to inside engine_wa_list_verify()
-:72: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#72: FILE: drivers/gpu/drm/i915/gt/intel_workarounds.c:2209:
+	err = i915_vma_pin_ww(vma, &ww, 0, 0,
+			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);

total: 0 errors, 0 warnings, 1 checks, 118 lines checked
26d7e276b32b drm/i915: Take reservation lock around i915_vma_pin.
a16ddd6bbe45 drm/i915: Make lrc_init_wa_ctx compatible with ww locking.
3d40ff463446 drm/i915: Make __engine_unpark() compatible with ww locking.
-:10: WARNING:REPEATED_WORD: Possible repeated word: 'many'
#10: 
many many places where rpm is used, I chose the safest option

total: 0 errors, 1 warnings, 0 checks, 16 lines checked
51621d12a4f7 drm/i915: Take obj lock around set_domain ioctl
b604c14fe1fd drm/i915: Defer pin calls in buffer pool until first use by caller.
48158d09d069 drm/i915: Fix pread/pwrite to work with new locking rules.
-:32: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#32: 
deleted file mode 100644

total: 0 errors, 1 warnings, 0 checks, 359 lines checked
c18cdf7507ea drm/i915: Fix workarounds selftest, part 1
7de7664d46b6 drm/i915: Prepare for obj->mm.lock removal
-:132: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: "Thomas Hellström" <thomas.hellstrom@intel.com>' != 'Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>'

total: 0 errors, 1 warnings, 0 checks, 96 lines checked
e331444a6ecd drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
5b2ed8ac6aa9 drm/i915: Add ww locking around vm_access()
c4c898358dc5 drm/i915: Increase ww locking for perf.
a07625d3a30b drm/i915: Lock ww in ucode objects correctly
c73e522da202 drm/i915: Add ww locking to dma-buf ops.
e6bc7cd8e75b drm/i915: Add missing ww lock in intel_dsb_prepare.
880c7384495b drm/i915: Fix ww locking in shmem_create_from_object
e7d765f48e76 drm/i915: Use a single page table lock for each gtt.
-:112: WARNING:UNNECESSARY_ELSE: else is not generally useful after a break or return
#112: FILE: drivers/gpu/drm/i915/gt/intel_gtt.c:85:
+		return i915_gem_object_lock(vm->scratch[0], ww);
+	} else {

total: 0 errors, 1 warnings, 0 checks, 154 lines checked
6c13bbb27944 drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
df26a7c3bfa7 drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
f86e65b8dccd drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
1f8c1b5d09b9 drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
882d19d33b69 drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
03b5e3147426 drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
67971ae59f49 drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
c3b9c948b30e drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
2b97dd56740e drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
12e1af995385 drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
2bb81227e971 drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
4b56a1f7891c drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
d09c8b3e58c4 drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
-:163: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#163: FILE: drivers/gpu/drm/i915/gt/selftest_lrc.c:1223:
+	lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
 				      i915_coherent_map_type(engine->i915));

total: 0 errors, 0 warnings, 1 checks, 130 lines checked
d1ecda40df09 drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
c32e2b197641 drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
ca03ac9a54ce drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
6979844fdcd0 drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
ff645615e0b6 drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
a14eedc5f483 drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
ee87a31a5a25 drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
8489020ac7f4 drm/i915: Finally remove obj->mm.lock.
2b0562da270c drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
83bdcbefe32d drm/i915: Move gt_revoke() slightly
87115bdd2a96 drm/i915: Avoid some false positives in assert_object_held()
-:27: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'obj' - possible side-effects?
#27: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:120:
+#define assert_object_held(obj) do {					\
+		dma_resv_assert_held((obj)->base.resv);			\
+		WARN_ON(!ww_mutex_is_locked(&(obj)->base.resv->lock)); \
+	} while (0)

total: 0 errors, 0 warnings, 1 checks, 15 lines checked



* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915: Remove obj->mm.lock! (rev12)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (64 preceding siblings ...)
  2021-01-05 17:24 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev12) Patchwork
@ 2021-01-05 17:26 ` Patchwork
  2021-01-05 17:29 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
                   ` (4 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-05 17:26 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev12)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/i915/gt/intel_ring_submission.c:1265:24: warning: Using plain integer as NULL pointer
+drivers/gpu/drm/i915/intel_wakeref.c:137:19: warning: context imbalance in 'wakeref_auto_timeout' - unexpected unlock
+drivers/gpu/drm/i915/selftests/i915_syncmap.c:80:54: warning: dubious: x | !y



* [Intel-gfx] ✗ Fi.CI.DOCS: warning for drm/i915: Remove obj->mm.lock! (rev12)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (65 preceding siblings ...)
  2021-01-05 17:26 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-01-05 17:29 ` Patchwork
  2021-01-07 15:42 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev13) Patchwork
                   ` (3 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-05 17:29 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev12)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
./drivers/gpu/drm/i915/gem/i915_gem_shrinker.c:102: warning: Function parameter or member 'ww' not described in 'i915_gem_shrink'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Excess function parameter 'trampoline' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Function parameter or member 'jump_whitelist' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Function parameter or member 'shadow_map' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Function parameter or member 'batch_map' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Excess function parameter 'trampoline' description in 'intel_engine_cmd_parser'



* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev13)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (66 preceding siblings ...)
  2021-01-05 17:29 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
@ 2021-01-07 15:42 ` Patchwork
  2021-01-07 15:44 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (2 subsequent siblings)
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-07 15:42 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev13)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
cbc0237dc775 drm/i915: Do not share hwsp across contexts any more, v6
-:558: WARNING:CONSTANT_COMPARISON: Comparisons should place the constant on the right side of the test
#558: FILE: drivers/gpu/drm/i915/gt/intel_timeline.c:287:
+	if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))

total: 0 errors, 1 warnings, 0 checks, 939 lines checked
9785c066d43f drm/i915: Pin timeline map after first timeline pin, v3.
-:13: WARNING:TYPO_SPELLING: 'arithmatic' may be misspelled - perhaps 'arithmetic'?
#13: 
- Fix NULL + XX arithmatic, use casts. (kbuild)
                ^^^^^^^^^^

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
de7db6cc5368 drm/i915: Move cmd parser pinning to execbuffer
f4a0ebe47349 drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
-:59: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#59: FILE: drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:452:
+		err = i915_vma_pin_ww(vma, &eb->ww,
 					     entry->pad_to_size,

total: 0 errors, 0 warnings, 1 checks, 75 lines checked
f4245cae4741 drm/i915: Ensure we hold the object mutex in pin correctly.
a6940e0503e5 drm/i915: Add gem object locking to madvise.
b881f7651122 drm/i915: Move HAS_STRUCT_PAGE to obj->flags
-:110: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#110: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.c:63:
+			  struct lock_class_key *key, unsigned flags)

-:133: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#133: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:27:
+			  unsigned alloc_flags);

total: 0 errors, 2 warnings, 0 checks, 348 lines checked
f90f08586008 drm/i915: Rework struct phys attachment handling
138d22527745 drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
d319bbed11e6 drm/i915: make lockdep slightly happier about execbuf.
2164b6818543 drm/i915: Disable userptr pread/pwrite support.
f5dbb93bbfa1 drm/i915: No longer allow exporting userptr through dma-buf
7f3e5695a30c drm/i915: Reject more ioctls for userptr
6221e6630d88 drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
08f97e18f8e9 drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
3953cfa616ca drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
-:291: WARNING:LONG_LINE: line length of 121 exceeds 100 columns
#291: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:561:
+static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:292: WARNING:LONG_LINE: line length of 121 exceeds 100 columns
#292: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:562:
+static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }

-:293: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#293: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:563:
+static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }

-:351: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#351: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:2:
  * SPDX-License-Identifier: MIT

-:355: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#355: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:6:
+ *
+  * Based on amdgpu_mn, which bears the following notice:

-:356: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#356: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:7:
+  * Based on amdgpu_mn, which bears the following notice:
+ *

-:469: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#469: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:63:
+	struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);

-:1123: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should be avoided
#1123: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:293:
+	pinned = ret = 0;

-:1138: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#1138: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:308:
+	if (mmu_interval_read_retry(&obj->userptr.notifier,
+		!obj->userptr.page_ref ? notifier_seq :

-:1259: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#1259: FILE: drivers/gpu/drm/i915/i915_drv.h:598:
+	spinlock_t notifier_lock;

total: 0 errors, 7 warnings, 3 checks, 1202 lines checked
f112822eb256 drm/i915: Flatten obj->mm.lock
a29d90e385ae drm/i915: Populate logical context during first pin.
31e23527c490 drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
868eac184a45 drm/i915: Handle ww locking in init_status_page
befedf03ef9b drm/i915: Rework clflush to work correctly without obj->mm.lock.
cef708148231 drm/i915: Pass ww ctx to intel_pin_to_display_plane
4bfb4766e4cd drm/i915: Add object locking to vm_fault_cpu
fbeeff4c2e20 drm/i915: Move pinning to inside engine_wa_list_verify()
-:72: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#72: FILE: drivers/gpu/drm/i915/gt/intel_workarounds.c:2209:
+	err = i915_vma_pin_ww(vma, &ww, 0, 0,
+			   i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);

total: 0 errors, 0 warnings, 1 checks, 118 lines checked
a34e73844b08 drm/i915: Take reservation lock around i915_vma_pin.
332aba0adab3 drm/i915: Make lrc_init_wa_ctx compatible with ww locking.
96ab1f267f08 drm/i915: Make __engine_unpark() compatible with ww locking.
-:10: WARNING:REPEATED_WORD: Possible repeated word: 'many'
#10: 
many many places where rpm is used, I chose the safest option

total: 0 errors, 1 warnings, 0 checks, 16 lines checked
93fc5e38bd1e drm/i915: Take obj lock around set_domain ioctl
c4a35d8beff3 drm/i915: Defer pin calls in buffer pool until first use by caller.
735ee8499c6a drm/i915: Fix pread/pwrite to work with new locking rules.
-:32: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#32: 
deleted file mode 100644

total: 0 errors, 1 warnings, 0 checks, 359 lines checked
24a43ba8beab drm/i915: Fix workarounds selftest, part 1
8d709663d753 drm/i915: Prepare for obj->mm.lock removal
-:132: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: "Thomas Hellström" <thomas.hellstrom@intel.com>' != 'Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>'

total: 0 errors, 1 warnings, 0 checks, 96 lines checked
38fb4fa98f28 drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
61bd786460ef drm/i915: Add ww locking around vm_access()
1c7e3745bac7 drm/i915: Increase ww locking for perf.
f9780be3bf15 drm/i915: Lock ww in ucode objects correctly
4cc4a2a44fb1 drm/i915: Add ww locking to dma-buf ops.
6544ccbea852 drm/i915: Add missing ww lock in intel_dsb_prepare.
be70ee3eed38 drm/i915: Fix ww locking in shmem_create_from_object
7c6e80443e65 drm/i915: Use a single page table lock for each gtt.
-:112: WARNING:UNNECESSARY_ELSE: else is not generally useful after a break or return
#112: FILE: drivers/gpu/drm/i915/gt/intel_gtt.c:85:
+		return i915_gem_object_lock(vm->scratch[0], ww);
+	} else {

total: 0 errors, 1 warnings, 0 checks, 154 lines checked
fb3eb3fa4a62 drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
7491321df5bb drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
0f3c56470046 drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
2d031c9059f5 drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
4a5c2526d221 drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
a58e936c0fb1 drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
47c5b0d39049 drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
c756e2d60992 drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
ad4baf51502e drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
5e265d322c70 drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
6f4ab34f68bd drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
49f49501504c drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
3767527b9676 drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
-:163: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#163: FILE: drivers/gpu/drm/i915/gt/selftest_lrc.c:1225:
+	lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
 				      i915_coherent_map_type(engine->i915));

total: 0 errors, 0 warnings, 1 checks, 130 lines checked
9fc0127e0f37 drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
757f76c2b189 drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
9c26cd14427b drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
fbe995780783 drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
445321957223 drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
b514f7e656ac drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
eb8696b3d237 drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
8a2f700c0251 drm/i915: Finally remove obj->mm.lock.
b2508e46dd39 drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
c3b42b7bd741 drm/i915: Move gt_revoke() slightly
f1003593900b drm/i915: Avoid some false positives in assert_object_held()
-:27: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'obj' - possible side-effects?
#27: FILE: drivers/gpu/drm/i915/gem/i915_gem_object.h:120:
+#define assert_object_held(obj) do {					\
+		dma_resv_assert_held((obj)->base.resv);			\
+		WARN_ON(!ww_mutex_is_locked(&(obj)->base.resv->lock)); \
+	} while (0)

total: 0 errors, 0 warnings, 1 checks, 15 lines checked



* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915: Remove obj->mm.lock! (rev13)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (67 preceding siblings ...)
  2021-01-07 15:42 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev13) Patchwork
@ 2021-01-07 15:44 ` Patchwork
  2021-01-07 15:47 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
  2021-01-07 16:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-07 15:44 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev13)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/i915/gt/intel_ring_submission.c:1265:24: warning: Using plain integer as NULL pointer
+drivers/gpu/drm/i915/intel_wakeref.c:137:19: warning: context imbalance in 'wakeref_auto_timeout' - unexpected unlock
+drivers/gpu/drm/i915/selftests/i915_syncmap.c:80:54: warning: dubious: x | !y



* [Intel-gfx] ✗ Fi.CI.DOCS: warning for drm/i915: Remove obj->mm.lock! (rev13)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (68 preceding siblings ...)
  2021-01-07 15:44 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-01-07 15:47 ` Patchwork
  2021-01-07 16:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-07 15:47 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev13)
URL   : https://patchwork.freedesktop.org/series/82337/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
./drivers/gpu/drm/i915/gem/i915_gem_shrinker.c:102: warning: Function parameter or member 'ww' not described in 'i915_gem_shrink'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Excess function parameter 'trampoline' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Function parameter or member 'jump_whitelist' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Function parameter or member 'shadow_map' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Function parameter or member 'batch_map' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1413: warning: Excess function parameter 'trampoline' description in 'intel_engine_cmd_parser'



* [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915: Remove obj->mm.lock! (rev13)
  2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
                   ` (69 preceding siblings ...)
  2021-01-07 15:47 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
@ 2021-01-07 16:11 ` Patchwork
  70 siblings, 0 replies; 92+ messages in thread
From: Patchwork @ 2021-01-07 16:11 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx


== Series Details ==

Series: drm/i915: Remove obj->mm.lock! (rev13)
URL   : https://patchwork.freedesktop.org/series/82337/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_9562 -> Patchwork_19278
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_19278 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_19278, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19278:

### IGT changes ###

#### Possible regressions ####

  * igt@prime_vgem@basic-userptr:
    - fi-tgl-y:           [PASS][1] -> [SKIP][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-tgl-y/igt@prime_vgem@basic-userptr.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-tgl-y/igt@prime_vgem@basic-userptr.html
    - fi-icl-u2:          [PASS][3] -> [SKIP][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-icl-u2/igt@prime_vgem@basic-userptr.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-icl-u2/igt@prime_vgem@basic-userptr.html
    - fi-tgl-u2:          [PASS][5] -> [SKIP][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-tgl-u2/igt@prime_vgem@basic-userptr.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-tgl-u2/igt@prime_vgem@basic-userptr.html
    - fi-cml-s:           [PASS][7] -> [SKIP][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-cml-s/igt@prime_vgem@basic-userptr.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-cml-s/igt@prime_vgem@basic-userptr.html
    - fi-icl-y:           [PASS][9] -> [SKIP][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-icl-y/igt@prime_vgem@basic-userptr.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-icl-y/igt@prime_vgem@basic-userptr.html
    - fi-cml-u2:          [PASS][11] -> [SKIP][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-cml-u2/igt@prime_vgem@basic-userptr.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-cml-u2/igt@prime_vgem@basic-userptr.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@prime_vgem@basic-userptr:
    - {fi-ehl-1}:         [PASS][13] -> [SKIP][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-ehl-1/igt@prime_vgem@basic-userptr.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-ehl-1/igt@prime_vgem@basic-userptr.html
    - {fi-tgl-dsi}:       [PASS][15] -> [SKIP][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-tgl-dsi/igt@prime_vgem@basic-userptr.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-tgl-dsi/igt@prime_vgem@basic-userptr.html

  
Known issues
------------

  Here are the changes found in Patchwork_19278 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_sync@basic-each:
    - fi-tgl-y:           [PASS][17] -> [DMESG-WARN][18] ([i915#402])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-tgl-y/igt@gem_sync@basic-each.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-tgl-y/igt@gem_sync@basic-each.html

  * igt@prime_vgem@basic-userptr:
    - fi-pnv-d510:        [PASS][19] -> [SKIP][20] ([fdo#109271])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-pnv-d510/igt@prime_vgem@basic-userptr.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-pnv-d510/igt@prime_vgem@basic-userptr.html
    - fi-cfl-8700k:       [PASS][21] -> [SKIP][22] ([fdo#109271])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-cfl-8700k/igt@prime_vgem@basic-userptr.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-cfl-8700k/igt@prime_vgem@basic-userptr.html
    - fi-skl-6600u:       [PASS][23] -> [SKIP][24] ([fdo#109271])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-skl-6600u/igt@prime_vgem@basic-userptr.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-skl-6600u/igt@prime_vgem@basic-userptr.html
    - fi-bsw-n3050:       [PASS][25] -> [SKIP][26] ([fdo#109271])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-bsw-n3050/igt@prime_vgem@basic-userptr.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-bsw-n3050/igt@prime_vgem@basic-userptr.html
    - fi-bsw-kefka:       [PASS][27] -> [SKIP][28] ([fdo#109271])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-bsw-kefka/igt@prime_vgem@basic-userptr.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-bsw-kefka/igt@prime_vgem@basic-userptr.html
    - fi-ilk-650:         [PASS][29] -> [SKIP][30] ([fdo#109271])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-ilk-650/igt@prime_vgem@basic-userptr.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-ilk-650/igt@prime_vgem@basic-userptr.html
    - fi-elk-e7500:       [PASS][31] -> [SKIP][32] ([fdo#109271])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-elk-e7500/igt@prime_vgem@basic-userptr.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-elk-e7500/igt@prime_vgem@basic-userptr.html
    - fi-skl-guc:         [PASS][33] -> [SKIP][34] ([fdo#109271])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-skl-guc/igt@prime_vgem@basic-userptr.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-skl-guc/igt@prime_vgem@basic-userptr.html
    - fi-cfl-guc:         [PASS][35] -> [SKIP][36] ([fdo#109271])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-cfl-guc/igt@prime_vgem@basic-userptr.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-cfl-guc/igt@prime_vgem@basic-userptr.html
    - fi-bxt-dsi:         [PASS][37] -> [SKIP][38] ([fdo#109271])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-bxt-dsi/igt@prime_vgem@basic-userptr.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-bxt-dsi/igt@prime_vgem@basic-userptr.html
    - fi-ivb-3770:        [PASS][39] -> [SKIP][40] ([fdo#109271])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-ivb-3770/igt@prime_vgem@basic-userptr.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-ivb-3770/igt@prime_vgem@basic-userptr.html
    - fi-snb-2600:        [PASS][41] -> [SKIP][42] ([fdo#109271])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-snb-2600/igt@prime_vgem@basic-userptr.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-snb-2600/igt@prime_vgem@basic-userptr.html
    - fi-byt-j1900:       [PASS][43] -> [SKIP][44] ([fdo#109271])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-byt-j1900/igt@prime_vgem@basic-userptr.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-byt-j1900/igt@prime_vgem@basic-userptr.html
    - fi-hsw-4770:        [PASS][45] -> [SKIP][46] ([fdo#109271])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-hsw-4770/igt@prime_vgem@basic-userptr.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-hsw-4770/igt@prime_vgem@basic-userptr.html
    - fi-kbl-7500u:       [PASS][47] -> [SKIP][48] ([fdo#109271])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-kbl-7500u/igt@prime_vgem@basic-userptr.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-kbl-7500u/igt@prime_vgem@basic-userptr.html
    - fi-kbl-guc:         [PASS][49] -> [SKIP][50] ([fdo#109271])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-kbl-guc/igt@prime_vgem@basic-userptr.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-kbl-guc/igt@prime_vgem@basic-userptr.html
    - fi-bdw-5557u:       [PASS][51] -> [SKIP][52] ([fdo#109271])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-bdw-5557u/igt@prime_vgem@basic-userptr.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-bdw-5557u/igt@prime_vgem@basic-userptr.html
    - fi-kbl-r:           [PASS][53] -> [SKIP][54] ([fdo#109271])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-kbl-r/igt@prime_vgem@basic-userptr.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-kbl-r/igt@prime_vgem@basic-userptr.html
    - fi-cfl-8109u:       [PASS][55] -> [SKIP][56] ([fdo#109271])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-cfl-8109u/igt@prime_vgem@basic-userptr.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-cfl-8109u/igt@prime_vgem@basic-userptr.html
    - fi-bsw-nick:        [PASS][57] -> [SKIP][58] ([fdo#109271])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-bsw-nick/igt@prime_vgem@basic-userptr.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-bsw-nick/igt@prime_vgem@basic-userptr.html
    - fi-glk-dsi:         [PASS][59] -> [SKIP][60] ([fdo#109271])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-glk-dsi/igt@prime_vgem@basic-userptr.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-glk-dsi/igt@prime_vgem@basic-userptr.html
    - fi-kbl-x1275:       [PASS][61] -> [SKIP][62] ([fdo#109271])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-kbl-x1275/igt@prime_vgem@basic-userptr.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-kbl-x1275/igt@prime_vgem@basic-userptr.html
    - fi-snb-2520m:       [PASS][63] -> [SKIP][64] ([fdo#109271])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-snb-2520m/igt@prime_vgem@basic-userptr.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-snb-2520m/igt@prime_vgem@basic-userptr.html

  
#### Possible fixes ####

  * igt@debugfs_test@read_all_entries:
    - fi-tgl-y:           [DMESG-WARN][65] ([i915#402]) -> [PASS][66] +1 similar issue
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-tgl-y/igt@debugfs_test@read_all_entries.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-tgl-y/igt@debugfs_test@read_all_entries.html

  * igt@gem_exec_suspend@basic-s3:
    - fi-tgl-y:           [DMESG-WARN][67] ([i915#2411] / [i915#402]) -> [PASS][68]
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-tgl-y/igt@gem_exec_suspend@basic-s3.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-tgl-y/igt@gem_exec_suspend@basic-s3.html

  * igt@i915_selftest@live@sanitycheck:
    - fi-kbl-7500u:       [DMESG-WARN][69] ([i915#2605]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9562/fi-kbl-7500u/igt@i915_selftest@live@sanitycheck.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/fi-kbl-7500u/igt@i915_selftest@live@sanitycheck.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#2411]: https://gitlab.freedesktop.org/drm/intel/issues/2411
  [i915#2605]: https://gitlab.freedesktop.org/drm/intel/issues/2605
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402


Participating hosts (43 -> 37)
------------------------------

  Missing    (6): fi-kbl-soraka fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-bdw-samus 


Build changes
-------------

  * IGT: IGT_5946 -> IGTPW_5364
  * Linux: CI_DRM_9562 -> Patchwork_19278

  CI-20190529: 20190529
  CI_DRM_9562: fc8d32007355b4babc37b621b3c9a4e0fe998d27 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_5364: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5364/index.html
  IGT_5946: 641e5545213dd9a82d80a4e065013a138afb58ff @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19278: f1003593900b9def3b2df9a3987e6527c47eb181 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

f1003593900b drm/i915: Avoid some false positives in assert_object_held()
c3b42b7bd741 drm/i915: Move gt_revoke() slightly
b2508e46dd39 drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
8a2f700c0251 drm/i915: Finally remove obj->mm.lock.
eb8696b3d237 drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal
b514f7e656ac drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal
445321957223 drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal
fbe995780783 drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal
9c26cd14427b drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal
757f76c2b189 drm/i915/selftests: Prepare ring submission for obj->mm.lock removal
9fc0127e0f37 drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal
3767527b9676 drm/i915/selftests: Prepare execlists and lrc selftests for obj->mm.lock removal
49f49501504c drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal
6f4ab34f68bd drm/i915/selftests: Prepare context selftest for obj->mm.lock removal
5e265d322c70 drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal
ad4baf51502e drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.
c756e2d60992 drm/i915/selftests: Prepare object tests for obj->mm.lock removal.
47c5b0d39049 drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.
a58e936c0fb1 drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.
4a5c2526d221 drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.
2d031c9059f5 drm/i915/selftests: Prepare context tests for obj->mm.lock removal.
0f3c56470046 drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.
7491321df5bb drm/i915/selftests: Prepare client blit for obj->mm.lock removal.
fb3eb3fa4a62 drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal.
7c6e80443e65 drm/i915: Use a single page table lock for each gtt.
be70ee3eed38 drm/i915: Fix ww locking in shmem_create_from_object
6544ccbea852 drm/i915: Add missing ww lock in intel_dsb_prepare.
4cc4a2a44fb1 drm/i915: Add ww locking to dma-buf ops.
f9780be3bf15 drm/i915: Lock ww in ucode objects correctly
1c7e3745bac7 drm/i915: Increase ww locking for perf.
61bd786460ef drm/i915: Add ww locking around vm_access()
38fb4fa98f28 drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner.
8d709663d753 drm/i915: Prepare for obj->mm.lock removal
24a43ba8beab drm/i915: Fix workarounds selftest, part 1
735ee8499c6a drm/i915: Fix pread/pwrite to work with new locking rules.
c4a35d8beff3 drm/i915: Defer pin calls in buffer pool until first use by caller.
93fc5e38bd1e drm/i915: Take obj lock around set_domain ioctl
96ab1f267f08 drm/i915: Make __engine_unpark() compatible with ww locking.
332aba0adab3 drm/i915: Make lrc_init_wa_ctx compatible with ww locking.
a34e73844b08 drm/i915: Take reservation lock around i915_vma_pin.
fbeeff4c2e20 drm/i915: Move pinning to inside engine_wa_list_verify()
4bfb4766e4cd drm/i915: Add object locking to vm_fault_cpu
cef708148231 drm/i915: Pass ww ctx to intel_pin_to_display_plane
befedf03ef9b drm/i915: Rework clflush to work correctly without obj->mm.lock.
868eac184a45 drm/i915: Handle ww locking in init_status_page
31e23527c490 drm/i915: Make ring submission compatible with obj->mm.lock removal, v2.
a29d90e385ae drm/i915: Populate logical context during first pin.
f112822eb256 drm/i915: Flatten obj->mm.lock
3953cfa616ca drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
08f97e18f8e9 drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
6221e6630d88 drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
7f3e5695a30c drm/i915: Reject more ioctls for userptr
f5dbb93bbfa1 drm/i915: No longer allow exporting userptr through dma-buf
2164b6818543 drm/i915: Disable userptr pread/pwrite support.
d319bbed11e6 drm/i915: make lockdep slightly happier about execbuf.
138d22527745 drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2.
f90f08586008 drm/i915: Rework struct phys attachment handling
b881f7651122 drm/i915: Move HAS_STRUCT_PAGE to obj->flags
a6940e0503e5 drm/i915: Add gem object locking to madvise.
f4245cae4741 drm/i915: Ensure we hold the object mutex in pin correctly.
f4a0ebe47349 drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2.
de7db6cc5368 drm/i915: Move cmd parser pinning to execbuffer
9785c066d43f drm/i915: Pin timeline map after first timeline pin, v3.
cbc0237dc775 drm/i915: Do not share hwsp across contexts any more, v6

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19278/index.html


* Re: [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2 Maarten Lankhorst
@ 2021-01-11 20:50   ` Dave Airlie
  2021-01-18 10:34   ` Thomas Hellström
  1 sibling, 0 replies; 92+ messages in thread
From: Dave Airlie @ 2021-01-11 20:50 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Intel Graphics Development

Acked-by: Dave Airlie <airlied@redhat.com>

burn with the fire.

Dave.

On Wed, 6 Jan 2021 at 01:46, Maarten Lankhorst
<maarten.lankhorst@linux.intel.com> wrote:
>
> We should not allow this any more, as it will break with the new userptr
> implementation. It could still be made to work, but there's no point in
> doing so.
>
> Inspection of the beignet opencl driver shows that it's only used
> when normal userptr is not available, which means for new kernels
> you will need CONFIG_I915_USERPTR.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index 64a946d5f753..241f865077b9 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -224,7 +224,7 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
>         struct i915_mmu_object *mo;
>
>         if (flags & I915_USERPTR_UNSYNCHRONIZED)
> -               return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
> +               return -ENODEV;
>
>         if (GEM_WARN_ON(!obj->userptr.mm))
>                 return -EINVAL;
> @@ -274,13 +274,7 @@ static int
>  i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
>                                     unsigned flags)
>  {
> -       if ((flags & I915_USERPTR_UNSYNCHRONIZED) == 0)
> -               return -ENODEV;
> -
> -       if (!capable(CAP_SYS_ADMIN))
> -               return -EPERM;
> -
> -       return 0;
> +       return -ENODEV;
>  }
>
>  static void
> --
> 2.30.0.rc1
>

* Re: [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5 Maarten Lankhorst
@ 2021-01-11 20:51   ` Dave Airlie
  2021-01-18 11:30   ` Thomas Hellström (Intel)
  1 sibling, 0 replies; 92+ messages in thread
From: Dave Airlie @ 2021-01-11 20:51 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: Intel Graphics Development

On Wed, 6 Jan 2021 at 01:46, Maarten Lankhorst
<maarten.lankhorst@linux.intel.com> wrote:
>
> Instead of doing what we do currently, which will never work with
> PROVE_LOCKING, do the same as AMD does, which is also similar to the
> relocation slowpath. When all locks are dropped, we acquire the
> pages for pinning. When the locks are taken, we transfer those
> pages to the bo in .get_pages(). As a final check before installing
> the fences, we ensure that the mmu notifier was not called; if it was,
> we return -EAGAIN to userspace to signal it has to start over.
>
> Changes since v1:
> - Unbinding is done in submit_init only. submit_begin() removed.
> - MMU_NOTFIER -> MMU_NOTIFIER
> Changes since v2:
> - Make i915->mm.notifier a spinlock.
> Changes since v3:
> - Add WARN_ON if there are any page references left, should have been 0.
> - Return 0 on success in submit_init(), bug from spinlock conversion.
> - Release pvec outside of notifier_lock (Thomas).
> Changes since v4:
> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Acked-by: Dave Airlie <airlied@redhat.com>

This isn't a review so much as an ack to land this over objections.

Dave.

> ---
>  .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 101 ++-
>  drivers/gpu/drm/i915/gem/i915_gem_object.h    |  35 +-
>  .../gpu/drm/i915/gem/i915_gem_object_types.h  |  10 +-
>  drivers/gpu/drm/i915/gem/i915_gem_pages.c     |   2 +-
>  drivers/gpu/drm/i915/gem/i915_gem_userptr.c   | 765 ++++++------------
>  drivers/gpu/drm/i915/i915_drv.h               |   9 +-
>  drivers/gpu/drm/i915/i915_gem.c               |   5 +-
>  7 files changed, 344 insertions(+), 583 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 77de94e45f55..d4c065c1da06 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -53,14 +53,16 @@ enum {
>  /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
>  #define __EXEC_OBJECT_HAS_PIN          BIT(30)
>  #define __EXEC_OBJECT_HAS_FENCE                BIT(29)
> -#define __EXEC_OBJECT_NEEDS_MAP                BIT(28)
> -#define __EXEC_OBJECT_NEEDS_BIAS       BIT(27)
> -#define __EXEC_OBJECT_INTERNAL_FLAGS   (~0u << 27) /* all of the above + */
> +#define __EXEC_OBJECT_USERPTR_INIT     BIT(28)
> +#define __EXEC_OBJECT_NEEDS_MAP                BIT(27)
> +#define __EXEC_OBJECT_NEEDS_BIAS       BIT(26)
> +#define __EXEC_OBJECT_INTERNAL_FLAGS   (~0u << 26) /* all of the above + */
>  #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
>
>  #define __EXEC_HAS_RELOC       BIT(31)
>  #define __EXEC_ENGINE_PINNED   BIT(30)
> -#define __EXEC_INTERNAL_FLAGS  (~0u << 30)
> +#define __EXEC_USERPTR_USED    BIT(29)
> +#define __EXEC_INTERNAL_FLAGS  (~0u << 29)
>  #define UPDATE                 PIN_OFFSET_FIXED
>
>  #define BATCH_OFFSET_BIAS (256*1024)
> @@ -864,6 +866,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
>                 }
>
>                 eb_add_vma(eb, i, batch, vma);
> +
> +               if (i915_gem_object_is_userptr(vma->obj)) {
> +                       err = i915_gem_object_userptr_submit_init(vma->obj);
> +                       if (err) {
> +                               if (i + 1 < eb->buffer_count) {
> +                                       /*
> +                                        * Execbuffer code expects the last vma entry to be NULL.
> +                                        * Since we already initialized this entry,
> +                                        * set the next value to NULL or we mess up
> +                                        * cleanup handling.
> +                                        */
> +                                       eb->vma[i + 1].vma = NULL;
> +                               }
> +
> +                               return err;
> +                       }
> +
> +                       eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
> +                       eb->args->flags |= __EXEC_USERPTR_USED;
> +               }
>         }
>
>         if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
> @@ -965,7 +987,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
>         }
>  }
>
> -static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
> +static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
>  {
>         const unsigned int count = eb->buffer_count;
>         unsigned int i;
> @@ -979,6 +1001,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
>
>                 eb_unreserve_vma(ev);
>
> +               if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
> +                       ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
> +                       i915_gem_object_userptr_submit_fini(vma->obj);
> +               }
> +
>                 if (final)
>                         i915_vma_put(vma);
>         }
> @@ -1916,6 +1943,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
>         return 0;
>  }
>
> +static int eb_reinit_userptr(struct i915_execbuffer *eb)
> +{
> +       const unsigned int count = eb->buffer_count;
> +       unsigned int i;
> +       int ret;
> +
> +       if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
> +               return 0;
> +
> +       for (i = 0; i < count; i++) {
> +               struct eb_vma *ev = &eb->vma[i];
> +
> +               if (!i915_gem_object_is_userptr(ev->vma->obj))
> +                       continue;
> +
> +               ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
> +               if (ret)
> +                       return ret;
> +
> +               ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
> +       }
> +
> +       return 0;
> +}
> +
>  static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>                                            struct i915_request *rq)
>  {
> @@ -1930,7 +1982,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>         }
>
>         /* We may process another execbuffer during the unlock... */
> -       eb_release_vmas(eb, false);
> +       eb_release_vmas(eb, false, true);
>         i915_gem_ww_ctx_fini(&eb->ww);
>
>         if (rq) {
> @@ -1971,10 +2023,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>                 err = 0;
>         }
>
> -#ifdef CONFIG_MMU_NOTIFIER
>         if (!err)
> -               flush_workqueue(eb->i915->mm.userptr_wq);
> -#endif
> +               err = eb_reinit_userptr(eb);
>
>  err_relock:
>         i915_gem_ww_ctx_init(&eb->ww, true);
> @@ -2036,7 +2086,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
>
>  err:
>         if (err == -EDEADLK) {
> -               eb_release_vmas(eb, false);
> +               eb_release_vmas(eb, false, false);
>                 err = i915_gem_ww_ctx_backoff(&eb->ww);
>                 if (!err)
>                         goto repeat_validate;
> @@ -2133,7 +2183,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
>
>  err:
>         if (err == -EDEADLK) {
> -               eb_release_vmas(eb, false);
> +               eb_release_vmas(eb, false, false);
>                 err = i915_gem_ww_ctx_backoff(&eb->ww);
>                 if (!err)
>                         goto retry;
> @@ -2208,6 +2258,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
>                                                       flags | __EXEC_OBJECT_NO_RESERVE);
>         }
>
> +#ifdef CONFIG_MMU_NOTIFIER
> +       if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
> +               spin_lock(&eb->i915->mm.notifier_lock);
> +
> +               /*
> +                * count is always at least 1, otherwise __EXEC_USERPTR_USED
> +                * could not have been set
> +                */
> +               for (i = 0; i < count; i++) {
> +                       struct eb_vma *ev = &eb->vma[i];
> +                       struct drm_i915_gem_object *obj = ev->vma->obj;
> +
> +                       if (!i915_gem_object_is_userptr(obj))
> +                               continue;
> +
> +                       err = i915_gem_object_userptr_submit_done(obj);
> +                       if (err)
> +                               break;
> +               }
> +
> +               spin_unlock(&eb->i915->mm.notifier_lock);
> +       }
> +#endif
> +
>         if (unlikely(err))
>                 goto err_skip;
>
> @@ -3351,7 +3425,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>
>         err = eb_lookup_vmas(&eb);
>         if (err) {
> -               eb_release_vmas(&eb, true);
> +               eb_release_vmas(&eb, true, true);
>                 goto err_engine;
>         }
>
> @@ -3423,6 +3497,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>
>         trace_i915_request_queue(eb.request, eb.batch_flags);
>         err = eb_submit(&eb, batch);
> +
>  err_request:
>         i915_request_get(eb.request);
>         err = eb_request_add(&eb, err);
> @@ -3443,7 +3518,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
>         i915_request_put(eb.request);
>
>  err_vma:
> -       eb_release_vmas(&eb, true);
> +       eb_release_vmas(&eb, true, true);
>         if (eb.trampoline)
>                 i915_vma_unpin(eb.trampoline);
>         WARN_ON(err == -EDEADLK);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index dc4f3175550d..db4ddbfcee40 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -33,6 +33,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
>                                        const void *data, resource_size_t size);
>
>  extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
> +
>  void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
>                                      struct sg_table *pages,
>                                      bool needs_clflush);
> @@ -228,12 +229,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
>         return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
>  }
>
> -static inline bool
> -i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
> -{
> -       return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
> -}
> -
>  static inline bool
>  i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
>  {
> @@ -534,16 +529,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>  void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
>                                               enum fb_op_origin origin);
>
> -static inline bool
> -i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
> -{
> -#ifdef CONFIG_MMU_NOTIFIER
> -       return obj->userptr.mm;
> -#else
> -       return false;
> -#endif
> -}
> -
>  static inline void
>  i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
>                                   enum fb_op_origin origin)
> @@ -560,4 +545,22 @@ i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
>                 __i915_gem_object_invalidate_frontbuffer(obj, origin);
>  }
>
> +#ifdef CONFIG_MMU_NOTIFIER
> +static inline bool
> +i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
> +{
> +       return obj->userptr.notifier.mm;
> +}
> +
> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
> +#else
> +static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
> +
> +static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
> +static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
> +static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
> +#endif
> +
>  #endif
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> index 6d3f451c15c6..5234c1ed62d4 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> @@ -7,6 +7,8 @@
>  #ifndef __I915_GEM_OBJECT_TYPES_H__
>  #define __I915_GEM_OBJECT_TYPES_H__
>
> +#include <linux/mmu_notifier.h>
> +
>  #include <drm/drm_gem.h>
>  #include <uapi/drm/i915_drm.h>
>
> @@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops {
>  #define I915_GEM_OBJECT_IS_SHRINKABLE  BIT(2)
>  #define I915_GEM_OBJECT_IS_PROXY       BIT(3)
>  #define I915_GEM_OBJECT_NO_MMAP                BIT(4)
> -#define I915_GEM_OBJECT_ASYNC_CANCEL   BIT(5)
>
>         /* Interface between the GEM object and its backing storage.
>          * get_pages() is called once prior to the use of the associated set
> @@ -292,10 +293,11 @@ struct drm_i915_gem_object {
>  #ifdef CONFIG_MMU_NOTIFIER
>                 struct i915_gem_userptr {
>                         uintptr_t ptr;
> +                       unsigned long notifier_seq;
>
> -                       struct i915_mm_struct *mm;
> -                       struct i915_mmu_object *mmu_object;
> -                       struct work_struct *work;
> +                       struct mmu_interval_notifier notifier;
> +                       struct page **pvec;
> +                       int page_ref;
>                 } userptr;
>  #endif
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
> index 63b50c3559f7..aefdbe36fe6f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
> @@ -223,7 +223,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
>          * get_pages backends we should be better able to handle the
>          * cancellation of the async task in a more uniform manner.
>          */
> -       if (!pages && !i915_gem_object_needs_async_cancel(obj))
> +       if (!pages)
>                 pages = ERR_PTR(-EINVAL);
>
>         if (!IS_ERR(pages))
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index 1183b28c084b..d07f84328edd 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -2,10 +2,39 @@
>   * SPDX-License-Identifier: MIT
>   *
>   * Copyright © 2012-2014 Intel Corporation
> + *
> + * Based on amdgpu_mn, which bears the following notice:
> + *
> + * Copyright 2014 Advanced Micro Devices, Inc.
> + * All Rights Reserved.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + */
> +/*
> + * Authors:
> + *    Christian König <christian.koenig@amd.com>
>   */
>
>  #include <linux/mmu_context.h>
> -#include <linux/mmu_notifier.h>
>  #include <linux/mempolicy.h>
>  #include <linux/swap.h>
>  #include <linux/sched/mm.h>
> @@ -15,373 +44,114 @@
>  #include "i915_gem_object.h"
>  #include "i915_scatterlist.h"
>
> -#if defined(CONFIG_MMU_NOTIFIER)
> -
> -struct i915_mm_struct {
> -       struct mm_struct *mm;
> -       struct drm_i915_private *i915;
> -       struct i915_mmu_notifier *mn;
> -       struct hlist_node node;
> -       struct kref kref;
> -       struct rcu_work work;
> -};
> -
> -#include <linux/interval_tree.h>
> -
> -struct i915_mmu_notifier {
> -       spinlock_t lock;
> -       struct hlist_node node;
> -       struct mmu_notifier mn;
> -       struct rb_root_cached objects;
> -       struct i915_mm_struct *mm;
> -};
> -
> -struct i915_mmu_object {
> -       struct i915_mmu_notifier *mn;
> -       struct drm_i915_gem_object *obj;
> -       struct interval_tree_node it;
> -};
> -
> -static void add_object(struct i915_mmu_object *mo)
> -{
> -       GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb));
> -       interval_tree_insert(&mo->it, &mo->mn->objects);
> -}
> -
> -static void del_object(struct i915_mmu_object *mo)
> -{
> -       if (RB_EMPTY_NODE(&mo->it.rb))
> -               return;
> -
> -       interval_tree_remove(&mo->it, &mo->mn->objects);
> -       RB_CLEAR_NODE(&mo->it.rb);
> -}
> +#ifdef CONFIG_MMU_NOTIFIER
>
> -static void
> -__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
> +/**
> + * i915_gem_userptr_invalidate - callback to notify about mm change
> + *
> + * @mni: the range (mm) is about to update
> + * @range: details on the invalidation
> + * @cur_seq: Value to pass to mmu_interval_set_seq()
> + *
> + * Block for operations on BOs to finish and mark pages as accessed and
> + * potentially dirty.
> + */
> +static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> +                                       const struct mmu_notifier_range *range,
> +                                       unsigned long cur_seq)
>  {
> -       struct i915_mmu_object *mo = obj->userptr.mmu_object;
> -
> -       /*
> -        * During mm_invalidate_range we need to cancel any userptr that
> -        * overlaps the range being invalidated. Doing so requires the
> -        * struct_mutex, and that risks recursion. In order to cause
> -        * recursion, the user must alias the userptr address space with
> -        * a GTT mmapping (possible with a MAP_FIXED) - then when we have
> -        * to invalidate that mmaping, mm_invalidate_range is called with
> -        * the userptr address *and* the struct_mutex held.  To prevent that
> -        * we set a flag under the i915_mmu_notifier spinlock to indicate
> -        * whether this object is valid.
> -        */
> -       if (!mo)
> -               return;
> +       struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier);
> +       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +       long r;
>
> -       spin_lock(&mo->mn->lock);
> -       if (value)
> -               add_object(mo);
> -       else
> -               del_object(mo);
> -       spin_unlock(&mo->mn->lock);
> -}
> +       if (!mmu_notifier_range_blockable(range))
> +               return false;
>
> -static int
> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> -                                 const struct mmu_notifier_range *range)
> -{
> -       struct i915_mmu_notifier *mn =
> -               container_of(_mn, struct i915_mmu_notifier, mn);
> -       struct interval_tree_node *it;
> -       unsigned long end;
> -       int ret = 0;
> -
> -       if (RB_EMPTY_ROOT(&mn->objects.rb_root))
> -               return 0;
> -
> -       /* interval ranges are inclusive, but invalidate range is exclusive */
> -       end = range->end - 1;
> -
> -       spin_lock(&mn->lock);
> -       it = interval_tree_iter_first(&mn->objects, range->start, end);
> -       while (it) {
> -               struct drm_i915_gem_object *obj;
> -
> -               if (!mmu_notifier_range_blockable(range)) {
> -                       ret = -EAGAIN;
> -                       break;
> -               }
> +       spin_lock(&i915->mm.notifier_lock);
>
> -               /*
> -                * The mmu_object is released late when destroying the
> -                * GEM object so it is entirely possible to gain a
> -                * reference on an object in the process of being freed
> -                * since our serialisation is via the spinlock and not
> -                * the struct_mutex - and consequently use it after it
> -                * is freed and then double free it. To prevent that
> -                * use-after-free we only acquire a reference on the
> -                * object if it is not in the process of being destroyed.
> -                */
> -               obj = container_of(it, struct i915_mmu_object, it)->obj;
> -               if (!kref_get_unless_zero(&obj->base.refcount)) {
> -                       it = interval_tree_iter_next(it, range->start, end);
> -                       continue;
> -               }
> -               spin_unlock(&mn->lock);
> +       mmu_interval_set_seq(mni, cur_seq);
>
> -               ret = i915_gem_object_unbind(obj,
> -                                            I915_GEM_OBJECT_UNBIND_ACTIVE |
> -                                            I915_GEM_OBJECT_UNBIND_BARRIER);
> -               if (ret == 0)
> -                       ret = __i915_gem_object_put_pages(obj);
> -               i915_gem_object_put(obj);
> -               if (ret)
> -                       return ret;
> +       spin_unlock(&i915->mm.notifier_lock);
>
> -               spin_lock(&mn->lock);
> +       /* During exit there's no need to wait */
> +       if (current->flags & PF_EXITING)
> +               return true;
>
> -               /*
> -                * As we do not (yet) protect the mmu from concurrent insertion
> -                * over this range, there is no guarantee that this search will
> -                * terminate given a pathologic workload.
> -                */
> -               it = interval_tree_iter_first(&mn->objects, range->start, end);
> -       }
> -       spin_unlock(&mn->lock);
> -
> -       return ret;
> +       /* we will unbind on next submission, still have userptr pins */
> +       r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> +                                     MAX_SCHEDULE_TIMEOUT);
> +       if (r <= 0)
> +               drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>
> +       return true;
>  }
>
> -static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
> -       .invalidate_range_start = userptr_mn_invalidate_range_start,
> +static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = {
> +       .invalidate = i915_gem_userptr_invalidate,
>  };
>
> -static struct i915_mmu_notifier *
> -i915_mmu_notifier_create(struct i915_mm_struct *mm)
> -{
> -       struct i915_mmu_notifier *mn;
> -
> -       mn = kmalloc(sizeof(*mn), GFP_KERNEL);
> -       if (mn == NULL)
> -               return ERR_PTR(-ENOMEM);
> -
> -       spin_lock_init(&mn->lock);
> -       mn->mn.ops = &i915_gem_userptr_notifier;
> -       mn->objects = RB_ROOT_CACHED;
> -       mn->mm = mm;
> -
> -       return mn;
> -}
> -
> -static void
> -i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
> -{
> -       struct i915_mmu_object *mo;
> -
> -       mo = fetch_and_zero(&obj->userptr.mmu_object);
> -       if (!mo)
> -               return;
> -
> -       spin_lock(&mo->mn->lock);
> -       del_object(mo);
> -       spin_unlock(&mo->mn->lock);
> -       kfree(mo);
> -}
> -
> -static struct i915_mmu_notifier *
> -i915_mmu_notifier_find(struct i915_mm_struct *mm)
> -{
> -       struct i915_mmu_notifier *mn, *old;
> -       int err;
> -
> -       mn = READ_ONCE(mm->mn);
> -       if (likely(mn))
> -               return mn;
> -
> -       mn = i915_mmu_notifier_create(mm);
> -       if (IS_ERR(mn))
> -               return mn;
> -
> -       err = mmu_notifier_register(&mn->mn, mm->mm);
> -       if (err) {
> -               kfree(mn);
> -               return ERR_PTR(err);
> -       }
> -
> -       old = cmpxchg(&mm->mn, NULL, mn);
> -       if (old) {
> -               mmu_notifier_unregister(&mn->mn, mm->mm);
> -               kfree(mn);
> -               mn = old;
> -       }
> -
> -       return mn;
> -}
> -
>  static int
>  i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj)
>  {
> -       struct i915_mmu_notifier *mn;
> -       struct i915_mmu_object *mo;
> -
> -       if (GEM_WARN_ON(!obj->userptr.mm))
> -               return -EINVAL;
> -
> -       mn = i915_mmu_notifier_find(obj->userptr.mm);
> -       if (IS_ERR(mn))
> -               return PTR_ERR(mn);
> -
> -       mo = kzalloc(sizeof(*mo), GFP_KERNEL);
> -       if (!mo)
> -               return -ENOMEM;
> -
> -       mo->mn = mn;
> -       mo->obj = obj;
> -       mo->it.start = obj->userptr.ptr;
> -       mo->it.last = obj->userptr.ptr + obj->base.size - 1;
> -       RB_CLEAR_NODE(&mo->it.rb);
> -
> -       obj->userptr.mmu_object = mo;
> -       return 0;
> -}
> -
> -static void
> -i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
> -                      struct mm_struct *mm)
> -{
> -       if (mn == NULL)
> -               return;
> -
> -       mmu_notifier_unregister(&mn->mn, mm);
> -       kfree(mn);
> +       return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm,
> +                                           obj->userptr.ptr, obj->base.size,
> +                                           &i915_gem_userptr_notifier_ops);
>  }
>
> -
> -static struct i915_mm_struct *
> -__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real)
> -{
> -       struct i915_mm_struct *it, *mm = NULL;
> -
> -       rcu_read_lock();
> -       hash_for_each_possible_rcu(i915->mm_structs,
> -                                  it, node,
> -                                  (unsigned long)real)
> -               if (it->mm == real && kref_get_unless_zero(&it->kref)) {
> -                       mm = it;
> -                       break;
> -               }
> -       rcu_read_unlock();
> -
> -       return mm;
> -}
> -
> -static int
> -i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
> +static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
>  {
>         struct drm_i915_private *i915 = to_i915(obj->base.dev);
> -       struct i915_mm_struct *mm, *new;
> -       int ret = 0;
> -
> -       /* During release of the GEM object we hold the struct_mutex. This
> -        * precludes us from calling mmput() at that time as that may be
> -        * the last reference and so call exit_mmap(). exit_mmap() will
> -        * attempt to reap the vma, and if we were holding a GTT mmap
> -        * would then call drm_gem_vm_close() and attempt to reacquire
> -        * the struct mutex. So in order to avoid that recursion, we have
> -        * to defer releasing the mm reference until after we drop the
> -        * struct_mutex, i.e. we need to schedule a worker to do the clean
> -        * up.
> -        */
> -       mm = __i915_mm_struct_find(i915, current->mm);
> -       if (mm)
> -               goto out;
> +       struct page **pvec = NULL;
>
> -       new = kmalloc(sizeof(*mm), GFP_KERNEL);
> -       if (!new)
> -               return -ENOMEM;
> -
> -       kref_init(&new->kref);
> -       new->i915 = to_i915(obj->base.dev);
> -       new->mm = current->mm;
> -       new->mn = NULL;
> -
> -       spin_lock(&i915->mm_lock);
> -       mm = __i915_mm_struct_find(i915, current->mm);
> -       if (!mm) {
> -               hash_add_rcu(i915->mm_structs,
> -                            &new->node,
> -                            (unsigned long)new->mm);
> -               mmgrab(current->mm);
> -               mm = new;
> +       spin_lock(&i915->mm.notifier_lock);
> +       if (!--obj->userptr.page_ref) {
> +               pvec = obj->userptr.pvec;
> +               obj->userptr.pvec = NULL;
>         }
> -       spin_unlock(&i915->mm_lock);
> -       if (mm != new)
> -               kfree(new);
> -
> -out:
> -       obj->userptr.mm = mm;
> -       return ret;
> -}
> -
> -static void
> -__i915_mm_struct_free__worker(struct work_struct *work)
> -{
> -       struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work);
> -
> -       i915_mmu_notifier_free(mm->mn, mm->mm);
> -       mmdrop(mm->mm);
> -       kfree(mm);
> -}
> -
> -static void
> -__i915_mm_struct_free(struct kref *kref)
> -{
> -       struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
> -
> -       spin_lock(&mm->i915->mm_lock);
> -       hash_del_rcu(&mm->node);
> -       spin_unlock(&mm->i915->mm_lock);
> +       GEM_BUG_ON(obj->userptr.page_ref < 0);
> +       spin_unlock(&i915->mm.notifier_lock);
>
> -       INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker);
> -       queue_rcu_work(system_wq, &mm->work);
> -}
> -
> -static void
> -i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
> -{
> -       if (obj->userptr.mm == NULL)
> -               return;
> +       if (pvec) {
> +               const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>
> -       kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free);
> -       obj->userptr.mm = NULL;
> +               unpin_user_pages(pvec, num_pages);
> +               kfree(pvec);
> +       }
>  }
>
> -struct get_pages_work {
> -       struct work_struct work;
> -       struct drm_i915_gem_object *obj;
> -       struct task_struct *task;
> -};
> -
> -static struct sg_table *
> -__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
> -                              struct page **pvec, unsigned long num_pages)
> +static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
>  {
> +       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +       const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
>         unsigned int max_segment = i915_sg_segment_size();
>         struct sg_table *st;
>         unsigned int sg_page_sizes;
>         struct scatterlist *sg;
> +       struct page **pvec;
>         int ret;
>
>         st = kmalloc(sizeof(*st), GFP_KERNEL);
>         if (!st)
> -               return ERR_PTR(-ENOMEM);
> +               return -ENOMEM;
> +
> +       spin_lock(&i915->mm.notifier_lock);
> +       if (GEM_WARN_ON(!obj->userptr.page_ref)) {
> +               spin_unlock(&i915->mm.notifier_lock);
> +               ret = -EFAULT;
> +               goto err_free;
> +       }
> +
> +       obj->userptr.page_ref++;
> +       pvec = obj->userptr.pvec;
> +       spin_unlock(&i915->mm.notifier_lock);
>
>  alloc_table:
>         sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
>                                          num_pages << PAGE_SHIFT, max_segment,
>                                          NULL, 0, GFP_KERNEL);
>         if (IS_ERR(sg)) {
> -               kfree(st);
> -               return ERR_CAST(sg);
> +               ret = PTR_ERR(sg);
> +               goto err;
>         }
>
>         ret = i915_gem_gtt_prepare_pages(obj, st);
> @@ -393,203 +163,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>                         goto alloc_table;
>                 }
>
> -               kfree(st);
> -               return ERR_PTR(ret);
> +               goto err;
>         }
>
>         sg_page_sizes = i915_sg_page_sizes(st->sgl);
>
>         __i915_gem_object_set_pages(obj, st, sg_page_sizes);
>
> -       return st;
> -}
> -
> -static void
> -__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
> -{
> -       struct get_pages_work *work = container_of(_work, typeof(*work), work);
> -       struct drm_i915_gem_object *obj = work->obj;
> -       const unsigned long npages = obj->base.size >> PAGE_SHIFT;
> -       unsigned long pinned;
> -       struct page **pvec;
> -       int ret;
> -
> -       ret = -ENOMEM;
> -       pinned = 0;
> -
> -       pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> -       if (pvec != NULL) {
> -               struct mm_struct *mm = obj->userptr.mm->mm;
> -               unsigned int flags = 0;
> -               int locked = 0;
> -
> -               if (!i915_gem_object_is_readonly(obj))
> -                       flags |= FOLL_WRITE;
> -
> -               ret = -EFAULT;
> -               if (mmget_not_zero(mm)) {
> -                       while (pinned < npages) {
> -                               if (!locked) {
> -                                       mmap_read_lock(mm);
> -                                       locked = 1;
> -                               }
> -                               ret = pin_user_pages_remote
> -                                       (mm,
> -                                        obj->userptr.ptr + pinned * PAGE_SIZE,
> -                                        npages - pinned,
> -                                        flags,
> -                                        pvec + pinned, NULL, &locked);
> -                               if (ret < 0)
> -                                       break;
> -
> -                               pinned += ret;
> -                       }
> -                       if (locked)
> -                               mmap_read_unlock(mm);
> -                       mmput(mm);
> -               }
> -       }
> -
> -       mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
> -       if (obj->userptr.work == &work->work) {
> -               struct sg_table *pages = ERR_PTR(ret);
> -
> -               if (pinned == npages) {
> -                       pages = __i915_gem_userptr_alloc_pages(obj, pvec,
> -                                                              npages);
> -                       if (!IS_ERR(pages)) {
> -                               pinned = 0;
> -                               pages = NULL;
> -                       }
> -               }
> -
> -               obj->userptr.work = ERR_CAST(pages);
> -               if (IS_ERR(pages))
> -                       __i915_gem_userptr_set_active(obj, false);
> -       }
> -       mutex_unlock(&obj->mm.lock);
> -
> -       unpin_user_pages(pvec, pinned);
> -       kvfree(pvec);
> -
> -       i915_gem_object_put(obj);
> -       put_task_struct(work->task);
> -       kfree(work);
> -}
> -
> -static struct sg_table *
> -__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
> -{
> -       struct get_pages_work *work;
> -
> -       /* Spawn a worker so that we can acquire the
> -        * user pages without holding our mutex. Access
> -        * to the user pages requires mmap_lock, and we have
> -        * a strict lock ordering of mmap_lock, struct_mutex -
> -        * we already hold struct_mutex here and so cannot
> -        * call gup without encountering a lock inversion.
> -        *
> -        * Userspace will keep on repeating the operation
> -        * (thanks to EAGAIN) until either we hit the fast
> -        * path or the worker completes. If the worker is
> -        * cancelled or superseded, the task is still run
> -        * but the results ignored. (This leads to
> -        * complications that we may have a stray object
> -        * refcount that we need to be wary of when
> -        * checking for existing objects during creation.)
> -        * If the worker encounters an error, it reports
> -        * that error back to this function through
> -        * obj->userptr.work = ERR_PTR.
> -        */
> -       work = kmalloc(sizeof(*work), GFP_KERNEL);
> -       if (work == NULL)
> -               return ERR_PTR(-ENOMEM);
> -
> -       obj->userptr.work = &work->work;
> -
> -       work->obj = i915_gem_object_get(obj);
> -
> -       work->task = current;
> -       get_task_struct(work->task);
> -
> -       INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
> -       queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
> -
> -       return ERR_PTR(-EAGAIN);
> -}
> -
> -static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
> -{
> -       const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
> -       struct mm_struct *mm = obj->userptr.mm->mm;
> -       struct page **pvec;
> -       struct sg_table *pages;
> -       bool active;
> -       int pinned;
> -       unsigned int gup_flags = 0;
> -
> -       /* If userspace should engineer that these pages are replaced in
> -        * the vma between us binding this page into the GTT and completion
> -        * of rendering... Their loss. If they change the mapping of their
> -        * pages they need to create a new bo to point to the new vma.
> -        *
> -        * However, that still leaves open the possibility of the vma
> -        * being copied upon fork. Which falls under the same userspace
> -        * synchronisation issue as a regular bo, except that this time
> -        * the process may not be expecting that a particular piece of
> -        * memory is tied to the GPU.
> -        *
> -        * Fortunately, we can hook into the mmu_notifier in order to
> -        * discard the page references prior to anything nasty happening
> -        * to the vma (discard or cloning) which should prevent the more
> -        * egregious cases from causing harm.
> -        */
> -
> -       if (obj->userptr.work) {
> -               /* active flag should still be held for the pending work */
> -               if (IS_ERR(obj->userptr.work))
> -                       return PTR_ERR(obj->userptr.work);
> -               else
> -                       return -EAGAIN;
> -       }
> -
> -       pvec = NULL;
> -       pinned = 0;
> -
> -       if (mm == current->mm) {
> -               pvec = kvmalloc_array(num_pages, sizeof(struct page *),
> -                                     GFP_KERNEL |
> -                                     __GFP_NORETRY |
> -                                     __GFP_NOWARN);
> -               if (pvec) {
> -                       /* defer to worker if malloc fails */
> -                       if (!i915_gem_object_is_readonly(obj))
> -                               gup_flags |= FOLL_WRITE;
> -                       pinned = pin_user_pages_fast_only(obj->userptr.ptr,
> -                                                         num_pages, gup_flags,
> -                                                         pvec);
> -               }
> -       }
> -
> -       active = false;
> -       if (pinned < 0) {
> -               pages = ERR_PTR(pinned);
> -               pinned = 0;
> -       } else if (pinned < num_pages) {
> -               pages = __i915_gem_userptr_get_pages_schedule(obj);
> -               active = pages == ERR_PTR(-EAGAIN);
> -       } else {
> -               pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
> -               active = !IS_ERR(pages);
> -       }
> -       if (active)
> -               __i915_gem_userptr_set_active(obj, true);
> -
> -       if (IS_ERR(pages))
> -               unpin_user_pages(pvec, pinned);
> -       kvfree(pvec);
> +       return 0;
>
> -       return PTR_ERR_OR_ZERO(pages);
> +err:
> +       i915_gem_object_userptr_drop_ref(obj);
> +err_free:
> +       kfree(st);
> +       return ret;
>  }
>
>  static void
> @@ -599,9 +186,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>         struct sgt_iter sgt_iter;
>         struct page *page;
>
> -       /* Cancel any inflight work and force them to restart their gup */
> -       obj->userptr.work = NULL;
> -       __i915_gem_userptr_set_active(obj, false);
>         if (!pages)
>                 return;
>
> @@ -641,19 +225,135 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
>                 }
>
>                 mark_page_accessed(page);
> -               unpin_user_page(page);
>         }
>         obj->mm.dirty = false;
>
>         sg_free_table(pages);
>         kfree(pages);
> +
> +       i915_gem_object_userptr_drop_ref(obj);
> +}
> +
> +static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages)
> +{
> +       struct sg_table *pages;
> +       int err;
> +
> +       err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
> +       if (err)
> +               return err;
> +
> +       if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj)))
> +               return -EBUSY;
> +
> +       mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
> +
> +       pages = __i915_gem_object_unset_pages(obj);
> +       if (!IS_ERR_OR_NULL(pages))
> +               i915_gem_userptr_put_pages(obj, pages);
> +
> +       if (get_pages)
> +               err = ____i915_gem_object_get_pages(obj);
> +       mutex_unlock(&obj->mm.lock);
> +
> +       return err;
> +}
> +
> +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj)
> +{
> +       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +       const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
> +       struct page **pvec;
> +       unsigned int gup_flags = 0;
> +       unsigned long notifier_seq;
> +       int pinned, ret;
> +
> +       if (obj->userptr.notifier.mm != current->mm)
> +               return -EFAULT;
> +
> +       ret = i915_gem_object_lock_interruptible(obj, NULL);
> +       if (ret)
> +               return ret;
> +
> +       /* Make sure userptr is unbound for next attempt, so we don't use stale pages. */
> +       ret = i915_gem_object_userptr_unbind(obj, false);
> +       i915_gem_object_unlock(obj);
> +       if (ret)
> +               return ret;
> +
> +       notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier);
> +
> +       pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
> +       if (!pvec)
> +               return -ENOMEM;
> +
> +       if (!i915_gem_object_is_readonly(obj))
> +               gup_flags |= FOLL_WRITE;
> +
> +       pinned = ret = 0;
> +       while (pinned < num_pages) {
> +               ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
> +                                         num_pages - pinned, gup_flags,
> +                                         &pvec[pinned]);
> +               if (ret < 0)
> +                       goto out;
> +
> +               pinned += ret;
> +       }
> +       ret = 0;
> +
> +       spin_lock(&i915->mm.notifier_lock);
> +
> +       if (mmu_interval_read_retry(&obj->userptr.notifier,
> +               !obj->userptr.page_ref ? notifier_seq :
> +               obj->userptr.notifier_seq)) {
> +               ret = -EAGAIN;
> +               goto out_unlock;
> +       }
> +
> +       if (!obj->userptr.page_ref++) {
> +               obj->userptr.pvec = pvec;
> +               obj->userptr.notifier_seq = notifier_seq;
> +
> +               pvec = NULL;
> +       }
> +
> +out_unlock:
> +       spin_unlock(&i915->mm.notifier_lock);
> +
> +out:
> +       if (pvec) {
> +               unpin_user_pages(pvec, pinned);
> +               kvfree(pvec);
> +       }
> +
> +       return ret;
> +}
> +
> +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj)
> +{
> +       if (mmu_interval_read_retry(&obj->userptr.notifier,
> +                                   obj->userptr.notifier_seq)) {
> +               /* We collided with the mmu notifier, need to retry */
> +
> +               return -EAGAIN;
> +       }
> +
> +       return 0;
> +}
> +
> +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj)
> +{
> +       i915_gem_object_userptr_drop_ref(obj);
>  }
>
>  static void
>  i915_gem_userptr_release(struct drm_i915_gem_object *obj)
>  {
> -       i915_gem_userptr_release__mmu_notifier(obj);
> -       i915_gem_userptr_release__mm_struct(obj);
> +       GEM_WARN_ON(obj->userptr.page_ref);
> +
> +       mmu_interval_notifier_remove(&obj->userptr.notifier);
> +       obj->userptr.notifier.mm = NULL;
>  }
>
>  static int
> @@ -686,7 +386,6 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
>         .name = "i915_gem_object_userptr",
>         .flags = I915_GEM_OBJECT_IS_SHRINKABLE |
>                  I915_GEM_OBJECT_NO_MMAP |
> -                I915_GEM_OBJECT_ASYNC_CANCEL |
>                  I915_GEM_OBJECT_IS_PROXY,
>         .get_pages = i915_gem_userptr_get_pages,
>         .put_pages = i915_gem_userptr_put_pages,
> @@ -807,6 +506,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>         i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
>
>         obj->userptr.ptr = args->user_ptr;
> +       obj->userptr.notifier_seq = ULONG_MAX;
>         if (args->flags & I915_USERPTR_READ_ONLY)
>                 i915_gem_object_set_readonly(obj);
>
> @@ -814,9 +514,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>          * at binding. This means that we need to hook into the mmu_notifier
>          * in order to detect if the mmu is destroyed.
>          */
> -       ret = i915_gem_userptr_init__mm_struct(obj);
> -       if (ret == 0)
> -               ret = i915_gem_userptr_init__mmu_notifier(obj);
> +       ret = i915_gem_userptr_init__mmu_notifier(obj);
>         if (ret == 0)
>                 ret = drm_gem_handle_create(file, &obj->base, &handle);
>
> @@ -835,15 +533,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
>  int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
>  {
>  #ifdef CONFIG_MMU_NOTIFIER
> -       spin_lock_init(&dev_priv->mm_lock);
> -       hash_init(dev_priv->mm_structs);
> -
> -       dev_priv->mm.userptr_wq =
> -               alloc_workqueue("i915-userptr-acquire",
> -                               WQ_HIGHPRI | WQ_UNBOUND,
> -                               0);
> -       if (!dev_priv->mm.userptr_wq)
> -               return -ENOMEM;
> +       spin_lock_init(&dev_priv->mm.notifier_lock);
>  #endif
>
>         return 0;
> @@ -851,7 +541,4 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
>
>  void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
>  {
> -#ifdef CONFIG_MMU_NOTIFIER
> -       destroy_workqueue(dev_priv->mm.userptr_wq);
> -#endif
>  }
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 028b7ff06f57..320e0f66122d 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -592,11 +592,10 @@ struct i915_gem_mm {
>
>  #ifdef CONFIG_MMU_NOTIFIER
>         /**
> -        * Workqueue to fault in userptr pages, flushed by the execbuf
> -        * when required but otherwise left to userspace to try again
> -        * on EAGAIN.
> +        * notifier_lock for mmu notifiers; memory may not be allocated
> +        * while holding this lock.
>          */
> -       struct workqueue_struct *userptr_wq;
> +       spinlock_t notifier_lock;
>  #endif
>
>         /* shrinker accounting, also useful for userland debugging */
> @@ -976,8 +975,6 @@ struct drm_i915_private {
>         struct i915_ggtt ggtt; /* VM representing the global address space */
>
>         struct i915_gem_mm mm;
> -       DECLARE_HASHTABLE(mm_structs, 7);
> -       spinlock_t mm_lock;
>
>         /* Kernel Modesetting */
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 84eb2a574813..d138a8b18a96 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1163,10 +1163,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
>  err_unlock:
>         i915_gem_drain_workqueue(dev_priv);
>
> -       if (ret != -EIO) {
> +       if (ret != -EIO)
>                 intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
> -               i915_gem_cleanup_userptr(dev_priv);
> -       }
>
>         if (ret == -EIO) {
>                 /*
> @@ -1223,7 +1221,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
>         intel_wa_list_free(&dev_priv->gt_wa_list);
>
>         intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
> -       i915_gem_cleanup_userptr(dev_priv);
>
>         i915_gem_drain_freed_objects(dev_priv);
>
> --
> 2.30.0.rc1
>

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6 Maarten Lankhorst
@ 2021-01-14 12:06   ` Tvrtko Ursulin
  2021-01-18 16:09     ` Maarten Lankhorst
  0 siblings, 1 reply; 92+ messages in thread
From: Tvrtko Ursulin @ 2021-01-14 12:06 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx; +Cc: Thomas Hellström


On 05/01/2021 15:34, Maarten Lankhorst wrote:
> Instead of sharing pages with breadcrumbs, give each timeline a
> single page. This allows unrelated timelines not to share locks
> any more during command submission.
> 
> As an additional benefit, seqno wraparound no longer requires
> i915_vma_pin, which means we no longer need to worry about a
> potential -EDEADLK at a point where we are ready to submit.
> 
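Schematically, the ownership change looks like this (a simplified sketch;
fields trimmed to the ones the diff below touches, types approximate):

  /* before: timelines shared one page, carved into 64-byte cachelines */
  struct intel_timeline_hwsp {
          struct i915_vma *vma;
          u64 free_bitmap;        /* one bit per free cacheline */
  };

  /* after: every timeline owns a whole page, and each request remembers
   * the seqno slot it was assigned, so wraparound never needs a new pin */
  struct intel_timeline {
          struct i915_vma *hwsp_ggtt;     /* private page */
          void *hwsp_map;
          const u32 *hwsp_seqno;          /* current slot */
          u32 hwsp_offset;
  };
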
> Changes since v1:
> - Fix erroneous i915_vma_acquire that should be a i915_vma_release (ickle).
> - Extra check for completion in intel_read_hwsp().
> Changes since v2:
> - Fix inconsistent indent in hwsp_alloc() (kbuild)
> - memset entire cacheline to 0.
> Changes since v3:
> - Do same in intel_timeline_reset_seqno(), and clflush for good measure.
> Changes since v4:
> - Use refcounting on timeline, instead of relying on i915_active.
> - Fix waiting on kernel requests.
> Changes since v5:
> - Bump amount of slots to maximum (256), for best wraparounds.
> - Add hwsp_offset to i915_request to fix potential wraparound hang.
> - Ensure timeline wrap test works with the changes.
> - Assign hwsp in intel_timeline_read_hwsp() within the rcu lock to
>    fix a hang.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> #v1
> Reported-by: kernel test robot <lkp@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
>   drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
>   drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  12 +-
>   drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 +
>   drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
>   drivers/gpu/drm/i915/gt/intel_timeline.c      | 422 ++++--------------
>   .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
>   drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   5 +-
>   drivers/gpu/drm/i915/gt/selftest_timeline.c   |  83 ++--
>   drivers/gpu/drm/i915/i915_request.c           |   4 -
>   drivers/gpu/drm/i915/i915_request.h           |  17 +-
>   11 files changed, 160 insertions(+), 415 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
> index b491a64919c8..9646200d2792 100644
> --- a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
> @@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
>   				   int flush, int post)
>   {
>   	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
> -	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
> +	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>   
>   	*cs++ = MI_FLUSH;
>   
> diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
> index ce38d1bcaba3..dd094e21bb51 100644
> --- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
> @@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>   		 PIPE_CONTROL_DC_FLUSH_ENABLE |
>   		 PIPE_CONTROL_QW_WRITE |
>   		 PIPE_CONTROL_CS_STALL);
> -	*cs++ = i915_request_active_timeline(rq)->hwsp_offset |
> +	*cs++ = i915_request_active_offset(rq) |

"Active offset" is maybe a bit non-descriptive. How about 
i915_request_hwsp_seqno()?

>   		PIPE_CONTROL_GLOBAL_GTT;
>   	*cs++ = rq->fence.seqno;
>   
> @@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>   		 PIPE_CONTROL_QW_WRITE |
>   		 PIPE_CONTROL_GLOBAL_GTT_IVB |
>   		 PIPE_CONTROL_CS_STALL);
> -	*cs++ = i915_request_active_timeline(rq)->hwsp_offset;
> +	*cs++ = i915_request_active_offset(rq);
>   	*cs++ = rq->fence.seqno;
>   
>   	*cs++ = MI_USER_INTERRUPT;
> @@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>   u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>   {
>   	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
> -	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
> +	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>   
>   	*cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
>   	*cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
> @@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>   	int i;
>   
>   	GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
> -	GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
> +	GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>   
>   	*cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
>   		MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
> diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
> index 1972dd5dca00..33eaa0203b1d 100644
> --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
> @@ -338,15 +338,13 @@ static inline u32 preempt_address(struct intel_engine_cs *engine)
>   
>   static u32 hwsp_offset(const struct i915_request *rq)
>   {
> -	const struct intel_timeline_cacheline *cl;
> +	const struct intel_timeline *tl;
>   
> -	/* Before the request is executed, the timeline/cachline is fixed */
> +	/* Before the request is executed, the timeline is fixed */
> +	tl = rcu_dereference_protected(rq->timeline,
> +				       !i915_request_signaled(rq));
>   
> -	cl = rcu_dereference_protected(rq->hwsp_cacheline, 1);
> -	if (cl)
> -		return cl->ggtt_offset;
> -
> -	return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
> +	return page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno);

I don't get this.

tl->hwsp_offset is an offset, not a vaddr. What does page_mask_bits on it 
achieve?

Then rq->hwsp_seqno is a vaddr assigned as:

   rq->hwsp_seqno = tl->hwsp_seqno;

And elsewhere in the patch:

+	timeline->hwsp_map = vaddr;
+	seqno = vaddr + timeline->hwsp_offset;
+	WRITE_ONCE(*seqno, 0);
+	timeline->hwsp_seqno = seqno;

So page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno) 
seems to me to reduce to 0 + tl->hwsp_offset, or tl->hwsp_offset.

Maybe I am blind..
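
To make the arithmetic concrete, a hypothetical example (assuming
tl->hwsp_map is page-aligned, tl->hwsp_offset is a GGTT offset whose low
bits select the 8-byte slot, and made-up numbers):

  tl->hwsp_offset = 0x12040;              /* page base 0x12000, slot 0x40 */
  rq->hwsp_seqno  = tl->hwsp_map + 0x40;  /* vaddr of the same slot */

  page_mask_bits(tl->hwsp_offset) == 0x12000;
  offset_in_page(rq->hwsp_seqno)  == 0x40;
  /* sum == 0x12040 == tl->hwsp_offset -- while the timeline still sits on
   * the slot the request was built with; if tl->hwsp_offset has since
   * wrapped to 0x12048, the sum still yields the request's own slot. */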

>   }
>   
>   int gen8_emit_init_breadcrumb(struct i915_request *rq)
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> index 1847d3c2ea99..1db608d1f26f 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> @@ -754,6 +754,7 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
>   	frame->rq.engine = engine;
>   	frame->rq.context = ce;
>   	rcu_assign_pointer(frame->rq.timeline, ce->timeline);
> +	frame->rq.hwsp_seqno = ce->timeline->hwsp_seqno;
>   
>   	frame->ring.vaddr = frame->cs;
>   	frame->ring.size = sizeof(frame->cs);
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
> index a83d3e18254d..f7dab4fc926c 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
> @@ -39,10 +39,6 @@ struct intel_gt {
>   	struct intel_gt_timelines {
>   		spinlock_t lock; /* protects active_list */
>   		struct list_head active_list;
> -
> -		/* Pack multiple timelines' seqnos into the same page */
> -		spinlock_t hwsp_lock;
> -		struct list_head hwsp_free_list;
>   	} timelines;
>   
>   	struct intel_gt_requests {
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
> index 7fe05918a76e..3d43f15f34f3 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
> @@ -12,21 +12,9 @@
>   #include "intel_ring.h"
>   #include "intel_timeline.h"
>   
> -#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
> -#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))
> +#define TIMELINE_SEQNO_BYTES 8
>   
> -#define CACHELINE_BITS 6
> -#define CACHELINE_FREE CACHELINE_BITS
> -
> -struct intel_timeline_hwsp {
> -	struct intel_gt *gt;
> -	struct intel_gt_timelines *gt_timelines;
> -	struct list_head free_link;
> -	struct i915_vma *vma;
> -	u64 free_bitmap;
> -};
> -
> -static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
> +static struct i915_vma *hwsp_alloc(struct intel_gt *gt)
>   {
>   	struct drm_i915_private *i915 = gt->i915;
>   	struct drm_i915_gem_object *obj;
> @@ -45,220 +33,59 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
>   	return vma;
>   }
>   
> -static struct i915_vma *
> -hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
> -{
> -	struct intel_gt_timelines *gt = &timeline->gt->timelines;
> -	struct intel_timeline_hwsp *hwsp;
> -
> -	BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
> -
> -	spin_lock_irq(&gt->hwsp_lock);
> -
> -	/* hwsp_free_list only contains HWSP that have available cachelines */
> -	hwsp = list_first_entry_or_null(&gt->hwsp_free_list,
> -					typeof(*hwsp), free_link);
> -	if (!hwsp) {
> -		struct i915_vma *vma;
> -
> -		spin_unlock_irq(&gt->hwsp_lock);
> -
> -		hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL);
> -		if (!hwsp)
> -			return ERR_PTR(-ENOMEM);
> -
> -		vma = __hwsp_alloc(timeline->gt);
> -		if (IS_ERR(vma)) {
> -			kfree(hwsp);
> -			return vma;
> -		}
> -
> -		GT_TRACE(timeline->gt, "new HWSP allocated\n");
> -
> -		vma->private = hwsp;
> -		hwsp->gt = timeline->gt;
> -		hwsp->vma = vma;
> -		hwsp->free_bitmap = ~0ull;
> -		hwsp->gt_timelines = gt;
> -
> -		spin_lock_irq(&gt->hwsp_lock);
> -		list_add(&hwsp->free_link, &gt->hwsp_free_list);
> -	}
> -
> -	GEM_BUG_ON(!hwsp->free_bitmap);
> -	*cacheline = __ffs64(hwsp->free_bitmap);
> -	hwsp->free_bitmap &= ~BIT_ULL(*cacheline);
> -	if (!hwsp->free_bitmap)
> -		list_del(&hwsp->free_link);
> -
> -	spin_unlock_irq(&gt->hwsp_lock);
> -
> -	GEM_BUG_ON(hwsp->vma->private != hwsp);
> -	return hwsp->vma;
> -}
> -
> -static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
> -{
> -	struct intel_gt_timelines *gt = hwsp->gt_timelines;
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&gt->hwsp_lock, flags);
> -
> -	/* As a cacheline becomes available, publish the HWSP on the freelist */
> -	if (!hwsp->free_bitmap)
> -		list_add_tail(&hwsp->free_link, &gt->hwsp_free_list);
> -
> -	GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap));
> -	hwsp->free_bitmap |= BIT_ULL(cacheline);
> -
> -	/* And if no one is left using it, give the page back to the system */
> -	if (hwsp->free_bitmap == ~0ull) {
> -		i915_vma_put(hwsp->vma);
> -		list_del(&hwsp->free_link);
> -		kfree(hwsp);
> -	}
> -
> -	spin_unlock_irqrestore(&gt->hwsp_lock, flags);
> -}
> -
> -static void __rcu_cacheline_free(struct rcu_head *rcu)
> -{
> -	struct intel_timeline_cacheline *cl =
> -		container_of(rcu, typeof(*cl), rcu);
> -
> -	/* Must wait until after all *rq->hwsp are complete before removing */
> -	i915_gem_object_unpin_map(cl->hwsp->vma->obj);
> -	__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
> -
> -	i915_active_fini(&cl->active);
> -	kfree(cl);
> -}
> -
> -static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
> -{
> -	GEM_BUG_ON(!i915_active_is_idle(&cl->active));
> -	call_rcu(&cl->rcu, __rcu_cacheline_free);
> -}
> -
>   __i915_active_call
> -static void __cacheline_retire(struct i915_active *active)
> +static void __timeline_retire(struct i915_active *active)
>   {
> -	struct intel_timeline_cacheline *cl =
> -		container_of(active, typeof(*cl), active);
> +	struct intel_timeline *tl =
> +		container_of(active, typeof(*tl), active);
>   
> -	i915_vma_unpin(cl->hwsp->vma);
> -	if (ptr_test_bit(cl->vaddr, CACHELINE_FREE))
> -		__idle_cacheline_free(cl);
> +	i915_vma_unpin(tl->hwsp_ggtt);
> +	intel_timeline_put(tl);
>   }
>   
> -static int __cacheline_active(struct i915_active *active)
> +static int __timeline_active(struct i915_active *active)
>   {
> -	struct intel_timeline_cacheline *cl =
> -		container_of(active, typeof(*cl), active);
> +	struct intel_timeline *tl =
> +		container_of(active, typeof(*tl), active);
>   
> -	__i915_vma_pin(cl->hwsp->vma);
> +	__i915_vma_pin(tl->hwsp_ggtt);
> +	intel_timeline_get(tl);
>   	return 0;
>   }
>   
> -static struct intel_timeline_cacheline *
> -cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
> -{
> -	struct intel_timeline_cacheline *cl;
> -	void *vaddr;
> -
> -	GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));
> -
> -	cl = kmalloc(sizeof(*cl), GFP_KERNEL);
> -	if (!cl)
> -		return ERR_PTR(-ENOMEM);
> -
> -	vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB);
> -	if (IS_ERR(vaddr)) {
> -		kfree(cl);
> -		return ERR_CAST(vaddr);
> -	}
> -
> -	cl->hwsp = hwsp;
> -	cl->vaddr = page_pack_bits(vaddr, cacheline);
> -
> -	i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
> -
> -	return cl;
> -}
> -
> -static void cacheline_acquire(struct intel_timeline_cacheline *cl,
> -			      u32 ggtt_offset)
> -{
> -	if (!cl)
> -		return;
> -
> -	cl->ggtt_offset = ggtt_offset;
> -	i915_active_acquire(&cl->active);
> -}
> -
> -static void cacheline_release(struct intel_timeline_cacheline *cl)
> -{
> -	if (cl)
> -		i915_active_release(&cl->active);
> -}
> -
> -static void cacheline_free(struct intel_timeline_cacheline *cl)
> -{
> -	if (!i915_active_acquire_if_busy(&cl->active)) {
> -		__idle_cacheline_free(cl);
> -		return;
> -	}
> -
> -	GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
> -	cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);
> -
> -	i915_active_release(&cl->active);
> -}
> -
>   static int intel_timeline_init(struct intel_timeline *timeline,
>   			       struct intel_gt *gt,
>   			       struct i915_vma *hwsp,
>   			       unsigned int offset)
>   {
>   	void *vaddr;
> +	u32 *seqno;
>   
>   	kref_init(&timeline->kref);
>   	atomic_set(&timeline->pin_count, 0);
>   
>   	timeline->gt = gt;
>   
> -	timeline->has_initial_breadcrumb = !hwsp;
> -	timeline->hwsp_cacheline = NULL;
> -
> -	if (!hwsp) {
> -		struct intel_timeline_cacheline *cl;
> -		unsigned int cacheline;
> -
> -		hwsp = hwsp_alloc(timeline, &cacheline);
> +	if (hwsp) {
> +		timeline->hwsp_offset = offset;
> +		timeline->hwsp_ggtt = i915_vma_get(hwsp);
> +	} else {
> +		timeline->has_initial_breadcrumb = true;
> +		hwsp = hwsp_alloc(gt);
>   		if (IS_ERR(hwsp))
>   			return PTR_ERR(hwsp);
> -
> -		cl = cacheline_alloc(hwsp->private, cacheline);
> -		if (IS_ERR(cl)) {
> -			__idle_hwsp_free(hwsp->private, cacheline);
> -			return PTR_ERR(cl);
> -		}
> -
> -		timeline->hwsp_cacheline = cl;
> -		timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
> -
> -		vaddr = page_mask_bits(cl->vaddr);
> -	} else {
> -		timeline->hwsp_offset = offset;
> -		vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
> -		if (IS_ERR(vaddr))
> -			return PTR_ERR(vaddr);
> +		timeline->hwsp_ggtt = hwsp;
>   	}
>   
> -	timeline->hwsp_seqno =
> -		memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES);
> +	vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
> +	if (IS_ERR(vaddr))
> +		return PTR_ERR(vaddr);
> +
> +	timeline->hwsp_map = vaddr;
> +	seqno = vaddr + timeline->hwsp_offset;
> +	WRITE_ONCE(*seqno, 0);
> +	timeline->hwsp_seqno = seqno;
>   
> -	timeline->hwsp_ggtt = i915_vma_get(hwsp);
>   	GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
>   
>   	timeline->fence_context = dma_fence_context_alloc(1);
> @@ -269,6 +96,7 @@ static int intel_timeline_init(struct intel_timeline *timeline,
>   	INIT_LIST_HEAD(&timeline->requests);
>   
>   	i915_syncmap_init(&timeline->sync);
> +	i915_active_init(&timeline->active, __timeline_active, __timeline_retire);
>   
>   	return 0;
>   }
> @@ -279,23 +107,18 @@ void intel_gt_init_timelines(struct intel_gt *gt)
>   
>   	spin_lock_init(&timelines->lock);
>   	INIT_LIST_HEAD(&timelines->active_list);
> -
> -	spin_lock_init(&timelines->hwsp_lock);
> -	INIT_LIST_HEAD(&timelines->hwsp_free_list);
>   }
>   
> -static void intel_timeline_fini(struct intel_timeline *timeline)
> +static void intel_timeline_fini(struct rcu_head *rcu)
>   {
> -	GEM_BUG_ON(atomic_read(&timeline->pin_count));
> -	GEM_BUG_ON(!list_empty(&timeline->requests));
> -	GEM_BUG_ON(timeline->retire);
> +	struct intel_timeline *timeline =
> +		container_of(rcu, struct intel_timeline, rcu);
>   
> -	if (timeline->hwsp_cacheline)
> -		cacheline_free(timeline->hwsp_cacheline);
> -	else
> -		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
> +	i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>   
>   	i915_vma_put(timeline->hwsp_ggtt);
> +	i915_active_fini(&timeline->active);
> +	kfree(timeline);
>   }
>   
>   struct intel_timeline *
> @@ -361,9 +184,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>   	GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
>   		 tl->fence_context, tl->hwsp_offset);
>   
> -	cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
> +	i915_active_acquire(&tl->active);
>   	if (atomic_fetch_inc(&tl->pin_count)) {
> -		cacheline_release(tl->hwsp_cacheline);
> +		i915_active_release(&tl->active);
>   		__i915_vma_unpin(tl->hwsp_ggtt);
>   	}
>   
> @@ -372,9 +195,13 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>   
>   void intel_timeline_reset_seqno(const struct intel_timeline *tl)
>   {
> +	u32 *hwsp_seqno = (u32 *)tl->hwsp_seqno;
>   	/* Must be pinned to be writable, and no requests in flight. */
>   	GEM_BUG_ON(!atomic_read(&tl->pin_count));
> -	WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
> +
> +	memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
> +	WRITE_ONCE(*hwsp_seqno, tl->seqno);
> +	clflush(hwsp_seqno);
>   }
>   
>   void intel_timeline_enter(struct intel_timeline *tl)
> @@ -450,106 +277,23 @@ static u32 timeline_advance(struct intel_timeline *tl)
>   	return tl->seqno += 1 + tl->has_initial_breadcrumb;
>   }
>   
> -static void timeline_rollback(struct intel_timeline *tl)
> -{
> -	tl->seqno -= 1 + tl->has_initial_breadcrumb;
> -}
> -
>   static noinline int
>   __intel_timeline_get_seqno(struct intel_timeline *tl,
> -			   struct i915_request *rq,
>   			   u32 *seqno)
>   {
> -	struct intel_timeline_cacheline *cl;
> -	unsigned int cacheline;
> -	struct i915_vma *vma;
> -	void *vaddr;
> -	int err;
> -
> -	might_lock(&tl->gt->ggtt->vm.mutex);
> -	GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context);
> -
> -	/*
> -	 * If there is an outstanding GPU reference to this cacheline,
> -	 * such as it being sampled by a HW semaphore on another timeline,
> -	 * we cannot wraparound our seqno value (the HW semaphore does
> -	 * a strict greater-than-or-equals compare, not i915_seqno_passed).
> -	 * So if the cacheline is still busy, we must detach ourselves
> -	 * from it and leave it inflight alongside its users.
> -	 *
> -	 * However, if nobody is watching and we can guarantee that nobody
> -	 * will, we could simply reuse the same cacheline.
> -	 *
> -	 * if (i915_active_request_is_signaled(&tl->last_request) &&
> -	 *     i915_active_is_signaled(&tl->hwsp_cacheline->active))
> -	 *	return 0;
> -	 *
> -	 * That seems unlikely for a busy timeline that needed to wrap in
> -	 * the first place, so just replace the cacheline.
> -	 */
> -
> -	vma = hwsp_alloc(tl, &cacheline);
> -	if (IS_ERR(vma)) {
> -		err = PTR_ERR(vma);
> -		goto err_rollback;
> -	}
> -
> -	err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
> -	if (err) {
> -		__idle_hwsp_free(vma->private, cacheline);
> -		goto err_rollback;
> -	}
> +	u32 next_ofs = offset_in_page(tl->hwsp_offset + TIMELINE_SEQNO_BYTES);
>   
> -	cl = cacheline_alloc(vma->private, cacheline);
> -	if (IS_ERR(cl)) {
> -		err = PTR_ERR(cl);
> -		__idle_hwsp_free(vma->private, cacheline);
> -		goto err_unpin;
> -	}
> -	GEM_BUG_ON(cl->hwsp->vma != vma);
> -
> -	/*
> -	 * Attach the old cacheline to the current request, so that we only
> -	 * free it after the current request is retired, which ensures that
> -	 * all writes into the cacheline from previous requests are complete.
> -	 */
> -	err = i915_active_ref(&tl->hwsp_cacheline->active,
> -			      tl->fence_context,
> -			      &rq->fence);
> -	if (err)
> -		goto err_cacheline;
> +	/* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
> +	if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
> +		next_ofs = offset_in_page(next_ofs + BIT(5));
>   
> -	cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */
> -	cacheline_free(tl->hwsp_cacheline);
> -
> -	i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */
> -	i915_vma_put(tl->hwsp_ggtt);
> -
> -	tl->hwsp_ggtt = i915_vma_get(vma);
> -
> -	vaddr = page_mask_bits(cl->vaddr);
> -	tl->hwsp_offset = cacheline * CACHELINE_BYTES;
> -	tl->hwsp_seqno =
> -		memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES);
> -
> -	tl->hwsp_offset += i915_ggtt_offset(vma);
> -	GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
> -		 tl->fence_context, tl->hwsp_offset);
> -
> -	cacheline_acquire(cl, tl->hwsp_offset);
> -	tl->hwsp_cacheline = cl;
> +	tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + next_ofs;
> +	tl->hwsp_seqno = tl->hwsp_map + next_ofs;
> +	intel_timeline_reset_seqno(tl);
>   
>   	*seqno = timeline_advance(tl);
>   	GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno));
>   	return 0;
> -
> -err_cacheline:
> -	cacheline_free(cl);
> -err_unpin:
> -	i915_vma_unpin(vma);
> -err_rollback:
> -	timeline_rollback(tl);
> -	return err;
>   }
>   
>   int intel_timeline_get_seqno(struct intel_timeline *tl,
> @@ -559,51 +303,52 @@ int intel_timeline_get_seqno(struct intel_timeline *tl,
>   	*seqno = timeline_advance(tl);
>   
>   	/* Replace the HWSP on wraparound for HW semaphores */
> -	if (unlikely(!*seqno && tl->hwsp_cacheline))
> -		return __intel_timeline_get_seqno(tl, rq, seqno);
> +	if (unlikely(!*seqno && tl->has_initial_breadcrumb))
> +		return __intel_timeline_get_seqno(tl, seqno);
>   
>   	return 0;
>   }
>   
> -static int cacheline_ref(struct intel_timeline_cacheline *cl,
> -			 struct i915_request *rq)
> -{
> -	return i915_active_add_request(&cl->active, rq);
> -}
> -
>   int intel_timeline_read_hwsp(struct i915_request *from,
>   			     struct i915_request *to,
>   			     u32 *hwsp)
>   {
> -	struct intel_timeline_cacheline *cl;
> +	struct intel_timeline *tl;
>   	int err;
>   
> -	GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline));
> -
>   	rcu_read_lock();
> -	cl = rcu_dereference(from->hwsp_cacheline);
> -	if (i915_request_completed(from)) /* confirm cacheline is valid */
> -		goto unlock;
> -	if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
> -		goto unlock; /* seqno wrapped and completed! */
> -	if (unlikely(i915_request_completed(from)))
> -		goto release;
> +	tl = rcu_dereference(from->timeline);
> +	if (i915_request_completed(from) ||
> +	    !i915_active_acquire_if_busy(&tl->active))
> +		tl = NULL;
> +
> +	if (tl) {
> +		/* hwsp_offset may wraparound, so use from->hwsp_seqno */
> +		*hwsp = i915_ggtt_offset(tl->hwsp_ggtt) +
> +			offset_in_page(from->hwsp_seqno);
> +	}
> +
> +	/* ensure we wait on the right request, if not, we completed */
> +	if (tl && i915_request_completed(from)) {
> +		i915_active_release(&tl->active);
> +		tl = NULL;
> +	}
>   	rcu_read_unlock();
>   
> -	err = cacheline_ref(cl, to);
> -	if (err)
> +	if (!tl)
> +		return 1;
> +
> +	/* Can't do semaphore waits on kernel context */
> +	if (!tl->has_initial_breadcrumb) {
> +		err = -EINVAL;
>   		goto out;
> +	}
> +
> +	err = i915_active_add_request(&tl->active, to);
>   
> -	*hwsp = cl->ggtt_offset;
>   out:
> -	i915_active_release(&cl->active);
> +	i915_active_release(&tl->active);
>   	return err;
> -
> -release:
> -	i915_active_release(&cl->active);
> -unlock:
> -	rcu_read_unlock();
> -	return 1;
>   }
>   
>   void intel_timeline_unpin(struct intel_timeline *tl)
> @@ -612,8 +357,7 @@ void intel_timeline_unpin(struct intel_timeline *tl)
>   	if (!atomic_dec_and_test(&tl->pin_count))
>   		return;
>   
> -	cacheline_release(tl->hwsp_cacheline);
> -
> +	i915_active_release(&tl->active);
>   	__i915_vma_unpin(tl->hwsp_ggtt);
>   }
>   
> @@ -622,8 +366,11 @@ void __intel_timeline_free(struct kref *kref)
>   	struct intel_timeline *timeline =
>   		container_of(kref, typeof(*timeline), kref);
>   
> -	intel_timeline_fini(timeline);
> -	kfree_rcu(timeline, rcu);
> +	GEM_BUG_ON(atomic_read(&timeline->pin_count));
> +	GEM_BUG_ON(!list_empty(&timeline->requests));
> +	GEM_BUG_ON(timeline->retire);
> +
> +	call_rcu(&timeline->rcu, intel_timeline_fini);
>   }
>   
>   void intel_gt_fini_timelines(struct intel_gt *gt)
> @@ -631,7 +378,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt)
>   	struct intel_gt_timelines *timelines = &gt->timelines;
>   
>   	GEM_BUG_ON(!list_empty(&timelines->active_list));
> -	GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
>   }
>   
>   void intel_gt_show_timelines(struct intel_gt *gt,
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
> index e360f50706bf..9c1d767a867f 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
> @@ -18,7 +18,6 @@
>   struct i915_vma;
>   struct i915_syncmap;
>   struct intel_gt;
> -struct intel_timeline_hwsp;
>   
>   struct intel_timeline {
>   	u64 fence_context;
> @@ -45,12 +44,11 @@ struct intel_timeline {
>   	atomic_t pin_count;
>   	atomic_t active_count;
>   
> +	void *hwsp_map;
>   	const u32 *hwsp_seqno;
>   	struct i915_vma *hwsp_ggtt;
>   	u32 hwsp_offset;
>   
> -	struct intel_timeline_cacheline *hwsp_cacheline;
> -
>   	bool has_initial_breadcrumb;
>   
>   	/**
> @@ -67,6 +65,8 @@ struct intel_timeline {
>   	 */
>   	struct i915_active_fence last_request;
>   
> +	struct i915_active active;
> +
>   	/** A chain of completed timelines ready for early retirement. */
>   	struct intel_timeline *retire;
>   
> @@ -90,15 +90,4 @@ struct intel_timeline {
>   	struct rcu_head rcu;
>   };
>   
> -struct intel_timeline_cacheline {
> -	struct i915_active active;
> -
> -	struct intel_timeline_hwsp *hwsp;
> -	void *vaddr;
> -
> -	u32 ggtt_offset;
> -
> -	struct rcu_head rcu;
> -};
> -
>   #endif /* __I915_TIMELINE_TYPES_H__ */
> diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
> index 439c8984f5fa..fd9414b0d1bd 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
> @@ -42,6 +42,9 @@ static int perf_end(struct intel_gt *gt)
>   
>   static int write_timestamp(struct i915_request *rq, int slot)
>   {
> +	struct intel_timeline *tl =
> +		rcu_dereference_protected(rq->timeline,
> +					  !i915_request_signaled(rq));
>   	u32 cmd;
>   	u32 *cs;
>   
> @@ -54,7 +57,7 @@ static int write_timestamp(struct i915_request *rq, int slot)
>   		cmd++;
>   	*cs++ = cmd;
>   	*cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base));
> -	*cs++ = i915_request_timeline(rq)->hwsp_offset + slot * sizeof(u32);
> +	*cs++ = tl->hwsp_offset + slot * sizeof(u32);
>   	*cs++ = 0;
>   
>   	intel_ring_advance(rq, cs);
> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> index 6f3a3687ef0f..6e32e91cdab4 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> @@ -666,7 +666,7 @@ static int live_hwsp_wrap(void *arg)
>   	if (IS_ERR(tl))
>   		return PTR_ERR(tl);
>   
> -	if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
> +	if (!tl->has_initial_breadcrumb)
>   		goto out_free;
>   
>   	err = intel_timeline_pin(tl, NULL);
> @@ -833,12 +833,26 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
>   	return 0;
>   }
>   
> +static void switch_tl_lock(struct i915_request *from, struct i915_request *to)
> +{
> +	/* some light mutex juggling required; think co-routines */
> +
> +	if (from) {
> +		lockdep_unpin_lock(&from->context->timeline->mutex, from->cookie);
> +		mutex_unlock(&from->context->timeline->mutex);
> +	}
> +
> +	if (to) {
> +		mutex_lock(&to->context->timeline->mutex);
> +		to->cookie = lockdep_pin_lock(&to->context->timeline->mutex);
> +	}
> +}
> +
>   static int create_watcher(struct hwsp_watcher *w,
>   			  struct intel_engine_cs *engine,
>   			  int ringsz)
>   {
>   	struct intel_context *ce;
> -	struct intel_timeline *tl;
>   
>   	ce = intel_context_create(engine);
>   	if (IS_ERR(ce))
> @@ -851,11 +865,8 @@ static int create_watcher(struct hwsp_watcher *w,
>   		return PTR_ERR(w->rq);
>   
>   	w->addr = i915_ggtt_offset(w->vma);
> -	tl = w->rq->context->timeline;
>   
> -	/* some light mutex juggling required; think co-routines */
> -	lockdep_unpin_lock(&tl->mutex, w->rq->cookie);
> -	mutex_unlock(&tl->mutex);
> +	switch_tl_lock(w->rq, NULL);
>   
>   	return 0;
>   }
> @@ -864,15 +875,13 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
>   			 bool (*op)(u32 hwsp, u32 seqno))
>   {
>   	struct i915_request *rq = fetch_and_zero(&w->rq);
> -	struct intel_timeline *tl = rq->context->timeline;
>   	u32 offset, end;
>   	int err;
>   
>   	GEM_BUG_ON(w->addr - i915_ggtt_offset(w->vma) > w->vma->size);
>   
>   	i915_request_get(rq);
> -	mutex_lock(&tl->mutex);
> -	rq->cookie = lockdep_pin_lock(&tl->mutex);
> +	switch_tl_lock(NULL, rq);
>   	i915_request_add(rq);
>   
>   	if (i915_request_wait(rq, 0, HZ) < 0) {
> @@ -901,10 +910,7 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
>   static void cleanup_watcher(struct hwsp_watcher *w)
>   {
>   	if (w->rq) {
> -		struct intel_timeline *tl = w->rq->context->timeline;
> -
> -		mutex_lock(&tl->mutex);
> -		w->rq->cookie = lockdep_pin_lock(&tl->mutex);
> +		switch_tl_lock(NULL, w->rq);
>   
>   		i915_request_add(w->rq);
>   	}
> @@ -942,7 +948,7 @@ static struct i915_request *wrap_timeline(struct i915_request *rq)
>   	}
>   
>   	i915_request_put(rq);
> -	rq = intel_context_create_request(ce);
> +	rq = i915_request_create(ce);
>   	if (IS_ERR(rq))
>   		return rq;
>   
> @@ -977,7 +983,7 @@ static int live_hwsp_read(void *arg)
>   	if (IS_ERR(tl))
>   		return PTR_ERR(tl);
>   
> -	if (!tl->hwsp_cacheline)
> +	if (!tl->has_initial_breadcrumb)
>   		goto out_free;
>   
>   	for (i = 0; i < ARRAY_SIZE(watcher); i++) {
> @@ -999,7 +1005,7 @@ static int live_hwsp_read(void *arg)
>   		do {
>   			struct i915_sw_fence *submit;
>   			struct i915_request *rq;
> -			u32 hwsp;
> +			u32 hwsp, dummy;
>   
>   			submit = heap_fence_create(GFP_KERNEL);
>   			if (!submit) {
> @@ -1017,14 +1023,26 @@ static int live_hwsp_read(void *arg)
>   				goto out;
>   			}
>   
> -			/* Skip to the end, saving 30 minutes of nops */
> -			tl->seqno = -10u + 2 * (count & 3);
> -			WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>   			ce->timeline = intel_timeline_get(tl);
>   
> -			rq = intel_context_create_request(ce);
> +			/* Ensure timeline is mapped, done during first pin */
> +			err = intel_context_pin(ce);
> +			if (err) {
> +				intel_context_put(ce);
> +				goto out;
> +			}
> +
> +			/*
> +			 * Start at a new wrap, and set seqno right before another wrap,
> +			 * saving 30 minutes of nops
> +			 */
> +			tl->seqno = -12u + 2 * (count & 3);
> +			__intel_timeline_get_seqno(tl, &dummy);
> +
> +			rq = i915_request_create(ce);
>   			if (IS_ERR(rq)) {
>   				err = PTR_ERR(rq);
> +				intel_context_unpin(ce);
>   				intel_context_put(ce);
>   				goto out;
>   			}
> @@ -1034,32 +1052,35 @@ static int live_hwsp_read(void *arg)
>   							    GFP_KERNEL);
>   			if (err < 0) {
>   				i915_request_add(rq);
> +				intel_context_unpin(ce);
>   				intel_context_put(ce);
>   				goto out;
>   			}
>   
> -			mutex_lock(&watcher[0].rq->context->timeline->mutex);
> +			switch_tl_lock(rq, watcher[0].rq);
>   			err = intel_timeline_read_hwsp(rq, watcher[0].rq, &hwsp);
>   			if (err == 0)
>   				err = emit_read_hwsp(watcher[0].rq, /* before */
>   						     rq->fence.seqno, hwsp,
>   						     &watcher[0].addr);
> -			mutex_unlock(&watcher[0].rq->context->timeline->mutex);
> +			switch_tl_lock(watcher[0].rq, rq);

Ugh, this is some advanced stuff. Trickiness level out of 10?

The change is "touching" two mutexes instead of one.

Do you have a branch somewhere I could check out at this stage and look 
at the code to try and figure it out more easily?
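
If I read it right, the helper is just a lock handoff. A standalone
analogue of what I think it does -- pthread stand-ins and invented
names, purely to check my understanding, not the driver code:

  #include <pthread.h>

  struct tl {
          pthread_mutex_t mutex;
  };

  /* Never hold both timeline locks at once: drop 'from' before
   * taking 'to', mirroring switch_tl_lock() above. */
  static void switch_lock(struct tl *from, struct tl *to)
  {
          if (from)
                  pthread_mutex_unlock(&from->mutex);
          if (to)
                  pthread_mutex_lock(&to->mutex);
  }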

>   			if (err) {
>   				i915_request_add(rq);
> +				intel_context_unpin(ce);
>   				intel_context_put(ce);
>   				goto out;
>   			}
>   
> -			mutex_lock(&watcher[1].rq->context->timeline->mutex);
> +			switch_tl_lock(rq, watcher[1].rq);
>   			err = intel_timeline_read_hwsp(rq, watcher[1].rq, &hwsp);
>   			if (err == 0)
>   				err = emit_read_hwsp(watcher[1].rq, /* after */
>   						     rq->fence.seqno, hwsp,
>   						     &watcher[1].addr);
> -			mutex_unlock(&watcher[1].rq->context->timeline->mutex);
> +			switch_tl_lock(watcher[1].rq, rq);
>   			if (err) {
>   				i915_request_add(rq);
> +				intel_context_unpin(ce);
>   				intel_context_put(ce);
>   				goto out;
>   			}
> @@ -1068,6 +1089,7 @@ static int live_hwsp_read(void *arg)
>   			i915_request_add(rq);
>   
>   			rq = wrap_timeline(rq);
> +			intel_context_unpin(ce);
>   			intel_context_put(ce);
>   			if (IS_ERR(rq)) {
>   				err = PTR_ERR(rq);
> @@ -1107,8 +1129,8 @@ static int live_hwsp_read(void *arg)
>   			    3 * watcher[1].rq->ring->size)
>   				break;
>   
> -		} while (!__igt_timeout(end_time, NULL));
> -		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, 0xdeadbeef);
> +		} while (!__igt_timeout(end_time, NULL) &&
> +			 count < (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2);
>   
>   		pr_info("%s: simulated %lu wraps\n", engine->name, count);
>   		err = check_watcher(&watcher[1], "after", cmp_gte);
> @@ -1153,9 +1175,7 @@ static int live_hwsp_rollover_kernel(void *arg)
>   		}
>   
>   		GEM_BUG_ON(i915_active_fence_isset(&tl->last_request));
> -		tl->seqno = 0;
> -		timeline_rollback(tl);
> -		timeline_rollback(tl);
> +		tl->seqno = -2u;
>   		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>   
>   		for (i = 0; i < ARRAY_SIZE(rq); i++) {
> @@ -1235,11 +1255,10 @@ static int live_hwsp_rollover_user(void *arg)
>   			goto out;
>   
>   		tl = ce->timeline;
> -		if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
> +		if (!tl->has_initial_breadcrumb)
>   			goto out;
>   
> -		timeline_rollback(tl);
> -		timeline_rollback(tl);
> +		tl->seqno = -4u;
>   		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>   
>   		for (i = 0; i < ARRAY_SIZE(rq); i++) {
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index bfd82d8259f4..b661b3b5992e 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -851,7 +851,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>   	rq->fence.seqno = seqno;
>   
>   	RCU_INIT_POINTER(rq->timeline, tl);
> -	RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
>   	rq->hwsp_seqno = tl->hwsp_seqno;
>   	GEM_BUG_ON(i915_request_completed(rq));
>   
> @@ -1090,9 +1089,6 @@ emit_semaphore_wait(struct i915_request *to,
>   	if (i915_request_has_initial_breadcrumb(to))
>   		goto await_fence;
>   
> -	if (!rcu_access_pointer(from->hwsp_cacheline))
> -		goto await_fence;
> -
>   	/*
>   	 * If this or its dependents are waiting on an external fence
>   	 * that may fail catastrophically, then we want to avoid using
> diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
> index a8c413203f72..c25b0afcc225 100644
> --- a/drivers/gpu/drm/i915/i915_request.h
> +++ b/drivers/gpu/drm/i915/i915_request.h
> @@ -237,16 +237,6 @@ struct i915_request {
>   	 */
>   	const u32 *hwsp_seqno;
>   
> -	/*
> -	 * If we need to access the timeline's seqno for this request in
> -	 * another request, we need to keep a read reference to this associated
> -	 * cacheline, so that we do not free and recycle it before the foreign
> -	 * observers have completed. Hence, we keep a pointer to the cacheline
> -	 * inside the timeline's HWSP vma, but it is only valid while this
> -	 * request has not completed and guarded by the timeline mutex.
> -	 */
> -	struct intel_timeline_cacheline __rcu *hwsp_cacheline;
> -
>   	/** Position in the ring of the start of the request */
>   	u32 head;
>   
> @@ -615,4 +605,11 @@ i915_request_active_timeline(const struct i915_request *rq)
>   					 lockdep_is_held(&rq->engine->active.lock));
>   }
>   
> +static inline u32
> +i915_request_active_offset(const struct i915_request *rq)
> +{
> +	return page_mask_bits(i915_request_active_timeline(rq)->hwsp_offset) +
> +	       offset_in_page(rq->hwsp_seqno);

Seems like the same conundrum I had earlier in hwsp_offset(). TBD.

Regards,

Tvrtko

> +}
> +
>   #endif /* I915_REQUEST_H */
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 02/64] drm/i915: Pin timeline map after first timeline pin, v3.
  2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 02/64] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
@ 2021-01-18 10:28   ` Thomas Hellström (Intel)
  0 siblings, 0 replies; 92+ messages in thread
From: Thomas Hellström (Intel) @ 2021-01-18 10:28 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx

Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>


On 1/5/21 4:34 PM, Maarten Lankhorst wrote:
> We're starting to require the reservation lock for pinning,
> so wait until we have that.
>
> Update the selftests to handle this correctly, and ensure pin is
> called in live_hwsp_rollover_user() and mock_hwsp_freelist().
>
> Changes since v1:
> - Fix NULL + XX arithmetic, use casts. (kbuild)
> Changes since v2:
> - Clear entire cacheline when pinning.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Reported-by: kernel test robot <lkp@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_timeline.c    | 40 +++++++++----
>   drivers/gpu/drm/i915/gt/intel_timeline.h    |  2 +
>   drivers/gpu/drm/i915/gt/mock_engine.c       | 22 ++++++-
>   drivers/gpu/drm/i915/gt/selftest_timeline.c | 63 +++++++++++----------
>   drivers/gpu/drm/i915/i915_selftest.h        |  2 +
>   5 files changed, 84 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
> index 3d43f15f34f3..7509af471341 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
> @@ -53,14 +53,29 @@ static int __timeline_active(struct i915_active *active)
>   	return 0;
>   }
>   
> +I915_SELFTEST_EXPORT int
> +intel_timeline_pin_map(struct intel_timeline *timeline)
> +{
> +	struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
> +	u32 ofs = offset_in_page(timeline->hwsp_offset);
> +	void *vaddr;
> +
> +	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
> +	if (IS_ERR(vaddr))
> +		return PTR_ERR(vaddr);
> +
> +	timeline->hwsp_map = vaddr;
> +	timeline->hwsp_seqno = memset(vaddr + ofs, 0, CACHELINE_BYTES);
> +	clflush(vaddr + ofs);
> +
> +	return 0;
> +}
> +
>   static int intel_timeline_init(struct intel_timeline *timeline,
>   			       struct intel_gt *gt,
>   			       struct i915_vma *hwsp,
>   			       unsigned int offset)
>   {
> -	void *vaddr;
> -	u32 *seqno;
> -
>   	kref_init(&timeline->kref);
>   	atomic_set(&timeline->pin_count, 0);
>   
> @@ -77,14 +92,8 @@ static int intel_timeline_init(struct intel_timeline *timeline,
>   		timeline->hwsp_ggtt = hwsp;
>   	}
>   
> -	vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
> -	if (IS_ERR(vaddr))
> -		return PTR_ERR(vaddr);
> -
> -	timeline->hwsp_map = vaddr;
> -	seqno = vaddr + timeline->hwsp_offset;
> -	WRITE_ONCE(*seqno, 0);
> -	timeline->hwsp_seqno = seqno;
> +	timeline->hwsp_map = NULL;
> +	timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset;
>   
>   	GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
>   
> @@ -114,7 +123,8 @@ static void intel_timeline_fini(struct rcu_head *rcu)
>   	struct intel_timeline *timeline =
>   		container_of(rcu, struct intel_timeline, rcu);
>   
> -	i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
> +	if (timeline->hwsp_map)
> +		i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>   
>   	i915_vma_put(timeline->hwsp_ggtt);
>   	i915_active_fini(&timeline->active);
> @@ -174,6 +184,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>   	if (atomic_add_unless(&tl->pin_count, 1, 0))
>   		return 0;
>   
> +	if (!tl->hwsp_map) {
> +		err = intel_timeline_pin_map(tl);
> +		if (err)
> +			return err;
> +	}
> +
>   	err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
>   	if (err)
>   		return err;
> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
> index f502a619843f..c50543d3ed5d 100644
> --- a/drivers/gpu/drm/i915/gt/intel_timeline.h
> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
> @@ -110,4 +110,6 @@ void intel_gt_show_timelines(struct intel_gt *gt,
>   						  const char *prefix,
>   						  int indent));
>   
> +I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
> +
>   #endif
> diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
> index 2f830017c51d..effbac877eec 100644
> --- a/drivers/gpu/drm/i915/gt/mock_engine.c
> +++ b/drivers/gpu/drm/i915/gt/mock_engine.c
> @@ -32,9 +32,20 @@
>   #include "mock_engine.h"
>   #include "selftests/mock_request.h"
>   
> -static void mock_timeline_pin(struct intel_timeline *tl)
> +static int mock_timeline_pin(struct intel_timeline *tl)
>   {
> +	int err;
> +
> +	if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
> +		return -EBUSY;
> +
> +	err = intel_timeline_pin_map(tl);
> +	i915_gem_object_unlock(tl->hwsp_ggtt->obj);
> +	if (err)
> +		return err;
> +
>   	atomic_inc(&tl->pin_count);
> +	return 0;
>   }
>   
>   static void mock_timeline_unpin(struct intel_timeline *tl)
> @@ -152,6 +163,8 @@ static void mock_context_destroy(struct kref *ref)
>   
>   static int mock_context_alloc(struct intel_context *ce)
>   {
> +	int err;
> +
>   	ce->ring = mock_ring(ce->engine);
>   	if (!ce->ring)
>   		return -ENOMEM;
> @@ -162,7 +175,12 @@ static int mock_context_alloc(struct intel_context *ce)
>   		return PTR_ERR(ce->timeline);
>   	}
>   
> -	mock_timeline_pin(ce->timeline);
> +	err = mock_timeline_pin(ce->timeline);
> +	if (err) {
> +		intel_timeline_put(ce->timeline);
> +		ce->timeline = NULL;
> +		return err;
> +	}
>   
>   	return 0;
>   }
> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> index 6e32e91cdab4..2e2bea7ac7a5 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
> @@ -35,7 +35,7 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
>   {
>   	unsigned long address = (unsigned long)page_address(hwsp_page(tl));
>   
> -	return (address + tl->hwsp_offset) / CACHELINE_BYTES;
> +	return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES;
>   }
>   
>   #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
> @@ -59,6 +59,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state,
>   	tl = xchg(&state->history[idx], tl);
>   	if (tl) {
>   		radix_tree_delete(&state->cachelines, hwsp_cacheline(tl));
> +		intel_timeline_unpin(tl);
>   		intel_timeline_put(tl);
>   	}
>   }
> @@ -78,6 +79,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
>   		if (IS_ERR(tl))
>   			return PTR_ERR(tl);
>   
> +		err = intel_timeline_pin(tl, NULL);
> +		if (err) {
> +			intel_timeline_put(tl);
> +			return err;
> +		}
> +
>   		cacheline = hwsp_cacheline(tl);
>   		err = radix_tree_insert(&state->cachelines, cacheline, tl);
>   		if (err) {
> @@ -85,6 +92,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
>   				pr_err("HWSP cacheline %lu already used; duplicate allocation!\n",
>   				       cacheline);
>   			}
> +			intel_timeline_unpin(tl);
>   			intel_timeline_put(tl);
>   			return err;
>   		}
> @@ -452,7 +460,7 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value)
>   }
>   
>   static struct i915_request *
> -tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
> +checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>   {
>   	struct i915_request *rq;
>   	int err;
> @@ -463,6 +471,13 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>   		goto out;
>   	}
>   
> +	if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
> +		pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
> +		       *tl->hwsp_seqno, tl->seqno);
> +		intel_timeline_unpin(tl);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
>   	rq = intel_engine_create_kernel_request(engine);
>   	if (IS_ERR(rq))
>   		goto out_unpin;
> @@ -484,25 +499,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
>   	return rq;
>   }
>   
> -static struct intel_timeline *
> -checked_intel_timeline_create(struct intel_gt *gt)
> -{
> -	struct intel_timeline *tl;
> -
> -	tl = intel_timeline_create(gt);
> -	if (IS_ERR(tl))
> -		return tl;
> -
> -	if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
> -		pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
> -		       *tl->hwsp_seqno, tl->seqno);
> -		intel_timeline_put(tl);
> -		return ERR_PTR(-EINVAL);
> -	}
> -
> -	return tl;
> -}
> -
>   static int live_hwsp_engine(void *arg)
>   {
>   #define NUM_TIMELINES 4096
> @@ -535,13 +531,13 @@ static int live_hwsp_engine(void *arg)
>   			struct intel_timeline *tl;
>   			struct i915_request *rq;
>   
> -			tl = checked_intel_timeline_create(gt);
> +			tl = intel_timeline_create(gt);
>   			if (IS_ERR(tl)) {
>   				err = PTR_ERR(tl);
>   				break;
>   			}
>   
> -			rq = tl_write(tl, engine, count);
> +			rq = checked_tl_write(tl, engine, count);
>   			if (IS_ERR(rq)) {
>   				intel_timeline_put(tl);
>   				err = PTR_ERR(rq);
> @@ -608,14 +604,14 @@ static int live_hwsp_alternate(void *arg)
>   			if (!intel_engine_can_store_dword(engine))
>   				continue;
>   
> -			tl = checked_intel_timeline_create(gt);
> +			tl = intel_timeline_create(gt);
>   			if (IS_ERR(tl)) {
>   				err = PTR_ERR(tl);
>   				goto out;
>   			}
>   
>   			intel_engine_pm_get(engine);
> -			rq = tl_write(tl, engine, count);
> +			rq = checked_tl_write(tl, engine, count);
>   			intel_engine_pm_put(engine);
>   			if (IS_ERR(rq)) {
>   				intel_timeline_put(tl);
> @@ -1258,6 +1254,10 @@ static int live_hwsp_rollover_user(void *arg)
>   		if (!tl->has_initial_breadcrumb)
>   			goto out;
>   
> +		err = intel_context_pin(ce);
> +		if (err)
> +			goto out;
> +
>   		tl->seqno = -4u;
>   		WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>   
> @@ -1267,7 +1267,7 @@ static int live_hwsp_rollover_user(void *arg)
>   			this = intel_context_create_request(ce);
>   			if (IS_ERR(this)) {
>   				err = PTR_ERR(this);
> -				goto out;
> +				goto out_unpin;
>   			}
>   
>   			pr_debug("%s: create fence.seqnp:%d\n",
> @@ -1286,17 +1286,18 @@ static int live_hwsp_rollover_user(void *arg)
>   		if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
>   			pr_err("Wait for timeline wrap timed out!\n");
>   			err = -EIO;
> -			goto out;
> +			goto out_unpin;
>   		}
>   
>   		for (i = 0; i < ARRAY_SIZE(rq); i++) {
>   			if (!i915_request_completed(rq[i])) {
>   				pr_err("Pre-wrap request not completed!\n");
>   				err = -EINVAL;
> -				goto out;
> +				goto out_unpin;
>   			}
>   		}
> -
> +out_unpin:
> +		intel_context_unpin(ce);
>   out:
>   		for (i = 0; i < ARRAY_SIZE(rq); i++)
>   			i915_request_put(rq[i]);
> @@ -1338,13 +1339,13 @@ static int live_hwsp_recycle(void *arg)
>   			struct intel_timeline *tl;
>   			struct i915_request *rq;
>   
> -			tl = checked_intel_timeline_create(gt);
> +			tl = intel_timeline_create(gt);
>   			if (IS_ERR(tl)) {
>   				err = PTR_ERR(tl);
>   				break;
>   			}
>   
> -			rq = tl_write(tl, engine, count);
> +			rq = checked_tl_write(tl, engine, count);
>   			if (IS_ERR(rq)) {
>   				intel_timeline_put(tl);
>   				err = PTR_ERR(rq);
> diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h
> index d53d207ab6eb..f54de0499be7 100644
> --- a/drivers/gpu/drm/i915/i915_selftest.h
> +++ b/drivers/gpu/drm/i915/i915_selftest.h
> @@ -107,6 +107,7 @@ int __i915_subtests(const char *caller,
>   
>   #define I915_SELFTEST_DECLARE(x) x
>   #define I915_SELFTEST_ONLY(x) unlikely(x)
> +#define I915_SELFTEST_EXPORT
>   
>   #else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */
>   
> @@ -116,6 +117,7 @@ static inline int i915_perf_selftests(struct pci_dev *pdev) { return 0; }
>   
>   #define I915_SELFTEST_DECLARE(x)
>   #define I915_SELFTEST_ONLY(x) 0
> +#define I915_SELFTEST_EXPORT static
>   
>   #endif
>   
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2.
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2 Maarten Lankhorst
  2021-01-11 20:50   ` Dave Airlie
@ 2021-01-18 10:34   ` Thomas Hellström
  1 sibling, 0 replies; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 10:34 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
> We should not allow this any more, as it will break with the new userptr
> implementation, it could still be made to work, but there's no point in
> doing so.
>
> Inspection of the beignet opencl driver shows that it's only used
> when normal userptr is not available, which means for new kernels
> you will need CONFIG_I915_USERPTR.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 62/64] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2.
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 62/64] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
@ 2021-01-18 10:58   ` Thomas Hellström
  0 siblings, 0 replies; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 10:58 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx; +Cc: Dan Carpenter


On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
> Instead of force unbinding and rebinding every time, we try to check
> if our notifier seqcount is still correct when pages are bound. This
> way we only rebind userptr when we need to, and prevent stalls.
>
> Changes since v1:
> - Missing mutex_unlock, reported by kbuild.
>
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
@ 2021-01-18 11:11   ` Thomas Hellström
  2021-01-18 12:01     ` Maarten Lankhorst
  0 siblings, 1 reply; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 11:11 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
> We get a lockdep splat when the reset mutex is held, because it can be
> taken from fence_wait. This conflicts with the mmu notifier we have,
> because we recurse between reset mutex and mmap lock -> mmu notifier.
>
> Remove this recursion by calling revoke_mmaps before taking the lock.

Hmm. Is the mmap sem taken from gt_revoke()?

If so, isn't the real problem that the mmap_sem is taken in the 
dma_fence critical path (where the reset code sits)?
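
For reference, by "dma_fence critical path" I mean the fence
signalling critical section annotation, roughly (a sketch only, the
body is made up):

  #include <linux/dma-fence.h>

  static void reset_path_sketch(void)
  {
          bool cookie = dma_fence_begin_signalling();

          /*
           * Nothing in here may take mmap_lock (or otherwise wait on
           * memory management), or lockdep will flag a potential
           * deadlock against fence signalling.
           */

          dma_fence_end_signalling(cookie);
  }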

/Thomas


>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_reset.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
> index 9d177297db79..3c0807d9a86e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_reset.c
> +++ b/drivers/gpu/drm/i915/gt/intel_reset.c
> @@ -975,8 +975,6 @@ static int do_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask)
>   {
>   	int err, i;
>   
> -	gt_revoke(gt);
> -
>   	err = __intel_gt_reset(gt, ALL_ENGINES);
>   	for (i = 0; err && i < RESET_MAX_RETRIES; i++) {
>   		msleep(10 * (i + 1));
> @@ -1031,6 +1029,9 @@ void intel_gt_reset(struct intel_gt *gt,
>   
>   	might_sleep();
>   	GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &gt->reset.flags));
> +
> +	gt_revoke(gt);
> +
>   	mutex_lock(&gt->reset.mutex);
>   
>   	/* Clear any previous failed attempts at recovery. Time to try again. */
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 15/64] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER.
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 15/64] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
@ 2021-01-18 11:17   ` Thomas Hellström
  0 siblings, 0 replies; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 11:17 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
> Now that unsynchronized mappings are removed, the only time userptr
> works is when the MMU notifier is enabled. Put all of the userptr
> code behind a mmu notifier ifdef.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5 Maarten Lankhorst
  2021-01-11 20:51   ` Dave Airlie
@ 2021-01-18 11:30   ` Thomas Hellström (Intel)
  2021-01-18 12:43     ` Maarten Lankhorst
  1 sibling, 1 reply; 92+ messages in thread
From: Thomas Hellström (Intel) @ 2021-01-18 11:30 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx

Hi,

On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
> Instead of doing what we do currently, which will never work with
> PROVE_LOCKING, do the same as AMD does, and something similar to
> relocation slowpath. When all locks are dropped, we acquire the
> pages for pinning. When the locks are taken, we transfer those
> pages in .get_pages() to the bo. As a final check before installing
> the fences, we ensure that the mmu notifier was not called; if it is,
> we return -EAGAIN to userspace to signal it has to start over.
>
> Changes since v1:
> - Unbinding is done in submit_init only. submit_begin() removed.
> - MMU_NOTFIER -> MMU_NOTIFIER
> Changes since v2:
> - Make i915->mm.notifier a spinlock.
> Changes since v3:
> - Add WARN_ON if there are any page references left, should have been 0.
> - Return 0 on success in submit_init(), bug from spinlock conversion.
> - Release pvec outside of notifier_lock (Thomas).
> Changes since v4:
> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>


...

>   
> -static int
> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> -				  const struct mmu_notifier_range *range)
> -{
> -	struct i915_mmu_notifier *mn =
> -		container_of(_mn, struct i915_mmu_notifier, mn);
> -	struct interval_tree_node *it;
> -	unsigned long end;
> -	int ret = 0;
> -
> -	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
> -		return 0;
> -
> -	/* interval ranges are inclusive, but invalidate range is exclusive */
> -	end = range->end - 1;
> -
> -	spin_lock(&mn->lock);
> -	it = interval_tree_iter_first(&mn->objects, range->start, end);
> -	while (it) {
> -		struct drm_i915_gem_object *obj;
> -
> -		if (!mmu_notifier_range_blockable(range)) {
> -			ret = -EAGAIN;
> -			break;
> -		}
> +	spin_lock(&i915->mm.notifier_lock);
>   
> -		/*
> -		 * The mmu_object is released late when destroying the
> -		 * GEM object so it is entirely possible to gain a
> -		 * reference on an object in the process of being freed
> -		 * since our serialisation is via the spinlock and not
> -		 * the struct_mutex - and consequently use it after it
> -		 * is freed and then double free it. To prevent that
> -		 * use-after-free we only acquire a reference on the
> -		 * object if it is not in the process of being destroyed.
> -		 */
> -		obj = container_of(it, struct i915_mmu_object, it)->obj;
> -		if (!kref_get_unless_zero(&obj->base.refcount)) {
> -			it = interval_tree_iter_next(it, range->start, end);
> -			continue;
> -		}
> -		spin_unlock(&mn->lock);
> +	mmu_interval_set_seq(mni, cur_seq);
>   
> -		ret = i915_gem_object_unbind(obj,
> -					     I915_GEM_OBJECT_UNBIND_ACTIVE |
> -					     I915_GEM_OBJECT_UNBIND_BARRIER);
> -		if (ret == 0)
> -			ret = __i915_gem_object_put_pages(obj);
> -		i915_gem_object_put(obj);
> -		if (ret)
> -			return ret;
> +	spin_unlock(&i915->mm.notifier_lock);
>   
> -		spin_lock(&mn->lock);
> +	/* During exit there's no need to wait */
> +	if (current->flags & PF_EXITING)
> +		return true;

Did we ever find out why this is needed, that is, why the old userptr
invalidation callback doesn't hang here in a similar way?

/Thomas


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-18 11:11   ` Thomas Hellström
@ 2021-01-18 12:01     ` Maarten Lankhorst
  2021-01-18 13:22       ` Thomas Hellström
  0 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-18 12:01 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx

On 18-01-2021 12:11, Thomas Hellström wrote:
>
> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>> We get a lockdep splat when the reset mutex is held, because it can be
>> taken from fence_wait. This conflicts with the mmu notifier we have,
>> because we recurse between reset mutex and mmap lock -> mmu notifier.
>>
>> Remove this recursion by calling revoke_mmaps before taking the lock.
>
> Hmm. Is the mmap sem taken from gt_revoke()?
>
> If so, isn't the real problem that the mmap_sem is taken in the dma_fence critical path (where the reset code sits)? 

Hey,

The GPU reset code specifically needs to revoke all GTT mappings, and
the fault handler uses intel_gt_reset_trylock(), so this change should
be ok since all those mappings are invalidated correctly and completed
before this point.

The reset mutex isn't actually taken inside fence code, but used for
lockdep validation, so this should be ok.
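
Roughly, the ordering change amounts to the following -- stand-in
pthread locks purely for illustration, not the actual driver code:

  #include <pthread.h>

  static pthread_mutex_t reset_mutex = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t mmap_lock = PTHREAD_MUTEX_INITIALIZER;

  static void gt_revoke_sketch(void)
  {
          /* revoke_mmaps() can recurse into mmap lock / mmu notifiers */
          pthread_mutex_lock(&mmap_lock);
          /* ... invalidate userspace mappings ... */
          pthread_mutex_unlock(&mmap_lock);
  }

  static void intel_gt_reset_sketch(void)
  {
          gt_revoke_sketch();     /* now done outside the reset mutex */

          pthread_mutex_lock(&reset_mutex);
          /*
           * The reset itself; lockdep treats this as part of the
           * fence-signalling critical path, so the mmap lock must
           * not be taken from here.
           */
          pthread_mutex_unlock(&reset_mutex);
  }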

~Maarten

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-18 11:30   ` Thomas Hellström (Intel)
@ 2021-01-18 12:43     ` Maarten Lankhorst
  2021-01-18 12:55       ` Thomas Hellström (Intel)
  0 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-18 12:43 UTC (permalink / raw)
  To: Thomas Hellström (Intel), intel-gfx

On 18-01-2021 12:30, Thomas Hellström (Intel) wrote:
> Hi,
>
> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>> Instead of doing what we do currently, which will never work with
>> PROVE_LOCKING, do the same as AMD does, and something similar to
>> relocation slowpath. When all locks are dropped, we acquire the
>> pages for pinning. When the locks are taken, we transfer those
>> pages in .get_pages() to the bo. As a final check before installing
>> the fences, we ensure that the mmu notifier was not called; if it is,
>> we return -EAGAIN to userspace to signal it has to start over.
>>
>> Changes since v1:
>> - Unbinding is done in submit_init only. submit_begin() removed.
>> - MMU_NOTFIER -> MMU_NOTIFIER
>> Changes since v2:
>> - Make i915->mm.notifier a spinlock.
>> Changes since v3:
>> - Add WARN_ON if there are any page references left, should have been 0.
>> - Return 0 on success in submit_init(), bug from spinlock conversion.
>> - Release pvec outside of notifier_lock (Thomas).
>> Changes since v4:
>> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
>> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
>> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>
>
> ...
>
>>   -static int
>> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>> -                  const struct mmu_notifier_range *range)
>> -{
>> -    struct i915_mmu_notifier *mn =
>> -        container_of(_mn, struct i915_mmu_notifier, mn);
>> -    struct interval_tree_node *it;
>> -    unsigned long end;
>> -    int ret = 0;
>> -
>> -    if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>> -        return 0;
>> -
>> -    /* interval ranges are inclusive, but invalidate range is exclusive */
>> -    end = range->end - 1;
>> -
>> -    spin_lock(&mn->lock);
>> -    it = interval_tree_iter_first(&mn->objects, range->start, end);
>> -    while (it) {
>> -        struct drm_i915_gem_object *obj;
>> -
>> -        if (!mmu_notifier_range_blockable(range)) {
>> -            ret = -EAGAIN;
>> -            break;
>> -        }
>> +    spin_lock(&i915->mm.notifier_lock);
>>   -        /*
>> -         * The mmu_object is released late when destroying the
>> -         * GEM object so it is entirely possible to gain a
>> -         * reference on an object in the process of being freed
>> -         * since our serialisation is via the spinlock and not
>> -         * the struct_mutex - and consequently use it after it
>> -         * is freed and then double free it. To prevent that
>> -         * use-after-free we only acquire a reference on the
>> -         * object if it is not in the process of being destroyed.
>> -         */
>> -        obj = container_of(it, struct i915_mmu_object, it)->obj;
>> -        if (!kref_get_unless_zero(&obj->base.refcount)) {
>> -            it = interval_tree_iter_next(it, range->start, end);
>> -            continue;
>> -        }
>> -        spin_unlock(&mn->lock);
>> +    mmu_interval_set_seq(mni, cur_seq);
>>   -        ret = i915_gem_object_unbind(obj,
>> -                         I915_GEM_OBJECT_UNBIND_ACTIVE |
>> -                         I915_GEM_OBJECT_UNBIND_BARRIER);
>> -        if (ret == 0)
>> -            ret = __i915_gem_object_put_pages(obj);
>> -        i915_gem_object_put(obj);
>> -        if (ret)
>> -            return ret;
>> +    spin_unlock(&i915->mm.notifier_lock);
>>   -        spin_lock(&mn->lock);
>> +    /* During exit there's no need to wait */
>> +    if (current->flags & PF_EXITING)
>> +        return true;
>
> Did we ever find out why this is needed, that is, why the old userptr invalidation callback doesn't hang here in a similar way?

It's an optimization for teardown because userptr will be invalidated
anyway, but also for gem_ctx_persistence.userptr, although with ULLS
that test may stop working anyway because it takes an out_fence.

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-18 12:43     ` Maarten Lankhorst
@ 2021-01-18 12:55       ` Thomas Hellström (Intel)
  2021-01-18 14:43         ` Maarten Lankhorst
  2021-01-20 13:32         ` Thomas Hellström (Intel)
  0 siblings, 2 replies; 92+ messages in thread
From: Thomas Hellström (Intel) @ 2021-01-18 12:55 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/18/21 1:43 PM, Maarten Lankhorst wrote:
> On 18-01-2021 12:30, Thomas Hellström (Intel) wrote:
>> Hi,
>>
>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>> Instead of doing what we do currently, which will never work with
>>> PROVE_LOCKING, do the same as AMD does, and something similar to
>>> relocation slowpath. When all locks are dropped, we acquire the
>>> pages for pinning. When the locks are taken, we transfer those
>>> pages in .get_pages() to the bo. As a final check before installing
>>> the fences, we ensure that the mmu notifier was not called; if it is,
>>> we return -EAGAIN to userspace to signal it has to start over.
>>>
>>> Changes since v1:
>>> - Unbinding is done in submit_init only. submit_begin() removed.
>>> - MMU_NOTFIER -> MMU_NOTIFIER
>>> Changes since v2:
>>> - Make i915->mm.notifier a spinlock.
>>> Changes since v3:
>>> - Add WARN_ON if there are any page references left, should have been 0.
>>> - Return 0 on success in submit_init(), bug from spinlock conversion.
>>> - Release pvec outside of notifier_lock (Thomas).
>>> Changes since v4:
>>> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
>>> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
>>> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>>>
>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>
>> ...
>>
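
The "final check before installing the fences" described in the commit message above follows the standard mmu_interval_notifier seqno pattern. A minimal sketch of that pattern (submit_init_pages() and install_fences() are hypothetical stand-ins, not the patch's actual eb_* helpers):

static int submit_userptr_sketch(struct drm_i915_gem_object *obj)
{
    unsigned long seq;
    int err;

    /* With no ww locks held: sample the notifier sequence and
     * acquire/pin the user pages. */
    seq = mmu_interval_read_begin(&obj->userptr.notifier);
    err = submit_init_pages(obj);
    if (err)
        return err;

    /* ... take ww locks, transfer the pages in .get_pages() ... */

    /* Final check before installing fences: if the notifier ran
     * in the meantime, back off and let userspace retry. */
    if (mmu_interval_read_retry(&obj->userptr.notifier, seq))
        return -EAGAIN;

    return install_fences(obj);
}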
>>>
>>> -static int
>>> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>> -                  const struct mmu_notifier_range *range)
>>> -{
>>> -    struct i915_mmu_notifier *mn =
>>> -        container_of(_mn, struct i915_mmu_notifier, mn);
>>> -    struct interval_tree_node *it;
>>> -    unsigned long end;
>>> -    int ret = 0;
>>> -
>>> -    if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>>> -        return 0;
>>> -
>>> -    /* interval ranges are inclusive, but invalidate range is exclusive */
>>> -    end = range->end - 1;
>>> -
>>> -    spin_lock(&mn->lock);
>>> -    it = interval_tree_iter_first(&mn->objects, range->start, end);
>>> -    while (it) {
>>> -        struct drm_i915_gem_object *obj;
>>> -
>>> -        if (!mmu_notifier_range_blockable(range)) {
>>> -            ret = -EAGAIN;
>>> -            break;
>>> -        }
>>> +    spin_lock(&i915->mm.notifier_lock);
>>>
>>> -        /*
>>> -         * The mmu_object is released late when destroying the
>>> -         * GEM object so it is entirely possible to gain a
>>> -         * reference on an object in the process of being freed
>>> -         * since our serialisation is via the spinlock and not
>>> -         * the struct_mutex - and consequently use it after it
>>> -         * is freed and then double free it. To prevent that
>>> -         * use-after-free we only acquire a reference on the
>>> -         * object if it is not in the process of being destroyed.
>>> -         */
>>> -        obj = container_of(it, struct i915_mmu_object, it)->obj;
>>> -        if (!kref_get_unless_zero(&obj->base.refcount)) {
>>> -            it = interval_tree_iter_next(it, range->start, end);
>>> -            continue;
>>> -        }
>>> -        spin_unlock(&mn->lock);
>>> +    mmu_interval_set_seq(mni, cur_seq);
>>>
>>> -        ret = i915_gem_object_unbind(obj,
>>> -                         I915_GEM_OBJECT_UNBIND_ACTIVE |
>>> -                         I915_GEM_OBJECT_UNBIND_BARRIER);
>>> -        if (ret == 0)
>>> -            ret = __i915_gem_object_put_pages(obj);
>>> -        i915_gem_object_put(obj);
>>> -        if (ret)
>>> -            return ret;
>>> +    spin_unlock(&i915->mm.notifier_lock);
>>>
>>> -        spin_lock(&mn->lock);
>>> +    /* During exit there's no need to wait */
>>> +    if (current->flags & PF_EXITING)
>>> +        return true;
>>> Did we ever find out why this is needed, that is, why the old userptr invalidation callback doesn't hang here in a similar way?
>> It's an optimization for teardown because userptr will be invalidated anyway, but also for gem_ctx_persistence.userptr, although with ulls that test may stop working anyway because it takes an out_fence.

Sure, but what I meant was: did we find out what's different in the new
code compared to the old one? The old code also waits for the gpu when
unbinding in the mmu_notifier, yet it appears that in the old code the
mmu notifier is never called here. It seems to me it would be good if we
understood what that difference is.

/Thomas



* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-18 12:01     ` Maarten Lankhorst
@ 2021-01-18 13:22       ` Thomas Hellström
  2021-01-18 13:28         ` Thomas Hellström
  0 siblings, 1 reply; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 13:22 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/18/21 1:01 PM, Maarten Lankhorst wrote:
> On 18-01-2021 at 12:11, Thomas Hellström wrote:
>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>> We get a lockdep splat when the reset mutex is held, because it can be
>>> taken from fence_wait. This conflicts with the mmu notifier we have,
>>> because we recurse between reset mutex and mmap lock -> mmu notifier.
>>>
>>> Remove this recursion by calling revoke_mmaps before taking the lock.
>> Hmm. Is the mmap_sem taken from gt_revoke()?
>>
>> If so, isn't the real problem that the mmap_sem is taken in the dma_fence critical path (where the reset code sits)?
> Hey,
>
> The gpu reset code specifically needs to revoke all gtt mappings, and the fault handler uses intel_gt_reset_trylock(), so this change should be ok since all those mappings are invalidated correctly and completed before this point.
>
> The reset mutex isn't actually taken inside fence code, but used for lockdep validation, so this should be ok.
>
> ~Maarten

Hmm, OK but then we still have the following established locking order.

lock(fence_signaling)
lock(i_mmap_lock)

But in the notifier

lock(i_mmap_lock)
fence_signaling(within notifier)

So gt_revoke() is violating dma-fence rules.

BTW it looks to me like the reset mutex annotation is actually doing much
the same as the dma-fence annotations; while we can move gt_revoke() out
of the reset mutex, that only gives us false hopes, since it moves it out
of the equivalent dma-fence annotation. I figure the reason this was not
seen before the new code is that the reset mutex lockdep isn't taken
when waiting for active, only when waiting for dma-fence; but IMO the
root problem is pre-existing.

/Thomas





* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-18 13:22       ` Thomas Hellström
@ 2021-01-18 13:28         ` Thomas Hellström
  2021-01-18 14:46           ` Maarten Lankhorst
  0 siblings, 1 reply; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 13:28 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/18/21 2:22 PM, Thomas Hellström wrote:
>
> On 1/18/21 1:01 PM, Maarten Lankhorst wrote:
>> On 18-01-2021 at 12:11, Thomas Hellström wrote:
>>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>>> We get a lockdep splat when the reset mutex is held, because it can be
>>>> taken from fence_wait. This conflicts with the mmu notifier we have,
>>>> because we recurse between reset mutex and mmap lock -> mmu notifier.
>>>>
>>>> Remove this recursion by calling revoke_mmaps before taking the lock.
>>> Hmm. Is the mmap_sem taken from gt_revoke()?
>>>
>>> If so, isn't the real problem that the mmap_sem is taken in the 
>>> dma_fence critical path (where the reset code sits)?
>> Hey,
>>
>> The gpu reset code specifically needs to revoke all gtt mappings, and 
>> the fault handler uses intel_gt_reset_trylock(),
>>
>> so this change should be ok since all those mappings are invalidated 
>> correctly and completed before this point.
>>
>> The reset mutex isn't actually taken inside fence code, but used for 
>> lockdep validation, so this should be ok.
>>
>> ~Maarten
>
> Hmm, OK but then we still have the following established locking order.
>
> lock(fence_signaling)
> lock(i_mmap_lock)
>
> But in the notifier
>
> lock(i_mmap_lock)
> fence_signaling(within notifier)
>
> So gt_revoke() is violating dma-fence rules.
>
> BTW it looks to me like the reset mutex annotation is actually doing
> much the same as the dma-fence annotations; while we can move
> gt_revoke() out of the reset mutex, that only gives us false hopes,
> since it moves it out of the equivalent dma-fence annotation. I figure
> the reason this was not seen before the new code is that the reset
> mutex lockdep isn't taken when waiting for active, only when waiting
> for dma-fence; but IMO the root problem is pre-existing.
>
> /Thomas
>
>
The interesting scenario is

thread 1:
take i_mmap_lock()
enter_mmu_notifier()
wait_fence()

thread 2:
need_to_reset_gpu_for_the_above_fence();
take i_mmap_lock()

Deadlock.
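
Reduced to bare locking primitives, the inversion looks like this; a minimal illustration with stand-in names, assuming gt_revoke() ends up in unmap_mapping_range(), which takes the same i_mmap lock:

static void thread1_fault_path(struct address_space *mapping,
                               struct dma_fence *fence)
{
    i_mmap_lock_write(mapping);            /* A: i_mmap lock */
    /* mmu notifier fires with A held ... */
    dma_fence_wait(fence, true);           /* B: wait on a GPU fence */
    i_mmap_unlock_write(mapping);
}

static void thread2_reset_path(struct address_space *mapping)
{
    /* B': this reset is what would let the fence above signal ... */
    unmap_mapping_range(mapping, 0, 0, 1); /* A': blocks on A */
}

Thread 2 cannot take the i_mmap lock until thread 1 drops it, and thread 1 cannot drop it until the fence signals, which needs thread 2's reset to complete.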

/Thomas



* Re: [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-18 12:55       ` Thomas Hellström (Intel)
@ 2021-01-18 14:43         ` Maarten Lankhorst
  2021-01-20 13:32         ` Thomas Hellström (Intel)
  1 sibling, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-18 14:43 UTC (permalink / raw)
  To: Thomas Hellström (Intel), intel-gfx

On 18-01-2021 at 13:55, Thomas Hellström (Intel) wrote:
>
> On 1/18/21 1:43 PM, Maarten Lankhorst wrote:
>> On 18-01-2021 at 12:30, Thomas Hellström (Intel) wrote:
>>> Hi,
>>>
>>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>>> Instead of doing what we do currently, which will never work with
>>>> PROVE_LOCKING, do the same as AMD does, and something similar to
>>>> relocation slowpath. When all locks are dropped, we acquire the
>>>> pages for pinning. When the locks are taken, we transfer those
>>>> pages in .get_pages() to the bo. As a final check before installing
>>>> the fences, we ensure that the mmu notifier was not called; if it is,
>>>> we return -EAGAIN to userspace to signal it has to start over.
>>>>
>>>> Changes since v1:
>>>> - Unbinding is done in submit_init only. submit_begin() removed.
>>>> - MMU_NOTFIER -> MMU_NOTIFIER
>>>> Changes since v2:
>>>> - Make i915->mm.notifier a spinlock.
>>>> Changes since v3:
>>>> - Add WARN_ON if there are any page references left, should have been 0.
>>>> - Return 0 on success in submit_init(), bug from spinlock conversion.
>>>> - Release pvec outside of notifier_lock (Thomas).
>>>> Changes since v4:
>>>> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
>>>> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
>>>> - Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
>>>>
>>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>
>>> ...
>>>
>>>>
>>>> -static int
>>>> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>>> -                  const struct mmu_notifier_range *range)
>>>> -{
>>>> -    struct i915_mmu_notifier *mn =
>>>> -        container_of(_mn, struct i915_mmu_notifier, mn);
>>>> -    struct interval_tree_node *it;
>>>> -    unsigned long end;
>>>> -    int ret = 0;
>>>> -
>>>> -    if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>>>> -        return 0;
>>>> -
>>>> -    /* interval ranges are inclusive, but invalidate range is exclusive */
>>>> -    end = range->end - 1;
>>>> -
>>>> -    spin_lock(&mn->lock);
>>>> -    it = interval_tree_iter_first(&mn->objects, range->start, end);
>>>> -    while (it) {
>>>> -        struct drm_i915_gem_object *obj;
>>>> -
>>>> -        if (!mmu_notifier_range_blockable(range)) {
>>>> -            ret = -EAGAIN;
>>>> -            break;
>>>> -        }
>>>> +    spin_lock(&i915->mm.notifier_lock);
>>>>
>>>> -        /*
>>>> -         * The mmu_object is released late when destroying the
>>>> -         * GEM object so it is entirely possible to gain a
>>>> -         * reference on an object in the process of being freed
>>>> -         * since our serialisation is via the spinlock and not
>>>> -         * the struct_mutex - and consequently use it after it
>>>> -         * is freed and then double free it. To prevent that
>>>> -         * use-after-free we only acquire a reference on the
>>>> -         * object if it is not in the process of being destroyed.
>>>> -         */
>>>> -        obj = container_of(it, struct i915_mmu_object, it)->obj;
>>>> -        if (!kref_get_unless_zero(&obj->base.refcount)) {
>>>> -            it = interval_tree_iter_next(it, range->start, end);
>>>> -            continue;
>>>> -        }
>>>> -        spin_unlock(&mn->lock);
>>>> +    mmu_interval_set_seq(mni, cur_seq);
>>>>
>>>> -        ret = i915_gem_object_unbind(obj,
>>>> -                         I915_GEM_OBJECT_UNBIND_ACTIVE |
>>>> -                         I915_GEM_OBJECT_UNBIND_BARRIER);
>>>> -        if (ret == 0)
>>>> -            ret = __i915_gem_object_put_pages(obj);
>>>> -        i915_gem_object_put(obj);
>>>> -        if (ret)
>>>> -            return ret;
>>>> +    spin_unlock(&i915->mm.notifier_lock);
>>>>
>>>> -        spin_lock(&mn->lock);
>>>> +    /* During exit there's no need to wait */
>>>> +    if (current->flags & PF_EXITING)
>>>> +        return true;
>>> Did we ever find out why this is needed, that is, why the old userptr invalidation callback doesn't hang here in a similar way?
>> It's an optimization for teardown because userptr will be invalidated anyway, but also for gem_ctx_persistence.userptr, although with ulls that test may stop working anyway because it takes an out_fence.
>
> Sure, but what I meant was: did we find out what's different in the new code compared to the old one? The old code also waits for the gpu when unbinding in the mmu_notifier, yet it appears that in the old code the mmu notifier is never called here. It seems to me it would be good if we understood what that difference is.

I'm not sure; the old code doesn't wait, but unbinds via i915_gem_object_unbind(). It breaks out early if i915_vm_tryopen() fails, and that could be why it works. But this is just speculation.

~Maarten


* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-18 13:28         ` Thomas Hellström
@ 2021-01-18 14:46           ` Maarten Lankhorst
  2021-01-18 15:05             ` Thomas Hellström
  0 siblings, 1 reply; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-18 14:46 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx

On 18-01-2021 at 14:28, Thomas Hellström wrote:
>
> On 1/18/21 2:22 PM, Thomas Hellström wrote:
>>
>> On 1/18/21 1:01 PM, Maarten Lankhorst wrote:
>>> On 18-01-2021 at 12:11, Thomas Hellström wrote:
>>>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>>>> We get a lockdep splat when the reset mutex is held, because it can be
>>>>> taken from fence_wait. This conflicts with the mmu notifier we have,
>>>>> because we recurse between reset mutex and mmap lock -> mmu notifier.
>>>>>
>>>>> Remove this recursion by calling revoke_mmaps before taking the lock.
>>>> Hmm. Is the mmap_sem taken from gt_revoke()?
>>>>
>>>> If so, isn't the real problem that the mmap_sem is taken in the dma_fence critical path (where the reset code sits)?
>>> Hey,
>>>
>>> The gpu reset code specifically needs to revoke all gtt mappings, and the fault handler uses intel_gt_reset_trylock(),
>>>
>>> so this change should be ok since all those mappings are invalidated correctly and completed before this point.
>>>
>>> The reset mutex isn't actually taken inside fence code, but used for lockdep validation, so this should be ok.
>>>
>>> ~Maarten
>>
>> Hmm, OK but then we still have the following established locking order.
>>
>> lock(fence_signaling)
>> lock(i_mmap_lock)
>>
>> But in the notifier
>>
>> lock(i_mmap_lock)
>> fence_signaling(within notifier)
>>
>> So gt_revoke() is violating dma-fence rules.
>>
>> BTW it looks to me like the reset mutex annotation is actually doing much the same as the dma-fence annotations; while we can move gt_revoke() out of the reset mutex, that only gives us false hopes, since it moves it out of the equivalent dma-fence annotation. I figure the reason this was not seen before the new code is that the reset mutex lockdep isn't taken when waiting for active, only when waiting for dma-fence; but IMO the root problem is pre-existing.
>>
>> /Thomas
>>
>>
> The interesting scenario is
>
> thread 1:
> take i_mmap_lock()
> enter_mmu_notifier()
> wait_fence()
>
> thread 2:
> need_to_reset_gpu_for_the_above_fence();
> take i_mmap_lock()
>
> Deadlock.
>
> /Thomas
>
>
Yeah, I think gpu reset isn't completely following lockdep rules yet. Thread 1 isn't doing anything wrong; gpu reset probably should stop revoking gt bindings and allow some garbage during reset. I don't see another way out. :-/


* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-18 14:46           ` Maarten Lankhorst
@ 2021-01-18 15:05             ` Thomas Hellström
  2021-01-18 15:32               ` Maarten Lankhorst
  0 siblings, 1 reply; 92+ messages in thread
From: Thomas Hellström @ 2021-01-18 15:05 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/18/21 3:46 PM, Maarten Lankhorst wrote:
> On 18-01-2021 at 14:28, Thomas Hellström wrote:
>> On 1/18/21 2:22 PM, Thomas Hellström wrote:
>>> On 1/18/21 1:01 PM, Maarten Lankhorst wrote:
>>>> On 18-01-2021 at 12:11, Thomas Hellström wrote:
>>>>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>>>>> We get a lockdep splat when the reset mutex is held, because it can be
>>>>>> taken from fence_wait. This conflicts with the mmu notifier we have,
>>>>>> because we recurse between reset mutex and mmap lock -> mmu notifier.
>>>>>>
>>>>>> Remove this recursion by calling revoke_mmaps before taking the lock.
>>>>> Hmm. Is the mmap_sem taken from gt_revoke()?
>>>>>
>>>>> If so, isn't the real problem that the mmap_sem is taken in the dma_fence critical path (where the reset code sits)?
>>>> Hey,
>>>>
>>>> The gpu reset code specifically needs to revoke all gtt mappings, and the fault handler uses intel_gt_reset_trylock(),
>>>>
>>>> so this change should be ok since all those mappings are invalidated correctly and completed before this point.
>>>>
>>>> The reset mutex isn't actually taken inside fence code, but used for lockdep validation, so this should be ok.
>>>>
>>>> ~Maarten
>>> Hmm, OK but then we still have the following established locking order.
>>>
>>> lock(fence_signaling)
>>> lock(i_mmap_lock)
>>>
>>> But in the notifier
>>>
>>> lock(i_mmap_lock)
>>> fence_signaling(within notifier)
>>>
>>> So gt_revoke() is violating dma-fence rules.
>>>
>>> BTW it looks to me like the reset mutex annotation is actually doing much the same as the dma-fence annotations; while we can move gt_revoke() out of the reset mutex, that only gives us false hopes, since it moves it out of the equivalent dma-fence annotation. I figure the reason this was not seen before the new code is that the reset mutex lockdep isn't taken when waiting for active, only when waiting for dma-fence; but IMO the root problem is pre-existing.
>>>
>>> /Thomas
>>>
>>>
>> The interesting scenario is
>>
>> thread 1:
>> take i_mmap_lock()
>> enter_mmu_notifier()
>> wait_fence()
>>
>> thread 2:
>> need_to_reset_gpu_for_the_above_fence();
>> take i_mmap_lock()
>>
>> Deadlock.
>>
>> /Thomas
>>
>>
> Yeah, I think gpu reset isn't completely following lockdep rules yet. Thread 1 isn't doing anything wrong; gpu reset probably should stop revoking gt bindings and allow some garbage during reset. I don't see another way out. :-/

Me neither.

But to silence lockdep until dma_fence annotations are widely added:

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>



* Re: [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly
  2021-01-18 15:05             ` Thomas Hellström
@ 2021-01-18 15:32               ` Maarten Lankhorst
  0 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-18 15:32 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx

On 18-01-2021 at 16:05, Thomas Hellström wrote:
>
> On 1/18/21 3:46 PM, Maarten Lankhorst wrote:
>> On 18-01-2021 at 14:28, Thomas Hellström wrote:
>>> On 1/18/21 2:22 PM, Thomas Hellström wrote:
>>>> On 1/18/21 1:01 PM, Maarten Lankhorst wrote:
>>>>> On 18-01-2021 at 12:11, Thomas Hellström wrote:
>>>>>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>>>>>> We get a lockdep splat when the reset mutex is held, because it can be
>>>>>>> taken from fence_wait. This conflicts with the mmu notifier we have,
>>>>>>> because we recurse between reset mutex and mmap lock -> mmu notifier.
>>>>>>>
>>>>>>> Remove this recursion by calling revoke_mmaps before taking the lock.
>>>>>> Hmm. Is the mmap_sem taken from gt_revoke()?
>>>>>>
>>>>>> If so, isn't the real problem that the mmap_sem is taken in the dma_fence critical path (where the reset code sits)?
>>>>> Hey,
>>>>>
>>>>> The gpu reset code specifically needs to revoke all gtt mappings, and the fault handler uses intel_gt_reset_trylock(),
>>>>>
>>>>> so this change should be ok since all those mappings are invalidated correctly and completed before this point.
>>>>>
>>>>> The reset mutex isn't actually taken inside fence code, but used for lockdep validation, so this should be ok.
>>>>>
>>>>> ~Maarten
>>>> Hmm, OK but then we still have the following established locking order.
>>>>
>>>> lock(fence_signaling)
>>>> lock(i_mmap_lock)
>>>>
>>>> But in the notifier
>>>>
>>>> lock(i_mmap_lock)
>>>> fence_signaling(within notifier)
>>>>
>>>> So gt_revoke() is violating dma-fence rules.
>>>>
>>>> BTW it looks to me like the reset mutex annotation is actually doing much the same as the dma-fence annotations; while we can move gt_revoke() out of the reset mutex, that only gives us false hopes, since it moves it out of the equivalent dma-fence annotation. I figure the reason this was not seen before the new code is that the reset mutex lockdep isn't taken when waiting for active, only when waiting for dma-fence; but IMO the root problem is pre-existing.
>>>>
>>>> /Thomas
>>>>
>>>>
>>> The interesting scenario is
>>>
>>> thread 1:
>>> take i_mmap_lock()
>>> enter_mmu_notifier()
>>> wait_fence()
>>>
>>> thread 2:
>>> need_to_reset_gpu_for_the_above_fence();
>>> take i_mmap_lock()
>>>
>>> Deadlock.
>>>
>>> /Thomas
>>>
>>>
>> Yeah, I think gpu reset isn't completely following lockdep rules yet. Thread 1 isn't doing anything wrong; gpu reset probably should stop revoking gt bindings and allow some garbage during reset. I don't see another way out. :-/
>
> Me neither.
>
> But to silence lockdep until dma_fence annotations are widely added:
>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
>
Ideally we'll add fence signaling annotations to gpu reset, to detect exactly this kind of thing. Hopefully in the future. :)
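
For reference, the annotation API in question is dma_fence_begin_signalling()/dma_fence_end_signalling(). A sketch of what annotating the reset path might look like (function name and placement are illustrative only, not the actual i915 reset entry point):

static void gt_reset_annotated_sketch(struct intel_gt *gt)
{
    bool cookie;

    /* Everything between begin/end is treated by lockdep as part of
     * the dma-fence signalling critical path: acquiring a lock here
     * that can also be held while waiting on a fence will splat. */
    cookie = dma_fence_begin_signalling();
    /* ... the actual reset work ... */
    dma_fence_end_signalling(cookie);
}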


* Re: [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6
  2021-01-14 12:06   ` Tvrtko Ursulin
@ 2021-01-18 16:09     ` Maarten Lankhorst
  0 siblings, 0 replies; 92+ messages in thread
From: Maarten Lankhorst @ 2021-01-18 16:09 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx; +Cc: Thomas Hellström

On 14-01-2021 at 13:06, Tvrtko Ursulin wrote:
>
> On 05/01/2021 15:34, Maarten Lankhorst wrote:
>> Instead of sharing pages with breadcrumbs, give each timeline a
>> single page. This allows unrelated timelines not to share locks
>> any more during command submission.
>>
>> As an additional benefit, seqno wraparound no longer requires
>> i915_vma_pin, which means we no longer need to worry about a
>> potential -EDEADLK at a point where we are ready to submit.
>>
>> Changes since v1:
>> - Fix erroneous i915_vma_acquire that should be a i915_vma_release (ickle).
>> - Extra check for completion in intel_read_hwsp().
>> Changes since v2:
>> - Fix inconsistent indent in hwsp_alloc() (kbuild)
>> - memset entire cacheline to 0.
>> Changes since v3:
>> - Do same in intel_timeline_reset_seqno(), and clflush for good measure.
>> Changes since v4:
>> - Use refcounting on timeline, instead of relying on i915_active.
>> - Fix waiting on kernel requests.
>> Changes since v5:
>> - Bump amount of slots to maximum (256), for best wraparounds.
>> - Add hwsp_offset to i915_request to fix potential wraparound hang.
>> - Ensure timeline wrap test works with the changes.
>> - Assign hwsp in intel_timeline_read_hwsp() within the rcu lock to
>>    fix a hang.
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> #v1
>> Reported-by: kernel test robot <lkp@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/gen2_engine_cs.c      |   2 +-
>>   drivers/gpu/drm/i915/gt/gen6_engine_cs.c      |   8 +-
>>   drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |  12 +-
>>   drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 +
>>   drivers/gpu/drm/i915/gt/intel_gt_types.h      |   4 -
>>   drivers/gpu/drm/i915/gt/intel_timeline.c      | 422 ++++--------------
>>   .../gpu/drm/i915/gt/intel_timeline_types.h    |  17 +-
>>   drivers/gpu/drm/i915/gt/selftest_engine_cs.c  |   5 +-
>>   drivers/gpu/drm/i915/gt/selftest_timeline.c   |  83 ++--
>>   drivers/gpu/drm/i915/i915_request.c           |   4 -
>>   drivers/gpu/drm/i915/i915_request.h           |  17 +-
>>   11 files changed, 160 insertions(+), 415 deletions(-)
>>
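As background for the diff below: the new scheme packs TIMELINE_SEQNO_BYTES (8-byte) seqno slots into the timeline's private page and, on wraparound, simply advances to the next slot, skipping offsets with bit 5 set per the MI_FLUSH_DW workaround in __intel_timeline_get_seqno(). That stepping is what yields the "maximum (256)" slots mentioned in the changelog. A standalone illustration of the arithmetic, mirroring the patch's offset step (userspace C, 4 KiB pages assumed):

#include <stdio.h>

#define TIMELINE_SEQNO_BYTES 8
#define BIT(n) (1u << (n))

static unsigned int offset_in_page(unsigned int v)
{
    return v & 4095u;
}

static unsigned int next_slot(unsigned int ofs)
{
    unsigned int next_ofs = offset_in_page(ofs + TIMELINE_SEQNO_BYTES);

    /* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
    if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
        next_ofs = offset_in_page(next_ofs + BIT(5));

    return next_ofs;
}

int main(void)
{
    unsigned int ofs = 0, n = 0;

    do {    /* prints 0, 8, 16, 24, 64, 72, ... */
        printf("%u\n", ofs);
        n++;
        ofs = next_slot(ofs);
    } while (ofs);

    printf("slots per page: %u\n", n);  /* 256 */
    return 0;
}
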
>> diff --git a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
>> index b491a64919c8..9646200d2792 100644
>> --- a/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/gen2_engine_cs.c
>> @@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
>>                      int flush, int post)
>>   {
>>       GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
>> -    GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
>> +    GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>>         *cs++ = MI_FLUSH;
>>   diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
>> index ce38d1bcaba3..dd094e21bb51 100644
>> --- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
>> @@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>>            PIPE_CONTROL_DC_FLUSH_ENABLE |
>>            PIPE_CONTROL_QW_WRITE |
>>            PIPE_CONTROL_CS_STALL);
>> -    *cs++ = i915_request_active_timeline(rq)->hwsp_offset |
>> +    *cs++ = i915_request_active_offset(rq) |
>
> "Active offset" is maybe a bit non-descriptive. How about i915_request_hwsp_seqno()?
>
>>           PIPE_CONTROL_GLOBAL_GTT;
>>       *cs++ = rq->fence.seqno;
>>   @@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>>            PIPE_CONTROL_QW_WRITE |
>>            PIPE_CONTROL_GLOBAL_GTT_IVB |
>>            PIPE_CONTROL_CS_STALL);
>> -    *cs++ = i915_request_active_timeline(rq)->hwsp_offset;
>> +    *cs++ = i915_request_active_offset(rq);
>>       *cs++ = rq->fence.seqno;
>>         *cs++ = MI_USER_INTERRUPT;
>> @@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
>>   u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>>   {
>>       GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
>> -    GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
>> +    GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>>         *cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
>>       *cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
>> @@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
>>       int i;
>>         GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
>> -    GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
>> +    GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
>>         *cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
>>           MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
>> diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
>> index 1972dd5dca00..33eaa0203b1d 100644
>> --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
>> @@ -338,15 +338,13 @@ static inline u32 preempt_address(struct intel_engine_cs *engine)
>>     static u32 hwsp_offset(const struct i915_request *rq)
>>   {
>> -    const struct intel_timeline_cacheline *cl;
>> +    const struct intel_timeline *tl;
>>   -    /* Before the request is executed, the timeline/cachline is fixed */
>> +    /* Before the request is executed, the timeline is fixed */
>> +    tl = rcu_dereference_protected(rq->timeline,
>> +                       !i915_request_signaled(rq));
>>   -    cl = rcu_dereference_protected(rq->hwsp_cacheline, 1);
>> -    if (cl)
>> -        return cl->ggtt_offset;
>> -
>> -    return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
>> +    return page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno);
>
> I don't get this.
>
> tl->hwsp_offset is an offset, not vaddr. What does page_mask_bits on it achieve?
>
> Then rq->hwsp_seqno is a vaddr assigned as:
>
>   rq->hwsp_seqno = tl->hwsp_seqno;
>
> And elsewhere in the patch:
>
> +    timeline->hwsp_map = vaddr;
> +    seqno = vaddr + timeline->hwsp_offset;
> +    WRITE_ONCE(*seqno, 0);
> +    timeline->hwsp_seqno = seqno;
>
> So page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno) seems to me to reduce to 0 + tl->hwsp_offset, or tl->hwsp_offset.
>
> Maybe I am blind..
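
For what it's worth, one reading is that the sum only reduces to tl->hwsp_offset while the timeline has not wrapped: tl->hwsp_offset tracks the timeline's *current* slot, while rq->hwsp_seqno keeps pointing at the slot the request was emitted against (compare the "hwsp_offset may wraparound, so use from->hwsp_seqno" comment later in the patch). Worked arithmetic, assuming 4 KiB pages and the usual i915 helper semantics, where page_mask_bits() clears the low PAGE_SHIFT bits and offset_in_page() keeps them:

    /* Illustration only; values made up. Timeline pinned at GGTT
     * 0x10000; the request was emitted against slot +24, and the
     * timeline has since wrapped to slot +64 in the same page. */
    u32 tl_hwsp_offset = 0x10000 + 64;  /* timeline's current slot */
    u32 rq_slot = 24;                   /* offset_in_page(rq->hwsp_seqno) */

    /* page_mask_bits(tl_hwsp_offset)           == 0x10000 (page base) */
    /* page_mask_bits(tl_hwsp_offset) + rq_slot == 0x10018, the slot this
     * request actually writes, whereas tl_hwsp_offset == 0x10040 would
     * be the wrong slot for this request once the timeline has wrapped. */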
>
>>   }
>>     int gen8_emit_init_breadcrumb(struct i915_request *rq)
>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>> index 1847d3c2ea99..1db608d1f26f 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>> @@ -754,6 +754,7 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
>>       frame->rq.engine = engine;
>>       frame->rq.context = ce;
>>       rcu_assign_pointer(frame->rq.timeline, ce->timeline);
>> +    frame->rq.hwsp_seqno = ce->timeline->hwsp_seqno;
>>         frame->ring.vaddr = frame->cs;
>>       frame->ring.size = sizeof(frame->cs);
>> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
>> index a83d3e18254d..f7dab4fc926c 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
>> @@ -39,10 +39,6 @@ struct intel_gt {
>>       struct intel_gt_timelines {
>>           spinlock_t lock; /* protects active_list */
>>           struct list_head active_list;
>> -
>> -        /* Pack multiple timelines' seqnos into the same page */
>> -        spinlock_t hwsp_lock;
>> -        struct list_head hwsp_free_list;
>>       } timelines;
>>         struct intel_gt_requests {
>> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
>> index 7fe05918a76e..3d43f15f34f3 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
>> @@ -12,21 +12,9 @@
>>   #include "intel_ring.h"
>>   #include "intel_timeline.h"
>>   -#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
>> -#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))
>> +#define TIMELINE_SEQNO_BYTES 8
>>   -#define CACHELINE_BITS 6
>> -#define CACHELINE_FREE CACHELINE_BITS
>> -
>> -struct intel_timeline_hwsp {
>> -    struct intel_gt *gt;
>> -    struct intel_gt_timelines *gt_timelines;
>> -    struct list_head free_link;
>> -    struct i915_vma *vma;
>> -    u64 free_bitmap;
>> -};
>> -
>> -static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
>> +static struct i915_vma *hwsp_alloc(struct intel_gt *gt)
>>   {
>>       struct drm_i915_private *i915 = gt->i915;
>>       struct drm_i915_gem_object *obj;
>> @@ -45,220 +33,59 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
>>       return vma;
>>   }
>>   -static struct i915_vma *
>> -hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
>> -{
>> -    struct intel_gt_timelines *gt = &timeline->gt->timelines;
>> -    struct intel_timeline_hwsp *hwsp;
>> -
>> -    BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
>> -
>> -    spin_lock_irq(&gt->hwsp_lock);
>> -
>> -    /* hwsp_free_list only contains HWSP that have available cachelines */
>> -    hwsp = list_first_entry_or_null(&gt->hwsp_free_list,
>> -                    typeof(*hwsp), free_link);
>> -    if (!hwsp) {
>> -        struct i915_vma *vma;
>> -
>> -        spin_unlock_irq(&gt->hwsp_lock);
>> -
>> -        hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL);
>> -        if (!hwsp)
>> -            return ERR_PTR(-ENOMEM);
>> -
>> -        vma = __hwsp_alloc(timeline->gt);
>> -        if (IS_ERR(vma)) {
>> -            kfree(hwsp);
>> -            return vma;
>> -        }
>> -
>> -        GT_TRACE(timeline->gt, "new HWSP allocated\n");
>> -
>> -        vma->private = hwsp;
>> -        hwsp->gt = timeline->gt;
>> -        hwsp->vma = vma;
>> -        hwsp->free_bitmap = ~0ull;
>> -        hwsp->gt_timelines = gt;
>> -
>> -        spin_lock_irq(&gt->hwsp_lock);
>> -        list_add(&hwsp->free_link, &gt->hwsp_free_list);
>> -    }
>> -
>> -    GEM_BUG_ON(!hwsp->free_bitmap);
>> -    *cacheline = __ffs64(hwsp->free_bitmap);
>> -    hwsp->free_bitmap &= ~BIT_ULL(*cacheline);
>> -    if (!hwsp->free_bitmap)
>> -        list_del(&hwsp->free_link);
>> -
>> -    spin_unlock_irq(&gt->hwsp_lock);
>> -
>> -    GEM_BUG_ON(hwsp->vma->private != hwsp);
>> -    return hwsp->vma;
>> -}
>> -
>> -static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
>> -{
>> -    struct intel_gt_timelines *gt = hwsp->gt_timelines;
>> -    unsigned long flags;
>> -
>> -    spin_lock_irqsave(&gt->hwsp_lock, flags);
>> -
>> -    /* As a cacheline becomes available, publish the HWSP on the freelist */
>> -    if (!hwsp->free_bitmap)
>> -        list_add_tail(&hwsp->free_link, &gt->hwsp_free_list);
>> -
>> -    GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap));
>> -    hwsp->free_bitmap |= BIT_ULL(cacheline);
>> -
>> -    /* And if no one is left using it, give the page back to the system */
>> -    if (hwsp->free_bitmap == ~0ull) {
>> -        i915_vma_put(hwsp->vma);
>> -        list_del(&hwsp->free_link);
>> -        kfree(hwsp);
>> -    }
>> -
>> -    spin_unlock_irqrestore(&gt->hwsp_lock, flags);
>> -}
>> -
>> -static void __rcu_cacheline_free(struct rcu_head *rcu)
>> -{
>> -    struct intel_timeline_cacheline *cl =
>> -        container_of(rcu, typeof(*cl), rcu);
>> -
>> -    /* Must wait until after all *rq->hwsp are complete before removing */
>> -    i915_gem_object_unpin_map(cl->hwsp->vma->obj);
>> -    __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
>> -
>> -    i915_active_fini(&cl->active);
>> -    kfree(cl);
>> -}
>> -
>> -static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
>> -{
>> -    GEM_BUG_ON(!i915_active_is_idle(&cl->active));
>> -    call_rcu(&cl->rcu, __rcu_cacheline_free);
>> -}
>> -
>>   __i915_active_call
>> -static void __cacheline_retire(struct i915_active *active)
>> +static void __timeline_retire(struct i915_active *active)
>>   {
>> -    struct intel_timeline_cacheline *cl =
>> -        container_of(active, typeof(*cl), active);
>> +    struct intel_timeline *tl =
>> +        container_of(active, typeof(*tl), active);
>>   -    i915_vma_unpin(cl->hwsp->vma);
>> -    if (ptr_test_bit(cl->vaddr, CACHELINE_FREE))
>> -        __idle_cacheline_free(cl);
>> +    i915_vma_unpin(tl->hwsp_ggtt);
>> +    intel_timeline_put(tl);
>>   }
>>   -static int __cacheline_active(struct i915_active *active)
>> +static int __timeline_active(struct i915_active *active)
>>   {
>> -    struct intel_timeline_cacheline *cl =
>> -        container_of(active, typeof(*cl), active);
>> +    struct intel_timeline *tl =
>> +        container_of(active, typeof(*tl), active);
>>   -    __i915_vma_pin(cl->hwsp->vma);
>> +    __i915_vma_pin(tl->hwsp_ggtt);
>> +    intel_timeline_get(tl);
>>       return 0;
>>   }
>>   -static struct intel_timeline_cacheline *
>> -cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
>> -{
>> -    struct intel_timeline_cacheline *cl;
>> -    void *vaddr;
>> -
>> -    GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));
>> -
>> -    cl = kmalloc(sizeof(*cl), GFP_KERNEL);
>> -    if (!cl)
>> -        return ERR_PTR(-ENOMEM);
>> -
>> -    vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB);
>> -    if (IS_ERR(vaddr)) {
>> -        kfree(cl);
>> -        return ERR_CAST(vaddr);
>> -    }
>> -
>> -    cl->hwsp = hwsp;
>> -    cl->vaddr = page_pack_bits(vaddr, cacheline);
>> -
>> -    i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
>> -
>> -    return cl;
>> -}
>> -
>> -static void cacheline_acquire(struct intel_timeline_cacheline *cl,
>> -                  u32 ggtt_offset)
>> -{
>> -    if (!cl)
>> -        return;
>> -
>> -    cl->ggtt_offset = ggtt_offset;
>> -    i915_active_acquire(&cl->active);
>> -}
>> -
>> -static void cacheline_release(struct intel_timeline_cacheline *cl)
>> -{
>> -    if (cl)
>> -        i915_active_release(&cl->active);
>> -}
>> -
>> -static void cacheline_free(struct intel_timeline_cacheline *cl)
>> -{
>> -    if (!i915_active_acquire_if_busy(&cl->active)) {
>> -        __idle_cacheline_free(cl);
>> -        return;
>> -    }
>> -
>> -    GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
>> -    cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);
>> -
>> -    i915_active_release(&cl->active);
>> -}
>> -
>>   static int intel_timeline_init(struct intel_timeline *timeline,
>>                      struct intel_gt *gt,
>>                      struct i915_vma *hwsp,
>>                      unsigned int offset)
>>   {
>>       void *vaddr;
>> +    u32 *seqno;
>>         kref_init(&timeline->kref);
>>       atomic_set(&timeline->pin_count, 0);
>>         timeline->gt = gt;
>>   -    timeline->has_initial_breadcrumb = !hwsp;
>> -    timeline->hwsp_cacheline = NULL;
>> -
>> -    if (!hwsp) {
>> -        struct intel_timeline_cacheline *cl;
>> -        unsigned int cacheline;
>> -
>> -        hwsp = hwsp_alloc(timeline, &cacheline);
>> +    if (hwsp) {
>> +        timeline->hwsp_offset = offset;
>> +        timeline->hwsp_ggtt = i915_vma_get(hwsp);
>> +    } else {
>> +        timeline->has_initial_breadcrumb = true;
>> +        hwsp = hwsp_alloc(gt);
>>           if (IS_ERR(hwsp))
>>               return PTR_ERR(hwsp);
>> -
>> -        cl = cacheline_alloc(hwsp->private, cacheline);
>> -        if (IS_ERR(cl)) {
>> -            __idle_hwsp_free(hwsp->private, cacheline);
>> -            return PTR_ERR(cl);
>> -        }
>> -
>> -        timeline->hwsp_cacheline = cl;
>> -        timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
>> -
>> -        vaddr = page_mask_bits(cl->vaddr);
>> -    } else {
>> -        timeline->hwsp_offset = offset;
>> -        vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
>> -        if (IS_ERR(vaddr))
>> -            return PTR_ERR(vaddr);
>> +        timeline->hwsp_ggtt = hwsp;
>>       }
>>   -    timeline->hwsp_seqno =
>> -        memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES);
>> +    vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
>> +    if (IS_ERR(vaddr))
>> +        return PTR_ERR(vaddr);
>> +
>> +    timeline->hwsp_map = vaddr;
>> +    seqno = vaddr + timeline->hwsp_offset;
>> +    WRITE_ONCE(*seqno, 0);
>> +    timeline->hwsp_seqno = seqno;
>>   -    timeline->hwsp_ggtt = i915_vma_get(hwsp);
>>       GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
>>         timeline->fence_context = dma_fence_context_alloc(1);
>> @@ -269,6 +96,7 @@ static int intel_timeline_init(struct intel_timeline *timeline,
>>       INIT_LIST_HEAD(&timeline->requests);
>>         i915_syncmap_init(&timeline->sync);
>> +    i915_active_init(&timeline->active, __timeline_active, __timeline_retire);
>>         return 0;
>>   }
>> @@ -279,23 +107,18 @@ void intel_gt_init_timelines(struct intel_gt *gt)
>>         spin_lock_init(&timelines->lock);
>>       INIT_LIST_HEAD(&timelines->active_list);
>> -
>> -    spin_lock_init(&timelines->hwsp_lock);
>> -    INIT_LIST_HEAD(&timelines->hwsp_free_list);
>>   }
>>   -static void intel_timeline_fini(struct intel_timeline *timeline)
>> +static void intel_timeline_fini(struct rcu_head *rcu)
>>   {
>> -    GEM_BUG_ON(atomic_read(&timeline->pin_count));
>> -    GEM_BUG_ON(!list_empty(&timeline->requests));
>> -    GEM_BUG_ON(timeline->retire);
>> +    struct intel_timeline *timeline =
>> +        container_of(rcu, struct intel_timeline, rcu);
>>   -    if (timeline->hwsp_cacheline)
>> -        cacheline_free(timeline->hwsp_cacheline);
>> -    else
>> -        i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>> +    i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
>>         i915_vma_put(timeline->hwsp_ggtt);
>> +    i915_active_fini(&timeline->active);
>> +    kfree(timeline);
>>   }
>>     struct intel_timeline *
>> @@ -361,9 +184,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>>       GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
>>            tl->fence_context, tl->hwsp_offset);
>>   -    cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
>> +    i915_active_acquire(&tl->active);
>>       if (atomic_fetch_inc(&tl->pin_count)) {
>> -        cacheline_release(tl->hwsp_cacheline);
>> +        i915_active_release(&tl->active);
>>           __i915_vma_unpin(tl->hwsp_ggtt);
>>       }
>>   @@ -372,9 +195,13 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
>>     void intel_timeline_reset_seqno(const struct intel_timeline *tl)
>>   {
>> +    u32 *hwsp_seqno = (u32 *)tl->hwsp_seqno;
>>       /* Must be pinned to be writable, and no requests in flight. */
>>       GEM_BUG_ON(!atomic_read(&tl->pin_count));
>> -    WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>> +
>> +    memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
>> +    WRITE_ONCE(*hwsp_seqno, tl->seqno);
>> +    clflush(hwsp_seqno);
>>   }
>>     void intel_timeline_enter(struct intel_timeline *tl)
>> @@ -450,106 +277,23 @@ static u32 timeline_advance(struct intel_timeline *tl)
>>       return tl->seqno += 1 + tl->has_initial_breadcrumb;
>>   }
>>   -static void timeline_rollback(struct intel_timeline *tl)
>> -{
>> -    tl->seqno -= 1 + tl->has_initial_breadcrumb;
>> -}
>> -
>>   static noinline int
>>   __intel_timeline_get_seqno(struct intel_timeline *tl,
>> -               struct i915_request *rq,
>>                  u32 *seqno)
>>   {
>> -    struct intel_timeline_cacheline *cl;
>> -    unsigned int cacheline;
>> -    struct i915_vma *vma;
>> -    void *vaddr;
>> -    int err;
>> -
>> -    might_lock(&tl->gt->ggtt->vm.mutex);
>> -    GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context);
>> -
>> -    /*
>> -     * If there is an outstanding GPU reference to this cacheline,
>> -     * such as it being sampled by a HW semaphore on another timeline,
>> -     * we cannot wraparound our seqno value (the HW semaphore does
>> -     * a strict greater-than-or-equals compare, not i915_seqno_passed).
>> -     * So if the cacheline is still busy, we must detach ourselves
>> -     * from it and leave it inflight alongside its users.
>> -     *
>> -     * However, if nobody is watching and we can guarantee that nobody
>> -     * will, we could simply reuse the same cacheline.
>> -     *
>> -     * if (i915_active_request_is_signaled(&tl->last_request) &&
>> -     *     i915_active_is_signaled(&tl->hwsp_cacheline->active))
>> -     *    return 0;
>> -     *
>> -     * That seems unlikely for a busy timeline that needed to wrap in
>> -     * the first place, so just replace the cacheline.
>> -     */
>> -
>> -    vma = hwsp_alloc(tl, &cacheline);
>> -    if (IS_ERR(vma)) {
>> -        err = PTR_ERR(vma);
>> -        goto err_rollback;
>> -    }
>> -
>> -    err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
>> -    if (err) {
>> -        __idle_hwsp_free(vma->private, cacheline);
>> -        goto err_rollback;
>> -    }
>> +    u32 next_ofs = offset_in_page(tl->hwsp_offset + TIMELINE_SEQNO_BYTES);
>>   -    cl = cacheline_alloc(vma->private, cacheline);
>> -    if (IS_ERR(cl)) {
>> -        err = PTR_ERR(cl);
>> -        __idle_hwsp_free(vma->private, cacheline);
>> -        goto err_unpin;
>> -    }
>> -    GEM_BUG_ON(cl->hwsp->vma != vma);
>> -
>> -    /*
>> -     * Attach the old cacheline to the current request, so that we only
>> -     * free it after the current request is retired, which ensures that
>> -     * all writes into the cacheline from previous requests are complete.
>> -     */
>> -    err = i915_active_ref(&tl->hwsp_cacheline->active,
>> -                  tl->fence_context,
>> -                  &rq->fence);
>> -    if (err)
>> -        goto err_cacheline;
>> +    /* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
>> +    if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
>> +        next_ofs = offset_in_page(next_ofs + BIT(5));
>>   -    cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */
>> -    cacheline_free(tl->hwsp_cacheline);
>> -
>> -    i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */
>> -    i915_vma_put(tl->hwsp_ggtt);
>> -
>> -    tl->hwsp_ggtt = i915_vma_get(vma);
>> -
>> -    vaddr = page_mask_bits(cl->vaddr);
>> -    tl->hwsp_offset = cacheline * CACHELINE_BYTES;
>> -    tl->hwsp_seqno =
>> -        memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES);
>> -
>> -    tl->hwsp_offset += i915_ggtt_offset(vma);
>> -    GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
>> -         tl->fence_context, tl->hwsp_offset);
>> -
>> -    cacheline_acquire(cl, tl->hwsp_offset);
>> -    tl->hwsp_cacheline = cl;
>> +    tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + next_ofs;
>> +    tl->hwsp_seqno = tl->hwsp_map + next_ofs;
>> +    intel_timeline_reset_seqno(tl);
>>         *seqno = timeline_advance(tl);
>>       GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno));
>>       return 0;
>> -
>> -err_cacheline:
>> -    cacheline_free(cl);
>> -err_unpin:
>> -    i915_vma_unpin(vma);
>> -err_rollback:
>> -    timeline_rollback(tl);
>> -    return err;
>>   }
>>     int intel_timeline_get_seqno(struct intel_timeline *tl,
>> @@ -559,51 +303,52 @@ int intel_timeline_get_seqno(struct intel_timeline *tl,
>>       *seqno = timeline_advance(tl);
>>         /* Replace the HWSP on wraparound for HW semaphores */
>> -    if (unlikely(!*seqno && tl->hwsp_cacheline))
>> -        return __intel_timeline_get_seqno(tl, rq, seqno);
>> +    if (unlikely(!*seqno && tl->has_initial_breadcrumb))
>> +        return __intel_timeline_get_seqno(tl, seqno);
>>         return 0;
>>   }
>>   -static int cacheline_ref(struct intel_timeline_cacheline *cl,
>> -             struct i915_request *rq)
>> -{
>> -    return i915_active_add_request(&cl->active, rq);
>> -}
>> -
>>   int intel_timeline_read_hwsp(struct i915_request *from,
>>                    struct i915_request *to,
>>                    u32 *hwsp)
>>   {
>> -    struct intel_timeline_cacheline *cl;
>> +    struct intel_timeline *tl;
>>       int err;
>>   -    GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline));
>> -
>>       rcu_read_lock();
>> -    cl = rcu_dereference(from->hwsp_cacheline);
>> -    if (i915_request_completed(from)) /* confirm cacheline is valid */
>> -        goto unlock;
>> -    if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
>> -        goto unlock; /* seqno wrapped and completed! */
>> -    if (unlikely(i915_request_completed(from)))
>> -        goto release;
>> +    tl = rcu_dereference(from->timeline);
>> +    if (i915_request_completed(from) ||
>> +        !i915_active_acquire_if_busy(&tl->active))
>> +        tl = NULL;
>> +
>> +    if (tl) {
>> +        /* hwsp_offset may wraparound, so use from->hwsp_seqno */
>> +        *hwsp = i915_ggtt_offset(tl->hwsp_ggtt) +
>> +            offset_in_page(from->hwsp_seqno);
>> +    }
>> +
>> +    /* ensure we wait on the right request, if not, we completed */
>> +    if (tl && i915_request_completed(from)) {
>> +        i915_active_release(&tl->active);
>> +        tl = NULL;
>> +    }
>>       rcu_read_unlock();
>>   -    err = cacheline_ref(cl, to);
>> -    if (err)
>> +    if (!tl)
>> +        return 1;
>> +
>> +    /* Can't do semaphore waits on kernel context */
>> +    if (!tl->has_initial_breadcrumb) {
>> +        err = -EINVAL;
>>           goto out;
>> +    }
>> +
>> +    err = i915_active_add_request(&tl->active, to);
>>   -    *hwsp = cl->ggtt_offset;
>>   out:
>> -    i915_active_release(&cl->active);
>> +    i915_active_release(&tl->active);
>>       return err;
>> -
>> -release:
>> -    i915_active_release(&cl->active);
>> -unlock:
>> -    rcu_read_unlock();
>> -    return 1;
>>   }
>>     void intel_timeline_unpin(struct intel_timeline *tl)
>> @@ -612,8 +357,7 @@ void intel_timeline_unpin(struct intel_timeline *tl)
>>       if (!atomic_dec_and_test(&tl->pin_count))
>>           return;
>>   -    cacheline_release(tl->hwsp_cacheline);
>> -
>> +    i915_active_release(&tl->active);
>>       __i915_vma_unpin(tl->hwsp_ggtt);
>>   }
>>   @@ -622,8 +366,11 @@ void __intel_timeline_free(struct kref *kref)
>>       struct intel_timeline *timeline =
>>           container_of(kref, typeof(*timeline), kref);
>>   -    intel_timeline_fini(timeline);
>> -    kfree_rcu(timeline, rcu);
>> +    GEM_BUG_ON(atomic_read(&timeline->pin_count));
>> +    GEM_BUG_ON(!list_empty(&timeline->requests));
>> +    GEM_BUG_ON(timeline->retire);
>> +
>> +    call_rcu(&timeline->rcu, intel_timeline_fini);
>>   }
>>     void intel_gt_fini_timelines(struct intel_gt *gt)
>> @@ -631,7 +378,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt)
>>       struct intel_gt_timelines *timelines = &gt->timelines;
>>         GEM_BUG_ON(!list_empty(&timelines->active_list));
>> -    GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
>>   }
>>     void intel_gt_show_timelines(struct intel_gt *gt,
>> diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
>> index e360f50706bf..9c1d767a867f 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
>> @@ -18,7 +18,6 @@
>>   struct i915_vma;
>>   struct i915_syncmap;
>>   struct intel_gt;
>> -struct intel_timeline_hwsp;
>>     struct intel_timeline {
>>       u64 fence_context;
>> @@ -45,12 +44,11 @@ struct intel_timeline {
>>       atomic_t pin_count;
>>       atomic_t active_count;
>>   +    void *hwsp_map;
>>       const u32 *hwsp_seqno;
>>       struct i915_vma *hwsp_ggtt;
>>       u32 hwsp_offset;
>>   -    struct intel_timeline_cacheline *hwsp_cacheline;
>> -
>>       bool has_initial_breadcrumb;
>>         /**
>> @@ -67,6 +65,8 @@ struct intel_timeline {
>>        */
>>       struct i915_active_fence last_request;
>>   +    struct i915_active active;
>> +
>>       /** A chain of completed timelines ready for early retirement. */
>>       struct intel_timeline *retire;
>>   @@ -90,15 +90,4 @@ struct intel_timeline {
>>       struct rcu_head rcu;
>>   };
>>   -struct intel_timeline_cacheline {
>> -    struct i915_active active;
>> -
>> -    struct intel_timeline_hwsp *hwsp;
>> -    void *vaddr;
>> -
>> -    u32 ggtt_offset;
>> -
>> -    struct rcu_head rcu;
>> -};
>> -
>>   #endif /* __I915_TIMELINE_TYPES_H__ */
>> diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
>> index 439c8984f5fa..fd9414b0d1bd 100644
>> --- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
>> @@ -42,6 +42,9 @@ static int perf_end(struct intel_gt *gt)
>>     static int write_timestamp(struct i915_request *rq, int slot)
>>   {
>> +    struct intel_timeline *tl =
>> +        rcu_dereference_protected(rq->timeline,
>> +                      !i915_request_signaled(rq));
>>       u32 cmd;
>>       u32 *cs;
>>   @@ -54,7 +57,7 @@ static int write_timestamp(struct i915_request *rq, int slot)
>>           cmd++;
>>       *cs++ = cmd;
>>       *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base));
>> -    *cs++ = i915_request_timeline(rq)->hwsp_offset + slot * sizeof(u32);
>> +    *cs++ = tl->hwsp_offset + slot * sizeof(u32);
>>       *cs++ = 0;
>>         intel_ring_advance(rq, cs);
>> diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
>> index 6f3a3687ef0f..6e32e91cdab4 100644
>> --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
>> +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
>> @@ -666,7 +666,7 @@ static int live_hwsp_wrap(void *arg)
>>       if (IS_ERR(tl))
>>           return PTR_ERR(tl);
>>   -    if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
>> +    if (!tl->has_initial_breadcrumb)
>>           goto out_free;
>>         err = intel_timeline_pin(tl, NULL);
>> @@ -833,12 +833,26 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
>>       return 0;
>>   }
>>   +static void switch_tl_lock(struct i915_request *from, struct i915_request *to)
>> +{
>> +    /* some light mutex juggling required; think co-routines */
>> +
>> +    if (from) {
>> +        lockdep_unpin_lock(&from->context->timeline->mutex, from->cookie);
>> +        mutex_unlock(&from->context->timeline->mutex);
>> +    }
>> +
>> +    if (to) {
>> +        mutex_lock(&to->context->timeline->mutex);
>> +        to->cookie = lockdep_pin_lock(&to->context->timeline->mutex);
>> +    }
>> +}
>> +
>>   static int create_watcher(struct hwsp_watcher *w,
>>                 struct intel_engine_cs *engine,
>>                 int ringsz)
>>   {
>>       struct intel_context *ce;
>> -    struct intel_timeline *tl;
>>         ce = intel_context_create(engine);
>>       if (IS_ERR(ce))
>> @@ -851,11 +865,8 @@ static int create_watcher(struct hwsp_watcher *w,
>>           return PTR_ERR(w->rq);
>>         w->addr = i915_ggtt_offset(w->vma);
>> -    tl = w->rq->context->timeline;
>>   -    /* some light mutex juggling required; think co-routines */
>> -    lockdep_unpin_lock(&tl->mutex, w->rq->cookie);
>> -    mutex_unlock(&tl->mutex);
>> +    switch_tl_lock(w->rq, NULL);
>>         return 0;
>>   }
>> @@ -864,15 +875,13 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
>>                bool (*op)(u32 hwsp, u32 seqno))
>>   {
>>       struct i915_request *rq = fetch_and_zero(&w->rq);
>> -    struct intel_timeline *tl = rq->context->timeline;
>>       u32 offset, end;
>>       int err;
>>         GEM_BUG_ON(w->addr - i915_ggtt_offset(w->vma) > w->vma->size);
>>         i915_request_get(rq);
>> -    mutex_lock(&tl->mutex);
>> -    rq->cookie = lockdep_pin_lock(&tl->mutex);
>> +    switch_tl_lock(NULL, rq);
>>       i915_request_add(rq);
>>         if (i915_request_wait(rq, 0, HZ) < 0) {
>> @@ -901,10 +910,7 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
>>   static void cleanup_watcher(struct hwsp_watcher *w)
>>   {
>>       if (w->rq) {
>> -        struct intel_timeline *tl = w->rq->context->timeline;
>> -
>> -        mutex_lock(&tl->mutex);
>> -        w->rq->cookie = lockdep_pin_lock(&tl->mutex);
>> +        switch_tl_lock(NULL, w->rq);
>>             i915_request_add(w->rq);
>>       }
>> @@ -942,7 +948,7 @@ static struct i915_request *wrap_timeline(struct i915_request *rq)
>>       }
>>         i915_request_put(rq);
>> -    rq = intel_context_create_request(ce);
>> +    rq = i915_request_create(ce);
>>       if (IS_ERR(rq))
>>           return rq;
>>   @@ -977,7 +983,7 @@ static int live_hwsp_read(void *arg)
>>       if (IS_ERR(tl))
>>           return PTR_ERR(tl);
>>   -    if (!tl->hwsp_cacheline)
>> +    if (!tl->has_initial_breadcrumb)
>>           goto out_free;
>>         for (i = 0; i < ARRAY_SIZE(watcher); i++) {
>> @@ -999,7 +1005,7 @@ static int live_hwsp_read(void *arg)
>>           do {
>>               struct i915_sw_fence *submit;
>>               struct i915_request *rq;
>> -            u32 hwsp;
>> +            u32 hwsp, dummy;
>>                 submit = heap_fence_create(GFP_KERNEL);
>>               if (!submit) {
>> @@ -1017,14 +1023,26 @@ static int live_hwsp_read(void *arg)
>>                   goto out;
>>               }
>>   -            /* Skip to the end, saving 30 minutes of nops */
>> -            tl->seqno = -10u + 2 * (count & 3);
>> -            WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>>               ce->timeline = intel_timeline_get(tl);
>>   -            rq = intel_context_create_request(ce);
>> +            /* Ensure timeline is mapped, done during first pin */
>> +            err = intel_context_pin(ce);
>> +            if (err) {
>> +                intel_context_put(ce);
>> +                goto out;
>> +            }
>> +
>> +            /*
>> +             * Start at a new wrap, and set seqno right before another wrap,
>> +             * saving 30 minutes of nops
>> +             */
>> +            tl->seqno = -12u + 2 * (count & 3);
>> +            __intel_timeline_get_seqno(tl, &dummy);
>> +
>> +            rq = i915_request_create(ce);
>>               if (IS_ERR(rq)) {
>>                   err = PTR_ERR(rq);
>> +                intel_context_unpin(ce);
>>                   intel_context_put(ce);
>>                   goto out;
>>               }
>> @@ -1034,32 +1052,35 @@ static int live_hwsp_read(void *arg)
>>                                   GFP_KERNEL);
>>               if (err < 0) {
>>                   i915_request_add(rq);
>> +                intel_context_unpin(ce);
>>                   intel_context_put(ce);
>>                   goto out;
>>               }
>>   -            mutex_lock(&watcher[0].rq->context->timeline->mutex);
>> +            switch_tl_lock(rq, watcher[0].rq);
>>               err = intel_timeline_read_hwsp(rq, watcher[0].rq, &hwsp);
>>               if (err == 0)
>>                   err = emit_read_hwsp(watcher[0].rq, /* before */
>>                                rq->fence.seqno, hwsp,
>>                                &watcher[0].addr);
>> -            mutex_unlock(&watcher[0].rq->context->timeline->mutex);
>> +            switch_tl_lock(watcher[0].rq, rq);
>
> Ugh this is some advanced stuff. Trickiness level out of 10?
>
> Change is "touching" two mutexes instead of one.
>
> Do you have a branch somewhere I could check out at this stage and look at the code to try and figure it out more easily?
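
It is only ever a lock handover, never nesting: switch_tl_lock() above
always unlocks "from" before locking "to", so at most one of the two
timeline mutexes is held while emitting, and lockdep never sees them
nested. A minimal sketch of the pattern, ignoring the lockdep pin/unpin
bookkeeping:

	switch_tl_lock(rq, watcher[0].rq);	/* drop rq's timeline mutex, take the watcher's */
	/* ... emit_read_hwsp() into the watcher's ring ... */
	switch_tl_lock(watcher[0].rq, rq);	/* hand the mutex back to rq */
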
>
>>               if (err) {
>>                   i915_request_add(rq);
>> +                intel_context_unpin(ce);
>>                   intel_context_put(ce);
>>                   goto out;
>>               }
>>   -            mutex_lock(&watcher[1].rq->context->timeline->mutex);
>> +            switch_tl_lock(rq, watcher[1].rq);
>>               err = intel_timeline_read_hwsp(rq, watcher[1].rq, &hwsp);
>>               if (err == 0)
>>                   err = emit_read_hwsp(watcher[1].rq, /* after */
>>                                rq->fence.seqno, hwsp,
>>                                &watcher[1].addr);
>> -            mutex_unlock(&watcher[1].rq->context->timeline->mutex);
>> +            switch_tl_lock(watcher[1].rq, rq);
>>               if (err) {
>>                   i915_request_add(rq);
>> +                intel_context_unpin(ce);
>>                   intel_context_put(ce);
>>                   goto out;
>>               }
>> @@ -1068,6 +1089,7 @@ static int live_hwsp_read(void *arg)
>>               i915_request_add(rq);
>>                 rq = wrap_timeline(rq);
>> +            intel_context_unpin(ce);
>>               intel_context_put(ce);
>>               if (IS_ERR(rq)) {
>>                   err = PTR_ERR(rq);
>> @@ -1107,8 +1129,8 @@ static int live_hwsp_read(void *arg)
>>                   3 * watcher[1].rq->ring->size)
>>                   break;
>>   -        } while (!__igt_timeout(end_time, NULL));
>> -        WRITE_ONCE(*(u32 *)tl->hwsp_seqno, 0xdeadbeef);
>> +        } while (!__igt_timeout(end_time, NULL) &&
>> +             count < (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2);
>>             pr_info("%s: simulated %lu wraps\n", engine->name, count);
>>           err = check_watcher(&watcher[1], "after", cmp_gte);
>> @@ -1153,9 +1175,7 @@ static int live_hwsp_rollover_kernel(void *arg)
>>           }
>>             GEM_BUG_ON(i915_active_fence_isset(&tl->last_request));
>> -        tl->seqno = 0;
>> -        timeline_rollback(tl);
>> -        timeline_rollback(tl);
>> +        tl->seqno = -2u;
>>           WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>>             for (i = 0; i < ARRAY_SIZE(rq); i++) {
>> @@ -1235,11 +1255,10 @@ static int live_hwsp_rollover_user(void *arg)
>>               goto out;
>>             tl = ce->timeline;
>> -        if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
>> +        if (!tl->has_initial_breadcrumb)
>>               goto out;
>>   -        timeline_rollback(tl);
>> -        timeline_rollback(tl);
>> +        tl->seqno = -4u;
>>           WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
>>             for (i = 0; i < ARRAY_SIZE(rq); i++) {
>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>> index bfd82d8259f4..b661b3b5992e 100644
>> --- a/drivers/gpu/drm/i915/i915_request.c
>> +++ b/drivers/gpu/drm/i915/i915_request.c
>> @@ -851,7 +851,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
>>       rq->fence.seqno = seqno;
>>         RCU_INIT_POINTER(rq->timeline, tl);
>> -    RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline);
>>       rq->hwsp_seqno = tl->hwsp_seqno;
>>       GEM_BUG_ON(i915_request_completed(rq));
>>   @@ -1090,9 +1089,6 @@ emit_semaphore_wait(struct i915_request *to,
>>       if (i915_request_has_initial_breadcrumb(to))
>>           goto await_fence;
>>   -    if (!rcu_access_pointer(from->hwsp_cacheline))
>> -        goto await_fence;
>> -
>>       /*
>>        * If this or its dependents are waiting on an external fence
>>        * that may fail catastrophically, then we want to avoid using
>> diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
>> index a8c413203f72..c25b0afcc225 100644
>> --- a/drivers/gpu/drm/i915/i915_request.h
>> +++ b/drivers/gpu/drm/i915/i915_request.h
>> @@ -237,16 +237,6 @@ struct i915_request {
>>        */
>>       const u32 *hwsp_seqno;
>>   -    /*
>> -     * If we need to access the timeline's seqno for this request in
>> -     * another request, we need to keep a read reference to this associated
>> -     * cacheline, so that we do not free and recycle it before the foreign
>> -     * observers have completed. Hence, we keep a pointer to the cacheline
>> -     * inside the timeline's HWSP vma, but it is only valid while this
>> -     * request has not completed and guarded by the timeline mutex.
>> -     */
>> -    struct intel_timeline_cacheline __rcu *hwsp_cacheline;
>> -
>>       /** Position in the ring of the start of the request */
>>       u32 head;
>>   @@ -615,4 +605,11 @@ i915_request_active_timeline(const struct i915_request *rq)
>>                        lockdep_is_held(&rq->engine->active.lock));
>>   }
>>   +static inline u32
>> +i915_request_active_offset(const struct i915_request *rq)
>> +{
>> +    return page_mask_bits(i915_request_active_timeline(rq)->hwsp_offset) +
>> +           offset_in_page(rq->hwsp_seqno);
>
> Seems the same conundrum as I had earlier in hwsp_offset(). TBD.
>
After discussion on irc, we'll rename the function to
i915_request_active_seqno(), and probably rewrite it for clarity by
splitting the offset into two named components; see the sketch below.
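
A minimal sketch of what the renamed helper would then look like (the
return value is unchanged from i915_request_active_offset() above, only
the two components of the sum get named):

	static inline u32
	i915_request_active_seqno(const struct i915_request *rq)
	{
		u32 phys_base =
			page_mask_bits(i915_request_active_timeline(rq)->hwsp_offset);
		u32 rel_hwsp_offset = offset_in_page(rq->hwsp_seqno);

		return phys_base + rel_hwsp_offset;
	}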




* Re: [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5.
  2021-01-18 12:55       ` Thomas Hellström (Intel)
  2021-01-18 14:43         ` Maarten Lankhorst
@ 2021-01-20 13:32         ` Thomas Hellström (Intel)
  1 sibling, 0 replies; 92+ messages in thread
From: Thomas Hellström (Intel) @ 2021-01-20 13:32 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx


On 1/18/21 1:55 PM, Thomas Hellström (Intel) wrote:
>
> On 1/18/21 1:43 PM, Maarten Lankhorst wrote:
>> Op 18-01-2021 om 12:30 schreef Thomas Hellström (Intel):
>>> Hi,
>>>
>>> On 1/5/21 4:35 PM, Maarten Lankhorst wrote:
>>>> Instead of doing what we do currently, which will never work with
>>>> PROVE_LOCKING, do the same as AMD does, and something similar to
>>>> relocation slowpath. When all locks are dropped, we acquire the
>>>> pages for pinning. When the locks are taken, we transfer those
>>>> pages in .get_pages() to the bo. As a final check before installing
>>>> the fences, we ensure that the mmu notifier was not called; if it is,
>>>> we return -EAGAIN to userspace to signal it has to start over.
>>>>
>>>> Changes since v1:
>>>> - Unbinding is done in submit_init only. submit_begin() removed.
>>>> - MMU_NOTFIER -> MMU_NOTIFIER
>>>> Changes since v2:
>>>> - Make i915->mm.notifier a spinlock.
>>>> Changes since v3:
>>>> - Add WARN_ON if there are any page references left, should have 
>>>> been 0.
>>>> - Return 0 on success in submit_init(), bug from spinlock conversion.
>>>> - Release pvec outside of notifier_lock (Thomas).
>>>> Changes since v4:
>>>> - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
>>>> - Actually check all invalidations in eb_move_to_gpu. (Thomas)
>>>> - Do not wait when process is exiting to fix 
>>>> gem_ctx_persistence.userptr.
>>>>
>>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>
>>> ...
>>>
>>>>    -static int
>>>> -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>>> -                  const struct mmu_notifier_range *range)
>>>> -{
>>>> -    struct i915_mmu_notifier *mn =
>>>> -        container_of(_mn, struct i915_mmu_notifier, mn);
>>>> -    struct interval_tree_node *it;
>>>> -    unsigned long end;
>>>> -    int ret = 0;
>>>> -
>>>> -    if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>>>> -        return 0;
>>>> -
>>>> -    /* interval ranges are inclusive, but invalidate range is 
>>>> exclusive */
>>>> -    end = range->end - 1;
>>>> -
>>>> -    spin_lock(&mn->lock);
>>>> -    it = interval_tree_iter_first(&mn->objects, range->start, end);
>>>> -    while (it) {
>>>> -        struct drm_i915_gem_object *obj;
>>>> -
>>>> -        if (!mmu_notifier_range_blockable(range)) {
>>>> -            ret = -EAGAIN;
>>>> -            break;
>>>> -        }
>>>> +    spin_lock(&i915->mm.notifier_lock);
>>>>    -        /*
>>>> -         * The mmu_object is released late when destroying the
>>>> -         * GEM object so it is entirely possible to gain a
>>>> -         * reference on an object in the process of being freed
>>>> -         * since our serialisation is via the spinlock and not
>>>> -         * the struct_mutex - and consequently use it after it
>>>> -         * is freed and then double free it. To prevent that
>>>> -         * use-after-free we only acquire a reference on the
>>>> -         * object if it is not in the process of being destroyed.
>>>> -         */
>>>> -        obj = container_of(it, struct i915_mmu_object, it)->obj;
>>>> -        if (!kref_get_unless_zero(&obj->base.refcount)) {
>>>> -            it = interval_tree_iter_next(it, range->start, end);
>>>> -            continue;
>>>> -        }
>>>> -        spin_unlock(&mn->lock);
>>>> +    mmu_interval_set_seq(mni, cur_seq);
>>>>    -        ret = i915_gem_object_unbind(obj,
>>>> -                         I915_GEM_OBJECT_UNBIND_ACTIVE |
>>>> -                         I915_GEM_OBJECT_UNBIND_BARRIER);
>>>> -        if (ret == 0)
>>>> -            ret = __i915_gem_object_put_pages(obj);
>>>> -        i915_gem_object_put(obj);
>>>> -        if (ret)
>>>> -            return ret;
>>>> +    spin_unlock(&i915->mm.notifier_lock);
>>>>    -        spin_lock(&mn->lock);
>>>> +    /* During exit there's no need to wait */
>>>> +    if (current->flags & PF_EXITING)
>>>> +        return true;
>>> Did we ever find out why this is needed, that is, why the old userptr
>>> invalidation callback doesn't hang here in a similar way?
>> It's an optimization for teardown, because userptr will be invalidated
>> anyway, but also for gem_ctx_persistence.userptr, although with ulls
>> that test may stop working anyway because it takes an out_fence.
>
> Sure, but what I meant was: did we find out what's different in the
> new code compared to the old one? The old code also waits for the gpu
> when unbinding in the mmu_notifier, but it appears that in the old code
> the mmu notifier is never called here. At least to me it seems it would
> be good if we understood what that difference is.
>
> /Thomas
>
IIRC I did some investigation here as well, and from what I could tell
the notifier was not called at all for the old code. I don't really feel
comfortable with an R-b until we've really understood why.
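
For reference, as I read the diff, the invalidation callback now
essentially reduces to this (sketch, eliding the error handling):

	spin_lock(&i915->mm.notifier_lock);
	mmu_interval_set_seq(mni, cur_seq);
	spin_unlock(&i915->mm.notifier_lock);

	/* During exit there's no need to wait */
	if (current->flags & PF_EXITING)
		return true;

	/* ... otherwise wait for outstanding gpu activity ... */

so outside of process exit the gpu wait is still reachable from the
notifier, which is why the difference from the old code matters.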

Also, there was a discussion with you, Sudeep and Chris about whether
user-space was actually OK with this: you said it worked, while Chris
said tests were showing otherwise. What were those tests?

Thanks,

Thomas




Thread overview: 92+ messages
2021-01-05 15:34 [Intel-gfx] [PATCH v6 00/64] drm/i915: Remove obj->mm.lock! Maarten Lankhorst
2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 01/64] drm/i915: Do not share hwsp across contexts any more, v6 Maarten Lankhorst
2021-01-14 12:06   ` Tvrtko Ursulin
2021-01-18 16:09     ` Maarten Lankhorst
2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 02/64] drm/i915: Pin timeline map after first timeline pin, v3 Maarten Lankhorst
2021-01-18 10:28   ` Thomas Hellström (Intel)
2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 03/64] drm/i915: Move cmd parser pinning to execbuffer Maarten Lankhorst
2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 04/64] drm/i915: Add missing -EDEADLK handling to execbuf pinning, v2 Maarten Lankhorst
2021-01-05 15:34 ` [Intel-gfx] [PATCH v6 05/64] drm/i915: Ensure we hold the object mutex in pin correctly Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 06/64] drm/i915: Add gem object locking to madvise Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 07/64] drm/i915: Move HAS_STRUCT_PAGE to obj->flags Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 08/64] drm/i915: Rework struct phys attachment handling Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 09/64] drm/i915: Convert i915_gem_object_attach_phys() to ww locking, v2 Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 10/64] drm/i915: make lockdep slightly happier about execbuf Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 11/64] drm/i915: Disable userptr pread/pwrite support Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 12/64] drm/i915: No longer allow exporting userptr through dma-buf Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 13/64] drm/i915: Reject more ioctls for userptr Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 14/64] drm/i915: Reject UNSYNCHRONIZED for userptr, v2 Maarten Lankhorst
2021-01-11 20:50   ` Dave Airlie
2021-01-18 10:34   ` Thomas Hellström
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 15/64] drm/i915: Make compilation of userptr code depend on MMU_NOTIFIER Maarten Lankhorst
2021-01-18 11:17   ` Thomas Hellström
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 16/64] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v5 Maarten Lankhorst
2021-01-11 20:51   ` Dave Airlie
2021-01-18 11:30   ` Thomas Hellström (Intel)
2021-01-18 12:43     ` Maarten Lankhorst
2021-01-18 12:55       ` Thomas Hellström (Intel)
2021-01-18 14:43         ` Maarten Lankhorst
2021-01-20 13:32         ` Thomas Hellström (Intel)
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 17/64] drm/i915: Flatten obj->mm.lock Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 18/64] drm/i915: Populate logical context during first pin Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 19/64] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2 Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 20/64] drm/i915: Handle ww locking in init_status_page Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 21/64] drm/i915: Rework clflush to work correctly without obj->mm.lock Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 22/64] drm/i915: Pass ww ctx to intel_pin_to_display_plane Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 23/64] drm/i915: Add object locking to vm_fault_cpu Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 24/64] drm/i915: Move pinning to inside engine_wa_list_verify() Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 25/64] drm/i915: Take reservation lock around i915_vma_pin Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 26/64] drm/i915: Make lrc_init_wa_ctx compatible with ww locking Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 27/64] drm/i915: Make __engine_unpark() " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 28/64] drm/i915: Take obj lock around set_domain ioctl Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 29/64] drm/i915: Defer pin calls in buffer pool until first use by caller Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 30/64] drm/i915: Fix pread/pwrite to work with new locking rules Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 31/64] drm/i915: Fix workarounds selftest, part 1 Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 32/64] drm/i915: Prepare for obj->mm.lock removal Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 33/64] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 34/64] drm/i915: Add ww locking around vm_access() Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 35/64] drm/i915: Increase ww locking for perf Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 36/64] drm/i915: Lock ww in ucode objects correctly Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 37/64] drm/i915: Add ww locking to dma-buf ops Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 38/64] drm/i915: Add missing ww lock in intel_dsb_prepare Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 39/64] drm/i915: Fix ww locking in shmem_create_from_object Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 40/64] drm/i915: Use a single page table lock for each gtt Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 41/64] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 42/64] drm/i915/selftests: Prepare client blit " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 43/64] drm/i915/selftests: Prepare coherency tests " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 44/64] drm/i915/selftests: Prepare context " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 45/64] drm/i915/selftests: Prepare dma-buf " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 46/64] drm/i915/selftests: Prepare execbuf " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 47/64] drm/i915/selftests: Prepare mman testcases " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 48/64] drm/i915/selftests: Prepare object tests " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 49/64] drm/i915/selftests: Prepare object blit " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 50/64] drm/i915/selftests: Prepare igt_gem_utils " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 51/64] drm/i915/selftests: Prepare context selftest " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 52/64] drm/i915/selftests: Prepare hangcheck " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 53/64] drm/i915/selftests: Prepare execlists and lrc selftests " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 54/64] drm/i915/selftests: Prepare mocs tests " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 55/64] drm/i915/selftests: Prepare ring submission " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 56/64] drm/i915/selftests: Prepare timeline tests " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 57/64] drm/i915/selftests: Prepare i915_request " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 58/64] drm/i915/selftests: Prepare memory region " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 59/64] drm/i915/selftests: Prepare cs engine " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 60/64] drm/i915/selftests: Prepare gtt " Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 61/64] drm/i915: Finally remove obj->mm.lock Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 62/64] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2 Maarten Lankhorst
2021-01-18 10:58   ` Thomas Hellström
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 63/64] drm/i915: Move gt_revoke() slightly Maarten Lankhorst
2021-01-18 11:11   ` Thomas Hellström
2021-01-18 12:01     ` Maarten Lankhorst
2021-01-18 13:22       ` Thomas Hellström
2021-01-18 13:28         ` Thomas Hellström
2021-01-18 14:46           ` Maarten Lankhorst
2021-01-18 15:05             ` Thomas Hellström
2021-01-18 15:32               ` Maarten Lankhorst
2021-01-05 15:35 ` [Intel-gfx] [PATCH v6 64/64] drm/i915: Avoid some false positives in assert_object_held() Maarten Lankhorst
2021-01-05 17:24 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev12) Patchwork
2021-01-05 17:26 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-01-05 17:29 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2021-01-07 15:42 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Remove obj->mm.lock! (rev13) Patchwork
2021-01-07 15:44 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-01-07 15:47 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2021-01-07 16:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
