From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 37/37] drm/i915/gem: Delay attach mmu-notifier until we acquire the pinned userptr
Date: Wed,  5 Aug 2020 13:22:31 +0100
Message-ID: <20200805122231.23313-38-chris@chris-wilson.co.uk>
In-Reply-To: <20200805122231.23313-1-chris@chris-wilson.co.uk>

On the fast path, we first try to pin the user pages and then attach
the mmu-notifier. On the slow path, we do it the opposite way around,
carrying the mmu-notifier over from the tail of the fast path. However,
if we are mapping a fresh batch of user pages, we will always hit a pmd
split operation (to replace the zero pages with real pages), triggering
an invalidate-range callback for this userptr, and so we have to cancel
the work [after completing the pinning] and cause the caller to retry
(an extra EAGAIN return from an ioctl for some paths). If we instead
follow the fast path approach and attach the callback only after the
pinning has completed, we only see invalidate-range callbacks for
genuine revocations of our pages.
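
For illustration, here is a condensed sketch of the worker tail after
this change (not the verbatim driver code; locking and error handling
are trimmed and the signature simplified), using only the helpers
visible in the diff below. The point is the ordering: the object is
marked active for the mmu-notifier only once the freshly pinned pages
have been installed.

	/* Sketch only: simplified tail of the slow-path worker. */
	static void userptr_worker_tail_sketch(struct drm_i915_gem_object *obj,
					       struct page **pvec,
					       unsigned long npages)
	{
		struct sg_table *pages;

		pages = __i915_gem_userptr_alloc_pages(obj, pvec, npages);
		if (!IS_ERR(pages)) {
			/*
			 * The pages are now in place, so only from this
			 * point do we need to hear about invalidate-range
			 * callbacks; the pmd splits triggered by the
			 * pinning itself can no longer cancel our work.
			 */
			__i915_gem_userptr_set_active(obj, true);
			pages = NULL;
		}

		/* NULL on success, or the error for the waiter to pick up. */
		obj->userptr.work = ERR_CAST(pages);
	}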

The risk (the same as for the fast path) is that if the mmu-notifier
should have run during the page lookup, we will have missed it and be
left with a stale mix of pages. One might conclude that the fast path
is therefore wrong, and that we should always attach the mmu-notifier
first and bear the cost of the redundant repetition.
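
For comparison, the "attach the notifier first" alternative follows the
kernel's standard mmu_interval_notifier retry pattern, roughly as in
the generic sketch below (not the i915 code; the function name and
parameters are invented for illustration). Every invalidation that
races with the lookup forces the freshly pinned pages to be dropped and
the lookup repeated, which is the redundant repetition referred to
above.

	/* Generic sketch: attach the notifier first, then pin and retry. */
	#include <linux/mm.h>
	#include <linux/mmu_notifier.h>

	/*
	 * Assumes @mni has already been registered for the userptr range
	 * with mmu_interval_notifier_insert().
	 */
	static long pin_with_notifier_first(struct mmu_interval_notifier *mni,
					    unsigned long start,
					    unsigned long npages,
					    struct page **pvec)
	{
		unsigned long seq;
		long pinned;

		for (;;) {
			seq = mmu_interval_read_begin(mni);

			pinned = pin_user_pages_fast(start, npages,
						     FOLL_WRITE, pvec);
			if (pinned < 0)
				return pinned;

			/*
			 * In a real driver this check is made under a lock
			 * shared with the invalidate callback; here it only
			 * shows the shape of the retry loop.
			 */
			if (!mmu_interval_read_retry(mni, seq))
				return pinned;

			/*
			 * The range was invalidated while we were pinning
			 * (e.g. by the pmd split installing the real pages):
			 * drop the pages and repeat the lookup.
			 */
			unpin_user_pages(pvec, pinned);
		}
	}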

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 80907c00c6fd..ba1f01650eeb 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -500,14 +500,13 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 			pages = __i915_gem_userptr_alloc_pages(obj, pvec,
 							       npages);
 			if (!IS_ERR(pages)) {
+				__i915_gem_userptr_set_active(obj, true);
 				pinned = 0;
 				pages = NULL;
 			}
 		}
 
 		obj->userptr.work = ERR_CAST(pages);
-		if (IS_ERR(pages))
-			__i915_gem_userptr_set_active(obj, false);
 	}
 	i915_gem_object_unlock(obj);
 
@@ -566,7 +565,6 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 	struct mm_struct *mm = obj->userptr.mm->mm;
 	struct page **pvec;
 	struct sg_table *pages;
-	bool active;
 	int pinned;
 	unsigned int gup_flags = 0;
 
@@ -621,19 +619,16 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 		}
 	}
 
-	active = false;
 	if (pinned < 0) {
 		pages = ERR_PTR(pinned);
 		pinned = 0;
 	} else if (pinned < num_pages) {
 		pages = __i915_gem_userptr_get_pages_schedule(obj);
-		active = pages == ERR_PTR(-EAGAIN);
 	} else {
 		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
-		active = !IS_ERR(pages);
+		if (!IS_ERR(pages))
+			__i915_gem_userptr_set_active(obj, true);
 	}
-	if (active)
-		__i915_gem_userptr_set_active(obj, true);
 
 	if (IS_ERR(pages))
 		unpin_user_pages(pvec, pinned);
-- 
2.20.1
