From: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] s/obj->mm.lock//
Date: Mon, 27 Jul 2020 20:53:40 +0200
Message-ID: <0af5d31f-ed6b-8491-7170-14fe2e076c60@shipmail.org>
In-Reply-To: <20200706061926.6687-1-chris@chris-wilson.co.uk>


On 7/6/20 8:19 AM, Chris Wilson wrote:
> This is the easy part; pulling reservation of multiple objects under an
> ww acquire context. With one simple rule that eviction is handled by the
> ww acquire context, we can carefully transition the driver over to using
> eviction. Instead of feeding the acquire context everywhere, we make the
> caller gather up all the objects they need to acquire into the context,
> then acquire their backing store. The major boon here is that by
> providing a clean set of objects (that we have not yet started to
> acquire any auxiliary attachments for) to the acquire context, it can
> handle all the EDEADLK processing for us [since it is a pure locking
> operation and does not need to release attachments upon revoking the
> locks].
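
(Purely to illustrate the calling convention described above; the
i915_acquire_ctx helper names below are guesses based on the sketch that
follows, not the actual API of the series.)

static int use_objects_locked(struct drm_i915_gem_object **obj,
			      unsigned int count)
{
	struct i915_acquire_ctx acquire;
	unsigned int i;
	int err;

	i915_acquire_ctx_init(&acquire);

	/* First gather the clean set of objects into the context... */
	for (i = 0; i < count; i++) {
		err = i915_acquire_ctx_lock(&acquire, obj[i]);
		if (err)
			goto out;
	}

	/*
	 * ...then acquire their backing store; eviction and the
	 * -EDEADLK backoff are handled entirely inside the context.
	 */
	err = i915_acquire_mm(&acquire);
	if (err)
		goto out;

	/* all objects are now locked and have pages */

out:
	i915_acquire_ctx_fini(&acquire);
	return err;
}
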
>
> As a sketch of what that would look like, to illustrate the major work
> remaining:
>
> static int evict(struct drm_i915_gem_object *obj, struct i915_acquire_ctx *ctx)
> {
> 	struct intel_memory_region *mem = obj->mm->region;
> 	struct drm_i915_gem_object *swap; // struct i915_mm_bo *swap
> 	struct i915_request *rq;
> 	int err;
>
> 	/* swap = mem->create_eviction_target(obj); */
> 	swap = i915_gem_object_create_shmem(mem->i915, obj->base.size);
> 	if (IS_ERR(swap))
> 		return PTR_ERR(swap);
>
> 	err = dma_resv_lock_interruptible(swap->base.resv, &ctx->ctx);
> 	GEM_BUG_ON(err == -EALREADY);
> 	if (err) /* includes -EDEADLK; nothing is locked yet, so just bail */
> 		goto out;
>
> 	/* Obviously swap has to be carefully chosen so that this may succeed */
> 	err = __i915_gem_object_get_pages_locked(swap);
> 	if (err)
> 		goto out_unlock;
>
> 	rq = pinned_evict_copy(ctx, obj, swap);
> 	if (IS_ERR(rq)) {
> 		err = PTR_ERR(rq);
> 		goto out_unlock;
> 	}
>
> 	err = i915_gem_revoke_mm(obj);
> 	if (err)
> 		goto out_request;
>
> 	/* Alternatively you could wait synchronously! */
> 	mem->release_blocks(&obj->mm->blocks, rq);
> 	i915_mm_bo_put(xchg(&obj->mm, i915_mm_bo_get(swap)));
>
> 	dma_resv_add_excl_fence(obj->base.resv, &rq->fence);
> out_request:
> 	i915_request_put(rq);
> out_unlock:
> 	dma_resv_unlock(swap->base.resv);
> out:
> 	i915_gem_object_put(swap);
> 	return err;
> }
>
> static int relock_all(struct i915_acquire_ctx *ctx)
> {
> 	struct i915_acquire_link *lnk, *lock;
> 	int err = 0;
>
> 	for (lnk = ctx->locked; lnk; lnk = lnk->next)
> 		dma_resv_unlock(lnk->obj->base.resv);
>
> 	lock = fetch_and_zero(&ctx->locked);
> 	while ((lnk = lock)) {
> 		struct drm_i915_gem_object *obj;
>
> 		obj = lnk->obj;
> 		lock = lnk->next;
>
> 		if (ctx->locked)
> 			err = dma_resv_lock_interruptible(obj->base.resv,
> 							  &ctx->ctx);
> 		else
> 			err = dma_resv_lock_slow_interruptible(obj->base.resv,
> 							       &ctx->ctx);
> 		GEM_BUG_ON(err == -EALREADY);
> 		if (err == -EDEADLK) {
> 			struct i915_acquire_link *old;
>
> 			while ((old = ctx->locked)) {
> 				dma_resv_unlock(old->obj->base.resv);
> 				ctx->locked = old->next;
> 				old->next = lock;
> 				lock = old;
> 			}
>
> 			lnk->next = lock;
> 			lock = lnk;
> 			continue;
> 		}
> 		if (err) {
> 			lock = lnk;
> 			break;
> 		}
>
> 		lnk->next = ctx->locked;
> 		ctx->locked = lnk;
> 	}
>
> 	while ((lnk = lock)) {
> 		lock = lnk->next;
> 		i915_gem_object_put(lnk->obj);
> 		i915_acquire_free(lnk);
> 	}
>
> 	return err;
> }
>
> int i915_acquire_mm(struct i915_acquire_ctx *ctx)
> {
> 	struct i915_acquire_link *lnk;
> 	struct intel_memory_region *mem;
> 	int n, err;
>
> restart:
> 	for (lnk = ctx->locked; lnk; lnk = lnk->next) {
> 		for (n = 0;
> 		     !i915_gem_object_has_pages(lnk->obj);
> 		     n++) {
> 			struct drm_i915_gem_object *evictee = NULL;
>
> 			mem = get_preferred_memregion_for_object(lnk->obj, n);
> 			if (!mem)
> 				return -ENXIO;
>
> 			/* evict until get_pages() succeeds for this region */
> 			while (i915_gem_object_get_pages(lnk->obj)) {
> 				struct i915_acquire_link *this;
>
> 				evictee = mem->get_eviction_candidate(mem,
> 								      evictee);
> 				if (!evictee)
> 					break;
>
> 				err = dma_resv_lock_interruptible(evictee->base.resv,
> 								  &ctx->ctx);
> 				if (err == -EALREADY)
> 					continue; /* XXX fragmentation? */
> 				if (err == 0) {
> 					err = evict(evictee, ctx);
> 					dma_resv_unlock(evictee->base.resv);
> 				}

There was a discussion on dri-devel not too long ago where Christian
mentioned that there is a point in holding on to the evictee(s)' locks
until get_pages() succeeds, to avoid racing with threads that want to
move the evictee back into VRAM. Perhaps something worth considering.
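
Purely as an illustration (reusing the guessed helper names from the
sketch above, not the actual patches), the inner loop could defer the
unlock along those lines, keeping the evictee lock tracked by the
acquire context until get_pages() has succeeded:

	while ((err = i915_gem_object_get_pages(lnk->obj))) {
		evictee = mem->get_eviction_candidate(mem, evictee);
		if (!evictee)
			break;

		err = dma_resv_lock_interruptible(evictee->base.resv,
						  &ctx->ctx);
		if (err == -EALREADY)
			continue;
		if (err)
			break;

		err = evict(evictee, ctx);
		if (err) {
			dma_resv_unlock(evictee->base.resv);
			break;
		}

		/*
		 * Keep the evictee locked, recorded on the acquire
		 * context so it is released with the rest, until
		 * get_pages() succeeds; a concurrent thread then cannot
		 * move it straight back into the space we just freed.
		 */
		i915_acquire_ctx_record_lock(ctx, evictee); /* assumed bookkeeping helper */
	}

That way the eviction candidates stay reserved until the allocation is
known to have succeeded, at the cost of holding more locks across the
retry loop.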

/Thomas


Thread overview: 65+ messages
2020-07-06  6:19 [Intel-gfx] s/obj->mm.lock// Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 01/20] drm/i915: Preallocate stashes for vma page-directories Chris Wilson
2020-07-06 18:15   ` Matthew Auld
2020-07-06 18:20     ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 02/20] drm/i915: Switch to object allocations for page directories Chris Wilson
2020-07-06 19:06   ` Matthew Auld
2020-07-06 19:31     ` Chris Wilson
2020-07-06 20:01     ` Chris Wilson
2020-07-06 21:08       ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 03/20] drm/i915/gem: Don't drop the timeline lock during execbuf Chris Wilson
2020-07-08 16:54   ` Tvrtko Ursulin
2020-07-08 18:08     ` Chris Wilson
2020-07-09 10:52       ` Tvrtko Ursulin
2020-07-09 10:57         ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 04/20] drm/i915/gem: Rename execbuf.bind_link to unbound_link Chris Wilson
2020-07-10 11:26   ` Tvrtko Ursulin
2020-07-06  6:19 ` [Intel-gfx] [PATCH 05/20] drm/i915/gem: Break apart the early i915_vma_pin from execbuf object lookup Chris Wilson
2020-07-10 11:27   ` Tvrtko Ursulin
2020-07-06  6:19 ` [Intel-gfx] [PATCH 06/20] drm/i915/gem: Remove the call for no-evict i915_vma_pin Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 07/20] drm/i915: Add list_for_each_entry_safe_continue_reverse Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 08/20] drm/i915: Always defer fenced work to the worker Chris Wilson
2020-07-08 12:18   ` Tvrtko Ursulin
2020-07-08 12:25     ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 09/20] drm/i915/gem: Assign context id for async work Chris Wilson
2020-07-08 12:26   ` Tvrtko Ursulin
2020-07-08 12:42     ` Chris Wilson
2020-07-08 14:24       ` Tvrtko Ursulin
2020-07-08 15:36         ` Chris Wilson
2020-07-09 11:01           ` Tvrtko Ursulin
2020-07-09 11:07             ` Chris Wilson
2020-07-09 11:59               ` Tvrtko Ursulin
2020-07-09 12:07                 ` Chris Wilson
2020-07-13 12:22                   ` Tvrtko Ursulin
2020-07-14 14:01                     ` Chris Wilson
2020-07-08 12:45     ` Tvrtko Ursulin
2020-07-06  6:19 ` [Intel-gfx] [PATCH 10/20] drm/i915: Export a preallocate variant of i915_active_acquire() Chris Wilson
2020-07-09 14:36   ` Maarten Lankhorst
2020-07-10 12:24     ` Tvrtko Ursulin
2020-07-10 12:32       ` Maarten Lankhorst
2020-07-13 14:29   ` Tvrtko Ursulin
2020-07-06  6:19 ` [Intel-gfx] [PATCH 11/20] drm/i915/gem: Separate the ww_mutex walker into its own list Chris Wilson
2020-07-13 14:53   ` Tvrtko Ursulin
2020-07-14 14:10     ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 12/20] drm/i915/gem: Asynchronous GTT unbinding Chris Wilson
2020-07-14  9:02   ` Tvrtko Ursulin
2020-07-14 15:05     ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 13/20] drm/i915/gem: Bind the fence async for execbuf Chris Wilson
2020-07-14 12:19   ` Tvrtko Ursulin
2020-07-14 15:21     ` Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 14/20] drm/i915/gem: Include cmdparser in common execbuf pinning Chris Wilson
2020-07-14 12:48   ` Tvrtko Ursulin
2020-07-06  6:19 ` [Intel-gfx] [PATCH 15/20] drm/i915/gem: Include secure batch " Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 16/20] drm/i915/gem: Reintroduce multiple passes for reloc processing Chris Wilson
2020-07-09 15:39   ` Tvrtko Ursulin
2020-07-06  6:19 ` [Intel-gfx] [PATCH 17/20] drm/i915: Add an implementation for i915_gem_ww_ctx locking, v2 Chris Wilson
2020-07-06 17:21   ` kernel test robot
2020-07-06  6:19 ` [Intel-gfx] [PATCH 18/20] drm/i915/gem: Pull execbuf dma resv under a single critical section Chris Wilson
2020-07-06  6:19 ` [Intel-gfx] [PATCH 19/20] drm/i915/gem: Replace i915_gem_object.mm.mutex with reservation_ww_class Chris Wilson
2020-07-09 14:06   ` Maarten Lankhorst
2020-07-06  6:19 ` [Intel-gfx] [PATCH 20/20] drm/i915: Track i915_vma with its own reference counter Chris Wilson
2020-07-06  6:28 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories Patchwork
2020-07-06  6:29 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-07-06  6:51 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-07-06  7:55 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2020-07-27 18:53 ` Thomas Hellström (Intel) [this message]
