From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>,
	linaro-mm-sig@lists.linaro.org, dri-devel@lists.freedesktop.org,
	linux-media@vger.kernel.org
Cc: daniel@ffwll.ch, intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 01/26] dma-buf: add dma_resv_for_each_fence_unlocked
Date: Tue, 14 Sep 2021 11:53:36 +0100	[thread overview]
Message-ID: <1eee4105-e154-9d1d-b92b-d17c6f8f8432@linux.intel.com> (raw)
In-Reply-To: <20210913131707.45639-2-christian.koenig@amd.com>


On 13/09/2021 14:16, Christian König wrote:
> Abstract the complexity of iterating over all the fences
> in a dma_resv object.
> 
> The new loop handles the whole RCU and retry dance and
> returns only fences where we can be sure we grabbed the
> right one.
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/dma-buf/dma-resv.c | 63 ++++++++++++++++++++++++++++++++++++++
>   include/linux/dma-resv.h   | 36 ++++++++++++++++++++++
>   2 files changed, 99 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 84fbe60629e3..213a9b7251ca 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -323,6 +323,69 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
>   }
>   EXPORT_SYMBOL(dma_resv_add_excl_fence);
>   
> +/**
> + * dma_resv_walk_unlocked - walk over fences in a dma_resv obj
> + * @obj: the dma_resv object
> + * @cursor: cursor to record the current position
> + * @all_fences: true returns also the shared fences
> + * @first: if we should start over
> + *
> + * Return all the fences in the dma_resv object which are not yet signaled.
> + * The returned fence has an extra local reference so will stay alive.
> + * If a concurrent modify is detected the whole iterator is started over again.
> + */
> +struct dma_fence *dma_resv_walk_unlocked(struct dma_resv *obj,
> +					 struct dma_resv_cursor *cursor,
> +					 bool all_fences, bool first)
> +{
> +	struct dma_fence *fence = NULL;
> +
> +	do {
> +		/* Drop the reference from the previous round */
> +		dma_fence_put(fence);
> +
> +		cursor->is_first = first;
> +		if (first) {
> +			cursor->seq = read_seqcount_begin(&obj->seq);
> +			cursor->index = -1;
> +			cursor->fences = dma_resv_shared_list(obj);
> +			cursor->is_exclusive = true;
> +
> +			fence = dma_resv_excl_fence(obj);
> +			if (fence && test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
> +					      &fence->flags))
> +				fence = NULL;
> +		} else {
> +			fence = NULL;
> +		}
> +
> +		if (fence) {
> +			fence = dma_fence_get_rcu(fence);
> +		} else if (all_fences && cursor->fences) {
> +			struct dma_resv_list *fences = cursor->fences;

If the RCU lock is allowed to be dropped while walking the list, what 
guarantees that the list of fences hasn't been freed?

Like:

1st call
   -> gets seqcount
   -> stores cursor->fences

rcu lock dropped/re-acquired

2nd call
   -> dereferences into cursor->fences -> boom?
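
To spell out the hazard, here is a hypothetical and completely untested 
caller sketch which, as far as I can tell, the kernel-doc would allow, 
yet which can end up dereferencing a freed dma_resv_list:

	/*
	 * Hypothetical caller, untested, only to illustrate the concern:
	 * the cursor caches the shared list pointer under the first RCU
	 * read section, the lock is then dropped as the kernel-doc allows,
	 * and a grace period may elapse before the next call dereferences
	 * that cached pointer again.
	 */
	struct dma_resv_cursor cursor;
	struct dma_fence *fence;

	rcu_read_lock();
	/* first = true: caches the shared fence list pointer in cursor.fences */
	fence = dma_resv_walk_unlocked(obj, &cursor, true, true);
	rcu_read_unlock();

	/*
	 * ... sleep, allocate, etc. - if the object is modified, the
	 * dma_resv_list the cursor still points at may be freed here ...
	 */

	rcu_read_lock();
	/* first = false: walks cursor.fences before the seqcount recheck */
	fence = dma_resv_walk_unlocked(obj, &cursor, true, false);
	rcu_read_unlock();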

> +
> +			cursor->is_exclusive = false;
> +			while (++cursor->index < fences->shared_count) {
> +				fence = rcu_dereference(fences->shared[
> +							cursor->index]);
> +				if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
> +					      &fence->flags))
> +					break;
> +			}
> +			if (cursor->index < fences->shared_count)
> +				fence = dma_fence_get_rcu(fence);
> +			else
> +				fence = NULL;
> +		}
> +
> +		/* For the eventually next round */
> +		first = true;
> +	} while (read_seqcount_retry(&obj->seq, cursor->seq));
> +
> +	return fence;
> +}
> +EXPORT_SYMBOL_GPL(dma_resv_walk_unlocked);
> +
>   /**
>    * dma_resv_copy_fences - Copy all fences from src to dst.
>    * @dst: the destination reservation object
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 9100dd3dc21f..f5b91c292ee0 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -149,6 +149,39 @@ struct dma_resv {
>   	struct dma_resv_list __rcu *fence;
>   };
>   
> +/**
> + * struct dma_resv_cursor - current position into the dma_resv fences
> + * @seq: sequence number to check
> + * @index: index into the shared fences
> + * @shared: the shared fences
> + * @is_first: true if this is the first returned fence
> + * @is_exclusive: if the current fence is the exclusive one
> + */
> +struct dma_resv_cursor {
> +	unsigned int seq;
> +	unsigned int index;
> +	struct dma_resv_list *fences;
> +	bool is_first;

Is is_first useful to callers - that is, are they legitimately allowed 
to look inside what could otherwise be a private object? And what is 
the intended use case, given that when it is true the returned fence 
can be either the exclusive one or the first one from the shared list?

> +	bool is_exclusive;

Is_exclusive could be derived from index == -1 in the code, right? If 
so, that is an opportunity to remove some redundancy.
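
Something like this completely untested helper is what I mean (name 
made up):

	/* Untested sketch, name made up - derive it instead of storing it: */
	static inline bool dma_resv_cursor_is_exclusive(struct dma_resv_cursor *cursor)
	{
		return cursor->index == (unsigned int)-1;
	}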

> +};
> +
> +/**
> + * dma_resv_for_each_fence_unlocked - fence iterator
> + * @obj: a dma_resv object pointer
> + * @cursor: a struct dma_resv_cursor pointer
> + * @all_fences: true if all fences should be returned
> + * @fence: the current fence
> + *
> + * Iterate over the fences in a struct dma_resv object without holding the
> + * dma_resv::lock. The RCU read side lock must be hold when using this, but can
> + * be dropped and re-taken as necessary inside the loop. @all_fences controls
> + * if the shared fences are returned as well.
> + */
> +#define dma_resv_for_each_fence_unlocked(obj, cursor, all_fences, fence)    \
> +	for (fence = dma_resv_walk_unlocked(obj, cursor, all_fences, true); \
> +	     fence; dma_fence_put(fence),				    \
> +	     fence = dma_resv_walk_unlocked(obj, cursor, all_fences, false))

Has it been discussed that, because the RCU lock can be dropped, the 
loop can end up walking over completely different snapshots of the 
object?

At least if I followed the code correctly, the walk can restart from 
the beginning (the exclusive slot) at any point during the iteration.

Theoretically you could instead take an atomic snapshot of everything 
up front (given that you already have a cursor object) and release it 
on the end condition.
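
Rough and completely untested illustration of what I mean, re-using the 
existing dma_resv_get_fences() helper rather than inventing anything 
new, so the names are real but the usage is only a sketch:

	/*
	 * Untested sketch: grab one consistent snapshot of all fences with
	 * references held (dma_resv_get_fences() already does the seqcount
	 * retry dance internally), then walk that snapshot without worrying
	 * about concurrent modifications or RCU at all.
	 */
	struct dma_fence *excl, **shared;
	unsigned int i, count;
	int ret;

	ret = dma_resv_get_fences(obj, &excl, &count, &shared);
	if (ret)
		return ret;

	for (i = 0; i < count; i++) {
		/* ... use shared[i], a reference is held ... */
		dma_fence_put(shared[i]);
	}
	if (excl) {
		/* ... use excl ... */
		dma_fence_put(excl);
	}
	kfree(shared);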

Regards,

Tvrtko

> +
>   #define dma_resv_held(obj) lockdep_is_held(&(obj)->lock.base)
>   #define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
>   
> @@ -366,6 +399,9 @@ void dma_resv_fini(struct dma_resv *obj);
>   int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
>   void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>   void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> +struct dma_fence *dma_resv_walk_unlocked(struct dma_resv *obj,
> +					 struct dma_resv_cursor *cursor,
> +					 bool first, bool all_fences);
>   int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
>   			unsigned *pshared_count, struct dma_fence ***pshared);
>   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> 


Thread overview: 40+ messages
2021-09-13 13:16 Deploying new iterator interface for dma-buf Christian König
2021-09-13 13:16 ` [PATCH 01/26] dma-buf: add dma_resv_for_each_fence_unlocked Christian König
2021-09-14 10:53   ` Tvrtko Ursulin [this message]
2021-09-14 11:25     ` [Intel-gfx] " Christian König
2021-09-14 13:07       ` Tvrtko Ursulin
2021-09-15 10:46         ` Christian König
2021-09-13 13:16 ` [PATCH 02/26] dma-buf: add dma_resv_for_each_fence Christian König
2021-09-13 13:16 ` [PATCH 03/26] dma-buf: use new iterator in dma_resv_copy_fences Christian König
2021-09-13 13:16 ` [PATCH 04/26] dma-buf: use new iterator in dma_resv_get_fences v2 Christian König
2021-09-13 13:16 ` [PATCH 05/26] dma-buf: use new iterator in dma_resv_wait_timeout Christian König
2021-09-13 13:16 ` [PATCH 06/26] dma-buf: use new iterator in dma_resv_test_signaled Christian König
2021-09-13 13:16 ` [PATCH 07/26] drm/ttm: use the new iterator in ttm_bo_flush_all_fences Christian König
2021-09-13 13:16 ` [PATCH 08/26] drm/amdgpu: use the new iterator in amdgpu_sync_resv Christian König
2021-09-13 13:16 ` [PATCH 09/26] drm/amdgpu: use new iterator in amdgpu_ttm_bo_eviction_valuable Christian König
2021-09-13 13:16 ` [PATCH 10/26] drm/msm: use new iterator in msm_gem_describe Christian König
2021-09-13 13:16 ` [PATCH 11/26] drm/radeon: use new iterator in radeon_sync_resv Christian König
2021-09-13 13:16 ` [PATCH 12/26] drm/scheduler: use new iterator in drm_sched_job_add_implicit_dependencies Christian König
2021-09-13 13:16 ` [PATCH 13/26] drm/i915: use the new iterator in i915_gem_busy_ioctl Christian König
2021-09-14 13:18   ` [Intel-gfx] " Tvrtko Ursulin
2021-09-13 13:16 ` [PATCH 14/26] drm/i915: use the new iterator in i915_sw_fence_await_reservation Christian König
2021-09-13 13:16 ` [PATCH 15/26] drm/i915: use the new iterator in i915_request_await_object Christian König
2021-09-14 10:26   ` [Intel-gfx] " Tvrtko Ursulin
2021-09-14 10:39     ` Christian König
2021-09-14 10:59       ` Tvrtko Ursulin
2021-09-13 13:16 ` [PATCH 16/26] drm/i915: use new iterator in i915_gem_object_wait_reservation Christian König
2021-09-13 13:16 ` [PATCH 17/26] drm/i915: use new iterator in i915_gem_object_wait_priority Christian König
2021-09-14 12:42   ` [Intel-gfx] " Tvrtko Ursulin
2021-09-13 13:16 ` [PATCH 18/26] drm/i915: use new iterator in i915_gem_object_last_write_engine Christian König
2021-09-14 12:27   ` [Intel-gfx] " Tvrtko Ursulin
2021-09-14 12:32     ` Christian König
2021-09-14 12:47       ` Tvrtko Ursulin
2021-09-15 11:19         ` Christian König
2021-09-13 13:17 ` [PATCH 19/26] drm/i915: use new cursor in intel_prepare_plane_fb Christian König
2021-09-13 13:17 ` [PATCH 20/26] drm: use new iterator in drm_gem_fence_array_add_implicit Christian König
2021-09-13 13:17 ` [PATCH 21/26] drm: use new iterator in drm_gem_plane_helper_prepare_fb Christian König
2021-09-13 13:17 ` [PATCH 22/26] drm/nouveau: use the new iterator in nouveau_fence_sync Christian König
2021-09-13 13:17 ` [PATCH 23/26] drm/nouveau: use the new interator in nv50_wndw_prepare_fb Christian König
2021-09-13 13:17 ` [PATCH 24/26] drm/etnaviv: use new iterator in etnaviv_gem_describe Christian König
2021-09-13 13:17 ` [PATCH 25/26] drm/etnaviv: replace dma_resv_get_excl_unlocked Christian König
2021-09-13 13:17 ` [PATCH 26/26] dma-buf: nuke dma_resv_get_excl_unlocked Christian König
