* [PATCH 0/7] dma-buf: Add an API for exporting sync files (v11)
From: Jason Ekstrand @ 2021-05-25 21:17 UTC
  To: dri-devel, intel-gfx
  Cc: Daniel Stone, Michel Dänzer, wayland-devel, Jason Ekstrand,
	Dave Airlie, mesa-dev, Christian König

Modern userspace APIs like Vulkan are built on an explicit
synchronization model.  This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland.  The client -> compositor half of the synchronization isn't too
bad, at least on Intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor.  We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer.  With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's own access to
the buffer.  In particular, once we tell the kernel that we're rendering
to the buffer again, any CPU waits on the buffer or GPU dependencies
will wait on some of the client rendering and not just the compositor's.

This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file.  It's effectively the same as a poll() or I915_GEM_WAIT
except that, instead of waiting on the CPU directly, it encapsulates the
wait operation, at the current moment in time, in a sync_file so we can
check/wait on it later.  As long as the Vulkan driver exports the
sync_file from the dma-buf before we re-introduce the buffer for
rendering, the sync_file will only contain fences from the compositor or
display.  This allows us to accurately turn it into a VkFence or
VkSemaphore without any over-synchronization.
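
As a rough userspace sketch (assuming the uapi names proposed in this
series: DMA_BUF_IOCTL_EXPORT_SYNC_FILE, struct dma_buf_export_sync_file,
and the DMA_BUF_SYNC_* flags), the export looks something like:

    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <linux/dma-buf.h>

    /* Snapshot the implicit fences on dmabuf_fd into a sync_file.
     * Returns a sync_file fd on success, -1 on error.
     */
    static int export_sync_file(int dmabuf_fd, bool write)
    {
            struct dma_buf_export_sync_file arg = {
                    .flags = write ? DMA_BUF_SYNC_WRITE : DMA_BUF_SYNC_READ,
                    .fd = -1,
            };

            if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg))
                    return -1;

            return arg.fd;
    }

The returned fd can then be turned into a VkFence or VkSemaphore via
VK_KHR_external_fence_fd / VK_KHR_external_semaphore_fd.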

This patch series actually contains two new ioctls.  There is the export
one mentioned above as well as an RFC for an import ioctl which provides
the other half.  The intention is to land the export ioctl since it seems
like there's no real disagreement on that one.  The import ioctl, however,
has a lot of debate around it so it's intended to be RFC-only for now.
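
The import half (RFC) is the mirror image.  A hypothetical sketch, again
using the uapi names from this series (DMA_BUF_IOCTL_IMPORT_SYNC_FILE,
struct dma_buf_import_sync_file):

    /* Attach the fences in sync_file_fd to dmabuf_fd's implicit
     * synchronization state.  Returns 0 on success, -1 on error.
     */
    static int import_sync_file(int dmabuf_fd, int sync_file_fd)
    {
            struct dma_buf_import_sync_file arg = {
                    .flags = DMA_BUF_SYNC_RW,
                    .fd = sync_file_fd,
            };

            return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &arg);
    }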

Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
IGT tests: https://patchwork.freedesktop.org/series/90490/

v10 (Jason Ekstrand, Daniel Vetter):
 - Add reviews/acks
 - Add a patch to rename _rcu to _unlocked
 - Split things better so import is clearly RFC status

v11 (Daniel Vetter):
 - Add more CCs to try and get maintainers
 - Add a patch to document DMA_BUF_IOCTL_SYNC
 - Generally better docs
 - Use separate structs for import/export (easier to document)
 - Fix an issue in the import patch

Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cc: Daniel Stone <daniels@collabora.com>
Cc: mesa-dev@lists.freedesktop.org
Cc: wayland-devel@lists.freedesktop.org
Test-with: 20210524205225.872316-1-jason@jlekstrand.net

Christian König (1):
  dma-buf: Add dma_fence_array_for_each (v2)

Jason Ekstrand (6):
  dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  dma-buf: Add dma_resv_get_singleton_unlocked (v5)
  dma-buf: Document DMA_BUF_IOCTL_SYNC
  dma-buf: Add an API for exporting sync files (v11)
  RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
  RFC: dma-buf: Add an API for importing sync files (v7)

 Documentation/driver-api/dma-buf.rst          |   8 +
 drivers/dma-buf/dma-buf.c                     | 107 +++++++++++++-
 drivers/dma-buf/dma-fence-array.c             |  27 ++++
 drivers/dma-buf/dma-resv.c                    | 139 ++++++++++++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |  14 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |   6 +-
 drivers/gpu/drm/drm_gem.c                     |  10 +-
 drivers/gpu/drm/drm_gem_atomic_helper.c       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |   7 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |   8 +-
 drivers/gpu/drm/i915/display/intel_display.c  |   2 +-
 drivers/gpu/drm/i915/dma_resv_utils.c         |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_busy.c      |   2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      |  10 +-
 drivers/gpu/drm/i915/i915_request.c           |   6 +-
 drivers/gpu/drm/i915/i915_sw_fence.c          |   4 +-
 drivers/gpu/drm/msm/msm_gem.c                 |   3 +-
 drivers/gpu/drm/nouveau/dispnv50/wndw.c       |   2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c         |   4 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |   4 +-
 drivers/gpu/drm/panfrost/panfrost_job.c       |   2 +-
 drivers/gpu/drm/radeon/radeon_gem.c           |   6 +-
 drivers/gpu/drm/radeon/radeon_mn.c            |   4 +-
 drivers/gpu/drm/ttm/ttm_bo.c                  |  18 +--
 drivers/gpu/drm/vgem/vgem_fence.c             |   4 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c        |   6 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |   2 +-
 include/linux/dma-fence-array.h               |  17 +++
 include/linux/dma-resv.h                      |  21 +--
 include/uapi/linux/dma-buf.h                  |  89 ++++++++++-
 40 files changed, 465 insertions(+), 111 deletions(-)

-- 
2.31.1



* [PATCH 1/7] dma-buf: Add dma_fence_array_for_each (v2)
From: Jason Ekstrand @ 2021-05-25 21:17 UTC
  To: dri-devel, intel-gfx
  Cc: Christian König, Christian König, Jason Ekstrand,
	Daniel Vetter

From: Christian König <ckoenig.leichtzumerken@gmail.com>

Add a helper to iterate over all fences in a dma_fence_array object.

v2 (Jason Ekstrand):
 - Return NULL from dma_fence_array_first if head == NULL.  This matches
   the iterator behavior of dma_fence_chain_for_each in that it iterates
   zero times if head == NULL.
 - Return NULL from dma_fence_array_next if index >= array->num_fences.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++
 include/linux/dma-fence-array.h   | 17 +++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index d3fbd950be944..2ac1afc697d0f 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context)
 	return true;
 }
 EXPORT_SYMBOL(dma_fence_match_context);
+
+struct dma_fence *dma_fence_array_first(struct dma_fence *head)
+{
+	struct dma_fence_array *array;
+
+	if (!head)
+		return NULL;
+
+	array = to_dma_fence_array(head);
+	if (!array)
+		return head;
+
+	return array->fences[0];
+}
+EXPORT_SYMBOL(dma_fence_array_first);
+
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index)
+{
+	struct dma_fence_array *array = to_dma_fence_array(head);
+
+	if (!array || index >= array->num_fences)
+		return NULL;
+
+	return array->fences[index];
+}
+EXPORT_SYMBOL(dma_fence_array_next);
diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h
index 303dd712220fd..588ac8089dd61 100644
--- a/include/linux/dma-fence-array.h
+++ b/include/linux/dma-fence-array.h
@@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence)
 	return container_of(fence, struct dma_fence_array, base);
 }
 
+/**
+ * dma_fence_array_for_each - iterate over all fences in array
+ * @fence: current fence
+ * @index: index into the array
+ * @head: potential dma_fence_array object
+ *
+ * Test if @head is a dma_fence_array object and, if yes, iterate over all
+ * fences in the array. If not, just iterate over the single fence @head itself.
+ */
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))
+
 struct dma_fence_array *dma_fence_array_create(int num_fences,
 					       struct dma_fence **fences,
 					       u64 context, unsigned seqno,
@@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 
 bool dma_fence_match_context(struct dma_fence *fence, u64 context);
 
+struct dma_fence *dma_fence_array_first(struct dma_fence *head);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index);
+
 #endif /* __LINUX_DMA_FENCE_ARRAY_H */
-- 
2.31.1
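
As a usage sketch (not part of the patch), a caller that wants to walk
whatever fences sit behind a single struct dma_fence pointer, whether it
is a plain fence or a dma_fence_array, could use the macro like this
(the helper name is hypothetical):

    static bool all_fences_signaled(struct dma_fence *head)
    {
            struct dma_fence *fence;
            unsigned int index;

            /* Iterates once over head itself if it isn't an array. */
            dma_fence_array_for_each(fence, index, head) {
                    if (!dma_fence_is_signaled(fence))
                            return false;   /* something still pending */
            }

            return true;
    }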



* [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
From: Jason Ekstrand @ 2021-05-25 21:17 UTC
  To: dri-devel, intel-gfx
  Cc: Thomas Zimmermann, Daniel Vetter, Huang Rui, VMware Graphics,
	Gerd Hoffmann, Jason Ekstrand, Sean Paul, Christian König

None of these helpers actually leak any RCU details to the caller.  They
all assume you have a genuine reference, take the RCU read lock, and
retry if needed.  Naming them with an _rcu suffix is likely to cause
callers more panic than needed.
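
For illustration (not part of the patch), a typical call site after the
rename, with a hypothetical TTM buffer object bo; the return value is
negative on error/interrupt, 0 on timeout, and the remaining timeout
otherwise:

    long ret;

    /* No dma_resv_lock() needed; the helper takes the RCU read
     * lock and retries internally.
     */
    ret = dma_resv_wait_timeout_unlocked(bo->base.resv,
                                         true /* wait_all */,
                                         true /* intr */, 30 * HZ);
    if (ret == 0)
            return -ETIMEDOUT;
    if (ret < 0)
            return ret;             /* e.g. -ERESTARTSYS */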

v2 (Jason Ekstrand):
 - Fix function argument indentation

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
---
 drivers/dma-buf/dma-buf.c                     |  4 +--
 drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
 drivers/gpu/drm/drm_gem.c                     | 10 +++----
 drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
 drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
 drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
 drivers/gpu/drm/i915/i915_request.c           |  6 ++--
 drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
 drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
 drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
 drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
 drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
 drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
 drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
 drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
 drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
 include/linux/dma-resv.h                      | 18 ++++++------
 36 files changed, 108 insertions(+), 110 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383eb..ed6451d55d663 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	long ret;
 
 	/* Wait on any implicit rendering fences */
-	ret = dma_resv_wait_timeout_rcu(resv, write, true,
-						  MAX_SCHEDULE_TIMEOUT);
+	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
+					     MAX_SCHEDULE_TIMEOUT);
 	if (ret < 0)
 		return ret;
 
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
 EXPORT_SYMBOL(dma_resv_copy_fences);
 
 /**
- * dma_resv_get_fences_rcu - Get an object's shared and exclusive
+ * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
  * fences without update side lock held
  * @obj: the reservation object
  * @pfence_excl: the returned exclusive fence (or NULL)
@@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
  * exclusive fence is not specified the fence is put into the array of the
  * shared fences as well. Returns either zero or -ENOMEM.
  */
-int dma_resv_get_fences_rcu(struct dma_resv *obj,
-			    struct dma_fence **pfence_excl,
-			    unsigned *pshared_count,
-			    struct dma_fence ***pshared)
+int dma_resv_get_fences_unlocked(struct dma_resv *obj,
+				 struct dma_fence **pfence_excl,
+				 unsigned *pshared_count,
+				 struct dma_fence ***pshared)
 {
 	struct dma_fence **shared = NULL;
 	struct dma_fence *fence_excl;
@@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
 	*pshared = shared;
 	return ret;
 }
-EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
 
 /**
- * dma_resv_wait_timeout_rcu - Wait on reservation's objects
+ * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
  * shared and/or exclusive fences.
  * @obj: the reservation object
  * @wait_all: if true, wait on all fences, else wait on just exclusive fence
@@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
  * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
  * greater than zero on success.
  */
-long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
-			       bool wait_all, bool intr,
-			       unsigned long timeout)
+long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
+				    bool wait_all, bool intr,
+				    unsigned long timeout)
 {
 	struct dma_fence *fence;
 	unsigned seq, shared_count;
@@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
 	rcu_read_unlock();
 	goto retry;
 }
-EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
 
 
 static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
@@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
 }
 
 /**
- * dma_resv_test_signaled_rcu - Test if a reservation object's
+ * dma_resv_test_signaled_unlocked - Test if a reservation object's
  * fences have been signaled.
  * @obj: the reservation object
  * @test_all: if true, test all fences, otherwise only test the exclusive
@@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
  * RETURNS
  * true if all fences signaled, else false
  */
-bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
+bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
 {
 	unsigned seq, shared_count;
 	int ret;
@@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
 	rcu_read_unlock();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 8a1fb8b6606e5..b8e24f199be9a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
 		goto unpin;
 	}
 
-	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
-					      &work->shared_count,
-					      &work->shared);
+	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
+					 &work->shared_count,
+					 &work->shared);
 	if (unlikely(r != 0)) {
 		DRM_ERROR("failed to get fences for buffer\n");
 		goto unpin;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index baa980a477d94..0d0319bc51577 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
 	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
 		return 0;
 
-	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
+	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 18974bd081f00..8e2996d6ba3ad 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 		return -ENOENT;
 	}
 	robj = gem_to_amdgpu_bo(gobj);
-	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
-						  timeout);
+	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
+					     timeout);
 
 	/* ret == 0 means not signaled,
 	 * ret > 0 means signaled
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index b4971e90b98cf..38e1b32dd2cef 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 	unsigned count;
 	int r;
 
-	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
+	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
 	if (r)
 		goto fallback;
 
@@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 	/* Not enough memory for the delayed delete, as last resort
 	 * block for all the fences to complete.
 	 */
-	dma_resv_wait_timeout_rcu(resv, true, false,
-					    MAX_SCHEDULE_TIMEOUT);
+	dma_resv_wait_timeout_unlocked(resv, true, false,
+				       MAX_SCHEDULE_TIMEOUT);
 	amdgpu_pasid_free(pasid);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 828b5167ff128..0319c8b547c48 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
 
 	mmu_interval_set_seq(mni, cur_seq);
 
-	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	mutex_unlock(&adev->notifier_lock);
 	if (r <= 0)
 		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 0adffcace3263..de1c7c5501683 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
 		return 0;
 	}
 
-	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
-						MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	if (r < 0)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index c6dbc08016045..4a2196404fb69 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	ib->length_dw = 16;
 
 	if (direct) {
-		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
-							true, false,
-							msecs_to_jiffies(10));
+		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
+						   true, false,
+						   msecs_to_jiffies(10));
 		if (r == 0)
 			r = -ETIMEDOUT;
 		if (r < 0)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 4a3e3f72e1277..7ba1c537d6584 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	unsigned i, shared_count;
 	int r;
 
-	r = dma_resv_get_fences_rcu(resv, &excl,
-					      &shared_count, &shared);
+	r = dma_resv_get_fences_unlocked(resv, &excl,
+					 &shared_count, &shared);
 	if (r) {
 		/* Not enough memory to grab the fence list, as last resort
 		 * block for all the fences to complete.
 		 */
-		dma_resv_wait_timeout_rcu(resv, true, false,
-						    MAX_SCHEDULE_TIMEOUT);
+		dma_resv_wait_timeout_unlocked(resv, true, false,
+					       MAX_SCHEDULE_TIMEOUT);
 		return;
 	}
 
@@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
 		return true;
 
 	/* Don't evict VM page tables while they are busy */
-	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
+	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
 		return false;
 
 	/* Try to block ongoing updates */
@@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
  */
 long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
 {
-	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
-					    true, true, timeout);
+	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
+						 true, true, timeout);
 	if (timeout <= 0)
 		return timeout;
 
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 9ca517b658546..0121d2817fa26 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 		 * deadlock during GPU reset when this fence will not signal
 		 * but we hold reservation lock for the BO.
 		 */
-		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
-							false,
-							msecs_to_jiffies(5000));
+		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
+						   false,
+						   msecs_to_jiffies(5000));
 		if (unlikely(r <= 0))
 			DRM_ERROR("Waiting for fences timed out!");
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 9989425e9875a..1241a421b9e81 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
 		return -EINVAL;
 	}
 
-	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
-						  true, timeout);
+	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
+					     true, timeout);
 	if (ret == 0)
 		ret = -ETIME;
 	else if (ret > 0)
@@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
 
 	if (!write) {
 		struct dma_fence *fence =
-			dma_resv_get_excl_rcu(obj->resv);
+			dma_resv_get_excl_unlocked(obj->resv);
 
 		return drm_gem_fence_array_add(fence_array, fence);
 	}
 
-	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
-						&fence_count, &fences);
+	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
+					   &fence_count, &fences);
 	if (ret || !fence_count)
 		return ret;
 
diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
index a005c5a0ba46a..a27135084ae5c 100644
--- a/drivers/gpu/drm/drm_gem_atomic_helper.c
+++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
@@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
 		return 0;
 
 	obj = drm_gem_fb_get_obj(state->fb, 0);
-	fence = dma_resv_get_excl_rcu(obj->resv);
+	fence = dma_resv_get_excl_unlocked(obj->resv);
 	drm_atomic_set_fence_for_plane(state, fence);
 
 	return 0;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index db69f19ab5bca..4e6f5346e84e4 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
 	}
 
 	if (op & ETNA_PREP_NOSYNC) {
-		if (!dma_resv_test_signaled_rcu(obj->resv,
-							  write))
+		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
 			return -EBUSY;
 	} else {
 		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
 
-		ret = dma_resv_wait_timeout_rcu(obj->resv,
-							  write, true, remain);
+		ret = dma_resv_wait_timeout_unlocked(obj->resv,
+						     write, true, remain);
 		if (ret <= 0)
 			return ret == 0 ? -ETIMEDOUT : ret;
 	}
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
index d05c359945799..6617fada4595d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
@@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 			continue;
 
 		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
-								&bo->nr_shared,
-								&bo->shared);
+			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
+							   &bo->nr_shared,
+							   &bo->shared);
 			if (ret)
 				return ret;
 		} else {
-			bo->excl = dma_resv_get_excl_rcu(robj);
+			bo->excl = dma_resv_get_excl_unlocked(robj);
 		}
 
 	}
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 422b59ebf6dce..5f0b85a102159 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
 		if (ret < 0)
 			goto unpin_fb;
 
-		fence = dma_resv_get_excl_rcu(obj->base.resv);
+		fence = dma_resv_get_excl_unlocked(obj->base.resv);
 		if (fence) {
 			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
 						   fence);
diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
index 9e508e7d4629f..bdfc6bf16a4e9 100644
--- a/drivers/gpu/drm/i915/dma_resv_utils.c
+++ b/drivers/gpu/drm/i915/dma_resv_utils.c
@@ -10,7 +10,7 @@
 void dma_resv_prune(struct dma_resv *resv)
 {
 	if (dma_resv_trylock(resv)) {
-		if (dma_resv_test_signaled_rcu(resv, true))
+		if (dma_resv_test_signaled_unlocked(resv, true))
 			dma_resv_add_excl_fence(resv, NULL);
 		dma_resv_unlock(resv);
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
index 25235ef630c10..754ad6d1bace9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
@@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	 * Alternatively, we can trade that extra information on read/write
 	 * activity with
 	 *	args->busy =
-	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
+	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
 	 * to report the overall busyness. This is what the wait-ioctl does.
 	 *
 	 */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 297143511f99b..e8f323564e57b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
 	if (DBG_FORCE_RELOC)
 		return false;
 
-	return !dma_resv_test_signaled_rcu(vma->resv, true);
+	return !dma_resv_test_signaled_unlocked(vma->resv, true);
 }
 
 static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 2ebd79537aea9..7c0eb425cb3b3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
 	struct dma_fence *fence;
 
 	rcu_read_lock();
-	fence = dma_resv_get_excl_rcu(obj->base.resv);
+	fence = dma_resv_get_excl_unlocked(obj->base.resv);
 	rcu_read_unlock();
 
 	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index a657b99ec7606..44df18dc9669f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
 		return true;
 
 	/* we will unbind on next submission, still have userptr pins */
-	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	if (r <= 0)
 		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 4b9856d5ba14f..5b6c52659ad4d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		unsigned int count, i;
 		int ret;
 
-		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		 */
 		prune_fences = count && timeout >= 0;
 	} else {
-		excl = dma_resv_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_unlocked(resv);
 	}
 
 	if (excl && timeout >= 0)
@@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		unsigned int count, i;
 		int ret;
 
-		ret = dma_resv_get_fences_rcu(obj->base.resv,
-					      &excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(obj->base.resv,
+						   &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_unlocked(obj->base.resv);
 	}
 
 	if (excl) {
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 970d8f4986bbe..f1ed03ced7dd1 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
 		struct dma_fence **shared;
 		unsigned int count, i;
 
-		ret = dma_resv_get_fences_rcu(obj->base.resv,
-							&excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(obj->base.resv,
+						   &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_unlocked(obj->base.resv);
 	}
 
 	if (excl) {
diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
index 2744558f30507..0bcb7ea44201e 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence.c
+++ b/drivers/gpu/drm/i915/i915_sw_fence.c
@@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 		struct dma_fence **shared;
 		unsigned int count, i;
 
-		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_unlocked(resv);
 	}
 
 	if (ret >= 0 && excl && excl->ops != exclude) {
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 56df86e5f7400..1aca60507bb14 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
 		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
 	long ret;
 
-	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
-						  true,  remain);
+	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
 	if (ret == 0)
 		return remain == 0 ? -EBUSY : -ETIMEDOUT;
 	else if (ret < 0)
diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
index 0cb1f9d848d3e..8d048bacd6f02 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
@@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
 			asyw->image.handle[0] = ctxdma->object.handle;
 	}
 
-	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
+	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
 	asyw->image.offset[0] = nvbo->offset;
 
 	if (wndw->func->prepare) {
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index a70e82413fa75..bc6b09ee9b552 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
 		return -ENOENT;
 	nvbo = nouveau_gem_object(gem);
 
-	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
-						   no_wait ? 0 : 30 * HZ);
+	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
+					      no_wait ? 0 : 30 * HZ);
 	if (!lret)
 		ret = -EBUSY;
 	else if (lret > 0)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index ca07098a61419..eef5b632ee0ce 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
 	if (!gem_obj)
 		return -ENOENT;
 
-	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
-						  true, timeout);
+	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
+					     true, timeout);
 	if (!ret)
 		ret = timeout ? -ETIMEDOUT : -EBUSY;
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 6003cfeb13221..2df3e999a38d0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
 	int i;
 
 	for (i = 0; i < bo_count; i++)
-		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
+		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
 }
 
 static void panfrost_attach_object_fences(struct drm_gem_object **bos,
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 05ea2f39f6261..1a38b0bf36d11 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
 	}
 	if (domain == RADEON_GEM_DOMAIN_CPU) {
 		/* Asking for cpu access wait for object idle */
-		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
+		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
 		if (!r)
 			r = -EBUSY;
 
@@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
 	}
 	robj = gem_to_radeon_bo(gobj);
 
-	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
+	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
 	if (r == 0)
 		r = -EBUSY;
 	else
@@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 	}
 	robj = gem_to_radeon_bo(gobj);
 
-	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
+	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
 	if (ret == 0)
 		r = -EBUSY;
 	else if (ret < 0)
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index e37c9a57a7c36..a19be3f8a218c 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
 		return true;
 	}
 
-	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	if (r <= 0)
 		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
 
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index ca1b098b6a561..215cad3149621 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
 	struct dma_resv *resv = &bo->base._resv;
 	int ret;
 
-	if (dma_resv_test_signaled_rcu(resv, true))
+	if (dma_resv_test_signaled_unlocked(resv, true))
 		ret = 0;
 	else
 		ret = -EBUSY;
@@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
 			dma_resv_unlock(bo->base.resv);
 		spin_unlock(&bo->bdev->lru_lock);
 
-		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
-						 30 * HZ);
+		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
+						      30 * HZ);
 
 		if (lret < 0)
 			return lret;
@@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
 			/* Last resort, if we fail to allocate memory for the
 			 * fences block for the BO to become idle
 			 */
-			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
-						  30 * HZ);
+			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
+						       30 * HZ);
 		}
 
 		if (bo->bdev->funcs->release_notify)
@@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
 		ttm_mem_io_free(bdev, &bo->mem);
 	}
 
-	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
+	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
 	    !dma_resv_trylock(bo->base.resv)) {
 		/* The BO is not idle, resurrect it for delayed destroy */
 		ttm_bo_flush_all_fences(bo);
@@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
 	long timeout = 15 * HZ;
 
 	if (no_wait) {
-		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
+		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
 			return 0;
 		else
 			return -EBUSY;
 	}
 
-	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
-						      interruptible, timeout);
+	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
+						 interruptible, timeout);
 	if (timeout < 0)
 		return timeout;
 
diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
index 2902dc6e64faf..010a82405e374 100644
--- a/drivers/gpu/drm/vgem/vgem_fence.c
+++ b/drivers/gpu/drm/vgem/vgem_fence.c
@@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
 
 	/* Check for a conflicting fence */
 	resv = obj->resv;
-	if (!dma_resv_test_signaled_rcu(resv,
-						  arg->flags & VGEM_FENCE_WRITE)) {
+	if (!dma_resv_test_signaled_unlocked(resv,
+					     arg->flags & VGEM_FENCE_WRITE)) {
 		ret = -EBUSY;
 		goto err_fence;
 	}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 669f2ee395154..ab010c8e32816 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
 		return -ENOENT;
 
 	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
-		ret = dma_resv_test_signaled_rcu(obj->resv, true);
+		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
 	} else {
-		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
-						timeout);
+		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
+						     timeout);
 	}
 	if (ret == 0)
 		ret = -EBUSY;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
index 04dd49c4c2572..19e1ce23842a9 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
@@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
 	if (flags & drm_vmw_synccpu_allow_cs) {
 		long lret;
 
-		lret = dma_resv_wait_timeout_rcu
+		lret = dma_resv_wait_timeout_unlocked
 			(bo->base.resv, true, true,
 			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
 		if (!lret)
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index d44a77e8a7e34..99cfb7af966b8 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
 }
 
 /**
- * dma_resv_get_excl_rcu - get the reservation object's
+ * dma_resv_get_excl_unlocked - get the reservation object's
  * exclusive fence, without lock held.
  * @obj: the reservation object
  *
@@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
  * The exclusive fence or NULL if none
  */
 static inline struct dma_fence *
-dma_resv_get_excl_rcu(struct dma_resv *obj)
+dma_resv_get_excl_unlocked(struct dma_resv *obj)
 {
 	struct dma_fence *fence;
 
@@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
 
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 
-int dma_resv_get_fences_rcu(struct dma_resv *obj,
-			    struct dma_fence **pfence_excl,
-			    unsigned *pshared_count,
-			    struct dma_fence ***pshared);
+int dma_resv_get_fences_unlocked(struct dma_resv *obj,
+				 struct dma_fence **pfence_excl,
+				 unsigned *pshared_count,
+				 struct dma_fence ***pshared);
 
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 
-long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
-			       unsigned long timeout);
+long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
+				    unsigned long timeout);
 
-bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
+bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
 
 #endif /* _LINUX_RESERVATION_H */
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH 3/7] dma-buf: Add dma_resv_get_singleton_unlocked (v5)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-25 21:17   ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

Add a helper function to get a single fence representing
all fences in a dma_resv object.

This fence is either the only one in the object or all unsignaled
fences of the object flattened out into a dma_fence_array.
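
As a usage sketch (hypothetical caller; error handling trimmed), a driver
that wants to block on everything currently attached to an object could do:

	struct dma_fence *fence;

	fence = dma_resv_get_singleton_unlocked(obj);
	if (IS_ERR(fence))
		return PTR_ERR(fence);
	if (fence) {
		dma_fence_wait(fence, true);	/* interruptible wait */
		dma_fence_put(fence);
	}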

v2 (Jason Ekstrand):
 - Take reference of fences both for creating the dma_fence_array and in
   the case where we return one fence.
 - Handle the case where dma_resv_get_list() returns NULL

v3 (Jason Ekstrand):
 - Add an _rcu suffix because it is read-only
 - Rewrite to use dma_resv_get_fences_rcu so it's RCU-safe
 - Add an EXPORT_SYMBOL_GPL declaration
 - Re-author the patch to Jason since very little is left of Christian
   König's original patch
 - Remove the extra fence argument

v4 (Jason Ekstrand):
 - Restore the extra fence argument

v5 (Daniel Vetter):
 - Rename from _rcu to _unlocked since it doesn't leak RCU details to
   the caller
 - Fix docs
 - Use ERR_PTR for error handling rather than an output dma_fence**

v5 (Jason Ekstrand):
 - Drop the extra fence param and leave that to a separate patch

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-resv.c | 92 ++++++++++++++++++++++++++++++++++++++
 include/linux/dma-resv.h   |  2 +
 2 files changed, 94 insertions(+)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index d6f1ed4cd4d55..23db2181c8ad8 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -33,6 +33,8 @@
  */
 
 #include <linux/dma-resv.h>
+#include <linux/dma-fence-chain.h>
+#include <linux/dma-fence-array.h>
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/sched/mm.h>
@@ -49,6 +51,11 @@
  * write-side updates.
  */
 
+/* deep dive into the fence containers */
+#define dma_fence_deep_dive_for_each(fence, chain, index, head)	\
+	dma_fence_chain_for_each(chain, head)			\
+		dma_fence_array_for_each(fence, index, chain)
+
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
@@ -517,6 +524,91 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj,
 }
 EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
 
+/**
+ * dma_resv_get_singleton_unlocked - get a single fence for the dma_resv object
+ * @obj: the reservation object
+ *
+ * Get a single fence representing all unsignaled fences in the dma_resv
+ * object. If there is only one such fence, return a new reference to it;
+ * otherwise return a dma_fence_array object.
+ *
+ * RETURNS
+ * The singleton dma_fence on success, NULL if no unsignaled fences remain,
+ * or an ERR_PTR on failure
+ */
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
+{
+	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
+	struct dma_fence_array *array;
+	unsigned int num_resv_fences, num_fences;
+	unsigned int i, j;
+	int err;
+
+	err = dma_resv_get_fences_unlocked(obj, NULL, &num_resv_fences, &resv_fences);
+	if (err)
+		return ERR_PTR(err);
+
+	if (num_resv_fences == 0)
+		return NULL;
+
+	num_fences = 0;
+	result = NULL;
+
+	for (i = 0; i < num_resv_fences; ++i) {
+		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
+			if (dma_fence_is_signaled(fence))
+				continue;
+
+			result = fence;
+			++num_fences;
+		}
+	}
+
+	if (num_fences <= 1) {
+		result = dma_fence_get(result);
+		goto put_resv_fences;
+	}
+
+	fences = kmalloc_array(num_fences, sizeof(*fences),
+			       GFP_KERNEL);
+	if (!fences) {
+		result = ERR_PTR(-ENOMEM);
+		goto put_resv_fences;
+	}
+
+	num_fences = 0;
+	for (i = 0; i < num_resv_fences; ++i) {
+		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
+			if (!dma_fence_is_signaled(fence))
+				fences[num_fences++] = dma_fence_get(fence);
+		}
+	}
+
+	if (num_fences <= 1) {
+		result = num_fences ? fences[0] : NULL;
+		kfree(fences);
+		goto put_resv_fences;
+	}
+
+	array = dma_fence_array_create(num_fences, fences,
+				       dma_fence_context_alloc(1),
+				       1, false);
+	if (array) {
+		result = &array->base;
+	} else {
+		result = ERR_PTR(-ENOMEM);
+		while (num_fences--)
+			dma_fence_put(fences[num_fences]);
+		kfree(fences);
+	}
+
+put_resv_fences:
+	while (num_resv_fences--)
+		dma_fence_put(resv_fences[num_resv_fences]);
+	kfree(resv_fences);
+
+	return result;
+}
+EXPORT_SYMBOL_GPL(dma_resv_get_singleton_unlocked);
+
 /**
  * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
  * shared and/or exclusive fences.
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 99cfb7af966b8..c5fa09555eca5 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -285,6 +285,8 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj,
 
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj);
+
 long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
 				    unsigned long timeout);
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH 4/7] dma-buf: Document DMA_BUF_IOCTL_SYNC
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-25 21:17   ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
documentation for DMA_BUF_IOCTL_SYNC.
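
To make the documented bracketing concrete, a userspace client writing
through the CPU map of a dma-buf would do something like this (a sketch
only; "dmabuf_fd", "map", "data" and "size" stand in for a real dma-buf
fd, its mmap()ed address, and the client's payload; error checking is
omitted):

	struct dma_buf_sync sync = { 0 };

	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* begin CPU access */

	memcpy(map, data, size);			/* access via the mmap */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* end CPU access */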

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
---
 Documentation/driver-api/dma-buf.rst |  8 +++++++
 include/uapi/linux/dma-buf.h         | 32 +++++++++++++++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 7f37ec30d9fd7..784f84fe50a5e 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -88,6 +88,9 @@ consider though:
 - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
   details.
 
+- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
+  `DMA Buffer ioctls`_ below for details.
+
 Basic Operation and Device DMA Access
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -106,6 +109,11 @@ Implicit Fence Poll Support
 .. kernel-doc:: drivers/dma-buf/dma-buf.c
    :doc: implicit fence polling
 
+DMA Buffer ioctls
+~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: include/uapi/linux/dma-buf.h
+
 Kernel Functions and Structures Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 7f30393b92c3b..1f67ced853b14 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -22,8 +22,38 @@
 
 #include <linux/types.h>
 
-/* begin/end dma-buf functions used for userspace mmap. */
+/**
+ * struct dma_buf_sync - Synchronize with CPU access.
+ *
+ * When a DMA buffer is accessed from the CPU via mmap, it is not always
+ * possible to guarantee coherency between the CPU-visible map and underlying
+ * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
+ * any CPU access to give the kernel the chance to shuffle memory around if
+ * needed.
+ *
+ * Prior to accessing the map, the client should call DMA_BUF_IOCTL_SYNC
+ * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
+ * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
+ * DMA_BUF_SYNC_END and the same read/write flags.
+ */
 struct dma_buf_sync {
+	/**
+	 * @flags: Set of access flags
+	 *
+	 * - DMA_BUF_SYNC_START: Indicates the start of a map access
+	 *   session.
+	 *
+	 * - DMA_BUF_SYNC_END: Indicates the end of a map access session.
+	 *
+	 * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will
+	 *   be read by the client via the CPU map.
+	 *
+	 * - DMA_BUF_SYNC_WRITE: Indicates that the mapped DMA buffer will
+	 *   be written by the client via the CPU map.
+	 *
+	 * - DMA_BUF_SYNC_RW: An alias for DMA_BUF_SYNC_READ |
+	 *   DMA_BUF_SYNC_WRITE.
+	 */
 	__u64 flags;
 };
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-25 21:17   ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Christian König, Jason Ekstrand, Daniel Vetter

Modern userspace APIs like Vulkan are built on an explicit
synchronization model.  This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland.  The client -> compositor half of the synchronization isn't too
bad, at least on intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor.  We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer.  With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's access of the
buffer.  In particular, once we tell the kernel that we're rendering to
the buffer again, any CPU waits on the buffer or GPU dependencies will
wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
instead of CPU waiting directly, it encapsulates the wait operation, at
the current moment in time, in a sync_file so we can check/wait on it
later.  As long as the Vulkan driver does the sync_file export from the
dma-buf before we re-introduce it for rendering, it will only contain
fences from the compositor or display.  This allows us to accurately turn
it into a VkFence or VkSemaphore without any over-synchronization.
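
For reference, userspace consumption of the new ioctl might look roughly
like this (sketch only; "dmabuf_fd" stands in for the dma-buf file
descriptor handed back by the compositor, and error handling is omitted):

	struct dma_buf_export_sync_file export = {
		.flags = DMA_BUF_SYNC_READ,	/* snapshot only the writers */
		.fd = -1,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &export);
	/* export.fd now holds a sync_file snapshotting the fences present
	 * at this moment; it can be turned into a VkFence or VkSemaphore
	 * via the usual sync_fd import paths.
	 */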

v2 (Jason Ekstrand):
 - Use a wrapper dma_fence_array of all fences including the new one
   when importing an exclusive fence.

v3 (Jason Ekstrand):
 - Lock around setting shared fences as well as exclusive
 - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
 - Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
 - Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
 - Rename the IOCTLs to import/export rather than wait/signal
 - Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
 - Drop the sync_file import as it was all-around sketchy and not nearly
   as useful as export.
 - Re-introduce READ/WRITE flag support for export
 - Rework the commit message

v7 (Jason Ekstrand):
 - Require at least one sync flag
 - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
 - Use _rcu helpers since we're accessing the dma_resv read-only

v8 (Jason Ekstrand):
 - Return -ENOMEM if the sync_file_create fails
 - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)

v9 (Jason Ekstrand):
 - Add documentation for the new ioctl

v10 (Jason Ekstrand):
 - Go back to dma_buf_sync_file as the ioctl struct name

v11 (Daniel Vetter):
 - Go back to dma_buf_export_sync_file as the ioctl struct name
 - Better kerneldoc describing what the read/write flags do

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Simon Ser <contact@emersion.fr>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 67 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++
 2 files changed, 102 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ed6451d55d663..65a9574ee04ed 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * Note that this only signals the completion of the respective fences, i.e. the
  * DMA transfers are complete. Cache flushing and any other necessary
  * preparations before CPU access can begin still need to happen.
+ *
+ * As an alternative to poll(), the set of fences on a DMA buffer can be
+ * exported as a &sync_file using &DMA_BUF_IOCTL_EXPORT_SYNC_FILE.
  */
 
 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return ret;
 }
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_export_sync_file arg;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	if (arg.flags & DMA_BUF_SYNC_WRITE) {
+		fence = dma_resv_get_singleton_unlocked(dmabuf->resv);
+		if (IS_ERR(fence)) {
+			ret = PTR_ERR(fence);
+			goto err_put_fd;
+		}
+	} else if (arg.flags & DMA_BUF_SYNC_READ) {
+		fence = dma_resv_get_excl_unlocked(dmabuf->resv);
+	}
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+
+	dma_fence_put(fence);
+
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg))) {
+		ret = -EFAULT;
+		goto err_put_file;
+	}
+
+	/* Only install the fd once nothing can fail any more */
+	fd_install(fd, sync_file->file);
+
+	return 0;
+
+err_put_file:
+	fput(sync_file->file);
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 1f67ced853b14..aeba45180b028 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -67,6 +67,40 @@ struct dma_buf_sync {
 
 #define DMA_BUF_NAME_LEN	32
 
+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins.  This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately.  This is
+ * useful for modern graphics APIs such as Vulkan which assume an explicit
+ * synchronization model but still need to inter-operate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+	 *
+	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+	 * the returned sync file waits on any writers of the dma-buf to
+	 * complete.  Waiting on the returned sync file is equivalent to
+	 * poll() with POLLIN.
+	 *
+	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+	 * any users of the dma-buf (read or write) to complete.  Waiting
+	 * on the returned sync file is equivalent to poll() with POLLOUT.
+	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+	 * is equivalent to just DMA_BUF_SYNC_WRITE.
+	 */
+	__u32 flags;
+	/** @fd: Returned sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -76,5 +110,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
 
 #endif
-- 
2.31.1
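
For illustration, here is a minimal userspace sketch of driving the new
ioctl (assuming a kernel with this series applied; export_sync_file() is
a hypothetical helper and error handling is abbreviated):

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Snapshot the current fences on a dma-buf as a sync_file.  Pass
 * DMA_BUF_SYNC_READ to wait only on writers (the poll() POLLIN case)
 * or DMA_BUF_SYNC_RW to wait on all access (the POLLOUT case). */
static int export_sync_file(int dmabuf_fd, __u32 flags)
{
	struct dma_buf_export_sync_file arg = {
		.flags = flags,
		.fd = -1,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
		return -1;

	return arg.fd;	/* the caller close()s this sync_file fd */
}

The returned fd can then be turned into a VkFence or VkSemaphore by the
Vulkan driver without picking up the client's own rendering, per the
commit message above.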


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
@ 2021-05-25 21:17   ` Jason Ekstrand
  0 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx
  Cc: Simon Ser, Christian König, Daniel Vetter, Sumit Semwal

Modern userspace APIs like Vulkan are built on an explicit
synchronization model.  This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland.  The client -> compositor half of the synchronization isn't too
bad, at least on intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor.  We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer.  With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's access of the
buffer.  In particular, once we tell the kernel that we're rendering to
the buffer again, any CPU waits on the buffer or GPU dependencies will
wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file.  It's effectively the same as a poll() or I915_GEM_WAIT
except that, instead of waiting on the CPU directly, it encapsulates
the wait operation, at the current moment in time, in a sync_file so we
can check/wait on it later.  As long as the Vulkan driver does the
sync_file export from the dma-buf before we re-introduce it for
rendering, it will only contain fences from the compositor or display.
This allows us to accurately turn it into a VkFence or VkSemaphore
without any over-synchronization.

v2 (Jason Ekstrand):
 - Use a wrapper dma_fence_array of all fences including the new one
   when importing an exclusive fence.

v3 (Jason Ekstrand):
 - Lock around setting shared fences as well as exclusive
 - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
 - Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
 - Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
 - Rename the IOCTLs to import/export rather than wait/signal
 - Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
 - Drop the sync_file import as it was all-around sketchy and not nearly
   as useful as export.
 - Re-introduce READ/WRITE flag support for export
 - Rework the commit message

v7 (Jason Ekstrand):
 - Require at least one sync flag
 - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
 - Use _rcu helpers since we're accessing the dma_resv read-only

v8 (Jason Ekstrand):
 - Return -ENOMEM if the sync_file_create fails
 - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)

v9 (Jason Ekstrand):
 - Add documentation for the new ioctl

v10 (Jason Ekstrand):
 - Go back to dma_buf_sync_file as the ioctl struct name

v11 (Daniel Vetter):
 - Go back to dma_buf_export_sync_file as the ioctl struct name
 - Better kerneldoc describing what the read/write flags do

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Simon Ser <contact@emersion.fr>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 67 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++
 2 files changed, 102 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ed6451d55d663..65a9574ee04ed 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * Note that this only signals the completion of the respective fences, i.e. the
  * DMA transfers are complete. Cache flushing and any other necessary
  * preparations before CPU access can begin still need to happen.
+ *
+ * As an alternative to poll(), the set of fences on a DMA buffer can be
+ * exported as a &sync_file using &dma_buf_export_sync_file.
  */
 
 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return ret;
 }
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_export_sync_file arg;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	if (arg.flags & DMA_BUF_SYNC_WRITE) {
+		fence = dma_resv_get_singleton_unlocked(dmabuf->resv);
+		if (IS_ERR(fence)) {
+			ret = PTR_ERR(fence);
+			goto err_put_fd;
+		}
+	} else if (arg.flags & DMA_BUF_SYNC_READ) {
+		fence = dma_resv_get_excl_unlocked(dmabuf->resv);
+	}
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+
+	dma_fence_put(fence);
+
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 1f67ced853b14..aeba45180b028 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -67,6 +67,40 @@ struct dma_buf_sync {
 
 #define DMA_BUF_NAME_LEN	32
 
+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins.  This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately.  This is
+ * useful for modern graphics APIs such as Vulkan, which assume an explicit
+ * synchronization model but still need to interoperate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+	 *
+	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+	 * the returned sync file waits on any writers of the dma-buf to
+	 * complete.  Waiting on the returned sync file is equivalent to
+	 * poll() with POLLIN.
+	 *
+	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+	 * any users of the dma-buf (read or write) to complete.  Waiting
+	 * on the returned sync file is equivalent to poll() with POLLOUT.
+	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+	 * is equivalent to just DMA_BUF_SYNC_WRITE.
+	 */
+	__u32 flags;
+	/** @fd: Returned sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -76,5 +110,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
 
 #endif
-- 
2.31.1
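
One subtlety in dma_buf_export_sync_file() above: fd_install() publishes
the descriptor to userspace immediately, so the -EFAULT return after it
cannot take the fd back.  Below is a sketch of the function's tail
reordered so the fd is only installed once nothing can fail (an
editorial suggestion rather than part of the posted patch; it adds a
hypothetical err_put_file label):

	arg.fd = fd;
	if (copy_to_user(user_data, &arg, sizeof(arg))) {
		ret = -EFAULT;
		goto err_put_file;
	}

	/* Publish the fd only after all failure points are behind us. */
	fd_install(fd, sync_file->file);
	return 0;

err_put_file:
	fput(sync_file->file);
err_put_fd:
	put_unused_fd(fd);
	return ret;
}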


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH 6/7] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-25 21:17   ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

For dma-buf sync_file import, we want to get all the fences on a
dma_resv plus one more.  We could wrap the fence we get back in an array
fence or we could make dma_resv_get_singleton_unlocked take "one more"
to make this case easier.
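
For comparison, the wrapper alternative mentioned above would look
roughly like the sketch below.  This is illustrative only: resv_fence
and extra are hypothetical locals, and dma_fence_array_create() takes
ownership of the fences array, which is why it must be heap-allocated:

	struct dma_fence **fences;
	struct dma_fence_array *array;

	fences = kmalloc_array(2, sizeof(*fences), GFP_KERNEL);
	if (!fences)
		return ERR_PTR(-ENOMEM);

	fences[0] = dma_fence_get(resv_fence);
	fences[1] = dma_fence_get(extra);

	/* On success the array owns @fences and both references. */
	array = dma_fence_array_create(2, fences,
				       dma_fence_context_alloc(1), 1,
				       false);
	if (!array) {
		dma_fence_put(fences[0]);
		dma_fence_put(fences[1]);
		kfree(fences);
		return ERR_PTR(-ENOMEM);
	}

	return &array->base;

Taking an "extra" parameter instead lets the helper fold the extra
fence into the array it is building anyway and skip this wrapping in
the single-fence cases.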

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c  |  2 +-
 drivers/dma-buf/dma-resv.c | 23 +++++++++++++++++++++--
 include/linux/dma-resv.h   |  3 ++-
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 65a9574ee04ed..ea117de962903 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -389,7 +389,7 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 		return fd;
 
 	if (arg.flags & DMA_BUF_SYNC_WRITE) {
-		fence = dma_resv_get_singleton_unlocked(dmabuf->resv);
+		fence = dma_resv_get_singleton_unlocked(dmabuf->resv, NULL);
 		if (IS_ERR(fence)) {
 			ret = PTR_ERR(fence);
 			goto err_put_fd;
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 23db2181c8ad8..5a5e13a01e516 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -527,6 +527,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
 /**
  * dma_resv_get_singleton_unlocked - get a single fence for the dma_resv object
  * @obj: the reservation object
+ * @extra: extra fence to add to the resulting array
  *
  * Get a single fence representing all unsignaled fences in the dma_resv object
  * plus the given extra fence. If we got only one fence return a new
@@ -535,7 +536,8 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
  * RETURNS
  * The singleton dma_fence on success or an ERR_PTR on failure
  */
-struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj,
+						  struct dma_fence *extra)
 {
 	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
 	struct dma_fence_array *array;
@@ -546,7 +548,7 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
 	if (err)
 		return ERR_PTR(err);
 
-	if (num_resv_fences == 0)
+	if (num_resv_fences == 0 && !extra)
 		return NULL;
 
 	num_fences = 0;
@@ -562,6 +564,16 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
 		}
 	}
 
+	if (extra) {
+		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
+			if (dma_fence_is_signaled(fence))
+				continue;
+
+			result = fence;
+			++num_fences;
+		}
+	}
+
 	if (num_fences <= 1) {
 		result = dma_fence_get(result);
 		goto put_resv_fences;
@@ -582,6 +594,13 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
 		}
 	}
 
+	if (extra) {
+		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
+			if (!dma_fence_is_signaled(fence))
+				fences[num_fences++] = dma_fence_get(fence);
+		}
+	}
+
 	if (num_fences <= 1) {
 		result = num_fences ? fences[0] : NULL;
 		kfree(fences);
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index c5fa09555eca5..4b1dabfa7017d 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -285,7 +285,8 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj,
 
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 
-struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj);
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj,
+						  struct dma_fence *extra);
 
 long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
 				    unsigned long timeout);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Intel-gfx] [PATCH 6/7] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
@ 2021-05-25 21:17   ` Jason Ekstrand
  0 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Daniel Vetter, Christian König

For dma-buf sync_file import, we want to get all the fences on a
dma_resv plus one more.  We could wrap the fence we get back in an array
fence or we could make dma_resv_get_singleton_unlocked take "one more"
to make this case easier.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c  |  2 +-
 drivers/dma-buf/dma-resv.c | 23 +++++++++++++++++++++--
 include/linux/dma-resv.h   |  3 ++-
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 65a9574ee04ed..ea117de962903 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -389,7 +389,7 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 		return fd;
 
 	if (arg.flags & DMA_BUF_SYNC_WRITE) {
-		fence = dma_resv_get_singleton_unlocked(dmabuf->resv);
+		fence = dma_resv_get_singleton_unlocked(dmabuf->resv, NULL);
 		if (IS_ERR(fence)) {
 			ret = PTR_ERR(fence);
 			goto err_put_fd;
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 23db2181c8ad8..5a5e13a01e516 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -527,6 +527,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
 /**
  * dma_resv_get_singleton_unlocked - get a single fence for the dma_resv object
  * @obj: the reservation object
+ * @extra: extra fence to add to the resulting array
  *
  * Get a single fence representing all unsignaled fences in the dma_resv object
  * plus the given extra fence. If we got only one fence return a new
@@ -535,7 +536,8 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
  * RETURNS
  * The singleton dma_fence on success or an ERR_PTR on failure
  */
-struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj,
+						  struct dma_fence *extra)
 {
 	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
 	struct dma_fence_array *array;
@@ -546,7 +548,7 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
 	if (err)
 		return ERR_PTR(err);
 
-	if (num_resv_fences == 0)
+	if (num_resv_fences == 0 && !extra)
 		return NULL;
 
 	num_fences = 0;
@@ -562,6 +564,16 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
 		}
 	}
 
+	if (extra) {
+		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
+			if (dma_fence_is_signaled(fence))
+				continue;
+
+			result = fence;
+			++num_fences;
+		}
+	}
+
 	if (num_fences <= 1) {
 		result = dma_fence_get(result);
 		goto put_resv_fences;
@@ -582,6 +594,13 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
 		}
 	}
 
+	if (extra) {
+		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
+			if (!dma_fence_is_signaled(fence))
+				fences[num_fences++] = dma_fence_get(fence);
+		}
+	}
+
 	if (num_fences <= 1) {
 		result = num_fences ? fences[0] : NULL;
 		kfree(fences);
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index c5fa09555eca5..4b1dabfa7017d 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -285,7 +285,8 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj,
 
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 
-struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj);
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj,
+						  struct dma_fence *extra);
 
 long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
 				    unsigned long timeout);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH 7/7] RFC: dma-buf: Add an API for importing sync files (v7)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-25 21:17   ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

This patch is analogous to the previous sync file export patch in that
it allows you to import a sync_file into a dma-buf.  Unlike the previous
patch, however, this does add genuinely new functionality to dma-buf.
Without this, the only way to attach a sync_file to a dma-buf is to
submit a batch to your driver of choice which waits on the sync_file and
claims to write to the dma-buf.  Even if said batch is a no-op, a submit
is typically way more overhead than just attaching a fence.  A submit
may also imply extra synchronization with other work because it happens
on a hardware queue.

In the Vulkan world, this is useful for dealing with the out-fence from
vkQueuePresent.  Current Linux window systems (X11, Wayland, etc.) all
rely on dma-buf implicit sync.  Since Vulkan is an explicit sync API, we
get a set of fences (VkSemaphores) in vkQueuePresent and have to stash
those as an exclusive (write) fence on the dma-buf.  We handle this in
Mesa today with the above-mentioned dummy-submit trick.  This ioctl
would allow us to set it directly without the dummy submit.

This may also open up possibilities for GPU drivers to move away from
implicit sync for their kernel driver uAPI and instead provide sync
files and rely on dma-buf import/export for communicating with other
implicit sync clients.

We make the explicit choice here to only allow setting RW fences which
translates to an exclusive fence on the dma_resv.  There's no use for
read-only fences for communicating with other implicit sync userspace
and any such attempts are likely to be racy at best.  When we go to
insert the RW fence, the actual fence we set as the new exclusive fence
is a combination of the sync_file provided by the user and all the other
fences on the dma_resv.  This ensures that the newly added exclusive
fence will never signal before the old one would have and ensures that
we don't break any dma_resv contracts.  We require userspace to specify
RW in the flags for symmetry with the export ioctl and in case we ever
want to support read fences in the future.

There is one downside here that's worth documenting:  If two clients
writing to the same dma-buf using this API race with each other, their
actions on the dma-buf may happen in parallel or in an undefined order.
Both with and without this API, the pattern is the same:  Collect all
the fences on dma-buf, submit work which depends on said fences, and
then set a new exclusive (write) fence on the dma-buf which depends on
said work.  The difference is that, when it's all handled by the GPU
driver's submit ioctl, the three operations happen atomically under the
dma_resv lock.  If two userspace submits race, one will happen before
the other.  You aren't guaranteed which but you are guaranteed that
they're strictly ordered.  If userspace manages the fences itself, then
these three operations happen separately and the two render operations
may happen genuinely in parallel or get interleaved.  However, this is a
case of userspace racing with itself.  As long as we ensure userspace
can't back the kernel into a corner, it should be fine.
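
To make this concrete, the userspace side of that collect/submit/set
pattern looks roughly like the sketch below (assuming a kernel with
this series applied; submit_work() is a hypothetical driver submission
that waits on an in-fence fd and returns an out-fence sync_file fd):

#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Hypothetical driver submission: waits on in_fence_fd, returns an
 * out-fence sync_file fd for the submitted work. */
extern int submit_work(int in_fence_fd);

static int attach_work_to_dmabuf(int dmabuf_fd)
{
	struct dma_buf_export_sync_file exp = { .flags = DMA_BUF_SYNC_RW };
	struct dma_buf_import_sync_file imp = { .flags = DMA_BUF_SYNC_RW };
	int out_fd;

	/* 1. Collect the fences currently on the dma-buf. */
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &exp) < 0)
		return -1;

	/* 2. Submit work which depends on those fences. */
	out_fd = submit_work(exp.fd);
	close(exp.fd);
	if (out_fd < 0)
		return -1;

	/* 3. Set the work's out-fence as the new exclusive fence. */
	imp.fd = out_fd;
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &imp) < 0) {
		close(out_fd);
		return -1;
	}

	close(out_fd);
	return 0;
}

Steps 1-3 are exactly the three operations described above; nothing
orders two such sequences from different clients against each other,
which is the race being documented.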

v2 (Jason Ekstrand):
 - Use a wrapper dma_fence_array of all fences including the new one
   when importing an exclusive fence.

v3 (Jason Ekstrand):
 - Lock around setting shared fences as well as exclusive
 - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
 - Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
 - Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
 - Rename the IOCTLs to import/export rather than wait/signal
 - Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
 - Split import and export into separate patches
 - New commit message

v7 (Daniel Vetter):
 - Fix the uapi header to use the right struct in the ioctl
 - Use a separate dma_buf_import_sync_file struct
 - Add kerneldoc for dma_buf_import_sync_file

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ea117de962903..098340222662b 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 	put_unused_fd(fd);
 	return ret;
 }
+
+static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
+				     const void __user *user_data)
+{
+	struct dma_buf_import_sync_file arg;
+	struct dma_fence *fence, *singleton = NULL;
+	int ret = 0;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(arg.fd);
+	if (!fence)
+		return -EINVAL;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+
+	singleton = dma_resv_get_singleton_unlocked(dmabuf->resv, fence);
+	if (IS_ERR(singleton)) {
+		ret = PTR_ERR(singleton);
+	} else if (singleton) {
+		dma_resv_add_excl_fence(dmabuf->resv, singleton);
+		dma_resv_add_shared_fence(dmabuf->resv, singleton);
+	}
+
+	dma_resv_unlock(dmabuf->resv);
+
+	dma_fence_put(fence);
+
+	return ret;
+}
 #endif
 
 static long dma_buf_ioctl(struct file *file,
@@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file,
 #if IS_ENABLED(CONFIG_SYNC_FILE)
 	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
 		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
+		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
 #endif
 
 	default:
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index aeba45180b028..af53987db24be 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -101,6 +101,27 @@ struct dma_buf_export_sync_file {
 	__s32 fd;
 };
 
+/**
+ * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
+ * sync_file into a dma-buf for the purposes of implicit synchronization
+ * with other dma-buf consumers.  This allows clients using explicitly
+ * synchronized APIs such as Vulkan to interoperate with dma-buf
+ * consumers which expect implicit synchronization, such as OpenGL or
+ * most media/video drivers.
+ */
+struct dma_buf_import_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_RW.
+	 */
+	__u32 flags;
+	/** @fd: Sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -111,5 +132,6 @@ struct dma_buf_export_sync_file {
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
 #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
+#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
 
 #endif
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Intel-gfx] [PATCH 7/7] RFC: dma-buf: Add an API for importing sync files (v7)
@ 2021-05-25 21:17   ` Jason Ekstrand
  0 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-25 21:17 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Daniel Vetter, Christian König, Sumit Semwal

This patch is analogous to the previous sync file export patch in that
it allows you to import a sync_file into a dma-buf.  Unlike the previous
patch, however, this does add genuinely new functionality to dma-buf.
Without this, the only way to attach a sync_file to a dma-buf is to
submit a batch to your driver of choice which waits on the sync_file and
claims to write to the dma-buf.  Even if said batch is a no-op, a submit
is typically way more overhead than just attaching a fence.  A submit
may also imply extra synchronization with other work because it happens
on a hardware queue.

In the Vulkan world, this is useful for dealing with the out-fence from
vkQueuePresent.  Current Linux window systems (X11, Wayland, etc.) all
rely on dma-buf implicit sync.  Since Vulkan is an explicit sync API, we
get a set of fences (VkSemaphores) in vkQueuePresent and have to stash
those as an exclusive (write) fence on the dma-buf.  We handle this in
Mesa today with the above-mentioned dummy-submit trick.  This ioctl
would allow us to set it directly without the dummy submit.

This may also open up possibilities for GPU drivers to move away from
implicit sync for their kernel driver uAPI and instead provide sync
files and rely on dma-buf import/export for communicating with other
implicit sync clients.

We make the explicit choice here to only allow setting RW fences which
translates to an exclusive fence on the dma_resv.  There's no use for
read-only fences for communicating with other implicit sync userspace
and any such attempts are likely to be racy at best.  When we go to
insert the RW fence, the actual fence we set as the new exclusive fence
is a combination of the sync_file provided by the user and all the other
fences on the dma_resv.  This ensures that the newly added exclusive
fence will never signal before the old one would have and ensures that
we don't break any dma_resv contracts.  We require userspace to specify
RW in the flags for symmetry with the export ioctl and in case we ever
want to support read fences in the future.

There is one downside here that's worth documenting:  If two clients
writing to the same dma-buf using this API race with each other, their
actions on the dma-buf may happen in parallel or in an undefined order.
Both with and without this API, the pattern is the same:  Collect all
the fences on dma-buf, submit work which depends on said fences, and
then set a new exclusive (write) fence on the dma-buf which depends on
said work.  The difference is that, when it's all handled by the GPU
driver's submit ioctl, the three operations happen atomically under the
dma_resv lock.  If two userspace submits race, one will happen before
the other.  You aren't guaranteed which but you are guaranteed that
they're strictly ordered.  If userspace manages the fences itself, then
these three operations happen separately and the two render operations
may happen genuinely in parallel or get interleaved.  However, this is a
case of userspace racing with itself.  As long as we ensure userspace
can't back the kernel into a corner, it should be fine.

v2 (Jason Ekstrand):
 - Use a wrapper dma_fence_array of all fences including the new one
   when importing an exclusive fence.

v3 (Jason Ekstrand):
 - Lock around setting shared fences as well as exclusive
 - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
 - Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
 - Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
 - Rename the IOCTLs to import/export rather than wait/signal
 - Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
 - Split import and export into separate patches
 - New commit message

v7 (Daniel Vetter):
 - Fix the uapi header to use the right struct in the ioctl
 - Use a separate dma_buf_import_sync_file struct
 - Add kerneldoc for dma_buf_import_sync_file

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ea117de962903..098340222662b 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 	put_unused_fd(fd);
 	return ret;
 }
+
+static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
+				     const void __user *user_data)
+{
+	struct dma_buf_import_sync_file arg;
+	struct dma_fence *fence, *singleton = NULL;
+	int ret = 0;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(arg.fd);
+	if (!fence)
+		return -EINVAL;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+
+	singleton = dma_resv_get_singleton_unlocked(dmabuf->resv, fence);
+	if (IS_ERR(singleton)) {
+		ret = PTR_ERR(singleton);
+	} else if (singleton) {
+		dma_resv_add_excl_fence(dmabuf->resv, singleton);
+		dma_resv_add_shared_fence(dmabuf->resv, singleton);
+	}
+
+	dma_resv_unlock(dmabuf->resv);
+
+	dma_fence_put(fence);
+
+	return ret;
+}
 #endif
 
 static long dma_buf_ioctl(struct file *file,
@@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file,
 #if IS_ENABLED(CONFIG_SYNC_FILE)
 	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
 		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
+		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
 #endif
 
 	default:
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index aeba45180b028..af53987db24be 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -101,6 +101,27 @@ struct dma_buf_export_sync_file {
 	__s32 fd;
 };
 
+/**
+ * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
+ * sync_file into a dma-buf for the purposes of implicit synchronization
+ * with other dma-buf consumers.  This allows clients using explicitly
+ * synchronized APIs such as Vulkan to interoperate with dma-buf
+ * consumers which expect implicit synchronization, such as OpenGL or
+ * most media/video drivers.
+ */
+struct dma_buf_import_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_RW.
+	 */
+	__u32 flags;
+	/** @fd: Sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -111,5 +132,6 @@ struct dma_buf_export_sync_file {
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
 #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
+#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
 
 #endif
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
                   ` (7 preceding siblings ...)
  (?)
@ 2021-05-25 21:44 ` Patchwork
  -1 siblings, 0 replies; 68+ messages in thread
From: Patchwork @ 2021-05-25 21:44 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: intel-gfx

== Series Details ==

Series: dma-buf: Add an API for exporting sync files (v11)
URL   : https://patchwork.freedesktop.org/series/90555/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
225a57e84149 dma-buf: Add dma_fence_array_for_each (v2)
-:75: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'fence' - possible side-effects?
#75: FILE: include/linux/dma-fence-array.h:86:
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))

-:75: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'index' - possible side-effects?
#75: FILE: include/linux/dma-fence-array.h:86:
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))

-:75: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'head' - possible side-effects?
#75: FILE: include/linux/dma-fence-array.h:86:
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))

-:90: ERROR:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line by nominal patch author '"Christian König" <ckoenig.leichtzumerken@gmail.com>'

total: 1 errors, 0 warnings, 3 checks, 57 lines checked
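
For context on the MACRO_ARG_REUSE checks above: each flagged argument
is expanded more than once, so an argument expression with side effects
would be evaluated repeatedly.  A contrived illustration (get_array(),
ctx and handle() are hypothetical; not from the series):

	/* 'head' is expanded into both dma_fence_array_first() and
	 * dma_fence_array_next(), so get_array() runs once up front and
	 * then again on every iteration: */
	dma_fence_array_for_each(fence, i, get_array(ctx))
		handle(fence);

Callers that pass plain variables, as the in-kernel users do, are
unaffected.
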
a26f8ef33498 dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
-:68: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int *' to bare use of 'unsigned *'
#68: FILE: drivers/dma-buf/dma-resv.c:434:
+				 unsigned *pshared_count,

-:821: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int *' to bare use of 'unsigned *'
#821: FILE: include/linux/dma-resv.h:283:
+				 unsigned *pshared_count,

total: 0 errors, 2 warnings, 0 checks, 601 lines checked
8f674ac01f2f dma-buf: Add dma_resv_get_singleton_unlocked (v5)
-:63: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#63: FILE: drivers/dma-buf/dma-resv.c:55:
+#define dma_fence_deep_dive_for_each(fence, chain, index, head)	\
+	dma_fence_chain_for_each(chain, head)			\
+		dma_fence_array_for_each(fence, index, chain)

-:63: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'chain' - possible side-effects?
#63: FILE: drivers/dma-buf/dma-resv.c:55:
+#define dma_fence_deep_dive_for_each(fence, chain, index, head)	\
+	dma_fence_chain_for_each(chain, head)			\
+		dma_fence_array_for_each(fence, index, chain)

-:117: ERROR:POINTER_LOCATION: "(foo*)" should be "(foo *)"
#117: FILE: drivers/dma-buf/dma-resv.c:570:
+	fences = kmalloc_array(num_fences, sizeof(struct dma_fence*),

total: 2 errors, 0 warnings, 1 checks, 118 lines checked
94bae126528f dma-buf: Document DMA_BUF_IOCTL_SYNC
3dff9bbbbc2a dma-buf: Add an API for exporting sync files (v11)
46fac74a3274 RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
f3982039cd8a RFC: dma-buf: Add an API for importing sync files (v7)



^ permalink raw reply	[flat|nested] 68+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
                   ` (8 preceding siblings ...)
  (?)
@ 2021-05-25 21:46 ` Patchwork
  -1 siblings, 0 replies; 68+ messages in thread
From: Patchwork @ 2021-05-25 21:46 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: intel-gfx

== Series Details ==

Series: dma-buf: Add an API for exporting sync files (v11)
URL   : https://patchwork.freedesktop.org/series/90555/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgv_sriovmsg.h:312:49: error: static assertion failed: "amd_sriov_msg_vf2pf_info must be 1 KB"
+./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgv_sriovmsg.h:316:49: error: static assertion failed: "amd_sriov_msg_pf2vf_info must be 1 KB"
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1345:25: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1345:25:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1345:25:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1346:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1346:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1346:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1405:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1405:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1405:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:293:16: error: incompatible types in comparison expression (different type sizes):
+drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:293:16:    unsigned long *
+drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:293:16:    unsigned long long *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:275:25: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:275:25:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:275:25:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:276:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:276:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:276:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:330:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:330:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:330:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.h:90:56: error: marked inline, but without a definition
+drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h:312:49: error: static assertion failed: "amd_sriov_msg_vf2pf_info must be 1 KB"
   [last line repeated 51 times in total; log truncated here]



* [Intel-gfx] ✓ Fi.CI.BAT: success for dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
                   ` (9 preceding siblings ...)
  (?)
@ 2021-05-25 22:14 ` Patchwork
  -1 siblings, 0 replies; 68+ messages in thread
From: Patchwork @ 2021-05-25 22:14 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: intel-gfx



== Series Details ==

Series: dma-buf: Add an API for exporting sync files (v11)
URL   : https://patchwork.freedesktop.org/series/90555/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10133 -> Patchwork_20193
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/index.html

Known issues
------------

  Here are the changes found in Patchwork_20193 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@core_hotunplug@unbind-rebind:
    - fi-bdw-5557u:       NOTRUN -> [WARN][1] ([i915#2283])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-bdw-5557u/igt@core_hotunplug@unbind-rebind.html

  * igt@i915_selftest@live@execlists:
    - fi-bdw-5557u:       NOTRUN -> [DMESG-FAIL][2] ([i915#3462])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-bdw-5557u/igt@i915_selftest@live@execlists.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [PASS][3] -> [FAIL][4] ([i915#1372])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  * igt@kms_psr@cursor_plane_move:
    - fi-bdw-5557u:       NOTRUN -> [SKIP][5] ([fdo#109271]) +5 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-bdw-5557u/igt@kms_psr@cursor_plane_move.html

  
#### Possible fixes ####

  * igt@kms_frontbuffer_tracking@basic:
    - fi-icl-u2:          [FAIL][6] ([i915#49]) -> [PASS][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/fi-icl-u2/igt@kms_frontbuffer_tracking@basic.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-icl-u2/igt@kms_frontbuffer_tracking@basic.html

  
#### Warnings ####

  * igt@runner@aborted:
    - fi-glk-dsi:         [FAIL][8] ([i915#2426] / [i915#3363] / [k.org#202321]) -> [FAIL][9] ([i915#3363] / [k.org#202321])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/fi-glk-dsi/igt@runner@aborted.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-glk-dsi/igt@runner@aborted.html
    - fi-bdw-5557u:       [FAIL][10] ([i915#1602] / [i915#2029]) -> [FAIL][11] ([i915#3462])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/fi-bdw-5557u/igt@runner@aborted.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-bdw-5557u/igt@runner@aborted.html
    - fi-cml-u2:          [FAIL][12] ([i915#2082] / [i915#2426] / [i915#3363] / [i915#3462]) -> [FAIL][13] ([i915#3363] / [i915#3462])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/fi-cml-u2/igt@runner@aborted.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/fi-cml-u2/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602
  [i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029
  [i915#2082]: https://gitlab.freedesktop.org/drm/intel/issues/2082
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2283]: https://gitlab.freedesktop.org/drm/intel/issues/2283
  [i915#2426]: https://gitlab.freedesktop.org/drm/intel/issues/2426
  [i915#2932]: https://gitlab.freedesktop.org/drm/intel/issues/2932
  [i915#2966]: https://gitlab.freedesktop.org/drm/intel/issues/2966
  [i915#3012]: https://gitlab.freedesktop.org/drm/intel/issues/3012
  [i915#3276]: https://gitlab.freedesktop.org/drm/intel/issues/3276
  [i915#3277]: https://gitlab.freedesktop.org/drm/intel/issues/3277
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3283]: https://gitlab.freedesktop.org/drm/intel/issues/3283
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
  [i915#3303]: https://gitlab.freedesktop.org/drm/intel/issues/3303
  [i915#3363]: https://gitlab.freedesktop.org/drm/intel/issues/3363
  [i915#3462]: https://gitlab.freedesktop.org/drm/intel/issues/3462
  [i915#49]: https://gitlab.freedesktop.org/drm/intel/issues/49
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [k.org#202321]: https://bugzilla.kernel.org/show_bug.cgi?id=202321


Participating hosts (45 -> 41)
------------------------------

  Additional (1): fi-rkl-11500t 
  Missing    (5): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-bdw-samus 


Build changes
-------------

  * IGT: IGT_6092 -> IGTPW_5846
  * Linux: CI_DRM_10133 -> Patchwork_20193

  CI-20190529: 20190529
  CI_DRM_10133: 79cace2bbe3bb9cbff1aa14428adea42072b56b0 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_5846: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5846/index.html
  IGT_6092: d87087c321da07035d4f96d98c34e451b3ccb809 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_20193: f3982039cd8af612a2209d1613208cc0fedf675d @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

f3982039cd8a RFC: dma-buf: Add an API for importing sync files (v7)
46fac74a3274 RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
3dff9bbbbc2a dma-buf: Add an API for exporting sync files (v11)
94bae126528f dma-buf: Document DMA_BUF_IOCTL_SYNC
8f674ac01f2f dma-buf: Add dma_resv_get_singleton_unlocked (v5)
a26f8ef33498 dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
225a57e84149 dma-buf: Add dma_fence_array_for_each (v2)

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/index.html


* [Intel-gfx] ✗ Fi.CI.IGT: failure for dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
                   ` (10 preceding siblings ...)
  (?)
@ 2021-05-26  4:20 ` Patchwork
  -1 siblings, 0 replies; 68+ messages in thread
From: Patchwork @ 2021-05-26  4:20 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: intel-gfx



== Series Details ==

Series: dma-buf: Add an API for exporting sync files (v11)
URL   : https://patchwork.freedesktop.org/series/90555/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10133_full -> Patchwork_20193_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes introduced by Patchwork_20193_full must be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20193_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_20193_full:

### IGT changes ###

#### Possible regressions ####

  * {igt@dmabuf_sync_file@import-basic} (NEW):
    - shard-iclb:         NOTRUN -> [INCOMPLETE][1] +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@dmabuf_sync_file@import-basic.html
    - shard-skl:          NOTRUN -> [INCOMPLETE][2] +1 similar issue
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl10/igt@dmabuf_sync_file@import-basic.html
    - shard-tglb:         NOTRUN -> [INCOMPLETE][3] +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@dmabuf_sync_file@import-basic.html
    - shard-kbl:          NOTRUN -> [INCOMPLETE][4] +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl7/igt@dmabuf_sync_file@import-basic.html

  * {igt@dmabuf_sync_file@import-existing-exclusive} (NEW):
    - shard-glk:          NOTRUN -> [INCOMPLETE][5] +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk5/igt@dmabuf_sync_file@import-existing-exclusive.html
    - shard-snb:          NOTRUN -> [INCOMPLETE][6]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb5/igt@dmabuf_sync_file@import-existing-exclusive.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-180:
    - shard-glk:          [PASS][7] -> [FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-glk2/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk6/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html

  
New tests
---------

  New tests have been introduced between CI_DRM_10133_full and Patchwork_20193_full:

### New IGT tests (9) ###

  * igt@dmabuf_sync_file@export-basic:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@dmabuf_sync_file@export-before-signal:
    - Statuses : 7 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@dmabuf_sync_file@export-multiwait:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@dmabuf_sync_file@export-wait-after-attach:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@dmabuf_sync_file@import-basic:
    - Statuses : 5 incomplete(s)
    - Exec time: [0.0] s

  * igt@dmabuf_sync_file@import-existing-exclusive:
    - Statuses : 6 incomplete(s)
    - Exec time: [0.0] s

  * igt@dmabuf_sync_file@import-existing-shared-1:
    - Statuses : 6 pass(s)
    - Exec time: [0.00, 0.02] s

  * igt@dmabuf_sync_file@import-existing-shared-32:
    - Statuses : 6 pass(s)
    - Exec time: [0.01, 0.13] s

  * igt@dmabuf_sync_file@import-existing-shared-5:
    - Statuses : 5 pass(s)
    - Exec time: [0.00, 0.01] s

  

Known issues
------------

  Here are the changes found in Patchwork_20193_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@core_hotunplug@unbind-rebind:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][9] ([i915#2283])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl3/igt@core_hotunplug@unbind-rebind.html

  * igt@gem_create@create-massive:
    - shard-snb:          NOTRUN -> [DMESG-WARN][10] ([i915#3002])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb2/igt@gem_create@create-massive.html
    - shard-apl:          NOTRUN -> [DMESG-WARN][11] ([i915#3002])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl7/igt@gem_create@create-massive.html

  * igt@gem_ctx_persistence@legacy-engines-mixed-process:
    - shard-snb:          NOTRUN -> [SKIP][12] ([fdo#109271] / [i915#1099]) +6 similar issues
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb6/igt@gem_ctx_persistence@legacy-engines-mixed-process.html

  * igt@gem_ctx_persistence@many-contexts:
    - shard-tglb:         [PASS][13] -> [FAIL][14] ([i915#2410])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-tglb7/igt@gem_ctx_persistence@many-contexts.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@gem_ctx_persistence@many-contexts.html

  * igt@gem_ctx_ringsize@idle@bcs0:
    - shard-skl:          NOTRUN -> [INCOMPLETE][15] ([i915#3316])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl1/igt@gem_ctx_ringsize@idle@bcs0.html

  * igt@gem_ctx_sseu@invalid-args:
    - shard-tglb:         NOTRUN -> [SKIP][16] ([i915#280])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@gem_ctx_sseu@invalid-args.html

  * igt@gem_eio@in-flight-suspend:
    - shard-kbl:          [PASS][17] -> [DMESG-WARN][18] ([i915#180]) +1 similar issue
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-kbl3/igt@gem_eio@in-flight-suspend.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl1/igt@gem_eio@in-flight-suspend.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [PASS][19] -> [TIMEOUT][20] ([i915#2369] / [i915#3063])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-tglb5/igt@gem_eio@unwedge-stress.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_endless@dispatch@vecs0:
    - shard-tglb:         [PASS][21] -> [INCOMPLETE][22] ([i915#2502])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-tglb7/igt@gem_exec_endless@dispatch@vecs0.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb7/igt@gem_exec_endless@dispatch@vecs0.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-kbl:          NOTRUN -> [FAIL][23] ([i915#2846])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl4/igt@gem_exec_fair@basic-deadline.html
    - shard-apl:          NOTRUN -> [FAIL][24] ([i915#2846])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl8/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-tglb:         [PASS][25] -> [FAIL][26] ([i915#2842]) +1 similar issue
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-tglb7/igt@gem_exec_fair@basic-none-share@rcs0.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb3/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none-vip@rcs0:
    - shard-tglb:         NOTRUN -> [FAIL][27] ([i915#2842]) +6 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@gem_exec_fair@basic-none-vip@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][28] ([i915#2842])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl3/igt@gem_exec_fair@basic-none@rcs0.html

  * igt@gem_exec_fair@basic-none@vecs0:
    - shard-iclb:         NOTRUN -> [FAIL][29] ([i915#2842]) +4 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb5/igt@gem_exec_fair@basic-none@vecs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-kbl:          [PASS][30] -> [FAIL][31] ([i915#2842])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-kbl1/igt@gem_exec_fair@basic-pace@vecs0.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl1/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-glk:          NOTRUN -> [FAIL][32] ([i915#2842])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk6/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_exec_params@no-bsd:
    - shard-iclb:         NOTRUN -> [SKIP][33] ([fdo#109283])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@gem_exec_params@no-bsd.html

  * igt@gem_exec_params@secure-non-master:
    - shard-tglb:         NOTRUN -> [SKIP][34] ([fdo#112283])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@gem_exec_params@secure-non-master.html
    - shard-iclb:         NOTRUN -> [SKIP][35] ([fdo#112283])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@gem_exec_params@secure-non-master.html

  * igt@gem_exec_reloc@basic-wide-active@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][36] ([i915#2389])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@gem_exec_reloc@basic-wide-active@vcs1.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][37] ([fdo#109271] / [i915#2190])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl8/igt@gem_huc_copy@huc-copy.html

  * igt@gem_media_vme:
    - shard-tglb:         NOTRUN -> [SKIP][38] ([i915#284])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@gem_media_vme.html

  * igt@gem_mmap_gtt@cpuset-basic-small-copy:
    - shard-tglb:         [PASS][39] -> [INCOMPLETE][40] ([i915#3468])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-tglb6/igt@gem_mmap_gtt@cpuset-basic-small-copy.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb3/igt@gem_mmap_gtt@cpuset-basic-small-copy.html

  * igt@gem_mmap_gtt@cpuset-basic-small-copy-xy:
    - shard-snb:          NOTRUN -> [INCOMPLETE][41] ([i915#2055] / [i915#3468])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb7/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html
    - shard-kbl:          [PASS][42] -> [INCOMPLETE][43] ([i915#3468])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-kbl1/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl4/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html
    - shard-iclb:         [PASS][44] -> [INCOMPLETE][45] ([i915#3468])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-iclb2/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb3/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html

  * igt@gem_mmap_gtt@fault-concurrent:
    - shard-iclb:         NOTRUN -> [INCOMPLETE][46] ([i915#3468])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb6/igt@gem_mmap_gtt@fault-concurrent.html

  * igt@gem_mmap_gtt@fault-concurrent-x:
    - shard-snb:          NOTRUN -> [INCOMPLETE][47] ([i915#3468] / [i915#3485])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb2/igt@gem_mmap_gtt@fault-concurrent-x.html

  * igt@gem_mmap_gtt@fault-concurrent-y:
    - shard-skl:          NOTRUN -> [INCOMPLETE][48] ([i915#3468])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl7/igt@gem_mmap_gtt@fault-concurrent-y.html

  * igt@gem_pread@exhaustion:
    - shard-apl:          NOTRUN -> [WARN][49] ([i915#2658]) +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl1/igt@gem_pread@exhaustion.html
    - shard-iclb:         NOTRUN -> [WARN][50] ([i915#2658]) +1 similar issue
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb5/igt@gem_pread@exhaustion.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-skl:          NOTRUN -> [WARN][51] ([i915#2658])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl10/igt@gem_pwrite@basic-exhaustion.html
    - shard-snb:          NOTRUN -> [WARN][52] ([i915#2658])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb2/igt@gem_pwrite@basic-exhaustion.html
    - shard-kbl:          NOTRUN -> [WARN][53] ([i915#2658]) +1 similar issue
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl1/igt@gem_pwrite@basic-exhaustion.html
    - shard-tglb:         NOTRUN -> [WARN][54] ([i915#2658])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@gem_pwrite@basic-exhaustion.html
    - shard-glk:          NOTRUN -> [WARN][55] ([i915#2658])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk2/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-y-tiled:
    - shard-iclb:         NOTRUN -> [SKIP][56] ([i915#768]) +4 similar issues
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-y-tiled.html

  * igt@gem_softpin@evict-snoop-interruptible:
    - shard-tglb:         NOTRUN -> [SKIP][57] ([fdo#109312])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb1/igt@gem_softpin@evict-snoop-interruptible.html
    - shard-iclb:         NOTRUN -> [SKIP][58] ([fdo#109312])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@gem_softpin@evict-snoop-interruptible.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-kbl:          NOTRUN -> [SKIP][59] ([fdo#109271] / [i915#3323])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl2/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@unsync-unmap-cycles:
    - shard-tglb:         NOTRUN -> [SKIP][60] ([i915#3297])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@gem_userptr_blits@unsync-unmap-cycles.html
    - shard-iclb:         NOTRUN -> [SKIP][61] ([i915#3297])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@gem_userptr_blits@unsync-unmap-cycles.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-apl:          NOTRUN -> [FAIL][62] ([i915#3318])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl8/igt@gem_userptr_blits@vma-merge.html

  * igt@gen9_exec_parse@allowed-all:
    - shard-iclb:         NOTRUN -> [SKIP][63] ([fdo#112306]) +4 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@gen9_exec_parse@allowed-all.html

  * igt@gen9_exec_parse@bb-large:
    - shard-kbl:          NOTRUN -> [FAIL][64] ([i915#3296])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl3/igt@gen9_exec_parse@bb-large.html

  * igt@gen9_exec_parse@bb-secure:
    - shard-tglb:         NOTRUN -> [SKIP][65] ([fdo#112306]) +5 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@gen9_exec_parse@bb-secure.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-iclb:         [PASS][66] -> [FAIL][67] ([i915#454])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-iclb6/igt@i915_pm_dc@dc6-dpms.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb6/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-tglb:         NOTRUN -> [SKIP][68] ([i915#3288])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb1/igt@i915_pm_dc@dc9-dpms.html
    - shard-iclb:         NOTRUN -> [FAIL][69] ([i915#3343])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp:
    - shard-apl:          NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#1937])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl2/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp.html

  * igt@i915_pm_lpsp@screens-disabled:
    - shard-tglb:         NOTRUN -> [SKIP][71] ([i915#1902])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@i915_pm_lpsp@screens-disabled.html

  * igt@i915_pm_rc6_residency@media-rc6-accuracy:
    - shard-tglb:         NOTRUN -> [SKIP][72] ([fdo#109289] / [fdo#111719])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@i915_pm_rc6_residency@media-rc6-accuracy.html

  * igt@i915_pm_rpm@gem-execbuf-stress-pc8:
    - shard-iclb:         NOTRUN -> [SKIP][73] ([fdo#109293] / [fdo#109506])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb6/igt@i915_pm_rpm@gem-execbuf-stress-pc8.html
    - shard-tglb:         NOTRUN -> [SKIP][74] ([fdo#109506] / [i915#2411]) +1 similar issue
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@i915_pm_rpm@gem-execbuf-stress-pc8.html

  * igt@i915_pm_rpm@modeset-non-lpsp-stress:
    - shard-iclb:         NOTRUN -> [SKIP][75] ([fdo#110892])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb5/igt@i915_pm_rpm@modeset-non-lpsp-stress.html

  * igt@i915_query@query-topology-known-pci-ids:
    - shard-tglb:         NOTRUN -> [SKIP][76] ([fdo#109303])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@i915_query@query-topology-known-pci-ids.html
    - shard-iclb:         NOTRUN -> [SKIP][77] ([fdo#109303])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@i915_query@query-topology-known-pci-ids.html

  * igt@i915_selftest@live@execlists:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][78] ([i915#3462])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@i915_selftest@live@execlists.html
    - shard-skl:          NOTRUN -> [INCOMPLETE][79] ([i915#2782] / [i915#3462])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl10/igt@i915_selftest@live@execlists.html
    - shard-glk:          NOTRUN -> [DMESG-FAIL][80] ([i915#3462])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk1/igt@i915_selftest@live@execlists.html
    - shard-apl:          NOTRUN -> [DMESG-FAIL][81] ([i915#3462])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl7/igt@i915_selftest@live@execlists.html

  * igt@i915_selftest@live@gt_lrc:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][82] ([i915#2373])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@i915_selftest@live@gt_lrc.html

  * igt@i915_selftest@live@gt_pm:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][83] ([i915#1759] / [i915#2291])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@i915_selftest@live@gt_pm.html
    - shard-skl:          NOTRUN -> [DMESG-FAIL][84] ([i915#1886] / [i915#2291])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl10/igt@i915_selftest@live@gt_pm.html

  * igt@kms_addfb_basic@no-handle:
    - shard-glk:          [PASS][85] -> [DMESG-WARN][86] ([i915#118] / [i915#95])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-glk3/igt@kms_addfb_basic@no-handle.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk6/igt@kms_addfb_basic@no-handle.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
    - shard-iclb:         NOTRUN -> [SKIP][87] ([i915#1769]) +1 similar issue
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb5/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
    - shard-tglb:         NOTRUN -> [SKIP][88] ([i915#1769])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html

  * igt@kms_big_fb@x-tiled-16bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][89] ([fdo#111614]) +3 similar issues
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@kms_big_fb@x-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-32bpp-rotate-270:
    - shard-iclb:         NOTRUN -> [SKIP][90] ([fdo#110725] / [fdo#111614]) +4 similar issues
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@kms_big_fb@x-tiled-32bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-270:
    - shard-iclb:         NOTRUN -> [SKIP][91] ([fdo#110723])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb5/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html

  * igt@kms_big_joiner@invalid-modeset:
    - shard-skl:          NOTRUN -> [SKIP][92] ([fdo#109271] / [i915#2705])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl7/igt@kms_big_joiner@invalid-modeset.html
    - shard-iclb:         NOTRUN -> [SKIP][93] ([i915#2705])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@kms_big_joiner@invalid-modeset.html
    - shard-kbl:          NOTRUN -> [SKIP][94] ([fdo#109271] / [i915#2705])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl4/igt@kms_big_joiner@invalid-modeset.html
    - shard-apl:          NOTRUN -> [SKIP][95] ([fdo#109271] / [i915#2705])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl3/igt@kms_big_joiner@invalid-modeset.html
    - shard-glk:          NOTRUN -> [SKIP][96] ([fdo#109271] / [i915#2705])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk8/igt@kms_big_joiner@invalid-modeset.html
    - shard-tglb:         NOTRUN -> [SKIP][97] ([i915#2705])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@kms_big_joiner@invalid-modeset.html

  * igt@kms_ccs@pipe-c-missing-ccs-buffer:
    - shard-skl:          NOTRUN -> [SKIP][98] ([fdo#109271] / [fdo#111304])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl10/igt@kms_ccs@pipe-c-missing-ccs-buffer.html

  * igt@kms_chamelium@dp-hpd-storm:
    - shard-iclb:         NOTRUN -> [SKIP][99] ([fdo#109284] / [fdo#111827]) +12 similar issues
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@kms_chamelium@dp-hpd-storm.html

  * igt@kms_chamelium@hdmi-edid-change-during-suspend:
    - shard-apl:          NOTRUN -> [SKIP][100] ([fdo#109271] / [fdo#111827]) +24 similar issues
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl8/igt@kms_chamelium@hdmi-edid-change-during-suspend.html

  * igt@kms_chamelium@hdmi-hpd:
    - shard-glk:          NOTRUN -> [SKIP][101] ([fdo#109271] / [fdo#111827]) +12 similar issues
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk2/igt@kms_chamelium@hdmi-hpd.html

  * igt@kms_chamelium@vga-hpd-after-suspend:
    - shard-skl:          NOTRUN -> [SKIP][102] ([fdo#109271] / [fdo#111827]) +12 similar issues
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl8/igt@kms_chamelium@vga-hpd-after-suspend.html

  * igt@kms_color@pipe-b-ctm-0-5:
    - shard-skl:          [PASS][103] -> [DMESG-WARN][104] ([i915#1982])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-skl1/igt@kms_color@pipe-b-ctm-0-5.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl1/igt@kms_color@pipe-b-ctm-0-5.html

  * igt@kms_color@pipe-c-degamma:
    - shard-iclb:         NOTRUN -> [FAIL][105] ([i915#1149])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@kms_color@pipe-c-degamma.html
    - shard-tglb:         NOTRUN -> [FAIL][106] ([i915#1149])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@kms_color@pipe-c-degamma.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-25:
    - shard-snb:          NOTRUN -> [SKIP][107] ([fdo#109271] / [fdo#111827]) +24 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-snb7/igt@kms_color_chamelium@pipe-a-ctm-0-25.html

  * igt@kms_color_chamelium@pipe-a-degamma:
    - shard-kbl:          NOTRUN -> [SKIP][108] ([fdo#109271] / [fdo#111827]) +20 similar issues
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl2/igt@kms_color_chamelium@pipe-a-degamma.html

  * igt@kms_color_chamelium@pipe-b-ctm-0-75:
    - shard-tglb:         NOTRUN -> [SKIP][109] ([fdo#109284] / [fdo#111827]) +10 similar issues
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb6/igt@kms_color_chamelium@pipe-b-ctm-0-75.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-apl:          NOTRUN -> [TIMEOUT][110] ([i915#1319]) +1 similar issue
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl8/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@content_type_change:
    - shard-iclb:         NOTRUN -> [SKIP][111] ([fdo#109300] / [fdo#111066]) +2 similar issues
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@kms_content_protection@content_type_change.html

  * igt@kms_content_protection@dp-mst-lic-type-0:
    - shard-iclb:         NOTRUN -> [SKIP][112] ([i915#3116])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@kms_content_protection@dp-mst-lic-type-0.html
    - shard-tglb:         NOTRUN -> [SKIP][113] ([i915#3116])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb3/igt@kms_content_protection@dp-mst-lic-type-0.html

  * igt@kms_content_protection@uevent:
    - shard-kbl:          NOTRUN -> [FAIL][114] ([i915#2105])
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-kbl1/igt@kms_content_protection@uevent.html
    - shard-tglb:         NOTRUN -> [SKIP][115] ([fdo#111828])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@kms_content_protection@uevent.html
    - shard-apl:          NOTRUN -> [FAIL][116] ([i915#2105])
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-apl2/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x170-sliding:
    - shard-tglb:         NOTRUN -> [SKIP][117] ([fdo#109279] / [i915#3359]) +3 similar issues
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@kms_cursor_crc@pipe-a-cursor-512x170-sliding.html

  * igt@kms_cursor_crc@pipe-a-cursor-64x21-sliding:
    - shard-skl:          [PASS][118] -> [FAIL][119] ([i915#3444])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-skl2/igt@kms_cursor_crc@pipe-a-cursor-64x21-sliding.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl1/igt@kms_cursor_crc@pipe-a-cursor-64x21-sliding.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-skl:          NOTRUN -> [INCOMPLETE][120] ([i915#2828] / [i915#300])
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl3/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x512-rapid-movement:
    - shard-iclb:         NOTRUN -> [SKIP][121] ([fdo#109278] / [fdo#109279]) +5 similar issues
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb3/igt@kms_cursor_crc@pipe-b-cursor-512x512-rapid-movement.html

  * igt@kms_cursor_crc@pipe-c-cursor-max-size-offscreen:
    - shard-tglb:         NOTRUN -> [SKIP][122] ([i915#3359]) +7 similar issues
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb3/igt@kms_cursor_crc@pipe-c-cursor-max-size-offscreen.html

  * igt@kms_cursor_crc@pipe-d-cursor-32x32-rapid-movement:
    - shard-iclb:         NOTRUN -> [SKIP][123] ([fdo#109278]) +36 similar issues
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb6/igt@kms_cursor_crc@pipe-d-cursor-32x32-rapid-movement.html
    - shard-tglb:         NOTRUN -> [SKIP][124] ([i915#3319]) +2 similar issues
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb2/igt@kms_cursor_crc@pipe-d-cursor-32x32-rapid-movement.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
    - shard-iclb:         NOTRUN -> [SKIP][125] ([fdo#109274] / [fdo#109278]) +4 similar issues
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb2/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html

  * igt@kms_dp_tiled_display@basic-test-pattern:
    - shard-iclb:         NOTRUN -> [SKIP][126] ([i915#426])
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@kms_dp_tiled_display@basic-test-pattern.html
    - shard-tglb:         NOTRUN -> [SKIP][127] ([i915#426])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-tglb8/igt@kms_dp_tiled_display@basic-test-pattern.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2:
    - shard-glk:          [PASS][128] -> [FAIL][129] ([i915#79])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10133/shard-glk3/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk7/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@2x-modeset-vs-vblank-race-interruptible:
    - shard-iclb:         NOTRUN -> [SKIP][130] ([fdo#109274]) +7 similar issues
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-iclb1/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile:
    - shard-glk:          NOTRUN -> [SKIP][131] ([fdo#109271] / [i915#2642])
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-glk9/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile.html
    - shard-skl:          NOTRUN -> [SKIP][132] ([fdo#109271] / [i915#2642])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/shard-skl7/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile.html
    - shard-kbl:          NOTRUN -> [SKIP][133] ([fdo

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20193/index.html


* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-26 10:57     ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-26 10:57 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel, intel-gfx
  Cc: Daniel Vetter, Huang Rui, VMware Graphics, Gerd Hoffmann,
	Thomas Zimmermann, Sean Paul

On 25.05.21 at 23:17, Jason Ekstrand wrote:
> None of these helpers actually leak any RCU details to the caller.  They
> all assume you have a genuine reference, take the RCU read lock, and
> retry if needed.  Naming them with an _rcu is likely to cause callers
> more panic than needed.
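
For reference, the pattern those helpers implement looks roughly like
this (a simplified sketch of the lookup/retry loop in
drivers/dma-buf/dma-resv.c; error handling and the shared-fence array
are omitted, and the function name is made up for illustration):

static struct dma_fence *get_excl_fence_sketch(struct dma_resv *obj)
{
        struct dma_fence *fence;
        unsigned int seq;

        rcu_read_lock();
retry:
        seq = read_seqcount_begin(&obj->seq);
        fence = rcu_dereference(obj->fence_excl);
        if (fence && !dma_fence_get_rcu(fence))
                goto retry;     /* fence was freed and recycled under us */
        if (read_seqcount_retry(&obj->seq, seq)) {
                dma_fence_put(fence);   /* NULL-safe */
                goto retry;     /* the fence set changed while we looked */
        }
        rcu_read_unlock();
        return fence;
}

Once dma_fence_get_rcu() succeeds and the seqcount is stable, the
caller holds a genuine reference, so no RCU details escape to it.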

I'm really wondering if we need this postfix in the first place.

If we use the right rcu_dereference_check() macro, those functions
can be called with the reservation object either locked or unlocked. It
shouldn't matter to them.
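
Something like this hypothetical helper (illustration only, not actual
kernel code) would then be sparse- and lockdep-clean in both contexts:

/* Valid with obj->lock held _or_ under rcu_read_lock(). */
static inline struct dma_fence *
dma_resv_excl_fence_any(struct dma_resv *obj)
{
        return rcu_dereference_check(obj->fence_excl,
                                     dma_resv_held(obj));
}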

But getting rid of the _rcu postfix sounds like a good idea in general 
to me.

Christian.

>
> v2 (Jason Ekstrand):
>   - Fix function argument indentation
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Lucas Stach <l.stach@pengutronix.de>
> Cc: Rob Clark <robdclark@gmail.com>
> Cc: Sean Paul <sean@poorly.run>
> Cc: Huang Rui <ray.huang@amd.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> ---
>   drivers/dma-buf/dma-buf.c                     |  4 +--
>   drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>   drivers/gpu/drm/drm_gem.c                     | 10 +++----
>   drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>   drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>   drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>   drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>   drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>   drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>   drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>   drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>   drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>   drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>   drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>   drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>   drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>   drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>   drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>   drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>   drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>   drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>   drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>   include/linux/dma-resv.h                      | 18 ++++++------
>   36 files changed, 108 insertions(+), 110 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index f264b70c383eb..ed6451d55d663 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   	long ret;
>   
>   	/* Wait on any implicit rendering fences */
> -	ret = dma_resv_wait_timeout_rcu(resv, write, true,
> -						  MAX_SCHEDULE_TIMEOUT);
> +	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> +					     MAX_SCHEDULE_TIMEOUT);
>   	if (ret < 0)
>   		return ret;
>   
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>   EXPORT_SYMBOL(dma_resv_copy_fences);
>   
>   /**
> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>    * fences without update side lock held
>    * @obj: the reservation object
>    * @pfence_excl: the returned exclusive fence (or NULL)
> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>    * exclusive fence is not specified the fence is put into the array of the
>    * shared fences as well. Returns either zero or -ENOMEM.
>    */
> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> -			    struct dma_fence **pfence_excl,
> -			    unsigned *pshared_count,
> -			    struct dma_fence ***pshared)
> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> +				 struct dma_fence **pfence_excl,
> +				 unsigned *pshared_count,
> +				 struct dma_fence ***pshared)
>   {
>   	struct dma_fence **shared = NULL;
>   	struct dma_fence *fence_excl;
> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>   	*pshared = shared;
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>   
>   /**
> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>    * shared and/or exclusive fences.
>    * @obj: the reservation object
>    * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>    * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>    * greater than zero on success.
>    */
> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> -			       bool wait_all, bool intr,
> -			       unsigned long timeout)
> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> +				    bool wait_all, bool intr,
> +				    unsigned long timeout)
>   {
>   	struct dma_fence *fence;
>   	unsigned seq, shared_count;
> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>   	rcu_read_unlock();
>   	goto retry;
>   }
> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>   
>   
>   static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>   }
>   
>   /**
> - * dma_resv_test_signaled_rcu - Test if a reservation object's
> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>    * fences have been signaled.
>    * @obj: the reservation object
>    * @test_all: if true, test all fences, otherwise only test the exclusive
> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>    * RETURNS
>    * true if all fences signaled, else false
>    */
> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>   {
>   	unsigned seq, shared_count;
>   	int ret;
> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>   	rcu_read_unlock();
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 8a1fb8b6606e5..b8e24f199be9a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>   		goto unpin;
>   	}
>   
> -	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> -					      &work->shared_count,
> -					      &work->shared);
> +	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> +					 &work->shared_count,
> +					 &work->shared);
>   	if (unlikely(r != 0)) {
>   		DRM_ERROR("failed to get fences for buffer\n");
>   		goto unpin;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index baa980a477d94..0d0319bc51577 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>   	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>   		return 0;
>   
> -	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> +	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>   	if (r)
>   		return r;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 18974bd081f00..8e2996d6ba3ad 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>   		return -ENOENT;
>   	}
>   	robj = gem_to_amdgpu_bo(gobj);
> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> -						  timeout);
> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> +					     timeout);
>   
>   	/* ret == 0 means not signaled,
>   	 * ret > 0 means signaled
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index b4971e90b98cf..38e1b32dd2cef 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>   	unsigned count;
>   	int r;
>   
> -	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> +	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>   	if (r)
>   		goto fallback;
>   
> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>   	/* Not enough memory for the delayed delete, as last resort
>   	 * block for all the fences to complete.
>   	 */
> -	dma_resv_wait_timeout_rcu(resv, true, false,
> -					    MAX_SCHEDULE_TIMEOUT);
> +	dma_resv_wait_timeout_unlocked(resv, true, false,
> +				       MAX_SCHEDULE_TIMEOUT);
>   	amdgpu_pasid_free(pasid);
>   }
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> index 828b5167ff128..0319c8b547c48 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>   
>   	mmu_interval_set_seq(mni, cur_seq);
>   
> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> -				      MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	mutex_unlock(&adev->notifier_lock);
>   	if (r <= 0)
>   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index 0adffcace3263..de1c7c5501683 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>   		return 0;
>   	}
>   
> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> -						MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	if (r < 0)
>   		return r;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> index c6dbc08016045..4a2196404fb69 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>   	ib->length_dw = 16;
>   
>   	if (direct) {
> -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> -							true, false,
> -							msecs_to_jiffies(10));
> +		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> +						   true, false,
> +						   msecs_to_jiffies(10));
>   		if (r == 0)
>   			r = -ETIMEDOUT;
>   		if (r < 0)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 4a3e3f72e1277..7ba1c537d6584 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   	unsigned i, shared_count;
>   	int r;
>   
> -	r = dma_resv_get_fences_rcu(resv, &excl,
> -					      &shared_count, &shared);
> +	r = dma_resv_get_fences_unlocked(resv, &excl,
> +					 &shared_count, &shared);
>   	if (r) {
>   		/* Not enough memory to grab the fence list, as last resort
>   		 * block for all the fences to complete.
>   		 */
> -		dma_resv_wait_timeout_rcu(resv, true, false,
> -						    MAX_SCHEDULE_TIMEOUT);
> +		dma_resv_wait_timeout_unlocked(resv, true, false,
> +					       MAX_SCHEDULE_TIMEOUT);
>   		return;
>   	}
>   
> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>   		return true;
>   
>   	/* Don't evict VM page tables while they are busy */
> -	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> +	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>   		return false;
>   
>   	/* Try to block ongoing updates */
> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>    */
>   long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>   {
> -	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> -					    true, true, timeout);
> +	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> +						 true, true, timeout);
>   	if (timeout <= 0)
>   		return timeout;
>   
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 9ca517b658546..0121d2817fa26 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>   		 * deadlock during GPU reset when this fence will not signal
>   		 * but we hold reservation lock for the BO.
>   		 */
> -		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> -							false,
> -							msecs_to_jiffies(5000));
> +		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> +						   false,
> +						   msecs_to_jiffies(5000));
>   		if (unlikely(r <= 0))
>   			DRM_ERROR("Waiting for fences timed out!");
>   
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 9989425e9875a..1241a421b9e81 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>   		return -EINVAL;
>   	}
>   
> -	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> -						  true, timeout);
> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> +					     true, timeout);
>   	if (ret == 0)
>   		ret = -ETIME;
>   	else if (ret > 0)
> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>   
>   	if (!write) {
>   		struct dma_fence *fence =
> -			dma_resv_get_excl_rcu(obj->resv);
> +			dma_resv_get_excl_unlocked(obj->resv);
>   
>   		return drm_gem_fence_array_add(fence_array, fence);
>   	}
>   
> -	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> -						&fence_count, &fences);
> +	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> +					   &fence_count, &fences);
>   	if (ret || !fence_count)
>   		return ret;
>   
> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> index a005c5a0ba46a..a27135084ae5c 100644
> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>   		return 0;
>   
>   	obj = drm_gem_fb_get_obj(state->fb, 0);
> -	fence = dma_resv_get_excl_rcu(obj->resv);
> +	fence = dma_resv_get_excl_unlocked(obj->resv);
>   	drm_atomic_set_fence_for_plane(state, fence);
>   
>   	return 0;
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index db69f19ab5bca..4e6f5346e84e4 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>   	}
>   
>   	if (op & ETNA_PREP_NOSYNC) {
> -		if (!dma_resv_test_signaled_rcu(obj->resv,
> -							  write))
> +		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>   			return -EBUSY;
>   	} else {
>   		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>   
> -		ret = dma_resv_wait_timeout_rcu(obj->resv,
> -							  write, true, remain);
> +		ret = dma_resv_wait_timeout_unlocked(obj->resv,
> +						     write, true, remain);
>   		if (ret <= 0)
>   			return ret == 0 ? -ETIMEDOUT : ret;
>   	}
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> index d05c359945799..6617fada4595d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>   			continue;
>   
>   		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> -			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> -								&bo->nr_shared,
> -								&bo->shared);
> +			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> +							   &bo->nr_shared,
> +							   &bo->shared);
>   			if (ret)
>   				return ret;
>   		} else {
> -			bo->excl = dma_resv_get_excl_rcu(robj);
> +			bo->excl = dma_resv_get_excl_unlocked(robj);
>   		}
>   
>   	}
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 422b59ebf6dce..5f0b85a102159 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>   		if (ret < 0)
>   			goto unpin_fb;
>   
> -		fence = dma_resv_get_excl_rcu(obj->base.resv);
> +		fence = dma_resv_get_excl_unlocked(obj->base.resv);
>   		if (fence) {
>   			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>   						   fence);
> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> index 9e508e7d4629f..bdfc6bf16a4e9 100644
> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> @@ -10,7 +10,7 @@
>   void dma_resv_prune(struct dma_resv *resv)
>   {
>   	if (dma_resv_trylock(resv)) {
> -		if (dma_resv_test_signaled_rcu(resv, true))
> +		if (dma_resv_test_signaled_unlocked(resv, true))
>   			dma_resv_add_excl_fence(resv, NULL);
>   		dma_resv_unlock(resv);
>   	}
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> index 25235ef630c10..754ad6d1bace9 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>   	 * Alternatively, we can trade that extra information on read/write
>   	 * activity with
>   	 *	args->busy =
> -	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
> +	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
>   	 * to report the overall busyness. This is what the wait-ioctl does.
>   	 *
>   	 */
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 297143511f99b..e8f323564e57b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>   	if (DBG_FORCE_RELOC)
>   		return false;
>   
> -	return !dma_resv_test_signaled_rcu(vma->resv, true);
> +	return !dma_resv_test_signaled_unlocked(vma->resv, true);
>   }
>   
>   static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index 2ebd79537aea9..7c0eb425cb3b3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>   	struct dma_fence *fence;
>   
>   	rcu_read_lock();
> -	fence = dma_resv_get_excl_rcu(obj->base.resv);
> +	fence = dma_resv_get_excl_unlocked(obj->base.resv);
>   	rcu_read_unlock();
>   
>   	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index a657b99ec7606..44df18dc9669f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>   		return true;
>   
>   	/* we will unbind on next submission, still have userptr pins */
> -	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> -				      MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	if (r <= 0)
>   		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>   
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> index 4b9856d5ba14f..5b6c52659ad4d 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>   		unsigned int count, i;
>   		int ret;
>   
> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>   		 */
>   		prune_fences = count && timeout >= 0;
>   	} else {
> -		excl = dma_resv_get_excl_rcu(resv);
> +		excl = dma_resv_get_excl_unlocked(resv);
>   	}
>   
>   	if (excl && timeout >= 0)
> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>   		unsigned int count, i;
>   		int ret;
>   
> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> -					      &excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> +						   &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>   
>   		kfree(shared);
>   	} else {
> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>   	}
>   
>   	if (excl) {
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 970d8f4986bbe..f1ed03ced7dd1 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>   		struct dma_fence **shared;
>   		unsigned int count, i;
>   
> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> -							&excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> +						   &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>   			dma_fence_put(shared[i]);
>   		kfree(shared);
>   	} else {
> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>   	}
>   
>   	if (excl) {
> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> index 2744558f30507..0bcb7ea44201e 100644
> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>   		struct dma_fence **shared;
>   		unsigned int count, i;
>   
> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>   			dma_fence_put(shared[i]);
>   		kfree(shared);
>   	} else {
> -		excl = dma_resv_get_excl_rcu(resv);
> +		excl = dma_resv_get_excl_unlocked(resv);
>   	}
>   
>   	if (ret >= 0 && excl && excl->ops != exclude) {
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 56df86e5f7400..1aca60507bb14 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>   		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>   	long ret;
>   
> -	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> -						  true,  remain);
> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>   	if (ret == 0)
>   		return remain == 0 ? -EBUSY : -ETIMEDOUT;
>   	else if (ret < 0)
> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> index 0cb1f9d848d3e..8d048bacd6f02 100644
> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>   			asyw->image.handle[0] = ctxdma->object.handle;
>   	}
>   
> -	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> +	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>   	asyw->image.offset[0] = nvbo->offset;
>   
>   	if (wndw->func->prepare) {
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index a70e82413fa75..bc6b09ee9b552 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>   		return -ENOENT;
>   	nvbo = nouveau_gem_object(gem);
>   
> -	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> -						   no_wait ? 0 : 30 * HZ);
> +	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> +					      no_wait ? 0 : 30 * HZ);
>   	if (!lret)
>   		ret = -EBUSY;
>   	else if (lret > 0)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index ca07098a61419..eef5b632ee0ce 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>   	if (!gem_obj)
>   		return -ENOENT;
>   
> -	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> -						  true, timeout);
> +	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> +					     true, timeout);
>   	if (!ret)
>   		ret = timeout ? -ETIMEDOUT : -EBUSY;
>   
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 6003cfeb13221..2df3e999a38d0 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>   	int i;
>   
>   	for (i = 0; i < bo_count; i++)
> -		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> +		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>   }
>   
>   static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 05ea2f39f6261..1a38b0bf36d11 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>   	}
>   	if (domain == RADEON_GEM_DOMAIN_CPU) {
>   		/* Asking for cpu access wait for object idle */
> -		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> +		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>   		if (!r)
>   			r = -EBUSY;
>   
> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>   	}
>   	robj = gem_to_radeon_bo(gobj);
>   
> -	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> +	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>   	if (r == 0)
>   		r = -EBUSY;
>   	else
> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>   	}
>   	robj = gem_to_radeon_bo(gobj);
>   
> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>   	if (ret == 0)
>   		r = -EBUSY;
>   	else if (ret < 0)
> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> index e37c9a57a7c36..a19be3f8a218c 100644
> --- a/drivers/gpu/drm/radeon/radeon_mn.c
> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>   		return true;
>   	}
>   
> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> -				      MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	if (r <= 0)
>   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>   
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index ca1b098b6a561..215cad3149621 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>   	struct dma_resv *resv = &bo->base._resv;
>   	int ret;
>   
> -	if (dma_resv_test_signaled_rcu(resv, true))
> +	if (dma_resv_test_signaled_unlocked(resv, true))
>   		ret = 0;
>   	else
>   		ret = -EBUSY;
> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>   			dma_resv_unlock(bo->base.resv);
>   		spin_unlock(&bo->bdev->lru_lock);
>   
> -		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> -						 30 * HZ);
> +		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> +						      30 * HZ);
>   
>   		if (lret < 0)
>   			return lret;
> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>   			/* Last resort, if we fail to allocate memory for the
>   			 * fences block for the BO to become idle
>   			 */
> -			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> -						  30 * HZ);
> +			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> +						       30 * HZ);
>   		}
>   
>   		if (bo->bdev->funcs->release_notify)
> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>   		ttm_mem_io_free(bdev, &bo->mem);
>   	}
>   
> -	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> +	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>   	    !dma_resv_trylock(bo->base.resv)) {
>   		/* The BO is not idle, resurrect it for delayed destroy */
>   		ttm_bo_flush_all_fences(bo);
> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>   	long timeout = 15 * HZ;
>   
>   	if (no_wait) {
> -		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> +		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>   			return 0;
>   		else
>   			return -EBUSY;
>   	}
>   
> -	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> -						      interruptible, timeout);
> +	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> +						 interruptible, timeout);
>   	if (timeout < 0)
>   		return timeout;
>   
> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> index 2902dc6e64faf..010a82405e374 100644
> --- a/drivers/gpu/drm/vgem/vgem_fence.c
> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>   
>   	/* Check for a conflicting fence */
>   	resv = obj->resv;
> -	if (!dma_resv_test_signaled_rcu(resv,
> -						  arg->flags & VGEM_FENCE_WRITE)) {
> +	if (!dma_resv_test_signaled_unlocked(resv,
> +					     arg->flags & VGEM_FENCE_WRITE)) {
>   		ret = -EBUSY;
>   		goto err_fence;
>   	}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 669f2ee395154..ab010c8e32816 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>   		return -ENOENT;
>   
>   	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> -		ret = dma_resv_test_signaled_rcu(obj->resv, true);
> +		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>   	} else {
> -		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> -						timeout);
> +		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> +						     timeout);
>   	}
>   	if (ret == 0)
>   		ret = -EBUSY;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> index 04dd49c4c2572..19e1ce23842a9 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>   	if (flags & drm_vmw_synccpu_allow_cs) {
>   		long lret;
>   
> -		lret = dma_resv_wait_timeout_rcu
> +		lret = dma_resv_wait_timeout_unlocked
>   			(bo->base.resv, true, true,
>   			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>   		if (!lret)
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index d44a77e8a7e34..99cfb7af966b8 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>   }
>   
>   /**
> - * dma_resv_get_excl_rcu - get the reservation object's
> + * dma_resv_get_excl_unlocked - get the reservation object's
>    * exclusive fence, without lock held.
>    * @obj: the reservation object
>    *
> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>    * The exclusive fence or NULL if none
>    */
>   static inline struct dma_fence *
> -dma_resv_get_excl_rcu(struct dma_resv *obj)
> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>   {
>   	struct dma_fence *fence;
>   
> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>   
>   void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>   
> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> -			    struct dma_fence **pfence_excl,
> -			    unsigned *pshared_count,
> -			    struct dma_fence ***pshared);
> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> +				 struct dma_fence **pfence_excl,
> +				 unsigned *pshared_count,
> +				 struct dma_fence ***pshared);
>   
>   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>   
> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> -			       unsigned long timeout);
> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> +				    unsigned long timeout);
>   
> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>   
>   #endif /* _LINUX_RESERVATION_H */
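
To make the new calling convention concrete, here is a minimal sketch of a
caller (illustrative only: "bo" is a hypothetical buffer object with a
struct dma_resv *resv member; the return-value handling follows the
kernel-doc above):

	struct dma_fence *excl, **shared;
	unsigned int count, i;
	long ret;

	/* Snapshot the fences without taking the reservation lock. */
	ret = dma_resv_get_fences_unlocked(bo->resv, &excl, &count, &shared);
	if (ret)
		return ret;		/* zero or -ENOMEM */
	for (i = 0; i < count; i++)
		dma_fence_put(shared[i]);
	kfree(shared);
	dma_fence_put(excl);		/* excl may be NULL; put is a no-op then */

	/* Or block until idle: wait on all fences, interruptibly, up to 10s. */
	ret = dma_resv_wait_timeout_unlocked(bo->resv, true, true, 10 * HZ);
	if (ret == 0)
		return -ETIMEDOUT;	/* the wait timed out */
	else if (ret < 0)
		return ret;		/* e.g. -ERESTARTSYS if interrupted */
	/* ret > 0: all fences signaled within the timeout */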



* Re: [Intel-gfx] [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
@ 2021-05-26 10:57     ` Christian König
  0 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-26 10:57 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel, intel-gfx
  Cc: Daniel Vetter, Maxime Ripard, Huang Rui, VMware Graphics,
	Gerd Hoffmann, Thomas Zimmermann, Lucas Stach

On 25.05.21 at 23:17, Jason Ekstrand wrote:
> None of these helpers actually leak any RCU details to the caller.  They
> all assume you have a genuine reference, take the RCU read lock, and
> retry if needed.  Naming them with an _rcu is likely to cause callers
> more panic than needed.

I'm really wondering if we need this postfix in the first place.

If we use the right rcu_dereference_check() macro, those functions
can be called with the reservation object either locked or unlocked. It
shouldn't matter to them.
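
Concretely, something like this (just a sketch on top of the existing
dma_resv_held() lockdep helper, not code from this series) would be legal
to call from either context:

	static inline struct dma_fence *
	dma_resv_excl_fence(struct dma_resv *obj)
	{
		/* Valid when rcu_read_lock() is held OR when the
		 * reservation lock is held, because the check accepts
		 * either condition.
		 */
		return rcu_dereference_check(obj->fence_excl,
					     dma_resv_held(obj));
	}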

But getting rid of the _rcu postfix sounds like a good idea in general 
to me.

Christian.

>
> v2 (Jason Ekstrand):
>   - Fix function argument indentation
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Lucas Stach <l.stach@pengutronix.de>
> Cc: Rob Clark <robdclark@gmail.com>
> Cc: Sean Paul <sean@poorly.run>
> Cc: Huang Rui <ray.huang@amd.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> ---
>   drivers/dma-buf/dma-buf.c                     |  4 +--
>   drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>   drivers/gpu/drm/drm_gem.c                     | 10 +++----
>   drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>   drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>   drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>   drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>   drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>   drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>   drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>   drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>   drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>   drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>   drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>   drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>   drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>   drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>   drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>   drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>   drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>   drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>   drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>   drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>   include/linux/dma-resv.h                      | 18 ++++++------
>   36 files changed, 108 insertions(+), 110 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index f264b70c383eb..ed6451d55d663 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   	long ret;
>   
>   	/* Wait on any implicit rendering fences */
> -	ret = dma_resv_wait_timeout_rcu(resv, write, true,
> -						  MAX_SCHEDULE_TIMEOUT);
> +	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> +					     MAX_SCHEDULE_TIMEOUT);
>   	if (ret < 0)
>   		return ret;
>   
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>   EXPORT_SYMBOL(dma_resv_copy_fences);
>   
>   /**
> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>    * fences without update side lock held
>    * @obj: the reservation object
>    * @pfence_excl: the returned exclusive fence (or NULL)
> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>    * exclusive fence is not specified the fence is put into the array of the
>    * shared fences as well. Returns either zero or -ENOMEM.
>    */
> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> -			    struct dma_fence **pfence_excl,
> -			    unsigned *pshared_count,
> -			    struct dma_fence ***pshared)
> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> +				 struct dma_fence **pfence_excl,
> +				 unsigned *pshared_count,
> +				 struct dma_fence ***pshared)
>   {
>   	struct dma_fence **shared = NULL;
>   	struct dma_fence *fence_excl;
> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>   	*pshared = shared;
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>   
>   /**
> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> + * dma_resv_wait_timeout_unlocked - Wait on a reservation object's
>    * @obj: the reservation object
>    * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>    * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>    * greater than zero on success.
>    */
> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> -			       bool wait_all, bool intr,
> -			       unsigned long timeout)
> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> +				    bool wait_all, bool intr,
> +				    unsigned long timeout)
>   {
>   	struct dma_fence *fence;
>   	unsigned seq, shared_count;
> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>   	rcu_read_unlock();
>   	goto retry;
>   }
> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>   
>   
>   static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>   }
>   
>   /**
> - * dma_resv_test_signaled_rcu - Test if a reservation object's
> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>    * fences have been signaled.
>    * @obj: the reservation object
>    * @test_all: if true, test all fences, otherwise only test the exclusive
> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>    * RETURNS
>    * true if all fences signaled, else false
>    */
> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>   {
>   	unsigned seq, shared_count;
>   	int ret;
> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>   	rcu_read_unlock();
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 8a1fb8b6606e5..b8e24f199be9a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>   		goto unpin;
>   	}
>   
> -	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> -					      &work->shared_count,
> -					      &work->shared);
> +	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> +					 &work->shared_count,
> +					 &work->shared);
>   	if (unlikely(r != 0)) {
>   		DRM_ERROR("failed to get fences for buffer\n");
>   		goto unpin;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index baa980a477d94..0d0319bc51577 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>   	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>   		return 0;
>   
> -	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> +	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>   	if (r)
>   		return r;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 18974bd081f00..8e2996d6ba3ad 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>   		return -ENOENT;
>   	}
>   	robj = gem_to_amdgpu_bo(gobj);
> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> -						  timeout);
> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> +					     timeout);
>   
>   	/* ret == 0 means not signaled,
>   	 * ret > 0 means signaled
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index b4971e90b98cf..38e1b32dd2cef 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>   	unsigned count;
>   	int r;
>   
> -	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> +	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>   	if (r)
>   		goto fallback;
>   
> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>   	/* Not enough memory for the delayed delete, as last resort
>   	 * block for all the fences to complete.
>   	 */
> -	dma_resv_wait_timeout_rcu(resv, true, false,
> -					    MAX_SCHEDULE_TIMEOUT);
> +	dma_resv_wait_timeout_unlocked(resv, true, false,
> +				       MAX_SCHEDULE_TIMEOUT);
>   	amdgpu_pasid_free(pasid);
>   }
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> index 828b5167ff128..0319c8b547c48 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>   
>   	mmu_interval_set_seq(mni, cur_seq);
>   
> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> -				      MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	mutex_unlock(&adev->notifier_lock);
>   	if (r <= 0)
>   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index 0adffcace3263..de1c7c5501683 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>   		return 0;
>   	}
>   
> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> -						MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	if (r < 0)
>   		return r;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> index c6dbc08016045..4a2196404fb69 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>   	ib->length_dw = 16;
>   
>   	if (direct) {
> -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> -							true, false,
> -							msecs_to_jiffies(10));
> +		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> +						   true, false,
> +						   msecs_to_jiffies(10));
>   		if (r == 0)
>   			r = -ETIMEDOUT;
>   		if (r < 0)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 4a3e3f72e1277..7ba1c537d6584 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   	unsigned i, shared_count;
>   	int r;
>   
> -	r = dma_resv_get_fences_rcu(resv, &excl,
> -					      &shared_count, &shared);
> +	r = dma_resv_get_fences_unlocked(resv, &excl,
> +					 &shared_count, &shared);
>   	if (r) {
>   		/* Not enough memory to grab the fence list, as last resort
>   		 * block for all the fences to complete.
>   		 */
> -		dma_resv_wait_timeout_rcu(resv, true, false,
> -						    MAX_SCHEDULE_TIMEOUT);
> +		dma_resv_wait_timeout_unlocked(resv, true, false,
> +					       MAX_SCHEDULE_TIMEOUT);
>   		return;
>   	}
>   
> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>   		return true;
>   
>   	/* Don't evict VM page tables while they are busy */
> -	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> +	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>   		return false;
>   
>   	/* Try to block ongoing updates */
> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>    */
>   long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>   {
> -	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> -					    true, true, timeout);
> +	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> +						 true, true, timeout);
>   	if (timeout <= 0)
>   		return timeout;
>   
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 9ca517b658546..0121d2817fa26 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>   		 * deadlock during GPU reset when this fence will not signal
>   		 * but we hold reservation lock for the BO.
>   		 */
> -		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> -							false,
> -							msecs_to_jiffies(5000));
> +		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> +						   false,
> +						   msecs_to_jiffies(5000));
>   		if (unlikely(r <= 0))
>   			DRM_ERROR("Waiting for fences timed out!");
>   
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 9989425e9875a..1241a421b9e81 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>   		return -EINVAL;
>   	}
>   
> -	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> -						  true, timeout);
> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> +					     true, timeout);
>   	if (ret == 0)
>   		ret = -ETIME;
>   	else if (ret > 0)
> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>   
>   	if (!write) {
>   		struct dma_fence *fence =
> -			dma_resv_get_excl_rcu(obj->resv);
> +			dma_resv_get_excl_unlocked(obj->resv);
>   
>   		return drm_gem_fence_array_add(fence_array, fence);
>   	}
>   
> -	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> -						&fence_count, &fences);
> +	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> +					   &fence_count, &fences);
>   	if (ret || !fence_count)
>   		return ret;
>   
> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> index a005c5a0ba46a..a27135084ae5c 100644
> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>   		return 0;
>   
>   	obj = drm_gem_fb_get_obj(state->fb, 0);
> -	fence = dma_resv_get_excl_rcu(obj->resv);
> +	fence = dma_resv_get_excl_unlocked(obj->resv);
>   	drm_atomic_set_fence_for_plane(state, fence);
>   
>   	return 0;
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index db69f19ab5bca..4e6f5346e84e4 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>   	}
>   
>   	if (op & ETNA_PREP_NOSYNC) {
> -		if (!dma_resv_test_signaled_rcu(obj->resv,
> -							  write))
> +		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>   			return -EBUSY;
>   	} else {
>   		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>   
> -		ret = dma_resv_wait_timeout_rcu(obj->resv,
> -							  write, true, remain);
> +		ret = dma_resv_wait_timeout_unlocked(obj->resv,
> +						     write, true, remain);
>   		if (ret <= 0)
>   			return ret == 0 ? -ETIMEDOUT : ret;
>   	}
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> index d05c359945799..6617fada4595d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>   			continue;
>   
>   		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> -			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> -								&bo->nr_shared,
> -								&bo->shared);
> +			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> +							   &bo->nr_shared,
> +							   &bo->shared);
>   			if (ret)
>   				return ret;
>   		} else {
> -			bo->excl = dma_resv_get_excl_rcu(robj);
> +			bo->excl = dma_resv_get_excl_unlocked(robj);
>   		}
>   
>   	}
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 422b59ebf6dce..5f0b85a102159 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>   		if (ret < 0)
>   			goto unpin_fb;
>   
> -		fence = dma_resv_get_excl_rcu(obj->base.resv);
> +		fence = dma_resv_get_excl_unlocked(obj->base.resv);
>   		if (fence) {
>   			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>   						   fence);
> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> index 9e508e7d4629f..bdfc6bf16a4e9 100644
> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> @@ -10,7 +10,7 @@
>   void dma_resv_prune(struct dma_resv *resv)
>   {
>   	if (dma_resv_trylock(resv)) {
> -		if (dma_resv_test_signaled_rcu(resv, true))
> +		if (dma_resv_test_signaled_unlocked(resv, true))
>   			dma_resv_add_excl_fence(resv, NULL);
>   		dma_resv_unlock(resv);
>   	}
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> index 25235ef630c10..754ad6d1bace9 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>   	 * Alternatively, we can trade that extra information on read/write
>   	 * activity with
>   	 *	args->busy =
> -	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
> +	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
>   	 * to report the overall busyness. This is what the wait-ioctl does.
>   	 *
>   	 */
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 297143511f99b..e8f323564e57b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>   	if (DBG_FORCE_RELOC)
>   		return false;
>   
> -	return !dma_resv_test_signaled_rcu(vma->resv, true);
> +	return !dma_resv_test_signaled_unlocked(vma->resv, true);
>   }
>   
>   static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index 2ebd79537aea9..7c0eb425cb3b3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>   	struct dma_fence *fence;
>   
>   	rcu_read_lock();
> -	fence = dma_resv_get_excl_rcu(obj->base.resv);
> +	fence = dma_resv_get_excl_unlocked(obj->base.resv);
>   	rcu_read_unlock();
>   
>   	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index a657b99ec7606..44df18dc9669f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>   		return true;
>   
>   	/* we will unbind on next submission, still have userptr pins */
> -	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> -				      MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	if (r <= 0)
>   		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>   
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> index 4b9856d5ba14f..5b6c52659ad4d 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>   		unsigned int count, i;
>   		int ret;
>   
> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>   		 */
>   		prune_fences = count && timeout >= 0;
>   	} else {
> -		excl = dma_resv_get_excl_rcu(resv);
> +		excl = dma_resv_get_excl_unlocked(resv);
>   	}
>   
>   	if (excl && timeout >= 0)
> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>   		unsigned int count, i;
>   		int ret;
>   
> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> -					      &excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> +						   &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>   
>   		kfree(shared);
>   	} else {
> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>   	}
>   
>   	if (excl) {
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 970d8f4986bbe..f1ed03ced7dd1 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>   		struct dma_fence **shared;
>   		unsigned int count, i;
>   
> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> -							&excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> +						   &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>   			dma_fence_put(shared[i]);
>   		kfree(shared);
>   	} else {
> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>   	}
>   
>   	if (excl) {
> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> index 2744558f30507..0bcb7ea44201e 100644
> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>   		struct dma_fence **shared;
>   		unsigned int count, i;
>   
> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>   		if (ret)
>   			return ret;
>   
> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>   			dma_fence_put(shared[i]);
>   		kfree(shared);
>   	} else {
> -		excl = dma_resv_get_excl_rcu(resv);
> +		excl = dma_resv_get_excl_unlocked(resv);
>   	}
>   
>   	if (ret >= 0 && excl && excl->ops != exclude) {
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 56df86e5f7400..1aca60507bb14 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>   		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>   	long ret;
>   
> -	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> -						  true,  remain);
> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>   	if (ret == 0)
>   		return remain == 0 ? -EBUSY : -ETIMEDOUT;
>   	else if (ret < 0)
> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> index 0cb1f9d848d3e..8d048bacd6f02 100644
> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>   			asyw->image.handle[0] = ctxdma->object.handle;
>   	}
>   
> -	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> +	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>   	asyw->image.offset[0] = nvbo->offset;
>   
>   	if (wndw->func->prepare) {
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index a70e82413fa75..bc6b09ee9b552 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>   		return -ENOENT;
>   	nvbo = nouveau_gem_object(gem);
>   
> -	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> -						   no_wait ? 0 : 30 * HZ);
> +	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> +					      no_wait ? 0 : 30 * HZ);
>   	if (!lret)
>   		ret = -EBUSY;
>   	else if (lret > 0)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index ca07098a61419..eef5b632ee0ce 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>   	if (!gem_obj)
>   		return -ENOENT;
>   
> -	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> -						  true, timeout);
> +	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> +					     true, timeout);
>   	if (!ret)
>   		ret = timeout ? -ETIMEDOUT : -EBUSY;
>   
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 6003cfeb13221..2df3e999a38d0 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>   	int i;
>   
>   	for (i = 0; i < bo_count; i++)
> -		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> +		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>   }
>   
>   static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 05ea2f39f6261..1a38b0bf36d11 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>   	}
>   	if (domain == RADEON_GEM_DOMAIN_CPU) {
>   		/* Asking for cpu access wait for object idle */
> -		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> +		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>   		if (!r)
>   			r = -EBUSY;
>   
> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>   	}
>   	robj = gem_to_radeon_bo(gobj);
>   
> -	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> +	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>   	if (r == 0)
>   		r = -EBUSY;
>   	else
> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>   	}
>   	robj = gem_to_radeon_bo(gobj);
>   
> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>   	if (ret == 0)
>   		r = -EBUSY;
>   	else if (ret < 0)
> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> index e37c9a57a7c36..a19be3f8a218c 100644
> --- a/drivers/gpu/drm/radeon/radeon_mn.c
> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>   		return true;
>   	}
>   
> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> -				      MAX_SCHEDULE_TIMEOUT);
> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> +					   MAX_SCHEDULE_TIMEOUT);
>   	if (r <= 0)
>   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>   
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index ca1b098b6a561..215cad3149621 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>   	struct dma_resv *resv = &bo->base._resv;
>   	int ret;
>   
> -	if (dma_resv_test_signaled_rcu(resv, true))
> +	if (dma_resv_test_signaled_unlocked(resv, true))
>   		ret = 0;
>   	else
>   		ret = -EBUSY;
> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>   			dma_resv_unlock(bo->base.resv);
>   		spin_unlock(&bo->bdev->lru_lock);
>   
> -		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> -						 30 * HZ);
> +		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> +						      30 * HZ);
>   
>   		if (lret < 0)
>   			return lret;
> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>   			/* Last resort, if we fail to allocate memory for the
>   			 * fences block for the BO to become idle
>   			 */
> -			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> -						  30 * HZ);
> +			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> +						       30 * HZ);
>   		}
>   
>   		if (bo->bdev->funcs->release_notify)
> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>   		ttm_mem_io_free(bdev, &bo->mem);
>   	}
>   
> -	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> +	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>   	    !dma_resv_trylock(bo->base.resv)) {
>   		/* The BO is not idle, resurrect it for delayed destroy */
>   		ttm_bo_flush_all_fences(bo);
> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>   	long timeout = 15 * HZ;
>   
>   	if (no_wait) {
> -		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> +		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>   			return 0;
>   		else
>   			return -EBUSY;
>   	}
>   
> -	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> -						      interruptible, timeout);
> +	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> +						 interruptible, timeout);
>   	if (timeout < 0)
>   		return timeout;
>   
> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> index 2902dc6e64faf..010a82405e374 100644
> --- a/drivers/gpu/drm/vgem/vgem_fence.c
> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>   
>   	/* Check for a conflicting fence */
>   	resv = obj->resv;
> -	if (!dma_resv_test_signaled_rcu(resv,
> -						  arg->flags & VGEM_FENCE_WRITE)) {
> +	if (!dma_resv_test_signaled_unlocked(resv,
> +					     arg->flags & VGEM_FENCE_WRITE)) {
>   		ret = -EBUSY;
>   		goto err_fence;
>   	}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 669f2ee395154..ab010c8e32816 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>   		return -ENOENT;
>   
>   	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> -		ret = dma_resv_test_signaled_rcu(obj->resv, true);
> +		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>   	} else {
> -		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> -						timeout);
> +		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> +						     timeout);
>   	}
>   	if (ret == 0)
>   		ret = -EBUSY;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> index 04dd49c4c2572..19e1ce23842a9 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>   	if (flags & drm_vmw_synccpu_allow_cs) {
>   		long lret;
>   
> -		lret = dma_resv_wait_timeout_rcu
> +		lret = dma_resv_wait_timeout_unlocked
>   			(bo->base.resv, true, true,
>   			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>   		if (!lret)
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index d44a77e8a7e34..99cfb7af966b8 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>   }
>   
>   /**
> - * dma_resv_get_excl_rcu - get the reservation object's
> + * dma_resv_get_excl_unlocked - get the reservation object's
>    * exclusive fence, without lock held.
>    * @obj: the reservation object
>    *
> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>    * The exclusive fence or NULL if none
>    */
>   static inline struct dma_fence *
> -dma_resv_get_excl_rcu(struct dma_resv *obj)
> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>   {
>   	struct dma_fence *fence;
>   
> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>   
>   void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>   
> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> -			    struct dma_fence **pfence_excl,
> -			    unsigned *pshared_count,
> -			    struct dma_fence ***pshared);
> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> +				 struct dma_fence **pfence_excl,
> +				 unsigned *pshared_count,
> +				 struct dma_fence ***pshared);
>   
>   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>   
> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> -			       unsigned long timeout);
> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> +				    unsigned long timeout);
>   
> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>   
>   #endif /* _LINUX_RESERVATION_H */
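
The rename is purely mechanical, so every converted call site keeps its
semantics.  As a minimal sketch of what a caller looks like under the new
names (a hypothetical driver function modelled on the call sites above,
not part of this series):

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

/* Hypothetical example: poll-style busy check that tests all fences
 * (exclusive and shared) without taking the reservation lock.
 */
static int example_gem_busy(struct drm_gem_object *obj)
{
        if (dma_resv_test_signaled_unlocked(obj->resv, true))
                return 0;

        return -EBUSY;
}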


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-26 11:02     ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-26 11:02 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel, intel-gfx; +Cc: Daniel Vetter

On 25.05.21 at 23:17, Jason Ekstrand wrote:
> Modern userspace APIs like Vulkan are built on an explicit
> synchronization model.  This doesn't always play nicely with the
> implicit synchronization used in the kernel and assumed by X11 and
> Wayland.  The client -> compositor half of the synchronization isn't too
> bad, at least on intel, because we can control whether or not i915
> synchronizes on the buffer and whether or not it's considered written.
>
> The harder part is the compositor -> client synchronization when we get
> the buffer back from the compositor.  We're required to be able to
> provide the client with a VkSemaphore and VkFence representing the point
> in time where the window system (compositor and/or display) finished
> using the buffer.  With current APIs, it's very hard to do this in such
> a way that we don't get confused by the Vulkan driver's access of the
> buffer.  In particular, once we tell the kernel that we're rendering to
> the buffer again, any CPU waits on the buffer or GPU dependencies will
> wait on some of the client rendering and not just the compositor.
>
> This new IOCTL solves this problem by allowing us to get a snapshot of
> the implicit synchronization state of a given dma-buf in the form of a
> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> instead of CPU waiting directly, it encapsulates the wait operation, at
> the current moment in time, in a sync_file so we can check/wait on it
> later.  As long as the Vulkan driver does the sync_file export from the
> dma-buf before we re-introduce it for rendering, it will only contain
> fences from the compositor or display.  This allows us to accurately turn
> it into a VkFence or VkSemaphore without any over-synchronization.

Regarding that, why do we actually use a syncfile and not a drm_syncobj 
here?

The latter should be much closer to a Vulkan timeline semaphore.

Christian.

>
> v2 (Jason Ekstrand):
>   - Use a wrapper dma_fence_array of all fences including the new one
>     when importing an exclusive fence.
>
> v3 (Jason Ekstrand):
>   - Lock around setting shared fences as well as exclusive
>   - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
>   - Initialize ret to 0 in dma_buf_wait_sync_file
>
> v4 (Jason Ekstrand):
>   - Use the new dma_resv_get_singleton helper
>
> v5 (Jason Ekstrand):
>   - Rename the IOCTLs to import/export rather than wait/signal
>   - Drop the WRITE flag and always get/set the exclusive fence
>
> v6 (Jason Ekstrand):
>   - Drop the sync_file import as it was all-around sketchy and not nearly
>     as useful as export.
>   - Re-introduce READ/WRITE flag support for export
>   - Rework the commit message
>
> v7 (Jason Ekstrand):
>   - Require at least one sync flag
>   - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
>   - Use _rcu helpers since we're accessing the dma_resv read-only
>
> v8 (Jason Ekstrand):
>   - Return -ENOMEM if the sync_file_create fails
>   - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)
>
> v9 (Jason Ekstrand):
>   - Add documentation for the new ioctl
>
> v10 (Jason Ekstrand):
>   - Go back to dma_buf_sync_file as the ioctl struct name
>
> v11 (Daniel Vetter):
>   - Go back to dma_buf_export_sync_file as the ioctl struct name
>   - Better kerneldoc describing what the read/write flags do
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Acked-by: Simon Ser <contact@emersion.fr>
> Acked-by: Christian König <christian.koenig@amd.com>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>   drivers/dma-buf/dma-buf.c    | 67 ++++++++++++++++++++++++++++++++++++
>   include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++
>   2 files changed, 102 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ed6451d55d663..65a9574ee04ed 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -20,6 +20,7 @@
>   #include <linux/debugfs.h>
>   #include <linux/module.h>
>   #include <linux/seq_file.h>
> +#include <linux/sync_file.h>
>   #include <linux/poll.h>
>   #include <linux/dma-resv.h>
>   #include <linux/mm.h>
> @@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
>    * Note that this only signals the completion of the respective fences, i.e. the
>    * DMA transfers are complete. Cache flushing and any other necessary
>    * preparations before CPU access can begin still need to happen.
> + *
> + * As an alternative to poll(), the set of fences on DMA buffer can be
> + * exported as a &sync_file using &dma_buf_sync_file_export.
>    */
>   
>   static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
> @@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
>   	return ret;
>   }
>   
> +#if IS_ENABLED(CONFIG_SYNC_FILE)
> +static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
> +				     void __user *user_data)
> +{
> +	struct dma_buf_export_sync_file arg;
> +	struct dma_fence *fence = NULL;
> +	struct sync_file *sync_file;
> +	int fd, ret;
> +
> +	if (copy_from_user(&arg, user_data, sizeof(arg)))
> +		return -EFAULT;
> +
> +	if (arg.flags & ~DMA_BUF_SYNC_RW)
> +		return -EINVAL;
> +
> +	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
> +		return -EINVAL;
> +
> +	fd = get_unused_fd_flags(O_CLOEXEC);
> +	if (fd < 0)
> +		return fd;
> +
> +	if (arg.flags & DMA_BUF_SYNC_WRITE) {
> +		fence = dma_resv_get_singleton_unlocked(dmabuf->resv);
> +		if (IS_ERR(fence)) {
> +			ret = PTR_ERR(fence);
> +			goto err_put_fd;
> +		}
> +	} else if (arg.flags & DMA_BUF_SYNC_READ) {
> +		fence = dma_resv_get_excl_unlocked(dmabuf->resv);
> +	}
> +
> +	if (!fence)
> +		fence = dma_fence_get_stub();
> +
> +	sync_file = sync_file_create(fence);
> +
> +	dma_fence_put(fence);
> +
> +	if (!sync_file) {
> +		ret = -ENOMEM;
> +		goto err_put_fd;
> +	}
> +
> +	fd_install(fd, sync_file->file);
> +
> +	arg.fd = fd;
> +	if (copy_to_user(user_data, &arg, sizeof(arg)))
> +		return -EFAULT;
> +
> +	return 0;
> +
> +err_put_fd:
> +	put_unused_fd(fd);
> +	return ret;
> +}
> +#endif
> +
>   static long dma_buf_ioctl(struct file *file,
>   			  unsigned int cmd, unsigned long arg)
>   {
> @@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file,
>   	case DMA_BUF_SET_NAME_B:
>   		return dma_buf_set_name(dmabuf, (const char __user *)arg);
>   
> +#if IS_ENABLED(CONFIG_SYNC_FILE)
> +	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
> +		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
> +#endif
> +
>   	default:
>   		return -ENOTTY;
>   	}
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index 1f67ced853b14..aeba45180b028 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -67,6 +67,40 @@ struct dma_buf_sync {
>   
>   #define DMA_BUF_NAME_LEN	32
>   
> +/**
> + * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
> + *
> + * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
> + * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
> + * waits via poll() or other driver-specific mechanisms typically wait on
> + * whatever fences are on the dma-buf at the time the wait begins.  This
> + * is similar except that it takes a snapshot of the current fences on the
> + * dma-buf for waiting later instead of waiting immediately.  This is
> + * useful for modern graphics APIs such as Vulkan which assume an explicit
> + * synchronization model but still need to inter-operate with dma-buf.
> + */
> +struct dma_buf_export_sync_file {
> +	/**
> +	 * @flags: Read/write flags
> +	 *
> +	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
> +	 *
> +	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
> +	 * the returned sync file waits on any writers of the dma-buf to
> +	 * complete.  Waiting on the returned sync file is equivalent to
> +	 * poll() with POLLIN.
> +	 *
> +	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
> +	 * any users of the dma-buf (read or write) to complete.  Waiting
> +	 * on the returned sync file is equivalent to poll() with POLLOUT.
> +	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
> +	 * is equivalent to just DMA_BUF_SYNC_WRITE.
> +	 */
> +	__u32 flags;
> +	/** @fd: Returned sync file descriptor */
> +	__s32 fd;
> +};
> +
>   #define DMA_BUF_BASE		'b'
>   #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>   
> @@ -76,5 +110,6 @@ struct dma_buf_sync {
>   #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
>   #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
>   #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
> +#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
>   
>   #endif
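
To make the intended flow concrete, here is a minimal userspace sketch
(hypothetical code, error handling elided; it assumes a kernel with this
series applied, since DMA_BUF_IOCTL_EXPORT_SYNC_FILE and struct
dma_buf_export_sync_file come from this patch):

#include <linux/dma-buf.h>
#include <sys/ioctl.h>
#include <poll.h>
#include <unistd.h>

/* Hypothetical example: snapshot the fences on a dma-buf now and wait
 * on the snapshot later.  A sync_file fd polls readable once all of
 * its fences have signaled.
 */
static int snapshot_and_wait_later(int dmabuf_fd)
{
        struct dma_buf_export_sync_file args = {
                .flags = DMA_BUF_SYNC_READ,     /* wait for writers only */
                .fd = -1,
        };
        struct pollfd pfd;

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0)
                return -1;

        /* ... resubmit the dma-buf for rendering here; fences added
         * after the snapshot do not affect it ... */

        pfd.fd = args.fd;
        pfd.events = POLLIN;
        poll(&pfd, 1, -1);

        close(args.fd);
        return 0;
}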


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-26 11:02     ` [Intel-gfx] " Christian König
@ 2021-05-26 11:31       ` Daniel Stone
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Stone @ 2021-05-26 11:31 UTC (permalink / raw)
  To: Christian König; +Cc: Daniel Vetter, intel-gfx, dri-devel, Jason Ekstrand

Hi Christian,

On Wed, 26 May 2021 at 12:02, Christian König <christian.koenig@amd.com> wrote:
> On 25.05.21 at 23:17, Jason Ekstrand wrote:
> > This new IOCTL solves this problem by allowing us to get a snapshot of
> > the implicit synchronization state of a given dma-buf in the form of a
> > sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> > instead of CPU waiting directly, it encapsulates the wait operation, at
> > the current moment in time, in a sync_file so we can check/wait on it
> > later.  As long as the Vulkan driver does the sync_file export from the
> > dma-buf before we re-introduce it for rendering, it will only contain
> > fences from the compositor or display.  This allows us to accurately turn
> > it into a VkFence or VkSemaphore without any over-synchronization.
>
> Regarding that, why do we actually use a syncfile and not a drm_syncobj
> here?
>
> The later should be much closer to a Vulkan timeline semaphore.

How would we insert a syncobj+val into a resv though? Like, if we pass
an unmaterialised syncobj+val here to insert into the resv, then an
implicit-only media user (or KMS) goes to sync against the resv, what
happens?

Cheers,
Daniel

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-26 11:31       ` Daniel Stone
@ 2021-05-26 12:46         ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-26 12:46 UTC (permalink / raw)
  To: Daniel Stone; +Cc: Daniel Vetter, intel-gfx, dri-devel, Jason Ekstrand



On 26.05.21 at 13:31, Daniel Stone wrote:
> Hi Christian,
>
> On Wed, 26 May 2021 at 12:02, Christian König <christian.koenig@amd.com> wrote:
>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>> This new IOCTL solves this problem by allowing us to get a snapshot of
>>> the implicit synchronization state of a given dma-buf in the form of a
>>> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
>>> instead of CPU waiting directly, it encapsulates the wait operation, at
>>> the current moment in time, in a sync_file so we can check/wait on it
>>> later.  As long as the Vulkan driver does the sync_file export from the
>>> dma-buf before we re-introduce it for rendering, it will only contain
>>> fences from the compositor or display.  This allows us to accurately turn
>>> it into a VkFence or VkSemaphore without any over-synchronization.
>> Regarding that, why do we actually use a syncfile and not a drm_syncobj
>> here?
>>
>> The later should be much closer to a Vulkan timeline semaphore.
> How would we insert a syncobj+val into a resv though? Like, if we pass
> an unmaterialised syncobj+val here to insert into the resv, then an
> implicit-only media user (or KMS) goes to sync against the resv, what
> happens?

Well this is for exporting, not importing. So we don't need to worry 
about that.

It's just my thinking because the drm_syncobj is the backing object on 
VkSemaphore implementations these days, isn't it?

Christian.

>
> Cheers,
> Daniel


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-26 12:46         ` Christian König
@ 2021-05-26 13:12           ` Daniel Stone
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Stone @ 2021-05-26 13:12 UTC (permalink / raw)
  To: Christian König; +Cc: Daniel Vetter, intel-gfx, dri-devel, Jason Ekstrand

Hi,

On Wed, 26 May 2021 at 13:46, Christian König <christian.koenig@amd.com> wrote:
> Am 26.05.21 um 13:31 schrieb Daniel Stone:
> > How would we insert a syncobj+val into a resv though? Like, if we pass
> > an unmaterialised syncobj+val here to insert into the resv, then an
> > implicit-only media user (or KMS) goes to sync against the resv, what
> > happens?
>
> Well this is for exporting, not importing. So we don't need to worry
> about that.
>
> It's just my thinking because the drm_syncobj is the backing object on
> VkSemaphore implementations these days, isn't it?

Yeah, I can see that to an extent. But then binary vs. timeline
syncobjs are very different in use (which is unfortunate tbh), and
then we have an asymmetry between syncobj export & sync_file import.

You're right that we do want a syncobj though. This is probably not
practical due to smashing uAPI to bits, but if we could wind the clock
back a couple of years, I suspect the interface we want is that export
can either export a sync_file or a binary syncobj, and further that
binary syncobjs could transparently act as timeline semaphores by
mapping any value (either wait or signal) to the binary signal. In
hindsight, we should probably just never have had binary syncobj. Oh
well.

Cheers,
Daniel

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-26 13:12           ` Daniel Stone
@ 2021-05-26 13:23             ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-26 13:23 UTC (permalink / raw)
  To: Daniel Stone; +Cc: Daniel Vetter, intel-gfx, dri-devel, Jason Ekstrand



On 26.05.21 at 15:12, Daniel Stone wrote:
> Hi,
>
> On Wed, 26 May 2021 at 13:46, Christian König <christian.koenig@amd.com> wrote:
>> On 26.05.21 at 13:31, Daniel Stone wrote:
>>> How would we insert a syncobj+val into a resv though? Like, if we pass
>>> an unmaterialised syncobj+val here to insert into the resv, then an
>>> implicit-only media user (or KMS) goes to sync against the resv, what
>>> happens?
>> Well this is for exporting, not importing. So we don't need to worry
>> about that.
>>
>> It's just my thinking because the drm_syncobj is the backing object on
>> VkSemaphore implementations these days, isn't it?
> Yeah, I can see that to an extent. But then binary vs. timeline
> syncobjs are very different in use (which is unfortunate tbh), and
> then we have an asymmetry between syncobj export & sync_file import.
>
> You're right that we do want a syncobj though. This is probably not
> practical due to smashing uAPI to bits, but if we could wind the clock
> back a couple of years, I suspect the interface we want is that export
> can either export a sync_file or a binary syncobj, and further that
> binary syncobjs could transparently act as timeline semaphores by
> mapping any value (either wait or signal) to the binary signal. In
> hindsight, we should probably just never have had binary syncobj. Oh
> well.

Well the latter is the case IIRC. Don't ask me for the detailed semantics,
but in general the drm_syncobj in timeline mode is compatible with the
binary mode.

The sync_file is also import/exportable to a certain drm_syncobj
timeline point (or as a binary signal). So no big deal, we are all
compatible here :)

I just thought that it might be more appropriate to return a drm_syncobj 
directly instead of a sync_file.

Regards,
Christian.

>
> Cheers,
> Daniel
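
To illustrate the compatibility claim above with existing libdrm helpers
(the wrapper function itself is hypothetical), userspace can already wrap
a sync_file in a drm_syncobj; drmSyncobjExportSyncFile() provides the
opposite direction:

#include <stdint.h>
#include <xf86drm.h>

/* Hypothetical example: import a sync_file's fence as the binary
 * payload of a freshly created drm_syncobj.
 */
static int syncobj_from_sync_file(int drm_fd, int sync_file_fd,
                                  uint32_t *out_handle)
{
        uint32_t handle;
        int ret;

        ret = drmSyncobjCreate(drm_fd, 0, &handle);
        if (ret)
                return ret;

        ret = drmSyncobjImportSyncFile(drm_fd, handle, sync_file_fd);
        if (ret) {
                drmSyncobjDestroy(drm_fd, handle);
                return ret;
        }

        *out_handle = handle;
        return 0;
}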


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 7/7] RFC: dma-buf: Add an API for importing sync files (v7)
  2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-26 17:09     ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-26 17:09 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: Daniel Vetter, intel-gfx, dri-devel, Christian König

On Tue, May 25, 2021 at 04:17:53PM -0500, Jason Ekstrand wrote:
> This patch is analogous to the previous sync file export patch in that
> it allows you to import a sync_file into a dma-buf.  Unlike the previous
> patch, however, this does add genuinely new functionality to dma-buf.
> Without this, the only way to attach a sync_file to a dma-buf is to
> submit a batch to your driver of choice which waits on the sync_file and
> claims to write to the dma-buf.  Even if said batch is a no-op, a submit
> is typically way more overhead than just attaching a fence.  A submit
> may also imply extra synchronization with other work because it happens
> on a hardware queue.
> 
> In the Vulkan world, this is useful for dealing with the out-fence from
> vkQueuePresent.  Current Linux window-systems (X11, Wayland, etc.) all
> rely on dma-buf implicit sync.  Since Vulkan is an explicit sync API, we
> get a set of fences (VkSemaphores) in vkQueuePresent and have to stash
> those as an exclusive (write) fence on the dma-buf.  We handle it in
> Mesa today with the above mentioned dummy submit trick.  This ioctl
> would allow us to set it directly without the dummy submit.
> 
> This may also open up possibilities for GPU drivers to move away from
> implicit sync for their kernel driver uAPI and instead provide sync
> files and rely on dma-buf import/export for communicating with other
> implicit sync clients.
> 
> We make the explicit choice here to only allow setting RW fences which
> translates to an exclusive fence on the dma_resv.  There's no use for
> read-only fences for communicating with other implicit sync userspace
> and any such attempts are likely to be racy at best.  When we go to
> insert the RW fence, the actual fence we set as the new exclusive fence
> is a combination of the sync_file provided by the user and all the other
> fences on the dma_resv.  This ensures that the newly added exclusive
> fence will never signal before the old one would have and ensures that
> we don't break any dma_resv contracts.  We require userspace to specify
> RW in the flags for symmetry with the export ioctl and in case we ever
> want to support read fences in the future.
> 
> There is one downside here that's worth documenting:  If two clients
> writing to the same dma-buf using this API race with each other, their
> actions on the dma-buf may happen in parallel or in an undefined order.
> Both with and without this API, the pattern is the same:  Collect all
> the fences on dma-buf, submit work which depends on said fences, and
> then set a new exclusive (write) fence on the dma-buf which depends on
> said work.  The difference is that, when it's all handled by the GPU
> driver's submit ioctl, the three operations happen atomically under the
> dma_resv lock.  If two userspace submits race, one will happen before
> the other.  You aren't guaranteed which but you are guaranteed that
> they're strictly ordered.  If userspace manages the fences itself, then
> these three operations happen separately and the two render operations
> may happen genuinely in parallel or get interleaved.  However, this is a
> case of userspace racing with itself.  As long as we ensure userspace
> can't back the kernel into a corner, it should be fine.
> 
> v2 (Jason Ekstrand):
>  - Use a wrapper dma_fence_array of all fences including the new one
>    when importing an exclusive fence.
> 
> v3 (Jason Ekstrand):
>  - Lock around setting shared fences as well as exclusive
>  - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
>  - Initialize ret to 0 in dma_buf_wait_sync_file
> 
> v4 (Jason Ekstrand):
>  - Use the new dma_resv_get_singleton helper
> 
> v5 (Jason Ekstrand):
>  - Rename the IOCTLs to import/export rather than wait/signal
>  - Drop the WRITE flag and always get/set the exclusive fence
> 
> v6 (Jason Ekstrand):
>  - Split import and export into separate patches
>  - New commit message
> 
> v7 (Daniel Vetter):
>  - Fix the uapi header to use the right struct in the ioctl
>  - Use a separate dma_buf_import_sync_file struct
>  - Add kerneldoc for dma_buf_import_sync_file
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>  drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
>  2 files changed, 58 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ea117de962903..098340222662b 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
>  	put_unused_fd(fd);
>  	return ret;
>  }
> +
> +static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
> +				     const void __user *user_data)
> +{
> +	struct dma_buf_import_sync_file arg;
> +	struct dma_fence *fence, *singleton = NULL;
> +	int ret = 0;
> +
> +	if (copy_from_user(&arg, user_data, sizeof(arg)))
> +		return -EFAULT;
> +
> +	if (arg.flags != DMA_BUF_SYNC_RW)
> +		return -EINVAL;
> +
> +	fence = sync_file_get_fence(arg.fd);
> +	if (!fence)
> +		return -EINVAL;
> +
> +	dma_resv_lock(dmabuf->resv, NULL);
> +
> +	singleton = dma_resv_get_singleton_unlocked(dmabuf->resv, fence);
> +	if (IS_ERR(singleton)) {
> +		ret = PTR_ERR(singleton);
> +	} else if (singleton) {
> +		dma_resv_add_excl_fence(dmabuf->resv, singleton);
> +		dma_resv_add_shared_fence(dmabuf->resv, singleton);

Thought some more about this, and I think what we actually need is:
- the "combine everything" singleton as shared fence
- but only the new fence plus, if present, the current exclusive fence in
  the exclusive slot

This way the rules are still all upheld, but we avoid including all
current shared fences in our new exclusive fence. And there's potentially
a lot included in these shared fences, like the beginning of rendering the
next frame. So exactly what we're trying to avoid.

This is endless amounts of tricky ...
-Daniel
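
A rough sketch of that slot assignment (hypothetical code, not part of
this series; it assumes the resv lock is held, that a shared slot has
already been reserved, that "singleton" is the combined fence built as
in this patch, and that dma_fence_array_create() takes ownership of a
kmalloc'ed array; error handling is trimmed):

#include <linux/dma-fence-array.h>
#include <linux/dma-resv.h>
#include <linux/slab.h>

/* Hypothetical sketch: the "combine everything" singleton goes into a
 * shared slot, while the exclusive slot only combines the new fence
 * with the current exclusive fence, if there is one.
 */
static int example_set_fences(struct dma_resv *resv,
                              struct dma_fence *fence,
                              struct dma_fence *singleton)
{
        struct dma_fence **fences;
        struct dma_fence *old_excl;
        struct dma_fence_array *array;
        unsigned int num = 0;

        fences = kmalloc_array(2, sizeof(*fences), GFP_KERNEL);
        if (!fences)
                return -ENOMEM;

        fences[num++] = dma_fence_get(fence);
        old_excl = dma_resv_get_excl(resv);     /* doesn't take a reference */
        if (old_excl)
                fences[num++] = dma_fence_get(old_excl);

        array = dma_fence_array_create(num, fences,
                                       dma_fence_context_alloc(1), 1, false);
        if (!array)
                return -ENOMEM;

        dma_resv_add_excl_fence(resv, &array->base);
        dma_resv_add_shared_fence(resv, singleton);
        return 0;
}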

> +	}
> +
> +	dma_resv_unlock(dmabuf->resv);
> +
> +	dma_fence_put(fence);
> +
> +	return ret;
> +}
>  #endif
>  
>  static long dma_buf_ioctl(struct file *file,
> @@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file,
>  #if IS_ENABLED(CONFIG_SYNC_FILE)
>  	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
>  		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
> +	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
> +		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
>  #endif
>  
>  	default:
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index aeba45180b028..af53987db24be 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -101,6 +101,27 @@ struct dma_buf_export_sync_file {
>  	__s32 fd;
>  };
>  
> +/**
> + * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
> + *
> + * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
> + * sync_file into a dma-buf for the purposes of implicit synchronization
> + * with other dma-buf consumers.  This allows clients using explicitly
> + * synchronized APIs such as Vulkan to inter-op with dma-buf consumers
> + * which expect implicit synchronization such as OpenGL or most media
> + * drivers/video.
> + */
> +struct dma_buf_import_sync_file {
> +	/**
> +	 * @flags: Read/write flags
> +	 *
> +	 * Must be DMA_BUF_SYNC_RW.
> +	 */
> +	__u32 flags;
> +	/** @fd: Sync file descriptor */
> +	__s32 fd;
> +};
> +
>  #define DMA_BUF_BASE		'b'
>  #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>  
> @@ -111,5 +132,6 @@ struct dma_buf_export_sync_file {
>  #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
>  #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
>  #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
> +#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
>  
>  #endif
> -- 
> 2.31.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
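
For completeness, the userspace side of the proposed import is tiny
(hypothetical snippet; it assumes a kernel with this RFC applied, and
the sync_file fd could come from, e.g., vkGetSemaphoreFdKHR()):

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

/* Hypothetical example: attach an explicit-sync fence to a dma-buf so
 * that implicit-sync consumers (compositors, media drivers, KMS) wait
 * on it.
 */
static int attach_fence(int dmabuf_fd, int sync_file_fd)
{
        struct dma_buf_import_sync_file args = {
                .flags = DMA_BUF_SYNC_RW,       /* the only accepted value */
                .fd = sync_file_fd,
        };

        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &args);
}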

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 7/7] RFC: dma-buf: Add an API for importing sync files (v7)
@ 2021-05-26 17:09     ` Daniel Vetter
  0 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-26 17:09 UTC (permalink / raw)
  To: Jason Ekstrand
  Cc: Daniel Vetter, intel-gfx, dri-devel, Sumit Semwal, Christian König

On Tue, May 25, 2021 at 04:17:53PM -0500, Jason Ekstrand wrote:
> This patch is analogous to the previous sync file export patch in that
> it allows you to import a sync_file into a dma-buf.  Unlike the previous
> patch, however, this does add genuinely new functionality to dma-buf.
> Without this, the only way to attach a sync_file to a dma-buf is to
> submit a batch to your driver of choice which waits on the sync_file and
> claims to write to the dma-buf.  Even if said batch is a no-op, a submit
> is typically way more overhead than just attaching a fence.  A submit
> may also imply extra synchronization with other work because it happens
> on a hardware queue.
> 
> In the Vulkan world, this is useful for dealing with the out-fence from
> vkQueuePresent.  Current Linux window-systems (X11, Wayland, etc.) all
> rely on dma-buf implicit sync.  Since Vulkan is an explicit sync API, we
> get a set of fences (VkSemaphores) in vkQueuePresent and have to stash
> those as an exclusive (write) fence on the dma-buf.  We handle it in
> Mesa today with the above mentioned dummy submit trick.  This ioctl
> would allow us to set it directly without the dummy submit.
> 
> This may also open up possibilities for GPU drivers to move away from
> implicit sync for their kernel driver uAPI and instead provide sync
> files and rely on dma-buf import/export for communicating with other
> implicit sync clients.
> 
> We make the explicit choice here to only allow setting RW fences which
> translates to an exclusive fence on the dma_resv.  There's no use for
> read-only fences for communicating with other implicit sync userspace
> and any such attempts are likely to be racy at best.  When we got to
> insert the RW fence, the actual fence we set as the new exclusive fence
> is a combination of the sync_file provided by the user and all the other
> fences on the dma_resv.  This ensures that the newly added exclusive
> fence will never signal before the old one would have and ensures that
> we don't break any dma_resv contracts.  We require userspace to specify
> RW in the flags for symmetry with the export ioctl and in case we ever
> want to support read fences in the future.
> 
> There is one downside here that's worth documenting:  If two clients
> writing to the same dma-buf using this API race with each other, their
> actions on the dma-buf may happen in parallel or in an undefined order.
> Both with and without this API, the pattern is the same:  Collect all
> the fences on dma-buf, submit work which depends on said fences, and
> then set a new exclusive (write) fence on the dma-buf which depends on
> said work.  The difference is that, when it's all handled by the GPU
> driver's submit ioctl, the three operations happen atomically under the
> dma_resv lock.  If two userspace submits race, one will happen before
> the other.  You aren't guaranteed which but you are guaranteed that
> they're strictly ordered.  If userspace manages the fences itself, then
> these three operations happen separately and the two render operations
> may happen genuinely in parallel or get interleaved.  However, this is a
> case of userspace racing with itself.  As long as we ensure userspace
> can't back the kernel into a corner, it should be fine.
> 
> v2 (Jason Ekstrand):
>  - Use a wrapper dma_fence_array of all fences including the new one
>    when importing an exclusive fence.
> 
> v3 (Jason Ekstrand):
>  - Lock around setting shared fences as well as exclusive
>  - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
>  - Initialize ret to 0 in dma_buf_wait_sync_file
> 
> v4 (Jason Ekstrand):
>  - Use the new dma_resv_get_singleton helper
> 
> v5 (Jason Ekstrand):
>  - Rename the IOCTLs to import/export rather than wait/signal
>  - Drop the WRITE flag and always get/set the exclusive fence
> 
> v6 (Jason Ekstrand):
>  - Split import and export into separate patches
>  - New commit message
> 
> v7 (Daniel Vetter):
>  - Fix the uapi header to use the right struct in the ioctl
>  - Use a separate dma_buf_import_sync_file struct
>  - Add kerneldoc for dma_buf_import_sync_file
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>  drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
>  2 files changed, 58 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ea117de962903..098340222662b 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
>  	put_unused_fd(fd);
>  	return ret;
>  }
> +
> +static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
> +				     const void __user *user_data)
> +{
> +	struct dma_buf_import_sync_file arg;
> +	struct dma_fence *fence, *singleton = NULL;
> +	int ret = 0;
> +
> +	if (copy_from_user(&arg, user_data, sizeof(arg)))
> +		return -EFAULT;
> +
> +	if (arg.flags != DMA_BUF_SYNC_RW)
> +		return -EINVAL;
> +
> +	fence = sync_file_get_fence(arg.fd);
> +	if (!fence)
> +		return -EINVAL;
> +
> +	dma_resv_lock(dmabuf->resv, NULL);
> +
> +	singleton = dma_resv_get_singleton_unlocked(dmabuf->resv, fence);
> +	if (IS_ERR(singleton)) {
> +		ret = PTR_ERR(singleton);
> +	} else if (singleton) {
> +		dma_resv_add_excl_fence(dmabuf->resv, singleton);
> +		dma_resv_add_shared_fence(dmabuf->resv, singleton);

Thought some more about this, and I think what we actually need is:
- the "combine everything" singleton as shared fence
- but only the new + eventually current exclusive fence in the exclusive
  slot

This way the rules are all still upheld, but we avoid including all
current shared fences in our new exclusive fence. There's potentially a
lot hiding in those shared fences, like the beginning of rendering the
next frame, which is exactly what we're trying to avoid.
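
In code, that might look something like this (a rough sketch on top of
the import path above; shared-slot reservation and the case where there
is no current exclusive fence are elided):

    struct dma_fence **pair;
    struct dma_fence_array *excl;

    pair = kmalloc_array(2, sizeof(*pair), GFP_KERNEL);
    if (!pair)
            return -ENOMEM;

    /* Capture the current exclusive fence before replacing it. */
    pair[0] = dma_fence_get(fence);
    pair[1] = dma_fence_get(dma_resv_get_excl(dmabuf->resv));

    excl = dma_fence_array_create(2, pair, dma_fence_context_alloc(1),
                                  0, false);
    if (!excl)
            return -ENOMEM;

    /* Exclusive slot: only new fence + old exclusive fence. Setting the
     * exclusive fence also drops the shared list, so the singleton goes
     * in afterwards. */
    dma_resv_add_excl_fence(dmabuf->resv, &excl->base);

    /* Shared slot: the "combine everything" singleton. */
    dma_resv_add_shared_fence(dmabuf->resv, singleton);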

This is endless amounts of tricky ...
-Daniel

> +	}
> +
> +	dma_resv_unlock(dmabuf->resv);
> +
> +	dma_fence_put(fence);
> +
> +	return ret;
> +}
>  #endif
>  
>  static long dma_buf_ioctl(struct file *file,
> @@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file,
>  #if IS_ENABLED(CONFIG_SYNC_FILE)
>  	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
>  		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
> +	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
> +		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
>  #endif
>  
>  	default:
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index aeba45180b028..af53987db24be 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -101,6 +101,27 @@ struct dma_buf_export_sync_file {
>  	__s32 fd;
>  };
>  
> +/**
> + * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
> + *
> + * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
> + * sync_file into a dma-buf for the purposes of implicit synchronization
> + * with other dma-buf consumers.  This allows clients using explicitly
> + * synchronized APIs such as Vulkan to inter-op with dma-buf consumers
> + * which expect implicit synchronization such as OpenGL or most media
> + * drivers/video.
> + */
> +struct dma_buf_import_sync_file {
> +	/**
> +	 * @flags: Read/write flags
> +	 *
> +	 * Must be DMA_BUF_SYNC_RW.
> +	 */
> +	__u32 flags;
> +	/** @fd: Sync file descriptor */
> +	__s32 fd;
> +};
> +
>  #define DMA_BUF_BASE		'b'
>  #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>  
> @@ -111,5 +132,6 @@ struct dma_buf_export_sync_file {
>  #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
>  #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
>  #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
> +#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
>  
>  #endif
> -- 
> 2.31.1
> 
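As the kerneldoc above says, the expected use is a client on an
explicitly synchronized API inserting its render-done fence before
handing the buffer to an implicitly synchronized consumer. Roughly (a
sketch; where render_done_fd comes from, e.g. a VkSemaphore export, is
driver/API-specific and assumed here):

    struct dma_buf_import_sync_file args = {
            .flags = DMA_BUF_SYNC_RW,  /* the only accepted value */
            .fd = render_done_fd,
    };

    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &args) < 0)
            /* handle errno */;

    /* An implicit-sync consumer (OpenGL, KMS, media) picking up the
     * dma-buf afterwards will now wait on that fence. */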

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-26 13:23             ` Christian König
@ 2021-05-27 10:33               ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-27 10:33 UTC (permalink / raw)
  To: Christian König; +Cc: intel-gfx, dri-devel, Jason Ekstrand, Daniel Vetter

On Wed, May 26, 2021 at 03:23:01PM +0200, Christian König wrote:
> 
> 
> Am 26.05.21 um 15:12 schrieb Daniel Stone:
> > Hi,
> > 
> > On Wed, 26 May 2021 at 13:46, Christian König <christian.koenig@amd.com> wrote:
> > > Am 26.05.21 um 13:31 schrieb Daniel Stone:
> > > > How would we insert a syncobj+val into a resv though? Like, if we pass
> > > > an unmaterialised syncobj+val here to insert into the resv, then an
> > > > implicit-only media user (or KMS) goes to sync against the resv, what
> > > > happens?
> > > Well this is for exporting, not importing. So we don't need to worry
> > > about that.
> > > 
> > > It's just my thinking because the drm_syncobj is the backing object on
> > > VkSemaphore implementations these days, isn't it?
> > Yeah, I can see that to an extent. But then binary vs. timeline
> > syncobjs are very different in use (which is unfortunate tbh), and
> > then we have an asymmetry between syncobj export & sync_file import.
> > 
> > You're right that we do want a syncobj though. This is probably not
> > practical due to smashing uAPI to bits, but if we could wind the clock
> > back a couple of years, I suspect the interface we want is that export
> > can either export a sync_file or a binary syncobj, and further that
> > binary syncobjs could transparently act as timeline semaphores by
> > mapping any value (either wait or signal) to the binary signal. In
> > hindsight, we should probably just never have had binary syncobj. Oh
> > well.
> 
> Well, the latter is the case IIRC. Don't ask me for the detailed semantics,
> but in general the drm_syncobj in timeline mode is compatible with the
> binary mode.
> 
> A sync_file is also importable/exportable to a given drm_syncobj timeline
> point (or as a binary signal). So no big deal, we are all compatible here :)
> 
> I just thought that it might be more appropriate to return a drm_syncobj
> directly instead of a sync_file.

I think another big potential user for this is a vk-based compositor which
needs to interact with and support implicitly synced clients. And the
compositor world, I think, is still mostly on sync_file right now (because
that's where we started with the atomic kms ioctl).

The other slight nudge is that drm_syncobj is a drm thing, so we'd first
need to lift it out into drivers/dma-buf (and hand-wave the DRM prefix
away) for it to be a non-awkward fit for dma-buf.

Plus you can convert them to the other form anyway, so it really doesn't
matter much. But for the above reasons I'm leaning slightly towards
sync_file. Unless compositor folks tell me I'm a fool and why :-)
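
For reference, the conversion is already there in the drm_syncobj uapi;
roughly (a sketch, error handling elided):

    struct drm_syncobj_create create = { 0 };
    struct drm_syncobj_handle args = { 0 };

    ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_CREATE, &create);

    /* sync_file -> syncobj */
    args.handle = create.handle;
    args.flags = DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE;
    args.fd = sync_file_fd;
    ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE, &args);

    /* syncobj -> sync_file */
    args.flags = DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE;
    ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD, &args);
    /* args.fd now holds a new sync_file */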
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread


* Re: [PATCH 4/7] dma-buf: Document DMA_BUF_IOCTL_SYNC
  2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
@ 2021-05-27 10:38     ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-27 10:38 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: Daniel Vetter, intel-gfx, Christian König, dri-devel

On Tue, May 25, 2021 at 04:17:50PM -0500, Jason Ekstrand wrote:
> This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
> documentation for DMA_BUF_IOCTL_SYNC.
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>

We're still missing the doc for the SET_NAME ioctl, but maybe Sumit can be
motivated to fix that?

> ---
>  Documentation/driver-api/dma-buf.rst |  8 +++++++
>  include/uapi/linux/dma-buf.h         | 32 +++++++++++++++++++++++++++-
>  2 files changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> index 7f37ec30d9fd7..784f84fe50a5e 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -88,6 +88,9 @@ consider though:
>  - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
>    details.
>  
> +- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
> +  `DMA Buffer ioctls`_ below for details.
> +
>  Basic Operation and Device DMA Access
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> @@ -106,6 +109,11 @@ Implicit Fence Poll Support
>  .. kernel-doc:: drivers/dma-buf/dma-buf.c
>     :doc: implicit fence polling
>  
> +DMA Buffer ioctls
> +~~~~~~~~~~~~~~~~~
> +
> +.. kernel-doc:: include/uapi/linux/dma-buf.h
> +
>  Kernel Functions and Structures Reference
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index 7f30393b92c3b..1f67ced853b14 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -22,8 +22,38 @@
>  
>  #include <linux/types.h>
>  
> -/* begin/end dma-buf functions used for userspace mmap. */
> +/**
> + * struct dma_buf_sync - Synchronize with CPU access.
> + *
> + * When a DMA buffer is accessed from the CPU via mmap, it is not always
> + * possible to guarantee coherency between the CPU-visible map and underlying
> + * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
> + * any CPU access to give the kernel the chance to shuffle memory around if
> + * needed.
> + *
> + * Prior to accessing the map, the client should call DMA_BUF_IOCTL_SYNC

s/should/must/

> + * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
> + * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
> + * DMA_BUF_SYNC_END and the same read/write flags.

I think we should make it really clear here that this is _only_ for cache
coherency, and furthermore that if you want coherency with gpu access you
either need to use poll() for implicit sync (link to the relevant section)
or handle explicit sync with sync_file (again, a link would be awesome).
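
For illustration, the bracketing the doc describes looks like this from
userspace (a sketch; error checking elided):

    struct dma_buf_sync sync = {
            .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
    };

    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

    /* ... CPU access through the mmap()ed pointer ... */

    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);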

> + */
>  struct dma_buf_sync {
> +	/**
> +	 * @flags: Set of access flags
> +	 *
> +	 * - DMA_BUF_SYNC_START: Indicates the start of a map access

Bikeshed, but I think the item list format instead of a bullet point list
looks neater, e.g. DOC: standard plane properties in drm_plane.c.


> +	 *   session.
> +	 *
> +	 * - DMA_BUF_SYNC_END: Indicates the end of a map access session.
> +	 *
> +	 * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will
> +	 *   be read by the client via the CPU map.
> +	 *
> +	 * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will

s/READ/WRITE/

> +	 *   be written by the client via the CPU map.
> +	 *
> +	 * - DMA_BUF_SYNC_RW: An alias for DMA_BUF_SYNC_READ |
> +	 *   DMA_BUF_SYNC_WRITE.
> +	 */

With the nits addressed: Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

>  	__u64 flags;
>  };
>  
> -- 
> 2.31.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread


* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-26 10:57     ` [Intel-gfx] " Christian König
@ 2021-05-27 10:39       ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-27 10:39 UTC (permalink / raw)
  To: Christian König
  Cc: Gerd Hoffmann, Thomas Zimmermann, Daniel Vetter, intel-gfx,
	Huang Rui, VMware Graphics, dri-devel, Jason Ekstrand, Sean Paul

On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
> Am 25.05.21 um 23:17 schrieb Jason Ekstrand:
> > None of these helpers actually leak any RCU details to the caller.  They
> > all assume you have a genuine reference, take the RCU read lock, and
> > retry if needed.  Naming them with an _rcu is likely to cause callers
> > more panic than needed.
> 
> I'm really wondering if we need this postfix in the first place.
> 
> If we use the right rcu_dereference_check() macro then those functions can
> be called with both the reservation object locked and unlocked. It shouldn't
> matter to them.
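
Something along these lines, presumably (a sketch; the helper name is
made up, dma_resv_held() is the existing lockdep check):

    /* Callable with the resv lock held _or_ under rcu_read_lock(). */
    static inline struct dma_fence *
    dma_resv_excl_fence(struct dma_resv *obj)
    {
            return rcu_dereference_check(obj->fence_excl,
                                         dma_resv_held(obj));
    }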
> 
> But getting rid of the _rcu postfix sounds like a good idea in general to
> me.

So does that count as an ack or not? If yes, I think we should land this
patch right away, since it's going to start conflicting badly real fast.
-Daniel

> 
> Christian.
> 
> > 
> > v2 (Jason Ekstrand):
> >   - Fix function argument indentation
> > 
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: Lucas Stach <l.stach@pengutronix.de>
> > Cc: Rob Clark <robdclark@gmail.com>
> > Cc: Sean Paul <sean@poorly.run>
> > Cc: Huang Rui <ray.huang@amd.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> > ---
> >   drivers/dma-buf/dma-buf.c                     |  4 +--
> >   drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
> >   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
> >   drivers/gpu/drm/drm_gem.c                     | 10 +++----
> >   drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
> >   drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
> >   drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
> >   drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
> >   drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
> >   drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
> >   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
> >   drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
> >   drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
> >   drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
> >   drivers/gpu/drm/i915/i915_request.c           |  6 ++--
> >   drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
> >   drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
> >   drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
> >   drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
> >   drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
> >   drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
> >   drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
> >   drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
> >   drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
> >   drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
> >   drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
> >   drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
> >   include/linux/dma-resv.h                      | 18 ++++++------
> >   36 files changed, 108 insertions(+), 110 deletions(-)
> > 
> > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > index f264b70c383eb..ed6451d55d663 100644
> > --- a/drivers/dma-buf/dma-buf.c
> > +++ b/drivers/dma-buf/dma-buf.c
> > @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >   	long ret;
> >   	/* Wait on any implicit rendering fences */
> > -	ret = dma_resv_wait_timeout_rcu(resv, write, true,
> > -						  MAX_SCHEDULE_TIMEOUT);
> > +	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> > +					     MAX_SCHEDULE_TIMEOUT);
> >   	if (ret < 0)
> >   		return ret;
> > diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> > index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> > --- a/drivers/dma-buf/dma-resv.c
> > +++ b/drivers/dma-buf/dma-resv.c
> > @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
> >   EXPORT_SYMBOL(dma_resv_copy_fences);
> >   /**
> > - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> > + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> >    * fences without update side lock held
> >    * @obj: the reservation object
> >    * @pfence_excl: the returned exclusive fence (or NULL)
> > @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> >    * exclusive fence is not specified the fence is put into the array of the
> >    * shared fences as well. Returns either zero or -ENOMEM.
> >    */
> > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > -			    struct dma_fence **pfence_excl,
> > -			    unsigned *pshared_count,
> > -			    struct dma_fence ***pshared)
> > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > +				 struct dma_fence **pfence_excl,
> > +				 unsigned *pshared_count,
> > +				 struct dma_fence ***pshared)
> >   {
> >   	struct dma_fence **shared = NULL;
> >   	struct dma_fence *fence_excl;
> > @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >   	*pshared = shared;
> >   	return ret;
> >   }
> > -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> > +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> >   /**
> > - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> > + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> >    * shared and/or exclusive fences.
> >    * @obj: the reservation object
> >    * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> > @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >    * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> >    * greater than zero on success.
> >    */
> > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> > -			       bool wait_all, bool intr,
> > -			       unsigned long timeout)
> > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> > +				    bool wait_all, bool intr,
> > +				    unsigned long timeout)
> >   {
> >   	struct dma_fence *fence;
> >   	unsigned seq, shared_count;
> > @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >   	rcu_read_unlock();
> >   	goto retry;
> >   }
> > -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> > +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> >   static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >   }
> >   /**
> > - * dma_resv_test_signaled_rcu - Test if a reservation object's
> > + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> >    * fences have been signaled.
> >    * @obj: the reservation object
> >    * @test_all: if true, test all fences, otherwise only test the exclusive
> > @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >    * RETURNS
> >    * true if all fences signaled, else false
> >    */
> > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> >   {
> >   	unsigned seq, shared_count;
> >   	int ret;
> > @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >   	rcu_read_unlock();
> >   	return ret;
> >   }
> > -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> > +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > index 8a1fb8b6606e5..b8e24f199be9a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> >   		goto unpin;
> >   	}
> > -	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> > -					      &work->shared_count,
> > -					      &work->shared);
> > +	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> > +					 &work->shared_count,
> > +					 &work->shared);
> >   	if (unlikely(r != 0)) {
> >   		DRM_ERROR("failed to get fences for buffer\n");
> >   		goto unpin;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > index baa980a477d94..0d0319bc51577 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> >   	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> >   		return 0;
> > -	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> > +	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> >   	if (r)
> >   		return r;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > index 18974bd081f00..8e2996d6ba3ad 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >   		return -ENOENT;
> >   	}
> >   	robj = gem_to_amdgpu_bo(gobj);
> > -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> > -						  timeout);
> > +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> > +					     timeout);
> >   	/* ret == 0 means not signaled,
> >   	 * ret > 0 means signaled
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > index b4971e90b98cf..38e1b32dd2cef 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >   	unsigned count;
> >   	int r;
> > -	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> > +	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> >   	if (r)
> >   		goto fallback;
> > @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >   	/* Not enough memory for the delayed delete, as last resort
> >   	 * block for all the fences to complete.
> >   	 */
> > -	dma_resv_wait_timeout_rcu(resv, true, false,
> > -					    MAX_SCHEDULE_TIMEOUT);
> > +	dma_resv_wait_timeout_unlocked(resv, true, false,
> > +				       MAX_SCHEDULE_TIMEOUT);
> >   	amdgpu_pasid_free(pasid);
> >   }
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > index 828b5167ff128..0319c8b547c48 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> >   	mmu_interval_set_seq(mni, cur_seq);
> > -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > -				      MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	mutex_unlock(&adev->notifier_lock);
> >   	if (r <= 0)
> >   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > index 0adffcace3263..de1c7c5501683 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> >   		return 0;
> >   	}
> > -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> > -						MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	if (r < 0)
> >   		return r;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > index c6dbc08016045..4a2196404fb69 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> >   	ib->length_dw = 16;
> >   	if (direct) {
> > -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> > -							true, false,
> > -							msecs_to_jiffies(10));
> > +		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> > +						   true, false,
> > +						   msecs_to_jiffies(10));
> >   		if (r == 0)
> >   			r = -ETIMEDOUT;
> >   		if (r < 0)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index 4a3e3f72e1277..7ba1c537d6584 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >   	unsigned i, shared_count;
> >   	int r;
> > -	r = dma_resv_get_fences_rcu(resv, &excl,
> > -					      &shared_count, &shared);
> > +	r = dma_resv_get_fences_unlocked(resv, &excl,
> > +					 &shared_count, &shared);
> >   	if (r) {
> >   		/* Not enough memory to grab the fence list, as last resort
> >   		 * block for all the fences to complete.
> >   		 */
> > -		dma_resv_wait_timeout_rcu(resv, true, false,
> > -						    MAX_SCHEDULE_TIMEOUT);
> > +		dma_resv_wait_timeout_unlocked(resv, true, false,
> > +					       MAX_SCHEDULE_TIMEOUT);
> >   		return;
> >   	}
> > @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> >   		return true;
> >   	/* Don't evict VM page tables while they are busy */
> > -	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> > +	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> >   		return false;
> >   	/* Try to block ongoing updates */
> > @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> >    */
> >   long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> >   {
> > -	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> > -					    true, true, timeout);
> > +	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> > +						 true, true, timeout);
> >   	if (timeout <= 0)
> >   		return timeout;
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index 9ca517b658546..0121d2817fa26 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> >   		 * deadlock during GPU reset when this fence will not signal
> >   		 * but we hold reservation lock for the BO.
> >   		 */
> > -		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> > -							false,
> > -							msecs_to_jiffies(5000));
> > +		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> > +						   false,
> > +						   msecs_to_jiffies(5000));
> >   		if (unlikely(r <= 0))
> >   			DRM_ERROR("Waiting for fences timed out!");
> > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > index 9989425e9875a..1241a421b9e81 100644
> > --- a/drivers/gpu/drm/drm_gem.c
> > +++ b/drivers/gpu/drm/drm_gem.c
> > @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> >   		return -EINVAL;
> >   	}
> > -	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> > -						  true, timeout);
> > +	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> > +					     true, timeout);
> >   	if (ret == 0)
> >   		ret = -ETIME;
> >   	else if (ret > 0)
> > @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> >   	if (!write) {
> >   		struct dma_fence *fence =
> > -			dma_resv_get_excl_rcu(obj->resv);
> > +			dma_resv_get_excl_unlocked(obj->resv);
> >   		return drm_gem_fence_array_add(fence_array, fence);
> >   	}
> > -	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> > -						&fence_count, &fences);
> > +	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> > +					   &fence_count, &fences);
> >   	if (ret || !fence_count)
> >   		return ret;
> > diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > index a005c5a0ba46a..a27135084ae5c 100644
> > --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> >   		return 0;
> >   	obj = drm_gem_fb_get_obj(state->fb, 0);
> > -	fence = dma_resv_get_excl_rcu(obj->resv);
> > +	fence = dma_resv_get_excl_unlocked(obj->resv);
> >   	drm_atomic_set_fence_for_plane(state, fence);
> >   	return 0;
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > index db69f19ab5bca..4e6f5346e84e4 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> >   	}
> >   	if (op & ETNA_PREP_NOSYNC) {
> > -		if (!dma_resv_test_signaled_rcu(obj->resv,
> > -							  write))
> > +		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> >   			return -EBUSY;
> >   	} else {
> >   		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> > -		ret = dma_resv_wait_timeout_rcu(obj->resv,
> > -							  write, true, remain);
> > +		ret = dma_resv_wait_timeout_unlocked(obj->resv,
> > +						     write, true, remain);
> >   		if (ret <= 0)
> >   			return ret == 0 ? -ETIMEDOUT : ret;
> >   	}
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > index d05c359945799..6617fada4595d 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> >   			continue;
> >   		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> > -			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> > -								&bo->nr_shared,
> > -								&bo->shared);
> > +			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> > +							   &bo->nr_shared,
> > +							   &bo->shared);
> >   			if (ret)
> >   				return ret;
> >   		} else {
> > -			bo->excl = dma_resv_get_excl_rcu(robj);
> > +			bo->excl = dma_resv_get_excl_unlocked(robj);
> >   		}
> >   	}
> > diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> > index 422b59ebf6dce..5f0b85a102159 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> >   		if (ret < 0)
> >   			goto unpin_fb;
> > -		fence = dma_resv_get_excl_rcu(obj->base.resv);
> > +		fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >   		if (fence) {
> >   			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> >   						   fence);
> > diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> > index 9e508e7d4629f..bdfc6bf16a4e9 100644
> > --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> > +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> > @@ -10,7 +10,7 @@
> >   void dma_resv_prune(struct dma_resv *resv)
> >   {
> >   	if (dma_resv_trylock(resv)) {
> > -		if (dma_resv_test_signaled_rcu(resv, true))
> > +		if (dma_resv_test_signaled_unlocked(resv, true))
> >   			dma_resv_add_excl_fence(resv, NULL);
> >   		dma_resv_unlock(resv);
> >   	}
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > index 25235ef630c10..754ad6d1bace9 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> >   	 * Alternatively, we can trade that extra information on read/write
> >   	 * activity with
> >   	 *	args->busy =
> > -	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
> > +	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
> >   	 * to report the overall busyness. This is what the wait-ioctl does.
> >   	 *
> >   	 */
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > index 297143511f99b..e8f323564e57b 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> >   	if (DBG_FORCE_RELOC)
> >   		return false;
> > -	return !dma_resv_test_signaled_rcu(vma->resv, true);
> > +	return !dma_resv_test_signaled_unlocked(vma->resv, true);
> >   }
> >   static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > index 2ebd79537aea9..7c0eb425cb3b3 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> >   	struct dma_fence *fence;
> >   	rcu_read_lock();
> > -	fence = dma_resv_get_excl_rcu(obj->base.resv);
> > +	fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >   	rcu_read_unlock();
> >   	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > index a657b99ec7606..44df18dc9669f 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> >   		return true;
> >   	/* we will unbind on next submission, still have userptr pins */
> > -	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> > -				      MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	if (r <= 0)
> >   		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > index 4b9856d5ba14f..5b6c52659ad4d 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >   		unsigned int count, i;
> >   		int ret;
> > -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >   		 */
> >   		prune_fences = count && timeout >= 0;
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(resv);
> > +		excl = dma_resv_get_excl_unlocked(resv);
> >   	}
> >   	if (excl && timeout >= 0)
> > @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >   		unsigned int count, i;
> >   		int ret;
> > -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> > -					      &excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > +						   &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >   		kfree(shared);
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> > +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >   	}
> >   	if (excl) {
> > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > index 970d8f4986bbe..f1ed03ced7dd1 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> >   		struct dma_fence **shared;
> >   		unsigned int count, i;
> > -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> > -							&excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > +						   &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> >   			dma_fence_put(shared[i]);
> >   		kfree(shared);
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> > +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >   	}
> >   	if (excl) {
> > diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> > index 2744558f30507..0bcb7ea44201e 100644
> > --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> > +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> > @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >   		struct dma_fence **shared;
> >   		unsigned int count, i;
> > -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >   			dma_fence_put(shared[i]);
> >   		kfree(shared);
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(resv);
> > +		excl = dma_resv_get_excl_unlocked(resv);
> >   	}
> >   	if (ret >= 0 && excl && excl->ops != exclude) {
> > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > index 56df86e5f7400..1aca60507bb14 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.c
> > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> >   		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> >   	long ret;
> > -	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> > -						  true,  remain);
> > +	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> >   	if (ret == 0)
> >   		return remain == 0 ? -EBUSY : -ETIMEDOUT;
> >   	else if (ret < 0)
> > diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > index 0cb1f9d848d3e..8d048bacd6f02 100644
> > --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> >   			asyw->image.handle[0] = ctxdma->object.handle;
> >   	}
> > -	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> > +	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> >   	asyw->image.offset[0] = nvbo->offset;
> >   	if (wndw->func->prepare) {
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > index a70e82413fa75..bc6b09ee9b552 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> >   		return -ENOENT;
> >   	nvbo = nouveau_gem_object(gem);
> > -	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> > -						   no_wait ? 0 : 30 * HZ);
> > +	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> > +					      no_wait ? 0 : 30 * HZ);
> >   	if (!lret)
> >   		ret = -EBUSY;
> >   	else if (lret > 0)
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > index ca07098a61419..eef5b632ee0ce 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> >   	if (!gem_obj)
> >   		return -ENOENT;
> > -	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> > -						  true, timeout);
> > +	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> > +					     true, timeout);
> >   	if (!ret)
> >   		ret = timeout ? -ETIMEDOUT : -EBUSY;
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> > index 6003cfeb13221..2df3e999a38d0 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> >   	int i;
> >   	for (i = 0; i < bo_count; i++)
> > -		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> > +		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> >   }
> >   static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> > diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> > index 05ea2f39f6261..1a38b0bf36d11 100644
> > --- a/drivers/gpu/drm/radeon/radeon_gem.c
> > +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> > @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> >   	}
> >   	if (domain == RADEON_GEM_DOMAIN_CPU) {
> >   		/* Asking for cpu access wait for object idle */
> > -		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > +		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >   		if (!r)
> >   			r = -EBUSY;
> > @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> >   	}
> >   	robj = gem_to_radeon_bo(gobj);
> > -	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> > +	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> >   	if (r == 0)
> >   		r = -EBUSY;
> >   	else
> > @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >   	}
> >   	robj = gem_to_radeon_bo(gobj);
> > -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >   	if (ret == 0)
> >   		r = -EBUSY;
> >   	else if (ret < 0)
> > diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> > index e37c9a57a7c36..a19be3f8a218c 100644
> > --- a/drivers/gpu/drm/radeon/radeon_mn.c
> > +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> > @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> >   		return true;
> >   	}
> > -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > -				      MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	if (r <= 0)
> >   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> > index ca1b098b6a561..215cad3149621 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >   	struct dma_resv *resv = &bo->base._resv;
> >   	int ret;
> > -	if (dma_resv_test_signaled_rcu(resv, true))
> > +	if (dma_resv_test_signaled_unlocked(resv, true))
> >   		ret = 0;
> >   	else
> >   		ret = -EBUSY;
> > @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >   			dma_resv_unlock(bo->base.resv);
> >   		spin_unlock(&bo->bdev->lru_lock);
> > -		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> > -						 30 * HZ);
> > +		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> > +						      30 * HZ);
> >   		if (lret < 0)
> >   			return lret;
> > @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> >   			/* Last resort, if we fail to allocate memory for the
> >   			 * fences block for the BO to become idle
> >   			 */
> > -			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> > -						  30 * HZ);
> > +			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> > +						       30 * HZ);
> >   		}
> >   		if (bo->bdev->funcs->release_notify)
> > @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> >   		ttm_mem_io_free(bdev, &bo->mem);
> >   	}
> > -	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> > +	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> >   	    !dma_resv_trylock(bo->base.resv)) {
> >   		/* The BO is not idle, resurrect it for delayed destroy */
> >   		ttm_bo_flush_all_fences(bo);
> > @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> >   	long timeout = 15 * HZ;
> >   	if (no_wait) {
> > -		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> > +		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> >   			return 0;
> >   		else
> >   			return -EBUSY;
> >   	}
> > -	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> > -						      interruptible, timeout);
> > +	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> > +						 interruptible, timeout);
> >   	if (timeout < 0)
> >   		return timeout;
> > diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> > index 2902dc6e64faf..010a82405e374 100644
> > --- a/drivers/gpu/drm/vgem/vgem_fence.c
> > +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> > @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> >   	/* Check for a conflicting fence */
> >   	resv = obj->resv;
> > -	if (!dma_resv_test_signaled_rcu(resv,
> > -						  arg->flags & VGEM_FENCE_WRITE)) {
> > +	if (!dma_resv_test_signaled_unlocked(resv,
> > +					     arg->flags & VGEM_FENCE_WRITE)) {
> >   		ret = -EBUSY;
> >   		goto err_fence;
> >   	}
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > index 669f2ee395154..ab010c8e32816 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> >   		return -ENOENT;
> >   	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> > -		ret = dma_resv_test_signaled_rcu(obj->resv, true);
> > +		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> >   	} else {
> > -		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> > -						timeout);
> > +		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> > +						     timeout);
> >   	}
> >   	if (ret == 0)
> >   		ret = -EBUSY;
> > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > index 04dd49c4c2572..19e1ce23842a9 100644
> > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> >   	if (flags & drm_vmw_synccpu_allow_cs) {
> >   		long lret;
> > -		lret = dma_resv_wait_timeout_rcu
> > +		lret = dma_resv_wait_timeout_unlocked
> >   			(bo->base.resv, true, true,
> >   			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> >   		if (!lret)
> > diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> > index d44a77e8a7e34..99cfb7af966b8 100644
> > --- a/include/linux/dma-resv.h
> > +++ b/include/linux/dma-resv.h
> > @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >   }
> >   /**
> > - * dma_resv_get_excl_rcu - get the reservation object's
> > + * dma_resv_get_excl_unlocked - get the reservation object's
> >    * exclusive fence, without lock held.
> >    * @obj: the reservation object
> >    *
> > @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >    * The exclusive fence or NULL if none
> >    */
> >   static inline struct dma_fence *
> > -dma_resv_get_excl_rcu(struct dma_resv *obj)
> > +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> >   {
> >   	struct dma_fence *fence;
> > @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> >   void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > -			    struct dma_fence **pfence_excl,
> > -			    unsigned *pshared_count,
> > -			    struct dma_fence ***pshared);
> > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > +				 struct dma_fence **pfence_excl,
> > +				 unsigned *pshared_count,
> > +				 struct dma_fence ***pshared);
> >   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> > -			       unsigned long timeout);
> > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> > +				    unsigned long timeout);
> > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> >   #endif /* _LINUX_RESERVATION_H */
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

> > + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> >    * fences without update side lock held
> >    * @obj: the reservation object
> >    * @pfence_excl: the returned exclusive fence (or NULL)
> > @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> >    * exclusive fence is not specified the fence is put into the array of the
> >    * shared fences as well. Returns either zero or -ENOMEM.
> >    */
> > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > -			    struct dma_fence **pfence_excl,
> > -			    unsigned *pshared_count,
> > -			    struct dma_fence ***pshared)
> > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > +				 struct dma_fence **pfence_excl,
> > +				 unsigned *pshared_count,
> > +				 struct dma_fence ***pshared)
> >   {
> >   	struct dma_fence **shared = NULL;
> >   	struct dma_fence *fence_excl;
> > @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >   	*pshared = shared;
> >   	return ret;
> >   }
> > -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> > +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> >   /**
> > - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> > + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> >    * shared and/or exclusive fences.
> >    * @obj: the reservation object
> >    * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> > @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >    * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> >    * greater than zero on success.
> >    */
> > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> > -			       bool wait_all, bool intr,
> > -			       unsigned long timeout)
> > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> > +				    bool wait_all, bool intr,
> > +				    unsigned long timeout)
> >   {
> >   	struct dma_fence *fence;
> >   	unsigned seq, shared_count;
> > @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >   	rcu_read_unlock();
> >   	goto retry;
> >   }
> > -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> > +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> >   static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >   }
> >   /**
> > - * dma_resv_test_signaled_rcu - Test if a reservation object's
> > + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> >    * fences have been signaled.
> >    * @obj: the reservation object
> >    * @test_all: if true, test all fences, otherwise only test the exclusive
> > @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >    * RETURNS
> >    * true if all fences signaled, else false
> >    */
> > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> >   {
> >   	unsigned seq, shared_count;
> >   	int ret;
> > @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >   	rcu_read_unlock();
> >   	return ret;
> >   }
> > -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> > +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > index 8a1fb8b6606e5..b8e24f199be9a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> >   		goto unpin;
> >   	}
> > -	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> > -					      &work->shared_count,
> > -					      &work->shared);
> > +	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> > +					 &work->shared_count,
> > +					 &work->shared);
> >   	if (unlikely(r != 0)) {
> >   		DRM_ERROR("failed to get fences for buffer\n");
> >   		goto unpin;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > index baa980a477d94..0d0319bc51577 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> >   	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> >   		return 0;
> > -	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> > +	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> >   	if (r)
> >   		return r;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > index 18974bd081f00..8e2996d6ba3ad 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >   		return -ENOENT;
> >   	}
> >   	robj = gem_to_amdgpu_bo(gobj);
> > -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> > -						  timeout);
> > +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> > +					     timeout);
> >   	/* ret == 0 means not signaled,
> >   	 * ret > 0 means signaled
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > index b4971e90b98cf..38e1b32dd2cef 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >   	unsigned count;
> >   	int r;
> > -	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> > +	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> >   	if (r)
> >   		goto fallback;
> > @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >   	/* Not enough memory for the delayed delete, as last resort
> >   	 * block for all the fences to complete.
> >   	 */
> > -	dma_resv_wait_timeout_rcu(resv, true, false,
> > -					    MAX_SCHEDULE_TIMEOUT);
> > +	dma_resv_wait_timeout_unlocked(resv, true, false,
> > +				       MAX_SCHEDULE_TIMEOUT);
> >   	amdgpu_pasid_free(pasid);
> >   }
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > index 828b5167ff128..0319c8b547c48 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> >   	mmu_interval_set_seq(mni, cur_seq);
> > -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > -				      MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	mutex_unlock(&adev->notifier_lock);
> >   	if (r <= 0)
> >   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > index 0adffcace3263..de1c7c5501683 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> >   		return 0;
> >   	}
> > -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> > -						MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	if (r < 0)
> >   		return r;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > index c6dbc08016045..4a2196404fb69 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> >   	ib->length_dw = 16;
> >   	if (direct) {
> > -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> > -							true, false,
> > -							msecs_to_jiffies(10));
> > +		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> > +						   true, false,
> > +						   msecs_to_jiffies(10));
> >   		if (r == 0)
> >   			r = -ETIMEDOUT;
> >   		if (r < 0)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index 4a3e3f72e1277..7ba1c537d6584 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >   	unsigned i, shared_count;
> >   	int r;
> > -	r = dma_resv_get_fences_rcu(resv, &excl,
> > -					      &shared_count, &shared);
> > +	r = dma_resv_get_fences_unlocked(resv, &excl,
> > +					 &shared_count, &shared);
> >   	if (r) {
> >   		/* Not enough memory to grab the fence list, as last resort
> >   		 * block for all the fences to complete.
> >   		 */
> > -		dma_resv_wait_timeout_rcu(resv, true, false,
> > -						    MAX_SCHEDULE_TIMEOUT);
> > +		dma_resv_wait_timeout_unlocked(resv, true, false,
> > +					       MAX_SCHEDULE_TIMEOUT);
> >   		return;
> >   	}
> > @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> >   		return true;
> >   	/* Don't evict VM page tables while they are busy */
> > -	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> > +	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> >   		return false;
> >   	/* Try to block ongoing updates */
> > @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> >    */
> >   long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> >   {
> > -	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> > -					    true, true, timeout);
> > +	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> > +						 true, true, timeout);
> >   	if (timeout <= 0)
> >   		return timeout;
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index 9ca517b658546..0121d2817fa26 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> >   		 * deadlock during GPU reset when this fence will not signal
> >   		 * but we hold reservation lock for the BO.
> >   		 */
> > -		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> > -							false,
> > -							msecs_to_jiffies(5000));
> > +		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> > +						   false,
> > +						   msecs_to_jiffies(5000));
> >   		if (unlikely(r <= 0))
> >   			DRM_ERROR("Waiting for fences timed out!");
> > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > index 9989425e9875a..1241a421b9e81 100644
> > --- a/drivers/gpu/drm/drm_gem.c
> > +++ b/drivers/gpu/drm/drm_gem.c
> > @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> >   		return -EINVAL;
> >   	}
> > -	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> > -						  true, timeout);
> > +	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> > +					     true, timeout);
> >   	if (ret == 0)
> >   		ret = -ETIME;
> >   	else if (ret > 0)
> > @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> >   	if (!write) {
> >   		struct dma_fence *fence =
> > -			dma_resv_get_excl_rcu(obj->resv);
> > +			dma_resv_get_excl_unlocked(obj->resv);
> >   		return drm_gem_fence_array_add(fence_array, fence);
> >   	}
> > -	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> > -						&fence_count, &fences);
> > +	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> > +					   &fence_count, &fences);
> >   	if (ret || !fence_count)
> >   		return ret;
> > diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > index a005c5a0ba46a..a27135084ae5c 100644
> > --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> >   		return 0;
> >   	obj = drm_gem_fb_get_obj(state->fb, 0);
> > -	fence = dma_resv_get_excl_rcu(obj->resv);
> > +	fence = dma_resv_get_excl_unlocked(obj->resv);
> >   	drm_atomic_set_fence_for_plane(state, fence);
> >   	return 0;
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > index db69f19ab5bca..4e6f5346e84e4 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> >   	}
> >   	if (op & ETNA_PREP_NOSYNC) {
> > -		if (!dma_resv_test_signaled_rcu(obj->resv,
> > -							  write))
> > +		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> >   			return -EBUSY;
> >   	} else {
> >   		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> > -		ret = dma_resv_wait_timeout_rcu(obj->resv,
> > -							  write, true, remain);
> > +		ret = dma_resv_wait_timeout_unlocked(obj->resv,
> > +						     write, true, remain);
> >   		if (ret <= 0)
> >   			return ret == 0 ? -ETIMEDOUT : ret;
> >   	}
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > index d05c359945799..6617fada4595d 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> >   			continue;
> >   		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> > -			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> > -								&bo->nr_shared,
> > -								&bo->shared);
> > +			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> > +							   &bo->nr_shared,
> > +							   &bo->shared);
> >   			if (ret)
> >   				return ret;
> >   		} else {
> > -			bo->excl = dma_resv_get_excl_rcu(robj);
> > +			bo->excl = dma_resv_get_excl_unlocked(robj);
> >   		}
> >   	}
> > diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> > index 422b59ebf6dce..5f0b85a102159 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> >   		if (ret < 0)
> >   			goto unpin_fb;
> > -		fence = dma_resv_get_excl_rcu(obj->base.resv);
> > +		fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >   		if (fence) {
> >   			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> >   						   fence);
> > diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> > index 9e508e7d4629f..bdfc6bf16a4e9 100644
> > --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> > +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> > @@ -10,7 +10,7 @@
> >   void dma_resv_prune(struct dma_resv *resv)
> >   {
> >   	if (dma_resv_trylock(resv)) {
> > -		if (dma_resv_test_signaled_rcu(resv, true))
> > +		if (dma_resv_test_signaled_unlocked(resv, true))
> >   			dma_resv_add_excl_fence(resv, NULL);
> >   		dma_resv_unlock(resv);
> >   	}
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > index 25235ef630c10..754ad6d1bace9 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> >   	 * Alternatively, we can trade that extra information on read/write
> >   	 * activity with
> >   	 *	args->busy =
> > -	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
> > +	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
> >   	 * to report the overall busyness. This is what the wait-ioctl does.
> >   	 *
> >   	 */
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > index 297143511f99b..e8f323564e57b 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> >   	if (DBG_FORCE_RELOC)
> >   		return false;
> > -	return !dma_resv_test_signaled_rcu(vma->resv, true);
> > +	return !dma_resv_test_signaled_unlocked(vma->resv, true);
> >   }
> >   static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > index 2ebd79537aea9..7c0eb425cb3b3 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> >   	struct dma_fence *fence;
> >   	rcu_read_lock();
> > -	fence = dma_resv_get_excl_rcu(obj->base.resv);
> > +	fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >   	rcu_read_unlock();
> >   	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > index a657b99ec7606..44df18dc9669f 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> >   		return true;
> >   	/* we will unbind on next submission, still have userptr pins */
> > -	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> > -				      MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	if (r <= 0)
> >   		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > index 4b9856d5ba14f..5b6c52659ad4d 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >   		unsigned int count, i;
> >   		int ret;
> > -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >   		 */
> >   		prune_fences = count && timeout >= 0;
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(resv);
> > +		excl = dma_resv_get_excl_unlocked(resv);
> >   	}
> >   	if (excl && timeout >= 0)
> > @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >   		unsigned int count, i;
> >   		int ret;
> > -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> > -					      &excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > +						   &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >   		kfree(shared);
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> > +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >   	}
> >   	if (excl) {
> > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > index 970d8f4986bbe..f1ed03ced7dd1 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> >   		struct dma_fence **shared;
> >   		unsigned int count, i;
> > -		ret = dma_resv_get_fences_rcu(obj->base.resv,
> > -							&excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > +						   &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> >   			dma_fence_put(shared[i]);
> >   		kfree(shared);
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(obj->base.resv);
> > +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >   	}
> >   	if (excl) {
> > diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> > index 2744558f30507..0bcb7ea44201e 100644
> > --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> > +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> > @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >   		struct dma_fence **shared;
> >   		unsigned int count, i;
> > -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >   		if (ret)
> >   			return ret;
> > @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >   			dma_fence_put(shared[i]);
> >   		kfree(shared);
> >   	} else {
> > -		excl = dma_resv_get_excl_rcu(resv);
> > +		excl = dma_resv_get_excl_unlocked(resv);
> >   	}
> >   	if (ret >= 0 && excl && excl->ops != exclude) {
> > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > index 56df86e5f7400..1aca60507bb14 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.c
> > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> >   		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> >   	long ret;
> > -	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> > -						  true,  remain);
> > +	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> >   	if (ret == 0)
> >   		return remain == 0 ? -EBUSY : -ETIMEDOUT;
> >   	else if (ret < 0)
> > diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > index 0cb1f9d848d3e..8d048bacd6f02 100644
> > --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> >   			asyw->image.handle[0] = ctxdma->object.handle;
> >   	}
> > -	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> > +	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> >   	asyw->image.offset[0] = nvbo->offset;
> >   	if (wndw->func->prepare) {
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > index a70e82413fa75..bc6b09ee9b552 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> >   		return -ENOENT;
> >   	nvbo = nouveau_gem_object(gem);
> > -	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> > -						   no_wait ? 0 : 30 * HZ);
> > +	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> > +					      no_wait ? 0 : 30 * HZ);
> >   	if (!lret)
> >   		ret = -EBUSY;
> >   	else if (lret > 0)
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > index ca07098a61419..eef5b632ee0ce 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> >   	if (!gem_obj)
> >   		return -ENOENT;
> > -	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> > -						  true, timeout);
> > +	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> > +					     true, timeout);
> >   	if (!ret)
> >   		ret = timeout ? -ETIMEDOUT : -EBUSY;
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> > index 6003cfeb13221..2df3e999a38d0 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> >   	int i;
> >   	for (i = 0; i < bo_count; i++)
> > -		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> > +		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> >   }
> >   static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> > diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> > index 05ea2f39f6261..1a38b0bf36d11 100644
> > --- a/drivers/gpu/drm/radeon/radeon_gem.c
> > +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> > @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> >   	}
> >   	if (domain == RADEON_GEM_DOMAIN_CPU) {
> >   		/* Asking for cpu access wait for object idle */
> > -		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > +		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >   		if (!r)
> >   			r = -EBUSY;
> > @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> >   	}
> >   	robj = gem_to_radeon_bo(gobj);
> > -	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> > +	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> >   	if (r == 0)
> >   		r = -EBUSY;
> >   	else
> > @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >   	}
> >   	robj = gem_to_radeon_bo(gobj);
> > -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >   	if (ret == 0)
> >   		r = -EBUSY;
> >   	else if (ret < 0)
> > diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> > index e37c9a57a7c36..a19be3f8a218c 100644
> > --- a/drivers/gpu/drm/radeon/radeon_mn.c
> > +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> > @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> >   		return true;
> >   	}
> > -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > -				      MAX_SCHEDULE_TIMEOUT);
> > +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > +					   MAX_SCHEDULE_TIMEOUT);
> >   	if (r <= 0)
> >   		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> > index ca1b098b6a561..215cad3149621 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >   	struct dma_resv *resv = &bo->base._resv;
> >   	int ret;
> > -	if (dma_resv_test_signaled_rcu(resv, true))
> > +	if (dma_resv_test_signaled_unlocked(resv, true))
> >   		ret = 0;
> >   	else
> >   		ret = -EBUSY;
> > @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >   			dma_resv_unlock(bo->base.resv);
> >   		spin_unlock(&bo->bdev->lru_lock);
> > -		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> > -						 30 * HZ);
> > +		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> > +						      30 * HZ);
> >   		if (lret < 0)
> >   			return lret;
> > @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> >   			/* Last resort, if we fail to allocate memory for the
> >   			 * fences block for the BO to become idle
> >   			 */
> > -			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> > -						  30 * HZ);
> > +			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> > +						       30 * HZ);
> >   		}
> >   		if (bo->bdev->funcs->release_notify)
> > @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> >   		ttm_mem_io_free(bdev, &bo->mem);
> >   	}
> > -	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> > +	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> >   	    !dma_resv_trylock(bo->base.resv)) {
> >   		/* The BO is not idle, resurrect it for delayed destroy */
> >   		ttm_bo_flush_all_fences(bo);
> > @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> >   	long timeout = 15 * HZ;
> >   	if (no_wait) {
> > -		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> > +		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> >   			return 0;
> >   		else
> >   			return -EBUSY;
> >   	}
> > -	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> > -						      interruptible, timeout);
> > +	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> > +						 interruptible, timeout);
> >   	if (timeout < 0)
> >   		return timeout;
> > diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> > index 2902dc6e64faf..010a82405e374 100644
> > --- a/drivers/gpu/drm/vgem/vgem_fence.c
> > +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> > @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> >   	/* Check for a conflicting fence */
> >   	resv = obj->resv;
> > -	if (!dma_resv_test_signaled_rcu(resv,
> > -						  arg->flags & VGEM_FENCE_WRITE)) {
> > +	if (!dma_resv_test_signaled_unlocked(resv,
> > +					     arg->flags & VGEM_FENCE_WRITE)) {
> >   		ret = -EBUSY;
> >   		goto err_fence;
> >   	}
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > index 669f2ee395154..ab010c8e32816 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> >   		return -ENOENT;
> >   	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> > -		ret = dma_resv_test_signaled_rcu(obj->resv, true);
> > +		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> >   	} else {
> > -		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> > -						timeout);
> > +		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> > +						     timeout);
> >   	}
> >   	if (ret == 0)
> >   		ret = -EBUSY;
> > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > index 04dd49c4c2572..19e1ce23842a9 100644
> > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> >   	if (flags & drm_vmw_synccpu_allow_cs) {
> >   		long lret;
> > -		lret = dma_resv_wait_timeout_rcu
> > +		lret = dma_resv_wait_timeout_unlocked
> >   			(bo->base.resv, true, true,
> >   			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> >   		if (!lret)
> > diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> > index d44a77e8a7e34..99cfb7af966b8 100644
> > --- a/include/linux/dma-resv.h
> > +++ b/include/linux/dma-resv.h
> > @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >   }
> >   /**
> > - * dma_resv_get_excl_rcu - get the reservation object's
> > + * dma_resv_get_excl_unlocked - get the reservation object's
> >    * exclusive fence, without lock held.
> >    * @obj: the reservation object
> >    *
> > @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >    * The exclusive fence or NULL if none
> >    */
> >   static inline struct dma_fence *
> > -dma_resv_get_excl_rcu(struct dma_resv *obj)
> > +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> >   {
> >   	struct dma_fence *fence;
> > @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> >   void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > -			    struct dma_fence **pfence_excl,
> > -			    unsigned *pshared_count,
> > -			    struct dma_fence ***pshared);
> > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > +				 struct dma_fence **pfence_excl,
> > +				 unsigned *pshared_count,
> > +				 struct dma_fence ***pshared);
> >   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> > -			       unsigned long timeout);
> > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> > +				    unsigned long timeout);
> > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> >   #endif /* _LINUX_RESERVATION_H */
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-27 10:33               ` Daniel Vetter
@ 2021-05-27 10:48                 ` Simon Ser
  -1 siblings, 0 replies; 68+ messages in thread
From: Simon Ser @ 2021-05-27 10:48 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, dri-devel, Jason Ekstrand,
	Christian König

On Thursday, May 27th, 2021 at 12:33 PM, Daniel Vetter <daniel@ffwll.ch> wrote:

> > The sync_file is also import/exportable to a certain drm_syncobj timeline
> > point (or as binary signal). So no big deal, we are all compatible here :)
> > I just thought that it might be more appropriate to return a drm_syncobj
> > directly instead of a sync_file.
>
> I think another big potential user for this is a vk-based compositor which
> needs to interact/support implicit synced clients. And compositor world I
> think is right now still more sync_file (because that's where we started
> with atomic kms ioctl).

Yeah, right now compositors can only deal with sync_file. I have a
Vulkan pull request for wlroots [1] that would benefit from this, I
think.

Also it seems like drm_syncobj isn't necessarily going to be the
sync primitive that ends them all with all that user-space fence
discussion, so long-term I guess we'll need a conversion anyways.

[1]: https://github.com/swaywm/wlroots/pull/2771

> Plus you can convert them to the other form anyway, so really doesn't
> matter much. But for the above reasons I'm leaning slightly towards
> sync_file. Except if compositor folks tell me I'm a fool and why :-)

sync_file is fine from my PoV.
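
(For completeness, since conversion came up: going between a sync_file and a
binary drm_syncobj is a couple of libdrm calls. A rough sketch, error
handling mostly omitted:)

#include <stdint.h>
#include <xf86drm.h>

/* Wrap a sync_file fd's fence in a freshly created binary syncobj. */
static int sync_file_to_syncobj(int drm_fd, int sync_file_fd, uint32_t *handle)
{
	int ret = drmSyncobjCreate(drm_fd, 0, handle);

	if (ret)
		return ret;
	return drmSyncobjImportSyncFile(drm_fd, *handle, sync_file_fd);
}

/* The reverse: export the syncobj's current fence as a new sync_file fd. */
static int syncobj_to_sync_file(int drm_fd, uint32_t handle, int *sync_file_fd)
{
	return drmSyncobjExportSyncFile(drm_fd, handle, sync_file_fd);
}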

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 4/7] dma-buf: Document DMA_BUF_IOCTL_SYNC
  2021-05-27 10:38     ` [Intel-gfx] " Daniel Vetter
@ 2021-05-27 11:12       ` Sumit Semwal
  -1 siblings, 0 replies; 68+ messages in thread
From: Sumit Semwal @ 2021-05-27 11:12 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, Intel Graphics Development, DRI mailing list,
	Jason Ekstrand, Christian König

Hi Daniel,

On Thu, 27 May 2021 at 16:08, Daniel Vetter <daniel@ffwll.ch> wrote:

> On Tue, May 25, 2021 at 04:17:50PM -0500, Jason Ekstrand wrote:
> > This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
> > documentation for DMA_BUF_IOCTL_SYNC.
> >
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
>
> We're still missing the doc for the SET_NAME ioctl, but maybe Sumit can be
> motivated to fix that?
>

Yes, certainly, I'll cook up a patch and send it soon.

>
> > ---
> >  Documentation/driver-api/dma-buf.rst |  8 +++++++
> >  include/uapi/linux/dma-buf.h         | 32 +++++++++++++++++++++++++++-
> >  2 files changed, 39 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> > index 7f37ec30d9fd7..784f84fe50a5e 100644
> > --- a/Documentation/driver-api/dma-buf.rst
> > +++ b/Documentation/driver-api/dma-buf.rst
> > @@ -88,6 +88,9 @@ consider though:
> >  - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
> >    details.
> >
> > +- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
> > +  `DMA Buffer ioctls`_ below for details.
> > +
> >  Basic Operation and Device DMA Access
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > @@ -106,6 +109,11 @@ Implicit Fence Poll Support
> >  .. kernel-doc:: drivers/dma-buf/dma-buf.c
> >     :doc: implicit fence polling
> >
> > +DMA Buffer ioctls
> > +~~~~~~~~~~~~~~~~~
> > +
> > +.. kernel-doc:: include/uapi/linux/dma-buf.h
> > +
> >  Kernel Functions and Structures Reference
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> > index 7f30393b92c3b..1f67ced853b14 100644
> > --- a/include/uapi/linux/dma-buf.h
> > +++ b/include/uapi/linux/dma-buf.h
> > @@ -22,8 +22,38 @@
> >
> >  #include <linux/types.h>
> >
> > -/* begin/end dma-buf functions used for userspace mmap. */
> > +/**
> > + * struct dma_buf_sync - Synchronize with CPU access.
> > + *
> > + * When a DMA buffer is accessed from the CPU via mmap, it is not always
> > + * possible to guarantee coherency between the CPU-visible map and underlying
> > + * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
> > + * any CPU access to give the kernel the chance to shuffle memory around if
> > + * needed.
> > + *
> > + * Prior to accessing the map, the client should call DMA_BUF_IOCTL_SYNC
>
> s/should/must/
>
> > + * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
> > + * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
> > + * DMA_BUF_SYNC_END and the same read/write flags.
>
> I think we should make it really clear here that this is _only_ for cache
> coherency, and that furthermore if you want coherency with gpu access you
> either need to use poll() for implicit sync (link to the relevant section)
> or handle explicit sync with sync_file (again link would be awesome).
>
> > + */
> >  struct dma_buf_sync {
> > +     /**
> > +      * @flags: Set of access flags
> > +      *
> > +      * - DMA_BUF_SYNC_START: Indicates the start of a map access
>
> Bikeshed, but I think the item list format instead of bullet point list
> looks neater, e.g.  DOC: standard plane properties in drm_plane.c.
>
>
> > +      *   session.
> > +      *
> > +      * - DMA_BUF_SYNC_END: Indicates the end of a map access session.
> > +      *
> > +      * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will
> > +      *   be read by the client via the CPU map.
> > +      *
> > +      * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will
>
> s/READ/WRITE/
>
> > +      *   be written by the client via the CPU map.
> > +      *
> > +      * - DMA_BUF_SYNC_RW: An alias for DMA_BUF_SYNC_READ |
> > +      *   DMA_BUF_SYNC_WRITE.
> > +      */
>
> With the nits addressed: Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> >       __u64 flags;
> >  };
> >
> > --
> > 2.31.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
>

Best,
Sumit.

-- 
Thanks and regards,

Sumit Semwal
Tech Lead - Android, Vertical Technologies
Linaro.org │ Open source software for ARM SoCs

^ permalink raw reply	[flat|nested] 68+ messages in thread
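
To make the bracketing contract above concrete, CPU access to an mmap()ed
dma-buf looks roughly like this from userspace (a sketch against the uapi
header as patched above, error handling omitted):

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket a CPU write to an mmap()ed dma-buf with DMA_BUF_IOCTL_SYNC. */
static void cpu_write_byte(int dmabuf_fd, uint8_t *map, size_t off, uint8_t val)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* begin CPU access */

	map[off] = val;					/* the actual CPU access */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* end CPU access */
}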

* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-27 10:39       ` [Intel-gfx] " Daniel Vetter
@ 2021-05-27 11:58         ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-27 11:58 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, Thomas Zimmermann, Daniel Vetter, intel-gfx,
	Huang Rui, VMware Graphics, dri-devel, Jason Ekstrand, Sean Paul

On 27.05.21 at 12:39, Daniel Vetter wrote:
> On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>> None of these helpers actually leak any RCU details to the caller.  They
>>> all assume you have a genuine reference, take the RCU read lock, and
>>> retry if needed.  Naming them with an _rcu is likely to cause callers
>>> more panic than needed.
>> I'm really wondering if we need this postfix in the first place.
>>
>> If we use the right rcu_dereference_check() macro then those functions can
>> be called with both the reservation object locked and unlocked. It shouldn't
>> matter to them.
>>
>> But getting rid of the _rcu postfix sounds like a good idea in general to
>> me.
> So does that count as an ack or not? If yes I think we should land this
> patch right away, since it's going to conflict badly real fast.

I had some follow up discussion with Jason and I would rather like to 
switch to using rcu_dereference_check() in all places and completely 
remove the _rcu postfix.
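
A sketch of what that could look like, assuming the existing dma_resv_held()
lockdep helper (the caller still has to hold either the reservation lock or
the RCU read lock around any use of the returned pointer):

static inline struct dma_fence *
dma_resv_excl_fence(struct dma_resv *obj)
{
	/*
	 * rcu_dereference_check() is satisfied either inside an RCU
	 * read-side critical section or while dma_resv_held(obj) is
	 * true, so a single accessor serves locked and unlocked
	 * callers alike; no _rcu/_unlocked naming split needed.
	 */
	return rcu_dereference_check(obj->fence_excl,
				     dma_resv_held(obj));
}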

But yes I see the pain of rebasing this as well.

Christian.

> -Daniel
>
>> Christian.
>>
>>> v2 (Jason Ekstrand):
>>>    - Fix function argument indentation
>>>
>>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
>>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Cc: Maxime Ripard <mripard@kernel.org>
>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>> Cc: Rob Clark <robdclark@gmail.com>
>>> Cc: Sean Paul <sean@poorly.run>
>>> Cc: Huang Rui <ray.huang@amd.com>
>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
>>> ---
>>>    drivers/dma-buf/dma-buf.c                     |  4 +--
>>>    drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>>>    .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>>>    drivers/gpu/drm/drm_gem.c                     | 10 +++----
>>>    drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>>>    drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>>>    drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>>>    drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>>>    drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>>>    drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>>>    .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>>>    drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>>>    drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>>>    drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>>>    drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>>>    drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>>>    drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>>>    drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>>>    drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>>>    drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>>>    drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>>>    drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>>>    drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>>>    drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>>>    drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>>>    drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>>>    drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>>>    include/linux/dma-resv.h                      | 18 ++++++------
>>>    36 files changed, 108 insertions(+), 110 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index f264b70c383eb..ed6451d55d663 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>    	long ret;
>>>    	/* Wait on any implicit rendering fences */
>>> -	ret = dma_resv_wait_timeout_rcu(resv, write, true,
>>> -						  MAX_SCHEDULE_TIMEOUT);
>>> +	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
>>> +					     MAX_SCHEDULE_TIMEOUT);
>>>    	if (ret < 0)
>>>    		return ret;
>>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
>>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
>>> --- a/drivers/dma-buf/dma-resv.c
>>> +++ b/drivers/dma-buf/dma-resv.c
>>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>>>    EXPORT_SYMBOL(dma_resv_copy_fences);
>>>    /**
>>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
>>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>>>     * fences without update side lock held
>>>     * @obj: the reservation object
>>>     * @pfence_excl: the returned exclusive fence (or NULL)
>>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>>>     * exclusive fence is not specified the fence is put into the array of the
>>>     * shared fences as well. Returns either zero or -ENOMEM.
>>>     */
>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>> -			    struct dma_fence **pfence_excl,
>>> -			    unsigned *pshared_count,
>>> -			    struct dma_fence ***pshared)
>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>> +				 struct dma_fence **pfence_excl,
>>> +				 unsigned *pshared_count,
>>> +				 struct dma_fence ***pshared)
>>>    {
>>>    	struct dma_fence **shared = NULL;
>>>    	struct dma_fence *fence_excl;
>>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>    	*pshared = shared;
>>>    	return ret;
>>>    }
>>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>>>    /**
>>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
>>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>>>     * shared and/or exclusive fences.
>>>     * @obj: the reservation object
>>>     * @wait_all: if true, wait on all fences, else wait on just exclusive fence
>>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>     * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>>>     * greater than zero on success.
>>>     */
>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>> -			       bool wait_all, bool intr,
>>> -			       unsigned long timeout)
>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
>>> +				    bool wait_all, bool intr,
>>> +				    unsigned long timeout)
>>>    {
>>>    	struct dma_fence *fence;
>>>    	unsigned seq, shared_count;
>>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>    	rcu_read_unlock();
>>>    	goto retry;
>>>    }
>>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
>>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>>>    static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>    }
>>>    /**
>>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
>>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>>>     * fences have been signaled.
>>>     * @obj: the reservation object
>>>     * @test_all: if true, test all fences, otherwise only test the exclusive
>>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>     * RETURNS
>>>     * true if all fences signaled, else false
>>>     */
>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>>>    {
>>>    	unsigned seq, shared_count;
>>>    	int ret;
>>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>    	rcu_read_unlock();
>>>    	return ret;
>>>    }
>>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
>>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>> index 8a1fb8b6606e5..b8e24f199be9a 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>>>    		goto unpin;
>>>    	}
>>> -	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
>>> -					      &work->shared_count,
>>> -					      &work->shared);
>>> +	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
>>> +					 &work->shared_count,
>>> +					 &work->shared);
>>>    	if (unlikely(r != 0)) {
>>>    		DRM_ERROR("failed to get fences for buffer\n");
>>>    		goto unpin;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>> index baa980a477d94..0d0319bc51577 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>>>    	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>>>    		return 0;
>>> -	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
>>> +	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>>>    	if (r)
>>>    		return r;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> index 18974bd081f00..8e2996d6ba3ad 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>    		return -ENOENT;
>>>    	}
>>>    	robj = gem_to_amdgpu_bo(gobj);
>>> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
>>> -						  timeout);
>>> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
>>> +					     timeout);
>>>    	/* ret == 0 means not signaled,
>>>    	 * ret > 0 means signaled
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>> index b4971e90b98cf..38e1b32dd2cef 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>    	unsigned count;
>>>    	int r;
>>> -	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
>>> +	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>>>    	if (r)
>>>    		goto fallback;
>>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>    	/* Not enough memory for the delayed delete, as last resort
>>>    	 * block for all the fences to complete.
>>>    	 */
>>> -	dma_resv_wait_timeout_rcu(resv, true, false,
>>> -					    MAX_SCHEDULE_TIMEOUT);
>>> +	dma_resv_wait_timeout_unlocked(resv, true, false,
>>> +				       MAX_SCHEDULE_TIMEOUT);
>>>    	amdgpu_pasid_free(pasid);
>>>    }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>> index 828b5167ff128..0319c8b547c48 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>>>    	mmu_interval_set_seq(mni, cur_seq);
>>> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>> -				      MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	mutex_unlock(&adev->notifier_lock);
>>>    	if (r <= 0)
>>>    		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> index 0adffcace3263..de1c7c5501683 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>>>    		return 0;
>>>    	}
>>> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
>>> -						MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	if (r < 0)
>>>    		return r;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> index c6dbc08016045..4a2196404fb69 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>    	ib->length_dw = 16;
>>>    	if (direct) {
>>> -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
>>> -							true, false,
>>> -							msecs_to_jiffies(10));
>>> +		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
>>> +						   true, false,
>>> +						   msecs_to_jiffies(10));
>>>    		if (r == 0)
>>>    			r = -ETIMEDOUT;
>>>    		if (r < 0)
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 4a3e3f72e1277..7ba1c537d6584 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>    	unsigned i, shared_count;
>>>    	int r;
>>> -	r = dma_resv_get_fences_rcu(resv, &excl,
>>> -					      &shared_count, &shared);
>>> +	r = dma_resv_get_fences_unlocked(resv, &excl,
>>> +					 &shared_count, &shared);
>>>    	if (r) {
>>>    		/* Not enough memory to grab the fence list, as last resort
>>>    		 * block for all the fences to complete.
>>>    		 */
>>> -		dma_resv_wait_timeout_rcu(resv, true, false,
>>> -						    MAX_SCHEDULE_TIMEOUT);
>>> +		dma_resv_wait_timeout_unlocked(resv, true, false,
>>> +					       MAX_SCHEDULE_TIMEOUT);
>>>    		return;
>>>    	}
>>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>>>    		return true;
>>>    	/* Don't evict VM page tables while they are busy */
>>> -	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
>>> +	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>>>    		return false;
>>>    	/* Try to block ongoing updates */
>>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>>>     */
>>>    long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>>>    {
>>> -	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
>>> -					    true, true, timeout);
>>> +	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
>>> +						 true, true, timeout);
>>>    	if (timeout <= 0)
>>>    		return timeout;
>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> index 9ca517b658546..0121d2817fa26 100644
>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>>>    		 * deadlock during GPU reset when this fence will not signal
>>>    		 * but we hold reservation lock for the BO.
>>>    		 */
>>> -		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
>>> -							false,
>>> -							msecs_to_jiffies(5000));
>>> +		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
>>> +						   false,
>>> +						   msecs_to_jiffies(5000));
>>>    		if (unlikely(r <= 0))
>>>    			DRM_ERROR("Waiting for fences timed out!");
>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>> index 9989425e9875a..1241a421b9e81 100644
>>> --- a/drivers/gpu/drm/drm_gem.c
>>> +++ b/drivers/gpu/drm/drm_gem.c
>>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>>>    		return -EINVAL;
>>>    	}
>>> -	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
>>> -						  true, timeout);
>>> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
>>> +					     true, timeout);
>>>    	if (ret == 0)
>>>    		ret = -ETIME;
>>>    	else if (ret > 0)
>>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>>>    	if (!write) {
>>>    		struct dma_fence *fence =
>>> -			dma_resv_get_excl_rcu(obj->resv);
>>> +			dma_resv_get_excl_unlocked(obj->resv);
>>>    		return drm_gem_fence_array_add(fence_array, fence);
>>>    	}
>>> -	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
>>> -						&fence_count, &fences);
>>> +	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
>>> +					   &fence_count, &fences);
>>>    	if (ret || !fence_count)
>>>    		return ret;
>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>> index a005c5a0ba46a..a27135084ae5c 100644
>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>    		return 0;
>>>    	obj = drm_gem_fb_get_obj(state->fb, 0);
>>> -	fence = dma_resv_get_excl_rcu(obj->resv);
>>> +	fence = dma_resv_get_excl_unlocked(obj->resv);
>>>    	drm_atomic_set_fence_for_plane(state, fence);
>>>    	return 0;
>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>> index db69f19ab5bca..4e6f5346e84e4 100644
>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>>>    	}
>>>    	if (op & ETNA_PREP_NOSYNC) {
>>> -		if (!dma_resv_test_signaled_rcu(obj->resv,
>>> -							  write))
>>> +		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>>>    			return -EBUSY;
>>>    	} else {
>>>    		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>>> -		ret = dma_resv_wait_timeout_rcu(obj->resv,
>>> -							  write, true, remain);
>>> +		ret = dma_resv_wait_timeout_unlocked(obj->resv,
>>> +						     write, true, remain);
>>>    		if (ret <= 0)
>>>    			return ret == 0 ? -ETIMEDOUT : ret;
>>>    	}
>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>> index d05c359945799..6617fada4595d 100644
>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>>>    			continue;
>>>    		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
>>> -			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
>>> -								&bo->nr_shared,
>>> -								&bo->shared);
>>> +			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
>>> +							   &bo->nr_shared,
>>> +							   &bo->shared);
>>>    			if (ret)
>>>    				return ret;
>>>    		} else {
>>> -			bo->excl = dma_resv_get_excl_rcu(robj);
>>> +			bo->excl = dma_resv_get_excl_unlocked(robj);
>>>    		}
>>>    	}
>>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
>>> index 422b59ebf6dce..5f0b85a102159 100644
>>> --- a/drivers/gpu/drm/i915/display/intel_display.c
>>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
>>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>>>    		if (ret < 0)
>>>    			goto unpin_fb;
>>> -		fence = dma_resv_get_excl_rcu(obj->base.resv);
>>> +		fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    		if (fence) {
>>>    			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>>>    						   fence);
>>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
>>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
>>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
>>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
>>> @@ -10,7 +10,7 @@
>>>    void dma_resv_prune(struct dma_resv *resv)
>>>    {
>>>    	if (dma_resv_trylock(resv)) {
>>> -		if (dma_resv_test_signaled_rcu(resv, true))
>>> +		if (dma_resv_test_signaled_unlocked(resv, true))
>>>    			dma_resv_add_excl_fence(resv, NULL);
>>>    		dma_resv_unlock(resv);
>>>    	}
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>> index 25235ef630c10..754ad6d1bace9 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>    	 * Alternatively, we can trade that extra information on read/write
>>>    	 * activity with
>>>    	 *	args->busy =
>>> -	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
>>> +	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
>>>    	 * to report the overall busyness. This is what the wait-ioctl does.
>>>    	 *
>>>    	 */
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> index 297143511f99b..e8f323564e57b 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>>>    	if (DBG_FORCE_RELOC)
>>>    		return false;
>>> -	return !dma_resv_test_signaled_rcu(vma->resv, true);
>>> +	return !dma_resv_test_signaled_unlocked(vma->resv, true);
>>>    }
>>>    static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> index 2ebd79537aea9..7c0eb425cb3b3 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>>>    	struct dma_fence *fence;
>>>    	rcu_read_lock();
>>> -	fence = dma_resv_get_excl_rcu(obj->base.resv);
>>> +	fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    	rcu_read_unlock();
>>>    	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> index a657b99ec7606..44df18dc9669f 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>>    		return true;
>>>    	/* we will unbind on next submission, still have userptr pins */
>>> -	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>> -				      MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	if (r <= 0)
>>>    		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>> index 4b9856d5ba14f..5b6c52659ad4d 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>    		unsigned int count, i;
>>>    		int ret;
>>> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>    		 */
>>>    		prune_fences = count && timeout >= 0;
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(resv);
>>> +		excl = dma_resv_get_excl_unlocked(resv);
>>>    	}
>>>    	if (excl && timeout >= 0)
>>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>    		unsigned int count, i;
>>>    		int ret;
>>> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
>>> -					      &excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>> +						   &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>    		kfree(shared);
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
>>> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    	}
>>>    	if (excl) {
>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>> index 970d8f4986bbe..f1ed03ced7dd1 100644
>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>>>    		struct dma_fence **shared;
>>>    		unsigned int count, i;
>>> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
>>> -							&excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>> +						   &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>>>    			dma_fence_put(shared[i]);
>>>    		kfree(shared);
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
>>> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    	}
>>>    	if (excl) {
>>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
>>> index 2744558f30507..0bcb7ea44201e 100644
>>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
>>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
>>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>    		struct dma_fence **shared;
>>>    		unsigned int count, i;
>>> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>    			dma_fence_put(shared[i]);
>>>    		kfree(shared);
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(resv);
>>> +		excl = dma_resv_get_excl_unlocked(resv);
>>>    	}
>>>    	if (ret >= 0 && excl && excl->ops != exclude) {
>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>> index 56df86e5f7400..1aca60507bb14 100644
>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>>>    		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>>>    	long ret;
>>> -	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
>>> -						  true,  remain);
>>> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>>>    	if (ret == 0)
>>>    		return remain == 0 ? -EBUSY : -ETIMEDOUT;
>>>    	else if (ret < 0)
>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>> index 0cb1f9d848d3e..8d048bacd6f02 100644
>>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>    			asyw->image.handle[0] = ctxdma->object.handle;
>>>    	}
>>> -	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
>>> +	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>>>    	asyw->image.offset[0] = nvbo->offset;
>>>    	if (wndw->func->prepare) {
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>> index a70e82413fa75..bc6b09ee9b552 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>>>    		return -ENOENT;
>>>    	nvbo = nouveau_gem_object(gem);
>>> -	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
>>> -						   no_wait ? 0 : 30 * HZ);
>>> +	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
>>> +					      no_wait ? 0 : 30 * HZ);
>>>    	if (!lret)
>>>    		ret = -EBUSY;
>>>    	else if (lret > 0)
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>> index ca07098a61419..eef5b632ee0ce 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>>    	if (!gem_obj)
>>>    		return -ENOENT;
>>> -	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
>>> -						  true, timeout);
>>> +	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
>>> +					     true, timeout);
>>>    	if (!ret)
>>>    		ret = timeout ? -ETIMEDOUT : -EBUSY;
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> index 6003cfeb13221..2df3e999a38d0 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>>>    	int i;
>>>    	for (i = 0; i < bo_count; i++)
>>> -		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
>>> +		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>>>    }
>>>    static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>> index 05ea2f39f6261..1a38b0bf36d11 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>>>    	}
>>>    	if (domain == RADEON_GEM_DOMAIN_CPU) {
>>>    		/* Asking for cpu access wait for object idle */
>>> -		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>> +		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>    		if (!r)
>>>    			r = -EBUSY;
>>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>    	}
>>>    	robj = gem_to_radeon_bo(gobj);
>>> -	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
>>> +	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>>>    	if (r == 0)
>>>    		r = -EBUSY;
>>>    	else
>>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>    	}
>>>    	robj = gem_to_radeon_bo(gobj);
>>> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>    	if (ret == 0)
>>>    		r = -EBUSY;
>>>    	else if (ret < 0)
>>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
>>> index e37c9a57a7c36..a19be3f8a218c 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
>>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>>>    		return true;
>>>    	}
>>> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>> -				      MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	if (r <= 0)
>>>    		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>> index ca1b098b6a561..215cad3149621 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>    	struct dma_resv *resv = &bo->base._resv;
>>>    	int ret;
>>> -	if (dma_resv_test_signaled_rcu(resv, true))
>>> +	if (dma_resv_test_signaled_unlocked(resv, true))
>>>    		ret = 0;
>>>    	else
>>>    		ret = -EBUSY;
>>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>    			dma_resv_unlock(bo->base.resv);
>>>    		spin_unlock(&bo->bdev->lru_lock);
>>> -		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
>>> -						 30 * HZ);
>>> +		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
>>> +						      30 * HZ);
>>>    		if (lret < 0)
>>>    			return lret;
>>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>>>    			/* Last resort, if we fail to allocate memory for the
>>>    			 * fences block for the BO to become idle
>>>    			 */
>>> -			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
>>> -						  30 * HZ);
>>> +			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
>>> +						       30 * HZ);
>>>    		}
>>>    		if (bo->bdev->funcs->release_notify)
>>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>>>    		ttm_mem_io_free(bdev, &bo->mem);
>>>    	}
>>> -	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
>>> +	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>>>    	    !dma_resv_trylock(bo->base.resv)) {
>>>    		/* The BO is not idle, resurrect it for delayed destroy */
>>>    		ttm_bo_flush_all_fences(bo);
>>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>>>    	long timeout = 15 * HZ;
>>>    	if (no_wait) {
>>> -		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
>>> +		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>>>    			return 0;
>>>    		else
>>>    			return -EBUSY;
>>>    	}
>>> -	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
>>> -						      interruptible, timeout);
>>> +	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
>>> +						 interruptible, timeout);
>>>    	if (timeout < 0)
>>>    		return timeout;
>>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
>>> index 2902dc6e64faf..010a82405e374 100644
>>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
>>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
>>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>>>    	/* Check for a conflicting fence */
>>>    	resv = obj->resv;
>>> -	if (!dma_resv_test_signaled_rcu(resv,
>>> -						  arg->flags & VGEM_FENCE_WRITE)) {
>>> +	if (!dma_resv_test_signaled_unlocked(resv,
>>> +					     arg->flags & VGEM_FENCE_WRITE)) {
>>>    		ret = -EBUSY;
>>>    		goto err_fence;
>>>    	}
>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>> index 669f2ee395154..ab010c8e32816 100644
>>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>>>    		return -ENOENT;
>>>    	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
>>> -		ret = dma_resv_test_signaled_rcu(obj->resv, true);
>>> +		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>>>    	} else {
>>> -		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
>>> -						timeout);
>>> +		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
>>> +						     timeout);
>>>    	}
>>>    	if (ret == 0)
>>>    		ret = -EBUSY;
>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>> index 04dd49c4c2572..19e1ce23842a9 100644
>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>>>    	if (flags & drm_vmw_synccpu_allow_cs) {
>>>    		long lret;
>>> -		lret = dma_resv_wait_timeout_rcu
>>> +		lret = dma_resv_wait_timeout_unlocked
>>>    			(bo->base.resv, true, true,
>>>    			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>>>    		if (!lret)
>>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>>> index d44a77e8a7e34..99cfb7af966b8 100644
>>> --- a/include/linux/dma-resv.h
>>> +++ b/include/linux/dma-resv.h
>>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>    }
>>>    /**
>>> - * dma_resv_get_excl_rcu - get the reservation object's
>>> + * dma_resv_get_excl_unlocked - get the reservation object's
>>>     * exclusive fence, without lock held.
>>>     * @obj: the reservation object
>>>     *
>>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>     * The exclusive fence or NULL if none
>>>     */
>>>    static inline struct dma_fence *
>>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
>>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>>>    {
>>>    	struct dma_fence *fence;
>>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>    void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>> -			    struct dma_fence **pfence_excl,
>>> -			    unsigned *pshared_count,
>>> -			    struct dma_fence ***pshared);
>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>> +				 struct dma_fence **pfence_excl,
>>> +				 unsigned *pshared_count,
>>> +				 struct dma_fence ***pshared);
>>>    int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
>>> -			       unsigned long timeout);
>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
>>> +				    unsigned long timeout);
>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>>>    #endif /* _LINUX_RESERVATION_H */



* Re: [Intel-gfx] [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
@ 2021-05-27 11:58         ` Christian König
  0 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-27 11:58 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, Thomas Zimmermann, Daniel Vetter, intel-gfx,
	Maxime Ripard, Huang Rui, VMware Graphics, dri-devel,
	Lucas Stach

On 27.05.21 at 12:39, Daniel Vetter wrote:
> On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>> None of these helpers actually leak any RCU details to the caller.  They
>>> all assume you have a genuine reference, take the RCU read lock, and
>>> retry if needed.  Naming them with an _rcu suffix is likely to cause callers
>>> more panic than needed.
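
As a concrete illustration of that point, a minimal sketch of a caller
after the rename (wait_for_bo_idle() is a hypothetical name, not part of
the series; the dma_resv_wait_timeout_unlocked() signature and return
convention are taken from the patch quoted below).  The caller holds a
genuine reference and never touches RCU itself:

#include <linux/dma-resv.h>
#include <linux/jiffies.h>

/* Hypothetical caller sketch: the helper hides all RCU details. */
static long wait_for_bo_idle(struct dma_resv *resv, bool intr)
{
	long ret;

	/* Wait on all fences; rcu_read_lock() and retrying happen inside. */
	ret = dma_resv_wait_timeout_unlocked(resv, true, intr, 30 * HZ);
	if (ret < 0)
		return ret;		/* -ERESTARTSYS if interrupted */

	return ret ? 0 : -ETIMEDOUT;	/* the helper returns 0 on timeout */
}
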
>> I'm really wondering if we need this postfix in the first place.
>>
>> If we use the right rcu_dereference_check() macro then those functions can
>> be called with the reservation object both locked and unlocked. It shouldn't
>> matter to them.
>>
>> But getting rid of the _rcu postfix sounds like a good idea in general to
>> me.
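
A rough sketch of what Christian is suggesting (hypothetical, not part of
this series; dma_resv_excl_fence() is a made-up name, and the fence_excl
field and the dma_resv_held() lockdep macro are assumed to be the ones in
the current include/linux/dma-resv.h): a single accessor whose
rcu_dereference_check() condition accepts either the RCU read lock or the
reservation lock, so no _rcu/_unlocked suffix would be needed at all:

#include <linux/dma-resv.h>
#include <linux/rcupdate.h>

/* Hypothetical accessor: valid under rcu_read_lock() OR the resv lock. */
static inline struct dma_fence *
dma_resv_excl_fence(struct dma_resv *obj)
{
	/*
	 * rcu_dereference_check() is satisfied when the caller is inside an
	 * RCU read-side critical section, or when dma_resv_held(obj) is
	 * true, i.e. the reservation ww_mutex is held.
	 */
	return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj));
}
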
> So does that count as an ack or not? If yes I think we should land this
> patch right away, since it's going to start conflicting badly real fast.

I had some follow-up discussion with Jason and I would prefer to
switch to using rcu_dereference_check() in all places and completely
remove the _rcu postfix.

But yes, I see the pain of rebasing this as well.

Christian.

> -Daniel
>
>> Christian.
>>
>>> v2 (Jason Ekstrand):
>>>    - Fix function argument indentation
>>>
>>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
>>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Cc: Maxime Ripard <mripard@kernel.org>
>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>> Cc: Rob Clark <robdclark@gmail.com>
>>> Cc: Sean Paul <sean@poorly.run>
>>> Cc: Huang Rui <ray.huang@amd.com>
>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
>>> ---
>>>    drivers/dma-buf/dma-buf.c                     |  4 +--
>>>    drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>>>    .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>>>    drivers/gpu/drm/drm_gem.c                     | 10 +++----
>>>    drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>>>    drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>>>    drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>>>    drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>>>    drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>>>    drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>>>    .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>>>    drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>>>    drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>>>    drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>>>    drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>>>    drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>>>    drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>>>    drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>>>    drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>>>    drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>>>    drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>>>    drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>>>    drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>>>    drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>>>    drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>>>    drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>>>    drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>>>    include/linux/dma-resv.h                      | 18 ++++++------
>>>    36 files changed, 108 insertions(+), 110 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index f264b70c383eb..ed6451d55d663 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>    	long ret;
>>>    	/* Wait on any implicit rendering fences */
>>> -	ret = dma_resv_wait_timeout_rcu(resv, write, true,
>>> -						  MAX_SCHEDULE_TIMEOUT);
>>> +	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
>>> +					     MAX_SCHEDULE_TIMEOUT);
>>>    	if (ret < 0)
>>>    		return ret;
>>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
>>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
>>> --- a/drivers/dma-buf/dma-resv.c
>>> +++ b/drivers/dma-buf/dma-resv.c
>>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>>>    EXPORT_SYMBOL(dma_resv_copy_fences);
>>>    /**
>>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
>>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>>>     * fences without update side lock held
>>>     * @obj: the reservation object
>>>     * @pfence_excl: the returned exclusive fence (or NULL)
>>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>>>     * exclusive fence is not specified the fence is put into the array of the
>>>     * shared fences as well. Returns either zero or -ENOMEM.
>>>     */
>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>> -			    struct dma_fence **pfence_excl,
>>> -			    unsigned *pshared_count,
>>> -			    struct dma_fence ***pshared)
>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>> +				 struct dma_fence **pfence_excl,
>>> +				 unsigned *pshared_count,
>>> +				 struct dma_fence ***pshared)
>>>    {
>>>    	struct dma_fence **shared = NULL;
>>>    	struct dma_fence *fence_excl;
>>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>    	*pshared = shared;
>>>    	return ret;
>>>    }
>>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>>>    /**
>>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
>>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>>>     * shared and/or exclusive fences.
>>>     * @obj: the reservation object
>>>     * @wait_all: if true, wait on all fences, else wait on just exclusive fence
>>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>     * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>>>     * greater than zero on success.
>>>     */
>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>> -			       bool wait_all, bool intr,
>>> -			       unsigned long timeout)
>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
>>> +				    bool wait_all, bool intr,
>>> +				    unsigned long timeout)
>>>    {
>>>    	struct dma_fence *fence;
>>>    	unsigned seq, shared_count;
>>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>    	rcu_read_unlock();
>>>    	goto retry;
>>>    }
>>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
>>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>>>    static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>    }
>>>    /**
>>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
>>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>>>     * fences have been signaled.
>>>     * @obj: the reservation object
>>>     * @test_all: if true, test all fences, otherwise only test the exclusive
>>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>     * RETURNS
>>>     * true if all fences signaled, else false
>>>     */
>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>>>    {
>>>    	unsigned seq, shared_count;
>>>    	int ret;
>>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>    	rcu_read_unlock();
>>>    	return ret;
>>>    }
>>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
>>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>> index 8a1fb8b6606e5..b8e24f199be9a 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>>>    		goto unpin;
>>>    	}
>>> -	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
>>> -					      &work->shared_count,
>>> -					      &work->shared);
>>> +	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
>>> +					 &work->shared_count,
>>> +					 &work->shared);
>>>    	if (unlikely(r != 0)) {
>>>    		DRM_ERROR("failed to get fences for buffer\n");
>>>    		goto unpin;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>> index baa980a477d94..0d0319bc51577 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>>>    	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>>>    		return 0;
>>> -	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
>>> +	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>>>    	if (r)
>>>    		return r;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> index 18974bd081f00..8e2996d6ba3ad 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>    		return -ENOENT;
>>>    	}
>>>    	robj = gem_to_amdgpu_bo(gobj);
>>> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
>>> -						  timeout);
>>> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
>>> +					     timeout);
>>>    	/* ret == 0 means not signaled,
>>>    	 * ret > 0 means signaled
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>> index b4971e90b98cf..38e1b32dd2cef 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>    	unsigned count;
>>>    	int r;
>>> -	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
>>> +	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>>>    	if (r)
>>>    		goto fallback;
>>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>    	/* Not enough memory for the delayed delete, as last resort
>>>    	 * block for all the fences to complete.
>>>    	 */
>>> -	dma_resv_wait_timeout_rcu(resv, true, false,
>>> -					    MAX_SCHEDULE_TIMEOUT);
>>> +	dma_resv_wait_timeout_unlocked(resv, true, false,
>>> +				       MAX_SCHEDULE_TIMEOUT);
>>>    	amdgpu_pasid_free(pasid);
>>>    }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>> index 828b5167ff128..0319c8b547c48 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>>>    	mmu_interval_set_seq(mni, cur_seq);
>>> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>> -				      MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	mutex_unlock(&adev->notifier_lock);
>>>    	if (r <= 0)
>>>    		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> index 0adffcace3263..de1c7c5501683 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>>>    		return 0;
>>>    	}
>>> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
>>> -						MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	if (r < 0)
>>>    		return r;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> index c6dbc08016045..4a2196404fb69 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>    	ib->length_dw = 16;
>>>    	if (direct) {
>>> -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
>>> -							true, false,
>>> -							msecs_to_jiffies(10));
>>> +		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
>>> +						   true, false,
>>> +						   msecs_to_jiffies(10));
>>>    		if (r == 0)
>>>    			r = -ETIMEDOUT;
>>>    		if (r < 0)
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 4a3e3f72e1277..7ba1c537d6584 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>    	unsigned i, shared_count;
>>>    	int r;
>>> -	r = dma_resv_get_fences_rcu(resv, &excl,
>>> -					      &shared_count, &shared);
>>> +	r = dma_resv_get_fences_unlocked(resv, &excl,
>>> +					 &shared_count, &shared);
>>>    	if (r) {
>>>    		/* Not enough memory to grab the fence list, as last resort
>>>    		 * block for all the fences to complete.
>>>    		 */
>>> -		dma_resv_wait_timeout_rcu(resv, true, false,
>>> -						    MAX_SCHEDULE_TIMEOUT);
>>> +		dma_resv_wait_timeout_unlocked(resv, true, false,
>>> +					       MAX_SCHEDULE_TIMEOUT);
>>>    		return;
>>>    	}
>>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>>>    		return true;
>>>    	/* Don't evict VM page tables while they are busy */
>>> -	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
>>> +	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>>>    		return false;
>>>    	/* Try to block ongoing updates */
>>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>>>     */
>>>    long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>>>    {
>>> -	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
>>> -					    true, true, timeout);
>>> +	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
>>> +						 true, true, timeout);
>>>    	if (timeout <= 0)
>>>    		return timeout;
>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> index 9ca517b658546..0121d2817fa26 100644
>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>>>    		 * deadlock during GPU reset when this fence will not signal
>>>    		 * but we hold reservation lock for the BO.
>>>    		 */
>>> -		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
>>> -							false,
>>> -							msecs_to_jiffies(5000));
>>> +		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
>>> +						   false,
>>> +						   msecs_to_jiffies(5000));
>>>    		if (unlikely(r <= 0))
>>>    			DRM_ERROR("Waiting for fences timed out!");
>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>> index 9989425e9875a..1241a421b9e81 100644
>>> --- a/drivers/gpu/drm/drm_gem.c
>>> +++ b/drivers/gpu/drm/drm_gem.c
>>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>>>    		return -EINVAL;
>>>    	}
>>> -	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
>>> -						  true, timeout);
>>> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
>>> +					     true, timeout);
>>>    	if (ret == 0)
>>>    		ret = -ETIME;
>>>    	else if (ret > 0)
>>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>>>    	if (!write) {
>>>    		struct dma_fence *fence =
>>> -			dma_resv_get_excl_rcu(obj->resv);
>>> +			dma_resv_get_excl_unlocked(obj->resv);
>>>    		return drm_gem_fence_array_add(fence_array, fence);
>>>    	}
>>> -	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
>>> -						&fence_count, &fences);
>>> +	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
>>> +					   &fence_count, &fences);
>>>    	if (ret || !fence_count)
>>>    		return ret;
>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>> index a005c5a0ba46a..a27135084ae5c 100644
>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>    		return 0;
>>>    	obj = drm_gem_fb_get_obj(state->fb, 0);
>>> -	fence = dma_resv_get_excl_rcu(obj->resv);
>>> +	fence = dma_resv_get_excl_unlocked(obj->resv);
>>>    	drm_atomic_set_fence_for_plane(state, fence);
>>>    	return 0;
>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>> index db69f19ab5bca..4e6f5346e84e4 100644
>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>>>    	}
>>>    	if (op & ETNA_PREP_NOSYNC) {
>>> -		if (!dma_resv_test_signaled_rcu(obj->resv,
>>> -							  write))
>>> +		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>>>    			return -EBUSY;
>>>    	} else {
>>>    		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>>> -		ret = dma_resv_wait_timeout_rcu(obj->resv,
>>> -							  write, true, remain);
>>> +		ret = dma_resv_wait_timeout_unlocked(obj->resv,
>>> +						     write, true, remain);
>>>    		if (ret <= 0)
>>>    			return ret == 0 ? -ETIMEDOUT : ret;
>>>    	}
>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>> index d05c359945799..6617fada4595d 100644
>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>>>    			continue;
>>>    		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
>>> -			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
>>> -								&bo->nr_shared,
>>> -								&bo->shared);
>>> +			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
>>> +							   &bo->nr_shared,
>>> +							   &bo->shared);
>>>    			if (ret)
>>>    				return ret;
>>>    		} else {
>>> -			bo->excl = dma_resv_get_excl_rcu(robj);
>>> +			bo->excl = dma_resv_get_excl_unlocked(robj);
>>>    		}
>>>    	}
>>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
>>> index 422b59ebf6dce..5f0b85a102159 100644
>>> --- a/drivers/gpu/drm/i915/display/intel_display.c
>>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
>>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>>>    		if (ret < 0)
>>>    			goto unpin_fb;
>>> -		fence = dma_resv_get_excl_rcu(obj->base.resv);
>>> +		fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    		if (fence) {
>>>    			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>>>    						   fence);
>>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
>>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
>>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
>>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
>>> @@ -10,7 +10,7 @@
>>>    void dma_resv_prune(struct dma_resv *resv)
>>>    {
>>>    	if (dma_resv_trylock(resv)) {
>>> -		if (dma_resv_test_signaled_rcu(resv, true))
>>> +		if (dma_resv_test_signaled_unlocked(resv, true))
>>>    			dma_resv_add_excl_fence(resv, NULL);
>>>    		dma_resv_unlock(resv);
>>>    	}
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>> index 25235ef630c10..754ad6d1bace9 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>    	 * Alternatively, we can trade that extra information on read/write
>>>    	 * activity with
>>>    	 *	args->busy =
>>> -	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
>>> +	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
>>>    	 * to report the overall busyness. This is what the wait-ioctl does.
>>>    	 *
>>>    	 */
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> index 297143511f99b..e8f323564e57b 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>>>    	if (DBG_FORCE_RELOC)
>>>    		return false;
>>> -	return !dma_resv_test_signaled_rcu(vma->resv, true);
>>> +	return !dma_resv_test_signaled_unlocked(vma->resv, true);
>>>    }
>>>    static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> index 2ebd79537aea9..7c0eb425cb3b3 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>>>    	struct dma_fence *fence;
>>>    	rcu_read_lock();
>>> -	fence = dma_resv_get_excl_rcu(obj->base.resv);
>>> +	fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    	rcu_read_unlock();
>>>    	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> index a657b99ec7606..44df18dc9669f 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>>    		return true;
>>>    	/* we will unbind on next submission, still have userptr pins */
>>> -	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>> -				      MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	if (r <= 0)
>>>    		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>> index 4b9856d5ba14f..5b6c52659ad4d 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>    		unsigned int count, i;
>>>    		int ret;
>>> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>    		 */
>>>    		prune_fences = count && timeout >= 0;
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(resv);
>>> +		excl = dma_resv_get_excl_unlocked(resv);
>>>    	}
>>>    	if (excl && timeout >= 0)
>>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>    		unsigned int count, i;
>>>    		int ret;
>>> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
>>> -					      &excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>> +						   &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>    		kfree(shared);
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
>>> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    	}
>>>    	if (excl) {
>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>> index 970d8f4986bbe..f1ed03ced7dd1 100644
>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>>>    		struct dma_fence **shared;
>>>    		unsigned int count, i;
>>> -		ret = dma_resv_get_fences_rcu(obj->base.resv,
>>> -							&excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>> +						   &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>>>    			dma_fence_put(shared[i]);
>>>    		kfree(shared);
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(obj->base.resv);
>>> +		excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>    	}
>>>    	if (excl) {
>>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
>>> index 2744558f30507..0bcb7ea44201e 100644
>>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
>>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
>>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>    		struct dma_fence **shared;
>>>    		unsigned int count, i;
>>> -		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>> +		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>    		if (ret)
>>>    			return ret;
>>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>    			dma_fence_put(shared[i]);
>>>    		kfree(shared);
>>>    	} else {
>>> -		excl = dma_resv_get_excl_rcu(resv);
>>> +		excl = dma_resv_get_excl_unlocked(resv);
>>>    	}
>>>    	if (ret >= 0 && excl && excl->ops != exclude) {
>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>> index 56df86e5f7400..1aca60507bb14 100644
>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>>>    		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>>>    	long ret;
>>> -	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
>>> -						  true,  remain);
>>> +	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>>>    	if (ret == 0)
>>>    		return remain == 0 ? -EBUSY : -ETIMEDOUT;
>>>    	else if (ret < 0)
>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>> index 0cb1f9d848d3e..8d048bacd6f02 100644
>>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>    			asyw->image.handle[0] = ctxdma->object.handle;
>>>    	}
>>> -	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
>>> +	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>>>    	asyw->image.offset[0] = nvbo->offset;
>>>    	if (wndw->func->prepare) {
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>> index a70e82413fa75..bc6b09ee9b552 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>>>    		return -ENOENT;
>>>    	nvbo = nouveau_gem_object(gem);
>>> -	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
>>> -						   no_wait ? 0 : 30 * HZ);
>>> +	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
>>> +					      no_wait ? 0 : 30 * HZ);
>>>    	if (!lret)
>>>    		ret = -EBUSY;
>>>    	else if (lret > 0)
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>> index ca07098a61419..eef5b632ee0ce 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>>    	if (!gem_obj)
>>>    		return -ENOENT;
>>> -	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
>>> -						  true, timeout);
>>> +	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
>>> +					     true, timeout);
>>>    	if (!ret)
>>>    		ret = timeout ? -ETIMEDOUT : -EBUSY;
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> index 6003cfeb13221..2df3e999a38d0 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>>>    	int i;
>>>    	for (i = 0; i < bo_count; i++)
>>> -		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
>>> +		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>>>    }
>>>    static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>> index 05ea2f39f6261..1a38b0bf36d11 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>>>    	}
>>>    	if (domain == RADEON_GEM_DOMAIN_CPU) {
>>>    		/* Asking for cpu access wait for object idle */
>>> -		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>> +		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>    		if (!r)
>>>    			r = -EBUSY;
>>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>    	}
>>>    	robj = gem_to_radeon_bo(gobj);
>>> -	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
>>> +	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>>>    	if (r == 0)
>>>    		r = -EBUSY;
>>>    	else
>>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>    	}
>>>    	robj = gem_to_radeon_bo(gobj);
>>> -	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>> +	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>    	if (ret == 0)
>>>    		r = -EBUSY;
>>>    	else if (ret < 0)
>>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
>>> index e37c9a57a7c36..a19be3f8a218c 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
>>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>>>    		return true;
>>>    	}
>>> -	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>> -				      MAX_SCHEDULE_TIMEOUT);
>>> +	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>> +					   MAX_SCHEDULE_TIMEOUT);
>>>    	if (r <= 0)
>>>    		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>> index ca1b098b6a561..215cad3149621 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>    	struct dma_resv *resv = &bo->base._resv;
>>>    	int ret;
>>> -	if (dma_resv_test_signaled_rcu(resv, true))
>>> +	if (dma_resv_test_signaled_unlocked(resv, true))
>>>    		ret = 0;
>>>    	else
>>>    		ret = -EBUSY;
>>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>    			dma_resv_unlock(bo->base.resv);
>>>    		spin_unlock(&bo->bdev->lru_lock);
>>> -		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
>>> -						 30 * HZ);
>>> +		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
>>> +						      30 * HZ);
>>>    		if (lret < 0)
>>>    			return lret;
>>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>>>    			/* Last resort, if we fail to allocate memory for the
>>>    			 * fences block for the BO to become idle
>>>    			 */
>>> -			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
>>> -						  30 * HZ);
>>> +			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
>>> +						       30 * HZ);
>>>    		}
>>>    		if (bo->bdev->funcs->release_notify)
>>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>>>    		ttm_mem_io_free(bdev, &bo->mem);
>>>    	}
>>> -	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
>>> +	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>>>    	    !dma_resv_trylock(bo->base.resv)) {
>>>    		/* The BO is not idle, resurrect it for delayed destroy */
>>>    		ttm_bo_flush_all_fences(bo);
>>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>>>    	long timeout = 15 * HZ;
>>>    	if (no_wait) {
>>> -		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
>>> +		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>>>    			return 0;
>>>    		else
>>>    			return -EBUSY;
>>>    	}
>>> -	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
>>> -						      interruptible, timeout);
>>> +	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
>>> +						 interruptible, timeout);
>>>    	if (timeout < 0)
>>>    		return timeout;
>>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
>>> index 2902dc6e64faf..010a82405e374 100644
>>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
>>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
>>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>>>    	/* Check for a conflicting fence */
>>>    	resv = obj->resv;
>>> -	if (!dma_resv_test_signaled_rcu(resv,
>>> -						  arg->flags & VGEM_FENCE_WRITE)) {
>>> +	if (!dma_resv_test_signaled_unlocked(resv,
>>> +					     arg->flags & VGEM_FENCE_WRITE)) {
>>>    		ret = -EBUSY;
>>>    		goto err_fence;
>>>    	}
>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>> index 669f2ee395154..ab010c8e32816 100644
>>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>>>    		return -ENOENT;
>>>    	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
>>> -		ret = dma_resv_test_signaled_rcu(obj->resv, true);
>>> +		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>>>    	} else {
>>> -		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
>>> -						timeout);
>>> +		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
>>> +						     timeout);
>>>    	}
>>>    	if (ret == 0)
>>>    		ret = -EBUSY;
>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>> index 04dd49c4c2572..19e1ce23842a9 100644
>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>>>    	if (flags & drm_vmw_synccpu_allow_cs) {
>>>    		long lret;
>>> -		lret = dma_resv_wait_timeout_rcu
>>> +		lret = dma_resv_wait_timeout_unlocked
>>>    			(bo->base.resv, true, true,
>>>    			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>>>    		if (!lret)
>>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>>> index d44a77e8a7e34..99cfb7af966b8 100644
>>> --- a/include/linux/dma-resv.h
>>> +++ b/include/linux/dma-resv.h
>>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>    }
>>>    /**
>>> - * dma_resv_get_excl_rcu - get the reservation object's
>>> + * dma_resv_get_excl_unlocked - get the reservation object's
>>>     * exclusive fence, without lock held.
>>>     * @obj: the reservation object
>>>     *
>>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>     * The exclusive fence or NULL if none
>>>     */
>>>    static inline struct dma_fence *
>>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
>>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>>>    {
>>>    	struct dma_fence *fence;
>>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>    void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>> -			    struct dma_fence **pfence_excl,
>>> -			    unsigned *pshared_count,
>>> -			    struct dma_fence ***pshared);
>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>> +				 struct dma_fence **pfence_excl,
>>> +				 unsigned *pshared_count,
>>> +				 struct dma_fence ***pshared);
>>>    int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
>>> -			       unsigned long timeout);
>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
>>> +				    unsigned long timeout);
>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>>>    #endif /* _LINUX_RESERVATION_H */

^ permalink raw reply	[flat|nested] 68+ messages in thread
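
As a concrete illustration of the renamed API, here is a minimal driver-side sketch built on the signatures from the dma-resv.h hunk above; the my_bo wrapper and function names are hypothetical and not taken from any driver in the series:

#include <linux/types.h>
#include <linux/dma-resv.h>
#include <linux/jiffies.h>

/* Hypothetical BO wrapper, for illustration only. */
struct my_bo {
	struct dma_resv *resv;
};

/* Wait on all fences (exclusive + shared) without taking the resv lock,
 * mirroring the wait-ioctl pattern seen in the drivers above. */
static int my_bo_wait_idle(struct my_bo *bo, bool intr)
{
	long ret;

	ret = dma_resv_wait_timeout_unlocked(bo->resv, true, intr, 30 * HZ);
	if (ret == 0)
		return -EBUSY;		/* timed out, object still busy */
	return ret < 0 ? ret : 0;	/* -ERESTARTSYS etc., or success */
}

/* Non-blocking busy check, as used by the busy ioctls above. */
static bool my_bo_is_idle(struct my_bo *bo)
{
	return dma_resv_test_signaled_unlocked(bo->resv, true);
}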

* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-27 10:33               ` Daniel Vetter
@ 2021-05-27 12:01                 ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-27 12:01 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, dri-devel, Jason Ekstrand, Daniel Vetter

On 27.05.21 at 12:33, Daniel Vetter wrote:
> On Wed, May 26, 2021 at 03:23:01PM +0200, Christian König wrote:
>>
>> On 26.05.21 at 15:12, Daniel Stone wrote:
>>> Hi,
>>>
>>> On Wed, 26 May 2021 at 13:46, Christian König <christian.koenig@amd.com> wrote:
>>>> On 26.05.21 at 13:31, Daniel Stone wrote:
>>>>> How would we insert a syncobj+val into a resv though? Like, if we pass
>>>>> an unmaterialised syncobj+val here to insert into the resv, then an
>>>>> implicit-only media user (or KMS) goes to sync against the resv, what
>>>>> happens?
>>>> Well this is for exporting, not importing. So we don't need to worry
>>>> about that.
>>>>
>>>> It's just my thinking because the drm_syncobj is the backing object on
>>>> VkSemaphore implementations these days, isn't it?
>>> Yeah, I can see that to an extent. But then binary vs. timeline
>>> syncobjs are very different in use (which is unfortunate tbh), and
>>> then we have an asymmetry between syncobj export & sync_file import.
>>>
>>> You're right that we do want a syncobj though. This is probably not
>>> practical due to smashing uAPI to bits, but if we could wind the clock
>>> back a couple of years, I suspect the interface we want is that export
>>> can either export a sync_file or a binary syncobj, and further that
>>> binary syncobjs could transparently act as timeline semaphores by
>>> mapping any value (either wait or signal) to the binary signal. In
>>> hindsight, we should probably just never have had binary syncobj. Oh
>>> well.
>> Well the latter is the case IIRC. Don't ask me for the detailed semantics,
>> but in general the drm_syncobj in timeline mode is compatible with the
>> binary mode.
>>
>> A sync_file is also importable/exportable to a given drm_syncobj timeline
>> point (or as a binary signal). So no big deal, we are all compatible here :)
>>
>> I just thought that it might be more appropriate to return a drm_syncobj
>> directly instead of a sync_file.
> I think another big potential user for this is a vk-based compositor which
> needs to interact with/support implicitly synced clients. And the compositor
> world, I think, is right now still more sync_file-based (because that's where
> we started with the atomic kms ioctl).
>
> The other slight nudge is that drm_syncobj is a drm thing, so we'd first
> need to lift it out into drivers/dma-buf (and hand-wave the DRM prefix
> away) for it to be a non-awkward fit for dma-buf.
>
> Plus you can convert them to the other form anyway, so it really doesn't
> matter much. But for the above reasons I'm leaning slightly towards
> sync_file. Except if compositor folks tell me I'm a fool and why :-)

Yeah, as already discussed with Jason, that drm_syncobj is DRM-specific
is the killer argument here. So sync_file is fine with me.

Christian.

> -Daniel


^ permalink raw reply	[flat|nested] 68+ messages in thread
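
To make the sync_file <-> drm_syncobj conversion mentioned above concrete, here is a hedged userspace sketch: it snapshots a dma-buf's implicit fences as a sync_file using the export ioctl as proposed in this series (the struct is restated locally since it is not in released kernel headers, and the final layout/number may differ), then imports that into a binary drm_syncobj through the existing libdrm helpers. Error handling is trimmed.

#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>
#include <xf86drm.h>

/* As proposed in this series; not yet in released kernel headers. */
struct dma_buf_export_sync_file {
	__u32 flags;	/* DMA_BUF_SYNC_READ/WRITE/RW: which fences to snapshot */
	__s32 fd;	/* out: sync_file fd */
};
#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE \
	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)

/* Snapshot a dma-buf's current implicit fences and stuff them into a
 * binary drm_syncobj, i.e. the sync_file -> syncobj direction. */
static int dmabuf_fences_to_syncobj(int drm_fd, int dmabuf_fd,
				    uint32_t *syncobj)
{
	struct dma_buf_export_sync_file arg = { .flags = DMA_BUF_SYNC_RW };
	int ret;

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
		return -1;
	ret = drmSyncobjCreate(drm_fd, 0, syncobj);
	if (!ret)
		ret = drmSyncobjImportSyncFile(drm_fd, *syncobj, arg.fd);
	close(arg.fd);
	return ret;
}

The reverse direction also exists: drmSyncobjExportSyncFile() turns a binary syncobj's fence back into a sync_file (for a timeline point, transfer it to a binary syncobj with drmSyncobjTransfer() first).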

* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-27 11:58         ` [Intel-gfx] " Christian König
@ 2021-05-27 13:21           ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-27 13:21 UTC (permalink / raw)
  To: Christian König
  Cc: Gerd Hoffmann, Jason Ekstrand, intel-gfx, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Sean Paul

On Thu, May 27, 2021 at 1:59 PM Christian König
<christian.koenig@amd.com> wrote:
> On 27.05.21 at 12:39, Daniel Vetter wrote:
> > On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
> >> On 25.05.21 at 23:17, Jason Ekstrand wrote:
> >>> None of these helpers actually leak any RCU details to the caller.  They
> >>> all assume you have a genuine reference, take the RCU read lock, and
> >>> retry if needed.  Naming them with an _rcu is likely to cause callers
> >>> more panic than needed.
> >> I'm really wondering if we need this postfix in the first place.
> >>
> >> If we use the right rcu_dereference_check() macro then those functions can
> >> be called with both the reservation object locked and unlocked. It shouldn't
> >> matter to them.
> >>
> >> But getting rid of the _rcu postfix sounds like a good idea in general to
> >> me.
> > So does that count as an ack or not? If yes, I think we should land this
> > patch right away, since it's going to start conflicting badly real fast.
>
> I had some follow-up discussion with Jason and I would rather like to
> switch to using rcu_dereference_check() in all places and completely
> remove the _rcu postfix.
>
> But yes I see the pain of rebasing this as well.
>
> Christian.
>
> > -Daniel
> >
> >> Christian.
> >>
> >>> v2 (Jason Ekstrand):
> >>>    - Fix function argument indentation
> >>>
> >>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> >>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >>> Cc: Christian König <christian.koenig@amd.com>
> >>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> >>> Cc: Maxime Ripard <mripard@kernel.org>
> >>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> >>> Cc: Lucas Stach <l.stach@pengutronix.de>
> >>> Cc: Rob Clark <robdclark@gmail.com>
> >>> Cc: Sean Paul <sean@poorly.run>
> >>> Cc: Huang Rui <ray.huang@amd.com>
> >>> Cc: Gerd Hoffmann <kraxel@redhat.com>
> >>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> >>> ---
> >>>    drivers/dma-buf/dma-buf.c                     |  4 +--
> >>>    drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
> >>>    .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
> >>>    drivers/gpu/drm/drm_gem.c                     | 10 +++----
> >>>    drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
> >>>    drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
> >>>    drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
> >>>    drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
> >>>    drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
> >>>    .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
> >>>    drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
> >>>    drivers/gpu/drm/i915/i915_request.c           |  6 ++--
> >>>    drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
> >>>    drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
> >>>    drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
> >>>    drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
> >>>    drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
> >>>    drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
> >>>    drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
> >>>    drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
> >>>    drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
> >>>    drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
> >>>    drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
> >>>    drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
> >>>    include/linux/dma-resv.h                      | 18 ++++++------
> >>>    36 files changed, 108 insertions(+), 110 deletions(-)
> >>>
> >>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >>> index f264b70c383eb..ed6451d55d663 100644
> >>> --- a/drivers/dma-buf/dma-buf.c
> >>> +++ b/drivers/dma-buf/dma-buf.c
> >>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >>>     long ret;
> >>>     /* Wait on any implicit rendering fences */
> >>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
> >>> -                                             MAX_SCHEDULE_TIMEOUT);
> >>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> >>> +                                        MAX_SCHEDULE_TIMEOUT);
> >>>     if (ret < 0)
> >>>             return ret;
> >>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> >>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> >>> --- a/drivers/dma-buf/dma-resv.c
> >>> +++ b/drivers/dma-buf/dma-resv.c
> >>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
> >>>    EXPORT_SYMBOL(dma_resv_copy_fences);
> >>>    /**
> >>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> >>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> >>>     * fences without update side lock held
> >>>     * @obj: the reservation object
> >>>     * @pfence_excl: the returned exclusive fence (or NULL)
> >>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> >>>     * exclusive fence is not specified the fence is put into the array of the
> >>>     * shared fences as well. Returns either zero or -ENOMEM.
> >>>     */
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared)
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared)
> >>>    {
> >>>     struct dma_fence **shared = NULL;
> >>>     struct dma_fence *fence_excl;
> >>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>>     *pshared = shared;
> >>>     return ret;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> >>>    /**
> >>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> >>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> >>>     * shared and/or exclusive fences.
> >>>     * @obj: the reservation object
> >>>     * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> >>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >>>     * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> >>>     * greater than zer on success.
> >>>     */
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >>> -                          bool wait_all, bool intr,
> >>> -                          unsigned long timeout)
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> >>> +                               bool wait_all, bool intr,
> >>> +                               unsigned long timeout)
> >>>    {
> >>>     struct dma_fence *fence;
> >>>     unsigned seq, shared_count;
> >>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >>>     rcu_read_unlock();
> >>>     goto retry;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> >>>    static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>>    }
> >>>    /**
> >>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
> >>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> >>>     * fences have been signaled.
> >>>     * @obj: the reservation object
> >>>     * @test_all: if true, test all fences, otherwise only test the exclusive
> >>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>>     * RETURNS
> >>>     * true if all fences signaled, else false
> >>>     */
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> >>>    {
> >>>     unsigned seq, shared_count;
> >>>     int ret;
> >>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >>>     rcu_read_unlock();
> >>>     return ret;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> index 8a1fb8b6606e5..b8e24f199be9a 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> >>>             goto unpin;
> >>>     }
> >>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> >>> -                                         &work->shared_count,
> >>> -                                         &work->shared);
> >>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> >>> +                                    &work->shared_count,
> >>> +                                    &work->shared);
> >>>     if (unlikely(r != 0)) {
> >>>             DRM_ERROR("failed to get fences for buffer\n");
> >>>             goto unpin;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> index baa980a477d94..0d0319bc51577 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> >>>     if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> >>>             return 0;
> >>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> >>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> >>>     if (r)
> >>>             return r;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> index 18974bd081f00..8e2996d6ba3ad 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     }
> >>>     robj = gem_to_amdgpu_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> >>> -                                             timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> >>> +                                        timeout);
> >>>     /* ret == 0 means not signaled,
> >>>      * ret > 0 means signaled
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> index b4971e90b98cf..38e1b32dd2cef 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >>>     unsigned count;
> >>>     int r;
> >>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> >>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> >>>     if (r)
> >>>             goto fallback;
> >>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >>>     /* Not enough memory for the delayed delete, as last resort
> >>>      * block for all the fences to complete.
> >>>      */
> >>> -   dma_resv_wait_timeout_rcu(resv, true, false,
> >>> -                                       MAX_SCHEDULE_TIMEOUT);
> >>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
> >>> +                                  MAX_SCHEDULE_TIMEOUT);
> >>>     amdgpu_pasid_free(pasid);
> >>>    }
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> index 828b5167ff128..0319c8b547c48 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> >>>     mmu_interval_set_seq(mni, cur_seq);
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     mutex_unlock(&adev->notifier_lock);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> index 0adffcace3263..de1c7c5501683 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> >>>             return 0;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> >>> -                                           MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r < 0)
> >>>             return r;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> index c6dbc08016045..4a2196404fb69 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> >>>     ib->length_dw = 16;
> >>>     if (direct) {
> >>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> >>> -                                                   true, false,
> >>> -                                                   msecs_to_jiffies(10));
> >>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> >>> +                                              true, false,
> >>> +                                              msecs_to_jiffies(10));
> >>>             if (r == 0)
> >>>                     r = -ETIMEDOUT;
> >>>             if (r < 0)
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> index 4a3e3f72e1277..7ba1c537d6584 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >>>     unsigned i, shared_count;
> >>>     int r;
> >>> -   r = dma_resv_get_fences_rcu(resv, &excl,
> >>> -                                         &shared_count, &shared);
> >>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
> >>> +                                    &shared_count, &shared);
> >>>     if (r) {
> >>>             /* Not enough memory to grab the fence list, as last resort
> >>>              * block for all the fences to complete.
> >>>              */
> >>> -           dma_resv_wait_timeout_rcu(resv, true, false,
> >>> -                                               MAX_SCHEDULE_TIMEOUT);
> >>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
> >>> +                                          MAX_SCHEDULE_TIMEOUT);
> >>>             return;
> >>>     }
> >>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> >>>             return true;
> >>>     /* Don't evict VM page tables while they are busy */
> >>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> >>>             return false;
> >>>     /* Try to block ongoing updates */
> >>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> >>>     */
> >>>    long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> >>>    {
> >>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> >>> -                                       true, true, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> >>> +                                            true, true, timeout);
> >>>     if (timeout <= 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> index 9ca517b658546..0121d2817fa26 100644
> >>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> >>>              * deadlock during GPU reset when this fence will not signal
> >>>              * but we hold reservation lock for the BO.
> >>>              */
> >>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> >>> -                                                   false,
> >>> -                                                   msecs_to_jiffies(5000));
> >>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> >>> +                                              false,
> >>> +                                              msecs_to_jiffies(5000));
> >>>             if (unlikely(r <= 0))
> >>>                     DRM_ERROR("Waiting for fences timed out!");
> >>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >>> index 9989425e9875a..1241a421b9e81 100644
> >>> --- a/drivers/gpu/drm/drm_gem.c
> >>> +++ b/drivers/gpu/drm/drm_gem.c
> >>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> >>>             return -EINVAL;
> >>>     }
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> >>> +                                        true, timeout);
> >>>     if (ret == 0)
> >>>             ret = -ETIME;
> >>>     else if (ret > 0)
> >>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> >>>     if (!write) {
> >>>             struct dma_fence *fence =
> >>> -                   dma_resv_get_excl_rcu(obj->resv);
> >>> +                   dma_resv_get_excl_unlocked(obj->resv);
> >>>             return drm_gem_fence_array_add(fence_array, fence);
> >>>     }
> >>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> >>> -                                           &fence_count, &fences);
> >>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> >>> +                                      &fence_count, &fences);
> >>>     if (ret || !fence_count)
> >>>             return ret;
> >>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> index a005c5a0ba46a..a27135084ae5c 100644
> >>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> >>>             return 0;
> >>>     obj = drm_gem_fb_get_obj(state->fb, 0);
> >>> -   fence = dma_resv_get_excl_rcu(obj->resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
> >>>     drm_atomic_set_fence_for_plane(state, fence);
> >>>     return 0;
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> index db69f19ab5bca..4e6f5346e84e4 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> >>>     }
> >>>     if (op & ETNA_PREP_NOSYNC) {
> >>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
> >>> -                                                     write))
> >>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> >>>                     return -EBUSY;
> >>>     } else {
> >>>             unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
> >>> -                                                     write, true, remain);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
> >>> +                                                write, true, remain);
> >>>             if (ret <= 0)
> >>>                     return ret == 0 ? -ETIMEDOUT : ret;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> index d05c359945799..6617fada4595d 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> >>>                     continue;
> >>>             if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> >>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> >>> -                                                           &bo->nr_shared,
> >>> -                                                           &bo->shared);
> >>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> >>> +                                                      &bo->nr_shared,
> >>> +                                                      &bo->shared);
> >>>                     if (ret)
> >>>                             return ret;
> >>>             } else {
> >>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
> >>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
> >>>             }
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> >>> index 422b59ebf6dce..5f0b85a102159 100644
> >>> --- a/drivers/gpu/drm/i915/display/intel_display.c
> >>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> >>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> >>>             if (ret < 0)
> >>>                     goto unpin_fb;
> >>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>             if (fence) {
> >>>                     add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> >>>                                                fence);
> >>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
> >>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> @@ -10,7 +10,7 @@
> >>>    void dma_resv_prune(struct dma_resv *resv)
> >>>    {
> >>>     if (dma_resv_trylock(resv)) {
> >>> -           if (dma_resv_test_signaled_rcu(resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(resv, true))
> >>>                     dma_resv_add_excl_fence(resv, NULL);
> >>>             dma_resv_unlock(resv);
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> index 25235ef630c10..754ad6d1bace9 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>      * Alternatively, we can trade that extra information on read/write
> >>>      * activity with
> >>>      *      args->busy =
> >>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>      * to report the overall busyness. This is what the wait-ioctl does.
> >>>      *
> >>>      */
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> index 297143511f99b..e8f323564e57b 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> >>>     if (DBG_FORCE_RELOC)
> >>>             return false;
> >>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
> >>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
> >>>    }
> >>>    static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> index 2ebd79537aea9..7c0eb425cb3b3 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> >>>     struct dma_fence *fence;
> >>>     rcu_read_lock();
> >>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     rcu_read_unlock();
> >>>     if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> index a657b99ec7606..44df18dc9669f 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> >>>             return true;
> >>>     /* we will unbind on next submission, still have userptr pins */
> >>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> index 4b9856d5ba14f..5b6c52659ad4d 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>              */
> >>>             prune_fences = count && timeout >= 0;
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (excl && timeout >= 0)
> >>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                         &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> >>> index 970d8f4986bbe..f1ed03ced7dd1 100644
> >>> --- a/drivers/gpu/drm/i915/i915_request.c
> >>> +++ b/drivers/gpu/drm/i915/i915_request.c
> >>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                                   &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> index 2744558f30507..0bcb7ea44201e 100644
> >>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (ret >= 0 && excl && excl->ops != exclude) {
> >>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> >>> index 56df86e5f7400..1aca60507bb14 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gem.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gem.c
> >>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> >>>             op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> >>>     long ret;
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> >>> -                                             true,  remain);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> >>>     if (ret == 0)
> >>>             return remain == 0 ? -EBUSY : -ETIMEDOUT;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> index 0cb1f9d848d3e..8d048bacd6f02 100644
> >>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> >>>                     asyw->image.handle[0] = ctxdma->object.handle;
> >>>     }
> >>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> >>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> >>>     asyw->image.offset[0] = nvbo->offset;
> >>>     if (wndw->func->prepare) {
> >>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> index a70e82413fa75..bc6b09ee9b552 100644
> >>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     nvbo = nouveau_gem_object(gem);
> >>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> >>> -                                              no_wait ? 0 : 30 * HZ);
> >>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> >>> +                                         no_wait ? 0 : 30 * HZ);
> >>>     if (!lret)
> >>>             ret = -EBUSY;
> >>>     else if (lret > 0)
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> index ca07098a61419..eef5b632ee0ce 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> >>>     if (!gem_obj)
> >>>             return -ENOENT;
> >>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> >>> +                                        true, timeout);
> >>>     if (!ret)
> >>>             ret = timeout ? -ETIMEDOUT : -EBUSY;
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> index 6003cfeb13221..2df3e999a38d0 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> >>>     int i;
> >>>     for (i = 0; i < bo_count; i++)
> >>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> >>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> >>>    }
> >>>    static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> index 05ea2f39f6261..1a38b0bf36d11 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> >>>     }
> >>>     if (domain == RADEON_GEM_DOMAIN_CPU) {
> >>>             /* Asking for cpu access wait for object idle */
> >>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>             if (!r)
> >>>                     r = -EBUSY;
> >>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> >>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> >>>     if (r == 0)
> >>>             r = -EBUSY;
> >>>     else
> >>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>     if (ret == 0)
> >>>             r = -EBUSY;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> index e37c9a57a7c36..a19be3f8a218c 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> >>>             return true;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> index ca1b098b6a561..215cad3149621 100644
> >>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> >>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>     struct dma_resv *resv = &bo->base._resv;
> >>>     int ret;
> >>> -   if (dma_resv_test_signaled_rcu(resv, true))
> >>> +   if (dma_resv_test_signaled_unlocked(resv, true))
> >>>             ret = 0;
> >>>     else
> >>>             ret = -EBUSY;
> >>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>                     dma_resv_unlock(bo->base.resv);
> >>>             spin_unlock(&bo->bdev->lru_lock);
> >>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> >>> -                                            30 * HZ);
> >>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> >>> +                                                 30 * HZ);
> >>>             if (lret < 0)
> >>>                     return lret;
> >>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> >>>                     /* Last resort, if we fail to allocate memory for the
> >>>                      * fences block for the BO to become idle
> >>>                      */
> >>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> >>> -                                             30 * HZ);
> >>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> >>> +                                                  30 * HZ);
> >>>             }
> >>>             if (bo->bdev->funcs->release_notify)
> >>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> >>>             ttm_mem_io_free(bdev, &bo->mem);
> >>>     }
> >>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> >>>         !dma_resv_trylock(bo->base.resv)) {
> >>>             /* The BO is not idle, resurrect it for delayed destroy */
> >>>             ttm_bo_flush_all_fences(bo);
> >>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> >>>     long timeout = 15 * HZ;
> >>>     if (no_wait) {
> >>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> >>>                     return 0;
> >>>             else
> >>>                     return -EBUSY;
> >>>     }
> >>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> >>> -                                                 interruptible, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> >>> +                                            interruptible, timeout);
> >>>     if (timeout < 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> index 2902dc6e64faf..010a82405e374 100644
> >>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
> >>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> >>>     /* Check for a conflicting fence */
> >>>     resv = obj->resv;
> >>> -   if (!dma_resv_test_signaled_rcu(resv,
> >>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
> >>> +   if (!dma_resv_test_signaled_unlocked(resv,
> >>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
> >>>             ret = -EBUSY;
> >>>             goto err_fence;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> index 669f2ee395154..ab010c8e32816 100644
> >>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> >>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>     } else {
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> >>> -                                           timeout);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> >>> +                                                timeout);
> >>>     }
> >>>     if (ret == 0)
> >>>             ret = -EBUSY;
> >>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> index 04dd49c4c2572..19e1ce23842a9 100644
> >>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> >>>     if (flags & drm_vmw_synccpu_allow_cs) {
> >>>             long lret;
> >>> -           lret = dma_resv_wait_timeout_rcu
> >>> +           lret = dma_resv_wait_timeout_unlocked
> >>>                     (bo->base.resv, true, true,
> >>>                      nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> >>>             if (!lret)
> >>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> >>> index d44a77e8a7e34..99cfb7af966b8 100644
> >>> --- a/include/linux/dma-resv.h
> >>> +++ b/include/linux/dma-resv.h
> >>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>    }
> >>>    /**
> >>> - * dma_resv_get_excl_rcu - get the reservation object's
> >>> + * dma_resv_get_excl_unlocked - get the reservation object's
> >>>     * exclusive fence, without lock held.
> >>>     * @obj: the reservation object
> >>>     *
> >>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>     * The exclusive fence or NULL if none
> >>>     */
> >>>    static inline struct dma_fence *
> >>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
> >>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> >>>    {
> >>>     struct dma_fence *fence;
> >>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>>    void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared);
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared);
> >>>    int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> >>> -                          unsigned long timeout);
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> >>> +                               unsigned long timeout);
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> >>>    #endif /* _LINUX_RESERVATION_H */
>
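
For reference, after this patch a typical caller of the renamed wait
helper looks roughly like this (just a sketch mirroring the drm_gem.c
hunk, not itself part of the series):

	long ret;

	/* Unlocked wait on all fences, interruptible, 10s timeout. */
	ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true, 10 * HZ);
	if (ret == 0)
		ret = -ETIME;		/* wait timed out */
	else if (ret > 0)
		ret = 0;		/* all fences signaled */
	/* ret < 0 is -ERESTARTSYS or another error, returned as-is. */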


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
@ 2021-05-27 13:21           ` Daniel Vetter
  0 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-27 13:21 UTC (permalink / raw)
  To: Christian König
  Cc: Gerd Hoffmann, intel-gfx, Maxime Ripard, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Lucas Stach

On Thu, May 27, 2021 at 1:59 PM Christian König
<christian.koenig@amd.com> wrote:
> Am 27.05.21 um 12:39 schrieb Daniel Vetter:
> > On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
> >> Am 25.05.21 um 23:17 schrieb Jason Ekstrand:
> >>> None of these helpers actually leak any RCU details to the caller.  They
> >>> all assume you have a genuine reference, take the RCU read lock, and
> >>> retry if needed.  Naming them with an _rcu is likely to cause callers
> >>> more panic than needed.
> >> I'm really wondering if we need this postfix in the first place.
> >>
> >> If we use the right rcu_dereference_check() macro then those functions can
> >> be called with both the reservation object locked and unlocked. It shouldn't
> >> matter to them.
> >>
> >> But getting rid of the _rcu postfix sounds like a good idea in general to
> >> me.
> > So does that count as an ack or not? If so, I think we should land this
> > patch right away, since it's going to pick up bad conflicts real fast.
>
> I had some follow up discussion with Jason and I would rather like to
> switch to using rcu_dereference_check() in all places and completely
> remove the _rcu postfix.
>
> But yes I see the pain of rebasing this as well.
>
> Christian.
>
> > -Daniel
> >
> >> Christian.
> >>
> >>> v2 (Jason Ekstrand):
> >>>    - Fix function argument indentation
> >>>
> >>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> >>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >>> Cc: Christian König <christian.koenig@amd.com>
> >>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> >>> Cc: Maxime Ripard <mripard@kernel.org>
> >>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> >>> Cc: Lucas Stach <l.stach@pengutronix.de>
> >>> Cc: Rob Clark <robdclark@gmail.com>
> >>> Cc: Sean Paul <sean@poorly.run>
> >>> Cc: Huang Rui <ray.huang@amd.com>
> >>> Cc: Gerd Hoffmann <kraxel@redhat.com>
> >>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> >>> ---
> >>>    drivers/dma-buf/dma-buf.c                     |  4 +--
> >>>    drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
> >>>    .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
> >>>    drivers/gpu/drm/drm_gem.c                     | 10 +++----
> >>>    drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
> >>>    drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
> >>>    drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
> >>>    drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
> >>>    drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
> >>>    .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
> >>>    drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
> >>>    drivers/gpu/drm/i915/i915_request.c           |  6 ++--
> >>>    drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
> >>>    drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
> >>>    drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
> >>>    drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
> >>>    drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
> >>>    drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
> >>>    drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
> >>>    drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
> >>>    drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
> >>>    drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
> >>>    drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
> >>>    drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
> >>>    include/linux/dma-resv.h                      | 18 ++++++------
> >>>    36 files changed, 108 insertions(+), 110 deletions(-)
> >>>
> >>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >>> index f264b70c383eb..ed6451d55d663 100644
> >>> --- a/drivers/dma-buf/dma-buf.c
> >>> +++ b/drivers/dma-buf/dma-buf.c
> >>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >>>     long ret;
> >>>     /* Wait on any implicit rendering fences */
> >>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
> >>> -                                             MAX_SCHEDULE_TIMEOUT);
> >>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> >>> +                                        MAX_SCHEDULE_TIMEOUT);
> >>>     if (ret < 0)
> >>>             return ret;
> >>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> >>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> >>> --- a/drivers/dma-buf/dma-resv.c
> >>> +++ b/drivers/dma-buf/dma-resv.c
> >>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
> >>>    EXPORT_SYMBOL(dma_resv_copy_fences);
> >>>    /**
> >>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> >>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> >>>     * fences without update side lock held
> >>>     * @obj: the reservation object
> >>>     * @pfence_excl: the returned exclusive fence (or NULL)
> >>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> >>>     * exclusive fence is not specified the fence is put into the array of the
> >>>     * shared fences as well. Returns either zero or -ENOMEM.
> >>>     */
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared)
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared)
> >>>    {
> >>>     struct dma_fence **shared = NULL;
> >>>     struct dma_fence *fence_excl;
> >>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>>     *pshared = shared;
> >>>     return ret;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> >>>    /**
> >>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> >>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> >>>     * shared and/or exclusive fences.
> >>>     * @obj: the reservation object
> >>>     * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> >>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >>>     * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> >>>     * greater than zero on success.
> >>>     */
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >>> -                          bool wait_all, bool intr,
> >>> -                          unsigned long timeout)
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> >>> +                               bool wait_all, bool intr,
> >>> +                               unsigned long timeout)
> >>>    {
> >>>     struct dma_fence *fence;
> >>>     unsigned seq, shared_count;
> >>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >>>     rcu_read_unlock();
> >>>     goto retry;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> >>>    static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>>    }
> >>>    /**
> >>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
> >>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> >>>     * fences have been signaled.
> >>>     * @obj: the reservation object
> >>>     * @test_all: if true, test all fences, otherwise only test the exclusive
> >>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>>     * RETURNS
> >>>     * true if all fences signaled, else false
> >>>     */
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> >>>    {
> >>>     unsigned seq, shared_count;
> >>>     int ret;
> >>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >>>     rcu_read_unlock();
> >>>     return ret;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> index 8a1fb8b6606e5..b8e24f199be9a 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> >>>             goto unpin;
> >>>     }
> >>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> >>> -                                         &work->shared_count,
> >>> -                                         &work->shared);
> >>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> >>> +                                    &work->shared_count,
> >>> +                                    &work->shared);
> >>>     if (unlikely(r != 0)) {
> >>>             DRM_ERROR("failed to get fences for buffer\n");
> >>>             goto unpin;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> index baa980a477d94..0d0319bc51577 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> >>>     if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> >>>             return 0;
> >>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> >>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> >>>     if (r)
> >>>             return r;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> index 18974bd081f00..8e2996d6ba3ad 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     }
> >>>     robj = gem_to_amdgpu_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> >>> -                                             timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> >>> +                                        timeout);
> >>>     /* ret == 0 means not signaled,
> >>>      * ret > 0 means signaled
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> index b4971e90b98cf..38e1b32dd2cef 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >>>     unsigned count;
> >>>     int r;
> >>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> >>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> >>>     if (r)
> >>>             goto fallback;
> >>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >>>     /* Not enough memory for the delayed delete, as last resort
> >>>      * block for all the fences to complete.
> >>>      */
> >>> -   dma_resv_wait_timeout_rcu(resv, true, false,
> >>> -                                       MAX_SCHEDULE_TIMEOUT);
> >>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
> >>> +                                  MAX_SCHEDULE_TIMEOUT);
> >>>     amdgpu_pasid_free(pasid);
> >>>    }
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> index 828b5167ff128..0319c8b547c48 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> >>>     mmu_interval_set_seq(mni, cur_seq);
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     mutex_unlock(&adev->notifier_lock);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> index 0adffcace3263..de1c7c5501683 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> >>>             return 0;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> >>> -                                           MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r < 0)
> >>>             return r;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> index c6dbc08016045..4a2196404fb69 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> >>>     ib->length_dw = 16;
> >>>     if (direct) {
> >>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> >>> -                                                   true, false,
> >>> -                                                   msecs_to_jiffies(10));
> >>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> >>> +                                              true, false,
> >>> +                                              msecs_to_jiffies(10));
> >>>             if (r == 0)
> >>>                     r = -ETIMEDOUT;
> >>>             if (r < 0)
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> index 4a3e3f72e1277..7ba1c537d6584 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >>>     unsigned i, shared_count;
> >>>     int r;
> >>> -   r = dma_resv_get_fences_rcu(resv, &excl,
> >>> -                                         &shared_count, &shared);
> >>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
> >>> +                                    &shared_count, &shared);
> >>>     if (r) {
> >>>             /* Not enough memory to grab the fence list, as last resort
> >>>              * block for all the fences to complete.
> >>>              */
> >>> -           dma_resv_wait_timeout_rcu(resv, true, false,
> >>> -                                               MAX_SCHEDULE_TIMEOUT);
> >>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
> >>> +                                          MAX_SCHEDULE_TIMEOUT);
> >>>             return;
> >>>     }
> >>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> >>>             return true;
> >>>     /* Don't evict VM page tables while they are busy */
> >>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> >>>             return false;
> >>>     /* Try to block ongoing updates */
> >>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> >>>     */
> >>>    long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> >>>    {
> >>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> >>> -                                       true, true, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> >>> +                                            true, true, timeout);
> >>>     if (timeout <= 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> index 9ca517b658546..0121d2817fa26 100644
> >>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> >>>              * deadlock during GPU reset when this fence will not signal
> >>>              * but we hold reservation lock for the BO.
> >>>              */
> >>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> >>> -                                                   false,
> >>> -                                                   msecs_to_jiffies(5000));
> >>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> >>> +                                              false,
> >>> +                                              msecs_to_jiffies(5000));
> >>>             if (unlikely(r <= 0))
> >>>                     DRM_ERROR("Waiting for fences timed out!");
> >>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >>> index 9989425e9875a..1241a421b9e81 100644
> >>> --- a/drivers/gpu/drm/drm_gem.c
> >>> +++ b/drivers/gpu/drm/drm_gem.c
> >>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> >>>             return -EINVAL;
> >>>     }
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> >>> +                                        true, timeout);
> >>>     if (ret == 0)
> >>>             ret = -ETIME;
> >>>     else if (ret > 0)
> >>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> >>>     if (!write) {
> >>>             struct dma_fence *fence =
> >>> -                   dma_resv_get_excl_rcu(obj->resv);
> >>> +                   dma_resv_get_excl_unlocked(obj->resv);
> >>>             return drm_gem_fence_array_add(fence_array, fence);
> >>>     }
> >>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> >>> -                                           &fence_count, &fences);
> >>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> >>> +                                      &fence_count, &fences);
> >>>     if (ret || !fence_count)
> >>>             return ret;
> >>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> index a005c5a0ba46a..a27135084ae5c 100644
> >>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> >>>             return 0;
> >>>     obj = drm_gem_fb_get_obj(state->fb, 0);
> >>> -   fence = dma_resv_get_excl_rcu(obj->resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
> >>>     drm_atomic_set_fence_for_plane(state, fence);
> >>>     return 0;
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> index db69f19ab5bca..4e6f5346e84e4 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> >>>     }
> >>>     if (op & ETNA_PREP_NOSYNC) {
> >>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
> >>> -                                                     write))
> >>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> >>>                     return -EBUSY;
> >>>     } else {
> >>>             unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
> >>> -                                                     write, true, remain);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
> >>> +                                                write, true, remain);
> >>>             if (ret <= 0)
> >>>                     return ret == 0 ? -ETIMEDOUT : ret;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> index d05c359945799..6617fada4595d 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> >>>                     continue;
> >>>             if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> >>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> >>> -                                                           &bo->nr_shared,
> >>> -                                                           &bo->shared);
> >>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> >>> +                                                      &bo->nr_shared,
> >>> +                                                      &bo->shared);
> >>>                     if (ret)
> >>>                             return ret;
> >>>             } else {
> >>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
> >>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
> >>>             }
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> >>> index 422b59ebf6dce..5f0b85a102159 100644
> >>> --- a/drivers/gpu/drm/i915/display/intel_display.c
> >>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> >>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> >>>             if (ret < 0)
> >>>                     goto unpin_fb;
> >>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>             if (fence) {
> >>>                     add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> >>>                                                fence);
> >>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
> >>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> @@ -10,7 +10,7 @@
> >>>    void dma_resv_prune(struct dma_resv *resv)
> >>>    {
> >>>     if (dma_resv_trylock(resv)) {
> >>> -           if (dma_resv_test_signaled_rcu(resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(resv, true))
> >>>                     dma_resv_add_excl_fence(resv, NULL);
> >>>             dma_resv_unlock(resv);
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> index 25235ef630c10..754ad6d1bace9 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>      * Alternatively, we can trade that extra information on read/write
> >>>      * activity with
> >>>      *      args->busy =
> >>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>      * to report the overall busyness. This is what the wait-ioctl does.
> >>>      *
> >>>      */
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> index 297143511f99b..e8f323564e57b 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> >>>     if (DBG_FORCE_RELOC)
> >>>             return false;
> >>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
> >>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
> >>>    }
> >>>    static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> index 2ebd79537aea9..7c0eb425cb3b3 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> >>>     struct dma_fence *fence;
> >>>     rcu_read_lock();
> >>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     rcu_read_unlock();
> >>>     if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> index a657b99ec7606..44df18dc9669f 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> >>>             return true;
> >>>     /* we will unbind on next submission, still have userptr pins */
> >>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> index 4b9856d5ba14f..5b6c52659ad4d 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>              */
> >>>             prune_fences = count && timeout >= 0;
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (excl && timeout >= 0)
> >>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                         &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> >>> index 970d8f4986bbe..f1ed03ced7dd1 100644
> >>> --- a/drivers/gpu/drm/i915/i915_request.c
> >>> +++ b/drivers/gpu/drm/i915/i915_request.c
> >>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                                   &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> index 2744558f30507..0bcb7ea44201e 100644
> >>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (ret >= 0 && excl && excl->ops != exclude) {
> >>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> >>> index 56df86e5f7400..1aca60507bb14 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gem.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gem.c
> >>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> >>>             op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> >>>     long ret;
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> >>> -                                             true,  remain);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> >>>     if (ret == 0)
> >>>             return remain == 0 ? -EBUSY : -ETIMEDOUT;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> index 0cb1f9d848d3e..8d048bacd6f02 100644
> >>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> >>>                     asyw->image.handle[0] = ctxdma->object.handle;
> >>>     }
> >>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> >>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> >>>     asyw->image.offset[0] = nvbo->offset;
> >>>     if (wndw->func->prepare) {
> >>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> index a70e82413fa75..bc6b09ee9b552 100644
> >>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     nvbo = nouveau_gem_object(gem);
> >>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> >>> -                                              no_wait ? 0 : 30 * HZ);
> >>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> >>> +                                         no_wait ? 0 : 30 * HZ);
> >>>     if (!lret)
> >>>             ret = -EBUSY;
> >>>     else if (lret > 0)
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> index ca07098a61419..eef5b632ee0ce 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> >>>     if (!gem_obj)
> >>>             return -ENOENT;
> >>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> >>> +                                        true, timeout);
> >>>     if (!ret)
> >>>             ret = timeout ? -ETIMEDOUT : -EBUSY;
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> index 6003cfeb13221..2df3e999a38d0 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> >>>     int i;
> >>>     for (i = 0; i < bo_count; i++)
> >>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> >>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> >>>    }
> >>>    static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> index 05ea2f39f6261..1a38b0bf36d11 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> >>>     }
> >>>     if (domain == RADEON_GEM_DOMAIN_CPU) {
> >>>             /* Asking for cpu access wait for object idle */
> >>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>             if (!r)
> >>>                     r = -EBUSY;
> >>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> >>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> >>>     if (r == 0)
> >>>             r = -EBUSY;
> >>>     else
> >>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>     if (ret == 0)
> >>>             r = -EBUSY;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> index e37c9a57a7c36..a19be3f8a218c 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> >>>             return true;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> index ca1b098b6a561..215cad3149621 100644
> >>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> >>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>     struct dma_resv *resv = &bo->base._resv;
> >>>     int ret;
> >>> -   if (dma_resv_test_signaled_rcu(resv, true))
> >>> +   if (dma_resv_test_signaled_unlocked(resv, true))
> >>>             ret = 0;
> >>>     else
> >>>             ret = -EBUSY;
> >>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>                     dma_resv_unlock(bo->base.resv);
> >>>             spin_unlock(&bo->bdev->lru_lock);
> >>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> >>> -                                            30 * HZ);
> >>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> >>> +                                                 30 * HZ);
> >>>             if (lret < 0)
> >>>                     return lret;
> >>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> >>>                     /* Last resort, if we fail to allocate memory for the
> >>>                      * fences block for the BO to become idle
> >>>                      */
> >>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> >>> -                                             30 * HZ);
> >>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> >>> +                                                  30 * HZ);
> >>>             }
> >>>             if (bo->bdev->funcs->release_notify)
> >>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> >>>             ttm_mem_io_free(bdev, &bo->mem);
> >>>     }
> >>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> >>>         !dma_resv_trylock(bo->base.resv)) {
> >>>             /* The BO is not idle, resurrect it for delayed destroy */
> >>>             ttm_bo_flush_all_fences(bo);
> >>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> >>>     long timeout = 15 * HZ;
> >>>     if (no_wait) {
> >>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> >>>                     return 0;
> >>>             else
> >>>                     return -EBUSY;
> >>>     }
> >>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> >>> -                                                 interruptible, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> >>> +                                            interruptible, timeout);
> >>>     if (timeout < 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> index 2902dc6e64faf..010a82405e374 100644
> >>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
> >>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> >>>     /* Check for a conflicting fence */
> >>>     resv = obj->resv;
> >>> -   if (!dma_resv_test_signaled_rcu(resv,
> >>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
> >>> +   if (!dma_resv_test_signaled_unlocked(resv,
> >>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
> >>>             ret = -EBUSY;
> >>>             goto err_fence;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> index 669f2ee395154..ab010c8e32816 100644
> >>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> >>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>     } else {
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> >>> -                                           timeout);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> >>> +                                                timeout);
> >>>     }
> >>>     if (ret == 0)
> >>>             ret = -EBUSY;
> >>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> index 04dd49c4c2572..19e1ce23842a9 100644
> >>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> >>>     if (flags & drm_vmw_synccpu_allow_cs) {
> >>>             long lret;
> >>> -           lret = dma_resv_wait_timeout_rcu
> >>> +           lret = dma_resv_wait_timeout_unlocked
> >>>                     (bo->base.resv, true, true,
> >>>                      nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> >>>             if (!lret)
> >>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> >>> index d44a77e8a7e34..99cfb7af966b8 100644
> >>> --- a/include/linux/dma-resv.h
> >>> +++ b/include/linux/dma-resv.h
> >>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>    }
> >>>    /**
> >>> - * dma_resv_get_excl_rcu - get the reservation object's
> >>> + * dma_resv_get_excl_unlocked - get the reservation object's
> >>>     * exclusive fence, without lock held.
> >>>     * @obj: the reservation object
> >>>     *
> >>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>     * The exclusive fence or NULL if none
> >>>     */
> >>>    static inline struct dma_fence *
> >>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
> >>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> >>>    {
> >>>     struct dma_fence *fence;
> >>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>>    void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared);
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared);
> >>>    int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> >>> -                          unsigned long timeout);
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> >>> +                               unsigned long timeout);
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> >>>    #endif /* _LINUX_RESERVATION_H */
>
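
For context, the unlocked read side that all of these helpers share
underneath is roughly the following (simplified sketch of the scheme in
dma-resv.c; fence refcounting and the shared-fence loop are omitted):

	struct dma_fence *fence;
	unsigned int seq;

	rcu_read_lock();
retry:
	seq = read_seqcount_begin(&obj->seq);
	fence = rcu_dereference(obj->fence_excl);
	/* ... test or copy the exclusive and shared fences ... */
	if (read_seqcount_retry(&obj->seq, seq))
		goto retry;	/* concurrent update, start over */
	rcu_read_unlock();

The RCU usage stays entirely internal, which is exactly why the _rcu
postfix tells callers nothing useful.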


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-27 11:58         ` [Intel-gfx] " Christian König
@ 2021-05-27 13:25           ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-05-27 13:25 UTC (permalink / raw)
  To: Christian König
  Cc: Gerd Hoffmann, Jason Ekstrand, intel-gfx, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Sean Paul

On Thu, May 27, 2021 at 1:59 PM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 27.05.21 um 12:39 schrieb Daniel Vetter:
> > On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
> >> Am 25.05.21 um 23:17 schrieb Jason Ekstrand:
> >>> None of these helpers actually leak any RCU details to the caller.  They
> >>> all assume you have a genuine reference, take the RCU read lock, and
> >>> retry if needed.  Naming them with an _rcu is likely to cause callers
> >>> more panic than needed.
> >> I'm really wondering if we need this postfix in the first place.
> >>
> >> If we use the right rcu_dereference_check() macro then those functions can
> >> be called with both the reservation object locked and unlocked. It shouldn't
> >> matter to them.
> >>
> >> But getting rid of the _rcu postfix sounds like a good idea in general to
> >> me.
> > So does that count as an ack or not? If so, I think we should land this
> > patch right away, since it's going to pick up bad conflicts real fast.
>
> I had some follow up discussion with Jason and I would rather like to
> switch to using rcu_dereference_check() in all places and completely
> remove the _rcu postfix.

Hm, I'm not sure whether spreading _rcu tricks further is an
especially bright idea. At least i915 is full of very clever _rcu
tricks, and encouraging drivers to roll out their own _rcu everywhere
is probably not in our best interest. Some fast-path checking is imo
ok, but that's it. Especially once we get into the entire
SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.

That's why I'm slightly leaning towards the _unlocked variants, except
we also call these helpers in plenty of places where we do hold
dma_resv_lock, so that name isn't entirely accurate either. Not sure
what the best plan overall is here.
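
For reference, the rcu_dereference_check() version Christian is
suggesting would look something like this (completely untested sketch,
reusing the existing dma_resv_held() lockdep helper):

	static inline struct dma_fence *
	dma_resv_excl_fence(struct dma_resv *obj)
	{
		/* Legal both with dma_resv_lock() held and under
		 * rcu_read_lock(); lockdep checks either case.
		 */
		return rcu_dereference_check(obj->fence_excl,
					     dma_resv_held(obj));
	}

One helper that is legal in both cases, so no postfix needed - but it
also means auditing every caller for which access rules it actually
relies on.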
-Daniel

> But yes I see the pain of rebasing this as well.
>
> Christian.
>
> > -Daniel
> >
> >> Christian.
> >>
> >>> v2 (Jason Ekstrand):
> >>>    - Fix function argument indentation
> >>>
> >>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> >>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >>> Cc: Christian König <christian.koenig@amd.com>
> >>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> >>> Cc: Maxime Ripard <mripard@kernel.org>
> >>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> >>> Cc: Lucas Stach <l.stach@pengutronix.de>
> >>> Cc: Rob Clark <robdclark@gmail.com>
> >>> Cc: Sean Paul <sean@poorly.run>
> >>> Cc: Huang Rui <ray.huang@amd.com>
> >>> Cc: Gerd Hoffmann <kraxel@redhat.com>
> >>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> >>> ---
> >>>    drivers/dma-buf/dma-buf.c                     |  4 +--
> >>>    drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
> >>>    .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
> >>>    drivers/gpu/drm/drm_gem.c                     | 10 +++----
> >>>    drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
> >>>    drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
> >>>    drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
> >>>    drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
> >>>    drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
> >>>    .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
> >>>    drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
> >>>    drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
> >>>    drivers/gpu/drm/i915/i915_request.c           |  6 ++--
> >>>    drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
> >>>    drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
> >>>    drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
> >>>    drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
> >>>    drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
> >>>    drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
> >>>    drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
> >>>    drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
> >>>    drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
> >>>    drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
> >>>    drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
> >>>    drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
> >>>    include/linux/dma-resv.h                      | 18 ++++++------
> >>>    36 files changed, 108 insertions(+), 110 deletions(-)
> >>>
> >>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >>> index f264b70c383eb..ed6451d55d663 100644
> >>> --- a/drivers/dma-buf/dma-buf.c
> >>> +++ b/drivers/dma-buf/dma-buf.c
> >>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >>>     long ret;
> >>>     /* Wait on any implicit rendering fences */
> >>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
> >>> -                                             MAX_SCHEDULE_TIMEOUT);
> >>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> >>> +                                        MAX_SCHEDULE_TIMEOUT);
> >>>     if (ret < 0)
> >>>             return ret;
> >>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> >>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> >>> --- a/drivers/dma-buf/dma-resv.c
> >>> +++ b/drivers/dma-buf/dma-resv.c
> >>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
> >>>    EXPORT_SYMBOL(dma_resv_copy_fences);
> >>>    /**
> >>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> >>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> >>>     * fences without update side lock held
> >>>     * @obj: the reservation object
> >>>     * @pfence_excl: the returned exclusive fence (or NULL)
> >>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> >>>     * exclusive fence is not specified the fence is put into the array of the
> >>>     * shared fences as well. Returns either zero or -ENOMEM.
> >>>     */
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared)
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared)
> >>>    {
> >>>     struct dma_fence **shared = NULL;
> >>>     struct dma_fence *fence_excl;
> >>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>>     *pshared = shared;
> >>>     return ret;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> >>>    /**
> >>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> >>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> >>>     * shared and/or exclusive fences.
> >>>     * @obj: the reservation object
> >>>     * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> >>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> >>>     * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> >>>     * greater than zero on success.
> >>>     */
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >>> -                          bool wait_all, bool intr,
> >>> -                          unsigned long timeout)
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> >>> +                               bool wait_all, bool intr,
> >>> +                               unsigned long timeout)
> >>>    {
> >>>     struct dma_fence *fence;
> >>>     unsigned seq, shared_count;
> >>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> >>>     rcu_read_unlock();
> >>>     goto retry;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> >>>    static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>>    }
> >>>    /**
> >>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
> >>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> >>>     * fences have been signaled.
> >>>     * @obj: the reservation object
> >>>     * @test_all: if true, test all fences, otherwise only test the exclusive
> >>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> >>>     * RETURNS
> >>>     * true if all fences signaled, else false
> >>>     */
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> >>>    {
> >>>     unsigned seq, shared_count;
> >>>     int ret;
> >>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> >>>     rcu_read_unlock();
> >>>     return ret;
> >>>    }
> >>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> >>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> index 8a1fb8b6606e5..b8e24f199be9a 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> >>>             goto unpin;
> >>>     }
> >>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> >>> -                                         &work->shared_count,
> >>> -                                         &work->shared);
> >>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> >>> +                                    &work->shared_count,
> >>> +                                    &work->shared);
> >>>     if (unlikely(r != 0)) {
> >>>             DRM_ERROR("failed to get fences for buffer\n");
> >>>             goto unpin;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> index baa980a477d94..0d0319bc51577 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> >>>     if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> >>>             return 0;
> >>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> >>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> >>>     if (r)
> >>>             return r;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> index 18974bd081f00..8e2996d6ba3ad 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> >>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     }
> >>>     robj = gem_to_amdgpu_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> >>> -                                             timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> >>> +                                        timeout);
> >>>     /* ret == 0 means not signaled,
> >>>      * ret > 0 means signaled
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> index b4971e90b98cf..38e1b32dd2cef 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> >>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >>>     unsigned count;
> >>>     int r;
> >>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> >>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> >>>     if (r)
> >>>             goto fallback;
> >>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> >>>     /* Not enough memory for the delayed delete, as last resort
> >>>      * block for all the fences to complete.
> >>>      */
> >>> -   dma_resv_wait_timeout_rcu(resv, true, false,
> >>> -                                       MAX_SCHEDULE_TIMEOUT);
> >>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
> >>> +                                  MAX_SCHEDULE_TIMEOUT);
> >>>     amdgpu_pasid_free(pasid);
> >>>    }
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> index 828b5167ff128..0319c8b547c48 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> >>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> >>>     mmu_interval_set_seq(mni, cur_seq);
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     mutex_unlock(&adev->notifier_lock);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> index 0adffcace3263..de1c7c5501683 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> >>>             return 0;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> >>> -                                           MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r < 0)
> >>>             return r;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> index c6dbc08016045..4a2196404fb69 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> >>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> >>>     ib->length_dw = 16;
> >>>     if (direct) {
> >>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> >>> -                                                   true, false,
> >>> -                                                   msecs_to_jiffies(10));
> >>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> >>> +                                              true, false,
> >>> +                                              msecs_to_jiffies(10));
> >>>             if (r == 0)
> >>>                     r = -ETIMEDOUT;
> >>>             if (r < 0)
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> index 4a3e3f72e1277..7ba1c537d6584 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >>>     unsigned i, shared_count;
> >>>     int r;
> >>> -   r = dma_resv_get_fences_rcu(resv, &excl,
> >>> -                                         &shared_count, &shared);
> >>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
> >>> +                                    &shared_count, &shared);
> >>>     if (r) {
> >>>             /* Not enough memory to grab the fence list, as last resort
> >>>              * block for all the fences to complete.
> >>>              */
> >>> -           dma_resv_wait_timeout_rcu(resv, true, false,
> >>> -                                               MAX_SCHEDULE_TIMEOUT);
> >>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
> >>> +                                          MAX_SCHEDULE_TIMEOUT);
> >>>             return;
> >>>     }
> >>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> >>>             return true;
> >>>     /* Don't evict VM page tables while they are busy */
> >>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> >>>             return false;
> >>>     /* Try to block ongoing updates */
> >>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> >>>     */
> >>>    long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> >>>    {
> >>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> >>> -                                       true, true, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> >>> +                                            true, true, timeout);
> >>>     if (timeout <= 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> index 9ca517b658546..0121d2817fa26 100644
> >>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> >>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> >>>              * deadlock during GPU reset when this fence will not signal
> >>>              * but we hold reservation lock for the BO.
> >>>              */
> >>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> >>> -                                                   false,
> >>> -                                                   msecs_to_jiffies(5000));
> >>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> >>> +                                              false,
> >>> +                                              msecs_to_jiffies(5000));
> >>>             if (unlikely(r <= 0))
> >>>                     DRM_ERROR("Waiting for fences timed out!");
> >>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >>> index 9989425e9875a..1241a421b9e81 100644
> >>> --- a/drivers/gpu/drm/drm_gem.c
> >>> +++ b/drivers/gpu/drm/drm_gem.c
> >>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> >>>             return -EINVAL;
> >>>     }
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> >>> +                                        true, timeout);
> >>>     if (ret == 0)
> >>>             ret = -ETIME;
> >>>     else if (ret > 0)
> >>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> >>>     if (!write) {
> >>>             struct dma_fence *fence =
> >>> -                   dma_resv_get_excl_rcu(obj->resv);
> >>> +                   dma_resv_get_excl_unlocked(obj->resv);
> >>>             return drm_gem_fence_array_add(fence_array, fence);
> >>>     }
> >>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> >>> -                                           &fence_count, &fences);
> >>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> >>> +                                      &fence_count, &fences);
> >>>     if (ret || !fence_count)
> >>>             return ret;
> >>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> index a005c5a0ba46a..a27135084ae5c 100644
> >>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> >>>             return 0;
> >>>     obj = drm_gem_fb_get_obj(state->fb, 0);
> >>> -   fence = dma_resv_get_excl_rcu(obj->resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
> >>>     drm_atomic_set_fence_for_plane(state, fence);
> >>>     return 0;
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> index db69f19ab5bca..4e6f5346e84e4 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> >>>     }
> >>>     if (op & ETNA_PREP_NOSYNC) {
> >>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
> >>> -                                                     write))
> >>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> >>>                     return -EBUSY;
> >>>     } else {
> >>>             unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
> >>> -                                                     write, true, remain);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
> >>> +                                                write, true, remain);
> >>>             if (ret <= 0)
> >>>                     return ret == 0 ? -ETIMEDOUT : ret;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> index d05c359945799..6617fada4595d 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> >>>                     continue;
> >>>             if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> >>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> >>> -                                                           &bo->nr_shared,
> >>> -                                                           &bo->shared);
> >>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> >>> +                                                      &bo->nr_shared,
> >>> +                                                      &bo->shared);
> >>>                     if (ret)
> >>>                             return ret;
> >>>             } else {
> >>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
> >>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
> >>>             }
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> >>> index 422b59ebf6dce..5f0b85a102159 100644
> >>> --- a/drivers/gpu/drm/i915/display/intel_display.c
> >>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> >>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> >>>             if (ret < 0)
> >>>                     goto unpin_fb;
> >>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>             if (fence) {
> >>>                     add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> >>>                                                fence);
> >>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
> >>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> @@ -10,7 +10,7 @@
> >>>    void dma_resv_prune(struct dma_resv *resv)
> >>>    {
> >>>     if (dma_resv_trylock(resv)) {
> >>> -           if (dma_resv_test_signaled_rcu(resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(resv, true))
> >>>                     dma_resv_add_excl_fence(resv, NULL);
> >>>             dma_resv_unlock(resv);
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> index 25235ef630c10..754ad6d1bace9 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>      * Alternatively, we can trade that extra information on read/write
> >>>      * activity with
> >>>      *      args->busy =
> >>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>      * to report the overall busyness. This is what the wait-ioctl does.
> >>>      *
> >>>      */
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> index 297143511f99b..e8f323564e57b 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> >>>     if (DBG_FORCE_RELOC)
> >>>             return false;
> >>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
> >>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
> >>>    }
> >>>    static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> index 2ebd79537aea9..7c0eb425cb3b3 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> >>>     struct dma_fence *fence;
> >>>     rcu_read_lock();
> >>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     rcu_read_unlock();
> >>>     if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> index a657b99ec7606..44df18dc9669f 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> >>>             return true;
> >>>     /* we will unbind on next submission, still have userptr pins */
> >>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> index 4b9856d5ba14f..5b6c52659ad4d 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>              */
> >>>             prune_fences = count && timeout >= 0;
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (excl && timeout >= 0)
> >>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                         &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> >>> index 970d8f4986bbe..f1ed03ced7dd1 100644
> >>> --- a/drivers/gpu/drm/i915/i915_request.c
> >>> +++ b/drivers/gpu/drm/i915/i915_request.c
> >>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                                   &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> index 2744558f30507..0bcb7ea44201e 100644
> >>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (ret >= 0 && excl && excl->ops != exclude) {
> >>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> >>> index 56df86e5f7400..1aca60507bb14 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gem.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gem.c
> >>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> >>>             op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> >>>     long ret;
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> >>> -                                             true,  remain);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> >>>     if (ret == 0)
> >>>             return remain == 0 ? -EBUSY : -ETIMEDOUT;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> index 0cb1f9d848d3e..8d048bacd6f02 100644
> >>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> >>>                     asyw->image.handle[0] = ctxdma->object.handle;
> >>>     }
> >>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> >>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> >>>     asyw->image.offset[0] = nvbo->offset;
> >>>     if (wndw->func->prepare) {
> >>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> index a70e82413fa75..bc6b09ee9b552 100644
> >>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     nvbo = nouveau_gem_object(gem);
> >>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> >>> -                                              no_wait ? 0 : 30 * HZ);
> >>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> >>> +                                         no_wait ? 0 : 30 * HZ);
> >>>     if (!lret)
> >>>             ret = -EBUSY;
> >>>     else if (lret > 0)
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> index ca07098a61419..eef5b632ee0ce 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> >>>     if (!gem_obj)
> >>>             return -ENOENT;
> >>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> >>> +                                        true, timeout);
> >>>     if (!ret)
> >>>             ret = timeout ? -ETIMEDOUT : -EBUSY;
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> index 6003cfeb13221..2df3e999a38d0 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> >>>     int i;
> >>>     for (i = 0; i < bo_count; i++)
> >>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> >>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> >>>    }
> >>>    static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> index 05ea2f39f6261..1a38b0bf36d11 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> >>>     }
> >>>     if (domain == RADEON_GEM_DOMAIN_CPU) {
> >>>             /* Asking for cpu access wait for object idle */
> >>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>             if (!r)
> >>>                     r = -EBUSY;
> >>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> >>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> >>>     if (r == 0)
> >>>             r = -EBUSY;
> >>>     else
> >>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>     if (ret == 0)
> >>>             r = -EBUSY;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> index e37c9a57a7c36..a19be3f8a218c 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> >>>             return true;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> index ca1b098b6a561..215cad3149621 100644
> >>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> >>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>     struct dma_resv *resv = &bo->base._resv;
> >>>     int ret;
> >>> -   if (dma_resv_test_signaled_rcu(resv, true))
> >>> +   if (dma_resv_test_signaled_unlocked(resv, true))
> >>>             ret = 0;
> >>>     else
> >>>             ret = -EBUSY;
> >>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>                     dma_resv_unlock(bo->base.resv);
> >>>             spin_unlock(&bo->bdev->lru_lock);
> >>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> >>> -                                            30 * HZ);
> >>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> >>> +                                                 30 * HZ);
> >>>             if (lret < 0)
> >>>                     return lret;
> >>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> >>>                     /* Last resort, if we fail to allocate memory for the
> >>>                      * fences block for the BO to become idle
> >>>                      */
> >>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> >>> -                                             30 * HZ);
> >>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> >>> +                                                  30 * HZ);
> >>>             }
> >>>             if (bo->bdev->funcs->release_notify)
> >>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> >>>             ttm_mem_io_free(bdev, &bo->mem);
> >>>     }
> >>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> >>>         !dma_resv_trylock(bo->base.resv)) {
> >>>             /* The BO is not idle, resurrect it for delayed destroy */
> >>>             ttm_bo_flush_all_fences(bo);
> >>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> >>>     long timeout = 15 * HZ;
> >>>     if (no_wait) {
> >>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> >>>                     return 0;
> >>>             else
> >>>                     return -EBUSY;
> >>>     }
> >>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> >>> -                                                 interruptible, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> >>> +                                            interruptible, timeout);
> >>>     if (timeout < 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> index 2902dc6e64faf..010a82405e374 100644
> >>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
> >>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> >>>     /* Check for a conflicting fence */
> >>>     resv = obj->resv;
> >>> -   if (!dma_resv_test_signaled_rcu(resv,
> >>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
> >>> +   if (!dma_resv_test_signaled_unlocked(resv,
> >>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
> >>>             ret = -EBUSY;
> >>>             goto err_fence;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> index 669f2ee395154..ab010c8e32816 100644
> >>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> >>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>     } else {
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> >>> -                                           timeout);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> >>> +                                                timeout);
> >>>     }
> >>>     if (ret == 0)
> >>>             ret = -EBUSY;
> >>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> index 04dd49c4c2572..19e1ce23842a9 100644
> >>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> >>>     if (flags & drm_vmw_synccpu_allow_cs) {
> >>>             long lret;
> >>> -           lret = dma_resv_wait_timeout_rcu
> >>> +           lret = dma_resv_wait_timeout_unlocked
> >>>                     (bo->base.resv, true, true,
> >>>                      nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> >>>             if (!lret)
> >>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> >>> index d44a77e8a7e34..99cfb7af966b8 100644
> >>> --- a/include/linux/dma-resv.h
> >>> +++ b/include/linux/dma-resv.h
> >>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>    }
> >>>    /**
> >>> - * dma_resv_get_excl_rcu - get the reservation object's
> >>> + * dma_resv_get_excl_unlocked - get the reservation object's
> >>>     * exclusive fence, without lock held.
> >>>     * @obj: the reservation object
> >>>     *
> >>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>     * The exclusive fence or NULL if none
> >>>     */
> >>>    static inline struct dma_fence *
> >>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
> >>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> >>>    {
> >>>     struct dma_fence *fence;
> >>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>>    void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared);
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared);
> >>>    int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> >>> -                          unsigned long timeout);
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> >>> +                               unsigned long timeout);
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> >>>    #endif /* _LINUX_RESERVATION_H */
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread
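
For a sense of what call sites look like after the rename, a minimal
sketch of a driver-side wait using the new name, following the
return-value convention from the doc comments in the diff above (the
function and its 100 ms timeout are illustrative):

/* Sketch only: wait for all implicit fences on a GEM object. */
static int example_wait_idle(struct drm_gem_object *obj)
{
	long ret;

	ret = dma_resv_wait_timeout_unlocked(obj->resv,
					     true /* wait_all */,
					     true /* intr */,
					     msecs_to_jiffies(100));
	if (ret == 0)
		return -ETIMEDOUT;	/* wait timed out */
	if (ret < 0)
		return ret;		/* e.g. -ERESTARTSYS if interrupted */
	return 0;			/* all fences signaled */
}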

> >>>     if (!write) {
> >>>             struct dma_fence *fence =
> >>> -                   dma_resv_get_excl_rcu(obj->resv);
> >>> +                   dma_resv_get_excl_unlocked(obj->resv);
> >>>             return drm_gem_fence_array_add(fence_array, fence);
> >>>     }
> >>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> >>> -                                           &fence_count, &fences);
> >>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> >>> +                                      &fence_count, &fences);
> >>>     if (ret || !fence_count)
> >>>             return ret;
> >>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> index a005c5a0ba46a..a27135084ae5c 100644
> >>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> >>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> >>>             return 0;
> >>>     obj = drm_gem_fb_get_obj(state->fb, 0);
> >>> -   fence = dma_resv_get_excl_rcu(obj->resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
> >>>     drm_atomic_set_fence_for_plane(state, fence);
> >>>     return 0;
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> index db69f19ab5bca..4e6f5346e84e4 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> >>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> >>>     }
> >>>     if (op & ETNA_PREP_NOSYNC) {
> >>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
> >>> -                                                     write))
> >>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> >>>                     return -EBUSY;
> >>>     } else {
> >>>             unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
> >>> -                                                     write, true, remain);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
> >>> +                                                write, true, remain);
> >>>             if (ret <= 0)
> >>>                     return ret == 0 ? -ETIMEDOUT : ret;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> index d05c359945799..6617fada4595d 100644
> >>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> >>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> >>>                     continue;
> >>>             if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> >>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> >>> -                                                           &bo->nr_shared,
> >>> -                                                           &bo->shared);
> >>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> >>> +                                                      &bo->nr_shared,
> >>> +                                                      &bo->shared);
> >>>                     if (ret)
> >>>                             return ret;
> >>>             } else {
> >>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
> >>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
> >>>             }
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> >>> index 422b59ebf6dce..5f0b85a102159 100644
> >>> --- a/drivers/gpu/drm/i915/display/intel_display.c
> >>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> >>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> >>>             if (ret < 0)
> >>>                     goto unpin_fb;
> >>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>             if (fence) {
> >>>                     add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> >>>                                                fence);
> >>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
> >>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> >>> @@ -10,7 +10,7 @@
> >>>    void dma_resv_prune(struct dma_resv *resv)
> >>>    {
> >>>     if (dma_resv_trylock(resv)) {
> >>> -           if (dma_resv_test_signaled_rcu(resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(resv, true))
> >>>                     dma_resv_add_excl_fence(resv, NULL);
> >>>             dma_resv_unlock(resv);
> >>>     }
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> index 25235ef630c10..754ad6d1bace9 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> >>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>      * Alternatively, we can trade that extra information on read/write
> >>>      * activity with
> >>>      *      args->busy =
> >>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>      * to report the overall busyness. This is what the wait-ioctl does.
> >>>      *
> >>>      */
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> index 297143511f99b..e8f323564e57b 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> >>>     if (DBG_FORCE_RELOC)
> >>>             return false;
> >>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
> >>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
> >>>    }
> >>>    static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> index 2ebd79537aea9..7c0eb425cb3b3 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> >>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> >>>     struct dma_fence *fence;
> >>>     rcu_read_lock();
> >>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     rcu_read_unlock();
> >>>     if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> index a657b99ec7606..44df18dc9669f 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> >>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> >>>             return true;
> >>>     /* we will unbind on next submission, still have userptr pins */
> >>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> index 4b9856d5ba14f..5b6c52659ad4d 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> >>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> >>>              */
> >>>             prune_fences = count && timeout >= 0;
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (excl && timeout >= 0)
> >>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             unsigned int count, i;
> >>>             int ret;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                         &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> >>> index 970d8f4986bbe..f1ed03ced7dd1 100644
> >>> --- a/drivers/gpu/drm/i915/i915_request.c
> >>> +++ b/drivers/gpu/drm/i915/i915_request.c
> >>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> >>> -                                                   &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> >>> +                                              &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> >>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> >>>     }
> >>>     if (excl) {
> >>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> index 2744558f30507..0bcb7ea44201e 100644
> >>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> >>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>             struct dma_fence **shared;
> >>>             unsigned int count, i;
> >>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> >>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> >>>             if (ret)
> >>>                     return ret;
> >>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> >>>                     dma_fence_put(shared[i]);
> >>>             kfree(shared);
> >>>     } else {
> >>> -           excl = dma_resv_get_excl_rcu(resv);
> >>> +           excl = dma_resv_get_excl_unlocked(resv);
> >>>     }
> >>>     if (ret >= 0 && excl && excl->ops != exclude) {
> >>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> >>> index 56df86e5f7400..1aca60507bb14 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gem.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gem.c
> >>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> >>>             op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> >>>     long ret;
> >>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> >>> -                                             true,  remain);
> >>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> >>>     if (ret == 0)
> >>>             return remain == 0 ? -EBUSY : -ETIMEDOUT;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> index 0cb1f9d848d3e..8d048bacd6f02 100644
> >>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> >>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> >>>                     asyw->image.handle[0] = ctxdma->object.handle;
> >>>     }
> >>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> >>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> >>>     asyw->image.offset[0] = nvbo->offset;
> >>>     if (wndw->func->prepare) {
> >>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> index a70e82413fa75..bc6b09ee9b552 100644
> >>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> >>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     nvbo = nouveau_gem_object(gem);
> >>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> >>> -                                              no_wait ? 0 : 30 * HZ);
> >>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> >>> +                                         no_wait ? 0 : 30 * HZ);
> >>>     if (!lret)
> >>>             ret = -EBUSY;
> >>>     else if (lret > 0)
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> index ca07098a61419..eef5b632ee0ce 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> >>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> >>>     if (!gem_obj)
> >>>             return -ENOENT;
> >>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> >>> -                                             true, timeout);
> >>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> >>> +                                        true, timeout);
> >>>     if (!ret)
> >>>             ret = timeout ? -ETIMEDOUT : -EBUSY;
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> index 6003cfeb13221..2df3e999a38d0 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> >>>     int i;
> >>>     for (i = 0; i < bo_count; i++)
> >>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> >>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> >>>    }
> >>>    static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> index 05ea2f39f6261..1a38b0bf36d11 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> >>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> >>>     }
> >>>     if (domain == RADEON_GEM_DOMAIN_CPU) {
> >>>             /* Asking for cpu access wait for object idle */
> >>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>             if (!r)
> >>>                     r = -EBUSY;
> >>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> >>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> >>>     if (r == 0)
> >>>             r = -EBUSY;
> >>>     else
> >>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> >>>     }
> >>>     robj = gem_to_radeon_bo(gobj);
> >>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> >>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> >>>     if (ret == 0)
> >>>             r = -EBUSY;
> >>>     else if (ret < 0)
> >>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> index e37c9a57a7c36..a19be3f8a218c 100644
> >>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
> >>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> >>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> >>>             return true;
> >>>     }
> >>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> >>> -                                 MAX_SCHEDULE_TIMEOUT);
> >>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> >>> +                                      MAX_SCHEDULE_TIMEOUT);
> >>>     if (r <= 0)
> >>>             DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> >>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> index ca1b098b6a561..215cad3149621 100644
> >>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> >>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> >>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>     struct dma_resv *resv = &bo->base._resv;
> >>>     int ret;
> >>> -   if (dma_resv_test_signaled_rcu(resv, true))
> >>> +   if (dma_resv_test_signaled_unlocked(resv, true))
> >>>             ret = 0;
> >>>     else
> >>>             ret = -EBUSY;
> >>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> >>>                     dma_resv_unlock(bo->base.resv);
> >>>             spin_unlock(&bo->bdev->lru_lock);
> >>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> >>> -                                            30 * HZ);
> >>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> >>> +                                                 30 * HZ);
> >>>             if (lret < 0)
> >>>                     return lret;
> >>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> >>>                     /* Last resort, if we fail to allocate memory for the
> >>>                      * fences block for the BO to become idle
> >>>                      */
> >>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> >>> -                                             30 * HZ);
> >>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> >>> +                                                  30 * HZ);
> >>>             }
> >>>             if (bo->bdev->funcs->release_notify)
> >>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> >>>             ttm_mem_io_free(bdev, &bo->mem);
> >>>     }
> >>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> >>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> >>>         !dma_resv_trylock(bo->base.resv)) {
> >>>             /* The BO is not idle, resurrect it for delayed destroy */
> >>>             ttm_bo_flush_all_fences(bo);
> >>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> >>>     long timeout = 15 * HZ;
> >>>     if (no_wait) {
> >>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> >>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> >>>                     return 0;
> >>>             else
> >>>                     return -EBUSY;
> >>>     }
> >>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> >>> -                                                 interruptible, timeout);
> >>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> >>> +                                            interruptible, timeout);
> >>>     if (timeout < 0)
> >>>             return timeout;
> >>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> index 2902dc6e64faf..010a82405e374 100644
> >>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
> >>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> >>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> >>>     /* Check for a conflicting fence */
> >>>     resv = obj->resv;
> >>> -   if (!dma_resv_test_signaled_rcu(resv,
> >>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
> >>> +   if (!dma_resv_test_signaled_unlocked(resv,
> >>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
> >>>             ret = -EBUSY;
> >>>             goto err_fence;
> >>>     }
> >>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> index 669f2ee395154..ab010c8e32816 100644
> >>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> >>>             return -ENOENT;
> >>>     if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> >>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
> >>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> >>>     } else {
> >>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> >>> -                                           timeout);
> >>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> >>> +                                                timeout);
> >>>     }
> >>>     if (ret == 0)
> >>>             ret = -EBUSY;
> >>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> index 04dd49c4c2572..19e1ce23842a9 100644
> >>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> >>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> >>>     if (flags & drm_vmw_synccpu_allow_cs) {
> >>>             long lret;
> >>> -           lret = dma_resv_wait_timeout_rcu
> >>> +           lret = dma_resv_wait_timeout_unlocked
> >>>                     (bo->base.resv, true, true,
> >>>                      nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> >>>             if (!lret)
> >>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> >>> index d44a77e8a7e34..99cfb7af966b8 100644
> >>> --- a/include/linux/dma-resv.h
> >>> +++ b/include/linux/dma-resv.h
> >>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>    }
> >>>    /**
> >>> - * dma_resv_get_excl_rcu - get the reservation object's
> >>> + * dma_resv_get_excl_unlocked - get the reservation object's
> >>>     * exclusive fence, without lock held.
> >>>     * @obj: the reservation object
> >>>     *
> >>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> >>>     * The exclusive fence or NULL if none
> >>>     */
> >>>    static inline struct dma_fence *
> >>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
> >>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> >>>    {
> >>>     struct dma_fence *fence;
> >>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>>    void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> >>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> >>> -                       struct dma_fence **pfence_excl,
> >>> -                       unsigned *pshared_count,
> >>> -                       struct dma_fence ***pshared);
> >>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> >>> +                            struct dma_fence **pfence_excl,
> >>> +                            unsigned *pshared_count,
> >>> +                            struct dma_fence ***pshared);
> >>>    int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> >>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> >>> -                          unsigned long timeout);
> >>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> >>> +                               unsigned long timeout);
> >>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> >>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> >>>    #endif /* _LINUX_RESERVATION_H */
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-27 13:25           ` [Intel-gfx] " Daniel Vetter
@ 2021-05-27 13:41             ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-27 13:41 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, Jason Ekstrand, intel-gfx, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Sean Paul

On 27.05.21 at 15:25, Daniel Vetter wrote:
> On Thu, May 27, 2021 at 1:59 PM Christian König
> <christian.koenig@amd.com> wrote:
>> On 27.05.21 at 12:39, Daniel Vetter wrote:
>>> On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
>>>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>>>> None of these helpers actually leak any RCU details to the caller.  They
>>>>> all assume you have a genuine reference, take the RCU read lock, and
>>>>> retry if needed.  Naming them with an _rcu is likely to cause callers
>>>>> more panic than needed.
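
The retry loop those helpers hide is roughly the following; a
simplified sketch of the unlocked test_signaled path, not the exact
kernel code, and the _sketch name is made up:

#include <linux/dma-resv.h>

bool dma_resv_test_signaled_sketch(struct dma_resv *obj, bool test_all)
{
        struct dma_fence *fence;
        unsigned int seq;
        bool ret;

retry:
        ret = true;
        /* sample the write-side sequence counter */
        seq = read_seqcount_begin(&obj->seq);
        rcu_read_lock();

        fence = dma_fence_get_rcu_safe(&obj->fence_excl);
        if (fence) {
                ret = dma_fence_is_signaled(fence);
                dma_fence_put(fence);
        }
        /* a full version also walks obj->fence (the shared fence
         * list) here when test_all is set */

        rcu_read_unlock();
        /* a writer raced with us, throw the result away and redo */
        if (read_seqcount_retry(&obj->seq, seq))
                goto retry;
        return ret;
}
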
>>>> I'm really wondering if we need this postfix in the first place.
>>>>
>>>> If we use the right rcu_dereference_check() macro then those functions can
>>>> be called with both the reservation object locked and unlocked. It shouldn't
>>>> matter to them.
>>>>
>>>> But getting rid of the _rcu postfix sounds like a good idea in general to
>>>> me.
>>> So does that count as an ack or not? If yes, I think we should land this
>>> patch right away, since it's going to start conflicting badly very fast.
>> I had some follow-up discussion with Jason and I would rather like to
>> switch to using rcu_dereference_check() in all places and completely
>> remove the _rcu postfix.
> Hm, I'm not sure whether spreading _rcu tricks further is an
> especially bright idea. At least i915 is full of very clever _rcu
> tricks, and encouraging drivers to roll out their own _rcu everywhere
> is probably not in our best interest. Some fast-path checking is imo
> ok, but that's it. Especially once we get into the entire
> SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.

Oh, yes, completely agree. SLAB_TYPESAFE_BY_RCU is optimizing for the 
wrong use case, I think.

You save a bit of overhead while freeing fences, but in return you have 
extra overhead while adding fences to the dma_resv object.
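
And the reader side pays as well: every lookup has to do the
re-validation dance that dma_fence_get_rcu_safe() implements, roughly
like this (a sketch with a made-up name; the caller holds the RCU
read lock):

#include <linux/dma-fence.h>

static struct dma_fence *
fence_get_rcu_safe_sketch(struct dma_fence __rcu **fencep)
{
        do {
                struct dma_fence *fence;

                fence = rcu_dereference(*fencep);
                if (!fence)
                        return NULL;

                /* kref hit zero under us, reload and try again */
                if (!dma_fence_get_rcu(fence))
                        continue;

                /* still the same object? then the reference is good */
                if (fence == rcu_access_pointer(*fencep))
                        return fence;

                /* the slab recycled the memory for a new fence
                 * between the dereference and the kref_get, so drop
                 * the wrong reference and retry */
                dma_fence_put(fence);
        } while (1);
}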

> That's why I'm slightly leaning towards the _unlocked variants, except we
> do use those in lots of places where we hold dma_resv_lock too. So I'm not
> sure what the best plan overall is here.

Well what function names are we actually talking about?

For the dma_resv_get_excl_rcu() case I agree we should probably rename 
it to dma_resv_get_excl_unlocked(), because it makes no sense at all to 
use this function while holding the lock.

But for the following functions:
dma_resv_get_fences_rcu
dma_resv_wait_timeout_rcu
dma_resv_test_signaled_rcu

I think we should just drop the _rcu naming because those are supposed 
to work independently of whether the resv lock is held or not.
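
Concretely, something like this; a sketch with a made-up helper name,
relying on dma_resv_held(), which already wraps lockdep_is_held() on
the object's ww_mutex:

#include <linux/dma-resv.h>

static inline struct dma_fence *
dma_resv_excl_fence_sketch(struct dma_resv *obj)
{
        /* lockdep accepts either an RCU read-side section or the
         * held reservation lock */
        return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj));
}

Lifetime handling stays with the caller, of course: without the lock
the returned fence is only valid for the duration of the RCU critical
section unless an actual reference is taken.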

Regards,
Christian.

> -Daniel
>
>> But yes, I see the pain of rebasing this as well.
>>
>> Christian.
>>
>>> -Daniel
>>>
>>>> Christian.
>>>>
>>>>> v2 (Jason Ekstrand):
>>>>>     - Fix function argument indentation
>>>>>
>>>>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
>>>>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>>> Cc: Maxime Ripard <mripard@kernel.org>
>>>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>>>> Cc: Rob Clark <robdclark@gmail.com>
>>>>> Cc: Sean Paul <sean@poorly.run>
>>>>> Cc: Huang Rui <ray.huang@amd.com>
>>>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>>>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
>>>>> ---
>>>>>     drivers/dma-buf/dma-buf.c                     |  4 +--
>>>>>     drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>>>>>     .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>>>>>     drivers/gpu/drm/drm_gem.c                     | 10 +++----
>>>>>     drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>>>>>     drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>>>>>     drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>>>>>     drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>>>>>     drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>>>>>     .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>>>>>     drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>>>>>     drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>>>>>     drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>>>>>     drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>>>>>     drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>>>>>     drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>>>>>     drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>>>>>     drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>>>>>     drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>>>>>     drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>>>>>     drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>>>>>     drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>>>>>     drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>>>>>     include/linux/dma-resv.h                      | 18 ++++++------
>>>>>     36 files changed, 108 insertions(+), 110 deletions(-)
>>>>>
>>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>>> index f264b70c383eb..ed6451d55d663 100644
>>>>> --- a/drivers/dma-buf/dma-buf.c
>>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>>>      long ret;
>>>>>      /* Wait on any implicit rendering fences */
>>>>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
>>>>> -                                             MAX_SCHEDULE_TIMEOUT);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
>>>>> +                                        MAX_SCHEDULE_TIMEOUT);
>>>>>      if (ret < 0)
>>>>>              return ret;
>>>>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
>>>>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
>>>>> --- a/drivers/dma-buf/dma-resv.c
>>>>> +++ b/drivers/dma-buf/dma-resv.c
>>>>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>>>>>     EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>     /**
>>>>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
>>>>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>>>>>      * fences without update side lock held
>>>>>      * @obj: the reservation object
>>>>>      * @pfence_excl: the returned exclusive fence (or NULL)
>>>>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>      * exclusive fence is not specified the fence is put into the array of the
>>>>>      * shared fences as well. Returns either zero or -ENOMEM.
>>>>>      */
>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>> -                       struct dma_fence **pfence_excl,
>>>>> -                       unsigned *pshared_count,
>>>>> -                       struct dma_fence ***pshared)
>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>> +                            struct dma_fence **pfence_excl,
>>>>> +                            unsigned *pshared_count,
>>>>> +                            struct dma_fence ***pshared)
>>>>>     {
>>>>>      struct dma_fence **shared = NULL;
>>>>>      struct dma_fence *fence_excl;
>>>>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>      *pshared = shared;
>>>>>      return ret;
>>>>>     }
>>>>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>>>>>     /**
>>>>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
>>>>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>>>>>      * shared and/or exclusive fences.
>>>>>      * @obj: the reservation object
>>>>>      * @wait_all: if true, wait on all fences, else wait on just exclusive fence
>>>>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>>      * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>>>>>      * greater than zer on success.
>>>>>      */
>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>> -                          bool wait_all, bool intr,
>>>>> -                          unsigned long timeout)
>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
>>>>> +                               bool wait_all, bool intr,
>>>>> +                               unsigned long timeout)
>>>>>     {
>>>>>      struct dma_fence *fence;
>>>>>      unsigned seq, shared_count;
>>>>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>>      rcu_read_unlock();
>>>>>      goto retry;
>>>>>     }
>>>>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
>>>>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>>>>>     static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>     }
>>>>>     /**
>>>>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
>>>>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>>>>>      * fences have been signaled.
>>>>>      * @obj: the reservation object
>>>>>      * @test_all: if true, test all fences, otherwise only test the exclusive
>>>>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>      * RETURNS
>>>>>      * true if all fences signaled, else false
>>>>>      */
>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>>>>>     {
>>>>>      unsigned seq, shared_count;
>>>>>      int ret;
>>>>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>>      rcu_read_unlock();
>>>>>      return ret;
>>>>>     }
>>>>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
>>>>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>> index 8a1fb8b6606e5..b8e24f199be9a 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>>>>>              goto unpin;
>>>>>      }
>>>>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
>>>>> -                                         &work->shared_count,
>>>>> -                                         &work->shared);
>>>>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
>>>>> +                                    &work->shared_count,
>>>>> +                                    &work->shared);
>>>>>      if (unlikely(r != 0)) {
>>>>>              DRM_ERROR("failed to get fences for buffer\n");
>>>>>              goto unpin;
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>> index baa980a477d94..0d0319bc51577 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>>>>>      if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>>>>>              return 0;
>>>>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
>>>>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>>>>>      if (r)
>>>>>              return r;
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>> index 18974bd081f00..8e2996d6ba3ad 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>              return -ENOENT;
>>>>>      }
>>>>>      robj = gem_to_amdgpu_bo(gobj);
>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
>>>>> -                                             timeout);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
>>>>> +                                        timeout);
>>>>>      /* ret == 0 means not signaled,
>>>>>       * ret > 0 means signaled
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>> index b4971e90b98cf..38e1b32dd2cef 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>      unsigned count;
>>>>>      int r;
>>>>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
>>>>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>>>>>      if (r)
>>>>>              goto fallback;
>>>>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>      /* Not enough memory for the delayed delete, as last resort
>>>>>       * block for all the fences to complete.
>>>>>       */
>>>>> -   dma_resv_wait_timeout_rcu(resv, true, false,
>>>>> -                                       MAX_SCHEDULE_TIMEOUT);
>>>>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>> +                                  MAX_SCHEDULE_TIMEOUT);
>>>>>      amdgpu_pasid_free(pasid);
>>>>>     }
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>> index 828b5167ff128..0319c8b547c48 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>>>>>      mmu_interval_set_seq(mni, cur_seq);
>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      mutex_unlock(&adev->notifier_lock);
>>>>>      if (r <= 0)
>>>>>              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>> index 0adffcace3263..de1c7c5501683 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>>>>>              return 0;
>>>>>      }
>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
>>>>> -                                           MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      if (r < 0)
>>>>>              return r;
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>> index c6dbc08016045..4a2196404fb69 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>>>      ib->length_dw = 16;
>>>>>      if (direct) {
>>>>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
>>>>> -                                                   true, false,
>>>>> -                                                   msecs_to_jiffies(10));
>>>>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
>>>>> +                                              true, false,
>>>>> +                                              msecs_to_jiffies(10));
>>>>>              if (r == 0)
>>>>>                      r = -ETIMEDOUT;
>>>>>              if (r < 0)
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> index 4a3e3f72e1277..7ba1c537d6584 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>>      unsigned i, shared_count;
>>>>>      int r;
>>>>> -   r = dma_resv_get_fences_rcu(resv, &excl,
>>>>> -                                         &shared_count, &shared);
>>>>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
>>>>> +                                    &shared_count, &shared);
>>>>>      if (r) {
>>>>>              /* Not enough memory to grab the fence list, as last resort
>>>>>               * block for all the fences to complete.
>>>>>               */
>>>>> -           dma_resv_wait_timeout_rcu(resv, true, false,
>>>>> -                                               MAX_SCHEDULE_TIMEOUT);
>>>>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>> +                                          MAX_SCHEDULE_TIMEOUT);
>>>>>              return;
>>>>>      }
>>>>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>>>>>              return true;
>>>>>      /* Don't evict VM page tables while they are busy */
>>>>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>>>>>              return false;
>>>>>      /* Try to block ongoing updates */
>>>>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>>>>>      */
>>>>>     long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>>>>>     {
>>>>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
>>>>> -                                       true, true, timeout);
>>>>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
>>>>> +                                            true, true, timeout);
>>>>>      if (timeout <= 0)
>>>>>              return timeout;
>>>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>> index 9ca517b658546..0121d2817fa26 100644
>>>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>>>>>               * deadlock during GPU reset when this fence will not signal
>>>>>               * but we hold reservation lock for the BO.
>>>>>               */
>>>>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
>>>>> -                                                   false,
>>>>> -                                                   msecs_to_jiffies(5000));
>>>>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
>>>>> +                                              false,
>>>>> +                                              msecs_to_jiffies(5000));
>>>>>              if (unlikely(r <= 0))
>>>>>                      DRM_ERROR("Waiting for fences timed out!");
>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>> index 9989425e9875a..1241a421b9e81 100644
>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>>>>>              return -EINVAL;
>>>>>      }
>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
>>>>> -                                             true, timeout);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
>>>>> +                                        true, timeout);
>>>>>      if (ret == 0)
>>>>>              ret = -ETIME;
>>>>>      else if (ret > 0)
>>>>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>>>>>      if (!write) {
>>>>>              struct dma_fence *fence =
>>>>> -                   dma_resv_get_excl_rcu(obj->resv);
>>>>> +                   dma_resv_get_excl_unlocked(obj->resv);
>>>>>              return drm_gem_fence_array_add(fence_array, fence);
>>>>>      }
>>>>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
>>>>> -                                           &fence_count, &fences);
>>>>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
>>>>> +                                      &fence_count, &fences);
>>>>>      if (ret || !fence_count)
>>>>>              return ret;
>>>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>> index a005c5a0ba46a..a27135084ae5c 100644
>>>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>>>              return 0;
>>>>>      obj = drm_gem_fb_get_obj(state->fb, 0);
>>>>> -   fence = dma_resv_get_excl_rcu(obj->resv);
>>>>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
>>>>>      drm_atomic_set_fence_for_plane(state, fence);
>>>>>      return 0;
>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>> index db69f19ab5bca..4e6f5346e84e4 100644
>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>>>>>      }
>>>>>      if (op & ETNA_PREP_NOSYNC) {
>>>>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
>>>>> -                                                     write))
>>>>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>>>>>                      return -EBUSY;
>>>>>      } else {
>>>>>              unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
>>>>> -                                                     write, true, remain);
>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
>>>>> +                                                write, true, remain);
>>>>>              if (ret <= 0)
>>>>>                      return ret == 0 ? -ETIMEDOUT : ret;
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>> index d05c359945799..6617fada4595d 100644
>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>>>>>                      continue;
>>>>>              if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
>>>>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
>>>>> -                                                           &bo->nr_shared,
>>>>> -                                                           &bo->shared);
>>>>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
>>>>> +                                                      &bo->nr_shared,
>>>>> +                                                      &bo->shared);
>>>>>                      if (ret)
>>>>>                              return ret;
>>>>>              } else {
>>>>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
>>>>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
>>>>>              }
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
>>>>> index 422b59ebf6dce..5f0b85a102159 100644
>>>>> --- a/drivers/gpu/drm/i915/display/intel_display.c
>>>>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
>>>>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>>>>>              if (ret < 0)
>>>>>                      goto unpin_fb;
>>>>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>              if (fence) {
>>>>>                      add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>>>>>                                                 fence);
>>>>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
>>>>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>> @@ -10,7 +10,7 @@
>>>>>     void dma_resv_prune(struct dma_resv *resv)
>>>>>     {
>>>>>      if (dma_resv_trylock(resv)) {
>>>>> -           if (dma_resv_test_signaled_rcu(resv, true))
>>>>> +           if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>                      dma_resv_add_excl_fence(resv, NULL);
>>>>>              dma_resv_unlock(resv);
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>> index 25235ef630c10..754ad6d1bace9 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>       * Alternatively, we can trade that extra information on read/write
>>>>>       * activity with
>>>>>       *      args->busy =
>>>>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
>>>>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>       * to report the overall busyness. This is what the wait-ioctl does.
>>>>>       *
>>>>>       */
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>> index 297143511f99b..e8f323564e57b 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>>>>>      if (DBG_FORCE_RELOC)
>>>>>              return false;
>>>>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
>>>>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
>>>>>     }
>>>>>     static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>> index 2ebd79537aea9..7c0eb425cb3b3 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>>>>>      struct dma_fence *fence;
>>>>>      rcu_read_lock();
>>>>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>      rcu_read_unlock();
>>>>>      if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>> index a657b99ec7606..44df18dc9669f 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>>>>              return true;
>>>>>      /* we will unbind on next submission, still have userptr pins */
>>>>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      if (r <= 0)
>>>>>              drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>> index 4b9856d5ba14f..5b6c52659ad4d 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>              unsigned int count, i;
>>>>>              int ret;
>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>               */
>>>>>              prune_fences = count && timeout >= 0;
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>      }
>>>>>      if (excl && timeout >= 0)
>>>>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>              unsigned int count, i;
>>>>>              int ret;
>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>> -                                         &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>> +                                              &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>              kfree(shared);
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>      }
>>>>>      if (excl) {
>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>>>> index 970d8f4986bbe..f1ed03ced7dd1 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>>>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>>>>>              struct dma_fence **shared;
>>>>>              unsigned int count, i;
>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>> -                                                   &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>> +                                              &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>>>>>                      dma_fence_put(shared[i]);
>>>>>              kfree(shared);
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>      }
>>>>>      if (excl) {
>>>>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>> index 2744558f30507..0bcb7ea44201e 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>              struct dma_fence **shared;
>>>>>              unsigned int count, i;
>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>                      dma_fence_put(shared[i]);
>>>>>              kfree(shared);
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>      }
>>>>>      if (ret >= 0 && excl && excl->ops != exclude) {
>>>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>>>> index 56df86e5f7400..1aca60507bb14 100644
>>>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>>>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>>>>>              op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>>>>>      long ret;
>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
>>>>> -                                             true,  remain);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>>>>>      if (ret == 0)
>>>>>              return remain == 0 ? -EBUSY : -ETIMEDOUT;
>>>>>      else if (ret < 0)
>>>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>> index 0cb1f9d848d3e..8d048bacd6f02 100644
>>>>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>>>                      asyw->image.handle[0] = ctxdma->object.handle;
>>>>>      }
>>>>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
>>>>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>>>>>      asyw->image.offset[0] = nvbo->offset;
>>>>>      if (wndw->func->prepare) {
>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>> index a70e82413fa75..bc6b09ee9b552 100644
>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>>>>>              return -ENOENT;
>>>>>      nvbo = nouveau_gem_object(gem);
>>>>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
>>>>> -                                              no_wait ? 0 : 30 * HZ);
>>>>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
>>>>> +                                         no_wait ? 0 : 30 * HZ);
>>>>>      if (!lret)
>>>>>              ret = -EBUSY;
>>>>>      else if (lret > 0)
>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>> index ca07098a61419..eef5b632ee0ce 100644
>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>>>>      if (!gem_obj)
>>>>>              return -ENOENT;
>>>>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
>>>>> -                                             true, timeout);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
>>>>> +                                        true, timeout);
>>>>>      if (!ret)
>>>>>              ret = timeout ? -ETIMEDOUT : -EBUSY;
>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> index 6003cfeb13221..2df3e999a38d0 100644
>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>>>>>      int i;
>>>>>      for (i = 0; i < bo_count; i++)
>>>>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
>>>>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>>>>>     }
>>>>>     static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>> index 05ea2f39f6261..1a38b0bf36d11 100644
>>>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>>>>>      }
>>>>>      if (domain == RADEON_GEM_DOMAIN_CPU) {
>>>>>              /* Asking for cpu access wait for object idle */
>>>>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>              if (!r)
>>>>>                      r = -EBUSY;
>>>>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>      }
>>>>>      robj = gem_to_radeon_bo(gobj);
>>>>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
>>>>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>>>>>      if (r == 0)
>>>>>              r = -EBUSY;
>>>>>      else
>>>>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>      }
>>>>>      robj = gem_to_radeon_bo(gobj);
>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>      if (ret == 0)
>>>>>              r = -EBUSY;
>>>>>      else if (ret < 0)
>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>> index e37c9a57a7c36..a19be3f8a218c 100644
>>>>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
>>>>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>>>>>              return true;
>>>>>      }
>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      if (r <= 0)
>>>>>              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> index ca1b098b6a561..215cad3149621 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>      struct dma_resv *resv = &bo->base._resv;
>>>>>      int ret;
>>>>> -   if (dma_resv_test_signaled_rcu(resv, true))
>>>>> +   if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>              ret = 0;
>>>>>      else
>>>>>              ret = -EBUSY;
>>>>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>                      dma_resv_unlock(bo->base.resv);
>>>>>              spin_unlock(&bo->bdev->lru_lock);
>>>>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
>>>>> -                                            30 * HZ);
>>>>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
>>>>> +                                                 30 * HZ);
>>>>>              if (lret < 0)
>>>>>                      return lret;
>>>>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>>>>>                      /* Last resort, if we fail to allocate memory for the
>>>>>                       * fences block for the BO to become idle
>>>>>                       */
>>>>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
>>>>> -                                             30 * HZ);
>>>>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
>>>>> +                                                  30 * HZ);
>>>>>              }
>>>>>              if (bo->bdev->funcs->release_notify)
>>>>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>>>>>              ttm_mem_io_free(bdev, &bo->mem);
>>>>>      }
>>>>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>>>>>          !dma_resv_trylock(bo->base.resv)) {
>>>>>              /* The BO is not idle, resurrect it for delayed destroy */
>>>>>              ttm_bo_flush_all_fences(bo);
>>>>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>>>>>      long timeout = 15 * HZ;
>>>>>      if (no_wait) {
>>>>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
>>>>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>>>>>                      return 0;
>>>>>              else
>>>>>                      return -EBUSY;
>>>>>      }
>>>>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
>>>>> -                                                 interruptible, timeout);
>>>>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
>>>>> +                                            interruptible, timeout);
>>>>>      if (timeout < 0)
>>>>>              return timeout;
>>>>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>> index 2902dc6e64faf..010a82405e374 100644
>>>>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
>>>>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>>>>>      /* Check for a conflicting fence */
>>>>>      resv = obj->resv;
>>>>> -   if (!dma_resv_test_signaled_rcu(resv,
>>>>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
>>>>> +   if (!dma_resv_test_signaled_unlocked(resv,
>>>>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
>>>>>              ret = -EBUSY;
>>>>>              goto err_fence;
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>> index 669f2ee395154..ab010c8e32816 100644
>>>>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>>>>>              return -ENOENT;
>>>>>      if (args->flags & VIRTGPU_WAIT_NOWAIT) {
>>>>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
>>>>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>      } else {
>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
>>>>> -                                           timeout);
>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
>>>>> +                                                timeout);
>>>>>      }
>>>>>      if (ret == 0)
>>>>>              ret = -EBUSY;
>>>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>> index 04dd49c4c2572..19e1ce23842a9 100644
>>>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>>>>>      if (flags & drm_vmw_synccpu_allow_cs) {
>>>>>              long lret;
>>>>> -           lret = dma_resv_wait_timeout_rcu
>>>>> +           lret = dma_resv_wait_timeout_unlocked
>>>>>                      (bo->base.resv, true, true,
>>>>>                       nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>>>>>              if (!lret)
>>>>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>>>>> index d44a77e8a7e34..99cfb7af966b8 100644
>>>>> --- a/include/linux/dma-resv.h
>>>>> +++ b/include/linux/dma-resv.h
>>>>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>     }
>>>>>     /**
>>>>> - * dma_resv_get_excl_rcu - get the reservation object's
>>>>> + * dma_resv_get_excl_unlocked - get the reservation object's
>>>>>      * exclusive fence, without lock held.
>>>>>      * @obj: the reservation object
>>>>>      *
>>>>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>      * The exclusive fence or NULL if none
>>>>>      */
>>>>>     static inline struct dma_fence *
>>>>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
>>>>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>>>>>     {
>>>>>      struct dma_fence *fence;
>>>>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>>     void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>> -                       struct dma_fence **pfence_excl,
>>>>> -                       unsigned *pshared_count,
>>>>> -                       struct dma_fence ***pshared);
>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>> +                            struct dma_fence **pfence_excl,
>>>>> +                            unsigned *pshared_count,
>>>>> +                            struct dma_fence ***pshared);
>>>>>     int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
>>>>> -                          unsigned long timeout);
>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
>>>>> +                               unsigned long timeout);
>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>>>>>     #endif /* _LINUX_RESERVATION_H */
>


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
@ 2021-05-27 13:41             ` Christian König
  0 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-05-27 13:41 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, intel-gfx, Maxime Ripard, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Lucas Stach

On 27.05.21 at 15:25, Daniel Vetter wrote:
> On Thu, May 27, 2021 at 1:59 PM Christian König
> <christian.koenig@amd.com> wrote:
>> On 27.05.21 at 12:39, Daniel Vetter wrote:
>>> On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
>>>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>>>> None of these helpers actually leak any RCU details to the caller.  They
>>>>> all assume you have a genuine reference, take the RCU read lock, and
>>>>> retry if needed.  Naming them with an _rcu is likely to cause callers
>>>>> more panic than needed.
>>>> I'm really wondering if we need this postfix in the first place.
>>>>
>>>> If we use the right rcu_dereference_check() macro then those functions can
>>>> be called with both the reservation object locked and unlocked. It shouldn't
>>>> matter to them.
>>>>
>>>> But getting rid of the _rcu postfix sounds like a good idea in general to
>>>> me.
>>> So does that count as an ack or not? If yes, I think we should land this
>>> patch right away, since it's going to conflict badly real fast.
>> I had some follow-up discussion with Jason and I would rather like to
>> switch to using rcu_dereference_check() in all places and completely
>> remove the _rcu postfix.
> Hm, I'm not sure whether spreading _rcu tricks further is an
> especially bright idea. At least i915 is full of very clever _rcu
> tricks, and encouraging drivers to roll out their own _rcu everywhere
> is probably not in our best interest. Some fast-path checking is imo
> ok, but that's it. Especially once we get into the entire
> SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.

Oh yes, completely agree. SLAB_TYPESAFE_BY_RCU is optimizing for the 
wrong use case, I think.

You save a bit of overhead while freeing fences, but in return you have 
extra overhead while adding fences to the dma_resv object.

> That's why I'm slightly leaning towards _unlocked variants, except we
> do use those in lots of places where we hold dma_resv_lock too. So I'm not
> sure what's the best plan overall here.

Well what function names are we actually talking about?

For the dma_resv_get_excl_rcu() case I agree we should probably rename 
that to dma_resv_get_excl_unlocked(), because it makes no sense at all to 
use this function while holding the lock.
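
For reference, the locked helper can just read the pointer directly
under the lock; here is a minimal sketch of that existing variant
(field and helper names as in dma-resv.h, shown only for illustration):

	static inline struct dma_fence *
	dma_resv_get_excl(struct dma_resv *obj)
	{
		/* Caller must hold the reservation object's lock. */
		return rcu_dereference_protected(obj->fence_excl,
						 dma_resv_held(obj));
	}

The unlocked variant instead has to take the RCU read lock and grab a
reference via dma_fence_get_rcu_safe(), so the _unlocked name actually
tells the caller something useful.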

But for the following functions:
dma_resv_get_fences_rcu
dma_resv_wait_timeout_rcu
dma_resv_test_signaled_rcu

I think we should just drop the _rcu naming because those are supposed 
to work independently of whether the resv lock is held or not.
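
Roughly what I have in mind, as a minimal sketch (the
dma_resv_shared_list() accessor here is made up for illustration and is
not something this patch adds):

	static inline struct dma_resv_list *
	dma_resv_shared_list(struct dma_resv *obj)
	{
		/* Sketch only: lockdep accepts either an RCU read-side
		 * critical section or the reservation lock being held.
		 */
		return rcu_dereference_check(obj->fence,
					     dma_resv_held(obj));
	}

With rcu_dereference_check() a single accessor serves both locked and
unlocked callers, which is why the _rcu postfix carries no real
information.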

Regards,
Christian.

> -Daniel
>
>> But yes I see the pain of rebasing this as well.
>>
>> Christian.
>>
>>> -Daniel
>>>
>>>> Christian.
>>>>
>>>>> v2 (Jason Ekstrand):
>>>>>     - Fix function argument indentation
>>>>>
>>>>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
>>>>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>>> Cc: Maxime Ripard <mripard@kernel.org>
>>>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>>>> Cc: Rob Clark <robdclark@gmail.com>
>>>>> Cc: Sean Paul <sean@poorly.run>
>>>>> Cc: Huang Rui <ray.huang@amd.com>
>>>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>>>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
>>>>> ---
>>>>>     drivers/dma-buf/dma-buf.c                     |  4 +--
>>>>>     drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>>>>>     .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>>>>>     drivers/gpu/drm/drm_gem.c                     | 10 +++----
>>>>>     drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>>>>>     drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>>>>>     drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>>>>>     drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>>>>>     drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>>>>>     .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>>>>>     drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>>>>>     drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>>>>>     drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>>>>>     drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>>>>>     drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>>>>>     drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>>>>>     drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>>>>>     drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>>>>>     drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>>>>>     drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>>>>>     drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>>>>>     drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>>>>>     drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>>>>>     drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>>>>>     include/linux/dma-resv.h                      | 18 ++++++------
>>>>>     36 files changed, 108 insertions(+), 110 deletions(-)
>>>>>
>>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>>> index f264b70c383eb..ed6451d55d663 100644
>>>>> --- a/drivers/dma-buf/dma-buf.c
>>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>>>      long ret;
>>>>>      /* Wait on any implicit rendering fences */
>>>>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
>>>>> -                                             MAX_SCHEDULE_TIMEOUT);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
>>>>> +                                        MAX_SCHEDULE_TIMEOUT);
>>>>>      if (ret < 0)
>>>>>              return ret;
>>>>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
>>>>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
>>>>> --- a/drivers/dma-buf/dma-resv.c
>>>>> +++ b/drivers/dma-buf/dma-resv.c
>>>>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>>>>>     EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>     /**
>>>>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
>>>>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>>>>>      * fences without update side lock held
>>>>>      * @obj: the reservation object
>>>>>      * @pfence_excl: the returned exclusive fence (or NULL)
>>>>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>      * exclusive fence is not specified the fence is put into the array of the
>>>>>      * shared fences as well. Returns either zero or -ENOMEM.
>>>>>      */
>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>> -                       struct dma_fence **pfence_excl,
>>>>> -                       unsigned *pshared_count,
>>>>> -                       struct dma_fence ***pshared)
>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>> +                            struct dma_fence **pfence_excl,
>>>>> +                            unsigned *pshared_count,
>>>>> +                            struct dma_fence ***pshared)
>>>>>     {
>>>>>      struct dma_fence **shared = NULL;
>>>>>      struct dma_fence *fence_excl;
>>>>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>      *pshared = shared;
>>>>>      return ret;
>>>>>     }
>>>>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>>>>>     /**
>>>>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
>>>>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>>>>>      * shared and/or exclusive fences.
>>>>>      * @obj: the reservation object
>>>>>      * @wait_all: if true, wait on all fences, else wait on just exclusive fence
>>>>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>>      * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>>>>>      * greater than zero on success.
>>>>>      */
>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>> -                          bool wait_all, bool intr,
>>>>> -                          unsigned long timeout)
>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
>>>>> +                               bool wait_all, bool intr,
>>>>> +                               unsigned long timeout)
>>>>>     {
>>>>>      struct dma_fence *fence;
>>>>>      unsigned seq, shared_count;
>>>>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>>      rcu_read_unlock();
>>>>>      goto retry;
>>>>>     }
>>>>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
>>>>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>>>>>     static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>     }
>>>>>     /**
>>>>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
>>>>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>>>>>      * fences have been signaled.
>>>>>      * @obj: the reservation object
>>>>>      * @test_all: if true, test all fences, otherwise only test the exclusive
>>>>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>      * RETURNS
>>>>>      * true if all fences signaled, else false
>>>>>      */
>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>>>>>     {
>>>>>      unsigned seq, shared_count;
>>>>>      int ret;
>>>>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>>      rcu_read_unlock();
>>>>>      return ret;
>>>>>     }
>>>>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
>>>>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>> index 8a1fb8b6606e5..b8e24f199be9a 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>>>>>              goto unpin;
>>>>>      }
>>>>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
>>>>> -                                         &work->shared_count,
>>>>> -                                         &work->shared);
>>>>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
>>>>> +                                    &work->shared_count,
>>>>> +                                    &work->shared);
>>>>>      if (unlikely(r != 0)) {
>>>>>              DRM_ERROR("failed to get fences for buffer\n");
>>>>>              goto unpin;
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>> index baa980a477d94..0d0319bc51577 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>>>>>      if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>>>>>              return 0;
>>>>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
>>>>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>>>>>      if (r)
>>>>>              return r;
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>> index 18974bd081f00..8e2996d6ba3ad 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>              return -ENOENT;
>>>>>      }
>>>>>      robj = gem_to_amdgpu_bo(gobj);
>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
>>>>> -                                             timeout);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
>>>>> +                                        timeout);
>>>>>      /* ret == 0 means not signaled,
>>>>>       * ret > 0 means signaled
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>> index b4971e90b98cf..38e1b32dd2cef 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>      unsigned count;
>>>>>      int r;
>>>>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
>>>>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>>>>>      if (r)
>>>>>              goto fallback;
>>>>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>      /* Not enough memory for the delayed delete, as last resort
>>>>>       * block for all the fences to complete.
>>>>>       */
>>>>> -   dma_resv_wait_timeout_rcu(resv, true, false,
>>>>> -                                       MAX_SCHEDULE_TIMEOUT);
>>>>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>> +                                  MAX_SCHEDULE_TIMEOUT);
>>>>>      amdgpu_pasid_free(pasid);
>>>>>     }
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>> index 828b5167ff128..0319c8b547c48 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>>>>>      mmu_interval_set_seq(mni, cur_seq);
>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      mutex_unlock(&adev->notifier_lock);
>>>>>      if (r <= 0)
>>>>>              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>> index 0adffcace3263..de1c7c5501683 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>>>>>              return 0;
>>>>>      }
>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
>>>>> -                                           MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      if (r < 0)
>>>>>              return r;
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>> index c6dbc08016045..4a2196404fb69 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>>>      ib->length_dw = 16;
>>>>>      if (direct) {
>>>>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
>>>>> -                                                   true, false,
>>>>> -                                                   msecs_to_jiffies(10));
>>>>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
>>>>> +                                              true, false,
>>>>> +                                              msecs_to_jiffies(10));
>>>>>              if (r == 0)
>>>>>                      r = -ETIMEDOUT;
>>>>>              if (r < 0)
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> index 4a3e3f72e1277..7ba1c537d6584 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>>      unsigned i, shared_count;
>>>>>      int r;
>>>>> -   r = dma_resv_get_fences_rcu(resv, &excl,
>>>>> -                                         &shared_count, &shared);
>>>>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
>>>>> +                                    &shared_count, &shared);
>>>>>      if (r) {
>>>>>              /* Not enough memory to grab the fence list, as last resort
>>>>>               * block for all the fences to complete.
>>>>>               */
>>>>> -           dma_resv_wait_timeout_rcu(resv, true, false,
>>>>> -                                               MAX_SCHEDULE_TIMEOUT);
>>>>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>> +                                          MAX_SCHEDULE_TIMEOUT);
>>>>>              return;
>>>>>      }
>>>>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>>>>>              return true;
>>>>>      /* Don't evict VM page tables while they are busy */
>>>>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>>>>>              return false;
>>>>>      /* Try to block ongoing updates */
>>>>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>>>>>      */
>>>>>     long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>>>>>     {
>>>>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
>>>>> -                                       true, true, timeout);
>>>>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
>>>>> +                                            true, true, timeout);
>>>>>      if (timeout <= 0)
>>>>>              return timeout;
>>>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>> index 9ca517b658546..0121d2817fa26 100644
>>>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>>>>>               * deadlock during GPU reset when this fence will not signal
>>>>>               * but we hold reservation lock for the BO.
>>>>>               */
>>>>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
>>>>> -                                                   false,
>>>>> -                                                   msecs_to_jiffies(5000));
>>>>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
>>>>> +                                              false,
>>>>> +                                              msecs_to_jiffies(5000));
>>>>>              if (unlikely(r <= 0))
>>>>>                      DRM_ERROR("Waiting for fences timed out!");
>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>> index 9989425e9875a..1241a421b9e81 100644
>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>>>>>              return -EINVAL;
>>>>>      }
>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
>>>>> -                                             true, timeout);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
>>>>> +                                        true, timeout);
>>>>>      if (ret == 0)
>>>>>              ret = -ETIME;
>>>>>      else if (ret > 0)
>>>>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>>>>>      if (!write) {
>>>>>              struct dma_fence *fence =
>>>>> -                   dma_resv_get_excl_rcu(obj->resv);
>>>>> +                   dma_resv_get_excl_unlocked(obj->resv);
>>>>>              return drm_gem_fence_array_add(fence_array, fence);
>>>>>      }
>>>>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
>>>>> -                                           &fence_count, &fences);
>>>>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
>>>>> +                                      &fence_count, &fences);
>>>>>      if (ret || !fence_count)
>>>>>              return ret;
>>>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>> index a005c5a0ba46a..a27135084ae5c 100644
>>>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>>>              return 0;
>>>>>      obj = drm_gem_fb_get_obj(state->fb, 0);
>>>>> -   fence = dma_resv_get_excl_rcu(obj->resv);
>>>>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
>>>>>      drm_atomic_set_fence_for_plane(state, fence);
>>>>>      return 0;
>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>> index db69f19ab5bca..4e6f5346e84e4 100644
>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>>>>>      }
>>>>>      if (op & ETNA_PREP_NOSYNC) {
>>>>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
>>>>> -                                                     write))
>>>>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>>>>>                      return -EBUSY;
>>>>>      } else {
>>>>>              unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
>>>>> -                                                     write, true, remain);
>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
>>>>> +                                                write, true, remain);
>>>>>              if (ret <= 0)
>>>>>                      return ret == 0 ? -ETIMEDOUT : ret;
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>> index d05c359945799..6617fada4595d 100644
>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>>>>>                      continue;
>>>>>              if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
>>>>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
>>>>> -                                                           &bo->nr_shared,
>>>>> -                                                           &bo->shared);
>>>>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
>>>>> +                                                      &bo->nr_shared,
>>>>> +                                                      &bo->shared);
>>>>>                      if (ret)
>>>>>                              return ret;
>>>>>              } else {
>>>>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
>>>>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
>>>>>              }
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
>>>>> index 422b59ebf6dce..5f0b85a102159 100644
>>>>> --- a/drivers/gpu/drm/i915/display/intel_display.c
>>>>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
>>>>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>>>>>              if (ret < 0)
>>>>>                      goto unpin_fb;
>>>>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>              if (fence) {
>>>>>                      add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>>>>>                                                 fence);
>>>>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
>>>>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>> @@ -10,7 +10,7 @@
>>>>>     void dma_resv_prune(struct dma_resv *resv)
>>>>>     {
>>>>>      if (dma_resv_trylock(resv)) {
>>>>> -           if (dma_resv_test_signaled_rcu(resv, true))
>>>>> +           if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>                      dma_resv_add_excl_fence(resv, NULL);
>>>>>              dma_resv_unlock(resv);
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>> index 25235ef630c10..754ad6d1bace9 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>       * Alternatively, we can trade that extra information on read/write
>>>>>       * activity with
>>>>>       *      args->busy =
>>>>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
>>>>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>       * to report the overall busyness. This is what the wait-ioctl does.
>>>>>       *
>>>>>       */
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>> index 297143511f99b..e8f323564e57b 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>>>>>      if (DBG_FORCE_RELOC)
>>>>>              return false;
>>>>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
>>>>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
>>>>>     }
>>>>>     static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>> index 2ebd79537aea9..7c0eb425cb3b3 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>>>>>      struct dma_fence *fence;
>>>>>      rcu_read_lock();
>>>>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>      rcu_read_unlock();
>>>>>      if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>> index a657b99ec7606..44df18dc9669f 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>>>>              return true;
>>>>>      /* we will unbind on next submission, still have userptr pins */
>>>>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      if (r <= 0)
>>>>>              drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>> index 4b9856d5ba14f..5b6c52659ad4d 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>              unsigned int count, i;
>>>>>              int ret;
>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>               */
>>>>>              prune_fences = count && timeout >= 0;
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>      }
>>>>>      if (excl && timeout >= 0)
>>>>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>              unsigned int count, i;
>>>>>              int ret;
>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>> -                                         &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>> +                                              &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>              kfree(shared);
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>      }
>>>>>      if (excl) {
>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>>>> index 970d8f4986bbe..f1ed03ced7dd1 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>>>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>>>>>              struct dma_fence **shared;
>>>>>              unsigned int count, i;
>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>> -                                                   &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>> +                                              &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>>>>>                      dma_fence_put(shared[i]);
>>>>>              kfree(shared);
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>      }
>>>>>      if (excl) {
>>>>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>> index 2744558f30507..0bcb7ea44201e 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>              struct dma_fence **shared;
>>>>>              unsigned int count, i;
>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>              if (ret)
>>>>>                      return ret;
>>>>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>                      dma_fence_put(shared[i]);
>>>>>              kfree(shared);
>>>>>      } else {
>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>      }
>>>>>      if (ret >= 0 && excl && excl->ops != exclude) {
>>>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>>>> index 56df86e5f7400..1aca60507bb14 100644
>>>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>>>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>>>>>              op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>>>>>      long ret;
>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
>>>>> -                                             true,  remain);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>>>>>      if (ret == 0)
>>>>>              return remain == 0 ? -EBUSY : -ETIMEDOUT;
>>>>>      else if (ret < 0)
>>>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>> index 0cb1f9d848d3e..8d048bacd6f02 100644
>>>>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>>>                      asyw->image.handle[0] = ctxdma->object.handle;
>>>>>      }
>>>>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
>>>>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>>>>>      asyw->image.offset[0] = nvbo->offset;
>>>>>      if (wndw->func->prepare) {
>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>> index a70e82413fa75..bc6b09ee9b552 100644
>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>>>>>              return -ENOENT;
>>>>>      nvbo = nouveau_gem_object(gem);
>>>>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
>>>>> -                                              no_wait ? 0 : 30 * HZ);
>>>>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
>>>>> +                                         no_wait ? 0 : 30 * HZ);
>>>>>      if (!lret)
>>>>>              ret = -EBUSY;
>>>>>      else if (lret > 0)
>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>> index ca07098a61419..eef5b632ee0ce 100644
>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>>>>      if (!gem_obj)
>>>>>              return -ENOENT;
>>>>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
>>>>> -                                             true, timeout);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
>>>>> +                                        true, timeout);
>>>>>      if (!ret)
>>>>>              ret = timeout ? -ETIMEDOUT : -EBUSY;
>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> index 6003cfeb13221..2df3e999a38d0 100644
>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>>>>>      int i;
>>>>>      for (i = 0; i < bo_count; i++)
>>>>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
>>>>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>>>>>     }
>>>>>     static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>> index 05ea2f39f6261..1a38b0bf36d11 100644
>>>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>>>>>      }
>>>>>      if (domain == RADEON_GEM_DOMAIN_CPU) {
>>>>>              /* Asking for cpu access wait for object idle */
>>>>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>              if (!r)
>>>>>                      r = -EBUSY;
>>>>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>      }
>>>>>      robj = gem_to_radeon_bo(gobj);
>>>>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
>>>>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>>>>>      if (r == 0)
>>>>>              r = -EBUSY;
>>>>>      else
>>>>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>      }
>>>>>      robj = gem_to_radeon_bo(gobj);
>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>      if (ret == 0)
>>>>>              r = -EBUSY;
>>>>>      else if (ret < 0)
>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>> index e37c9a57a7c36..a19be3f8a218c 100644
>>>>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
>>>>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>>>>>              return true;
>>>>>      }
>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>      if (r <= 0)
>>>>>              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> index ca1b098b6a561..215cad3149621 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>      struct dma_resv *resv = &bo->base._resv;
>>>>>      int ret;
>>>>> -   if (dma_resv_test_signaled_rcu(resv, true))
>>>>> +   if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>              ret = 0;
>>>>>      else
>>>>>              ret = -EBUSY;
>>>>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>                      dma_resv_unlock(bo->base.resv);
>>>>>              spin_unlock(&bo->bdev->lru_lock);
>>>>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
>>>>> -                                            30 * HZ);
>>>>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
>>>>> +                                                 30 * HZ);
>>>>>              if (lret < 0)
>>>>>                      return lret;
>>>>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>>>>>                      /* Last resort, if we fail to allocate memory for the
>>>>>                       * fences block for the BO to become idle
>>>>>                       */
>>>>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
>>>>> -                                             30 * HZ);
>>>>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
>>>>> +                                                  30 * HZ);
>>>>>              }
>>>>>              if (bo->bdev->funcs->release_notify)
>>>>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>>>>>              ttm_mem_io_free(bdev, &bo->mem);
>>>>>      }
>>>>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>>>>>          !dma_resv_trylock(bo->base.resv)) {
>>>>>              /* The BO is not idle, resurrect it for delayed destroy */
>>>>>              ttm_bo_flush_all_fences(bo);
>>>>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>>>>>      long timeout = 15 * HZ;
>>>>>      if (no_wait) {
>>>>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
>>>>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>>>>>                      return 0;
>>>>>              else
>>>>>                      return -EBUSY;
>>>>>      }
>>>>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
>>>>> -                                                 interruptible, timeout);
>>>>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
>>>>> +                                            interruptible, timeout);
>>>>>      if (timeout < 0)
>>>>>              return timeout;
>>>>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>> index 2902dc6e64faf..010a82405e374 100644
>>>>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
>>>>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>>>>>      /* Check for a conflicting fence */
>>>>>      resv = obj->resv;
>>>>> -   if (!dma_resv_test_signaled_rcu(resv,
>>>>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
>>>>> +   if (!dma_resv_test_signaled_unlocked(resv,
>>>>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
>>>>>              ret = -EBUSY;
>>>>>              goto err_fence;
>>>>>      }
>>>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>> index 669f2ee395154..ab010c8e32816 100644
>>>>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>>>>>              return -ENOENT;
>>>>>      if (args->flags & VIRTGPU_WAIT_NOWAIT) {
>>>>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
>>>>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>      } else {
>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
>>>>> -                                           timeout);
>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
>>>>> +                                                timeout);
>>>>>      }
>>>>>      if (ret == 0)
>>>>>              ret = -EBUSY;
>>>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>> index 04dd49c4c2572..19e1ce23842a9 100644
>>>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>>>>>      if (flags & drm_vmw_synccpu_allow_cs) {
>>>>>              long lret;
>>>>> -           lret = dma_resv_wait_timeout_rcu
>>>>> +           lret = dma_resv_wait_timeout_unlocked
>>>>>                      (bo->base.resv, true, true,
>>>>>                       nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>>>>>              if (!lret)
>>>>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>>>>> index d44a77e8a7e34..99cfb7af966b8 100644
>>>>> --- a/include/linux/dma-resv.h
>>>>> +++ b/include/linux/dma-resv.h
>>>>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>     }
>>>>>     /**
>>>>> - * dma_resv_get_excl_rcu - get the reservation object's
>>>>> + * dma_resv_get_excl_unlocked - get the reservation object's
>>>>>      * exclusive fence, without lock held.
>>>>>      * @obj: the reservation object
>>>>>      *
>>>>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>      * The exclusive fence or NULL if none
>>>>>      */
>>>>>     static inline struct dma_fence *
>>>>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
>>>>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>>>>>     {
>>>>>      struct dma_fence *fence;
>>>>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>>     void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>> -                       struct dma_fence **pfence_excl,
>>>>> -                       unsigned *pshared_count,
>>>>> -                       struct dma_fence ***pshared);
>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>> +                            struct dma_fence **pfence_excl,
>>>>> +                            unsigned *pshared_count,
>>>>> +                            struct dma_fence ***pshared);
>>>>>     int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
>>>>> -                          unsigned long timeout);
>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
>>>>> +                               unsigned long timeout);
>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>>>>>     #endif /* _LINUX_RESERVATION_H */
>


* Re: [Intel-gfx] [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11)
       [not found]               ` <fef50d81-399a-af09-1d13-de4db1b3fab8@amd.com>
@ 2021-05-27 15:39                   ` Jason Ekstrand
  0 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-05-27 15:39 UTC (permalink / raw)
  To: Christian König, Mailing list - DRI developers, Simon Ser,
	Daniel Vetter, Sumit Semwal, Intel GFX, Daniel Stone

On Thu, May 27, 2021 at 4:49 AM Christian König
<christian.koenig@amd.com> wrote:
>
> On 26.05.21 at 19:42, Jason Ekstrand wrote:
> > On Wed, May 26, 2021 at 6:02 AM Christian König
> > <christian.koenig@amd.com> wrote:
> >> Regarding that, why do we actually use a syncfile and not a drm_syncobj
> >> here?
> > A sync file is a userspace handle to a dma_fence which is exactly what
> > we're grabbing from the dma-buf so it's a better match.  A syncobj, on
> > the other hand, is a container for fences.  If we really want it in a
> > syncobj (which we may), we can use another ioctl to stuff it in there.
> > The only thing that making this work in terms of syncobj would save us is a
> > little ioctl overhead.  In my Mesa patches, we do stuff it in a
> > syncobj but, instead of acting on the syncobj directly, we go through
> > vkSemaphoreImportFd() which is way more convenient for generic WSI
> > code.
>
> Yeah, that is a really good argument.
>
> > It also works nicer because a sync_file is a pretty generic construct
> > but a syncobj is a DRM construct.  This lets us do the export in an
> > entirely driver-generic way without even having access to a DRM fd.
> > It all happens as an ioctl on the dma-buf.
>
> Well that's an even better argument and I think the killer argument here.

Cool.

> We should probably mention that on the commit message as a single
> sentence somewhere.

Happy to do so.  How does this sound:

Making this an ioctl on the dma-buf itself allows the new
functionality to be used in an entirely driver-agnostic way, without
access to a DRM fd. That makes it ideal for driver-generic code in
Mesa, or for a client such as a compositor where the DRM fd may be
hard to reach.
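
For illustration, the whole flow from userspace then looks roughly
like this.  This is a sketch against the uAPI proposed in patch 5 of
this series; struct layout, ioctl number and flag semantics are as
proposed there and may still change, and error handling is omitted:

    #include <sys/ioctl.h>
    #include <linux/dma-buf.h>  /* with this series applied; provides:
                                 *
                                 * struct dma_buf_export_sync_file {
                                 *         __u32 flags;
                                 *         __s32 fd;
                                 * };
                                 * #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE \
                                 *   _IOWR(DMA_BUF_BASE, 2, \
                                 *         struct dma_buf_export_sync_file)
                                 */

    int wsi_export_sync_file(int dmabuf_fd)
    {
            struct dma_buf_export_sync_file arg = {
                    /* We're about to render (write), so snapshot all
                     * fences, readers and writers alike. */
                    .flags = DMA_BUF_SYNC_WRITE,
                    .fd = -1,
            };

            if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
                    return -1;

            /* The returned fd can go straight into
             * vkImportSemaphoreFdKHR() with
             * VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT. */
            return arg.fd;
    }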

> BTW: You replied exclusively to me. Was that intentional? I don't
> think so :)

Oops...  I've re-added the whole lot in this reply.

--Jason

> Regards,
> Christian.
>
> >
> > On Wed, May 26, 2021 at 8:23 AM Christian König
> > <christian.koenig@amd.com> wrote:
> >
> >> On 26.05.21 at 15:12, Daniel Stone wrote:
> >>> On Wed, 26 May 2021 at 13:46, Christian König <christian.koenig@amd.com> wrote:
> >>>> On 26.05.21 at 13:31, Daniel Stone wrote:
> >>>>> How would we insert a syncobj+val into a resv though? Like, if we pass
> >>>>> an unmaterialised syncobj+val here to insert into the resv, then an
> >>>>> implicit-only media user (or KMS) goes to sync against the resv, what
> >>>>> happens?
> >>>> Well this is for exporting, not importing. So we don't need to worry
> >>>> about that.
> >>>>
> >>>> It's just my thinking, because drm_syncobj is the backing object
> >>>> behind VkSemaphore implementations these days, isn't it?
> >>> Yeah, I can see that to an extent. But then binary vs. timeline
> >>> syncobjs are very different in use (which is unfortunate tbh), and
> >>> then we have an asymmetry between syncobj export & sync_file import.
> >>>
> >>> You're right that we do want a syncobj though.
> > I'm not convinced.  Some people seem to think that we have a direct
> > progression of superiority: sync_file < syncobj < timeline syncobj.  I
> > don't think this is true.  They each serve different purposes.  A
> > sync_file is a handle to a single immutable fence operation (a
> > dma_fence*).  A syncobj is a container for a fence and is, by
> > definition, mutable (a dma_fence**).  A timeline syncobj is a
> > construct that lets us implement VK_KHR_timeline_semaphore with only
> > immutable finite-time (not future) fences.
> >
> >  From a WSI protocol PoV, it's sometimes nicer to pass a container
> > object once and then pass a serial or a simple "I just updated it"
> > signal every time instead of a new FD.  It certainly makes all the
> > "who closes this, again?" semantics easier.  But we shouldn't think of
> > syncobj as being better than sync_file.  With binary syncobj, it
> > really is the difference between passing a dma_fence* vs. a
> > dma_fence**.  Sometimes you want one and sometimes you want the other.
> > The real pity, IMO, is that the uAPI is scattered everywhere.
> >
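
(As a pointer-level sketch of that distinction, with everything but
the fence member elided from the real structs:

    /* sync_file: a handle to one immutable fence, i.e. a dma_fence*. */
    struct sync_file {
            struct dma_fence *fence;        /* fixed at creation */
            /* ... */
    };

    /* drm_syncobj: a mutable fence container, i.e. a dma_fence**. */
    struct drm_syncobj {
            struct dma_fence __rcu *fence;  /* replaced by signal ops */
            /* ... */
    };
)
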
> >>> This is probably not
> >>> practical due to smashing uAPI to bits, but if we could wind the clock
> >>> back a couple of years, I suspect the interface we want is that export
> >>> can either export a sync_file or a binary syncobj, and further that
> >>> binary syncobjs could transparently act as timeline semaphores by
> >>> mapping any value (either wait or signal) to the binary signal. In
> >>> hindsight, we should probably just never have had binary syncobj. Oh
> >>> well.
> >> Well the latter is the case IIRC. Don't ask me about the detailed
> >> semantics, but in general the drm_syncobj in timeline mode is
> >> compatible with the binary mode.
> >>
> >> A sync_file is also importable/exportable to a given drm_syncobj
> >> timeline point (or as a binary signal). So no big deal, we are all
> >> compatible here :)
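
(That import direction is existing uAPI already; a minimal userspace
sketch, with hypothetical variable names, of stuffing a sync_file into
a binary syncobj:

    #include <stdint.h>
    #include <xf86drm.h>    /* libdrm; pulls in drm.h for the struct */

    int import_into_syncobj(int drm_fd, uint32_t syncobj_handle,
                            int sync_file_fd)
    {
            struct drm_syncobj_handle args = {
                    .handle = syncobj_handle,
                    .flags  = DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE,
                    .fd     = sync_file_fd,
            };

            return drmIoctl(drm_fd, DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE, &args);
    }

libdrm also wraps this as drmSyncobjImportSyncFile().)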
> >>
> >> I just thought that it might be more appropriate to return a drm_syncobj
> >> directly instead of a sync_file.
> > Maybe.  I'm not convinced that's better.  In the current Mesa WSI
> > code, it'd actually be quite a pain.
> >
> > --Jason
>


* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-05-27 13:41             ` [Intel-gfx] " Christian König
@ 2021-06-01 14:34               ` Daniel Vetter
  -1 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-06-01 14:34 UTC (permalink / raw)
  To: Christian König
  Cc: Gerd Hoffmann, Thomas Zimmermann, intel-gfx, Huang Rui,
	VMware Graphics, dri-devel, Jason Ekstrand, Sean Paul

On Thu, May 27, 2021 at 03:41:02PM +0200, Christian König wrote:
> On 27.05.21 at 15:25, Daniel Vetter wrote:
> > On Thu, May 27, 2021 at 1:59 PM Christian König
> > <christian.koenig@amd.com> wrote:
> > > On 27.05.21 at 12:39, Daniel Vetter wrote:
> > > > On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
> > > > > On 25.05.21 at 23:17, Jason Ekstrand wrote:
> > > > > > None of these helpers actually leak any RCU details to the caller.  They
> > > > > > all assume you have a genuine reference, take the RCU read lock, and
> > > > > > retry if needed.  Naming them with an _rcu is likely to cause callers
> > > > > > more panic than needed.
> > > > > I'm really wondering if we need this postfix in the first place.
> > > > > 
> > > > > If we use the right rcu_dereference_check() macro then those functions can
> > > > > be called with both the reservation object locked and unlocked. It shouldn't
> > > > > matter to them.
> > > > > 
> > > > > But getting rid of the _rcu postfix sounds like a good idea in general to
> > > > > me.
> > > > So does that count as an ack or not? If yes I think we should land this
> > > > patch right away, since it's going to start conflicting badly real fast.
> > > I had some follow-up discussion with Jason and I would rather like to
> > > switch to using rcu_dereference_check() in all places and completely
> > > remove the _rcu postfix.
> > Hm, I'm not sure whether spreading _rcu tricks further is an
> > especially bright idea. At least i915 is full of very clever _rcu
> > tricks, and encouraging drivers to roll out their own _rcu everywhere
> > is probably not in our best interest. Some fast-path checking is imo
> > ok, but that's it. Especially once we get into the entire
> > SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.
> 
> Oh, yes completely agree. SLAB_TYPESAFE_BY_RCU is optimizing for the wrong
> use case I think.
> 
> You save a bit of overhead while freeing fences, but in return you have
> extra overhead while adding fences to the dma_resv object.

Getting way off topic, but I'm wondering whether the entire rcu business
is really worth it for dma_fence.

Mostly we manipulate dma_resv while holding its lock anyway. There are
maybe a few waits and stuff, but I'm not sure whether the dma_resv_lock +
dma_fence_get + dma_resv_unlock + dma_fence_put dance really matters. And if you
have lock contention on a single buffer you've lost anyway.

At that point I think we have maybe some lockless tricks in the evict
code, but then again once you're evicting it's probably going pretty bad
already.

So SLAB_TYPESAFE_BY_RCU is something I want to analyze for i915, to see
whether it's really worth it and justified, or whether we should drop it. But
I'm wondering whether we should drop rcu for fences outright. Would be
quite some audit to check out where it's used.

On the i915 side we've done these lockless tricks back when
dev->struct_mutex was a thing and always contended. But with per-obj
locking now happening for real with dma-resv, that's probably not
justified.

But then, looking at git history, the rcu in dma_resv is older than that
and was justified with ttm.
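
For reference, the pattern we'd be left with if the rcu fast paths
went away is just this (a sketch with today's function names and
hypothetical variables; a real caller needs the usual error handling
around the lock):

    struct dma_fence *fence;

    dma_resv_lock(resv, NULL);
    fence = dma_fence_get(dma_resv_get_excl(resv));
    dma_resv_unlock(resv);

    if (fence) {
            /* Wait without holding the reservation lock. */
            dma_fence_wait(fence, true);
            dma_fence_put(fence);
    }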

> > That's why I'm slightly leaning towards _unlocked variants, except we
> > do use those in lots of places where we hold dma_resv_lock too. So not
> > sure what's the best plan overall here.
> 
> Well what function names are we actually talking about?
> 
> For the dma_resv_get_excl_rcu() case I agree we should probably rename it to
> dma_resv_get_excl_unlocked() because it makes no sense at all to use this
> function while holding the lock.
> 
> But for the following functions:
> dma_resv_get_fences_rcu
> dma_resv_wait_timeout_rcu
> dma_resv_test_signaled_rcu
> 
> I think we should just drop the _rcu naming because those are supposed to
> work independently of whether the resv lock is held.

Ack on all naming.
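
Something like this sketch of Christian's suggestion should cover both
callers, using rcu_dereference_check() plus the existing dma_resv_held()
lockdep helper so the access is legal under either rcu_read_lock() or
the held reservation lock:

    #include <linux/dma-resv.h>

    static inline struct dma_fence *
    dma_resv_excl_fence(struct dma_resv *obj)
    {
            /* Caller must still grab a reference before dropping
             * its RCU read section or the resv lock. */
            return rcu_dereference_check(obj->fence_excl,
                                         dma_resv_held(obj));
    }
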
-Daniel

> 
> Regards,
> Christian.
> 
> > -Daniel
> > 
> > > But yes I see the pain of rebasing this as well.
> > > 
> > > Christian.
> > > 
> > > > -Daniel
> > > > 
> > > > > Christian.
> > > > > 
> > > > > > v2 (Jason Ekstrand):
> > > > > >     - Fix function argument indentation
> > > > > > 
> > > > > > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > > > > > Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > > > Cc: Christian König <christian.koenig@amd.com>
> > > > > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > > > > Cc: Maxime Ripard <mripard@kernel.org>
> > > > > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > > > > Cc: Lucas Stach <l.stach@pengutronix.de>
> > > > > > Cc: Rob Clark <robdclark@gmail.com>
> > > > > > Cc: Sean Paul <sean@poorly.run>
> > > > > > Cc: Huang Rui <ray.huang@amd.com>
> > > > > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > > > > Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> > > > > > ---
> > > > > >     drivers/dma-buf/dma-buf.c                     |  4 +--
> > > > > >     drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
> > > > > >     .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
> > > > > >     drivers/gpu/drm/drm_gem.c                     | 10 +++----
> > > > > >     drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
> > > > > >     drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
> > > > > >     drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
> > > > > >     drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
> > > > > >     drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
> > > > > >     .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
> > > > > >     drivers/gpu/drm/i915/i915_request.c           |  6 ++--
> > > > > >     drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
> > > > > >     drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
> > > > > >     drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
> > > > > >     drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
> > > > > >     drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
> > > > > >     drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
> > > > > >     drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
> > > > > >     drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
> > > > > >     drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
> > > > > >     drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
> > > > > >     drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
> > > > > >     drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
> > > > > >     include/linux/dma-resv.h                      | 18 ++++++------
> > > > > >     36 files changed, 108 insertions(+), 110 deletions(-)
> > > > > > 
> > > > > > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > > > > > index f264b70c383eb..ed6451d55d663 100644
> > > > > > --- a/drivers/dma-buf/dma-buf.c
> > > > > > +++ b/drivers/dma-buf/dma-buf.c
> > > > > > @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > > > > >      long ret;
> > > > > >      /* Wait on any implicit rendering fences */
> > > > > > -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
> > > > > > -                                             MAX_SCHEDULE_TIMEOUT);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> > > > > > +                                        MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (ret < 0)
> > > > > >              return ret;
> > > > > > diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> > > > > > index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> > > > > > --- a/drivers/dma-buf/dma-resv.c
> > > > > > +++ b/drivers/dma-buf/dma-resv.c
> > > > > > @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
> > > > > >     EXPORT_SYMBOL(dma_resv_copy_fences);
> > > > > >     /**
> > > > > > - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> > > > > > + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> > > > > >      * fences without update side lock held
> > > > > >      * @obj: the reservation object
> > > > > >      * @pfence_excl: the returned exclusive fence (or NULL)
> > > > > > @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> > > > > >      * exclusive fence is not specified the fence is put into the array of the
> > > > > >      * shared fences as well. Returns either zero or -ENOMEM.
> > > > > >      */
> > > > > > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > > > > > -                       struct dma_fence **pfence_excl,
> > > > > > -                       unsigned *pshared_count,
> > > > > > -                       struct dma_fence ***pshared)
> > > > > > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > > > > > +                            struct dma_fence **pfence_excl,
> > > > > > +                            unsigned *pshared_count,
> > > > > > +                            struct dma_fence ***pshared)
> > > > > >     {
> > > > > >      struct dma_fence **shared = NULL;
> > > > > >      struct dma_fence *fence_excl;
> > > > > > @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > > > > >      *pshared = shared;
> > > > > >      return ret;
> > > > > >     }
> > > > > > -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> > > > > > +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> > > > > >     /**
> > > > > > - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> > > > > > + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> > > > > >      * shared and/or exclusive fences.
> > > > > >      * @obj: the reservation object
> > > > > >      * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> > > > > > @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> > > > > >      * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> > > > > >      * greater than zero on success.
> > > > > >      */
> > > > > > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> > > > > > -                          bool wait_all, bool intr,
> > > > > > -                          unsigned long timeout)
> > > > > > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> > > > > > +                               bool wait_all, bool intr,
> > > > > > +                               unsigned long timeout)
> > > > > >     {
> > > > > >      struct dma_fence *fence;
> > > > > >      unsigned seq, shared_count;
> > > > > > @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> > > > > >      rcu_read_unlock();
> > > > > >      goto retry;
> > > > > >     }
> > > > > > -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> > > > > > +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> > > > > >     static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > > > > > @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > > > > >     }
> > > > > >     /**
> > > > > > - * dma_resv_test_signaled_rcu - Test if a reservation object's
> > > > > > + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> > > > > >      * fences have been signaled.
> > > > > >      * @obj: the reservation object
> > > > > >      * @test_all: if true, test all fences, otherwise only test the exclusive
> > > > > > @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > > > > >      * RETURNS
> > > > > >      * true if all fences signaled, else false
> > > > > >      */
> > > > > > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> > > > > > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> > > > > >     {
> > > > > >      unsigned seq, shared_count;
> > > > > >      int ret;
> > > > > > @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> > > > > >      rcu_read_unlock();
> > > > > >      return ret;
> > > > > >     }
> > > > > > -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> > > > > > +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > > > > > index 8a1fb8b6606e5..b8e24f199be9a 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > > > > > @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> > > > > >              goto unpin;
> > > > > >      }
> > > > > > -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> > > > > > -                                         &work->shared_count,
> > > > > > -                                         &work->shared);
> > > > > > +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> > > > > > +                                    &work->shared_count,
> > > > > > +                                    &work->shared);
> > > > > >      if (unlikely(r != 0)) {
> > > > > >              DRM_ERROR("failed to get fences for buffer\n");
> > > > > >              goto unpin;
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > > > > > index baa980a477d94..0d0319bc51577 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > > > > > @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> > > > > >      if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> > > > > >              return 0;
> > > > > > -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> > > > > > +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> > > > > >      if (r)
> > > > > >              return r;
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > > > > > index 18974bd081f00..8e2996d6ba3ad 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > > > > > @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> > > > > >              return -ENOENT;
> > > > > >      }
> > > > > >      robj = gem_to_amdgpu_bo(gobj);
> > > > > > -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> > > > > > -                                             timeout);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> > > > > > +                                        timeout);
> > > > > >      /* ret == 0 means not signaled,
> > > > > >       * ret > 0 means signaled
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > > > > > index b4971e90b98cf..38e1b32dd2cef 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > > > > > @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> > > > > >      unsigned count;
> > > > > >      int r;
> > > > > > -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> > > > > > +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> > > > > >      if (r)
> > > > > >              goto fallback;
> > > > > > @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> > > > > >      /* Not enough memory for the delayed delete, as last resort
> > > > > >       * block for all the fences to complete.
> > > > > >       */
> > > > > > -   dma_resv_wait_timeout_rcu(resv, true, false,
> > > > > > -                                       MAX_SCHEDULE_TIMEOUT);
> > > > > > +   dma_resv_wait_timeout_unlocked(resv, true, false,
> > > > > > +                                  MAX_SCHEDULE_TIMEOUT);
> > > > > >      amdgpu_pasid_free(pasid);
> > > > > >     }
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > > > > > index 828b5167ff128..0319c8b547c48 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > > > > > @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> > > > > >      mmu_interval_set_seq(mni, cur_seq);
> > > > > > -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > > > > > -                                 MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      mutex_unlock(&adev->notifier_lock);
> > > > > >      if (r <= 0)
> > > > > >              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > > > > > index 0adffcace3263..de1c7c5501683 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > > > > > @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> > > > > >              return 0;
> > > > > >      }
> > > > > > -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> > > > > > -                                           MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (r < 0)
> > > > > >              return r;
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > > > > > index c6dbc08016045..4a2196404fb69 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > > > > > @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> > > > > >      ib->length_dw = 16;
> > > > > >      if (direct) {
> > > > > > -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> > > > > > -                                                   true, false,
> > > > > > -                                                   msecs_to_jiffies(10));
> > > > > > +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> > > > > > +                                              true, false,
> > > > > > +                                              msecs_to_jiffies(10));
> > > > > >              if (r == 0)
> > > > > >                      r = -ETIMEDOUT;
> > > > > >              if (r < 0)
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > > > > > index 4a3e3f72e1277..7ba1c537d6584 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > > > > > @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> > > > > >      unsigned i, shared_count;
> > > > > >      int r;
> > > > > > -   r = dma_resv_get_fences_rcu(resv, &excl,
> > > > > > -                                         &shared_count, &shared);
> > > > > > +   r = dma_resv_get_fences_unlocked(resv, &excl,
> > > > > > +                                    &shared_count, &shared);
> > > > > >      if (r) {
> > > > > >              /* Not enough memory to grab the fence list, as last resort
> > > > > >               * block for all the fences to complete.
> > > > > >               */
> > > > > > -           dma_resv_wait_timeout_rcu(resv, true, false,
> > > > > > -                                               MAX_SCHEDULE_TIMEOUT);
> > > > > > +           dma_resv_wait_timeout_unlocked(resv, true, false,
> > > > > > +                                          MAX_SCHEDULE_TIMEOUT);
> > > > > >              return;
> > > > > >      }
> > > > > > @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> > > > > >              return true;
> > > > > >      /* Don't evict VM page tables while they are busy */
> > > > > > -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> > > > > > +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> > > > > >              return false;
> > > > > >      /* Try to block ongoing updates */
> > > > > > @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> > > > > >      */
> > > > > >     long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> > > > > >     {
> > > > > > -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> > > > > > -                                       true, true, timeout);
> > > > > > +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> > > > > > +                                            true, true, timeout);
> > > > > >      if (timeout <= 0)
> > > > > >              return timeout;
> > > > > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > > > > index 9ca517b658546..0121d2817fa26 100644
> > > > > > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > > > > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > > > > @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> > > > > >               * deadlock during GPU reset when this fence will not signal
> > > > > >               * but we hold reservation lock for the BO.
> > > > > >               */
> > > > > > -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> > > > > > -                                                   false,
> > > > > > -                                                   msecs_to_jiffies(5000));
> > > > > > +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> > > > > > +                                              false,
> > > > > > +                                              msecs_to_jiffies(5000));
> > > > > >              if (unlikely(r <= 0))
> > > > > >                      DRM_ERROR("Waiting for fences timed out!");
> > > > > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > > > > > index 9989425e9875a..1241a421b9e81 100644
> > > > > > --- a/drivers/gpu/drm/drm_gem.c
> > > > > > +++ b/drivers/gpu/drm/drm_gem.c
> > > > > > @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> > > > > >              return -EINVAL;
> > > > > >      }
> > > > > > -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> > > > > > -                                             true, timeout);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> > > > > > +                                        true, timeout);
> > > > > >      if (ret == 0)
> > > > > >              ret = -ETIME;
> > > > > >      else if (ret > 0)
> > > > > > @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> > > > > >      if (!write) {
> > > > > >              struct dma_fence *fence =
> > > > > > -                   dma_resv_get_excl_rcu(obj->resv);
> > > > > > +                   dma_resv_get_excl_unlocked(obj->resv);
> > > > > >              return drm_gem_fence_array_add(fence_array, fence);
> > > > > >      }
> > > > > > -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> > > > > > -                                           &fence_count, &fences);
> > > > > > +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> > > > > > +                                      &fence_count, &fences);
> > > > > >      if (ret || !fence_count)
> > > > > >              return ret;
> > > > > > diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > > > > > index a005c5a0ba46a..a27135084ae5c 100644
> > > > > > --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> > > > > > +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > > > > > @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> > > > > >              return 0;
> > > > > >      obj = drm_gem_fb_get_obj(state->fb, 0);
> > > > > > -   fence = dma_resv_get_excl_rcu(obj->resv);
> > > > > > +   fence = dma_resv_get_excl_unlocked(obj->resv);
> > > > > >      drm_atomic_set_fence_for_plane(state, fence);
> > > > > >      return 0;
> > > > > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > > > > > index db69f19ab5bca..4e6f5346e84e4 100644
> > > > > > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > > > > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > > > > > @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> > > > > >      }
> > > > > >      if (op & ETNA_PREP_NOSYNC) {
> > > > > > -           if (!dma_resv_test_signaled_rcu(obj->resv,
> > > > > > -                                                     write))
> > > > > > +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> > > > > >                      return -EBUSY;
> > > > > >      } else {
> > > > > >              unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> > > > > > -           ret = dma_resv_wait_timeout_rcu(obj->resv,
> > > > > > -                                                     write, true, remain);
> > > > > > +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
> > > > > > +                                                write, true, remain);
> > > > > >              if (ret <= 0)
> > > > > >                      return ret == 0 ? -ETIMEDOUT : ret;
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > > > > > index d05c359945799..6617fada4595d 100644
> > > > > > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > > > > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > > > > > @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> > > > > >                      continue;
> > > > > >              if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> > > > > > -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> > > > > > -                                                           &bo->nr_shared,
> > > > > > -                                                           &bo->shared);
> > > > > > +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> > > > > > +                                                      &bo->nr_shared,
> > > > > > +                                                      &bo->shared);
> > > > > >                      if (ret)
> > > > > >                              return ret;
> > > > > >              } else {
> > > > > > -                   bo->excl = dma_resv_get_excl_rcu(robj);
> > > > > > +                   bo->excl = dma_resv_get_excl_unlocked(robj);
> > > > > >              }
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> > > > > > index 422b59ebf6dce..5f0b85a102159 100644
> > > > > > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > > > > > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > > > > > @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> > > > > >              if (ret < 0)
> > > > > >                      goto unpin_fb;
> > > > > > -           fence = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >              if (fence) {
> > > > > >                      add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> > > > > >                                                 fence);
> > > > > > diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> > > > > > index 9e508e7d4629f..bdfc6bf16a4e9 100644
> > > > > > --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> > > > > > +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> > > > > > @@ -10,7 +10,7 @@
> > > > > >     void dma_resv_prune(struct dma_resv *resv)
> > > > > >     {
> > > > > >      if (dma_resv_trylock(resv)) {
> > > > > > -           if (dma_resv_test_signaled_rcu(resv, true))
> > > > > > +           if (dma_resv_test_signaled_unlocked(resv, true))
> > > > > >                      dma_resv_add_excl_fence(resv, NULL);
> > > > > >              dma_resv_unlock(resv);
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > > > > > index 25235ef630c10..754ad6d1bace9 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > > > > > @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> > > > > >       * Alternatively, we can trade that extra information on read/write
> > > > > >       * activity with
> > > > > >       *      args->busy =
> > > > > > -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
> > > > > > +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
> > > > > >       * to report the overall busyness. This is what the wait-ioctl does.
> > > > > >       *
> > > > > >       */
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > > index 297143511f99b..e8f323564e57b 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > > @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> > > > > >      if (DBG_FORCE_RELOC)
> > > > > >              return false;
> > > > > > -   return !dma_resv_test_signaled_rcu(vma->resv, true);
> > > > > > +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
> > > > > >     }
> > > > > >     static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > > > > > index 2ebd79537aea9..7c0eb425cb3b3 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > > > > > @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> > > > > >      struct dma_fence *fence;
> > > > > >      rcu_read_lock();
> > > > > > -   fence = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >      rcu_read_unlock();
> > > > > >      if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > > > > > index a657b99ec7606..44df18dc9669f 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > > > > > @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> > > > > >              return true;
> > > > > >      /* we will unbind on next submission, still have userptr pins */
> > > > > > -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> > > > > > -                                 MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (r <= 0)
> > > > > >              drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > > > > > index 4b9856d5ba14f..5b6c52659ad4d 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > > > > > @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> > > > > >              unsigned int count, i;
> > > > > >              int ret;
> > > > > > -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> > > > > >               */
> > > > > >              prune_fences = count && timeout >= 0;
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(resv);
> > > > > >      }
> > > > > >      if (excl && timeout >= 0)
> > > > > > @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> > > > > >              unsigned int count, i;
> > > > > >              int ret;
> > > > > > -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> > > > > > -                                         &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > > > > > +                                              &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> > > > > >              kfree(shared);
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >      }
> > > > > >      if (excl) {
> > > > > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > > > > > index 970d8f4986bbe..f1ed03ced7dd1 100644
> > > > > > --- a/drivers/gpu/drm/i915/i915_request.c
> > > > > > +++ b/drivers/gpu/drm/i915/i915_request.c
> > > > > > @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> > > > > >              struct dma_fence **shared;
> > > > > >              unsigned int count, i;
> > > > > > -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> > > > > > -                                                   &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > > > > > +                                              &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> > > > > >                      dma_fence_put(shared[i]);
> > > > > >              kfree(shared);
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >      }
> > > > > >      if (excl) {
> > > > > > diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> > > > > > index 2744558f30507..0bcb7ea44201e 100644
> > > > > > --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> > > > > > +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> > > > > > @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> > > > > >              struct dma_fence **shared;
> > > > > >              unsigned int count, i;
> > > > > > -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> > > > > >                      dma_fence_put(shared[i]);
> > > > > >              kfree(shared);
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(resv);
> > > > > >      }
> > > > > >      if (ret >= 0 && excl && excl->ops != exclude) {
> > > > > > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > > > > > index 56df86e5f7400..1aca60507bb14 100644
> > > > > > --- a/drivers/gpu/drm/msm/msm_gem.c
> > > > > > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > > > > > @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> > > > > >              op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> > > > > >      long ret;
> > > > > > -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> > > > > > -                                             true,  remain);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> > > > > >      if (ret == 0)
> > > > > >              return remain == 0 ? -EBUSY : -ETIMEDOUT;
> > > > > >      else if (ret < 0)
> > > > > > diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > > > > > index 0cb1f9d848d3e..8d048bacd6f02 100644
> > > > > > --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > > > > > +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > > > > > @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> > > > > >                      asyw->image.handle[0] = ctxdma->object.handle;
> > > > > >      }
> > > > > > -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> > > > > > +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> > > > > >      asyw->image.offset[0] = nvbo->offset;
> > > > > >      if (wndw->func->prepare) {
> > > > > > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > > > > > index a70e82413fa75..bc6b09ee9b552 100644
> > > > > > --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> > > > > > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > > > > > @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> > > > > >              return -ENOENT;
> > > > > >      nvbo = nouveau_gem_object(gem);
> > > > > > -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> > > > > > -                                              no_wait ? 0 : 30 * HZ);
> > > > > > +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> > > > > > +                                         no_wait ? 0 : 30 * HZ);
> > > > > >      if (!lret)
> > > > > >              ret = -EBUSY;
> > > > > >      else if (lret > 0)
> > > > > > diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > > > > > index ca07098a61419..eef5b632ee0ce 100644
> > > > > > --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> > > > > > +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > > > > > @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> > > > > >      if (!gem_obj)
> > > > > >              return -ENOENT;
> > > > > > -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> > > > > > -                                             true, timeout);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> > > > > > +                                        true, timeout);
> > > > > >      if (!ret)
> > > > > >              ret = timeout ? -ETIMEDOUT : -EBUSY;
> > > > > > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > > > index 6003cfeb13221..2df3e999a38d0 100644
> > > > > > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > > > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > > > @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> > > > > >      int i;
> > > > > >      for (i = 0; i < bo_count; i++)
> > > > > > -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> > > > > > +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> > > > > >     }
> > > > > >     static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> > > > > > diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> > > > > > index 05ea2f39f6261..1a38b0bf36d11 100644
> > > > > > --- a/drivers/gpu/drm/radeon/radeon_gem.c
> > > > > > +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> > > > > > @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> > > > > >      }
> > > > > >      if (domain == RADEON_GEM_DOMAIN_CPU) {
> > > > > >              /* Asking for cpu access wait for object idle */
> > > > > > -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > > +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > >              if (!r)
> > > > > >                      r = -EBUSY;
> > > > > > @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> > > > > >      }
> > > > > >      robj = gem_to_radeon_bo(gobj);
> > > > > > -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> > > > > > +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> > > > > >      if (r == 0)
> > > > > >              r = -EBUSY;
> > > > > >      else
> > > > > > @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> > > > > >      }
> > > > > >      robj = gem_to_radeon_bo(gobj);
> > > > > > -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > >      if (ret == 0)
> > > > > >              r = -EBUSY;
> > > > > >      else if (ret < 0)
> > > > > > diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> > > > > > index e37c9a57a7c36..a19be3f8a218c 100644
> > > > > > --- a/drivers/gpu/drm/radeon/radeon_mn.c
> > > > > > +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> > > > > > @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> > > > > >              return true;
> > > > > >      }
> > > > > > -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > > > > > -                                 MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (r <= 0)
> > > > > >              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > > > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > index ca1b098b6a561..215cad3149621 100644
> > > > > > --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> > > > > >      struct dma_resv *resv = &bo->base._resv;
> > > > > >      int ret;
> > > > > > -   if (dma_resv_test_signaled_rcu(resv, true))
> > > > > > +   if (dma_resv_test_signaled_unlocked(resv, true))
> > > > > >              ret = 0;
> > > > > >      else
> > > > > >              ret = -EBUSY;
> > > > > > @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> > > > > >                      dma_resv_unlock(bo->base.resv);
> > > > > >              spin_unlock(&bo->bdev->lru_lock);
> > > > > > -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> > > > > > -                                            30 * HZ);
> > > > > > +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> > > > > > +                                                 30 * HZ);
> > > > > >              if (lret < 0)
> > > > > >                      return lret;
> > > > > > @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> > > > > >                      /* Last resort, if we fail to allocate memory for the
> > > > > >                       * fences block for the BO to become idle
> > > > > >                       */
> > > > > > -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> > > > > > -                                             30 * HZ);
> > > > > > +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> > > > > > +                                                  30 * HZ);
> > > > > >              }
> > > > > >              if (bo->bdev->funcs->release_notify)
> > > > > > @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> > > > > >              ttm_mem_io_free(bdev, &bo->mem);
> > > > > >      }
> > > > > > -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> > > > > > +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> > > > > >          !dma_resv_trylock(bo->base.resv)) {
> > > > > >              /* The BO is not idle, resurrect it for delayed destroy */
> > > > > >              ttm_bo_flush_all_fences(bo);
> > > > > > @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> > > > > >      long timeout = 15 * HZ;
> > > > > >      if (no_wait) {
> > > > > > -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> > > > > > +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> > > > > >                      return 0;
> > > > > >              else
> > > > > >                      return -EBUSY;
> > > > > >      }
> > > > > > -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> > > > > > -                                                 interruptible, timeout);
> > > > > > +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> > > > > > +                                            interruptible, timeout);
> > > > > >      if (timeout < 0)
> > > > > >              return timeout;
> > > > > > diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> > > > > > index 2902dc6e64faf..010a82405e374 100644
> > > > > > --- a/drivers/gpu/drm/vgem/vgem_fence.c
> > > > > > +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> > > > > > @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> > > > > >      /* Check for a conflicting fence */
> > > > > >      resv = obj->resv;
> > > > > > -   if (!dma_resv_test_signaled_rcu(resv,
> > > > > > -                                             arg->flags & VGEM_FENCE_WRITE)) {
> > > > > > +   if (!dma_resv_test_signaled_unlocked(resv,
> > > > > > +                                        arg->flags & VGEM_FENCE_WRITE)) {
> > > > > >              ret = -EBUSY;
> > > > > >              goto err_fence;
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > > > > index 669f2ee395154..ab010c8e32816 100644
> > > > > > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > > > > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > > > > @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> > > > > >              return -ENOENT;
> > > > > >      if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> > > > > > -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
> > > > > > +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> > > > > >      } else {
> > > > > > -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> > > > > > -                                           timeout);
> > > > > > +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> > > > > > +                                                timeout);
> > > > > >      }
> > > > > >      if (ret == 0)
> > > > > >              ret = -EBUSY;
> > > > > > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > > > > > index 04dd49c4c2572..19e1ce23842a9 100644
> > > > > > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > > > > > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > > > > > @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> > > > > >      if (flags & drm_vmw_synccpu_allow_cs) {
> > > > > >              long lret;
> > > > > > -           lret = dma_resv_wait_timeout_rcu
> > > > > > +           lret = dma_resv_wait_timeout_unlocked
> > > > > >                      (bo->base.resv, true, true,
> > > > > >                       nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> > > > > >              if (!lret)
> > > > > > diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> > > > > > index d44a77e8a7e34..99cfb7af966b8 100644
> > > > > > --- a/include/linux/dma-resv.h
> > > > > > +++ b/include/linux/dma-resv.h
> > > > > > @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> > > > > >     }
> > > > > >     /**
> > > > > > - * dma_resv_get_excl_rcu - get the reservation object's
> > > > > > + * dma_resv_get_excl_unlocked - get the reservation object's
> > > > > >      * exclusive fence, without lock held.
> > > > > >      * @obj: the reservation object
> > > > > >      *
> > > > > > @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> > > > > >      * The exclusive fence or NULL if none
> > > > > >      */
> > > > > >     static inline struct dma_fence *
> > > > > > -dma_resv_get_excl_rcu(struct dma_resv *obj)
> > > > > > +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> > > > > >     {
> > > > > >      struct dma_fence *fence;
> > > > > > @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> > > > > >     void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> > > > > > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > > > > > -                       struct dma_fence **pfence_excl,
> > > > > > -                       unsigned *pshared_count,
> > > > > > -                       struct dma_fence ***pshared);
> > > > > > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > > > > > +                            struct dma_fence **pfence_excl,
> > > > > > +                            unsigned *pshared_count,
> > > > > > +                            struct dma_fence ***pshared);
> > > > > >     int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> > > > > > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> > > > > > -                          unsigned long timeout);
> > > > > > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> > > > > > +                               unsigned long timeout);
> > > > > > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> > > > > > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> > > > > >     #endif /* _LINUX_RESERVATION_H */
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
@ 2021-06-01 14:34               ` Daniel Vetter
  0 siblings, 0 replies; 68+ messages in thread
From: Daniel Vetter @ 2021-06-01 14:34 UTC (permalink / raw)
  To: Christian König
  Cc: Gerd Hoffmann, Thomas Zimmermann, intel-gfx, Maxime Ripard,
	Huang Rui, VMware Graphics, dri-devel, Lucas Stach

On Thu, May 27, 2021 at 03:41:02PM +0200, Christian König wrote:
> Am 27.05.21 um 15:25 schrieb Daniel Vetter:
> > On Thu, May 27, 2021 at 1:59 PM Christian König
> > <christian.koenig@amd.com> wrote:
> > > Am 27.05.21 um 12:39 schrieb Daniel Vetter:
> > > > On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
> > > > > Am 25.05.21 um 23:17 schrieb Jason Ekstrand:
> > > > > > None of these helpers actually leak any RCU details to the caller.  They
> > > > > > all assume you have a genuine reference, take the RCU read lock, and
> > > > > > retry if needed.  Naming them with an _rcu is likely to cause callers
> > > > > > more panic than needed.
> > > > > I'm really wondering if we need this postfix in the first place.
> > > > > 
> > > > > If we use the right rcu_dereference_check() macro then those functions can
> > > > > be called with the reservation object either locked or unlocked. It
> > > > > shouldn't matter to them.
> > > > > 
> > > > > But getting rid of the _rcu postfix sounds like a good idea in general to
> > > > > me.
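
A minimal sketch of that idea (hypothetical helper name; it assumes the
fence_excl member and the dma_resv_held() lockdep macro from dma-resv.h,
and is not part of this series):

	/*
	 * Sketch only: the ORed lockdep condition makes the accessor
	 * legal both under rcu_read_lock() and with the dma_resv lock
	 * held, so no _rcu/_unlocked postfix is needed.  No reference
	 * is taken, so the caller must either hold the lock or stay
	 * inside the RCU read-side critical section.
	 */
	static inline struct dma_fence *
	dma_resv_excl_fence_checked(struct dma_resv *obj)
	{
		return rcu_dereference_check(obj->fence_excl,
					     dma_resv_held(obj));
	}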
> > > > So does that count as an ack or not? If yes, I think we should land this
> > > > patch right away, since it's going to conflict badly real fast.
> > > I had some follow-up discussion with Jason, and I would rather switch to
> > > using rcu_dereference_check() in all places and completely remove the
> > > _rcu postfix.
> > Hm, I'm not sure whether spreading _rcu tricks further is an
> > especially bright idea. At least i915 is full of very clever _rcu
> > tricks, and encouraging drivers to roll out their own _rcu everywhere
> > is probably not in our best interest. Some fast-path checking is imo
> > ok, but that's it. Especially once we get into the entire
> > SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.
> 
> Oh, yes completely agree. SLAB_TYPESAFE_BY_RCU is optimizing for the wrong
> use case I think.
> 
> You save a bit of overhead while freeing fences, but in return you have
> extra overhead while adding fences to the dma_resv object.
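
For context, a rough sketch of that trade-off (hypothetical cache and
lookup helper, not from any driver):

	#include <linux/dma-fence.h>
	#include <linux/slab.h>

	static struct kmem_cache *fence_cache;

	static int __init fence_cache_init(void)
	{
		/*
		 * SLAB_TYPESAFE_BY_RCU: the slab may reuse an object's
		 * memory within an RCU grace period instead of deferring
		 * the free, which makes freeing fences cheaper.
		 */
		fence_cache = kmem_cache_create("fences",
						sizeof(struct dma_fence), 0,
						SLAB_TYPESAFE_BY_RCU, NULL);
		return fence_cache ? 0 : -ENOMEM;
	}

	static struct dma_fence *lookup_fence(struct dma_fence __rcu **fencep)
	{
		struct dma_fence *fence;

		/*
		 * The price: an RCU reader must assume the memory may
		 * have been recycled, so it can only take a reference if
		 * the refcount is still non-zero (as dma_fence_get_rcu()
		 * does), and strictly it must re-read the pointer
		 * afterwards to make sure it still names the same object,
		 * which is the retry loop in dma_fence_get_rcu_safe().
		 */
		rcu_read_lock();
		fence = rcu_dereference(*fencep);
		if (fence && !kref_get_unless_zero(&fence->refcount))
			fence = NULL;
		rcu_read_unlock();

		return fence;
	}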

Getting way off topic, but I'm wondering whether the entire rcu business
is really worth it for dma_fence.

Mostly we manipulate dma_resv while holding the dma_resv lock anyway. There
are maybe a few waits and such, but I'm not sure whether the dma_resv_lock +
dma_fence_get + dma_resv_unlock + dma_fence_put sequence really matters. And
if you have lock contention on a single buffer you've lost anyway.
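
Spelled out, the sequence in question is just this (a sketch, not from the
patch):

	static struct dma_fence *get_excl_locked(struct dma_resv *obj)
	{
		struct dma_fence *fence;

		/* hold the reservation lock only long enough to grab a
		 * reference to the exclusive fence */
		dma_resv_lock(obj, NULL);
		fence = dma_fence_get(dma_resv_get_excl(obj));
		dma_resv_unlock(obj);

		return fence;
	}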

At that point I think we have maybe some lockless tricks in the evict
code, but then again once you're evicting, things are probably going pretty
badly already.

So for i915 I want to analyze whether SLAB_TYPESAFE_BY_RCU is really worth
it and was ever justified, or whether we should drop it. But I'm also
wondering whether we should drop rcu for fences outright. It would be quite
an audit to check everywhere it's used.

From the i915 side, we did these lockless tricks back when
dev->struct_mutex was a thing and always contended. But with per-obj
locking now happening for real with dma-resv, that's probably no longer
justified.

But then, looking at the git history, the rcu in dma_resv is older than
that and was justified by ttm.

> > That's why I'm slightly leaning towards _unlocked variants, except we
> > do use those in lots of places where we hold dma_resv_lock too. So I'm
> > not sure what the best plan overall is here.
> 
> Well what function names are we actually talking about?
> 
> For the dma_resv_get_excl_rcu() case I agree we should probably rename that
> to dma_resv_get_excl_unlocked(), because it makes no sense at all to use
> this function while holding the lock.
> 
> But for the following functions:
> dma_resv_get_fences_rcu
> dma_resv_wait_timeout_rcu
> dma_resv_test_signaled_rcu
> 
> I think we should just drop the _rcu naming because those are supposed to
> work independently of whether the resv lock is held or not.

Ack on all naming.
-Daniel
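
For reference, the body of the renamed dma_resv_get_excl_unlocked() boils
down to roughly this (a sketch; compare the inline helper in the dma-resv.h
hunk quoted above):

	struct dma_fence *fence;

	if (!rcu_access_pointer(obj->fence_excl))
		return NULL;

	/*
	 * dma_fence_get_rcu_safe() takes a reference and then re-reads
	 * the pointer, retrying if it changed underneath us; that is
	 * what makes this safe without the dma_resv lock held.
	 */
	rcu_read_lock();
	fence = dma_fence_get_rcu_safe(&obj->fence_excl);
	rcu_read_unlock();

	return fence;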

> 
> Regards,
> Christian.
> 
> > -Daniel
> > 
> > > But yes I see the pain of rebasing this as well.
> > > 
> > > Christian.
> > > 
> > > > -Daniel
> > > > 
> > > > > Christian.
> > > > > 
> > > > > > v2 (Jason Ekstrand):
> > > > > >     - Fix function argument indentation
> > > > > > 
> > > > > > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > > > > > Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > > > Cc: Christian König <christian.koenig@amd.com>
> > > > > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > > > > Cc: Maxime Ripard <mripard@kernel.org>
> > > > > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > > > > Cc: Lucas Stach <l.stach@pengutronix.de>
> > > > > > Cc: Rob Clark <robdclark@gmail.com>
> > > > > > Cc: Sean Paul <sean@poorly.run>
> > > > > > Cc: Huang Rui <ray.huang@amd.com>
> > > > > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > > > > Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
> > > > > > ---
> > > > > >     drivers/dma-buf/dma-buf.c                     |  4 +--
> > > > > >     drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
> > > > > >     drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
> > > > > >     .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
> > > > > >     drivers/gpu/drm/drm_gem.c                     | 10 +++----
> > > > > >     drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
> > > > > >     drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
> > > > > >     drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
> > > > > >     drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
> > > > > >     drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
> > > > > >     .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
> > > > > >     drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
> > > > > >     drivers/gpu/drm/i915/i915_request.c           |  6 ++--
> > > > > >     drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
> > > > > >     drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
> > > > > >     drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
> > > > > >     drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
> > > > > >     drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
> > > > > >     drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
> > > > > >     drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
> > > > > >     drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
> > > > > >     drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
> > > > > >     drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
> > > > > >     drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
> > > > > >     drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
> > > > > >     include/linux/dma-resv.h                      | 18 ++++++------
> > > > > >     36 files changed, 108 insertions(+), 110 deletions(-)
> > > > > > 
> > > > > > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > > > > > index f264b70c383eb..ed6451d55d663 100644
> > > > > > --- a/drivers/dma-buf/dma-buf.c
> > > > > > +++ b/drivers/dma-buf/dma-buf.c
> > > > > > @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > > > > >      long ret;
> > > > > >      /* Wait on any implicit rendering fences */
> > > > > > -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
> > > > > > -                                             MAX_SCHEDULE_TIMEOUT);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
> > > > > > +                                        MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (ret < 0)
> > > > > >              return ret;
> > > > > > diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> > > > > > index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
> > > > > > --- a/drivers/dma-buf/dma-resv.c
> > > > > > +++ b/drivers/dma-buf/dma-resv.c
> > > > > > @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
> > > > > >     EXPORT_SYMBOL(dma_resv_copy_fences);
> > > > > >     /**
> > > > > > - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
> > > > > > + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
> > > > > >      * fences without update side lock held
> > > > > >      * @obj: the reservation object
> > > > > >      * @pfence_excl: the returned exclusive fence (or NULL)
> > > > > > @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
> > > > > >      * exclusive fence is not specified the fence is put into the array of the
> > > > > >      * shared fences as well. Returns either zero or -ENOMEM.
> > > > > >      */
> > > > > > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > > > > > -                       struct dma_fence **pfence_excl,
> > > > > > -                       unsigned *pshared_count,
> > > > > > -                       struct dma_fence ***pshared)
> > > > > > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > > > > > +                            struct dma_fence **pfence_excl,
> > > > > > +                            unsigned *pshared_count,
> > > > > > +                            struct dma_fence ***pshared)
> > > > > >     {
> > > > > >      struct dma_fence **shared = NULL;
> > > > > >      struct dma_fence *fence_excl;
> > > > > > @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > > > > >      *pshared = shared;
> > > > > >      return ret;
> > > > > >     }
> > > > > > -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> > > > > > +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
> > > > > >     /**
> > > > > > - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
> > > > > > + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
> > > > > >      * shared and/or exclusive fences.
> > > > > >      * @obj: the reservation object
> > > > > >      * @wait_all: if true, wait on all fences, else wait on just exclusive fence
> > > > > > @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
> > > > > >      * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
> > > > > > >      * greater than zero on success.
> > > > > >      */
> > > > > > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> > > > > > -                          bool wait_all, bool intr,
> > > > > > -                          unsigned long timeout)
> > > > > > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
> > > > > > +                               bool wait_all, bool intr,
> > > > > > +                               unsigned long timeout)
> > > > > >     {
> > > > > >      struct dma_fence *fence;
> > > > > >      unsigned seq, shared_count;
> > > > > > @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
> > > > > >      rcu_read_unlock();
> > > > > >      goto retry;
> > > > > >     }
> > > > > > -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
> > > > > > +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
> > > > > >     static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > > > > > @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > > > > >     }
> > > > > >     /**
> > > > > > - * dma_resv_test_signaled_rcu - Test if a reservation object's
> > > > > > + * dma_resv_test_signaled_unlocked - Test if a reservation object's
> > > > > >      * fences have been signaled.
> > > > > >      * @obj: the reservation object
> > > > > >      * @test_all: if true, test all fences, otherwise only test the exclusive
> > > > > > @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
> > > > > >      * RETURNS
> > > > > >      * true if all fences signaled, else false
> > > > > >      */
> > > > > > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> > > > > > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
> > > > > >     {
> > > > > >      unsigned seq, shared_count;
> > > > > >      int ret;
> > > > > > @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
> > > > > >      rcu_read_unlock();
> > > > > >      return ret;
> > > > > >     }
> > > > > > -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
> > > > > > +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > > > > > index 8a1fb8b6606e5..b8e24f199be9a 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> > > > > > @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
> > > > > >              goto unpin;
> > > > > >      }
> > > > > > -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
> > > > > > -                                         &work->shared_count,
> > > > > > -                                         &work->shared);
> > > > > > +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
> > > > > > +                                    &work->shared_count,
> > > > > > +                                    &work->shared);
> > > > > >      if (unlikely(r != 0)) {
> > > > > >              DRM_ERROR("failed to get fences for buffer\n");
> > > > > >              goto unpin;
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > > > > > index baa980a477d94..0d0319bc51577 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > > > > > @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
> > > > > >      if (!dma_resv_get_list(obj)) /* no shared fences to convert */
> > > > > >              return 0;
> > > > > > -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
> > > > > > +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
> > > > > >      if (r)
> > > > > >              return r;
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > > > > > index 18974bd081f00..8e2996d6ba3ad 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > > > > > @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> > > > > >              return -ENOENT;
> > > > > >      }
> > > > > >      robj = gem_to_amdgpu_bo(gobj);
> > > > > > -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
> > > > > > -                                             timeout);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
> > > > > > +                                        timeout);
> > > > > >      /* ret == 0 means not signaled,
> > > > > >       * ret > 0 means signaled
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > > > > > index b4971e90b98cf..38e1b32dd2cef 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> > > > > > @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> > > > > >      unsigned count;
> > > > > >      int r;
> > > > > > -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
> > > > > > +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
> > > > > >      if (r)
> > > > > >              goto fallback;
> > > > > > @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
> > > > > >      /* Not enough memory for the delayed delete, as last resort
> > > > > >       * block for all the fences to complete.
> > > > > >       */
> > > > > > -   dma_resv_wait_timeout_rcu(resv, true, false,
> > > > > > -                                       MAX_SCHEDULE_TIMEOUT);
> > > > > > +   dma_resv_wait_timeout_unlocked(resv, true, false,
> > > > > > +                                  MAX_SCHEDULE_TIMEOUT);
> > > > > >      amdgpu_pasid_free(pasid);
> > > > > >     }
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > > > > > index 828b5167ff128..0319c8b547c48 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> > > > > > @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
> > > > > >      mmu_interval_set_seq(mni, cur_seq);
> > > > > > -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > > > > > -                                 MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      mutex_unlock(&adev->notifier_lock);
> > > > > >      if (r <= 0)
> > > > > >              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > > > > > index 0adffcace3263..de1c7c5501683 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> > > > > > @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
> > > > > >              return 0;
> > > > > >      }
> > > > > > -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
> > > > > > -                                           MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (r < 0)
> > > > > >              return r;
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > > > > > index c6dbc08016045..4a2196404fb69 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> > > > > > @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
> > > > > >      ib->length_dw = 16;
> > > > > >      if (direct) {
> > > > > > -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> > > > > > -                                                   true, false,
> > > > > > -                                                   msecs_to_jiffies(10));
> > > > > > +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
> > > > > > +                                              true, false,
> > > > > > +                                              msecs_to_jiffies(10));
> > > > > >              if (r == 0)
> > > > > >                      r = -ETIMEDOUT;
> > > > > >              if (r < 0)
> > > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > > > > > index 4a3e3f72e1277..7ba1c537d6584 100644
> > > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > > > > > @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> > > > > >      unsigned i, shared_count;
> > > > > >      int r;
> > > > > > -   r = dma_resv_get_fences_rcu(resv, &excl,
> > > > > > -                                         &shared_count, &shared);
> > > > > > +   r = dma_resv_get_fences_unlocked(resv, &excl,
> > > > > > +                                    &shared_count, &shared);
> > > > > >      if (r) {
> > > > > >              /* Not enough memory to grab the fence list, as last resort
> > > > > >               * block for all the fences to complete.
> > > > > >               */
> > > > > > -           dma_resv_wait_timeout_rcu(resv, true, false,
> > > > > > -                                               MAX_SCHEDULE_TIMEOUT);
> > > > > > +           dma_resv_wait_timeout_unlocked(resv, true, false,
> > > > > > +                                          MAX_SCHEDULE_TIMEOUT);
> > > > > >              return;
> > > > > >      }
> > > > > > @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
> > > > > >              return true;
> > > > > >      /* Don't evict VM page tables while they are busy */
> > > > > > -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
> > > > > > +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
> > > > > >              return false;
> > > > > >      /* Try to block ongoing updates */
> > > > > > @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
> > > > > >      */
> > > > > >     long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
> > > > > >     {
> > > > > > -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
> > > > > > -                                       true, true, timeout);
> > > > > > +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
> > > > > > +                                            true, true, timeout);
> > > > > >      if (timeout <= 0)
> > > > > >              return timeout;
> > > > > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > > > > index 9ca517b658546..0121d2817fa26 100644
> > > > > > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > > > > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > > > > @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
> > > > > >               * deadlock during GPU reset when this fence will not signal
> > > > > >               * but we hold reservation lock for the BO.
> > > > > >               */
> > > > > > -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
> > > > > > -                                                   false,
> > > > > > -                                                   msecs_to_jiffies(5000));
> > > > > > +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
> > > > > > +                                              false,
> > > > > > +                                              msecs_to_jiffies(5000));
> > > > > >              if (unlikely(r <= 0))
> > > > > >                      DRM_ERROR("Waiting for fences timed out!");
> > > > > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > > > > > index 9989425e9875a..1241a421b9e81 100644
> > > > > > --- a/drivers/gpu/drm/drm_gem.c
> > > > > > +++ b/drivers/gpu/drm/drm_gem.c
> > > > > > @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
> > > > > >              return -EINVAL;
> > > > > >      }
> > > > > > -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
> > > > > > -                                             true, timeout);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
> > > > > > +                                        true, timeout);
> > > > > >      if (ret == 0)
> > > > > >              ret = -ETIME;
> > > > > >      else if (ret > 0)
> > > > > > @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
> > > > > >      if (!write) {
> > > > > >              struct dma_fence *fence =
> > > > > > -                   dma_resv_get_excl_rcu(obj->resv);
> > > > > > +                   dma_resv_get_excl_unlocked(obj->resv);
> > > > > >              return drm_gem_fence_array_add(fence_array, fence);
> > > > > >      }
> > > > > > -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
> > > > > > -                                           &fence_count, &fences);
> > > > > > +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
> > > > > > +                                      &fence_count, &fences);
> > > > > >      if (ret || !fence_count)
> > > > > >              return ret;
> > > > > > diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > > > > > index a005c5a0ba46a..a27135084ae5c 100644
> > > > > > --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> > > > > > +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> > > > > > @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
> > > > > >              return 0;
> > > > > >      obj = drm_gem_fb_get_obj(state->fb, 0);
> > > > > > -   fence = dma_resv_get_excl_rcu(obj->resv);
> > > > > > +   fence = dma_resv_get_excl_unlocked(obj->resv);
> > > > > >      drm_atomic_set_fence_for_plane(state, fence);
> > > > > >      return 0;
> > > > > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > > > > > index db69f19ab5bca..4e6f5346e84e4 100644
> > > > > > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > > > > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> > > > > > @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
> > > > > >      }
> > > > > >      if (op & ETNA_PREP_NOSYNC) {
> > > > > > -           if (!dma_resv_test_signaled_rcu(obj->resv,
> > > > > > -                                                     write))
> > > > > > +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
> > > > > >                      return -EBUSY;
> > > > > >      } else {
> > > > > >              unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
> > > > > > -           ret = dma_resv_wait_timeout_rcu(obj->resv,
> > > > > > -                                                     write, true, remain);
> > > > > > +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
> > > > > > +                                                write, true, remain);
> > > > > >              if (ret <= 0)
> > > > > >                      return ret == 0 ? -ETIMEDOUT : ret;
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > > > > > index d05c359945799..6617fada4595d 100644
> > > > > > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > > > > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
> > > > > > @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
> > > > > >                      continue;
> > > > > >              if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
> > > > > > -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
> > > > > > -                                                           &bo->nr_shared,
> > > > > > -                                                           &bo->shared);
> > > > > > +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
> > > > > > +                                                      &bo->nr_shared,
> > > > > > +                                                      &bo->shared);
> > > > > >                      if (ret)
> > > > > >                              return ret;
> > > > > >              } else {
> > > > > > -                   bo->excl = dma_resv_get_excl_rcu(robj);
> > > > > > +                   bo->excl = dma_resv_get_excl_unlocked(robj);
> > > > > >              }
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> > > > > > index 422b59ebf6dce..5f0b85a102159 100644
> > > > > > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > > > > > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > > > > > @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
> > > > > >              if (ret < 0)
> > > > > >                      goto unpin_fb;
> > > > > > -           fence = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >              if (fence) {
> > > > > >                      add_rps_boost_after_vblank(new_plane_state->hw.crtc,
> > > > > >                                                 fence);
> > > > > > diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
> > > > > > index 9e508e7d4629f..bdfc6bf16a4e9 100644
> > > > > > --- a/drivers/gpu/drm/i915/dma_resv_utils.c
> > > > > > +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
> > > > > > @@ -10,7 +10,7 @@
> > > > > >     void dma_resv_prune(struct dma_resv *resv)
> > > > > >     {
> > > > > >      if (dma_resv_trylock(resv)) {
> > > > > > -           if (dma_resv_test_signaled_rcu(resv, true))
> > > > > > +           if (dma_resv_test_signaled_unlocked(resv, true))
> > > > > >                      dma_resv_add_excl_fence(resv, NULL);
> > > > > >              dma_resv_unlock(resv);
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > > > > > index 25235ef630c10..754ad6d1bace9 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
> > > > > > @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
> > > > > >       * Alternatively, we can trade that extra information on read/write
> > > > > >       * activity with
> > > > > >       *      args->busy =
> > > > > > -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
> > > > > > +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
> > > > > >       * to report the overall busyness. This is what the wait-ioctl does.
> > > > > >       *
> > > > > >       */
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > > index 297143511f99b..e8f323564e57b 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > > @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
> > > > > >      if (DBG_FORCE_RELOC)
> > > > > >              return false;
> > > > > > -   return !dma_resv_test_signaled_rcu(vma->resv, true);
> > > > > > +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
> > > > > >     }
> > > > > >     static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > > > > > index 2ebd79537aea9..7c0eb425cb3b3 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > > > > > @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
> > > > > >      struct dma_fence *fence;
> > > > > >      rcu_read_lock();
> > > > > > -   fence = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >      rcu_read_unlock();
> > > > > >      if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > > > > > index a657b99ec7606..44df18dc9669f 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > > > > > @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
> > > > > >              return true;
> > > > > >      /* we will unbind on next submission, still have userptr pins */
> > > > > > -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
> > > > > > -                                 MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (r <= 0)
> > > > > >              drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
> > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > > > > > index 4b9856d5ba14f..5b6c52659ad4d 100644
> > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> > > > > > @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> > > > > >              unsigned int count, i;
> > > > > >              int ret;
> > > > > > -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
> > > > > >               */
> > > > > >              prune_fences = count && timeout >= 0;
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(resv);
> > > > > >      }
> > > > > >      if (excl && timeout >= 0)
> > > > > > @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> > > > > >              unsigned int count, i;
> > > > > >              int ret;
> > > > > > -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> > > > > > -                                         &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > > > > > +                                              &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> > > > > >              kfree(shared);
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >      }
> > > > > >      if (excl) {
> > > > > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > > > > > index 970d8f4986bbe..f1ed03ced7dd1 100644
> > > > > > --- a/drivers/gpu/drm/i915/i915_request.c
> > > > > > +++ b/drivers/gpu/drm/i915/i915_request.c
> > > > > > @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
> > > > > >              struct dma_fence **shared;
> > > > > >              unsigned int count, i;
> > > > > > -           ret = dma_resv_get_fences_rcu(obj->base.resv,
> > > > > > -                                                   &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
> > > > > > +                                              &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
> > > > > >                      dma_fence_put(shared[i]);
> > > > > >              kfree(shared);
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(obj->base.resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
> > > > > >      }
> > > > > >      if (excl) {
> > > > > > diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> > > > > > index 2744558f30507..0bcb7ea44201e 100644
> > > > > > --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> > > > > > +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> > > > > > @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> > > > > >              struct dma_fence **shared;
> > > > > >              unsigned int count, i;
> > > > > > -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
> > > > > > +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
> > > > > >              if (ret)
> > > > > >                      return ret;
> > > > > > @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
> > > > > >                      dma_fence_put(shared[i]);
> > > > > >              kfree(shared);
> > > > > >      } else {
> > > > > > -           excl = dma_resv_get_excl_rcu(resv);
> > > > > > +           excl = dma_resv_get_excl_unlocked(resv);
> > > > > >      }
> > > > > >      if (ret >= 0 && excl && excl->ops != exclude) {
> > > > > > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > > > > > index 56df86e5f7400..1aca60507bb14 100644
> > > > > > --- a/drivers/gpu/drm/msm/msm_gem.c
> > > > > > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > > > > > @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
> > > > > >              op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
> > > > > >      long ret;
> > > > > > -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
> > > > > > -                                             true,  remain);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
> > > > > >      if (ret == 0)
> > > > > >              return remain == 0 ? -EBUSY : -ETIMEDOUT;
> > > > > >      else if (ret < 0)
> > > > > > diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > > > > > index 0cb1f9d848d3e..8d048bacd6f02 100644
> > > > > > --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > > > > > +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
> > > > > > @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> > > > > >                      asyw->image.handle[0] = ctxdma->object.handle;
> > > > > >      }
> > > > > > -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
> > > > > > +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
> > > > > >      asyw->image.offset[0] = nvbo->offset;
> > > > > >      if (wndw->func->prepare) {
> > > > > > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > > > > > index a70e82413fa75..bc6b09ee9b552 100644
> > > > > > --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> > > > > > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > > > > > @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> > > > > >              return -ENOENT;
> > > > > >      nvbo = nouveau_gem_object(gem);
> > > > > > -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
> > > > > > -                                              no_wait ? 0 : 30 * HZ);
> > > > > > +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
> > > > > > +                                         no_wait ? 0 : 30 * HZ);
> > > > > >      if (!lret)
> > > > > >              ret = -EBUSY;
> > > > > >      else if (lret > 0)
> > > > > > diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > > > > > index ca07098a61419..eef5b632ee0ce 100644
> > > > > > --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> > > > > > +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> > > > > > @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
> > > > > >      if (!gem_obj)
> > > > > >              return -ENOENT;
> > > > > > -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
> > > > > > -                                             true, timeout);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
> > > > > > +                                        true, timeout);
> > > > > >      if (!ret)
> > > > > >              ret = timeout ? -ETIMEDOUT : -EBUSY;
> > > > > > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > > > index 6003cfeb13221..2df3e999a38d0 100644
> > > > > > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > > > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > > > @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> > > > > >      int i;
> > > > > >      for (i = 0; i < bo_count; i++)
> > > > > > -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
> > > > > > +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> > > > > >     }
> > > > > >     static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> > > > > > diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> > > > > > index 05ea2f39f6261..1a38b0bf36d11 100644
> > > > > > --- a/drivers/gpu/drm/radeon/radeon_gem.c
> > > > > > +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> > > > > > @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> > > > > >      }
> > > > > >      if (domain == RADEON_GEM_DOMAIN_CPU) {
> > > > > >              /* Asking for cpu access wait for object idle */
> > > > > > -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > > +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > >              if (!r)
> > > > > >                      r = -EBUSY;
> > > > > > @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
> > > > > >      }
> > > > > >      robj = gem_to_radeon_bo(gobj);
> > > > > > -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
> > > > > > +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
> > > > > >      if (r == 0)
> > > > > >              r = -EBUSY;
> > > > > >      else
> > > > > > @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
> > > > > >      }
> > > > > >      robj = gem_to_radeon_bo(gobj);
> > > > > > -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > > +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
> > > > > >      if (ret == 0)
> > > > > >              r = -EBUSY;
> > > > > >      else if (ret < 0)
> > > > > > diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
> > > > > > index e37c9a57a7c36..a19be3f8a218c 100644
> > > > > > --- a/drivers/gpu/drm/radeon/radeon_mn.c
> > > > > > +++ b/drivers/gpu/drm/radeon/radeon_mn.c
> > > > > > @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
> > > > > >              return true;
> > > > > >      }
> > > > > > -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
> > > > > > -                                 MAX_SCHEDULE_TIMEOUT);
> > > > > > +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
> > > > > > +                                      MAX_SCHEDULE_TIMEOUT);
> > > > > >      if (r <= 0)
> > > > > >              DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> > > > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > index ca1b098b6a561..215cad3149621 100644
> > > > > > --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> > > > > >      struct dma_resv *resv = &bo->base._resv;
> > > > > >      int ret;
> > > > > > -   if (dma_resv_test_signaled_rcu(resv, true))
> > > > > > +   if (dma_resv_test_signaled_unlocked(resv, true))
> > > > > >              ret = 0;
> > > > > >      else
> > > > > >              ret = -EBUSY;
> > > > > > @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
> > > > > >                      dma_resv_unlock(bo->base.resv);
> > > > > >              spin_unlock(&bo->bdev->lru_lock);
> > > > > > -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
> > > > > > -                                            30 * HZ);
> > > > > > +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
> > > > > > +                                                 30 * HZ);
> > > > > >              if (lret < 0)
> > > > > >                      return lret;
> > > > > > @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
> > > > > >                      /* Last resort, if we fail to allocate memory for the
> > > > > >                       * fences block for the BO to become idle
> > > > > >                       */
> > > > > > -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
> > > > > > -                                             30 * HZ);
> > > > > > +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
> > > > > > +                                                  30 * HZ);
> > > > > >              }
> > > > > >              if (bo->bdev->funcs->release_notify)
> > > > > > @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
> > > > > >              ttm_mem_io_free(bdev, &bo->mem);
> > > > > >      }
> > > > > > -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> > > > > > +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
> > > > > >          !dma_resv_trylock(bo->base.resv)) {
> > > > > >              /* The BO is not idle, resurrect it for delayed destroy */
> > > > > >              ttm_bo_flush_all_fences(bo);
> > > > > > @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
> > > > > >      long timeout = 15 * HZ;
> > > > > >      if (no_wait) {
> > > > > > -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
> > > > > > +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
> > > > > >                      return 0;
> > > > > >              else
> > > > > >                      return -EBUSY;
> > > > > >      }
> > > > > > -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
> > > > > > -                                                 interruptible, timeout);
> > > > > > +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
> > > > > > +                                            interruptible, timeout);
> > > > > >      if (timeout < 0)
> > > > > >              return timeout;
> > > > > > diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
> > > > > > index 2902dc6e64faf..010a82405e374 100644
> > > > > > --- a/drivers/gpu/drm/vgem/vgem_fence.c
> > > > > > +++ b/drivers/gpu/drm/vgem/vgem_fence.c
> > > > > > @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
> > > > > >      /* Check for a conflicting fence */
> > > > > >      resv = obj->resv;
> > > > > > -   if (!dma_resv_test_signaled_rcu(resv,
> > > > > > -                                             arg->flags & VGEM_FENCE_WRITE)) {
> > > > > > +   if (!dma_resv_test_signaled_unlocked(resv,
> > > > > > +                                        arg->flags & VGEM_FENCE_WRITE)) {
> > > > > >              ret = -EBUSY;
> > > > > >              goto err_fence;
> > > > > >      }
> > > > > > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > > > > index 669f2ee395154..ab010c8e32816 100644
> > > > > > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > > > > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > > > > @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
> > > > > >              return -ENOENT;
> > > > > >      if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> > > > > > -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
> > > > > > +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
> > > > > >      } else {
> > > > > > -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
> > > > > > -                                           timeout);
> > > > > > +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
> > > > > > +                                                timeout);
> > > > > >      }
> > > > > >      if (ret == 0)
> > > > > >              ret = -EBUSY;
> > > > > > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > > > > > index 04dd49c4c2572..19e1ce23842a9 100644
> > > > > > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > > > > > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> > > > > > @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
> > > > > >      if (flags & drm_vmw_synccpu_allow_cs) {
> > > > > >              long lret;
> > > > > > -           lret = dma_resv_wait_timeout_rcu
> > > > > > +           lret = dma_resv_wait_timeout_unlocked
> > > > > >                      (bo->base.resv, true, true,
> > > > > >                       nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
> > > > > >              if (!lret)
> > > > > > diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> > > > > > index d44a77e8a7e34..99cfb7af966b8 100644
> > > > > > --- a/include/linux/dma-resv.h
> > > > > > +++ b/include/linux/dma-resv.h
> > > > > > @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> > > > > >     }
> > > > > >     /**
> > > > > > - * dma_resv_get_excl_rcu - get the reservation object's
> > > > > > + * dma_resv_get_excl_unlocked - get the reservation object's
> > > > > >      * exclusive fence, without lock held.
> > > > > >      * @obj: the reservation object
> > > > > >      *
> > > > > > @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
> > > > > >      * The exclusive fence or NULL if none
> > > > > >      */
> > > > > >     static inline struct dma_fence *
> > > > > > -dma_resv_get_excl_rcu(struct dma_resv *obj)
> > > > > > +dma_resv_get_excl_unlocked(struct dma_resv *obj)
> > > > > >     {
> > > > > >      struct dma_fence *fence;
> > > > > > @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> > > > > >     void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> > > > > > -int dma_resv_get_fences_rcu(struct dma_resv *obj,
> > > > > > -                       struct dma_fence **pfence_excl,
> > > > > > -                       unsigned *pshared_count,
> > > > > > -                       struct dma_fence ***pshared);
> > > > > > +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
> > > > > > +                            struct dma_fence **pfence_excl,
> > > > > > +                            unsigned *pshared_count,
> > > > > > +                            struct dma_fence ***pshared);
> > > > > >     int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> > > > > > -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
> > > > > > -                          unsigned long timeout);
> > > > > > +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
> > > > > > +                               unsigned long timeout);
> > > > > > -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
> > > > > > +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
> > > > > >     #endif /* _LINUX_RESERVATION_H */
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-06-01 14:34               ` [Intel-gfx] " Daniel Vetter
@ 2021-06-01 17:27                 ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-06-01 17:27 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, Jason Ekstrand, intel-gfx, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Sean Paul

On 01.06.21 at 16:34, Daniel Vetter wrote:
> On Thu, May 27, 2021 at 03:41:02PM +0200, Christian König wrote:
>> On 27.05.21 at 15:25, Daniel Vetter wrote:
>>> On Thu, May 27, 2021 at 1:59 PM Christian König
>>> <christian.koenig@amd.com> wrote:
>>>> On 27.05.21 at 12:39, Daniel Vetter wrote:
>>>>> On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
>>>>>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>>>>>> None of these helpers actually leak any RCU details to the caller.  They
>>>>>>> all assume you have a genuine reference, take the RCU read lock, and
>>>>>>> retry if needed.  Naming them with an _rcu is likely to cause callers
>>>>>>> more panic than needed.
>>>>>> I'm really wondering if we need this postfix in the first place.
>>>>>>
>>>>>> If we use the right rcu_dereference_check() macro, then those functions can
>>>>>> be called with the reservation object either locked or unlocked. It shouldn't
>>>>>> matter to them.
>>>>>>
>>>>>> But getting rid of the _rcu postfix sounds like a good idea in general to
>>>>>> me.
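
(As a side note, a minimal sketch of what the rcu_dereference_check()
idea could look like for the exclusive fence -- the helper name here is
made up, and dma_resv_held() is assumed to be the usual lockdep
expression for the reservation lock:)

    static inline struct dma_fence *
    dma_resv_excl_fence_deref(struct dma_resv *obj)
    {
            /* Valid to call either under rcu_read_lock() or with the
             * reservation lock held; lockdep accepts both. */
            return rcu_dereference_check(obj->fence_excl,
                                         dma_resv_held(obj));
    }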
>>>>> So does that count as an ack or not? If yes, I think we should land this
>>>>> patch right away, since it's going to start conflicting badly very fast.
>>>> I had some follow-up discussion with Jason, and I would rather like to
>>>> switch to using rcu_dereference_check() in all places and completely
>>>> remove the _rcu postfix.
>>> Hm, I'm not sure whether spreading _rcu tricks further is an
>>> especially bright idea. At least i915 is full of very clever _rcu
>>> tricks, and encouraging drivers to roll out their own _rcu everywhere
>>> is probably not in our best interest. Some fast-path checking is imo
>>> ok, but that's it. Especially once we get into the entire
>>> SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.
>> Oh yes, completely agree. SLAB_TYPESAFE_BY_RCU is optimizing for the
>> wrong use case, I think.
>>
>> You save a bit of overhead while freeing fences, but in return you have
>> extra overhead while adding fences to the dma_resv object.
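
(For readers following along, the reader-side cost being traded off
looks roughly like the usual SLAB_TYPESAFE_BY_RCU pattern sketched
below; this illustrates the technique and is not a quote of the actual
dma-fence or i915 code:)

    struct dma_fence *fence;

    rcu_read_lock();
    fence = rcu_dereference(resv->fence_excl);
    /* With SLAB_TYPESAFE_BY_RCU the memory may already have been
     * recycled as a new fence, so taking a plain reference is not
     * enough; revalidate via kref_get_unless_zero() and retry on
     * failure.  A full version also rechecks that the pointer still
     * refers to the same fence, cf. dma_fence_get_rcu_safe(). */
    if (fence && !kref_get_unless_zero(&fence->refcount))
            fence = NULL;
    rcu_read_unlock();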
> Getting way off topic, but I'm wondering whether the entire rcu business
> is really worth it for dma_fence.
>
> Mostly we manipulate dma_resv while holding the dma_resv lock anyway. There
> are maybe a few waits and such, but I'm not sure whether the dma_resv_lock +
> dma_fence_get + dma_resv_unlock + dma_fence_put sequence really matters. And
> if you have lock contention on a single buffer, you've lost anyway.
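
(Spelled out, the locked pattern in question is just the obvious
sequence below -- a sketch for comparison with the RCU fast path, not
any particular driver's code:)

    struct dma_fence *fence;

    dma_resv_lock(obj, NULL);
    fence = dma_fence_get(dma_resv_get_excl(obj));
    dma_resv_unlock(obj);

    /* ... CPU wait on or inspect the fence ... */
    dma_fence_put(fence);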
>
> At that point I think we maybe have some lockless tricks in the evict
> code, but then again, once you're evicting, things are probably going
> pretty badly already.
>
> For i915, I want to analyze whether SLAB_TYPESAFE_BY_RCU is really worth it
> and was ever justified, or whether we should drop it. But I'm also wondering
> whether we should drop rcu for fences outright. That would be quite some
> audit to figure out where it's all used.
>
> From the i915 side, we've done these lockless tricks back when
> dev->struct_mutex was a thing and always contended. But with per-object
> locking now happening for real with dma-resv, that's probably no longer
> justified.
>
> But then, looking at the git history, the rcu in dma_resv is older than
> that and was justified with ttm.

I'm scratching my head over when and why TTM should ever have needed a
lockless operation back when that was added. We do have some now, but
only because they were available.

On the other hand, I'm pretty sure that we can make the whole RCU
handling in the dma_resv object much less painful. The basic problem here
is that we have two pointers instead of one, i.e. the exclusive fence and
the shared fence list.

If we could move the exclusive fence pointer into the shared fence list,
most of the trouble would suddenly go away.
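
Very roughly, and with completely made-up field names, the layout could
look something like this:

    /* One combined, RCU-managed fence list instead of a separate
     * exclusive pointer plus shared array (sketch only): */
    struct dma_resv_fences {
            struct rcu_head rcu;
            u32 count;
            u32 excl_idx;   /* slot of the exclusive fence, or ~0u */
            struct dma_fence __rcu *fences[];
    };

Readers would then only have to sample a single __rcu pointer to get a
consistent snapshot, instead of two pointers plus a sequence count.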

The other thing we should certainly have is more use-case-based iterators,
e.g. something like dma_resv_for_each_sync_fence(...) {...}; a strawman
sketch follows below.
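
Purely as a strawman -- the macro name is from above, and everything
else, including the begin/next helpers, is invented:

    struct dma_resv_iter {
            struct dma_resv *obj;
            unsigned int index;
    };

    /* dma_resv_iter_begin()/_next() would hide the RCU vs. locked
     * access and any restart-on-concurrent-update logic internally. */
    #define dma_resv_for_each_sync_fence(iter, obj, fence)          \
            for (dma_resv_iter_begin(iter, obj);                    \
                 ((fence) = dma_resv_iter_next(iter)) != NULL;)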

Regards,
Christian.

>
>>> That's why I'm slightly leaning towards _unlocked variants, except we
>>> do use those in lots of places where we hold dma_resv_lock too. So not
>>> sure what's the best plan overall here.
>> Well, what function names are we actually talking about?
>>
>> For the dma_resv_get_excl_rcu() case I agree we should probably rename it
>> to dma_resv_get_excl_unlocked(), because it makes no sense at all to use
>> this function while holding the lock.
>>
>> But for the following functions:
>> dma_resv_get_fences_rcu
>> dma_resv_wait_timeout_rcu
>> dma_resv_test_signaled_rcu
>>
>> I think we should just drop the _rcu naming, because those are supposed to
>> work independently of whether the resv lock is held.
> Ack on all naming.
> -Daniel
>
>> Regards,
>> Christian.
>>
>>> -Daniel
>>>
>>>> But yes I see the pain of rebasing this as well.
>>>>
>>>> Christian.
>>>>
>>>>> -Daniel
>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> v2 (Jason Ekstrand):
>>>>>>>      - Fix function argument indentation
>>>>>>>
>>>>>>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
>>>>>>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>>>>> Cc: Maxime Ripard <mripard@kernel.org>
>>>>>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>>>>>> Cc: Rob Clark <robdclark@gmail.com>
>>>>>>> Cc: Sean Paul <sean@poorly.run>
>>>>>>> Cc: Huang Rui <ray.huang@amd.com>
>>>>>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>>>>>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
>>>>>>> ---
>>>>>>>      drivers/dma-buf/dma-buf.c                     |  4 +--
>>>>>>>      drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>>>>>>>      .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>>>>>>>      drivers/gpu/drm/drm_gem.c                     | 10 +++----
>>>>>>>      drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>>>>>>>      drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>>>>>>>      drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>>>>>>>      drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>>>>>>>      drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>>>>>>>      .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>>>>>>>      drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>>>>>>>      drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>>>>>>>      drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>>>>>>>      drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>>>>>>>      drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>>>>>>>      drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>>>>>>>      drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>>>>>>>      drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>>>>>>>      drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>>>>>>>      drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>>>>>>>      drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>>>>>>>      drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>>>>>>>      drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>>>>>>>      include/linux/dma-resv.h                      | 18 ++++++------
>>>>>>>      36 files changed, 108 insertions(+), 110 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>>>>> index f264b70c383eb..ed6451d55d663 100644
>>>>>>> --- a/drivers/dma-buf/dma-buf.c
>>>>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>>>>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>>>>>       long ret;
>>>>>>>       /* Wait on any implicit rendering fences */
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
>>>>>>> -                                             MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
>>>>>>> +                                        MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (ret < 0)
>>>>>>>               return ret;
>>>>>>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
>>>>>>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
>>>>>>> --- a/drivers/dma-buf/dma-resv.c
>>>>>>> +++ b/drivers/dma-buf/dma-resv.c
>>>>>>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>>>>>>>      EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>>>      /**
>>>>>>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
>>>>>>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>>>>>>>       * fences without update side lock held
>>>>>>>       * @obj: the reservation object
>>>>>>>       * @pfence_excl: the returned exclusive fence (or NULL)
>>>>>>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>>>       * exclusive fence is not specified the fence is put into the array of the
>>>>>>>       * shared fences as well. Returns either zero or -ENOMEM.
>>>>>>>       */
>>>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>>> -                       struct dma_fence **pfence_excl,
>>>>>>> -                       unsigned *pshared_count,
>>>>>>> -                       struct dma_fence ***pshared)
>>>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>>>> +                            struct dma_fence **pfence_excl,
>>>>>>> +                            unsigned *pshared_count,
>>>>>>> +                            struct dma_fence ***pshared)
>>>>>>>      {
>>>>>>>       struct dma_fence **shared = NULL;
>>>>>>>       struct dma_fence *fence_excl;
>>>>>>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>>>       *pshared = shared;
>>>>>>>       return ret;
>>>>>>>      }
>>>>>>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>>>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>>>>>>>      /**
>>>>>>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
>>>>>>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>>>>>>>       * shared and/or exclusive fences.
>>>>>>>       * @obj: the reservation object
>>>>>>>       * @wait_all: if true, wait on all fences, else wait on just exclusive fence
>>>>>>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>>>>       * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>>>>>>>       * greater than zer on success.
>>>>>>>       */
>>>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>>>> -                          bool wait_all, bool intr,
>>>>>>> -                          unsigned long timeout)
>>>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
>>>>>>> +                               bool wait_all, bool intr,
>>>>>>> +                               unsigned long timeout)
>>>>>>>      {
>>>>>>>       struct dma_fence *fence;
>>>>>>>       unsigned seq, shared_count;
>>>>>>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>>>>       rcu_read_unlock();
>>>>>>>       goto retry;
>>>>>>>      }
>>>>>>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
>>>>>>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>>>>>>>      static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>>>      }
>>>>>>>      /**
>>>>>>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
>>>>>>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>>>>>>>       * fences have been signaled.
>>>>>>>       * @obj: the reservation object
>>>>>>>       * @test_all: if true, test all fences, otherwise only test the exclusive
>>>>>>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>>>       * RETURNS
>>>>>>>       * true if all fences signaled, else false
>>>>>>>       */
>>>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>>>>>>>      {
>>>>>>>       unsigned seq, shared_count;
>>>>>>>       int ret;
>>>>>>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>>>>       rcu_read_unlock();
>>>>>>>       return ret;
>>>>>>>      }
>>>>>>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
>>>>>>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>>>> index 8a1fb8b6606e5..b8e24f199be9a 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>>>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>>>>>>>               goto unpin;
>>>>>>>       }
>>>>>>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
>>>>>>> -                                         &work->shared_count,
>>>>>>> -                                         &work->shared);
>>>>>>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
>>>>>>> +                                    &work->shared_count,
>>>>>>> +                                    &work->shared);
>>>>>>>       if (unlikely(r != 0)) {
>>>>>>>               DRM_ERROR("failed to get fences for buffer\n");
>>>>>>>               goto unpin;
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>>>> index baa980a477d94..0d0319bc51577 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>>>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>>>>>>>       if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>>>>>>>               return 0;
>>>>>>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
>>>>>>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>>>>>>>       if (r)
>>>>>>>               return r;
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>>>> index 18974bd081f00..8e2996d6ba3ad 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>>>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>>>               return -ENOENT;
>>>>>>>       }
>>>>>>>       robj = gem_to_amdgpu_bo(gobj);
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
>>>>>>> -                                             timeout);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
>>>>>>> +                                        timeout);
>>>>>>>       /* ret == 0 means not signaled,
>>>>>>>        * ret > 0 means signaled
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>>>> index b4971e90b98cf..38e1b32dd2cef 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>>>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>>>       unsigned count;
>>>>>>>       int r;
>>>>>>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
>>>>>>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>>>>>>>       if (r)
>>>>>>>               goto fallback;
>>>>>>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>>>       /* Not enough memory for the delayed delete, as last resort
>>>>>>>        * block for all the fences to complete.
>>>>>>>        */
>>>>>>> -   dma_resv_wait_timeout_rcu(resv, true, false,
>>>>>>> -                                       MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>>>> +                                  MAX_SCHEDULE_TIMEOUT);
>>>>>>>       amdgpu_pasid_free(pasid);
>>>>>>>      }
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>>>> index 828b5167ff128..0319c8b547c48 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>>>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>>>>>>>       mmu_interval_set_seq(mni, cur_seq);
>>>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       mutex_unlock(&adev->notifier_lock);
>>>>>>>       if (r <= 0)
>>>>>>>               DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>>>> index 0adffcace3263..de1c7c5501683 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>>>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>>>>>>>               return 0;
>>>>>>>       }
>>>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
>>>>>>> -                                           MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (r < 0)
>>>>>>>               return r;
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>>>> index c6dbc08016045..4a2196404fb69 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>>>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>>>>>       ib->length_dw = 16;
>>>>>>>       if (direct) {
>>>>>>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
>>>>>>> -                                                   true, false,
>>>>>>> -                                                   msecs_to_jiffies(10));
>>>>>>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
>>>>>>> +                                              true, false,
>>>>>>> +                                              msecs_to_jiffies(10));
>>>>>>>               if (r == 0)
>>>>>>>                       r = -ETIMEDOUT;
>>>>>>>               if (r < 0)
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> index 4a3e3f72e1277..7ba1c537d6584 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>>>>       unsigned i, shared_count;
>>>>>>>       int r;
>>>>>>> -   r = dma_resv_get_fences_rcu(resv, &excl,
>>>>>>> -                                         &shared_count, &shared);
>>>>>>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
>>>>>>> +                                    &shared_count, &shared);
>>>>>>>       if (r) {
>>>>>>>               /* Not enough memory to grab the fence list, as last resort
>>>>>>>                * block for all the fences to complete.
>>>>>>>                */
>>>>>>> -           dma_resv_wait_timeout_rcu(resv, true, false,
>>>>>>> -                                               MAX_SCHEDULE_TIMEOUT);
>>>>>>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>>>> +                                          MAX_SCHEDULE_TIMEOUT);
>>>>>>>               return;
>>>>>>>       }
>>>>>>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>>>>>>>               return true;
>>>>>>>       /* Don't evict VM page tables while they are busy */
>>>>>>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
>>>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>>>>>>>               return false;
>>>>>>>       /* Try to block ongoing updates */
>>>>>>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>>>>>>>       */
>>>>>>>      long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>>>>>>>      {
>>>>>>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
>>>>>>> -                                       true, true, timeout);
>>>>>>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
>>>>>>> +                                            true, true, timeout);
>>>>>>>       if (timeout <= 0)
>>>>>>>               return timeout;
>>>>>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>>>> index 9ca517b658546..0121d2817fa26 100644
>>>>>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>>>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>>>>>>>                * deadlock during GPU reset when this fence will not signal
>>>>>>>                * but we hold reservation lock for the BO.
>>>>>>>                */
>>>>>>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
>>>>>>> -                                                   false,
>>>>>>> -                                                   msecs_to_jiffies(5000));
>>>>>>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
>>>>>>> +                                              false,
>>>>>>> +                                              msecs_to_jiffies(5000));
>>>>>>>               if (unlikely(r <= 0))
>>>>>>>                       DRM_ERROR("Waiting for fences timed out!");
>>>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>>>> index 9989425e9875a..1241a421b9e81 100644
>>>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>>>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>>>>>>>               return -EINVAL;
>>>>>>>       }
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
>>>>>>> -                                             true, timeout);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
>>>>>>> +                                        true, timeout);
>>>>>>>       if (ret == 0)
>>>>>>>               ret = -ETIME;
>>>>>>>       else if (ret > 0)
>>>>>>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>>>>>>>       if (!write) {
>>>>>>>               struct dma_fence *fence =
>>>>>>> -                   dma_resv_get_excl_rcu(obj->resv);
>>>>>>> +                   dma_resv_get_excl_unlocked(obj->resv);
>>>>>>>               return drm_gem_fence_array_add(fence_array, fence);
>>>>>>>       }
>>>>>>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
>>>>>>> -                                           &fence_count, &fences);
>>>>>>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
>>>>>>> +                                      &fence_count, &fences);
>>>>>>>       if (ret || !fence_count)
>>>>>>>               return ret;
>>>>>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>>>> index a005c5a0ba46a..a27135084ae5c 100644
>>>>>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>>>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>>>>>               return 0;
>>>>>>>       obj = drm_gem_fb_get_obj(state->fb, 0);
>>>>>>> -   fence = dma_resv_get_excl_rcu(obj->resv);
>>>>>>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
>>>>>>>       drm_atomic_set_fence_for_plane(state, fence);
>>>>>>>       return 0;
>>>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>>>> index db69f19ab5bca..4e6f5346e84e4 100644
>>>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>>>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>>>>>>>       }
>>>>>>>       if (op & ETNA_PREP_NOSYNC) {
>>>>>>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
>>>>>>> -                                                     write))
>>>>>>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>>>>>>>                       return -EBUSY;
>>>>>>>       } else {
>>>>>>>               unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>>>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
>>>>>>> -                                                     write, true, remain);
>>>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
>>>>>>> +                                                write, true, remain);
>>>>>>>               if (ret <= 0)
>>>>>>>                       return ret == 0 ? -ETIMEDOUT : ret;
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>>>> index d05c359945799..6617fada4595d 100644
>>>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>>>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>>>>>>>                       continue;
>>>>>>>               if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
>>>>>>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
>>>>>>> -                                                           &bo->nr_shared,
>>>>>>> -                                                           &bo->shared);
>>>>>>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
>>>>>>> +                                                      &bo->nr_shared,
>>>>>>> +                                                      &bo->shared);
>>>>>>>                       if (ret)
>>>>>>>                               return ret;
>>>>>>>               } else {
>>>>>>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
>>>>>>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
>>>>>>>               }
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
>>>>>>> index 422b59ebf6dce..5f0b85a102159 100644
>>>>>>> --- a/drivers/gpu/drm/i915/display/intel_display.c
>>>>>>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
>>>>>>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>>>>>>>               if (ret < 0)
>>>>>>>                       goto unpin_fb;
>>>>>>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>               if (fence) {
>>>>>>>                       add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>>>>>>>                                                  fence);
>>>>>>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>>>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
>>>>>>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>>>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>>>> @@ -10,7 +10,7 @@
>>>>>>>      void dma_resv_prune(struct dma_resv *resv)
>>>>>>>      {
>>>>>>>       if (dma_resv_trylock(resv)) {
>>>>>>> -           if (dma_resv_test_signaled_rcu(resv, true))
>>>>>>> +           if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>>>                       dma_resv_add_excl_fence(resv, NULL);
>>>>>>>               dma_resv_unlock(resv);
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>>>> index 25235ef630c10..754ad6d1bace9 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>>>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>>>        * Alternatively, we can trade that extra information on read/write
>>>>>>>        * activity with
>>>>>>>        *      args->busy =
>>>>>>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
>>>>>>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>>>        * to report the overall busyness. This is what the wait-ioctl does.
>>>>>>>        *
>>>>>>>        */
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>>>> index 297143511f99b..e8f323564e57b 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>>>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>>>>>>>       if (DBG_FORCE_RELOC)
>>>>>>>               return false;
>>>>>>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
>>>>>>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
>>>>>>>      }
>>>>>>>      static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>>>> index 2ebd79537aea9..7c0eb425cb3b3 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>>>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>>>>>>>       struct dma_fence *fence;
>>>>>>>       rcu_read_lock();
>>>>>>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>       rcu_read_unlock();
>>>>>>>       if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>>>> index a657b99ec7606..44df18dc9669f 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>>>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>>>>>>               return true;
>>>>>>>       /* we will unbind on next submission, still have userptr pins */
>>>>>>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (r <= 0)
>>>>>>>               drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>>>> index 4b9856d5ba14f..5b6c52659ad4d 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>>>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>>>               unsigned int count, i;
>>>>>>>               int ret;
>>>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>>>                */
>>>>>>>               prune_fences = count && timeout >= 0;
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>>>       }
>>>>>>>       if (excl && timeout >= 0)
>>>>>>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>>>               unsigned int count, i;
>>>>>>>               int ret;
>>>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>>>> -                                         &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>>>> +                                              &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>>>               kfree(shared);
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>       }
>>>>>>>       if (excl) {
>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>>>>>> index 970d8f4986bbe..f1ed03ced7dd1 100644
>>>>>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>>>>>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>>>>>>>               struct dma_fence **shared;
>>>>>>>               unsigned int count, i;
>>>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>>>> -                                                   &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>>>> +                                              &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>>>>>>>                       dma_fence_put(shared[i]);
>>>>>>>               kfree(shared);
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>       }
>>>>>>>       if (excl) {
>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>>>> index 2744558f30507..0bcb7ea44201e 100644
>>>>>>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>>>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>>>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>>>               struct dma_fence **shared;
>>>>>>>               unsigned int count, i;
>>>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>>>                       dma_fence_put(shared[i]);
>>>>>>>               kfree(shared);
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>>>       }
>>>>>>>       if (ret >= 0 && excl && excl->ops != exclude) {
>>>>>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>>>>>> index 56df86e5f7400..1aca60507bb14 100644
>>>>>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>>>>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>>>>>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>>>>>>>               op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>>>>>>>       long ret;
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
>>>>>>> -                                             true,  remain);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>>>>>>>       if (ret == 0)
>>>>>>>               return remain == 0 ? -EBUSY : -ETIMEDOUT;
>>>>>>>       else if (ret < 0)
>>>>>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>>>> index 0cb1f9d848d3e..8d048bacd6f02 100644
>>>>>>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>>>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>>>>>                       asyw->image.handle[0] = ctxdma->object.handle;
>>>>>>>       }
>>>>>>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
>>>>>>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>>>>>>>       asyw->image.offset[0] = nvbo->offset;
>>>>>>>       if (wndw->func->prepare) {
>>>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>>>> index a70e82413fa75..bc6b09ee9b552 100644
>>>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>>>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>>>>>>>               return -ENOENT;
>>>>>>>       nvbo = nouveau_gem_object(gem);
>>>>>>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
>>>>>>> -                                              no_wait ? 0 : 30 * HZ);
>>>>>>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
>>>>>>> +                                         no_wait ? 0 : 30 * HZ);
>>>>>>>       if (!lret)
>>>>>>>               ret = -EBUSY;
>>>>>>>       else if (lret > 0)
>>>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>>>> index ca07098a61419..eef5b632ee0ce 100644
>>>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>>>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>>>>>>       if (!gem_obj)
>>>>>>>               return -ENOENT;
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
>>>>>>> -                                             true, timeout);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
>>>>>>> +                                        true, timeout);
>>>>>>>       if (!ret)
>>>>>>>               ret = timeout ? -ETIMEDOUT : -EBUSY;
>>>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> index 6003cfeb13221..2df3e999a38d0 100644
>>>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>>>>>>>       int i;
>>>>>>>       for (i = 0; i < bo_count; i++)
>>>>>>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
>>>>>>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>>>>>>>      }
>>>>>>>      static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>>>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> index 05ea2f39f6261..1a38b0bf36d11 100644
>>>>>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>>>>>>>       }
>>>>>>>       if (domain == RADEON_GEM_DOMAIN_CPU) {
>>>>>>>               /* Asking for cpu access wait for object idle */
>>>>>>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>>               if (!r)
>>>>>>>                       r = -EBUSY;
>>>>>>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>>>       }
>>>>>>>       robj = gem_to_radeon_bo(gobj);
>>>>>>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
>>>>>>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>>>>>>>       if (r == 0)
>>>>>>>               r = -EBUSY;
>>>>>>>       else
>>>>>>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>>>       }
>>>>>>>       robj = gem_to_radeon_bo(gobj);
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>>       if (ret == 0)
>>>>>>>               r = -EBUSY;
>>>>>>>       else if (ret < 0)
>>>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>>>> index e37c9a57a7c36..a19be3f8a218c 100644
>>>>>>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
>>>>>>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>>>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>>>>>>>               return true;
>>>>>>>       }
>>>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (r <= 0)
>>>>>>>               DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>>>> index ca1b098b6a561..215cad3149621 100644
>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>>>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>>>       struct dma_resv *resv = &bo->base._resv;
>>>>>>>       int ret;
>>>>>>> -   if (dma_resv_test_signaled_rcu(resv, true))
>>>>>>> +   if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>>>               ret = 0;
>>>>>>>       else
>>>>>>>               ret = -EBUSY;
>>>>>>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>>>                       dma_resv_unlock(bo->base.resv);
>>>>>>>               spin_unlock(&bo->bdev->lru_lock);
>>>>>>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
>>>>>>> -                                            30 * HZ);
>>>>>>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
>>>>>>> +                                                 30 * HZ);
>>>>>>>               if (lret < 0)
>>>>>>>                       return lret;
>>>>>>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>>>>>>>                       /* Last resort, if we fail to allocate memory for the
>>>>>>>                        * fences block for the BO to become idle
>>>>>>>                        */
>>>>>>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
>>>>>>> -                                             30 * HZ);
>>>>>>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
>>>>>>> +                                                  30 * HZ);
>>>>>>>               }
>>>>>>>               if (bo->bdev->funcs->release_notify)
>>>>>>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>>>>>>>               ttm_mem_io_free(bdev, &bo->mem);
>>>>>>>       }
>>>>>>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
>>>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>>>>>>>           !dma_resv_trylock(bo->base.resv)) {
>>>>>>>               /* The BO is not idle, resurrect it for delayed destroy */
>>>>>>>               ttm_bo_flush_all_fences(bo);
>>>>>>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>>>>>>>       long timeout = 15 * HZ;
>>>>>>>       if (no_wait) {
>>>>>>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
>>>>>>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>>>>>>>                       return 0;
>>>>>>>               else
>>>>>>>                       return -EBUSY;
>>>>>>>       }
>>>>>>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
>>>>>>> -                                                 interruptible, timeout);
>>>>>>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
>>>>>>> +                                            interruptible, timeout);
>>>>>>>       if (timeout < 0)
>>>>>>>               return timeout;
>>>>>>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>>>> index 2902dc6e64faf..010a82405e374 100644
>>>>>>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
>>>>>>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>>>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>>>>>>>       /* Check for a conflicting fence */
>>>>>>>       resv = obj->resv;
>>>>>>> -   if (!dma_resv_test_signaled_rcu(resv,
>>>>>>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
>>>>>>> +   if (!dma_resv_test_signaled_unlocked(resv,
>>>>>>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
>>>>>>>               ret = -EBUSY;
>>>>>>>               goto err_fence;
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>>>> index 669f2ee395154..ab010c8e32816 100644
>>>>>>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>>>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>>>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>>>>>>>               return -ENOENT;
>>>>>>>       if (args->flags & VIRTGPU_WAIT_NOWAIT) {
>>>>>>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
>>>>>>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>>>       } else {
>>>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
>>>>>>> -                                           timeout);
>>>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
>>>>>>> +                                                timeout);
>>>>>>>       }
>>>>>>>       if (ret == 0)
>>>>>>>               ret = -EBUSY;
>>>>>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>>>> index 04dd49c4c2572..19e1ce23842a9 100644
>>>>>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>>>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>>>>>>>       if (flags & drm_vmw_synccpu_allow_cs) {
>>>>>>>               long lret;
>>>>>>> -           lret = dma_resv_wait_timeout_rcu
>>>>>>> +           lret = dma_resv_wait_timeout_unlocked
>>>>>>>                       (bo->base.resv, true, true,
>>>>>>>                        nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>>>>>>>               if (!lret)
>>>>>>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>>>>>>> index d44a77e8a7e34..99cfb7af966b8 100644
>>>>>>> --- a/include/linux/dma-resv.h
>>>>>>> +++ b/include/linux/dma-resv.h
>>>>>>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>>>      }
>>>>>>>      /**
>>>>>>> - * dma_resv_get_excl_rcu - get the reservation object's
>>>>>>> + * dma_resv_get_excl_unlocked - get the reservation object's
>>>>>>>       * exclusive fence, without lock held.
>>>>>>>       * @obj: the reservation object
>>>>>>>       *
>>>>>>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>>>       * The exclusive fence or NULL if none
>>>>>>>       */
>>>>>>>      static inline struct dma_fence *
>>>>>>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
>>>>>>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>>>>>>>      {
>>>>>>>       struct dma_fence *fence;
>>>>>>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>>>>      void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>>> -                       struct dma_fence **pfence_excl,
>>>>>>> -                       unsigned *pshared_count,
>>>>>>> -                       struct dma_fence ***pshared);
>>>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>>>> +                            struct dma_fence **pfence_excl,
>>>>>>> +                            unsigned *pshared_count,
>>>>>>> +                            struct dma_fence ***pshared);
>>>>>>>      int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>>>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
>>>>>>> -                          unsigned long timeout);
>>>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
>>>>>>> +                               unsigned long timeout);
>>>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
>>>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>>>>>>>      #endif /* _LINUX_RESERVATION_H */


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [Intel-gfx] [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
@ 2021-06-01 17:27                 ` Christian König
  0 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-06-01 17:27 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, intel-gfx, Maxime Ripard, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Lucas Stach

On 01.06.21 at 16:34, Daniel Vetter wrote:
> On Thu, May 27, 2021 at 03:41:02PM +0200, Christian König wrote:
>> On 27.05.21 at 15:25, Daniel Vetter wrote:
>>> On Thu, May 27, 2021 at 1:59 PM Christian König
>>> <christian.koenig@amd.com> wrote:
>>>> On 27.05.21 at 12:39, Daniel Vetter wrote:
>>>>> On Wed, May 26, 2021 at 12:57:40PM +0200, Christian König wrote:
>>>>> On 25.05.21 at 23:17, Jason Ekstrand wrote:
>>>>>>> None of these helpers actually leak any RCU details to the caller.  They
>>>>>>> all assume you have a genuine reference, take the RCU read lock, and
>>>>>>> retry if needed.  Naming them with an _rcu is likely to cause callers
>>>>>>> more panic than needed.
>>>>>> I'm really wondering if we need this postfix in the first place.
>>>>>>
>>>>>> If we use the right rcu_dereference_check() macro, those functions can
>>>>>> be called with the reservation object either locked or unlocked. It
>>>>>> shouldn't matter to them.
>>>>>>
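To make that concrete, here is a minimal sketch of such an accessor. It assumes the dma_resv layout of this era, i.e. an RCU-protected fence_excl member and the dma_resv_held() lockdep helper; the function name is made up for the example:

	/*
	 * Valid both with the resv lock held and under the RCU read lock:
	 * rcu_dereference_check() only complains if neither protection is
	 * in place.
	 */
	static inline struct dma_fence *
	dma_resv_fetch_excl(struct dma_resv *obj)
	{
		return rcu_dereference_check(obj->fence_excl,
					     dma_resv_held(obj));
	}

With an accessor like this, a single name could serve both the locked and the unlocked call paths, which is the argument against keeping any postfix at all.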
>>>>>> But getting rid of the _rcu postfix sounds like a good idea in general to
>>>>>> me.
>>>>> So does that count as an ack or not? If yes, I think we should land this
>>>>> patch right away, since it's going to start conflicting badly very fast.
>>>> I had some follow-up discussion with Jason, and I would rather like to
>>>> switch to using rcu_dereference_check() in all places and completely
>>>> remove the _rcu postfix.
>>> Hm, I'm not sure whether spreading _rcu tricks further is an
>>> especially bright idea. At least i915 is full of very clever _rcu
>>> tricks, and encouraging drivers to roll out their own _rcu everywhere
>>> is probably not in our best interest. Some fast-path checking is imo
>>> ok, but that's it. Especially once we get into the entire
>>> SLAB_TYPESAFE_BY_RCU business it becomes really nasty really quickly.
>> Oh yes, completely agree. SLAB_TYPESAFE_BY_RCU is optimizing for the wrong
>> use case, I think.
>>
>> You save a bit of overhead while freeing fences, but in return you have
>> extra overhead while adding fences to the dma_resv object.
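For context on why SLAB_TYPESAFE_BY_RCU gets nasty: the slab allocator may recycle a fence's memory while a reader only holds the RCU read lock, so every lockless lookup has to take a reference and then re-check the pointer. A rough sketch of that reader-side loop, modelled on the kernel's dma_fence_get_rcu_safe() helper (simplified, illustration only):

	static struct dma_fence *
	get_fence_checked(struct dma_fence __rcu **fencep)
	{
		struct dma_fence *fence;

		rcu_read_lock();
		for (;;) {
			fence = rcu_dereference(*fencep);
			if (!fence)
				break;
			/* Fails if the fence is already being freed. */
			if (!dma_fence_get_rcu(fence))
				continue;
			/*
			 * The memory may have been recycled into a new fence
			 * in the meantime; only this re-check proves we took
			 * a reference on the fence still in the slot.
			 */
			if (fence == rcu_dereference(*fencep))
				break;
			dma_fence_put(fence);
		}
		rcu_read_unlock();
		return fence;
	}

That retry loop is the price readers pay for the cheaper frees.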
> Getting way off topic, but I'm wondering whether the entire rcu business
> is really worth it for dma_fence.
>
> Mostly we manipulate dma_resv while already holding the dma_resv lock
> anyway. There are maybe a few waits and such, but I'm not sure whether the
> dma_resv_lock + dma_fence_get + dma_resv_unlock + dma_fence_put sequence
> really matters. And if you have lock contention on a single buffer you've
> lost anyway.
>
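The fully locked sequence being weighed here looks roughly like this; dma_resv_get_excl() is the locked accessor from the header this patch touches, and both dma_fence_get() and dma_fence_put() tolerate NULL:

	struct dma_fence *excl;

	dma_resv_lock(obj, NULL);
	excl = dma_fence_get(dma_resv_get_excl(obj));
	dma_resv_unlock(obj);

	/* ... use the snapshot, no RCU and no retry loop ... */

	dma_fence_put(excl);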
> At that point I think we have maybe some lockless tricks in the evict
> code, but then again, once you're evicting, things are probably going
> pretty badly already.
>
> So for i915 I want to analyze whether SLAB_TYPESAFE_BY_RCU is really worth
> it and was ever justified, or whether we should drop it. But I'm also
> wondering whether we should drop RCU for fences outright. That would be
> quite an audit to check everywhere it's used.
>
>  From the i915 side, we did these lockless tricks back when
> dev->struct_mutex was a thing and always contended. But with per-object
> locking now happening for real with dma-resv, that's probably no longer
> justified.
>
> But then, looking at the git history, the RCU in dma_resv is older than
> that, and was justified with TTM.

I'm scratching my head over when and why TTM would ever have needed a 
lockless operation when that was added. We do have some now, but only 
because the helpers were available.

On the other hand, I'm pretty sure that we can make the whole RCU 
handling in the dma_resv object much less painful. The basic problem is 
that we have two pointers instead of one, i.e. the exclusive fence and 
the shared fences.

If we could move the exclusive fence pointer into the shared fence 
array, most of the trouble would suddenly go away.
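Sketching that idea with made-up names, a single RCU-managed array whose entries carry a usage flag would leave lockless readers with one pointer to sample instead of two (hypothetical layout, not the dma_resv of this series):

	struct dma_resv_fence_entry {
		struct dma_fence *fence;
		bool write;	/* replaces the separate exclusive slot */
	};

	struct dma_resv_list {
		struct rcu_head rcu;
		u32 num_fences;
		struct dma_resv_fence_entry entries[];
	};

	struct dma_resv {
		struct ww_mutex lock;
		/* One pointer to publish, one retry point for readers. */
		struct dma_resv_list __rcu *fences;
	};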

The other thing we should certainly have is more use-case-based 
iterators, e.g. something like dma_resv_for_each_sync_fence(...) {...}; 
a rough sketch of the idea follows.
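With invented names, a small cursor struct could hide the RCU dance and the restart-on-concurrent-update behind a for-style macro:

	struct dma_resv_cursor {
		struct dma_resv *obj;
		unsigned int index;
		struct dma_fence *fence;
	};

	struct dma_fence *dma_resv_cursor_first(struct dma_resv_cursor *c,
						struct dma_resv *obj);
	struct dma_fence *dma_resv_cursor_next(struct dma_resv_cursor *c);

	#define dma_resv_for_each_sync_fence(cursor, obj, fence)	\
		for (fence = dma_resv_cursor_first(cursor, obj);	\
		     fence; fence = dma_resv_cursor_next(cursor))

Callers would then loop over exactly the fences their use case needs without open-coding any of the locking rules.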

Regards,
Christian.

>
>>> That's why I'm slightly leaning towards _unlocked variants, except we
>>> do use those in lots of places where we hold dma_resv_lock too. So not
>>> sure what's the best plan overall here.
>> Well what function names are we actually talking about?
>>
>> For the dma_resv_get_excl_rcu() case I agree we should probably rename it to
>> dma_resv_get_excl_unlocked(), because it makes no sense at all to use this
>> function while holding the lock.
>>
>> But for the following functions:
>> dma_resv_get_fences_rcu
>> dma_resv_wait_timeout_rcu
>> dma_resv_test_signaled_rcu
>>
>> I think we should just drop the _rcu naming because those are supposed to
>> work independently of whether the resv lock is held.
> Ack on all naming.
> -Daniel
>
>> Regards,
>> Christian.
>>
>>> -Daniel
>>>
>>>> But yes I see the pain of rebasing this as well.
>>>>
>>>> Christian.
>>>>
>>>>> -Daniel
>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> v2 (Jason Ekstrand):
>>>>>>>      - Fix function argument indentation
>>>>>>>
>>>>>>> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
>>>>>>> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>>>>> Cc: Maxime Ripard <mripard@kernel.org>
>>>>>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>>>>>> Cc: Rob Clark <robdclark@gmail.com>
>>>>>>> Cc: Sean Paul <sean@poorly.run>
>>>>>>> Cc: Huang Rui <ray.huang@amd.com>
>>>>>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>>>>>> Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
>>>>>>> ---
>>>>>>>      drivers/dma-buf/dma-buf.c                     |  4 +--
>>>>>>>      drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
>>>>>>>      drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
>>>>>>>      .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
>>>>>>>      drivers/gpu/drm/drm_gem.c                     | 10 +++----
>>>>>>>      drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
>>>>>>>      drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
>>>>>>>      drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
>>>>>>>      drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
>>>>>>>      drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
>>>>>>>      .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
>>>>>>>      drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
>>>>>>>      drivers/gpu/drm/i915/i915_request.c           |  6 ++--
>>>>>>>      drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
>>>>>>>      drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
>>>>>>>      drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
>>>>>>>      drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
>>>>>>>      drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
>>>>>>>      drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
>>>>>>>      drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
>>>>>>>      drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
>>>>>>>      drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
>>>>>>>      drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
>>>>>>>      drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
>>>>>>>      drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
>>>>>>>      include/linux/dma-resv.h                      | 18 ++++++------
>>>>>>>      36 files changed, 108 insertions(+), 110 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>>>>> index f264b70c383eb..ed6451d55d663 100644
>>>>>>> --- a/drivers/dma-buf/dma-buf.c
>>>>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>>>>> @@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>>>>>       long ret;
>>>>>>>       /* Wait on any implicit rendering fences */
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(resv, write, true,
>>>>>>> -                                             MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(resv, write, true,
>>>>>>> +                                        MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (ret < 0)
>>>>>>>               return ret;
>>>>>>> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
>>>>>>> index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
>>>>>>> --- a/drivers/dma-buf/dma-resv.c
>>>>>>> +++ b/drivers/dma-buf/dma-resv.c
>>>>>>> @@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>>>>>>>      EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>>>      /**
>>>>>>> - * dma_resv_get_fences_rcu - Get an object's shared and exclusive
>>>>>>> + * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
>>>>>>>       * fences without update side lock held
>>>>>>>       * @obj: the reservation object
>>>>>>>       * @pfence_excl: the returned exclusive fence (or NULL)
>>>>>>> @@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
>>>>>>>       * exclusive fence is not specified the fence is put into the array of the
>>>>>>>       * shared fences as well. Returns either zero or -ENOMEM.
>>>>>>>       */
>>>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>>> -                       struct dma_fence **pfence_excl,
>>>>>>> -                       unsigned *pshared_count,
>>>>>>> -                       struct dma_fence ***pshared)
>>>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>>>> +                            struct dma_fence **pfence_excl,
>>>>>>> +                            unsigned *pshared_count,
>>>>>>> +                            struct dma_fence ***pshared)
>>>>>>>      {
>>>>>>>       struct dma_fence **shared = NULL;
>>>>>>>       struct dma_fence *fence_excl;
>>>>>>> @@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>>>       *pshared = shared;
>>>>>>>       return ret;
>>>>>>>      }
>>>>>>> -EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>>>> +EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
>>>>>>>      /**
>>>>>>> - * dma_resv_wait_timeout_rcu - Wait on reservation's objects
>>>>>>> + * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
>>>>>>>       * shared and/or exclusive fences.
>>>>>>>       * @obj: the reservation object
>>>>>>>       * @wait_all: if true, wait on all fences, else wait on just exclusive fence
>>>>>>> @@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
>>>>>>>       * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
>>>>>>>       * greater than zer on success.
>>>>>>>       */
>>>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>>>> -                          bool wait_all, bool intr,
>>>>>>> -                          unsigned long timeout)
>>>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
>>>>>>> +                               bool wait_all, bool intr,
>>>>>>> +                               unsigned long timeout)
>>>>>>>      {
>>>>>>>       struct dma_fence *fence;
>>>>>>>       unsigned seq, shared_count;
>>>>>>> @@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
>>>>>>>       rcu_read_unlock();
>>>>>>>       goto retry;
>>>>>>>      }
>>>>>>> -EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
>>>>>>> +EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
>>>>>>>      static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>>> @@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>>>      }
>>>>>>>      /**
>>>>>>> - * dma_resv_test_signaled_rcu - Test if a reservation object's
>>>>>>> + * dma_resv_test_signaled_unlocked - Test if a reservation object's
>>>>>>>       * fences have been signaled.
>>>>>>>       * @obj: the reservation object
>>>>>>>       * @test_all: if true, test all fences, otherwise only test the exclusive
>>>>>>> @@ -631,7 +631,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>>>>>>>       * RETURNS
>>>>>>>       * true if all fences signaled, else false
>>>>>>>       */
>>>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
>>>>>>>      {
>>>>>>>       unsigned seq, shared_count;
>>>>>>>       int ret;
>>>>>>> @@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
>>>>>>>       rcu_read_unlock();
>>>>>>>       return ret;
>>>>>>>      }
>>>>>>> -EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
>>>>>>> +EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>>>> index 8a1fb8b6606e5..b8e24f199be9a 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
>>>>>>> @@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
>>>>>>>               goto unpin;
>>>>>>>       }
>>>>>>> -   r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
>>>>>>> -                                         &work->shared_count,
>>>>>>> -                                         &work->shared);
>>>>>>> +   r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
>>>>>>> +                                    &work->shared_count,
>>>>>>> +                                    &work->shared);
>>>>>>>       if (unlikely(r != 0)) {
>>>>>>>               DRM_ERROR("failed to get fences for buffer\n");
>>>>>>>               goto unpin;
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>>>> index baa980a477d94..0d0319bc51577 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>>>>>>> @@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
>>>>>>>       if (!dma_resv_get_list(obj)) /* no shared fences to convert */
>>>>>>>               return 0;
>>>>>>> -   r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
>>>>>>> +   r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
>>>>>>>       if (r)
>>>>>>>               return r;
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>>>> index 18974bd081f00..8e2996d6ba3ad 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>>>>>> @@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>>>               return -ENOENT;
>>>>>>>       }
>>>>>>>       robj = gem_to_amdgpu_bo(gobj);
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
>>>>>>> -                                             timeout);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
>>>>>>> +                                        timeout);
>>>>>>>       /* ret == 0 means not signaled,
>>>>>>>        * ret > 0 means signaled
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>>>> index b4971e90b98cf..38e1b32dd2cef 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
>>>>>>> @@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>>>       unsigned count;
>>>>>>>       int r;
>>>>>>> -   r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
>>>>>>> +   r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
>>>>>>>       if (r)
>>>>>>>               goto fallback;
>>>>>>> @@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
>>>>>>>       /* Not enough memory for the delayed delete, as last resort
>>>>>>>        * block for all the fences to complete.
>>>>>>>        */
>>>>>>> -   dma_resv_wait_timeout_rcu(resv, true, false,
>>>>>>> -                                       MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>>>> +                                  MAX_SCHEDULE_TIMEOUT);
>>>>>>>       amdgpu_pasid_free(pasid);
>>>>>>>      }
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>>>> index 828b5167ff128..0319c8b547c48 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
>>>>>>> @@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
>>>>>>>       mmu_interval_set_seq(mni, cur_seq);
>>>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       mutex_unlock(&adev->notifier_lock);
>>>>>>>       if (r <= 0)
>>>>>>>               DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>>>> index 0adffcace3263..de1c7c5501683 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>>>>>> @@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
>>>>>>>               return 0;
>>>>>>>       }
>>>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
>>>>>>> -                                           MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (r < 0)
>>>>>>>               return r;
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>>>> index c6dbc08016045..4a2196404fb69 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>>>>>> @@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>>>>>       ib->length_dw = 16;
>>>>>>>       if (direct) {
>>>>>>> -           r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
>>>>>>> -                                                   true, false,
>>>>>>> -                                                   msecs_to_jiffies(10));
>>>>>>> +           r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
>>>>>>> +                                              true, false,
>>>>>>> +                                              msecs_to_jiffies(10));
>>>>>>>               if (r == 0)
>>>>>>>                       r = -ETIMEDOUT;
>>>>>>>               if (r < 0)
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> index 4a3e3f72e1277..7ba1c537d6584 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> @@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>>>>       unsigned i, shared_count;
>>>>>>>       int r;
>>>>>>> -   r = dma_resv_get_fences_rcu(resv, &excl,
>>>>>>> -                                         &shared_count, &shared);
>>>>>>> +   r = dma_resv_get_fences_unlocked(resv, &excl,
>>>>>>> +                                    &shared_count, &shared);
>>>>>>>       if (r) {
>>>>>>>               /* Not enough memory to grab the fence list, as last resort
>>>>>>>                * block for all the fences to complete.
>>>>>>>                */
>>>>>>> -           dma_resv_wait_timeout_rcu(resv, true, false,
>>>>>>> -                                               MAX_SCHEDULE_TIMEOUT);
>>>>>>> +           dma_resv_wait_timeout_unlocked(resv, true, false,
>>>>>>> +                                          MAX_SCHEDULE_TIMEOUT);
>>>>>>>               return;
>>>>>>>       }
>>>>>>> @@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
>>>>>>>               return true;
>>>>>>>       /* Don't evict VM page tables while they are busy */
>>>>>>> -   if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
>>>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
>>>>>>>               return false;
>>>>>>>       /* Try to block ongoing updates */
>>>>>>> @@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>>>>>>>       */
>>>>>>>      long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
>>>>>>>      {
>>>>>>> -   timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
>>>>>>> -                                       true, true, timeout);
>>>>>>> +   timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
>>>>>>> +                                            true, true, timeout);
>>>>>>>       if (timeout <= 0)
>>>>>>>               return timeout;
>>>>>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>>>> index 9ca517b658546..0121d2817fa26 100644
>>>>>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>>>>>> @@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
>>>>>>>                * deadlock during GPU reset when this fence will not signal
>>>>>>>                * but we hold reservation lock for the BO.
>>>>>>>                */
>>>>>>> -           r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
>>>>>>> -                                                   false,
>>>>>>> -                                                   msecs_to_jiffies(5000));
>>>>>>> +           r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
>>>>>>> +                                              false,
>>>>>>> +                                              msecs_to_jiffies(5000));
>>>>>>>               if (unlikely(r <= 0))
>>>>>>>                       DRM_ERROR("Waiting for fences timed out!");
>>>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>>>> index 9989425e9875a..1241a421b9e81 100644
>>>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>>>> @@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
>>>>>>>               return -EINVAL;
>>>>>>>       }
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
>>>>>>> -                                             true, timeout);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
>>>>>>> +                                        true, timeout);
>>>>>>>       if (ret == 0)
>>>>>>>               ret = -ETIME;
>>>>>>>       else if (ret > 0)
>>>>>>> @@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
>>>>>>>       if (!write) {
>>>>>>>               struct dma_fence *fence =
>>>>>>> -                   dma_resv_get_excl_rcu(obj->resv);
>>>>>>> +                   dma_resv_get_excl_unlocked(obj->resv);
>>>>>>>               return drm_gem_fence_array_add(fence_array, fence);
>>>>>>>       }
>>>>>>> -   ret = dma_resv_get_fences_rcu(obj->resv, NULL,
>>>>>>> -                                           &fence_count, &fences);
>>>>>>> +   ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
>>>>>>> +                                      &fence_count, &fences);
>>>>>>>       if (ret || !fence_count)
>>>>>>>               return ret;
>>>>>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>>>> index a005c5a0ba46a..a27135084ae5c 100644
>>>>>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>>>>> @@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>>>>>               return 0;
>>>>>>>       obj = drm_gem_fb_get_obj(state->fb, 0);
>>>>>>> -   fence = dma_resv_get_excl_rcu(obj->resv);
>>>>>>> +   fence = dma_resv_get_excl_unlocked(obj->resv);
>>>>>>>       drm_atomic_set_fence_for_plane(state, fence);
>>>>>>>       return 0;
>>>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>>>> index db69f19ab5bca..4e6f5346e84e4 100644
>>>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
>>>>>>> @@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>>>>>>>       }
>>>>>>>       if (op & ETNA_PREP_NOSYNC) {
>>>>>>> -           if (!dma_resv_test_signaled_rcu(obj->resv,
>>>>>>> -                                                     write))
>>>>>>> +           if (!dma_resv_test_signaled_unlocked(obj->resv, write))
>>>>>>>                       return -EBUSY;
>>>>>>>       } else {
>>>>>>>               unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
>>>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv,
>>>>>>> -                                                     write, true, remain);
>>>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv,
>>>>>>> +                                                write, true, remain);
>>>>>>>               if (ret <= 0)
>>>>>>>                       return ret == 0 ? -ETIMEDOUT : ret;
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>>>> index d05c359945799..6617fada4595d 100644
>>>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
>>>>>>> @@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
>>>>>>>                       continue;
>>>>>>>               if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
>>>>>>> -                   ret = dma_resv_get_fences_rcu(robj, &bo->excl,
>>>>>>> -                                                           &bo->nr_shared,
>>>>>>> -                                                           &bo->shared);
>>>>>>> +                   ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
>>>>>>> +                                                      &bo->nr_shared,
>>>>>>> +                                                      &bo->shared);
>>>>>>>                       if (ret)
>>>>>>>                               return ret;
>>>>>>>               } else {
>>>>>>> -                   bo->excl = dma_resv_get_excl_rcu(robj);
>>>>>>> +                   bo->excl = dma_resv_get_excl_unlocked(robj);
>>>>>>>               }
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
>>>>>>> index 422b59ebf6dce..5f0b85a102159 100644
>>>>>>> --- a/drivers/gpu/drm/i915/display/intel_display.c
>>>>>>> +++ b/drivers/gpu/drm/i915/display/intel_display.c
>>>>>>> @@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
>>>>>>>               if (ret < 0)
>>>>>>>                       goto unpin_fb;
>>>>>>> -           fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +           fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>               if (fence) {
>>>>>>>                       add_rps_boost_after_vblank(new_plane_state->hw.crtc,
>>>>>>>                                                  fence);
>>>>>>> diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>>>> index 9e508e7d4629f..bdfc6bf16a4e9 100644
>>>>>>> --- a/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>>>> +++ b/drivers/gpu/drm/i915/dma_resv_utils.c
>>>>>>> @@ -10,7 +10,7 @@
>>>>>>>      void dma_resv_prune(struct dma_resv *resv)
>>>>>>>      {
>>>>>>>       if (dma_resv_trylock(resv)) {
>>>>>>> -           if (dma_resv_test_signaled_rcu(resv, true))
>>>>>>> +           if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>>>                       dma_resv_add_excl_fence(resv, NULL);
>>>>>>>               dma_resv_unlock(resv);
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>>>> index 25235ef630c10..754ad6d1bace9 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>>>>>>> @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>>>        * Alternatively, we can trade that extra information on read/write
>>>>>>>        * activity with
>>>>>>>        *      args->busy =
>>>>>>> -    *              !dma_resv_test_signaled_rcu(obj->resv, true);
>>>>>>> +    *              !dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>>>        * to report the overall busyness. This is what the wait-ioctl does.
>>>>>>>        *
>>>>>>>        */
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>>>> index 297143511f99b..e8f323564e57b 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>>>>>>> @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
>>>>>>>       if (DBG_FORCE_RELOC)
>>>>>>>               return false;
>>>>>>> -   return !dma_resv_test_signaled_rcu(vma->resv, true);
>>>>>>> +   return !dma_resv_test_signaled_unlocked(vma->resv, true);
>>>>>>>      }
>>>>>>>      static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>>>> index 2ebd79537aea9..7c0eb425cb3b3 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
>>>>>>> @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
>>>>>>>       struct dma_fence *fence;
>>>>>>>       rcu_read_lock();
>>>>>>> -   fence = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +   fence = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>       rcu_read_unlock();
>>>>>>>       if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>>>> index a657b99ec7606..44df18dc9669f 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
>>>>>>> @@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
>>>>>>>               return true;
>>>>>>>       /* we will unbind on next submission, still have userptr pins */
>>>>>>> -   r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
>>>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (r <= 0)
>>>>>>>               drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>>>> index 4b9856d5ba14f..5b6c52659ad4d 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
>>>>>>> @@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>>>               unsigned int count, i;
>>>>>>>               int ret;
>>>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
>>>>>>>                */
>>>>>>>               prune_fences = count && timeout >= 0;
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>>>       }
>>>>>>>       if (excl && timeout >= 0)
>>>>>>> @@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>>>               unsigned int count, i;
>>>>>>>               int ret;
>>>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>>>> -                                         &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>>>> +                                              &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>>>>>>>               kfree(shared);
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>       }
>>>>>>>       if (excl) {
>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>>>>>> index 970d8f4986bbe..f1ed03ced7dd1 100644
>>>>>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>>>>>> @@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
>>>>>>>               struct dma_fence **shared;
>>>>>>>               unsigned int count, i;
>>>>>>> -           ret = dma_resv_get_fences_rcu(obj->base.resv,
>>>>>>> -                                                   &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(obj->base.resv,
>>>>>>> +                                              &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
>>>>>>>                       dma_fence_put(shared[i]);
>>>>>>>               kfree(shared);
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(obj->base.resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(obj->base.resv);
>>>>>>>       }
>>>>>>>       if (excl) {
>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>>>> index 2744558f30507..0bcb7ea44201e 100644
>>>>>>> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>>>> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
>>>>>>> @@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>>>               struct dma_fence **shared;
>>>>>>>               unsigned int count, i;
>>>>>>> -           ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
>>>>>>> +           ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
>>>>>>>               if (ret)
>>>>>>>                       return ret;
>>>>>>> @@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
>>>>>>>                       dma_fence_put(shared[i]);
>>>>>>>               kfree(shared);
>>>>>>>       } else {
>>>>>>> -           excl = dma_resv_get_excl_rcu(resv);
>>>>>>> +           excl = dma_resv_get_excl_unlocked(resv);
>>>>>>>       }
>>>>>>>       if (ret >= 0 && excl && excl->ops != exclude) {
>>>>>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>>>>>> index 56df86e5f7400..1aca60507bb14 100644
>>>>>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>>>>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>>>>>> @@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
>>>>>>>               op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
>>>>>>>       long ret;
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(obj->resv, write,
>>>>>>> -                                             true,  remain);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
>>>>>>>       if (ret == 0)
>>>>>>>               return remain == 0 ? -EBUSY : -ETIMEDOUT;
>>>>>>>       else if (ret < 0)
>>>>>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>>>> index 0cb1f9d848d3e..8d048bacd6f02 100644
>>>>>>> --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
>>>>>>> @@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>>>>>                       asyw->image.handle[0] = ctxdma->object.handle;
>>>>>>>       }
>>>>>>> -   asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
>>>>>>> +   asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
>>>>>>>       asyw->image.offset[0] = nvbo->offset;
>>>>>>>       if (wndw->func->prepare) {
>>>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>>>> index a70e82413fa75..bc6b09ee9b552 100644
>>>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
>>>>>>> @@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
>>>>>>>               return -ENOENT;
>>>>>>>       nvbo = nouveau_gem_object(gem);
>>>>>>> -   lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
>>>>>>> -                                              no_wait ? 0 : 30 * HZ);
>>>>>>> +   lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
>>>>>>> +                                         no_wait ? 0 : 30 * HZ);
>>>>>>>       if (!lret)
>>>>>>>               ret = -EBUSY;
>>>>>>>       else if (lret > 0)
>>>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>>>> index ca07098a61419..eef5b632ee0ce 100644
>>>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>>>>> @@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>>>>>>       if (!gem_obj)
>>>>>>>               return -ENOENT;
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
>>>>>>> -                                             true, timeout);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
>>>>>>> +                                        true, timeout);
>>>>>>>       if (!ret)
>>>>>>>               ret = timeout ? -ETIMEDOUT : -EBUSY;
>>>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> index 6003cfeb13221..2df3e999a38d0 100644
>>>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> @@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
>>>>>>>       int i;
>>>>>>>       for (i = 0; i < bo_count; i++)
>>>>>>> -           implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
>>>>>>> +           implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
>>>>>>>      }
>>>>>>>      static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>>>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> index 05ea2f39f6261..1a38b0bf36d11 100644
>>>>>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> @@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
>>>>>>>       }
>>>>>>>       if (domain == RADEON_GEM_DOMAIN_CPU) {
>>>>>>>               /* Asking for cpu access wait for object idle */
>>>>>>> -           r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>> +           r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>>               if (!r)
>>>>>>>                       r = -EBUSY;
>>>>>>> @@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
>>>>>>>       }
>>>>>>>       robj = gem_to_radeon_bo(gobj);
>>>>>>> -   r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
>>>>>>> +   r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
>>>>>>>       if (r == 0)
>>>>>>>               r = -EBUSY;
>>>>>>>       else
>>>>>>> @@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
>>>>>>>       }
>>>>>>>       robj = gem_to_radeon_bo(gobj);
>>>>>>> -   ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>> +   ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
>>>>>>>       if (ret == 0)
>>>>>>>               r = -EBUSY;
>>>>>>>       else if (ret < 0)
>>>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>>>> index e37c9a57a7c36..a19be3f8a218c 100644
>>>>>>> --- a/drivers/gpu/drm/radeon/radeon_mn.c
>>>>>>> +++ b/drivers/gpu/drm/radeon/radeon_mn.c
>>>>>>> @@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
>>>>>>>               return true;
>>>>>>>       }
>>>>>>> -   r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
>>>>>>> -                                 MAX_SCHEDULE_TIMEOUT);
>>>>>>> +   r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
>>>>>>> +                                      MAX_SCHEDULE_TIMEOUT);
>>>>>>>       if (r <= 0)
>>>>>>>               DRM_ERROR("(%ld) failed to wait for user bo\n", r);
>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>>>> index ca1b098b6a561..215cad3149621 100644
>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>>>> @@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>>>       struct dma_resv *resv = &bo->base._resv;
>>>>>>>       int ret;
>>>>>>> -   if (dma_resv_test_signaled_rcu(resv, true))
>>>>>>> +   if (dma_resv_test_signaled_unlocked(resv, true))
>>>>>>>               ret = 0;
>>>>>>>       else
>>>>>>>               ret = -EBUSY;
>>>>>>> @@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
>>>>>>>                       dma_resv_unlock(bo->base.resv);
>>>>>>>               spin_unlock(&bo->bdev->lru_lock);
>>>>>>> -           lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
>>>>>>> -                                            30 * HZ);
>>>>>>> +           lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
>>>>>>> +                                                 30 * HZ);
>>>>>>>               if (lret < 0)
>>>>>>>                       return lret;
>>>>>>> @@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
>>>>>>>                       /* Last resort, if we fail to allocate memory for the
>>>>>>>                        * fences block for the BO to become idle
>>>>>>>                        */
>>>>>>> -                   dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
>>>>>>> -                                             30 * HZ);
>>>>>>> +                   dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
>>>>>>> +                                                  30 * HZ);
>>>>>>>               }
>>>>>>>               if (bo->bdev->funcs->release_notify)
>>>>>>> @@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
>>>>>>>               ttm_mem_io_free(bdev, &bo->mem);
>>>>>>>       }
>>>>>>> -   if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
>>>>>>> +   if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
>>>>>>>           !dma_resv_trylock(bo->base.resv)) {
>>>>>>>               /* The BO is not idle, resurrect it for delayed destroy */
>>>>>>>               ttm_bo_flush_all_fences(bo);
>>>>>>> @@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
>>>>>>>       long timeout = 15 * HZ;
>>>>>>>       if (no_wait) {
>>>>>>> -           if (dma_resv_test_signaled_rcu(bo->base.resv, true))
>>>>>>> +           if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
>>>>>>>                       return 0;
>>>>>>>               else
>>>>>>>                       return -EBUSY;
>>>>>>>       }
>>>>>>> -   timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
>>>>>>> -                                                 interruptible, timeout);
>>>>>>> +   timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
>>>>>>> +                                            interruptible, timeout);
>>>>>>>       if (timeout < 0)
>>>>>>>               return timeout;
>>>>>>> diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>>>> index 2902dc6e64faf..010a82405e374 100644
>>>>>>> --- a/drivers/gpu/drm/vgem/vgem_fence.c
>>>>>>> +++ b/drivers/gpu/drm/vgem/vgem_fence.c
>>>>>>> @@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
>>>>>>>       /* Check for a conflicting fence */
>>>>>>>       resv = obj->resv;
>>>>>>> -   if (!dma_resv_test_signaled_rcu(resv,
>>>>>>> -                                             arg->flags & VGEM_FENCE_WRITE)) {
>>>>>>> +   if (!dma_resv_test_signaled_unlocked(resv,
>>>>>>> +                                        arg->flags & VGEM_FENCE_WRITE)) {
>>>>>>>               ret = -EBUSY;
>>>>>>>               goto err_fence;
>>>>>>>       }
>>>>>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>>>> index 669f2ee395154..ab010c8e32816 100644
>>>>>>> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>>>> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>>>>>>> @@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
>>>>>>>               return -ENOENT;
>>>>>>>       if (args->flags & VIRTGPU_WAIT_NOWAIT) {
>>>>>>> -           ret = dma_resv_test_signaled_rcu(obj->resv, true);
>>>>>>> +           ret = dma_resv_test_signaled_unlocked(obj->resv, true);
>>>>>>>       } else {
>>>>>>> -           ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
>>>>>>> -                                           timeout);
>>>>>>> +           ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
>>>>>>> +                                                timeout);
>>>>>>>       }
>>>>>>>       if (ret == 0)
>>>>>>>               ret = -EBUSY;
>>>>>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>>>> index 04dd49c4c2572..19e1ce23842a9 100644
>>>>>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
>>>>>>> @@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
>>>>>>>       if (flags & drm_vmw_synccpu_allow_cs) {
>>>>>>>               long lret;
>>>>>>> -           lret = dma_resv_wait_timeout_rcu
>>>>>>> +           lret = dma_resv_wait_timeout_unlocked
>>>>>>>                       (bo->base.resv, true, true,
>>>>>>>                        nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
>>>>>>>               if (!lret)
>>>>>>> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>>>>>>> index d44a77e8a7e34..99cfb7af966b8 100644
>>>>>>> --- a/include/linux/dma-resv.h
>>>>>>> +++ b/include/linux/dma-resv.h
>>>>>>> @@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>>>      }
>>>>>>>      /**
>>>>>>> - * dma_resv_get_excl_rcu - get the reservation object's
>>>>>>> + * dma_resv_get_excl_unlocked - get the reservation object's
>>>>>>>       * exclusive fence, without lock held.
>>>>>>>       * @obj: the reservation object
>>>>>>>       *
>>>>>>> @@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
>>>>>>>       * The exclusive fence or NULL if none
>>>>>>>       */
>>>>>>>      static inline struct dma_fence *
>>>>>>> -dma_resv_get_excl_rcu(struct dma_resv *obj)
>>>>>>> +dma_resv_get_excl_unlocked(struct dma_resv *obj)
>>>>>>>      {
>>>>>>>       struct dma_fence *fence;
>>>>>>> @@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>>>>      void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>>>>>>> -int dma_resv_get_fences_rcu(struct dma_resv *obj,
>>>>>>> -                       struct dma_fence **pfence_excl,
>>>>>>> -                       unsigned *pshared_count,
>>>>>>> -                       struct dma_fence ***pshared);
>>>>>>> +int dma_resv_get_fences_unlocked(struct dma_resv *obj,
>>>>>>> +                            struct dma_fence **pfence_excl,
>>>>>>> +                            unsigned *pshared_count,
>>>>>>> +                            struct dma_fence ***pshared);
>>>>>>>      int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>>>>>>> -long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
>>>>>>> -                          unsigned long timeout);
>>>>>>> +long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
>>>>>>> +                               unsigned long timeout);
>>>>>>> -bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
>>>>>>> +bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
>>>>>>>      #endif /* _LINUX_RESERVATION_H */


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  2021-06-01 14:34               ` [Intel-gfx] " Daniel Vetter
@ 2021-06-01 17:29                 ` Christian König
  -1 siblings, 0 replies; 68+ messages in thread
From: Christian König @ 2021-06-01 17:29 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Gerd Hoffmann, Jason Ekstrand, intel-gfx, Huang Rui,
	VMware Graphics, dri-devel, Thomas Zimmermann, Sean Paul


> Ack on all naming.
> -Daniel

BTW, as long as neither Jason nor you objects, I will prepare patches 
for renaming the functions.

Regards,
Christian.

^ permalink raw reply	[flat|nested] 68+ messages in thread


* Re: [PATCH 0/7] dma-buf: Add an API for exporting sync files (v11)
  2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
@ 2021-06-10 20:10   ` Chia-I Wu
  -1 siblings, 0 replies; 68+ messages in thread
From: Chia-I Wu @ 2021-06-10 20:10 UTC (permalink / raw)
  To: Jason Ekstrand
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	ML dri-devel, wayland, Dave Airlie, mesa-dev, intel-gfx

On Tue, May 25, 2021 at 2:18 PM Jason Ekstrand <jason@jlekstrand.net> wrote:
> Modern userspace APIs like Vulkan are built on an explicit
> synchronization model.  This doesn't always play nicely with the
> implicit synchronization used in the kernel and assumed by X11 and
> Wayland.  The client -> compositor half of the synchronization isn't too
> bad, at least on intel, because we can control whether or not i915
> synchronizes on the buffer and whether or not it's considered written.
We might have an important use case for this half, for virtio-gpu and Chrome OS.

When the guest compositor acts as a proxy to connect guest apps to the
host compositor, implicit fencing requires the guest compositor to do
a wait before forwarding the buffer to the host compositor.  With this
patch, the guest compositor can extract the dma-fence from the buffer,
and if the fence is a virtio-gpu fence, forward both the fence and the
buffer to the host compositor.  It will allow us to convert a
guest-side wait into a host-side wait.
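A userspace sketch of that guest-compositor step, using the export ioctl from this series (struct, ioctl, and flag names as in the uapi patch; error handling trimmed):

	#include <linux/dma-buf.h>
	#include <sys/ioctl.h>

	/* Snapshot the fences a reader would have to wait on. */
	static int export_read_fence(int dmabuf_fd)
	{
		struct dma_buf_export_sync_file args = {
			.flags = DMA_BUF_SYNC_READ,
			.fd = -1,
		};

		if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0)
			return -1;

		/* args.fd is now a sync_file snapshotting the buffer's
		 * implicit fences; forward it to the host compositor
		 * instead of blocking on it in the guest. */
		return args.fd;
	}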

^ permalink raw reply	[flat|nested] 68+ messages in thread


* Re: [PATCH 0/7] dma-buf: Add an API for exporting sync files (v11)
  2021-06-10 20:10   ` [Intel-gfx] " Chia-I Wu
@ 2021-06-10 20:26     ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-06-10 20:26 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	ML dri-devel, wayland, Dave Airlie, mesa-dev, Intel GFX

On Thu, Jun 10, 2021 at 3:11 PM Chia-I Wu <olvaffe@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 2:18 PM Jason Ekstrand <jason@jlekstrand.net> wrote:
> > Modern userspace APIs like Vulkan are built on an explicit
> > synchronization model.  This doesn't always play nicely with the
> > implicit synchronization used in the kernel and assumed by X11 and
> > Wayland.  The client -> compositor half of the synchronization isn't too
> > bad, at least on intel, because we can control whether or not i915
> > synchronizes on the buffer and whether or not it's considered written.
> We might have an important use case for this half, for virtio-gpu and Chrome OS.
>
> When the guest compositor acts as a proxy to connect guest apps to the
> host compositor, implicit fencing requires the guest compositor to do
> a wait before forwarding the buffer to the host compositor.  With this
> patch, the guest compositor can extract the dma-fence from the buffer,
> and if the fence is a virtio-gpu fence, forward both the fence and the
> buffer to the host compositor.  It will allow us to convert a
> guest-side wait into a host-side wait.

Yeah, I think the first half solves a lot of problems.  I'm rebasing
it now and will send a v12 series shortly.  I don't think there's a
lot standing between the first few patches and merging.  I've got IGT
tests and I'm pretty sure the code is good.  The last review cycle got
distracted with some renaming fun.

--Jason

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH 4/7] dma-buf: Document DMA_BUF_IOCTL_SYNC
  2021-05-27 10:38     ` [Intel-gfx] " Daniel Vetter
@ 2021-06-10 20:57       ` Jason Ekstrand
  -1 siblings, 0 replies; 68+ messages in thread
From: Jason Ekstrand @ 2021-06-10 20:57 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, Intel GFX, Christian König,
	Mailing list - DRI developers

On Thu, May 27, 2021 at 5:38 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, May 25, 2021 at 04:17:50PM -0500, Jason Ekstrand wrote:
> > This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
> > documentation for DMA_BUF_IOCTL_SYNC.
> >
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
>
> We're still missing the doc for the SET_NAME ioctl, but maybe Sumit can be
> motivated to fix that?
>
> > ---
> >  Documentation/driver-api/dma-buf.rst |  8 +++++++
> >  include/uapi/linux/dma-buf.h         | 32 +++++++++++++++++++++++++++-
> >  2 files changed, 39 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> > index 7f37ec30d9fd7..784f84fe50a5e 100644
> > --- a/Documentation/driver-api/dma-buf.rst
> > +++ b/Documentation/driver-api/dma-buf.rst
> > @@ -88,6 +88,9 @@ consider though:
> >  - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
> >    details.
> >
> > +- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
> > +  `DMA Buffer ioctls`_ below for details.
> > +
> >  Basic Operation and Device DMA Access
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > @@ -106,6 +109,11 @@ Implicit Fence Poll Support
> >  .. kernel-doc:: drivers/dma-buf/dma-buf.c
> >     :doc: implicit fence polling
> >
> > +DMA Buffer ioctls
> > +~~~~~~~~~~~~~~~~~
> > +
> > +.. kernel-doc:: include/uapi/linux/dma-buf.h
> > +
> >  Kernel Functions and Structures Reference
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> > index 7f30393b92c3b..1f67ced853b14 100644
> > --- a/include/uapi/linux/dma-buf.h
> > +++ b/include/uapi/linux/dma-buf.h
> > @@ -22,8 +22,38 @@
> >
> >  #include <linux/types.h>
> >
> > -/* begin/end dma-buf functions used for userspace mmap. */
> > +/**
> > + * struct dma_buf_sync - Synchronize with CPU access.
> > + *
> > + * When a DMA buffer is accessed from the CPU via mmap, it is not always
> > + * possible to guarantee coherency between the CPU-visible map and underlying
> > + * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
> > + * any CPU access to give the kernel the chance to shuffle memory around if
> > + * needed.
> > + *
> > + * Prior to accessing the map, the client should call DMA_BUF_IOCTL_SYNC
>
> s/should/must/
>
> > + * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
> > + * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
> > + * DMA_BUF_SYNC_END and the same read/write flags.
>
> I think we should make it really clear here that this is _only_ for cache
> coherency, and that furthermore if you want coherency with gpu access you
> either need to use poll() for implicit sync (link to the relevant section)
> or handle explicit sync with sync_file (again link would be awesome).

I've added such a comment.  I encourage you to look at v2 which I'll
be sending shortly.  I'm not sure how to get the poll() reference to
hyperlink, though.
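
For reference, the bracketing the new comment describes looks roughly like
this from userspace (a sketch only; dmabuf_fd and the mmap'd pointer are
assumed to exist, and error handling is omitted):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/dma-buf.h>

    /* Bracket CPU writes through the mmap with DMA_BUF_IOCTL_SYNC.
     * This is cache-coherency management only; it does not synchronize
     * against GPU access (use poll() or a sync_file for that). */
    void cpu_fill(int dmabuf_fd, void *map, size_t size)
    {
            struct dma_buf_sync sync;

            sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            memset(map, 0, size);   /* the actual CPU access */

            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
    }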

> > + */
> >  struct dma_buf_sync {
> > +     /**
> > +      * @flags: Set of access flags
> > +      *
> > +      * - DMA_BUF_SYNC_START: Indicates the start of a map access
>
> Bikeshed, but I think the item list format instead of bullet point list
> looks neater, e.g.  DOC: standard plane properties in drm_plane.c.

Yeah, that's better.

> > +      *   session.
> > +      *
> > +      * - DMA_BUF_SYNC_END: Indicates the end of a map access session.
> > +      *
> > +      * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will
> > +      *   be read by the client via the CPU map.
> > +      *
> > +      * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will
>
> s/READ/WRITE/

Oops.

> > +      *   be written by the client via the CPU map.
> > +      *
> > +      * - DMA_BUF_SYNC_RW: An alias for DMA_BUF_SYNC_READ |
> > +      *   DMA_BUF_SYNC_WRITE.
> > +      */
>
> With the nits addressed: Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Thanks!

> >       __u64 flags;
> >  };
> >
> > --
> > 2.31.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 68+ messages in thread

end of thread, other threads:[~2021-06-10 20:57 UTC | newest]

Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-25 21:17 [PATCH 0/7] dma-buf: Add an API for exporting sync files (v11) Jason Ekstrand
2021-05-25 21:17 ` [Intel-gfx] " Jason Ekstrand
2021-05-25 21:17 ` [PATCH 1/7] dma-buf: Add dma_fence_array_for_each (v2) Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-25 21:17 ` [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2) Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-26 10:57   ` Christian König
2021-05-26 10:57     ` [Intel-gfx] " Christian König
2021-05-27 10:39     ` Daniel Vetter
2021-05-27 10:39       ` [Intel-gfx] " Daniel Vetter
2021-05-27 11:58       ` Christian König
2021-05-27 11:58         ` [Intel-gfx] " Christian König
2021-05-27 13:21         ` Daniel Vetter
2021-05-27 13:21           ` [Intel-gfx] " Daniel Vetter
2021-05-27 13:25         ` Daniel Vetter
2021-05-27 13:25           ` [Intel-gfx] " Daniel Vetter
2021-05-27 13:41           ` Christian König
2021-05-27 13:41             ` [Intel-gfx] " Christian König
2021-06-01 14:34             ` Daniel Vetter
2021-06-01 14:34               ` [Intel-gfx] " Daniel Vetter
2021-06-01 17:27               ` Christian König
2021-06-01 17:27                 ` [Intel-gfx] " Christian König
2021-06-01 17:29               ` Christian König
2021-06-01 17:29                 ` [Intel-gfx] " Christian König
2021-05-25 21:17 ` [PATCH 3/7] dma-buf: Add dma_resv_get_singleton_unlocked (v5) Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-25 21:17 ` [PATCH 4/7] dma-buf: Document DMA_BUF_IOCTL_SYNC Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-27 10:38   ` Daniel Vetter
2021-05-27 10:38     ` [Intel-gfx] " Daniel Vetter
2021-05-27 11:12     ` Sumit Semwal
2021-05-27 11:12       ` [Intel-gfx] " Sumit Semwal
2021-06-10 20:57     ` Jason Ekstrand
2021-06-10 20:57       ` [Intel-gfx] " Jason Ekstrand
2021-05-25 21:17 ` [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11) Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-26 11:02   ` Christian König
2021-05-26 11:02     ` [Intel-gfx] " Christian König
2021-05-26 11:31     ` Daniel Stone
2021-05-26 11:31       ` Daniel Stone
2021-05-26 12:46       ` Christian König
2021-05-26 12:46         ` Christian König
2021-05-26 13:12         ` Daniel Stone
2021-05-26 13:12           ` Daniel Stone
2021-05-26 13:23           ` Christian König
2021-05-26 13:23             ` Christian König
2021-05-27 10:33             ` Daniel Vetter
2021-05-27 10:33               ` Daniel Vetter
2021-05-27 10:48               ` Simon Ser
2021-05-27 10:48                 ` Simon Ser
2021-05-27 12:01               ` Christian König
2021-05-27 12:01                 ` Christian König
     [not found]             ` <CAOFGe95Zdn8P3=sOT0HkE9_+ac70g36LxpmLOyR2bKTTeS-xvQ@mail.gmail.com>
     [not found]               ` <fef50d81-399a-af09-1d13-de4db1b3fab8@amd.com>
2021-05-27 15:39                 ` Jason Ekstrand
2021-05-27 15:39                   ` Jason Ekstrand
2021-05-25 21:17 ` [PATCH 6/7] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-25 21:17 ` [PATCH 7/7] RFC: dma-buf: Add an API for importing sync files (v7) Jason Ekstrand
2021-05-25 21:17   ` [Intel-gfx] " Jason Ekstrand
2021-05-26 17:09   ` Daniel Vetter
2021-05-26 17:09     ` [Intel-gfx] " Daniel Vetter
2021-05-25 21:44 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-buf: Add an API for exporting sync files (v11) Patchwork
2021-05-25 21:46 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-05-25 22:14 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-05-26  4:20 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-06-10 20:10 ` [PATCH 0/7] " Chia-I Wu
2021-06-10 20:10   ` [Intel-gfx] " Chia-I Wu
2021-06-10 20:26   ` Jason Ekstrand
2021-06-10 20:26     ` [Intel-gfx] " Jason Ekstrand

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.