* [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
@ 2021-06-10 21:09 Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel
  Cc: Daniel Stone, Michel Dänzer, wayland-devel, Jason Ekstrand,
	Dave Airlie, mesa-dev, Christian König

Modern userspace APIs like Vulkan are built on an explicit
synchronization model.  This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland.  The client -> compositor half of the synchronization isn't too
bad, at least on intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor.  We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer.  With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's access of the
buffer.  In particular, once we tell the kernel that we're rendering to
the buffer again, any CPU waits on the buffer or GPU dependencies will
wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file.  It's effectively the same as a poll() or I915_GEM_WAIT
except that, instead of CPU-waiting directly, it encapsulates the wait
operation, at the current moment in time, in a sync_file so we can
check/wait on it
later.  As long as the Vulkan driver does the sync_file export from the
dma-buf before we re-introduce it for rendering, it will only contain
fences from the compositor or display.  This allows us to accurately
turn it into a VkFence or VkSemaphore without any over-synchronization.
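
For illustration, a minimal userspace sketch of that flow (a sketch
only, assuming the uapi added in patch 4 of this series; the function
name and error handling here are ours):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

int export_implicit_fences(int dmabuf_fd)
{
	struct dma_buf_export_sync_file arg;

	memset(&arg, 0, sizeof(arg));
	/* WRITE snapshots all current users of the buffer, read and write. */
	arg.flags = DMA_BUF_SYNC_WRITE;

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
		return -1;

	/*
	 * arg.fd is a sync_file which can back a VkSemaphore or VkFence,
	 * e.g. imported via VK_KHR_external_semaphore_fd.
	 */
	return arg.fd;
}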

This patch series actually contains two new ioctls.  There is the export
one mentioned above as well as an RFC for an import ioctl which provides
the other half.  The intention is to land the export ioctl since it seems
like there's no real disagreement on that one.  The import ioctl, however,
has a lot of debate around it so it's intended to be RFC-only for now.

Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
IGT tests: https://patchwork.freedesktop.org/series/90490/

v10 (Jason Ekstrand, Daniel Vetter):
 - Add reviews/acks
 - Add a patch to rename _rcu to _unlocked
 - Split things better so import is clearly RFC status

v11 (Daniel Vetter):
 - Add more CCs to try and get maintainers
 - Add a patch to document DMA_BUF_IOCTL_SYNC
 - Generally better docs
 - Use separate structs for import/export (easier to document)
 - Fix an issue in the import patch

v12 (Daniel Vetter):
 - Better docs for DMA_BUF_IOCTL_SYNC

v12 (Christian König):
 - Drop the rename patch in favor of Christian's series
 - Add a comment to the commit message for the dma-buf sync_file export
   ioctl saying why we made it an ioctl on dma-buf

Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cc: Daniel Stone <daniels@collabora.com>
Cc: mesa-dev@lists.freedesktop.org
Cc: wayland-devel@lists.freedesktop.org
Test-with: 20210524205225.872316-1-jason@jlekstrand.net

Christian König (1):
  dma-buf: Add dma_fence_array_for_each (v2)

Jason Ekstrand (5):
  dma-buf: Add dma_resv_get_singleton (v6)
  dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
  dma-buf: Add an API for exporting sync files (v12)
  RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
  RFC: dma-buf: Add an API for importing sync files (v7)

 Documentation/driver-api/dma-buf.rst |   8 ++
 drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
 drivers/dma-buf/dma-fence-array.c    |  27 +++++++
 drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
 include/linux/dma-fence-array.h      |  17 +++++
 include/linux/dma-resv.h             |   2 +
 include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
 7 files changed, 369 insertions(+), 1 deletion(-)

-- 
2.31.1



* [PATCH 1/6] dma-buf: Add dma_fence_array_for_each (v2)
@ 2021-06-10 21:09 ` Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel
  Cc: Christian König, Christian König, Jason Ekstrand,
	Daniel Vetter

From: Christian König <ckoenig.leichtzumerken@gmail.com>

Add a helper to iterate over all fences in a dma_fence_array object.

v2 (Jason Ekstrand)
 - Return NULL from dma_fence_array_first if head == NULL.  This matches
   the iterator behavior of dma_fence_chain_for_each in that it iterates
   zero times if head == NULL.
 - Return NULL from dma_fence_array_next if index >= array->num_fences.
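
As a usage illustration (a sketch of ours, not part of the patch), the
macro walks either a plain fence or every fence of a dma_fence_array:

#include <linux/dma-fence-array.h>

/* Count the unsignaled fences behind @f, whether or not it's an array. */
static unsigned int count_unsignaled(struct dma_fence *f)
{
	struct dma_fence *fence;
	unsigned int index, count = 0;

	dma_fence_array_for_each(fence, index, f)
		if (!dma_fence_is_signaled(fence))
			count++;

	return count;
}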

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++
 include/linux/dma-fence-array.h   | 17 +++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index d3fbd950be944..2ac1afc697d0f 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context)
 	return true;
 }
 EXPORT_SYMBOL(dma_fence_match_context);
+
+struct dma_fence *dma_fence_array_first(struct dma_fence *head)
+{
+	struct dma_fence_array *array;
+
+	if (!head)
+		return NULL;
+
+	array = to_dma_fence_array(head);
+	if (!array)
+		return head;
+
+	return array->fences[0];
+}
+EXPORT_SYMBOL(dma_fence_array_first);
+
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index)
+{
+	struct dma_fence_array *array = to_dma_fence_array(head);
+
+	if (!array || index >= array->num_fences)
+		return NULL;
+
+	return array->fences[index];
+}
+EXPORT_SYMBOL(dma_fence_array_next);
diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h
index 303dd712220fd..588ac8089dd61 100644
--- a/include/linux/dma-fence-array.h
+++ b/include/linux/dma-fence-array.h
@@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence)
 	return container_of(fence, struct dma_fence_array, base);
 }
 
+/**
+ * dma_fence_array_for_each - iterate over all fences in array
+ * @fence: current fence
+ * @index: index into the array
+ * @head: potential dma_fence_array object
+ *
+ * Test if @head is a dma_fence_array object and if yes iterate over all fences
+ * in the array.  If not, just iterate over the single fence in @head itself.
+ */
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))
+
 struct dma_fence_array *dma_fence_array_create(int num_fences,
 					       struct dma_fence **fences,
 					       u64 context, unsigned seqno,
@@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 
 bool dma_fence_match_context(struct dma_fence *fence, u64 context);
 
+struct dma_fence *dma_fence_array_first(struct dma_fence *head);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index);
+
 #endif /* __LINUX_DMA_FENCE_ARRAY_H */
-- 
2.31.1



* [PATCH 2/6] dma-buf: Add dma_resv_get_singleton (v6)
@ 2021-06-10 21:09 ` Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

Add a helper function to get a single fence representing
all fences in a dma_resv object.

This fence is either the only one in the object or all the
unsignaled fences of the object combined into a flattened-out
dma_fence_array.
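
A hedged sketch of how a caller might use the new helper (the function
below is ours, not part of the patch); note that the singleton must be
put like any other fence reference:

static int wait_for_all_fences(struct dma_resv *resv)
{
	struct dma_fence *fence;
	int ret;

	fence = dma_resv_get_singleton(resv);
	if (IS_ERR(fence))
		return PTR_ERR(fence);
	if (!fence)	/* no unsignaled fences */
		return 0;

	ret = dma_fence_wait(fence, true);	/* interruptible wait */
	dma_fence_put(fence);
	return ret;
}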

v2 (Jason Ekstrand):
 - Take reference of fences both for creating the dma_fence_array and in
   the case where we return one fence.
 - Handle the case where dma_resv_get_list() returns NULL

v3 (Jason Ekstrand):
 - Add an _rcu suffix because it is read-only
 - Rewrite to use dma_resv_get_fences_rcu so it's RCU-safe
 - Add an EXPORT_SYMBOL_GPL declaration
 - Re-author the patch to Jason since very little is left of Christian
   König's original patch
 - Remove the extra fence argument

v4 (Jason Ekstrand):
 - Restore the extra fence argument

v5 (Daniel Vetter):
 - Rename from _rcu to _unlocked since it doesn't leak RCU details to
   the caller
 - Fix docs
 - Use ERR_PTR for error handling rather than an output dma_fence**

v5 (Jason Ekstrand):
 - Drop the extra fence param and leave that to a separate patch

v6 (Jason Ekstrand):
 - Rename to dma_resv_get_singleton to match the new naming convention
   for dma_resv helpers which work without taking a lock.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-resv.c | 91 ++++++++++++++++++++++++++++++++++++++
 include/linux/dma-resv.h   |  1 +
 2 files changed, 92 insertions(+)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index f26c71747d43a..1b26aa7e5d81c 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -34,6 +34,8 @@
  */
 
 #include <linux/dma-resv.h>
+#include <linux/dma-fence-chain.h>
+#include <linux/dma-fence-array.h>
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/sched/mm.h>
@@ -50,6 +52,10 @@
  * write-side updates.
  */
 
+#define dma_fence_deep_dive_for_each(fence, chain, index, head)	\
+	dma_fence_chain_for_each(chain, head)			\
+		dma_fence_array_for_each(fence, index, chain)
+
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
@@ -495,6 +501,91 @@ int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
 }
 EXPORT_SYMBOL_GPL(dma_resv_get_fences);
 
+/**
+ * dma_resv_get_singleton - get a single fence for the dma_resv object
+ * @obj: the reservation object
+ *
+ * Get a single fence representing all unsignaled fences in the dma_resv object
+ * plus the given extra fence. If we got only one fence return a new
+ * reference to that, otherwise return a dma_fence_array object.
+ *
+ * RETURNS
+ * The singleton dma_fence on success or an ERR_PTR on failure
+ */
+struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
+{
+	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
+	struct dma_fence_array *array;
+	unsigned int num_resv_fences, num_fences, i, j;
+	int err;
+
+	err = dma_resv_get_fences(obj, NULL, &num_resv_fences, &resv_fences);
+	if (err)
+		return ERR_PTR(err);
+
+	if (num_resv_fences == 0)
+		return NULL;
+
+	num_fences = 0;
+	result = NULL;
+
+	for (i = 0; i < num_resv_fences; ++i) {
+		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
+			if (dma_fence_is_signaled(fence))
+				continue;
+
+			result = fence;
+			++num_fences;
+		}
+	}
+
+	if (num_fences <= 1) {
+		result = dma_fence_get(result);
+		goto put_resv_fences;
+	}
+
+	fences = kmalloc_array(num_fences, sizeof(struct dma_fence *),
+			       GFP_KERNEL);
+	if (!fences) {
+		result = ERR_PTR(-ENOMEM);
+		goto put_resv_fences;
+	}
+
+	num_fences = 0;
+	for (i = 0; i < num_resv_fences; ++i) {
+		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
+			if (!dma_fence_is_signaled(fence))
+				fences[num_fences++] = dma_fence_get(fence);
+		}
+	}
+
+	if (num_fences <= 1) {
+		result = num_fences ? fences[0] : NULL;
+		kfree(fences);
+		goto put_resv_fences;
+	}
+
+	array = dma_fence_array_create(num_fences, fences,
+				       dma_fence_context_alloc(1),
+				       1, false);
+	if (array) {
+		result = &array->base;
+	} else {
+		result = ERR_PTR(-ENOMEM);
+		while (num_fences--)
+			dma_fence_put(fences[num_fences]);
+		kfree(fences);
+	}
+
+put_resv_fences:
+	while (num_resv_fences--)
+		dma_fence_put(resv_fences[num_resv_fences]);
+	kfree(resv_fences);
+
+	return result;
+}
+EXPORT_SYMBOL_GPL(dma_resv_get_singleton);
+
 /**
  * dma_resv_wait_timeout - Wait on reservation's objects
  * shared and/or exclusive fences.
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 562b885cf9c3d..d60982975a786 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -275,6 +275,7 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
 			unsigned *pshared_count, struct dma_fence ***pshared);
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
+struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj);
 long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
 			   unsigned long timeout);
 bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all);
-- 
2.31.1



* [PATCH 3/6] dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
@ 2021-06-10 21:09 ` Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
documentation for DMA_BUF_IOCTL_SYNC.
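
As a concrete illustration of the bracketing protocol being documented
(a userspace sketch of ours; error checking omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket one CPU write to an mmap()ed dma-buf. */
static void cpu_write(int dmabuf_fd, void *map, const void *data, size_t size)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	memcpy(map, data, size);

	/* End the session with the same read/write flags. */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}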

v2 (Daniel Vetter):
 - Fix a couple typos
 - Add commentary about synchronization with other devices
 - Use item list format for describing flags

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
---
 Documentation/driver-api/dma-buf.rst |  8 +++++
 include/uapi/linux/dma-buf.h         | 46 +++++++++++++++++++++++++++-
 2 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 7f21425d9435a..0d4c13ec1a800 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -88,6 +88,9 @@ consider though:
 - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
   details.
 
+- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
+  `DMA Buffer ioctls`_ below for details.
+
 Basic Operation and Device DMA Access
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -106,6 +109,11 @@ Implicit Fence Poll Support
 .. kernel-doc:: drivers/dma-buf/dma-buf.c
    :doc: implicit fence polling
 
+DMA Buffer ioctls
+~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: include/uapi/linux/dma-buf.h
+
 Kernel Functions and Structures Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 7f30393b92c3b..1c131002fe1ee 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -22,8 +22,52 @@
 
 #include <linux/types.h>
 
-/* begin/end dma-buf functions used for userspace mmap. */
+/**
+ * struct dma_buf_sync - Synchronize with CPU access.
+ *
+ * When a DMA buffer is accessed from the CPU via mmap, it is not always
+ * possible to guarantee coherency between the CPU-visible map and underlying
+ * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
+ * any CPU access to give the kernel the chance to shuffle memory around if
+ * needed.
+ *
+ * Prior to accessing the map, the client must call DMA_BUF_IOCTL_SYNC
+ * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
+ * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
+ * DMA_BUF_SYNC_END and the same read/write flags.
+ *
+ * The synchronization provided via DMA_BUF_IOCTL_SYNC only provides cache
+ * coherency.  It does not prevent other processes or devices from
+ * accessing the memory at the same time.  If synchronization with a GPU or
+ * other device driver is required, it is the client's responsibility to
+ * wait for the buffer to be ready for reading or writing.  If the driver or
+ * API with which the client is interacting uses implicit synchronization,
+ * this can be done via poll() on the DMA buffer file descriptor.  If the
+ * driver or API requires explicit synchronization, the client may have to
+ * wait on a sync_file or other synchronization primitive outside the scope
+ * of the DMA buffer API.
+ */
 struct dma_buf_sync {
+	/**
+	 * @flags: Set of access flags
+	 *
+	 * DMA_BUF_SYNC_START:
+	 *     Indicates the start of a map access session.
+	 *
+	 * DMA_BUF_SYNC_END:
+	 *     Indicates the end of a map access session.
+	 *
+	 * DMA_BUF_SYNC_READ:
+	 *     Indicates that the mapped DMA buffer will be read by the
+	 *     client via the CPU map.
+	 *
+	 * DMA_BUF_SYNC_WRITE:
+	 *     Indicates that the mapped DMA buffer will be written by the
+	 *     client via the CPU map.
+	 *
+	 * DMA_BUF_SYNC_RW:
+	 *     An alias for DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE.
+	 */
 	__u64 flags;
 };
 
-- 
2.31.1



* [PATCH 4/6] dma-buf: Add an API for exporting sync files (v12)
@ 2021-06-10 21:09 ` Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel; +Cc: Christian König, Jason Ekstrand, Daniel Vetter

Modern userspace APIs like Vulkan are built on an explicit
synchronization model.  This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland.  The client -> compositor half of the synchronization isn't too
bad, at least on intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor.  We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer.  With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's access of the
buffer.  In particular, once we tell the kernel that we're rendering to
the buffer again, any CPU waits on the buffer or GPU dependencies will
wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file.  It's effectively the same as a poll() or I915_GEM_WAIT
except that, instead of CPU-waiting directly, it encapsulates the wait
operation, at the current moment in time, in a sync_file so we can
check/wait on it
later.  As long as the Vulkan driver does the sync_file export from the
dma-buf before we re-introduce it for rendering, it will only contain
fences from the compositor or display.  This allows us to accurately
turn it into a VkFence or VkSemaphore without any over-synchronization.

By making this an ioctl on the dma-buf itself, it allows this new
functionality to be used in an entirely driver-agnostic way without
having access to a DRM fd. This makes it ideal for use in driver-generic
code in Mesa or in a client such as a compositor where the DRM fd may be
hard to reach.
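
To make the snapshot semantics concrete, a hedged userspace sketch
(the function name is ours; error handling omitted):

#include <poll.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static void wait_snapshot(int dmabuf_fd)
{
	/* READ snapshots only the writers: poll() with POLLIN, deferred. */
	struct dma_buf_export_sync_file arg = { .flags = DMA_BUF_SYNC_READ };
	struct pollfd pfd;

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg);

	/* Work submitted against the buffer after this point is ignored. */

	pfd.fd = arg.fd;
	pfd.events = POLLIN;
	poll(&pfd, 1, -1);	/* waits only on the snapshotted fences */
	close(arg.fd);
}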

v2 (Jason Ekstrand):
 - Use a wrapper dma_fence_array of all fences including the new one
   when importing an exclusive fence.

v3 (Jason Ekstrand):
 - Lock around setting shared fences as well as exclusive
 - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
 - Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
 - Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
 - Rename the IOCTLs to import/export rather than wait/signal
 - Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
 - Drop the sync_file import as it was all-around sketchy and not nearly
   as useful as export.
 - Re-introduce READ/WRITE flag support for export
 - Rework the commit message

v7 (Jason Ekstrand):
 - Require at least one sync flag
 - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
 - Use _rcu helpers since we're accessing the dma_resv read-only

v8 (Jason Ekstrand):
 - Return -ENOMEM if the sync_file_create fails
 - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)

v9 (Jason Ekstrand):
 - Add documentation for the new ioctl

v10 (Jason Ekstrand):
 - Go back to dma_buf_sync_file as the ioctl struct name

v11 (Daniel Vetter):
 - Go back to dma_buf_export_sync_file as the ioctl struct name
 - Better kerneldoc describing what the read/write flags do

v12 (Christian König):
 - Document why we chose to make it an ioctl on dma-buf

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Simon Ser <contact@emersion.fr>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 67 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++
 2 files changed, 102 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 511fe0d217a08..41b14b53cdda3 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * Note that this only signals the completion of the respective fences, i.e. the
  * DMA transfers are complete. Cache flushing and any other necessary
  * preparations before CPU access can begin still need to happen.
+ *
+ * As an alternative to poll(), the set of fences on a DMA buffer can be
+ * exported as a &sync_file using &dma_buf_export_sync_file.
  */
 
 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return ret;
 }
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_export_sync_file arg;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	if (arg.flags & DMA_BUF_SYNC_WRITE) {
+		fence = dma_resv_get_singleton(dmabuf->resv);
+		if (IS_ERR(fence)) {
+			ret = PTR_ERR(fence);
+			goto err_put_fd;
+		}
+	} else if (arg.flags & DMA_BUF_SYNC_READ) {
+		fence = dma_resv_get_excl_unlocked(dmabuf->resv);
+	}
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+
+	dma_fence_put(fence);
+
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 1c131002fe1ee..82f12a4640403 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -81,6 +81,40 @@ struct dma_buf_sync {
 
 #define DMA_BUF_NAME_LEN	32
 
+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins.  This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately.  This is
+ * useful for modern graphics APIs such as Vulkan which assume an explicit
+ * synchronization model but still need to inter-operate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+	 *
+	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+	 * the returned sync file waits on any writers of the dma-buf to
+	 * complete.  Waiting on the returned sync file is equivalent to
+	 * poll() with POLLIN.
+	 *
+	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+	 * any users of the dma-buf (read or write) to complete.  Waiting
+	 * on the returned sync file is equivalent to poll() with POLLOUT.
+	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+	 * is equivalent to just DMA_BUF_SYNC_WRITE.
+	 */
+	__u32 flags;
+	/** @fd: Returned sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -90,5 +124,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
 
 #endif
-- 
2.31.1



* [PATCH 5/6] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
@ 2021-06-10 21:09 ` Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

For dma-buf sync_file import, we want to get all the fences on a
dma_resv plus one more.  We could wrap the fence we get back in an array
fence or we could make dma_resv_get_singleton_unlocked take "one more"
to make this case easier.
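
That is, after this patch the import path can do, as a sketch:

	/* One fence covering everything on @resv plus @new_fence. */
	singleton = dma_resv_get_singleton(resv, new_fence);

instead of hand-building a wrapper dma_fence_array around the two.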

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c  |  2 +-
 drivers/dma-buf/dma-resv.c | 23 +++++++++++++++++++++--
 include/linux/dma-resv.h   |  3 ++-
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 41b14b53cdda3..831828d71b646 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -389,7 +389,7 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 		return fd;
 
 	if (arg.flags & DMA_BUF_SYNC_WRITE) {
-		fence = dma_resv_get_singleton(dmabuf->resv);
+		fence = dma_resv_get_singleton(dmabuf->resv, NULL);
 		if (IS_ERR(fence)) {
 			ret = PTR_ERR(fence);
 			goto err_put_fd;
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 1b26aa7e5d81c..7c48c23239b4b 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -504,6 +504,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences);
 /**
  * dma_resv_get_singleton - get a single fence for the dma_resv object
  * @obj: the reservation object
+ * @extra: extra fence to add to the resulting array
  *
  * Get a single fence representing all unsignaled fences in the dma_resv object
  * plus the given extra fence. If we got only one fence return a new
@@ -512,7 +513,8 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences);
  * RETURNS
  * The singleton dma_fence on success or an ERR_PTR on failure
  */
-struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
+struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj,
+					 struct dma_fence *extra)
 {
 	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
 	struct dma_fence_array *array;
@@ -523,7 +525,7 @@ struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
 	if (err)
 		return ERR_PTR(err);
 
-	if (num_resv_fences == 0)
+	if (num_resv_fences == 0 && !extra)
 		return NULL;
 
 	num_fences = 0;
@@ -539,6 +541,16 @@ struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
 		}
 	}
 
+	if (extra) {
+		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
+			if (dma_fence_is_signaled(fence))
+				continue;
+
+			result = fence;
+			++num_fences;
+		}
+	}
+
 	if (num_fences <= 1) {
 		result = dma_fence_get(result);
 		goto put_resv_fences;
@@ -559,6 +571,13 @@ struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
 		}
 	}
 
+	if (extra) {
+		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
+			if (!dma_fence_is_signaled(fence))
+				fences[num_fences++] = dma_fence_get(fence);
+		}
+	}
+
 	if (num_fences <= 1) {
 		result = num_fences ? fences[0] : NULL;
 		kfree(fences);
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index d60982975a786..f970e03fc1a08 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -275,7 +275,8 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
 			unsigned *pshared_count, struct dma_fence ***pshared);
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
-struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj);
+struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj,
+					 struct dma_fence *extra);
 long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
 			   unsigned long timeout);
 bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all);
-- 
2.31.1



* [PATCH 6/6] RFC: dma-buf: Add an API for importing sync files (v7)
@ 2021-06-10 21:09 ` Jason Ekstrand
From: Jason Ekstrand @ 2021-06-10 21:09 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Christian König, Jason Ekstrand

This patch is analogous to the previous sync file export patch in that
it allows you to import a sync_file into a dma-buf.  Unlike the previous
patch, however, this does add genuinely new functionality to dma-buf.
Without this, the only way to attach a sync_file to a dma-buf is to
submit a batch to your driver of choice which waits on the sync_file and
claims to write to the dma-buf.  Even if said batch is a no-op, a submit
is typically way more overhead than just attaching a fence.  A submit
may also imply extra synchronization with other work because it happens
on a hardware queue.

In the Vulkan world, this is useful for dealing with the out-fence from
vkQueuePresent.  Current Linux window-systems (X11, Wayland, etc.) all
rely on dma-buf implicit sync.  Since Vulkan is an explicit sync API, we
get a set of fences (VkSemaphores) in vkQueuePresent and have to stash
those as an exclusive (write) fence on the dma-buf.  We handle it in
Mesa today with the above mentioned dummy submit trick.  This ioctl
would allow us to set it directly without the dummy submit.
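
A hedged sketch of that path (names per this patch's uapi; it assumes
the driver already holds the present's wait semaphores as a sync_file
fd, e.g. one exported via VK_KHR_external_semaphore_fd):

#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static void attach_present_fence(int dmabuf_fd, int sync_fd)
{
	/* The ioctl requires DMA_BUF_SYNC_RW; see below. */
	struct dma_buf_import_sync_file arg = {
		.flags = DMA_BUF_SYNC_RW,
		.fd = sync_fd,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &arg);
	close(sync_fd);
}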

This may also open up possibilities for GPU drivers to move away from
implicit sync for their kernel driver uAPI and instead provide sync
files and rely on dma-buf import/export for communicating with other
implicit sync clients.

We make the explicit choice here to only allow setting RW fences which
translates to an exclusive fence on the dma_resv.  There's no use for
read-only fences for communicating with other implicit sync userspace
and any such attempts are likely to be racy at best.  When we go to
insert the RW fence, the actual fence we set as the new exclusive fence
is a combination of the sync_file provided by the user and all the other
fences on the dma_resv.  This ensures that the newly added exclusive
fence will never signal before the old one would have and ensures that
we don't break any dma_resv contracts.  We require userspace to specify
RW in the flags for symmetry with the export ioctl and in case we ever
want to support read fences in the future.

There is one downside here that's worth documenting:  If two clients
writing to the same dma-buf using this API race with each other, their
actions on the dma-buf may happen in parallel or in an undefined order.
Both with and without this API, the pattern is the same:  Collect all
the fences on dma-buf, submit work which depends on said fences, and
then set a new exclusive (write) fence on the dma-buf which depends on
said work.  The difference is that, when it's all handled by the GPU
driver's submit ioctl, the three operations happen atomically under the
dma_resv lock.  If two userspace submits race, one will happen before
the other.  You aren't guaranteed which but you are guaranteed that
they're strictly ordered.  If userspace manages the fences itself, then
these three operations happen separately and the two render operations
may happen genuinely in parallel or get interleaved.  However, this is a
case of userspace racing with itself.  As long as we ensure userspace
can't back the kernel into a corner, it should be fine.

v2 (Jason Ekstrand):
 - Use a wrapper dma_fence_array of all fences including the new one
   when importing an exclusive fence.

v3 (Jason Ekstrand):
 - Lock around setting shared fences as well as exclusive
 - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
 - Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
 - Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
 - Rename the IOCTLs to import/export rather than wait/signal
 - Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
 - Split import and export into separate patches
 - New commit message

v7 (Daniel Vetter):
 - Fix the uapi header to use the right struct in the ioctl
 - Use a separate dma_buf_import_sync_file struct
 - Add kerneldoc for dma_buf_import_sync_file

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 831828d71b646..88afd723015a2 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 	put_unused_fd(fd);
 	return ret;
 }
+
+static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
+				     const void __user *user_data)
+{
+	struct dma_buf_import_sync_file arg;
+	struct dma_fence *fence, *singleton = NULL;
+	int ret = 0;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(arg.fd);
+	if (!fence)
+		return -EINVAL;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+
+	singleton = dma_resv_get_singleton(dmabuf->resv, fence);
+	if (IS_ERR(singleton)) {
+		ret = PTR_ERR(singleton);
+	} else if (singleton) {
+		dma_resv_add_excl_fence(dmabuf->resv, singleton);
+		dma_resv_add_shared_fence(dmabuf->resv, singleton);
+	}
+
+	dma_resv_unlock(dmabuf->resv);
+
+	dma_fence_put(fence);
+
+	return ret;
+}
 #endif
 
 static long dma_buf_ioctl(struct file *file,
@@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file,
 #if IS_ENABLED(CONFIG_SYNC_FILE)
 	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
 		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
+		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
 #endif
 
 	default:
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 82f12a4640403..7382fd67351ba 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -115,6 +115,27 @@ struct dma_buf_export_sync_file {
 	__s32 fd;
 };
 
+/**
+ * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
+ * sync_file into a dma-buf for the purposes of implicit synchronization
+ * with other dma-buf consumers.  This allows clients using explicitly
+ * synchronized APIs such as Vulkan to inter-op with dma-buf consumers
+ * which expect implicit synchronization such as OpenGL or most media
+ * drivers/video.
+ */
+struct dma_buf_import_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_RW.
+	 */
+	__u32 flags;
+	/** @fd: Sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -125,5 +146,6 @@ struct dma_buf_export_sync_file {
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
 #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
+#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
 
 #endif
-- 
2.31.1




* Re: [PATCH 2/6] dma-buf: Add dma_resv_get_singleton (v6)
@ 2021-06-11  7:11   ` Christian König
From: Christian König @ 2021-06-11  7:11 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel; +Cc: Daniel Vetter

Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
> Add a helper function to get a single fence representing
> all fences in a dma_resv object.
>
> This fence is either the only one in the object or all the
> unsignaled fences of the object combined into a flattened-out
> dma_fence_array.
>
> v2 (Jason Ekstrand):
>   - Take reference of fences both for creating the dma_fence_array and in
>     the case where we return one fence.
>   - Handle the case where dma_resv_get_list() returns NULL
>
> v3 (Jason Ekstrand):
>   - Add an _rcu suffix because it is read-only
>   - Rewrite to use dma_resv_get_fences_rcu so it's RCU-safe
>   - Add an EXPORT_SYMBOL_GPL declaration
>   - Re-author the patch to Jason since very little is left of Christian
>     König's original patch
>   - Remove the extra fence argument
>
> v4 (Jason Ekstrand):
>   - Restore the extra fence argument
>
> v5 (Daniel Vetter):
>   - Rename from _rcu to _unlocked since it doesn't leak RCU details to
>     the caller
>   - Fix docs
>   - Use ERR_PTR for error handling rather than an output dma_fence**
>
> v5 (Jason Ekstrand):
>   - Drop the extra fence param and leave that to a separate patch
>
> v6 (Jason Ekstrand):
>   - Rename to dma_resv_get_singleton to match the new naming convention
>     for dma_resv helpers which work without taking a lock.
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Reviewed-by: Christian König <christian.koenig@amd.com>

> Cc: Christian König <christian.koenig@amd.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>   drivers/dma-buf/dma-resv.c | 91 ++++++++++++++++++++++++++++++++++++++
>   include/linux/dma-resv.h   |  1 +
>   2 files changed, 92 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index f26c71747d43a..1b26aa7e5d81c 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -34,6 +34,8 @@
>    */
>   
>   #include <linux/dma-resv.h>
> +#include <linux/dma-fence-chain.h>
> +#include <linux/dma-fence-array.h>
>   #include <linux/export.h>
>   #include <linux/mm.h>
>   #include <linux/sched/mm.h>
> @@ -50,6 +52,10 @@
>    * write-side updates.
>    */
>   
> +#define dma_fence_deep_dive_for_each(fence, chain, index, head)	\
> +	dma_fence_chain_for_each(chain, head)			\
> +		dma_fence_array_for_each(fence, index, chain)
> +
>   DEFINE_WD_CLASS(reservation_ww_class);
>   EXPORT_SYMBOL(reservation_ww_class);
>   
> @@ -495,6 +501,91 @@ int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
>   }
>   EXPORT_SYMBOL_GPL(dma_resv_get_fences);
>   
> +/**
> + * dma_resv_get_singleton - get a single fence for the dma_resv object
> + * @obj: the reservation object
> + *
> + * Get a single fence representing all unsignaled fences in the dma_resv object
> + * plus the given extra fence. If we got only one fence return a new
> + * reference to that, otherwise return a dma_fence_array object.
> + *
> + * RETURNS
> + * The singleton dma_fence on success or an ERR_PTR on failure
> + */
> +struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
> +{
> +	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
> +	struct dma_fence_array *array;
> +	unsigned int num_resv_fences, num_fences, i, j;
> +	int err;
> +
> +	err = dma_resv_get_fences(obj, NULL, &num_resv_fences, &resv_fences);
> +	if (err)
> +		return ERR_PTR(err);
> +
> +	if (num_resv_fences == 0)
> +		return NULL;
> +
> +	num_fences = 0;
> +	result = NULL;
> +
> +	for (i = 0; i < num_resv_fences; ++i) {
> +		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
> +			if (dma_fence_is_signaled(fence))
> +				continue;
> +
> +			result = fence;
> +			++num_fences;
> +		}
> +	}
> +
> +	if (num_fences <= 1) {
> +		result = dma_fence_get(result);
> +		goto put_resv_fences;
> +	}
> +
> +	fences = kmalloc_array(num_fences, sizeof(struct dma_fence *),
> +			       GFP_KERNEL);
> +	if (!fences) {
> +		result = ERR_PTR(-ENOMEM);
> +		goto put_resv_fences;
> +	}
> +
> +	num_fences = 0;
> +	for (i = 0; i < num_resv_fences; ++i) {
> +		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
> +			if (!dma_fence_is_signaled(fence))
> +				fences[num_fences++] = dma_fence_get(fence);
> +		}
> +	}
> +
> +	if (num_fences <= 1) {
> +		result = num_fences ? fences[0] : NULL;
> +		kfree(fences);
> +		goto put_resv_fences;
> +	}
> +
> +	array = dma_fence_array_create(num_fences, fences,
> +				       dma_fence_context_alloc(1),
> +				       1, false);
> +	if (array) {
> +		result = &array->base;
> +	} else {
> +		result = ERR_PTR(-ENOMEM);
> +		while (num_fences--)
> +			dma_fence_put(fences[num_fences]);
> +		kfree(fences);
> +	}
> +
> +put_resv_fences:
> +	while (num_resv_fences--)
> +		dma_fence_put(resv_fences[num_resv_fences]);
> +	kfree(resv_fences);
> +
> +	return result;
> +}
> +EXPORT_SYMBOL_GPL(dma_resv_get_singleton);
> +
>   /**
>    * dma_resv_wait_timeout - Wait on reservation's objects
>    * shared and/or exclusive fences.
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 562b885cf9c3d..d60982975a786 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -275,6 +275,7 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>   int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
>   			unsigned *pshared_count, struct dma_fence ***pshared);
>   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> +struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj);
>   long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
>   			   unsigned long timeout);
>   bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all);



* Re: [PATCH 3/6] dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
@ 2021-06-11  7:24   ` Christian König
From: Christian König @ 2021-06-11  7:24 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel; +Cc: Daniel Vetter

Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
> This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
> documentation for DMA_BUF_IOCTL_SYNC.
>
> v2 (Daniel Vetter):
>   - Fix a couple typos
>   - Add commentary about synchronization with other devices
>   - Use item list format for describing flags
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>

Acked-by: Christian König <christian.koenig@amd.com>

> ---
>   Documentation/driver-api/dma-buf.rst |  8 +++++
>   include/uapi/linux/dma-buf.h         | 46 +++++++++++++++++++++++++++-
>   2 files changed, 53 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> index 7f21425d9435a..0d4c13ec1a800 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -88,6 +88,9 @@ consider though:
>   - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
>     details.
>   
> +- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
> +  `DMA Buffer ioctls`_ below for details.
> +
>   Basic Operation and Device DMA Access
>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   
> @@ -106,6 +109,11 @@ Implicit Fence Poll Support
>   .. kernel-doc:: drivers/dma-buf/dma-buf.c
>      :doc: implicit fence polling
>   
> +DMA Buffer ioctls
> +~~~~~~~~~~~~~~~~~
> +
> +.. kernel-doc:: include/uapi/linux/dma-buf.h
> +
>   Kernel Functions and Structures Reference
>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index 7f30393b92c3b..1c131002fe1ee 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -22,8 +22,52 @@
>   
>   #include <linux/types.h>
>   
> -/* begin/end dma-buf functions used for userspace mmap. */
> +/**
> + * struct dma_buf_sync - Synchronize with CPU access.
> + *
> + * When a DMA buffer is accessed from the CPU via mmap, it is not always
> + * possible to guarantee coherency between the CPU-visible map and underlying
> + * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
> + * any CPU access to give the kernel the chance to shuffle memory around if
> + * needed.
> + *
> + * Prior to accessing the map, the client must call DMA_BUF_IOCTL_SYNC
> + * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
> + * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
> + * DMA_BUF_SYNC_END and the same read/write flags.
> + *
> + * The synchronization provided via DMA_BUF_IOCTL_SYNC only provides cache
> + * coherency.  It does not prevent other processes or devices from
> + * accessing the memory at the same time.  If synchronization with a GPU or
> + * other device driver is required, it is the client's responsibility to
> + * wait for the buffer to be ready for reading or writing.  If the driver or
> + * API with which the client is interacting uses implicit synchronization,
> + * this can be done via poll() on the DMA buffer file descriptor.  If the
> + * driver or API requires explicit synchronization, the client may have to
> + * wait on a sync_file or other synchronization primitive outside the scope
> + * of the DMA buffer API.
> + */
>   struct dma_buf_sync {
> +	/**
> +	 * @flags: Set of access flags
> +	 *
> +	 * DMA_BUF_SYNC_START:
> +	 *     Indicates the start of a map access session.
> +	 *
> +	 * DMA_BUF_SYNC_END:
> +	 *     Indicates the end of a map access session.
> +	 *
> +	 * DMA_BUF_SYNC_READ:
> +	 *     Indicates that the mapped DMA buffer will be read by the
> +	 *     client via the CPU map.
> +	 *
> +	 * DMA_BUF_SYNC_WRITE:
> +	 *     Indicates that the mapped DMA buffer will be written by the
> +	 *     client via the CPU map.
> +	 *
> +	 * DMA_BUF_SYNC_RW:
> +	 *     An alias for DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE.
> +	 */
>   	__u64 flags;
>   };
>   
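
For illustration, a minimal userspace sketch of the bracketing described
above (a hedged example; "dmabuf_fd" is assumed to be a dma-buf file
descriptor whose CPU mapping is accessed between the two calls, and error
handling is trimmed):

	#include <linux/dma-buf.h>
	#include <sys/ioctl.h>

	struct dma_buf_sync sync = { 0 };

	/* Begin a CPU write access session. */
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	/* ... CPU writes through the mmap()ed pointer ... */

	/* End the session with the same read/write flags. */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);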


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/6] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
  2021-06-10 21:09 ` [PATCH 5/6] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked Jason Ekstrand
@ 2021-06-11  7:44   ` Christian König
  0 siblings, 0 replies; 34+ messages in thread
From: Christian König @ 2021-06-11  7:44 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel; +Cc: Daniel Vetter

On 10.06.21 at 23:09, Jason Ekstrand wrote:
> For dma-buf sync_file import, we want to get all the fences on a
> dma_resv plus one more.  We could wrap the fence we get back in an array
> fence or we could make dma_resv_get_singleton_unlocked take "one more"
> to make this case easier.
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Reviewed-by: Christian König <christian.koenig@amd.com>

> Cc: Christian König <christian.koenig@amd.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>   drivers/dma-buf/dma-buf.c  |  2 +-
>   drivers/dma-buf/dma-resv.c | 23 +++++++++++++++++++++--
>   include/linux/dma-resv.h   |  3 ++-
>   3 files changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 41b14b53cdda3..831828d71b646 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -389,7 +389,7 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
>   		return fd;
>   
>   	if (arg.flags & DMA_BUF_SYNC_WRITE) {
> -		fence = dma_resv_get_singleton(dmabuf->resv);
> +		fence = dma_resv_get_singleton(dmabuf->resv, NULL);
>   		if (IS_ERR(fence)) {
>   			ret = PTR_ERR(fence);
>   			goto err_put_fd;
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 1b26aa7e5d81c..7c48c23239b4b 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -504,6 +504,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences);
>   /**
>    * dma_resv_get_singleton - get a single fence for the dma_resv object
>    * @obj: the reservation object
> + * @extra: extra fence to add to the resulting array
>    *
>    * Get a single fence representing all unsignaled fences in the dma_resv object
>    * plus the given extra fence. If we got only one fence return a new
> @@ -512,7 +513,8 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences);
>    * RETURNS
>    * The singleton dma_fence on success or an ERR_PTR on failure
>    */
> -struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
> +struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj,
> +					 struct dma_fence *extra)
>   {
>   	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
>   	struct dma_fence_array *array;
> @@ -523,7 +525,7 @@ struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
>   	if (err)
>   		return ERR_PTR(err);
>   
> -	if (num_resv_fences == 0)
> +	if (num_resv_fences == 0 && !extra)
>   		return NULL;
>   
>   	num_fences = 0;
> @@ -539,6 +541,16 @@ struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
>   		}
>   	}
>   
> +	if (extra) {
> +		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
> +			if (dma_fence_is_signaled(fence))
> +				continue;
> +
> +			result = fence;
> +			++num_fences;
> +		}
> +	}
> +
>   	if (num_fences <= 1) {
>   		result = dma_fence_get(result);
>   		goto put_resv_fences;
> @@ -559,6 +571,13 @@ struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj)
>   		}
>   	}
>   
> +	if (extra) {
> +		dma_fence_deep_dive_for_each(fence, chain, j, extra) {
> +			if (!dma_fence_is_signaled(fence))
> +				fences[num_fences++] = dma_fence_get(fence);
> +		}
> +	}
> +
>   	if (num_fences <= 1) {
>   		result = num_fences ? fences[0] : NULL;
>   		kfree(fences);
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index d60982975a786..f970e03fc1a08 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -275,7 +275,8 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
>   int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl,
>   			unsigned *pshared_count, struct dma_fence ***pshared);
>   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> -struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj);
> +struct dma_fence *dma_resv_get_singleton(struct dma_resv *obj,
> +					 struct dma_fence *extra);
>   long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
>   			   unsigned long timeout);
>   bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all);


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-10 21:09 ` [PATCH 4/6] dma-buf: Add an API for exporting sync files (v12) Jason Ekstrand
@ 2021-06-13 18:26   ` Jason Ekstrand
  2021-10-20 20:31   ` Simon Ser
  1 sibling, 0 replies; 34+ messages in thread
From: Jason Ekstrand @ 2021-06-13 18:26 UTC (permalink / raw)
  To: dri-devel; +Cc: Christian König, Jason Ekstrand, Daniel Vetter

On June 10, 2021 16:09:49 Jason Ekstrand <jason@jlekstrand.net> wrote:

> Modern userspace APIs like Vulkan are built on an explicit
> synchronization model.  This doesn't always play nicely with the
> implicit synchronization used in the kernel and assumed by X11 and
> Wayland.  The client -> compositor half of the synchronization isn't too
> bad, at least on intel, because we can control whether or not i915
> synchronizes on the buffer and whether or not it's considered written.
>
> The harder part is the compositor -> client synchronization when we get
> the buffer back from the compositor.  We're required to be able to
> provide the client with a VkSemaphore and VkFence representing the point
> in time where the window system (compositor and/or display) finished
> using the buffer.  With current APIs, it's very hard to do this in such
> a way that we don't get confused by the Vulkan driver's access of the
> buffer.  In particular, once we tell the kernel that we're rendering to
> the buffer again, any CPU waits on the buffer or GPU dependencies will
> wait on some of the client rendering and not just the compositor.
>
> This new IOCTL solves this problem by allowing us to get a snapshot of
> the implicit synchronization state of a given dma-buf in the form of a
> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> instead of CPU waiting directly, it encapsulates the wait operation, at
> the current moment in time, in a sync_file so we can check/wait on it
> later.  As long as the Vulkan driver does the sync_file export from the
> dma-buf before we re-introduce it for rendering, it will only contain
> fences from the compositor or display.  This allows us to accurately turn
> it into a VkFence or VkSemaphore without any over-synchronization.

FYI, the Mesa MR for using this ioctl (the export one) has now been
reviewed. As far as I'm concerned, we're ready to land patches 1-4 from
this series.

--Jason


> By making this an ioctl on the dma-buf itself, it allows this new
> functionality to be used in an entirely driver-agnostic way without
> having access to a DRM fd. This makes it ideal for use in driver-generic
> code in Mesa or in a client such as a compositor where the DRM fd may be
> hard to reach.
>
> v2 (Jason Ekstrand):
> - Use a wrapper dma_fence_array of all fences including the new one
>   when importing an exclusive fence.
>
> v3 (Jason Ekstrand):
> - Lock around setting shared fences as well as exclusive
> - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
> - Initialize ret to 0 in dma_buf_wait_sync_file
>
> v4 (Jason Ekstrand):
> - Use the new dma_resv_get_singleton helper
>
> v5 (Jason Ekstrand):
> - Rename the IOCTLs to import/export rather than wait/signal
> - Drop the WRITE flag and always get/set the exclusive fence
>
> v6 (Jason Ekstrand):
> - Drop the sync_file import as it was all-around sketchy and not nearly
>   as useful as import.
> - Re-introduce READ/WRITE flag support for export
> - Rework the commit message
>
> v7 (Jason Ekstrand):
> - Require at least one sync flag
> - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
> - Use _rcu helpers since we're accessing the dma_resv read-only
>
> v8 (Jason Ekstrand):
> - Return -ENOMEM if the sync_file_create fails
> - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)
>
> v9 (Jason Ekstrand):
> - Add documentation for the new ioctl
>
> v10 (Jason Ekstrand):
> - Go back to dma_buf_sync_file as the ioctl struct name
>
> v11 (Daniel Vetter):
> - Go back to dma_buf_export_sync_file as the ioctl struct name
> - Better kerneldoc describing what the read/write flags do
>
> v12 (Christian König):
> - Document why we chose to make it an ioctl on dma-buf
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Acked-by: Simon Ser <contact@emersion.fr>
> Acked-by: Christian König <christian.koenig@amd.com>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
> drivers/dma-buf/dma-buf.c    | 67 ++++++++++++++++++++++++++++++++++++
> include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++
> 2 files changed, 102 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 511fe0d217a08..41b14b53cdda3 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -20,6 +20,7 @@
>  #include <linux/debugfs.h>
>  #include <linux/module.h>
>  #include <linux/seq_file.h>
> +#include <linux/sync_file.h>
>  #include <linux/poll.h>
>  #include <linux/dma-resv.h>
>  #include <linux/mm.h>
> @@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
>   * Note that this only signals the completion of the respective fences, i.e. the
>   * DMA transfers are complete. Cache flushing and any other necessary
>   * preparations before CPU access can begin still need to happen.
> + *
> + * As an alternative to poll(), the set of fences on a DMA buffer can be
> + * exported as a &sync_file using &dma_buf_export_sync_file.
>   */
>
>  static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
> @@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
>  	return ret;
>  }
>
> +#if IS_ENABLED(CONFIG_SYNC_FILE)
> +static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
> +				     void __user *user_data)
> +{
> +	struct dma_buf_export_sync_file arg;
> +	struct dma_fence *fence = NULL;
> +	struct sync_file *sync_file;
> +	int fd, ret;
> +
> +	if (copy_from_user(&arg, user_data, sizeof(arg)))
> +		return -EFAULT;
> +
> +	if (arg.flags & ~DMA_BUF_SYNC_RW)
> +		return -EINVAL;
> +
> +	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
> +		return -EINVAL;
> +
> +	fd = get_unused_fd_flags(O_CLOEXEC);
> +	if (fd < 0)
> +		return fd;
> +
> +	if (arg.flags & DMA_BUF_SYNC_WRITE) {
> +		fence = dma_resv_get_singleton(dmabuf->resv);
> +		if (IS_ERR(fence)) {
> +			ret = PTR_ERR(fence);
> +			goto err_put_fd;
> +		}
> +	} else if (arg.flags & DMA_BUF_SYNC_READ) {
> +		fence = dma_resv_get_excl_unlocked(dmabuf->resv);
> +	}
> +
> +	if (!fence)
> +		fence = dma_fence_get_stub();
> +
> +	sync_file = sync_file_create(fence);
> +
> +	dma_fence_put(fence);
> +
> +	if (!sync_file) {
> +		ret = -ENOMEM;
> +		goto err_put_fd;
> +	}
> +
> +	fd_install(fd, sync_file->file);
> +
> +	arg.fd = fd;
> +	if (copy_to_user(user_data, &arg, sizeof(arg)))
> +		return -EFAULT;
> +
> +	return 0;
> +
> +err_put_fd:
> +	put_unused_fd(fd);
> +	return ret;
> +}
> +#endif
> +
>  static long dma_buf_ioctl(struct file *file,
>  			  unsigned int cmd, unsigned long arg)
>  {
> @@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file,
>  	case DMA_BUF_SET_NAME_B:
>  		return dma_buf_set_name(dmabuf, (const char __user *)arg);
>
> +#if IS_ENABLED(CONFIG_SYNC_FILE)
> +	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
> +		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
> +#endif
> +
>  	default:
>  		return -ENOTTY;
>  	}
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index 1c131002fe1ee..82f12a4640403 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -81,6 +81,40 @@ struct dma_buf_sync {
>
>  #define DMA_BUF_NAME_LEN 32
>
> +/**
> + * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
> + *
> + * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
> + * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
> + * waits via poll() or other driver-specific mechanisms typically wait on
> + * whatever fences are on the dma-buf at the time the wait begins.  This
> + * is similar except that it takes a snapshot of the current fences on the
> + * dma-buf for waiting later instead of waiting immediately.  This is
> + * useful for modern graphics APIs such as Vulkan which assume an explicit
> + * synchronization model but still need to inter-operate with dma-buf.
> + */
> +struct dma_buf_export_sync_file {
> +	/**
> +	 * @flags: Read/write flags
> +	 *
> +	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
> +	 *
> +	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
> +	 * the returned sync file waits on any writers of the dma-buf to
> +	 * complete.  Waiting on the returned sync file is equivalent to
> +	 * poll() with POLLIN.
> +	 *
> +	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
> +	 * any users of the dma-buf (read or write) to complete.  Waiting
> +	 * on the returned sync file is equivalent to poll() with POLLOUT.
> +	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
> +	 * is equivalent to just DMA_BUF_SYNC_WRITE.
> +	 */
> +	__u32 flags;
> +	/** @fd: Returned sync file descriptor */
> +	__s32 fd;
> +};
> +
>  #define DMA_BUF_BASE		'b'
>  #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>
> @@ -90,5 +124,6 @@ struct dma_buf_sync {
>  #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
>  #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
>  #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
> +#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
>
>  #endif
> --
> 2.31.1
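
For illustration, a minimal userspace sketch of the export path added here
(a hedged example; "dmabuf_fd" is assumed to be a dma-buf file descriptor
and error handling is trimmed; since sync_file fds are pollable, poll() can
perform the deferred wait):

	#include <linux/dma-buf.h>
	#include <poll.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	struct dma_buf_export_sync_file args = {
		.flags = DMA_BUF_SYNC_READ,	/* snapshot only the writers */
		.fd = -1,
	};

	/* Snapshot the fences currently on the dma-buf into a sync_file. */
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args);

	/* ... later: wait on the snapshot, not on any newer work ... */
	struct pollfd pfd = { .fd = args.fd, .events = POLLIN };
	poll(&pfd, 1, -1);
	close(args.fd);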


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/6] dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
  2021-06-10 21:14     ` [Intel-gfx] " Jason Ekstrand
@ 2021-06-15  7:10       ` Pekka Paalanen
  -1 siblings, 0 replies; 34+ messages in thread
From: Pekka Paalanen @ 2021-06-15  7:10 UTC (permalink / raw)
  To: Jason Ekstrand; +Cc: Daniel Vetter, intel-gfx, Christian König, dri-devel

On Thu, 10 Jun 2021 16:14:42 -0500
Jason Ekstrand <jason@jlekstrand.net> wrote:

> This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
> documentation for DMA_BUF_IOCTL_SYNC.
> 
> v2 (Daniel Vetter):
>  - Fix a couple typos
>  - Add commentary about synchronization with other devices
>  - Use item list format for describing flags
> 
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> ---
>  Documentation/driver-api/dma-buf.rst |  8 +++++
>  include/uapi/linux/dma-buf.h         | 46 +++++++++++++++++++++++++++-
>  2 files changed, 53 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> index 7f21425d9435a..0d4c13ec1a800 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -88,6 +88,9 @@ consider though:
>  - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
>    details.
>  
> +- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
> +  `DMA Buffer ioctls`_ below for details.
> +
>  Basic Operation and Device DMA Access
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> @@ -106,6 +109,11 @@ Implicit Fence Poll Support
>  .. kernel-doc:: drivers/dma-buf/dma-buf.c
>     :doc: implicit fence polling
>  
> +DMA Buffer ioctls
> +~~~~~~~~~~~~~~~~~
> +
> +.. kernel-doc:: include/uapi/linux/dma-buf.h
> +
>  Kernel Functions and Structures Reference
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index 7f30393b92c3b..1c131002fe1ee 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -22,8 +22,52 @@
>  
>  #include <linux/types.h>
>  
> -/* begin/end dma-buf functions used for userspace mmap. */
> +/**
> + * struct dma_buf_sync - Synchronize with CPU access.
> + *
> + * When a DMA buffer is accessed from the CPU via mmap, it is not always
> + * possible to guarantee coherency between the CPU-visible map and underlying
> + * memory.  To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket
> + * any CPU access to give the kernel the chance to shuffle memory around if
> + * needed.
> + *
> + * Prior to accessing the map, the client must call DMA_BUF_IOCTL_SYNC
> + * with DMA_BUF_SYNC_START and the appropriate read/write flags.  Once the
> + * access is complete, the client should call DMA_BUF_IOCTL_SYNC with
> + * DMA_BUF_SYNC_END and the same read/write flags.
> + *
> + * The synchronization provided via DMA_BUF_IOCTL_SYNC only provides cache
> + * coherency.  It does not prevent other processes or devices from
> + * accessing the memory at the same time.  If synchronization with a GPU or
> + * other device driver is required, it is the client's responsibility to
> + * wait for the buffer to be ready for reading or writing.

... before calling this ioctl.

Maybe that would be worthwhile to add?

Likewise, submit follow-up work to GPU et al. only after calling this
ioctl with SYNC_END?

Anyway, looks nice to me.

Acked-by: Pekka Paalanen <pekka.paalanen@collabora.com>


Thanks,
pq

>  If the driver or
> + * API with which the client is interacting uses implicit synchronization,
> + * this can be done via poll() on the DMA buffer file descriptor.  If the
> + * driver or API requires explicit synchronization, the client may have to
> + * wait on a sync_file or other synchronization primitive outside the scope
> + * of the DMA buffer API.
> + */
>  struct dma_buf_sync {
> +	/**
> +	 * @flags: Set of access flags
> +	 *
> +	 * DMA_BUF_SYNC_START:
> +	 *     Indicates the start of a map access session.
> +	 *
> +	 * DMA_BUF_SYNC_END:
> +	 *     Indicates the end of a map access session.
> +	 *
> +	 * DMA_BUF_SYNC_READ:
> +	 *     Indicates that the mapped DMA buffer will be read by the
> +	 *     client via the CPU map.
> +	 *
> +	 * DMA_BUF_SYNC_WRITE:
> +	 *     Indicates that the mapped DMA buffer will be written by the
> +	 *     client via the CPU map.
> +	 *
> +	 * DMA_BUF_SYNC_RW:
> +	 *     An alias for DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE.
> +	 */
>  	__u64 flags;
>  };
>  


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-10 21:09 [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12) Jason Ekstrand
                   ` (5 preceding siblings ...)
  2021-06-10 21:09 ` [PATCH 6/6] RFC: dma-buf: Add an API for importing sync files (v7) Jason Ekstrand
@ 2021-06-15  8:41 ` Christian König
  2021-06-16 18:30   ` Jason Ekstrand
  6 siblings, 1 reply; 34+ messages in thread
From: Christian König @ 2021-06-15  8:41 UTC (permalink / raw)
  To: Jason Ekstrand, Daniel Stone
  Cc: Michel Dänzer, dri-devel, wayland-devel, mesa-dev,
	Dave Airlie, Christian König

Hi Jason & Daniel,

maybe I should explain once more where the problem with this approach is 
and why I think we need to get that fixed before we can do something 
like this here.

To summarize what this patch here does is that it copies the exclusive 
fence and/or the shared fences into a sync_file. This alone is totally 
unproblematic.

The problem is what this implies. When you need to copy the exclusive 
fence to a sync_file then this means that the driver is at some point 
ignoring the exclusive fence on a buffer object.

When you combine that with complex drivers which use TTM and buffer 
moves underneath, you can construct an information leak using this and
give userspace access to memory which is allocated to the driver, but 
not yet initialized.

This way you can leak things like page tables, passwords, kernel data 
etc... in large amounts to userspace, which is an absolute no-go for
security.

That's why I said we need to get this fixed before we upstream this
patch set here, and especially the driver change which is using it.

Regards,
Christian.

On 10.06.21 at 23:09, Jason Ekstrand wrote:
> Modern userspace APIs like Vulkan are built on an explicit
> synchronization model.  This doesn't always play nicely with the
> implicit synchronization used in the kernel and assumed by X11 and
> Wayland.  The client -> compositor half of the synchronization isn't too
> bad, at least on intel, because we can control whether or not i915
> synchronizes on the buffer and whether or not it's considered written.
>
> The harder part is the compositor -> client synchronization when we get
> the buffer back from the compositor.  We're required to be able to
> provide the client with a VkSemaphore and VkFence representing the point
> in time where the window system (compositor and/or display) finished
> using the buffer.  With current APIs, it's very hard to do this in such
> a way that we don't get confused by the Vulkan driver's access of the
> buffer.  In particular, once we tell the kernel that we're rendering to
> the buffer again, any CPU waits on the buffer or GPU dependencies will
> wait on some of the client rendering and not just the compositor.
>
> This new IOCTL solves this problem by allowing us to get a snapshot of
> the implicit synchronization state of a given dma-buf in the form of a
> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> instead of CPU waiting directly, it encapsulates the wait operation, at
> the current moment in time, in a sync_file so we can check/wait on it
> later.  As long as the Vulkan driver does the sync_file export from the
> dma-buf before we re-introduce it for rendering, it will only contain
> fences from the compositor or display.  This allows us to accurately turn
> it into a VkFence or VkSemaphore without any over-synchronization.
>
> This patch series actually contains two new ioctls.  There is the export
> one mentioned above as well as an RFC for an import ioctl which provides
> the other half.  The intention is to land the export ioctl since it seems
> like there's no real disagreement on that one.  The import ioctl, however,
> has a lot of debate around it so it's intended to be RFC-only for now.
>
> Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> IGT tests: https://patchwork.freedesktop.org/series/90490/
>
> v10 (Jason Ekstrand, Daniel Vetter):
>   - Add reviews/acks
>   - Add a patch to rename _rcu to _unlocked
>   - Split things better so import is clearly RFC status
>
> v11 (Daniel Vetter):
>   - Add more CCs to try and get maintainers
>   - Add a patch to document DMA_BUF_IOCTL_SYNC
>   - Generally better docs
>   - Use separate structs for import/export (easier to document)
>   - Fix an issue in the import patch
>
> v12 (Daniel Vetter):
>   - Better docs for DMA_BUF_IOCTL_SYNC
>
> v12 (Christian König):
>   - Drop the rename patch in favor of Christian's series
>   - Add a comment to the commit message for the dma-buf sync_file export
>     ioctl saying why we made it an ioctl on dma-buf
>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Michel Dänzer <michel@daenzer.net>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
> Cc: Daniel Stone <daniels@collabora.com>
> Cc: mesa-dev@lists.freedesktop.org
> Cc: wayland-devel@lists.freedesktop.org
> Test-with: 20210524205225.872316-1-jason@jlekstrand.net
>
> Christian König (1):
>    dma-buf: Add dma_fence_array_for_each (v2)
>
> Jason Ekstrand (5):
>    dma-buf: Add dma_resv_get_singleton (v6)
>    dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
>    dma-buf: Add an API for exporting sync files (v12)
>    RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
>    RFC: dma-buf: Add an API for importing sync files (v7)
>
>   Documentation/driver-api/dma-buf.rst |   8 ++
>   drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
>   drivers/dma-buf/dma-fence-array.c    |  27 +++++++
>   drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
>   include/linux/dma-fence-array.h      |  17 +++++
>   include/linux/dma-resv.h             |   2 +
>   include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
>   7 files changed, 369 insertions(+), 1 deletion(-)
>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-15  8:41 ` [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12) Christian König
@ 2021-06-16 18:30   ` Jason Ekstrand
  2021-06-17  7:37     ` Christian König
  0 siblings, 1 reply; 34+ messages in thread
From: Jason Ekstrand @ 2021-06-16 18:30 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Stone, Michel Dänzer, dri-devel,
	wayland-devel @ lists . freedesktop . org, ML mesa-dev,
	Dave Airlie, Christian König

On Tue, Jun 15, 2021 at 3:41 AM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> Hi Jason & Daniel,
>
> maybe I should explain once more where the problem with this approach is
> and why I think we need to get that fixed before we can do something
> like this here.
>
> To summarize what this patch here does is that it copies the exclusive
> fence and/or the shared fences into a sync_file. This alone is totally
> unproblematic.
>
> The problem is what this implies. When you need to copy the exclusive
> fence to a sync_file then this means that the driver is at some point
> ignoring the exclusive fence on a buffer object.

Not necessarily.  Part of the point of this is to allow for CPU waits
on a past point in a buffer's timeline.  Today, we have poll() and
GEM_WAIT both of which wait for the buffer to be idle from whatever
GPU work is currently happening.  We want to wait on something in the
past and ignore anything happening now.

But, to the broader point, maybe?  I'm a little fuzzy on exactly where
i915 inserts and/or depends on fences.

> When you combine that with complex drivers which use TTM and buffer
> moves underneath you can construct an information leak using this and
> give userspace access to memory which is allocated to the driver, but
> not yet initialized.
>
> This way you can leak things like page tables, passwords, kernel data
> etc... in large amounts to userspace and is an absolutely no-go for
> security.

Ugh...  Unfortunately, I'm really out of my depth on the implications
going on here but I think I see your point.

> That's why I said we need to get this fixed before we upstream this
> patch set here, and especially the driver change which is using it.

Well, i915 has had uAPI for a while to ignore fences.  Those changes
are years in the past.  If we have a real problem here (not sure on
that yet), then we'll have to figure out how to fix it without nuking
uAPI.

--Jason


> Regards,
> Christian.
>
> On 10.06.21 at 23:09, Jason Ekstrand wrote:
> > Modern userspace APIs like Vulkan are built on an explicit
> > synchronization model.  This doesn't always play nicely with the
> > implicit synchronization used in the kernel and assumed by X11 and
> > Wayland.  The client -> compositor half of the synchronization isn't too
> > bad, at least on intel, because we can control whether or not i915
> > synchronizes on the buffer and whether or not it's considered written.
> >
> > The harder part is the compositor -> client synchronization when we get
> > the buffer back from the compositor.  We're required to be able to
> > provide the client with a VkSemaphore and VkFence representing the point
> > in time where the window system (compositor and/or display) finished
> > using the buffer.  With current APIs, it's very hard to do this in such
> > a way that we don't get confused by the Vulkan driver's access of the
> > buffer.  In particular, once we tell the kernel that we're rendering to
> > the buffer again, any CPU waits on the buffer or GPU dependencies will
> > wait on some of the client rendering and not just the compositor.
> >
> > This new IOCTL solves this problem by allowing us to get a snapshot of
> > the implicit synchronization state of a given dma-buf in the form of a
> > sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> > instead of CPU waiting directly, it encapsulates the wait operation, at
> > the current moment in time, in a sync_file so we can check/wait on it
> > later.  As long as the Vulkan driver does the sync_file export from the
> > dma-buf before we re-introduce it for rendering, it will only contain
> > fences from the compositor or display.  This allows us to accurately turn
> > it into a VkFence or VkSemaphore without any over-synchronization.
> >
> > This patch series actually contains two new ioctls.  There is the export
> > one mentioned above as well as an RFC for an import ioctl which provides
> > the other half.  The intention is to land the export ioctl since it seems
> > like there's no real disagreement on that one.  The import ioctl, however,
> > has a lot of debate around it so it's intended to be RFC-only for now.
> >
> > Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> > IGT tests: https://patchwork.freedesktop.org/series/90490/
> >
> > v10 (Jason Ekstrand, Daniel Vetter):
> >   - Add reviews/acks
> >   - Add a patch to rename _rcu to _unlocked
> >   - Split things better so import is clearly RFC status
> >
> > v11 (Daniel Vetter):
> >   - Add more CCs to try and get maintainers
> >   - Add a patch to document DMA_BUF_IOCTL_SYNC
> >   - Generally better docs
> >   - Use separate structs for import/export (easier to document)
> >   - Fix an issue in the import patch
> >
> > v12 (Daniel Vetter):
> >   - Better docs for DMA_BUF_IOCTL_SYNC
> >
> > v12 (Christian König):
> >   - Drop the rename patch in favor of Christian's series
> >   - Add a comment to the commit message for the dma-buf sync_file export
> >     ioctl saying why we made it an ioctl on dma-buf
> >
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Michel Dänzer <michel@daenzer.net>
> > Cc: Dave Airlie <airlied@redhat.com>
> > Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
> > Cc: Daniel Stone <daniels@collabora.com>
> > Cc: mesa-dev@lists.freedesktop.org
> > Cc: wayland-devel@lists.freedesktop.org
> > Test-with: 20210524205225.872316-1-jason@jlekstrand.net
> >
> > Christian König (1):
> >    dma-buf: Add dma_fence_array_for_each (v2)
> >
> > Jason Ekstrand (5):
> >    dma-buf: Add dma_resv_get_singleton (v6)
> >    dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
> >    dma-buf: Add an API for exporting sync files (v12)
> >    RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
> >    RFC: dma-buf: Add an API for importing sync files (v7)
> >
> >   Documentation/driver-api/dma-buf.rst |   8 ++
> >   drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
> >   drivers/dma-buf/dma-fence-array.c    |  27 +++++++
> >   drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
> >   include/linux/dma-fence-array.h      |  17 +++++
> >   include/linux/dma-resv.h             |   2 +
> >   include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
> >   7 files changed, 369 insertions(+), 1 deletion(-)
> >
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-16 18:30   ` Jason Ekstrand
@ 2021-06-17  7:37     ` Christian König
  2021-06-17 19:58       ` Daniel Vetter
  0 siblings, 1 reply; 34+ messages in thread
From: Christian König @ 2021-06-17  7:37 UTC (permalink / raw)
  To: Jason Ekstrand, Christian König
  Cc: Daniel Stone, Michel Dänzer, dri-devel,
	wayland-devel @ lists . freedesktop . org, ML mesa-dev,
	Dave Airlie

On 16.06.21 at 20:30, Jason Ekstrand wrote:
> On Tue, Jun 15, 2021 at 3:41 AM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
>> Hi Jason & Daniel,
>>
>> maybe I should explain once more where the problem with this approach is
>> and why I think we need to get that fixed before we can do something
>> like this here.
>>
>> To summarize what this patch here does is that it copies the exclusive
>> fence and/or the shared fences into a sync_file. This alone is totally
>> unproblematic.
>>
>> The problem is what this implies. When you need to copy the exclusive
>> fence to a sync_file then this means that the driver is at some point
>> ignoring the exclusive fence on a buffer object.
> Not necessarily.  Part of the point of this is to allow for CPU waits
> on a past point in a buffer's timeline.  Today, we have poll() and
> GEM_WAIT both of which wait for the buffer to be idle from whatever
> GPU work is currently happening.  We want to wait on something in the
> past and ignore anything happening now.

Good point, yes that is indeed a valid use case.

> But, to the broader point, maybe?  I'm a little fuzzy on exactly where
> i915 inserts and/or depends on fences.
>
>> When you combine that with complex drivers which use TTM and buffer
>> moves underneath you can construct an information leak using this and
>> give userspace access to memory which is allocated to the driver, but
>> not yet initialized.
>>
>> This way you can leak things like page tables, passwords, kernel data
>> etc... in large amounts to userspace and is an absolutely no-go for
>> security.
> Ugh...  Unfortunately, I'm really out of my depth on the implications
> going on here but I think I see your point.
>
>> That's why I said we need to get this fixed before we upstream this
>> patch set here, and especially the driver change which is using it.
> Well, i915 has had uAPI for a while to ignore fences.

Yeah, exactly that's illegal.

At least the kernel internal fences like moving or clearing a buffer 
object need to be taken into account before a driver is allowed to
access a buffer.

Otherwise we have an information leak worth a CVE and that is certainly 
not something we want.

> Those changes are years in the past.  If we have a real problem here (not sure on
> that yet), then we'll have to figure out how to fix it without nuking
> uAPI.

Well, that was the basic idea of attaching flags to the fences in the 
dma_resv object.

In other words you clearly denote when you have to wait for a fence 
before accessing a buffer or you cause a security issue.

Christian.

>
> --Jason
>
>
>> Regards,
>> Christian.
>>
>> On 10.06.21 at 23:09, Jason Ekstrand wrote:
>>> Modern userspace APIs like Vulkan are built on an explicit
>>> synchronization model.  This doesn't always play nicely with the
>>> implicit synchronization used in the kernel and assumed by X11 and
>>> Wayland.  The client -> compositor half of the synchronization isn't too
>>> bad, at least on intel, because we can control whether or not i915
>>> synchronizes on the buffer and whether or not it's considered written.
>>>
>>> The harder part is the compositor -> client synchronization when we get
>>> the buffer back from the compositor.  We're required to be able to
>>> provide the client with a VkSemaphore and VkFence representing the point
>>> in time where the window system (compositor and/or display) finished
>>> using the buffer.  With current APIs, it's very hard to do this in such
>>> a way that we don't get confused by the Vulkan driver's access of the
>>> buffer.  In particular, once we tell the kernel that we're rendering to
>>> the buffer again, any CPU waits on the buffer or GPU dependencies will
>>> wait on some of the client rendering and not just the compositor.
>>>
>>> This new IOCTL solves this problem by allowing us to get a snapshot of
>>> the implicit synchronization state of a given dma-buf in the form of a
>>> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
>>> instead of CPU waiting directly, it encapsulates the wait operation, at
>>> the current moment in time, in a sync_file so we can check/wait on it
>>> later.  As long as the Vulkan driver does the sync_file export from the
>>> dma-buf before we re-introduce it for rendering, it will only contain
>>> fences from the compositor or display.  This allows us to accurately turn
>>> it into a VkFence or VkSemaphore without any over-synchronization.
>>>
>>> This patch series actually contains two new ioctls.  There is the export
>>> one mentioned above as well as an RFC for an import ioctl which provides
>>> the other half.  The intention is to land the export ioctl since it seems
>>> like there's no real disagreement on that one.  The import ioctl, however,
>>> has a lot of debate around it so it's intended to be RFC-only for now.
>>>
>>> Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
>>> IGT tests: https://patchwork.freedesktop.org/series/90490/
>>>
>>> v10 (Jason Ekstrand, Daniel Vetter):
>>>    - Add reviews/acks
>>>    - Add a patch to rename _rcu to _unlocked
>>>    - Split things better so import is clearly RFC status
>>>
>>> v11 (Daniel Vetter):
>>>    - Add more CCs to try and get maintainers
>>>    - Add a patch to document DMA_BUF_IOCTL_SYNC
>>>    - Generally better docs
>>>    - Use separate structs for import/export (easier to document)
>>>    - Fix an issue in the import patch
>>>
>>> v12 (Daniel Vetter):
>>>    - Better docs for DMA_BUF_IOCTL_SYNC
>>>
>>> v12 (Christian König):
>>>    - Drop the rename patch in favor of Christian's series
>>>    - Add a comment to the commit message for the dma-buf sync_file export
>>>      ioctl saying why we made it an ioctl on dma-buf
>>>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: Michel Dänzer <michel@daenzer.net>
>>> Cc: Dave Airlie <airlied@redhat.com>
>>> Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
>>> Cc: Daniel Stone <daniels@collabora.com>
>>> Cc: mesa-dev@lists.freedesktop.org
>>> Cc: wayland-devel@lists.freedesktop.org
>>> Test-with: 20210524205225.872316-1-jason@jlekstrand.net
>>>
>>> Christian König (1):
>>>     dma-buf: Add dma_fence_array_for_each (v2)
>>>
>>> Jason Ekstrand (5):
>>>     dma-buf: Add dma_resv_get_singleton (v6)
>>>     dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
>>>     dma-buf: Add an API for exporting sync files (v12)
>>>     RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
>>>     RFC: dma-buf: Add an API for importing sync files (v7)
>>>
>>>    Documentation/driver-api/dma-buf.rst |   8 ++
>>>    drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
>>>    drivers/dma-buf/dma-fence-array.c    |  27 +++++++
>>>    drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
>>>    include/linux/dma-fence-array.h      |  17 +++++
>>>    include/linux/dma-resv.h             |   2 +
>>>    include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
>>>    7 files changed, 369 insertions(+), 1 deletion(-)
>>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-17  7:37     ` Christian König
@ 2021-06-17 19:58       ` Daniel Vetter
  2021-06-18  9:15         ` Christian König
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Vetter @ 2021-06-17 19:58 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel @ lists . freedesktop . org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
> On 16.06.21 at 20:30, Jason Ekstrand wrote:
> > On Tue, Jun 15, 2021 at 3:41 AM Christian König
> > <ckoenig.leichtzumerken@gmail.com> wrote:
> > > Hi Jason & Daniel,
> > > 
> > > maybe I should explain once more where the problem with this approach is
> > > and why I think we need to get that fixed before we can do something
> > > like this here.
> > > 
> > > To summarize what this patch here does is that it copies the exclusive
> > > fence and/or the shared fences into a sync_file. This alone is totally
> > > unproblematic.
> > > 
> > > The problem is what this implies. When you need to copy the exclusive
> > > fence to a sync_file then this means that the driver is at some point
> > > ignoring the exclusive fence on a buffer object.
> > Not necessarily.  Part of the point of this is to allow for CPU waits
> > on a past point in a buffer's timeline.  Today, we have poll() and
> > GEM_WAIT both of which wait for the buffer to be idle from whatever
> > GPU work is currently happening.  We want to wait on something in the
> > past and ignore anything happening now.
> 
> Good point, yes that is indeed a valid use case.
> 
> > But, to the broader point, maybe?  I'm a little fuzzy on exactly where
> > i915 inserts and/or depends on fences.
> > 
> > > When you combine that with complex drivers which use TTM and buffer
> > > moves underneath you can construct an information leak using this and
> > > give userspace access to memory which is allocated to the driver, but
> > > not yet initialized.
> > > 
> > > This way you can leak things like page tables, passwords, kernel data
> > > etc... in large amounts to userspace and is an absolutely no-go for
> > > security.
> > Ugh...  Unfortunately, I'm really out of my depth on the implications
> > going on here but I think I see your point.
> > 
> > > That's why I said we need to get this fixed before we upstream this
> > > patch set here, and especially the driver change which is using it.
> > Well, i915 has had uAPI for a while to ignore fences.
> 
> Yeah, exactly that's illegal.

You're a few years too late with closing that barn door. The following
drivers have this concept:
- i915
- msm
- etnaviv

Because you can't write a competent vulkan driver without this. This was
discussed at absolute epic length in various xdcs iirc. We did ignore a
bit the vram/ttm/bo-moving problem because all the people present were
hacking on integrated gpu (see list above), but that just means we need to
treat the ttm_bo->moving fence properly.

> At least the kernel internal fences like moving or clearing a buffer object
> need to be taken into account before a driver is allowed to access a
> buffer.

Yes i915 needs to make sure it never ignores ttm_bo->moving.

For dma-buf this isn't actually a problem, because dma-bufs are pinned. You
can't move them while other drivers are using them, hence there's not
actually a ttm_bo->moving fence we can ignore.

p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
these other drivers) need to change before they can do dynamic dma-buf.

> Otherwise we have an information leak worth a CVE and that is certainly not
> something we want.

Because yes otherwise we get a CVE. But right now I don't think we have
one.

We do have quite a big confusion on what exactly the signaling ordering is
supposed to be between exclusive and the collective set of shared fences,
and there's some unifying that needs to happen here. But I think what
Jason implements here in the import ioctl is the most defensive version
possible, so really can't break any driver. It really works like you have
an ad-hoc gpu engine that does nothing itself, but waits for the current
exclusive fence and then sets the exclusive fence with its "CS" completion
fence.

That's imo perfectly legit use-case.
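
For illustration, a rough sketch of that semantic in terms of the helpers
from this series (a hedged example, not the actual import patch; "dmabuf"
and "imported_fence" are assumed inputs, and error handling is trimmed):

	struct dma_fence *singleton;

	dma_resv_lock(dmabuf->resv, NULL);

	/* Collapse the current fences plus the imported one into a
	 * single fence ...
	 */
	singleton = dma_resv_get_singleton(dmabuf->resv, imported_fence);
	if (!IS_ERR_OR_NULL(singleton)) {
		/* ... and install it as the new exclusive fence. */
		dma_resv_add_excl_fence(dmabuf->resv, singleton);
		dma_fence_put(singleton);
	}

	dma_resv_unlock(dmabuf->resv);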

Same for the export one. Waiting for a previous snapshot of implicit
fences is imo a perfectly ok use-case and useful for compositors - client
might soon start more rendering, and on some drivers that always results
in the exclusive slot being set, so if you dont take a snapshot you
oversync real bad for your atomic flip.

> > Those changes are years in the past.  If we have a real problem here (not sure on
> > that yet), then we'll have to figure out how to fix it without nuking
> > uAPI.
> 
> Well, that was the basic idea of attaching flags to the fences in the
> dma_resv object.
> 
> In other words you clearly denote when you have to wait for a fence before
> accessing a buffer or you cause a security issue.

Replied somewhere else, and I do kinda like the flag idea. But the problem
is we first need a ton more encapsulation and review of drivers before we
can change the internals. One thing at a time.

And yes for amdgpu this gets triple-hard because you both have the
ttm_bo->moving fence _and_ the current uapi of using fence ownership _and_
you need to figure out how to support vulkan properly with true opt-in
fencing. I'm pretty sure it's doable, I'm just not finding any time
anywhere to hack on these patches - too many other fires :-(

Cheers, Daniel

> 
> Christian.
> 
> > 
> > --Jason
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > On 10.06.21 at 23:09, Jason Ekstrand wrote:
> > > > Modern userspace APIs like Vulkan are built on an explicit
> > > > synchronization model.  This doesn't always play nicely with the
> > > > implicit synchronization used in the kernel and assumed by X11 and
> > > > Wayland.  The client -> compositor half of the synchronization isn't too
> > > > bad, at least on intel, because we can control whether or not i915
> > > > synchronizes on the buffer and whether or not it's considered written.
> > > > 
> > > > The harder part is the compositor -> client synchronization when we get
> > > > the buffer back from the compositor.  We're required to be able to
> > > > provide the client with a VkSemaphore and VkFence representing the point
> > > > in time where the window system (compositor and/or display) finished
> > > > using the buffer.  With current APIs, it's very hard to do this in such
> > > > a way that we don't get confused by the Vulkan driver's access of the
> > > > buffer.  In particular, once we tell the kernel that we're rendering to
> > > > the buffer again, any CPU waits on the buffer or GPU dependencies will
> > > > wait on some of the client rendering and not just the compositor.
> > > > 
> > > > This new IOCTL solves this problem by allowing us to get a snapshot of
> > > > the implicit synchronization state of a given dma-buf in the form of a
> > > > sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> > > > instead of CPU waiting directly, it encapsulates the wait operation, at
> > > > the current moment in time, in a sync_file so we can check/wait on it
> > > > later.  As long as the Vulkan driver does the sync_file export from the
> > > > dma-buf before we re-introduce it for rendering, it will only contain
> > > > fences from the compositor or display.  This allows us to accurately turn
> > > > it into a VkFence or VkSemaphore without any over-synchronization.
> > > > 
> > > > This patch series actually contains two new ioctls.  There is the export
> > > > one mentioned above as well as an RFC for an import ioctl which provides
> > > > the other half.  The intention is to land the export ioctl since it seems
> > > > like there's no real disagreement on that one.  The import ioctl, however,
> > > > has a lot of debate around it so it's intended to be RFC-only for now.
> > > > 
> > > > Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> > > > IGT tests: https://patchwork.freedesktop.org/series/90490/
> > > > 
> > > > v10 (Jason Ekstrand, Daniel Vetter):
> > > >    - Add reviews/acks
> > > >    - Add a patch to rename _rcu to _unlocked
> > > >    - Split things better so import is clearly RFC status
> > > > 
> > > > v11 (Daniel Vetter):
> > > >    - Add more CCs to try and get maintainers
> > > >    - Add a patch to document DMA_BUF_IOCTL_SYNC
> > > >    - Generally better docs
> > > >    - Use separate structs for import/export (easier to document)
> > > >    - Fix an issue in the import patch
> > > > 
> > > > v12 (Daniel Vetter):
> > > >    - Better docs for DMA_BUF_IOCTL_SYNC
> > > > 
> > > > v12 (Christian König):
> > > >    - Drop the rename patch in favor of Christian's series
> > > >    - Add a comment to the commit message for the dma-buf sync_file export
> > > >      ioctl saying why we made it an ioctl on dma-buf
> > > > 
> > > > Cc: Christian König <christian.koenig@amd.com>
> > > > Cc: Michel Dänzer <michel@daenzer.net>
> > > > Cc: Dave Airlie <airlied@redhat.com>
> > > > Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
> > > > Cc: Daniel Stone <daniels@collabora.com>
> > > > Cc: mesa-dev@lists.freedesktop.org
> > > > Cc: wayland-devel@lists.freedesktop.org
> > > > Test-with: 20210524205225.872316-1-jason@jlekstrand.net
> > > > 
> > > > Christian König (1):
> > > >     dma-buf: Add dma_fence_array_for_each (v2)
> > > > 
> > > > Jason Ekstrand (5):
> > > >     dma-buf: Add dma_resv_get_singleton (v6)
> > > >     dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
> > > >     dma-buf: Add an API for exporting sync files (v12)
> > > >     RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
> > > >     RFC: dma-buf: Add an API for importing sync files (v7)
> > > > 
> > > >    Documentation/driver-api/dma-buf.rst |   8 ++
> > > >    drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
> > > >    drivers/dma-buf/dma-fence-array.c    |  27 +++++++
> > > >    drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
> > > >    include/linux/dma-fence-array.h      |  17 +++++
> > > >    include/linux/dma-resv.h             |   2 +
> > > >    include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
> > > >    7 files changed, 369 insertions(+), 1 deletion(-)
> > > > 
> 
> _______________________________________________
> mesa-dev mailing list
> mesa-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-17 19:58       ` Daniel Vetter
@ 2021-06-18  9:15         ` Christian König
  2021-06-18 13:54           ` Jason Ekstrand
  2021-06-18 14:31           ` Daniel Vetter
  0 siblings, 2 replies; 34+ messages in thread
From: Christian König @ 2021-06-18  9:15 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel @ lists . freedesktop . org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

Am 17.06.21 um 21:58 schrieb Daniel Vetter:
> On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
>> [SNIP]
>>> But, to the broader point, maybe?  I'm a little fuzzy on exactly where
>>> i915 inserts and/or depends on fences.
>>>
>>>> When you combine that with complex drivers which use TTM and buffer
>>>> moves underneath you can construct an information leak using this and
>>>> give userspace access to memory which is allocated to the driver, but
>>>> not yet initialized.
>>>>
>>>> This way you can leak things like page tables, passwords, kernel data
>>>> etc... in large amounts to userspace and is an absolutely no-go for
>>>> security.
>>> Ugh...  Unfortunately, I'm really out of my depth on the implications
>>> going on here but I think I see your point.
>>>
>>>> That's why I said we need to get this fixed before we upstream this
>>>> patch set here and especially the driver change which is using that.
>>> Well, i915 has had uAPI for a while to ignore fences.
>> Yeah, exactly that's illegal.
> You're a few years too late with closing that barn door. The following
> drivers have this concept
> - i915
> - msm
> - etnaviv
>
> Because you can't write a competent vulkan driver without this.

WHAT? ^^

> This was discussed at absolute epic length in various XDCs iirc. We did
> somewhat ignore the vram/ttm/bo-moving problem because all the people present were
> hacking on integrated gpu (see list above), but that just means we need to
> treat the ttm_bo->moving fence properly.

I should have visited more XDCs in the past; the problem is much larger
than this.

But I'm now starting to understand what you are doing with that design
and why it looks so messy to me: amdgpu is just currently the only
driver which does Vulkan and complex memory management at the same time.

>> At least the kernel internal fences like moving or clearing a buffer object
>> needs to be taken into account before a driver is allowed to access a
>> buffer.
> Yes i915 needs to make sure it never ignores ttm_bo->moving.

No, that is only the tip of the iceberg. See, TTM for example also puts
fences which drivers need to wait for into the shared slots. Same thing
for use cases like clear on release etc.

From my point of view the main purpose of the dma_resv object is to
serve memory management; synchronization for command submission is just
a secondary use case.

And that drivers choose to ignore the exclusive fence is an absolute
no-go from a memory management and security point of view. Exclusive
access means exclusive access. Ignoring that won't work.
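
In code the minimal contract is really simple; every access which isn't
itself tracked in the dma_resv needs something like this first (just a
sketch using the current helpers):

        long ret;

        /* Never skip the exclusive fence - it can be a pipelined move
         * or clear, not just an implicit-sync writer. And with TTM
         * parking kernel fences in the shared slots as well (see
         * above), you often need wait_all=true on top. */
        ret = dma_resv_wait_timeout_rcu(resv, false /* wait_all */,
                                        true /* intr */,
                                        MAX_SCHEDULE_TIMEOUT);
        if (ret < 0)
                return ret;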

The only thing which saved us so far is the fact that drivers doing this 
are not that complex.

BTW: How does it even work? I mean, then you would run into the same
problem as amdgpu with its page table update fences, namely that your
shared fences might signal before the exclusive one.

> For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> can't move them while other drivers are using them, hence there's not
> actually a ttm_bo->moving fence we can ignore.
>
> p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> these other drivers) need to change before they can do dynamic dma-buf.
>
>> Otherwise we have an information leak worth a CVE and that is certainly not
>> something we want.
> Because yes otherwise we get a CVE. But right now I don't think we have
> one.

Yeah, agree. But this is just because of coincidence and not because of
good engineering :)

> We do have a quite big confusion on what exactly the signaling ordering is
> supposed to be between exclusive and the collective set of shared fences,
> and there's some unifying that needs to happen here. But I think what
> Jason implements here in the import ioctl is the most defensive version
> possible, so really can't break any driver. It really works like you have
> an ad-hoc gpu engine that does nothing itself, but waits for the current
> exclusive fence and then sets the exclusive fence with its "CS" completion
> fence.
>
> That's imo perfectly legit use-case.

The use case is certainly legit, but I'm not sure if merging this at the 
moment is a good idea.

Your note that drivers are already ignoring the exclusive fence in the
dma_resv object was eye-opening to me. And I now have the very strong
feeling that the synchronization and the design of the dma_resv object
are even messier than I thought.

To summarize, we can count ourselves really lucky that it didn't blow up
in our faces already.

> Same for the export one. Waiting for a previous snapshot of implicit
> fences is imo perfectly ok use-case and useful for compositors - client
> might soon start more rendering, and on some drivers that always results
> in the exclusive slot being set, so if you dont take a snapshot you
> oversync real bad for your atomic flip.

The export use case is unproblematic as far as I can see.

>>> Those changes are years in the past.  If we have a real problem here (not sure on
>>> that yet), then we'll have to figure out how to fix it without nuking
>>> uAPI.
>> Well, that was the basic idea of attaching flags to the fences in the
>> dma_resv object.
>>
>> In other words you clearly denote when you have to wait for a fence before
>> accessing a buffer or you cause a security issue.
> Replied somewhere else, and I do kinda like the flag idea. But the problem
> is we first need a ton more encapsulation and review of drivers before we
> can change the internals. One thing at a time.

Ok, how should we then proceed?

The large patch set I've sent out to convert all users of the shared
fence list to a for_each API is a step in the right direction I think,
but there is still a bit more to do.
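
To illustrate, call sites then shrink to roughly this (made-up iterator
names, just to show the shape the series is aiming for):

        struct dma_resv_cursor cursor;
        struct dma_fence *fence;

        dma_resv_for_each_fence_unlocked(resv, &cursor, fence) {
                /* one loop over exclusive + shared fences instead of
                 * open-coding the RCU/seqcount dance at every caller */
        }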

> And yes for amdgpu this gets triple-hard because you both have the
> ttm_bo->moving fence _and_ the current uapi of using fence ownership _and_
> you need to figure out how to support vulkan properly with true opt-in
> fencing.

Well, I have been pondering that for a bit and I came to the
conclusion that it is actually not a problem at all.

See, radeon, nouveau, msm etc. all implement logic so that they don't
wait for fences from the same timeline, context or engine. That amdgpu
doesn't wait for fences from the same process can be seen as just a
special case of this.
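
In pseudo code the filter is always the same, just with a different key
(sketch; entity->fence_context stands in for however the driver tracks
its own timeline, and the dependency helper name is invented):

        /* Fences from our own timeline are ordered by submission
         * anyway, so don't wait for them. amdgpu's "same process"
         * rule is the same check with the fence owner as the key. */
        if (fence->context == entity->fence_context)
                continue;

        job_add_dependency(job, fence);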

>   I'm pretty sure it's doable, I'm just not finding any time
> anywhere to hack on these patches - too many other fires :-(

Well I'm here. Let's just agree on the direction and I can do the coding.

What I need help with is all the auditing. For example I still haven't 
wrapped my head around how i915 does the synchronization.

Best regards,
Christian.

>
> Cheers, Daniel
>
>> Christian.
>>
>>> --Jason
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>> Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
>>>>> Modern userspace APIs like Vulkan are built on an explicit
>>>>> synchronization model.  This doesn't always play nicely with the
>>>>> implicit synchronization used in the kernel and assumed by X11 and
>>>>> Wayland.  The client -> compositor half of the synchronization isn't too
>>>>> bad, at least on intel, because we can control whether or not i915
>>>>> synchronizes on the buffer and whether or not it's considered written.
>>>>>
>>>>> The harder part is the compositor -> client synchronization when we get
>>>>> the buffer back from the compositor.  We're required to be able to
>>>>> provide the client with a VkSemaphore and VkFence representing the point
>>>>> in time where the window system (compositor and/or display) finished
>>>>> using the buffer.  With current APIs, it's very hard to do this in such
>>>>> a way that we don't get confused by the Vulkan driver's access of the
>>>>> buffer.  In particular, once we tell the kernel that we're rendering to
>>>>> the buffer again, any CPU waits on the buffer or GPU dependencies will
>>>>> wait on some of the client rendering and not just the compositor.
>>>>>
>>>>> This new IOCTL solves this problem by allowing us to get a snapshot of
>>>>> the implicit synchronization state of a given dma-buf in the form of a
>>>>> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
>>>>> instead of CPU waiting directly, it encapsulates the wait operation, at
>>>>> the current moment in time, in a sync_file so we can check/wait on it
>>>>> later.  As long as the Vulkan driver does the sync_file export from the
>>>>> dma-buf before we re-introduce it for rendering, it will only contain
>>>>> fences from the compositor or display.  This allows to accurately turn
>>>>> it into a VkFence or VkSemaphore without any over- synchronization.
>>>>>
>>>>> This patch series actually contains two new ioctls.  There is the export
>>>>> one mentioned above as well as an RFC for an import ioctl which provides
>>>>> the other half.  The intention is to land the export ioctl since it seems
>>>>> like there's no real disagreement on that one.  The import ioctl, however,
>>>>> has a lot of debate around it so it's intended to be RFC-only for now.
>>>>>
>>>>> Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
>>>>> IGT tests: https://patchwork.freedesktop.org/series/90490/
>>>>>
>>>>> v10 (Jason Ekstrand, Daniel Vetter):
>>>>>     - Add reviews/acks
>>>>>     - Add a patch to rename _rcu to _unlocked
>>>>>     - Split things better so import is clearly RFC status
>>>>>
>>>>> v11 (Daniel Vetter):
>>>>>     - Add more CCs to try and get maintainers
>>>>>     - Add a patch to document DMA_BUF_IOCTL_SYNC
>>>>>     - Generally better docs
>>>>>     - Use separate structs for import/export (easier to document)
>>>>>     - Fix an issue in the import patch
>>>>>
>>>>> v12 (Daniel Vetter):
>>>>>     - Better docs for DMA_BUF_IOCTL_SYNC
>>>>>
>>>>> v12 (Christian König):
>>>>>     - Drop the rename patch in favor of Christian's series
>>>>>     - Add a comment to the commit message for the dma-buf sync_file export
>>>>>       ioctl saying why we made it an ioctl on dma-buf
>>>>>
>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>> Cc: Michel Dänzer <michel@daenzer.net>
>>>>> Cc: Dave Airlie <airlied@redhat.com>
>>>>> Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
>>>>> Cc: Daniel Stone <daniels@collabora.com>
>>>>> Cc: mesa-dev@lists.freedesktop.org
>>>>> Cc: wayland-devel@lists.freedesktop.org
>>>>> Test-with: 20210524205225.872316-1-jason@jlekstrand.net
>>>>>
>>>>> Christian König (1):
>>>>>      dma-buf: Add dma_fence_array_for_each (v2)
>>>>>
>>>>> Jason Ekstrand (5):
>>>>>      dma-buf: Add dma_resv_get_singleton (v6)
>>>>>      dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
>>>>>      dma-buf: Add an API for exporting sync files (v12)
>>>>>      RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
>>>>>      RFC: dma-buf: Add an API for importing sync files (v7)
>>>>>
>>>>>     Documentation/driver-api/dma-buf.rst |   8 ++
>>>>>     drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
>>>>>     drivers/dma-buf/dma-fence-array.c    |  27 +++++++
>>>>>     drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
>>>>>     include/linux/dma-fence-array.h      |  17 +++++
>>>>>     include/linux/dma-resv.h             |   2 +
>>>>>     include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
>>>>>     7 files changed, 369 insertions(+), 1 deletion(-)
>>>>>
>> _______________________________________________
>> mesa-dev mailing list
>> mesa-dev@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/mesa-dev


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18  9:15         ` Christian König
@ 2021-06-18 13:54           ` Jason Ekstrand
  2021-06-18 14:31           ` Daniel Vetter
  1 sibling, 0 replies; 34+ messages in thread
From: Jason Ekstrand @ 2021-06-18 13:54 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Hellström, Daniel Stone, Christian König,
	Michel Dänzer, dri-devel,
	wayland-devel @ lists . freedesktop . org, Dave Airlie,
	ML mesa-dev

On Fri, Jun 18, 2021 at 4:15 AM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 17.06.21 um 21:58 schrieb Daniel Vetter:
> > On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
> >> [SNIP]
> >>> But, to the broader point, maybe?  I'm a little fuzzy on exactly where
> >>> i915 inserts and/or depends on fences.
> >>>
> >>>> When you combine that with complex drivers which use TTM and buffer
> >>>> moves underneath you can construct an information leak using this and
> >>>> give userspace access to memory which is allocated to the driver, but
> >>>> not yet initialized.
> >>>>
> >>>> This way you can leak things like page tables, passwords, kernel data
> >>>> etc... in large amounts to userspace and is an absolutely no-go for
> >>>> security.
> >>> Ugh...  Unfortunately, I'm really out of my depth on the implications
> >>> going on here but I think I see your point.
> >>>
> >>>> That's why I said we need to get this fixed before we upstream this
> >>>> patch set here and especially the driver change which is using that.
> >>> Well, i915 has had uAPI for a while to ignore fences.
> >> Yeah, exactly that's illegal.
> > You're a few years too late with closing that barn door. The following
> > drivers have this concept
> > - i915
> > - msm
> > - etnaviv
> >
> > Because you can't write a competent vulkan driver without this.
>
> WHAT? ^^

I think it's fair to say that you can't write a competent Vulkan
driver with implicit sync getting in the way.  Since AMD removes all
the implicit sync internally, this solves most of the problems there.
RADV does suffer some heartache around WSI, which is related, but I'd
hardly say that makes it incompetent.

> > This was discussed at absolute epic length in various xdcs iirc. We did ignore a
> > bit the vram/ttm/bo-moving problem because all the people present were
> > hacking on integrated gpu (see list above), but that just means we need to
> > treat the ttm_bo->moving fence properly.
>
> I should have visited more XDCs in the past, the problem is much larger
> than this.
>
> But I now start to understand what you are doing with that design and
> why it looks so messy to me, amdgpu is just currently the only driver
> which does Vulkan and complex memory management at the same time.

I'm reading "complex memory management" here and elsewhere as "has
VRAM".  All memory management is complex; shuffling to/from VRAM just
adds more layers.

> >> At least the kernel internal fences like moving or clearing a buffer object
> >> needs to be taken into account before a driver is allowed to access a
> >> buffer.
> > Yes i915 needs to make sure it never ignores ttm_bo->moving.
>
> No, that is only the tip of the iceberg. See TTM for example also puts
> fences which drivers need to wait for into the shared slots. Same thing
> for use cases like clear on release etc....
>
>  From my point of view the main purpose of the dma_resv object is to
> serve memory management, synchronization for command submission is just
> a secondary use case.
>
> And that drivers choose to ignore the exclusive fence is an absolutely
> no-go from a memory management and security point of view. Exclusive
> access means exclusive access. Ignoring that won't work.
>
> The only thing which saved us so far is the fact that drivers doing this
> are not that complex.

I think there's something important in Daniel's list above with
drivers that have a "no implicit sync uAPI": None of them are TTM
based.  We (i915) have been doing our own thing for memory management
for a while and it may not follow your TTM mental model.  Sure,
there's a dma_resv in our BOs and we can import/export dma-buf but
that doesn't mean that, internally, we think of it the same way.  I
say this in very generic terms because there are a whole lot of
details that I don't know.  What I do know is that, whatever we're
doing, it's been pretty robust for many years.

That said, we are moving to TTM so, if I'm right that this is a GEM
<-> TTM conflict, we've got some thinking to do.

> BTW: How does it even work? I mean then you would run into the same
> problem as amdgpu with its page table update fences, e.g. that your
> shared fences might signal before the exclusive one.
>
> > For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> > can't move them while other drivers are using them, hence there's not
> > actually a ttm_bo->moving fence we can ignore.
> >
> > p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> > these other drivers) need to change before they can do dynamic dma-buf.
> >
> >> Otherwise we have an information leak worth a CVE and that is certainly not
> >> something we want.
> > Because yes otherwise we get a CVE. But right now I don't think we have
> > one.
>
> Yeah, agree. But this is just because of coincidence and not because of
> good engineering :)
>
> > We do have a quite big confusion on what exactly the signaling ordering is
> > supposed to be between exclusive and the collective set of shared fences,
> > and there's some unifying that needs to happen here. But I think what
> > Jason implements here in the import ioctl is the most defensive version
> > possible, so really can't break any driver. It really works like you have
> > an ad-hoc gpu engine that does nothing itself, but waits for the current
> > exclusive fence and then sets the exclusive fence with its "CS" completion
> > fence.
> >
> > That's imo perfectly legit use-case.
>
> The use case is certainly legit, but I'm not sure if merging this at the
> moment is a good idea.
>
> Your note that drivers are already ignoring the exclusive fence in the
> dma_resv object was eye opening to me. And I now have the very strong
> feeling that the synchronization and the design of the dma_resv object
> is even more messy than I thought it is.
>
> To summarize we can be really lucky that it didn't blow up into our
> faces already.
>
> > Same for the export one. Waiting for a previous snapshot of implicit
> > fences is imo perfectly ok use-case and useful for compositors - client
> > might soon start more rendering, and on some drivers that always results
> > in the exclusive slot being set, so if you dont take a snapshot you
> > oversync real bad for your atomic flip.
>
> The export use case is unproblematic as far as I can see.

Then why are we holding it up?  I'm not asking to have import merged.
That's still labeled RFC and I've said over and over that it's not
baked and I'm not convinced it helps all that much.

> >>> Those changes are years in the past.  If we have a real problem here (not sure on
> >>> that yet), then we'll have to figure out how to fix it without nuking
> >>> uAPI.
> >> Well, that was the basic idea of attaching flags to the fences in the
> >> dma_resv object.
> >>
> >> In other words you clearly denote when you have to wait for a fence before
> >> accessing a buffer or you cause a security issue.
> > Replied somewhere else, and I do kinda like the flag idea. But the problem
> > is we first need a ton more encapsulation and review of drivers before we
> > can change the internals. One thing at a time.

I think I'm warming to flags as well.  I didn't like them at first but
I think they actually make quite a bit of sense.  Unlike the exclusive
fence, where ignoring it can lead to a security hole, the worst that
happens if someone forgets a flag somewhere is a bit of memory
corruption and garbage on-screen.  That seems fairly non-dangerous to
me.
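
Concretely, I'm imagining something like this (pure sketch, nothing
like it exists today and all the names are invented):

/* Tag each fence in the dma_resv with why it has to be waited on. */
enum dma_resv_usage {
        DMA_RESV_USAGE_KERNEL, /* moves/clears: never skippable */
        DMA_RESV_USAGE_WRITE,  /* implicit-sync writer */
        DMA_RESV_USAGE_READ,   /* implicit-sync reader */
};

Drivers opting out of implicit sync would still wait for the KERNEL
fences; forget a WRITE/READ tag somewhere and you get garbage on
screen, forget a KERNEL wait and you get the CVE.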

> Ok how should we then proceed?
>
> The large patch set I've sent out to convert all users of the shared
> fence list to a for_each API is a step into the right direction I think,
> but there is still a bit more todo.
>
> > And yes for amdgpu this gets triple-hard because you both have the
> > ttm_bo->moving fence _and_ the current uapi of using fence ownership _and_
> > you need to figure out how to support vulkan properly with true opt-in
> > fencing.
>
> Well I have been pondering on that for a bit and I came to the
> conclusion that it is actually not a problem at all.
>
> See radeon, nouveau, msm etc... all implement functions that they don't
> wait for fences from the same timeline, context, engine. That amdgpu
> doesn't wait for fences from the same process can be seen as just a
> special case of this.

That doesn't totally solve the issue, though.  RADV suffers in the WSI
arena today from too much cross-process implicit sharing.  I do think
you want some sort of "ignore implicit sync" API but, in this new
world of flags, it would look more like "don't bother looking for
shared fences flagged write".  You'd still respect the exclusive
fence, if there is one, and you'd still add a shared fence.  You just
wouldn't bother with implicit sync.  Should be safe, right?
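
In terms of the flag sketch above, that opt-out submission path would
look roughly like this (still pseudo code, iterator and helper names
invented):

        dma_resv_for_each_fence(resv, usage, fence) {
                /* Skip only implicit-sync writers; exclusive and
                 * KERNEL fences still get waited on. */
                if (usage == DMA_RESV_USAGE_WRITE)
                        continue;
                job_add_dependency(job, fence);
        }
        dma_resv_add_shared_fence(resv, job_fence);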

--Jason

> >   I'm pretty sure it's doable, I'm just not finding any time
> > anywhere to hack on these patches - too many other fires :-(
>
> Well I'm here. Let's just agree on the direction and I can do the coding.
>
> What I need help with is all the auditing. For example I still haven't
> wrapped my head around how i915 does the synchronization.
>
> Best regards,
> Christian.
>
> >
> > Cheers, Daniel
> >
> >> Christian.
> >>
> >>> --Jason
> >>>
> >>>
> >>>> Regards,
> >>>> Christian.
> >>>>
> >>>> Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
> >>>>> Modern userspace APIs like Vulkan are built on an explicit
> >>>>> synchronization model.  This doesn't always play nicely with the
> >>>>> implicit synchronization used in the kernel and assumed by X11 and
> >>>>> Wayland.  The client -> compositor half of the synchronization isn't too
> >>>>> bad, at least on intel, because we can control whether or not i915
> >>>>> synchronizes on the buffer and whether or not it's considered written.
> >>>>>
> >>>>> The harder part is the compositor -> client synchronization when we get
> >>>>> the buffer back from the compositor.  We're required to be able to
> >>>>> provide the client with a VkSemaphore and VkFence representing the point
> >>>>> in time where the window system (compositor and/or display) finished
> >>>>> using the buffer.  With current APIs, it's very hard to do this in such
> >>>>> a way that we don't get confused by the Vulkan driver's access of the
> >>>>> buffer.  In particular, once we tell the kernel that we're rendering to
> >>>>> the buffer again, any CPU waits on the buffer or GPU dependencies will
> >>>>> wait on some of the client rendering and not just the compositor.
> >>>>>
> >>>>> This new IOCTL solves this problem by allowing us to get a snapshot of
> >>>>> the implicit synchronization state of a given dma-buf in the form of a
> >>>>> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> >>>>> instead of CPU waiting directly, it encapsulates the wait operation, at
> >>>>> the current moment in time, in a sync_file so we can check/wait on it
> >>>>> later.  As long as the Vulkan driver does the sync_file export from the
> >>>>> dma-buf before we re-introduce it for rendering, it will only contain
> >>>>> fences from the compositor or display.  This allows to accurately turn
> >>>>> it into a VkFence or VkSemaphore without any over- synchronization.
> >>>>>
> >>>>> This patch series actually contains two new ioctls.  There is the export
> >>>>> one mentioned above as well as an RFC for an import ioctl which provides
> >>>>> the other half.  The intention is to land the export ioctl since it seems
> >>>>> like there's no real disagreement on that one.  The import ioctl, however,
> >>>>> has a lot of debate around it so it's intended to be RFC-only for now.
> >>>>>
> >>>>> Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> >>>>> IGT tests: https://patchwork.freedesktop.org/series/90490/
> >>>>>
> >>>>> v10 (Jason Ekstrand, Daniel Vetter):
> >>>>>     - Add reviews/acks
> >>>>>     - Add a patch to rename _rcu to _unlocked
> >>>>>     - Split things better so import is clearly RFC status
> >>>>>
> >>>>> v11 (Daniel Vetter):
> >>>>>     - Add more CCs to try and get maintainers
> >>>>>     - Add a patch to document DMA_BUF_IOCTL_SYNC
> >>>>>     - Generally better docs
> >>>>>     - Use separate structs for import/export (easier to document)
> >>>>>     - Fix an issue in the import patch
> >>>>>
> >>>>> v12 (Daniel Vetter):
> >>>>>     - Better docs for DMA_BUF_IOCTL_SYNC
> >>>>>
> >>>>> v12 (Christian König):
> >>>>>     - Drop the rename patch in favor of Christian's series
> >>>>>     - Add a comment to the commit message for the dma-buf sync_file export
> >>>>>       ioctl saying why we made it an ioctl on dma-buf
> >>>>>
> >>>>> Cc: Christian König <christian.koenig@amd.com>
> >>>>> Cc: Michel Dänzer <michel@daenzer.net>
> >>>>> Cc: Dave Airlie <airlied@redhat.com>
> >>>>> Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
> >>>>> Cc: Daniel Stone <daniels@collabora.com>
> >>>>> Cc: mesa-dev@lists.freedesktop.org
> >>>>> Cc: wayland-devel@lists.freedesktop.org
> >>>>> Test-with: 20210524205225.872316-1-jason@jlekstrand.net
> >>>>>
> >>>>> Christian König (1):
> >>>>>      dma-buf: Add dma_fence_array_for_each (v2)
> >>>>>
> >>>>> Jason Ekstrand (5):
> >>>>>      dma-buf: Add dma_resv_get_singleton (v6)
> >>>>>      dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
> >>>>>      dma-buf: Add an API for exporting sync files (v12)
> >>>>>      RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
> >>>>>      RFC: dma-buf: Add an API for importing sync files (v7)
> >>>>>
> >>>>>     Documentation/driver-api/dma-buf.rst |   8 ++
> >>>>>     drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
> >>>>>     drivers/dma-buf/dma-fence-array.c    |  27 +++++++
> >>>>>     drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
> >>>>>     include/linux/dma-fence-array.h      |  17 +++++
> >>>>>     include/linux/dma-resv.h             |   2 +
> >>>>>     include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
> >>>>>     7 files changed, 369 insertions(+), 1 deletion(-)
> >>>>>
> >> _______________________________________________
> >> mesa-dev mailing list
> >> mesa-dev@lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/mesa-dev
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18  9:15         ` Christian König
  2021-06-18 13:54           ` Jason Ekstrand
@ 2021-06-18 14:31           ` Daniel Vetter
  2021-06-18 14:42             ` Christian König
  1 sibling, 1 reply; 34+ messages in thread
From: Daniel Vetter @ 2021-06-18 14:31 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel @ lists . freedesktop . org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On Fri, Jun 18, 2021 at 11:15 AM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 17.06.21 um 21:58 schrieb Daniel Vetter:
> > On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
> >> [SNIP]
> >>> But, to the broader point, maybe?  I'm a little fuzzy on exactly where
> >>> i915 inserts and/or depends on fences.
> >>>
> >>>> When you combine that with complex drivers which use TTM and buffer
> >>>> moves underneath you can construct an information leak using this and
> >>>> give userspace access to memory which is allocated to the driver, but
> >>>> not yet initialized.
> >>>>
> >>>> This way you can leak things like page tables, passwords, kernel data
> >>>> etc... in large amounts to userspace and is an absolutely no-go for
> >>>> security.
> >>> Ugh...  Unfortunately, I'm really out of my depth on the implications
> >>> going on here but I think I see your point.
> >>>
> >>>> That's why I said we need to get this fixed before we upstream this
> >>>> patch set here and especially the driver change which is using that.
> >>> Well, i915 has had uAPI for a while to ignore fences.
> >> Yeah, exactly that's illegal.
> > You're a few years too late with closing that barn door. The following
> > drivers have this concept
> > - i915
> > - msm
> > - etnaviv
> >
> > Because you can't write a competent vulkan driver without this.
>
> WHAT? ^^
>
> > This was discussed at absolute epic length in various xdcs iirc. We did ignore a
> > bit the vram/ttm/bo-moving problem because all the people present were
> > hacking on integrated gpu (see list above), but that just means we need to
> > treat the ttm_bo->moving fence properly.
>
> I should have visited more XDCs in the past, the problem is much larger
> than this.
>
> But I now start to understand what you are doing with that design and
> why it looks so messy to me, amdgpu is just currently the only driver
> which does Vulkan and complex memory management at the same time.
>
> >> At least the kernel internal fences like moving or clearing a buffer object
> >> needs to be taken into account before a driver is allowed to access a
> >> buffer.
> > Yes i915 needs to make sure it never ignores ttm_bo->moving.
>
> No, that is only the tip of the iceberg. See TTM for example also puts
> fences which drivers need to wait for into the shared slots. Same thing
> for use cases like clear on release etc....
>
>  From my point of view the main purpose of the dma_resv object is to
> serve memory management, synchronization for command submission is just
> a secondary use case.
>
> And that drivers choose to ignore the exclusive fence is an absolutely
> no-go from a memory management and security point of view. Exclusive
> access means exclusive access. Ignoring that won't work.

Yeah, this is why I've been going all over the place about lifting
ttm_bo->moving to dma_resv. And it's also why I flat out don't trust your
audit; if you haven't found these drivers then very clearly you didn't
audit much at all :-)

> The only thing which saved us so far is the fact that drivers doing this
> are not that complex.
>
> BTW: How does it even work? I mean then you would run into the same
> problem as amdgpu with its page table update fences, e.g. that your
> shared fences might signal before the exclusive one.

So we don't ignore any fences when we rip out the backing storage.

And yes, there's currently a bug in all these drivers: if you set
both the "ignore implicit fences" and the "set the exclusive fence"
flags, then we just break this. Which is why I think we want to have a
dma_fence_add_shared_exclusive() helper extracted from your amdgpu
code, which we can then use everywhere to plug this.
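
Roughly like this, extracted and made generic (untested sketch, needs
the usual refcounting audit against the real amdgpu code):

static int dma_fence_add_shared_exclusive(struct dma_resv *resv,
                                          struct dma_fence *fence)
{
        struct dma_fence *old = dma_resv_get_excl(resv);
        struct dma_fence **fences;
        struct dma_fence_array *array;

        dma_resv_assert_held(resv);

        if (!old) {
                dma_resv_add_excl_fence(resv, fence);
                return 0;
        }

        /* dma_fence_array_release() kfree()s this, so heap only. */
        fences = kmalloc_array(2, sizeof(*fences), GFP_KERNEL);
        if (!fences)
                return -ENOMEM;

        fences[0] = dma_fence_get(old);
        fences[1] = dma_fence_get(fence);

        array = dma_fence_array_create(2, fences,
                                       dma_fence_context_alloc(1), 1,
                                       false);
        if (!array) {
                dma_fence_put(fences[0]);
                dma_fence_put(fences[1]);
                kfree(fences);
                return -ENOMEM;
        }

        dma_resv_add_excl_fence(resv, &array->base);
        dma_fence_put(&array->base);
        return 0;
}

That way the old exclusive fence keeps being waited on by everyone who
honors the new one, even if the new one signals first.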

> > For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> > can't move them while other drivers are using them, hence there's not
> > actually a ttm_bo->moving fence we can ignore.
> >
> > p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> > these other drivers) need to change before they can do dynamic dma-buf.
> >
> >> Otherwise we have an information leak worth a CVE and that is certainly not
> >> something we want.
> > Because yes otherwise we get a CVE. But right now I don't think we have
> > one.
>
> Yeah, agree. But this is just because of coincidence and not because of
> good engineering :)

Well, the good news is that I think we're now talking slightly less
past each other than in the past few weeks :-)

> > We do have a quite big confusion on what exactly the signaling ordering is
> > supposed to be between exclusive and the collective set of shared fences,
> > and there's some unifying that needs to happen here. But I think what
> > Jason implements here in the import ioctl is the most defensive version
> > possible, so really can't break any driver. It really works like you have
> > an ad-hoc gpu engine that does nothing itself, but waits for the current
> > exclusive fence and then sets the exclusive fence with its "CS" completion
> > fence.
> >
> > That's imo perfectly legit use-case.
>
> The use case is certainly legit, but I'm not sure if merging this at the
> moment is a good idea.
>
> Your note that drivers are already ignoring the exclusive fence in the
> dma_resv object was eye opening to me. And I now have the very strong
> feeling that the synchronization and the design of the dma_resv object
> is even more messy than I thought it is.
>
> To summarize we can be really lucky that it didn't blow up into our
> faces already.

I don't think there was that much luck involved (ok, I did already find
a possible bug in i915 around cpu cache flushing) - for SoCs the
exclusive slot in dma_resv really is only used for implicit sync and
nothing else. The fun only starts when you throw in pipelined backing
storage movement.

I guess this also explains why you just seemed to ignore me when I was
asking for a memory management exclusive fence for the p2p stuff, or
some other way to specifically handle movements (like ttm_bo->moving
or whatever it is). From my pov we clearly needed that to make p2p
dma-buf work well enough; mixing up the memory management exclusive
slot with the implicit sync exclusive slot never looked like a bright
idea to me.

I think at least we now have some understanding here.

> > Same for the export one. Waiting for a previous snapshot of implicit
> > fences is imo perfectly ok use-case and useful for compositors - client
> > might soon start more rendering, and on some drivers that always results
> > in the exclusive slot being set, so if you dont take a snapshot you
> > oversync real bad for your atomic flip.
>
> The export use case is unproblematic as far as I can see.
>
> >>> Those changes are years in the past.  If we have a real problem here (not sure on
> >>> that yet), then we'll have to figure out how to fix it without nuking
> >>> uAPI.
> >> Well, that was the basic idea of attaching flags to the fences in the
> >> dma_resv object.
> >>
> >> In other words you clearly denote when you have to wait for a fence before
> >> accessing a buffer or you cause a security issue.
> > Replied somewhere else, and I do kinda like the flag idea. But the problem
> > is we first need a ton more encapsulation and review of drivers before we
> > can change the internals. One thing at a time.
>
> Ok how should we then proceed?
>
> The large patch set I've sent out to convert all users of the shared
> fence list to a for_each API is a step into the right direction I think,
> but there is still a bit more todo.

Yeah I had noted that as "need to review". But I think we should be
even more aggressive with encapsulation (at least where it doesn't
matter that much from a perf pov). Like my suggestion for dma_buf_poll
to not open-code the entire dance, but just use a snapshot thing. But
I'll check out next week what you cooked up with the iterator.

> > And yes for amdgpu this gets triple-hard because you both have the
> > ttm_bo->moving fence _and_ the current uapi of using fence ownership _and_
> > you need to figure out how to support vulkan properly with true opt-in
> > fencing.
>
> Well I have been pondering on that for a bit and I came to the
> conclusion that it is actually not a problem at all.
>
> See radeon, nouveau, msm etc... all implement functions that they don't
> wait for fences from the same timeline, context, engine. That amdgpu
> doesn't wait for fences from the same process can be seen as just a
> special case of this.

Oh, that part isn't a fundamental design issue; internally you can do
whatever uapi you want. All I meant to say is that because you currently
have this uapi, but not yet flags to control things more explicitly,
it's going to be trickier code for amdgpu than for other drivers to
keep it all working. But not impossible, just more code.

> >   I'm pretty sure it's doable, I'm just not finding any time
> > anywhere to hack on these patches - too many other fires :-(
>
> Well I'm here. Let's just agree on the direction and I can do the coding.
>
> What I need help with is all the auditing. For example I still haven't
> wrapped my head around how i915 does the synchronization.

Yeah the auditing is annoying, and i915 is definitely butchered in
some ways. I'm currently screaming at silly bugs in the i915
relocation code (it was tuned a bit more than makes sense, and
acquired a pile of bugs due to that), but after that I should have
time to refresh the old series. That one audits the setting of
dma_resv fences fully, and I half-started with the
dependency/scheduler side too. There are going to be a few fixes needed
there.
-Daniel

> Best regards,
> Christian.
>
> >
> > Cheers, Daniel
> >
> >> Christian.
> >>
> >>> --Jason
> >>>
> >>>
> >>>> Regards,
> >>>> Christian.
> >>>>
> >>>> Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
> >>>>> Modern userspace APIs like Vulkan are built on an explicit
> >>>>> synchronization model.  This doesn't always play nicely with the
> >>>>> implicit synchronization used in the kernel and assumed by X11 and
> >>>>> Wayland.  The client -> compositor half of the synchronization isn't too
> >>>>> bad, at least on intel, because we can control whether or not i915
> >>>>> synchronizes on the buffer and whether or not it's considered written.
> >>>>>
> >>>>> The harder part is the compositor -> client synchronization when we get
> >>>>> the buffer back from the compositor.  We're required to be able to
> >>>>> provide the client with a VkSemaphore and VkFence representing the point
> >>>>> in time where the window system (compositor and/or display) finished
> >>>>> using the buffer.  With current APIs, it's very hard to do this in such
> >>>>> a way that we don't get confused by the Vulkan driver's access of the
> >>>>> buffer.  In particular, once we tell the kernel that we're rendering to
> >>>>> the buffer again, any CPU waits on the buffer or GPU dependencies will
> >>>>> wait on some of the client rendering and not just the compositor.
> >>>>>
> >>>>> This new IOCTL solves this problem by allowing us to get a snapshot of
> >>>>> the implicit synchronization state of a given dma-buf in the form of a
> >>>>> sync file.  It's effectively the same as a poll() or I915_GEM_WAIT only,
> >>>>> instead of CPU waiting directly, it encapsulates the wait operation, at
> >>>>> the current moment in time, in a sync_file so we can check/wait on it
> >>>>> later.  As long as the Vulkan driver does the sync_file export from the
> >>>>> dma-buf before we re-introduce it for rendering, it will only contain
> >>>>> fences from the compositor or display.  This allows to accurately turn
> >>>>> it into a VkFence or VkSemaphore without any over- synchronization.
> >>>>>
> >>>>> This patch series actually contains two new ioctls.  There is the export
> >>>>> one mentioned above as well as an RFC for an import ioctl which provides
> >>>>> the other half.  The intention is to land the export ioctl since it seems
> >>>>> like there's no real disagreement on that one.  The import ioctl, however,
> >>>>> has a lot of debate around it so it's intended to be RFC-only for now.
> >>>>>
> >>>>> Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> >>>>> IGT tests: https://patchwork.freedesktop.org/series/90490/
> >>>>>
> >>>>> v10 (Jason Ekstrand, Daniel Vetter):
> >>>>>     - Add reviews/acks
> >>>>>     - Add a patch to rename _rcu to _unlocked
> >>>>>     - Split things better so import is clearly RFC status
> >>>>>
> >>>>> v11 (Daniel Vetter):
> >>>>>     - Add more CCs to try and get maintainers
> >>>>>     - Add a patch to document DMA_BUF_IOCTL_SYNC
> >>>>>     - Generally better docs
> >>>>>     - Use separate structs for import/export (easier to document)
> >>>>>     - Fix an issue in the import patch
> >>>>>
> >>>>> v12 (Daniel Vetter):
> >>>>>     - Better docs for DMA_BUF_IOCTL_SYNC
> >>>>>
> >>>>> v12 (Christian König):
> >>>>>     - Drop the rename patch in favor of Christian's series
> >>>>>     - Add a comment to the commit message for the dma-buf sync_file export
> >>>>>       ioctl saying why we made it an ioctl on dma-buf
> >>>>>
> >>>>> Cc: Christian König <christian.koenig@amd.com>
> >>>>> Cc: Michel Dänzer <michel@daenzer.net>
> >>>>> Cc: Dave Airlie <airlied@redhat.com>
> >>>>> Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
> >>>>> Cc: Daniel Stone <daniels@collabora.com>
> >>>>> Cc: mesa-dev@lists.freedesktop.org
> >>>>> Cc: wayland-devel@lists.freedesktop.org
> >>>>> Test-with: 20210524205225.872316-1-jason@jlekstrand.net
> >>>>>
> >>>>> Christian König (1):
> >>>>>      dma-buf: Add dma_fence_array_for_each (v2)
> >>>>>
> >>>>> Jason Ekstrand (5):
> >>>>>      dma-buf: Add dma_resv_get_singleton (v6)
> >>>>>      dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
> >>>>>      dma-buf: Add an API for exporting sync files (v12)
> >>>>>      RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
> >>>>>      RFC: dma-buf: Add an API for importing sync files (v7)
> >>>>>
> >>>>>     Documentation/driver-api/dma-buf.rst |   8 ++
> >>>>>     drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
> >>>>>     drivers/dma-buf/dma-fence-array.c    |  27 +++++++
> >>>>>     drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
> >>>>>     include/linux/dma-fence-array.h      |  17 +++++
> >>>>>     include/linux/dma-resv.h             |   2 +
> >>>>>     include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
> >>>>>     7 files changed, 369 insertions(+), 1 deletion(-)
> >>>>>
> >> _______________________________________________
> >> mesa-dev mailing list
> >> mesa-dev@lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/mesa-dev
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 14:31           ` Daniel Vetter
@ 2021-06-18 14:42             ` Christian König
  2021-06-18 15:17               ` Daniel Vetter
  0 siblings, 1 reply; 34+ messages in thread
From: Christian König @ 2021-06-18 14:42 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel @ lists . freedesktop . org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

Am 18.06.21 um 16:31 schrieb Daniel Vetter:
> [SNIP]
>> And that drivers choose to ignore the exclusive fence is an absolutely
>> no-go from a memory management and security point of view. Exclusive
>> access means exclusive access. Ignoring that won't work.
> Yeah, this is why I've been going all over the place about lifting
> ttm_bo->moving to dma_resv. And also that I flat out don't trust your
> audit, if you havent found these drivers then very clearly you didn't
> audit much at all :-)

I just didn't think that anybody could be stupid enough to allow such a
thing in.

>> The only thing which saved us so far is the fact that drivers doing this
>> are not that complex.
>>
>> BTW: How does it even work? I mean then you would run into the same
>> problem as amdgpu with its page table update fences, e.g. that your
>> shared fences might signal before the exclusive one.
> So we don't ignore any fences when we rip out the backing storage.
>
> And yes there's currently a bug in all these drivers that if you set
> both the "ignore implicit fences" and the "set the exclusive fence"
> flag, then we just break this. Which is why I think we want to have a
> dma_fence_add_shared_exclusive() helper extracted from your amdgpu
> code, which we can then use everywhere to plug this.

Daniel, do you realize what you are talking about here? Does that also
apply to imported DMA-bufs?

If yes, then that is a security hole you can push an elephant through.

Can you point me to the code using that?

>>> For dma-buf this isn't actually a problem, because dma-buf are pinned. You
>>> can't move them while other drivers are using them, hence there's not
>>> actually a ttm_bo->moving fence we can ignore.
>>>
>>> p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
>>> these other drivers) need to change before they can do dynamic dma-buf.
>>>
>>>> Otherwise we have an information leak worth a CVE and that is certainly not
>>>> something we want.
>>> Because yes otherwise we get a CVE. But right now I don't think we have
>>> one.
>> Yeah, agree. But this is just because of coincidence and not because of
>> good engineering :)
> Well the good news is that I think we're now talking slightly less
> past each another than the past few weeks :-)
>
>>> We do have a quite big confusion on what exactly the signaling ordering is
>>> supposed to be between exclusive and the collective set of shared fences,
>>> and there's some unifying that needs to happen here. But I think what
>>> Jason implements here in the import ioctl is the most defensive version
>>> possible, so really can't break any driver. It really works like you have
>>> an ad-hoc gpu engine that does nothing itself, but waits for the current
>>> exclusive fence and then sets the exclusive fence with its "CS" completion
>>> fence.
>>>
>>> That's imo perfectly legit use-case.
>> The use case is certainly legit, but I'm not sure if merging this at the
>> moment is a good idea.
>>
>> Your note that drivers are already ignoring the exclusive fence in the
>> dma_resv object was eye opening to me. And I now have the very strong
>> feeling that the synchronization and the design of the dma_resv object
>> is even more messy than I thought it is.
>>
>> To summarize we can be really lucky that it didn't blow up into our
>> faces already.
> I don't think there was that much luck involved (ok I did find a
> possible bug in i915 already around cpu cache flushing) - for SoC the
> exclusive slot in dma_resv really is only used for implicit sync and
> nothing else. The fun only starts when you throw in pipelined backing
> storage movement.
>
> I guess this also explains why you just seemed to ignore me when I was
> asking for a memory management exclusive fence for the p2p stuff, or
> some other way to specifically handling movements (like ttm_bo->moving
> or whatever it is). From my pov we clearly needed that to make p2p
> dma-buf work well enough, mixing up the memory management exclusive
> slot with the implicit sync exclusive slot never looked like a bright
> idea to me.
>
> I think at least we now have some understanding here.

Well, to be honest, what you have just told me means that i915 is
seriously broken.

Ignoring the exclusive fence on an imported DMA-buf is an absolute
*NO-GO*, even without P2P.

What you have stitched together here basically allows anybody to read
any memory on the system using i915 combined with nouveau, radeon or
amdgpu.

We need to fix that ASAP!

Regards,
Christian.

>>> Same for the export one. Waiting for a previous snapshot of implicit
>>> fences is imo perfectly ok use-case and useful for compositors - client
>>> might soon start more rendering, and on some drivers that always results
>>> in the exclusive slot being set, so if you dont take a snapshot you
>>> oversync real bad for your atomic flip.
>> The export use case is unproblematic as far as I can see.
>>
>>>>> Those changes are years in the past.  If we have a real problem here (not sure on
>>>>> that yet), then we'll have to figure out how to fix it without nuking
>>>>> uAPI.
>>>> Well, that was the basic idea of attaching flags to the fences in the
>>>> dma_resv object.
>>>>
>>>> In other words you clearly denote when you have to wait for a fence before
>>>> accessing a buffer or you cause a security issue.
>>> Replied somewhere else, and I do kinda like the flag idea. But the problem
>>> is we first need a ton more encapsulation and review of drivers before we
>>> can change the internals. One thing at a time.
>> Ok how should we then proceed?
>>
>> The large patch set I've sent out to convert all users of the shared
>> fence list to a for_each API is a step into the right direction I think,
>> but there is still a bit more todo.
> Yeah I had noted that as "need to review". But I think we should be
> even more aggressive with encapsulation (at least where it doesn't
> matter that much from a perf pov). Like my suggestion for dma_buf_poll
> to not open-code the entire dance, but just use a snapshot thing. But
> I'll check out next week what you cooked up with the iterator.
>
>>> And yes for amdgpu this gets triple-hard because you both have the
>>> ttm_bo->moving fence _and_ the current uapi of using fence ownership _and_
>>> you need to figure out how to support vulkan properly with true opt-in
>>> fencing.
>> Well I have been pondering on that for a bit and I came to the
>> conclusion that it is actually not a problem at all.
>>
>> See radeon, nouveau, msm etc... all implement functions that they don't
>> wait for fences from the same timeline, context, engine. That amdgpu
>> doesn't wait for fences from the same process can be seen as just a
>> special case of this.
> Oh that part isn't a fundamental design issue, internally you can do
> whatever uapi you want. All I meant to say is because you currently
> have this uapi, but not yet flags to control things more explicitly,
> it's going to be more tricky code for amdgpu than for other drivers to
> keep it all working. But not impossible, just more code.
>
>>>    I'm pretty sure it's doable, I'm just not finding any time
>>> anywhere to hack on these patches - too many other fires :-(
>> Well I'm here. Let's just agree on the direction and I can do the coding.
>>
>> What I need help with is all the auditing. For example I still haven't
>> wrapped my head around how i915 does the synchronization.
> Yeah the auditing is annoying, and i915 is definitely butchered in
> some ways. I'm currently screaming at silly bugs in the i915
> relocation code (it was tuned a bit more than makes sense, and
> acquired a pile of bugs due to that), but after that I should have
> time to refresh the old series. That one audits the setting of
> dma_resv fences fully, and I half-started with the
> dependency/scheduler side too. There's going to be a few fixes needed
> there.
> -Daniel
>
>> Best regards,
>> Christian.
>>
>>> Cheers, Daniel
>>>
>>>> Christian.
>>>>
>>>>> --Jason
>>>>>
>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>> On 10.06.21 at 23:09, Jason Ekstrand wrote:
>>>>>>> [SNIP]
>



* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 14:42             ` Christian König
@ 2021-06-18 15:17               ` Daniel Vetter
  2021-06-18 16:42                 ` Christian König
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Vetter @ 2021-06-18 15:17 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel@lists.freedesktop.org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On Fri, Jun 18, 2021 at 4:42 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 18.06.21 at 16:31, Daniel Vetter wrote:
> > [SNIP]
> >> And that drivers choose to ignore the exclusive fence is an absolutely
> >> no-go from a memory management and security point of view. Exclusive
> >> access means exclusive access. Ignoring that won't work.
> > Yeah, this is why I've been going all over the place about lifting
> > ttm_bo->moving to dma_resv. And also that I flat out don't trust your
> > audit, if you havent found these drivers then very clearly you didn't
> > audit much at all :-)
>
> I just didn't think that anybody could be so stupid as to allow such a
> thing in.
>
> >> The only thing which saved us so far is the fact that drivers doing this
> >> are not that complex.
> >>
> >> BTW: How does it even work? I mean then you would run into the same
> >> problem as amdgpu with its page table update fences, e.g. that your
> >> shared fences might signal before the exclusive one.
> > So we don't ignore any fences when we rip out the backing storage.
> >
> > And yes there's currently a bug in all these drivers that if you set
> > both the "ignore implicit fences" and the "set the exclusive fence"
> > flag, then we just break this. Which is why I think we want to have a
> > dma_fence_add_shared_exclusive() helper extracted from your amdgpu
> > code, which we can then use everywhere to plug this.
>
> Daniel, do you realize what you are talking about here? Does that also
> apply for imported DMA-bufs?
>
> If yes, then that is a security hole you can push an elephant through.
>
> Can you point me to the code using that?
>
> >>> For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> >>> can't move them while other drivers are using them, hence there's not
> >>> actually a ttm_bo->moving fence we can ignore.
> >>>
> >>> p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> >>> these other drivers) need to change before they can do dynamic dma-buf.
> >>>
> >>>> Otherwise we have an information leak worth a CVE and that is certainly not
> >>>> something we want.
> >>> Because yes otherwise we get a CVE. But right now I don't think we have
> >>> one.
> >> Yeah, agree. But this is just because of coincidence and not because of
> >> good engineering :)
> > Well the good news is that I think we're now talking slightly less
> > past each another than the past few weeks :-)
> >
> >>> We do have a quite big confusion on what exactly the signaling ordering is
> >>> supposed to be between exclusive and the collective set of shared fences,
> >>> and there's some unifying that needs to happen here. But I think what
> >>> Jason implements here in the import ioctl is the most defensive version
> >>> possible, so really can't break any driver. It really works like you have
> >>> an ad-hoc gpu engine that does nothing itself, but waits for the current
> >>> exclusive fence and then sets the exclusive fence with its "CS" completion
> >>> fence.
> >>>
> >>> That's imo perfectly legit use-case.
> >> The use case is certainly legit, but I'm not sure if merging this at the
> >> moment is a good idea.
> >>
> >> Your note that drivers are already ignoring the exclusive fence in the
> >> dma_resv object was eye opening to me. And I now have the very strong
> >> feeling that the synchronization and the design of the dma_resv object
> >> is even more messy than I thought it is.
> >>
> >> To summarize, we have been really lucky that it didn't blow up in our
> >> faces already.
> > I don't think there was that much luck involved (ok I did find a
> > possible bug in i915 already around cpu cache flushing) - for SoC the
> > exclusive slot in dma_resv really is only used for implicit sync and
> > nothing else. The fun only starts when you throw in pipelined backing
> > storage movement.
> >
> > I guess this also explains why you just seemed to ignore me when I was
> > asking for a memory management exclusive fence for the p2p stuff, or
> > some other way to specifically handle movements (like ttm_bo->moving
> > or whatever it is). From my pov we clearly needed that to make p2p
> > dma-buf work well enough, mixing up the memory management exclusive
> > slot with the implicit sync exclusive slot never looked like a bright
> > idea to me.
> >
> > I think at least we now have some understanding here.
>
> Well to be honest what you have just told me means that i915 is
> seriously broken.
>
> Ignoring the exclusive fence on an imported DMA-buf is an absolutely
> *NO-GO* even without P2P.
>
> What you have stitched together here allows anybody to basically read
> any memory on the system with both i915 and nouveau, radeon or amdgpu.
>
> We need to fix that ASAP!

Ignoring _all_ fences is officially ok for pinned dma-buf. This is
what v4l does. Aside from that, it's definitely not just i915 that does
this even on the drm side; we have a few more drivers nowadays.

The rules are that after you've called dma_buf_map_attachment the
memory exists, is _not_ allowed to move, must not contain uncleared
data, and so on. This must be guaranteed until dma_buf_unmap_attachment
is called.
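
I.e. the entire contract a pinned importer relies on boils down to
roughly this (sketch only, error handling omitted):

    struct dma_buf_attachment *attach;
    struct sg_table *sgt;

    attach = dma_buf_attach(dmabuf, dev);
    sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

    /* DMA to/from sgt; no dma_resv or fence handling required, the
     * memory simply has to stay valid and populated */

    dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
    dma_buf_detach(dmabuf, attach);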

Also drivers are not required to even set a dma_fence in the dma_resv
object for their dma access. Again v4l works like this by design, but
we've had plenty of drivers who totally ignored dma_resv beforehand
too.

So if there's a problem, I think you first need to explain what it is.
Also if you wonder how we got here, that part is easy: dma-buf
predates dma-resv extraction from ttm by quite some time (years even
iirc). So the og dma-buf rules really are "fences don't matter, do
whatever you feel with them". Well you're not allowed to just remove
them if they're not your own, since that could break other drivers :-)

If amdgpu now e.g. pipelines the clearing/moving of
dma_buf_map_attachment behind an exclusive fence, that would be
broken. That is _only_ allowed if both exporter and all importers are
dynamic. I don't think you've done that, but if that's the case then
the dma_buf_ops->pin callback would need to have a
dma_fence_wait(exclusive_fence) or something like that to plug that
gap.
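
Roughly like this, as a sketch (untested, from memory):

    static int my_pin(struct dma_buf_attachment *attach)
    {
            long ret;

            /* make sure any pipelined clear/move has finished before a
             * pinned (non-dynamic) importer gets to see the pages;
             * wait_all=false waits for the exclusive fence only */
            ret = dma_resv_wait_timeout_rcu(attach->dmabuf->resv, false,
                                            true, MAX_SCHEDULE_TIMEOUT);
            return ret < 0 ? ret : 0;
    }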

If it's something else, then please walk me through the scenario
because I'm not seeing a problem here.
-Daniel

> Regards,
> Christian.
>
> > [SNIP]


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 15:17               ` Daniel Vetter
@ 2021-06-18 16:42                 ` Christian König
  2021-06-18 17:20                   ` Daniel Vetter
  2021-06-18 18:20                   ` Daniel Stone
  0 siblings, 2 replies; 34+ messages in thread
From: Christian König @ 2021-06-18 16:42 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel@lists.freedesktop.org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On 18.06.21 at 17:17, Daniel Vetter wrote:
> [SNIP]
> Ignoring _all_ fences is officially ok for pinned dma-buf. This is
> what v4l does. Aside from that, it's definitely not just i915 that does
> this even on the drm side; we have a few more drivers nowadays.

No it seriously isn't. If drivers are doing this they are more than broken.

See the comment in dma-resv.h

  * Based on bo.c which bears the following copyright notice,
  * but is dual licensed:
....


The handling in ttm_bo.c is and always was that the exclusive fence is 
used for buffer moves.

As I said multiple times now the *MAIN* purpose of the dma_resv object 
is memory management and *NOT* synchronization.

Those restrictions come from the original design of TTM where the 
dma_resv object originated from.

The resulting consequences are that:

a) If you access the buffer without waiting for the exclusive fence you 
run into a potential information leak.
     We kind of let that slip for V4L since they only access the buffers 
for writes, so you can't do any harm there.

b) If you overwrite the exclusive fence with a new one without waiting 
for the old one to signal you open up the possibility for userspace to 
access freed up memory.
     This is a complete show stopper since it means that taking over the 
system is just a typing exercise.
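
Spelled out in code the rule is roughly this (sketch, with the dma_resv
lock held):

    struct dma_fence *old = dma_resv_get_excl(bo->base.resv);

    /* a new exclusive fence may only be installed if it is guaranteed
     * to signal after 'old', e.g. because the new job waits on it;
     * anything else silently drops the buffer-move dependency */
    dma_resv_add_excl_fence(bo->base.resv, new_fence);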


What you have done by allowing this in is ripping open a major security
hole for any DMA-buf import in i915 from all TTM based drivers.

This needs to be fixed ASAP, either by making i915 and all other
drivers doing this wait for the exclusive fence while importing a
DMA-buf, or by marking i915 and all other drivers as broken.

Sorry, but if you allowed that in you seriously have no idea what you 
are talking about here and where all of this originated from.

Regards,
Christian.


* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 16:42                 ` Christian König
@ 2021-06-18 17:20                   ` Daniel Vetter
  2021-06-18 18:01                     ` Christian König
  2021-06-18 18:20                   ` Daniel Stone
  1 sibling, 1 reply; 34+ messages in thread
From: Daniel Vetter @ 2021-06-18 17:20 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel@lists.freedesktop.org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On Fri, Jun 18, 2021 at 6:43 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 18.06.21 at 17:17, Daniel Vetter wrote:
> > [SNIP]
> > Ignoring _all_ fences is officially ok for pinned dma-buf. This is
> > what v4l does. Aside from that, it's definitely not just i915 that does
> > this even on the drm side; we have a few more drivers nowadays.
>
> No it seriously isn't. If drivers are doing this they are more than broken.
>
> See the comment in dma-resv.h
>
>   * Based on bo.c which bears the following copyright notice,
>   * but is dual licensed:
> ....
>
>
> The handling in ttm_bo.c is and always was that the exclusive fence is
> used for buffer moves.
>
> As I said multiple times now the *MAIN* purpose of the dma_resv object
> is memory management and *NOT* synchronization.
>
> Those restrictions come from the original design of TTM where the
> dma_resv object originated from.
>
> The resulting consequences are that:
>
> a) If you access the buffer without waiting for the exclusive fence you
> run into a potential information leak.
>      We kind of let that slip for V4L since they only access the buffers
> for writes, so you can't do any harm there.
>
> b) If you overwrite the exclusive fence with a new one without waiting
> for the old one to signal you open up the possibility for userspace to
> access freed up memory.
>      This is a complete show stopper since it means that taking over the
> system is just a typing exercise.
>
>
> What you have done by allowing this in is ripping open a major security
> hole for any DMA-buf import in i915 from all TTM based drivers.
>
> This needs to be fixed ASAP, either by making i915 and all other
> drivers doing this wait for the exclusive fence while importing a
> DMA-buf, or by marking i915 and all other drivers as broken.
>
> Sorry, but if you allowed that in you seriously have no idea what you
> are talking about here and where all of this originated from.

Dude, get a grip, seriously. dma-buf landed in 2011

commit d15bd7ee445d0702ad801fdaece348fdb79e6581
Author: Sumit Semwal <sumit.semwal@ti.com>
Date:   Mon Dec 26 14:53:15 2011 +0530

   dma-buf: Introduce dma buffer sharing mechanism

and drm prime landed in the same year

commit 3248877ea1796915419fba7c89315fdbf00cb56a
(airlied/drm-prime-dmabuf-initial)
Author: Dave Airlie <airlied@redhat.com>
Date:   Fri Nov 25 15:21:02 2011 +0000

   drm: base prime/dma-buf support (v5)

dma-resv was extracted much later

commit 786d7257e537da0674c02e16e3b30a44665d1cee
Author: Maarten Lankhorst <m.b.lankhorst@gmail.com>
Date:   Thu Jun 27 13:48:16 2013 +0200

   reservation: cross-device reservation support, v4

Maarten's patch only extracted the dma_resv stuff so it's there,
optionally. There was never any effort to roll this out to all the
existing drivers, of which there were plenty.

It is, and has been for 10 years, totally fine to access dma-buf
without looking at any fences at all. From your pov of a ttm driver
dma-resv is mainly used for memory management and not sync, but I
think that's also due to some reinterpretation of the actual sync
rules on your side. For everyone else the dma_resv attached to a
dma-buf has been about implicit sync only, nothing else.

_only_ when you have a dynamic importer/exporter can you assume that
the dma_resv fences must actually be obeyed. That's one of the reasons
why we had to make this a completely new mode (the other one was
locking, but they really tie together).

Wrt your problems:
a) needs to be fixed in drivers exporting buffers and failing to make
sure the memory is there by the time dma_buf_map_attachment returns.
b) needs to be fixed in the importers, and there's quite a few of
those. There's more than i915 here, which is why I think we should
have the dma_resv_add_shared_exclusive helper extracted from amdgpu.
Avoids hand-rolling this about 5 times (6 if we include the import
ioctl from Jason).
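
Very rough sketch of what I mean; names and details made up, the real
thing should be extracted from the amdgpu code:

    static int dma_resv_add_shared_exclusive(struct dma_resv *resv,
                                             struct dma_fence *fence)
    {
            struct dma_fence *old;
            struct dma_fence **fences;
            struct dma_fence_array *array;

            dma_resv_assert_held(resv);

            old = dma_resv_get_excl(resv);
            if (!old || dma_fence_is_signaled(old)) {
                    dma_resv_add_excl_fence(resv, fence);
                    return 0;
            }

            /* chain old + new together so the old dependency is never
             * dropped from the DAG */
            fences = kmalloc_array(2, sizeof(*fences), GFP_KERNEL);
            if (!fences)
                    return -ENOMEM;
            fences[0] = dma_fence_get(old);
            fences[1] = dma_fence_get(fence);

            array = dma_fence_array_create(2, fences,
                                           dma_fence_context_alloc(1),
                                           1, false);
            if (!array) {
                    dma_fence_put(fences[0]);
                    dma_fence_put(fences[1]);
                    kfree(fences);
                    return -ENOMEM;
            }

            dma_resv_add_excl_fence(resv, &array->base);
            dma_fence_put(&array->base);
            return 0;
    }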

Also, I've been trying to explain this ever since the entire dynamic
dma-buf thing started.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 17:20                   ` Daniel Vetter
@ 2021-06-18 18:01                     ` Christian König
  2021-06-18 18:45                       ` Daniel Vetter
  0 siblings, 1 reply; 34+ messages in thread
From: Christian König @ 2021-06-18 18:01 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel@lists.freedesktop.org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On 18.06.21 at 19:20, Daniel Vetter wrote:
> On Fri, Jun 18, 2021 at 6:43 PM Christian König
> <christian.koenig@amd.com> wrote:
>> On 18.06.21 at 17:17, Daniel Vetter wrote:
>>> [SNIP]
>>> Ignoring _all_ fences is officially ok for pinned dma-buf. This is
>>> what v4l does. Aside from that, it's definitely not just i915 that does
>>> this even on the drm side; we have a few more drivers nowadays.
>> [SNIP]
> Dude, get a grip, seriously. dma-buf landed in 2011
>
> commit d15bd7ee445d0702ad801fdaece348fdb79e6581
> Author: Sumit Semwal <sumit.semwal@ti.com>
> Date:   Mon Dec 26 14:53:15 2011 +0530
>
>     dma-buf: Introduce dma buffer sharing mechanism
>
> and drm prime landed in the same year
>
> commit 3248877ea1796915419fba7c89315fdbf00cb56a
> (airlied/drm-prime-dmabuf-initial)
> Author: Dave Airlie <airlied@redhat.com>
> Date:   Fri Nov 25 15:21:02 2011 +0000
>
>     drm: base prime/dma-buf support (v5)
>
> dma-resv was extracted much later
>
> commit 786d7257e537da0674c02e16e3b30a44665d1cee
> Author: Maarten Lankhorst <m.b.lankhorst@gmail.com>
> Date:   Thu Jun 27 13:48:16 2013 +0200
>
>     reservation: cross-device reservation support, v4
>
> Maarten's patch only extracted the dma_resv stuff so it's there,
> optionally. There was never any effort to roll this out to all the
> existing drivers, of which there were plenty.
>
> It is, and has been for 10 years, totally fine to access dma-buf
> without looking at any fences at all. From your pov of a ttm driver
> dma-resv is mainly used for memory management and not sync, but I
> think that's also due to some reinterpretation of the actual sync
> rules on your side. For everyone else the dma_resv attached to a
> dma-buf has been about implicit sync only, nothing else.

No, that was way before my time.

The whole thing was introduced with this commit here:

commit f2c24b83ae90292d315aa7ac029c6ce7929e01aa
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date:   Wed Apr 2 17:14:48 2014 +0200

     drm/ttm: flip the switch, and convert to dma_fence

     Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

  int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
....
-       bo->sync_obj = driver->sync_obj_ref(sync_obj);
+       reservation_object_add_excl_fence(bo->resv, fence);
         if (evict) {

Maarten replaced the bo->sync_obj reference with the dma_resv exclusive 
fence.

This means that we need to apply the sync_obj semantic to all drivers 
using a DMA-buf with its dma_resv object, otherwise you break imports 
from TTM drivers.

Since then and up till now the exclusive fence must be waited on and 
never replaced with anything which signals before the old fence.
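
On the importer side that semantic is essentially this before any
access (rough sketch):

    /* wait for the exclusive fence, i.e. what used to be
     * bo->sync_obj, before touching an imported buffer */
    long ret = dma_resv_wait_timeout_rcu(dmabuf->resv, false, true,
                                         MAX_SCHEDULE_TIMEOUT);
    if (ret < 0)
            return ret;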

Maarten and I think Thomas did that and I was always assuming that you 
know about this design decision.

It's absolutely not that this is my invention, I'm just telling you how
it always was.

Anyway this means we have a serious misunderstanding, and yes, now some
of our discussions about dynamic P2P suddenly make much more sense.

Regards,
Christian.


>
> > [SNIP]



* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 16:42                 ` Christian König
  2021-06-18 17:20                   ` Daniel Vetter
@ 2021-06-18 18:20                   ` Daniel Stone
  2021-06-18 18:44                     ` Christian König
  1 sibling, 1 reply; 34+ messages in thread
From: Daniel Stone @ 2021-06-18 18:20 UTC (permalink / raw)
  To: Christian König
  Cc: Christian König, Michel Dänzer, dri-devel,
	wayland-devel, ML mesa-dev, Dave Airlie

Sorry for the mobile reply, but V4L2 is absolutely not write-only; there has never been an intersection of V4L2 supporting dmabuf and not supporting reads.

I see your point about the heritage of dma_resv but it’s a red herring. It doesn’t matter who’s right, or who was first, or where the code was extracted from.

It’s well defined that amdgpu defines resv to be one thing, that every other non-TTM user defines it to be something very different, and that the other TTM users define it to be something in the middle.

We’ll never get to anything workable if we keep arguing who’s right. Everyone is wrong, because dma_resv doesn’t globally mean anything.

It seems clear that there are three classes of synchronisation barrier (not using the ‘f’ word here), in descending exclusion order:
  - memory management barriers (amdgpu exclusive fence / ttm_bo->moving)
  - implicit synchronisation write barriers (everyone else’s exclusive fences, amdgpu’s shared fences)
  - implicit synchronisation read barriers (everyone else’s shared fences, also amdgpu’s shared fences sometimes)

I don’t see a world in which these three uses can be reduced to two slots. What also isn’t clear to me though, is how the memory-management barriers can exclude all other access in the original proposal with purely userspace CS. Retaining the three separate modes also seems like a hard requirement to not completely break userspace, but then I don’t see how three separate slots would work if they need to be temporally ordered. amdgpu fixed this by redefining the meaning of the two slots, others fixed this by not doing one of the three modes.

So how do we square the circle without encoding a DAG into the kernel? Do the two slots need to become a single list which is ordered by time + ‘weight’ and flattened whenever modified? Something else?

Have a great weekend.

-d

> On 18 Jun 2021, at 5:43 pm, Christian König <christian.koenig@amd.com> wrote:
> 
> [SNIP]



* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 18:20                   ` Daniel Stone
@ 2021-06-18 18:44                     ` Christian König
  0 siblings, 0 replies; 34+ messages in thread
From: Christian König @ 2021-06-18 18:44 UTC (permalink / raw)
  To: Daniel Stone
  Cc: Christian König, Michel Dänzer, dri-devel,
	wayland-devel, ML mesa-dev, Dave Airlie

Hi Daniel,

Thanks for jumping in here.

And yes, you are absolutely right that we need to get this fixed and not
yell at each other because we have different understandings of things.

Your proposal sounds sane to me, but I wouldn't call it slots. Rather 
something like "use cases" since we can have multiple fences for each 
category I think.

And I see four here:

1. Internal kernel memory management. Everybody needs to wait for this, 
it's equal to bo->moving.
2. Writers for implicit sync, implicit sync readers should wait for them.
3. Readers for implicit sync, implicit sync writers should wait for them.
4. Things like TLB flushes and page table updates, no implicit sync but 
memory management must take them into account before moving/freeing 
backing store.
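
As a rough sketch, with made-up names:

    /* purely illustrative, not actual kernel code */
    enum dma_resv_usage {
            DMA_RESV_USAGE_KERNEL,   /* 1. memory management, bo->moving */
            DMA_RESV_USAGE_WRITE,    /* 2. implicit sync writers */
            DMA_RESV_USAGE_READ,     /* 3. implicit sync readers */
            DMA_RESV_USAGE_BOOKKEEP, /* 4. TLB flushes, page tables */
    };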

Happy weekend and hopefully not so much heat guys.

Cheers,
Christian.

On 18.06.21 at 20:20, Daniel Stone wrote:
> Sorry for the mobile reply, but V4L2 is absolutely not write-only; there has never been an intersection of V4L2 supporting dmabuf and not supporting reads.
>
> I see your point about the heritage of dma_resv but it’s a red herring. It doesn’t matter who’s right, or who was first, or where the code was extracted from.
>
> It’s well defined that amdgpu defines resv to be one thing, that every other non-TTM user defines it to be something very different, and that the other TTM users define it to be something in the middle.
>
> We’ll never get to anything workable if we keep arguing who’s right. Everyone is wrong, because dma_resv doesn’t globally mean anything.
>
> It seems clear that there are three classes of synchronisation barrier (not using the ‘f’ word here), in descending exclusion order:
>    - memory management barriers (amdgpu exclusive fence / ttm_bo->moving)
>    - implicit synchronisation write barriers (everyone else’s exclusive fences, amdgpu’s shared fences)
>    - implicit synchronisation read barriers (everyone else’s shared fences, also amdgpu’s shared fences sometimes)
>
> I don’t see a world in which these three uses can be reduced to two slots. What also isn’t clear to me though, is how the memory-management barriers can exclude all other access in the original proposal with purely userspace CS. Retaining the three separate modes also seems like a hard requirement to not completely break userspace, but then I don’t see how three separate slots would work if they need to be temporally ordered. amdgpu fixed this by redefining the meaning of the two slots, others fixed this by not doing one of the three modes.
>
> So how do we square the circle without encoding a DAG into the kernel? Do the two slots need to become a single list which is ordered by time + ‘weight’ and flattened whenever modified? Something else?
>
> Have a great weekend.
>
> -d
>
>> [SNIP]



* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 18:01                     ` Christian König
@ 2021-06-18 18:45                       ` Daniel Vetter
  2021-06-21 10:16                         ` Christian König
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Vetter @ 2021-06-18 18:45 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Stone, Christian König, Michel Dänzer,
	dri-devel, wayland-devel@lists.freedesktop.org,
	Jason Ekstrand, Dave Airlie, ML mesa-dev

On Fri, Jun 18, 2021 at 8:02 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 18.06.21 at 19:20, Daniel Vetter wrote:
> > On Fri, Jun 18, 2021 at 6:43 PM Christian König
> > <christian.koenig@amd.com> wrote:
> >> On 18.06.21 at 17:17, Daniel Vetter wrote:
> >>> [SNIP]
> >>> Ignoring _all_ fences is officially ok for pinned dma-buf. This is
> >>> what v4l does. Aside from that, it's definitely not just i915 that does
> >>> this even on the drm side; we have a few more drivers nowadays.
> >> [SNIP]
> > Dude, get a grip, seriously. dma-buf landed in 2011
> >
> > commit d15bd7ee445d0702ad801fdaece348fdb79e6581
> > Author: Sumit Semwal <sumit.semwal@ti.com>
> > Date:   Mon Dec 26 14:53:15 2011 +0530
> >
> >     dma-buf: Introduce dma buffer sharing mechanism
> >
> > and drm prime landed in the same year
> >
> > commit 3248877ea1796915419fba7c89315fdbf00cb56a
> > (airlied/drm-prime-dmabuf-initial)
> > Author: Dave Airlie <airlied@redhat.com>
> > Date:   Fri Nov 25 15:21:02 2011 +0000
> >
> >     drm: base prime/dma-buf support (v5)
> >
> > dma-resv was extracted much later
> >
> > commit 786d7257e537da0674c02e16e3b30a44665d1cee
> > Author: Maarten Lankhorst <m.b.lankhorst@gmail.com>
> > Date:   Thu Jun 27 13:48:16 2013 +0200
> >
> >     reservation: cross-device reservation support, v4
> >
> > Maarten's patch only extracted the dma_resv stuff so it's there,
> > optionally. There was never any effort to roll this out to all the
> > existing drivers, of which there were plenty.
> >
> > It is, and has been for 10 years, totally fine to access dma-buf
> > without looking at any fences at all. From your pov of a ttm driver
> > dma-resv is mainly used for memory management and not sync, but I
> > think that's also due to some reinterpretation of the actual sync
> > rules on your side. For everyone else the dma_resv attached to a
> > dma-buf has been about implicit sync only, nothing else.
>
> No, that was way before my time.
>
> The whole thing was introduced with this commit here:
>
> commit f2c24b83ae90292d315aa7ac029c6ce7929e01aa
> Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> Date:   Wed Apr 2 17:14:48 2014 +0200
>
>      drm/ttm: flip the switch, and convert to dma_fence
>
>      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>
>   int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
> ....
> -       bo->sync_obj = driver->sync_obj_ref(sync_obj);
> +       reservation_object_add_excl_fence(bo->resv, fence);
>          if (evict) {
>
> Maarten replaced the bo->sync_obj reference with the dma_resv exclusive
> fence.
>
> This means that we need to apply the sync_obj semantic to all drivers
> using a DMA-buf with its dma_resv object, otherwise you break imports
> from TTM drivers.
>
> Since then and up till now the exclusive fence must be waited on and
> never replaced with anything which signals before the old fence.
>
> Maarten and I think Thomas did that and I was always assuming that you
> know about this design decision.

Surprisingly I do actually know this.

Still the commit you cite did _not_ change any of the rules around
dma_buf: Importers have _no_ obligation to obey the exclusive fence,
because the buffer is pinned. None of the work that Maarten has done
has fundamentally changed this contract in any way.

If amdgpu (or any other ttm based driver) hands back an sgt without
waiting for ttm_bo->moving or the exclusive fence first, then that's a
bug we need to fix in these drivers. But if ttm based drivers did get
this wrong, then they got this wrong both before and after the switch
over to using dma_resv - this bug would go back all the way to Dave's
introduction of drm_prime.c and support for that.
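
For a ttm based exporter that means roughly this in the map callback
(sketch, made-up helper name):

    static struct sg_table *my_map_dma_buf(struct dma_buf_attachment *attach,
                                           enum dma_data_direction dir)
    {
            struct ttm_buffer_object *bo = attach->dmabuf->priv;
            long ret;

            /* backing store must be idle and fully populated before a
             * pinned importer gets to see it */
            if (bo->moving) {
                    ret = dma_fence_wait(bo->moving, false);
                    if (ret)
                            return ERR_PTR(ret);
            }

            return my_build_sgt(bo, dir); /* made-up helper */
    }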

The only thing which importers have to do is not wreck the DAG nature
of the dma_resv fences by dropping dependencies. Currently there's a
handful of drivers which break this (introduced over the last few
years), and I have it somewhere on my todo list to audit&fix them all.

The goal with extracting dma_resv from ttm was to make implicit sync
work and get rid of some terrible stalls on the userspace side.
Eventually it was also the goal to make truly dynamic buffer
reservation possible, but that took another 6 or so years to realize
with your work. And we had to make dynamic dma-buf very much opt-in,
because auditing all the users is very hard work and no one
volunteered. And for dynamic dma-buf the rule is that the exclusive
fence must _never_ be ignored, and the two drivers supporting it (mlx5
and amdgpu) obey that.

So yeah for ttm drivers dma_resv is primarily for memory management,
with a side effect of also supporting implicit sync.

For everyone else (and this includes a pile of render drivers, all the
atomic kms drivers, v4l and I have no idea what else on top) dma_resv
was only ever about implicit sync, and it can be ignored. And it (the
implicit sync side) has to be ignored to be able to support vulkan
winsys buffers correctly without stalling where we shouldn't. Also we
have to ignore it on atomic kms side too (and depending upon whether
writeback is supported atomic kms is perfectly capable of reading out
any buffer passed to it).

> It's absolutely not that this is my invention, I'm just telling you how
> it always was.
>
> Anyway this means we have a serious misunderstanding, and yes, now some
> of our discussions about dynamic P2P suddenly make much more sense.

Yeah I think at least we finally managed to get this across.

Anyway I guess w/e for me now, otherwise we'll probably resort to
throwing chairs :-) I'm dearly hoping the thunderstorms all around me
actually get all the way to me, because it's way, way too hot here
right now.

Cheers, Daniel

> Regards,
> Christian.
>
>
> > [SNIP]
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-18 18:45                       ` Daniel Vetter
@ 2021-06-21 10:16                         ` Christian König
  2021-06-21 13:57                           ` Daniel Vetter
  0 siblings, 1 reply; 34+ messages in thread
From: Christian König @ 2021-06-21 10:16 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: Daniel Stone, Michel Dänzer, dri-devel,
	wayland-devel@lists.freedesktop.org, Jason Ekstrand,
	Dave Airlie, ML mesa-dev

On 18.06.21 at 20:45, Daniel Vetter wrote:
> On Fri, Jun 18, 2021 at 8:02 PM Christian König
> <christian.koenig@amd.com> wrote:
>> On 18.06.21 at 19:20, Daniel Vetter wrote:
>> [SNIP]
>> The whole thing was introduced with this commit here:
>>
>> commit f2c24b83ae90292d315aa7ac029c6ce7929e01aa
>> Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> Date:   Wed Apr 2 17:14:48 2014 +0200
>>
>>       drm/ttm: flip the switch, and convert to dma_fence
>>
>>       Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>>
>>    int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
>> ....
>> -       bo->sync_obj = driver->sync_obj_ref(sync_obj);
>> +       reservation_object_add_excl_fence(bo->resv, fence);
>>           if (evict) {
>>
>> Maarten replaced the bo->sync_obj reference with the dma_resv exclusive
>> fence.
>>
>> This means that we need to apply the sync_obj semantic to all drivers
>> using a DMA-buf with its dma_resv object, otherwise you break imports
>> from TTM drivers.
>>
>> Since then and up till now the exclusive fence must be waited on and
>> never replaced with anything which signals before the old fence.
>>
>> Maarten and I think Thomas did that and I was always assuming that you
>> know about this design decision.
> Surprisingly I do actually know this.
>
> Still the commit you cite did _not_ change any of the rules around
> dma_buf: Importers have _no_ obligation to obey the exclusive fence,
> because the buffer is pinned. None of the work that Maarten has done
> has fundamentally changed this contract in any way.

Well I now agree that the rules around dma_resv are different than I 
thought, but this change should have raised more eyebrows.

The problem is that this completely broke interop with all drivers using 
TTM and I think it might even explain some bug reports.

I re-introduced the moving fence by adding bo->moving a few years after 
the initial introduction of dma_resv, but that was just to work around 
performance problems introduced by using the exclusive fence for both 
use cases.

> If amdgpu (or any other ttm based driver) hands back an sgt without
> waiting for ttm_bo->moving or the exclusive fence first, then that's a
> bug we need to fix in these drivers. But if ttm based drivers did get
> this wrong, then they got this wrong both before and after the switch
> over to using dma_resv - this bug would go back all the way to Dave's
> introduction of drm_prime.c and support for that.

I'm not 100% sure, but I think before the switch to the dma_resv object 
drivers just waited for the BOs to become idle and that should have 
prevented this.

Anyway let's stop discussing history and move forward. Sending patches 
for all affected TTM drivers with CC: stable tags in a minute.


> > The only thing which importers have to do is not wreck the DAG nature
> of the dma_resv fences and drop dependencies. Currently there's a
> handful of drivers which break this (introduced over the last few
> years), and I have it somewhere on my todo list to audit&fix them all.

Please give that some priority.

Ignoring the moving fence is an information leak, but messing up the DAG 
gives you access to freed up memory.

> The goal with extracting dma_resv from ttm was to make implicit sync
> working and get rid of some terrible stalls on the userspace side.
> Eventually it was also the goal to make truly dynamic buffer
> reservation possible, but that took another 6 or so years to realize
> with your work. And we had to make dynamic dma-buf very much opt-in,
> because auditing all the users is very hard work and no one
> volunteered. And for dynamic dma-buf the rule is that the exclusive
> fence must _never_ be ignored, and the two drivers supporting it (mlx5
> and amdgpu) obey that.
>
> So yeah for ttm drivers dma_resv is primarily for memory management,
> with a side effect of also supporting implicit sync.
>
> For everyone else (and this includes a pile of render drivers, all the
> atomic kms drivers, v4l and I have no idea what else on top) dma_resv
> was only ever about implicit sync, and it can be ignored. And it (the
> implicit sync side) has to be ignored to be able to support vulkan
> winsys buffers correctly without stalling where we shouldn't. Also we
> have to ignore it on atomic kms side too (and depending upon whether
> writeback is supported atomic kms is perfectly capable of reading out
> any buffer passed to it).

Oh! That might actually explain some issues, but that just completely 
breaks when TTM based drivers use atomic.

In other words, on first use it is actually rather likely for TTM based 
drivers to need to move the buffer around so that scanout is possible.

And that in turn means you need to wait for this move to finish even if 
you have an explicit fence to wait for. IIRC amdgpu rolled its own 
implementation of this and radeon doesn't have atomic, but nouveau is 
most likely broken.

So we do need a better solution for this sooner or later.

>> It's absolutely not that this is my invention, I'm just telling you how
>> it has always been.
>>
>> Anyway this means we have a serious misunderstanding and yes, now some
>> of our discussions about dynamic P2P suddenly make much more sense.
> Yeah I think at least we finally managed to get this across.
>
> Anyway I guess w/e for me now, otherwise we'll probably resort to
> throwing chairs :-) I'm dearly hoping the thunderstorms all around me
> actually get all the way to me, because it's way, way too hot here
> right now.

Well it's probably rather Dave or Linus who might start to throw chairs 
at us for not getting this straight sooner.

At least the weather is getting more bearable.

Cheers,
Christian.

>
> Cheers, Daniel
>
>> Regards,
>> Christian.
>>
>>
>>> _only_ when you have a dynamic importer/exporter can you assume that
>>> the dma_resv fences must actually be obeyed. That's one of the reasons
>>> why we had to make this a completely new mode (the other one was
>>> locking, but they really tie together).
>>>
>>> Wrt your problems:
>>> a) needs to be fixed in drivers exporting buffers and failing to make
>>> sure the memory is there by the time dma_buf_map_attachment returns.
>>> b) needs to be fixed in the importers, and there's quite a few of
>>> those. There's more than i915 here, which is why I think we should
>>> have the dma_resv_add_shared_exclusive helper extracted from amdgpu.
>>> Avoids hand-rolling this about 5 times (6 if we include the import
>>> ioctl from Jason).
>>>
>>> Also I've been trying to explain this ever since the entire
>>> dynamic dma-buf thing started.
>>> -Daniel
>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-21 10:16                         ` Christian König
@ 2021-06-21 13:57                           ` Daniel Vetter
  0 siblings, 0 replies; 34+ messages in thread
From: Daniel Vetter @ 2021-06-21 13:57 UTC (permalink / raw)
  To: Christian König
  Cc: Michel Dänzer, dri-devel,
	wayland-devel@lists.freedesktop.org, Jason Ekstrand,
	Dave Airlie, ML mesa-dev, Christian König, Daniel Stone

On Mon, Jun 21, 2021 at 12:16:55PM +0200, Christian König wrote:
> Am 18.06.21 um 20:45 schrieb Daniel Vetter:
> > On Fri, Jun 18, 2021 at 8:02 PM Christian König
> > <christian.koenig@amd.com> wrote:
> > > Am 18.06.21 um 19:20 schrieb Daniel Vetter:
> > > [SNIP]
> > > The whole thing was introduced with this commit here:
> > > 
> > > commit f2c24b83ae90292d315aa7ac029c6ce7929e01aa
> > > Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> > > Date:   Wed Apr 2 17:14:48 2014 +0200
> > > 
> > >       drm/ttm: flip the switch, and convert to dma_fence
> > > 
> > >       Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> > > 
> > >    int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
> > > ....
> > > -       bo->sync_obj = driver->sync_obj_ref(sync_obj);
> > > +       reservation_object_add_excl_fence(bo->resv, fence);
> > >           if (evict) {
> > > 
> > > Maarten replaced the bo->sync_obj reference with the dma_resv exclusive
> > > fence.
> > > 
> > > This means that we need to apply the sync_obj semantic to all drivers
> > > using a DMA-buf with its dma_resv object, otherwise you break imports
> > > from TTM drivers.
> > > 
> > > Since then and up till now the exclusive fence must be waited on and
> > > never replaced with anything which signals before the old fence.
> > > 
> > > Maarten and I think Thomas did that and I was always assuming that you
> > > know about this design decision.
> > Surprisingly I do actually know this.
> > 
> > Still the commit you cite did _not_ change any of the rules around
> > dma_buf: Importers have _no_ obligation to obey the exclusive fence,
> > because the buffer is pinned. None of the work that Maarten has done
> > has fundamentally changed this contract in any way.
> 
> Well I now agree that the rules around dma_resv are different than I
> thought, but this change should have raised more eyebrows.
> 
> The problem is that this completely broke interop with all drivers using TTM
> and I think it might even explain some bug reports.
> 
> I re-introduced the moving fence by adding bo->moving a few years after the
> initial introduction of dma_resv, but that was just to work around
> performance problems introduced by using the exclusive fence for both use
> cases.

Ok, that part is indeed not something I knew.

> > If amdgpu (or any other ttm based driver) hands back an sgt without
> > waiting for ttm_bo->moving or the exclusive fence first, then that's a
> > bug we need to fix in these drivers. But if ttm based drivers did get
> > this wrong, then they got this wrong both before and after the switch
> > over to using dma_resv - this bug would go back all the way to Dave's
> > introduction of drm_prime.c and support for that.
> 
> I'm not 100% sure, but I think before the switch to the dma_resv object
> drivers just waited for the BOs to become idle and that should have
> prevented this.
> 
> Anyway let's stop discussing history and move forward. Sending patches for
> all affected TTM drivers with CC: stable tags in a minute.
> 
> 
> > The only thing which importers have to do is not wreck the DAG nature
> > of the dma_resv fences and drop dependencies. Currently there's a
> > handful of drivers which break this (introduced over the last few
> > years), and I have it somewhere on my todo list to audit&fix them all.
> 
> Please give that some priority.
> 
> Ignoring the moving fence is an information leak, but messing up the DAG
> gives you access to freed up memory.

Yeah will try to. I've also been hung up a bit on how to fix that, but I
think just closing the DAG-breakage is simplest. Any userspace which then
complains about the additional sync that causes would then be motivated to
look into the import ioctl Jason has. And I think the impact in practice
should be minimal, aside from some corner cases.

> > The goal with extracting dma_resv from ttm was to make implicit sync
> > working and get rid of some terrible stalls on the userspace side.
> > Eventually it was also the goal to make truly dynamic buffer
> > reservation possible, but that took another 6 or so years to realize
> > with your work. And we had to make dynamic dma-buf very much opt-in,
> > because auditing all the users is very hard work and no one
> > volunteered. And for dynamic dma-buf the rule is that the exclusive
> > fence must _never_ be ignored, and the two drivers supporting it (mlx5
> > and amdgpu) obey that.
> > 
> > So yeah for ttm drivers dma_resv is primarily for memory management,
> > with a side effect of also supporting implicit sync.
> > 
> > For everyone else (and this includes a pile of render drivers, all the
> > atomic kms drivers, v4l and I have no idea what else on top) dma_resv
> > was only ever about implicit sync, and it can be ignored. And it (the
> > implicit sync side) has to be ignored to be able to support vulkan
> > winsys buffers correctly without stalling where we shouldn't. Also we
> > have to ignore it on atomic kms side too (and depending upon whether
> > writeback is supported atomic kms is perfectly capable of reading out
> > any buffer passed to it).
> 
> Oh! That might actually explain some issues, but that just completely breaks
> when TTM based drivers use atomic.
> 
> In other words, on first use it is actually rather likely for TTM based
> drivers to need to move the buffer around so that scanout is possible.
> 
> And that in turn means you need to wait for this move to finish even if you
> have an explicit fence to wait for. IIRC amdgpu rolled its own
> implementation of this and radeon doesn't have atomic, but nouveau is
> most likely broken.
> 
> So we do need a better solution for this sooner or later.

So you're still allowed to have additional fences, but if you use the
default helpers then the explicit fence will be used instead of the
exclusive fence from the dma_resv object. Also with the default helpers
we only pick out the exclusive fence from the first object, in case you
e.g. supply a YUV buffer as multiple planes in multiple buffers.

But yeah, there are probably some bugs here.
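
For reference, the default helper logic is roughly the below (a sketch
from memory, the exact function names have been renamed a few times
since):

static int prepare_fb_sketch(struct drm_plane *plane,
			     struct drm_plane_state *state)
{
	struct drm_gem_object *obj;

	if (!state->fb)
		return 0;

	/* An explicit fence (IN_FENCE_FD) was supplied: the implicit
	 * exclusive fence is not looked at at all. */
	if (state->fence)
		return 0;

	/* Implicit sync: only the exclusive fence of the *first* GEM
	 * object is used, even for multi-plane YUV framebuffers. */
	obj = drm_gem_fb_get_obj(state->fb, 0);
	state->fence = dma_resv_get_excl_unlocked(obj->resv);

	return 0;
}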

> > > It's absolutely not that this is my invention, I'm just telling you how
> > > it has always been.
> > > 
> > > Anyway this means we have a serious misunderstanding and yes, now some
> > > of our discussions about dynamic P2P suddenly make much more sense.
> > Yeah I think at least we finally managed to get this across.
> > 
> > Anyway I guess w/e for me now, otherwise we'll probably resort to
> > throwing chairs :-) I'm dearly hoping the thunderstorms all around me
> > actually get all the way to me, because it's way, way too hot here
> > right now.
> 
> Well it's probably rather Dave or Linus who might start to throw chairs at
> us for not getting this straight sooner.
> 
> At least the weather is getting more bearable.

Yeah same here, I can put a t-shirt back on without dying!
-Daniel

> 
> Cheers,
> Christian.
> 
> > 
> > Cheers, Daniel
> > 
> > > Regards,
> > > Christian.
> > > 
> > > 
> > > > _only_ when you have a dynamic importer/exporter can you assume that
> > > > the dma_resv fences must actually be obeyed. That's one of the reasons
> > > > why we had to make this a completely new mode (the other one was
> > > > locking, but they really tie together).
> > > > 
> > > > Wrt your problems:
> > > > a) needs to be fixed in drivers exporting buffers and failing to make
> > > > sure the memory is there by the time dma_buf_map_attachment returns.
> > > > b) needs to be fixed in the importers, and there's quite a few of
> > > > those. There's more than i915 here, which is why I think we should
> > > > have the dma_resv_add_shared_exclusive helper extracted from amdgpu.
> > > > Avoids hand-rolling this about 5 times (6 if we include the import
> > > > ioctl from Jason).
> > > > 
> > > > Also I've been trying to explain this ever since the entire
> > > > dynamic dma-buf thing started.
> > > > -Daniel
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/6] dma-buf: Add an API for exporting sync files (v12)
  2021-06-10 21:09 ` [PATCH 4/6] dma-buf: Add an API for exporting sync files (v12) Jason Ekstrand
  2021-06-13 18:26   ` Jason Ekstrand
@ 2021-10-20 20:31   ` Simon Ser
  1 sibling, 0 replies; 34+ messages in thread
From: Simon Ser @ 2021-10-20 20:31 UTC (permalink / raw)
  To: Jason Ekstrand
  Cc: dri-devel, Christian König, Daniel Vetter, Sumit Semwal,
	Maarten Lankhorst

FWIW, I'm using this IOCTL in a wlroots patchset [1].

To detect support for this IOCTL, is there anything better than creating a
DMA-BUF and checking for ENOTTY? I'd like to disable explicit sync at init-time
if this is missing.

[1]: https://github.com/swaywm/wlroots/pull/3282
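
For context, what I have right now looks roughly like the below (a
sketch only, error handling elided; it assumes the probe buffer comes
from gbm, as in wlroots):

#include <errno.h>
#include <stdbool.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <gbm.h>
#include <linux/dma-buf.h>

/* Returns false only on a definite ENOTTY, i.e. when the kernel doesn't
 * know about DMA_BUF_IOCTL_EXPORT_SYNC_FILE at all. */
static bool export_sync_file_supported(struct gbm_device *gbm)
{
	struct gbm_bo *bo = gbm_bo_create(gbm, 1, 1, GBM_FORMAT_XRGB8888,
					  GBM_BO_USE_LINEAR);
	int dmabuf_fd = gbm_bo_get_fd(bo);
	struct dma_buf_export_sync_file arg = {
		.flags = DMA_BUF_SYNC_READ,
		.fd = -1,
	};
	int ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg);
	bool supported = ret == 0 || errno != ENOTTY;

	if (ret == 0)
		close(arg.fd);
	close(dmabuf_fd);
	gbm_bo_destroy(bo);
	return supported;
}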

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 6/6] RFC: dma-buf: Add an API for importing sync files (v7)
  2021-06-10 21:09 ` [PATCH 6/6] RFC: dma-buf: Add an API for importing sync files (v7) Jason Ekstrand
@ 2022-03-22 15:02   ` msizanoen
  0 siblings, 0 replies; 34+ messages in thread
From: msizanoen @ 2022-03-22 15:02 UTC (permalink / raw)
  To: Jason Ekstrand, dri-devel; +Cc: Daniel Vetter, Christian König


On 6/11/21 04:09, Jason Ekstrand wrote:
> This patch is analogous to the previous sync file export patch in that
> it allows you to import a sync_file into a dma-buf.  Unlike the previous
> patch, however, this does add genuinely new functionality to dma-buf.
> Without this, the only way to attach a sync_file to a dma-buf is to
> submit a batch to your driver of choice which waits on the sync_file and
> claims to write to the dma-buf.  Even if said batch is a no-op, a submit
> is typically way more overhead than just attaching a fence.  A submit
> may also imply extra synchronization with other work because it happens
> on a hardware queue.
>
> In the Vulkan world, this is useful for dealing with the out-fence from
> vkQueuePresent.  Current Linux window-systems (X11, Wayland, etc.) all
> rely on dma-buf implicit sync.  Since Vulkan is an explicit sync API, we
> get a set of fences (VkSemaphores) in vkQueuePresent and have to stash
> those as an exclusive (write) fence on the dma-buf.  We handle it in
> Mesa today with the above mentioned dummy submit trick.  This ioctl
> would allow us to set it directly without the dummy submit.
>
> This may also open up possibilities for GPU drivers to move away from
> implicit sync for their kernel driver uAPI and instead provide sync
> files and rely on dma-buf import/export for communicating with other
> implicit sync clients.
>
> We make the explicit choice here to only allow setting RW fences which
> translates to an exclusive fence on the dma_resv.  There's no use for
> read-only fences for communicating with other implicit sync userspace
> and any such attempts are likely to be racy at best.  When we go to
> insert the RW fence, the actual fence we set as the new exclusive fence
> is a combination of the sync_file provided by the user and all the other
> fences on the dma_resv.  This ensures that the newly added exclusive
> fence will never signal before the old one would have and ensures that
> we don't break any dma_resv contracts.  We require userspace to specify
> RW in the flags for symmetry with the export ioctl and in case we ever
> want to support read fences in the future.
>
> There is one downside here that's worth documenting:  If two clients
> writing to the same dma-buf using this API race with each other, their
> actions on the dma-buf may happen in parallel or in an undefined order.
> Both with and without this API, the pattern is the same:  Collect all
> the fences on dma-buf, submit work which depends on said fences, and
> then set a new exclusive (write) fence on the dma-buf which depends on
> said work.  The difference is that, when it's all handled by the GPU
> driver's submit ioctl, the three operations happen atomically under the
> dma_resv lock.  If two userspace submits race, one will happen before
> the other.  You aren't guaranteed which but you are guaranteed that
> they're strictly ordered.  If userspace manages the fences itself, then
> these three operations happen separately and the two render operations
> may happen genuinely in parallel or get interleaved.  However, this is a
> case of userspace racing with itself.  As long as we ensure userspace
> can't back the kernel into a corner, it should be fine.
>
> v2 (Jason Ekstrand):
>   - Use a wrapper dma_fence_array of all fences including the new one
>     when importing an exclusive fence.
>
> v3 (Jason Ekstrand):
>   - Lock around setting shared fences as well as exclusive
>   - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
>   - Initialize ret to 0 in dma_buf_wait_sync_file
>
> v4 (Jason Ekstrand):
>   - Use the new dma_resv_get_singleton helper
>
> v5 (Jason Ekstrand):
>   - Rename the IOCTLs to import/export rather than wait/signal
>   - Drop the WRITE flag and always get/set the exclusive fence
>
> v6 (Jason Ekstrand):
>   - Split import and export into separate patches
>   - New commit message
>
> v7 (Daniel Vetter):
>   - Fix the uapi header to use the right struct in the ioctl
>   - Use a separate dma_buf_import_sync_file struct
>   - Add kerneldoc for dma_buf_import_sync_file
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> ---
>   drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
>   include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
>   2 files changed, 58 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 831828d71b646..88afd723015a2 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
>   	put_unused_fd(fd);
>   	return ret;
>   }
> +
> +static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
> +				     const void __user *user_data)
> +{
> +	struct dma_buf_import_sync_file arg;
> +	struct dma_fence *fence, *singleton = NULL;
> +	int ret = 0;
> +
> +	if (copy_from_user(&arg, user_data, sizeof(arg)))
> +		return -EFAULT;
> +
> +	if (arg.flags != DMA_BUF_SYNC_RW)
> +		return -EINVAL;
> +
> +	fence = sync_file_get_fence(arg.fd);
> +	if (!fence)
> +		return -EINVAL;
> +
> +	dma_resv_lock(dmabuf->resv, NULL);
> +
> +	singleton = dma_resv_get_singleton(dmabuf->resv, fence);
> +	if (IS_ERR(singleton)) {
> +		ret = PTR_ERR(singleton);
> +	} else if (singleton) {
> +		dma_resv_add_excl_fence(dmabuf->resv, singleton);
> +		dma_resv_add_shared_fence(dmabuf->resv, singleton);
Shouldn't there be a dma_fence_put(singleton) here? As far as I can tell
both dma_resv_add_excl_fence() and dma_resv_add_shared_fence() take their
own references, so the reference returned by dma_resv_get_singleton()
looks like it gets leaked.
> +	}
> +
> +	dma_resv_unlock(dmabuf->resv);
> +
> +	dma_fence_put(fence);
> +
> +	return ret;
> +}
>   #endif
>   
>   static long dma_buf_ioctl(struct file *file,
> @@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file,
>   #if IS_ENABLED(CONFIG_SYNC_FILE)
>   	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
>   		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
> +	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
> +		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
>   #endif
>   
>   	default:
> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
> index 82f12a4640403..7382fd67351ba 100644
> --- a/include/uapi/linux/dma-buf.h
> +++ b/include/uapi/linux/dma-buf.h
> @@ -115,6 +115,27 @@ struct dma_buf_export_sync_file {
>   	__s32 fd;
>   };
>   
> +/**
> + * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
> + *
> + * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
> + * sync_file into a dma-buf for the purposes of implicit synchronization
> + * with other dma-buf consumers.  This allows clients using explicitly
> + * synchronized APIs such as Vulkan to inter-op with dma-buf consumers
> + * which expect implicit synchronization such as OpenGL or most media
> + * drivers/video.
> + */
> +struct dma_buf_import_sync_file {
> +	/**
> +	 * @flags: Read/write flags
> +	 *
> +	 * Must be DMA_BUF_SYNC_RW.
> +	 */
> +	__u32 flags;
> +	/** @fd: Sync file descriptor */
> +	__s32 fd;
> +};
> +
>   #define DMA_BUF_BASE		'b'
>   #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>   
> @@ -125,5 +146,6 @@ struct dma_buf_export_sync_file {
>   #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
>   #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
>   #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
> +#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
>   
>   #endif
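
For reference, the userspace pattern this export/import pair enables is
roughly the below (a sketch; submit_rendering() is a stand-in for the
driver-specific submit path that waits on one sync_file fd and returns
another):

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static int update_buffer_fences(int dmabuf_fd)
{
	/* 1. Snapshot all fences currently on the buffer. */
	struct dma_buf_export_sync_file exp = {
		.flags = DMA_BUF_SYNC_RW,
		.fd = -1,
	};
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &exp) < 0)
		return -1;

	/* 2. Submit rendering which waits on exp.fd (hypothetical,
	 *    driver/API specific). */
	int out_fence_fd = submit_rendering(exp.fd);

	/* 3. Attach the result as the new exclusive fence; the kernel
	 *    merges it with the existing fences so nothing signals
	 *    earlier than it would have before. */
	struct dma_buf_import_sync_file imp = {
		.flags = DMA_BUF_SYNC_RW,
		.fd = out_fence_fd,
	};
	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &imp);
}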

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2022-03-23 11:02 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-10 21:09 [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12) Jason Ekstrand
2021-06-10 21:09 ` [PATCH 1/6] dma-buf: Add dma_fence_array_for_each (v2) Jason Ekstrand
2021-06-10 21:09 ` [PATCH 2/6] dma-buf: Add dma_resv_get_singleton (v6) Jason Ekstrand
2021-06-11  7:11   ` Christian König
2021-06-10 21:09 ` [PATCH 3/6] dma-buf: Document DMA_BUF_IOCTL_SYNC (v2) Jason Ekstrand
2021-06-10 21:14   ` Jason Ekstrand
2021-06-10 21:14     ` [Intel-gfx] " Jason Ekstrand
2021-06-15  7:10     ` Pekka Paalanen
2021-06-15  7:10       ` [Intel-gfx] " Pekka Paalanen
2021-06-11  7:24   ` Christian König
2021-06-10 21:09 ` [PATCH 4/6] dma-buf: Add an API for exporting sync files (v12) Jason Ekstrand
2021-06-13 18:26   ` Jason Ekstrand
2021-10-20 20:31   ` Simon Ser
2021-06-10 21:09 ` [PATCH 5/6] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked Jason Ekstrand
2021-06-11  7:44   ` Christian König
2021-06-10 21:09 ` [PATCH 6/6] RFC: dma-buf: Add an API for importing sync files (v7) Jason Ekstrand
2022-03-22 15:02   ` msizanoen
2021-06-15  8:41 ` [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12) Christian König
2021-06-16 18:30   ` Jason Ekstrand
2021-06-17  7:37     ` Christian König
2021-06-17 19:58       ` Daniel Vetter
2021-06-18  9:15         ` Christian König
2021-06-18 13:54           ` Jason Ekstrand
2021-06-18 14:31           ` Daniel Vetter
2021-06-18 14:42             ` Christian König
2021-06-18 15:17               ` Daniel Vetter
2021-06-18 16:42                 ` Christian König
2021-06-18 17:20                   ` Daniel Vetter
2021-06-18 18:01                     ` Christian König
2021-06-18 18:45                       ` Daniel Vetter
2021-06-21 10:16                         ` Christian König
2021-06-21 13:57                           ` Daniel Vetter
2021-06-18 18:20                   ` Daniel Stone
2021-06-18 18:44                     ` Christian König
