* [REPOST PATCH 0/8] fence synchronization patches
From: Maarten Lankhorst @ 2014-06-18 10:36 UTC
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

The following series implements the fence mechanism and converts dma-buf
and Android sync to use it. Patches 5 and 6 add support for polling
to dma-buf, blocking until all fences are signaled.
Patches 7 and 8 provide some helpers and allow use of RCU in the
reservation API. The helpers make it easier to convert TTM, and
make dealing with RCU less painful.

The patches are slightly updated to fix compilation with armada and the
new atomic primitives, but are otherwise identical.

---

Maarten Lankhorst (8):
      fence: dma-buf cross-device synchronization (v17)
      seqno-fence: Hardware dma-buf implementation of fencing (v5)
      dma-buf: use reservation objects
      android: convert sync to fence api, v5
      reservation: add support for fences to enable cross-device synchronisation
      dma-buf: add poll support, v3
      reservation: update api and add some helpers
      reservation: add support for read-only access using rcu


 Documentation/DocBook/device-drivers.tmpl      |    3 
 drivers/base/Kconfig                           |    9 
 drivers/base/Makefile                          |    2 
 drivers/base/dma-buf.c                         |  168 ++++
 drivers/base/fence.c                           |  468 ++++++++++++
 drivers/base/reservation.c                     |  440 ++++++++++++
 drivers/gpu/drm/armada/armada_gem.c            |    2 
 drivers/gpu/drm/drm_prime.c                    |    8 
 drivers/gpu/drm/exynos/exynos_drm_dmabuf.c     |    2 
 drivers/gpu/drm/i915/i915_gem_dmabuf.c         |    3 
 drivers/gpu/drm/nouveau/nouveau_drm.c          |    1 
 drivers/gpu/drm/nouveau/nouveau_gem.h          |    1 
 drivers/gpu/drm/nouveau/nouveau_prime.c        |    7 
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c      |    2 
 drivers/gpu/drm/radeon/radeon_drv.c            |    2 
 drivers/gpu/drm/radeon/radeon_prime.c          |    8 
 drivers/gpu/drm/tegra/gem.c                    |    2 
 drivers/gpu/drm/ttm/ttm_object.c               |    2 
 drivers/media/v4l2-core/videobuf2-dma-contig.c |    2 
 drivers/staging/android/Kconfig                |    1 
 drivers/staging/android/Makefile               |    2 
 drivers/staging/android/ion/ion.c              |    3 
 drivers/staging/android/sw_sync.c              |    6 
 drivers/staging/android/sync.c                 |  913 ++++++++----------------
 drivers/staging/android/sync.h                 |   79 +-
 drivers/staging/android/sync_debug.c           |  247 ++++++
 drivers/staging/android/trace/sync.h           |   12 
 include/drm/drmP.h                             |    3 
 include/linux/dma-buf.h                        |   21 -
 include/linux/fence.h                          |  355 +++++++++
 include/linux/reservation.h                    |   82 ++
 include/linux/seqno-fence.h                    |  119 +++
 include/trace/events/fence.h                   |  128 +++
 33 files changed, 2435 insertions(+), 668 deletions(-)
 create mode 100644 drivers/base/fence.c
 create mode 100644 drivers/staging/android/sync_debug.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/linux/seqno-fence.h
 create mode 100644 include/trace/events/fence.h

-- 
Signature

* [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
From: Maarten Lankhorst @ 2014-06-18 10:36 UTC
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

A fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer to another device without
waiting.  For example, userspace can call the page_flip ioctl to display
the next frame of graphics after kicking the GPU, but while the GPU is
still rendering.  The display device sharing the buffer with the GPU
would attach a callback to get notified when the GPU's rendering-complete
IRQ fires, to update the scan-out address of the display without having
to wake up userspace.

A driver must allocate a fence context for each execution ring that can
run in parallel. The function for this takes the number of contexts to
allocate as its argument:
  + fence_context_alloc()
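
For example, a driver with two rings that execute independently might
set up its contexts like this (a minimal sketch; the mydrv names are
illustrative, not part of this patch):

  /* one fence context per independently executing ring */
  #define MYDRV_NUM_RINGS 2

  struct mydrv_device {
          unsigned fence_context; /* first of MYDRV_NUM_RINGS contexts */
          unsigned next_seqno[MYDRV_NUM_RINGS];
  };

  static void mydrv_fence_setup(struct mydrv_device *mdev)
  {
          mdev->fence_context = fence_context_alloc(MYDRV_NUM_RINGS);
          /* ring i then tags its fences with mdev->fence_context + i */
  }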

A fence is a transient, one-shot deal.  It is allocated and attached
to one or more dma-bufs.  When the one that attached it is done with
the pending operation, it can signal the fence:
  + fence_signal()

To get a rough approximation of whether a fence has fired, call:
  + fence_is_signaled()

The dma-buf-mgr handles tracking, and waiting on, the fences associated
with a dma-buf.

The one waiting on the fence can add an async callback:
  + fence_add_callback()

The callback can optionally be cancelled with:
  + fence_remove_callback()

To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()
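
Tying these together: a consumer can either register a callback or block,
while the producer signals the fence from its completion interrupt. A
hypothetical sketch (my_waiter, my_cb_func and kick_next_step are made-up
names, not part of this patch):

  struct my_waiter {
          struct fence_cb cb;     /* embedded; driver state goes around it */
  };

  static void my_cb_func(struct fence *fence, struct fence_cb *cb)
  {
          /* may run in atomic/irq context: do not sleep here */
          kick_next_step(container_of(cb, struct my_waiter, cb));
  }

  /* async: -ENOENT means the fence had already signaled */
  if (fence_add_callback(fence, &waiter->cb, my_cb_func) == -ENOENT)
          kick_next_step(waiter);

  /* or sync: interruptible wait of at most 100ms */
  ret = fence_wait_timeout(fence, true, msecs_to_jiffies(100));

  /* producer side, e.g. from the rendering-complete irq handler */
  fence_signal(fence);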

When emitting a fence, call:
  + trace_fence_emit()

To annotate that a fence is blocking on another fence, call:
  + trace_fence_annotate_wait_on(fence, on_fence)

A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync.  But a memory backed fence is also envisioned, because it
is common that GPU's can write to, or poll on some memory location for
synchronization.  For example:

  fence = custom_get_fence(...);
  if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
    struct dma_buf *fence_buf = seqno_fence->sync_buf;
    get_dma_buf(fence_buf);

    ... tell the hw the memory location to wait on ...
    custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
  } else {
    /* fall back to sw sync */
    fence_add_callback(fence, &my_cb, my_cb_func);
  }

On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with its own fence ops in a similar way.

The enable_signaling callback is used to provide sw signaling in case a
cpu waiter is requested, or no compatible hardware signaling could be used.
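
On the driver side that might look roughly like the sketch below (the
mydrv helpers are invented; taking an extra reference before returning
true follows the rule that the signaler keeps the fence alive until it
calls fence_signal):

  static bool mydrv_fence_enable_signaling(struct fence *fence)
  {
          struct mydrv_fence *f = to_mydrv_fence(fence);

          if (mydrv_seqno_passed(f->ring, fence->seqno))
                  return false;   /* already passed, the core signals it */

          fence_get(fence);       /* dropped after fence_signal in the irq */
          mydrv_enable_completion_irq(f->ring);
          return true;
  }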

The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).

v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
    that dma-fence didn't need to care about the sw->hw signaling path
    (it can be handled same as sw->sw case), and therefore the fence->ops
    can be simplified and more handled in the core.  So remove the signal,
    add_callback, cancel_callback, and wait ops, and replace with a simple
    enable_signaling() op which can be used to inform a fence supporting
    hw->hw signaling that one or more devices which do not support hw
    signaling are waiting (and therefore it should enable an irq or do
    whatever is necessary in order that the CPU is notified when the
    fence is passed).
v3: Fix locking fail in attach_fence() and get_fence()
v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
    we decided that we need to be able to attach one fence to N dma-buf's,
    so using the list_head in dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
    about checking if fence fired or not. This is broken by design.
    waitqueue_active during destruction is now fatal, since the signaller
    should be holding a reference in enable_signalling until it signalled
    the fence. Pass the original dma_fence_cb along, and call __remove_wait
    in the dma_fence_callback handler, so that no cleanup needs to be
    performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
    fence wasn't signaled yet, for example for hardware fences that may
    choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
    header and fixed include mess. dma-fence.h now includes dma-buf.h
    All members are now initialized, so kmalloc can be used for
    allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
    enable_signaling to bool. Rework dma_fence_wait. Added
    dma_fence_is_signaled and dma_fence_wait_timeout.
    s/dma// and change exports to non GPL. Added fence_is_signaled and
    fence_enable_sw_signaling calls, add ability to override default
    wait operation.
v10: remove event_queue, use a custom list, export try_to_wake_up from
    scheduler. Remove fence lock and use a global spinlock instead,
    this should hopefully remove all the locking headaches I was having
    on trying to implement this. enable_signaling is called with this
    lock held.
v11:
    Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
    However I kept the guarantee that after fence_signal returns, it is
    guaranteed that enable_signaling has either been called to completion,
    or will not be called any more.

    Add contexts and seqno to base fence implementation. This allows you
    to wait for fewer fences, by testing for seqno + signaled, and then only
    waiting on the later fence.

    Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
    A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE
    spam, and another option can turn it off at runtime.
v12:
    Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
    and fence->seqno members.
v13:
    Fixup CONFIG_FENCE_TRACE kconfig description.
    Move fence_context_alloc to fence.
    Simplify fence_later.
    Kill priv member to fence_cb.
v14:
    Remove priv argument from fence_add_callback, oops!
v15:
    Remove priv from documentation.
    Explicitly include linux/atomic.h.
v16:
    Add trace events.
    Import changes required by android syncpoints.
v17:
    Use wake_up_state instead of try_to_wake_up. (Colin Cross)
    Fix up commit description for seqno_fence. (Rob Clark)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Reviewed-by: Rob Clark <robdclark@gmail.com>
---
 Documentation/DocBook/device-drivers.tmpl |    2 
 drivers/base/Kconfig                      |    9 +
 drivers/base/Makefile                     |    2 
 drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
 include/linux/fence.h                     |  333 +++++++++++++++++++++++
 include/trace/events/fence.h              |  128 +++++++++
 6 files changed, 889 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/fence.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/trace/events/fence.h

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index cc63f30de166..7eef81069d1b 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@ X!Edrivers/base/interface.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/fence.c
+!Iinclude/linux/fence.h
 !Edrivers/base/reservation.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 23b8726962af..00e13ce5cbbd 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -208,6 +208,15 @@ config DMA_SHARED_BUFFER
 	  APIs extension; the file's descriptor can then be passed on to other
 	  driver.
 
+config FENCE_TRACE
+	bool "Enable verbose FENCE_TRACE messages"
+	depends on DMA_SHARED_BUFFER
+	help
+	  Enable the FENCE_TRACE printks. This will add extra
+	  spam to the console log, but will make it easier to diagnose
+	  lockup related problems for dma-buffers shared across multiple
+	  devices.
+
 config DMA_CMA
 	bool "DMA Contiguous Memory Allocator"
 	depends on HAVE_DMA_CONTIGUOUS && CMA
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 04b314e0fa51..eb4864aee073 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o fence.o reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
new file mode 100644
index 000000000000..1da7f4d6542a
--- /dev/null
+++ b/drivers/base/fence.c
@@ -0,0 +1,416 @@
+/*
+ * Fence mechanism for dma-buf and to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/slab.h>
+#include <linux/export.h>
+#include <linux/atomic.h>
+#include <linux/fence.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fence.h>
+
+EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
+EXPORT_TRACEPOINT_SYMBOL(fence_emit);
+
+/**
+ * fence context counter: each execution context should have its own
+ * fence context, this allows checking if fences belong to the same
+ * context or not. One device can have multiple separate contexts,
+ * and they're used if some engine can run independently of another.
+ */
+static atomic_t fence_context_counter = ATOMIC_INIT(0);
+
+/**
+ * fence_context_alloc - allocate an array of fence contexts
+ * @num:	[in]	amount of contexts to allocate
+ *
+ * This function will return the first index of the @num fence contexts allocated.
+ * The fence context is used for setting fence->context to a unique number.
+ */
+unsigned fence_context_alloc(unsigned num)
+{
+	BUG_ON(!num);
+	return atomic_add_return(num, &fence_context_counter) - num;
+}
+EXPORT_SYMBOL(fence_context_alloc);
+
+int __fence_signal(struct fence *fence)
+{
+	struct fence_cb *cur, *tmp;
+	int ret = 0;
+
+	if (WARN_ON(!fence))
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_atomic();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		ret = -EINVAL;
+
+		/*
+		 * we might have raced with the unlocked fence_signal,
+		 * still run through all callbacks
+		 */
+	} else
+		trace_fence_signaled(fence);
+
+	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+		list_del_init(&cur->node);
+		cur->func(fence, cur);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__fence_signal);
+
+/**
+ * fence_signal - signal completion of a fence
+ * @fence: the fence to signal
+ *
+ * Signal completion for software callbacks on a fence, this will unblock
+ * fence_wait() calls and run all the callbacks added with
+ * fence_add_callback(). Can be called multiple times, but since a fence
+ * can only go from unsignaled to signaled state, it will only be effective
+ * the first time.
+ */
+int fence_signal(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!fence)
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_atomic();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return -EINVAL;
+
+	trace_fence_signaled(fence);
+
+	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
+		struct fence_cb *cur, *tmp;
+
+		spin_lock_irqsave(fence->lock, flags);
+		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+			list_del_init(&cur->node);
+			cur->func(fence, cur);
+		}
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(fence_signal);
+
+/**
+ * fence_wait_timeout - sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+long
+fence_wait_timeout(struct fence *fence, bool intr, signed long timeout)
+{
+	unsigned long ret;
+
+	if (WARN_ON(timeout < 0))
+		return -EINVAL;
+
+	trace_fence_wait_start(fence);
+	ret = fence->ops->wait(fence, intr, timeout);
+	trace_fence_wait_end(fence);
+	return ret;
+}
+EXPORT_SYMBOL(fence_wait_timeout);
+
+void release_fence(struct kref *kref)
+{
+	struct fence *fence =
+			container_of(kref, struct fence, refcount);
+
+	trace_fence_destroy(fence);
+
+	BUG_ON(!list_empty(&fence->cb_list));
+
+	if (fence->ops->release)
+		fence->ops->release(fence);
+	else
+		kfree(fence);
+}
+EXPORT_SYMBOL(release_fence);
+
+/**
+ * fence_enable_sw_signaling - enable signaling on fence
+ * @fence:	[in]	the fence to enable
+ *
+ * This will request sw signaling to be enabled, to make the fence
+ * complete as soon as possible
+ */
+void fence_enable_sw_signaling(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) &&
+	    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		trace_fence_enable_signal(fence);
+
+		spin_lock_irqsave(fence->lock, flags);
+
+		if (!fence->ops->enable_signaling(fence))
+			__fence_signal(fence);
+
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+}
+EXPORT_SYMBOL(fence_enable_sw_signaling);
+
+/**
+ * fence_add_callback - add a callback to be called when the fence
+ * is signaled
+ * @fence:	[in]	the fence to wait on
+ * @cb:		[in]	the callback to register
+ * @func:	[in]	the function to call
+ *
+ * cb will be initialized by fence_add_callback, no initialization
+ * by the caller is required. Any number of callbacks can be registered
+ * to a fence, but a callback can only be registered to one fence at a time.
+ *
+ * Note that the callback can be called from an atomic context.  If
+ * fence is already signaled, this function will return -ENOENT (and
+ * *not* call the callback)
+ *
+ * Add a software callback to the fence. Same restrictions apply to
+ * refcount as it does to fence_wait, however the caller doesn't need to
+ * keep a refcount to fence afterwards: when software access is enabled,
+ * the creator of the fence is required to keep the fence alive until
+ * after it signals with fence_signal. The callback itself can be called
+ * from irq context.
+ *
+ */
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func)
+{
+	unsigned long flags;
+	int ret = 0;
+	bool was_set;
+
+	if (WARN_ON(!fence || !func))
+		return -EINVAL;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		INIT_LIST_HEAD(&cb->node);
+		return -ENOENT;
+	}
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		ret = -ENOENT;
+	else if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			ret = -ENOENT;
+		}
+	}
+
+	if (!ret) {
+		cb->func = func;
+		list_add_tail(&cb->node, &fence->cb_list);
+	} else
+		INIT_LIST_HEAD(&cb->node);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_add_callback);
+
+/**
+ * fence_remove_callback - remove a callback from the signaling list
+ * @fence:	[in]	the fence to wait on
+ * @cb:		[in]	the callback to remove
+ *
+ * Remove a previously queued callback from the fence. This function returns
+ * true if the callback is successfully removed, or false if the fence has
+ * already been signaled.
+ *
+ * *WARNING*:
+ * Cancelling a callback should only be done if you really know what you're
+ * doing, since deadlocks and race conditions could occur all too easily. For
+ * this reason, it should only ever be done on hardware lockup recovery,
+ * with a reference held to the fence.
+ */
+bool
+fence_remove_callback(struct fence *fence, struct fence_cb *cb)
+{
+	unsigned long flags;
+	bool ret;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	ret = !list_empty(&cb->node);
+	if (ret)
+		list_del_init(&cb->node);
+
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_remove_callback);
+
+struct default_wait_cb {
+	struct fence_cb base;
+	struct task_struct *task;
+};
+
+static void
+fence_default_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct default_wait_cb *wait =
+		container_of(cb, struct default_wait_cb, base);
+
+	wake_up_state(wait->task, TASK_NORMAL);
+}
+
+/**
+ * fence_default_wait - default sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success.
+ */
+long
+fence_default_wait(struct fence *fence, bool intr, signed long timeout)
+{
+	struct default_wait_cb cb;
+	unsigned long flags;
+	long ret = timeout;
+	bool was_set;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return timeout;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	if (intr && signal_pending(current)) {
+		ret = -ERESTARTSYS;
+		goto out;
+	}
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		goto out;
+
+	if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			goto out;
+		}
+	}
+
+	cb.base.func = fence_default_wait_cb;
+	cb.task = current;
+	list_add(&cb.base.node, &fence->cb_list);
+
+	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
+		if (intr)
+			__set_current_state(TASK_INTERRUPTIBLE);
+		else
+			__set_current_state(TASK_UNINTERRUPTIBLE);
+		spin_unlock_irqrestore(fence->lock, flags);
+
+		ret = schedule_timeout(ret);
+
+		spin_lock_irqsave(fence->lock, flags);
+		if (ret > 0 && intr && signal_pending(current))
+			ret = -ERESTARTSYS;
+	}
+
+	if (!list_empty(&cb.base.node))
+		list_del(&cb.base.node);
+	__set_current_state(TASK_RUNNING);
+
+out:
+	spin_unlock_irqrestore(fence->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(fence_default_wait);
+
+/**
+ * __fence_init - Initialize a custom fence.
+ * @fence:	[in]	the fence to initialize
+ * @ops:	[in]	the fence_ops for operations on this fence
+ * @lock:	[in]	the irqsafe spinlock to use for locking this fence
+ * @context:	[in]	the execution context this fence is run on
+ * @seqno:	[in]	a linear increasing sequence number for this context
+ *
+ * Initializes an allocated fence, the caller doesn't have to keep its
+ * refcount after committing with this fence, but it will need to hold a
+ * refcount again if fence_ops.enable_signaling gets called. This can
+ * be used for implementing other types of fences.
+ *
+ * context and seqno are used for easy comparison between fences, allowing
+ * you to check which fence is later by simply using fence_later.
+ */
+void
+__fence_init(struct fence *fence, const struct fence_ops *ops,
+	     spinlock_t *lock, unsigned context, unsigned seqno)
+{
+	BUG_ON(!lock);
+	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
+	       !ops->get_driver_name || !ops->get_timeline_name);
+
+	kref_init(&fence->refcount);
+	fence->ops = ops;
+	INIT_LIST_HEAD(&fence->cb_list);
+	fence->lock = lock;
+	fence->context = context;
+	fence->seqno = seqno;
+	fence->flags = 0UL;
+
+	trace_fence_init(fence);
+}
+EXPORT_SYMBOL(__fence_init);
diff --git a/include/linux/fence.h b/include/linux/fence.h
new file mode 100644
index 000000000000..65f2a01ee7e4
--- /dev/null
+++ b/include/linux/fence.h
@@ -0,0 +1,333 @@
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __LINUX_FENCE_H
+#define __LINUX_FENCE_H
+
+#include <linux/err.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/bitops.h>
+#include <linux/kref.h>
+#include <linux/sched.h>
+#include <linux/printk.h>
+
+struct fence;
+struct fence_ops;
+struct fence_cb;
+
+/**
+ * struct fence - software synchronization primitive
+ * @refcount: refcount for this fence
+ * @ops: fence_ops associated with this fence
+ * @cb_list: list of all callbacks to call
+ * @lock: spin_lock_irqsave used for locking
+ * @context: execution context this fence belongs to, returned by
+ *           fence_context_alloc()
+ * @seqno: the sequence number of this fence inside the execution context,
+ * can be compared to decide which fence would be signaled later.
+ * @flags: A mask of FENCE_FLAG_* defined below
+ * @timestamp: Timestamp when the fence was signaled.
+ * @status: Optional, only valid if < 0, must be set before calling
+ * fence_signal, indicates that the fence has completed with an error.
+ *
+ * the flags member must be manipulated and read using the appropriate
+ * atomic ops (bit_*), so taking the spinlock will not be needed most
+ * of the time.
+ *
+ * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
+ * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
+ * implementer of the fence for its own purposes. Can be used in different
+ * ways by different fence implementers, so do not rely on this.
+ *
+ * *) Since atomic bitops are used, this is not guaranteed to be the case.
+ * Particularly, if the bit was set, but fence_signal was called right
+ * before this bit was set, it would have been able to set the
+ * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
+ * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
+ * after fence_signal was called, any enable_signaling call will have either
+ * been completed, or never called at all.
+ */
+struct fence {
+	struct kref refcount;
+	const struct fence_ops *ops;
+	struct list_head cb_list;
+	spinlock_t *lock;
+	unsigned context, seqno;
+	unsigned long flags;
+	ktime_t timestamp;
+	int status;
+};
+
+enum fence_flag_bits {
+	FENCE_FLAG_SIGNALED_BIT,
+	FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	FENCE_FLAG_USER_BITS, /* must always be last member */
+};
+
+typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
+
+/**
+ * struct fence_cb - callback for fence_add_callback
+ * @node: used by fence_add_callback to append this struct to fence::cb_list
+ * @func: fence_func_t to call
+ *
+ * This struct will be initialized by fence_add_callback, additional
+ * data can be passed along by embedding fence_cb in another struct.
+ */
+struct fence_cb {
+	struct list_head node;
+	fence_func_t func;
+};
+
+/**
+ * struct fence_ops - operations implemented for fence
+ * @get_driver_name: returns the driver name.
+ * @get_timeline_name: return the name of the context this fence belongs to.
+ * @enable_signaling: enable software signaling of fence.
+ * @signaled: [optional] peek whether the fence is signaled, can be null.
+ * @wait: custom wait implementation, or fence_default_wait.
+ * @release: [optional] called on destruction of fence, can be null
+ * @fill_driver_data: [optional] callback to fill in free-form debug info
+ * Returns amount of bytes filled, or -errno.
+ * @fence_value_str: [optional] fills in the value of the fence as a string
+ * @timeline_value_str: [optional] fills in the current value of the timeline
+ * as a string
+ *
+ * Notes on enable_signaling:
+ * For fence implementations that have the capability for hw->hw
+ * signaling, they can implement this op to enable the necessary
+ * irqs, or insert commands into cmdstream, etc.  This is called
+ * in the first wait() or add_callback() path to let the fence
+ * implementation know that there is another driver waiting on
+ * the signal (ie. hw->sw case).
+ *
+ * This function can be called from atomic context, but not
+ * from irq context, so normal spinlocks can be used.
+ *
+ * A return value of false indicates the fence already passed,
+ * or some failure occurred that made it impossible to enable
+ * signaling. True indicates successful enabling.
+ *
+ * fence->status may be set in enable_signaling, but only when false is
+ * returned.
+ *
+ * Calling fence_signal before enable_signaling is called allows
+ * for a tiny race window in which enable_signaling is called during,
+ * before, or after fence_signal. To fight this, it is recommended
+ * that before enable_signaling returns true an extra reference is
+ * taken on the fence, to be released when the fence is signaled.
+ * This will mean fence_signal will still be called twice, but
+ * the second time will be a noop since it was already signaled.
+ *
+ * Notes on signaled:
+ * May set fence->status if returning true.
+ *
+ * Notes on wait:
+ * Must not be NULL, set to fence_default_wait for default implementation.
+ * The fence_default_wait implementation should work for any fence, as long
+ * as enable_signaling works correctly.
+ *
+ * Must return -ERESTARTSYS if intr is true and the wait was
+ * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
+ * timed out. Can also return other error values on custom implementations,
+ * which should be treated as if the fence is signaled. For example a hardware
+ * lockup could be reported like that.
+ *
+ * Notes on release:
+ * Can be NULL, this function allows additional commands to run on
+ * destruction of the fence. Can be called from irq context.
+ * If pointer is set to NULL, kfree will get called instead.
+ */
+
+struct fence_ops {
+	const char * (*get_driver_name)(struct fence *fence);
+	const char * (*get_timeline_name)(struct fence *fence);
+	bool (*enable_signaling)(struct fence *fence);
+	bool (*signaled)(struct fence *fence);
+	long (*wait)(struct fence *fence, bool intr, signed long timeout);
+	void (*release)(struct fence *fence);
+
+	int (*fill_driver_data)(struct fence *fence, void *data, int size);
+	void (*fence_value_str)(struct fence *fence, char *str, int size);
+	void (*timeline_value_str)(struct fence *fence, char *str, int size);
+};
+
+void __fence_init(struct fence *fence, const struct fence_ops *ops,
+		  spinlock_t *lock, unsigned context, unsigned seqno);
+
+/**
+ * fence_get - increases refcount of the fence
+ * @fence:	[in]	fence to increase refcount of
+ */
+static inline void fence_get(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_get(&fence->refcount);
+}
+
+extern void release_fence(struct kref *kref);
+
+/**
+ * fence_put - decreases refcount of the fence
+ * @fence:	[in]	fence to reduce refcount of
+ */
+static inline void fence_put(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_put(&fence->refcount, release_fence);
+}
+
+int fence_signal(struct fence *fence);
+int __fence_signal(struct fence *fence);
+long fence_default_wait(struct fence *fence, bool intr, signed long timeout);
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func);
+bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
+void fence_enable_sw_signaling(struct fence *fence);
+
+static inline bool
+__fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		__fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_is_signaled - Return an indication if the fence is signaled yet.
+ * @fence:	[in]	the fence to check
+ *
+ * Returns true if the fence was already signaled, false if not. Since this
+ * function doesn't enable signaling, it is not guaranteed to ever return true
+ * if fence_add_callback, fence_wait or fence_enable_sw_signaling
+ * haven't been called before.
+ *
+ * It's recommended for seqno fences to call fence_signal when the
+ * operation is complete; this makes it possible to prevent issues from
+ * wraparound between time of issue and time of use by checking the return
+ * value of this function before calling hardware-specific wait instructions.
+ */
+static inline bool
+fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_later - return the chronologically later fence
+ * @f1:	[in]	the first fence from the same context
+ * @f2:	[in]	the second fence from the same context
+ *
+ * Returns NULL if both fences are signaled, otherwise the fence that would be
+ * signaled last. Both fences must be from the same context, since a seqno is
+ * not re-used across contexts.
+ */
+static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
+{
+	BUG_ON(f1->context != f2->context);
+
+	/*
+	 * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been
+	 * set if enable_signaling wasn't called, and enabling that here is
+	 * overkill.
+	 */
+	if (f2->seqno - f1->seqno <= INT_MAX)
+		return fence_is_signaled(f2) ? NULL : f2;
+	else
+		return fence_is_signaled(f1) ? NULL : f1;
+}
+
+long fence_wait_timeout(struct fence *fence, bool intr, signed long timeout);
+
+
+/**
+ * fence_wait - sleep until the fence gets signaled
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ *
+ * This function will return -ERESTARTSYS if interrupted by a signal,
+ * or 0 if the fence was signaled. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+static inline long fence_wait(struct fence *fence, bool intr)
+{
+	long ret;
+
+	/* Since fence_wait_timeout cannot timeout with
+	 * MAX_SCHEDULE_TIMEOUT, only valid return values are
+	 * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT.
+	 */
+	ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT);
+
+	return ret < 0 ? ret : 0;
+}
+
+unsigned fence_context_alloc(unsigned num);
+
+#define FENCE_TRACE(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		if (config_enabled(CONFIG_FENCE_TRACE))			\
+			pr_info("f %u#%u: " fmt,			\
+				__ff->context, __ff->seqno, ##args);	\
+	} while (0)
+
+#define FENCE_WARN(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			 ##args);					\
+	} while (0)
+
+#define FENCE_ERR(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			##args);					\
+	} while (0)
+
+#endif /* __LINUX_FENCE_H */
diff --git a/include/trace/events/fence.h b/include/trace/events/fence.h
new file mode 100644
index 000000000000..98feb1b82896
--- /dev/null
+++ b/include/trace/events/fence.h
@@ -0,0 +1,128 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM fence
+
+#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_FENCE_H
+
+#include <linux/tracepoint.h>
+
+struct fence;
+
+TRACE_EVENT(fence_annotate_wait_on,
+
+	/* fence: the fence waiting on f1, f1: the fence to be waited on. */
+	TP_PROTO(struct fence *fence, struct fence *f1),
+
+	TP_ARGS(fence, f1),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+
+		__string(waiting_driver, f1->ops->get_driver_name(f1))
+		__string(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__field(unsigned int, waiting_context)
+		__field(unsigned int, waiting_seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+
+		__assign_str(waiting_driver, f1->ops->get_driver_name(f1))
+		__assign_str(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__entry->waiting_context = f1->context;
+		__entry->waiting_seqno = f1->seqno;
+
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u "	\
+		  "waits on driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno,
+		  __get_str(waiting_driver), __get_str(waiting_timeline),
+		  __entry->waiting_context, __entry->waiting_seqno)
+);
+
+DECLARE_EVENT_CLASS(fence,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno)
+);
+
+DEFINE_EVENT(fence, fence_emit,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_init,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_destroy,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_enable_signal,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_signaled,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_start,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_end,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+#endif /*  _TRACE_FENCE_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
@ 2014-06-18 10:36   ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:36 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	ccross, linux-media

A fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer without waiting to another
device.  For example, userspace can call page_flip ioctl to display the
next frame of graphics after kicking the GPU but while the GPU is still
rendering.  The display device sharing the buffer with the GPU would
attach a callback to get notified when the GPU's rendering-complete IRQ
fires, to update the scan-out address of the display, without having to
wake up userspace.

A driver must allocate a fence context for each execution ring that can
run in parallel. The function for this takes an argument with how many
contexts to allocate:
  + fence_context_alloc()

A fence is transient, one-shot deal.  It is allocated and attached
to one or more dma-buf's.  When the one that attached it is done, with
the pending operation, it can signal the fence:
  + fence_signal()

To have a rough approximation whether a fence is fired, call:
  + fence_is_signaled()

The dma-buf-mgr handles tracking, and waiting on, the fences associated
with a dma-buf.

The one pending on the fence can add an async callback:
  + fence_add_callback()

The callback can optionally be cancelled with:
  + fence_remove_callback()

To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()

When emitting a fence, call:
  + trace_fence_emit()

To annotate that a fence is blocking on another fence, call:
  + trace_fence_annotate_wait_on(fence, on_fence)

A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync.  But a memory backed fence is also envisioned, because it
is common that GPU's can write to, or poll on some memory location for
synchronization.  For example:

  fence = custom_get_fence(...);
  if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
    dma_buf *fence_buf = seqno_fence->sync_buf;
    get_dma_buf(fence_buf);

    ... tell the hw the memory location to wait ...
    custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
  } else {
    /* fall-back to sw sync * /
    fence_add_callback(fence, my_cb);
  }

On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with it's own fence ops in a similar way.

enable_signaling callback is used to provide sw signaling in case a cpu
waiter is requested or no compatible hardware signaling could be used.

The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).

v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
    that dma-fence didn't need to care about the sw->hw signaling path
    (it can be handled same as sw->sw case), and therefore the fence->ops
    can be simplified and more handled in the core.  So remove the signal,
    add_callback, cancel_callback, and wait ops, and replace with a simple
    enable_signaling() op which can be used to inform a fence supporting
    hw->hw signaling that one or more devices which do not support hw
    signaling are waiting (and therefore it should enable an irq or do
    whatever is necessary in order that the CPU is notified when the
    fence is passed).
v3: Fix locking fail in attach_fence() and get_fence()
v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankorst
    we decided that we need to be able to attach one fence to N dma-buf's,
    so using the list_head in dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
    about checking if fence fired or not. This is broken by design.
    waitqueue_active during destruction is now fatal, since the signaller
    should be holding a reference in enable_signalling until it signalled
    the fence. Pass the original dma_fence_cb along, and call __remove_wait
    in the dma_fence_callback handler, so that no cleanup needs to be
    performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
    fence wasn't signaled yet, for example for hardware fences that may
    choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
    header and fixed include mess. dma-fence.h now includes dma-buf.h
    All members are now initialized, so kmalloc can be used for
    allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
    enable_signaling to bool. Rework dma_fence_wait. Added
    dma_fence_is_signaled and dma_fence_wait_timeout.
    s/dma// and change exports to non GPL. Added fence_is_signaled and
    fence_enable_sw_signaling calls, add ability to override default
    wait operation.
v10: remove event_queue, use a custom list, export try_to_wake_up from
    scheduler. Remove fence lock and use a global spinlock instead,
    this should hopefully remove all the locking headaches I was having
    on trying to implement this. enable_signaling is called with this
    lock held.
v11:
    Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
    However I kept the guarantee that after fence_signal returns, it is
    guaranteed that enable_signaling has either been called to completion,
    or will not be called any more.

    Add contexts and seqno to base fence implementation. This allows you
    to wait for less fences, by testing for seqno + signaled, and then only
    wait on the later fence.

    Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
    An CONFIG_DEBUG_FENCE will be added to turn off the FENCE_TRACE
    spam, and another runtime option can turn it off at runtime.
v12:
    Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
    and fence->seqno members.
v13:
    Fixup CONFIG_FENCE_TRACE kconfig description.
    Move fence_context_alloc to fence.
    Simplify fence_later.
    Kill priv member to fence_cb.
v14:
    Remove priv argument from fence_add_callback, oops!
v15:
    Remove priv from documentation.
    Explicitly include linux/atomic.h.
v16:
    Add trace events.
    Import changes required by android syncpoints.
v17:
    Use wake_up_state instead of try_to_wake_up. (Colin Cross)
    Fix up commit description for seqno_fence. (Rob Clark)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Reviewed-by: Rob Clark <robdclark@gmail.com>
---
 Documentation/DocBook/device-drivers.tmpl |    2 
 drivers/base/Kconfig                      |    9 +
 drivers/base/Makefile                     |    2 
 drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
 include/linux/fence.h                     |  333 +++++++++++++++++++++++
 include/trace/events/fence.h              |  128 +++++++++
 6 files changed, 889 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/fence.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/trace/events/fence.h

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index cc63f30de166..7eef81069d1b 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@ X!Edrivers/base/interface.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/fence.c
+!Iinclude/linux/fence.h
 !Edrivers/base/reservation.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 23b8726962af..00e13ce5cbbd 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -208,6 +208,15 @@ config DMA_SHARED_BUFFER
 	  APIs extension; the file's descriptor can then be passed on to other
 	  driver.
 
+config FENCE_TRACE
+	bool "Enable verbose FENCE_TRACE messages"
+	depends on DMA_SHARED_BUFFER
+	help
+	  Enable the FENCE_TRACE printks. This will add extra
+	  spam to the console log, but will make it easier to diagnose
+	  lockup related problems for dma-buffers shared across multiple
+	  devices.
+
 config DMA_CMA
 	bool "DMA Contiguous Memory Allocator"
 	depends on HAVE_DMA_CONTIGUOUS && CMA
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 04b314e0fa51..eb4864aee073 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o fence.o reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
new file mode 100644
index 000000000000..1da7f4d6542a
--- /dev/null
+++ b/drivers/base/fence.c
@@ -0,0 +1,416 @@
+/*
+ * Fence mechanism for dma-buf and to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/slab.h>
+#include <linux/export.h>
+#include <linux/atomic.h>
+#include <linux/fence.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fence.h>
+
+EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
+EXPORT_TRACEPOINT_SYMBOL(fence_emit);
+
+/**
+ * fence context counter: each execution context should have its own
+ * fence context, this allows checking if fences belong to the same
+ * context or not. One device can have multiple separate contexts,
+ * and they're used if some engine can run independently of another.
+ */
+static atomic_t fence_context_counter = ATOMIC_INIT(0);
+
+/**
+ * fence_context_alloc - allocate an array of fence contexts
+ * @num:	[in]	amount of contexts to allocate
+ *
+ * This function will return the first index of the number of fences allocated.
+ * The fence context is used for setting fence->context to a unique number.
+ */
+unsigned fence_context_alloc(unsigned num)
+{
+	BUG_ON(!num);
+	return atomic_add_return(num, &fence_context_counter) - num;
+}
+EXPORT_SYMBOL(fence_context_alloc);
+
+int __fence_signal(struct fence *fence)
+{
+	struct fence_cb *cur, *tmp;
+	int ret = 0;
+
+	if (WARN_ON(!fence))
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_atomic();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		ret = -EINVAL;
+
+		/*
+		 * we might have raced with the unlocked fence_signal,
+		 * still run through all callbacks
+		 */
+	} else
+		trace_fence_signaled(fence);
+
+	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+		list_del_init(&cur->node);
+		cur->func(fence, cur);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__fence_signal);
+
+/**
+ * fence_signal - signal completion of a fence
+ * @fence: the fence to signal
+ *
+ * Signal completion for software callbacks on a fence, this will unblock
+ * fence_wait() calls and run all the callbacks added with
+ * fence_add_callback(). Can be called multiple times, but since a fence
+ * can only go from unsignaled to signaled state, it will only be effective
+ * the first time.
+ */
+int fence_signal(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!fence)
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_atomic();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return -EINVAL;
+
+	trace_fence_signaled(fence);
+
+	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
+		struct fence_cb *cur, *tmp;
+
+		spin_lock_irqsave(fence->lock, flags);
+		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+			list_del_init(&cur->node);
+			cur->func(fence, cur);
+		}
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(fence_signal);
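+
+/*
+ * Example: a sketch of signaling from an interrupt handler, assuming a
+ * hypothetical mydrv_pop_finished_fence() that returns the fence of the
+ * completed job:
+ *
+ *	static irqreturn_t mydrv_irq_handler(int irq, void *data)
+ *	{
+ *		struct mydrv_device *dev = data;
+ *		struct fence *fence = mydrv_pop_finished_fence(dev);
+ *
+ *		if (fence) {
+ *			fence_signal(fence);
+ *			fence_put(fence);
+ *		}
+ *		return IRQ_HANDLED;
+ *	}
+ *
+ * fence_signal() may take fence->lock, so it must not be called with
+ * that lock held; __fence_signal() is the variant for callers that
+ * already hold it.
+ */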
+
+/**
+ * fence_wait_timeout - sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success. Other error values may be
+ * returned by custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (e.g. a buffer manager between reservation and
+ * committing) holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+long
+fence_wait_timeout(struct fence *fence, bool intr, signed long timeout)
+{
+	signed long ret;
+
+	if (WARN_ON(timeout < 0))
+		return -EINVAL;
+
+	trace_fence_wait_start(fence);
+	ret = fence->ops->wait(fence, intr, timeout);
+	trace_fence_wait_end(fence);
+	return ret;
+}
+EXPORT_SYMBOL(fence_wait_timeout);
+
+void release_fence(struct kref *kref)
+{
+	struct fence *fence =
+			container_of(kref, struct fence, refcount);
+
+	trace_fence_destroy(fence);
+
+	BUG_ON(!list_empty(&fence->cb_list));
+
+	if (fence->ops->release)
+		fence->ops->release(fence);
+	else
+		kfree(fence);
+}
+EXPORT_SYMBOL(release_fence);
+
+/**
+ * fence_enable_sw_signaling - enable signaling on fence
+ * @fence:	[in]	the fence to enable
+ *
+ * This will request software signaling to be enabled, to make the fence
+ * complete as soon as possible.
+ */
+void fence_enable_sw_signaling(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) &&
+	    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		trace_fence_enable_signal(fence);
+
+		spin_lock_irqsave(fence->lock, flags);
+
+		if (!fence->ops->enable_signaling(fence))
+			__fence_signal(fence);
+
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+}
+EXPORT_SYMBOL(fence_enable_sw_signaling);
+
+/**
+ * fence_add_callback - add a callback to be called when the fence
+ * is signaled
+ * @fence:	[in]	the fence to wait on
+ * @cb:		[in]	the callback to register
+ * @func:	[in]	the function to call
+ *
+ * cb will be initialized by fence_add_callback; no initialization
+ * by the caller is required. Any number of callbacks can be registered
+ * to a fence, but a callback can only be registered to one fence at a time.
+ *
+ * Note that the callback can be called from an atomic context. If the
+ * fence is already signaled, this function will return -ENOENT (and
+ * *not* call the callback).
+ *
+ * Add a software callback to the fence. The same refcount restrictions
+ * apply as for fence_wait; however, the caller doesn't need to keep a
+ * refcount to the fence afterwards: when software access is enabled,
+ * the creator of the fence is required to keep the fence alive until
+ * after it signals with fence_signal. The callback itself can be called
+ * from irq context.
+ *
+ */
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func)
+{
+	unsigned long flags;
+	int ret = 0;
+	bool was_set;
+
+	if (WARN_ON(!fence || !func))
+		return -EINVAL;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		INIT_LIST_HEAD(&cb->node);
+		return -ENOENT;
+	}
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		ret = -ENOENT;
+	else if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			ret = -ENOENT;
+		}
+	}
+
+	if (!ret) {
+		cb->func = func;
+		list_add_tail(&cb->node, &fence->cb_list);
+	} else
+		INIT_LIST_HEAD(&cb->node);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_add_callback);
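+
+/*
+ * Example: a sketch (hypothetical mydrv names, not part of this patch)
+ * of chaining work off a fence without blocking:
+ *
+ *	struct mydrv_cb {
+ *		struct fence_cb base;
+ *		struct mydrv_job *job;
+ *	};
+ *
+ *	static void mydrv_fence_cb(struct fence *f, struct fence_cb *cb)
+ *	{
+ *		struct mydrv_cb *mycb =
+ *			container_of(cb, struct mydrv_cb, base);
+ *
+ *		mydrv_queue_job(mycb->job);
+ *	}
+ *
+ *	ret = fence_add_callback(fence, &mycb->base, mydrv_fence_cb);
+ *	if (ret == -ENOENT)
+ *		mydrv_queue_job(mycb->job);
+ *
+ * Since the callback may run from irq context, mydrv_queue_job() must
+ * be safe to call in atomic context.
+ */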
+
+/**
+ * fence_remove_callback - remove a callback from the signaling list
+ * @fence:	[in]	the fence to wait on
+ * @cb:		[in]	the callback to remove
+ *
+ * Remove a previously queued callback from the fence. This function returns
+ * true if the callback is successfully removed, or false if the fence has
+ * already been signaled.
+ *
+ * *WARNING*:
+ * Cancelling a callback should only be done if you really know what you're
+ * doing, since deadlocks and race conditions could occur all too easily. For
+ * this reason, it should only ever be done on hardware lockup recovery,
+ * with a reference held to the fence.
+ */
+bool
+fence_remove_callback(struct fence *fence, struct fence_cb *cb)
+{
+	unsigned long flags;
+	bool ret;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	ret = !list_empty(&cb->node);
+	if (ret)
+		list_del_init(&cb->node);
+
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_remove_callback);
+
+struct default_wait_cb {
+	struct fence_cb base;
+	struct task_struct *task;
+};
+
+static void
+fence_default_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct default_wait_cb *wait =
+		container_of(cb, struct default_wait_cb, base);
+
+	wake_up_state(wait->task, TASK_NORMAL);
+}
+
+/**
+ * fence_default_wait - default sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success.
+ */
+long
+fence_default_wait(struct fence *fence, bool intr, signed long timeout)
+{
+	struct default_wait_cb cb;
+	unsigned long flags;
+	long ret = timeout;
+	bool was_set;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return timeout;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	if (intr && signal_pending(current)) {
+		ret = -ERESTARTSYS;
+		goto out;
+	}
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		goto out;
+
+	if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			goto out;
+		}
+	}
+
+	cb.base.func = fence_default_wait_cb;
+	cb.task = current;
+	list_add(&cb.base.node, &fence->cb_list);
+
+	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
+		if (intr)
+			__set_current_state(TASK_INTERRUPTIBLE);
+		else
+			__set_current_state(TASK_UNINTERRUPTIBLE);
+		spin_unlock_irqrestore(fence->lock, flags);
+
+		ret = schedule_timeout(ret);
+
+		spin_lock_irqsave(fence->lock, flags);
+		if (ret > 0 && intr && signal_pending(current))
+			ret = -ERESTARTSYS;
+	}
+
+	if (!list_empty(&cb.base.node))
+		list_del(&cb.base.node);
+	__set_current_state(TASK_RUNNING);
+
+out:
+	spin_unlock_irqrestore(fence->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(fence_default_wait);
+
+/**
+ * __fence_init - Initialize a custom fence.
+ * @fence:	[in]	the fence to initialize
+ * @ops:	[in]	the fence_ops for operations on this fence
+ * @lock:	[in]	the irqsafe spinlock to use for locking this fence
+ * @context:	[in]	the execution context this fence is run on
+ * @seqno:	[in]	a linear increasing sequence number for this context
+ *
+ * Initializes an allocated fence. The caller doesn't have to keep its
+ * refcount after committing with this fence, but it will need to hold a
+ * refcount again if fence_ops.enable_signaling gets called. This can
+ * be used for implementing other types of fences.
+ *
+ * context and seqno are used for easy comparison between fences, allowing
+ * one to check which fence would be signaled later by using fence_later.
+ */
+void
+__fence_init(struct fence *fence, const struct fence_ops *ops,
+	     spinlock_t *lock, unsigned context, unsigned seqno)
+{
+	BUG_ON(!lock);
+	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
+	       !ops->get_driver_name || !ops->get_timeline_name);
+
+	kref_init(&fence->refcount);
+	fence->ops = ops;
+	INIT_LIST_HEAD(&fence->cb_list);
+	fence->lock = lock;
+	fence->context = context;
+	fence->seqno = seqno;
+	fence->flags = 0UL;
+
+	trace_fence_init(fence);
+}
+EXPORT_SYMBOL(__fence_init);
diff --git a/include/linux/fence.h b/include/linux/fence.h
new file mode 100644
index 000000000000..65f2a01ee7e4
--- /dev/null
+++ b/include/linux/fence.h
@@ -0,0 +1,333 @@
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __LINUX_FENCE_H
+#define __LINUX_FENCE_H
+
+#include <linux/err.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/bitops.h>
+#include <linux/kref.h>
+#include <linux/sched.h>
+#include <linux/printk.h>
+
+struct fence;
+struct fence_ops;
+struct fence_cb;
+
+/**
+ * struct fence - software synchronization primitive
+ * @refcount: refcount for this fence
+ * @ops: fence_ops associated with this fence
+ * @cb_list: list of all callbacks to call
+ * @lock: spin_lock_irqsave used for locking
+ * @context: execution context this fence belongs to, returned by
+ *           fence_context_alloc()
+ * @seqno: the sequence number of this fence inside the execution context,
+ * can be compared to decide which fence would be signaled later.
+ * @flags: A mask of FENCE_FLAG_* defined below
+ * @timestamp: Timestamp when the fence was signaled.
+ * @status: Optional, only valid if < 0, must be set before calling
+ * fence_signal, indicates that the fence has completed with an error.
+ *
+ * The flags member must be manipulated and read using the appropriate
+ * atomic ops (bit_*), so taking the spinlock will not be needed most
+ * of the time.
+ *
+ * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
+ * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
+ * implementer of the fence for its own purposes. Can be used in different
+ * ways by different fence implementers, so do not rely on this.
+ *
+ * *) Since atomic bitops are used, this is not guaranteed to be the case.
+ * In particular, if fence_signal was called right before
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT was set, it would have been able to set
+ * FENCE_FLAG_SIGNALED_BIT before enable_signaling was called.
+ * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
+ * after fence_signal was called, any enable_signaling call will have either
+ * been completed, or never called at all.
+ */
+struct fence {
+	struct kref refcount;
+	const struct fence_ops *ops;
+	struct list_head cb_list;
+	spinlock_t *lock;
+	unsigned context, seqno;
+	unsigned long flags;
+	ktime_t timestamp;
+	int status;
+};
+
+enum fence_flag_bits {
+	FENCE_FLAG_SIGNALED_BIT,
+	FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	FENCE_FLAG_USER_BITS, /* must always be last member */
+};
+
+typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
+
+/**
+ * struct fence_cb - callback for fence_add_callback
+ * @node: used by fence_add_callback to append this struct to fence::cb_list
+ * @func: fence_func_t to call
+ *
+ * This struct will be initialized by fence_add_callback, additional
+ * data can be passed along by embedding fence_cb in another struct.
+ */
+struct fence_cb {
+	struct list_head node;
+	fence_func_t func;
+};
+
+/**
+ * struct fence_ops - operations implemented for fence
+ * @get_driver_name: returns the driver name.
+ * @get_timeline_name: returns the name of the context this fence belongs to.
+ * @enable_signaling: enable software signaling of fence.
+ * @signaled: [optional] peek whether the fence is signaled, can be null.
+ * @wait: custom wait implementation, or fence_default_wait.
+ * @release: [optional] called on destruction of fence, can be null
+ * @fill_driver_data: [optional] callback to fill in free-form debug info
+ * Returns the number of bytes filled, or a negative errno.
+ * @fence_value_str: [optional] fills in the value of the fence as a string
+ * @timeline_value_str: [optional] fills in the current value of the timeline
+ * as a string
+ *
+ * Notes on enable_signaling:
+ * For fence implementations that have the capability for hw->hw
+ * signaling, they can implement this op to enable the necessary
+ * irqs, or insert commands into cmdstream, etc.  This is called
+ * in the first wait() or add_callback() path to let the fence
+ * implementation know that there is another driver waiting on
+ * the signal (ie. hw->sw case).
+ *
+ * This function can be called from atomic context, but not
+ * from irq context, so normal spinlocks can be used.
+ *
+ * A return value of false indicates the fence already passed,
+ * or some failure occurred that made it impossible to enable
+ * signaling. True indicates successful enabling.
+ *
+ * fence->status may be set in enable_signaling, but only when false is
+ * returned.
+ *
+ * Calling fence_signal before enable_signaling is called allows
+ * for a tiny race window in which enable_signaling is called during,
+ * before, or after fence_signal. To fight this, it is recommended
+ * that before enable_signaling returns true an extra reference is
+ * taken on the fence, to be released when the fence is signaled.
+ * This may mean fence_signal is called twice, but the second call will
+ * be a no-op since the fence was already signaled.
+ *
+ * Notes on signaled:
+ * May set fence->status if returning true.
+ *
+ * Notes on wait:
+ * Must not be NULL, set to fence_default_wait for default implementation.
+ * The fence_default_wait implementation should work for any fence, as long
+ * as enable_signaling works correctly.
+ *
+ * Must return -ERESTARTSYS if intr is true and the wait was
+ * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
+ * timed out. Can also return other error values on custom implementations,
+ * which should be treated as if the fence is signaled. For example a hardware
+ * lockup could be reported like that.
+ *
+ * Notes on release:
+ * Can be NULL, this function allows additional commands to run on
+ * destruction of the fence. Can be called from irq context.
+ * If pointer is set to NULL, kfree will get called instead.
+ */
+
+struct fence_ops {
+	const char * (*get_driver_name)(struct fence *fence);
+	const char * (*get_timeline_name)(struct fence *fence);
+	bool (*enable_signaling)(struct fence *fence);
+	bool (*signaled)(struct fence *fence);
+	long (*wait)(struct fence *fence, bool intr, signed long timeout);
+	void (*release)(struct fence *fence);
+
+	int (*fill_driver_data)(struct fence *fence, void *data, int size);
+	void (*fence_value_str)(struct fence *fence, char *str, int size);
+	void (*timeline_value_str)(struct fence *fence, char *str, int size);
+};
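+
+/*
+ * Example: a minimal fence_ops implementation, sketched here with
+ * hypothetical mydrv names. It assumes the driver's irq handler always
+ * calls fence_signal() on completion, so enable_signaling only has to
+ * report whether signaling can still happen:
+ *
+ *	static const char *mydrv_get_driver_name(struct fence *f)
+ *	{
+ *		return "mydrv";
+ *	}
+ *
+ *	static const char *mydrv_get_timeline_name(struct fence *f)
+ *	{
+ *		return "mydrv-ring0";
+ *	}
+ *
+ *	static bool mydrv_enable_signaling(struct fence *f)
+ *	{
+ *		return !mydrv_fence_has_passed(f);
+ *	}
+ *
+ *	static const struct fence_ops mydrv_fence_ops = {
+ *		.get_driver_name = mydrv_get_driver_name,
+ *		.get_timeline_name = mydrv_get_timeline_name,
+ *		.enable_signaling = mydrv_enable_signaling,
+ *		.wait = fence_default_wait,
+ *	};
+ */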
+
+void __fence_init(struct fence *fence, const struct fence_ops *ops,
+		  spinlock_t *lock, unsigned context, unsigned seqno);
+
+/**
+ * fence_get - increases refcount of the fence
+ * @fence:	[in]	fence to increase refcount of
+ */
+static inline void fence_get(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_get(&fence->refcount);
+}
+
+extern void release_fence(struct kref *kref);
+
+/**
+ * fence_put - decreases refcount of the fence
+ * @fence:	[in]	fence to reduce refcount of
+ */
+static inline void fence_put(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_put(&fence->refcount, release_fence);
+}
+
+int fence_signal(struct fence *fence);
+int __fence_signal(struct fence *fence);
+long fence_default_wait(struct fence *fence, bool intr, signed long timeout);
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func);
+bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
+void fence_enable_sw_signaling(struct fence *fence);
+
+static inline bool
+__fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		__fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_is_signaled - Return an indication if the fence is signaled yet.
+ * @fence:	[in]	the fence to check
+ *
+ * Returns true if the fence was already signaled, false if not. Since this
+ * function doesn't enable signaling, it is not guaranteed to ever return true
+ * if fence_add_callback, fence_wait or fence_enable_sw_signaling
+ * haven't been called before.
+ *
+ * It's recommended for seqno fences to call fence_signal when the
+ * operation is complete, it makes it possible to prevent issues from
+ * wraparound between time of issue and time of use by checking the return
+ * value of this function before calling hardware-specific wait instructions.
+ */
+static inline bool
+fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_later - return the chronologically later fence
+ * @f1:	[in]	the first fence from the same context
+ * @f2:	[in]	the second fence from the same context
+ *
+ * Returns NULL if both fences are signaled, otherwise the fence that would be
+ * signaled last. Both fences must be from the same context, since a seqno is
+ * not re-used across contexts.
+ */
+static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
+{
+	BUG_ON(f1->context != f2->context);
+
+	/*
+	 * Can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have
+	 * been set if enable_signaling wasn't called, and calling that here
+	 * would be overkill.
+	 */
+	if (f2->seqno - f1->seqno <= INT_MAX)
+		return fence_is_signaled(f2) ? NULL : f2;
+	else
+		return fence_is_signaled(f1) ? NULL : f1;
+}
+
+long fence_wait_timeout(struct fence *fence, bool intr, signed long timeout);
+
+
+/**
+ * fence_wait - sleep until the fence gets signaled
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ *
+ * This function will return -ERESTARTSYS if interrupted by a signal,
+ * or 0 if the fence was signaled. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+static inline long fence_wait(struct fence *fence, bool intr)
+{
+	long ret;
+
+	/* Since fence_wait_timeout cannot time out with
+	 * MAX_SCHEDULE_TIMEOUT, the only valid return values are
+	 * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT.
+	 */
+	ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT);
+
+	return ret < 0 ? ret : 0;
+}
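+
+/*
+ * Example: a typical blocking wait before CPU access, sketched with a
+ * hypothetical mydrv_cpu_fill() helper:
+ *
+ *	long err = fence_wait(fence, true);
+ *
+ *	if (err < 0)
+ *		return err;
+ *	mydrv_cpu_fill(bo);
+ */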
+
+unsigned fence_context_alloc(unsigned num);
+
+#define FENCE_TRACE(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		if (config_enabled(CONFIG_FENCE_TRACE))			\
+			pr_info("f %u#%u: " fmt,			\
+				__ff->context, __ff->seqno, ##args);	\
+	} while (0)
+
+#define FENCE_WARN(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			 ##args);					\
+	} while (0)
+
+#define FENCE_ERR(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			##args);					\
+	} while (0)
+
+#endif /* __LINUX_FENCE_H */
diff --git a/include/trace/events/fence.h b/include/trace/events/fence.h
new file mode 100644
index 000000000000..98feb1b82896
--- /dev/null
+++ b/include/trace/events/fence.h
@@ -0,0 +1,128 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM fence
+
+#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_FENCE_H
+
+#include <linux/tracepoint.h>
+
+struct fence;
+
+TRACE_EVENT(fence_annotate_wait_on,
+
+	/* fence: the fence waiting on f1, f1: the fence to be waited on. */
+	TP_PROTO(struct fence *fence, struct fence *f1),
+
+	TP_ARGS(fence, f1),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+
+		__string(waiting_driver, f1->ops->get_driver_name(f1))
+		__string(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__field(unsigned int, waiting_context)
+		__field(unsigned int, waiting_seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+
+		__assign_str(waiting_driver, f1->ops->get_driver_name(f1))
+		__assign_str(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__entry->waiting_context = f1->context;
+		__entry->waiting_seqno = f1->seqno;
+
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u "	\
+		  "waits on driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno,
+		  __get_str(waiting_driver), __get_str(waiting_timeline),
+		  __entry->waiting_context, __entry->waiting_seqno)
+);
+
+DECLARE_EVENT_CLASS(fence,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno)
+);
+
+DEFINE_EVENT(fence, fence_emit,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_init,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_destroy,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_enable_signal,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_signaled,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_start,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_end,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+#endif /*  _TRACE_FENCE_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [REPOST PATCH 2/8] seqno-fence: Hardware dma-buf implementation of fencing (v5)
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  -1 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

This type of fence can be used with hardware synchronization for simple
hardware that can block execution until the condition
(dma_buf[offset] - value) >= 0 has been met when WAIT_GEQUAL is used,
or (dma_buf[offset] != 0) has been met when WAIT_NONZERO is set.

A software fallback still has to be provided in case the fence is used
with a device that doesn't support this mechanism. It is still useful to
expose this fence type to graphics cards that do have an operation to
wait on it in hardware.

Some cards, like i915, can export such fences but have no operation to
wait on them, so they need the software fallback.
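
For illustration, the WAIT_GEQUAL condition is the usual wraparound-safe
signed comparison; a software fallback could evaluate it along these lines
(a sketch only: seqno_passed() below is not part of this patch, and it
assumes sync_buf is already mapped and seqno_ofs is a byte offset):

	static bool seqno_passed(const void *vaddr, u32 seqno_ofs, u32 seqno)
	{
		u32 cur = *(const u32 *)(vaddr + seqno_ofs);

		return (s32)(cur - seqno) >= 0;
	}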

I extended the original patch by Rob Clark.

v1: Original
v2: Renamed from bikeshed to seqno, moved into dma-fence.c since
    not much was left of the file. Lots of documentation added.
v3: Use fence_ops instead of custom callbacks. Moved to own file
    to avoid circular dependency between dma-buf.h and fence.h
v4: Add spinlock pointer to seqno_fence_init
v5: Add condition member to allow wait for != 0.
    Fix small style errors pointed out by checkpatch.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Rob Clark <robdclark@gmail.com> #v4
---
 Documentation/DocBook/device-drivers.tmpl |    1 
 drivers/base/fence.c                      |   52 +++++++++++++
 include/linux/seqno-fence.h               |  119 +++++++++++++++++++++++++++++
 3 files changed, 172 insertions(+)
 create mode 100644 include/linux/seqno-fence.h

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 7eef81069d1b..6ca7a11fb893 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -131,6 +131,7 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-buf.c
 !Edrivers/base/fence.c
 !Iinclude/linux/fence.h
+!Iinclude/linux/seqno-fence.h
 !Edrivers/base/reservation.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
index 1da7f4d6542a..752a2dfa505f 100644
--- a/drivers/base/fence.c
+++ b/drivers/base/fence.c
@@ -25,6 +25,7 @@
 #include <linux/export.h>
 #include <linux/atomic.h>
 #include <linux/fence.h>
+#include <linux/seqno-fence.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/fence.h>
@@ -414,3 +415,54 @@ __fence_init(struct fence *fence, const struct fence_ops *ops,
 	trace_fence_init(fence);
 }
 EXPORT_SYMBOL(__fence_init);
+
+static const char *seqno_fence_get_driver_name(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->get_driver_name(fence);
+}
+
+static const char *seqno_fence_get_timeline_name(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->get_timeline_name(fence);
+}
+
+static bool seqno_enable_signaling(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->enable_signaling(fence);
+}
+
+static bool seqno_signaled(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fence);
+}
+
+static void seqno_release(struct fence *fence)
+{
+	struct seqno_fence *f = to_seqno_fence(fence);
+
+	dma_buf_put(f->sync_buf);
+	if (f->ops->release)
+		f->ops->release(fence);
+	else
+		kfree(f);
+}
+
+static long seqno_wait(struct fence *fence, bool intr, signed long timeout)
+{
+	struct seqno_fence *f = to_seqno_fence(fence);
+	return f->ops->wait(fence, intr, timeout);
+}
+
+const struct fence_ops seqno_fence_ops = {
+	.get_driver_name = seqno_fence_get_driver_name,
+	.get_timeline_name = seqno_fence_get_timeline_name,
+	.enable_signaling = seqno_enable_signaling,
+	.signaled = seqno_signaled,
+	.wait = seqno_wait,
+	.release = seqno_release,
+};
+EXPORT_SYMBOL(seqno_fence_ops);
diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h
new file mode 100644
index 000000000000..b4d4aad3cadc
--- /dev/null
+++ b/include/linux/seqno-fence.h
@@ -0,0 +1,119 @@
+/*
+ * seqno-fence, using a dma-buf to synchronize fencing
+ *
+ * Copyright (C) 2012 Texas Instruments
+ * Copyright (C) 2012 Canonical Ltd
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ *   Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __LINUX_SEQNO_FENCE_H
+#define __LINUX_SEQNO_FENCE_H
+
+#include <linux/fence.h>
+#include <linux/dma-buf.h>
+
+enum seqno_fence_condition {
+	SEQNO_FENCE_WAIT_GEQUAL,
+	SEQNO_FENCE_WAIT_NONZERO
+};
+
+struct seqno_fence {
+	struct fence base;
+
+	const struct fence_ops *ops;
+	struct dma_buf *sync_buf;
+	uint32_t seqno_ofs;
+	enum seqno_fence_condition condition;
+};
+
+extern const struct fence_ops seqno_fence_ops;
+
+/**
+ * to_seqno_fence - cast a fence to a seqno_fence
+ * @fence: fence to cast to a seqno_fence
+ *
+ * Returns NULL if the fence is not a seqno_fence,
+ * or the enclosing seqno_fence otherwise.
+ */
+static inline struct seqno_fence *
+to_seqno_fence(struct fence *fence)
+{
+	if (fence->ops != &seqno_fence_ops)
+		return NULL;
+	return container_of(fence, struct seqno_fence, base);
+}
+
+/**
+ * seqno_fence_init - initialize a seqno fence
+ * @fence: seqno_fence to initialize
+ * @lock: pointer to spinlock to use for fence
+ * @sync_buf: buffer containing the memory location to signal on
+ * @context: the execution context this fence is a part of
+ * @seqno_ofs: the offset within @sync_buf
+ * @seqno: the sequence # to signal on
+ * @ops: the fence_ops for operations on this seqno fence
+ *
+ * This function initializes a struct seqno_fence with passed parameters,
+ * and takes a reference on sync_buf which is released on fence destruction.
+ *
+ * A seqno_fence is a fence which can complete in software when
+ * enable_signaling is called, but it also completes when
+ * (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true.
+ *
+ * The seqno_fence will take a refcount on the sync_buf until it's
+ * destroyed, but the actual lifetime of sync_buf may be longer if one
+ * of the callers takes a reference to it.
+ *
+ * Certain hardware has instructions to insert this type of wait condition
+ * in the command stream, so no intervention from software would be needed.
+ * This type of fence can be destroyed before it has completed; however, a
+ * reference on the sync_buf dma-buf is taken. It is encouraged to re-use
+ * the same dma-buf for sync_buf, since mapping or unmapping the sync_buf
+ * to the device's vm can be expensive.
+ *
+ * It is recommended for creators of seqno_fence to call fence_signal
+ * before destruction. This will prevent possible issues from wraparound at
+ * time of issue vs time of check, since users can check fence_is_signaled
+ * before submitting instructions for the hardware to wait on the fence.
+ * However, when ops.enable_signaling has not been called, this doesn't
+ * have to happen as soon as possible, only before there's any real danger
+ * of seqno wraparound.
+ */
+static inline void
+seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock,
+		 struct dma_buf *sync_buf,  uint32_t context,
+		 uint32_t seqno_ofs, uint32_t seqno,
+		 enum seqno_fence_condition cond,
+		 const struct fence_ops *ops)
+{
+	BUG_ON(!fence || !sync_buf || !ops);
+	BUG_ON(!ops->wait || !ops->enable_signaling ||
+	       !ops->get_driver_name || !ops->get_timeline_name);
+
+	/*
+	 * ops is used in __fence_init for get_driver_name, so needs to be
+	 * initialized first
+	 */
+	fence->ops = ops;
+	__fence_init(&fence->base, &seqno_fence_ops, lock, context, seqno);
+	get_dma_buf(sync_buf);
+	fence->sync_buf = sync_buf;
+	fence->seqno_ofs = seqno_ofs;
+	fence->condition = cond;
+}
+
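+/*
+ * Example: a sketch of creating a seqno fence, with the mydrv names
+ * hypothetical:
+ *
+ *	struct seqno_fence *sf = kmalloc(sizeof(*sf), GFP_KERNEL);
+ *
+ *	if (sf)
+ *		seqno_fence_init(sf, &dev->fence_lock, dev->sync_buf,
+ *				 dev->context, seqno_ofs, ++dev->seqno,
+ *				 SEQNO_FENCE_WAIT_GEQUAL, &mydrv_fence_ops);
+ *
+ * The fence starts with one reference held; drop it with
+ * fence_put(&sf->base) when it is no longer needed.
+ */
+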
+#endif /* __LINUX_SEQNO_FENCE_H */


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [REPOST PATCH 3/8] dma-buf: use reservation objects
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  -1 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

This allows reservation objects to be used in dma-buf. It's required
for implementing polling support on the fences that belong to a dma-buf.
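
As a sketch of the updated export interface (the mydrv names below are
hypothetical, not from this patch): a driver that already owns a
reservation object for the backing storage passes it in, while everyone
else passes NULL and gets one allocated together with the dma_buf:

	dmabuf = dma_buf_export(obj, &mydrv_dmabuf_ops, obj->size,
				O_RDWR, mydrv_obj->resv);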

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Mauro Carvalho Chehab <m.chehab@samsung.com> #drivers/media/v4l2-core/
Acked-by: Thomas Hellstrom <thellstrom@vmware.com> #drivers/gpu/drm/ttm
Signed-off-by: Vincent Stehlé <vincent.stehle@laposte.net> #drivers/gpu/drm/armada/
---
 drivers/base/dma-buf.c                         |   22 ++++++++++++++++++++--
 drivers/gpu/drm/armada/armada_gem.c            |    2 +-
 drivers/gpu/drm/drm_prime.c                    |    8 +++++++-
 drivers/gpu/drm/exynos/exynos_drm_dmabuf.c     |    2 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c         |    3 ++-
 drivers/gpu/drm/nouveau/nouveau_drm.c          |    1 +
 drivers/gpu/drm/nouveau/nouveau_gem.h          |    1 +
 drivers/gpu/drm/nouveau/nouveau_prime.c        |    7 +++++++
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c      |    2 +-
 drivers/gpu/drm/radeon/radeon_drv.c            |    2 ++
 drivers/gpu/drm/radeon/radeon_prime.c          |    8 ++++++++
 drivers/gpu/drm/tegra/gem.c                    |    2 +-
 drivers/gpu/drm/ttm/ttm_object.c               |    2 +-
 drivers/media/v4l2-core/videobuf2-dma-contig.c |    2 +-
 drivers/staging/android/ion/ion.c              |    3 ++-
 include/drm/drmP.h                             |    3 +++
 include/linux/dma-buf.h                        |    9 ++++++---
 17 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 840c7fa80983..cd40ca22911f 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -25,10 +25,12 @@
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include <linux/fence.h>
 #include <linux/anon_inodes.h>
 #include <linux/export.h>
 #include <linux/debugfs.h>
 #include <linux/seq_file.h>
+#include <linux/reservation.h>
 
 static inline int is_dma_buf_file(struct file *);
 
@@ -56,6 +58,9 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	list_del(&dmabuf->list_node);
 	mutex_unlock(&db_list.lock);
 
+	if (dmabuf->resv == (struct reservation_object *)&dmabuf[1])
+		reservation_object_fini(dmabuf->resv);
+
 	kfree(dmabuf);
 	return 0;
 }
@@ -128,6 +133,7 @@ static inline int is_dma_buf_file(struct file *file)
  * @size:	[in]	Size of the buffer
  * @flags:	[in]	mode flags for the file.
  * @exp_name:	[in]	name of the exporting module - useful for debugging.
+ * @resv:	[in]	reservation object, NULL to allocate a default one.
  *
  * Returns, on success, a newly created dma_buf object, which wraps the
  * supplied private data and operations for dma_buf_ops. On either missing
@@ -135,10 +141,17 @@ static inline int is_dma_buf_file(struct file *file)
  *
  */
 struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
-				size_t size, int flags, const char *exp_name)
+				size_t size, int flags, const char *exp_name,
+				struct reservation_object *resv)
 {
 	struct dma_buf *dmabuf;
 	struct file *file;
+	size_t alloc_size = sizeof(struct dma_buf);
+	if (!resv)
+		alloc_size += sizeof(struct reservation_object);
+	else
+		/* prevent &dma_buf[1] == dma_buf->resv */
+		alloc_size += 1;
 
 	if (WARN_ON(!priv || !ops
 			  || !ops->map_dma_buf
@@ -150,7 +163,7 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
 		return ERR_PTR(-EINVAL);
 	}
 
-	dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
+	dmabuf = kzalloc(alloc_size, GFP_KERNEL);
 	if (dmabuf == NULL)
 		return ERR_PTR(-ENOMEM);
 
@@ -158,6 +171,11 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
 	dmabuf->ops = ops;
 	dmabuf->size = size;
 	dmabuf->exp_name = exp_name;
+	if (!resv) {
+		resv = (struct reservation_object *)&dmabuf[1];
+		reservation_object_init(resv);
+	}
+	dmabuf->resv = resv;
 
 	file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags);
 	if (IS_ERR(file)) {
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
index bb9b642d8485..7496f55611a5 100644
--- a/drivers/gpu/drm/armada/armada_gem.c
+++ b/drivers/gpu/drm/armada/armada_gem.c
@@ -539,7 +539,7 @@ armada_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj,
 	int flags)
 {
 	return dma_buf_export(obj, &armada_gem_prime_dmabuf_ops, obj->size,
-			      O_RDWR);
+			      O_RDWR, NULL);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 304ca8cacbc4..99d578bad17e 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -336,7 +336,13 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
 struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
 				     struct drm_gem_object *obj, int flags)
 {
-	return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags);
+	struct reservation_object *robj = NULL;
+
+	if (dev->driver->gem_prime_res_obj)
+		robj = dev->driver->gem_prime_res_obj(obj);
+
+	return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size,
+			      flags, robj);
 }
 EXPORT_SYMBOL(drm_gem_prime_export);
 
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
index 2a3ad24276f8..60192ed544f0 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
@@ -187,7 +187,7 @@ struct dma_buf *exynos_dmabuf_prime_export(struct drm_device *drm_dev,
 	struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
 
 	return dma_buf_export(obj, &exynos_dmabuf_ops,
-				exynos_gem_obj->base.size, flags);
+				exynos_gem_obj->base.size, flags, NULL);
 }
 
 struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 580aa42443ed..82a1f4b57778 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -237,7 +237,8 @@ struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
 			return ERR_PTR(ret);
 	}
 
-	return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags);
+	return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags,
+			      NULL);
 }
 
 static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index ddd83756b9a2..e8ae68a9aaf1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -844,6 +844,7 @@ driver = {
 	.gem_prime_export = drm_gem_prime_export,
 	.gem_prime_import = drm_gem_prime_import,
 	.gem_prime_pin = nouveau_gem_prime_pin,
+	.gem_prime_res_obj = nouveau_gem_prime_res_obj,
 	.gem_prime_unpin = nouveau_gem_prime_unpin,
 	.gem_prime_get_sg_table = nouveau_gem_prime_get_sg_table,
 	.gem_prime_import_sg_table = nouveau_gem_prime_import_sg_table,
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index 7caca057bc38..ddab762d81fe 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -35,6 +35,7 @@ extern int nouveau_gem_ioctl_info(struct drm_device *, void *,
 				  struct drm_file *);
 
 extern int nouveau_gem_prime_pin(struct drm_gem_object *);
+struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *);
 extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index 51a2cb102b44..1f51008e4d26 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -102,3 +102,10 @@ void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
 
 	nouveau_bo_unpin(nvbo);
 }
+
+struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *obj)
+{
+	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
+
+	return nvbo->bo.resv;
+}
diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index 4fcca8d42796..a2dbfb1737b4 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -171,7 +171,7 @@ static struct dma_buf_ops omap_dmabuf_ops = {
 struct dma_buf *omap_gem_prime_export(struct drm_device *dev,
 		struct drm_gem_object *obj, int flags)
 {
-	return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags);
+	return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags, NULL);
 }
 
 struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev,
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 6e3017413386..9b30e0ce8a4d 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -132,6 +132,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct sg_table *sg);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
+struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *);
 void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
 void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 extern long radeon_kms_compat_ioctl(struct file *filp, unsigned int cmd,
@@ -562,6 +563,7 @@ static struct drm_driver kms_driver = {
 	.gem_prime_import = drm_gem_prime_import,
 	.gem_prime_pin = radeon_gem_prime_pin,
 	.gem_prime_unpin = radeon_gem_prime_unpin,
+	.gem_prime_res_obj = radeon_gem_prime_res_obj,
 	.gem_prime_get_sg_table = radeon_gem_prime_get_sg_table,
 	.gem_prime_import_sg_table = radeon_gem_prime_import_sg_table,
 	.gem_prime_vmap = radeon_gem_prime_vmap,
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index 20074560fc25..28d71070c389 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -103,3 +103,11 @@ void radeon_gem_prime_unpin(struct drm_gem_object *obj)
 	radeon_bo_unpin(bo);
 	radeon_bo_unreserve(bo);
 }
+
+
+struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *obj)
+{
+	struct radeon_bo *bo = gem_to_radeon_bo(obj);
+
+	return bo->tbo.resv;
+}
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index aa85b7b26f10..78cc8143760a 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -420,7 +420,7 @@ struct dma_buf *tegra_gem_prime_export(struct drm_device *drm,
 				       int flags)
 {
 	return dma_buf_export(gem, &tegra_gem_prime_dmabuf_ops, gem->size,
-			      flags);
+			      flags, NULL);
 }
 
 struct drm_gem_object *tegra_gem_prime_import(struct drm_device *drm,
diff --git a/drivers/gpu/drm/ttm/ttm_object.c b/drivers/gpu/drm/ttm/ttm_object.c
index d2a053352789..12c87110db3a 100644
--- a/drivers/gpu/drm/ttm/ttm_object.c
+++ b/drivers/gpu/drm/ttm/ttm_object.c
@@ -695,7 +695,7 @@ int ttm_prime_handle_to_fd(struct ttm_object_file *tfile,
 		}
 
 		dma_buf = dma_buf_export(prime, &tdev->ops,
-					 prime->size, flags);
+					 prime->size, flags, NULL);
 		if (IS_ERR(dma_buf)) {
 			ret = PTR_ERR(dma_buf);
 			ttm_mem_global_free(tdev->mem_glob,
diff --git a/drivers/media/v4l2-core/videobuf2-dma-contig.c b/drivers/media/v4l2-core/videobuf2-dma-contig.c
index 880be0782dd9..c4e4dfa8123a 100644
--- a/drivers/media/v4l2-core/videobuf2-dma-contig.c
+++ b/drivers/media/v4l2-core/videobuf2-dma-contig.c
@@ -404,7 +404,7 @@ static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags)
 	if (WARN_ON(!buf->sgt_base))
 		return NULL;
 
-	dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags);
+	dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags, NULL);
 	if (IS_ERR(dbuf))
 		return NULL;
 
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 389b8f67a2ec..270360912b2c 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -1120,7 +1120,8 @@ struct dma_buf *ion_share_dma_buf(struct ion_client *client,
 	ion_buffer_get(buffer);
 	mutex_unlock(&client->lock);
 
-	dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR);
+	dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR,
+				NULL);
 	if (IS_ERR(dmabuf)) {
 		ion_buffer_put(buffer);
 		return dmabuf;
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 8af71a8e2c00..e41f17ea1f13 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -83,6 +83,7 @@ struct drm_device;
 
 struct device_node;
 struct videomode;
+struct reservation_object;
 
 #include <drm/drm_os_linux.h>
 #include <drm/drm_hashtab.h>
@@ -923,6 +924,8 @@ struct drm_driver {
 	/* low-level interface used by drm_gem_prime_{import,export} */
 	int (*gem_prime_pin)(struct drm_gem_object *obj);
 	void (*gem_prime_unpin)(struct drm_gem_object *obj);
+	struct reservation_object * (*gem_prime_res_obj)(
+				struct drm_gem_object *obj);
 	struct sg_table *(*gem_prime_get_sg_table)(struct drm_gem_object *obj);
 	struct drm_gem_object *(*gem_prime_import_sg_table)(
 				struct drm_device *dev, size_t size,
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index f886985a28b2..fd7def2e0ae2 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -115,6 +115,7 @@ struct dma_buf_ops {
  * @exp_name: name of the exporter; useful for debugging.
  * @list_node: node for dma_buf accounting and debugging.
  * @priv: exporter specific private data for this buffer object.
+ * @resv: reservation object linked to this dma-buf
  */
 struct dma_buf {
 	size_t size;
@@ -128,6 +129,7 @@ struct dma_buf {
 	const char *exp_name;
 	struct list_head list_node;
 	void *priv;
+	struct reservation_object *resv;
 };
 
 /**
@@ -168,10 +170,11 @@ void dma_buf_detach(struct dma_buf *dmabuf,
 				struct dma_buf_attachment *dmabuf_attach);
 
 struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
-			       size_t size, int flags, const char *);
+			       size_t size, int flags, const char *,
+			       struct reservation_object *);
 
-#define dma_buf_export(priv, ops, size, flags)	\
-	dma_buf_export_named(priv, ops, size, flags, KBUILD_MODNAME)
+#define dma_buf_export(priv, ops, size, flags, resv)	\
+	dma_buf_export_named(priv, ops, size, flags, KBUILD_MODNAME, resv)
 
 int dma_buf_fd(struct dma_buf *dmabuf, int flags);
 struct dma_buf *dma_buf_get(int fd);


^ permalink raw reply related	[flat|nested] 73+ messages in thread


* [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  -1 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

Just to show it's easy.

Android syncpoints can be mapped to a timeline. This removes the need
to maintain a separate API for synchronization. I've left the Android
trace events in place, but the core fence events should already be
sufficient for debugging.
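
The mapping is easy to see in the sync.h hunk below: a sync_pt now
embeds a struct fence, and its timeline is recovered through the
fence's shared lock instead of a dedicated parent pointer. A condensed
sketch of that relationship (fields trimmed to the relevant ones; the
full declarations are in the diff):

#include <linux/fence.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct sync_timeline {
	int			context, value;	/* fence context, last issued seqno */
	struct list_head	child_list_head;	/* every pt on this timeline */
	spinlock_t		child_list_lock;	/* doubles as the fence lock */
	struct list_head	active_list_head;	/* pts with signaling enabled */
};

struct sync_pt {
	struct fence		base;	/* the generic cross-driver primitive */
	struct list_head	child_list;
	struct list_head	active_list;
};

/*
 * The old pt->parent pointer is gone: every fence on a timeline is
 * initialized with &obj->child_list_lock, so container_of() on the
 * fence's lock recovers the timeline.
 */
static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt)
{
	return container_of(pt->base.lock, struct sync_timeline,
			    child_list_lock);
}

sync_pt_create() ties the two together by passing &obj->child_list_lock,
obj->context and ++obj->value to __fence_init(), so the timeline's
context and counter feed straight into the generic fence.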

v2:
- Call fence_remove_callback in sync_fence_free if not all fences have fired.
v3:
- Merge Colin Cross' bugfixes, and the Android fence merge optimization
  (the merge walk is sketched just after this changelog).
v4:
- Merge with the upstream fixes.
v5:
- Fix small style issues pointed out by Thomas Hellstrom.
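
The merge optimization mentioned for v3 relies on every sync_fence
keeping its points sorted by fence context, so sync_fence_merge() can
do a single linear walk over both arrays and, when the same context
shows up on both sides, keep only the later point. A minimal
userspace-style illustration of that walk (pt_merge and struct pt are
hypothetical stand-ins; only the context/seqno handling mirrors the
real code):

#include <limits.h>

struct pt { unsigned int context; unsigned int seqno; };

/*
 * Merge two context-sorted arrays, collapsing points that share a
 * context. "a is later than b" uses the same wraparound-safe test as
 * the patch: a->seqno - b->seqno <= INT_MAX in unsigned arithmetic.
 */
static int pt_merge(const struct pt *a, int na,
		    const struct pt *b, int nb, struct pt *out)
{
	int i = 0, i_a = 0, i_b = 0;

	while (i_a < na && i_b < nb) {
		if (a[i_a].context < b[i_b].context) {
			out[i++] = a[i_a++];
		} else if (a[i_a].context > b[i_b].context) {
			out[i++] = b[i_b++];
		} else {
			if (a[i_a].seqno - b[i_b].seqno <= INT_MAX)
				out[i++] = a[i_a];	/* a is the later point */
			else
				out[i++] = b[i_b];
			i_a++;
			i_b++;
		}
	}
	while (i_a < na)
		out[i++] = a[i_a++];
	while (i_b < nb)
		out[i++] = b[i_b++];

	return i;	/* may be < na + nb when contexts collapsed */
}

In the patch the same walk runs over fence->cbs[], and
atomic_sub(num_fences - i, &fence->status) afterwards drops the
unsignaled count for whatever got collapsed.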

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: John Stultz <john.stultz@linaro.org>
---
 drivers/staging/android/Kconfig      |    1 
 drivers/staging/android/Makefile     |    2 
 drivers/staging/android/sw_sync.c    |    6 
 drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
 drivers/staging/android/sync.h       |   79 ++-
 drivers/staging/android/sync_debug.c |  247 +++++++++
 drivers/staging/android/trace/sync.h |   12 
 7 files changed, 609 insertions(+), 651 deletions(-)
 create mode 100644 drivers/staging/android/sync_debug.c

diff --git a/drivers/staging/android/Kconfig b/drivers/staging/android/Kconfig
index 99e484f845f2..51607e9aa049 100644
--- a/drivers/staging/android/Kconfig
+++ b/drivers/staging/android/Kconfig
@@ -88,6 +88,7 @@ config SYNC
 	bool "Synchronization framework"
 	default n
 	select ANON_INODES
+	select DMA_SHARED_BUFFER
 	---help---
 	  This option enables the framework for synchronization between multiple
 	  drivers.  Sync implementations can take advantage of hardware
diff --git a/drivers/staging/android/Makefile b/drivers/staging/android/Makefile
index 0a01e1914905..517ad5ffa429 100644
--- a/drivers/staging/android/Makefile
+++ b/drivers/staging/android/Makefile
@@ -9,5 +9,5 @@ obj-$(CONFIG_ANDROID_TIMED_OUTPUT)	+= timed_output.o
 obj-$(CONFIG_ANDROID_TIMED_GPIO)	+= timed_gpio.o
 obj-$(CONFIG_ANDROID_LOW_MEMORY_KILLER)	+= lowmemorykiller.o
 obj-$(CONFIG_ANDROID_INTF_ALARM_DEV)	+= alarm-dev.o
-obj-$(CONFIG_SYNC)			+= sync.o
+obj-$(CONFIG_SYNC)			+= sync.o sync_debug.o
 obj-$(CONFIG_SW_SYNC)			+= sw_sync.o
diff --git a/drivers/staging/android/sw_sync.c b/drivers/staging/android/sw_sync.c
index 12a136ec1cec..a76db3ff87cb 100644
--- a/drivers/staging/android/sw_sync.c
+++ b/drivers/staging/android/sw_sync.c
@@ -50,7 +50,7 @@ static struct sync_pt *sw_sync_pt_dup(struct sync_pt *sync_pt)
 {
 	struct sw_sync_pt *pt = (struct sw_sync_pt *) sync_pt;
 	struct sw_sync_timeline *obj =
-		(struct sw_sync_timeline *)sync_pt->parent;
+		(struct sw_sync_timeline *)sync_pt_parent(sync_pt);
 
 	return (struct sync_pt *) sw_sync_pt_create(obj, pt->value);
 }
@@ -59,7 +59,7 @@ static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt)
 {
 	struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
 	struct sw_sync_timeline *obj =
-		(struct sw_sync_timeline *)sync_pt->parent;
+		(struct sw_sync_timeline *)sync_pt_parent(sync_pt);
 
 	return sw_sync_cmp(obj->value, pt->value) >= 0;
 }
@@ -97,7 +97,6 @@ static void sw_sync_pt_value_str(struct sync_pt *sync_pt,
 				       char *str, int size)
 {
 	struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
-
 	snprintf(str, size, "%d", pt->value);
 }
 
@@ -157,7 +156,6 @@ static int sw_sync_open(struct inode *inode, struct file *file)
 static int sw_sync_release(struct inode *inode, struct file *file)
 {
 	struct sw_sync_timeline *obj = file->private_data;
-
 	sync_timeline_destroy(&obj->obj);
 	return 0;
 }
diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
index 18174f7c871c..70b09b5001ba 100644
--- a/drivers/staging/android/sync.c
+++ b/drivers/staging/android/sync.c
@@ -31,22 +31,13 @@
 #define CREATE_TRACE_POINTS
 #include "trace/sync.h"
 
-static void sync_fence_signal_pt(struct sync_pt *pt);
-static int _sync_pt_has_signaled(struct sync_pt *pt);
-static void sync_fence_free(struct kref *kref);
-static void sync_dump(void);
-
-static LIST_HEAD(sync_timeline_list_head);
-static DEFINE_SPINLOCK(sync_timeline_list_lock);
-
-static LIST_HEAD(sync_fence_list_head);
-static DEFINE_SPINLOCK(sync_fence_list_lock);
+static const struct fence_ops android_fence_ops;
+static const struct file_operations sync_fence_fops;
 
 struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
 					   int size, const char *name)
 {
 	struct sync_timeline *obj;
-	unsigned long flags;
 
 	if (size < sizeof(struct sync_timeline))
 		return NULL;
@@ -57,17 +48,14 @@ struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
 
 	kref_init(&obj->kref);
 	obj->ops = ops;
+	obj->context = fence_context_alloc(1);
 	strlcpy(obj->name, name, sizeof(obj->name));
 
 	INIT_LIST_HEAD(&obj->child_list_head);
-	spin_lock_init(&obj->child_list_lock);
-
 	INIT_LIST_HEAD(&obj->active_list_head);
-	spin_lock_init(&obj->active_list_lock);
+	spin_lock_init(&obj->child_list_lock);
 
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
-	list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head);
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+	sync_timeline_debug_add(obj);
 
 	return obj;
 }
@@ -77,11 +65,8 @@ static void sync_timeline_free(struct kref *kref)
 {
 	struct sync_timeline *obj =
 		container_of(kref, struct sync_timeline, kref);
-	unsigned long flags;
 
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
-	list_del(&obj->sync_timeline_list);
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+	sync_timeline_debug_remove(obj);
 
 	if (obj->ops->release_obj)
 		obj->ops->release_obj(obj);
@@ -89,6 +74,16 @@ static void sync_timeline_free(struct kref *kref)
 	kfree(obj);
 }
 
+static void sync_timeline_get(struct sync_timeline *obj)
+{
+	kref_get(&obj->kref);
+}
+
+static void sync_timeline_put(struct sync_timeline *obj)
+{
+	kref_put(&obj->kref, sync_timeline_free);
+}
+
 void sync_timeline_destroy(struct sync_timeline *obj)
 {
 	obj->destroyed = true;
@@ -102,75 +97,33 @@ void sync_timeline_destroy(struct sync_timeline *obj)
 	 * signal any children that their parent is going away.
 	 */
 	sync_timeline_signal(obj);
-
-	kref_put(&obj->kref, sync_timeline_free);
+	sync_timeline_put(obj);
 }
 EXPORT_SYMBOL(sync_timeline_destroy);
 
-static void sync_timeline_add_pt(struct sync_timeline *obj, struct sync_pt *pt)
-{
-	unsigned long flags;
-
-	pt->parent = obj;
-
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	list_add_tail(&pt->child_list, &obj->child_list_head);
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
-}
-
-static void sync_timeline_remove_pt(struct sync_pt *pt)
-{
-	struct sync_timeline *obj = pt->parent;
-	unsigned long flags;
-
-	spin_lock_irqsave(&obj->active_list_lock, flags);
-	if (!list_empty(&pt->active_list))
-		list_del_init(&pt->active_list);
-	spin_unlock_irqrestore(&obj->active_list_lock, flags);
-
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	if (!list_empty(&pt->child_list))
-		list_del_init(&pt->child_list);
-
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
-}
-
 void sync_timeline_signal(struct sync_timeline *obj)
 {
 	unsigned long flags;
 	LIST_HEAD(signaled_pts);
-	struct list_head *pos, *n;
+	struct sync_pt *pt, *next;
 
 	trace_sync_timeline(obj);
 
-	spin_lock_irqsave(&obj->active_list_lock, flags);
-
-	list_for_each_safe(pos, n, &obj->active_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, active_list);
+	spin_lock_irqsave(&obj->child_list_lock, flags);
 
-		if (_sync_pt_has_signaled(pt)) {
-			list_del_init(pos);
-			list_add(&pt->signaled_list, &signaled_pts);
-			kref_get(&pt->fence->kref);
-		}
+	list_for_each_entry_safe(pt, next, &obj->active_list_head,
+				 active_list) {
+		if (__fence_is_signaled(&pt->base))
+			list_del(&pt->active_list);
 	}
 
-	spin_unlock_irqrestore(&obj->active_list_lock, flags);
-
-	list_for_each_safe(pos, n, &signaled_pts) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, signaled_list);
-
-		list_del_init(pos);
-		sync_fence_signal_pt(pt);
-		kref_put(&pt->fence->kref, sync_fence_free);
-	}
+	spin_unlock_irqrestore(&obj->child_list_lock, flags);
 }
 EXPORT_SYMBOL(sync_timeline_signal);
 
-struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
+struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size)
 {
+	unsigned long flags;
 	struct sync_pt *pt;
 
 	if (size < sizeof(struct sync_pt))
@@ -180,87 +133,28 @@ struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
 	if (pt == NULL)
 		return NULL;
 
+	spin_lock_irqsave(&obj->child_list_lock, flags);
+	sync_timeline_get(obj);
+	__fence_init(&pt->base, &android_fence_ops, &obj->child_list_lock,
+		     obj->context, ++obj->value);
+	list_add_tail(&pt->child_list, &obj->child_list_head);
 	INIT_LIST_HEAD(&pt->active_list);
-	kref_get(&parent->kref);
-	sync_timeline_add_pt(parent, pt);
-
+	spin_unlock_irqrestore(&obj->child_list_lock, flags);
 	return pt;
 }
 EXPORT_SYMBOL(sync_pt_create);
 
 void sync_pt_free(struct sync_pt *pt)
 {
-	if (pt->parent->ops->free_pt)
-		pt->parent->ops->free_pt(pt);
-
-	sync_timeline_remove_pt(pt);
-
-	kref_put(&pt->parent->kref, sync_timeline_free);
-
-	kfree(pt);
+	fence_put(&pt->base);
 }
 EXPORT_SYMBOL(sync_pt_free);
 
-/* call with pt->parent->active_list_lock held */
-static int _sync_pt_has_signaled(struct sync_pt *pt)
-{
-	int old_status = pt->status;
-
-	if (!pt->status)
-		pt->status = pt->parent->ops->has_signaled(pt);
-
-	if (!pt->status && pt->parent->destroyed)
-		pt->status = -ENOENT;
-
-	if (pt->status != old_status)
-		pt->timestamp = ktime_get();
-
-	return pt->status;
-}
-
-static struct sync_pt *sync_pt_dup(struct sync_pt *pt)
-{
-	return pt->parent->ops->dup(pt);
-}
-
-/* Adds a sync pt to the active queue.  Called when added to a fence */
-static void sync_pt_activate(struct sync_pt *pt)
-{
-	struct sync_timeline *obj = pt->parent;
-	unsigned long flags;
-	int err;
-
-	spin_lock_irqsave(&obj->active_list_lock, flags);
-
-	err = _sync_pt_has_signaled(pt);
-	if (err != 0)
-		goto out;
-
-	list_add_tail(&pt->active_list, &obj->active_list_head);
-
-out:
-	spin_unlock_irqrestore(&obj->active_list_lock, flags);
-}
-
-static int sync_fence_release(struct inode *inode, struct file *file);
-static unsigned int sync_fence_poll(struct file *file, poll_table *wait);
-static long sync_fence_ioctl(struct file *file, unsigned int cmd,
-			     unsigned long arg);
-
-
-static const struct file_operations sync_fence_fops = {
-	.release = sync_fence_release,
-	.poll = sync_fence_poll,
-	.unlocked_ioctl = sync_fence_ioctl,
-	.compat_ioctl = sync_fence_ioctl,
-};
-
-static struct sync_fence *sync_fence_alloc(const char *name)
+static struct sync_fence *sync_fence_alloc(int size, const char *name)
 {
 	struct sync_fence *fence;
-	unsigned long flags;
 
-	fence = kzalloc(sizeof(struct sync_fence), GFP_KERNEL);
+	fence = kzalloc(size, GFP_KERNEL);
 	if (fence == NULL)
 		return NULL;
 
@@ -272,16 +166,8 @@ static struct sync_fence *sync_fence_alloc(const char *name)
 	kref_init(&fence->kref);
 	strlcpy(fence->name, name, sizeof(fence->name));
 
-	INIT_LIST_HEAD(&fence->pt_list_head);
-	INIT_LIST_HEAD(&fence->waiter_list_head);
-	spin_lock_init(&fence->waiter_list_lock);
-
 	init_waitqueue_head(&fence->wq);
 
-	spin_lock_irqsave(&sync_fence_list_lock, flags);
-	list_add_tail(&fence->sync_fence_list, &sync_fence_list_head);
-	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
-
 	return fence;
 
 err:
@@ -289,120 +175,42 @@ err:
 	return NULL;
 }
 
-/* TODO: implement a create which takes more that one sync_pt */
-struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
+static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
 {
+	struct sync_fence_cb *check;
 	struct sync_fence *fence;
 
-	if (pt->fence)
-		return NULL;
-
-	fence = sync_fence_alloc(name);
-	if (fence == NULL)
-		return NULL;
-
-	pt->fence = fence;
-	list_add(&pt->pt_list, &fence->pt_list_head);
-	sync_pt_activate(pt);
-
-	/*
-	 * signal the fence in case pt was activated before
-	 * sync_pt_activate(pt) was called
-	 */
-	sync_fence_signal_pt(pt);
+	check = container_of(cb, struct sync_fence_cb, cb);
+	fence = check->fence;
 
-	return fence;
+	if (atomic_dec_and_test(&fence->status))
+		wake_up_all(&fence->wq);
 }
-EXPORT_SYMBOL(sync_fence_create);
-
-static int sync_fence_copy_pts(struct sync_fence *dst, struct sync_fence *src)
-{
-	struct list_head *pos;
-
-	list_for_each(pos, &src->pt_list_head) {
-		struct sync_pt *orig_pt =
-			container_of(pos, struct sync_pt, pt_list);
-		struct sync_pt *new_pt = sync_pt_dup(orig_pt);
 
-		if (new_pt == NULL)
-			return -ENOMEM;
-
-		new_pt->fence = dst;
-		list_add(&new_pt->pt_list, &dst->pt_list_head);
-	}
-
-	return 0;
-}
-
-static int sync_fence_merge_pts(struct sync_fence *dst, struct sync_fence *src)
-{
-	struct list_head *src_pos, *dst_pos, *n;
-
-	list_for_each(src_pos, &src->pt_list_head) {
-		struct sync_pt *src_pt =
-			container_of(src_pos, struct sync_pt, pt_list);
-		bool collapsed = false;
-
-		list_for_each_safe(dst_pos, n, &dst->pt_list_head) {
-			struct sync_pt *dst_pt =
-				container_of(dst_pos, struct sync_pt, pt_list);
-			/* collapse two sync_pts on the same timeline
-			 * to a single sync_pt that will signal at
-			 * the later of the two
-			 */
-			if (dst_pt->parent == src_pt->parent) {
-				if (dst_pt->parent->ops->compare(dst_pt, src_pt)
-						 == -1) {
-					struct sync_pt *new_pt =
-						sync_pt_dup(src_pt);
-					if (new_pt == NULL)
-						return -ENOMEM;
-
-					new_pt->fence = dst;
-					list_replace(&dst_pt->pt_list,
-						     &new_pt->pt_list);
-					sync_pt_free(dst_pt);
-				}
-				collapsed = true;
-				break;
-			}
-		}
-
-		if (!collapsed) {
-			struct sync_pt *new_pt = sync_pt_dup(src_pt);
-
-			if (new_pt == NULL)
-				return -ENOMEM;
-
-			new_pt->fence = dst;
-			list_add(&new_pt->pt_list, &dst->pt_list_head);
-		}
-	}
-
-	return 0;
-}
-
-static void sync_fence_detach_pts(struct sync_fence *fence)
+/* TODO: implement a create which takes more that one sync_pt */
+struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
 {
-	struct list_head *pos, *n;
+	struct sync_fence *fence;
 
-	list_for_each_safe(pos, n, &fence->pt_list_head) {
-		struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
+	fence = sync_fence_alloc(offsetof(struct sync_fence, cbs[1]), name);
+	if (fence == NULL)
+		return NULL;
 
-		sync_timeline_remove_pt(pt);
-	}
-}
+	fence->num_fences = 1;
+	atomic_set(&fence->status, 1);
 
-static void sync_fence_free_pts(struct sync_fence *fence)
-{
-	struct list_head *pos, *n;
+	fence_get(&pt->base);
+	fence->cbs[0].sync_pt = &pt->base;
+	fence->cbs[0].fence = fence;
+	if (fence_add_callback(&pt->base, &fence->cbs[0].cb,
+			       fence_check_cb_func))
+		atomic_dec(&fence->status);
 
-	list_for_each_safe(pos, n, &fence->pt_list_head) {
-		struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
+	sync_fence_debug_add(fence);
 
-		sync_pt_free(pt);
-	}
+	return fence;
 }
+EXPORT_SYMBOL(sync_fence_create);
 
 struct sync_fence *sync_fence_fdget(int fd)
 {
@@ -434,197 +242,155 @@ void sync_fence_install(struct sync_fence *fence, int fd)
 }
 EXPORT_SYMBOL(sync_fence_install);
 
-static int sync_fence_get_status(struct sync_fence *fence)
+static void sync_fence_add_pt(struct sync_fence *fence,
+			      int *i, struct fence *pt)
 {
-	struct list_head *pos;
-	int status = 1;
-
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
-		int pt_status = pt->status;
-
-		if (pt_status < 0) {
-			status = pt_status;
-			break;
-		} else if (status == 1) {
-			status = pt_status;
-		}
-	}
+	fence->cbs[*i].sync_pt = pt;
+	fence->cbs[*i].fence = fence;
 
-	return status;
+	if (!fence_add_callback(pt, &fence->cbs[*i].cb, fence_check_cb_func)) {
+		fence_get(pt);
+		(*i)++;
+	}
 }
 
 struct sync_fence *sync_fence_merge(const char *name,
 				    struct sync_fence *a, struct sync_fence *b)
 {
+	int num_fences = a->num_fences + b->num_fences;
 	struct sync_fence *fence;
-	struct list_head *pos;
-	int err;
+	int i, i_a, i_b;
+	unsigned long size = offsetof(struct sync_fence, cbs[num_fences]);
 
-	fence = sync_fence_alloc(name);
+	fence = sync_fence_alloc(size, name);
 	if (fence == NULL)
 		return NULL;
 
-	err = sync_fence_copy_pts(fence, a);
-	if (err < 0)
-		goto err;
+	atomic_set(&fence->status, num_fences);
 
-	err = sync_fence_merge_pts(fence, b);
-	if (err < 0)
-		goto err;
+	/*
+	 * Assume sync_fence a and b are both ordered and have no
+	 * duplicates with the same context.
+	 *
+	 * If a sync_fence can only be created with sync_fence_merge
+	 * and sync_fence_create, this is a reasonable assumption.
+	 */
+	for (i = i_a = i_b = 0; i_a < a->num_fences && i_b < b->num_fences; ) {
+		struct fence *pt_a = a->cbs[i_a].sync_pt;
+		struct fence *pt_b = b->cbs[i_b].sync_pt;
+
+		if (pt_a->context < pt_b->context) {
+			sync_fence_add_pt(fence, &i, pt_a);
+
+			i_a++;
+		} else if (pt_a->context > pt_b->context) {
+			sync_fence_add_pt(fence, &i, pt_b);
 
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, pt_list);
-		sync_pt_activate(pt);
+			i_b++;
+		} else {
+			if (pt_a->seqno - pt_b->seqno <= INT_MAX)
+				sync_fence_add_pt(fence, &i, pt_a);
+			else
+				sync_fence_add_pt(fence, &i, pt_b);
+
+			i_a++;
+			i_b++;
+		}
 	}
 
-	/*
-	 * signal the fence in case one of it's pts were activated before
-	 * they were activated
-	 */
-	sync_fence_signal_pt(list_first_entry(&fence->pt_list_head,
-					      struct sync_pt,
-					      pt_list));
+	for (; i_a < a->num_fences; i_a++)
+		sync_fence_add_pt(fence, &i, a->cbs[i_a].sync_pt);
 
+	for (; i_b < b->num_fences; i_b++)
+		sync_fence_add_pt(fence, &i, b->cbs[i_b].sync_pt);
+
+	if (num_fences > i)
+		atomic_sub(num_fences - i, &fence->status);
+	fence->num_fences = i;
+
+	sync_fence_debug_add(fence);
 	return fence;
-err:
-	sync_fence_free_pts(fence);
-	kfree(fence);
-	return NULL;
 }
 EXPORT_SYMBOL(sync_fence_merge);
 
-static void sync_fence_signal_pt(struct sync_pt *pt)
+int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode,
+				 int wake_flags, void *key)
 {
-	LIST_HEAD(signaled_waiters);
-	struct sync_fence *fence = pt->fence;
-	struct list_head *pos;
-	struct list_head *n;
-	unsigned long flags;
-	int status;
-
-	status = sync_fence_get_status(fence);
-
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
-	/*
-	 * this should protect against two threads racing on the signaled
-	 * false -> true transition
-	 */
-	if (status && !fence->status) {
-		list_for_each_safe(pos, n, &fence->waiter_list_head)
-			list_move(pos, &signaled_waiters);
-
-		fence->status = status;
-	} else {
-		status = 0;
-	}
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
+	struct sync_fence_waiter *wait;
 
-	if (status) {
-		list_for_each_safe(pos, n, &signaled_waiters) {
-			struct sync_fence_waiter *waiter =
-				container_of(pos, struct sync_fence_waiter,
-					     waiter_list);
+	wait = container_of(curr, struct sync_fence_waiter, work);
+	list_del_init(&wait->work.task_list);
 
-			list_del(pos);
-			waiter->callback(fence, waiter);
-		}
-		wake_up(&fence->wq);
-	}
+	wait->callback(wait->work.private, wait);
+	return 1;
 }
 
 int sync_fence_wait_async(struct sync_fence *fence,
 			  struct sync_fence_waiter *waiter)
 {
+	int err = atomic_read(&fence->status);
 	unsigned long flags;
-	int err = 0;
 
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
+	if (err < 0)
+		return err;
 
-	if (fence->status) {
-		err = fence->status;
-		goto out;
-	}
+	if (!err)
+		return 1;
 
-	list_add_tail(&waiter->waiter_list, &fence->waiter_list_head);
-out:
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
+	init_waitqueue_func_entry(&waiter->work, sync_fence_wake_up_wq);
+	waiter->work.private = fence;
 
-	return err;
+	spin_lock_irqsave(&fence->wq.lock, flags);
+	err = atomic_read(&fence->status);
+	if (err > 0)
+		__add_wait_queue_tail(&fence->wq, &waiter->work);
+	spin_unlock_irqrestore(&fence->wq.lock, flags);
+
+	if (err < 0)
+		return err;
+
+	return !err;
 }
 EXPORT_SYMBOL(sync_fence_wait_async);
 
 int sync_fence_cancel_async(struct sync_fence *fence,
 			     struct sync_fence_waiter *waiter)
 {
-	struct list_head *pos;
-	struct list_head *n;
 	unsigned long flags;
-	int ret = -ENOENT;
+	int ret = 0;
 
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
-	/*
-	 * Make sure waiter is still in waiter_list because it is possible for
-	 * the waiter to be removed from the list while the callback is still
-	 * pending.
-	 */
-	list_for_each_safe(pos, n, &fence->waiter_list_head) {
-		struct sync_fence_waiter *list_waiter =
-			container_of(pos, struct sync_fence_waiter,
-				     waiter_list);
-		if (list_waiter == waiter) {
-			list_del(pos);
-			ret = 0;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
+	spin_lock_irqsave(&fence->wq.lock, flags);
+	if (!list_empty(&waiter->work.task_list))
+		list_del_init(&waiter->work.task_list);
+	else
+		ret = -ENOENT;
+	spin_unlock_irqrestore(&fence->wq.lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(sync_fence_cancel_async);
 
-static bool sync_fence_check(struct sync_fence *fence)
-{
-	/*
-	 * Make sure that reads to fence->status are ordered with the
-	 * wait queue event triggering
-	 */
-	smp_rmb();
-	return fence->status != 0;
-}
-
 int sync_fence_wait(struct sync_fence *fence, long timeout)
 {
-	int err = 0;
-	struct sync_pt *pt;
-
-	trace_sync_wait(fence, 1);
-	list_for_each_entry(pt, &fence->pt_list_head, pt_list)
-		trace_sync_pt(pt);
+	long ret;
+	int i;
 
-	if (timeout > 0) {
+	if (timeout < 0)
+		timeout = MAX_SCHEDULE_TIMEOUT;
+	else
 		timeout = msecs_to_jiffies(timeout);
-		err = wait_event_interruptible_timeout(fence->wq,
-						       sync_fence_check(fence),
-						       timeout);
-	} else if (timeout < 0) {
-		err = wait_event_interruptible(fence->wq,
-					       sync_fence_check(fence));
-	}
-	trace_sync_wait(fence, 0);
 
-	if (err < 0)
-		return err;
-
-	if (fence->status < 0) {
-		pr_info("fence error %d on [%p]\n", fence->status, fence);
-		sync_dump();
-		return fence->status;
-	}
+	trace_sync_wait(fence, 1);
+	for (i = 0; i < fence->num_fences; ++i)
+		trace_sync_pt(fence->cbs[i].sync_pt);
+	ret = wait_event_interruptible_timeout(fence->wq,
+					       atomic_read(&fence->status) <= 0,
+					       timeout);
+	trace_sync_wait(fence, 0);
 
-	if (fence->status == 0) {
-		if (timeout > 0) {
+	if (ret < 0)
+		return ret;
+	else if (ret == 0) {
+		if (timeout) {
 			pr_info("fence timeout on [%p] after %dms\n", fence,
 				jiffies_to_msecs(timeout));
 			sync_dump();
@@ -632,15 +398,136 @@ int sync_fence_wait(struct sync_fence *fence, long timeout)
 		return -ETIME;
 	}
 
-	return 0;
+	ret = atomic_read(&fence->status);
+	if (ret) {
+		pr_info("fence error %ld on [%p]\n", ret, fence);
+		sync_dump();
+	}
+	return ret;
 }
 EXPORT_SYMBOL(sync_fence_wait);
 
+static const char *android_fence_get_driver_name(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	return parent->ops->driver_name;
+}
+
+static const char *android_fence_get_timeline_name(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	return parent->name;
+}
+
+static void android_fence_release(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+	unsigned long flags;
+
+	spin_lock_irqsave(fence->lock, flags);
+	list_del(&pt->child_list);
+	if (WARN_ON_ONCE(!list_empty(&pt->active_list)))
+		list_del(&pt->active_list);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	if (parent->ops->free_pt)
+		parent->ops->free_pt(pt);
+
+	sync_timeline_put(parent);
+	kfree(pt);
+}
+
+static bool android_fence_signaled(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+	int ret;
+
+	ret = parent->ops->has_signaled(pt);
+	if (ret < 0)
+		fence->status = ret;
+	return ret;
+}
+
+static bool android_fence_enable_signaling(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (android_fence_signaled(fence))
+		return false;
+
+	list_add_tail(&pt->active_list, &parent->active_list_head);
+	return true;
+}
+
+static int android_fence_fill_driver_data(struct fence *fence,
+					  void *data, int size)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (!parent->ops->fill_driver_data)
+		return 0;
+	return parent->ops->fill_driver_data(pt, data, size);
+}
+
+static void android_fence_value_str(struct fence *fence,
+				    char *str, int size)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (!parent->ops->pt_value_str) {
+		if (size)
+			*str = 0;
+		return;
+	}
+	parent->ops->pt_value_str(pt, str, size);
+}
+
+static void android_fence_timeline_value_str(struct fence *fence,
+					     char *str, int size)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (!parent->ops->timeline_value_str) {
+		if (size)
+			*str = 0;
+		return;
+	}
+	parent->ops->timeline_value_str(parent, str, size);
+}
+
+static const struct fence_ops android_fence_ops = {
+	.get_driver_name = android_fence_get_driver_name,
+	.get_timeline_name = android_fence_get_timeline_name,
+	.enable_signaling = android_fence_enable_signaling,
+	.signaled = android_fence_signaled,
+	.wait = fence_default_wait,
+	.release = android_fence_release,
+	.fill_driver_data = android_fence_fill_driver_data,
+	.fence_value_str = android_fence_value_str,
+	.timeline_value_str = android_fence_timeline_value_str,
+};
+
 static void sync_fence_free(struct kref *kref)
 {
 	struct sync_fence *fence = container_of(kref, struct sync_fence, kref);
+	int i, status = atomic_read(&fence->status);
 
-	sync_fence_free_pts(fence);
+	for (i = 0; i < fence->num_fences; ++i) {
+		if (status)
+			fence_remove_callback(fence->cbs[i].sync_pt,
+					      &fence->cbs[i].cb);
+		fence_put(fence->cbs[i].sync_pt);
+	}
 
 	kfree(fence);
 }
@@ -648,44 +535,25 @@ static void sync_fence_free(struct kref *kref)
 static int sync_fence_release(struct inode *inode, struct file *file)
 {
 	struct sync_fence *fence = file->private_data;
-	unsigned long flags;
-
-	/*
-	 * We need to remove all ways to access this fence before droping
-	 * our ref.
-	 *
-	 * start with its membership in the global fence list
-	 */
-	spin_lock_irqsave(&sync_fence_list_lock, flags);
-	list_del(&fence->sync_fence_list);
-	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
 
-	/*
-	 * remove its pts from their parents so that sync_timeline_signal()
-	 * can't reference the fence.
-	 */
-	sync_fence_detach_pts(fence);
+	sync_fence_debug_remove(fence);
 
 	kref_put(&fence->kref, sync_fence_free);
-
 	return 0;
 }
 
 static unsigned int sync_fence_poll(struct file *file, poll_table *wait)
 {
 	struct sync_fence *fence = file->private_data;
+	int status;
 
 	poll_wait(file, &fence->wq, wait);
 
-	/*
-	 * Make sure that reads to fence->status are ordered with the
-	 * wait queue event triggering
-	 */
-	smp_rmb();
+	status = atomic_read(&fence->status);
 
-	if (fence->status == 1)
+	if (!status)
 		return POLLIN;
-	else if (fence->status < 0)
+	else if (status < 0)
 		return POLLERR;
 	else
 		return 0;
@@ -750,7 +618,7 @@ err_put_fd:
 	return err;
 }
 
-static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
+static int sync_fill_pt_info(struct fence *fence, void *data, int size)
 {
 	struct sync_pt_info *info = data;
 	int ret;
@@ -760,20 +628,24 @@ static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
 
 	info->len = sizeof(struct sync_pt_info);
 
-	if (pt->parent->ops->fill_driver_data) {
-		ret = pt->parent->ops->fill_driver_data(pt, info->driver_data,
-							size - sizeof(*info));
+	if (fence->ops->fill_driver_data) {
+		ret = fence->ops->fill_driver_data(fence, info->driver_data,
+						   size - sizeof(*info));
 		if (ret < 0)
 			return ret;
 
 		info->len += ret;
 	}
 
-	strlcpy(info->obj_name, pt->parent->name, sizeof(info->obj_name));
-	strlcpy(info->driver_name, pt->parent->ops->driver_name,
+	strlcpy(info->obj_name, fence->ops->get_timeline_name(fence),
+		sizeof(info->obj_name));
+	strlcpy(info->driver_name, fence->ops->get_driver_name(fence),
 		sizeof(info->driver_name));
-	info->status = pt->status;
-	info->timestamp_ns = ktime_to_ns(pt->timestamp);
+	if (fence_is_signaled(fence))
+		info->status = fence->status >= 0 ? 1 : fence->status;
+	else
+		info->status = 0;
+	info->timestamp_ns = ktime_to_ns(fence->timestamp);
 
 	return info->len;
 }
@@ -782,10 +654,9 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence,
 					unsigned long arg)
 {
 	struct sync_fence_info_data *data;
-	struct list_head *pos;
 	__u32 size;
 	__u32 len = 0;
-	int ret;
+	int ret, i;
 
 	if (copy_from_user(&size, (void __user *)arg, sizeof(size)))
 		return -EFAULT;
@@ -801,12 +672,14 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence,
 		return -ENOMEM;
 
 	strlcpy(data->name, fence->name, sizeof(data->name));
-	data->status = fence->status;
+	data->status = atomic_read(&fence->status);
+	if (data->status >= 0)
+		data->status = !data->status;
+
 	len = sizeof(struct sync_fence_info_data);
 
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, pt_list);
+	for (i = 0; i < fence->num_fences; ++i) {
+		struct fence *pt = fence->cbs[i].sync_pt;
 
 		ret = sync_fill_pt_info(pt, (u8 *)data + len, size - len);
 
@@ -833,7 +706,6 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd,
 			     unsigned long arg)
 {
 	struct sync_fence *fence = file->private_data;
-
 	switch (cmd) {
 	case SYNC_IOC_WAIT:
 		return sync_fence_ioctl_wait(fence, arg);
@@ -849,181 +721,10 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd,
 	}
 }
 
-#ifdef CONFIG_DEBUG_FS
-static const char *sync_status_str(int status)
-{
-	if (status > 0)
-		return "signaled";
-	else if (status == 0)
-		return "active";
-	else
-		return "error";
-}
-
-static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence)
-{
-	int status = pt->status;
-
-	seq_printf(s, "  %s%spt %s",
-		   fence ? pt->parent->name : "",
-		   fence ? "_" : "",
-		   sync_status_str(status));
-	if (pt->status) {
-		struct timeval tv = ktime_to_timeval(pt->timestamp);
-
-		seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec);
-	}
-
-	if (pt->parent->ops->timeline_value_str &&
-	    pt->parent->ops->pt_value_str) {
-		char value[64];
-
-		pt->parent->ops->pt_value_str(pt, value, sizeof(value));
-		seq_printf(s, ": %s", value);
-		if (fence) {
-			pt->parent->ops->timeline_value_str(pt->parent, value,
-						    sizeof(value));
-			seq_printf(s, " / %s", value);
-		}
-	} else if (pt->parent->ops->print_pt) {
-		seq_puts(s, ": ");
-		pt->parent->ops->print_pt(s, pt);
-	}
-
-	seq_puts(s, "\n");
-}
-
-static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
-{
-	struct list_head *pos;
-	unsigned long flags;
-
-	seq_printf(s, "%s %s", obj->name, obj->ops->driver_name);
-
-	if (obj->ops->timeline_value_str) {
-		char value[64];
-
-		obj->ops->timeline_value_str(obj, value, sizeof(value));
-		seq_printf(s, ": %s", value);
-	} else if (obj->ops->print_obj) {
-		seq_puts(s, ": ");
-		obj->ops->print_obj(s, obj);
-	}
-
-	seq_puts(s, "\n");
-
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	list_for_each(pos, &obj->child_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, child_list);
-		sync_print_pt(s, pt, false);
-	}
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
-}
-
-static void sync_print_fence(struct seq_file *s, struct sync_fence *fence)
-{
-	struct list_head *pos;
-	unsigned long flags;
-
-	seq_printf(s, "[%p] %s: %s\n", fence, fence->name,
-		   sync_status_str(fence->status));
-
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, pt_list);
-		sync_print_pt(s, pt, true);
-	}
-
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
-	list_for_each(pos, &fence->waiter_list_head) {
-		struct sync_fence_waiter *waiter =
-			container_of(pos, struct sync_fence_waiter,
-				     waiter_list);
-
-		seq_printf(s, "waiter %pF\n", waiter->callback);
-	}
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
-}
-
-static int sync_debugfs_show(struct seq_file *s, void *unused)
-{
-	unsigned long flags;
-	struct list_head *pos;
-
-	seq_puts(s, "objs:\n--------------\n");
-
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
-	list_for_each(pos, &sync_timeline_list_head) {
-		struct sync_timeline *obj =
-			container_of(pos, struct sync_timeline,
-				     sync_timeline_list);
-
-		sync_print_obj(s, obj);
-		seq_puts(s, "\n");
-	}
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
-
-	seq_puts(s, "fences:\n--------------\n");
-
-	spin_lock_irqsave(&sync_fence_list_lock, flags);
-	list_for_each(pos, &sync_fence_list_head) {
-		struct sync_fence *fence =
-			container_of(pos, struct sync_fence, sync_fence_list);
-
-		sync_print_fence(s, fence);
-		seq_puts(s, "\n");
-	}
-	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
-	return 0;
-}
-
-static int sync_debugfs_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, sync_debugfs_show, inode->i_private);
-}
-
-static const struct file_operations sync_debugfs_fops = {
-	.open           = sync_debugfs_open,
-	.read           = seq_read,
-	.llseek         = seq_lseek,
-	.release        = single_release,
+static const struct file_operations sync_fence_fops = {
+	.release = sync_fence_release,
+	.poll = sync_fence_poll,
+	.unlocked_ioctl = sync_fence_ioctl,
+	.compat_ioctl = sync_fence_ioctl,
 };
 
-static __init int sync_debugfs_init(void)
-{
-	debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops);
-	return 0;
-}
-late_initcall(sync_debugfs_init);
-
-#define DUMP_CHUNK 256
-static char sync_dump_buf[64 * 1024];
-static void sync_dump(void)
-{
-	struct seq_file s = {
-		.buf = sync_dump_buf,
-		.size = sizeof(sync_dump_buf) - 1,
-	};
-	int i;
-
-	sync_debugfs_show(&s, NULL);
-
-	for (i = 0; i < s.count; i += DUMP_CHUNK) {
-		if ((s.count - i) > DUMP_CHUNK) {
-			char c = s.buf[i + DUMP_CHUNK];
-
-			s.buf[i + DUMP_CHUNK] = 0;
-			pr_cont("%s", s.buf + i);
-			s.buf[i + DUMP_CHUNK] = c;
-		} else {
-			s.buf[s.count] = 0;
-			pr_cont("%s", s.buf + i);
-		}
-	}
-}
-#else
-static void sync_dump(void)
-{
-}
-#endif
diff --git a/drivers/staging/android/sync.h b/drivers/staging/android/sync.h
index eaf57cccf626..66b0f431f63e 100644
--- a/drivers/staging/android/sync.h
+++ b/drivers/staging/android/sync.h
@@ -19,6 +19,7 @@
 #include <linux/list.h>
 #include <linux/spinlock.h>
 #include <linux/wait.h>
+#include <linux/fence.h>
 
 #include "uapi/sync.h"
 
@@ -40,8 +41,6 @@ struct sync_fence;
  *			 -1 if a will signal before b
  * @free_pt:		called before sync_pt is freed
  * @release_obj:	called before sync_timeline is freed
- * @print_obj:		deprecated
- * @print_pt:		deprecated
  * @fill_driver_data:	write implementation specific driver data to data.
  *			  should return an error if there is not enough room
  *			  as specified by size.  This information is returned
@@ -67,13 +66,6 @@ struct sync_timeline_ops {
 	/* optional */
 	void (*release_obj)(struct sync_timeline *sync_timeline);
 
-	/* deprecated */
-	void (*print_obj)(struct seq_file *s,
-			  struct sync_timeline *sync_timeline);
-
-	/* deprecated */
-	void (*print_pt)(struct seq_file *s, struct sync_pt *sync_pt);
-
 	/* optional */
 	int (*fill_driver_data)(struct sync_pt *syncpt, void *data, int size);
 
@@ -104,19 +96,21 @@ struct sync_timeline {
 
 	/* protected by child_list_lock */
 	bool			destroyed;
+	int			context, value;
 
 	struct list_head	child_list_head;
 	spinlock_t		child_list_lock;
 
 	struct list_head	active_list_head;
-	spinlock_t		active_list_lock;
 
+#ifdef CONFIG_DEBUG_FS
 	struct list_head	sync_timeline_list;
+#endif
 };
 
 /**
  * struct sync_pt - sync point
- * @parent:		sync_timeline to which this sync_pt belongs
+ * @fence:		base fence class
  * @child_list:		membership in sync_timeline.child_list_head
  * @active_list:	membership in sync_timeline.active_list_head
  * @signaled_list:	membership in temporary signaled_list on stack
@@ -127,19 +121,22 @@ struct sync_timeline {
  *			  signaled or error.
  */
 struct sync_pt {
-	struct sync_timeline		*parent;
-	struct list_head	child_list;
+	struct fence base;
 
+	struct list_head	child_list;
 	struct list_head	active_list;
-	struct list_head	signaled_list;
-
-	struct sync_fence	*fence;
-	struct list_head	pt_list;
+};
 
-	/* protected by parent->active_list_lock */
-	int			status;
+static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt)
+{
+	return container_of(pt->base.lock, struct sync_timeline,
+			    child_list_lock);
+}
 
-	ktime_t			timestamp;
+struct sync_fence_cb {
+	struct fence_cb cb;
+	struct fence *sync_pt;
+	struct sync_fence *fence;
 };
 
 /**
@@ -149,9 +146,7 @@ struct sync_pt {
  * @name:		name of sync_fence.  Useful for debugging
  * @pt_list_head:	list of sync_pts in the fence.  immutable once fence
  *			  is created
- * @waiter_list_head:	list of asynchronous waiters on this fence
- * @waiter_list_lock:	lock protecting @waiter_list_head and @status
- * @status:		1: signaled, 0:active, <0: error
* [REPOST PATCH 4/8] android: convert sync to fence api, v5
@ 2014-06-18 10:37   ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	ccross, linux-media

Just to show it's easy.

Android syncpoints can be mapped to a timeline. This removes the need
to maintain a separate API for synchronization. I've left the Android
trace events in place, but the core fence events should already be
sufficient for debugging.
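
The heart of the conversion (a minimal sketch distilled from the sync.h
changes below) is that a sync_pt is now a struct fence plus timeline
bookkeeping, with the timeline recovered from the lock the fence shares
with it:

	struct sync_pt {
		struct fence base;	/* refcount, status, callbacks */
		struct list_head child_list;	/* on the timeline's child list */
		struct list_head active_list;	/* on the timeline's active list */
	};

	static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt)
	{
		/* pt->base.lock is the timeline's child_list_lock */
		return container_of(pt->base.lock, struct sync_timeline,
				    child_list_lock);
	}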

v2:
- Call fence_remove_callback in sync_fence_free if not all fences have fired.
v3:
- Merge Colin Cross' bugfixes, and the android fence merge optimization.
v4:
- Merge with the upstream fixes.
v5:
- Fix small style issues pointed out by Thomas Hellstrom.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: John Stultz <john.stultz@linaro.org>
---
 drivers/staging/android/Kconfig      |    1 
 drivers/staging/android/Makefile     |    2 
 drivers/staging/android/sw_sync.c    |    6 
 drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
 drivers/staging/android/sync.h       |   79 ++-
 drivers/staging/android/sync_debug.c |  247 +++++++++
 drivers/staging/android/trace/sync.h |   12 
 7 files changed, 609 insertions(+), 651 deletions(-)
 create mode 100644 drivers/staging/android/sync_debug.c

diff --git a/drivers/staging/android/Kconfig b/drivers/staging/android/Kconfig
index 99e484f845f2..51607e9aa049 100644
--- a/drivers/staging/android/Kconfig
+++ b/drivers/staging/android/Kconfig
@@ -88,6 +88,7 @@ config SYNC
 	bool "Synchronization framework"
 	default n
 	select ANON_INODES
+	select DMA_SHARED_BUFFER
 	---help---
 	  This option enables the framework for synchronization between multiple
 	  drivers.  Sync implementations can take advantage of hardware
diff --git a/drivers/staging/android/Makefile b/drivers/staging/android/Makefile
index 0a01e1914905..517ad5ffa429 100644
--- a/drivers/staging/android/Makefile
+++ b/drivers/staging/android/Makefile
@@ -9,5 +9,5 @@ obj-$(CONFIG_ANDROID_TIMED_OUTPUT)	+= timed_output.o
 obj-$(CONFIG_ANDROID_TIMED_GPIO)	+= timed_gpio.o
 obj-$(CONFIG_ANDROID_LOW_MEMORY_KILLER)	+= lowmemorykiller.o
 obj-$(CONFIG_ANDROID_INTF_ALARM_DEV)	+= alarm-dev.o
-obj-$(CONFIG_SYNC)			+= sync.o
+obj-$(CONFIG_SYNC)			+= sync.o sync_debug.o
 obj-$(CONFIG_SW_SYNC)			+= sw_sync.o
diff --git a/drivers/staging/android/sw_sync.c b/drivers/staging/android/sw_sync.c
index 12a136ec1cec..a76db3ff87cb 100644
--- a/drivers/staging/android/sw_sync.c
+++ b/drivers/staging/android/sw_sync.c
@@ -50,7 +50,7 @@ static struct sync_pt *sw_sync_pt_dup(struct sync_pt *sync_pt)
 {
 	struct sw_sync_pt *pt = (struct sw_sync_pt *) sync_pt;
 	struct sw_sync_timeline *obj =
-		(struct sw_sync_timeline *)sync_pt->parent;
+		(struct sw_sync_timeline *)sync_pt_parent(sync_pt);
 
 	return (struct sync_pt *) sw_sync_pt_create(obj, pt->value);
 }
@@ -59,7 +59,7 @@ static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt)
 {
 	struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
 	struct sw_sync_timeline *obj =
-		(struct sw_sync_timeline *)sync_pt->parent;
+		(struct sw_sync_timeline *)sync_pt_parent(sync_pt);
 
 	return sw_sync_cmp(obj->value, pt->value) >= 0;
 }
@@ -97,7 +97,6 @@ static void sw_sync_pt_value_str(struct sync_pt *sync_pt,
 				       char *str, int size)
 {
 	struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
-
 	snprintf(str, size, "%d", pt->value);
 }
 
@@ -157,7 +156,6 @@ static int sw_sync_open(struct inode *inode, struct file *file)
 static int sw_sync_release(struct inode *inode, struct file *file)
 {
 	struct sw_sync_timeline *obj = file->private_data;
-
 	sync_timeline_destroy(&obj->obj);
 	return 0;
 }
diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
index 18174f7c871c..70b09b5001ba 100644
--- a/drivers/staging/android/sync.c
+++ b/drivers/staging/android/sync.c
@@ -31,22 +31,13 @@
 #define CREATE_TRACE_POINTS
 #include "trace/sync.h"
 
-static void sync_fence_signal_pt(struct sync_pt *pt);
-static int _sync_pt_has_signaled(struct sync_pt *pt);
-static void sync_fence_free(struct kref *kref);
-static void sync_dump(void);
-
-static LIST_HEAD(sync_timeline_list_head);
-static DEFINE_SPINLOCK(sync_timeline_list_lock);
-
-static LIST_HEAD(sync_fence_list_head);
-static DEFINE_SPINLOCK(sync_fence_list_lock);
+static const struct fence_ops android_fence_ops;
+static const struct file_operations sync_fence_fops;
 
 struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
 					   int size, const char *name)
 {
 	struct sync_timeline *obj;
-	unsigned long flags;
 
 	if (size < sizeof(struct sync_timeline))
 		return NULL;
@@ -57,17 +48,14 @@ struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
 
 	kref_init(&obj->kref);
 	obj->ops = ops;
+	obj->context = fence_context_alloc(1);
 	strlcpy(obj->name, name, sizeof(obj->name));
 
 	INIT_LIST_HEAD(&obj->child_list_head);
-	spin_lock_init(&obj->child_list_lock);
-
 	INIT_LIST_HEAD(&obj->active_list_head);
-	spin_lock_init(&obj->active_list_lock);
+	spin_lock_init(&obj->child_list_lock);
 
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
-	list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head);
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+	sync_timeline_debug_add(obj);
 
 	return obj;
 }
@@ -77,11 +65,8 @@ static void sync_timeline_free(struct kref *kref)
 {
 	struct sync_timeline *obj =
 		container_of(kref, struct sync_timeline, kref);
-	unsigned long flags;
 
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
-	list_del(&obj->sync_timeline_list);
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+	sync_timeline_debug_remove(obj);
 
 	if (obj->ops->release_obj)
 		obj->ops->release_obj(obj);
@@ -89,6 +74,16 @@ static void sync_timeline_free(struct kref *kref)
 	kfree(obj);
 }
 
+static void sync_timeline_get(struct sync_timeline *obj)
+{
+	kref_get(&obj->kref);
+}
+
+static void sync_timeline_put(struct sync_timeline *obj)
+{
+	kref_put(&obj->kref, sync_timeline_free);
+}
+
 void sync_timeline_destroy(struct sync_timeline *obj)
 {
 	obj->destroyed = true;
@@ -102,75 +97,33 @@ void sync_timeline_destroy(struct sync_timeline *obj)
 	 * signal any children that their parent is going away.
 	 */
 	sync_timeline_signal(obj);
-
-	kref_put(&obj->kref, sync_timeline_free);
+	sync_timeline_put(obj);
 }
 EXPORT_SYMBOL(sync_timeline_destroy);
 
-static void sync_timeline_add_pt(struct sync_timeline *obj, struct sync_pt *pt)
-{
-	unsigned long flags;
-
-	pt->parent = obj;
-
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	list_add_tail(&pt->child_list, &obj->child_list_head);
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
-}
-
-static void sync_timeline_remove_pt(struct sync_pt *pt)
-{
-	struct sync_timeline *obj = pt->parent;
-	unsigned long flags;
-
-	spin_lock_irqsave(&obj->active_list_lock, flags);
-	if (!list_empty(&pt->active_list))
-		list_del_init(&pt->active_list);
-	spin_unlock_irqrestore(&obj->active_list_lock, flags);
-
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	if (!list_empty(&pt->child_list))
-		list_del_init(&pt->child_list);
-
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
-}
-
 void sync_timeline_signal(struct sync_timeline *obj)
 {
 	unsigned long flags;
 	LIST_HEAD(signaled_pts);
-	struct list_head *pos, *n;
+	struct sync_pt *pt, *next;
 
 	trace_sync_timeline(obj);
 
-	spin_lock_irqsave(&obj->active_list_lock, flags);
-
-	list_for_each_safe(pos, n, &obj->active_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, active_list);
+	spin_lock_irqsave(&obj->child_list_lock, flags);
 
-		if (_sync_pt_has_signaled(pt)) {
-			list_del_init(pos);
-			list_add(&pt->signaled_list, &signaled_pts);
-			kref_get(&pt->fence->kref);
-		}
+	list_for_each_entry_safe(pt, next, &obj->active_list_head,
+				 active_list) {
+		if (__fence_is_signaled(&pt->base))
+			list_del(&pt->active_list);
 	}
 
-	spin_unlock_irqrestore(&obj->active_list_lock, flags);
-
-	list_for_each_safe(pos, n, &signaled_pts) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, signaled_list);
-
-		list_del_init(pos);
-		sync_fence_signal_pt(pt);
-		kref_put(&pt->fence->kref, sync_fence_free);
-	}
+	spin_unlock_irqrestore(&obj->child_list_lock, flags);
 }
 EXPORT_SYMBOL(sync_timeline_signal);
 
-struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
+struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size)
 {
+	unsigned long flags;
 	struct sync_pt *pt;
 
 	if (size < sizeof(struct sync_pt))
@@ -180,87 +133,28 @@ struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
 	if (pt == NULL)
 		return NULL;
 
+	spin_lock_irqsave(&obj->child_list_lock, flags);
+	sync_timeline_get(obj);
+	__fence_init(&pt->base, &android_fence_ops, &obj->child_list_lock,
+		     obj->context, ++obj->value);
+	list_add_tail(&pt->child_list, &obj->child_list_head);
 	INIT_LIST_HEAD(&pt->active_list);
-	kref_get(&parent->kref);
-	sync_timeline_add_pt(parent, pt);
-
+	spin_unlock_irqrestore(&obj->child_list_lock, flags);
 	return pt;
 }
 EXPORT_SYMBOL(sync_pt_create);
 
 void sync_pt_free(struct sync_pt *pt)
 {
-	if (pt->parent->ops->free_pt)
-		pt->parent->ops->free_pt(pt);
-
-	sync_timeline_remove_pt(pt);
-
-	kref_put(&pt->parent->kref, sync_timeline_free);
-
-	kfree(pt);
+	fence_put(&pt->base);
 }
 EXPORT_SYMBOL(sync_pt_free);
 
-/* call with pt->parent->active_list_lock held */
-static int _sync_pt_has_signaled(struct sync_pt *pt)
-{
-	int old_status = pt->status;
-
-	if (!pt->status)
-		pt->status = pt->parent->ops->has_signaled(pt);
-
-	if (!pt->status && pt->parent->destroyed)
-		pt->status = -ENOENT;
-
-	if (pt->status != old_status)
-		pt->timestamp = ktime_get();
-
-	return pt->status;
-}
-
-static struct sync_pt *sync_pt_dup(struct sync_pt *pt)
-{
-	return pt->parent->ops->dup(pt);
-}
-
-/* Adds a sync pt to the active queue.  Called when added to a fence */
-static void sync_pt_activate(struct sync_pt *pt)
-{
-	struct sync_timeline *obj = pt->parent;
-	unsigned long flags;
-	int err;
-
-	spin_lock_irqsave(&obj->active_list_lock, flags);
-
-	err = _sync_pt_has_signaled(pt);
-	if (err != 0)
-		goto out;
-
-	list_add_tail(&pt->active_list, &obj->active_list_head);
-
-out:
-	spin_unlock_irqrestore(&obj->active_list_lock, flags);
-}
-
-static int sync_fence_release(struct inode *inode, struct file *file);
-static unsigned int sync_fence_poll(struct file *file, poll_table *wait);
-static long sync_fence_ioctl(struct file *file, unsigned int cmd,
-			     unsigned long arg);
-
-
-static const struct file_operations sync_fence_fops = {
-	.release = sync_fence_release,
-	.poll = sync_fence_poll,
-	.unlocked_ioctl = sync_fence_ioctl,
-	.compat_ioctl = sync_fence_ioctl,
-};
-
-static struct sync_fence *sync_fence_alloc(const char *name)
+static struct sync_fence *sync_fence_alloc(int size, const char *name)
 {
 	struct sync_fence *fence;
-	unsigned long flags;
 
-	fence = kzalloc(sizeof(struct sync_fence), GFP_KERNEL);
+	fence = kzalloc(size, GFP_KERNEL);
 	if (fence == NULL)
 		return NULL;
 
@@ -272,16 +166,8 @@ static struct sync_fence *sync_fence_alloc(const char *name)
 	kref_init(&fence->kref);
 	strlcpy(fence->name, name, sizeof(fence->name));
 
-	INIT_LIST_HEAD(&fence->pt_list_head);
-	INIT_LIST_HEAD(&fence->waiter_list_head);
-	spin_lock_init(&fence->waiter_list_lock);
-
 	init_waitqueue_head(&fence->wq);
 
-	spin_lock_irqsave(&sync_fence_list_lock, flags);
-	list_add_tail(&fence->sync_fence_list, &sync_fence_list_head);
-	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
-
 	return fence;
 
 err:
@@ -289,120 +175,42 @@ err:
 	return NULL;
 }
 
-/* TODO: implement a create which takes more than one sync_pt */
-struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
+static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
 {
+	struct sync_fence_cb *check;
 	struct sync_fence *fence;
 
-	if (pt->fence)
-		return NULL;
-
-	fence = sync_fence_alloc(name);
-	if (fence == NULL)
-		return NULL;
-
-	pt->fence = fence;
-	list_add(&pt->pt_list, &fence->pt_list_head);
-	sync_pt_activate(pt);
-
-	/*
-	 * signal the fence in case pt was activated before
-	 * sync_pt_activate(pt) was called
-	 */
-	sync_fence_signal_pt(pt);
+	check = container_of(cb, struct sync_fence_cb, cb);
+	fence = check->fence;
 
-	return fence;
+	if (atomic_dec_and_test(&fence->status))
+		wake_up_all(&fence->wq);
 }
-EXPORT_SYMBOL(sync_fence_create);
-
-static int sync_fence_copy_pts(struct sync_fence *dst, struct sync_fence *src)
-{
-	struct list_head *pos;
-
-	list_for_each(pos, &src->pt_list_head) {
-		struct sync_pt *orig_pt =
-			container_of(pos, struct sync_pt, pt_list);
-		struct sync_pt *new_pt = sync_pt_dup(orig_pt);
 
-		if (new_pt == NULL)
-			return -ENOMEM;
-
-		new_pt->fence = dst;
-		list_add(&new_pt->pt_list, &dst->pt_list_head);
-	}
-
-	return 0;
-}
-
-static int sync_fence_merge_pts(struct sync_fence *dst, struct sync_fence *src)
-{
-	struct list_head *src_pos, *dst_pos, *n;
-
-	list_for_each(src_pos, &src->pt_list_head) {
-		struct sync_pt *src_pt =
-			container_of(src_pos, struct sync_pt, pt_list);
-		bool collapsed = false;
-
-		list_for_each_safe(dst_pos, n, &dst->pt_list_head) {
-			struct sync_pt *dst_pt =
-				container_of(dst_pos, struct sync_pt, pt_list);
-			/* collapse two sync_pts on the same timeline
-			 * to a single sync_pt that will signal at
-			 * the later of the two
-			 */
-			if (dst_pt->parent == src_pt->parent) {
-				if (dst_pt->parent->ops->compare(dst_pt, src_pt)
-						 == -1) {
-					struct sync_pt *new_pt =
-						sync_pt_dup(src_pt);
-					if (new_pt == NULL)
-						return -ENOMEM;
-
-					new_pt->fence = dst;
-					list_replace(&dst_pt->pt_list,
-						     &new_pt->pt_list);
-					sync_pt_free(dst_pt);
-				}
-				collapsed = true;
-				break;
-			}
-		}
-
-		if (!collapsed) {
-			struct sync_pt *new_pt = sync_pt_dup(src_pt);
-
-			if (new_pt == NULL)
-				return -ENOMEM;
-
-			new_pt->fence = dst;
-			list_add(&new_pt->pt_list, &dst->pt_list_head);
-		}
-	}
-
-	return 0;
-}
-
-static void sync_fence_detach_pts(struct sync_fence *fence)
+/* TODO: implement a create which takes more than one sync_pt */
+struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
 {
-	struct list_head *pos, *n;
+	struct sync_fence *fence;
 
-	list_for_each_safe(pos, n, &fence->pt_list_head) {
-		struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
+	fence = sync_fence_alloc(offsetof(struct sync_fence, cbs[1]), name);
+	if (fence == NULL)
+		return NULL;
 
-		sync_timeline_remove_pt(pt);
-	}
-}
+	fence->num_fences = 1;
+	atomic_set(&fence->status, 1);
 
-static void sync_fence_free_pts(struct sync_fence *fence)
-{
-	struct list_head *pos, *n;
+	fence_get(&pt->base);
+	fence->cbs[0].sync_pt = &pt->base;
+	fence->cbs[0].fence = fence;
+	if (fence_add_callback(&pt->base, &fence->cbs[0].cb,
+			       fence_check_cb_func))
+		atomic_dec(&fence->status);
 
-	list_for_each_safe(pos, n, &fence->pt_list_head) {
-		struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
+	sync_fence_debug_add(fence);
 
-		sync_pt_free(pt);
-	}
+	return fence;
 }
+EXPORT_SYMBOL(sync_fence_create);
 
 struct sync_fence *sync_fence_fdget(int fd)
 {
@@ -434,197 +242,155 @@ void sync_fence_install(struct sync_fence *fence, int fd)
 }
 EXPORT_SYMBOL(sync_fence_install);
 
-static int sync_fence_get_status(struct sync_fence *fence)
+static void sync_fence_add_pt(struct sync_fence *fence,
+			      int *i, struct fence *pt)
 {
-	struct list_head *pos;
-	int status = 1;
-
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
-		int pt_status = pt->status;
-
-		if (pt_status < 0) {
-			status = pt_status;
-			break;
-		} else if (status == 1) {
-			status = pt_status;
-		}
-	}
+	fence->cbs[*i].sync_pt = pt;
+	fence->cbs[*i].fence = fence;
 
-	return status;
+	if (!fence_add_callback(pt, &fence->cbs[*i].cb, fence_check_cb_func)) {
+		fence_get(pt);
+		(*i)++;
+	}
 }
 
 struct sync_fence *sync_fence_merge(const char *name,
 				    struct sync_fence *a, struct sync_fence *b)
 {
+	int num_fences = a->num_fences + b->num_fences;
 	struct sync_fence *fence;
-	struct list_head *pos;
-	int err;
+	int i, i_a, i_b;
+	unsigned long size = offsetof(struct sync_fence, cbs[num_fences]);
 
-	fence = sync_fence_alloc(name);
+	fence = sync_fence_alloc(size, name);
 	if (fence == NULL)
 		return NULL;
 
-	err = sync_fence_copy_pts(fence, a);
-	if (err < 0)
-		goto err;
+	atomic_set(&fence->status, num_fences);
 
-	err = sync_fence_merge_pts(fence, b);
-	if (err < 0)
-		goto err;
+	/*
+	 * Assume sync_fence a and b are both ordered and have no
+	 * duplicates with the same context.
+	 *
+	 * If a sync_fence can only be created with sync_fence_merge
+	 * and sync_fence_create, this is a reasonable assumption.
+	 */
+	for (i = i_a = i_b = 0; i_a < a->num_fences && i_b < b->num_fences; ) {
+		struct fence *pt_a = a->cbs[i_a].sync_pt;
+		struct fence *pt_b = b->cbs[i_b].sync_pt;
+
+		if (pt_a->context < pt_b->context) {
+			sync_fence_add_pt(fence, &i, pt_a);
+
+			i_a++;
+		} else if (pt_a->context > pt_b->context) {
+			sync_fence_add_pt(fence, &i, pt_b);
 
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, pt_list);
-		sync_pt_activate(pt);
+			i_b++;
+		} else {
+			if (pt_a->seqno - pt_b->seqno <= INT_MAX)
+				sync_fence_add_pt(fence, &i, pt_a);
+			else
+				sync_fence_add_pt(fence, &i, pt_b);
+
+			i_a++;
+			i_b++;
+		}
 	}
 
-	/*
-	 * signal the fence in case one of its pts was signaled before
-	 * it was activated
-	 */
-	sync_fence_signal_pt(list_first_entry(&fence->pt_list_head,
-					      struct sync_pt,
-					      pt_list));
+	for (; i_a < a->num_fences; i_a++)
+		sync_fence_add_pt(fence, &i, a->cbs[i_a].sync_pt);
 
+	for (; i_b < b->num_fences; i_b++)
+		sync_fence_add_pt(fence, &i, b->cbs[i_b].sync_pt);
+
+	if (num_fences > i)
+		atomic_sub(num_fences - i, &fence->status);
+	fence->num_fences = i;
+
+	sync_fence_debug_add(fence);
 	return fence;
-err:
-	sync_fence_free_pts(fence);
-	kfree(fence);
-	return NULL;
 }
 EXPORT_SYMBOL(sync_fence_merge);
 
-static void sync_fence_signal_pt(struct sync_pt *pt)
+int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode,
+				 int wake_flags, void *key)
 {
-	LIST_HEAD(signaled_waiters);
-	struct sync_fence *fence = pt->fence;
-	struct list_head *pos;
-	struct list_head *n;
-	unsigned long flags;
-	int status;
-
-	status = sync_fence_get_status(fence);
-
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
-	/*
-	 * this should protect against two threads racing on the signaled
-	 * false -> true transition
-	 */
-	if (status && !fence->status) {
-		list_for_each_safe(pos, n, &fence->waiter_list_head)
-			list_move(pos, &signaled_waiters);
-
-		fence->status = status;
-	} else {
-		status = 0;
-	}
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
+	struct sync_fence_waiter *wait;
 
-	if (status) {
-		list_for_each_safe(pos, n, &signaled_waiters) {
-			struct sync_fence_waiter *waiter =
-				container_of(pos, struct sync_fence_waiter,
-					     waiter_list);
+	wait = container_of(curr, struct sync_fence_waiter, work);
+	list_del_init(&wait->work.task_list);
 
-			list_del(pos);
-			waiter->callback(fence, waiter);
-		}
-		wake_up(&fence->wq);
-	}
+	wait->callback(wait->work.private, wait);
+	return 1;
 }
 
 int sync_fence_wait_async(struct sync_fence *fence,
 			  struct sync_fence_waiter *waiter)
 {
+	int err = atomic_read(&fence->status);
 	unsigned long flags;
-	int err = 0;
 
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
+	if (err < 0)
+		return err;
 
-	if (fence->status) {
-		err = fence->status;
-		goto out;
-	}
+	if (!err)
+		return 1;
 
-	list_add_tail(&waiter->waiter_list, &fence->waiter_list_head);
-out:
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
+	init_waitqueue_func_entry(&waiter->work, sync_fence_wake_up_wq);
+	waiter->work.private = fence;
 
-	return err;
+	spin_lock_irqsave(&fence->wq.lock, flags);
+	err = atomic_read(&fence->status);
+	if (err > 0)
+		__add_wait_queue_tail(&fence->wq, &waiter->work);
+	spin_unlock_irqrestore(&fence->wq.lock, flags);
+
+	if (err < 0)
+		return err;
+
+	return !err;
 }
 EXPORT_SYMBOL(sync_fence_wait_async);
 
 int sync_fence_cancel_async(struct sync_fence *fence,
 			     struct sync_fence_waiter *waiter)
 {
-	struct list_head *pos;
-	struct list_head *n;
 	unsigned long flags;
-	int ret = -ENOENT;
+	int ret = 0;
 
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
-	/*
-	 * Make sure waiter is still in waiter_list because it is possible for
-	 * the waiter to be removed from the list while the callback is still
-	 * pending.
-	 */
-	list_for_each_safe(pos, n, &fence->waiter_list_head) {
-		struct sync_fence_waiter *list_waiter =
-			container_of(pos, struct sync_fence_waiter,
-				     waiter_list);
-		if (list_waiter == waiter) {
-			list_del(pos);
-			ret = 0;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
+	spin_lock_irqsave(&fence->wq.lock, flags);
+	if (!list_empty(&waiter->work.task_list))
+		list_del_init(&waiter->work.task_list);
+	else
+		ret = -ENOENT;
+	spin_unlock_irqrestore(&fence->wq.lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(sync_fence_cancel_async);
 
-static bool sync_fence_check(struct sync_fence *fence)
-{
-	/*
-	 * Make sure that reads to fence->status are ordered with the
-	 * wait queue event triggering
-	 */
-	smp_rmb();
-	return fence->status != 0;
-}
-
 int sync_fence_wait(struct sync_fence *fence, long timeout)
 {
-	int err = 0;
-	struct sync_pt *pt;
-
-	trace_sync_wait(fence, 1);
-	list_for_each_entry(pt, &fence->pt_list_head, pt_list)
-		trace_sync_pt(pt);
+	long ret;
+	int i;
 
-	if (timeout > 0) {
+	if (timeout < 0)
+		timeout = MAX_SCHEDULE_TIMEOUT;
+	else
 		timeout = msecs_to_jiffies(timeout);
-		err = wait_event_interruptible_timeout(fence->wq,
-						       sync_fence_check(fence),
-						       timeout);
-	} else if (timeout < 0) {
-		err = wait_event_interruptible(fence->wq,
-					       sync_fence_check(fence));
-	}
-	trace_sync_wait(fence, 0);
 
-	if (err < 0)
-		return err;
-
-	if (fence->status < 0) {
-		pr_info("fence error %d on [%p]\n", fence->status, fence);
-		sync_dump();
-		return fence->status;
-	}
+	trace_sync_wait(fence, 1);
+	for (i = 0; i < fence->num_fences; ++i)
+		trace_sync_pt(fence->cbs[i].sync_pt);
+	ret = wait_event_interruptible_timeout(fence->wq,
+					       atomic_read(&fence->status) <= 0,
+					       timeout);
+	trace_sync_wait(fence, 0);
 
-	if (fence->status == 0) {
-		if (timeout > 0) {
+	if (ret < 0)
+		return ret;
+	else if (ret == 0) {
+		if (timeout) {
 			pr_info("fence timeout on [%p] after %dms\n", fence,
 				jiffies_to_msecs(timeout));
 			sync_dump();
@@ -632,15 +398,136 @@ int sync_fence_wait(struct sync_fence *fence, long timeout)
 		return -ETIME;
 	}
 
-	return 0;
+	ret = atomic_read(&fence->status);
+	if (ret) {
+		pr_info("fence error %ld on [%p]\n", ret, fence);
+		sync_dump();
+	}
+	return ret;
 }
 EXPORT_SYMBOL(sync_fence_wait);
 
+static const char *android_fence_get_driver_name(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	return parent->ops->driver_name;
+}
+
+static const char *android_fence_get_timeline_name(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	return parent->name;
+}
+
+static void android_fence_release(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+	unsigned long flags;
+
+	spin_lock_irqsave(fence->lock, flags);
+	list_del(&pt->child_list);
+	if (WARN_ON_ONCE(!list_empty(&pt->active_list)))
+		list_del(&pt->active_list);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	if (parent->ops->free_pt)
+		parent->ops->free_pt(pt);
+
+	sync_timeline_put(parent);
+	kfree(pt);
+}
+
+static bool android_fence_signaled(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+	int ret;
+
+	ret = parent->ops->has_signaled(pt);
+	if (ret < 0)
+		fence->status = ret;
+	return ret;
+}
+
+static bool android_fence_enable_signaling(struct fence *fence)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (android_fence_signaled(fence))
+		return false;
+
+	list_add_tail(&pt->active_list, &parent->active_list_head);
+	return true;
+}
+
+static int android_fence_fill_driver_data(struct fence *fence,
+					  void *data, int size)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (!parent->ops->fill_driver_data)
+		return 0;
+	return parent->ops->fill_driver_data(pt, data, size);
+}
+
+static void android_fence_value_str(struct fence *fence,
+				    char *str, int size)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (!parent->ops->pt_value_str) {
+		if (size)
+			*str = 0;
+		return;
+	}
+	parent->ops->pt_value_str(pt, str, size);
+}
+
+static void android_fence_timeline_value_str(struct fence *fence,
+					     char *str, int size)
+{
+	struct sync_pt *pt = container_of(fence, struct sync_pt, base);
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (!parent->ops->timeline_value_str) {
+		if (size)
+			*str = 0;
+		return;
+	}
+	parent->ops->timeline_value_str(parent, str, size);
+}
+
+static const struct fence_ops android_fence_ops = {
+	.get_driver_name = android_fence_get_driver_name,
+	.get_timeline_name = android_fence_get_timeline_name,
+	.enable_signaling = android_fence_enable_signaling,
+	.signaled = android_fence_signaled,
+	.wait = fence_default_wait,
+	.release = android_fence_release,
+	.fill_driver_data = android_fence_fill_driver_data,
+	.fence_value_str = android_fence_value_str,
+	.timeline_value_str = android_fence_timeline_value_str,
+};
+
 static void sync_fence_free(struct kref *kref)
 {
 	struct sync_fence *fence = container_of(kref, struct sync_fence, kref);
+	int i, status = atomic_read(&fence->status);
 
-	sync_fence_free_pts(fence);
+	for (i = 0; i < fence->num_fences; ++i) {
+		if (status)
+			fence_remove_callback(fence->cbs[i].sync_pt,
+					      &fence->cbs[i].cb);
+		fence_put(fence->cbs[i].sync_pt);
+	}
 
 	kfree(fence);
 }
@@ -648,44 +535,25 @@ static void sync_fence_free(struct kref *kref)
 static int sync_fence_release(struct inode *inode, struct file *file)
 {
 	struct sync_fence *fence = file->private_data;
-	unsigned long flags;
-
-	/*
-	 * We need to remove all ways to access this fence before dropping
-	 * our ref.
-	 *
-	 * start with its membership in the global fence list
-	 */
-	spin_lock_irqsave(&sync_fence_list_lock, flags);
-	list_del(&fence->sync_fence_list);
-	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
 
-	/*
-	 * remove its pts from their parents so that sync_timeline_signal()
-	 * can't reference the fence.
-	 */
-	sync_fence_detach_pts(fence);
+	sync_fence_debug_remove(fence);
 
 	kref_put(&fence->kref, sync_fence_free);
-
 	return 0;
 }
 
 static unsigned int sync_fence_poll(struct file *file, poll_table *wait)
 {
 	struct sync_fence *fence = file->private_data;
+	int status;
 
 	poll_wait(file, &fence->wq, wait);
 
-	/*
-	 * Make sure that reads to fence->status are ordered with the
-	 * wait queue event triggering
-	 */
-	smp_rmb();
+	status = atomic_read(&fence->status);
 
-	if (fence->status == 1)
+	if (!status)
 		return POLLIN;
-	else if (fence->status < 0)
+	else if (status < 0)
 		return POLLERR;
 	else
 		return 0;
@@ -750,7 +618,7 @@ err_put_fd:
 	return err;
 }
 
-static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
+static int sync_fill_pt_info(struct fence *fence, void *data, int size)
 {
 	struct sync_pt_info *info = data;
 	int ret;
@@ -760,20 +628,24 @@ static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
 
 	info->len = sizeof(struct sync_pt_info);
 
-	if (pt->parent->ops->fill_driver_data) {
-		ret = pt->parent->ops->fill_driver_data(pt, info->driver_data,
-							size - sizeof(*info));
+	if (fence->ops->fill_driver_data) {
+		ret = fence->ops->fill_driver_data(fence, info->driver_data,
+						   size - sizeof(*info));
 		if (ret < 0)
 			return ret;
 
 		info->len += ret;
 	}
 
-	strlcpy(info->obj_name, pt->parent->name, sizeof(info->obj_name));
-	strlcpy(info->driver_name, pt->parent->ops->driver_name,
+	strlcpy(info->obj_name, fence->ops->get_timeline_name(fence),
+		sizeof(info->obj_name));
+	strlcpy(info->driver_name, fence->ops->get_driver_name(fence),
 		sizeof(info->driver_name));
-	info->status = pt->status;
-	info->timestamp_ns = ktime_to_ns(pt->timestamp);
+	if (fence_is_signaled(fence))
+		info->status = fence->status >= 0 ? 1 : fence->status;
+	else
+		info->status = 0;
+	info->timestamp_ns = ktime_to_ns(fence->timestamp);
 
 	return info->len;
 }
@@ -782,10 +654,9 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence,
 					unsigned long arg)
 {
 	struct sync_fence_info_data *data;
-	struct list_head *pos;
 	__u32 size;
 	__u32 len = 0;
-	int ret;
+	int ret, i;
 
 	if (copy_from_user(&size, (void __user *)arg, sizeof(size)))
 		return -EFAULT;
@@ -801,12 +672,14 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence,
 		return -ENOMEM;
 
 	strlcpy(data->name, fence->name, sizeof(data->name));
-	data->status = fence->status;
+	data->status = atomic_read(&fence->status);
+	if (data->status >= 0)
+		data->status = !data->status;
+
 	len = sizeof(struct sync_fence_info_data);
 
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, pt_list);
+	for (i = 0; i < fence->num_fences; ++i) {
+		struct fence *pt = fence->cbs[i].sync_pt;
 
 		ret = sync_fill_pt_info(pt, (u8 *)data + len, size - len);
 
@@ -833,7 +706,6 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd,
 			     unsigned long arg)
 {
 	struct sync_fence *fence = file->private_data;
-
 	switch (cmd) {
 	case SYNC_IOC_WAIT:
 		return sync_fence_ioctl_wait(fence, arg);
@@ -849,181 +721,10 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd,
 	}
 }
 
-#ifdef CONFIG_DEBUG_FS
-static const char *sync_status_str(int status)
-{
-	if (status > 0)
-		return "signaled";
-	else if (status == 0)
-		return "active";
-	else
-		return "error";
-}
-
-static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence)
-{
-	int status = pt->status;
-
-	seq_printf(s, "  %s%spt %s",
-		   fence ? pt->parent->name : "",
-		   fence ? "_" : "",
-		   sync_status_str(status));
-	if (pt->status) {
-		struct timeval tv = ktime_to_timeval(pt->timestamp);
-
-		seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec);
-	}
-
-	if (pt->parent->ops->timeline_value_str &&
-	    pt->parent->ops->pt_value_str) {
-		char value[64];
-
-		pt->parent->ops->pt_value_str(pt, value, sizeof(value));
-		seq_printf(s, ": %s", value);
-		if (fence) {
-			pt->parent->ops->timeline_value_str(pt->parent, value,
-						    sizeof(value));
-			seq_printf(s, " / %s", value);
-		}
-	} else if (pt->parent->ops->print_pt) {
-		seq_puts(s, ": ");
-		pt->parent->ops->print_pt(s, pt);
-	}
-
-	seq_puts(s, "\n");
-}
-
-static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
-{
-	struct list_head *pos;
-	unsigned long flags;
-
-	seq_printf(s, "%s %s", obj->name, obj->ops->driver_name);
-
-	if (obj->ops->timeline_value_str) {
-		char value[64];
-
-		obj->ops->timeline_value_str(obj, value, sizeof(value));
-		seq_printf(s, ": %s", value);
-	} else if (obj->ops->print_obj) {
-		seq_puts(s, ": ");
-		obj->ops->print_obj(s, obj);
-	}
-
-	seq_puts(s, "\n");
-
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	list_for_each(pos, &obj->child_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, child_list);
-		sync_print_pt(s, pt, false);
-	}
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
-}
-
-static void sync_print_fence(struct seq_file *s, struct sync_fence *fence)
-{
-	struct list_head *pos;
-	unsigned long flags;
-
-	seq_printf(s, "[%p] %s: %s\n", fence, fence->name,
-		   sync_status_str(fence->status));
-
-	list_for_each(pos, &fence->pt_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, pt_list);
-		sync_print_pt(s, pt, true);
-	}
-
-	spin_lock_irqsave(&fence->waiter_list_lock, flags);
-	list_for_each(pos, &fence->waiter_list_head) {
-		struct sync_fence_waiter *waiter =
-			container_of(pos, struct sync_fence_waiter,
-				     waiter_list);
-
-		seq_printf(s, "waiter %pF\n", waiter->callback);
-	}
-	spin_unlock_irqrestore(&fence->waiter_list_lock, flags);
-}
-
-static int sync_debugfs_show(struct seq_file *s, void *unused)
-{
-	unsigned long flags;
-	struct list_head *pos;
-
-	seq_puts(s, "objs:\n--------------\n");
-
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
-	list_for_each(pos, &sync_timeline_list_head) {
-		struct sync_timeline *obj =
-			container_of(pos, struct sync_timeline,
-				     sync_timeline_list);
-
-		sync_print_obj(s, obj);
-		seq_puts(s, "\n");
-	}
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
-
-	seq_puts(s, "fences:\n--------------\n");
-
-	spin_lock_irqsave(&sync_fence_list_lock, flags);
-	list_for_each(pos, &sync_fence_list_head) {
-		struct sync_fence *fence =
-			container_of(pos, struct sync_fence, sync_fence_list);
-
-		sync_print_fence(s, fence);
-		seq_puts(s, "\n");
-	}
-	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
-	return 0;
-}
-
-static int sync_debugfs_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, sync_debugfs_show, inode->i_private);
-}
-
-static const struct file_operations sync_debugfs_fops = {
-	.open           = sync_debugfs_open,
-	.read           = seq_read,
-	.llseek         = seq_lseek,
-	.release        = single_release,
+static const struct file_operations sync_fence_fops = {
+	.release = sync_fence_release,
+	.poll = sync_fence_poll,
+	.unlocked_ioctl = sync_fence_ioctl,
+	.compat_ioctl = sync_fence_ioctl,
 };
 
-static __init int sync_debugfs_init(void)
-{
-	debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops);
-	return 0;
-}
-late_initcall(sync_debugfs_init);
-
-#define DUMP_CHUNK 256
-static char sync_dump_buf[64 * 1024];
-static void sync_dump(void)
-{
-	struct seq_file s = {
-		.buf = sync_dump_buf,
-		.size = sizeof(sync_dump_buf) - 1,
-	};
-	int i;
-
-	sync_debugfs_show(&s, NULL);
-
-	for (i = 0; i < s.count; i += DUMP_CHUNK) {
-		if ((s.count - i) > DUMP_CHUNK) {
-			char c = s.buf[i + DUMP_CHUNK];
-
-			s.buf[i + DUMP_CHUNK] = 0;
-			pr_cont("%s", s.buf + i);
-			s.buf[i + DUMP_CHUNK] = c;
-		} else {
-			s.buf[s.count] = 0;
-			pr_cont("%s", s.buf + i);
-		}
-	}
-}
-#else
-static void sync_dump(void)
-{
-}
-#endif
diff --git a/drivers/staging/android/sync.h b/drivers/staging/android/sync.h
index eaf57cccf626..66b0f431f63e 100644
--- a/drivers/staging/android/sync.h
+++ b/drivers/staging/android/sync.h
@@ -19,6 +19,7 @@
 #include <linux/list.h>
 #include <linux/spinlock.h>
 #include <linux/wait.h>
+#include <linux/fence.h>
 
 #include "uapi/sync.h"
 
@@ -40,8 +41,6 @@ struct sync_fence;
  *			 -1 if a will signal before b
  * @free_pt:		called before sync_pt is freed
  * @release_obj:	called before sync_timeline is freed
- * @print_obj:		deprecated
- * @print_pt:		deprecated
  * @fill_driver_data:	write implementation specific driver data to data.
  *			  should return an error if there is not enough room
  *			  as specified by size.  This information is returned
@@ -67,13 +66,6 @@ struct sync_timeline_ops {
 	/* optional */
 	void (*release_obj)(struct sync_timeline *sync_timeline);
 
-	/* deprecated */
-	void (*print_obj)(struct seq_file *s,
-			  struct sync_timeline *sync_timeline);
-
-	/* deprecated */
-	void (*print_pt)(struct seq_file *s, struct sync_pt *sync_pt);
-
 	/* optional */
 	int (*fill_driver_data)(struct sync_pt *syncpt, void *data, int size);
 
@@ -104,19 +96,21 @@ struct sync_timeline {
 
 	/* protected by child_list_lock */
 	bool			destroyed;
+	int			context, value;
 
 	struct list_head	child_list_head;
 	spinlock_t		child_list_lock;
 
 	struct list_head	active_list_head;
-	spinlock_t		active_list_lock;
 
+#ifdef CONFIG_DEBUG_FS
 	struct list_head	sync_timeline_list;
+#endif
 };
 
 /**
  * struct sync_pt - sync point
- * @parent:		sync_timeline to which this sync_pt belongs
+ * @fence:		base fence class
  * @child_list:		membership in sync_timeline.child_list_head
  * @active_list:	membership in sync_timeline.active_list_head
  * @signaled_list:	membership in temporary signaled_list on stack
@@ -127,19 +121,22 @@ struct sync_timeline {
  *			  signaled or error.
  */
 struct sync_pt {
-	struct sync_timeline		*parent;
-	struct list_head	child_list;
+	struct fence base;
 
+	struct list_head	child_list;
 	struct list_head	active_list;
-	struct list_head	signaled_list;
-
-	struct sync_fence	*fence;
-	struct list_head	pt_list;
+};
 
-	/* protected by parent->active_list_lock */
-	int			status;
+static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt)
+{
+	return container_of(pt->base.lock, struct sync_timeline,
+			    child_list_lock);
+}
 
-	ktime_t			timestamp;
+struct sync_fence_cb {
+	struct fence_cb cb;
+	struct fence *sync_pt;
+	struct sync_fence *fence;
 };
 
 /**
@@ -149,9 +146,7 @@ struct sync_pt {
  * @name:		name of sync_fence.  Useful for debugging
  * @pt_list_head:	list of sync_pts in the fence.  immutable once fence
  *			  is created
- * @waiter_list_head:	list of asynchronous waiters on this fence
- * @waiter_list_lock:	lock protecting @waiter_list_head and @status
- * @status:		1: signaled, 0:active, <0: error
+ * @status:		0: signaled, >0: active, <0: error
  *
  * @wq:			wait queue for fence signaling
  * @sync_fence_list:	membership in global fence list
@@ -160,17 +155,15 @@ struct sync_fence {
 	struct file		*file;
 	struct kref		kref;
 	char			name[32];
-
-	/* this list is immutable once the fence is created */
-	struct list_head	pt_list_head;
-
-	struct list_head	waiter_list_head;
-	spinlock_t		waiter_list_lock; /* also protects status */
-	int			status;
+#ifdef CONFIG_DEBUG_FS
+	struct list_head	sync_fence_list;
+#endif
+	int num_fences;
 
 	wait_queue_head_t	wq;
+	atomic_t		status;
 
-	struct list_head	sync_fence_list;
+	struct sync_fence_cb	cbs[];
 };
 
 struct sync_fence_waiter;
@@ -184,14 +177,14 @@ typedef void (*sync_callback_t)(struct sync_fence *fence,
  * @callback_data:	pointer to pass to @callback
  */
 struct sync_fence_waiter {
-	struct list_head	waiter_list;
-
-	sync_callback_t		callback;
+	wait_queue_t work;
+	sync_callback_t callback;
 };
 
 static inline void sync_fence_waiter_init(struct sync_fence_waiter *waiter,
 					  sync_callback_t callback)
 {
+	INIT_LIST_HEAD(&waiter->work.task_list);
 	waiter->callback = callback;
 }
 
@@ -341,4 +334,22 @@ int sync_fence_cancel_async(struct sync_fence *fence,
  */
 int sync_fence_wait(struct sync_fence *fence, long timeout);
 
+#ifdef CONFIG_DEBUG_FS
+
+extern void sync_timeline_debug_add(struct sync_timeline *obj);
+extern void sync_timeline_debug_remove(struct sync_timeline *obj);
+extern void sync_fence_debug_add(struct sync_fence *fence);
+extern void sync_fence_debug_remove(struct sync_fence *fence);
+extern void sync_dump(void);
+
+#else
+# define sync_timeline_debug_add(obj)
+# define sync_timeline_debug_remove(obj)
+# define sync_fence_debug_add(fence)
+# define sync_fence_debug_remove(fence)
+# define sync_dump()
+#endif
+int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode,
+				 int wake_flags, void *key);
+
 #endif /* _LINUX_SYNC_H */
diff --git a/drivers/staging/android/sync_debug.c b/drivers/staging/android/sync_debug.c
new file mode 100644
index 000000000000..00efaf9d8aa2
--- /dev/null
+++ b/drivers/staging/android/sync_debug.c
@@ -0,0 +1,247 @@
+/*
+ * drivers/base/sync.c
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/export.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/poll.h>
+#include <linux/sched.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/anon_inodes.h>
+#include "sync.h"
+
+#ifdef CONFIG_DEBUG_FS
+
+static LIST_HEAD(sync_timeline_list_head);
+static DEFINE_SPINLOCK(sync_timeline_list_lock);
+static LIST_HEAD(sync_fence_list_head);
+static DEFINE_SPINLOCK(sync_fence_list_lock);
+
+void sync_timeline_debug_add(struct sync_timeline *obj)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&sync_timeline_list_lock, flags);
+	list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head);
+	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+}
+
+void sync_timeline_debug_remove(struct sync_timeline *obj)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&sync_timeline_list_lock, flags);
+	list_del(&obj->sync_timeline_list);
+	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+}
+
+void sync_fence_debug_add(struct sync_fence *fence)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&sync_fence_list_lock, flags);
+	list_add_tail(&fence->sync_fence_list, &sync_fence_list_head);
+	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
+}
+
+void sync_fence_debug_remove(struct sync_fence *fence)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&sync_fence_list_lock, flags);
+	list_del(&fence->sync_fence_list);
+	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
+}
+
+static const char *sync_status_str(int status)
+{
+	if (status == 0)
+		return "signaled";
+	else if (status > 0)
+		return "active";
+	else
+		return "error";
+}
+
+static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence)
+{
+	int status = 1;
+	struct sync_timeline *parent = sync_pt_parent(pt);
+
+	if (__fence_is_signaled(&pt->base))
+		status = pt->base.status;
+
+	seq_printf(s, "  %s%spt %s",
+		   fence ? parent->name : "",
+		   fence ? "_" : "",
+		   sync_status_str(status));
+
+	if (status <= 0) {
+		struct timeval tv = ktime_to_timeval(pt->base.timestamp);
+		seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec);
+	}
+
+	if (parent->ops->timeline_value_str &&
+	    parent->ops->pt_value_str) {
+		char value[64];
+		parent->ops->pt_value_str(pt, value, sizeof(value));
+		seq_printf(s, ": %s", value);
+		if (fence) {
+			parent->ops->timeline_value_str(parent, value,
+						    sizeof(value));
+			seq_printf(s, " / %s", value);
+		}
+	}
+
+	seq_puts(s, "\n");
+}
+
+static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
+{
+	struct list_head *pos;
+	unsigned long flags;
+
+	seq_printf(s, "%s %s", obj->name, obj->ops->driver_name);
+
+	if (obj->ops->timeline_value_str) {
+		char value[64];
+		obj->ops->timeline_value_str(obj, value, sizeof(value));
+		seq_printf(s, ": %s", value);
+	}
+
+	seq_puts(s, "\n");
+
+	spin_lock_irqsave(&obj->child_list_lock, flags);
+	list_for_each(pos, &obj->child_list_head) {
+		struct sync_pt *pt =
+			container_of(pos, struct sync_pt, child_list);
+		sync_print_pt(s, pt, false);
+	}
+	spin_unlock_irqrestore(&obj->child_list_lock, flags);
+}
+
+static void sync_print_fence(struct seq_file *s, struct sync_fence *fence)
+{
+	wait_queue_t *pos;
+	unsigned long flags;
+	int i;
+
+	seq_printf(s, "[%p] %s: %s\n", fence, fence->name,
+		   sync_status_str(atomic_read(&fence->status)));
+
+	for (i = 0; i < fence->num_fences; ++i) {
+		struct sync_pt *pt =
+			container_of(fence->cbs[i].sync_pt,
+				     struct sync_pt, base);
+
+		sync_print_pt(s, pt, true);
+	}
+
+	spin_lock_irqsave(&fence->wq.lock, flags);
+	list_for_each_entry(pos, &fence->wq.task_list, task_list) {
+		struct sync_fence_waiter *waiter;
+
+		if (pos->func != &sync_fence_wake_up_wq)
+			continue;
+
+		waiter = container_of(pos, struct sync_fence_waiter, work);
+
+		seq_printf(s, "waiter %pF\n", waiter->callback);
+	}
+	spin_unlock_irqrestore(&fence->wq.lock, flags);
+}
+
+static int sync_debugfs_show(struct seq_file *s, void *unused)
+{
+	unsigned long flags;
+	struct list_head *pos;
+
+	seq_puts(s, "objs:\n--------------\n");
+
+	spin_lock_irqsave(&sync_timeline_list_lock, flags);
+	list_for_each(pos, &sync_timeline_list_head) {
+		struct sync_timeline *obj =
+			container_of(pos, struct sync_timeline,
+				     sync_timeline_list);
+
+		sync_print_obj(s, obj);
+		seq_puts(s, "\n");
+	}
+	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+
+	seq_puts(s, "fences:\n--------------\n");
+
+	spin_lock_irqsave(&sync_fence_list_lock, flags);
+	list_for_each(pos, &sync_fence_list_head) {
+		struct sync_fence *fence =
+			container_of(pos, struct sync_fence, sync_fence_list);
+
+		sync_print_fence(s, fence);
+		seq_puts(s, "\n");
+	}
+	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
+	return 0;
+}
+
+static int sync_debugfs_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, sync_debugfs_show, inode->i_private);
+}
+
+static const struct file_operations sync_debugfs_fops = {
+	.open           = sync_debugfs_open,
+	.read           = seq_read,
+	.llseek         = seq_lseek,
+	.release        = single_release,
+};
+
+static __init int sync_debugfs_init(void)
+{
+	debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops);
+	return 0;
+}
+late_initcall(sync_debugfs_init);
+
+#define DUMP_CHUNK 256
+static char sync_dump_buf[64 * 1024];
+void sync_dump(void)
+{
+	struct seq_file s = {
+		.buf = sync_dump_buf,
+		.size = sizeof(sync_dump_buf) - 1,
+	};
+	int i;
+
+	sync_debugfs_show(&s, NULL);
+
+	for (i = 0; i < s.count; i += DUMP_CHUNK) {
+		if ((s.count - i) > DUMP_CHUNK) {
+			char c = s.buf[i + DUMP_CHUNK];
+			s.buf[i + DUMP_CHUNK] = 0;
+			pr_cont("%s", s.buf + i);
+			s.buf[i + DUMP_CHUNK] = c;
+		} else {
+			s.buf[s.count] = 0;
+			pr_cont("%s", s.buf + i);
+		}
+	}
+}
+
+#endif
diff --git a/drivers/staging/android/trace/sync.h b/drivers/staging/android/trace/sync.h
index 95462359ba57..77edb977a7bf 100644
--- a/drivers/staging/android/trace/sync.h
+++ b/drivers/staging/android/trace/sync.h
@@ -45,7 +45,7 @@ TRACE_EVENT(sync_wait,
 
 	TP_fast_assign(
 			__assign_str(name, fence->name);
-			__entry->status = fence->status;
+			__entry->status = atomic_read(&fence->status);
 			__entry->begin = begin;
 	),
 
@@ -54,19 +54,19 @@ TRACE_EVENT(sync_wait,
 );
 
 TRACE_EVENT(sync_pt,
-	TP_PROTO(struct sync_pt *pt),
+	TP_PROTO(struct fence *pt),
 
 	TP_ARGS(pt),
 
 	TP_STRUCT__entry(
-		__string(timeline, pt->parent->name)
+		__string(timeline, pt->ops->get_timeline_name(pt))
 		__array(char, value, 32)
 	),
 
 	TP_fast_assign(
-		__assign_str(timeline, pt->parent->name);
-		if (pt->parent->ops->pt_value_str) {
-			pt->parent->ops->pt_value_str(pt, __entry->value,
+		__assign_str(timeline, pt->ops->get_timeline_name(pt));
+		if (pt->ops->fence_value_str) {
+			pt->ops->fence_value_str(pt, __entry->value,
 							sizeof(__entry->value));
 		} else {
 			__entry->value[0] = '\0';


* [REPOST PATCH 5/8] reservation: add support for fences to enable cross-device synchronisation
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  -1 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media
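
The patch itself is terse, so here is a minimal usage sketch (the
function below is made up for illustration; helpers for fence
bookkeeping only arrive later in the series, so for now a driver pokes
the fields directly while holding the ww_mutex):

	static void example_set_excl_fence(struct reservation_object *resv,
					   struct fence *excl)
	{
		ww_mutex_lock(&resv->lock, NULL);
		fence_get(excl);	/* resv keeps its own reference */
		/* a real driver would also fence_put() any fence it replaces */
		resv->fence_excl = excl;
		ww_mutex_unlock(&resv->lock);
	}

reservation_object_init() starts out with no fences attached, and
reservation_object_fini() drops whatever references are still installed.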

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
---
 include/linux/reservation.h |   20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 813dae960ebd..f3f57460a205 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -6,7 +6,7 @@
  * Copyright (C) 2012 Texas Instruments
  *
  * Authors:
- * Rob Clark <rob.clark@linaro.org>
+ * Rob Clark <robdclark@gmail.com>
  * Maarten Lankhorst <maarten.lankhorst@canonical.com>
  * Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  *
@@ -40,22 +40,40 @@
 #define _LINUX_RESERVATION_H
 
 #include <linux/ww_mutex.h>
+#include <linux/fence.h>
+#include <linux/slab.h>
 
 extern struct ww_class reservation_ww_class;
 
 struct reservation_object {
 	struct ww_mutex lock;
+
+	struct fence *fence_excl;
+	struct fence **fence_shared;
+	u32 fence_shared_count, fence_shared_max;
 };
 
 static inline void
 reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
+
+	obj->fence_shared_count = obj->fence_shared_max = 0;
+	obj->fence_shared = NULL;
+	obj->fence_excl = NULL;
 }
 
 static inline void
 reservation_object_fini(struct reservation_object *obj)
 {
+	int i;
+
+	if (obj->fence_excl)
+		fence_put(obj->fence_excl);
+	for (i = 0; i < obj->fence_shared_count; ++i)
+		fence_put(obj->fence_shared[i]);
+	kfree(obj->fence_shared);
+
 	ww_mutex_destroy(&obj->lock);
 }
 


* [REPOST PATCH 6/8] dma-buf: add poll support, v3
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

Thanks to Fengguang Wu for spotting a missing static cast.

v2:
- Kill unused variable need_shared.
v3:
- Clarify the BUG() in dma_buf_release some more. (Rob Clark)
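
For illustration, a minimal userspace sketch of the resulting
semantics (hypothetical example, not part of this patch; assumes
"dmabuf_fd" is a dma-buf file descriptor obtained from an exporter).
Polling for POLLIN waits on the exclusive fence, i.e. for a pending
write to complete; polling for POLLOUT waits until all pending
accesses to the buffer have finished:

	#include <poll.h>

	struct pollfd pfd = {
		.fd = dmabuf_fd,
		.events = POLLOUT,	/* wait until the buffer is idle */
	};

	/* blocks until every fence attached to the buffer has signaled */
	if (poll(&pfd, 1, -1) < 0)
		perror("poll");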

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/base/dma-buf.c  |  108 +++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-buf.h |   12 +++++
 2 files changed, 120 insertions(+)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index cd40ca22911f..25e8c4165936 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -30,6 +30,7 @@
 #include <linux/export.h>
 #include <linux/debugfs.h>
 #include <linux/seq_file.h>
+#include <linux/poll.h>
 #include <linux/reservation.h>
 
 static inline int is_dma_buf_file(struct file *);
@@ -52,6 +53,16 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 
 	BUG_ON(dmabuf->vmapping_counter);
 
+	/*
+	 * Any fences that a dma-buf poll can wait on should be signaled
+	 * before the dma-buf is released. This is the responsibility of each
+	 * driver that uses the reservation objects.
+	 *
+	 * If you hit this BUG() it means someone dropped their ref to the
+	 * dma-buf while still having pending operations on the buffer.
+	 */
+	BUG_ON(dmabuf->cb_shared.active || dmabuf->cb_excl.active);
+
 	dmabuf->ops->release(dmabuf);
 
 	mutex_lock(&db_list.lock);
@@ -108,10 +119,103 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
 	return base + offset;
 }
 
+static void dma_buf_poll_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb;
+	unsigned long flags;
+
+	spin_lock_irqsave(&dcb->poll->lock, flags);
+	wake_up_locked_poll(dcb->poll, dcb->active);
+	dcb->active = 0;
+	spin_unlock_irqrestore(&dcb->poll->lock, flags);
+}
+
+static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
+{
+	struct dma_buf *dmabuf;
+	struct reservation_object *resv;
+	unsigned long events;
+
+	dmabuf = file->private_data;
+	if (!dmabuf || !dmabuf->resv)
+		return POLLERR;
+
+	resv = dmabuf->resv;
+
+	poll_wait(file, &dmabuf->poll, poll);
+
+	events = poll_requested_events(poll) & (POLLIN | POLLOUT);
+	if (!events)
+		return 0;
+
+	ww_mutex_lock(&resv->lock, NULL);
+
+	if (resv->fence_excl && (!(events & POLLOUT) ||
+				 resv->fence_shared_count == 0)) {
+		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl;
+		unsigned long pevents = POLLIN;
+
+		if (resv->fence_shared_count == 0)
+			pevents |= POLLOUT;
+
+		spin_lock_irq(&dmabuf->poll.lock);
+		if (dcb->active) {
+			dcb->active |= pevents;
+			events &= ~pevents;
+		} else
+			dcb->active = pevents;
+		spin_unlock_irq(&dmabuf->poll.lock);
+
+		if (events & pevents) {
+			if (!fence_add_callback(resv->fence_excl,
+						&dcb->cb, dma_buf_poll_cb))
+				events &= ~pevents;
+			else
+				/*
+				 * No callback queued, wake up any additional
+				 * waiters.
+				 */
+				dma_buf_poll_cb(NULL, &dcb->cb);
+		}
+	}
+
+	if ((events & POLLOUT) && resv->fence_shared_count > 0) {
+		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_shared;
+		int i;
+
+		/* Only queue a new callback if no event has fired yet */
+		spin_lock_irq(&dmabuf->poll.lock);
+		if (dcb->active)
+			events &= ~POLLOUT;
+		else
+			dcb->active = POLLOUT;
+		spin_unlock_irq(&dmabuf->poll.lock);
+
+		if (!(events & POLLOUT))
+			goto out;
+
+		for (i = 0; i < resv->fence_shared_count; ++i)
+			if (!fence_add_callback(resv->fence_shared[i],
+						&dcb->cb, dma_buf_poll_cb)) {
+				events &= ~POLLOUT;
+				break;
+			}
+
+		/* No callback queued, wake up any additional waiters. */
+		if (i == resv->fence_shared_count)
+			dma_buf_poll_cb(NULL, &dcb->cb);
+	}
+
+out:
+	ww_mutex_unlock(&resv->lock);
+	return events;
+}
+
 static const struct file_operations dma_buf_fops = {
 	.release	= dma_buf_release,
 	.mmap		= dma_buf_mmap_internal,
 	.llseek		= dma_buf_llseek,
+	.poll		= dma_buf_poll,
 };
 
 /*
@@ -171,6 +275,10 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
 	dmabuf->ops = ops;
 	dmabuf->size = size;
 	dmabuf->exp_name = exp_name;
+	init_waitqueue_head(&dmabuf->poll);
+	dmabuf->cb_excl.poll = dmabuf->cb_shared.poll = &dmabuf->poll;
+	dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;
+
 	if (!resv) {
 		resv = (struct reservation_object *)&dmabuf[1];
 		reservation_object_init(resv);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index fd7def2e0ae2..694e1fe1c4b4 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -30,6 +30,8 @@
 #include <linux/list.h>
 #include <linux/dma-mapping.h>
 #include <linux/fs.h>
+#include <linux/fence.h>
+#include <linux/wait.h>
 
 struct device;
 struct dma_buf;
@@ -130,6 +132,16 @@ struct dma_buf {
 	struct list_head list_node;
 	void *priv;
 	struct reservation_object *resv;
+
+	/* poll support */
+	wait_queue_head_t poll;
+
+	struct dma_buf_poll_cb_t {
+		struct fence_cb cb;
+		wait_queue_head_t *poll;
+
+		unsigned long active;
+	} cb_excl, cb_shared;
 };
 
 /**


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [REPOST PATCH 7/8] reservation: update api and add some helpers
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

Move the list of shared fences to a struct, and return it from
reservation_object_get_list().
Add reservation_object_get_excl() to get the exclusive fence.

Add reservation_object_reserve_shared(), which reserves space
in the reservation_object for one more shared fence.

reservation_object_add_shared_fence() and
reservation_object_add_excl_fence() are used to assign a new
fence to a reservation_object, completing a reservation.
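
As a rough usage sketch (hypothetical driver code, not part of this
patch; assumes "obj" is the buffer's reservation_object and "fence"
is the fence for the new read operation):

	int ret;

	ww_mutex_lock(&obj->lock, NULL);

	/* make room for one more shared fence, then install it */
	ret = reservation_object_reserve_shared(obj);
	if (!ret)
		reservation_object_add_shared_fence(obj, fence);

	ww_mutex_unlock(&obj->lock);

A writer would call reservation_object_add_excl_fence(obj, fence)
instead, which also drops all shared fences from the object.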

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

Changes since v1:
- Add reservation_object_get_excl, reorder code a bit.
---
 drivers/base/dma-buf.c      |   35 +++++++---
 drivers/base/fence.c        |    4 +
 drivers/base/reservation.c  |  156 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/fence.h       |    6 ++
 include/linux/reservation.h |   56 ++++++++++++++-
 5 files changed, 236 insertions(+), 21 deletions(-)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 25e8c4165936..cb8379dfeed5 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -134,7 +134,10 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 {
 	struct dma_buf *dmabuf;
 	struct reservation_object *resv;
+	struct reservation_object_list *fobj;
+	struct fence *fence_excl;
 	unsigned long events;
+	unsigned shared_count;
 
 	dmabuf = file->private_data;
 	if (!dmabuf || !dmabuf->resv)
@@ -150,12 +153,18 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 
 	ww_mutex_lock(&resv->lock, NULL);
 
-	if (resv->fence_excl && (!(events & POLLOUT) ||
-				 resv->fence_shared_count == 0)) {
+	fobj = resv->fence;
+	if (!fobj)
+		goto out;
+
+	shared_count = fobj->shared_count;
+	fence_excl = resv->fence_excl;
+
+	if (fence_excl && (!(events & POLLOUT) || shared_count == 0)) {
 		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl;
 		unsigned long pevents = POLLIN;
 
-		if (resv->fence_shared_count == 0)
+		if (shared_count == 0)
 			pevents |= POLLOUT;
 
 		spin_lock_irq(&dmabuf->poll.lock);
@@ -167,19 +176,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 		spin_unlock_irq(&dmabuf->poll.lock);
 
 		if (events & pevents) {
-			if (!fence_add_callback(resv->fence_excl,
-						&dcb->cb, dma_buf_poll_cb))
+			if (!fence_add_callback(fence_excl, &dcb->cb,
+						       dma_buf_poll_cb)) {
 				events &= ~pevents;
-			else
+			} else {
 				/*
 				 * No callback queued, wake up any additional
 				 * waiters.
 				 */
 				dma_buf_poll_cb(NULL, &dcb->cb);
+			}
 		}
 	}
 
-	if ((events & POLLOUT) && resv->fence_shared_count > 0) {
+	if ((events & POLLOUT) && shared_count > 0) {
 		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_shared;
 		int i;
 
@@ -194,15 +204,18 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 		if (!(events & POLLOUT))
 			goto out;
 
-		for (i = 0; i < resv->fence_shared_count; ++i)
-			if (!fence_add_callback(resv->fence_shared[i],
-						&dcb->cb, dma_buf_poll_cb)) {
+		for (i = 0; i < shared_count; ++i) {
+			struct fence *fence = fobj->shared[i];
+
+			if (!fence_add_callback(fence, &dcb->cb,
+						dma_buf_poll_cb)) {
 				events &= ~POLLOUT;
 				break;
 			}
+		}
 
 		/* No callback queued, wake up any additional waiters. */
-		if (i == resv->fence_shared_count)
+		if (i == shared_count)
 			dma_buf_poll_cb(NULL, &dcb->cb);
 	}
 
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
index 752a2dfa505f..74d1f7bcb467 100644
--- a/drivers/base/fence.c
+++ b/drivers/base/fence.c
@@ -170,7 +170,7 @@ void release_fence(struct kref *kref)
 	if (fence->ops->release)
 		fence->ops->release(fence);
 	else
-		kfree(fence);
+		free_fence(fence);
 }
 EXPORT_SYMBOL(release_fence);
 
@@ -448,7 +448,7 @@ static void seqno_release(struct fence *fence)
 	if (f->ops->release)
 		f->ops->release(fence);
 	else
-		kfree(f);
+		free_fence(fence);
 }
 
 static long seqno_wait(struct fence *fence, bool intr, signed long timeout)
diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
index a73fbf3b8e56..e6166723a9ae 100644
--- a/drivers/base/reservation.c
+++ b/drivers/base/reservation.c
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2013 Canonical Ltd
+ * Copyright (C) 2012-2014 Canonical Ltd (Maarten Lankhorst)
  *
  * Based on bo.c which bears the following copyright notice,
  * but is dual licensed:
@@ -37,3 +37,157 @@
 
 DEFINE_WW_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
+
+/*
+ * Reserve space to add a shared fence to a reservation_object,
+ * must be called with obj->lock held.
+ */
+int reservation_object_reserve_shared(struct reservation_object *obj)
+{
+	struct reservation_object_list *fobj, *old;
+	u32 max;
+
+	old = reservation_object_get_list(obj);
+
+	if (old && old->shared_max) {
+		if (old->shared_count < old->shared_max) {
+			/* perform an in-place update */
+			kfree(obj->staged);
+			obj->staged = NULL;
+			return 0;
+		} else
+			max = old->shared_max * 2;
+	} else
+		max = 4;
+
+	/*
+	 * resize obj->staged or allocate if it doesn't exist,
+	 * noop if already correct size
+	 */
+	fobj = krealloc(obj->staged, offsetof(typeof(*fobj), shared[max]),
+			GFP_KERNEL);
+	if (!fobj)
+		return -ENOMEM;
+
+	obj->staged = fobj;
+	fobj->shared_max = max;
+	return 0;
+}
+EXPORT_SYMBOL(reservation_object_reserve_shared);
+
+static void
+reservation_object_add_shared_inplace(struct reservation_object *obj,
+				      struct reservation_object_list *fobj,
+				      struct fence *fence)
+{
+	u32 i;
+
+	for (i = 0; i < fobj->shared_count; ++i) {
+		if (fobj->shared[i]->context == fence->context) {
+			struct fence *old_fence = fobj->shared[i];
+
+			fence_get(fence);
+
+			fobj->shared[i] = fence;
+
+			fence_put(old_fence);
+			return;
+		}
+	}
+
+	fence_get(fence);
+	fobj->shared[fobj->shared_count] = fence;
+	/*
+	 * make the new fence visible before incrementing
+	 * fobj->shared_count
+	 */
+	smp_wmb();
+	fobj->shared_count++;
+}
+
+static void
+reservation_object_add_shared_replace(struct reservation_object *obj,
+				      struct reservation_object_list *old,
+				      struct reservation_object_list *fobj,
+				      struct fence *fence)
+{
+	unsigned i;
+
+	fence_get(fence);
+
+	if (!old) {
+		fobj->shared[0] = fence;
+		fobj->shared_count = 1;
+		goto done;
+	}
+
+	/*
+	 * no need to bump fence refcounts, rcu_read access
+	 * requires the use of kref_get_unless_zero, and the
+	 * references from the old struct are carried over to
+	 * the new.
+	 */
+	fobj->shared_count = old->shared_count;
+
+	for (i = 0; i < old->shared_count; ++i) {
+		if (fence && old->shared[i]->context == fence->context) {
+			fence_put(old->shared[i]);
+			fobj->shared[i] = fence;
+			fence = NULL;
+		} else
+			fobj->shared[i] = old->shared[i];
+	}
+	if (fence)
+		fobj->shared[fobj->shared_count++] = fence;
+
+done:
+	obj->fence = fobj;
+	kfree(old);
+}
+
+/*
+ * Add a fence to a shared slot, obj->lock must be held, and
+ * reservation_object_reserve_shared() has been called.
+ */
+void reservation_object_add_shared_fence(struct reservation_object *obj,
+					 struct fence *fence)
+{
+	struct reservation_object_list *old, *fobj = obj->staged;
+
+	old = reservation_object_get_list(obj);
+	obj->staged = NULL;
+
+	if (!fobj) {
+		BUG_ON(old->shared_count == old->shared_max);
+		reservation_object_add_shared_inplace(obj, old, fence);
+	} else
+		reservation_object_add_shared_replace(obj, old, fobj, fence);
+}
+EXPORT_SYMBOL(reservation_object_add_shared_fence);
+
+void reservation_object_add_excl_fence(struct reservation_object *obj,
+				       struct fence *fence)
+{
+	struct fence *old_fence = obj->fence_excl;
+	struct reservation_object_list *old;
+	u32 i = 0;
+
+	old = reservation_object_get_list(obj);
+	if (old) {
+		i = old->shared_count;
+		old->shared_count = 0;
+	}
+
+	if (fence)
+		fence_get(fence);
+
+	obj->fence_excl = fence;
+
+	/* inplace update, no shared fences */
+	while (i--)
+		fence_put(old->shared[i]);
+
+	if (old_fence)
+		fence_put(old_fence);
+}
+EXPORT_SYMBOL(reservation_object_add_excl_fence);
diff --git a/include/linux/fence.h b/include/linux/fence.h
index 65f2a01ee7e4..d13b5ab61726 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -31,6 +31,7 @@
 #include <linux/kref.h>
 #include <linux/sched.h>
 #include <linux/printk.h>
+#include <linux/slab.h>
 
 struct fence;
 struct fence_ops;
@@ -191,6 +192,11 @@ static inline void fence_get(struct fence *fence)
 
 extern void release_fence(struct kref *kref);
 
+static inline void free_fence(struct fence *fence)
+{
+	kfree(fence);
+}
+
 /**
  * fence_put - decreases refcount of the fence
  * @fence:	[in]	fence to reduce refcount of
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index f3f57460a205..2affe67dea6e 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -45,36 +45,78 @@
 
 extern struct ww_class reservation_ww_class;
 
+struct reservation_object_list {
+	u32 shared_count, shared_max;
+	struct fence *shared[];
+};
+
 struct reservation_object {
 	struct ww_mutex lock;
 
 	struct fence *fence_excl;
-	struct fence **fence_shared;
-	u32 fence_shared_count, fence_shared_max;
+	struct reservation_object_list *fence;
+	struct reservation_object_list *staged;
 };
 
+#define reservation_object_assert_held(obj) \
+	lockdep_assert_held(&(obj)->lock.base)
+
 static inline void
 reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
 
-	obj->fence_shared_count = obj->fence_shared_max = 0;
-	obj->fence_shared = NULL;
 	obj->fence_excl = NULL;
+	obj->fence = NULL;
+	obj->staged = NULL;
 }
 
 static inline void
 reservation_object_fini(struct reservation_object *obj)
 {
 	int i;
+	struct reservation_object_list *fobj;
 
+	/*
+	 * This object should be dead and all references to it
+	 * must have been released.
+	 */
 	if (obj->fence_excl)
 		fence_put(obj->fence_excl);
-	for (i = 0; i < obj->fence_shared_count; ++i)
-		fence_put(obj->fence_shared[i]);
-	kfree(obj->fence_shared);
+
+	fobj = obj->fence;
+	if (fobj) {
+		for (i = 0; i < fobj->shared_count; ++i)
+			fence_put(fobj->shared[i]);
+
+		kfree(fobj);
+	}
+	kfree(obj->staged);
 
 	ww_mutex_destroy(&obj->lock);
 }
 
+static inline struct reservation_object_list *
+reservation_object_get_list(struct reservation_object *obj)
+{
+	reservation_object_assert_held(obj);
+
+	return obj->fence;
+}
+
+static inline struct fence *
+reservation_object_get_excl(struct reservation_object *obj)
+{
+	reservation_object_assert_held(obj);
+
+	return obj->fence_excl;
+}
+
+int reservation_object_reserve_shared(struct reservation_object *obj);
+void reservation_object_add_shared_fence(struct reservation_object *obj,
+					 struct fence *fence);
+
+void reservation_object_add_excl_fence(struct reservation_object *obj,
+				       struct fence *fence);
+
 #endif /* _LINUX_RESERVATION_H */


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [REPOST PATCH 8/8] reservation: add support for read-only access using rcu
  2014-06-18 10:36 ` Maarten Lankhorst
@ 2014-06-18 10:37   ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

This adds four more functions for dealing with RCU.

reservation_object_get_fences_rcu() obtains the list of shared
and exclusive fences without taking the ww_mutex.

reservation_object_wait_timeout_rcu() waits on all fences of the
reservation_object without taking the ww_mutex.

reservation_object_test_signaled_rcu() tests whether all fences of
the reservation_object are signaled without taking the ww_mutex.

reservation_object_get_excl() is added because touching the fence_excl
member directly would trigger a sparse warning.
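
As a rough sketch of the lockless read side (hypothetical caller, not
part of this patch; assumes "obj" points to a valid
reservation_object):

	struct fence *excl, **shared;
	unsigned count, i;
	long ret;

	/* wait up to 100ms for all fences, interruptibly, without
	 * taking obj->lock */
	ret = reservation_object_wait_timeout_rcu(obj, true, true,
						  msecs_to_jiffies(100));
	if (ret == 0)
		return -EBUSY;		/* timed out */
	else if (ret < 0)
		return ret;		/* interrupted */

	/* or snapshot the fences; the caller then holds a reference
	 * to each returned fence and must drop them all again */
	if (!reservation_object_get_fences_rcu(obj, &excl, &count,
					       &shared)) {
		for (i = 0; i < count; ++i)
			fence_put(shared[i]);
		kfree(shared);
		if (excl)
			fence_put(excl);
	}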

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/base/dma-buf.c      |   47 +++++-
 drivers/base/reservation.c  |  336 ++++++++++++++++++++++++++++++++++++++++---
 include/linux/fence.h       |   20 ++-
 include/linux/reservation.h |   52 +++++--
 4 files changed, 400 insertions(+), 55 deletions(-)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index cb8379dfeed5..f3014c448e1e 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -137,7 +137,7 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 	struct reservation_object_list *fobj;
 	struct fence *fence_excl;
 	unsigned long events;
-	unsigned shared_count;
+	unsigned shared_count, seq;
 
 	dmabuf = file->private_data;
 	if (!dmabuf || !dmabuf->resv)
@@ -151,14 +151,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 	if (!events)
 		return 0;
 
-	ww_mutex_lock(&resv->lock, NULL);
+retry:
+	seq = read_seqcount_begin(&resv->seq);
+	rcu_read_lock();
 
-	fobj = resv->fence;
-	if (!fobj)
-		goto out;
-
-	shared_count = fobj->shared_count;
-	fence_excl = resv->fence_excl;
+	fobj = rcu_dereference(resv->fence);
+	if (fobj)
+		shared_count = fobj->shared_count;
+	else
+		shared_count = 0;
+	fence_excl = rcu_dereference(resv->fence_excl);
+	if (read_seqcount_retry(&resv->seq, seq)) {
+		rcu_read_unlock();
+		goto retry;
+	}
 
 	if (fence_excl && (!(events & POLLOUT) || shared_count == 0)) {
 		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl;
@@ -176,14 +182,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 		spin_unlock_irq(&dmabuf->poll.lock);
 
 		if (events & pevents) {
-			if (!fence_add_callback(fence_excl, &dcb->cb,
+			if (!fence_get_rcu(fence_excl)) {
+				/* force a recheck */
+				events &= ~pevents;
+				dma_buf_poll_cb(NULL, &dcb->cb);
+			} else if (!fence_add_callback(fence_excl, &dcb->cb,
 						       dma_buf_poll_cb)) {
 				events &= ~pevents;
+				fence_put(fence_excl);
 			} else {
 				/*
 				 * No callback queued, wake up any additional
 				 * waiters.
 				 */
+				fence_put(fence_excl);
 				dma_buf_poll_cb(NULL, &dcb->cb);
 			}
 		}
@@ -205,13 +217,26 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 			goto out;
 
 		for (i = 0; i < shared_count; ++i) {
-			struct fence *fence = fobj->shared[i];
+			struct fence *fence = rcu_dereference(fobj->shared[i]);
 
+			if (!fence_get_rcu(fence)) {
+				/*
+				 * fence refcount dropped to zero, this means
+				 * the fence refcount dropped to zero, which
+				 * means fobj has been freed
+				 * call dma_buf_poll_cb and force a recheck!
+				 */
+				events &= ~POLLOUT;
+				dma_buf_poll_cb(NULL, &dcb->cb);
+				break;
+			}
 			if (!fence_add_callback(fence, &dcb->cb,
 						dma_buf_poll_cb)) {
+				fence_put(fence);
 				events &= ~POLLOUT;
 				break;
 			}
+			fence_put(fence);
 		}
 
 		/* No callback queued, wake up any additional waiters. */
@@ -220,7 +245,7 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 	}
 
 out:
-	ww_mutex_unlock(&resv->lock);
+	rcu_read_unlock();
 	return events;
 }
 
diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
index e6166723a9ae..3c97c8fa8d02 100644
--- a/drivers/base/reservation.c
+++ b/drivers/base/reservation.c
@@ -38,6 +38,11 @@
 DEFINE_WW_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
+struct lock_class_key reservation_seqcount_class;
+EXPORT_SYMBOL(reservation_seqcount_class);
+
+const char reservation_seqcount_string[] = "reservation_seqcount";
+EXPORT_SYMBOL(reservation_seqcount_string);
 /*
  * Reserve space to add a shared fence to a reservation_object,
  * must be called with obj->lock held.
@@ -82,27 +87,37 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
 {
 	u32 i;
 
+	fence_get(fence);
+
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+
 	for (i = 0; i < fobj->shared_count; ++i) {
-		if (fobj->shared[i]->context == fence->context) {
-			struct fence *old_fence = fobj->shared[i];
+		struct fence *old_fence;
 
-			fence_get(fence);
+		old_fence = rcu_dereference_protected(fobj->shared[i],
+						reservation_object_held(obj));
 
-			fobj->shared[i] = fence;
+		if (old_fence->context == fence->context) {
+			/* memory barrier is added by write_seqcount_begin */
+			RCU_INIT_POINTER(fobj->shared[i], fence);
+			write_seqcount_end(&obj->seq);
+			preempt_enable();
 
 			fence_put(old_fence);
 			return;
 		}
 	}
 
-	fence_get(fence);
-	fobj->shared[fobj->shared_count] = fence;
 	/*
-	 * make the new fence visible before incrementing
-	 * fobj->shared_count
+	 * memory barrier is added by write_seqcount_begin,
+	 * fobj->shared_count is protected by this lock too
 	 */
-	smp_wmb();
+	RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
 	fobj->shared_count++;
+
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 }
 
 static void
@@ -112,11 +127,12 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 				      struct fence *fence)
 {
 	unsigned i;
+	struct fence *old_fence = NULL;
 
 	fence_get(fence);
 
 	if (!old) {
-		fobj->shared[0] = fence;
+		RCU_INIT_POINTER(fobj->shared[0], fence);
 		fobj->shared_count = 1;
 		goto done;
 	}
@@ -130,19 +146,38 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 	fobj->shared_count = old->shared_count;
 
 	for (i = 0; i < old->shared_count; ++i) {
-		if (fence && old->shared[i]->context == fence->context) {
-			fence_put(old->shared[i]);
-			fobj->shared[i] = fence;
-			fence = NULL;
+		struct fence *check;
+
+		check = rcu_dereference_protected(old->shared[i],
+						reservation_object_held(obj));
+
+		if (!old_fence && check->context == fence->context) {
+			old_fence = check;
+			RCU_INIT_POINTER(fobj->shared[i], fence);
 		} else
-			fobj->shared[i] = old->shared[i];
+			RCU_INIT_POINTER(fobj->shared[i], check);
+	}
+	if (!old_fence) {
+		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
+		fobj->shared_count++;
 	}
-	if (fence)
-		fobj->shared[fobj->shared_count++] = fence;
 
 done:
-	obj->fence = fobj;
-	kfree(old);
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+	/*
+	 * RCU_INIT_POINTER can be used here,
+	 * seqcount provides the necessary barriers
+	 */
+	RCU_INIT_POINTER(obj->fence, fobj);
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
+
+	if (old)
+		kfree_rcu(old, rcu);
+
+	if (old_fence)
+		fence_put(old_fence);
 }
 
 /*
@@ -158,7 +193,7 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 	obj->staged = NULL;
 
 	if (!fobj) {
-		BUG_ON(old->shared_count == old->shared_max);
+		BUG_ON(old->shared_count >= old->shared_max);
 		reservation_object_add_shared_inplace(obj, old, fence);
 	} else
 		reservation_object_add_shared_replace(obj, old, fobj, fence);
@@ -168,26 +203,275 @@ EXPORT_SYMBOL(reservation_object_add_shared_fence);
 void reservation_object_add_excl_fence(struct reservation_object *obj,
 				       struct fence *fence)
 {
-	struct fence *old_fence = obj->fence_excl;
+	struct fence *old_fence = reservation_object_get_excl(obj);
 	struct reservation_object_list *old;
 	u32 i = 0;
 
 	old = reservation_object_get_list(obj);
-	if (old) {
+	if (old)
 		i = old->shared_count;
-		old->shared_count = 0;
-	}
 
 	if (fence)
 		fence_get(fence);
 
-	obj->fence_excl = fence;
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+	/* write_seqcount_begin provides the necessary memory barrier */
+	RCU_INIT_POINTER(obj->fence_excl, fence);
+	if (old)
+		old->shared_count = 0;
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 
 	/* inplace update, no shared fences */
 	while (i--)
-		fence_put(old->shared[i]);
+		fence_put(rcu_dereference_protected(old->shared[i],
+						reservation_object_held(obj)));
 
 	if (old_fence)
 		fence_put(old_fence);
 }
 EXPORT_SYMBOL(reservation_object_add_excl_fence);
+
+int reservation_object_get_fences_rcu(struct reservation_object *obj,
+				      struct fence **pfence_excl,
+				      unsigned *pshared_count,
+				      struct fence ***pshared)
+{
+	unsigned shared_count = 0;
+	unsigned retry = 1;
+	struct fence **shared = NULL, *fence_excl = NULL;
+	int ret = 0;
+
+	while (retry) {
+		struct reservation_object_list *fobj;
+		unsigned seq;
+
+		seq = read_seqcount_begin(&obj->seq);
+
+		rcu_read_lock();
+
+		fobj = rcu_dereference(obj->fence);
+		if (fobj) {
+			struct fence **nshared;
+			size_t sz = sizeof(*shared) * fobj->shared_max;
+
+			nshared = krealloc(shared, sz,
+					   GFP_NOWAIT | __GFP_NOWARN);
+			if (!nshared) {
+				rcu_read_unlock();
+				nshared = krealloc(shared, sz, GFP_KERNEL);
+				if (nshared) {
+					shared = nshared;
+					continue;
+				}
+
+				ret = -ENOMEM;
+				shared_count = 0;
+				break;
+			}
+			shared = nshared;
+			memcpy(shared, fobj->shared, sz);
+			shared_count = fobj->shared_count;
+		} else
+			shared_count = 0;
+		fence_excl = rcu_dereference(obj->fence_excl);
+
+		retry = read_seqcount_retry(&obj->seq, seq);
+		if (retry)
+			goto unlock;
+
+		if (!fence_excl || fence_get_rcu(fence_excl)) {
+			unsigned i;
+
+			for (i = 0; i < shared_count; ++i) {
+				if (fence_get_rcu(shared[i]))
+					continue;
+
+				/* uh oh, refcount failed, abort and retry */
+				while (i--)
+					fence_put(shared[i]);
+
+				if (fence_excl) {
+					fence_put(fence_excl);
+					fence_excl = NULL;
+				}
+
+				retry = 1;
+				break;
+			}
+		} else
+			retry = 1;
+
+unlock:
+		rcu_read_unlock();
+	}
+	*pshared_count = shared_count;
+	if (shared_count)
+		*pshared = shared;
+	else {
+		*pshared = NULL;
+		kfree(shared);
+	}
+	*pfence_excl = fence_excl;
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
+
+long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+					 bool wait_all, bool intr,
+					 unsigned long timeout)
+{
+	struct fence *fence;
+	unsigned seq, shared_count, i = 0;
+	long ret = timeout;
+
+retry:
+	fence = NULL;
+	shared_count = 0;
+	seq = read_seqcount_begin(&obj->seq);
+	rcu_read_lock();
+
+	if (wait_all) {
+		struct reservation_object_list *fobj = rcu_dereference(obj->fence);
+
+		if (fobj)
+			shared_count = fobj->shared_count;
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		for (i = 0; i < shared_count; ++i) {
+			struct fence *lfence = rcu_dereference(fobj->shared[i]);
+
+			if (test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags))
+				continue;
+
+			if (!fence_get_rcu(lfence))
+				goto unlock_retry;
+
+			if (fence_is_signaled(lfence)) {
+				fence_put(lfence);
+				continue;
+			}
+
+			fence = lfence;
+			break;
+		}
+	}
+
+	if (!shared_count) {
+		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		if (fence_excl &&
+		    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence_excl->flags)) {
+			if (!fence_get_rcu(fence_excl))
+				goto unlock_retry;
+
+			if (fence_is_signaled(fence_excl))
+				fence_put(fence_excl);
+			else
+				fence = fence_excl;
+		}
+	}
+
+	rcu_read_unlock();
+	if (fence) {
+		ret = fence_wait_timeout(fence, intr, ret);
+		fence_put(fence);
+		if (ret > 0 && wait_all && (i + 1 < shared_count))
+			goto retry;
+	}
+	return ret;
+
+unlock_retry:
+	rcu_read_unlock();
+	goto retry;
+}
+EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu);
+
+
+static inline int
+reservation_object_test_signaled_single(struct fence *passed_fence)
+{
+	struct fence *fence, *lfence = passed_fence;
+	int ret = 1;
+
+	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
+		/* reuse the outer ret so the caller sees the result */
+
+		fence = fence_get_rcu(lfence);
+		if (!fence)
+			return -1;
+
+		ret = !!fence_is_signaled(fence);
+		fence_put(fence);
+	}
+	return ret;
+}
+
+bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
+					  bool test_all)
+{
+	unsigned seq, shared_count;
+	int ret = true;
+
+retry:
+	shared_count = 0;
+	seq = read_seqcount_begin(&obj->seq);
+	rcu_read_lock();
+
+	if (test_all) {
+		unsigned i;
+
+		struct reservation_object_list *fobj = rcu_dereference(obj->fence);
+
+		if (fobj)
+			shared_count = fobj->shared_count;
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		for (i = 0; i < shared_count; ++i) {
+			struct fence *fence = rcu_dereference(fobj->shared[i]);
+
+			ret = reservation_object_test_signaled_single(fence);
+			if (ret < 0)
+				goto unlock_retry;
+			else if (!ret)
+				break;
+		}
+
+		/*
+		 * There could be a read_seqcount_retry here, but nothing cares
+		 * about whether it's the old or newer fence pointers that are
+		 * signaled. That race could still have happened after checking
+		 * read_seqcount_retry. If you care, use ww_mutex_lock.
+		 */
+	}
+
+	if (!shared_count) {
+		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		if (fence_excl) {
+			ret = reservation_object_test_signaled_single(fence_excl);
+			if (ret < 0)
+				goto unlock_retry;
+		}
+	}
+
+	rcu_read_unlock();
+	return ret;
+
+unlock_retry:
+	rcu_read_unlock();
+	goto retry;
+}
+EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
diff --git a/include/linux/fence.h b/include/linux/fence.h
index d13b5ab61726..4b7002457af0 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -31,7 +31,7 @@
 #include <linux/kref.h>
 #include <linux/sched.h>
 #include <linux/printk.h>
-#include <linux/slab.h>
+#include <linux/rcupdate.h>
 
 struct fence;
 struct fence_ops;
@@ -41,6 +41,7 @@ struct fence_cb;
  * struct fence - software synchronization primitive
  * @refcount: refcount for this fence
  * @ops: fence_ops associated with this fence
+ * @rcu: used for releasing fence with kfree_rcu
  * @cb_list: list of all callbacks to call
  * @lock: spin_lock_irqsave used for locking
  * @context: execution context this fence belongs to, returned by
@@ -74,6 +75,7 @@ struct fence_cb;
 struct fence {
 	struct kref refcount;
 	const struct fence_ops *ops;
+	struct rcu_head rcu;
 	struct list_head cb_list;
 	spinlock_t *lock;
 	unsigned context, seqno;
@@ -190,11 +192,25 @@ static inline void fence_get(struct fence *fence)
 	kref_get(&fence->refcount);
 }
 
+/**
+ * fence_get_rcu - get a fence from a reservation_object_list with rcu read lock
+ * @fence:	[in]	fence to increase refcount of
+ *
+ * Returns the fence if a reference could be obtained, or NULL otherwise.
+ */
+static inline struct fence *fence_get_rcu(struct fence *fence)
+{
+	if (kref_get_unless_zero(&fence->refcount))
+		return fence;
+	else
+		return NULL;
+}
+
 extern void release_fence(struct kref *kref);
 
 static inline void free_fence(struct fence *fence)
 {
-	kfree(fence);
+	kfree_rcu(fence, rcu);
 }
 
 /**
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 2affe67dea6e..5a0b64cf68b4 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -42,22 +42,29 @@
 #include <linux/ww_mutex.h>
 #include <linux/fence.h>
 #include <linux/slab.h>
+#include <linux/seqlock.h>
+#include <linux/rcupdate.h>
 
 extern struct ww_class reservation_ww_class;
+extern struct lock_class_key reservation_seqcount_class;
+extern const char reservation_seqcount_string[];
 
 struct reservation_object_list {
+	struct rcu_head rcu;
 	u32 shared_count, shared_max;
-	struct fence *shared[];
+	struct fence __rcu *shared[];
 };
 
 struct reservation_object {
 	struct ww_mutex lock;
+	seqcount_t seq;
 
-	struct fence *fence_excl;
-	struct reservation_object_list *fence;
+	struct fence __rcu *fence_excl;
+	struct reservation_object_list __rcu *fence;
 	struct reservation_object_list *staged;
 };
 
+#define reservation_object_held(obj) lockdep_is_held(&(obj)->lock.base)
 #define reservation_object_assert_held(obj) \
 	lockdep_assert_held(&(obj)->lock.base)
 
@@ -66,8 +73,9 @@ reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
 
-	obj->fence_excl = NULL;
-	obj->fence = NULL;
+	__seqcount_init(&obj->seq, reservation_seqcount_string, &reservation_seqcount_class);
+	RCU_INIT_POINTER(obj->fence, NULL);
+	RCU_INIT_POINTER(obj->fence_excl, NULL);
 	obj->staged = NULL;
 }
 
@@ -76,18 +84,20 @@ reservation_object_fini(struct reservation_object *obj)
 {
 	int i;
 	struct reservation_object_list *fobj;
+	struct fence *excl;
 
 	/*
 	 * This object should be dead and all references to it
-	 * must have been released.
+	 * must have been released, so no rcu protection is needed.
 	 */
-	if (obj->fence_excl)
-		fence_put(obj->fence_excl);
+	excl = rcu_dereference_protected(obj->fence_excl, 1);
+	if (excl)
+		fence_put(excl);
 
-	fobj = obj->fence;
+	fobj = rcu_dereference_protected(obj->fence, 1);
 	if (fobj) {
 		for (i = 0; i < fobj->shared_count; ++i)
-			fence_put(fobj->shared[i]);
+			fence_put(rcu_dereference_protected(fobj->shared[i], 1));
 
 		kfree(fobj);
 	}
@@ -99,17 +109,15 @@ reservation_object_fini(struct reservation_object *obj)
 static inline struct reservation_object_list *
 reservation_object_get_list(struct reservation_object *obj)
 {
-	reservation_object_assert_held(obj);
-
-	return obj->fence;
+	return rcu_dereference_protected(obj->fence,
+					 reservation_object_held(obj));
 }
 
 static inline struct fence *
 reservation_object_get_excl(struct reservation_object *obj)
 {
-	reservation_object_assert_held(obj);
-
-	return obj->fence_excl;
+	return rcu_dereference_protected(obj->fence_excl,
+					 reservation_object_held(obj));
 }
 
 int reservation_object_reserve_shared(struct reservation_object *obj);
@@ -119,4 +127,16 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 void reservation_object_add_excl_fence(struct reservation_object *obj,
 				       struct fence *fence);
 
+int reservation_object_get_fences_rcu(struct reservation_object *obj,
+				      struct fence **pfence_excl,
+				      unsigned *pshared_count,
+				      struct fence ***pshared);
+
+long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+					 bool wait_all, bool intr,
+					 unsigned long timeout);
+
+bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
+					  bool test_all);
+
 #endif /* _LINUX_RESERVATION_H */


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [REPOST PATCH 8/8] reservation: add suppport for read-only access using rcu
@ 2014-06-18 10:37   ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-18 10:37 UTC (permalink / raw)
  To: gregkh
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	ccross, linux-media

This adds 4 more functions to deal with rcu.

reservation_object_get_fences_rcu() will obtain the list of shared
and exclusive fences without obtaining the ww_mutex.

reservation_object_wait_timeout_rcu() will wait on all fences of the
reservation_object, without obtaining the ww_mutex.

reservation_object_test_signaled_rcu() will test if all fences of the
reservation_object are signaled without using the ww_mutex.

reservation_object_get_excl() is added because touching the fence_excl
member directly will trigger a sparse warning.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/base/dma-buf.c      |   47 +++++-
 drivers/base/reservation.c  |  336 ++++++++++++++++++++++++++++++++++++++++---
 include/linux/fence.h       |   20 ++-
 include/linux/reservation.h |   52 +++++--
 4 files changed, 400 insertions(+), 55 deletions(-)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index cb8379dfeed5..f3014c448e1e 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -137,7 +137,7 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 	struct reservation_object_list *fobj;
 	struct fence *fence_excl;
 	unsigned long events;
-	unsigned shared_count;
+	unsigned shared_count, seq;
 
 	dmabuf = file->private_data;
 	if (!dmabuf || !dmabuf->resv)
@@ -151,14 +151,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 	if (!events)
 		return 0;
 
-	ww_mutex_lock(&resv->lock, NULL);
+retry:
+	seq = read_seqcount_begin(&resv->seq);
+	rcu_read_lock();
 
-	fobj = resv->fence;
-	if (!fobj)
-		goto out;
-
-	shared_count = fobj->shared_count;
-	fence_excl = resv->fence_excl;
+	fobj = rcu_dereference(resv->fence);
+	if (fobj)
+		shared_count = fobj->shared_count;
+	else
+		shared_count = 0;
+	fence_excl = rcu_dereference(resv->fence_excl);
+	if (read_seqcount_retry(&resv->seq, seq)) {
+		rcu_read_unlock();
+		goto retry;
+	}
 
 	if (fence_excl && (!(events & POLLOUT) || shared_count == 0)) {
 		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl;
@@ -176,14 +182,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 		spin_unlock_irq(&dmabuf->poll.lock);
 
 		if (events & pevents) {
-			if (!fence_add_callback(fence_excl, &dcb->cb,
+			if (!fence_get_rcu(fence_excl)) {
+				/* force a recheck */
+				events &= ~pevents;
+				dma_buf_poll_cb(NULL, &dcb->cb);
+			} else if (!fence_add_callback(fence_excl, &dcb->cb,
 						       dma_buf_poll_cb)) {
 				events &= ~pevents;
+				fence_put(fence_excl);
 			} else {
 				/*
 				 * No callback queued, wake up any additional
 				 * waiters.
 				 */
+				fence_put(fence_excl);
 				dma_buf_poll_cb(NULL, &dcb->cb);
 			}
 		}
@@ -205,13 +217,26 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 			goto out;
 
 		for (i = 0; i < shared_count; ++i) {
-			struct fence *fence = fobj->shared[i];
+			struct fence *fence = rcu_dereference(fobj->shared[i]);
 
+			if (!fence_get_rcu(fence)) {
+				/*
+				 * fence refcount dropped to zero, this means
+				 * that fobj has been freed
+				 *
+				 * call dma_buf_poll_cb and force a recheck!
+				 */
+				events &= ~POLLOUT;
+				dma_buf_poll_cb(NULL, &dcb->cb);
+				break;
+			}
 			if (!fence_add_callback(fence, &dcb->cb,
 						dma_buf_poll_cb)) {
+				fence_put(fence);
 				events &= ~POLLOUT;
 				break;
 			}
+			fence_put(fence);
 		}
 
 		/* No callback queued, wake up any additional waiters. */
@@ -220,7 +245,7 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
 	}
 
 out:
-	ww_mutex_unlock(&resv->lock);
+	rcu_read_unlock();
 	return events;
 }
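
The poll conversion above uses the standard seqcount read/retry idiom:
take a snapshot under rcu_read_lock(), then throw the snapshot away and
start over if a writer raced with the read side. Stripped of the dma-buf
details, the pattern looks roughly like this (a sketch, not code from
this patch):

    unsigned seq;
    struct fence *excl;

retry:
    seq = read_seqcount_begin(&resv->seq);
    rcu_read_lock();
    excl = rcu_dereference(resv->fence_excl);
    if (read_seqcount_retry(&resv->seq, seq)) {
        /* a writer raced with us, the snapshot is inconsistent */
        rcu_read_unlock();
        goto retry;
    }
    /* take a reference with fence_get_rcu() before using excl */
    rcu_read_unlock();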
 
diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
index e6166723a9ae..3c97c8fa8d02 100644
--- a/drivers/base/reservation.c
+++ b/drivers/base/reservation.c
@@ -38,6 +38,11 @@
 DEFINE_WW_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
+struct lock_class_key reservation_seqcount_class;
+EXPORT_SYMBOL(reservation_seqcount_class);
+
+const char reservation_seqcount_string[] = "reservation_seqcount";
+EXPORT_SYMBOL(reservation_seqcount_string);
 /*
  * Reserve space to add a shared fence to a reservation_object,
  * must be called with obj->lock held.
@@ -82,27 +87,37 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
 {
 	u32 i;
 
+	fence_get(fence);
+
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+
 	for (i = 0; i < fobj->shared_count; ++i) {
-		if (fobj->shared[i]->context == fence->context) {
-			struct fence *old_fence = fobj->shared[i];
+		struct fence *old_fence;
 
-			fence_get(fence);
+		old_fence = rcu_dereference_protected(fobj->shared[i],
+						reservation_object_held(obj));
 
-			fobj->shared[i] = fence;
+		if (old_fence->context == fence->context) {
+			/* memory barrier is added by write_seqcount_begin */
+			RCU_INIT_POINTER(fobj->shared[i], fence);
+			write_seqcount_end(&obj->seq);
+			preempt_enable();
 
 			fence_put(old_fence);
 			return;
 		}
 	}
 
-	fence_get(fence);
-	fobj->shared[fobj->shared_count] = fence;
 	/*
-	 * make the new fence visible before incrementing
-	 * fobj->shared_count
+	 * memory barrier is added by write_seqcount_begin,
+	 * fobj->shared_count is protected by this lock too
 	 */
-	smp_wmb();
+	RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
 	fobj->shared_count++;
+
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 }
 
 static void
@@ -112,11 +127,12 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 				      struct fence *fence)
 {
 	unsigned i;
+	struct fence *old_fence = NULL;
 
 	fence_get(fence);
 
 	if (!old) {
-		fobj->shared[0] = fence;
+		RCU_INIT_POINTER(fobj->shared[0], fence);
 		fobj->shared_count = 1;
 		goto done;
 	}
@@ -130,19 +146,38 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 	fobj->shared_count = old->shared_count;
 
 	for (i = 0; i < old->shared_count; ++i) {
-		if (fence && old->shared[i]->context == fence->context) {
-			fence_put(old->shared[i]);
-			fobj->shared[i] = fence;
-			fence = NULL;
+		struct fence *check;
+
+		check = rcu_dereference_protected(old->shared[i],
+						reservation_object_held(obj));
+
+		if (!old_fence && check->context == fence->context) {
+			old_fence = check;
+			RCU_INIT_POINTER(fobj->shared[i], fence);
 		} else
-			fobj->shared[i] = old->shared[i];
+			RCU_INIT_POINTER(fobj->shared[i], check);
+	}
+	if (!old_fence) {
+		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
+		fobj->shared_count++;
 	}
-	if (fence)
-		fobj->shared[fobj->shared_count++] = fence;
 
 done:
-	obj->fence = fobj;
-	kfree(old);
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+	/*
+	 * RCU_INIT_POINTER can be used here,
+	 * seqcount provides the necessary barriers
+	 */
+	RCU_INIT_POINTER(obj->fence, fobj);
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
+
+	if (old)
+		kfree_rcu(old, rcu);
+
+	if (old_fence)
+		fence_put(old_fence);
 }
 
 /*
@@ -158,7 +193,7 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 	obj->staged = NULL;
 
 	if (!fobj) {
-		BUG_ON(old->shared_count == old->shared_max);
+		BUG_ON(old->shared_count >= old->shared_max);
 		reservation_object_add_shared_inplace(obj, old, fence);
 	} else
 		reservation_object_add_shared_replace(obj, old, fobj, fence);
@@ -168,26 +203,275 @@ EXPORT_SYMBOL(reservation_object_add_shared_fence);
 void reservation_object_add_excl_fence(struct reservation_object *obj,
 				       struct fence *fence)
 {
-	struct fence *old_fence = obj->fence_excl;
+	struct fence *old_fence = reservation_object_get_excl(obj);
 	struct reservation_object_list *old;
 	u32 i = 0;
 
 	old = reservation_object_get_list(obj);
-	if (old) {
+	if (old)
 		i = old->shared_count;
-		old->shared_count = 0;
-	}
 
 	if (fence)
 		fence_get(fence);
 
-	obj->fence_excl = fence;
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+	/* write_seqcount_begin provides the necessary memory barrier */
+	RCU_INIT_POINTER(obj->fence_excl, fence);
+	if (old)
+		old->shared_count = 0;
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 
 	/* inplace update, no shared fences */
 	while (i--)
-		fence_put(old->shared[i]);
+		fence_put(rcu_dereference_protected(old->shared[i],
+						reservation_object_held(obj)));
 
 	if (old_fence)
 		fence_put(old_fence);
 }
 EXPORT_SYMBOL(reservation_object_add_excl_fence);
+
+int reservation_object_get_fences_rcu(struct reservation_object *obj,
+				      struct fence **pfence_excl,
+				      unsigned *pshared_count,
+				      struct fence ***pshared)
+{
+	unsigned shared_count = 0;
+	unsigned retry = 1;
+	struct fence **shared = NULL, *fence_excl = NULL;
+	int ret = 0;
+
+	while (retry) {
+		struct reservation_object_list *fobj;
+		unsigned seq;
+
+		seq = read_seqcount_begin(&obj->seq);
+
+		rcu_read_lock();
+
+		fobj = rcu_dereference(obj->fence);
+		if (fobj) {
+			struct fence **nshared;
+			size_t sz = sizeof(*shared) * fobj->shared_max;
+
+			nshared = krealloc(shared, sz,
+					   GFP_NOWAIT | __GFP_NOWARN);
+			if (!nshared) {
+				rcu_read_unlock();
+				nshared = krealloc(shared, sz, GFP_KERNEL);
+				if (nshared) {
+					shared = nshared;
+					continue;
+				}
+
+				ret = -ENOMEM;
+				shared_count = 0;
+				break;
+			}
+			shared = nshared;
+			memcpy(shared, fobj->shared, sz);
+			shared_count = fobj->shared_count;
+		} else
+			shared_count = 0;
+		fence_excl = rcu_dereference(obj->fence_excl);
+
+		retry = read_seqcount_retry(&obj->seq, seq);
+		if (retry)
+			goto unlock;
+
+		if (!fence_excl || fence_get_rcu(fence_excl)) {
+			unsigned i;
+
+			for (i = 0; i < shared_count; ++i) {
+				if (fence_get_rcu(shared[i]))
+					continue;
+
+				/* uh oh, refcount failed, abort and retry */
+				while (i--)
+					fence_put(shared[i]);
+
+				if (fence_excl) {
+					fence_put(fence_excl);
+					fence_excl = NULL;
+				}
+
+				retry = 1;
+				break;
+			}
+		} else
+			retry = 1;
+
+unlock:
+		rcu_read_unlock();
+	}
+	*pshared_count = shared_count;
+	if (shared_count)
+		*pshared = shared;
+	else {
+		*pshared = NULL;
+		kfree(shared);
+	}
+	*pfence_excl = fence_excl;
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
+
+long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+					 bool wait_all, bool intr,
+					 unsigned long timeout)
+{
+	struct fence *fence;
+	unsigned seq, shared_count, i = 0;
+	long ret = timeout;
+
+retry:
+	fence = NULL;
+	shared_count = 0;
+	seq = read_seqcount_begin(&obj->seq);
+	rcu_read_lock();
+
+	if (wait_all) {
+		struct reservation_object_list *fobj = rcu_dereference(obj->fence);
+
+		if (fobj)
+			shared_count = fobj->shared_count;
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		for (i = 0; i < shared_count; ++i) {
+			struct fence *lfence = rcu_dereference(fobj->shared[i]);
+
+			if (test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags))
+				continue;
+
+			if (!fence_get_rcu(lfence))
+				goto unlock_retry;
+
+			if (fence_is_signaled(lfence)) {
+				fence_put(lfence);
+				continue;
+			}
+
+			fence = lfence;
+			break;
+		}
+	}
+
+	if (!shared_count) {
+		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		if (fence_excl &&
+		    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence_excl->flags)) {
+			if (!fence_get_rcu(fence_excl))
+				goto unlock_retry;
+
+			if (fence_is_signaled(fence_excl))
+				fence_put(fence_excl);
+			else
+				fence = fence_excl;
+		}
+	}
+
+	rcu_read_unlock();
+	if (fence) {
+		ret = fence_wait_timeout(fence, intr, ret);
+		fence_put(fence);
+		if (ret > 0 && wait_all && (i + 1 < shared_count))
+			goto retry;
+	}
+	return ret;
+
+unlock_retry:
+	rcu_read_unlock();
+	goto retry;
+}
+EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu);
+
+
+static inline int
+reservation_object_test_signaled_single(struct fence *passed_fence)
+{
+	struct fence *fence, *lfence = passed_fence;
+	int ret = 1;
+
+	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
+		fence = fence_get_rcu(lfence);
+		if (!fence)
+			return -1;
+
+		ret = !!fence_is_signaled(fence);
+		fence_put(fence);
+	}
+	return ret;
+}
+
+bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
+					  bool test_all)
+{
+	unsigned seq, shared_count;
+	int ret = true;
+
+retry:
+	shared_count = 0;
+	seq = read_seqcount_begin(&obj->seq);
+	rcu_read_lock();
+
+	if (test_all) {
+		unsigned i;
+
+		struct reservation_object_list *fobj = rcu_dereference(obj->fence);
+
+		if (fobj)
+			shared_count = fobj->shared_count;
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		for (i = 0; i < shared_count; ++i) {
+			struct fence *fence = rcu_dereference(fobj->shared[i]);
+
+			ret = reservation_object_test_signaled_single(fence);
+			if (ret < 0)
+				goto unlock_retry;
+			else if (!ret)
+				break;
+		}
+
+		/*
+		 * There could be a read_seqcount_retry here, but nothing cares
+		 * about whether it's the old or newer fence pointers that are
+		 * signaled. That race could still have happened after checking
+		 * read_seqcount_retry. If you care, use ww_mutex_lock.
+		 */
+	}
+
+	if (!shared_count) {
+		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		if (fence_excl) {
+			ret = reservation_object_test_signaled_single(fence_excl);
+			if (ret < 0)
+				goto unlock_retry;
+		}
+	}
+
+	rcu_read_unlock();
+	return ret;
+
+unlock_retry:
+	rcu_read_unlock();
+	goto retry;
+}
+EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
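
One detail worth noting in reservation_object_get_fences_rcu() above:
the fence array is first (re)allocated with GFP_NOWAIT inside the RCU
read-side critical section, where sleeping is not allowed, and only
falls back to a sleeping GFP_KERNEL allocation after rcu_read_unlock(),
restarting the snapshot afterwards. Reduced to its skeleton (a sketch of
the pattern, not new code):

    nshared = krealloc(shared, sz, GFP_NOWAIT | __GFP_NOWARN);
    if (!nshared) {
        rcu_read_unlock();
        /* a sleeping allocation is fine outside the read section */
        nshared = krealloc(shared, sz, GFP_KERNEL);
        /* ... on success, go back and retake the snapshot ... */
    }
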
diff --git a/include/linux/fence.h b/include/linux/fence.h
index d13b5ab61726..4b7002457af0 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -31,7 +31,7 @@
 #include <linux/kref.h>
 #include <linux/sched.h>
 #include <linux/printk.h>
-#include <linux/slab.h>
+#include <linux/rcupdate.h>
 
 struct fence;
 struct fence_ops;
@@ -41,6 +41,7 @@ struct fence_cb;
  * struct fence - software synchronization primitive
  * @refcount: refcount for this fence
  * @ops: fence_ops associated with this fence
+ * @rcu: used for releasing fence with kfree_rcu
  * @cb_list: list of all callbacks to call
  * @lock: spin_lock_irqsave used for locking
  * @context: execution context this fence belongs to, returned by
@@ -74,6 +75,7 @@ struct fence_cb;
 struct fence {
 	struct kref refcount;
 	const struct fence_ops *ops;
+	struct rcu_head rcu;
 	struct list_head cb_list;
 	spinlock_t *lock;
 	unsigned context, seqno;
@@ -190,11 +192,25 @@ static inline void fence_get(struct fence *fence)
 	kref_get(&fence->refcount);
 }
 
+/**
+ * fence_get_rcu - get a fence from a reservation_object_list with rcu read lock
+ * @fence:	[in]	fence to increase refcount of
+ *
+ * Function returns NULL if no refcount could be obtained, or the fence.
+ */
+static inline struct fence *fence_get_rcu(struct fence *fence)
+{
+	if (kref_get_unless_zero(&fence->refcount))
+		return fence;
+	else
+		return NULL;
+}
+
 extern void release_fence(struct kref *kref);
 
 static inline void free_fence(struct fence *fence)
 {
-	kfree(fence);
+	kfree_rcu(fence, rcu);
 }
 
 /**
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 2affe67dea6e..5a0b64cf68b4 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -42,22 +42,29 @@
 #include <linux/ww_mutex.h>
 #include <linux/fence.h>
 #include <linux/slab.h>
+#include <linux/seqlock.h>
+#include <linux/rcupdate.h>
 
 extern struct ww_class reservation_ww_class;
+extern struct lock_class_key reservation_seqcount_class;
+extern const char reservation_seqcount_string[];
 
 struct reservation_object_list {
+	struct rcu_head rcu;
 	u32 shared_count, shared_max;
-	struct fence *shared[];
+	struct fence __rcu *shared[];
 };
 
 struct reservation_object {
 	struct ww_mutex lock;
+	seqcount_t seq;
 
-	struct fence *fence_excl;
-	struct reservation_object_list *fence;
+	struct fence __rcu *fence_excl;
+	struct reservation_object_list __rcu *fence;
 	struct reservation_object_list *staged;
 };
 
+#define reservation_object_held(obj) lockdep_is_held(&(obj)->lock.base)
 #define reservation_object_assert_held(obj) \
 	lockdep_assert_held(&(obj)->lock.base)
 
@@ -66,8 +73,9 @@ reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
 
-	obj->fence_excl = NULL;
-	obj->fence = NULL;
+	__seqcount_init(&obj->seq, reservation_seqcount_string, &reservation_seqcount_class);
+	RCU_INIT_POINTER(obj->fence, NULL);
+	RCU_INIT_POINTER(obj->fence_excl, NULL);
 	obj->staged = NULL;
 }
 
@@ -76,18 +84,20 @@ reservation_object_fini(struct reservation_object *obj)
 {
 	int i;
 	struct reservation_object_list *fobj;
+	struct fence *excl;
 
 	/*
 	 * This object should be dead and all references must have
-	 * been released to it.
+	 * been released to it, so rcu protection is not needed.
 	 */
-	if (obj->fence_excl)
-		fence_put(obj->fence_excl);
+	excl = rcu_dereference_protected(obj->fence_excl, 1);
+	if (excl)
+		fence_put(excl);
 
-	fobj = obj->fence;
+	fobj = rcu_dereference_protected(obj->fence, 1);
 	if (fobj) {
 		for (i = 0; i < fobj->shared_count; ++i)
-			fence_put(fobj->shared[i]);
+			fence_put(rcu_dereference_protected(fobj->shared[i], 1));
 
 		kfree(fobj);
 	}
@@ -99,17 +109,15 @@ reservation_object_fini(struct reservation_object *obj)
 static inline struct reservation_object_list *
 reservation_object_get_list(struct reservation_object *obj)
 {
-	reservation_object_assert_held(obj);
-
-	return obj->fence;
+	return rcu_dereference_protected(obj->fence,
+					 reservation_object_held(obj));
 }
 
 static inline struct fence *
 reservation_object_get_excl(struct reservation_object *obj)
 {
-	reservation_object_assert_held(obj);
-
-	return obj->fence_excl;
+	return rcu_dereference_protected(obj->fence_excl,
+					 reservation_object_held(obj));
 }
 
 int reservation_object_reserve_shared(struct reservation_object *obj);
@@ -119,4 +127,16 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 void reservation_object_add_excl_fence(struct reservation_object *obj,
 				       struct fence *fence);
 
+int reservation_object_get_fences_rcu(struct reservation_object *obj,
+				      struct fence **pfence_excl,
+				      unsigned *pshared_count,
+				      struct fence ***pshared);
+
+long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+					 bool wait_all, bool intr,
+					 unsigned long timeout);
+
+bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
+					  bool test_all);
+
 #endif /* _LINUX_RESERVATION_H */

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-18 10:36   ` Maarten Lankhorst
@ 2014-06-19  1:13     ` Greg KH
  -1 siblings, 0 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19  1:13 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> +#define CREATE_TRACE_POINTS
> +#include <trace/events/fence.h>
> +
> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);

Are you really willing to live with these as tracepoints for forever?
What is the use of them in debugging?  Was it just for debugging the
fence code, or for something else?

> +/**
> + * fence_context_alloc - allocate an array of fence contexts
> + * @num:	[in]	number of contexts to allocate
> + *
> + * This function returns the first index of the newly allocated block of contexts.
> + * The fence context is used for setting fence->context to a unique number.
> + */
> +unsigned fence_context_alloc(unsigned num)
> +{
> +	BUG_ON(!num);
> +	return atomic_add_return(num, &fence_context_counter) - num;
> +}
> +EXPORT_SYMBOL(fence_context_alloc);

EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
Traditionally all of the driver core exports have been with this
marking, any objection to making that change here as well?

> +int __fence_signal(struct fence *fence)
> +{
> +	struct fence_cb *cur, *tmp;
> +	int ret = 0;
> +
> +	if (WARN_ON(!fence))
> +		return -EINVAL;
> +
> +	if (!ktime_to_ns(fence->timestamp)) {
> +		fence->timestamp = ktime_get();
> +		smp_mb__before_atomic();
> +	}
> +
> +	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> +		ret = -EINVAL;
> +
> +		/*
> +		 * we might have raced with the unlocked fence_signal,
> +		 * still run through all callbacks
> +		 */
> +	} else
> +		trace_fence_signaled(fence);
> +
> +	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
> +		list_del_init(&cur->node);
> +		cur->func(fence, cur);
> +	}
> +	return ret;
> +}
> +EXPORT_SYMBOL(__fence_signal);

Don't export a function with __ in front of it, you are saying that an
internal function is somehow "valid" for everyone else to call?  Why?
You aren't even documenting the function, so who knows how to use it?

> +/**
> + * fence_signal - signal completion of a fence
> + * @fence: the fence to signal
> + *
> + * Signal completion for software callbacks on a fence, this will unblock
> + * fence_wait() calls and run all the callbacks added with
> + * fence_add_callback(). Can be called multiple times, but since a fence
> + * can only go from unsignaled to signaled state, it will only be effective
> + * the first time.
> + */
> +int fence_signal(struct fence *fence)
> +{
> +	unsigned long flags;
> +
> +	if (!fence)
> +		return -EINVAL;
> +
> +	if (!ktime_to_ns(fence->timestamp)) {
> +		fence->timestamp = ktime_get();
> +		smp_mb__before_atomic();
> +	}
> +
> +	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return -EINVAL;
> +
> +	trace_fence_signaled(fence);
> +
> +	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
> +		struct fence_cb *cur, *tmp;
> +
> +		spin_lock_irqsave(fence->lock, flags);
> +		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
> +			list_del_init(&cur->node);
> +			cur->func(fence, cur);
> +		}
> +		spin_unlock_irqrestore(fence->lock, flags);
> +	}
> +	return 0;
> +}
> +EXPORT_SYMBOL(fence_signal);

So, why should I use this over __fence_signal?  What is the difference?
Am I missing some documentation somewhere?
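
My best guess from the code alone, since nothing in the patch documents
it, is that __fence_signal is for callers that already hold fence->lock
(as fence_default_wait below does), while fence_signal takes the lock
itself before walking cb_list.  Roughly:

    spin_lock_irqsave(fence->lock, flags);
    __fence_signal(fence);          /* lock must already be held */
    spin_unlock_irqrestore(fence->lock, flags);

    fence_signal(fence);            /* takes fence->lock internally */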

> +void release_fence(struct kref *kref)
> +{
> +	struct fence *fence =
> +			container_of(kref, struct fence, refcount);
> +
> +	trace_fence_destroy(fence);
> +
> +	BUG_ON(!list_empty(&fence->cb_list));
> +
> +	if (fence->ops->release)
> +		fence->ops->release(fence);
> +	else
> +		kfree(fence);
> +}
> +EXPORT_SYMBOL(release_fence);

fence_release() makes it more unified with the other functions in this
file, right?

> +/**
> + * fence_default_wait - default sleep until the fence gets signaled
> + * or until timeout elapses
> + * @fence:	[in]	the fence to wait on
> + * @intr:	[in]	if true, do an interruptible wait
> + * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
> + *
> + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
> + * remaining timeout in jiffies on success.
> + */
> +long

Shouldn't this be signed to be explicit?

> +fence_default_wait(struct fence *fence, bool intr, signed long timeout)
> +{
> +	struct default_wait_cb cb;
> +	unsigned long flags;
> +	long ret = timeout;
> +	bool was_set;
> +
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return timeout;
> +
> +	spin_lock_irqsave(fence->lock, flags);
> +
> +	if (intr && signal_pending(current)) {
> +		ret = -ERESTARTSYS;
> +		goto out;
> +	}
> +
> +	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
> +
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		goto out;
> +
> +	if (!was_set) {
> +		trace_fence_enable_signal(fence);
> +
> +		if (!fence->ops->enable_signaling(fence)) {
> +			__fence_signal(fence);
> +			goto out;
> +		}
> +	}
> +
> +	cb.base.func = fence_default_wait_cb;
> +	cb.task = current;
> +	list_add(&cb.base.node, &fence->cb_list);
> +
> +	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
> +		if (intr)
> +			__set_current_state(TASK_INTERRUPTIBLE);
> +		else
> +			__set_current_state(TASK_UNINTERRUPTIBLE);
> +		spin_unlock_irqrestore(fence->lock, flags);
> +
> +		ret = schedule_timeout(ret);
> +
> +		spin_lock_irqsave(fence->lock, flags);
> +		if (ret > 0 && intr && signal_pending(current))
> +			ret = -ERESTARTSYS;
> +	}
> +
> +	if (!list_empty(&cb.base.node))
> +		list_del(&cb.base.node);
> +	__set_current_state(TASK_RUNNING);
> +
> +out:
> +	spin_unlock_irqrestore(fence->lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(fence_default_wait);
> +
> +/**
> + * __fence_init - Initialize a custom fence.
> + * @fence:	[in]	the fence to initialize
> + * @ops:	[in]	the fence_ops for operations on this fence
> + * @lock:	[in]	the irqsafe spinlock to use for locking this fence
> + * @context:	[in]	the execution context this fence is run on
> + * @seqno:	[in]	a linear increasing sequence number for this context
> + *
> + * Initializes an allocated fence; the caller doesn't have to keep its
> + * refcount after committing with this fence, but it will need to hold a
> + * refcount again if fence_ops.enable_signaling gets called. This can
> + * be used for implementing other types of fence.
> + *
> + * context and seqno are used for easy comparison between fences, allowing
> + * to check which fence is later by simply using fence_later.
> + */
> +void
> +__fence_init(struct fence *fence, const struct fence_ops *ops,
> +	     spinlock_t *lock, unsigned context, unsigned seqno)
> +{
> +	BUG_ON(!lock);
> +	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
> +	       !ops->get_driver_name || !ops->get_timeline_name);
> +
> +	kref_init(&fence->refcount);
> +	fence->ops = ops;
> +	INIT_LIST_HEAD(&fence->cb_list);
> +	fence->lock = lock;
> +	fence->context = context;
> +	fence->seqno = seqno;
> +	fence->flags = 0UL;
> +
> +	trace_fence_init(fence);
> +}
> +EXPORT_SYMBOL(__fence_init);

Again with the __ exported function...

I don't even see a fence_init() function anywhere, why the __ ?


> diff --git a/include/linux/fence.h b/include/linux/fence.h
> new file mode 100644
> index 000000000000..65f2a01ee7e4
> --- /dev/null
> +++ b/include/linux/fence.h
> @@ -0,0 +1,333 @@
> +/*
> + * Fence mechanism for dma-buf to allow for asynchronous dma access
> + *
> + * Copyright (C) 2012 Canonical Ltd
> + * Copyright (C) 2012 Texas Instruments
> + *
> + * Authors:
> + * Rob Clark <robdclark@gmail.com>
> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __LINUX_FENCE_H
> +#define __LINUX_FENCE_H
> +
> +#include <linux/err.h>
> +#include <linux/wait.h>
> +#include <linux/list.h>
> +#include <linux/bitops.h>
> +#include <linux/kref.h>
> +#include <linux/sched.h>
> +#include <linux/printk.h>
> +
> +struct fence;
> +struct fence_ops;
> +struct fence_cb;
> +
> +/**
> + * struct fence - software synchronization primitive
> + * @refcount: refcount for this fence
> + * @ops: fence_ops associated with this fence
> + * @cb_list: list of all callbacks to call
> + * @lock: spin_lock_irqsave used for locking
> + * @context: execution context this fence belongs to, returned by
> + *           fence_context_alloc()
> + * @seqno: the sequence number of this fence inside the execution context,
> + * can be compared to decide which fence would be signaled later.
> + * @flags: A mask of FENCE_FLAG_* defined below
> + * @timestamp: Timestamp when the fence was signaled.
> + * @status: Optional, only valid if < 0, must be set before calling
> + * fence_signal, indicates that the fence has completed with an error.
> + *
> + * the flags member must be manipulated and read using the appropriate
> + * atomic ops (bit_*), so taking the spinlock will not be needed most
> + * of the time.
> + *
> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
> + * implementer of the fence for its own purposes. Can be used in different
> + * ways by different fence implementers, so do not rely on this.
> + *
> + * *) Since atomic bitops are used, this is not guaranteed to be the case.
> + * Particularly, if the bit was set, but fence_signal was called right
> + * before this bit was set, it would have been able to set the
> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
> + * after fence_signal was called, any enable_signaling call will have either
> + * been completed, or never called at all.
> + */
> +struct fence {
> +	struct kref refcount;
> +	const struct fence_ops *ops;
> +	struct list_head cb_list;
> +	spinlock_t *lock;
> +	unsigned context, seqno;
> +	unsigned long flags;
> +	ktime_t timestamp;
> +	int status;
> +};
> +
> +enum fence_flag_bits {
> +	FENCE_FLAG_SIGNALED_BIT,
> +	FENCE_FLAG_ENABLE_SIGNAL_BIT,
> +	FENCE_FLAG_USER_BITS, /* must always be last member */
> +};
> +
> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
> +
> +/**
> + * struct fence_cb - callback for fence_add_callback
> + * @node: used by fence_add_callback to append this struct to fence::cb_list
> + * @func: fence_func_t to call
> + *
> + * This struct will be initialized by fence_add_callback, additional
> + * data can be passed along by embedding fence_cb in another struct.
> + */
> +struct fence_cb {
> +	struct list_head node;
> +	fence_func_t func;
> +};
> +
> +/**
> + * struct fence_ops - operations implemented for fence
> + * @get_driver_name: returns the driver name.
> + * @get_timeline_name: return the name of the context this fence belongs to.
> + * @enable_signaling: enable software signaling of fence.
> + * @signaled: [optional] peek whether the fence is signaled, can be null.
> + * @wait: custom wait implementation, or fence_default_wait.
> + * @release: [optional] called on destruction of fence, can be null
> + * @fill_driver_data: [optional] callback to fill in free-form debug info
> + * Returns amount of bytes filled, or -errno.
> + * @fence_value_str: [optional] fills in the value of the fence as a string
> + * @timeline_value_str: [optional] fills in the current value of the timeline
> + * as a string
> + *
> + * Notes on enable_signaling:
> + * For fence implementations that have the capability for hw->hw
> + * signaling, they can implement this op to enable the necessary
> + * irqs, or insert commands into cmdstream, etc.  This is called
> + * in the first wait() or add_callback() path to let the fence
> + * implementation know that there is another driver waiting on
> + * the signal (ie. hw->sw case).
> + *
> + * This function can be called from atomic context, but not
> + * from irq context, so normal spinlocks can be used.
> + *
> + * A return value of false indicates the fence already passed,
> + * or some failure occurred that made it impossible to enable
> + * signaling. True indicates successful enabling.
> + *
> + * fence->status may be set in enable_signaling, but only when false is
> + * returned.
> + *
> + * Calling fence_signal before enable_signaling is called allows
> + * for a tiny race window in which enable_signaling is called during,
> + * before, or after fence_signal. To fight this, it is recommended
> + * that before enable_signaling returns true an extra reference is
> + * taken on the fence, to be released when the fence is signaled.
> + * This will mean fence_signal will still be called twice, but
> + * the second time will be a noop since it was already signaled.
> + *
> + * Notes on signaled:
> + * May set fence->status if returning true.
> + *
> + * Notes on wait:
> + * Must not be NULL, set to fence_default_wait for default implementation.
> + * The fence_default_wait implementation should work for any fence, as long
> + * as enable_signaling works correctly.
> + *
> + * Must return -ERESTARTSYS if the wait is intr = true and the wait was
> + * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
> + * timed out. Can also return other error values on custom implementations,
> + * which should be treated as if the fence is signaled. For example a hardware
> + * lockup could be reported like that.
> + *
> + * Notes on release:
> + * Can be NULL, this function allows additional commands to run on
> + * destruction of the fence. Can be called from irq context.
> + * If pointer is set to NULL, kfree will get called instead.
> + */
> +
> +struct fence_ops {
> +	const char * (*get_driver_name)(struct fence *fence);
> +	const char * (*get_timeline_name)(struct fence *fence);
> +	bool (*enable_signaling)(struct fence *fence);
> +	bool (*signaled)(struct fence *fence);
> +	long (*wait)(struct fence *fence, bool intr, signed long timeout);
> +	void (*release)(struct fence *fence);
> +
> +	int (*fill_driver_data)(struct fence *fence, void *data, int size);
> +	void (*fence_value_str)(struct fence *fence, char *str, int size);
> +	void (*timeline_value_str)(struct fence *fence, char *str, int size);
> +};
> +
> +void __fence_init(struct fence *fence, const struct fence_ops *ops,
> +		  spinlock_t *lock, unsigned context, unsigned seqno);
> +
> +/**
> + * fence_get - increases refcount of the fence
> + * @fence:	[in]	fence to increase refcount of
> + */
> +static inline void fence_get(struct fence *fence)
> +{
> +	if (WARN_ON(!fence))
> +		return;

Why warn?

> +	kref_get(&fence->refcount);
> +}

Why is this inline?

> +
> +extern void release_fence(struct kref *kref);
> +
> +/**
> + * fence_put - decreases refcount of the fence
> + * @fence:	[in]	fence to reduce refcount of
> + */
> +static inline void fence_put(struct fence *fence)
> +{
> +	if (WARN_ON(!fence))
> +		return;

Same as above.

> +	kref_put(&fence->refcount, release_fence);
> +}

Why inline?

> +int fence_signal(struct fence *fence);
> +int __fence_signal(struct fence *fence);

Let's randomly pick a function to call... :)

> +static inline bool
> +__fence_is_signaled(struct fence *fence)
> +{
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return true;
> +
> +	if (fence->ops->signaled && fence->ops->signaled(fence)) {
> +		__fence_signal(fence);
> +		return true;
> +	}
> +
> +	return false;
> +}

Oh nice, another __ function :(

> +
> +/**
> + * fence_is_signaled - Return an indication if the fence is signaled yet.
> + * @fence:	[in]	the fence to check
> + *
> + * Returns true if the fence was already signaled, false if not. Since this
> + * function doesn't enable signaling, it is not guaranteed to ever return true
> + * If fence_add_callback, fence_wait or fence_enable_sw_signaling
> + * haven't been called before.
> + *
> + * It's recommended for seqno fences to call fence_signal when the
> + * operation is complete, it makes it possible to prevent issues from
> + * wraparound between time of issue and time of use by checking the return
> + * value of this function before calling hardware-specific wait instructions.
> + */
> +static inline bool
> +fence_is_signaled(struct fence *fence)
> +{
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return true;
> +
> +	if (fence->ops->signaled && fence->ops->signaled(fence)) {
> +		fence_signal(fence);
> +		return true;
> +	}
> +
> +	return false;
> +}


Why inline for all of these, does it really matter for speed?

> +/**
> + * fence_later - return the chronologically later fence
> + * @f1:	[in]	the first fence from the same context
> + * @f2:	[in]	the second fence from the same context
> + *
> + * Returns NULL if both fences are signaled, otherwise the fence that would be
> + * signaled last. Both fences must be from the same context, since a seqno is
> + * not re-used across contexts.
> + */
> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
> +{
> +	BUG_ON(f1->context != f2->context);

Nice, you just crashed the kernel, making it impossible to debug or
recover :(

Don't do that.
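
If the contexts really must match, a non-fatal check along these lines
(illustrative only, not code from this series) reports the misuse
without taking the whole machine down:

    if (WARN_ON(f1->context != f2->context))
        return NULL;    /* complain, but keep the kernel alive */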

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-18 10:36   ` Maarten Lankhorst
  (?)
  (?)
@ 2014-06-19  1:15   ` Greg KH
  -1 siblings, 0 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19  1:15 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.

I don't like this paragraph in all of the files, but if you insist that
some lawyer wants it there, I'll live with it...

> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.

That's just not needed at all and is fluff.  Please remove.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-18 10:37   ` Maarten Lankhorst
  (?)
@ 2014-06-19  1:15   ` Greg KH
  2014-06-19  6:37       ` Daniel Vetter
  -1 siblings, 1 reply; 73+ messages in thread
From: Greg KH @ 2014-06-19  1:15 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

On Wed, Jun 18, 2014 at 12:37:11PM +0200, Maarten Lankhorst wrote:
> Just to show it's easy.
> 
> Android syncpoints can be mapped to a timeline. This removes the need
> to maintain a separate api for synchronization. I've left the android
> trace events in place, but the core fence events should already be
> sufficient for debugging.
> 
> v2:
> - Call fence_remove_callback in sync_fence_free if not all fences have fired.
> v3:
> - Merge Colin Cross' bugfixes, and the android fence merge optimization.
> v4:
> - Merge with the upstream fixes.
> v5:
> - Fix small style issues pointed out by Thomas Hellstrom.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> Acked-by: John Stultz <john.stultz@linaro.org>
> ---
>  drivers/staging/android/Kconfig      |    1 
>  drivers/staging/android/Makefile     |    2 
>  drivers/staging/android/sw_sync.c    |    6 
>  drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
>  drivers/staging/android/sync.h       |   79 ++-
>  drivers/staging/android/sync_debug.c |  247 +++++++++
>  drivers/staging/android/trace/sync.h |   12 
>  7 files changed, 609 insertions(+), 651 deletions(-)
>  create mode 100644 drivers/staging/android/sync_debug.c

With these changes, can we pull the android sync logic out of
drivers/staging/ now?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-18 10:36   ` Maarten Lankhorst
                     ` (2 preceding siblings ...)
  (?)
@ 2014-06-19  1:16   ` Greg KH
  2014-06-19  1:25       ` Rob Clark
  -1 siblings, 1 reply; 73+ messages in thread
From: Greg KH @ 2014-06-19  1:16 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: linux-arch, thellstrom, linux-kernel, dri-devel, linaro-mm-sig,
	robdclark, thierry.reding, ccross, daniel, sumit.semwal,
	linux-media

On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> A fence can be attached to a buffer which is being filled or consumed
> by hw, to allow userspace to pass the buffer without waiting to another
> device.  For example, userspace can call page_flip ioctl to display the
> next frame of graphics after kicking the GPU but while the GPU is still
> rendering.  The display device sharing the buffer with the GPU would
> attach a callback to get notified when the GPU's rendering-complete IRQ
> fires, to update the scan-out address of the display, without having to
> wake up userspace.
> 
> A driver must allocate a fence context for each execution ring that can
> run in parallel. The function for this takes an argument with how many
> contexts to allocate:
>   + fence_context_alloc()
> 
> A fence is a transient, one-shot deal.  It is allocated and attached
> to one or more dma-buf's.  When the one that attached it is done with
> the pending operation, it can signal the fence:
>   + fence_signal()
> 
> To get a rough approximation of whether a fence has fired, call:
>   + fence_is_signaled()
> 
> The dma-buf-mgr handles tracking, and waiting on, the fences associated
> with a dma-buf.
> 
> The one pending on the fence can add an async callback:
>   + fence_add_callback()
> 
> The callback can optionally be cancelled with:
>   + fence_remove_callback()
> 
> To wait synchronously, optionally with a timeout:
>   + fence_wait()
>   + fence_wait_timeout()
> 
> When emitting a fence, call:
>   + trace_fence_emit()
> 
> To annotate that a fence is blocking on another fence, call:
>   + trace_fence_annotate_wait_on(fence, on_fence)
> 
> A default software-only implementation is provided, which can be used
> by drivers attaching a fence to a buffer when they have no other means
> for hw sync.  But a memory-backed fence is also envisioned, because it
> is common that GPUs can write to, or poll on, some memory location for
> synchronization.  For example:
> 
>   fence = custom_get_fence(...);
>   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
>     dma_buf *fence_buf = seqno_fence->sync_buf;
>     get_dma_buf(fence_buf);
> 
>     ... tell the hw the memory location to wait ...
>     custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
>   } else {
>     /* fall-back to sw sync */
>     fence_add_callback(fence, my_cb);
>   }
> 
> On SoC platforms, if some other hw mechanism is provided for synchronizing
> between IP blocks, it could be supported as an alternate implementation
> with its own fence ops in a similar way.
> 
> The enable_signaling callback is used to provide sw signaling in case a cpu
> waiter is requested or no compatible hardware signaling could be used.
> 
> The intention is to provide a userspace interface (presumably via eventfd)
> later, to be used in conjunction with dma-buf's mmap support for sw access
> to buffers (or for userspace apps that would prefer to do their own
> synchronization).
> 
> v1: Original
> v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
>     that dma-fence didn't need to care about the sw->hw signaling path
>     (it can be handled same as sw->sw case), and therefore the fence->ops
>     can be simplified and more handled in the core.  So remove the signal,
>     add_callback, cancel_callback, and wait ops, and replace with a simple
>     enable_signaling() op which can be used to inform a fence supporting
>     hw->hw signaling that one or more devices which do not support hw
>     signaling are waiting (and therefore it should enable an irq or do
>     whatever is necessary in order that the CPU is notified when the
>     fence is passed).
> v3: Fix locking fail in attach_fence() and get_fence()
> v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
>     we decided that we need to be able to attach one fence to N dma-buf's,
>     so using the list_head in dma-fence struct would be problematic.
> v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
> v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
>     about checking if fence fired or not. This is broken by design.
>     waitqueue_active during destruction is now fatal, since the signaller
>     should be holding a reference in enable_signalling until it signalled
>     the fence. Pass the original dma_fence_cb along, and call __remove_wait
>     in the dma_fence_callback handler, so that no cleanup needs to be
>     performed.
> v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
>     fence wasn't signaled yet, for example for hardware fences that may
>     choose to signal blindly.
> v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
>     header and fixed include mess. dma-fence.h now includes dma-buf.h
>     All members are now initialized, so kmalloc can be used for
>     allocating a dma-fence. More documentation added.
> v9: Change compiler bitfields to flags, change return type of
>     enable_signaling to bool. Rework dma_fence_wait. Added
>     dma_fence_is_signaled and dma_fence_wait_timeout.
>     s/dma// and change exports to non GPL. Added fence_is_signaled and
>     fence_enable_sw_signaling calls, add ability to override default
>     wait operation.
> v10: remove event_queue, use a custom list, export try_to_wake_up from
>     scheduler. Remove fence lock and use a global spinlock instead,
>     this should hopefully remove all the locking headaches I was having
>     on trying to implement this. enable_signaling is called with this
>     lock held.
> v11:
>     Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
>     However I kept the guarantee that after fence_signal returns, it is
>     guaranteed that enable_signaling has either been called to completion,
>     or will not be called any more.
> 
>     Add contexts and seqno to base fence implementation. This allows you
>     to wait for fewer fences, by testing for seqno + signaled, and then only
>     wait on the later fence.
> 
>     Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
>     A CONFIG_DEBUG_FENCE will be added to turn off the FENCE_TRACE
>     spam, and another option can turn it off at runtime.
> v12:
>     Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
>     and fence->seqno members.
> v13:
>     Fixup CONFIG_FENCE_TRACE kconfig description.
>     Move fence_context_alloc to fence.
>     Simplify fence_later.
>     Kill priv member to fence_cb.
> v14:
>     Remove priv argument from fence_add_callback, oops!
> v15:
>     Remove priv from documentation.
>     Explicitly include linux/atomic.h.
> v16:
>     Add trace events.
>     Import changes required by android syncpoints.
> v17:
>     Use wake_up_state instead of try_to_wake_up. (Colin Cross)
>     Fix up commit description for seqno_fence. (Rob Clark)
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
> Reviewed-by: Rob Clark <robdclark@gmail.com>
> ---
>  Documentation/DocBook/device-drivers.tmpl |    2 
>  drivers/base/Kconfig                      |    9 +
>  drivers/base/Makefile                     |    2 
>  drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
>  include/linux/fence.h                     |  333 +++++++++++++++++++++++
>  include/trace/events/fence.h              |  128 +++++++++
>  6 files changed, 889 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/base/fence.c
>  create mode 100644 include/linux/fence.h
>  create mode 100644 include/trace/events/fence.h

Who is going to sign up to maintain this code?  (hint, it's not me...)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  1:13     ` Greg KH
@ 2014-06-19  1:23       ` Rob Clark
  -1 siblings, 0 replies; 73+ messages in thread
From: Rob Clark @ 2014-06-19  1:23 UTC (permalink / raw)
  To: Greg KH
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> +#define CREATE_TRACE_POINTS
>> +#include <trace/events/fence.h>
>> +
>> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>
> Are you really willing to live with these as tracepoints for forever?
> What is the use of them in debugging?  Was it just for debugging the
> fence code, or for something else?

fwiw, the goal is something like this:

http://people.freedesktop.org/~robclark/perf-supertuxkart.svg

but without needing to make perf understand each driver's custom trace events

(from: http://bloggingthemonkey.blogspot.com/2013/09/freedreno-update-moar-fps.html
)
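
(the idea being that generic tooling could then presumably just hook
the common fence_emit / fence_signaled trace events instead of
per-driver ones)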

BR,
-R


>> +/**
>> + * fence_context_alloc - allocate an array of fence contexts
>> + * @num:     [in]    amount of contexts to allocate
>> + *
>> + * This function will return the first index of the number of fences allocated.
>> + * The fence context is used for setting fence->context to a unique number.
>> + */
>> +unsigned fence_context_alloc(unsigned num)
>> +{
>> +     BUG_ON(!num);
>> +     return atomic_add_return(num, &fence_context_counter) - num;
>> +}
>> +EXPORT_SYMBOL(fence_context_alloc);
>
> EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> Traditionally all of the driver core exports have been with this
> marking, any objection to making that change here as well?
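
(fwiw that would just mean, for each export in fence.c:

    EXPORT_SYMBOL_GPL(fence_context_alloc);

though iirc the exports were deliberately changed to non-GPL around v9
of this series, so that's probably the part that needs discussing)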
>
>> +int __fence_signal(struct fence *fence)
>> +{
>> +     struct fence_cb *cur, *tmp;
>> +     int ret = 0;
>> +
>> +     if (WARN_ON(!fence))
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> +             ret = -EINVAL;
>> +
>> +             /*
>> +              * we might have raced with the unlocked fence_signal,
>> +              * still run through all callbacks
>> +              */
>> +     } else
>> +             trace_fence_signaled(fence);
>> +
>> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +             list_del_init(&cur->node);
>> +             cur->func(fence, cur);
>> +     }
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(__fence_signal);
>
> Don't export a function with __ in front of it, you are saying that an
> internal function is somehow "valid" for everyone else to call?  Why?
> You aren't even documenting the function, so who knows how to use it?
>
>> +/**
>> + * fence_signal - signal completion of a fence
>> + * @fence: the fence to signal
>> + *
>> + * Signal completion for software callbacks on a fence, this will unblock
>> + * fence_wait() calls and run all the callbacks added with
>> + * fence_add_callback(). Can be called multiple times, but since a fence
>> + * can only go from unsignaled to signaled state, it will only be effective
>> + * the first time.
>> + */
>> +int fence_signal(struct fence *fence)
>> +{
>> +     unsigned long flags;
>> +
>> +     if (!fence)
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return -EINVAL;
>> +
>> +     trace_fence_signaled(fence);
>> +
>> +     if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
>> +             struct fence_cb *cur, *tmp;
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +                     list_del_init(&cur->node);
>> +                     cur->func(fence, cur);
>> +             }
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +     }
>> +     return 0;
>> +}
>> +EXPORT_SYMBOL(fence_signal);
>
> So, why should I use this over __fence_signal?  What is the difference?
> Am I missing some documentation somewhere?
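
fwiw, afaict the split is that __fence_signal() is for callers that
already hold fence->lock (fence_default_wait() below uses it exactly
that way), while fence_signal() takes the lock itself when it has
callbacks to run.  Agreed that should be in kerneldoc.  A sketch of a
hypothetical driver irq path -- my_fence/my_context and the pending
list are made up:

    struct my_fence {
            struct fence base;
            struct list_head link;  /* on my_context.pending */
    };

    /* runs with ctx->lock, the spinlock the fences were initialized
     * with, already held: */
    static void my_context_signal(struct my_context *ctx, unsigned seqno)
    {
            struct my_fence *f, *tmp;

            list_for_each_entry_safe(f, tmp, &ctx->pending, link) {
                    if (f->base.seqno > seqno)
                            break;
                    list_del_init(&f->link);
                    __fence_signal(&f->base); /* lock already held */
            }
    }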
>
>> +void release_fence(struct kref *kref)
>> +{
>> +     struct fence *fence =
>> +                     container_of(kref, struct fence, refcount);
>> +
>> +     trace_fence_destroy(fence);
>> +
>> +     BUG_ON(!list_empty(&fence->cb_list));
>> +
>> +     if (fence->ops->release)
>> +             fence->ops->release(fence);
>> +     else
>> +             kfree(fence);
>> +}
>> +EXPORT_SYMBOL(release_fence);
>
> fence_release() makes it more unified with the other functions in this
> file, right?
>
>> +/**
>> + * fence_default_wait - default sleep until the fence gets signaled
>> + * or until timeout elapses
>> + * @fence:   [in]    the fence to wait on
>> + * @intr:    [in]    if true, do an interruptible wait
>> + * @timeout: [in]    timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>> + *
>> + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
>> + * remaining timeout in jiffies on success.
>> + */
>> +long
>
> Shouldn't this be signed to be explicit?
>
>> +fence_default_wait(struct fence *fence, bool intr, signed long timeout)
>> +{
>> +     struct default_wait_cb cb;
>> +     unsigned long flags;
>> +     long ret = timeout;
>> +     bool was_set;
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return timeout;
>> +
>> +     spin_lock_irqsave(fence->lock, flags);
>> +
>> +     if (intr && signal_pending(current)) {
>> +             ret = -ERESTARTSYS;
>> +             goto out;
>> +     }
>> +
>> +     was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             goto out;
>> +
>> +     if (!was_set) {
>> +             trace_fence_enable_signal(fence);
>> +
>> +             if (!fence->ops->enable_signaling(fence)) {
>> +                     __fence_signal(fence);
>> +                     goto out;
>> +             }
>> +     }
>> +
>> +     cb.base.func = fence_default_wait_cb;
>> +     cb.task = current;
>> +     list_add(&cb.base.node, &fence->cb_list);
>> +
>> +     while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
>> +             if (intr)
>> +                     __set_current_state(TASK_INTERRUPTIBLE);
>> +             else
>> +                     __set_current_state(TASK_UNINTERRUPTIBLE);
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +
>> +             ret = schedule_timeout(ret);
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             if (ret > 0 && intr && signal_pending(current))
>> +                     ret = -ERESTARTSYS;
>> +     }
>> +
>> +     if (!list_empty(&cb.base.node))
>> +             list_del(&cb.base.node);
>> +     __set_current_state(TASK_RUNNING);
>> +
>> +out:
>> +     spin_unlock_irqrestore(fence->lock, flags);
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(fence_default_wait);
>> +
>> +/**
>> + * __fence_init - Initialize a custom fence.
>> + * @fence:   [in]    the fence to initialize
>> + * @ops:     [in]    the fence_ops for operations on this fence
>> + * @lock:    [in]    the irqsafe spinlock to use for locking this fence
>> + * @context: [in]    the execution context this fence is run on
>> + * @seqno:   [in]    a linear increasing sequence number for this context
>> + *
>> + * Initializes an allocated fence. The caller doesn't have to keep its
>> + * refcount after committing with this fence, but it will need to hold a
>> + * refcount again if fence_ops.enable_signaling gets called. This can
>> + * be used for implementing other types of fence.
>> + *
>> + * context and seqno are used for easy comparison between fences, making
>> + * it possible to check which fence is later by simply using fence_later.
>> + */
>> +void
>> +__fence_init(struct fence *fence, const struct fence_ops *ops,
>> +          spinlock_t *lock, unsigned context, unsigned seqno)
>> +{
>> +     BUG_ON(!lock);
>> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
>> +            !ops->get_driver_name || !ops->get_timeline_name);
>> +
>> +     kref_init(&fence->refcount);
>> +     fence->ops = ops;
>> +     INIT_LIST_HEAD(&fence->cb_list);
>> +     fence->lock = lock;
>> +     fence->context = context;
>> +     fence->seqno = seqno;
>> +     fence->flags = 0UL;
>> +
>> +     trace_fence_init(fence);
>> +}
>> +EXPORT_SYMBOL(__fence_init);
>
> Again with the __ exported function...
>
> I don't even see a fence_init() function anywhere, why the __ ?
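
fwiw the intended usage seems to be to embed struct fence and
initialize it in place; per the v8 changelog note all members get
initialized, so plain kmalloc'ed memory works.  Sketch only,
my_fence/my_fence_ops and the dev fields are made up:

    struct my_fence {
            struct fence base;
            /* driver-private state */
    };

    struct my_fence *f = kmalloc(sizeof(*f), GFP_KERNEL);

    if (f)
            __fence_init(&f->base, &my_fence_ops, &dev->fence_lock,
                         dev->context, ++dev->seqno);

and since there is no plain fence_init() anywhere, the __ prefix could
indeed just be dropped.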
>
>
>> diff --git a/include/linux/fence.h b/include/linux/fence.h
>> new file mode 100644
>> index 000000000000..65f2a01ee7e4
>> --- /dev/null
>> +++ b/include/linux/fence.h
>> @@ -0,0 +1,333 @@
>> +/*
>> + * Fence mechanism for dma-buf to allow for asynchronous dma access
>> + *
>> + * Copyright (C) 2012 Canonical Ltd
>> + * Copyright (C) 2012 Texas Instruments
>> + *
>> + * Authors:
>> + * Rob Clark <robdclark@gmail.com>
>> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __LINUX_FENCE_H
>> +#define __LINUX_FENCE_H
>> +
>> +#include <linux/err.h>
>> +#include <linux/wait.h>
>> +#include <linux/list.h>
>> +#include <linux/bitops.h>
>> +#include <linux/kref.h>
>> +#include <linux/sched.h>
>> +#include <linux/printk.h>
>> +
>> +struct fence;
>> +struct fence_ops;
>> +struct fence_cb;
>> +
>> +/**
>> + * struct fence - software synchronization primitive
>> + * @refcount: refcount for this fence
>> + * @ops: fence_ops associated with this fence
>> + * @cb_list: list of all callbacks to call
>> + * @lock: spin_lock_irqsave used for locking
>> + * @context: execution context this fence belongs to, returned by
>> + *           fence_context_alloc()
>> + * @seqno: the sequence number of this fence inside the execution context,
>> + * can be compared to decide which fence would be signaled later.
>> + * @flags: A mask of FENCE_FLAG_* defined below
>> + * @timestamp: Timestamp when the fence was signaled.
>> + * @status: Optional, only valid if < 0, must be set before calling
>> + * fence_signal, indicates that the fence has completed with an error.
>> + *
>> + * The flags member must be manipulated and read using the appropriate
>> + * atomic ops (bit_*), so taking the spinlock will not be needed most
>> + * of the time.
>> + *
>> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
>> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
>> + * implementer of the fence for its own purposes. Can be used in different
>> + * ways by different fence implementers, so do not rely on this.
>> + *
>> + * *) Since atomic bitops are used, this is not guaranteed to be the case.
>> + * Particularly, if the bit was set, but fence_signal was called right
>> + * before this bit was set, it would have been able to set the
>> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
>> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
>> + * after fence_signal was called, any enable_signaling call will have either
>> + * been completed, or never called at all.
>> + */
>> +struct fence {
>> +     struct kref refcount;
>> +     const struct fence_ops *ops;
>> +     struct list_head cb_list;
>> +     spinlock_t *lock;
>> +     unsigned context, seqno;
>> +     unsigned long flags;
>> +     ktime_t timestamp;
>> +     int status;
>> +};
>> +
>> +enum fence_flag_bits {
>> +     FENCE_FLAG_SIGNALED_BIT,
>> +     FENCE_FLAG_ENABLE_SIGNAL_BIT,
>> +     FENCE_FLAG_USER_BITS, /* must always be last member */
>> +};
>> +
>> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
>> +
>> +/**
>> + * struct fence_cb - callback for fence_add_callback
>> + * @node: used by fence_add_callback to append this struct to fence::cb_list
>> + * @func: fence_func_t to call
>> + *
>> + * This struct will be initialized by fence_add_callback; additional
>> + * data can be passed along by embedding fence_cb in another struct.
>> + */
>> +struct fence_cb {
>> +     struct list_head node;
>> +     fence_func_t func;
>> +};
>> +
>> +/**
>> + * struct fence_ops - operations implemented for fence
>> + * @get_driver_name: returns the driver name.
>> + * @get_timeline_name: return the name of the context this fence belongs to.
>> + * @enable_signaling: enable software signaling of fence.
>> + * @signaled: [optional] peek whether the fence is signaled, can be null.
>> + * @wait: custom wait implementation, or fence_default_wait.
>> + * @release: [optional] called on destruction of fence, can be null
>> + * @fill_driver_data: [optional] callback to fill in free-form debug info
>> + * Returns the number of bytes filled, or -errno.
>> + * @fence_value_str: [optional] fills in the value of the fence as a string
>> + * @timeline_value_str: [optional] fills in the current value of the timeline
>> + * as a string
>> + *
>> + * Notes on enable_signaling:
>> + * For fence implementations that have the capability for hw->hw
>> + * signaling, they can implement this op to enable the necessary
>> + * irqs, or insert commands into cmdstream, etc.  This is called
>> + * in the first wait() or add_callback() path to let the fence
>> + * implementation know that there is another driver waiting on
>> + * the signal (ie. hw->sw case).
>> + *
>> + * This function can be called from atomic context, but not
>> + * from irq context, so normal spinlocks can be used.
>> + *
>> + * A return value of false indicates the fence already passed,
>> + * or some failure occurred that made it impossible to enable
>> + * signaling. True indicates successful enabling.
>> + *
>> + * fence->status may be set in enable_signaling, but only when false is
>> + * returned.
>> + *
>> + * Calling fence_signal before enable_signaling is called allows
>> + * for a tiny race window in which enable_signaling is called during,
>> + * before, or after fence_signal. To fight this, it is recommended
>> + * that before enable_signaling returns true an extra reference is
>> + * taken on the fence, to be released when the fence is signaled.
>> + * This will mean fence_signal will still be called twice, but
>> + * the second time will be a noop since it was already signaled.
>> + *
>> + * Notes on signaled:
>> + * May set fence->status if returning true.
>> + *
>> + * Notes on wait:
>> + * Must not be NULL; set to fence_default_wait for the default implementation.
>> + * The fence_default_wait implementation should work for any fence, as long
>> + * as enable_signaling works correctly.
>> + *
>> + * Must return -ERESTARTSYS if intr is true and the wait was
>> + * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
>> + * timed out. Can also return other error values on custom implementations,
>> + * which should be treated as if the fence is signaled. For example a hardware
>> + * lockup could be reported like that.
>> + *
>> + * Notes on release:
>> + * Can be NULL, this function allows additional commands to run on
>> + * destruction of the fence. Can be called from irq context.
>> + * If pointer is set to NULL, kfree will get called instead.
>> + */
>> +
>> +struct fence_ops {
>> +     const char * (*get_driver_name)(struct fence *fence);
>> +     const char * (*get_timeline_name)(struct fence *fence);
>> +     bool (*enable_signaling)(struct fence *fence);
>> +     bool (*signaled)(struct fence *fence);
>> +     long (*wait)(struct fence *fence, bool intr, signed long timeout);
>> +     void (*release)(struct fence *fence);
>> +
>> +     int (*fill_driver_data)(struct fence *fence, void *data, int size);
>> +     void (*fence_value_str)(struct fence *fence, char *str, int size);
>> +     void (*timeline_value_str)(struct fence *fence, char *str, int size);
>> +};
>> +
>> +void __fence_init(struct fence *fence, const struct fence_ops *ops,
>> +               spinlock_t *lock, unsigned context, unsigned seqno);
>> +
>> +/**
>> + * fence_get - increases refcount of the fence
>> + * @fence:   [in]    fence to increase refcount of
>> + */
>> +static inline void fence_get(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Why warn?
>
>> +     kref_get(&fence->refcount);
>> +}
>
> Why is this inline?
>
>> +
>> +extern void release_fence(struct kref *kref);
>> +
>> +/**
>> + * fence_put - decreases refcount of the fence
>> + * @fence:   [in]    fence to reduce refcount of
>> + */
>> +static inline void fence_put(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Same as above.
>
>> +     kref_put(&fence->refcount, release_fence);
>> +}
>
> Why inline?
>
>> +int fence_signal(struct fence *fence);
>> +int __fence_signal(struct fence *fence);
>
> Let's randomly pick a function to call... :)
>
>> +static inline bool
>> +__fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             __fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
> Oh nice, another __ function :(
>
>> +
>> +/**
>> + * fence_is_signaled - Return an indication if the fence is signaled yet.
>> + * @fence:   [in]    the fence to check
>> + *
>> + * Returns true if the fence was already signaled, false if not. Since this
>> + * function doesn't enable signaling, it is not guaranteed to ever return true
>> + * if fence_add_callback, fence_wait or fence_enable_sw_signaling
>> + * haven't been called before.
>> + *
>> + * It's recommended for seqno fences to call fence_signal when the
>> + * operation is complete; this makes it possible to prevent issues from
>> + * wraparound between time of issue and time of use by checking the return
>> + * value of this function before calling hardware-specific wait instructions.
>> + */
>> +static inline bool
>> +fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
>
> Why inline for all of these, does it really matter for speed?
>
>> +/**
>> + * fence_later - return the chronologically later fence
>> + * @f1:      [in]    the first fence from the same context
>> + * @f2:      [in]    the second fence from the same context
>> + *
>> + * Returns NULL if both fences are signaled, otherwise the fence that would be
>> + * signaled last. Both fences must be from the same context, since a seqno is
>> + * not re-used across contexts.
>> + */
>> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
>> +{
>> +     BUG_ON(f1->context != f2->context);
>
> Nice, you just crashed the kernel, making it impossible to debug or
> recover :(
>
> Don't do that.
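
(i.e. something like, sketch only:

    if (WARN_ON(f1->context != f2->context))
            return NULL;

which still complains loudly but keeps the machine running)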
>
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  1:16   ` Greg KH
@ 2014-06-19  1:25       ` Rob Clark
  0 siblings, 0 replies; 73+ messages in thread
From: Rob Clark @ 2014-06-19  1:25 UTC (permalink / raw)
  To: Greg KH
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Wed, Jun 18, 2014 at 9:16 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> A fence can be attached to a buffer which is being filled or consumed
>> by hw, to allow userspace to pass the buffer without waiting to another
>> device.  For example, userspace can call page_flip ioctl to display the
>> next frame of graphics after kicking the GPU but while the GPU is still
>> rendering.  The display device sharing the buffer with the GPU would
>> attach a callback to get notified when the GPU's rendering-complete IRQ
>> fires, to update the scan-out address of the display, without having to
>> wake up userspace.
>>
>> A driver must allocate a fence context for each execution ring that can
>> run in parallel. The function for this takes an argument with how many
>> contexts to allocate:
>>   + fence_context_alloc()
>>
>> A fence is a transient, one-shot deal.  It is allocated and attached
>> to one or more dma-buf's.  When the one that attached it is done with
>> the pending operation, it can signal the fence:
>>   + fence_signal()
>>
>> To get a rough approximation of whether a fence has fired, call:
>>   + fence_is_signaled()
>>
>> The dma-buf-mgr handles tracking, and waiting on, the fences associated
>> with a dma-buf.
>>
>> The one pending on the fence can add an async callback:
>>   + fence_add_callback()
>>
>> The callback can optionally be cancelled with:
>>   + fence_remove_callback()
>>
>> To wait synchronously, optionally with a timeout:
>>   + fence_wait()
>>   + fence_wait_timeout()
>>
>> When emitting a fence, call:
>>   + trace_fence_emit()
>>
>> To annotate that a fence is blocking on another fence, call:
>>   + trace_fence_annotate_wait_on(fence, on_fence)
>>
>> A default software-only implementation is provided, which can be used
>> by drivers attaching a fence to a buffer when they have no other means
>> for hw sync.  But a memory-backed fence is also envisioned, because it
>> is common that GPUs can write to, or poll on, some memory location for
>> synchronization.  For example:
>>
>>   fence = custom_get_fence(...);
>>   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
>>     dma_buf *fence_buf = seqno_fence->sync_buf;
>>     get_dma_buf(fence_buf);
>>
>>     ... tell the hw the memory location to wait ...
>>     custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
>>   } else {
>>     /* fall-back to sw sync */
>>     fence_add_callback(fence, my_cb);
>>   }
>>
>> On SoC platforms, if some other hw mechanism is provided for synchronizing
>> between IP blocks, it could be supported as an alternate implementation
>> with its own fence ops in a similar way.
>>
>> The enable_signaling callback is used to provide sw signaling in case a cpu
>> waiter is requested or no compatible hardware signaling could be used.
>>
>> The intention is to provide a userspace interface (presumably via eventfd)
>> later, to be used in conjunction with dma-buf's mmap support for sw access
>> to buffers (or for userspace apps that would prefer to do their own
>> synchronization).
>>
>> v1: Original
>> v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
>>     that dma-fence didn't need to care about the sw->hw signaling path
>>     (it can be handled same as sw->sw case), and therefore the fence->ops
>>     can be simplified and more handled in the core.  So remove the signal,
>>     add_callback, cancel_callback, and wait ops, and replace with a simple
>>     enable_signaling() op which can be used to inform a fence supporting
>>     hw->hw signaling that one or more devices which do not support hw
>>     signaling are waiting (and therefore it should enable an irq or do
>>     whatever is necessary in order that the CPU is notified when the
>>     fence is passed).
>> v3: Fix locking fail in attach_fence() and get_fence()
>> v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
>>     we decided that we need to be able to attach one fence to N dma-buf's,
>>     so using the list_head in dma-fence struct would be problematic.
>> v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
>> v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
>>     about checking if fence fired or not. This is broken by design.
>>     waitqueue_active during destruction is now fatal, since the signaller
>>     should be holding a reference in enable_signalling until it signalled
>>     the fence. Pass the original dma_fence_cb along, and call __remove_wait
>>     in the dma_fence_callback handler, so that no cleanup needs to be
>>     performed.
>> v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
>>     fence wasn't signaled yet, for example for hardware fences that may
>>     choose to signal blindly.
>> v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
>>     header and fixed include mess. dma-fence.h now includes dma-buf.h
>>     All members are now initialized, so kmalloc can be used for
>>     allocating a dma-fence. More documentation added.
>> v9: Change compiler bitfields to flags, change return type of
>>     enable_signaling to bool. Rework dma_fence_wait. Added
>>     dma_fence_is_signaled and dma_fence_wait_timeout.
>>     s/dma// and change exports to non GPL. Added fence_is_signaled and
>>     fence_enable_sw_signaling calls, add ability to override default
>>     wait operation.
>> v10: remove event_queue, use a custom list, export try_to_wake_up from
>>     scheduler. Remove fence lock and use a global spinlock instead,
>>     this should hopefully remove all the locking headaches I was having
>>     on trying to implement this. enable_signaling is called with this
>>     lock held.
>> v11:
>>     Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
>>     However I kept the guarantee that after fence_signal returns, it is
>>     guaranteed that enable_signaling has either been called to completion,
>>     or will not be called any more.
>>
>>     Add contexts and seqno to base fence implementation. This allows you
>>     to wait for fewer fences, by testing for seqno + signaled, and then only
>>     wait on the later fence.
>>
>>     Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
>>     A CONFIG_DEBUG_FENCE will be added to turn off the FENCE_TRACE
>>     spam, and another option can turn it off at runtime.
>> v12:
>>     Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
>>     and fence->seqno members.
>> v13:
>>     Fixup CONFIG_FENCE_TRACE kconfig description.
>>     Move fence_context_alloc to fence.
>>     Simplify fence_later.
>>     Kill priv member to fence_cb.
>> v14:
>>     Remove priv argument from fence_add_callback, oops!
>> v15:
>>     Remove priv from documentation.
>>     Explicitly include linux/atomic.h.
>> v16:
>>     Add trace events.
>>     Import changes required by android syncpoints.
>> v17:
>>     Use wake_up_state instead of try_to_wake_up. (Colin Cross)
>>     Fix up commit description for seqno_fence. (Rob Clark)
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
>> Reviewed-by: Rob Clark <robdclark@gmail.com>
>> ---
>>  Documentation/DocBook/device-drivers.tmpl |    2
>>  drivers/base/Kconfig                      |    9 +
>>  drivers/base/Makefile                     |    2
>>  drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
>>  include/linux/fence.h                     |  333 +++++++++++++++++++++++
>>  include/trace/events/fence.h              |  128 +++++++++
>>  6 files changed, 889 insertions(+), 1 deletion(-)
>>  create mode 100644 drivers/base/fence.c
>>  create mode 100644 include/linux/fence.h
>>  create mode 100644 include/trace/events/fence.h
>
> Who is going to sign up to maintain this code?  (hint, it's not me...)

that would be Sumit (dma-buf tree)..

probably we should move fence/reservation/dma-buf into drivers/dma-buf
(or something approximately like that)

BR,
-R


> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  1:23       ` Rob Clark
  (?)
@ 2014-06-19  1:44       ` Greg KH
  -1 siblings, 0 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19  1:44 UTC (permalink / raw)
  To: Rob Clark
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Wed, Jun 18, 2014 at 09:23:06PM -0400, Rob Clark wrote:
> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >> +#define CREATE_TRACE_POINTS
> >> +#include <trace/events/fence.h>
> >> +
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
> >
> > Are you really willing to live with these as tracepoints for forever?
> > What is the use of them in debugging?  Was it just for debugging the
> > fence code, or for something else?
> 
> fwiw, the goal is something like this:
> 
> http://people.freedesktop.org/~robclark/perf-supertuxkart.svg
> 
> but without needing to make perf understand each driver's custom trace events
> 
> (from: http://bloggingthemonkey.blogspot.com/2013/09/freedreno-update-moar-fps.html
> )

Will these tracepoints provide something like that?  If so, great, but I
want to make sure, since these now become a user/kernel ABI that you can
not break.
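
For what it's worth, the intended usage is that a driver fires the
generic event once per fence when it is submitted to hardware, roughly
like this (a minimal sketch: the ring structure and the hw hook are
made up for illustration, only the tracepoint itself comes from this
series):

  #include <trace/events/fence.h>

  static void my_ring_submit(struct my_ring *ring, struct fence *fence)
  {
          /* generic cross-driver event a tool like perf can key off */
          trace_fence_emit(fence);
          /* hypothetical hw hook that writes the seqno to the ring */
          my_hw_emit_seqno(ring, fence->seqno);
  }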

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  1:25       ` Rob Clark
  (?)
@ 2014-06-19  4:27       ` Sumit Semwal
  2014-06-19  4:54           ` Greg KH
  -1 siblings, 1 reply; 73+ messages in thread
From: Sumit Semwal @ 2014-06-19  4:27 UTC (permalink / raw)
  To: Rob Clark
  Cc: Greg KH, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, linux-media

Hi Greg,

On 19 June 2014 06:55, Rob Clark <robdclark@gmail.com> wrote:
> On Wed, Jun 18, 2014 at 9:16 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>>> [... full commit message and v1-v17 changelog snipped; quoted in full upthread ...]
>>
>> Who is going to sign up to maintain this code?  (hint, it's not me...)
>
> that would be Sumit (dma-buf tree)..
>
> probably we should move fence/reservation/dma-buf into drivers/dma-buf
> (or something approximately like that)
Yes, that would be me - it might be better to create a new directory
as suggested above (drivers/dma-buf).
>
> BR,
> -R
>
>
>> thanks,
>>
>> greg k-h
Best regards,
~Sumit.


-- 
Thanks and regards,

Sumit Semwal
Graphics Engineer - Graphics working group
Linaro.org │ Open source software for ARM SoCs

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  4:27       ` Sumit Semwal
@ 2014-06-19  4:54           ` Greg KH
  0 siblings, 0 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19  4:54 UTC (permalink / raw)
  To: Sumit Semwal
  Cc: Rob Clark, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, linux-media

On Thu, Jun 19, 2014 at 09:57:35AM +0530, Sumit Semwal wrote:
> Hi Greg,
> 
> On 19 June 2014 06:55, Rob Clark <robdclark@gmail.com> wrote:
> > On Wed, Jun 18, 2014 at 9:16 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >>> [... full commit message and v1-v17 changelog snipped; quoted in full upthread ...]
> >>
> >> Who is going to sign up to maintain this code?  (hint, it's not me...)
> >
> > that would be Sumit (dma-buf tree)..
> >
> > probably we should move fence/reservation/dma-buf into drivers/dma-buf
> > (or something approximately like that)
> Yes, that would be me - it might be better to create a new directory
> as suggested above (drivers/dma-buf).

That's fine with me, there is going to be more than just one file in
there, right?  :)

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  4:54           ` Greg KH
@ 2014-06-19  5:26             ` Sumit Semwal
  -1 siblings, 0 replies; 73+ messages in thread
From: Sumit Semwal @ 2014-06-19  5:26 UTC (permalink / raw)
  To: Greg KH
  Cc: Rob Clark, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, linux-media

On 19 June 2014 10:24, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 09:57:35AM +0530, Sumit Semwal wrote:
>> Hi Greg,
>>

>> >>
>> >> Who is going to sign up to maintain this code?  (hint, it's not me...)
>> >
>> > that would be Sumit (dma-buf tree)..
>> >
>> > probably we should move fence/reservation/dma-buf into drivers/dma-buf
>> > (or something approximately like that)
>> Yes, that would be me - it might be better to create a new directory
>> as suggested above (drivers/dma-buf).
>
> That's fine with me, there is going to be more than just one file in
> there, right?  :)
>
> greg k-h
Certainly at least 3 :)

~sumit

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19  1:15   ` Greg KH
@ 2014-06-19  6:37       ` Daniel Vetter
  0 siblings, 0 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-06-19  6:37 UTC (permalink / raw)
  To: Greg KH
  Cc: Maarten Lankhorst, linux-arch, thellstrom, linux-kernel,
	dri-devel, linaro-mm-sig, robdclark, thierry.reding, ccross,
	daniel, sumit.semwal, linux-media

On Wed, Jun 18, 2014 at 06:15:56PM -0700, Greg KH wrote:
> On Wed, Jun 18, 2014 at 12:37:11PM +0200, Maarten Lankhorst wrote:
> > Just to show it's easy.
> > 
> > Android syncpoints can be mapped to a timeline. This removes the need
> > to maintain a separate api for synchronization. I've left the android
> > trace events in place, but the core fence events should already be
> > sufficient for debugging.
> > 
> > v2:
> > - Call fence_remove_callback in sync_fence_free if not all fences have fired.
> > v3:
> > - Merge Colin Cross' bugfixes, and the android fence merge optimization.
> > v4:
> > - Merge with the upstream fixes.
> > v5:
> > - Fix small style issues pointed out by Thomas Hellstrom.
> > 
> > Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> > Acked-by: John Stultz <john.stultz@linaro.org>
> > ---
> >  drivers/staging/android/Kconfig      |    1 
> >  drivers/staging/android/Makefile     |    2 
> >  drivers/staging/android/sw_sync.c    |    6 
> >  drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
> >  drivers/staging/android/sync.h       |   79 ++-
> >  drivers/staging/android/sync_debug.c |  247 +++++++++
> >  drivers/staging/android/trace/sync.h |   12 
> >  7 files changed, 609 insertions(+), 651 deletions(-)
> >  create mode 100644 drivers/staging/android/sync_debug.c
> 
> With these changes, can we pull the android sync logic out of
> drivers/staging/ now?

Afaik the google guys never really looked at this or acked it. So I'm not
sure whether they'll follow along. The other issue I have as a gfx driver
maintainer is that I don't want to implement support for two different
sync object primitives (one for dma-buf and one for android syncpts), and
my impression thus far has been that even with this we're not there.

I'm trying to get our own android guys to upstream their i915 syncpts
support, but thus far I haven't managed to convince them to throw people's
time at this.

It looks like a step in the right direction, but until I have the proof
in the form of i915 patches that I won't have to support 2 gfx fencing
frameworks I'm opposed to de-staging android syncpts. Ofc someone else
could do that too, but besides i915 I don't see a full-fledged (modeset
side only kinda doesn't count) upstream gfx driver shipping on android.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19  6:37       ` Daniel Vetter
@ 2014-06-19 11:48         ` Thierry Reding
  -1 siblings, 0 replies; 73+ messages in thread
From: Thierry Reding @ 2014-06-19 11:48 UTC (permalink / raw)
  To: Greg KH, Maarten Lankhorst, linux-arch, thellstrom, linux-kernel,
	dri-devel, linaro-mm-sig, robdclark, ccross, sumit.semwal,
	linux-media

On Thu, Jun 19, 2014 at 08:37:27AM +0200, Daniel Vetter wrote:
> On Wed, Jun 18, 2014 at 06:15:56PM -0700, Greg KH wrote:
> > On Wed, Jun 18, 2014 at 12:37:11PM +0200, Maarten Lankhorst wrote:
> > > Just to show it's easy.
> > > 
> > > Android syncpoints can be mapped to a timeline. This removes the need
> > > to maintain a separate api for synchronization. I've left the android
> > > trace events in place, but the core fence events should already be
> > > sufficient for debugging.
> > > 
> > > v2:
> > > - Call fence_remove_callback in sync_fence_free if not all fences have fired.
> > > v3:
> > > - Merge Colin Cross' bugfixes, and the android fence merge optimization.
> > > v4:
> > > - Merge with the upstream fixes.
> > > v5:
> > > - Fix small style issues pointed out by Thomas Hellstrom.
> > > 
> > > Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> > > Acked-by: John Stultz <john.stultz@linaro.org>
> > > ---
> > >  drivers/staging/android/Kconfig      |    1 
> > >  drivers/staging/android/Makefile     |    2 
> > >  drivers/staging/android/sw_sync.c    |    6 
> > >  drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
> > >  drivers/staging/android/sync.h       |   79 ++-
> > >  drivers/staging/android/sync_debug.c |  247 +++++++++
> > >  drivers/staging/android/trace/sync.h |   12 
> > >  7 files changed, 609 insertions(+), 651 deletions(-)
> > >  create mode 100644 drivers/staging/android/sync_debug.c
> > 
> > With these changes, can we pull the android sync logic out of
> > drivers/staging/ now?
> 
> Afaik the google guys never really looked at this and acked it. So I'm not
> sure whether they'll follow along. The other issue I have as the
> maintainer of gfx driver is that I don't want to implement support for two
> different sync object primitives (once for dma-buf and once for android
> syncpts), and my impression thus far has been that even with this we're
> not there.
> 
> I'm trying to get our own android guys to upstream their i915 syncpts
> support, but thus far I haven't managed to convince them to throw people's
> time at this.

This has been discussed a fair bit internally recently and some of our
GPU experts have raised concerns that this may result in seriously
degraded performance in our proprietary graphics stack. Now I don't care
very much for the proprietary graphics stack, but by extension I would
assume that the same restrictions are relevant for any open-source
driver as well.

I'm still trying to fully understand all the implications and at the
same time get some of the people who raised concerns to join this
discussion. As I understand it, the concern is mostly about explicit
vs. implicit synchronization: having this mechanism in the kernel will
implicitly synchronize all accesses to these buffers, even in cases
where it's not needed (read vs. write locks, etc.). In one particular
instance it was even mentioned that this kind of implicit
synchronization can lead to deadlocks in some use-cases (this was
mentioned for Android compositing, but I suspect that the same may
happen for Wayland or X compositors).
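
For the read vs. write distinction: as far as I can tell, the
reservation patches in this series let a writer attach an exclusive
fence while readers attach shared fences, so a new reader only has to
wait for the last writer. A rough sketch (helper names as I read them
from patches 5/8 and 7/8 of this series; the surrounding locking is
illustrative):

  /* called with the buffer's reservation ww_mutex held */
  if (is_write) {
          /* everyone afterwards must wait for this fence */
          reservation_object_add_excl_fence(resv, fence);
  } else {
          /* readers only wait for the last writer; a shared slot
           * must be reserved before the fence can be added */
          if (!reservation_object_reserve_shared(resv))
                  reservation_object_add_shared_fence(resv, fence);
  }

If the kernel cannot tell reads and writes apart it has to assume the
worst and serialize everything, which is the over-synchronization
concern described above.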

Thierry

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19 11:48         ` Thierry Reding
  (?)
@ 2014-06-19 12:28         ` Daniel Vetter
  2014-06-19 15:35           ` Colin Cross
  2014-06-20 20:52             ` Thierry Reding
  -1 siblings, 2 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-06-19 12:28 UTC (permalink / raw)
  To: Thierry Reding
  Cc: Greg KH, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig, Clark, Rob,
	Colin Cross, Sumit Semwal, linux-media

On Thu, Jun 19, 2014 at 1:48 PM, Thierry Reding
<thierry.reding@gmail.com> wrote:
>> > With these changes, can we pull the android sync logic out of
>> > drivers/staging/ now?
>>
>> Afaik the google guys never really looked at this and acked it. So I'm not
>> sure whether they'll follow along. The other issue I have as the
>> maintainer of gfx driver is that I don't want to implement support for two
>> different sync object primitives (once for dma-buf and once for android
>> syncpts), and my impression thus far has been that even with this we're
>> not there.
>>
>> I'm trying to get our own android guys to upstream their i915 syncpts
>> support, but thus far I haven't managed to convince them to throw people's
>> time at this.
>
> This has been discussed a fair bit internally recently and some of our
> GPU experts have raised concerns that this may result in seriously
> degraded performance in our proprietary graphics stack. Now I don't care
> very much for the proprietary graphics stack, but by extension I would
> assume that the same restrictions are relevant for any open-source
> driver as well.
>
> I'm still trying to fully understand all the implications and at the
> same time get some of the people who raised concerns to join in this
> discussion. As I understand it the concern is mostly about explicit vs.
> implicit synchronization and having this mechanism in the kernel will
> implicitly synchronize all accesses to these buffers even in cases where
> it's not needed (read vs. write locks, etc.). In one particular instance
> it was even mentioned that this kind of implicit synchronization can
> lead to deadlocks in some use-cases (this was mentioned for Android
> compositing, but I suspect that the same may happen for Wayland or X
> compositors).

Well the implicit fences here actually can't deadlock. That's the
entire point behind using ww mutexes. I've also heard tons of
complaints about implicitly enforced syncing (especially from opencl
people), but in the end drivers can always expose unsynchronized
access for specific cases. We do that in i915 for upload buffers and
other fun stuff. This is about shared stuff across different drivers
and different processes.
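
The backoff pattern that gives this guarantee looks roughly like the
following (a minimal sketch against the ww_mutex API; the a/b buffers
and the resv->lock layout are illustrative, and a full implementation
loops until both locks are held):

  struct ww_acquire_ctx ctx;
  int ret;

  ww_acquire_init(&ctx, &reservation_ww_class);

  ret = ww_mutex_lock(&a->resv->lock, &ctx);      /* 0 or -EDEADLK */
  if (!ret) {
          ret = ww_mutex_lock(&b->resv->lock, &ctx);
          if (ret == -EDEADLK) {
                  /* we lost against an older context: back off,
                   * sleep until the contended lock is free, then
                   * re-acquire in the winner's order */
                  ww_mutex_unlock(&a->resv->lock);
                  ww_mutex_lock_slow(&b->resv->lock, &ctx);
                  ret = ww_mutex_lock(&a->resv->lock, &ctx);
          }
  }
  ww_acquire_done(&ctx);

Every thread either makes progress or backs off and retries behind the
winner, so no cycle of waiters can form.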

I also expect that i915 will lose implicit syncing in a few upcoming
hw modes because explicit syncing is a more natural fit there.

All that isn't about the discussion at hand imo, since no matter what,
i915 needs to have one internal representation for a bit of gpu work,
and afaics right now we don't have that. With this patch android
syncpts use Maarten's fences internally, but I can't freely exchange
one for the other. So in i915 I still expect to get stuck with both of
them, which is one too many.

The other issue (and I haven't dug into details that much) I have with
syncpts is some of the interface choices. Apparently you can commit a
fence after creation (or at least the hw composer interface works like
that), which means userspace can construct deadlocks with android
syncpts if I'm not super careful in my driver. I haven't seen any
generic code to prevent that, so I presume everyone just blindly trusts
surface-flinger not to do that. Speaks volumes about the average quality
of an android gfx driver if the kernel is less trusted than the
compositor in userspace ...

There's a few other things like exposing timestamps (which are tricky
to do right, our driver is littered with wrong attempts) and other
details.

Finally I've never seen anyone from google or any android product guy
push a real driver enabling for syncpts to upstream, and google itself
has a bit of a history of constantly exchanging their gfx framework for
the next best thing. So I really doubt this is worthwhile to pursue in
upstream with our essentially eternal api guarantees. At least until
we see serious uptake from vendors and gfx driver guys. Unfortunately
the Intel android folks are no exception here and haven't pushed
anything like this in my direction yet at all. Despite multiple pokes
from my side.

So from my side I think we should move ahead with Maarten's work and
figure the android side out once there's real interest.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19  1:13     ` Greg KH
  (?)
  (?)
@ 2014-06-19 14:00     ` Rob Clark
  2014-06-19 17:00         ` Greg KH
  -1 siblings, 1 reply; 73+ messages in thread
From: Rob Clark @ 2014-06-19 14:00 UTC (permalink / raw)
  To: Greg KH
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> +#define CREATE_TRACE_POINTS
>> +#include <trace/events/fence.h>
>> +
>> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>
> Are you really willing to live with these as tracepoints for forever?
> What is the use of them in debugging?  Was it just for debugging the
> fence code, or for something else?
>
>> +/**
>> + * fence_context_alloc - allocate an array of fence contexts
>> + * @num:     [in]    amount of contexts to allocate
>> + *
>> + * This function will return the first index of the number of fences allocated.
>> + * The fence context is used for setting fence->context to a unique number.
>> + */
>> +unsigned fence_context_alloc(unsigned num)
>> +{
>> +     BUG_ON(!num);
>> +     return atomic_add_return(num, &fence_context_counter) - num;
>> +}
>> +EXPORT_SYMBOL(fence_context_alloc);
>
> EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> Traditionally all of the driver core exports have been with this
> marking, any objection to making that change here as well?

tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
life.  We already went through this debate once with dma-buf.  We
aren't going to change $evil_vendor's mind about non-gpl modules.  The
only result will be a more flugly convoluted solution (ie. use syncpt
EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
workaround, with the result that no-one benefits.
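
The workaround being alluded to would look roughly like the following
hypothetical vendor shim (syncpt_signal is a made-up name, not anything
in this patch set):

/* core export, restricted to GPL-compatible modules */
EXPORT_SYMBOL_GPL(fence_signal);

/* a GPL'd glue module can legally call the _GPL symbol and then
 * re-export the same functionality without the _GPL marking,
 * letting non-GPL drivers reach it anyway */
int syncpt_signal(struct fence *fence)
{
        return fence_signal(fence);
}
EXPORT_SYMBOL(syncpt_signal);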

>> +int __fence_signal(struct fence *fence)
>> +{
>> +     struct fence_cb *cur, *tmp;
>> +     int ret = 0;
>> +
>> +     if (WARN_ON(!fence))
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> +             ret = -EINVAL;
>> +
>> +             /*
>> +              * we might have raced with the unlocked fence_signal,
>> +              * still run through all callbacks
>> +              */
>> +     } else
>> +             trace_fence_signaled(fence);
>> +
>> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +             list_del_init(&cur->node);
>> +             cur->func(fence, cur);
>> +     }
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(__fence_signal);
>
> Don't export a function with __ in front of it, you are saying that an
> internal function is somehow "valid" for everyone else to call?  Why?
> You aren't even documenting the function, so who knows how to use it?

fwiw, the __ versions appear to mainly be concessions for android
syncpt.  That is the only user outside of fence.c, and it should stay
that way.
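
If I read the patch right, the split is a locking convention:
__fence_signal() expects fence->lock to already be held by the caller
(which is how the android sync code signals points while walking a
timeline), whereas fence_signal() takes the lock itself around the
callback list. Roughly:

/* locked variant: caller already holds fence->lock */
spin_lock_irqsave(fence->lock, flags);
__fence_signal(fence);
spin_unlock_irqrestore(fence->lock, flags);

/* unlocked variant: takes fence->lock internally where needed */
fence_signal(fence);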

>> +/**
>> + * fence_signal - signal completion of a fence
>> + * @fence: the fence to signal
>> + *
>> + * Signal completion for software callbacks on a fence, this will unblock
>> + * fence_wait() calls and run all the callbacks added with
>> + * fence_add_callback(). Can be called multiple times, but since a fence
>> + * can only go from unsignaled to signaled state, it will only be effective
>> + * the first time.
>> + */
>> +int fence_signal(struct fence *fence)
>> +{
>> +     unsigned long flags;
>> +
>> +     if (!fence)
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return -EINVAL;
>> +
>> +     trace_fence_signaled(fence);
>> +
>> +     if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
>> +             struct fence_cb *cur, *tmp;
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +                     list_del_init(&cur->node);
>> +                     cur->func(fence, cur);
>> +             }
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +     }
>> +     return 0;
>> +}
>> +EXPORT_SYMBOL(fence_signal);
>
> So, why should I use this over __fence_signal?  What is the difference?
> Am I missing some documentation somewhere?
>
>> +void release_fence(struct kref *kref)
>> +{
>> +     struct fence *fence =
>> +                     container_of(kref, struct fence, refcount);
>> +
>> +     trace_fence_destroy(fence);
>> +
>> +     BUG_ON(!list_empty(&fence->cb_list));
>> +
>> +     if (fence->ops->release)
>> +             fence->ops->release(fence);
>> +     else
>> +             kfree(fence);
>> +}
>> +EXPORT_SYMBOL(release_fence);
>
> fence_release() makes it more unified with the other functions in this
> file, right?
>
>> +/**
>> + * fence_default_wait - default sleep until the fence gets signaled
>> + * or until timeout elapses
>> + * @fence:   [in]    the fence to wait on
>> + * @intr:    [in]    if true, do an interruptible wait
>> + * @timeout: [in]    timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>> + *
>> + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
>> + * remaining timeout in jiffies on success.
>> + */
>> +long
>
> Shouldn't this be signed to be explicit?
>
>> +fence_default_wait(struct fence *fence, bool intr, signed long timeout)
>> +{
>> +     struct default_wait_cb cb;
>> +     unsigned long flags;
>> +     long ret = timeout;
>> +     bool was_set;
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return timeout;
>> +
>> +     spin_lock_irqsave(fence->lock, flags);
>> +
>> +     if (intr && signal_pending(current)) {
>> +             ret = -ERESTARTSYS;
>> +             goto out;
>> +     }
>> +
>> +     was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             goto out;
>> +
>> +     if (!was_set) {
>> +             trace_fence_enable_signal(fence);
>> +
>> +             if (!fence->ops->enable_signaling(fence)) {
>> +                     __fence_signal(fence);
>> +                     goto out;
>> +             }
>> +     }
>> +
>> +     cb.base.func = fence_default_wait_cb;
>> +     cb.task = current;
>> +     list_add(&cb.base.node, &fence->cb_list);
>> +
>> +     while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
>> +             if (intr)
>> +                     __set_current_state(TASK_INTERRUPTIBLE);
>> +             else
>> +                     __set_current_state(TASK_UNINTERRUPTIBLE);
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +
>> +             ret = schedule_timeout(ret);
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             if (ret > 0 && intr && signal_pending(current))
>> +                     ret = -ERESTARTSYS;
>> +     }
>> +
>> +     if (!list_empty(&cb.base.node))
>> +             list_del(&cb.base.node);
>> +     __set_current_state(TASK_RUNNING);
>> +
>> +out:
>> +     spin_unlock_irqrestore(fence->lock, flags);
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(fence_default_wait);
>> +
>> +/**
>> + * __fence_init - Initialize a custom fence.
>> + * @fence:   [in]    the fence to initialize
>> + * @ops:     [in]    the fence_ops for operations on this fence
>> + * @lock:    [in]    the irqsafe spinlock to use for locking this fence
>> + * @context: [in]    the execution context this fence is run on
>> + * @seqno:   [in]    a linear increasing sequence number for this context
>> + *
>> + * Initializes an allocated fence. The caller doesn't have to keep its
>> + * refcount after committing with this fence, but it will need to hold a
>> + * refcount again if fence_ops.enable_signaling gets called. This can
>> + * be used for implementing other types of fences.
>> + *
>> + * context and seqno are used for easy comparison between fences, allowing
>> + * to check which fence is later by simply using fence_later.
>> + */
>> +void
>> +__fence_init(struct fence *fence, const struct fence_ops *ops,
>> +          spinlock_t *lock, unsigned context, unsigned seqno)
>> +{
>> +     BUG_ON(!lock);
>> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
>> +            !ops->get_driver_name || !ops->get_timeline_name);
>> +
>> +     kref_init(&fence->refcount);
>> +     fence->ops = ops;
>> +     INIT_LIST_HEAD(&fence->cb_list);
>> +     fence->lock = lock;
>> +     fence->context = context;
>> +     fence->seqno = seqno;
>> +     fence->flags = 0UL;
>> +
>> +     trace_fence_init(fence);
>> +}
>> +EXPORT_SYMBOL(__fence_init);
>
> Again with the __ exported function...
>
> I don't even see a fence_init() function anywhere, why the __ ?
>

think of it as a 'protected' constructor.. only the derived fence
subclass should call it.
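
A minimal sketch of the pattern, with made-up names (my_hw_fence and
my_hw_fence_ops are purely illustrative):

struct my_hw_fence {
        struct fence base;              /* embedded base class */
        struct my_hw *hw;
};

static struct my_hw_fence *my_hw_fence_create(struct my_hw *hw,
                                              unsigned seqno)
{
        struct my_hw_fence *f = kmalloc(sizeof(*f), GFP_KERNEL);

        if (!f)
                return NULL;

        f->hw = hw;
        /* only this subclass constructor calls __fence_init() */
        __fence_init(&f->base, &my_hw_fence_ops, &hw->fence_lock,
                     hw->fence_context, seqno);
        return f;
}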

>
>> diff --git a/include/linux/fence.h b/include/linux/fence.h
>> new file mode 100644
>> index 000000000000..65f2a01ee7e4
>> --- /dev/null
>> +++ b/include/linux/fence.h
>> @@ -0,0 +1,333 @@
>> +/*
>> + * Fence mechanism for dma-buf to allow for asynchronous dma access
>> + *
>> + * Copyright (C) 2012 Canonical Ltd
>> + * Copyright (C) 2012 Texas Instruments
>> + *
>> + * Authors:
>> + * Rob Clark <robdclark@gmail.com>
>> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __LINUX_FENCE_H
>> +#define __LINUX_FENCE_H
>> +
>> +#include <linux/err.h>
>> +#include <linux/wait.h>
>> +#include <linux/list.h>
>> +#include <linux/bitops.h>
>> +#include <linux/kref.h>
>> +#include <linux/sched.h>
>> +#include <linux/printk.h>
>> +
>> +struct fence;
>> +struct fence_ops;
>> +struct fence_cb;
>> +
>> +/**
>> + * struct fence - software synchronization primitive
>> + * @refcount: refcount for this fence
>> + * @ops: fence_ops associated with this fence
>> + * @cb_list: list of all callbacks to call
>> + * @lock: spin_lock_irqsave used for locking
>> + * @context: execution context this fence belongs to, returned by
>> + *           fence_context_alloc()
>> + * @seqno: the sequence number of this fence inside the execution context,
>> + * can be compared to decide which fence would be signaled later.
>> + * @flags: A mask of FENCE_FLAG_* defined below
>> + * @timestamp: Timestamp when the fence was signaled.
>> + * @status: Optional, only valid if < 0, must be set before calling
>> + * fence_signal, indicates that the fence has completed with an error.
>> + *
>> + * the flags member must be manipulated and read using the appropriate
>> + * atomic ops (bit_*), so taking the spinlock will not be needed most
>> + * of the time.
>> + *
>> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
>> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
>> + * implementer of the fence for its own purposes. Can be used in different
>> + * ways by different fence implementers, so do not rely on this.
>> + *
>> + * *) Since atomic bitops are used, this is not guaranteed to be the case.
>> + * Particularly, if the bit was set, but fence_signal was called right
>> + * before this bit was set, it would have been able to set the
>> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
>> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
>> + * after fence_signal was called, any enable_signaling call will have either
>> + * been completed, or never called at all.
>> + */
>> +struct fence {
>> +     struct kref refcount;
>> +     const struct fence_ops *ops;
>> +     struct list_head cb_list;
>> +     spinlock_t *lock;
>> +     unsigned context, seqno;
>> +     unsigned long flags;
>> +     ktime_t timestamp;
>> +     int status;
>> +};
>> +
>> +enum fence_flag_bits {
>> +     FENCE_FLAG_SIGNALED_BIT,
>> +     FENCE_FLAG_ENABLE_SIGNAL_BIT,
>> +     FENCE_FLAG_USER_BITS, /* must always be last member */
>> +};
>> +
>> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
>> +
>> +/**
>> + * struct fence_cb - callback for fence_add_callback
>> + * @node: used by fence_add_callback to append this struct to fence::cb_list
>> + * @func: fence_func_t to call
>> + *
>> + * This struct will be initialized by fence_add_callback, additional
>> + * data can be passed along by embedding fence_cb in another struct.
>> + */
>> +struct fence_cb {
>> +     struct list_head node;
>> +     fence_func_t func;
>> +};
>> +
>> +/**
>> + * struct fence_ops - operations implemented for fence
>> + * @get_driver_name: returns the driver name.
>> + * @get_timeline_name: return the name of the context this fence belongs to.
>> + * @enable_signaling: enable software signaling of fence.
>> + * @signaled: [optional] peek whether the fence is signaled, can be null.
>> + * @wait: custom wait implementation, or fence_default_wait.
>> + * @release: [optional] called on destruction of fence, can be null
>> + * @fill_driver_data: [optional] callback to fill in free-form debug info
>> + * Returns amount of bytes filled, or -errno.
>> + * @fence_value_str: [optional] fills in the value of the fence as a string
>> + * @timeline_value_str: [optional] fills in the current value of the timeline
>> + * as a string
>> + *
>> + * Notes on enable_signaling:
>> + * For fence implementations that have the capability for hw->hw
>> + * signaling, they can implement this op to enable the necessary
>> + * irqs, or insert commands into cmdstream, etc.  This is called
>> + * in the first wait() or add_callback() path to let the fence
>> + * implementation know that there is another driver waiting on
>> + * the signal (ie. hw->sw case).
>> + *
>> + * This function can be called from atomic context, but not
>> + * from irq context, so normal spinlocks can be used.
>> + *
>> + * A return value of false indicates the fence already passed,
>> + * or some failure occurred that made it impossible to enable
>> + * signaling. True indicates successful enabling.
>> + *
>> + * fence->status may be set in enable_signaling, but only when false is
>> + * returned.
>> + *
>> + * Calling fence_signal before enable_signaling is called allows
>> + * for a tiny race window in which enable_signaling is called during,
>> + * before, or after fence_signal. To fight this, it is recommended
>> + * that before enable_signaling returns true an extra reference is
>> + * taken on the fence, to be released when the fence is signaled.
>> + * This will mean fence_signal will still be called twice, but
>> + * the second time will be a noop since it was already signaled.
>> + *
>> + * Notes on signaled:
>> + * May set fence->status if returning true.
>> + *
>> + * Notes on wait:
>> + * Must not be NULL, set to fence_default_wait for default implementation.
>> + * The fence_default_wait implementation should work for any fence, as long
>> + * as enable_signaling works correctly.
>> + *
>> + * Must return -ERESTARTSYS if the wait is intr = true and the wait was
>> + * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
>> + * timed out. Can also return other error values on custom implementations,
>> + * which should be treated as if the fence is signaled. For example a hardware
>> + * lockup could be reported like that.
>> + *
>> + * Notes on release:
>> + * Can be NULL, this function allows additional commands to run on
>> + * destruction of the fence. Can be called from irq context.
>> + * If pointer is set to NULL, kfree will get called instead.
>> + */
>> +
>> +struct fence_ops {
>> +     const char * (*get_driver_name)(struct fence *fence);
>> +     const char * (*get_timeline_name)(struct fence *fence);
>> +     bool (*enable_signaling)(struct fence *fence);
>> +     bool (*signaled)(struct fence *fence);
>> +     long (*wait)(struct fence *fence, bool intr, signed long timeout);
>> +     void (*release)(struct fence *fence);
>> +
>> +     int (*fill_driver_data)(struct fence *fence, void *data, int size);
>> +     void (*fence_value_str)(struct fence *fence, char *str, int size);
>> +     void (*timeline_value_str)(struct fence *fence, char *str, int size);
>> +};
>> +
>> +void __fence_init(struct fence *fence, const struct fence_ops *ops,
>> +               spinlock_t *lock, unsigned context, unsigned seqno);
>> +
>> +/**
>> + * fence_get - increases refcount of the fence
>> + * @fence:   [in]    fence to increase refcount of
>> + */
>> +static inline void fence_get(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Why warn?
>
>> +     kref_get(&fence->refcount);
>> +}
>
> Why is this inline?

performance can be critical.. especially if the driver is using this
fence mechanism for internal buffers as well as shared buffers (which
is what I'd like to do to avoid having to deal with two different
fencing mechanisms for shared vs non-shared buffers), since you could
easily have 100's or perhaps 1000's of buffers involved in a submit.

The fence stuff does try to inline as much stuff as possible,
especially critical-path stuff, for this reason.
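
As a rough illustration (hypothetical driver code, not from the patch
set), the per-submit path ends up looking like:

/* per-submit hot path: with hundreds of BOs per submit, an
 * out-of-line call for every fence check adds up */
for (i = 0; i < submit->nr_bos; i++) {
        struct fence *f = submit->bos[i].fence;

        if (f && !fence_is_signaled(f))
                nr_busy++;
}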

>> +
>> +extern void release_fence(struct kref *kref);
>> +
>> +/**
>> + * fence_put - decreases refcount of the fence
>> + * @fence:   [in]    fence to reduce refcount of
>> + */
>> +static inline void fence_put(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Same as above.
>
>> +     kref_put(&fence->refcount, release_fence);
>> +}
>
> Why inline?
>
>> +int fence_signal(struct fence *fence);
>> +int __fence_signal(struct fence *fence);
>
> Let's randomly pick a function to call... :)
>
>> +static inline bool
>> +__fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             __fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
> Oh nice, another __ function :(
>
>> +
>> +/**
>> + * fence_is_signaled - Return an indication if the fence is signaled yet.
>> + * @fence:   [in]    the fence to check
>> + *
>> + * Returns true if the fence was already signaled, false if not. Since this
>> + * function doesn't enable signaling, it is not guaranteed to ever return true
>> + * if fence_add_callback, fence_wait or fence_enable_sw_signaling
>> + * haven't been called before.
>> + *
>> + * It's recommended for seqno fences to call fence_signal when the
>> + * operation is complete, it makes it possible to prevent issues from
>> + * wraparound between time of issue and time of use by checking the return
>> + * value of this function before calling hardware-specific wait instructions.
>> + */
>> +static inline bool
>> +fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
>
> Why inline for all of these, does it really matter for speed?
>
>> +/**
>> + * fence_later - return the chronologically later fence
>> + * @f1:      [in]    the first fence from the same context
>> + * @f2:      [in]    the second fence from the same context
>> + *
>> + * Returns NULL if both fences are signaled, otherwise the fence that would be
>> + * signaled last. Both fences must be from the same context, since a seqno is
>> + * not re-used across contexts.
>> + */
>> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
>> +{
>> +     BUG_ON(f1->context != f2->context);
>
> Nice, you just crashed the kernel, making it impossible to debug or
> recover :(

agreed, that should probably be 'if (WARN_ON(...)) return NULL;'

(but at least I wouldn't expect to hit that under console_lock so you
should at least see the last N lines of the backtrace on the screen
;-))
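
Concretely, the suggested fix would turn the check into something like
this (a sketch, with the rest of the body unchanged):

static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
{
        if (WARN_ON(f1->context != f2->context))
                return NULL;
        /* ... seqno comparison as in the patch ... */
}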

BR,
-R


> Don't do that.
>
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19  6:37       ` Daniel Vetter
  (?)
  (?)
@ 2014-06-19 15:22       ` Colin Cross
  2014-06-19 16:12         ` Maarten Lankhorst
  -1 siblings, 1 reply; 73+ messages in thread
From: Colin Cross @ 2014-06-19 15:22 UTC (permalink / raw)
  To: Greg KH, Maarten Lankhorst, open list:GENERIC INCLUDE/A...,
	thellstrom, lkml, dri-devel, linaro-mm-sig, Rob Clark,
	thierry.reding, Colin Cross, Sumit Semwal,
	open list:DMA BUFFER SHARIN...
  Cc: daniel

On Wed, Jun 18, 2014 at 11:37 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Wed, Jun 18, 2014 at 06:15:56PM -0700, Greg KH wrote:
>> On Wed, Jun 18, 2014 at 12:37:11PM +0200, Maarten Lankhorst wrote:
>> > Just to show it's easy.
>> >
>> > Android syncpoints can be mapped to a timeline. This removes the need
>> > to maintain a separate api for synchronization. I've left the android
>> > trace events in place, but the core fence events should already be
>> > sufficient for debugging.
>> >
>> > v2:
>> > - Call fence_remove_callback in sync_fence_free if not all fences have fired.
>> > v3:
>> > - Merge Colin Cross' bugfixes, and the android fence merge optimization.
>> > v4:
>> > - Merge with the upstream fixes.
>> > v5:
>> > - Fix small style issues pointed out by Thomas Hellstrom.
>> >
>> > Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> > Acked-by: John Stultz <john.stultz@linaro.org>
>> > ---
>> >  drivers/staging/android/Kconfig      |    1
>> >  drivers/staging/android/Makefile     |    2
>> >  drivers/staging/android/sw_sync.c    |    6
>> >  drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
>> >  drivers/staging/android/sync.h       |   79 ++-
>> >  drivers/staging/android/sync_debug.c |  247 +++++++++
>> >  drivers/staging/android/trace/sync.h |   12
>> >  7 files changed, 609 insertions(+), 651 deletions(-)
>> >  create mode 100644 drivers/staging/android/sync_debug.c
>>
>> With these changes, can we pull the android sync logic out of
>> drivers/staging/ now?
>
> Afaik the google guys never really looked at this and acked it. So I'm not
> sure whether they'll follow along. The other issue I have as the
> maintainer of gfx driver is that I don't want to implement support for two
> different sync object primitives (once for dma-buf and once for android
> syncpts), and my impression thus far has been that even with this we're
> not there.

We have tested these patches to use dma fences to back the android
sync driver and have not found any major issues.  However, my understanding
is that dma fences are designed for implicit sync, and explicit sync
through the android sync driver is bolted on the side to share code.
Android is not moving away from explicit sync, but we do wrap all of
our userspace sync accesses through libsync
(https://android.googlesource.com/platform/system/core/+/master/libsync/sync.c,
ignore the sw_sync parts), so if the kernel supported a slightly
different userspace explicit sync interface we could adapt to it
fairly easily.  All we require is that individual kernel drivers be
able to accept work alongside an fd to wait on, and to return an
fd that will signal when the work is done, and that userspace has some
way to merge two of those fds, wait on an fd, and get some debugging
info from an fd.  However, this patch set doesn't do that; it has no
way to export a dma fence as an fd except through the android sync
driver, so it is not yet ready to fully replace android sync.
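
For reference, the userspace contract described above maps onto a
handful of libsync entry points; a sketch from memory
(submit_to_driver_a/b are hypothetical ioctl wrappers, and the exact
signatures should be checked against the linked sync.c):

/* each driver hands back an fd that signals when its work is done */
int a = submit_to_driver_a(buf);
int b = submit_to_driver_b(buf);

/* merge the two fds into one, then wait on the result */
int merged = sync_merge("a+b", a, b);
if (sync_wait(merged, 1000 /* ms */) < 0)
        perror("sync_wait");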

> I'm trying to get our own android guys to upstream their i915 syncpts
> support, but thus far I haven't managed to convince them to throw people's
> time at this.
>
> It looks like a step into the right direction, but until I have the proof
> in the form of i915 patches that I won't have to support 2 gfx fencing
> frameworks I'm opposed to de-staging android syncpts. Ofc someone else
> could do that too, but besides i915 I don't see a full-fledged (modeset
> side only kinda doesn't count) upstream gfx driver shipping on android.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19 12:28         ` Daniel Vetter
@ 2014-06-19 15:35           ` Colin Cross
  2014-06-19 16:34               ` Daniel Vetter
  2014-06-20 20:52             ` Thierry Reding
  1 sibling, 1 reply; 73+ messages in thread
From: Colin Cross @ 2014-06-19 15:35 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thierry Reding, Greg KH, Maarten Lankhorst,
	open list:GENERIC INCLUDE/A...,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Clark, Rob, Sumit Semwal, linux-media

On Thu, Jun 19, 2014 at 5:28 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Thu, Jun 19, 2014 at 1:48 PM, Thierry Reding
> <thierry.reding@gmail.com> wrote:
>>> > With these changes, can we pull the android sync logic out of
>>> > drivers/staging/ now?
>>>
>>> Afaik the google guys never really looked at this and acked it. So I'm not
>>> sure whether they'll follow along. The other issue I have as the
>>> maintainer of gfx driver is that I don't want to implement support for two
>>> different sync object primitives (once for dma-buf and once for android
>>> syncpts), and my impression thus far has been that even with this we're
>>> not there.
>>>
>>> I'm trying to get our own android guys to upstream their i915 syncpts
>>> support, but thus far I haven't managed to convince them to throw people's
>>> time at this.
>>
>> This has been discussed a fair bit internally recently and some of our
>> GPU experts have raised concerns that this may result in seriously
>> degraded performance in our proprietary graphics stack. Now I don't care
>> very much for the proprietary graphics stack, but by extension I would
>> assume that the same restrictions are relevant for any open-source
>> driver as well.
>>
>> I'm still trying to fully understand all the implications and at the
>> same time get some of the people who raised concerns to join in this
>> discussion. As I understand it the concern is mostly about explicit vs.
>> implicit synchronization and having this mechanism in the kernel will
>> implicitly synchronize all accesses to these buffers even in cases where
>> it's not needed (read vs. write locks, etc.). In one particular instance
>> it was even mentioned that this kind of implicit synchronization can
>> lead to deadlocks in some use-cases (this was mentioned for Android
>> compositing, but I suspect that the same may happen for Wayland or X
>> compositors).
>
> Well the implicit fences here actually can't deadlock. That's the
> entire point behind using ww mutexes. I've also heard tons of
> complaints about implicit enforced syncing (especially from opencl
> > people), but in the end drivers can always expose unsynchronized
> access for specific cases. We do that in i915 for upload buffers and
> other fun stuff. This is about shared stuff across different drivers
> and different processes.
>
> > I also expect that i915 will lose implicit syncing in a few upcoming
> hw modes because explicit syncing is a more natural fit there.
>
> All that isn't about the discussion at hand imo since no matter what
> > i915 needs to have one internal representation for a bit of gpu work,
> and afaics right now we don't have that. With this patch android
> syncpts use Maarten's fences internally, but I can't freely exchange
> one for the other. So in i915 I still expect to get stuck with both of
> them, which is one too many.
>
> The other issue (and I haven't dug into details that much) I have with
> syncpts are some of the interface choices. Apparently you can commit a
> fence after creation (or at least the hw composer interface works like
> that) which means userspace can construct deadlocks with android
> syncpts if I'm not super careful in my driver. I haven't seen any
> generic code to do that, so I presume everyone just blindly trusts
> surface-flinger to not do that. Speaks of the average quality of an
> android gfx driver if the kernel is less trusted than the compositor
> in userspace ...

Android sync is designed not to allow userspace to deadlock the
kernel; a sync_pt should only get created by the kernel when it has
received a chunk of work that it expects to complete in the near
future.  The CONFIG_SW_SYNC_USER driver violates that by allowing
userspace to create and signal arbitrary sync points, but that is
intended only for testing sync.
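
In other words, the intended kernel-side pattern is roughly the
following (a sketch; the my_* helpers are hypothetical, while
sync_fence_create/sync_fence_install are as in the android sync code):

/* the pt is created only once a chunk of work is actually queued,
 * so the fd handed to userspace is guaranteed to eventually signal */
pt = my_timeline_pt_create(tl, seqno);
my_hw_queue(job);

fence = sync_fence_create("job", pt);
fd = get_unused_fd();
sync_fence_install(fence, fd);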

> There's a few other things like exposing timestamps (which are tricky
> to do right, our driver is littered with wrong attempts) and other
> details.

Timestamps are necessary for vsync synchronization to reduce the frame latency.

> Finally I've never seen anyone from google or any android product guy
> push a real driver enabling for syncpts to upstream, and google itself
> > has a bit of a history of constantly exchanging their gfx framework for
> the next best thing. So I really doubt this is worthwhile to pursue in
> upstream with our essentially eternal api guarantees. At least until
> we see serious uptake from vendors and gfx driver guys. Unfortunately
> the Intel android folks are no exception here and haven't pushed
> anything like this in my direction yet at all. Despite multiple pokes
> from my side.

As far as I know, every SoC vendor that supports android is using sync
now, but none of them have succeeded in pushing their drivers upstream
for a variety of other reasons (interfaces only used by closed source
userspaces, KMS/DRM vs ADF, ION, etc.).

> So from my side I think we should move ahead with Maarten's work and
> figure the android side out once there's real interest.

As long as that doesn't involve removing the Android sync interfaces
from staging until dma fence fds are supported, that's fine with me.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19 15:22       ` Colin Cross
@ 2014-06-19 16:12         ` Maarten Lankhorst
  0 siblings, 0 replies; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-19 16:12 UTC (permalink / raw)
  To: Colin Cross, Greg KH, open list:GENERIC INCLUDE/A...,
	thellstrom, lkml, dri-devel, linaro-mm-sig, Rob Clark,
	thierry.reding, Sumit Semwal, open list:DMA BUFFER SHARIN...
  Cc: daniel

op 19-06-14 17:22, Colin Cross schreef:
> On Wed, Jun 18, 2014 at 11:37 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Wed, Jun 18, 2014 at 06:15:56PM -0700, Greg KH wrote:
>>> On Wed, Jun 18, 2014 at 12:37:11PM +0200, Maarten Lankhorst wrote:
>>>> Just to show it's easy.
>>>>
>>>> Android syncpoints can be mapped to a timeline. This removes the need
>>>> to maintain a separate api for synchronization. I've left the android
>>>> trace events in place, but the core fence events should already be
>>>> sufficient for debugging.
>>>>
>>>> v2:
>>>> - Call fence_remove_callback in sync_fence_free if not all fences have fired.
>>>> v3:
>>>> - Merge Colin Cross' bugfixes, and the android fence merge optimization.
>>>> v4:
>>>> - Merge with the upstream fixes.
>>>> v5:
>>>> - Fix small style issues pointed out by Thomas Hellstrom.
>>>>
>>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>>>> Acked-by: John Stultz <john.stultz@linaro.org>
>>>> ---
>>>>  drivers/staging/android/Kconfig      |    1
>>>>  drivers/staging/android/Makefile     |    2
>>>>  drivers/staging/android/sw_sync.c    |    6
>>>>  drivers/staging/android/sync.c       |  913 +++++++++++-----------------------
>>>>  drivers/staging/android/sync.h       |   79 ++-
>>>>  drivers/staging/android/sync_debug.c |  247 +++++++++
>>>>  drivers/staging/android/trace/sync.h |   12
>>>>  7 files changed, 609 insertions(+), 651 deletions(-)
>>>>  create mode 100644 drivers/staging/android/sync_debug.c
>>> With these changes, can we pull the android sync logic out of
>>> drivers/staging/ now?
>> Afaik the google guys never really looked at this and acked it. So I'm not
>> sure whether they'll follow along. The other issue I have as the
>> maintainer of gfx driver is that I don't want to implement support for two
>> different sync object primitives (once for dma-buf and once for android
>> syncpts), and my impression thus far has been that even with this we're
>> not there.
> We have tested these patches to use dma fences to back the android
> sync driver and have not found any major issues.  However, my understanding
> is that dma fences are designed for implicit sync, and explicit sync
> through the android sync driver is bolted on the side to share code.
> Android is not moving away from explicit sync, but we do wrap all of
> our userspace sync accesses through libsync
> (https://android.googlesource.com/platform/system/core/+/master/libsync/sync.c,
> ignore the sw_sync parts), so if the kernel supported a slightly
> different userspace explicit sync interface we could adapt to it
> fairly easily.  All we require is that individual kernel drivers be
> able to accept work alongside an fd to wait on, and to return an
> fd that will signal when the work is done, and that userspace has some
> way to merge two of those fds, wait on an fd, and get some debugging
> info from an fd.  However, this patch set doesn't do that; it has no
> way to export a dma fence as an fd except through the android sync
> driver, so it is not yet ready to fully replace android sync.
>
Dma fences can be exported as android fences, I just didn't see a need for it yet. :-)
To wait on all implicit fences attached to a dma-buf one could simply poll the dma-buf directly,
or use something like an android userspace fence.
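
A userspace sketch of polling the dma-buf directly, given the poll
support from patch 6 (my reading of the flag semantics: POLLOUT waits
for all attached fences, i.e. write access; POLLIN only for the
exclusive one):

#include <poll.h>

/* block until it is safe to write to the buffer */
struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLOUT };
int ret = poll(&pfd, 1, timeout_ms);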

sync_fence_create takes a sync_pt as function argument, but I kept that to keep source code
compatibility, not because it uses any sync_pt functions. Here's a patch to create a userspace
fd for dma-fence instead of an android fence, applied on top of "android: convert sync to fence api".

diff --git a/drivers/staging/android/sw_sync.c b/drivers/staging/android/sw_sync.c
index a76db3ff87cb..afc3c63b0438 100644
--- a/drivers/staging/android/sw_sync.c
+++ b/drivers/staging/android/sw_sync.c
@@ -184,7 +184,7 @@ static long sw_sync_ioctl_create_fence(struct sw_sync_timeline *obj,
 	}
 
 	data.name[sizeof(data.name) - 1] = '\0';
-	fence = sync_fence_create(data.name, pt);
+	fence = sync_fence_create(data.name, &pt->base);
 	if (fence == NULL) {
 		sync_pt_free(pt);
 		err = -ENOMEM;
diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
index 70b09b5001ba..c89a6f954e41 100644
--- a/drivers/staging/android/sync.c
+++ b/drivers/staging/android/sync.c
@@ -188,7 +188,7 @@ static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
 }
 
 /* TODO: implement a create which takes more that one sync_pt */
-struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
+struct sync_fence *sync_fence_create(const char *name, struct fence *pt)
 {
 	struct sync_fence *fence;
 
@@ -199,10 +199,10 @@ struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
 	fence->num_fences = 1;
 	atomic_set(&fence->status, 1);
 
-	fence_get(&pt->base);
-	fence->cbs[0].sync_pt = &pt->base;
+	fence_get(pt);
+	fence->cbs[0].sync_pt = pt;
 	fence->cbs[0].fence = fence;
-	if (fence_add_callback(&pt->base, &fence->cbs[0].cb,
+	if (fence_add_callback(pt, &fence->cbs[0].cb,
 			       fence_check_cb_func))
 		atomic_dec(&fence->status);
 
diff --git a/drivers/staging/android/sync.h b/drivers/staging/android/sync.h
index 66b0f431f63e..3020751f3c5d 100644
--- a/drivers/staging/android/sync.h
+++ b/drivers/staging/android/sync.h
@@ -252,7 +252,7 @@ void sync_pt_free(struct sync_pt *pt);
  * Creates a fence containing @pt.  Once this is called, the fence takes
  * ownership of @pt.
  */
-struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt);
+struct sync_fence *sync_fence_create(const char *name, struct fence *fence);
 
 /*
  * API for sync_fence consumers


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19 15:35           ` Colin Cross
@ 2014-06-19 16:34               ` Daniel Vetter
  0 siblings, 0 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-06-19 16:34 UTC (permalink / raw)
  To: Colin Cross
  Cc: Daniel Vetter, Thierry Reding, Greg KH, Maarten Lankhorst,
	open list:GENERIC INCLUDE/A...,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Clark, Rob, Sumit Semwal, linux-media

On Thu, Jun 19, 2014 at 08:35:29AM -0700, Colin Cross wrote:
> On Thu, Jun 19, 2014 at 5:28 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> > On Thu, Jun 19, 2014 at 1:48 PM, Thierry Reding
> > <thierry.reding@gmail.com> wrote:
> >>> > With these changes, can we pull the android sync logic out of
> >>> > drivers/staging/ now?
> >>>
> >>> Afaik the google guys never really looked at this and acked it. So I'm not
> >>> sure whether they'll follow along. The other issue I have as the
> >>> maintainer of gfx driver is that I don't want to implement support for two
> >>> different sync object primitives (once for dma-buf and once for android
> >>> syncpts), and my impression thus far has been that even with this we're
> >>> not there.
> >>>
> >>> I'm trying to get our own android guys to upstream their i915 syncpts
> >>> support, but thus far I haven't managed to convince them to throw people's
> >>> time at this.
> >>
> >> This has been discussed a fair bit internally recently and some of our
> >> GPU experts have raised concerns that this may result in seriously
> >> degraded performance in our proprietary graphics stack. Now I don't care
> >> very much for the proprietary graphics stack, but by extension I would
> >> assume that the same restrictions are relevant for any open-source
> >> driver as well.
> >>
> >> I'm still trying to fully understand all the implications and at the
> >> same time get some of the people who raised concerns to join in this
> >> discussion. As I understand it the concern is mostly about explicit vs.
> >> implicit synchronization and having this mechanism in the kernel will
> >> implicitly synchronize all accesses to these buffers even in cases where
> >> it's not needed (read vs. write locks, etc.). In one particular instance
> >> it was even mentioned that this kind of implicit synchronization can
> >> lead to deadlocks in some use-cases (this was mentioned for Android
> >> compositing, but I suspect that the same may happen for Wayland or X
> >> compositors).
> >
> > Well the implicit fences here actually can't deadlock. That's the
> > entire point behind using ww mutexes. I've also heard tons of
> > complaints about implicit enforced syncing (especially from opencl
> > people), but in the end drivers can always expose unsynchronized
> > access for specific cases. We do that in i915 for upload buffers and
> > other fun stuff. This is about shared stuff across different drivers
> > and different processes.
> >
> > I also expect that i915 will lose implicit syncing in a few upcoming
> > hw modes because explicit syncing is a more natural fit there.
> >
> > All that isn't about the discussion at hand imo since no matter what
> > i915 needs to have one internal representation for a bit of gpu work,
> > and afaics right now we don't have that. With this patch android
> > syncpts use Maarten's fences internally, but I can't freely exchange
> > one for the other. So in i915 I still expect to get stuck with both of
> > them, which is one too many.
> >
> > The other issue (and I haven't dug into details that much) I have with
> > syncpts are some of the interface choices. Apparently you can commit a
> > fence after creation (or at least the hw composer interface works like
> > that) which means userspace can construct deadlocks with android
> > syncpts if I'm not super careful in my driver. I haven't seen any
> > generic code to do that, so I presume everyone just blindly trusts
> > surface-flinger to not do that. Speaks of the average quality of an
> > android gfx driver if the kernel is less trusted than the compositor
> > in userspace ...
> 
> Android sync is designed not to allow userspace to deadlock the
> kernel; a sync_pt should only get created by the kernel when it has
> received a chunk of work that it expects to complete in the near
> future.  The CONFIG_SW_SYNC_USER driver violates that by allowing
> userspace to create and signal arbitrary sync points, but that is
> intended only for testing sync.

Ok, that makes sense. As long as we sufficiently taint the kernel and hide
the sw_sync framework we should be good. I was confused by the hw composer
interface spec which seemed to suggest that the fences for a screen update
completion should be attached before surfaceflinger commits the state. But
I never looked at an implementation so I guess that impression is wrong.

> > There's a few other things like exposing timestamps (which are tricky
> > to do right, our driver is littered with wrong attempts) and other
> > details.
> 
> Timestamps are necessary for vsync synchronization to reduce the frame latency.

I'm not against timestamps (we have them for drm vblank events too after
all), I'd just like them to be optional. And we need to give
userspace a very clear indication of which hw clock the timestamp was based on
(or whether we're using the clock_monotonic system clock) to make sure
debug and profiling tools can properly align things. Because hw clocks
always get out of sync. Last time I looked at the syncpt ioctls that
part was missing.

> > Finally I've never seen anyone from google or any android product guy
> > push a real driver enabling for syncpts to upstream, and google itself
> > has a bit of a history of constantly exchanging their gfx framework for
> > the next best thing. So I really doubt this is worthwhile to pursue in
> > upstream with our essentially eternal api guarantees. At least until
> > we see serious uptake from vendors and gfx driver guys. Unfortunately
> > the Intel android folks are no exception here and haven't pushed
> > anything like this in my direction yet at all. Despite multiple pokes
> > from my side.
> 
> As far as I know, every SoC vendor that supports android is using sync
> now, but none of them have succeeded in pushing their drivers upstream
> for a variety of other reasons (interfaces only used by closed source
> userspaces, KMS/DRM vs ADF, ION, etc.).

Yeah I know, and it double-frustrates me that I haven't yet managed to
drag our own team into the public. Except for a few bolt-on hacks to make
deadlines, the android driver is the upstream one ...

> > So from my side I think we should move ahead with Maarten's work and
> > figure the android side out once there's real interest.
> 
> As long as that doesn't involve removing the Android sync interfaces
> from staging until dma fence fds are supported, that's fine with me.

Nah, definitely not asking for that. I'd just like to have a real
implementation in a real driver merged upstream and the userspace/kernel ABI
reviewed a bit before I want to sign up for something we'll need to keep
working forever. E.g. I also think we should expose the actual waiting
through poll and friends for neater integration with the usual display
toolkit event loops. egl shies away from such platform-specific stuff, but
we can do it.

Like I've said, I only want to have to deal with one fence primitive in the
lower levels of my driver for scheduling and synchronization and all that.
The fence proposal fits the bill from my pov since the implicit
synchronization with dma-bufs is optional. But the integration with
android syncpts seems to not yet be there really.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 14:00     ` Rob Clark
@ 2014-06-19 17:00         ` Greg KH
  0 siblings, 0 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19 17:00 UTC (permalink / raw)
  To: Rob Clark
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >> +#define CREATE_TRACE_POINTS
> >> +#include <trace/events/fence.h>
> >> +
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
> >
> > Are you really willing to live with these as tracepoints for forever?
> > What is the use of them in debugging?  Was it just for debugging the
> > fence code, or for something else?
> >
> >> +/**
> >> + * fence_context_alloc - allocate an array of fence contexts
> >> + * @num:     [in]    amount of contexts to allocate
> >> + *
> >> + * This function will return the first index of the number of fences allocated.
> >> + * The fence context is used for setting fence->context to a unique number.
> >> + */
> >> +unsigned fence_context_alloc(unsigned num)
> >> +{
> >> +     BUG_ON(!num);
> >> +     return atomic_add_return(num, &fence_context_counter) - num;
> >> +}
> >> +EXPORT_SYMBOL(fence_context_alloc);
> >
> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> > Traditionally all of the driver core exports have been with this
> > marking, any objection to making that change here as well?
> 
> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
> life.  We already went through this debate once with dma-buf.  We
> aren't going to change $evil_vendor's mind about non-gpl modules.  The
> only result will be a more flugly convoluted solution (ie. use syncpt
> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
> workaround, with the result that no-one benefits.

It has been proven that using _GPL() exports has caused companies to
release their code "properly" over the years, so as these really are
Linux-only apis, please change them to be marked this way, it helps
everyone out in the end.
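
I.e. instead of:

	EXPORT_SYMBOL(fence_context_alloc);

mark them as:

	EXPORT_SYMBOL_GPL(fence_context_alloc);

so that only modules declaring a GPL-compatible MODULE_LICENSE() can
link against these symbols.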

> >> +int __fence_signal(struct fence *fence)
> >> +{
> >> +     struct fence_cb *cur, *tmp;
> >> +     int ret = 0;
> >> +
> >> +     if (WARN_ON(!fence))
> >> +             return -EINVAL;
> >> +
> >> +     if (!ktime_to_ns(fence->timestamp)) {
> >> +             fence->timestamp = ktime_get();
> >> +             smp_mb__before_atomic();
> >> +     }
> >> +
> >> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> >> +             ret = -EINVAL;
> >> +
> >> +             /*
> >> +              * we might have raced with the unlocked fence_signal,
> >> +              * still run through all callbacks
> >> +              */
> >> +     } else
> >> +             trace_fence_signaled(fence);
> >> +
> >> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
> >> +             list_del_init(&cur->node);
> >> +             cur->func(fence, cur);
> >> +     }
> >> +     return ret;
> >> +}
> >> +EXPORT_SYMBOL(__fence_signal);
> >
> > Don't export a function with __ in front of it, you are saying that an
> > internal function is somehow "valid" for everyone else to call?  Why?
> > You aren't even documenting the function, so who knows how to use it?
> 
> fwiw, the __ versions appear to mainly be concessions for android
> syncpt.  That is the only user outside of fence.c, and it should stay
> that way.

How are you going to "ensure" this?  And where did you document it?
Please fix this up; it's a horrid way to create a new API.

If the android code needs to be fixed to fit into this model, then fix
it.

> >> +void
> >> +__fence_init(struct fence *fence, const struct fence_ops *ops,
> >> +          spinlock_t *lock, unsigned context, unsigned seqno)
> >> +{
> >> +     BUG_ON(!lock);
> >> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
> >> +            !ops->get_driver_name || !ops->get_timeline_name);
> >> +
> >> +     kref_init(&fence->refcount);
> >> +     fence->ops = ops;
> >> +     INIT_LIST_HEAD(&fence->cb_list);
> >> +     fence->lock = lock;
> >> +     fence->context = context;
> >> +     fence->seqno = seqno;
> >> +     fence->flags = 0UL;
> >> +
> >> +     trace_fence_init(fence);
> >> +}
> >> +EXPORT_SYMBOL(__fence_init);
> >
> > Again with the __ exported function...
> >
> > I don't even see a fence_init() function anywhere, why the __ ?
> >
> 
> think of it as a 'protected' constructor.. only the derived fence
> subclass should call.

Where do you say this?  Again, not a good reason; fix up the API, please.

> >> +     kref_get(&fence->refcount);
> >> +}
> >
> > Why is this inline?
> 
> performance can be critical.. especially if the driver is using this
> fence mechanism for internal buffers as well as shared buffers (which
> is what I'd like to do to avoid having to deal with two different
> fencing mechanisms for shared vs non-shared buffers), since you could
> easily have 100's or perhaps 1000's of buffers involved in a submit.

"can be".  Did you actually measure it?  Please do so.

> The fence stuff does try to inline as much stuff as possible,
> especially critical-path stuff, for this reason.

Inlining code doesn't always mean "faster"; in fact, on lots of
processors and with "large" inline functions, the opposite is true.  So
only do so if you can measure it.
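
Even something crude would do.  E.g. a throwaway sketch like the
following, timing a million calls and comparing the numbers with
fence_get() marked inline vs. out-of-line (fence_put() as the matching
unref is my assumption about this API):

	static void fence_get_bench(struct fence *f)
	{
		s64 t0, t1;
		int i;

		t0 = ktime_to_ns(ktime_get());
		for (i = 0; i < 1000000; i++)
			fence_get(f);
		t1 = ktime_to_ns(ktime_get());
		pr_info("1M fence_get() calls: %lld ns\n", t1 - t0);

		/* drop the references we just took */
		for (i = 0; i < 1000000; i++)
			fence_put(f);
	}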

> >> +/**
> >> + * fence_later - return the chronologically later fence
> >> + * @f1:      [in]    the first fence from the same context
> >> + * @f2:      [in]    the second fence from the same context
> >> + *
> >> + * Returns NULL if both fences are signaled, otherwise the fence that would be
> >> + * signaled last. Both fences must be from the same context, since a seqno is
> >> + * not re-used across contexts.
> >> + */
> >> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
> >> +{
> >> +     BUG_ON(f1->context != f2->context);
> >
> > Nice, you just crashed the kernel, making it impossible to debug or
> > recover :(
> 
> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> 
> (but at least I wouldn't expect to hit that under console_lock so you
> should at least see the last N lines of the backtrace on the screen
> ;-))

Lots of devices don't have console screens :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 17:00         ` Greg KH
@ 2014-06-19 17:45           ` Rob Clark
  -1 siblings, 0 replies; 73+ messages in thread
From: Rob Clark @ 2014-06-19 17:45 UTC (permalink / raw)
  To: Greg KH
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> >> +#define CREATE_TRACE_POINTS
>> >> +#include <trace/events/fence.h>
>> >> +
>> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>> >
>> > Are you really willing to live with these as tracepoints for forever?
>> > What is the use of them in debugging?  Was it just for debugging the
>> > fence code, or for something else?
>> >
>> >> +/**
>> >> + * fence_context_alloc - allocate an array of fence contexts
>> >> + * @num:     [in]    amount of contexts to allocate
>> >> + *
>> >> + * This function will return the first index of the number of fences allocated.
>> >> + * The fence context is used for setting fence->context to a unique number.
>> >> + */
>> >> +unsigned fence_context_alloc(unsigned num)
>> >> +{
>> >> +     BUG_ON(!num);
>> >> +     return atomic_add_return(num, &fence_context_counter) - num;
>> >> +}
>> >> +EXPORT_SYMBOL(fence_context_alloc);
>> >
>> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> > Traditionally all of the driver core exports have been with this
>> > marking, any objection to making that change here as well?
>>
>> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> life.  We already went through this debate once with dma-buf.  We
>> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> only result will be a more flugly convoluted solution (ie. use syncpt
>> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> workaround, with the result that no-one benefits.
>
> It has been proven that using _GPL() exports has caused companies to
> release their code "properly" over the years, so as these really are
> Linux-only APIs, please change them to be marked this way; it helps
> everyone out in the end.

Well, maybe that is true in some cases.  But it certainly didn't
work out that way for dma-buf.  And I think the end result is worse.

I don't really like coming down on the side of EXPORT_SYMBOL() instead
of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
result will only be creative workarounds using the _GPL symbols
indirectly by whatever is available via EXPORT_SYMBOL().  I don't
really see how that will be better.

>> >> +int __fence_signal(struct fence *fence)
>> >> +{
>> >> +     struct fence_cb *cur, *tmp;
>> >> +     int ret = 0;
>> >> +
>> >> +     if (WARN_ON(!fence))
>> >> +             return -EINVAL;
>> >> +
>> >> +     if (!ktime_to_ns(fence->timestamp)) {
>> >> +             fence->timestamp = ktime_get();
>> >> +             smp_mb__before_atomic();
>> >> +     }
>> >> +
>> >> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> >> +             ret = -EINVAL;
>> >> +
>> >> +             /*
>> >> +              * we might have raced with the unlocked fence_signal,
>> >> +              * still run through all callbacks
>> >> +              */
>> >> +     } else
>> >> +             trace_fence_signaled(fence);
>> >> +
>> >> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> >> +             list_del_init(&cur->node);
>> >> +             cur->func(fence, cur);
>> >> +     }
>> >> +     return ret;
>> >> +}
>> >> +EXPORT_SYMBOL(__fence_signal);
>> >
>> > Don't export a function with __ in front of it, you are saying that an
>> > internal function is somehow "valid" for everyone else to call?  Why?
>> > You aren't even documenting the function, so who knows how to use it?
>>
>> fwiw, the __ versions appear to mainly be concessions for android
>> syncpt.  That is the only user outside of fence.c, and it should stay
>> that way.
>
> How are you going to "ensure" this?  And where did you document it?
> Please fix this up; it's a horrid way to create a new API.
>
> If the android code needs to be fixed to fit into this model, then fix
> it.

heh, and in fact I was wrong about this.. the __ versions are actually
for when the lock is already held.  Maarten needs to rename them (i.e.
a _locked suffix) and add some API docs for this.
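
Something along these lines (hypothetical name, just to illustrate the
convention; the body would stay what __fence_signal is today):

	/**
	 * fence_signal_locked - signal completion of a fence
	 * @fence: the fence to signal
	 *
	 * Same as fence_signal(), except the caller must already hold
	 * fence->lock.  Returns -EINVAL if @fence was already signaled.
	 */
	int fence_signal_locked(struct fence *fence);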

>> >> +void
>> >> +__fence_init(struct fence *fence, const struct fence_ops *ops,
>> >> +          spinlock_t *lock, unsigned context, unsigned seqno)
>> >> +{
>> >> +     BUG_ON(!lock);
>> >> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
>> >> +            !ops->get_driver_name || !ops->get_timeline_name);
>> >> +
>> >> +     kref_init(&fence->refcount);
>> >> +     fence->ops = ops;
>> >> +     INIT_LIST_HEAD(&fence->cb_list);
>> >> +     fence->lock = lock;
>> >> +     fence->context = context;
>> >> +     fence->seqno = seqno;
>> >> +     fence->flags = 0UL;
>> >> +
>> >> +     trace_fence_init(fence);
>> >> +}
>> >> +EXPORT_SYMBOL(__fence_init);
>> >
>> > Again with the __ exported function...
>> >
>> > I don't even see a fence_init() function anywhere, why the __ ?
>> >
>>
>> think of it as a 'protected' constructor.. only the derived fence
>> subclass should call.
>
> Where do you say this?  Again, not a good reason; fix up the API, please.
>
>> >> +     kref_get(&fence->refcount);
>> >> +}
>> >
>> > Why is this inline?
>>
>> performance can be critical.. especially if the driver is using this
>> fence mechanism for internal buffers as well as shared buffers (which
>> is what I'd like to do to avoid having to deal with two different
>> fencing mechanisms for shared vs non-shared buffers), since you could
>> easily have 100's or perhaps 1000's of buffers involved in a submit.
>
> "can be".  Did you actually measure it?  Please do so.

"can be" == "depends on how the driver uses fence".  Ie. if it is only
used for buffers that are shared between devices, I wouldn't expect it
to matter.

I believe that Maarten did some measurements at some point, which
might be relevant..

At any rate, fences certainly aren't something new to graphics
drivers.  And at least in the case of TTM we are replacing code that
is directly inline with various parts of TTM with calls to static
inline fence fxns.  So for those starting w/ static-inline seems more
reasonable.. to *not* inline would be the bigger deviation from how
things work now.

>> The fence stuff does try to inline as much stuff as possible,
>> especially critical-path stuff, for this reason.
>
> Inlining code doesn't always mean "faster"; in fact, on lots of
> processors and with "large" inline functions, the opposite is true.  So
> only do so if you can measure it.

fair enough.. although the inline fxns here are not particularly large,
and since this is something we are retrofitting to TTM, I think it is
not unreasonable to start with static-inline until proven otherwise
;-)
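
(For context, the inline in question is basically just the refcount bump
quoted above - reconstructed here from that fragment, so take the exact
signature with a grain of salt:

	static inline void fence_get(struct fence *fence)
	{
		kref_get(&fence->refcount);
	}

hard to see the call/return overhead of an out-of-line version paying
for itself on something that small.)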

BR,
-R

>> >> +/**
>> >> + * fence_later - return the chronologically later fence
>> >> + * @f1:      [in]    the first fence from the same context
>> >> + * @f2:      [in]    the second fence from the same context
>> >> + *
>> >> + * Returns NULL if both fences are signaled, otherwise the fence that would be
>> >> + * signaled last. Both fences must be from the same context, since a seqno is
>> >> + * not re-used across contexts.
>> >> + */
>> >> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
>> >> +{
>> >> +     BUG_ON(f1->context != f2->context);
>> >
>> > Nice, you just crashed the kernel, making it impossible to debug or
>> > recover :(
>>
>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
>>
>> (but at least I wouldn't expect to hit that under console_lock so you
>> should at least see the last N lines of the backtrace on the screen
>> ;-))
>
> Lots of devices don't have console screens :)
>
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 17:45           ` Rob Clark
  (?)
@ 2014-06-19 18:19           ` Greg KH
  2014-06-19 18:37             ` James Bottomley
                               ` (3 more replies)
  -1 siblings, 4 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19 18:19 UTC (permalink / raw)
  To: Rob Clark
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >> >> +#define CREATE_TRACE_POINTS
> >> >> +#include <trace/events/fence.h>
> >> >> +
> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
> >> >
> >> > Are you really willing to live with these as tracepoints for forever?
> >> > What is the use of them in debugging?  Was it just for debugging the
> >> > fence code, or for something else?
> >> >
> >> >> +/**
> >> >> + * fence_context_alloc - allocate an array of fence contexts
> >> >> + * @num:     [in]    amount of contexts to allocate
> >> >> + *
> >> >> + * This function will return the first index of the number of fences allocated.
> >> >> + * The fence context is used for setting fence->context to a unique number.
> >> >> + */
> >> >> +unsigned fence_context_alloc(unsigned num)
> >> >> +{
> >> >> +     BUG_ON(!num);
> >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
> >> >> +}
> >> >> +EXPORT_SYMBOL(fence_context_alloc);
> >> >
> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> >> > Traditionally all of the driver core exports have been with this
> >> > marking, any objection to making that change here as well?
> >>
> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
> >> life.  We already went through this debate once with dma-buf.  We
> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
> >> only result will be a more flugly convoluted solution (ie. use syncpt
> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
> >> workaround, with the result that no-one benefits.
> >
> > It has been proven that using _GPL() exports has caused companies to
> > release their code "properly" over the years, so as these really are
> > Linux-only APIs, please change them to be marked this way; it helps
> > everyone out in the end.
> 
> Well, maybe that is true in some cases.  But it certainly didn't
> work out that way for dma-buf.  And I think the end result is worse.
> 
> I don't really like coming down on the side of EXPORT_SYMBOL() instead
> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
> result will only be creative workarounds using the _GPL symbols
> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
> really see how that will be better.

You are saying that you _know_ companies will violate our license, so
you should just "give up"?  And how do you know people aren't working on
preventing those "indirect" usages as well?  :)

Sorry, I'm not going to give up here; again, it has proven to work in
the past in changing the ways of _very_ large companies, so why stop now?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 18:19           ` Greg KH
@ 2014-06-19 18:37             ` James Bottomley
  2014-06-19 18:52               ` Rob Clark
                               ` (2 subsequent siblings)
  3 siblings, 0 replies; 73+ messages in thread
From: James Bottomley @ 2014-06-19 18:37 UTC (permalink / raw)
  To: Greg KH
  Cc: Rob Clark, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Thu, 2014-06-19 at 11:19 -0700, Greg KH wrote:
> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
> > On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
> > >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> > >> >> +#define CREATE_TRACE_POINTS
> > >> >> +#include <trace/events/fence.h>
> > >> >> +
> > >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> > >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
> > >> >
> > >> > Are you really willing to live with these as tracepoints for forever?
> > >> > What is the use of them in debugging?  Was it just for debugging the
> > >> > fence code, or for something else?
> > >> >
> > >> >> +/**
> > >> >> + * fence_context_alloc - allocate an array of fence contexts
> > >> >> + * @num:     [in]    amount of contexts to allocate
> > >> >> + *
> > >> >> + * This function will return the first index of the number of fences allocated.
> > >> >> + * The fence context is used for setting fence->context to a unique number.
> > >> >> + */
> > >> >> +unsigned fence_context_alloc(unsigned num)
> > >> >> +{
> > >> >> +     BUG_ON(!num);
> > >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
> > >> >> +}
> > >> >> +EXPORT_SYMBOL(fence_context_alloc);
> > >> >
> > >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> > >> > Traditionally all of the driver core exports have been with this
> > >> > marking, any objection to making that change here as well?
> > >>
> > >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
> > >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
> > >> life.  We already went through this debate once with dma-buf.  We
> > >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
> > >> only result will be a more flugly convoluted solution (ie. use syncpt
> > >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
> > >> workaround, with the result that no-one benefits.
> > >
> > > It has been proven that using _GPL() exports has caused companies to
> > > release their code "properly" over the years, so as these really are
> > > Linux-only APIs, please change them to be marked this way; it helps
> > > everyone out in the end.
> > 
> > Well, maybe that is the true in some cases.  But it certainly didn't
> > work out that way for dma-buf.  And I think the end result is worse.
> > 
> > I don't really like coming down on the side of EXPORT_SYMBOL() instead
> > of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
> > result will only be creative workarounds using the _GPL symbols
> > indirectly by whatever is available via EXPORT_SYMBOL().  I don't
> > really see how that will be better.
> 
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)
> 
> Sorry, I'm not going to give up here, again, it has proven to work in
> the past in changing the ways of _very_ large companies, why stop now?

When you try to train a dog, you have to be consistent about it.  We're
fantastically inconsistent in symbol exports.

For instance, the mutex primitives are all EXPORT_SYMBOL(), so we're
telling proprietary modules they can use them.  However, when the kernel
is built with CONFIG_DEBUG_MUTEXES, they all become
EXPORT_SYMBOL_GPL() ... what kind of crazy message is that supposed to
send?  It's OK to use mutexes, but it's potentially a GPL violation to
debug them?

We either need to decide that we have a defined and consistent part of
our API that's GPL-only, or make the bold statement that we don't have
any part of our API that's usable by non-GPL modules.  Right now we do
neither, and it confuses people no end about what is and isn't allowed.

James



^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 18:19           ` Greg KH
@ 2014-06-19 18:52               ` Rob Clark
  2014-06-19 18:52               ` Rob Clark
                                 ` (2 subsequent siblings)
  3 siblings, 0 replies; 73+ messages in thread
From: Rob Clark @ 2014-06-19 18:52 UTC (permalink / raw)
  To: Greg KH
  Cc: Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Daniel Vetter, Sumit Semwal,
	linux-media

On Thu, Jun 19, 2014 at 2:19 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
>> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> >> >> +#define CREATE_TRACE_POINTS
>> >> >> +#include <trace/events/fence.h>
>> >> >> +
>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>> >> >
>> >> > Are you really willing to live with these as tracepoints for forever?
>> >> > What is the use of them in debugging?  Was it just for debugging the
>> >> > fence code, or for something else?
>> >> >
>> >> >> +/**
>> >> >> + * fence_context_alloc - allocate an array of fence contexts
>> >> >> + * @num:     [in]    amount of contexts to allocate
>> >> >> + *
>> >> >> + * This function will return the first index of the number of fences allocated.
>> >> >> + * The fence context is used for setting fence->context to a unique number.
>> >> >> + */
>> >> >> +unsigned fence_context_alloc(unsigned num)
>> >> >> +{
>> >> >> +     BUG_ON(!num);
>> >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
>> >> >> +}
>> >> >> +EXPORT_SYMBOL(fence_context_alloc);
>> >> >
>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> >> > Traditionally all of the driver core exports have been with this
>> >> > marking, any objection to making that change here as well?
>> >>
>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> >> life.  We already went through this debate once with dma-buf.  We
>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> >> only result will be a more flugly convoluted solution (ie. use syncpt
>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> >> workaround, with the result that no-one benefits.
>> >
>> > It has been proven that using _GPL() exports has caused companies to
>> > release their code "properly" over the years, so as these really are
>> > Linux-only APIs, please change them to be marked this way; it helps
>> > everyone out in the end.
>>
>> Well, maybe that is true in some cases.  But it certainly didn't
>> work out that way for dma-buf.  And I think the end result is worse.
>>
>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>> result will only be creative workarounds using the _GPL symbols
>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>> really see how that will be better.
>
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)

Well, all I know is what happened with dma-buf.  This seems like the
exact same scenario (same vendor, same driver, same use-case).

Not really sure how we could completely prevent indirect usage, given
that drm core and many of the drivers are dual MIT/GPL.  (But ofc,
IANAL.)

> Sorry, I'm not going to give up here; again, it has proven to work in
> the past in changing the ways of _very_ large companies, so why stop now?

In the general case, I would agree.  But in this specific case, I am
not very optimistic.

That said, it isn't really my loss if it is _GPL()..  I don't have to
use or support that particular driver.  But given that we have some
history from the same debate with dma-buf, I think it is pretty easy
to infer the result from making fence EXPORT_SYMBOL_GPL().

BR,
-R

> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 17:00         ` Greg KH
  (?)
  (?)
@ 2014-06-19 19:15         ` Daniel Vetter
  2014-06-19 20:01             ` Greg KH
  -1 siblings, 1 reply; 73+ messages in thread
From: Daniel Vetter @ 2014-06-19 19:15 UTC (permalink / raw)
  To: Greg KH
  Cc: Rob Clark, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Sumit Semwal, linux-media

On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> +     BUG_ON(f1->context != f2->context);
>> >
>> > Nice, you just crashed the kernel, making it impossible to debug or
>> > recover :(
>>
>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
>>
>> (but at least I wouldn't expect to hit that under console_lock so you
>> should at least see the last N lines of the backtrace on the screen
>> ;-))
>
> Lots of devices don't have console screens :)

Aside: This is a pet peeve of mine, and recently I've switched to
rejecting all patches that have a BUG_ON, period. Unless you can
prove that the kernel will die in the next few lines and there's
nothing you can do about it, a WARN_ON is always better - I've wasted
_way_ too much time debugging hard hangs because such a "benign"
BUG_ON ended up eating my irq handler or a spinlock required by it.
Or some other nonsense that makes debugging a royal pita, especially
if your remote debugger consists of a frustrated user staring at a
hung machine.

</rant>
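
For the check that started this sub-thread, the WARN_ON variant Rob
suggested would look roughly like this (a sketch only - the seqno
comparison is my guess at how "later" gets decided, and it assumes a
fence_is_signaled() helper):

	static inline struct fence *fence_later(struct fence *f1,
						struct fence *f2)
	{
		if (WARN_ON(f1->context != f2->context))
			return NULL;	/* scream, but keep running */

		/* seqnos in one context never diverge by more than INT_MAX */
		if (f2->seqno - f1->seqno <= INT_MAX)
			return fence_is_signaled(f2) ? NULL : f2;
		else
			return fence_is_signaled(f1) ? NULL : f1;
	}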

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 18:19           ` Greg KH
@ 2014-06-19 19:20               ` Daniel Vetter
  2014-06-19 18:52               ` Rob Clark
                                 ` (2 subsequent siblings)
  3 siblings, 0 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-06-19 19:20 UTC (permalink / raw)
  To: Greg KH, Dave Airlie
  Cc: Rob Clark, Maarten Lankhorst, open list:GENERIC INCLUDE/A...,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Thierry Reding, Colin Cross, Sumit Semwal,
	linux-media

On Thu, Jun 19, 2014 at 8:19 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> >> > Traditionally all of the driver core exports have been with this
>> >> > marking, any objection to making that change here as well?
>> >>
>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> >> life.  We already went through this debate once with dma-buf.  We
>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> >> only result will be a more flugly convoluted solution (ie. use syncpt
>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> >> workaround, with the result that no-one benefits.
>> >
>> > It has been proven that using _GPL() exports has caused companies to
>> > release their code "properly" over the years, so as these really are
>> > Linux-only APIs, please change them to be marked this way; it helps
>> > everyone out in the end.
>>
>> Well, maybe that is the true in some cases.  But it certainly didn't
>> work out that way for dma-buf.  And I think the end result is worse.
>>
>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>> result will only be creative workarounds using the _GPL symbols
>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>> really see how that will be better.
>
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)
>
> Sorry, I'm not going to give up here; again, it has proven to work in
> the past in changing the ways of _very_ large companies, so why stop now?

Dave should chime in here, since currently dma-buf is _GPL and the
drm_prime.c wrapper for it is not (and he merged that one, contributed
from said $vendor). And since we're gfx people, everything we do is MIT
licensed (that's where X is from, after all), so _GPL for drm stuff
really doesn't make a lot of sense for us. IANAL and all that applies.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 19:15         ` Daniel Vetter
@ 2014-06-19 20:01             ` Greg KH
  0 siblings, 0 replies; 73+ messages in thread
From: Greg KH @ 2014-06-19 20:01 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Rob Clark, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Sumit Semwal, linux-media

On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >> >> +     BUG_ON(f1->context != f2->context);
> >> >
> >> > Nice, you just crashed the kernel, making it impossible to debug or
> >> > recover :(
> >>
> >> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> >>
> >> (but at least I wouldn't expect to hit that under console_lock so you
> >> should at least see the last N lines of the backtrace on the screen
> >> ;-))
> >
> > Lots of devices don't have console screens :)
> 
> Aside: This is a pet peeve of mine and recently I've switched to
> rejecting all patch that have a BUG_ON, period.

Please do; I have been for a few years now as well, for the same
reasons you cite.

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 18:19           ` Greg KH
@ 2014-06-19 21:50               ` Dave Airlie
  2014-06-19 18:52               ` Rob Clark
                                 ` (2 subsequent siblings)
  3 siblings, 0 replies; 73+ messages in thread
From: Dave Airlie @ 2014-06-19 21:50 UTC (permalink / raw)
  To: Greg KH
  Cc: Rob Clark, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig, Colin Cross,
	linux-media

On 20 June 2014 04:19, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
>> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> >> >> +#define CREATE_TRACE_POINTS
>> >> >> +#include <trace/events/fence.h>
>> >> >> +
>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>> >> >
>> >> > Are you really willing to live with these as tracepoints for forever?
>> >> > What is the use of them in debugging?  Was it just for debugging the
>> >> > fence code, or for something else?
>> >> >
>> >> >> +/**
>> >> >> + * fence_context_alloc - allocate an array of fence contexts
>> >> >> + * @num:     [in]    amount of contexts to allocate
>> >> >> + *
>> >> >> + * This function will return the first index of the number of fences allocated.
>> >> >> + * The fence context is used for setting fence->context to a unique number.
>> >> >> + */
>> >> >> +unsigned fence_context_alloc(unsigned num)
>> >> >> +{
>> >> >> +     BUG_ON(!num);
>> >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
>> >> >> +}
>> >> >> +EXPORT_SYMBOL(fence_context_alloc);
>> >> >
>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> >> > Traditionally all of the driver core exports have been with this
>> >> > marking, any objection to making that change here as well?
>> >>
>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> >> life.  We already went through this debate once with dma-buf.  We
>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> >> only result will be a more fugly, convoluted solution (i.e. use syncpt
>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> >> workaround, with the result that no-one benefits.
>> >
>> > It has been proven that using _GPL() exports has caused companies to
>> > release their code "properly" over the years, so as these really are
>> > Linux-only APIs, please change them to be marked this way, it helps
>> > everyone out in the end.
>>
>> Well, maybe that is true in some cases.  But it certainly didn't
>> work out that way for dma-buf.  And I think the end result is worse.
>>
>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>> result will only be creative workarounds using the _GPL symbols
>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>> really see how that will be better.
>
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)
>
> Sorry, I'm not going to give up here, again, it has proven to work in
> the past in changing the ways of _very_ large companies, why stop now?

I've found that large companies shipping lots of hw putting pressure on
other large/small companies seems to be the only way this has ever
happened; we'd like to cover that up and say it's some great GPL
enforcement thing.

To be honest, I'd treat this as the author's choice.

Personally I think _GPL is broken by design, and that Linus's initial
point for them has been so diluted by random lobby groups asking for
every symbol to be _GPL that they are becoming effectively pointless
now. I also dislike the fact that the lobby groups don't just bring
violators to court. I'm also sure someone like the LF could have a
nice income stream if Linus gave them permission to enforce his
copyrights.

But anyway, has someone checked that iOS or Windows don't have a
fence interface? So we know that this is a Linux-only interface and
any works using it are derived? Say the nvidia driver isn't a derived
work now, will using this interface magically translate it into a
derived work, so we can go sue them? I don't think so.

But it's up to Maarten and Rob, and if they say no _GPL then I don't
think we should be overriding the authors' intent.

Dave.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 20:01             ` Greg KH
  (?)
@ 2014-06-19 22:39             ` H. Peter Anvin
  2014-06-19 23:08                 ` James Bottomley
                                 ` (2 more replies)
  -1 siblings, 3 replies; 73+ messages in thread
From: H. Peter Anvin @ 2014-06-19 22:39 UTC (permalink / raw)
  To: Greg KH, Daniel Vetter
  Cc: Rob Clark, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig,
	Thierry Reding, Colin Cross, Sumit Semwal, linux-media

On 06/19/2014 01:01 PM, Greg KH wrote:
> On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
>> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>> +     BUG_ON(f1->context != f2->context);
>>>>>
>>>>> Nice, you just crashed the kernel, making it impossible to debug or
>>>>> recover :(
>>>>
>>>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
>>>>
>>>> (but at least I wouldn't expect to hit that under console_lock so you
>>>> should at least see the last N lines of the backtrace on the screen
>>>> ;-))
>>>
>>> Lots of devices don't have console screens :)
>>
>> Aside: This is a pet peeve of mine and recently I've switched to
> >> rejecting all patches that have a BUG_ON, period.
> 
> Please do, I have been for a few years now as well for the same reasons
> you cite.
> 

I'm actually concerned about this trend.  Downgrading things to WARN_ON
can allow a security bug in the kernel to continue to exist, for
example, or make the error message disappear.

I am wondering if the right thing here isn't to have a user (command
line?) settable policy as to how to proceed on an assert violation,
instead of hardcoding it at compile time.

	-hpa



^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 22:39             ` H. Peter Anvin
@ 2014-06-19 23:08                 ` James Bottomley
  2014-06-19 23:42               ` Greg KH
  2014-06-20  8:24                 ` Daniel Vetter
  2 siblings, 0 replies; 73+ messages in thread
From: James Bottomley @ 2014-06-19 23:08 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Greg KH, Daniel Vetter, Rob Clark, Maarten Lankhorst, linux-arch,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Thierry Reding, Colin Cross, Sumit Semwal,
	linux-media

On Thu, 2014-06-19 at 15:39 -0700, H. Peter Anvin wrote:
> On 06/19/2014 01:01 PM, Greg KH wrote:
> > On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
> >> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>> +     BUG_ON(f1->context != f2->context);
> >>>>>
> >>>>> Nice, you just crashed the kernel, making it impossible to debug or
> >>>>> recover :(
> >>>>
> >>>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> >>>>
> >>>> (but at least I wouldn't expect to hit that under console_lock so you
> >>>> should at least see the last N lines of the backtrace on the screen
> >>>> ;-))
> >>>
> >>> Lots of devices don't have console screens :)
> >>
> >> Aside: This is a pet peeve of mine and recently I've switched to
> >> rejecting all patches that have a BUG_ON, period.
> > 
> > Please do, I have been for a few years now as well for the same reasons
> > you cite.
> > 
> 
> I'm actually concerned about this trend.  Downgrading things to WARN_ON
> can allow a security bug in the kernel to continue to exist, for
> example, or make the error message disappear.

Me too.  We use BUG_ON in the I/O subsystem where we're forced to
violate a guarantee.  When the choice is to corrupt something or to panic
the system, I prefer the latter every time.

> I am wondering if the right thing here isn't to have a user (command
> line?) settable policy as to how to proceed on an assert violation,
> instead of hardcoding it at compile time.

I'd say it depends on the consequence of the assertion violation.  We
have assertions that are largely theoretical, ones that govern
process-internal state (so killing the process mostly sanitizes the
system), and a few that imply data loss or data corruption.

James



^ permalink raw reply	[flat|nested] 73+ messages in thread
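
A minimal sketch of the user-settable policy hpa floats above, done as a
hypothetical per-module knob; assert_panic and ASSERT_POLICY are invented
names, and mainline later grew a related global switch in the panic_on_warn
sysctl:

    #include <linux/bug.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    /* false = warn and continue, true = panic; set via mymod.assert_panic=1 */
    static bool assert_panic;
    module_param(assert_panic, bool, 0644);

    /* evaluates to true when the assertion failed, like WARN_ON() */
    #define ASSERT_POLICY(cond)                                     \
    ({                                                              \
            bool __failed = WARN_ON(cond);                          \
            if (__failed && assert_panic)                           \
                    panic("assertion failed: %s", #cond);           \
            __failed;                                               \
    })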

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 21:50               ` Dave Airlie
@ 2014-06-19 23:21                 ` Rob Clark
  -1 siblings, 0 replies; 73+ messages in thread
From: Rob Clark @ 2014-06-19 23:21 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Greg KH, linux-arch, Thomas Hellstrom, Linux Kernel Mailing List,
	dri-devel, linaro-mm-sig, Colin Cross, linux-media

On Thu, Jun 19, 2014 at 5:50 PM, Dave Airlie <airlied@gmail.com> wrote:
> On 20 June 2014 04:19, Greg KH <gregkh@linuxfoundation.org> wrote:
>> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
>>> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>>> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>>> >> >> +#define CREATE_TRACE_POINTS
>>> >> >> +#include <trace/events/fence.h>
>>> >> >> +
>>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>>> >> >
>>> >> > Are you really willing to live with these as tracepoints for forever?
>>> >> > What is the use of them in debugging?  Was it just for debugging the
>>> >> > fence code, or for something else?
>>> >> >
>>> >> >> +/**
>>> >> >> + * fence_context_alloc - allocate an array of fence contexts
>>> >> >> + * @num:     [in]    amount of contexts to allocate
>>> >> >> + *
>>> >> >> + * This function will return the first index of the number of fences allocated.
>>> >> >> + * The fence context is used for setting fence->context to a unique number.
>>> >> >> + */
>>> >> >> +unsigned fence_context_alloc(unsigned num)
>>> >> >> +{
>>> >> >> +     BUG_ON(!num);
>>> >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
>>> >> >> +}
>>> >> >> +EXPORT_SYMBOL(fence_context_alloc);
>>> >> >
>>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>>> >> > Traditionally all of the driver core exports have been with this
>>> >> > marking, any objection to making that change here as well?
>>> >>
>>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>>> >> life.  We already went through this debate once with dma-buf.  We
>>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>>> >> only result will be a more fugly, convoluted solution (i.e. use syncpt
>>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>>> >> workaround, with the result that no-one benefits.
>>> >
>>> > It has been proven that using _GPL() exports has caused companies to
>>> > release their code "properly" over the years, so as these really are
>>> > Linux-only APIs, please change them to be marked this way, it helps
>>> > everyone out in the end.
>>>
>>> Well, maybe that is true in some cases.  But it certainly didn't
>>> work out that way for dma-buf.  And I think the end result is worse.
>>>
>>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>>> result will only be creative workarounds using the _GPL symbols
>>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>>> really see how that will be better.
>>
>> You are saying that you _know_ companies will violate our license, so
>> you should just "give up"?  And how do you know people aren't working on
>> preventing those "indirect" usages as well?  :)
>>
>> Sorry, I'm not going to give up here, again, it has proven to work in
>> the past in changing the ways of _very_ large companies, why stop now?
>
> I've found that large companies shipping lots of hw putting pressure on
> other large/small companies seems to be the only way this has ever
> happened; we'd like to cover that up and say it's some great GPL
> enforcement thing.
>
> To be honest, I'd treat this as the author's choice.
>
> Personally I think _GPL is broken by design, and that Linus's initial
> point for them has been so diluted by random lobby groups asking for
> every symbol to be _GPL that they are becoming effectively pointless
> now. I also dislike the fact that the lobby groups don't just bring
> violators to court. I'm also sure someone like the LF could have a
> nice income stream if Linus gave them permission to enforce his
> copyrights.
>
> But anyway, has someone checked that iOS or Windows don't have a
> fence interface? So we know that this is a Linux-only interface and
> any works using it are derived? Say the nvidia driver isn't a derived
> work now, will using this interface magically translate it into a
> derived work, so we can go sue them? I don't think so.

I've no idea what the APIs are in Windows, but Windows has had
multi-GPU support for a *long* time, which implies some mechanism like
dma-buf and fence.. this isn't exactly an area where we are
trailblazing.

BR,
-R


> But it's up to Maarten and Rob, and if they say no _GPL then I don't
> think we should be overriding the authors' intent.
>
> Dave.

^ permalink raw reply	[flat|nested] 73+ messages in thread
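
For readers outside the kernel community, the mechanics being argued over are
a one-word difference per export; example_context_alloc below is an invented
stand-in for the series' fence_context_alloc():

    #include <linux/atomic.h>
    #include <linux/export.h>
    #include <linux/module.h>

    static atomic_t example_context_counter = ATOMIC_INIT(0);

    unsigned example_context_alloc(unsigned num)
    {
            return atomic_add_return(num, &example_context_counter) - num;
    }

    /* Resolvable by any module, whatever its MODULE_LICENSE(): */
    EXPORT_SYMBOL(example_context_alloc);

    /*
     * The _GPL variant makes the symbol resolve only for modules that
     * declare a GPL-compatible MODULE_LICENSE(); a proprietary module
     * referencing it fails at load time with an "Unknown symbol" error:
     *
     *         EXPORT_SYMBOL_GPL(example_context_alloc);
     */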

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 22:39             ` H. Peter Anvin
  2014-06-19 23:08                 ` James Bottomley
@ 2014-06-19 23:42               ` Greg KH
  2014-06-20  8:30                 ` Daniel Vetter
  2014-06-20  8:24                 ` Daniel Vetter
  2 siblings, 1 reply; 73+ messages in thread
From: Greg KH @ 2014-06-19 23:42 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Daniel Vetter, Rob Clark, Maarten Lankhorst, linux-arch,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Thierry Reding, Colin Cross, Sumit Semwal,
	linux-media

On Thu, Jun 19, 2014 at 03:39:47PM -0700, H. Peter Anvin wrote:
> On 06/19/2014 01:01 PM, Greg KH wrote:
> > On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
> >> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>> +     BUG_ON(f1->context != f2->context);
> >>>>>
> >>>>> Nice, you just crashed the kernel, making it impossible to debug or
> >>>>> recover :(
> >>>>
> >>>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> >>>>
> >>>> (but at least I wouldn't expect to hit that under console_lock so you
> >>>> should at least see the last N lines of the backtrace on the screen
> >>>> ;-))
> >>>
> >>> Lots of devices don't have console screens :)
> >>
> >> Aside: This is a pet peeve of mine and recently I've switched to
> >> rejecting all patches that have a BUG_ON, period.
> > 
> > Please do, I have been for a few years now as well for the same reasons
> > you cite.
> > 
> 
> I'm actually concerned about this trend.  Downgrading things to WARN_ON
> can allow a security bug in the kernel to continue to exist, for
> example, or make the error message disappear.

A BUG_ON makes any error message disappear pretty quickly :)

I'm talking about foolish "ASSERT-like" BUG_ONs that driver authors like
to add to their code when writing it to catch things they are messing
up.  After the code is working, they should be removed, like this one.

Don't enforce an API requirement with a kernel crash; warn and return an
error, which the caller should always be checking anyway.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread
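
Applied to the fence_context_alloc() hunk quoted throughout this subthread,
that rule could look like the sketch below; clamping num to 1 is just one
possible recovery, not what the posted patch does:

    #include <linux/atomic.h>
    #include <linux/bug.h>

    static atomic_t fence_context_counter = ATOMIC_INIT(0);

    unsigned fence_context_alloc(unsigned num)
    {
            /* warn about the API misuse, then recover instead of crashing */
            if (WARN_ON(!num))
                    num = 1;

            return atomic_add_return(num, &fence_context_counter) - num;
    }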

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 22:39             ` H. Peter Anvin
@ 2014-06-20  8:24                 ` Daniel Vetter
  2014-06-19 23:42               ` Greg KH
  2014-06-20  8:24                 ` Daniel Vetter
  2 siblings, 0 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-06-20  8:24 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Greg KH, Rob Clark, Maarten Lankhorst,
	open list:GENERIC INCLUDE/A...,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Thierry Reding, Colin Cross, Sumit Semwal,
	linux-media

On Fri, Jun 20, 2014 at 12:39 AM, H. Peter Anvin <hpa@zytor.com> wrote:
>>> Aside: This is a pet peeve of mine and recently I've switched to
>>> rejecting all patches that have a BUG_ON, period.
>>
>> Please do, I have been for a few years now as well for the same reasons
>> you cite.
>>
>
> I'm actually concerned about this trend.  Downgrading things to WARN_ON
> can allow a security bug in the kernel to continue to exist, for
> example, or make the error message disappear.
>
> I am wondering if the right thing here isn't to have a user (command
> line?) settable policy as to how to proceed on an assert violation,
> instead of hardcoding it at compile time.

I should clarify: If it smells like the issue is a failure of our
ioctl/syscall validation code to catch crap, BUG_ON is the right
choice. And fundamentally I've had this rule for 1-2 years now; the
only recent change I've made is to switch my scripts from warning by
default if there's a new BUG_ON to rejecting by default. Mostly
because I'm lazy and let too many BUG_ONs pass through by default.

Also if you add a new interface to i915 I'll make damn sure you supply
a full set of nasty testcases to abuse the ioctl hard. In the end it's
a tradeoff and overall I don't think I'm compromising security with my
current set of rules.

Also, people don't (yet) terribly care about data integrity as soon as
their data has passed once through a GPU.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17)
  2014-06-19 23:42               ` Greg KH
@ 2014-06-20  8:30                 ` Daniel Vetter
  0 siblings, 0 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-06-20  8:30 UTC (permalink / raw)
  To: Greg KH
  Cc: H. Peter Anvin, Rob Clark, Maarten Lankhorst,
	open list:GENERIC INCLUDE/A...,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Thierry Reding, Colin Cross, Sumit Semwal,
	linux-media

On Fri, Jun 20, 2014 at 1:42 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> I'm actually concerned about this trend.  Downgrading things to WARN_ON
>> can allow a security bug in the kernel to continue to exist, for
>> example, or make the error message disappear.
>
> A BUG_ON makes any error message disappear pretty quickly :)
>
> I'm talking about foolish "ASSERT-like" BUG_ON that driver authors like
> to add to their code when writing it to catch things they are messing
> up.  After the code is working, they should be removed, like this one.

Well, except for cases where it's super performance-critical, I like to
retain these WARN_ON asserts (not BUG_ONs). "Is the logic sufficiently
locked down with WARN_ONs?" is actually one of the main review
criteria I have for i915 patches, especially on the modeset side.
They're a bit of an annoyance for distros since they result in a
constant (but ever-shifting) stream of backtraces, but for me they
serve as an excellent early warning sign when our driver has yet again
lost its marbles (or at least some) way before something user-visibly
bad happens.

And for those screaming that these checks should be hidden behind a
config option and only enabled for validation: Nope, there are too many
combinations of display hardware out there and I simply need our
entire user base to serve as guinea pigs. There's really no other way
to validate this mess called drm/i915.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-19 12:28         ` Daniel Vetter
@ 2014-06-20 20:52             ` Thierry Reding
  2014-06-20 20:52             ` Thierry Reding
  1 sibling, 0 replies; 73+ messages in thread
From: Thierry Reding @ 2014-06-20 20:52 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Greg KH, Maarten Lankhorst, linux-arch, Thomas Hellstrom,
	Linux Kernel Mailing List, dri-devel, linaro-mm-sig, Clark, Rob,
	Colin Cross, Sumit Semwal, linux-media

[-- Attachment #1: Type: text/plain, Size: 4614 bytes --]

On Thu, Jun 19, 2014 at 02:28:14PM +0200, Daniel Vetter wrote:
> On Thu, Jun 19, 2014 at 1:48 PM, Thierry Reding
> <thierry.reding@gmail.com> wrote:
> >> > With these changes, can we pull the android sync logic out of
> >> > drivers/staging/ now?
> >>
> >> Afaik the google guys never really looked at this and acked it. So I'm not
> >> sure whether they'll follow along. The other issue I have as the
> >> maintainer of a gfx driver is that I don't want to implement support for two
> >> different sync object primitives (once for dma-buf and once for android
> >> syncpts), and my impression thus far has been that even with this we're
> >> not there.
> >>
> >> I'm trying to get our own android guys to upstream their i915 syncpts
> >> support, but thus far I haven't managed to convince them to throw people's
> >> time at this.
> >
> > This has been discussed a fair bit internally recently and some of our
> > GPU experts have raised concerns that this may result in seriously
> > degraded performance in our proprietary graphics stack. Now I don't care
> > very much for the proprietary graphics stack, but by extension I would
> > assume that the same restrictions are relevant for any open-source
> > driver as well.
> >
> > I'm still trying to fully understand all the implications and at the
> > same time get some of the people who raised concerns to join in this
> > discussion. As I understand it the concern is mostly about explicit vs.
> > implicit synchronization and having this mechanism in the kernel will
> > implicitly synchronize all accesses to these buffers even in cases where
> > it's not needed (read vs. write locks, etc.). In one particular instance
> > it was even mentioned that this kind of implicit synchronization can
> > lead to deadlocks in some use-cases (this was mentioned for Android
> > compositing, but I suspect that the same may happen for Wayland or X
> > compositors).
> 
> Well the implicit fences here actually can't deadlock. That's the
> entire point behind using ww mutexes. I've also heard tons of
> complaints about implicit enforced syncing (especially from opencl
> people), but in the end drivers can always expose unsynchronized
> access for specific cases. We do that in i915 for upload buffers and
> other fun stuff. This is about shared stuff across different drivers
> and different processes.

Tegra K1 needs to share buffers across different drivers even for very
basic use-cases since the GPU and display drivers are separate. So while
I agree that the GPU driver can still use explicit synchronization for
internal work, things aren't that simple in general.

Let me try to reconstruct the use-case that caused the deadlock on Android:
the compositor uses a hardware overlay to display an image. The system
detects that there's little activity and instructs the compositor to put
everything into one image and scan out only that (for power efficiency).
Now with implicit locking the display driver has a lock on the image, so
the GPU (used for compositing) needs to wait for it before it can
composite everything into one image. But the display driver cannot
release the lock on the image until the final image has been composited
and can be displayed instead.

This may not be technically a deadlock, but it's still a stalemate.
Unless I'm missing something fundamental about DMA fences and ww mutexes
I don't see how you can get out of this situation.

Explicit vs. implicit synchronization may also become more of an issue
as buffers are imported from other sources (such as cameras).

> Finally I've never seen anyone from google or any android product guy
> push a real driver enabling for syncpts to upstream, and google itself
> has a bit of a history of constantly exchanging their gfx framework for
> the next best thing. So I really doubt this is worthwhile to pursue in
> upstream with our essentially eternal API guarantees. At least until
> we see serious uptake from vendors and gfx driver guys. Unfortunately
> the Intel android folks are no exception here and haven't pushed
> anything like this in my direction yet at all. Despite multiple pokes
> from my side.
> 
> So from my side I think we should move ahead with Maarten's work and
> figure the android side out once there's real interest.

The downside of that is that we may end up with two different ways to
synchronize buffers if it turns out that we can't make Android (or other
use-cases) work with DMA fence. However I don't think that justifies
postponing this patch set indefinitely.

Thierry

[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread
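
The 'can't deadlock' property Daniel refers to comes from the wound/wait
mutex protocol (see Documentation/ww-mutex-design.txt); here is a sketch of
the acquire/backoff loop for two buffers, with invented function and class
names:

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(example_ww_class);

    static void lock_two_buffers(struct ww_mutex *a, struct ww_mutex *b)
    {
            struct ww_acquire_ctx ctx;
            struct ww_mutex *first = a, *second = b;
            int ret;

            ww_acquire_init(&ctx, &example_ww_class);
    retry:
            ret = ww_mutex_lock(first, &ctx);
            if (ret == -EDEADLK)
                    /* we hold nothing yet, so just sleep until it is free */
                    ww_mutex_lock_slow(first, &ctx);

            ret = ww_mutex_lock(second, &ctx);
            if (ret == -EDEADLK) {
                    /* lost to an older transaction: back off and retry,
                     * taking the contended lock first */
                    ww_mutex_unlock(first);
                    swap(first, second);
                    goto retry;
            }
            ww_acquire_done(&ctx);

            /* ... attach fences / queue work on both buffers ... */

            ww_mutex_unlock(a);
            ww_mutex_unlock(b);
            ww_acquire_fini(&ctx);
    }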

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-20 20:52             ` Thierry Reding
  (?)
@ 2014-06-23  8:45             ` Maarten Lankhorst
  2014-07-07 13:28                 ` Daniel Vetter
  -1 siblings, 1 reply; 73+ messages in thread
From: Maarten Lankhorst @ 2014-06-23  8:45 UTC (permalink / raw)
  To: Thierry Reding, Daniel Vetter
  Cc: Greg KH, linux-arch, Thomas Hellstrom, Linux Kernel Mailing List,
	dri-devel, linaro-mm-sig, Clark, Rob, Colin Cross, Sumit Semwal,
	linux-media

Hey,

On 20-06-14 22:52, Thierry Reding wrote:
> On Thu, Jun 19, 2014 at 02:28:14PM +0200, Daniel Vetter wrote:
>> On Thu, Jun 19, 2014 at 1:48 PM, Thierry Reding
>> <thierry.reding@gmail.com> wrote:
>>>>> With these changes, can we pull the android sync logic out of
>>>>> drivers/staging/ now?
>>>> Afaik the google guys never really looked at this and acked it. So I'm not
>>>> sure whether they'll follow along. The other issue I have as the
>>>> maintainer of a gfx driver is that I don't want to implement support for two
>>>> different sync object primitives (once for dma-buf and once for android
>>>> syncpts), and my impression thus far has been that even with this we're
>>>> not there.
>>>>
>>>> I'm trying to get our own android guys to upstream their i915 syncpts
>>>> support, but thus far I haven't managed to convince them to throw people's
>>>> time at this.
>>> This has been discussed a fair bit internally recently and some of our
>>> GPU experts have raised concerns that this may result in seriously
>>> degraded performance in our proprietary graphics stack. Now I don't care
>>> very much for the proprietary graphics stack, but by extension I would
>>> assume that the same restrictions are relevant for any open-source
>>> driver as well.
>>>
>>> I'm still trying to fully understand all the implications and at the
>>> same time get some of the people who raised concerns to join in this
>>> discussion. As I understand it the concern is mostly about explicit vs.
>>> implicit synchronization and having this mechanism in the kernel will
>>> implicitly synchronize all accesses to these buffers even in cases where
>>> it's not needed (read vs. write locks, etc.). In one particular instance
>>> it was even mentioned that this kind of implicit synchronization can
>>> lead to deadlocks in some use-cases (this was mentioned for Android
>>> compositing, but I suspect that the same may happen for Wayland or X
>>> compositors).
>> Well the implicit fences here actually can't deadlock. That's the
>> entire point behind using ww mutexes. I've also heard tons of
>> complaints about implicit enforced syncing (especially from opencl
>> people), but in the end drivers can always expose unsynchronized
>> access for specific cases. We do that in i915 for upload buffers and
>> other fun stuff. This is about shared stuff across different drivers
>> and different processes.
> Tegra K1 needs to share buffers across different drivers even for very
> basic use-cases since the GPU and display drivers are separate. So while
> I agree that the GPU driver can still use explicit synchronization for
> internal work, things aren't that simple in general.
>
> Let me try to reconstruct the use-case that caused the deadlock on Android:
> the compositor uses a hardware overlay to display an image. The system
> detects that there's little activity and instructs the compositor to put
> everything into one image and scan out only that (for power efficiency).
> Now with implicit locking the display driver has a lock on the image, so
> the GPU (used for compositing) needs to wait for it before it can
> composite everything into one image. But the display driver cannot
> release the lock on the image until the final image has been composited
> and can be displayed instead.
>
> This may not be technically a deadlock, but it's still a stalemate.
> Unless I'm missing something fundamental about DMA fences and ww mutexes
> I don't see how you can get out of this situation.
This sounds like a case for implicit shared fences. ;-) Reading and scanning out would
only wait for the last 'exclusive' fence, not for each other.

But in drivers/drm I can encounter a similar issue: people expect to be able to
overwrite the contents of the currently displayed buffer, so I 'solved' it by not adding
a fence on the buffer, only by waiting for buffer idle before page flipping.
The rationale is that the buffer is pinned internally, and the backing storage cannot
go away until dma_buf_unmap_attachment is called. So when you render to the
current front buffer without queuing a page flip you get exactly what you expect. ;-)

> Explicit vs. implicit synchronization may also become more of an issue
> as buffers are imported from other sources (such as cameras).
Yeah, but the kernel-space primitives would in both cases be the same, so drivers don't need to implement two separate fencing mechanisms for that. :-)

~Maarten


^ permalink raw reply	[flat|nested] 73+ messages in thread
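
The shared-vs-exclusive split Maarten describes maps onto the reservation
helpers proposed in patches 5 and 7 of this series; a sketch assuming those
helper names, with locking simplified to a single object (hence the NULL
acquire context):

    #include <linux/fence.h>
    #include <linux/reservation.h>

    /* a reader (e.g. scanout) only orders against the last writer */
    static int attach_read_fence(struct reservation_object *resv,
                                 struct fence *fence)
    {
            int ret;

            ww_mutex_lock(&resv->lock, NULL);
            ret = reservation_object_reserve_shared(resv);
            if (!ret)
                    reservation_object_add_shared_fence(resv, fence);
            ww_mutex_unlock(&resv->lock);
            return ret;
    }

    /* a writer orders against everything and becomes the new barrier */
    static void attach_write_fence(struct reservation_object *resv,
                                   struct fence *fence)
    {
            ww_mutex_lock(&resv->lock, NULL);
            reservation_object_add_excl_fence(resv, fence);
            ww_mutex_unlock(&resv->lock);
    }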

* Re: [REPOST PATCH 4/8] android: convert sync to fence api, v5
  2014-06-23  8:45             ` Maarten Lankhorst
@ 2014-07-07 13:28                 ` Daniel Vetter
  0 siblings, 0 replies; 73+ messages in thread
From: Daniel Vetter @ 2014-07-07 13:28 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: Thierry Reding, Greg KH, open list:GENERIC INCLUDE/A...,
	Thomas Hellstrom, Linux Kernel Mailing List, dri-devel,
	linaro-mm-sig, Clark, Rob, Colin Cross, Sumit Semwal,
	linux-media

On Mon, Jun 23, 2014 at 10:45 AM, Maarten Lankhorst
<maarten.lankhorst@canonical.com> wrote:
> But in drivers/drm I can encounter a similar issue, people expect to be able to
> overwrite the contents of the currently displayed buffer, so I 'solved' it by not adding
> a fence on the buffer, only by waiting for buffer idle before page flipping.
> The rationale is that the buffer is pinned internally, and the backing storage cannot
> go away until dma_buf_unmap_attachment is called. So when you render to the
> current front buffer without queuing a page flip you get exactly what you expect. ;-)

Yeah, scanout buffers are special and imo we should only use fences
as barriers just around the flip, but _not_ to prevent frontbuffer
rendering in general. Userspace is after all allowed to do that (e.g. with
the dumb bo ioctls). If we'd permanently lock scanout buffers that would
indeed result in hilarity all over the place, no surprises there. And all
current drivers with dynamic memory management solve this through
pinning, but not by restricting write access at all.

So I think we can shrug this scenario off as a non-issue of the "don't
do this" kind ;-) Thierry, is there anything else you've stumbled over
in the Tegra K1 enabling work?

I still get the impression that there's an awful lot of
misunderstanding between the explicit and implicit syncing camps and
that we need to do a lot more talking for a better understanding ...

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 73+ messages in thread
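
A sketch of the 'fences as barriers just around the flip' idea, assuming the
RCU wait helper added in patch 8 of this series
(reservation_object_wait_timeout_rcu); the 100 ms timeout is an arbitrary
illustration:

    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/reservation.h>

    /* wait for the incoming framebuffer to go idle, then program the flip */
    static int wait_before_flip(struct reservation_object *resv)
    {
            long ret;

            /* wait_all = true: shared and exclusive fences, interruptibly */
            ret = reservation_object_wait_timeout_rcu(resv, true, true,
                                                      msecs_to_jiffies(100));
            if (ret == 0)
                    return -EBUSY;  /* timed out: skip or defer this flip */
            if (ret < 0)
                    return ret;     /* e.g. -ERESTARTSYS */

            return 0;               /* idle: the buffer is safe to scan out */
    }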

end of thread, other threads:[~2014-07-07 13:28 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-18 10:36 [REPOST PATCH 0/8] fence synchronization patches Maarten Lankhorst
2014-06-18 10:36 ` [REPOST PATCH 1/8] fence: dma-buf cross-device synchronization (v17) Maarten Lankhorst
2014-06-19  1:13   ` Greg KH
2014-06-19  1:23     ` Rob Clark
2014-06-19  1:44       ` Greg KH
2014-06-19 14:00     ` Rob Clark
2014-06-19 17:00       ` Greg KH
2014-06-19 17:45         ` Rob Clark
2014-06-19 18:19           ` Greg KH
2014-06-19 18:37             ` James Bottomley
2014-06-19 18:52             ` Rob Clark
2014-06-19 19:20             ` Daniel Vetter
2014-06-19 21:50             ` Dave Airlie
2014-06-19 23:21               ` Rob Clark
2014-06-19 19:15         ` Daniel Vetter
2014-06-19 20:01           ` Greg KH
2014-06-19 22:39             ` H. Peter Anvin
2014-06-19 23:08               ` James Bottomley
2014-06-19 23:42               ` Greg KH
2014-06-20  8:30                 ` Daniel Vetter
2014-06-20  8:24               ` Daniel Vetter
2014-06-19  1:15   ` Greg KH
2014-06-19  1:16   ` Greg KH
2014-06-19  1:25     ` Rob Clark
2014-06-19  4:27       ` Sumit Semwal
2014-06-19  4:54         ` Greg KH
2014-06-19  5:26           ` Sumit Semwal
2014-06-18 10:37 ` [REPOST PATCH 2/8] seqno-fence: Hardware dma-buf implementation of fencing (v5) Maarten Lankhorst
2014-06-18 10:37 ` [REPOST PATCH 3/8] dma-buf: use reservation objects Maarten Lankhorst
2014-06-18 10:37 ` [REPOST PATCH 4/8] android: convert sync to fence api, v5 Maarten Lankhorst
2014-06-19  1:15   ` Greg KH
2014-06-19  6:37     ` Daniel Vetter
2014-06-19 11:48       ` Thierry Reding
2014-06-19 12:28         ` Daniel Vetter
2014-06-19 15:35           ` Colin Cross
2014-06-19 16:34             ` Daniel Vetter
2014-06-20 20:52           ` Thierry Reding
2014-06-23  8:45             ` Maarten Lankhorst
2014-07-07 13:28               ` Daniel Vetter
2014-06-19 15:22       ` Colin Cross
2014-06-19 16:12         ` Maarten Lankhorst
2014-06-18 10:37 ` [REPOST PATCH 5/8] reservation: add support for fences to enable cross-device synchronisation Maarten Lankhorst
2014-06-18 10:37 ` [REPOST PATCH 6/8] dma-buf: add poll support, v3 Maarten Lankhorst
2014-06-18 10:37 ` [REPOST PATCH 7/8] reservation: update api and add some helpers Maarten Lankhorst
2014-06-18 10:37 ` [REPOST PATCH 8/8] reservation: add suppport for read-only access using rcu Maarten Lankhorst
