* Introduce fences for N:M completion variables
@ 2016-06-24  9:08 Chris Wilson
  2016-06-24  9:08 ` [PATCH 1/9] lib: Add kselftests for async-domains Chris Wilson
                   ` (9 more replies)
  0 siblings, 10 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel

struct completion allows for multiple waiters on a single event.
However, frequently we want to wait on multiple events. For example in
job processing, we need to wait for all prerequisite tasks to complete
before proceeding. Such dependency tracking is common to many situations.
In dma-buf, we already have a mechanism in place for tracking
dependencies between tasks and across drivers, the fence. Each fence is
a fixed point on a timeline that the hardware is processing (though the
hardware may be executing from multiple timelines concurrently). Each
fence may wait on any other fence (and for native fences the wait may be
executed on the device, but otherwise the signaling and forward progress
of the inter-fence serialisation is provided by the drivers themselves).
The added complexity of hardware interaction makes the dma-buf fence
unwieldy as a drop-in extension of struct completion. Enter kfence.

The kfence is intended to be as easy to use as a struct completion in
order to provide barriers in a DAG of tasks. It can provide
serialisation with other software events just as easily as it can mix in
dma-fences and be used to construct an event-driven state machine.
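
As a taste of the intended usage, here is a minimal sketch (untested,
for illustration only) using the interface introduced in patch 2: a
barrier that opens only once two separate software events have fired,
with any number of waiters sleeping on it:

	struct kfence *barrier;

	barrier = kfence_create(GFP_KERNEL);
	if (!barrier)
		return -ENOMEM;

	kfence_pending(barrier);	/* expect event #1 */
	kfence_pending(barrier);	/* expect event #2 */
	kfence_signal(barrier);		/* setup done, undo initial pending */

	/* ... each event source then calls ... */
	kfence_signal(barrier);

	/* ... whilst the waiters block with ... */
	kfence_wait(barrier);
	kfence_put(barrier);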

The tasks I have applied kfence to are:

 * providing fine-grained dependency and concurrent execution for the
   global initcalls. Drivers are currently creatively using the fixed
   initcall phases to solve dependency problems. Knowing which initcalls
   can be executed in parallel helps speed up the boot process. Though
   not as much as removing the barrier after initramfs!

 * providing fine-grained dependency and concurrent execution for
   load/resume within a module (within the overall global async
   execution). Trying to parallelise a driver between discovery and
   hardware setup is hard to retrofit and will be challenging to
   maintain without a mechanism by which we can describe the dependencies
   of each phase upon each other (and hw state) and then let the
   kernel resolve the order in which to execute the phases. In effect, we
   want a declarative syntax.

 * providing asynchronous execution of GPU rendering (for a mix of
   inter-device rendering and inter-engine without hardware scheduling).
   This mixes dma-fences with an event-driven state machine. Here, the
   kfence primarily serves as a collection of dma-fences.

 * providing asynchronous execution of atomic modesetting,
   mixing the current usage of struct completion with dma-fences into
   one consistent framework.

* [PATCH 1/9] lib: Add kselftests for async-domains
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-06-24  9:08 ` [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism Chris Wilson
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: Chris Wilson

---
 lib/Kconfig.debug                           |  12 +++
 lib/Makefile                                |   1 +
 lib/test-async-domain.c                     | 131 ++++++++++++++++++++++++++++
 tools/testing/selftests/lib/Makefile        |   2 +-
 tools/testing/selftests/lib/async-domain.sh |  10 +++
 5 files changed, 155 insertions(+), 1 deletion(-)
 create mode 100644 lib/test-async-domain.c
 create mode 100755 tools/testing/selftests/lib/async-domain.sh

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b9cfdbfae9aa..f7b17daf8e4d 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1763,6 +1763,18 @@ config KPROBES_SANITY_TEST
 
 	  Say N if you are unsure.
 
+config ASYNC_DOMAIN_SELFTEST
+	tristate "Asynchronous domain self tests"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option provides a kernel module that can be used to test
+	  the asynchronous task execution. This option is not useful for
+	  distributions or general kernels, but only for kernel developers
+	  working on the async_domain facility.
+
+	  Say N if you are unsure.
+
 config BACKTRACE_SELF_TEST
 	tristate "Self test for the backtrace code"
 	depends on DEBUG_KERNEL
diff --git a/lib/Makefile b/lib/Makefile
index ff6a7a6c6395..23cdb9885e45 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -28,6 +28,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
+obj-$(CONFIG_ASYNC_DOMAIN_SELFTEST) += test-async-domain.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
 lib-$(CONFIG_HAS_DMA) += dma-noop.o
diff --git a/lib/test-async-domain.c b/lib/test-async-domain.c
new file mode 100644
index 000000000000..558a71414fb6
--- /dev/null
+++ b/lib/test-async-domain.c
@@ -0,0 +1,131 @@
+/*
+ * Test cases for async-domain facility.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/async.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+
+static void task_A(void *data, async_cookie_t cookie)
+{
+	long *result = data;
+	smp_store_mb(*result, 'A');
+}
+
+static void task_B(void *data, async_cookie_t cookie)
+{
+	long *result = data;
+	usleep_range(100, 200);
+	smp_store_mb(*result, 'B');
+}
+
+static int __init test_implicit(struct async_domain *domain)
+{
+	const long expected = 'B';
+	long result = 0;
+
+	if (!async_schedule_domain(task_B, &result, domain))
+		return -ENOMEM;
+
+	async_synchronize_full_domain(domain);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_registered(struct async_domain *domain)
+{
+	const long expected = 'B';
+	long result = 0;
+
+	if (!async_schedule_domain(task_B, &result, domain))
+		return -ENOMEM;
+
+	async_synchronize_full();
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void task_nop(void *data, async_cookie_t cookie)
+{
+	async_cookie_t *result = data;
+	smp_store_mb(*result, cookie);
+}
+
+static int __init perf_nop(int batch, long timeout_us)
+{
+	ktime_t start;
+	async_cookie_t nop, last;
+	long count, delay;
+
+	count = 0;
+	nop = last = 0;
+	start = ktime_get();
+	do {
+		ktime_t delta;
+		int n;
+
+		for (n = 0; n < batch; n++)
+			last = async_schedule(task_nop, &nop);
+		async_synchronize_full();
+		delta = ktime_sub(ktime_get(), start);
+		delay = ktime_to_ns(delta) >> 10;
+		count += batch;
+	} while (delay < timeout_us);
+
+	pr_info("%ld nop tasks (batches of %d) completed in %ldus; last queued %lld, saw %lld last\n",
+		count, batch, delay,
+		(long long)last, (long long)READ_ONCE(nop));
+	return 0;
+}
+
+static int __init test_async_domain_init(void)
+{
+	ASYNC_DOMAIN(domain);
+	int ret;
+
+	pr_info("Testing async-domains\n");
+
+	ret = test_implicit(&domain);
+	if (ret)
+		return ret;
+
+	ret = test_registered(&domain);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(1, 100);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(128, 1000);
+	if (ret)
+		return ret;
+
+	async_unregister_domain(&domain);
+	return 0;
+}
+
+static void __exit test_async_domain_cleanup(void)
+{
+	async_synchronize_full();
+}
+
+module_init(test_async_domain_init);
+module_exit(test_async_domain_cleanup);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/lib/Makefile b/tools/testing/selftests/lib/Makefile
index 08360060ab14..46a77ac5b4c6 100644
--- a/tools/testing/selftests/lib/Makefile
+++ b/tools/testing/selftests/lib/Makefile
@@ -3,6 +3,6 @@
 # No binaries, but make sure arg-less "make" doesn't trigger "run_tests"
 all:
 
-TEST_PROGS := printf.sh bitmap.sh
+TEST_PROGS := printf.sh bitmap.sh async-domain.sh
 
 include ../lib.mk
diff --git a/tools/testing/selftests/lib/async-domain.sh b/tools/testing/selftests/lib/async-domain.sh
new file mode 100755
index 000000000000..22c270051de7
--- /dev/null
+++ b/tools/testing/selftests/lib/async-domain.sh
@@ -0,0 +1,10 @@
+#!/bin/sh
+# Runs infrastructure tests using test-async-domain kernel module
+
+if /sbin/modprobe -q test-async-domain; then
+	/sbin/modprobe -q -r test-async-domain
+	echo "async-domain: ok"
+else
+	echo "async-domain: [FAIL]"
+	exit 1
+fi
-- 
2.8.1

* [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
  2016-06-24  9:08 ` [PATCH 1/9] lib: Add kselftests for async-domains Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-07-13  9:38   ` Peter Zijlstra
  2016-07-13 10:26   ` Peter Zijlstra
  2016-06-24  9:08 ` [PATCH 3/9] async: Extend kfence to allow struct embedding Chris Wilson
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

Completions are a simple synchronization mechanism, suitable for 1:M
barriers where many waiters may be waiting for a single event. In
situations where a single waiter needs to wait for multiple events, it
could wait on a list of individual completions. If many waiters need the
same set of completion events, they each need to wait upon their own list.
The kfence abstracts this by itself being able to wait upon many events
before signaling its own completion. A kfence may wait upon completions
or other kfences, so long as there are no cycles in the dependency graph.

The kfence is a one-shot flag, signaling that work has progressed past
a certain point and the waiters may proceed. Unlike completions, kfences
are expected to live inside more complex graphs and form the basis for
parallel execution of interdependent tasks and so are reference counted.

kfences provide both signaling and waiting routines:

	kfence_pending()

indicates that the kfence should itself wait for another signal. A
kfence created by kfence_create() starts in the pending state.

	kfence_signal()

undoes the earlier pending notification and allows the fence to complete
if all pending events have then been signaled.

	kfence_wait()

allows the caller to sleep (uninterruptibly) until the fence is complete.

This interface is very similar to completions, with the exception of
allowing multiple pending / signals to be sent before the kfence is
complete.

	kfence_add() / kfence_add_completion()

set the kfence to wait upon another kfence or a completion, respectively.
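
By way of illustration, gating a task on one fence and one completion
might look like the following sketch (untested; prereq_fence and
prereq_done are hypothetical names, and error handling is elided):

	struct kfence *fence;

	fence = kfence_create(GFP_KERNEL);

	kfence_add(fence, prereq_fence, GFP_KERNEL);
	kfence_add_completion(fence, &prereq_done, GFP_KERNEL);

	kfence_signal(fence);	/* undo the initial pending from create */
	kfence_wait(fence);	/* sleeps until both sources have signaled */

	kfence_put(fence);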

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h                |  41 +++
 kernel/async.c                        | 358 +++++++++++++++++++++++++
 lib/Kconfig.debug                     |  23 ++
 lib/Makefile                          |   1 +
 lib/test-kfence.c                     | 475 ++++++++++++++++++++++++++++++++++
 tools/testing/selftests/lib/kfence.sh |  10 +
 6 files changed, 908 insertions(+)
 create mode 100644 include/linux/kfence.h
 create mode 100644 lib/test-kfence.c
 create mode 100755 tools/testing/selftests/lib/kfence.sh

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
new file mode 100644
index 000000000000..82eba8aacd02
--- /dev/null
+++ b/include/linux/kfence.h
@@ -0,0 +1,41 @@
+/*
+ * kfence.h - library routines for N:M synchronisation points
+ *
+ * Copyright (C) 2016 Intel Corporation
+ *
+ * This file is released under the GPLv2.
+ *
+ */
+
+#ifndef _KFENCE_H_
+#define _KFENCE_H_
+
+#include <linux/gfp.h>
+#include <linux/kref.h>
+#include <linux/wait.h>
+
+struct completion;
+
+struct kfence {
+	wait_queue_head_t wait;
+	unsigned long flags;
+	struct kref kref;
+	atomic_t pending;
+};
+
+extern struct kfence *kfence_create(gfp_t gfp);
+extern struct kfence *kfence_get(struct kfence *fence);
+extern int kfence_add(struct kfence *fence, struct kfence *after, gfp_t gfp);
+extern int kfence_add_completion(struct kfence *fence,
+				 struct completion *x,
+				 gfp_t gfp);
+extern void kfence_pending(struct kfence *fence);
+extern void kfence_signal(struct kfence *fence);
+extern void kfence_wait(struct kfence *fence);
+static inline bool kfence_complete(struct kfence *fence)
+{
+	return atomic_read(&fence->pending) < 0;
+}
+extern void kfence_put(struct kfence *fence);
+
+#endif /* _KFENCE_H_ */
diff --git a/kernel/async.c b/kernel/async.c
index d2edd6efec56..d0bcb7cc4884 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -50,6 +50,7 @@ asynchronous and synchronous parts of the kernel.
 
 #include <linux/async.h>
 #include <linux/atomic.h>
+#include <linux/kfence.h>
 #include <linux/ktime.h>
 #include <linux/export.h>
 #include <linux/wait.h>
@@ -82,6 +83,363 @@ static DECLARE_WAIT_QUEUE_HEAD(async_done);
 
 static atomic_t entry_count;
 
+/**
+ * DOC: kfence overview
+ *
+ * kfences provide synchronisation barriers between multiple processes.
+ * They are very similar to completions, or a pthread_barrier. Where
+ * kfence differs from completions is their ability to track multiple
+ * event sources rather than being a singular "completion event". Similar
+ * to completions, multiple processes or other kfences can listen or wait
+ * upon a kfence to signal its completion.
+ *
+ * The kfence is a one-shot flag, signaling that work has progressed past
+ * a certain point (as measured by completion of all events the kfence is
+ * listening for) and the waiters upon the kfence may proceed.
+ *
+ * kfences provide both signaling and waiting routines:
+ *
+ *	kfence_pending()
+ *
+ * indicates that the kfence should itself wait for another signal. A
+ * kfence created by kfence_create() starts in the pending state.
+ *
+ *	kfence_signal()
+ *
+ * undoes the earlier pending notification and allows the fence to complete
+ * if all pending events have then been signaled.
+ *
+ *	kfence_wait()
+ *
+ * allows the caller to sleep (uninterruptibly) until the fence is complete.
+ * Meanwhile,
+ *
+ * 	kfence_complete()
+ *
+ * reports whether or not the kfence has been passed.
+ *
+ * This interface is very similar to completions, with the exception of
+ * allowing multiple pending / signals to be sent before the kfence is
+ * complete.
+ *
+ *	kfence_add() / kfence_add_completion()
+ *
+ * sets the kfence to wait upon another fence, or completion respectively.
+ *
+ * Unlike completions, kfences are expected to live inside more complex graphs
+ * and form the basis for parallel execution of interdependent tasks and so are
+ * reference counted. A kfence may be created using,
+ *
+ * 	kfence_create()
+ *
+ * The fence starts off pending a single signal. Once you have finished
+ * setting up the fence, use kfence_signal() to allow it to wait upon
+ * its event sources.
+ *
+ * Use
+ *
+ *	kfence_get() / kfence_put()
+ *
+ * to acquire or release a reference on the kfence, respectively.
+ */
+
+#define KFENCE_CHECKED_BIT	0
+
+static void kfence_free(struct kref *kref)
+{
+	struct kfence *fence = container_of(kref, typeof(*fence), kref);
+
+	WARN_ON(atomic_read(&fence->pending) > 0);
+
+	kfree(fence);
+}
+
+/**
+ * kfence_put - release a reference to a kfence
+ * @fence: the kfence being disposed of
+ */
+void kfence_put(struct kfence *fence)
+{
+	if (fence)
+		kref_put(&fence->kref, kfence_free);
+}
+EXPORT_SYMBOL_GPL(kfence_put);
+
+/**
+ * kfence_get - acquire a reference to a kfence
+ * @fence: the kfence being used
+ *
+ * Returns the pointer to the kfence, with its reference count incremented.
+ */
+struct kfence *kfence_get(struct kfence *fence)
+{
+	if (fence)
+		kref_get(&fence->kref);
+	return fence;
+}
+EXPORT_SYMBOL_GPL(kfence_get);
+
+static void __kfence_wake_up_all(struct kfence *fence,
+				 struct list_head *continuation)
+{
+	wait_queue_head_t *x = &fence->wait;
+	unsigned long flags;
+
+	/* To prevent unbounded recursion as we traverse the graph
+	 * of kfences, we move the task_list from this ready fence
+	 * to the tail of the current fence we are signaling.
+	 */
+	spin_lock_irqsave_nested(&x->lock, flags, 1 + !!continuation);
+	if (continuation)
+		list_splice_tail_init(&x->task_list, continuation);
+	else while (!list_empty(&x->task_list))
+		__wake_up_locked_key(x, TASK_NORMAL, &x->task_list);
+	spin_unlock_irqrestore(&x->lock, flags);
+}
+
+static void __kfence_signal(struct kfence *fence,
+			    struct list_head *continuation)
+{
+	if (!atomic_dec_and_test(&fence->pending))
+		return;
+
+	atomic_dec(&fence->pending);
+	__kfence_wake_up_all(fence, continuation);
+}
+
+/**
+ * kfence_pending - mark the fence as pending a signal
+ * @fence: the kfence to mark as pending a signal
+ *
+ */
+void kfence_pending(struct kfence *fence)
+{
+	WARN_ON(atomic_inc_return(&fence->pending) <= 1);
+}
+EXPORT_SYMBOL_GPL(kfence_pending);
+
+/**
+ * kfence_signal - signal the fence
+ * @fence: the kfence to be signaled
+ *
+ */
+void kfence_signal(struct kfence *fence)
+{
+	__kfence_signal(fence, NULL);
+}
+EXPORT_SYMBOL_GPL(kfence_signal);
+
+/**
+ * kfence_wait - wait upon a fence to be completed
+ * @fence: the kfence to wait upon
+ *
+ */
+void kfence_wait(struct kfence *fence)
+{
+	wait_event(fence->wait, kfence_complete(fence));
+}
+EXPORT_SYMBOL_GPL(kfence_wait);
+
+static void kfence_init(struct kfence *fence)
+{
+	init_waitqueue_head(&fence->wait);
+	kref_init(&fence->kref);
+	atomic_set(&fence->pending, 1);
+	fence->flags = 0;
+}
+
+/**
+ * kfence_create - create a fence
+ * @gfp: the allowed allocation type
+ *
+ * A fence is created with a reference count of one, and pending a signal.
+ * After you have completed setting up the fence for use, call kfence_signal()
+ * to signal completion.
+ *
+ * Returns the newly allocated fence, or NULL on error.
+ */
+struct kfence *kfence_create(gfp_t gfp)
+{
+	struct kfence *fence;
+
+	fence = kmalloc(sizeof(*fence), gfp);
+	if (!fence)
+		return NULL;
+
+	kfence_init(fence);
+	return fence;
+}
+EXPORT_SYMBOL_GPL(kfence_create);
+
+static int kfence_wake(wait_queue_t *wq, unsigned mode, int flags, void *key)
+{
+	list_del(&wq->task_list);
+	__kfence_signal(wq->private, key);
+	kfence_put(wq->private);
+	kfree(wq);
+	return 0;
+}
+
+static bool __kfence_check_if_after(struct kfence *fence,
+				    const struct kfence * const signaler)
+{
+	wait_queue_t *wq;
+
+	if (__test_and_set_bit(KFENCE_CHECKED_BIT, &fence->flags))
+		return false;
+
+	if (fence == signaler)
+		return true;
+
+	list_for_each_entry(wq, &fence->wait.task_list, task_list) {
+		if (wq->func != kfence_wake)
+			continue;
+
+		if (__kfence_check_if_after(wq->private, signaler))
+			return true;
+	}
+
+	return false;
+}
+
+static void __kfence_clear_checked_bit(struct kfence *fence)
+{
+	wait_queue_t *wq;
+
+	if (!__test_and_clear_bit(KFENCE_CHECKED_BIT, &fence->flags))
+		return;
+
+	list_for_each_entry(wq, &fence->wait.task_list, task_list) {
+		if (wq->func != kfence_wake)
+			continue;
+
+		__kfence_clear_checked_bit(wq->private);
+	}
+}
+
+static bool kfence_check_if_after(struct kfence *fence,
+				  const struct kfence * const signaler)
+{
+	unsigned long flags;
+	bool err;
+
+	if (!config_enabled(CONFIG_KFENCE_CHECK_DAG))
+		return false;
+
+	spin_lock_irqsave(&async_lock, flags);
+	err = __kfence_check_if_after(fence, signaler);
+	__kfence_clear_checked_bit(fence);
+	spin_unlock_irqrestore(&async_lock, flags);
+
+	return err;
+}
+
+/**
+ * kfence_add - set one fence to wait upon another
+ * @fence: this kfence
+ * @signaler: target kfence to wait upon
+ * @gfp: the allowed allocation type
+ *
+ * kfence_add() causes the @fence to wait upon completion of @signaler.
+ * Internally the @fence is marked as pending a signal from @signaler.
+ *
+ * Returns 1 if the @fence was added to the waitqueue of @signaler, 0
+ * if @signaler was already complete, or a negative error code.
+ */
+int kfence_add(struct kfence *fence, struct kfence *signaler, gfp_t gfp)
+{
+	wait_queue_t *wq;
+	unsigned long flags;
+	int pending;
+
+	if (!signaler || kfence_complete(signaler))
+		return 0;
+
+	/* The dependency graph must be acyclic */
+	if (unlikely(kfence_check_if_after(fence, signaler)))
+		return -EINVAL;
+
+	wq = kmalloc(sizeof(*wq), gfp);
+	if (unlikely(!wq)) {
+		if (!gfpflags_allow_blocking(gfp))
+			return -ENOMEM;
+
+		kfence_wait(signaler);
+		return 0;
+	}
+
+	wq->flags = 0;
+	wq->func = kfence_wake;
+	wq->private = kfence_get(fence);
+
+	kfence_pending(fence);
+
+	spin_lock_irqsave(&signaler->wait.lock, flags);
+	if (likely(!kfence_complete(signaler))) {
+		__add_wait_queue_tail(&signaler->wait, wq);
+		pending = 1;
+	} else {
+		INIT_LIST_HEAD(&wq->task_list);
+		kfence_wake(wq, 0, 0, NULL);
+		pending = 0;
+	}
+	spin_unlock_irqrestore(&signaler->wait.lock, flags);
+
+	return pending;
+}
+EXPORT_SYMBOL_GPL(kfence_add);
+
+/**
+ * kfence_add_completion - set the fence to wait upon a completion
+ * @fence: this kfence
+ * @x: target completion to wait upon
+ * @gfp: the allowed allocation type
+ *
+ * kfence_add_completion() causes the @fence to wait upon a completion.
+ * Internally the @fence is marked as pending a signal from @x.
+ *
+ * Returns 1 if the @fence was added to the waitqueue of @x, 0
+ * if @x was already complete, or a negative error code.
+ */
+int kfence_add_completion(struct kfence *fence, struct completion *x, gfp_t gfp)
+{
+	wait_queue_t *wq;
+	unsigned long flags;
+	int pending;
+
+	if (!x || completion_done(x))
+		return 0;
+
+	wq = kmalloc(sizeof(*wq), gfp);
+	if (unlikely(!wq)) {
+		if (!gfpflags_allow_blocking(gfp))
+			return -ENOMEM;
+
+		wait_for_completion(x);
+		return 0;
+	}
+
+	wq->flags = 0;
+	wq->func = kfence_wake;
+	wq->private = kfence_get(fence);
+
+	kfence_pending(fence);
+
+	spin_lock_irqsave(&x->wait.lock, flags);
+	if (likely(!READ_ONCE(x->done))) {
+		__add_wait_queue_tail(&x->wait, wq);
+		pending = 1;
+	} else {
+		INIT_LIST_HEAD(&wq->task_list);
+		kfence_wake(wq, 0, 0, NULL);
+		pending = 0;
+	}
+	spin_unlock_irqrestore(&x->wait.lock, flags);
+
+	return pending;
+}
+EXPORT_SYMBOL_GPL(kfence_add_completion);
+
 static async_cookie_t lowest_in_progress(struct async_domain *domain)
 {
 	struct list_head *pending;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index f7b17daf8e4d..47319f501954 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1763,6 +1763,29 @@ config KPROBES_SANITY_TEST
 
 	  Say N if you are unsure.
 
+config KFENCE_SELFTEST
+	tristate "Kfence self tests"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option provides a kernel module that can be used to test
+	  the kfence handling. This option is not useful for distributions
+	  or general kernels, but only for kernel developers working on the
+	  kfence and async_domain facility.
+
+	  Say N if you are unsure.
+
+config KFENCE_CHECK_DAG
+	bool "Check that kfence are only used with directed acyclic graphs"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option enforces that kfences are only used with directed acyclic
+	  graphs (DAGs), as otherwise cycles in the graph mean that the fences
+	  will never be signaled (nor the corresponding tasks executed).
+
+	  Say N if you are unsure.
+
 config ASYNC_DOMAIN_SELFTEST
 	tristate "Asynchronous domain self tests"
 	depends on DEBUG_KERNEL
diff --git a/lib/Makefile b/lib/Makefile
index 23cdb9885e45..82e8b5f77c44 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -28,6 +28,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
+obj-$(CONFIG_KFENCE_SELFTEST) += test-kfence.o
 obj-$(CONFIG_ASYNC_DOMAIN_SELFTEST) += test-async-domain.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/test-kfence.c b/lib/test-kfence.c
new file mode 100644
index 000000000000..1d9e4a5a2184
--- /dev/null
+++ b/lib/test-kfence.c
@@ -0,0 +1,475 @@
+/*
+ * Test cases for kfence facility.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/delay.h>
+#include <linux/kfence.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+static int __init test_self(void)
+{
+	struct kfence *fence;
+
+	/* Test kfence signaling and completion testing */
+	pr_debug("%s\n", __func__);
+
+	fence = kfence_create(GFP_KERNEL);
+	if (!fence)
+		return -ENOMEM;
+
+	if (kfence_complete(fence))
+		return -EINVAL;
+
+	kfence_signal(fence);
+
+	if (!kfence_complete(fence))
+		return -EINVAL;
+
+	kfence_wait(fence);
+	if (!kfence_complete(fence))
+		return -EINVAL;
+
+	kfence_put(fence);
+	return 0;
+}
+
+static int __init test_dag(void)
+{
+	struct kfence *A, *B, *C;
+
+	/* Test detection of cycles within the kfence graphs */
+	pr_debug("%s\n", __func__);
+
+	if (!config_enabled(CONFIG_KFENCE_CHECK_DAG))
+		return 0;
+
+	A = kfence_create(GFP_KERNEL);
+	if (kfence_add(A, A, GFP_KERNEL) != -EINVAL) {
+		pr_err("recursive cycle not detected (AA)\n");
+		return -EINVAL;
+	}
+
+	B = kfence_create(GFP_KERNEL);
+
+	kfence_add(A, B, GFP_KERNEL);
+	if (kfence_add(B, A, GFP_KERNEL) != -EINVAL) {
+		pr_err("single depth cycle not detected (BAB)\n");
+		return -EINVAL;
+	}
+
+	C = kfence_create(GFP_KERNEL);
+	kfence_add(B, C, GFP_KERNEL);
+	if (kfence_add(C, A, GFP_KERNEL) != -EINVAL) {
+		pr_err("cycle not detected (BA, CB, AC)\n");
+		return -EINVAL;
+	}
+
+	kfence_signal(A);
+	kfence_put(A);
+
+	kfence_signal(B);
+	kfence_put(B);
+
+	kfence_signal(C);
+	kfence_put(C);
+
+	return 0;
+}
+
+static int __init test_AB(void)
+{
+	struct kfence *A, *B;
+	int ret;
+
+	/* Test kfence (A) waiting on an event source (B) */
+	pr_debug("%s\n", __func__);
+
+	A = kfence_create(GFP_KERNEL);
+	B = kfence_create(GFP_KERNEL);
+	if (!A || !B)
+		return -ENOMEM;
+
+	ret = kfence_add(A, B, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_signal(A);
+	if (kfence_complete(A))
+		return -EINVAL;
+
+	kfence_signal(B);
+	if (!kfence_complete(B))
+		return -EINVAL;
+
+	if (!kfence_complete(A))
+		return -EINVAL;
+
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_ABC(void)
+{
+	struct kfence *A, *B, *C;
+	int ret;
+
+	/* Test a chain of fences, A waits on B who waits on C */
+	pr_debug("%s\n", __func__);
+
+	A = kfence_create(GFP_KERNEL);
+	B = kfence_create(GFP_KERNEL);
+	C = kfence_create(GFP_KERNEL);
+	if (!A || !B || !C)
+		return -ENOMEM;
+
+	ret = kfence_add(A, B, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	ret = kfence_add(B, C, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_signal(A);
+	if (kfence_complete(A))
+		return -EINVAL;
+
+	kfence_signal(B);
+	if (kfence_complete(B))
+		return -EINVAL;
+
+	if (kfence_complete(A))
+		return -EINVAL;
+
+	kfence_signal(C);
+	if (!kfence_complete(C))
+		return -EINVAL;
+
+	if (!kfence_complete(B))
+		return -EINVAL;
+
+	if (!kfence_complete(A))
+		return -EINVAL;
+
+	kfence_put(C);
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_AB_C(void)
+{
+	struct kfence *A, *B, *C;
+	int ret;
+
+	/* Test multiple fences (AB) waiting on a single event (C) */
+	pr_debug("%s\n", __func__);
+
+	A = kfence_create(GFP_KERNEL);
+	B = kfence_create(GFP_KERNEL);
+	C = kfence_create(GFP_KERNEL);
+	if (!A || !B || !C)
+		return -ENOMEM;
+
+	ret = kfence_add(A, C, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	ret = kfence_add(B, C, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_signal(A);
+	kfence_signal(B);
+
+	if (kfence_complete(A))
+		return -EINVAL;
+
+	if (kfence_complete(B))
+		return -EINVAL;
+
+	kfence_signal(C);
+	if (!kfence_complete(C))
+		return -EINVAL;
+
+	if (!kfence_complete(B))
+		return -EINVAL;
+
+	if (!kfence_complete(A))
+		return -EINVAL;
+
+	kfence_put(C);
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_C_AB(void)
+{
+	struct kfence *A, *B, *C;
+	int ret;
+
+	/* Test multiple event sources (A,B) for a single fence (C) */
+	pr_debug("%s\n", __func__);
+
+	A = kfence_create(GFP_KERNEL);
+	B = kfence_create(GFP_KERNEL);
+	C = kfence_create(GFP_KERNEL);
+	if (!A || !B || !C)
+		return -ENOMEM;
+
+	ret = kfence_add(C, A, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	ret = kfence_add(C, B, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_signal(C);
+	if (kfence_complete(C))
+		return -EINVAL;
+
+	kfence_signal(A);
+	kfence_signal(B);
+
+	if (!kfence_complete(A))
+		return -EINVAL;
+
+	if (!kfence_complete(B))
+		return -EINVAL;
+
+	if (!kfence_complete(C))
+		return -EINVAL;
+
+	kfence_put(C);
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_completion(void)
+{
+	struct kfence *fence;
+	struct completion x;
+	int ret;
+
+	/* Test use of a completion as an event source for kfences */
+	pr_debug("%s\n", __func__);
+
+	init_completion(&x);
+
+	fence = kfence_create(GFP_KERNEL);
+	if (!fence)
+		return -ENOMEM;
+
+	ret = kfence_add_completion(fence, &x, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_signal(fence);
+	if (kfence_complete(fence))
+		return -EINVAL;
+
+	complete_all(&x);
+	if (!kfence_complete(fence))
+		return -EINVAL;
+
+	kfence_put(fence);
+	return 0;
+}
+
+struct task_ipc {
+	struct work_struct work;
+	struct completion started;
+	struct kfence *in, *out;
+	int value;
+};
+
+static void __init task_ipc(struct work_struct *work)
+{
+	struct task_ipc *ipc = container_of(work, typeof(*ipc), work);
+
+	complete(&ipc->started);
+
+	kfence_wait(ipc->in);
+	smp_store_mb(ipc->value, 1);
+	kfence_signal(ipc->out);
+}
+
+static int __init test_chain(void)
+{
+	const int nfences = 4096;
+	struct kfence **fences;
+	int ret, i;
+
+	/* Test a long chain of fences */
+	pr_debug("%s\n", __func__);
+
+	fences = kmalloc(sizeof(*fences)*nfences, GFP_KERNEL);
+	if (!fences)
+		return -ENOMEM;
+
+	for (i = 0; i < nfences; i++) {
+		fences[i] = kfence_create(GFP_KERNEL);
+		if (!fences[i])
+			return -ENOMEM;
+
+		if (i > 0) {
+			ret = kfence_add(fences[i], fences[i-1], GFP_KERNEL);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	for (i = nfences; --i; ) {
+		kfence_signal(fences[i]);
+		if (kfence_complete(fences[i]))
+			return -EINVAL;
+	}
+
+	kfence_signal(fences[0]);
+	for (i = 0; i < nfences; i++) {
+		if (!kfence_complete(fences[i]))
+			return -EINVAL;
+
+		kfence_put(fences[i]);
+	}
+	kfree(fences);
+	return 0;
+}
+
+static int __init test_ipc(void)
+{
+	struct task_ipc ipc;
+	int ret = 0;
+
+	/* Test use of kfence as an interprocess signaling mechanism */
+	pr_debug("%s\n", __func__);
+
+	ipc.in = kfence_create(GFP_KERNEL);
+	ipc.out = kfence_create(GFP_KERNEL);
+	if (!ipc.in || !ipc.out)
+		return -ENOMEM;
+
+	/* use a completion to avoid chicken-and-egg testing for kfence */
+	init_completion(&ipc.started);
+
+	ipc.value = 0;
+	INIT_WORK(&ipc.work, task_ipc);
+	schedule_work(&ipc.work);
+
+	wait_for_completion(&ipc.started);
+
+	usleep_range(1000, 2000);
+	if (READ_ONCE(ipc.value)) {
+		pr_err("worker updated value before kfence was signaled\n");
+		ret = -EINVAL;
+	}
+
+	kfence_signal(ipc.in);
+	kfence_wait(ipc.out);
+
+	if (!READ_ONCE(ipc.value)) {
+		pr_err("worker signaled kfence before value was posted\n");
+		ret = -EINVAL;
+	}
+
+	flush_work(&ipc.work);
+	kfence_put(ipc.in);
+	kfence_put(ipc.out);
+	return ret;
+}
+
+static int __init test_kfence_init(void)
+{
+	int ret;
+
+	pr_info("Testing kfences\n");
+
+	ret = test_self();
+	if (ret < 0) {
+		pr_err("self failed\n");
+		return ret;
+	}
+
+	ret = test_dag();
+	if (ret < 0) {
+		pr_err("DAG checker failed\n");
+		return ret;
+	}
+
+	ret = test_AB();
+	if (ret < 0) {
+		pr_err("AB failed\n");
+		return ret;
+	}
+
+	ret = test_ABC();
+	if (ret < 0) {
+		pr_err("ABC failed\n");
+		return ret;
+	}
+
+	ret = test_AB_C();
+	if (ret < 0) {
+		pr_err("AB_C failed\n");
+		return ret;
+	}
+
+	ret = test_C_AB();
+	if (ret < 0) {
+		pr_err("C_AB failed\n");
+		return ret;
+	}
+
+	ret = test_chain();
+	if (ret < 0) {
+		pr_err("chain failed\n");
+		return ret;
+	}
+
+	ret = test_ipc();
+	if (ret < 0) {
+		pr_err("ipc failed\n");
+		return ret;
+	}
+
+	ret = test_completion();
+	if (ret < 0) {
+		pr_err("completion failed\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void __exit test_kfence_cleanup(void)
+{
+}
+
+module_init(test_kfence_init);
+module_exit(test_kfence_cleanup);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/lib/kfence.sh b/tools/testing/selftests/lib/kfence.sh
new file mode 100755
index 000000000000..487320c70ed1
--- /dev/null
+++ b/tools/testing/selftests/lib/kfence.sh
@@ -0,0 +1,10 @@
+#!/bin/sh
+# Runs infrastructure tests using test-kfence kernel module
+
+if /sbin/modprobe -q test-kfence; then
+	/sbin/modprobe -q -r test-kfence
+	echo "kfence: ok"
+else
+	echo "kfence: [FAIL]"
+	exit 1
+fi
-- 
2.8.1

* [PATCH 3/9] async: Extend kfence to allow struct embedding
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
  2016-06-24  9:08 ` [PATCH 1/9] lib: Add kselftests for async-domains Chris Wilson
  2016-06-24  9:08 ` [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-07-13 10:31   ` Peter Zijlstra
  2016-06-24  9:08 ` [PATCH 4/9] async: Extend kfences for listening on DMA fences Chris Wilson
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

Provide a kfence_init() function for embedding the kfence into a
parent structure. kfence_init() takes an optional function pointer argument
should the caller wish to be notified when the kfence is complete. This is
useful for allowing kfences to drive other state machinery.
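
A sketch of the expected pattern (illustrative only; struct my_state and
my_state_advance() are hypothetical names, and note that the callback
runs in atomic context):

	struct my_state {
		struct kfence fence;
		/* ... driver state ... */
	};

	static void __kfence_call my_state_notify(struct kfence *fence)
	{
		struct my_state *st = container_of(fence, typeof(*st), fence);

		if (kfence_complete(fence))
			kfree(st);		/* final call: fence released */
		else
			my_state_advance(st);	/* all event sources signaled */
	}

	...
	kfence_init(&st->fence, my_state_notify);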

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h |  5 ++++
 kernel/async.c         | 52 ++++++++++++++++++++++++++++++++++++----
 lib/test-kfence.c      | 64 +++++++++++++++++++++++++++++++++++++++++++-------
 3 files changed, 109 insertions(+), 12 deletions(-)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 82eba8aacd02..82096bfafaa1 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -38,4 +38,9 @@ static inline bool kfence_complete(struct kfence *fence)
 }
 extern void kfence_put(struct kfence *fence);
 
+typedef void (*kfence_notify_t)(struct kfence *);
+#define __kfence_call __attribute__((aligned(4)))
+
+extern void kfence_init(struct kfence *fence, kfence_notify_t fn);
+
 #endif /* _KFENCE_H_ */
diff --git a/kernel/async.c b/kernel/async.c
index d0bcb7cc4884..db22b890711e 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -134,7 +134,17 @@ static atomic_t entry_count;
  *
  * The fence starts off pending a single signal. Once you have finished
  * setting up the fence, use kfence_signal() to allow it to wait upon
- * its event sources.
+ * its event sources. To embed a kfence within another struct, use
+ *
+ * 	kfence_init()
+ *
+ * This can also be used to receive a callback when the kfence is completed
+ * (pending signal count hits 0).  The callback is also called when the fence
+ * is released (the reference count hits 0, and the fence will be complete).
+ * Note that since the kfence is embedded inside another, its initial reference
+ * must not be dropped unless a callback is provided, or the kfence is the
+ * first member in the parent, and the parent was allocated by kmalloc (i.e.
+ * valid to be freed by kfree()).
  *
  * Use
  *
@@ -151,7 +161,11 @@ static void kfence_free(struct kref *kref)
 
 	WARN_ON(atomic_read(&fence->pending) > 0);
 
-	kfree(fence);
+	if (fence->flags) {
+		kfence_notify_t fn = (kfence_notify_t)fence->flags;
+		fn(fence);
+	} else
+		kfree(fence);
 }
 
 /**
@@ -203,6 +217,11 @@ static void __kfence_signal(struct kfence *fence,
 	if (!atomic_dec_and_test(&fence->pending))
 		return;
 
+	if (fence->flags) {
+		kfence_notify_t fn = (kfence_notify_t)fence->flags;
+		fn(fence);
+	}
+
 	atomic_dec(&fence->pending);
 	__kfence_wake_up_all(fence, continuation);
 }
@@ -240,7 +259,7 @@ void kfence_wait(struct kfence *fence)
 }
 EXPORT_SYMBOL_GPL(kfence_wait);
 
-static void kfence_init(struct kfence *fence)
+static void __kfence_init(struct kfence *fence)
 {
 	init_waitqueue_head(&fence->wait);
 	kref_init(&fence->kref);
@@ -266,11 +285,36 @@ struct kfence *kfence_create(gfp_t gfp)
 	if (!fence)
 		return NULL;
 
-	kfence_init(fence);
+	__kfence_init(fence);
 	return fence;
 }
 EXPORT_SYMBOL_GPL(kfence_create);
 
+/**
+ * kfence_init - initialize a fence for use embedded within a struct
+ * @fence: this kfence
+ * @fn: a callback function for when the fence is complete, and when the
+ * fence is released
+ *
+ * This function initialises the @fence for use embedded within a parent
+ * structure. The optional @fn hook is called when the fence is completed
+ * (when all of its event sources have signaled their completion, i.e. the
+ * pending signal count hits 0; kfence_complete() however reports false, as the
+ * fence is not considered complete until after the callback returns) and when
+ * the fence is subsequently released (the reference count hits 0, and
+ * kfence_complete() is then true). Note carefully that @fn will be called from
+ * atomic context. Also the lower 2 bits of the function pointer are used for
+ * storing flags, so the function must be aligned to at least 4 bytes - use
+ * __kfence_call as a function attribute to ensure correct alignment.
+ */
+void kfence_init(struct kfence *fence, kfence_notify_t fn)
+{
+	__kfence_init(fence);
+	BUG_ON((unsigned long)fn & KFENCE_CHECKED_BIT);
+	fence->flags = (unsigned long)fn;
+}
+EXPORT_SYMBOL_GPL(kfence_init);
+
 static int kfence_wake(wait_queue_t *wq, unsigned mode, int flags, void *key)
 {
 	list_del(&wq->task_list);
diff --git a/lib/test-kfence.c b/lib/test-kfence.c
index 1d9e4a5a2184..0604a27f68b2 100644
--- a/lib/test-kfence.c
+++ b/lib/test-kfence.c
@@ -9,9 +9,27 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 
+static int __init __test_self(struct kfence *fence)
+{
+	if (kfence_complete(fence))
+		return -EINVAL;
+
+	kfence_signal(fence);
+
+	if (!kfence_complete(fence))
+		return -EINVAL;
+
+	kfence_wait(fence);
+	if (!kfence_complete(fence))
+		return -EINVAL;
+
+	return 0;
+}
+
 static int __init test_self(void)
 {
 	struct kfence *fence;
+	int ret;
 
 	/* Test kfence signaling and completion testing */
 	pr_debug("%s\n", __func__);
@@ -20,19 +38,43 @@ static int __init test_self(void)
 	if (!fence)
 		return -ENOMEM;
 
-	if (kfence_complete(fence))
-		return -EINVAL;
+	ret = __test_self(fence);
 
-	kfence_signal(fence);
+	kfence_put(fence);
+	return ret;
+}
 
-	if (!kfence_complete(fence))
-		return -EINVAL;
+struct test_stack {
+	struct kfence fence;
+	bool seen;
+};
 
-	kfence_wait(fence);
-	if (!kfence_complete(fence))
+static void __init __kfence_call fence_callback(struct kfence *fence)
+{
+	struct test_stack *ts = container_of(fence, typeof(*ts), fence);
+	ts->seen = true;
+}
+
+static int __init test_stack(void)
+{
+	struct test_stack ts;
+	int ret;
+
+	/* Test kfence signaling and completion testing (on stack) */
+	pr_debug("%s\n", __func__);
+
+	ts.seen = false;
+	kfence_init(&ts.fence, fence_callback);
+
+	ret = __test_self(&ts.fence);
+	if (ret < 0)
+		return ret;
+
+	if (!ts.seen) {
+		pr_err("fence callback not executed\n");
 		return -EINVAL;
+	}
 
-	kfence_put(fence);
 	return 0;
 }
 
@@ -413,6 +455,12 @@ static int __init test_kfence_init(void)
 		return ret;
 	}
 
+	ret = test_stack();
+	if (ret < 0) {
+		pr_err("stack failed\n");
+		return ret;
+	}
+
 	ret = test_dag();
 	if (ret < 0) {
 		pr_err("DAG checker failed\n");
-- 
2.8.1

* [PATCH 4/9] async: Extend kfences for listening on DMA fences
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (2 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 3/9] async: Extend kfence to allow struct embedding Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-06-24  9:08 ` [PATCH 5/9] async: Wrap hrtimer to provide a time source for a kfence Chris Wilson
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

dma-buf provides an interface for receiving notifications from DMA
hardware. kfence provides a useful interface for collecting such fences
and combining them with other events.
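
For example (an untested sketch, where dma is a struct fence borrowed
from elsewhere), gating work on DMA completion becomes:

	struct kfence *gate;
	int ret;

	gate = kfence_create(GFP_KERNEL);
	if (!gate)
		return -ENOMEM;

	ret = kfence_add_dma(gate, dma, GFP_KERNEL);
	if (ret < 0)
		goto out;

	kfence_signal(gate);	/* undo the initial pending from create */
	kfence_wait(gate);	/* sleeps until the dma fence signals */
out:
	kfence_put(gate);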

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h |  4 ++++
 kernel/async.c         | 63 ++++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 65 insertions(+), 2 deletions(-)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 82096bfafaa1..d71f30c626ae 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -15,6 +15,7 @@
 #include <linux/wait.h>
 
 struct completion;
+struct fence;
 
 struct kfence {
 	wait_queue_head_t wait;
@@ -29,6 +30,9 @@ extern int kfence_add(struct kfence *fence, struct kfence *after, gfp_t gfp);
 extern int kfence_add_completion(struct kfence *fence,
 				 struct completion *x,
 				 gfp_t gfp);
+extern int kfence_add_dma(struct kfence *fence,
+			  struct fence *dma,
+			  gfp_t gfp);
 extern void kfence_pending(struct kfence *fence);
 extern void kfence_signal(struct kfence *fence);
 extern void kfence_wait(struct kfence *fence);
diff --git a/kernel/async.c b/kernel/async.c
index db22b890711e..01552d23a916 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -50,6 +50,7 @@ asynchronous and synchronous parts of the kernel.
 
 #include <linux/async.h>
 #include <linux/atomic.h>
+#include <linux/fence.h>
 #include <linux/kfence.h>
 #include <linux/ktime.h>
 #include <linux/export.h>
@@ -122,9 +123,10 @@ static atomic_t entry_count;
  * allowing multiple pending / signals to be sent before the kfence is
  * complete.
  *
- *	kfence_add() / kfence_add_completion()
+ *	kfence_add() / kfence_add_completion() / kfence_add_dma()
  *
- * sets the kfence to wait upon another fence, or completion respectively.
+ * sets the kfence to wait upon another fence, completion, or DMA fence
+ * respectively.
  *
  * Unlike completions, kfences are expected to live inside more complex graphs
 * and form the basis for parallel execution of interdependent tasks and so are
@@ -484,6 +486,63 @@ int kfence_add_completion(struct kfence *fence, struct completion *x, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(kfence_add_completion);
 
+struct dma_fence_cb {
+	struct fence_cb base;
+	struct kfence *fence;
+};
+
+static void dma_kfence_wake(struct fence *dma, struct fence_cb *data)
+{
+	struct dma_fence_cb *cb = container_of(data, typeof(*cb), base);
+	kfence_signal(cb->fence);
+	kfence_put(cb->fence);
+	kfree(cb);
+}
+
+/**
+ * kfence_add_dma - set the fence to wait upon a DMA fence
+ * @fence: this kfence
+ * @dma: target DMA fence to wait upon
+ * @gfp: the allowed allocation type
+ *
+ * kfence_add_dma() causes the @fence to wait upon completion of a DMA fence.
+ *
+ * Returns 1 if the @fence was successfully added to the waitqueue of @dma, 0
+ * if @dma was already signaled (and so not added), or a negative error code.
+ */
+int kfence_add_dma(struct kfence *fence, struct fence *dma, gfp_t gfp)
+{
+	struct dma_fence_cb *cb;
+	int ret;
+
+	if (!dma || fence_is_signaled(dma))
+		return 0;
+
+	cb = kmalloc(sizeof(*cb), gfp);
+	if (!cb) {
+		if (!gfpflags_allow_blocking(gfp))
+			return -ENOMEM;
+
+		return fence_wait(dma, false);
+	}
+
+	cb->fence = kfence_get(fence);
+	kfence_pending(fence);
+
+	ret = fence_add_callback(dma, &cb->base, dma_kfence_wake);
+	if (ret == 0) {
+		/* callback added, the fence is now pending the dma signal */
+		ret = 1;
+	} else {
+		dma_kfence_wake(dma, &cb->base);
+		if (ret == -ENOENT) /* fence already signaled */
+			ret = 0;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kfence_add_dma);
+
 static async_cookie_t lowest_in_progress(struct async_domain *domain)
 {
 	struct list_head *pending;
-- 
2.8.1

* [PATCH 5/9] async: Wrap hrtimer to provide a time source for a kfence
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (3 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 4/9] async: Extend kfences for listening on DMA fences Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-07-13 10:32   ` Peter Zijlstra
  2016-06-24  9:08 ` [PATCH 6/9] async: Add a convenience wrapper for waiting on implicit dma-buf Chris Wilson
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

kfence_add_delay() is a convenience wrapper around
hrtimer_start_range_ns() to provide a time source for a kfence graph.
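
For illustration, imposing a minimum delay of ~10ms upon a fence might
look like the following (untested sketch, mirroring the selftest below):

	struct kfence *fence;
	int ret;

	fence = kfence_create(GFP_KERNEL);
	if (!fence)
		return -ENOMEM;

	/* complete no earlier than ~10ms from now, with 1us slack */
	ret = kfence_add_delay(fence, CLOCK_MONOTONIC, HRTIMER_MODE_REL,
			       ms_to_ktime(10), NSEC_PER_USEC, GFP_KERNEL);
	if (ret < 0)
		goto out;

	kfence_signal(fence);	/* undo the initial pending from create */
	kfence_wait(fence);
out:
	kfence_put(fence);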

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h |  5 +++++
 kernel/async.c         | 60 +++++++++++++++++++++++++++++++++++++++++++++++++-
 lib/test-kfence.c      | 44 ++++++++++++++++++++++++++++++++++++
 3 files changed, 108 insertions(+), 1 deletion(-)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index d71f30c626ae..1abec5e6b23c 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -16,6 +16,7 @@
 
 struct completion;
 struct fence;
+enum hrtimer_mode;
 
 struct kfence {
 	wait_queue_head_t wait;
@@ -33,6 +34,10 @@ extern int kfence_add_completion(struct kfence *fence,
 extern int kfence_add_dma(struct kfence *fence,
 			  struct fence *dma,
 			  gfp_t gfp);
+extern int kfence_add_delay(struct kfence *fence,
+			    clockid_t clock, enum hrtimer_mode mode,
+			    ktime_t delay, u64 slack,
+			    gfp_t gfp);
 extern void kfence_pending(struct kfence *fence);
 extern void kfence_signal(struct kfence *fence);
 extern void kfence_wait(struct kfence *fence);
diff --git a/kernel/async.c b/kernel/async.c
index 01552d23a916..5d02445e36b7 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -126,7 +126,10 @@ static atomic_t entry_count;
  *	kfence_add() / kfence_add_completion() / kfence_add_dma()
  *
  * sets the kfence to wait upon another fence, completion, or DMA fence
- * respectively.
+ * respectively. To set the fence to wait for at least a certain period of
+ * time, or until after a certain point in time, use
+ *
+ * 	kfence_add_delay()
  *
  * Unlike completions, kfences are expected to live inside more complex graphs
 * and form the basis for parallel execution of interdependent tasks and so are
@@ -543,6 +546,61 @@ int kfence_add_dma(struct kfence *fence, struct fence *dma, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(kfence_add_dma);
 
+/**
+ * kfence_add_delay - set the fence to wait for a period of time
+ * @fence: this kfence
+ * @clock: which clock to program
+ * @mode: delay given as relative or absolute
+ * @delay: how long or until what time to wait
+ * @slack: how much slack may be applied to the delay
+ *
+ * kfence_add_delay() causes the @fence to wait for a period of time, or
+ * until a certain point in time. It is a convenience wrapper around
+ * hrtimer_start_range_ns(). For more details on @clock, @mode, @delay and
+ * @slack please consult the hrtimer documentation.
+ *
+ * Returns 1 if the delay was successfully added to the @fence, or a negative
+ * error code on failure.
+ */
+struct timer_cb {
+	struct hrtimer timer;
+	struct kfence *fence;
+};
+
+static enum hrtimer_restart
+timer_kfence_wake(struct hrtimer *timer)
+{
+	struct timer_cb *cb = container_of(timer, typeof(*cb), timer);
+
+	kfence_signal(cb->fence);
+	kfence_put(cb->fence);
+	kfree(cb);
+
+	return HRTIMER_NORESTART;
+}
+
+int kfence_add_delay(struct kfence *fence,
+		     clockid_t clock, enum hrtimer_mode mode,
+		     ktime_t delay, u64 slack,
+		     gfp_t gfp)
+{
+	struct timer_cb *cb;
+
+	cb = kmalloc(sizeof(*cb), gfp);
+	if (!cb)
+		return -ENOMEM;
+
+	cb->fence = kfence_get(fence);
+	kfence_pending(fence);
+
+	hrtimer_init(&cb->timer, clock, mode);
+	cb->timer.function = timer_kfence_wake;
+
+	hrtimer_start_range_ns(&cb->timer, delay, slack, mode);
+	return 1;
+}
+EXPORT_SYMBOL_GPL(kfence_add_delay);
+
 static async_cookie_t lowest_in_progress(struct async_domain *domain)
 {
 	struct list_head *pending;
diff --git a/lib/test-kfence.c b/lib/test-kfence.c
index 0604a27f68b2..782a2cca19c5 100644
--- a/lib/test-kfence.c
+++ b/lib/test-kfence.c
@@ -341,6 +341,44 @@ static int __init test_completion(void)
 	return 0;
 }
 
+static int __init test_delay(void)
+{
+	struct kfence *fence;
+	ktime_t delay;
+	int ret;
+
+	/* Test use of a hrtimer as an event source for kfences */
+	pr_debug("%s\n", __func__);
+
+	fence = kfence_create(GFP_KERNEL);
+	if (!fence)
+		return -ENOMEM;
+
+	delay = ktime_get();
+
+	ret = kfence_add_delay(fence, CLOCK_MONOTONIC, HRTIMER_MODE_REL,
+			       ms_to_ktime(1), 1 << 10,
+			       GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_signal(fence);
+	kfence_wait(fence);
+
+	delay = ktime_sub(ktime_get(), delay);
+	kfence_put(fence);
+
+	if (!ktime_to_ms(delay)) {
+		pr_err("kfence woke too early, delay was only %lldns\n",
+		       (long long)ktime_to_ns(delay));
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 struct task_ipc {
 	struct work_struct work;
 	struct completion started;
@@ -509,6 +547,12 @@ static int __init test_kfence_init(void)
 		return ret;
 	}
 
+	ret = test_delay();
+	if (ret < 0) {
+		pr_err("delay failed\n");
+		return ret;
+	}
+
 	return 0;
 }
 
-- 
2.8.1

* [PATCH 6/9] async: Add a convenience wrapper for waiting on implicit dma-buf
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (4 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 5/9] async: Wrap hrtimer to provide a time source for a kfence Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-06-24  9:08 ` [PATCH 7/9] async: Add support for explicit fine-grained barriers Chris Wilson
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

dma-bufs implicitly track their (DMA) rendering using a
reservation_object, which holds either the last write (as an exclusive
fence) or the current renders (as a set of shared fences). To wait
upon a reservation object in conjunction with other sources,
kfence_add_reservation() extracts the DMA fences from the object and
adds the individual waits to the kfence.
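
As a usage sketch (hedged: "resv" stands for a reservation object
obtained from a dma-buf, and "write" for the desired access mode; both
are placeholders for illustration):

	struct kfence *fence;
	int err;

	fence = kfence_create(GFP_KERNEL);
	if (!fence)
		return -ENOMEM;

	err = kfence_add_reservation(fence, resv, write, GFP_KERNEL);
	if (err < 0) {
		kfence_signal(fence);
		kfence_put(fence);
		return err;
	}

	kfence_signal(fence);	/* no more event sources will be added */
	kfence_wait(fence);	/* sleep until all DMA fences have signaled */
	kfence_put(fence);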

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h |  5 ++++
 kernel/async.c         | 64 +++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 1abec5e6b23c..2f01eb052e4d 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -17,6 +17,7 @@
 struct completion;
 struct fence;
 enum hrtimer_mode;
+struct reservation_object;
 
 struct kfence {
 	wait_queue_head_t wait;
@@ -34,6 +35,10 @@ extern int kfence_add_completion(struct kfence *fence,
 extern int kfence_add_dma(struct kfence *fence,
 			  struct fence *dma,
 			  gfp_t gfp);
+extern int kfence_add_reservation(struct kfence *fence,
+				  struct reservation_object *resv,
+				  bool write,
+				  gfp_t gfp);
 extern int kfence_add_delay(struct kfence *fence,
 			    clockid_t clock, enum hrtimer_mode mode,
 			    ktime_t delay, u64 slack,
diff --git a/kernel/async.c b/kernel/async.c
index 5d02445e36b7..1fa1f39b5a74 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -54,9 +54,10 @@ asynchronous and synchronous parts of the kernel.
 #include <linux/kfence.h>
 #include <linux/ktime.h>
 #include <linux/export.h>
-#include <linux/wait.h>
+#include <linux/reservation.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/wait.h>
 #include <linux/workqueue.h>
 
 #include "workqueue_internal.h"
@@ -123,11 +124,16 @@ static atomic_t entry_count;
  * allowing multiple pending / signals to be sent before the kfence is
  * complete.
  *
- *	kfence_add() / kfence_add_completion() / kfence_add_dma()
+ *	kfence_add() / kfence_add_completion()
+ *
+ * sets the kfence to wait upon another fence or completion respectively. To
+ * wait upon DMA activity, either use
  *
- * sets the kfence to wait upon another fence, completion, or DMA fence
- * respectively. To set the fence to wait for at least a certain period of
- * time, or until after a certain point in time, use
+ * 	kfence_add_dma() or kfence_add_reservation()
+ *
+ * depending on the manner of DMA activity tracking.  To set the fence to wait
+ * for at least a certain period of time, or until after a certain point in
+ * time, use
  *
  * 	kfence_add_timer()
  *
@@ -547,6 +553,54 @@ int kfence_add_dma(struct kfence *fence, struct fence *dma, gfp_t gfp)
 EXPORT_SYMBOL_GPL(kfence_add_dma);
 
 /**
+ * kfence_add_reservation - set the fence to wait upon a reservation_object
+ * @fence: this kfence
+ * @resv: target reservation_object (collection of DMA fences) to wait upon
+ * @write: Wait for read or read/write access
+ * @gfp: the allowed allocation type
+ *
+ * kfence_add_reservation() causes the @fence to wait upon completion of the
+ * reservation object (a collection of DMA fences), either for read access
+ * or for read/write access.
+ *
+ * Returns 1 if the @fence was successfully added to the waitqueues of @resv,
+ * 0 if @resv was already signaled (and so not added), or a negative error code.
+ */
+int kfence_add_reservation(struct kfence *fence,
+			   struct reservation_object *resv,
+			   bool write,
+			   gfp_t gfp)
+{
+	struct fence *excl, **shared;
+	unsigned count, i;
+	int ret;
+
+	ret = reservation_object_get_fences_rcu(resv, &excl, &count, &shared);
+	if (ret)
+		return ret;
+
+	if (write) {
+		for (i = 0; i < count; i++) {
+			ret |= kfence_add_dma(fence, shared[i], gfp);
+			if (ret < 0)
+				goto out;
+		}
+	}
+
+	if (excl)
+		ret |= kfence_add_dma(fence, excl, gfp);
+
+out:
+	fence_put(excl);
+	for (i = 0; i < count; i++)
+		fence_put(shared[i]);
+	kfree(shared);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kfence_add_reservation);
+
+/**
  * kfence_add_delay - set the fence to wait for a period of time
  * @fence: this kfence
  * @clock: which clock to program
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 7/9] async: Add support for explicit fine-grained barriers
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (5 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 6/9] async: Add a convenience wrapper for waiting on implicit dma-buf Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-06-24  9:08 ` [PATCH 8/9] async: Add execution barriers Chris Wilson
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

The current async-domain model supports running a multitude of
independent tasks with a coarse synchronisation point. This is
sufficient for its original purpose of allowing independent drivers to
run concurrently during various phases (booting, early resume, late
resume etc), and keeping the asynchronous domain out of the synchronous
kernel domains. However, to exploit the mechanism further, drivers
themselves want to schedule multiple tasks within a phase (or between
phases) and control the order of execution of those tasks relative to
each other. To enable this, we extend the synchronisation scheme based
upon kfences and back every task with one. Any task may now wait upon a
kfence before being scheduled, and equally the kfence may be used to
wait on the task itself (rather than waiting on the cookie for all
previous tasks to be completed).
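
As a minimal sketch of the intended usage (hedged: task_A/task_B and
their data pointers are placeholders; error handling is elided):

	struct async_work *A, *B;

	A = async_work_create(task_A, data_A, GFP_KERNEL);
	B = async_work_create(task_B, data_B, GFP_KERNEL);

	/* B may not begin execution until A has completed */
	async_work_after(B, &A->fence);

	schedule_async_work(A);
	schedule_async_work(B);

	async_work_wait(B);	/* waits for A, then B */
	async_work_put(B);
	async_work_put(A);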

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/async.h   |  60 +++++++++-
 kernel/async.c          | 244 +++++++++++++++++++++----------------
 lib/test-async-domain.c | 311 +++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 505 insertions(+), 110 deletions(-)

diff --git a/include/linux/async.h b/include/linux/async.h
index 6b0226bdaadc..45d6c8323b60 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -13,38 +13,88 @@
 #define __ASYNC_H__
 
 #include <linux/types.h>
+#include <linux/kfence.h>
 #include <linux/list.h>
 
 typedef u64 async_cookie_t;
 typedef void (*async_func_t) (void *data, async_cookie_t cookie);
+
+struct async_work {
+	struct kfence fence;
+	/* private */
+};
+
 struct async_domain {
 	struct list_head pending;
 	unsigned registered:1;
 };
 
+#define ASYNC_DOMAIN_INIT(_name, _r) {				\
+	.pending = LIST_HEAD_INIT(_name.pending),		\
+	.registered = _r						\
+}
+
 /*
  * domain participates in global async_synchronize_full
  */
 #define ASYNC_DOMAIN(_name) \
-	struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending),	\
-				      .registered = 1 }
+	struct async_domain _name = ASYNC_DOMAIN_INIT(_name, 1)
 
 /*
  * domain is free to go out of scope as soon as all pending work is
  * complete, this domain does not participate in async_synchronize_full
  */
 #define ASYNC_DOMAIN_EXCLUSIVE(_name) \
-	struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
-				      .registered = 0 }
+	struct async_domain _name = ASYNC_DOMAIN_INIT(_name, 0)
+
+extern void init_async_domain(struct async_domain *domain, bool registered);
 
 extern async_cookie_t async_schedule(async_func_t func, void *data);
 extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
 					    struct async_domain *domain);
-void async_unregister_domain(struct async_domain *domain);
+extern void async_unregister_domain(struct async_domain *domain);
 extern void async_synchronize_full(void);
 extern void async_synchronize_full_domain(struct async_domain *domain);
 extern void async_synchronize_cookie(async_cookie_t cookie);
 extern void async_synchronize_cookie_domain(async_cookie_t cookie,
 					    struct async_domain *domain);
+
 extern bool current_is_async(void);
+
+extern struct async_work *
+async_work_create(async_func_t func, void *data, gfp_t gfp);
+
+static inline struct async_work *async_work_get(struct async_work *work)
+{
+	kfence_get(&work->fence);
+	return work;
+}
+
+static inline int
+async_work_after(struct async_work *work, struct kfence *fence)
+{
+	return kfence_add(&work->fence, fence, GFP_KERNEL);
+}
+
+static inline int
+async_work_before(struct async_work *work, struct kfence *fence)
+{
+	return kfence_add(fence, &work->fence, GFP_KERNEL);
+}
+
+static inline void async_work_wait(struct async_work *work)
+{
+	kfence_wait(&work->fence);
+}
+
+static inline void async_work_put(struct async_work *work)
+{
+	kfence_put(&work->fence);
+}
+
+extern async_cookie_t queue_async_work(struct async_domain *domain,
+				       struct async_work *work,
+				       gfp_t gfp);
+extern async_cookie_t schedule_async_work(struct async_work *work);
+
 #endif
diff --git a/kernel/async.c b/kernel/async.c
index 1fa1f39b5a74..83007c39a113 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -2,6 +2,7 @@
  * async.c: Asynchronous function calls for boot performance
  *
  * (C) Copyright 2009 Intel Corporation
+ * (C) Copyright 2016 Intel Corporation
  * Author: Arjan van de Ven <arjan@linux.intel.com>
  *
  * This program is free software; you can redistribute it and/or
@@ -62,28 +63,31 @@ asynchronous and synchronous parts of the kernel.
 
 #include "workqueue_internal.h"
 
-static async_cookie_t next_cookie = 1;
-
 #define MAX_WORK		32768
-#define ASYNC_COOKIE_MAX	ULLONG_MAX	/* infinity cookie */
-
-static LIST_HEAD(async_global_pending);	/* pending from all registered doms */
-static ASYNC_DOMAIN(async_dfl_domain);
-static DEFINE_SPINLOCK(async_lock);
 
 struct async_entry {
-	struct list_head	domain_list;
-	struct list_head	global_list;
-	struct work_struct	work;
-	async_cookie_t		cookie;
-	async_func_t		func;
-	void			*data;
-	struct async_domain	*domain;
+	struct async_work base;
+	struct work_struct work;
+
+	struct list_head pending_link[2];
+
+	async_cookie_t cookie;
+	async_func_t func;
+	void *data;
 };
 
-static DECLARE_WAIT_QUEUE_HEAD(async_done);
+static LIST_HEAD(async_global_pending);	/* pending from all registered doms */
+static ASYNC_DOMAIN(async_dfl_domain);
+static DEFINE_SPINLOCK(async_lock);
+static unsigned async_pending_count;
 
-static atomic_t entry_count;
+static async_cookie_t assign_cookie(void)
+{
+	static async_cookie_t next_cookie;
+	if (++next_cookie == 0)
+		next_cookie = 1;
+	return next_cookie;
+}
 
 /**
  * DOC: kfence overview
@@ -166,6 +170,8 @@ static atomic_t entry_count;
  */
 
 #define KFENCE_CHECKED_BIT	0
+#define ASYNC_WORK_BIT		1
+#define ASYNC_QUEUED_BIT	2
 
 static void kfence_free(struct kref *kref)
 {
@@ -173,7 +179,7 @@ static void kfence_free(struct kref *kref)
 
 	WARN_ON(atomic_read(&fence->pending) > 0);
 
-	if (fence->flags) {
+	if (fence->flags && !test_bit(ASYNC_WORK_BIT, &fence->flags)) {
 		kfence_notify_t fn = (kfence_notify_t)fence->flags;
 		fn(fence);
 	} else
@@ -229,6 +235,13 @@ static void __kfence_signal(struct kfence *fence,
 	if (!atomic_dec_and_test(&fence->pending))
 		return;
 
+	if (test_bit(ASYNC_WORK_BIT, &fence->flags)) {
+		struct async_entry *entry =
+			container_of(fence, typeof(*entry), base.fence);
+		queue_work(system_unbound_wq, &entry->work);
+		return;
+	}
+
 	if (fence->flags) {
 		kfence_notify_t fn = (kfence_notify_t)fence->flags;
 		fn(fence);
@@ -322,7 +335,7 @@ EXPORT_SYMBOL_GPL(kfence_create);
 void kfence_init(struct kfence *fence, kfence_notify_t fn)
 {
 	__kfence_init(fence);
-	BUG_ON((unsigned long)fn & KFENCE_CHECKED_BIT);
+	BUG_ON((unsigned long)fn & (BIT(KFENCE_CHECKED_BIT) | BIT(ASYNC_WORK_BIT)));
 	fence->flags = (unsigned long)fn;
 }
 EXPORT_SYMBOL_GPL(kfence_init);
@@ -656,36 +669,10 @@ int kfence_add_delay(struct kfence *fence,
 }
 EXPORT_SYMBOL_GPL(kfence_add_delay);
 
-static async_cookie_t lowest_in_progress(struct async_domain *domain)
-{
-	struct list_head *pending;
-	async_cookie_t ret = ASYNC_COOKIE_MAX;
-	unsigned long flags;
-
-	spin_lock_irqsave(&async_lock, flags);
-
-	if (domain)
-		pending = &domain->pending;
-	else
-		pending = &async_global_pending;
-
-	if (!list_empty(pending))
-		ret = list_first_entry(pending, struct async_entry,
-				       domain_list)->cookie;
-
-	spin_unlock_irqrestore(&async_lock, flags);
-	return ret;
-}
-
-/*
- * pick the first pending entry and run it
- */
 static void async_run_entry_fn(struct work_struct *work)
 {
-	struct async_entry *entry =
-		container_of(work, struct async_entry, work);
-	unsigned long flags;
-	ktime_t uninitialized_var(calltime), delta, rettime;
+	struct async_entry *entry = container_of(work, typeof(*entry), work);
+	ktime_t uninitialized_var(calltime);
 
 	/* 1) run (and print duration) */
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
@@ -696,8 +683,7 @@ static void async_run_entry_fn(struct work_struct *work)
 	}
 	entry->func(entry->data, entry->cookie);
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
-		rettime = ktime_get();
-		delta = ktime_sub(rettime, calltime);
+		ktime_t delta = ktime_sub(ktime_get(), calltime);
 		pr_debug("initcall %lli_%pF returned 0 after %lld usecs\n",
 			(long long)entry->cookie,
 			entry->func,
@@ -705,69 +691,71 @@ static void async_run_entry_fn(struct work_struct *work)
 	}
 
 	/* 2) remove self from the pending queues */
-	spin_lock_irqsave(&async_lock, flags);
-	list_del_init(&entry->domain_list);
-	list_del_init(&entry->global_list);
-
-	/* 3) free the entry */
-	kfree(entry);
-	atomic_dec(&entry_count);
+	spin_lock_irq(&async_lock);
+	list_del(&entry->pending_link[0]);
+	list_del(&entry->pending_link[1]);
+	async_pending_count--;
+	spin_unlock_irq(&async_lock);
 
-	spin_unlock_irqrestore(&async_lock, flags);
+	/* 3) wake up any waiters */
+	atomic_dec(&entry->base.fence.pending);
+	__kfence_wake_up_all(&entry->base.fence, NULL);
 
-	/* 4) wake up any waiters */
-	wake_up(&async_done);
+	kfence_put(&entry->base.fence);
 }
 
-static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
+struct async_work *async_work_create(async_func_t func, void *data, gfp_t gfp)
 {
 	struct async_entry *entry;
-	unsigned long flags;
-	async_cookie_t newcookie;
 
-	/* allow irq-off callers */
-	entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
+	entry = kmalloc(sizeof(*entry), gfp);
+	if (!entry)
+		return NULL;
 
-	/*
-	 * If we're out of memory or if there's too much work
-	 * pending already, we execute synchronously.
-	 */
-	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
-		kfree(entry);
-		spin_lock_irqsave(&async_lock, flags);
-		newcookie = next_cookie++;
-		spin_unlock_irqrestore(&async_lock, flags);
+	__kfence_init(&entry->base.fence);
 
-		/* low on memory.. run synchronously */
-		func(data, newcookie);
-		return newcookie;
-	}
-	INIT_LIST_HEAD(&entry->domain_list);
-	INIT_LIST_HEAD(&entry->global_list);
 	INIT_WORK(&entry->work, async_run_entry_fn);
 	entry->func = func;
 	entry->data = data;
-	entry->domain = domain;
 
-	spin_lock_irqsave(&async_lock, flags);
+	__set_bit(ASYNC_WORK_BIT, &entry->base.fence.flags);
 
-	/* allocate cookie and queue */
-	newcookie = entry->cookie = next_cookie++;
+	return &entry->base;
+}
+EXPORT_SYMBOL_GPL(async_work_create);
 
-	list_add_tail(&entry->domain_list, &domain->pending);
-	if (domain->registered)
-		list_add_tail(&entry->global_list, &async_global_pending);
+async_cookie_t queue_async_work(struct async_domain *domain,
+				struct async_work *work,
+				gfp_t gfp)
+{
+	struct async_entry *entry = container_of(work, typeof(*entry), base);
+	unsigned long flags;
 
-	atomic_inc(&entry_count);
+	if (WARN_ON(test_and_set_bit(ASYNC_QUEUED_BIT,
+				     &entry->base.fence.flags)))
+		return 0;
+
+	spin_lock_irqsave(&async_lock, flags);
+	entry->cookie = assign_cookie();
+	list_add_tail(&entry->pending_link[0], &domain->pending);
+	INIT_LIST_HEAD(&entry->pending_link[1]);
+	if (domain->registered)
+		list_add_tail(&entry->pending_link[1], &async_global_pending);
+	async_pending_count++;
 	spin_unlock_irqrestore(&async_lock, flags);
 
 	/* mark that this task has queued an async job, used by module init */
 	current->flags |= PF_USED_ASYNC;
 
-	/* schedule for execution */
-	queue_work(system_unbound_wq, &entry->work);
+	kfence_signal(kfence_get(&entry->base.fence));
 
-	return newcookie;
+	return entry->cookie;
+}
+EXPORT_SYMBOL_GPL(queue_async_work);
+
+async_cookie_t schedule_async_work(struct async_work *work)
+{
+	return queue_async_work(&async_dfl_domain, work, GFP_KERNEL);
 }
 
 /**
@@ -780,7 +768,7 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
  */
 async_cookie_t async_schedule(async_func_t func, void *data)
 {
-	return __async_schedule(func, data, &async_dfl_domain);
+	return async_schedule_domain(func, data, &async_dfl_domain);
 }
 EXPORT_SYMBOL_GPL(async_schedule);
 
@@ -799,7 +787,27 @@ EXPORT_SYMBOL_GPL(async_schedule);
 async_cookie_t async_schedule_domain(async_func_t func, void *data,
 				     struct async_domain *domain)
 {
-	return __async_schedule(func, data, domain);
+	struct async_work *work;
+	async_cookie_t cookie = 0;
+
+	work = NULL;
+	if (READ_ONCE(async_pending_count) < MAX_WORK)
+		work = async_work_create(func, data, GFP_ATOMIC);
+	if (work) {
+		cookie = queue_async_work(domain, work, GFP_ATOMIC);
+		async_work_put(work);
+	}
+	if (!cookie) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&async_lock, flags);
+		cookie = assign_cookie();
+		spin_unlock_irqrestore(&async_lock, flags);
+
+		func(data, cookie);
+	}
+
+	return cookie;
 }
 EXPORT_SYMBOL_GPL(async_schedule_domain);
 
@@ -825,10 +833,8 @@ EXPORT_SYMBOL_GPL(async_synchronize_full);
  */
 void async_unregister_domain(struct async_domain *domain)
 {
-	spin_lock_irq(&async_lock);
-	WARN_ON(!domain->registered || !list_empty(&domain->pending));
+	WARN_ON(!list_empty(&domain->pending));
 	domain->registered = 0;
-	spin_unlock_irq(&async_lock);
 }
 EXPORT_SYMBOL_GPL(async_unregister_domain);
 
@@ -841,7 +847,7 @@ EXPORT_SYMBOL_GPL(async_unregister_domain);
  */
 void async_synchronize_full_domain(struct async_domain *domain)
 {
-	async_synchronize_cookie_domain(ASYNC_COOKIE_MAX, domain);
+	async_synchronize_cookie_domain(0, domain);
 }
 EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
 
@@ -856,19 +862,49 @@ EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
  */
 void async_synchronize_cookie_domain(async_cookie_t cookie, struct async_domain *domain)
 {
-	ktime_t uninitialized_var(starttime), delta, endtime;
+	ktime_t uninitialized_var(starttime);
+	struct list_head *pending;
+
+	pending = domain ? &domain->pending : &async_global_pending;
 
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
 		pr_debug("async_waiting @ %i\n", task_pid_nr(current));
 		starttime = ktime_get();
 	}
 
-	wait_event(async_done, lowest_in_progress(domain) >= cookie);
+	do {
+		struct kfence *fence = NULL;
+		unsigned long flags;
 
-	if (initcall_debug && system_state == SYSTEM_BOOTING) {
-		endtime = ktime_get();
-		delta = ktime_sub(endtime, starttime);
+		spin_lock_irqsave(&async_lock, flags);
+		if (!list_empty(pending)) {
+			struct async_entry *entry;
+
+			if (cookie) {
+				entry = list_first_entry(pending,
+							 struct async_entry,
+							 pending_link[!domain]);
+				if ((s64)(cookie - entry->cookie) > 0)
+					fence = kfence_get(&entry->base.fence);
+			} else {
+				entry = list_last_entry(pending,
+							struct async_entry,
+							pending_link[!domain]);
+				cookie = entry->cookie;
+				fence = kfence_get(&entry->base.fence);
+			}
+		}
+		spin_unlock_irqrestore(&async_lock, flags);
+
+		if (!fence)
+			break;
+
+		kfence_wait(fence);
+		kfence_put(fence);
+	} while (1);
 
+	if (initcall_debug && system_state == SYSTEM_BOOTING) {
+		ktime_t delta = ktime_sub(ktime_get(), starttime);
 		pr_debug("async_continuing @ %i after %lli usec\n",
 			task_pid_nr(current),
 			(long long)ktime_to_ns(delta) >> 10);
@@ -901,3 +937,11 @@ bool current_is_async(void)
 	return worker && worker->current_func == async_run_entry_fn;
 }
 EXPORT_SYMBOL_GPL(current_is_async);
+
+void init_async_domain(struct async_domain *domain, bool registered)
+{
+	memset(domain, 0, sizeof(*domain));
+	INIT_LIST_HEAD(&domain->pending);
+	domain->registered = registered;
+}
+EXPORT_SYMBOL_GPL(init_async_domain);
diff --git a/lib/test-async-domain.c b/lib/test-async-domain.c
index 558a71414fb6..24c55537c678 100644
--- a/lib/test-async-domain.c
+++ b/lib/test-async-domain.c
@@ -21,6 +21,269 @@ static void task_B(void *data, async_cookie_t cookie)
 	smp_store_mb(*result, 'B');
 }
 
+static int __init test_x(const char *name,
+			 struct async_domain *domain,
+			 async_func_t func,
+			 const long expected)
+{
+	struct async_work *A;
+	long result = 0;
+
+	A = async_work_create(func, &result, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	async_work_wait(A);
+	async_work_put(A);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			name, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_A(struct async_domain *domain)
+{
+	return test_x(__func__, domain, task_A, 'A');
+}
+
+static int __init test_B(struct async_domain *domain)
+{
+	return test_x(__func__, domain, task_B, 'B');
+}
+
+static int __init test_x_fence(const char *name,
+			       struct async_domain *domain,
+			       async_func_t func,
+			       const long expected)
+{
+	struct async_work *A;
+	struct kfence *fence;
+	long result = 0;
+
+	A = async_work_create(func, &result, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	fence = kfence_create(GFP_KERNEL);
+	if (!fence)
+		return -ENOMEM;
+
+	queue_async_work(domain, A, GFP_KERNEL);
+
+	kfence_add(fence, &A->fence, GFP_KERNEL);
+	kfence_signal(fence);
+
+	kfence_wait(fence);
+
+	async_work_put(A);
+	kfence_put(fence);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			name, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_A_fence(struct async_domain *domain)
+{
+	return test_x_fence(__func__, domain, task_A, 'A');
+}
+
+static int __init test_B_fence(struct async_domain *domain)
+{
+	return test_x_fence(__func__, domain, task_B, 'B');
+}
+
+static int __init test_x_fence_y(const char *name,
+				 struct async_domain *domain,
+				 async_func_t x,
+				 async_func_t y,
+				 const long expected)
+{
+	struct async_work *A, *B;
+	struct kfence *fence;
+	long result = 0;
+
+	A = async_work_create(x, &result, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	B = async_work_create(y, &result, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	fence = kfence_create(GFP_KERNEL);
+	if (!fence)
+		return -ENOMEM;
+
+	kfence_add(fence, &A->fence, GFP_KERNEL);
+	kfence_signal(fence);
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	async_work_put(A);
+
+	async_work_after(B, fence);
+	queue_async_work(domain, B, GFP_KERNEL);
+	kfence_put(fence);
+
+	async_work_wait(B);
+	async_work_put(B);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			name, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_A_fence_B(struct async_domain *domain)
+{
+	return test_x_fence_y(__func__, domain, task_A, task_B, 'B');
+}
+
+static int __init test_B_fence_A(struct async_domain *domain)
+{
+	return test_x_fence_y(__func__, domain, task_B, task_A, 'A');
+}
+
+struct long_context {
+	struct kfence *barrier;
+	long *src;
+	long result;
+};
+
+static void task_wait(void *data, async_cookie_t cookie)
+{
+	struct long_context *ctx = data;
+
+	kfence_wait(ctx->barrier);
+	smp_store_mb(ctx->result, READ_ONCE(*ctx->src));
+}
+
+static int __init test_pause(struct async_domain *domain)
+{
+	struct long_context ctx;
+	struct async_work *A, *B;
+	const long expected = 'B';
+	long out_B = 'A';
+
+	ctx.result = 0;
+	ctx.src = &out_B;
+
+	A = async_work_create(task_wait, &ctx, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	B = async_work_create(task_B, &out_B, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	ctx.barrier = kfence_get(&B->fence);
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	queue_async_work(domain, B, GFP_KERNEL);
+	async_work_put(B);
+
+	async_work_wait(A);
+	async_work_put(A);
+
+	kfence_put(ctx.barrier);
+
+	if (READ_ONCE(ctx.result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, ctx.result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void task_signal(void *data, async_cookie_t cookie)
+{
+	struct long_context *ctx = data;
+
+	kfence_signal(ctx->barrier);
+}
+
+static int __init test_manual(struct async_domain *domain)
+{
+	struct long_context ctx;
+	struct async_work *A, *B, *C;
+	const long expected = 'B';
+	long out_B = 'A';
+
+	ctx.result = 0;
+	ctx.src = &out_B;
+	ctx.barrier = kfence_create(GFP_KERNEL);
+
+	A = async_work_create(task_wait, &ctx, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	B = async_work_create(task_B, &out_B, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	C = async_work_create(task_signal, &ctx, GFP_KERNEL);
+	if (!C)
+		return -ENOMEM;
+
+	async_work_after(C, &B->fence);
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	queue_async_work(domain, B, GFP_KERNEL);
+	queue_async_work(domain, C, GFP_KERNEL);
+
+	async_work_wait(A);
+
+	async_work_put(C);
+	async_work_put(B);
+	async_work_put(A);
+	kfence_put(ctx.barrier);
+
+	if (READ_ONCE(ctx.result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, ctx.result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_sync(struct async_domain *domain)
+{
+	struct async_work *B;
+	const long expected = 'B';
+	long result = 0;
+
+	B = async_work_create(task_B, &result, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	queue_async_work(domain, B, GFP_KERNEL);
+	async_work_put(B);
+
+	async_synchronize_full_domain(domain);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int __init test_implicit(struct async_domain *domain)
 {
 	const long expected = 'B';
@@ -99,24 +362,62 @@ static int __init test_async_domain_init(void)
 
 	pr_info("Testing async-domains\n");
 
-	ret = test_implicit(&domain);
+	ret = test_A(&domain);
 	if (ret)
 		return ret;
 
+	ret = test_A_fence(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_A_fence_B(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_B(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_B_fence(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_B_fence_A(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_pause(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_manual(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_sync(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_implicit(&domain);
+	if (ret)
+		goto err;
+
 	ret = test_registered(&domain);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = perf_nop(1, 100);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = perf_nop(128, 1000);
 	if (ret)
-		return ret;
+		goto err;
 
+err:
+	async_synchronize_full_domain(&domain);
 	async_unregister_domain(&domain);
-	return 0;
+	return ret;
 }
 
 static void __exit test_async_domain_cleanup(void)
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 8/9] async: Add execution barriers
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (6 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 7/9] async: Add support for explicit fine-grained barriers Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-06-24  9:08 ` [PATCH 9/9] async: Introduce a dependency resolver for parallel execution Chris Wilson
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
  9 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

A frequent mode of operation is fanning out N tasks to execute in
parallel, collating results, fanning out M tasks, rinse and repeat. This
pattern also underlies the async/sync kernel domain split.
A barrier provides a mechanism by which all work queued after the
barrier must wait (i.e. not be scheduled) until all work queued before the
barrier is completed.
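
For example (a sketch; prepare(), collate(), results[] and N are
placeholders for illustration):

	int i;

	/* fan out N independent tasks */
	for (i = 0; i < N; i++)
		async_schedule(prepare, &results[i]);

	/* everything queued after the barrier waits for the N tasks */
	async_barrier();

	async_schedule(collate, results);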

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/async.h |  4 ++++
 kernel/async.c        | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/include/linux/async.h b/include/linux/async.h
index 45d6c8323b60..64a090e3f24f 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -26,6 +26,7 @@ struct async_work {
 
 struct async_domain {
 	struct list_head pending;
+	struct kfence *barrier;
 	unsigned registered:1;
 };
 
@@ -59,6 +60,9 @@ extern void async_synchronize_cookie(async_cookie_t cookie);
 extern void async_synchronize_cookie_domain(async_cookie_t cookie,
 					    struct async_domain *domain);
 
+extern void async_barrier(void);
+extern void async_barrier_domain(struct async_domain *domain);
+
 extern bool current_is_async(void);
 
 extern struct async_work *
diff --git a/kernel/async.c b/kernel/async.c
index 83007c39a113..a22945f4b4c4 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -724,6 +724,12 @@ struct async_work *async_work_create(async_func_t func, void *data, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(async_work_create);
 
+static void async_barrier_delete(struct async_domain *domain)
+{
+	kfence_put(domain->barrier);
+	domain->barrier = NULL;
+}
+
 async_cookie_t queue_async_work(struct async_domain *domain,
 				struct async_work *work,
 				gfp_t gfp)
@@ -744,6 +750,9 @@ async_cookie_t queue_async_work(struct async_domain *domain,
 	async_pending_count++;
 	spin_unlock_irqrestore(&async_lock, flags);
 
+	if (!kfence_add(&entry->base.fence, domain->barrier, gfp))
+		async_barrier_delete(domain);
+
 	/* mark that this task has queued an async job, used by module init */
 	current->flags |= PF_USED_ASYNC;
 
@@ -811,6 +820,55 @@ async_cookie_t async_schedule_domain(async_func_t func, void *data,
 }
 EXPORT_SYMBOL_GPL(async_schedule_domain);
 
+static struct kfence * __async_barrier_create(struct async_domain *domain)
+{
+	struct kfence *fence;
+	struct async_entry *entry;
+	unsigned long flags;
+	int ret;
+
+	fence = kfence_create(GFP_KERNEL);
+	if (!fence)
+		goto out_sync;
+
+	ret = 0;
+	spin_lock_irqsave(&async_lock, flags);
+	list_for_each_entry(entry, &domain->pending, pending_link[0]) {
+		ret |= kfence_add(fence, &entry->base.fence, GFP_ATOMIC);
+		if (ret < 0)
+			break;
+	}
+	spin_unlock_irqrestore(&async_lock, flags);
+	if (ret <= 0)
+		goto out_put;
+
+	kfence_add(fence, domain->barrier, GFP_KERNEL);
+	kfence_signal(fence);
+	return fence;
+
+out_put:
+	kfence_signal(fence);
+	kfence_put(fence);
+out_sync:
+	async_synchronize_full_domain(domain);
+	return NULL;
+}
+
+void async_barrier(void)
+{
+	async_barrier_domain(&async_dfl_domain);
+}
+EXPORT_SYMBOL_GPL(async_barrier);
+
+void async_barrier_domain(struct async_domain *domain)
+{
+	struct kfence *barrier = __async_barrier_create(domain);
+
+	kfence_put(domain->barrier);
+	domain->barrier = barrier;
+}
+EXPORT_SYMBOL_GPL(async_barrier_domain);
+
 /**
  * async_synchronize_full - synchronize all asynchronous function calls
  *
@@ -834,6 +892,8 @@ EXPORT_SYMBOL_GPL(async_synchronize_full);
 void async_unregister_domain(struct async_domain *domain)
 {
 	WARN_ON(!list_empty(&domain->pending));
+
+	async_barrier_delete(domain);
 	domain->registered = 0;
 }
 EXPORT_SYMBOL_GPL(async_unregister_domain);
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 9/9] async: Introduce a dependency resolver for parallel execution
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (7 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 8/9] async: Add execution barriers Chris Wilson
@ 2016-06-24  9:08 ` Chris Wilson
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
  9 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-06-24  9:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, dri-devel, linaro-mm-sig

A challenge in driver initialisation is the coordination of many small,
sometimes independent, sometimes interdependent tasks. We would like to
schedule the independent tasks for execution in parallel across as many
cores as possible for rapid initialisation, and then schedule all the
dependent tasks once they have completed, again running as many of those
in parallel as is possible.

Resolving the interdependencies by hand is time consuming and error
prone. Instead, we want to declare what dependencies a particular task
has, and what that task provides, and let a runtime dependency solver
work out what tasks to run and when, and which in parallel. To this end,
we introduce the struct async_dependency_graph building upon the kfence
and async_work from the previous patches to allow for the runtime
computation of the topological task ordering.

The graph is constructed with async_dependency_graph_build(), which
takes the task, its dependencies and what it provides, and builds the
graph of kfences required for ordering. Additional kfences can be
inserted through async_dependency_depends() and
async_dependency_provides() for manual control of the execution order,
and async_dependency_get() retrieves a kfence for inspection or waiting
upon.
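
As a sketch of the intended usage (hedged: task_A/task_B/task_C and
data are placeholders):

	ASYNC_DEPENDENCY_GRAPH(adg);

	/* A and B have no dependencies and may run in parallel;
	 * C runs only after both the "A" and "B" stages complete.
	 */
	async_dependency_graph_build(&adg, task_A, data, NULL, "A");
	async_dependency_graph_build(&adg, task_B, data, NULL, "B");
	async_dependency_graph_build(&adg, task_C, data, "A,B", NULL);

	async_dependency_graph_execute(&adg);
	async_synchronize_full();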

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/async.h                              |  37 +++
 kernel/async.c                                     | 250 ++++++++++++++++
 lib/Kconfig.debug                                  |  12 +
 lib/Makefile                                       |   1 +
 lib/test-async-dependency-graph.c                  | 317 +++++++++++++++++++++
 .../selftests/lib/async-dependency-graph.sh        |  10 +
 6 files changed, 627 insertions(+)
 create mode 100644 lib/test-async-dependency-graph.c
 create mode 100755 tools/testing/selftests/lib/async-dependency-graph.sh

diff --git a/include/linux/async.h b/include/linux/async.h
index 64a090e3f24f..c9cadb383813 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -15,6 +15,7 @@
 #include <linux/types.h>
 #include <linux/kfence.h>
 #include <linux/list.h>
+#include <linux/rbtree.h>
 
 typedef u64 async_cookie_t;
 typedef void (*async_func_t) (void *data, async_cookie_t cookie);
@@ -101,4 +102,40 @@ extern async_cookie_t queue_async_work(struct async_domain *domain,
 				       gfp_t gfp);
 extern async_cookie_t schedule_async_work(struct async_work *work);
 
+/* Build a graph of work based on dependencies generated by keywords.
+ * The graph must be acyclic. Can be used to both generate a topological
+ * ordering of tasks, and to execute independent chains of tasks in
+ * parallel.
+ */
+
+struct async_dependency_graph {
+	struct rb_root root;
+	struct list_head list;
+};
+
+#define ASYNC_DEPENDENCY_GRAPH_INIT(_name) {				\
+	.root = RB_ROOT,						\
+	.list = LIST_HEAD_INIT(_name.list),				\
+}
+#define ASYNC_DEPENDENCY_GRAPH(_name) \
+	struct async_dependency_graph _name = ASYNC_DEPENDENCY_GRAPH_INIT(_name)
+
+extern int async_dependency_graph_build(struct async_dependency_graph *adg,
+					async_func_t fn, void *data,
+					const char *depends,
+					const char *provides);
+
+extern int async_dependency_depends(struct async_dependency_graph *adg,
+				    struct kfence *fence,
+				    const char *depends);
+
+extern int async_dependency_provides(struct async_dependency_graph *adg,
+				     struct kfence *fence,
+				     const char *provides);
+
+extern struct kfence *async_dependency_get(struct async_dependency_graph *adg,
+					   const char *name);
+
+extern void async_dependency_graph_execute(struct async_dependency_graph *adg);
+
 #endif
diff --git a/kernel/async.c b/kernel/async.c
index a22945f4b4c4..8330d719074b 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -1005,3 +1005,253 @@ void init_async_domain(struct async_domain *domain, bool registered)
 	domain->registered = registered;
 }
 EXPORT_SYMBOL_GPL(init_async_domain);
+
+struct async_dependency {
+	struct kfence fence;
+	struct rb_node node;
+	struct list_head link;
+	char name[0];
+};
+
+static struct async_dependency *
+__lookup_dependency(struct async_dependency_graph *adg, const char *name)
+{
+	struct rb_node **p, *parent;
+	struct async_dependency *d;
+	int len;
+
+	parent = NULL;
+	p = &adg->root.rb_node;
+	while (*p) {
+		int cmp;
+
+		parent = *p;
+		d = container_of(parent, typeof(*d), node);
+
+		cmp = strcmp(name, d->name);
+		if (cmp < 0)
+			p = &parent->rb_left;
+		else if (cmp > 0)
+			p = &parent->rb_right;
+		else
+			return d;
+	}
+
+	len = strlen(name) + 1;
+	d = kmalloc(sizeof(*d) + len, GFP_KERNEL);
+	if (!d)
+		return ERR_PTR(-ENOMEM);
+
+	__kfence_init(&d->fence);
+	memcpy(d->name, name, len);
+
+	rb_link_node(&d->node, parent, p);
+	rb_insert_color(&d->node, &adg->root);
+	list_add_tail(&d->link, &adg->list);
+
+	return d;
+}
+
+/**
+ * async_dependency_depends - declare a prerequisite fence for a named stage
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @fence: the kfence to add that depends upon the named stage completing
+ * @depends: the named stage
+ *
+ * This function appends @fence into the async_dependency_graph @adg after
+ * the @depends stage is completed. That is, the @fence is signaled once
+ * the chain of dependencies up to and including @depends is complete.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ * In particular, note that if CONFIG_KFENCE_CHECK_DAG is enabled, the
+ * dependency graph will be checked for cycles, and -EINVAL reported
+ * in such cases. A dependency cycle leads to unexecutable code.
+ */
+int async_dependency_depends(struct async_dependency_graph *adg,
+			     struct kfence *fence,
+			     const char *depends)
+{
+	struct async_dependency *d;
+
+	d = __lookup_dependency(adg, depends);
+	if (IS_ERR(d))
+		return PTR_ERR(d);
+
+	return kfence_add(fence, &d->fence, GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(async_dependency_depends);
+
+/**
+ * async_dependency_provides - declare a named stage that should follow
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @fence: the kfence to add that provides the named stage with a signal
+ * @provides: the named stage
+ *
+ * This function inserts @fence into the async_dependency_graph @adg before
+ * the @provides stage is signaled. That is, the @fence signals the
+ * @provides stage once completed (and once all providers have completed,
+ * work from the @provides stage commences).
+ *
+ * Returns: 0 on success, negative error code on failure.
+ * In particular, note that if CONFIG_KFENCE_CHECK_DAG is enabled, the
+ * dependency graph will be checked for cycles, and -EINVAL reported
+ * in such cases. A dependency cycle leads to unexecutable code.
+ */
+int async_dependency_provides(struct async_dependency_graph *adg,
+			      struct kfence *fence,
+			      const char *provides)
+{
+	struct async_dependency *d;
+
+	d = __lookup_dependency(adg, provides);
+	if (IS_ERR(d))
+		return PTR_ERR(d);
+
+	return kfence_add(&d->fence, fence, GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(async_dependency_provides);
+
+/**
+ * async_dependency_get - lookup the kfence for a named stage
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @name: the named stage
+ *
+ * This function looks up the kfence associated with the named stage. This
+ * fence will be signaled once the named stage is ready. For example,
+ * waiting on that fence will wait until all prior dependencies of that
+ * named stage have been completed.
+ *
+ * Returns: a new reference on the kfence. The caller must release the
+ * reference with kfence_put() when finished.
+ */
+struct kfence *async_dependency_get(struct async_dependency_graph *adg,
+				    const char *name)
+{
+	struct async_dependency *d;
+
+	d = __lookup_dependency(adg, name);
+	if (IS_ERR(d))
+		return ERR_CAST(d);
+
+	return kfence_get(&d->fence);
+}
+EXPORT_SYMBOL_GPL(async_dependency_get);
+
+static int __adg_for_each_token(struct async_dependency_graph *adg,
+				struct kfence *fence,
+				const char *string,
+				int (*fn)(struct async_dependency_graph *,
+					  struct kfence *,
+					  const char *))
+{
+	char *tmp, *s, *t;
+	int ret = 0;
+
+	if (!string)
+		return 0;
+
+	tmp = kstrdup(string, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	for (s = tmp; (t = strsep(&s, ",")); ) {
+		if (*t == '\0')
+			continue;
+
+		ret |= fn(adg, fence, t);
+		if (ret < 0)
+			break;
+	}
+
+	kfree(tmp);
+	return ret;
+}
+
+/**
+ * async_dependency_graph_build - insert a task into the dependency graph
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @fn: the async_func_t to execute
+ * @data: the data to pass to the @fn
+ * @depends: a comma-separated list of named stages that must complete
+ *           before the task can execute
+ * @provides: a comma-separated list of named stages that will be signaled
+ *            when this task completes
+ *
+ * This function inserts the task @fn into the async_dependency_graph @adg
+ * after all the named stages in @depends have completed. Upon completion
+ * of the task, all the named stages in @provides are signaled (and once all
+ * their dependent tasks have also finished, the tasks afterwards will
+ * execute).
+ *
+ * If a task has no dependency (@depends is NULL or an empty string), it will
+ * be scheduled for execution as soon as it is inserted into the graph @adg.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ * In particular, note that if CONFIG_KFENCE_CHECK_DAG is enabled, the
+ * dependency graph will be checked for cycles, and -EINVAL reported
+ * in such cases. A dependency cycle leads to unexecutable code.
+ */
+int
+async_dependency_graph_build(struct async_dependency_graph *adg,
+			     async_func_t fn, void *data,
+			     const char *depends, const char *provides)
+{
+	struct async_work *work;
+	int ret;
+
+	work = async_work_create(fn, data, GFP_KERNEL);
+	if (!work)
+		return -ENOMEM;
+
+	ret = __adg_for_each_token(adg, &work->fence, depends,
+				   async_dependency_depends);
+	if (ret < 0)
+		goto err;
+
+	ret = __adg_for_each_token(adg, &work->fence, provides,
+				   async_dependency_provides);
+	if (ret < 0)
+		goto err;
+
+	if (!schedule_async_work(work)) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	ret = 0;
+out:
+	async_work_put(work);
+	return ret;
+
+err:
+	clear_bit(ASYNC_WORK_BIT, &work->fence.flags);
+	kfence_signal(&work->fence);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(async_dependency_graph_build);
+
+/**
+ * async_dependency_graph_execute - execute the dependency graph
+ * @adg: the async_dependency_graph
+ *
+ * This function marks the @adg as ready for execution. As soon as the
+ * dependencies of a task have been completed (in their entirety), that
+ * task is executed. Once completed, it signals the tasks that have listed
+ * its @provides as one of their @depends, and once ready (all @provides are
+ * complete) those tasks are scheduled for execution.
+ *
+ * Tasks are executed in the topological order of their dependencies. If two,
+ * or more, tasks are not dependent on each other they may be run concurrently.
+ *
+ * The graph @adg is freed upon execution.
+ */
+void async_dependency_graph_execute(struct async_dependency_graph *adg)
+{
+	struct async_dependency *d, *next;
+
+	list_for_each_entry_safe(d, next, &adg->list, link) {
+		kfence_signal(&d->fence);
+		kfence_put(&d->fence);
+	}
+}
+EXPORT_SYMBOL_GPL(async_dependency_graph_execute);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 47319f501954..4943b8dbcdf1 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1798,6 +1798,18 @@ config ASYNC_DOMAIN_SELFTEST
 
 	  Say N if you are unsure.
 
+config ASYNC_DEPENDENCY_GRAPH_SELFTEST
+	tristate "Asynchronous dependency graph self tests"
+	depends on DEBUG_KERNEL
+	default n
+	help
	  This option provides a kernel module that can be used to test
+	  the asynchronous dependency graph. This option is not useful for
+	  distributions or general kernels, but only for kernel developers
+	  working on the async_dependency_graph facility.
+
+	  Say N if you are unsure.
+
 config BACKTRACE_SELF_TEST
 	tristate "Self test for the backtrace code"
 	depends on DEBUG_KERNEL
diff --git a/lib/Makefile b/lib/Makefile
index 82e8b5f77c44..fa7da38d4383 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -30,6 +30,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
 obj-$(CONFIG_KFENCE_SELFTEST) += test-kfence.o
 obj-$(CONFIG_ASYNC_DOMAIN_SELFTEST) += test-async-domain.o
+obj-$(CONFIG_ASYNC_DEPENDENCY_GRAPH_SELFTEST) += test-async-dependency-graph.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
 lib-$(CONFIG_HAS_DMA) += dma-noop.o
diff --git a/lib/test-async-dependency-graph.c b/lib/test-async-dependency-graph.c
new file mode 100644
index 000000000000..ebee26d7b99e
--- /dev/null
+++ b/lib/test-async-dependency-graph.c
@@ -0,0 +1,317 @@
+/*
+ * Test cases for async-dependency-graph facility.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/async.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+struct chain {
+	atomic_t idx;
+	unsigned long values[0];
+};
+
+struct task_write {
+	struct chain *chain;
+	unsigned long value;
+};
+
+static void __init task_write(void *arg, async_cookie_t cookie)
+{
+	struct task_write *t = arg;
+	int idx = atomic_inc_return(&t->chain->idx) - 1;
+	WRITE_ONCE(t->chain->values[idx], t->value);
+}
+
+static void __init task_nop(void *data, async_cookie_t cookie)
+{
+}
+
+static int __init test_ordering(int nchain, int nwide)
+{
+	ASYNC_DEPENDENCY_GRAPH(adg);
+	struct chain **chains;
+	struct task_write *tests, *t;
+	int c, w, ret;
+
+	/* Test implementation of simple chains within the dependency graphs */
+	pr_debug("%s(nchain=%d, nwide=%d)\n", __func__, nchain, nwide);
+
+	chains = kmalloc(sizeof(struct chain *)*nwide, GFP_KERNEL);
+	tests = kmalloc(sizeof(struct task_write)*nwide*nchain, GFP_KERNEL);
+
+	if (!chains || !tests)
+		return -ENOMEM;
+
+	t = tests;
+	for (w = 0; w < nwide; w++) {
+		char *depends = NULL, *provides;
+
+		chains[w] = kzalloc(sizeof(struct chain) +
+				    nchain*sizeof(unsigned long),
+				    GFP_KERNEL);
+
+		for (c = 0; c < nchain; c++) {
+			t->chain = chains[w];
+			t->value = c;
+			provides = kasprintf(GFP_KERNEL, "%d.%d", c, w);
+			async_dependency_graph_build(&adg, task_write, t,
+						     depends, provides);
+			kfree(depends);
+			depends = provides;
+			t++;
+		}
+
+		kfree(depends);
+	}
+	async_dependency_graph_execute(&adg);
+	async_synchronize_full();
+
+	ret = 0;
+	kfree(tests);
+	for (w = 0; w < nwide; w++) {
+		for (c = 0; c < nchain; c++) {
+			if (chains[w]->values[c] != c) {
+				pr_err("%s(%d, %d): Invalid execution order (chain %d, position %d): found %d\n",
+				       __func__, nchain, nwide,
+				       w, c, (int)chains[w]->values[c]);
+
+				ret = -EINVAL;
+			}
+		}
+		kfree(chains[w]);
+	}
+	kfree(chains);
+
+	return ret;
+}
+
+static int __init test_barrier(int nwide)
+{
+	ASYNC_DEPENDENCY_GRAPH(adg);
+	struct chain **chains;
+	struct task_write *tests, *t;
+	int c, w, ret;
+
+	/* Test implementation of barriers within the dependency graphs */
+	pr_debug("%s(nwide=%d)\n", __func__, nwide);
+
+	chains = kmalloc(sizeof(struct chain *)*nwide, GFP_KERNEL);
+	tests = kmalloc(sizeof(struct task_write)*2*nwide, GFP_KERNEL);
+	if (!chains || !tests)
+		return -ENOMEM;
+
+	t = tests;
+
+	/* A,B act as a barrier running between the nops */
+	for (w = 0; w < nwide; w++) {
+		char *provides, *depends;
+
+		chains[w] = kzalloc(sizeof(struct chain) +
+				    2*sizeof(unsigned long),
+				    GFP_KERNEL);
+
+		depends = NULL;
+
+		provides = kasprintf(GFP_KERNEL, "nop1.%d", w);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+
+		kfree(depends);
+		depends = provides;
+
+		provides = kasprintf(GFP_KERNEL, "A.%d", w);
+		t->chain = chains[w];
+		t->value = 0;
+		async_dependency_graph_build(&adg, task_write, t,
+					     depends, provides);
+		t++;
+
+		kfree(depends);
+		depends = provides;
+
+		provides = kasprintf(GFP_KERNEL, "nop2.%d", w);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+		kfree(provides);
+
+		provides = kasprintf(GFP_KERNEL, "nop3.%d", w);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+		kfree(provides);
+
+		kfree(depends);
+		depends = kasprintf(GFP_KERNEL, "nop2.%d,nop3.%d", w, w);
+		t->chain = chains[w];
+		t->value = 1;
+		async_dependency_graph_build(&adg, task_write, t,
+					     depends, NULL);
+		kfree(depends);
+		t++;
+	}
+	async_dependency_graph_execute(&adg);
+	async_synchronize_full();
+
+	ret = 0;
+	kfree(tests);
+	for (w = 0; w < nwide; w++) {
+		for (c = 0; c < 2; c++) {
+			if (chains[w]->values[c] != c) {
+				pr_err("%s(%d): Invalid execution order (chain %d, position %d): found %d\n",
+				       __func__, nwide,
+				       w, c, (int)chains[w]->values[c]);
+
+				ret = -EINVAL;
+			}
+		}
+		kfree(chains[w]);
+	}
+	kfree(chains);
+
+	return ret;
+}
+
+static int __init test_dag(void)
+{
+	ASYNC_DEPENDENCY_GRAPH(adg);
+
+	/* Test detection of cycles within the dependency graphs */
+	pr_debug("%s\n", __func__);
+
+	if (!IS_ENABLED(CONFIG_KFENCE_CHECK_DAG))
+		return 0;
+
+	async_dependency_graph_build(&adg, task_nop, NULL, "__start__", "A");
+	if (async_dependency_graph_build(&adg, task_nop, NULL, "A", "A") != -EINVAL) {
+		pr_err("Failed to detect AA cycle\n");
+		return -EINVAL;
+	}
+
+	async_dependency_graph_build(&adg, task_nop, NULL, "A", "B");
+	if (async_dependency_graph_build(&adg, task_nop, NULL, "B", "A") != -EINVAL) {
+		pr_err("Failed to detect ABA cycle\n");
+		return -EINVAL;
+	}
+
+	async_dependency_graph_build(&adg, task_nop, NULL, "B", "C");
+	if (async_dependency_graph_build(&adg, task_nop, NULL, "C", "A") != -EINVAL) {
+		pr_err("Failed to detect ABCA cycle\n");
+		return -EINVAL;
+	}
+
+	async_dependency_graph_execute(&adg);
+	async_synchronize_full();
+
+	return 0;
+}
+
+static int __init perf_nop(int chain, int width, long timeout_us)
+{
+	ktime_t start;
+	long count, delay;
+
+	count = 0;
+	start = ktime_get();
+	do {
+		ASYNC_DEPENDENCY_GRAPH(adg);
+		ktime_t delta;
+		int c, w;
+
+		for (w = 0; w < width; w++) {
+			char *depends = NULL, *provides;
+
+			for (c = 0; c < chain; c++) {
+				provides = kasprintf(GFP_KERNEL, "%d.%d", c, w);
+				async_dependency_graph_build(&adg,
+							     task_nop, NULL,
+							     depends, provides);
+				kfree(depends);
+				depends = provides;
+			}
+
+			kfree(depends);
+		}
+		async_dependency_graph_execute(&adg);
+		async_synchronize_full();
+		delta = ktime_sub(ktime_get(), start);
+		delay = ktime_to_ns(delta) >> 10;
+		count += width * chain;
+	} while (delay < timeout_us);
+
+	pr_info("%ld nop tasks (in chains of %d, %d chains in parallel) completed in %ldus\n",
+		count, chain, width, delay);
+	return 0;
+}
+
+static int __init test_async_dependency_graph_init(void)
+{
+	int ret;
+
+	pr_info("Testing async-dependency-graph\n");
+
+	ret = test_ordering(1, 1);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(2, 1);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(1, 2);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(2, 2);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(26, 26);
+	if (ret)
+		return ret;
+
+	ret = test_dag();
+	if (ret)
+		return ret;
+
+	ret = test_barrier(1);
+	if (ret)
+		return ret;
+
+	ret = test_barrier(16);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(1, 1, 100);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(256, 1, 2000);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(128, 2, 2000);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(16, 16, 2000);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void __exit test_async_dependency_graph_cleanup(void)
+{
+}
+
+module_init(test_async_dependency_graph_init);
+module_exit(test_async_dependency_graph_cleanup);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/lib/async-dependency-graph.sh b/tools/testing/selftests/lib/async-dependency-graph.sh
new file mode 100755
index 000000000000..ea4bbc76f60f
--- /dev/null
+++ b/tools/testing/selftests/lib/async-dependency-graph.sh
@@ -0,0 +1,10 @@
+#!/bin/sh
+# Runs infrastructure tests using test-async-dependency-graph kernel module
+
+if /sbin/modprobe -q test-async-dependency-graph; then
+	/sbin/modprobe -q -r test-async-dependency-graph
+	echo "async-dependency-graph: ok"
+else
+	echo "async-dependency-graph: [FAIL]"
+	exit 1
+fi
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism
  2016-06-24  9:08 ` [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism Chris Wilson
@ 2016-07-13  9:38   ` Peter Zijlstra
  2016-07-13 10:20     ` Chris Wilson
  2016-07-13 10:26   ` Peter Zijlstra
  1 sibling, 1 reply; 24+ messages in thread
From: Peter Zijlstra @ 2016-07-13  9:38 UTC (permalink / raw)
  To: Chris Wilson
  Cc: linux-kernel, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Rasmus Villemoes,
	Andy Shevchenko, Dmitry Vyukov, Alexander Potapenko, linux-media,
	dri-devel, linaro-mm-sig

On Fri, Jun 24, 2016 at 10:08:46AM +0100, Chris Wilson wrote:
> diff --git a/kernel/async.c b/kernel/async.c
> index d2edd6efec56..d0bcb7cc4884 100644
> --- a/kernel/async.c
> +++ b/kernel/async.c
> @@ -50,6 +50,7 @@ asynchronous and synchronous parts of the kernel.
>  
>  #include <linux/async.h>
>  #include <linux/atomic.h>
> +#include <linux/kfence.h>
>  #include <linux/ktime.h>
>  #include <linux/export.h>
>  #include <linux/wait.h>

So why does this live in async.c? It got its own header, why not also
give it its own .c file?

Also, I'm not a particular fan of the k* naming, but I see 'fence' is
already taken.

> +/**
> + * DOC: kfence overview
> + *
> + * kfences provide synchronisation barriers between multiple processes.
> + * They are very similar to completions, or a pthread_barrier. Where
> + * kfence differs from completions is their ability to track multiple
> + * event sources rather than being a singular "completion event". Similar
> + * to completions, multiple processes or other kfences can listen or wait
> + * upon a kfence to signal its completion.
> + *
> + * The kfence is a one-shot flag, signaling that work has progressed passed
> + * a certain point (as measured by completion of all events the kfence is
> + * listening for) and the waiters upon kfence may proceed.
> + *
> + * kfences provide both signaling and waiting routines:
> + *
> + *	kfence_pending()
> + *
> + * indicates that the kfence should itself wait for another signal. A
> + * kfence created by kfence_create() starts in the pending state.

I would much prefer:

 *  - kfence_pending(): indicates that the kfence should itself wait for
 *    another signal. A kfence created by kfence_create() starts in the
 *    pending state.

Which is much clearer in what text belongs where.

Also, what !? I don't get what this function does.

> + *
> + *	kfence_signal()
> + *
> + * undoes the earlier pending notification and allows the fence to complete
> + * if all pending events have then been signaled.

So I know _signal() is the posix thing, but seeing how we already have
completions, how about being consistent with those and using _complete()
for this?

> + *
> + *	kfence_wait()
> + *
> + * allows the caller to sleep (uninterruptibly) until the fence is complete.

whitespace to separate the description of kfence_wait() from whatever
else follows.

> + * Meanwhile,
> + *
> + * 	kfence_complete()
> + *
> + * reports whether or not the kfence has been passed.

kfence_done(), again to match completions.

> + *
> + * This interface is very similar to completions, with the exception of
> + * allowing multiple pending / signals to be sent before the kfence is
> + * complete.
> + *
> + *	kfence_add() / kfence_add_completion()
> + *
> + * sets the kfence to wait upon another fence, or completion respectively.
> + *
> + * Unlike completions, kfences are expected to live inside more complex graphs
> + * and form the basis for parallel execution of interdependent and so are
> + * reference counted. A kfence may be created using,
> + *
> + * 	kfence_create()
> + *
> + * The fence starts off pending a single signal. Once you have finished
> + * setting up the fence, use kfence_signal() to allow it to wait upon
> + * its event sources.
> + *
> + * Use
> + *
> + *	kfence_get() / kfence_put
> + *
> + * to acquire or release a reference on kfence respectively.
> + */

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism
  2016-07-13  9:38   ` Peter Zijlstra
@ 2016-07-13 10:20     ` Chris Wilson
  2016-07-13 11:02       ` Daniel Vetter
  0 siblings, 1 reply; 24+ messages in thread
From: Chris Wilson @ 2016-07-13 10:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Rasmus Villemoes,
	Andy Shevchenko, Dmitry Vyukov, Alexander Potapenko, linux-media,
	dri-devel, linaro-mm-sig

On Wed, Jul 13, 2016 at 11:38:52AM +0200, Peter Zijlstra wrote:
> On Fri, Jun 24, 2016 at 10:08:46AM +0100, Chris Wilson wrote:
> > diff --git a/kernel/async.c b/kernel/async.c
> > index d2edd6efec56..d0bcb7cc4884 100644
> > --- a/kernel/async.c
> > +++ b/kernel/async.c
> > @@ -50,6 +50,7 @@ asynchronous and synchronous parts of the kernel.
> >  
> >  #include <linux/async.h>
> >  #include <linux/atomic.h>
> > +#include <linux/kfence.h>
> >  #include <linux/ktime.h>
> >  #include <linux/export.h>
> >  #include <linux/wait.h>
> 
> So why does this live in async.c? It got its own header, why not also
> give it its own .c file?

I started in kernel/async (since my first goal was fine-grained
async_work serialisation). It is still in kernel/async.c as the fence
embedded inside the async_work needs a return code to integrate. I
should have split it out before posting...

> Also, I'm not a particular fan of the k* naming, but I see 'fence' is
> already taken.

Agreed, I really want to rename the dma-buf fence to struct dma_fence -
we would need to do that whilst dma-buf fencing is still in its infancy.
 
> > +/**
> > + * DOC: kfence overview
> > + *
> > + * kfences provide synchronisation barriers between multiple processes.
> > + * They are very similar to completions, or a pthread_barrier. Where
> > + * kfence differs from completions is their ability to track multiple
> > + * event sources rather than being a singular "completion event". Similar
> > + * to completions, multiple processes or other kfences can listen or wait
> > + * upon a kfence to signal its completion.
> > + *
> > + * The kfence is a one-shot flag, signaling that work has progressed passed
> > + * a certain point (as measured by completion of all events the kfence is
> > + * listening for) and the waiters upon kfence may proceed.
> > + *
> > + * kfences provide both signaling and waiting routines:
> > + *
> > + *	kfence_pending()
> > + *
> > + * indicates that the kfence should itself wait for another signal. A
> > + * kfence created by kfence_create() starts in the pending state.
> 
> I would much prefer:
> 
>  *  - kfence_pending(): indicates that the kfence should itself wait for
>  *    another signal. A kfence created by kfence_create() starts in the
>  *    pending state.
> 
> Which is much clearer in what text belongs where.

Ok, I was just copying the style from
Documentation/scheduler/completion.txt

> Also, what !? I don't get what this function does.

Hmm. Something more like:

"To check the state of a kfence without changing it in any way, call
kfence_pending(), which returns true if the kfence is still waiting for
its event sources to be signaled."

s/signaled/completed/ depending on kfence_signal() vs kfence_complete()

> > + *
> > + *	kfence_signal()
> > + *
> > + * undoes the earlier pending notification and allows the fence to complete
> > + * if all pending events have then been signaled.
> 
> So I know _signal() is the posix thing, but seeing how we already
> completions, how about being consistent with those and use _complete()
> for this?

Possibly, but we also have the dma-buf fences to try and be fairly
consistent with. struct completion is definitely a closer sibling
though. The biggest conceptual change from completions is that a
kfence will be signaled multiple times before it is complete - I think
that is a strong argument in favour of using _signal().

> > + *
> > + *	kfence_wait()
> > + *
> > + * allows the caller to sleep (uninterruptibly) until the fence is complete.
> 
> whitespace to separate the description of kfence_wait() from whatever
> else follows.
> 
> > + * Meanwhile,
> > + *
> > + * 	kfence_complete()
> > + *
> > + * reports whether or not the kfence has been passed.
> 
> kfence_done(), again to match completions.

Ok, will do a spin with completion naming convention and see how that
fits in (and complete the extraction to a separate .c)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism
  2016-06-24  9:08 ` [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism Chris Wilson
  2016-07-13  9:38   ` Peter Zijlstra
@ 2016-07-13 10:26   ` Peter Zijlstra
  1 sibling, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2016-07-13 10:26 UTC (permalink / raw)
  To: Chris Wilson
  Cc: linux-kernel, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Rasmus Villemoes,
	Andy Shevchenko, Dmitry Vyukov, Alexander Potapenko, linux-media,
	dri-devel, linaro-mm-sig

On Fri, Jun 24, 2016 at 10:08:46AM +0100, Chris Wilson wrote:
> +struct kfence {
> +	wait_queue_head_t wait;
> +	unsigned long flags;
> +	struct kref kref;
> +	atomic_t pending;
> +};

> +#define KFENCE_CHECKED_BIT	0
> +
> +static void kfence_free(struct kref *kref)
> +{
> +	struct kfence *fence = container_of(kref, typeof(*fence), kref);
> +
> +	WARN_ON(atomic_read(&fence->pending) > 0);
> +
> +	kfree(fence);
> +}
> +
> +/**
> + * kfence_put - release a reference to a kfence
> + * @fence: the kfence being disposed of
> + */
> +void kfence_put(struct kfence *fence)
> +{
> +	if (fence)
> +		kref_put(&fence->kref, kfence_free);

It seems very poor semantics to allow putting NULL, that would indicate a
severe logic fail.

> +}
> +EXPORT_SYMBOL_GPL(kfence_put);

> +/**
> + * kfence_get - acquire a reference to a kfence
> + * @fence: the kfence being used
> + *
> + * Returns the pointer to the kfence, with its reference count incremented.
> + */
> +struct kfence *kfence_get(struct kfence *fence)
> +{
> +	if (fence)
> +		kref_get(&fence->kref);

Similarly, getting NULL is just horrible taste.

> +	return fence;
> +}
> +EXPORT_SYMBOL_GPL(kfence_get);


> +static void __kfence_wake_up_all(struct kfence *fence,
> +				 struct list_head *continuation)
> +{
> +	wait_queue_head_t *x = &fence->wait;
> +	unsigned long flags;
> +
> +	/* To prevent unbounded recursion as we traverse the graph

Broken comment style.

> +	 * of kfences, we move the task_list from this ready fence
> +	 * to the tail of the current fence we are signaling.
> +	 */
> +	spin_lock_irqsave_nested(&x->lock, flags, 1 + !!continuation);
> +	if (continuation)
> +		list_splice_tail_init(&x->task_list, continuation);
> +	else while (!list_empty(&x->task_list))
> +		__wake_up_locked_key(x, TASK_NORMAL, &x->task_list);
> +	spin_unlock_irqrestore(&x->lock, flags);
> +}
> +
> +static void __kfence_signal(struct kfence *fence,
> +			    struct list_head *continuation)
> +{
> +	if (!atomic_dec_and_test(&fence->pending))
> +		return;
> +
> +	atomic_dec(&fence->pending);

You decrement twice?

> +	__kfence_wake_up_all(fence, continuation);
> +}
> +
> +/**
> + * kfence_pending - mark the fence as pending a signal

I would say: increment the pending count, requiring one more completion
before the fence is done.

'Mark' completely misses the point. You need to balance these increments
with decrements, it's not a boolean state.

> + * @fence: the kfence to be signaled
> + *
> + */
> +void kfence_pending(struct kfence *fence)
> +{
> +	WARN_ON(atomic_inc_return(&fence->pending) <= 1);
> +}
> +EXPORT_SYMBOL_GPL(kfence_pending);


> +/**
> + * kfence_create - create a fence
> + * @gfp: the allowed allocation type
> + *
> + * A fence is created with a reference count of one, and pending a signal.
> + * After you have completed setting up the fence for use, call kfence_signal()
> + * to signal completion.
> + *
> + * Returns the newly allocated fence, or NULL on error.
> + */
> +struct kfence *kfence_create(gfp_t gfp)
> +{
> +	struct kfence *fence;
> +
> +	fence = kmalloc(sizeof(*fence), gfp);
> +	if (!fence)
> +		return NULL;
> +
> +	kfence_init(fence);
> +	return fence;
> +}
> +EXPORT_SYMBOL_GPL(kfence_create);

Why? What is the purpose of this here thing? We never provide allocation
wrappers.

> +
> +/**
> + * kfence_add - set one fence to wait upon another

Since you're going to do a whole lot of other kfence_add_$foo() thingies,
why isn't this called kfence_add_kfence() ?

> + * @fence: this kfence
> + * @signaler: target kfence to wait upon
> + * @gfp: the allowed allocation type
> + *
> + * kfence_add() causes the @fence to wait upon completion of @signaler.
> + * Internally the @fence is marked as pending a signal from @signaler.
> + *
> + * Returns 1 if the @fence was added to the waiqueue of @signaler, 0
> + * if @signaler was already complete, or a negative error code.
> + */
> +int kfence_add(struct kfence *fence, struct kfence *signaler, gfp_t gfp)
> +{
> +	wait_queue_t *wq;
> +	unsigned long flags;
> +	int pending;
> +
> +	if (!signaler || kfence_complete(signaler))

Again, wth would you allow adding NULL? That's just horrible.

> +		return 0;
> +
> +	/* The dependency graph must be acyclic */
> +	if (unlikely(kfence_check_if_after(fence, signaler)))
> +		return -EINVAL;
> +
> +	wq = kmalloc(sizeof(*wq), gfp);
> +	if (unlikely(!wq)) {
> +		if (!gfpflags_allow_blocking(gfp))
> +			return -ENOMEM;
> +
> +		kfence_wait(signaler);
> +		return 0;
> +	}
> +
> +	wq->flags = 0;
> +	wq->func = kfence_wake;
> +	wq->private = kfence_get(fence);
> +
> +	kfence_pending(fence);
> +
> +	spin_lock_irqsave(&signaler->wait.lock, flags);
> +	if (likely(!kfence_complete(signaler))) {
> +		__add_wait_queue_tail(&signaler->wait, wq);
> +		pending = 1;
> +	} else {
> +		INIT_LIST_HEAD(&wq->task_list);
> +		kfence_wake(wq, 0, 0, NULL);
> +		pending = 0;
> +	}
> +	spin_unlock_irqrestore(&signaler->wait.lock, flags);
> +
> +	return pending;
> +}
> +EXPORT_SYMBOL_GPL(kfence_add);
> +
> +/**
> + * kfence_add_completion - set the fence to wait upon a completion
> + * @fence: this kfence
> + * @x: target completion to wait upon
> + * @gfp: the allowed allocation type
> + *
> + * kfence_add_completiond() causes the @fence to wait upon a completion.
> + * Internally the @fence is marked as pending a signal from @x.
> + *
> + * Returns 1 if the @fence was added to the waiqueue of @x, 0
> + * if @x was already complete, or a negative error code.
> + */
> +int kfence_add_completion(struct kfence *fence, struct completion *x, gfp_t gfp)
> +{
> +	wait_queue_t *wq;
> +	unsigned long flags;
> +	int pending;
> +
> +	if (!x || completion_done(x))
> +		return 0;
> +
> +	wq = kmalloc(sizeof(*wq), gfp);
> +	if (unlikely(!wq)) {
> +		if (!gfpflags_allow_blocking(gfp))
> +			return -ENOMEM;
> +
> +		wait_for_completion(x);
> +		return 0;
> +	}
> +
> +	wq->flags = 0;
> +	wq->func = kfence_wake;
> +	wq->private = kfence_get(fence);
> +
> +	kfence_pending(fence);
> +
> +	spin_lock_irqsave(&x->wait.lock, flags);
> +	if (likely(!READ_ONCE(x->done))) {
> +		__add_wait_queue_tail(&x->wait, wq);
> +		pending = 1;
> +	} else {
> +		INIT_LIST_HEAD(&wq->task_list);
> +		kfence_wake(wq, 0, 0, NULL);
> +		pending = 0;
> +	}
> +	spin_unlock_irqrestore(&x->wait.lock, flags);
> +
> +	return pending;
> +}
> +EXPORT_SYMBOL_GPL(kfence_add_completion);

It appears to me these two functions share a _lot_ of code, surely that
can be reduced a bit?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 3/9] async: Extend kfence to allow struct embedding
  2016-06-24  9:08 ` [PATCH 3/9] async: Extend kfence to allow struct embedding Chris Wilson
@ 2016-07-13 10:31   ` Peter Zijlstra
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2016-07-13 10:31 UTC (permalink / raw)
  To: Chris Wilson
  Cc: linux-kernel, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Rasmus Villemoes,
	Andy Shevchenko, Dmitry Vyukov, Alexander Potapenko, linux-media,
	dri-devel, linaro-mm-sig

On Fri, Jun 24, 2016 at 10:08:47AM +0100, Chris Wilson wrote:
> @@ -151,7 +161,11 @@ static void kfence_free(struct kref *kref)
>  
>  	WARN_ON(atomic_read(&fence->pending) > 0);
>  
> -	kfree(fence);
> +	if (fence->flags) {
> +		kfence_notify_t fn = (kfence_notify_t)fence->flags;

Maybe provide an inline helper for that conversion and also mask out the
low bits, just to be careful. You're assuming they're not set here,
which seems like a dangerous thing.
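
Something like the below, say (a sketch only; the mask name is
illustrative, pick whatever matches the final flag layout):

	#define KFENCE_FN_MASK (~3UL)

	static inline kfence_notify_t kfence_fn(const struct kfence *fence)
	{
		/* strip the low flag bits before calling through */
		return (kfence_notify_t)(fence->flags & KFENCE_FN_MASK);
	}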

> +		fn(fence);
> +	} else
> +		kfree(fence);

Also, CodingStyle wants braces on both branches if it's on one.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 5/9] async: Wrap hrtimer to provide a time source for a kfence
  2016-06-24  9:08 ` [PATCH 5/9] async: Wrap hrtimer to provide a time source for a kfence Chris Wilson
@ 2016-07-13 10:32   ` Peter Zijlstra
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2016-07-13 10:32 UTC (permalink / raw)
  To: Chris Wilson
  Cc: linux-kernel, Sumit Semwal, Shuah Khan, Tejun Heo, Daniel Vetter,
	Andrew Morton, Ingo Molnar, Kees Cook, Thomas Gleixner,
	Paul E. McKenney, Dan Williams, Andrey Ryabinin, Davidlohr Bueso,
	Nikolay Aleksandrov, David S. Miller, Rasmus Villemoes,
	Andy Shevchenko, Dmitry Vyukov, Alexander Potapenko, linux-media,
	dri-devel, linaro-mm-sig

On Fri, Jun 24, 2016 at 10:08:49AM +0100, Chris Wilson wrote:
> kfence_add_delay() is a convenience wrapper around
> hrtimer_start_range_ns() to provide a time source for a kfence graph.

Changelog could be greatly improved by telling us why we'd want this.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism
  2016-07-13 10:20     ` Chris Wilson
@ 2016-07-13 11:02       ` Daniel Vetter
  0 siblings, 0 replies; 24+ messages in thread
From: Daniel Vetter @ 2016-07-13 11:02 UTC (permalink / raw)
  To: Chris Wilson, Peter Zijlstra, linux-kernel, Sumit Semwal,
	Shuah Khan, Tejun Heo, Daniel Vetter, Andrew Morton, Ingo Molnar,
	Kees Cook, Thomas Gleixner, Paul E. McKenney, Dan Williams,
	Andrey Ryabinin, Davidlohr Bueso, Nikolay Aleksandrov,
	David S. Miller, Rasmus Villemoes, Andy Shevchenko,
	Dmitry Vyukov, Alexander Potapenko, linux-media, dri-devel,
	linaro-mm-sig

On Wed, Jul 13, 2016 at 11:20:14AM +0100, Chris Wilson wrote:
> On Wed, Jul 13, 2016 at 11:38:52AM +0200, Peter Zijlstra wrote:
> > Also, I'm not a particular fan of the k* naming, but I see 'fence' is
> > already taken.
> 
> Agreed, I really want to rename the dma-buf fence to struct dma_fence -
> we would need to do that whilst it dma-buf fencing is still in its infancy.

+1 on dma_fence, seems to make more sense than plain struct fence.
Probably best to do after the recent pile of work from Gustavo to de-stage
sync_file has landed.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Introduce fences for N:M completion variables [v2]
  2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
                   ` (8 preceding siblings ...)
  2016-06-24  9:08 ` [PATCH 9/9] async: Introduce a dependency resolver for parallel execution Chris Wilson
@ 2016-07-17 12:58 ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 1/7] kfence: Introduce kfence, a N:M completion mechanism Chris Wilson
                     ` (6 more replies)
  9 siblings, 7 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel

Peter Zijlstra gave a lot of feedback, and thanks to him I think both
the function names and the docs are a lot saner. There is also a good
consensus on renaming dma-buf's struct fence to be struct dma_fence,
allowing for use of the cleaner name for the core struct.

In short, a kfence is a struct completion that can wait upon multiple
events (rather than just one).
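
For illustration, a minimal sketch of a fence waiting upon two struct
completions A and B that are signaled elsewhere (error handling elided;
the selftests in the series have fuller examples):

	struct kfence *fence = kmalloc(sizeof(*fence), GFP_KERNEL);

	kfence_init(fence, NULL);
	kfence_await_completion(fence, &A, GFP_KERNEL);
	kfence_await_completion(fence, &B, GFP_KERNEL);
	kfence_complete(fence);	/* setup finished; drop the initial await */
	kfence_wait(fence);	/* sleeps until both A and B are complete */
	kfence_put(fence);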
-Chris

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2 1/7] kfence: Introduce kfence, a N:M completion mechanism
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 2/7] kfence: Wrap hrtimer to provide a time source for a kfence Chris Wilson
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo,
	Daniel Vetter, Andrew Morton, Ingo Molnar, Kees Cook,
	Thomas Gleixner, Paul E. McKenney, Dan Williams, Andrey Ryabinin,
	Davidlohr Bueso, Nikolay Aleksandrov, David S. Miller,
	Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, linaro-mm-sig

Completions are a simple synchronization mechanism, suitable for 1:M
barriers where many waiters may be waiting for a single event. However,
some event-driven mechanisms require a graph of events where one event
may depend upon several earlier events. The kfence extends the struct
completion to be able to asynchronously wait upon several event sources,
including completions and other fences, forming the basis on which an
acyclic dependency graph can be built. Most often this is used to create
a set of interdependent tasks that can be run concurrently yet
serialised where necessary. For example, the kernel init sequence has
many tasks that could be run in parallel so long as their dependencies
on previous tasks have been completed. Similarly we have the problem of
assigning interdependent tasks to multiple hardware execution engines,
be they used for rendering or for display. kfences provide a building
block which can be used for determining an order in which tasks can
execute.
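
For illustration only (this mirrors the selftests below, error handling
elided), ordering a fence C after two others:

	struct kfence *A, *B, *C;

	A = kmalloc(sizeof(*A), GFP_KERNEL);
	B = kmalloc(sizeof(*B), GFP_KERNEL);
	C = kmalloc(sizeof(*C), GFP_KERNEL);
	kfence_init(A, NULL);
	kfence_init(B, NULL);
	kfence_init(C, NULL);

	kfence_await_kfence(C, A, GFP_KERNEL);	/* C waits upon A */
	kfence_await_kfence(C, B, GFP_KERNEL);	/* C waits upon B */
	kfence_complete(C);			/* C's setup is finished */

	kfence_complete(A);			/* A's event has fired */
	kfence_complete(B);			/* B's event has fired */
	kfence_wait(C);				/* returns; C is now done */

	kfence_put(A);
	kfence_put(B);
	kfence_put(C);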

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h                |  64 ++++
 kernel/Makefile                       |   2 +-
 kernel/kfence.c                       | 431 +++++++++++++++++++++++++++
 lib/Kconfig.debug                     |  23 ++
 lib/Makefile                          |   1 +
 lib/test-kfence.c                     | 536 ++++++++++++++++++++++++++++++++++
 tools/testing/selftests/lib/kfence.sh |  10 +
 7 files changed, 1066 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/kfence.h
 create mode 100644 kernel/kfence.c
 create mode 100644 lib/test-kfence.c
 create mode 100755 tools/testing/selftests/lib/kfence.sh

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
new file mode 100644
index 000000000000..6e32385b3b8c
--- /dev/null
+++ b/include/linux/kfence.h
@@ -0,0 +1,64 @@
+/*
+ * kfence.h - library routines for N:M synchronisation points
+ *
+ * Copyright (C) 2016 Intel Corporation
+ *
+ * This file is released under the GPLv2.
+ *
+ */
+
+#ifndef _KFENCE_H_
+#define _KFENCE_H_
+
+#include <linux/gfp.h>
+#include <linux/kref.h>
+#include <linux/notifier.h> /* for NOTIFY_DONE */
+#include <linux/wait.h>
+
+struct completion;
+
+struct kfence {
+	wait_queue_head_t wait;
+	unsigned long flags;
+	struct kref kref;
+	atomic_t pending;
+};
+
+#define KFENCE_CHECKED_BIT	0 /* used internally for DAG checking */
+#define KFENCE_PRIVATE_BIT	1 /* available for use by owner */
+#define KFENCE_MASK		(~3)
+
+#define __kfence_call __aligned(4)
+typedef int (*kfence_notify_t)(struct kfence *);
+
+void kfence_init(struct kfence *fence, kfence_notify_t fn);
+
+struct kfence *kfence_get(struct kfence *fence);
+void kfence_put(struct kfence *fence);
+
+void kfence_await(struct kfence *fence);
+int kfence_await_kfence(struct kfence *fence,
+			struct kfence *after,
+			gfp_t gfp);
+int kfence_await_completion(struct kfence *fence,
+			    struct completion *x,
+			    gfp_t gfp);
+void kfence_complete(struct kfence *fence);
+void kfence_wake_up_all(struct kfence *fence);
+void kfence_wait(struct kfence *fence);
+
+/**
+ * kfence_done - report when the fence has been passed
+ * @fence: the kfence to query
+ *
+ * kfence_done() reports true when the fence is no longer waiting for any
+ * events and has completed its fence-complete notification.
+ *
+ * Returns true when the fence has been passed, false otherwise.
+ */
+static inline bool kfence_done(const struct kfence *fence)
+{
+	return atomic_read(&fence->pending) < 0;
+}
+
+#endif /* _KFENCE_H_ */
diff --git a/kernel/Makefile b/kernel/Makefile
index e2ec54e2b952..ff11f31b7ec9 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -9,7 +9,7 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    extable.o params.o \
 	    kthread.o sys_ni.o nsproxy.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
-	    async.o range.o smpboot.o
+	    async.o kfence.o range.o smpboot.o
 
 obj-$(CONFIG_MULTIUSER) += groups.o
 
diff --git a/kernel/kfence.c b/kernel/kfence.c
new file mode 100644
index 000000000000..693af9da545a
--- /dev/null
+++ b/kernel/kfence.c
@@ -0,0 +1,431 @@
+/*
+ * (C) Copyright 2016 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#include <linux/kfence.h>
+#include <linux/slab.h>
+
+/**
+ * DOC: kfence overview
+ *
+ * kfences provide synchronisation barriers between multiple tasks. They are
+ * very similar to completions, or the OpenGL fence synchronisation object.
+ * Where kfences differ from completions is their ability to track multiple
+ * event sources rather than being a singular "completion event". Similar to
+ * completions, multiple processes can wait upon a kfence. However, unlike
+ * completions, a kfence can wait upon other kfences allowing for a graph
+ * of interdependent events.
+ *
+ * Each kfence is a one-shot flag, signaling that work has progressed past
+ * a certain point (as measured by completion of all events the kfence is
+ * listening for) and the waiters upon that kfence may proceed.
+ *
+ * kfences provide both signaling and waiting routines:
+ *
+ * - kfence_await(): indicates that the kfence is asynchronously waiting for
+ *   another event.
+ *
+ * - kfence_complete(): undoes the earlier await and marks the fence as done
+ *   if all of its pending events have been completed.
+ *
+ * - kfence_done(): reports whether or not the kfence has been passed.
+ *
+ * - kfence_wait(): allows the caller to sleep (uninterruptibly) until the
+ *   fence is passed.
+ *
+ * This interface is very similar to completions, with the exception of
+ * allowing the fence to await multiple events. kfences can wait upon other
+ * fences or other hardware events, building an ordered dependency graph:
+ *
+ * - kfence_await_kfence(): the kfence asynchronously waits upon completion
+ *   of another kfence
+ *
+ * - kfence_await_completion(): the kfence asynchronously waits upon a
+ *   completion
+ *
+ * A kfence is initialised using kfence_init(), and starts off awaiting an
+ * event. Once you have finished setting up the fence, including adding
+ * all of its asynchronous waits, call kfence_complete().
+ *
+ * Unlike completions, kfences are expected to live inside more complex graphs
+ * and form the basis for parallel execution of interdependent tasks and so are
+ * reference counted. Use kfence_get() and kfence_put() to acquire or release
+ * a reference to the kfence respectively.
+ *
+ * The kfence can be embedded inside a larger structure and be used as part
+ * of its event-driven mechanism. As such, kfence_init() can be passed a
+ * callback function that will be called first when the kfence is completed,
+ * and again when the kfence is to be freed. If no callback is provided, the
+ * kfence will be freed using kfree() when its reference count hits zero -
+ * if it is embedded inside another structure and no callback is provided,
+ * it must be the first member of its parent struct.
+ *
+ * The fence-completed notification is called before any listeners upon the
+ * fence are signaled, or any waiters woken. You can defer their wake up by
+ * returning NOTIFY_OK from the fence-completed notification and calling
+ * kfence_wake_up_all() later when ready.
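+ *
+ * For example, an embedded kfence with a completion callback might look
+ * like the following (an illustrative sketch only):
+ *
+ *	struct my_task {
+ *		struct kfence fence;
+ *		struct work_struct work;
+ *	};
+ *
+ *	static int __kfence_call my_task_notify(struct kfence *fence)
+ *	{
+ *		struct my_task *t = container_of(fence, typeof(*t), fence);
+ *
+ *		if (kfence_done(fence)) {	/* fence-release */
+ *			kfree(t);
+ *			return NOTIFY_DONE;
+ *		}
+ *
+ *		schedule_work(&t->work);	/* fence-complete */
+ *		return NOTIFY_DONE;
+ *	}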
+ */
+
+static DEFINE_SPINLOCK(kfence_lock);
+
+static int __kfence_notify(struct kfence *fence)
+{
+	kfence_notify_t fn;
+
+	fn = (kfence_notify_t)(fence->flags & KFENCE_MASK);
+	return fn(fence);
+}
+
+static void kfence_free(struct kref *kref)
+{
+	struct kfence *fence = container_of(kref, typeof(*fence), kref);
+
+	WARN_ON(atomic_read(&fence->pending) > 0);
+
+	if (fence->flags & KFENCE_MASK)
+		WARN_ON(__kfence_notify(fence) != NOTIFY_DONE);
+	else
+		kfree(fence);
+}
+
+/**
+ * kfence_put - release a reference to a kfence
+ * @fence: the kfence being disposed of
+ *
+ * kfence_put() decrements the reference count on the @fence, and when
+ * it hits zero the fence will be freed.
+ */
+void kfence_put(struct kfence *fence)
+{
+	kref_put(&fence->kref, kfence_free);
+}
+EXPORT_SYMBOL_GPL(kfence_put);
+
+/**
+ * kfence_get - acquire a reference to a kfence
+ * @fence: the kfence being used
+ *
+ * Returns the pointer to the kfence, with its reference count incremented.
+ */
+struct kfence *kfence_get(struct kfence *fence)
+{
+	kref_get(&fence->kref);
+	return fence;
+}
+EXPORT_SYMBOL_GPL(kfence_get);
+
+static void __kfence_wake_up_all(struct kfence *fence,
+				 struct list_head *continuation)
+{
+	wait_queue_head_t *x = &fence->wait;
+	unsigned long flags;
+
+	atomic_dec(&fence->pending);
+
+	/*
+	 * To prevent unbounded recursion as we traverse the graph of kfences,
+	 * we move the task_list from this fence (the next ready fence) to
+	 * the tail of the original fence's task_list (and so its waiters
+	 * are added to the list to be woken).
+	 */
+	smp_mb__before_spinlock();
+	spin_lock_irqsave_nested(&x->lock, flags, 1 + !!continuation);
+	if (continuation) {
+		list_splice_tail_init(&x->task_list, continuation);
+	} else {
+		while (!list_empty(&x->task_list))
+			__wake_up_locked_key(x, TASK_NORMAL, &x->task_list);
+	}
+	spin_unlock_irqrestore(&x->lock, flags);
+}
+
+/**
+ * kfence_wake_up_all - wake all waiters upon a fence
+ * @fence: the kfence to signal
+ *
+ * If the fence-complete notification is deferred, when the callback is
+ * complete it should call kfence_wake_up_all() to wake up all waiters
+ * upon the fence.
+ *
+ * It is invalid to call kfence_wake_up_all() at any time other than from
+ * inside a deferred fence-complete notification.
+ */
+void kfence_wake_up_all(struct kfence *fence)
+{
+	WARN_ON(atomic_read(&fence->pending) != 0);
+	__kfence_wake_up_all(fence, NULL);
+}
+
+static void __kfence_complete(struct kfence *fence,
+			      struct list_head *continuation)
+{
+	if (!atomic_dec_and_test(&fence->pending))
+		return;
+
+	if (fence->flags & KFENCE_MASK && __kfence_notify(fence) != NOTIFY_DONE)
+		return;
+
+	__kfence_wake_up_all(fence, continuation);
+}
+
+/**
+ * kfence_await - increment the count of events being asynchronously waited upon
+ * @fence: the kfence
+ *
+ * kfence_await() indicates that the @fence is waiting upon the completion
+ * of an event. The @fence may wait upon multiple events concurrently.
+ * When that event is complete, a corresponding call to kfence_complete()
+ * should be made.
+ */
+void kfence_await(struct kfence *fence)
+{
+	WARN_ON(atomic_inc_return(&fence->pending) <= 1);
+}
+EXPORT_SYMBOL_GPL(kfence_await);
+
+/**
+ * kfence_complete - decrement the count of events waited upon
+ * @fence: the kfence
+ *
+ * When all event sources for the @fence are completed, i.e. the event count
+ * hits zero, all waiters upon the @fence are woken up.
+ */
+void kfence_complete(struct kfence *fence)
+{
+	if (WARN_ON(kfence_done(fence)))
+		return;
+
+	__kfence_complete(fence, NULL);
+}
+EXPORT_SYMBOL_GPL(kfence_complete);
+
+/**
+ * kfence_wait - wait upon a fence to be completed
+ * @fence: the kfence to wait upon
+ *
+ * Blocks (uninterruptibly waits) until the @fence event counter reaches zero
+ * and then also waits for the fence-completed notification to finish.
+ */
+void kfence_wait(struct kfence *fence)
+{
+	wait_event(fence->wait, kfence_done(fence));
+}
+EXPORT_SYMBOL_GPL(kfence_wait);
+
+/**
+ * kfence_init - initialize a fence for embedded use within a struct
+ * @fence: this kfence
+ * @fn: a callback function for when the fence is complete, and when the
+ * fence is released
+ *
+ * This function initialises the @fence for use embedded within a parent
+ * structure. The optional @fn hook is first called when the fence is completed
+ * (when its pending event count hits 0) and again when the fence is
+ * to be freed. Note that the @fn will be called from atomic context. The @fn
+ * is stored inside the fence mixed with some flags, and so the @fn must
+ * be aligned using the __kfence_call function attribute.
+ *
+ * If the @fn is not provided, the kfence must be the first member in its
+ * parent struct as it will be freed using kfree().
+ *
+ * fence-complete notification: @fn will be called when the pending event
+ * count hits 0; however, the fence is not completed unless the callback
+ * returns NOTIFY_DONE. During this notification callback kfence_done() reports
+ * false. You can suspend completion of the fence by returning
+ * NOTIFY_OK instead and then later calling kfence_wake_up_all().
+ *
+ * fence-release notification: @fn will be called when the reference count
+ * hits 0, and kfence_done() will report true.
+ */
+void kfence_init(struct kfence *fence, kfence_notify_t fn)
+{
+	BUG_ON((unsigned long)fn & ~KFENCE_MASK);
+
+	init_waitqueue_head(&fence->wait);
+	kref_init(&fence->kref);
+	atomic_set(&fence->pending, 1);
+	fence->flags = (unsigned long)fn;
+}
+EXPORT_SYMBOL_GPL(kfence_init);
+
+static int kfence_wake(wait_queue_t *wq, unsigned mode, int flags, void *key)
+{
+	list_del(&wq->task_list);
+	__kfence_complete(wq->private, key);
+	kfence_put(wq->private);
+	kfree(wq);
+	return 0;
+}
+
+static bool __kfence_check_if_after(struct kfence *fence,
+				    const struct kfence * const signaler)
+{
+	wait_queue_t *wq;
+
+	if (__test_and_set_bit(KFENCE_CHECKED_BIT, &fence->flags))
+		return false;
+
+	if (fence == signaler)
+		return true;
+
+	list_for_each_entry(wq, &fence->wait.task_list, task_list) {
+		if (wq->func != kfence_wake)
+			continue;
+
+		if (__kfence_check_if_after(wq->private, signaler))
+			return true;
+	}
+
+	return false;
+}
+
+static void __kfence_clear_checked_bit(struct kfence *fence)
+{
+	wait_queue_t *wq;
+
+	if (!__test_and_clear_bit(KFENCE_CHECKED_BIT, &fence->flags))
+		return;
+
+	list_for_each_entry(wq, &fence->wait.task_list, task_list) {
+		if (wq->func != kfence_wake)
+			continue;
+
+		__kfence_clear_checked_bit(wq->private);
+	}
+}
+
+static bool kfence_check_if_after(struct kfence *fence,
+				  const struct kfence * const signaler)
+{
+	unsigned long flags;
+	bool err;
+
+	if (!config_enabled(CONFIG_KFENCE_CHECK_DAG))
+		return false;
+
+	spin_lock_irqsave(&kfence_lock, flags);
+	err = __kfence_check_if_after(fence, signaler);
+	__kfence_clear_checked_bit(fence);
+	spin_unlock_irqrestore(&kfence_lock, flags);
+
+	return err;
+}
+
+static wait_queue_t *__kfence_create_wq(struct kfence *fence, gfp_t gfp)
+{
+	wait_queue_t *wq;
+
+	wq = kmalloc(sizeof(*wq), gfp);
+	if (unlikely(!wq))
+		return NULL;
+
+	INIT_LIST_HEAD(&wq->task_list);
+	wq->flags = 0;
+	wq->func = kfence_wake;
+	wq->private = kfence_get(fence);
+
+	kfence_await(fence);
+
+	return wq;
+}
+
+/**
+ * kfence_await_kfence - set one fence to wait upon another
+ * @fence: this kfence
+ * @signaler: target kfence to wait upon
+ * @gfp: the allowed allocation mask
+ *
+ * kfence_await_kfence() causes the @fence to asynchronously wait upon the
+ * completion of @signaler.
+ *
+ * Returns 1 if the @fence was added to the waitqueue of @signaler, 0
+ * if @signaler was already complete, or a negative error code.
+ */
+int kfence_await_kfence(struct kfence *fence,
+			struct kfence *signaler,
+			gfp_t gfp)
+{
+	wait_queue_t *wq;
+	unsigned long flags;
+	int pending;
+
+	if (kfence_done(signaler))
+		return 0;
+
+	/* The dependency graph must be acyclic. */
+	if (unlikely(kfence_check_if_after(fence, signaler)))
+		return -EINVAL;
+
+	wq = __kfence_create_wq(fence, gfp);
+	if (unlikely(!wq)) {
+		if (!gfpflags_allow_blocking(gfp))
+			return -ENOMEM;
+
+		kfence_wait(signaler);
+		return 0;
+	}
+
+	spin_lock_irqsave(&signaler->wait.lock, flags);
+	if (likely(!kfence_done(signaler))) {
+		__add_wait_queue_tail(&signaler->wait, wq);
+		pending = 1;
+	} else {
+		kfence_wake(wq, 0, 0, NULL);
+		pending = 0;
+	}
+	spin_unlock_irqrestore(&signaler->wait.lock, flags);
+
+	return pending;
+}
+EXPORT_SYMBOL_GPL(kfence_await_kfence);
+
+/**
+ * kfence_await_completion - set the fence to wait upon a completion
+ * @fence: this kfence
+ * @x: target completion to wait upon
+ * @gfp: the allowed allocation mask
+ *
+ * kfence_await_completion() causes the @fence to asynchronously wait upon
+ * the completion.
+ *
+ * Returns 1 if the @fence was added to the waitqueue of @x, 0
+ * if @x was already complete, or a negative error code.
+ */
+int kfence_await_completion(struct kfence *fence,
+			    struct completion *x,
+			    gfp_t gfp)
+{
+	wait_queue_t *wq;
+	unsigned long flags;
+	int pending;
+
+	if (completion_done(x))
+		return 0;
+
+	wq = __kfence_create_wq(fence, gfp);
+	if (unlikely(!wq)) {
+		if (!gfpflags_allow_blocking(gfp))
+			return -ENOMEM;
+
+		wait_for_completion(x);
+		return 0;
+	}
+
+	spin_lock_irqsave(&x->wait.lock, flags);
+	if (likely(!READ_ONCE(x->done))) {
+		__add_wait_queue_tail(&x->wait, wq);
+		pending = 1;
+	} else {
+		kfence_wake(wq, 0, 0, NULL);
+		pending = 0;
+	}
+	spin_unlock_irqrestore(&x->wait.lock, flags);
+
+	return pending;
+}
+EXPORT_SYMBOL_GPL(kfence_await_completion);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b9cfdbfae9aa..df1182d41f06 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1763,6 +1763,29 @@ config KPROBES_SANITY_TEST
 
 	  Say N if you are unsure.
 
+config KFENCE_SELFTEST
+	tristate "Kfence self tests"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option provides a kernel module that can be used to test
+	  the kfence handling. This option is not useful for distributions
+	  or general kernels, but only for kernel developers working on the
+	  kfence and async_domain facilities.
+
+	  Say N if you are unsure.
+
+config KFENCE_CHECK_DAG
+	bool "Check that kfence are only used with directed acyclic graphs"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option enforces that kfences are only used with directed acyclic
+	  graphs (DAG), as otherwise a cycle in the graph means that the
+	  fences will never be signaled (nor the corresponding tasks executed).
+
+	  Say N if you are unsure.
+
 config BACKTRACE_SELF_TEST
 	tristate "Self test for the backtrace code"
 	depends on DEBUG_KERNEL
diff --git a/lib/Makefile b/lib/Makefile
index ff6a7a6c6395..943781cfe8d1 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -28,6 +28,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
+obj-$(CONFIG_KFENCE_SELFTEST) += test-kfence.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
 lib-$(CONFIG_HAS_DMA) += dma-noop.o
diff --git a/lib/test-kfence.c b/lib/test-kfence.c
new file mode 100644
index 000000000000..b40719fce967
--- /dev/null
+++ b/lib/test-kfence.c
@@ -0,0 +1,536 @@
+/*
+ * Test cases for kfence facility.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/delay.h>
+#include <linux/kfence.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+static struct kfence *alloc_kfence(void)
+{
+	struct kfence *fence;
+
+	fence = kmalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return NULL;
+
+	kfence_init(fence, NULL);
+	return fence;
+}
+
+static int __init __test_self(struct kfence *fence)
+{
+	if (kfence_done(fence))
+		return -EINVAL;
+
+	kfence_complete(fence);
+	if (!kfence_done(fence))
+		return -EINVAL;
+
+	kfence_wait(fence);
+	if (!kfence_done(fence))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int __init test_self(void)
+{
+	struct kfence *fence;
+	int ret;
+
+	/* Test kfence signaling and completion testing */
+	pr_debug("%s\n", __func__);
+
+	fence = alloc_kfence();
+	if (!fence)
+		return -ENOMEM;
+
+	ret = __test_self(fence);
+
+	kfence_put(fence);
+	return ret;
+}
+
+struct test_stack {
+	struct kfence fence;
+	bool seen;
+};
+
+static int __init __kfence_call fence_callback(struct kfence *fence)
+{
+	container_of(fence, typeof(struct test_stack), fence)->seen = true;
+	return NOTIFY_DONE;
+}
+
+static int __init test_stack(void)
+{
+	struct test_stack ts;
+	int ret;
+
+	/* Test kfence signaling and completion testing (on stack) */
+	pr_debug("%s\n", __func__);
+
+	ts.seen = false;
+	kfence_init(&ts.fence, fence_callback);
+
+	ret = __test_self(&ts.fence);
+	if (ret < 0)
+		return ret;
+
+	if (!ts.seen) {
+		pr_err("fence callback not executed\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_dag(void)
+{
+	struct kfence *A, *B, *C;
+
+	/* Test detection of cycles within the kfence graphs */
+	pr_debug("%s\n", __func__);
+
+	if (!config_enabled(CONFIG_KFENCE_CHECK_DAG))
+		return 0;
+
+	A = alloc_kfence();
+	if (kfence_await_kfence(A, A, GFP_KERNEL) != -EINVAL) {
+		pr_err("recursive cycle not detected (AA)\n");
+		return -EINVAL;
+	}
+
+	B = alloc_kfence();
+
+	kfence_await_kfence(A, B, GFP_KERNEL);
+	if (kfence_await_kfence(B, A, GFP_KERNEL) != -EINVAL) {
+		pr_err("single depth cycle not detected (BAB)\n");
+		return -EINVAL;
+	}
+
+	C = alloc_kfence();
+	kfence_await_kfence(B, C, GFP_KERNEL);
+	if (kfence_await_kfence(C, A, GFP_KERNEL) != -EINVAL) {
+		pr_err("cycle not detected (BA, CB, AC)\n");
+		return -EINVAL;
+	}
+
+	kfence_complete(A);
+	kfence_put(A);
+
+	kfence_complete(B);
+	kfence_put(B);
+
+	kfence_complete(C);
+	kfence_put(C);
+
+	return 0;
+}
+
+static int __init test_AB(void)
+{
+	struct kfence *A, *B;
+	int ret;
+
+	/* Test kfence (A) waiting on an event source (B) */
+	pr_debug("%s\n", __func__);
+
+	A = alloc_kfence();
+	B = alloc_kfence();
+	if (!A || !B)
+		return -ENOMEM;
+
+	ret = kfence_await_kfence(A, B, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_complete(A);
+	if (kfence_done(A))
+		return -EINVAL;
+
+	kfence_complete(B);
+	if (!kfence_done(B))
+		return -EINVAL;
+
+	if (!kfence_done(A))
+		return -EINVAL;
+
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_ABC(void)
+{
+	struct kfence *A, *B, *C;
+	int ret;
+
+	/* Test a chain of fences, A waits on B who waits on C */
+	pr_debug("%s\n", __func__);
+
+	A = alloc_kfence();
+	B = alloc_kfence();
+	C = alloc_kfence();
+	if (!A || !B || !C)
+		return -ENOMEM;
+
+	ret = kfence_await_kfence(A, B, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	ret = kfence_await_kfence(B, C, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_complete(A);
+	if (kfence_done(A))
+		return -EINVAL;
+
+	kfence_complete(B);
+	if (kfence_done(B))
+		return -EINVAL;
+
+	if (kfence_done(A))
+		return -EINVAL;
+
+	kfence_complete(C);
+	if (!kfence_done(C))
+		return -EINVAL;
+
+	if (!kfence_done(B))
+		return -EINVAL;
+
+	if (!kfence_done(A))
+		return -EINVAL;
+
+	kfence_put(C);
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_AB_C(void)
+{
+	struct kfence *A, *B, *C;
+	int ret;
+
+	/* Test multiple fences (AB) waiting on a single event (C) */
+	pr_debug("%s\n", __func__);
+
+	A = alloc_kfence();
+	B = alloc_kfence();
+	C = alloc_kfence();
+	if (!A || !B || !C)
+		return -ENOMEM;
+
+	ret = kfence_await_kfence(A, C, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	ret = kfence_await_kfence(B, C, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_complete(A);
+	kfence_complete(B);
+
+	if (kfence_done(A))
+		return -EINVAL;
+
+	if (kfence_done(B))
+		return -EINVAL;
+
+	kfence_complete(C);
+	if (!kfence_done(C))
+		return -EINVAL;
+
+	if (!kfence_done(B))
+		return -EINVAL;
+
+	if (!kfence_done(A))
+		return -EINVAL;
+
+	kfence_put(C);
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_C_AB(void)
+{
+	struct kfence *A, *B, *C;
+	int ret;
+
+	/* Test multiple event sources (A,B) for a single fence (C) */
+	pr_debug("%s\n", __func__);
+
+	A = alloc_kfence();
+	B = alloc_kfence();
+	C = alloc_kfence();
+	if (!A || !B || !C)
+		return -ENOMEM;
+
+	ret = kfence_await_kfence(C, A, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	ret = kfence_await_kfence(C, B, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_complete(C);
+	if (kfence_done(C))
+		return -EINVAL;
+
+	kfence_complete(A);
+	kfence_complete(B);
+
+	if (!kfence_done(A))
+		return -EINVAL;
+
+	if (!kfence_done(B))
+		return -EINVAL;
+
+	if (!kfence_done(C))
+		return -EINVAL;
+
+	kfence_put(C);
+	kfence_put(B);
+	kfence_put(A);
+	return 0;
+}
+
+static int __init test_completion(void)
+{
+	struct kfence *fence;
+	struct completion x;
+	int ret;
+
+	/* Test use of a completion as an event source for kfences */
+	pr_debug("%s\n", __func__);
+
+	init_completion(&x);
+
+	fence = alloc_kfence();
+	if (!fence)
+		return -ENOMEM;
+
+	ret = kfence_await_completion(fence, &x, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+	if (ret == 0)
+		return -EINVAL;
+
+	kfence_complete(fence);
+	if (kfence_done(fence))
+		return -EINVAL;
+
+	complete_all(&x);
+	if (!kfence_done(fence))
+		return -EINVAL;
+
+	kfence_put(fence);
+	return 0;
+}
+
+struct task_ipc {
+	struct work_struct work;
+	struct completion started;
+	struct kfence *in, *out;
+	int value;
+};
+
+static void __init task_ipc(struct work_struct *work)
+{
+	struct task_ipc *ipc = container_of(work, typeof(*ipc), work);
+
+	complete(&ipc->started);
+
+	kfence_wait(ipc->in);
+	smp_store_mb(ipc->value, 1);
+	kfence_complete(ipc->out);
+}
+
+static int __init test_chain(void)
+{
+	const int nfences = 4096;
+	struct kfence **fences;
+	int ret, i;
+
+	/* Test a long chain of fences */
+	pr_debug("%s\n", __func__);
+
+	fences = kmalloc_array(nfences, sizeof(*fences), GFP_KERNEL);
+	if (!fences)
+		return -ENOMEM;
+
+	for (i = 0; i < nfences; i++) {
+		fences[i] = alloc_kfence();
+		if (!fences[i])
+			return -ENOMEM;
+
+		if (i > 0) {
+			ret = kfence_await_kfence(fences[i],
+						  fences[i - 1],
+						  GFP_KERNEL);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	for (i = nfences; --i; ) {
+		kfence_complete(fences[i]);
+		if (kfence_done(fences[i]))
+			return -EINVAL;
+	}
+
+	kfence_complete(fences[0]);
+	for (i = 0; i < nfences; i++) {
+		if (!kfence_done(fences[i]))
+			return -EINVAL;
+
+		kfence_put(fences[i]);
+	}
+	kfree(fences);
+	return 0;
+}
+
+static int __init test_ipc(void)
+{
+	struct task_ipc ipc;
+	int ret = 0;
+
+	/* Test use of kfence as an interprocess signaling mechanism */
+	pr_debug("%s\n", __func__);
+
+	ipc.in = alloc_kfence();
+	ipc.out = alloc_kfence();
+	if (!ipc.in || !ipc.out)
+		return -ENOMEM;
+
+	/* use a completion to avoid chicken-and-egg testing for kfence */
+	init_completion(&ipc.started);
+
+	ipc.value = 0;
+	INIT_WORK(&ipc.work, task_ipc);
+	schedule_work(&ipc.work);
+
+	wait_for_completion(&ipc.started);
+
+	usleep_range(1000, 2000);
+	if (READ_ONCE(ipc.value)) {
+		pr_err("worker updated value before kfence was signaled\n");
+		ret = -EINVAL;
+	}
+
+	kfence_complete(ipc.in);
+	kfence_wait(ipc.out);
+
+	if (!READ_ONCE(ipc.value)) {
+		pr_err("worker signaled kfence before value was posted\n");
+		ret = -EINVAL;
+	}
+
+	flush_work(&ipc.work);
+	kfence_put(ipc.in);
+	kfence_put(ipc.out);
+	return ret;
+}
+
+static int __init test_kfence_init(void)
+{
+	int ret;
+
+	pr_info("Testing kfences\n");
+
+	ret = test_self();
+	if (ret < 0) {
+		pr_err("self failed\n");
+		return ret;
+	}
+
+	ret = test_stack();
+	if (ret < 0) {
+		pr_err("stack failed\n");
+		return ret;
+	}
+
+	ret = test_dag();
+	if (ret < 0) {
+		pr_err("DAG checker failed\n");
+		return ret;
+	}
+
+	ret = test_AB();
+	if (ret < 0) {
+		pr_err("AB failed\n");
+		return ret;
+	}
+
+	ret = test_ABC();
+	if (ret < 0) {
+		pr_err("ABC failed\n");
+		return ret;
+	}
+
+	ret = test_AB_C();
+	if (ret < 0) {
+		pr_err("AB_C failed\n");
+		return ret;
+	}
+
+	ret = test_C_AB();
+	if (ret < 0) {
+		pr_err("C_AB failed\n");
+		return ret;
+	}
+
+	ret = test_chain();
+	if (ret < 0) {
+		pr_err("chain failed\n");
+		return ret;
+	}
+
+	ret = test_ipc();
+	if (ret < 0) {
+		pr_err("ipc failed\n");
+		return ret;
+	}
+
+	ret = test_completion();
+	if (ret < 0) {
+		pr_err("completion failed\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void __exit test_kfence_cleanup(void)
+{
+}
+
+module_init(test_kfence_init);
+module_exit(test_kfence_cleanup);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/lib/kfence.sh b/tools/testing/selftests/lib/kfence.sh
new file mode 100755
index 000000000000..487320c70ed1
--- /dev/null
+++ b/tools/testing/selftests/lib/kfence.sh
@@ -0,0 +1,10 @@
+#!/bin/sh
+# Runs infrastructure tests using test-kfence kernel module
+
+if /sbin/modprobe -q test-kfence; then
+	/sbin/modprobe -q -r test-kfence
+	echo "kfence: ok"
+else
+	echo "kfence: [FAIL]"
+	exit 1
+fi
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 2/7] kfence: Wrap hrtimer to provide a time source for a kfence
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 1/7] kfence: Introduce kfence, a N:M completion mechanism Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 3/7] kfence: Extend kfences for listening on DMA fences Chris Wilson
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo,
	Daniel Vetter, Andrew Morton, Ingo Molnar, Kees Cook,
	Thomas Gleixner, Paul E. McKenney, Dan Williams, Andrey Ryabinin,
	Davidlohr Bueso, Nikolay Aleksandrov, David S. Miller,
	Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, linaro-mm-sig

A common requirement when scheduling a task is that it should not begin
until a certain point in time has passed (e.g. queue_delayed_work()).
kfence_await_hrtimer() causes the kfence to asynchronously wait until
after the appropriate time before being woken.
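
For example (an illustrative sketch, error handling elided), delaying
until at least 10ms have elapsed:

	struct kfence *fence = kmalloc(sizeof(*fence), GFP_KERNEL);

	kfence_init(fence, NULL);
	kfence_await_hrtimer(fence, CLOCK_MONOTONIC, HRTIMER_MODE_REL,
			     ms_to_ktime(10), 0, GFP_KERNEL);
	kfence_complete(fence);	/* setup finished */
	kfence_wait(fence);	/* sleeps for at least 10ms */
	kfence_put(fence);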

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/kfence.h |  5 +++++
 kernel/kfence.c        | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/test-kfence.c      | 44 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 107 insertions(+)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 6e32385b3b8c..76a2f95dfb70 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -16,6 +16,7 @@
 #include <linux/wait.h>
 
 struct completion;
+enum hrtimer_mode;
 
 struct kfence {
 	wait_queue_head_t wait;
@@ -43,6 +44,10 @@ int kfence_await_kfence(struct kfence *fence,
 int kfence_await_completion(struct kfence *fence,
 			    struct completion *x,
 			    gfp_t gfp);
+int kfence_await_hrtimer(struct kfence *fence,
+			 clockid_t clock, enum hrtimer_mode mode,
+			 ktime_t delay, u64 slack,
+			 gfp_t gfp);
 void kfence_complete(struct kfence *fence);
 void kfence_wake_up_all(struct kfence *fence);
 void kfence_wait(struct kfence *fence);
diff --git a/kernel/kfence.c b/kernel/kfence.c
index 693af9da545a..59c27910a749 100644
--- a/kernel/kfence.c
+++ b/kernel/kfence.c
@@ -48,6 +48,9 @@
  * - kfence_await_completion(): the kfence asynchronously waits upon a
  *   completion
  *
+ * - kfence_await_hrtimer(): the kfence asynchronously wait for an expiration
+ *   of a timer
+ *
  * A kfence is initialised using kfence_init(), and starts off awaiting an
  * event. Once you have finished setting up the fence, including adding
  * all of its asynchronous waits, call kfence_complete().
@@ -429,3 +432,58 @@ int kfence_await_completion(struct kfence *fence,
 	return pending;
 }
 EXPORT_SYMBOL_GPL(kfence_await_completion);
+
+struct timer_cb {
+	struct hrtimer timer;
+	struct kfence *fence;
+};
+
+static enum hrtimer_restart
+timer_kfence_wake(struct hrtimer *timer)
+{
+	struct timer_cb *cb = container_of(timer, typeof(*cb), timer);
+
+	kfence_complete(cb->fence);
+	kfence_put(cb->fence);
+	kfree(cb);
+
+	return HRTIMER_NORESTART;
+}
+
+/**
+ * kfence_await_hrtimer - set the fence to wait for a period of time
+ * @fence: this kfence
+ * @clock: which clock to program
+ * @mode: delay given as relative or absolute
+ * @delay: how long or until what time to wait
+ * @slack: how much slack that may be applied to the delay
+ *
+ * kfence_await_hrtimer() causes the @fence to wait for a period of time, or
+ * until a certain point in time. It is a convenience wrapper around
+ * hrtimer_start_range_ns(). For more details on @clock, @mode, @delay and
+ * @slack please consult the hrtimer documentation.
+ *
+ * Returns 1 if the delay was successfully added to the @fence, or a negative
+ * error code on failure.
+ */
+int kfence_await_hrtimer(struct kfence *fence,
+			 clockid_t clock, enum hrtimer_mode mode,
+			 ktime_t delay, u64 slack,
+			 gfp_t gfp)
+{
+	struct timer_cb *cb;
+
+	cb = kmalloc(sizeof(*cb), gfp);
+	if (!cb)
+		return -ENOMEM;
+
+	cb->fence = kfence_get(fence);
+	kfence_await(fence);
+
+	hrtimer_init(&cb->timer, clock, mode);
+	cb->timer.function = timer_kfence_wake;
+
+	hrtimer_start_range_ns(&cb->timer, delay, slack, mode);
+	return 1;
+}
+EXPORT_SYMBOL_GPL(kfence_await_hrtimer);
diff --git a/lib/test-kfence.c b/lib/test-kfence.c
index b40719fce967..1b0853fda7c3 100644
--- a/lib/test-kfence.c
+++ b/lib/test-kfence.c
@@ -352,6 +352,44 @@ static int __init test_completion(void)
 	return 0;
 }
 
+static int __init test_delay(void)
+{
+	struct kfence *fence;
+	ktime_t delay;
+	int ret;
+
+	/* Test use of a hrtimer as an event source for kfences */
+	pr_debug("%s\n", __func__);
+
+	fence = alloc_kfence();
+	if (!fence)
+		return -ENOMEM;
+
+	delay = ktime_get();
+
+	ret = kfence_await_hrtimer(fence, CLOCK_MONOTONIC, HRTIMER_MODE_REL,
+				   ms_to_ktime(1), 1 << 10,
+				   GFP_KERNEL);
+	if (ret <= 0) {
+		/* Release the fence if the timer could not be added. */
+		kfence_put(fence);
+		return ret < 0 ? ret : -EINVAL;
+	}
+
+	kfence_complete(fence);
+	kfence_wait(fence);
+
+	delay = ktime_sub(ktime_get(), delay);
+	kfence_put(fence);
+
+	if (!ktime_to_ms(delay)) {
+		pr_err("kfence woke too early, delay was only %lldns\n",
+		       (long long)ktime_to_ns(delay));
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 struct task_ipc {
 	struct work_struct work;
 	struct completion started;
@@ -522,6 +560,12 @@ static int __init test_kfence_init(void)
 		return ret;
 	}
 
+	ret = test_delay();
+	if (ret < 0) {
+		pr_err("delay failed\n");
+		return ret;
+	}
+
 	return 0;
 }
 
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 3/7] kfence: Extend kfences for listening on DMA fences
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 1/7] kfence: Introduce kfence, a N:M completion mechanism Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 2/7] kfence: Wrap hrtimer to provide a time source for a kfence Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 4/7] async: Add kselftests for async-domains Chris Wilson
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo,
	Daniel Vetter, Andrew Morton, Ingo Molnar, Kees Cook,
	Thomas Gleixner, Paul E. McKenney, Dan Williams, Andrey Ryabinin,
	Davidlohr Bueso, Nikolay Aleksandrov, David S. Miller,
	Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, linaro-mm-sig

dma-buf provides an interface for receiving notifications from DMA
hardware, and for implicitly tracking fences used for rendering into
dma-buf. We want to be able to use these event sources along with kfence
for easy collection and combining with other events.
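
A hedged sketch of the intended usage (illustrative only; assumes the
caller already holds references on the kfence @kf and the DMA fence @dma):

	int err;

	/* Fold the DMA fence in as one more event source. */
	err = kfence_await_dma_fence(kf, dma, GFP_KERNEL);
	if (err < 0)
		return err;	/* no memory and unable to block */

	kfence_complete(kf);	/* all awaits are now in place */
	kfence_wait(kf);	/* sleeps until @dma (and friends) signal */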

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/fence.c       | 58 +++++++++++++++++++++++++++++++++++++++++++
 drivers/dma-buf/reservation.c | 48 +++++++++++++++++++++++++++++++++++
 include/linux/fence.h         |  6 +++++
 include/linux/kfence.h        |  2 ++
 include/linux/reservation.h   |  7 ++++++
 kernel/kfence.c               |  8 ++++++
 6 files changed, 129 insertions(+)

diff --git a/drivers/dma-buf/fence.c b/drivers/dma-buf/fence.c
index 7b05dbe9b296..3f06b3b1b4cc 100644
--- a/drivers/dma-buf/fence.c
+++ b/drivers/dma-buf/fence.c
@@ -22,6 +22,7 @@
 #include <linux/export.h>
 #include <linux/atomic.h>
 #include <linux/fence.h>
+#include <linux/kfence.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/fence.h>
@@ -530,3 +531,60 @@ fence_init(struct fence *fence, const struct fence_ops *ops,
 	trace_fence_init(fence);
 }
 EXPORT_SYMBOL(fence_init);
+
+struct dma_fence_cb {
+	struct fence_cb base;
+	struct kfence *fence;
+};
+
+static void dma_kfence_wake(struct fence *dma, struct fence_cb *data)
+{
+	struct dma_fence_cb *cb = container_of(data, typeof(*cb), base);
+
+	kfence_complete(cb->fence);
+	kfence_put(cb->fence);
+	kfree(cb);
+}
+
+/**
+ * kfence_await_dma_fence - set the fence to wait upon a DMA fence
+ * @fence: this kfence
+ * @dma: target DMA fence to wait upon
+ * @gfp: the allowed allocation type
+ *
+ * kfence_await_dma_fence() causes the @fence to wait upon completion of @dma.
+ *
+ * Returns 1 if the @fence was successfully added to the waitqueue of @dma, 0
+ * if @dma was already signaled (and so not added), or a negative error code.
+ */
+int kfence_await_dma_fence(struct kfence *fence, struct fence *dma, gfp_t gfp)
+{
+	struct dma_fence_cb *cb;
+	int ret;
+
+	if (fence_is_signaled(dma))
+		return 0;
+
+	cb = kmalloc(sizeof(*cb), gfp);
+	if (!cb) {
+		if (!gfpflags_allow_blocking(gfp))
+			return -ENOMEM;
+
+		return fence_wait(dma, false);
+	}
+
+	cb->fence = kfence_get(fence);
+	kfence_await(fence);
+
+	ret = fence_add_callback(dma, &cb->base, dma_kfence_wake);
+	if (ret == 0) {
+		ret = 1;
+	} else {
+		dma_kfence_wake(dma, &cb->base);
+		if (ret == -ENOENT) /* fence already signaled */
+			ret = 0;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kfence_await_dma_fence);
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 9566a62ad8e3..138b792af0c3 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -543,3 +543,51 @@ unlock_retry:
 	goto retry;
 }
 EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
+
+/**
+ * kfence_await_reservation - set the fence to wait upon a reservation_object
+ * @fence: this kfence
+ * @resv: target reservation_object (collection of DMA fences) to wait upon
+ * @write: Wait for read or read/write access
+ * @gfp: the allowed allocation type
+ *
+ * kfence_await_reservation() causes the @fence to wait upon completion of the
+ * reservation object (a collection of DMA fences), either for read access
+ * or for read/write access.
+ *
+ * Returns 1 if the @fence was successfully added to the waitqueues of @resv,
+ * 0 if @resv was already signaled (and so not added), or a negative error code.
+ */
+int kfence_await_reservation(struct kfence *fence,
+			     struct reservation_object *resv,
+			     bool write,
+			     gfp_t gfp)
+{
+	struct fence *excl, **shared;
+	unsigned int count, i;
+	int ret;
+
+	ret = reservation_object_get_fences_rcu(resv, &excl, &count, &shared);
+	if (ret)
+		return ret;
+
+	if (write) {
+		for (i = 0; i < count; i++) {
+			ret |= kfence_await_dma_fence(fence, shared[i], gfp);
+			if (ret < 0)
+				goto out;
+		}
+	}
+
+	if (excl)
+		ret |= kfence_await_dma_fence(fence, excl, gfp);
+
+out:
+	fence_put(excl);
+	for (i = 0; i < count; i++)
+		fence_put(shared[i]);
+	kfree(shared);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kfence_await_reservation);
diff --git a/include/linux/fence.h b/include/linux/fence.h
index 2056e9fd0138..3c3bc318e826 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -34,6 +34,8 @@ struct fence;
 struct fence_ops;
 struct fence_cb;
 
+struct kfence;
+
 /**
  * struct fence - software synchronization primitive
  * @refcount: refcount for this fence
@@ -378,4 +380,8 @@ unsigned fence_context_alloc(unsigned num);
 			##args);					\
 	} while (0)
 
+int kfence_await_dma_fence(struct kfence *fence,
+			   struct fence *dma,
+			   gfp_t gfp);
+
 #endif /* __LINUX_FENCE_H */
diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 76a2f95dfb70..acbfc2ea7c49 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -16,6 +16,8 @@
 #include <linux/wait.h>
 
 struct completion;
+struct fence;
+struct reservation_object;
 enum hrtimer_mode;
 
 struct kfence {
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index b0f305e77b7f..1954bab95db9 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -49,6 +49,8 @@ extern struct ww_class reservation_ww_class;
 extern struct lock_class_key reservation_seqcount_class;
 extern const char reservation_seqcount_string[];
 
+struct kfence;
+
 /**
  * struct reservation_object_list - a list of shared fences
  * @rcu: for internal use
@@ -210,4 +212,9 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
 bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
 					  bool test_all);
 
+int kfence_await_reservation(struct kfence *fence,
+			     struct reservation_object *resv,
+			     bool write,
+			     gfp_t gfp);
+
 #endif /* _LINUX_RESERVATION_H */
diff --git a/kernel/kfence.c b/kernel/kfence.c
index 59c27910a749..4605eabc2c1b 100644
--- a/kernel/kfence.c
+++ b/kernel/kfence.c
@@ -7,7 +7,9 @@
  * of the License.
  */
 
+#include <linux/fence.h>
 #include <linux/kfence.h>
+#include <linux/reservation.h>
 #include <linux/slab.h>
 
 /**
@@ -51,6 +53,12 @@
 * - kfence_await_hrtimer(): the kfence asynchronously waits for the
 *   expiration of a timer
  *
+ * - kfence_await_dma_fence(): the kfence asynchronously waits for a DMA
+ *   (hardware signaled) fence
+ *
+ * - kfence_await_reservation(): the kfence asynchronously waits for a DMA
+ *   reservation object
+ *
  * A kfence is initialised using kfence_init(), and starts off awaiting an
  * event. Once you have finished setting up the fence, including adding
  * all of its asynchronous waits, call kfence_complete().
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 4/7] async: Add kselftests for async-domains
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
                     ` (2 preceding siblings ...)
  2016-07-17 12:58   ` [PATCH v2 3/7] kfence: Extend kfences for listening on DMA fences Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 5/7] async: Add support for explicit fine-grained barriers Chris Wilson
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, Chris Wilson

A preparatory patch for adding new features (and their tests). First we
want to add coverage of existing features to kselftest.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 lib/Kconfig.debug                           |   9 ++
 lib/Makefile                                |   1 +
 lib/test-async-domain.c                     | 131 ++++++++++++++++++++++++++++
 tools/testing/selftests/lib/Makefile        |   2 +-
 tools/testing/selftests/lib/async-domain.sh |  10 +++
 5 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 lib/test-async-domain.c
 create mode 100755 tools/testing/selftests/lib/async-domain.sh

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index df1182d41f06..4b180aed88b6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1784,6 +1784,15 @@ config KFENCE_CHECK_DAG
 	  graphs (DAG), as otherwise the cycles in the graph means that they
 	  will never be signaled (or the corresponding task executed).
 
+config ASYNC_DOMAIN_SELFTEST
+	tristate "Asynchronous domain self tests"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  Self tests for the asynchronous task execution. This option is
+	  not useful for distributions or general kernels, but only for
+	  kernel developers working on the async_domain facility.
+
 	  Say N if you are unsure.
 
 config BACKTRACE_SELF_TEST
diff --git a/lib/Makefile b/lib/Makefile
index 943781cfe8d1..5864053cf63e 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -28,6 +28,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
+obj-$(CONFIG_ASYNC_DOMAIN_SELFTEST) += test-async-domain.o
 obj-$(CONFIG_KFENCE_SELFTEST) += test-kfence.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/test-async-domain.c b/lib/test-async-domain.c
new file mode 100644
index 000000000000..558a71414fb6
--- /dev/null
+++ b/lib/test-async-domain.c
@@ -0,0 +1,131 @@
+/*
+ * Test cases for async-domain facility.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/async.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+
+static void task_A(void *data, async_cookie_t cookie)
+{
+	long *result = data;
+	smp_store_mb(*result, 'A');
+}
+
+static void task_B(void *data, async_cookie_t cookie)
+{
+	long *result = data;
+	usleep_range(100, 200);
+	smp_store_mb(*result, 'B');
+}
+
+static int __init test_implicit(struct async_domain *domain)
+{
+	const long expected = 'B';
+	long result = 0;
+
+	if (!async_schedule_domain(task_B, &result, domain))
+		return -ENOMEM;
+
+	async_synchronize_full_domain(domain);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_registered(struct async_domain *domain)
+{
+	const long expected = 'B';
+	long result = 0;
+
+	if (!async_schedule_domain(task_B, &result, domain))
+		return -ENOMEM;
+
+	async_synchronize_full();
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void task_nop(void *data, async_cookie_t cookie)
+{
+	async_cookie_t *result = data;
+	smp_store_mb(*result, cookie);
+}
+
+static int __init perf_nop(int batch, long timeout_us)
+{
+	ktime_t start;
+	async_cookie_t nop, last;
+	long count, delay;
+
+	count = 0;
+	nop = last = 0;
+	start = ktime_get();
+	do {
+		ktime_t delta;
+		int n;
+
+		for (n = 0; n < batch; n++)
+			last = async_schedule(task_nop, &nop);
+		async_synchronize_full();
+		delta = ktime_sub(ktime_get(), start);
+		delay = ktime_to_ns(delta) >> 10;
+		count += batch;
+	} while (delay < timeout_us);
+
+	pr_info("%ld nop tasks (batches of %d) completed in %ldus; last queued %lld, saw %lld last\n",
+		count, batch, delay,
+		(long long)last, (long long)READ_ONCE(nop));
+	return 0;
+}
+
+static int __init test_async_domain_init(void)
+{
+	ASYNC_DOMAIN(domain);
+	int ret;
+
+	pr_info("Testing async-domains\n");
+
+	ret = test_implicit(&domain);
+	if (ret)
+		return ret;
+
+	ret = test_registered(&domain);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(1, 100);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(128, 1000);
+	if (ret)
+		return ret;
+
+	async_unregister_domain(&domain);
+	return 0;
+}
+
+static void __exit test_async_domain_cleanup(void)
+{
+	async_synchronize_full();
+}
+
+module_init(test_async_domain_init);
+module_exit(test_async_domain_cleanup);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/lib/Makefile b/tools/testing/selftests/lib/Makefile
index 08360060ab14..46a77ac5b4c6 100644
--- a/tools/testing/selftests/lib/Makefile
+++ b/tools/testing/selftests/lib/Makefile
@@ -3,6 +3,6 @@
 # No binaries, but make sure arg-less "make" doesn't trigger "run_tests"
 all:
 
-TEST_PROGS := printf.sh bitmap.sh
+TEST_PROGS := printf.sh bitmap.sh async-domain.sh
 
 include ../lib.mk
diff --git a/tools/testing/selftests/lib/async-domain.sh b/tools/testing/selftests/lib/async-domain.sh
new file mode 100755
index 000000000000..22c270051de7
--- /dev/null
+++ b/tools/testing/selftests/lib/async-domain.sh
@@ -0,0 +1,10 @@
+#!/bin/sh
+# Runs infrastructure tests using test-async-domain kernel module
+
+if /sbin/modprobe -q test-async-domain; then
+	/sbin/modprobe -q -r test-async-domain
+	echo "async-domain: ok"
+else
+	echo "async-domain: [FAIL]"
+	exit 1
+fi
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 5/7] async: Add support for explicit fine-grained barriers
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
                     ` (3 preceding siblings ...)
  2016-07-17 12:58   ` [PATCH v2 4/7] async: Add kselftests for async-domains Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 6/7] async: Add execution barriers Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 7/7] async: Introduce a dependency resolver for parallel execution Chris Wilson
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo,
	Daniel Vetter, Andrew Morton, Ingo Molnar, Kees Cook,
	Thomas Gleixner, Paul E. McKenney, Dan Williams, Andrey Ryabinin,
	Davidlohr Bueso, Nikolay Aleksandrov, David S. Miller,
	Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, linaro-mm-sig

The current async-domain model supports running a multitude of
independent tasks with a coarse synchronisation point. This is
sufficient for its original purpose of allowing independent drivers to
run concurrently during various phases (booting, early resume, late
resume etc), and of keeping the asynchronous domain out of the
synchronous kernel domains. However, to exploit concurrency further,
drivers themselves want to schedule multiple tasks within a phase (or
between phases) and to control the order of execution of those tasks
relative to each other. To enable this, we extend the synchronisation
scheme based upon kfences and back every task with one. Any task may
now wait upon the kfence before being scheduled, and equally the kfence
may be used to wait on the task itself (rather than waiting on the
cookie for all previous tasks to be completed).
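
A hedged sketch of the explicit ordering this enables (setup() and
probe() are illustrative names, and error handling is elided):

	struct async_work *a = async_work_create(setup, dev, GFP_KERNEL);
	struct async_work *b = async_work_create(probe, dev, GFP_KERNEL);

	async_work_after(b, &a->fence);	/* b may not start before a ends */

	schedule_async_work(a);
	schedule_async_work(b);

	async_work_wait(b);		/* implies a has also completed */
	async_work_put(a);
	async_work_put(b);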

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/async.h   |  60 ++++++++-
 kernel/async.c          | 234 ++++++++++++++++++++--------------
 lib/test-async-domain.c | 324 +++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 515 insertions(+), 103 deletions(-)

diff --git a/include/linux/async.h b/include/linux/async.h
index 6b0226bdaadc..e7d7289a9889 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -13,38 +13,88 @@
 #define __ASYNC_H__
 
 #include <linux/types.h>
+#include <linux/kfence.h>
 #include <linux/list.h>
 
 typedef u64 async_cookie_t;
 typedef void (*async_func_t) (void *data, async_cookie_t cookie);
+
+struct async_work {
+	struct kfence fence;
+	/* private */
+};
+
 struct async_domain {
 	struct list_head pending;
 	unsigned registered:1;
 };
 
+#define ASYNC_DOMAIN_INIT(_name, _r) {				\
+	.pending = LIST_HEAD_INIT(_name.pending),		\
+	.registered = _r						\
+}
+
 /*
  * domain participates in global async_synchronize_full
  */
 #define ASYNC_DOMAIN(_name) \
-	struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending),	\
-				      .registered = 1 }
+	struct async_domain _name = ASYNC_DOMAIN_INIT(_name, 1)
 
 /*
  * domain is free to go out of scope as soon as all pending work is
  * complete, this domain does not participate in async_synchronize_full
  */
 #define ASYNC_DOMAIN_EXCLUSIVE(_name) \
-	struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
-				      .registered = 0 }
+	struct async_domain _name = ASYNC_DOMAIN_INIT(_name, 0)
+
+extern void init_async_domain(struct async_domain *domain, bool registered);
 
 extern async_cookie_t async_schedule(async_func_t func, void *data);
 extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
 					    struct async_domain *domain);
-void async_unregister_domain(struct async_domain *domain);
+extern void async_unregister_domain(struct async_domain *domain);
 extern void async_synchronize_full(void);
 extern void async_synchronize_full_domain(struct async_domain *domain);
 extern void async_synchronize_cookie(async_cookie_t cookie);
 extern void async_synchronize_cookie_domain(async_cookie_t cookie,
 					    struct async_domain *domain);
+
 extern bool current_is_async(void);
+
+extern struct async_work *
+async_work_create(async_func_t func, void *data, gfp_t gfp);
+
+static inline struct async_work *async_work_get(struct async_work *work)
+{
+	kfence_get(&work->fence);
+	return work;
+}
+
+static inline int
+async_work_after(struct async_work *work, struct kfence *fence)
+{
+	return kfence_await_kfence(&work->fence, fence, GFP_KERNEL);
+}
+
+static inline int
+async_work_before(struct async_work *work, struct kfence *fence)
+{
+	return kfence_await_kfence(fence, &work->fence, GFP_KERNEL);
+}
+
+static inline void async_work_wait(struct async_work *work)
+{
+	kfence_wait(&work->fence);
+}
+
+static inline void async_work_put(struct async_work *work)
+{
+	kfence_put(&work->fence);
+}
+
+extern async_cookie_t queue_async_work(struct async_domain *domain,
+				       struct async_work *work,
+				       gfp_t gfp);
+extern async_cookie_t schedule_async_work(struct async_work *work);
+
 #endif
diff --git a/kernel/async.c b/kernel/async.c
index d2edd6efec56..0d695919a60d 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -2,6 +2,7 @@
  * async.c: Asynchronous function calls for boot performance
  *
  * (C) Copyright 2009 Intel Corporation
+ * (C) Copyright 2016 Intel Corporation
  * Author: Arjan van de Ven <arjan@linux.intel.com>
  *
  * This program is free software; you can redistribute it and/or
@@ -59,59 +60,39 @@ asynchronous and synchronous parts of the kernel.
 
 #include "workqueue_internal.h"
 
-static async_cookie_t next_cookie = 1;
-
+#define ASYNC_QUEUED_BIT	KFENCE_PRIVATE_BIT
 #define MAX_WORK		32768
-#define ASYNC_COOKIE_MAX	ULLONG_MAX	/* infinity cookie */
-
-static LIST_HEAD(async_global_pending);	/* pending from all registered doms */
-static ASYNC_DOMAIN(async_dfl_domain);
-static DEFINE_SPINLOCK(async_lock);
 
 struct async_entry {
-	struct list_head	domain_list;
-	struct list_head	global_list;
-	struct work_struct	work;
-	async_cookie_t		cookie;
-	async_func_t		func;
-	void			*data;
-	struct async_domain	*domain;
-};
-
-static DECLARE_WAIT_QUEUE_HEAD(async_done);
+	struct async_work base;
+	struct work_struct work;
 
-static atomic_t entry_count;
+	struct list_head pending_link[2];
 
-static async_cookie_t lowest_in_progress(struct async_domain *domain)
-{
-	struct list_head *pending;
-	async_cookie_t ret = ASYNC_COOKIE_MAX;
-	unsigned long flags;
+	async_cookie_t cookie;
+	async_func_t func;
+	void *data;
+};
 
-	spin_lock_irqsave(&async_lock, flags);
+static LIST_HEAD(async_global_pending);	/* pending from all registered doms */
+static ASYNC_DOMAIN(async_dfl_domain);
+static DEFINE_SPINLOCK(async_lock);
+static unsigned int async_pending_count;
 
-	if (domain)
-		pending = &domain->pending;
-	else
-		pending = &async_global_pending;
+static async_cookie_t assign_cookie(void)
+{
+	static async_cookie_t next_cookie;
 
-	if (!list_empty(pending))
-		ret = list_first_entry(pending, struct async_entry,
-				       domain_list)->cookie;
+	if (++next_cookie == 0)
+		next_cookie = 1;
 
-	spin_unlock_irqrestore(&async_lock, flags);
-	return ret;
+	return next_cookie;
 }
 
-/*
- * pick the first pending entry and run it
- */
 static void async_run_entry_fn(struct work_struct *work)
 {
-	struct async_entry *entry =
-		container_of(work, struct async_entry, work);
-	unsigned long flags;
-	ktime_t uninitialized_var(calltime), delta, rettime;
+	struct async_entry *entry = container_of(work, typeof(*entry), work);
+	ktime_t uninitialized_var(calltime);
 
 	/* 1) run (and print duration) */
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
@@ -122,8 +103,7 @@ static void async_run_entry_fn(struct work_struct *work)
 	}
 	entry->func(entry->data, entry->cookie);
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
-		rettime = ktime_get();
-		delta = ktime_sub(rettime, calltime);
+		ktime_t delta = ktime_sub(ktime_get(), calltime);
 		pr_debug("initcall %lli_%pF returned 0 after %lld usecs\n",
 			(long long)entry->cookie,
 			entry->func,
@@ -131,69 +111,81 @@ static void async_run_entry_fn(struct work_struct *work)
 	}
 
 	/* 2) remove self from the pending queues */
-	spin_lock_irqsave(&async_lock, flags);
-	list_del_init(&entry->domain_list);
-	list_del_init(&entry->global_list);
+	spin_lock_irq(&async_lock);
+	list_del(&entry->pending_link[0]);
+	list_del(&entry->pending_link[1]);
+	async_pending_count--;
+	spin_unlock_irq(&async_lock);
 
-	/* 3) free the entry */
-	kfree(entry);
-	atomic_dec(&entry_count);
+	/* 3) wake up any waiters */
+	kfence_wake_up_all(&entry->base.fence);
+	kfence_put(&entry->base.fence);
+}
 
-	spin_unlock_irqrestore(&async_lock, flags);
+__kfence_call static int async_work_notify(struct kfence *fence)
+{
+	struct async_entry *entry =
+		container_of(fence, typeof(*entry), base.fence);
+
+	if (kfence_done(fence)) {
+		kfree(entry);
+		return NOTIFY_DONE;
+	}
 
-	/* 4) wake up any waiters */
-	wake_up(&async_done);
+	queue_work(system_unbound_wq, &entry->work);
+	return NOTIFY_OK;
 }
 
-static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
+struct async_work *async_work_create(async_func_t func, void *data, gfp_t gfp)
 {
 	struct async_entry *entry;
-	unsigned long flags;
-	async_cookie_t newcookie;
 
-	/* allow irq-off callers */
-	entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
+	entry = kmalloc(sizeof(*entry), gfp);
+	if (!entry)
+		return NULL;
 
-	/*
-	 * If we're out of memory or if there's too much work
-	 * pending already, we execute synchronously.
-	 */
-	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
-		kfree(entry);
-		spin_lock_irqsave(&async_lock, flags);
-		newcookie = next_cookie++;
-		spin_unlock_irqrestore(&async_lock, flags);
+	kfence_init(&entry->base.fence, async_work_notify);
 
-		/* low on memory.. run synchronously */
-		func(data, newcookie);
-		return newcookie;
-	}
-	INIT_LIST_HEAD(&entry->domain_list);
-	INIT_LIST_HEAD(&entry->global_list);
 	INIT_WORK(&entry->work, async_run_entry_fn);
 	entry->func = func;
 	entry->data = data;
-	entry->domain = domain;
 
-	spin_lock_irqsave(&async_lock, flags);
+	return &entry->base;
+}
+EXPORT_SYMBOL_GPL(async_work_create);
 
-	/* allocate cookie and queue */
-	newcookie = entry->cookie = next_cookie++;
+async_cookie_t queue_async_work(struct async_domain *domain,
+				struct async_work *work,
+				gfp_t gfp)
+{
+	struct async_entry *entry = container_of(work, typeof(*entry), base);
+	unsigned long flags;
 
-	list_add_tail(&entry->domain_list, &domain->pending);
-	if (domain->registered)
-		list_add_tail(&entry->global_list, &async_global_pending);
+	if (WARN_ON(test_and_set_bit(ASYNC_QUEUED_BIT,
+				     &entry->base.fence.flags)))
+		return 0;
 
-	atomic_inc(&entry_count);
+	spin_lock_irqsave(&async_lock, flags);
+	entry->cookie = assign_cookie();
+	list_add_tail(&entry->pending_link[0], &domain->pending);
+	INIT_LIST_HEAD(&entry->pending_link[1]);
+	if (domain->registered)
+		list_add_tail(&entry->pending_link[1], &async_global_pending);
+	async_pending_count++;
 	spin_unlock_irqrestore(&async_lock, flags);
 
 	/* mark that this task has queued an async job, used by module init */
 	current->flags |= PF_USED_ASYNC;
 
-	/* schedule for execution */
-	queue_work(system_unbound_wq, &entry->work);
+	kfence_complete(kfence_get(&entry->base.fence));
+
+	return entry->cookie;
+}
+EXPORT_SYMBOL_GPL(queue_async_work);
 
-	return newcookie;
+async_cookie_t schedule_async_work(struct async_work *work)
+{
+	return queue_async_work(&async_dfl_domain, work, GFP_KERNEL);
 }
 
 /**
@@ -206,7 +198,7 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
  */
 async_cookie_t async_schedule(async_func_t func, void *data)
 {
-	return __async_schedule(func, data, &async_dfl_domain);
+	return async_schedule_domain(func, data, &async_dfl_domain);
 }
 EXPORT_SYMBOL_GPL(async_schedule);
 
@@ -225,7 +217,27 @@ EXPORT_SYMBOL_GPL(async_schedule);
 async_cookie_t async_schedule_domain(async_func_t func, void *data,
 				     struct async_domain *domain)
 {
-	return __async_schedule(func, data, domain);
+	struct async_work *work;
+	async_cookie_t cookie = 0;
+
+	work = NULL;
+	if (READ_ONCE(async_pending_count) < MAX_WORK)
+		work = async_work_create(func, data, GFP_ATOMIC);
+	if (work) {
+		cookie = queue_async_work(domain, work, GFP_ATOMIC);
+		async_work_put(work);
+	}
+	if (!cookie) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&async_lock, flags);
+		cookie = assign_cookie();
+		spin_unlock_irqrestore(&async_lock, flags);
+
+		func(data, cookie);
+	}
+
+	return cookie;
 }
 EXPORT_SYMBOL_GPL(async_schedule_domain);
 
@@ -251,10 +263,8 @@ EXPORT_SYMBOL_GPL(async_synchronize_full);
  */
 void async_unregister_domain(struct async_domain *domain)
 {
-	spin_lock_irq(&async_lock);
-	WARN_ON(!domain->registered || !list_empty(&domain->pending));
+	WARN_ON(!list_empty(&domain->pending));
 	domain->registered = 0;
-	spin_unlock_irq(&async_lock);
 }
 EXPORT_SYMBOL_GPL(async_unregister_domain);
 
@@ -267,7 +277,7 @@ EXPORT_SYMBOL_GPL(async_unregister_domain);
  */
 void async_synchronize_full_domain(struct async_domain *domain)
 {
-	async_synchronize_cookie_domain(ASYNC_COOKIE_MAX, domain);
+	async_synchronize_cookie_domain(0, domain);
 }
 EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
 
@@ -282,19 +292,49 @@ EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
  */
 void async_synchronize_cookie_domain(async_cookie_t cookie, struct async_domain *domain)
 {
-	ktime_t uninitialized_var(starttime), delta, endtime;
+	ktime_t uninitialized_var(starttime);
+	struct list_head *pending;
+
+	pending = domain ? &domain->pending : &async_global_pending;
 
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
 		pr_debug("async_waiting @ %i\n", task_pid_nr(current));
 		starttime = ktime_get();
 	}
 
-	wait_event(async_done, lowest_in_progress(domain) >= cookie);
+	do {
+		struct kfence *fence = NULL;
+		unsigned long flags;
 
-	if (initcall_debug && system_state == SYSTEM_BOOTING) {
-		endtime = ktime_get();
-		delta = ktime_sub(endtime, starttime);
+		spin_lock_irqsave(&async_lock, flags);
+		if (!list_empty(pending)) {
+			struct async_entry *entry;
+
+			if (cookie) {
+				entry = list_first_entry(pending,
+							 struct async_entry,
+							 pending_link[!domain]);
+				if ((s64)(cookie - entry->cookie) > 0)
+					fence = kfence_get(&entry->base.fence);
+			} else {
+				entry = list_last_entry(pending,
+							struct async_entry,
+							pending_link[!domain]);
+				cookie = entry->cookie;
+				fence = kfence_get(&entry->base.fence);
+			}
+		}
+		spin_unlock_irqrestore(&async_lock, flags);
+
+		if (!fence)
+			break;
+
+		kfence_wait(fence);
+		kfence_put(fence);
+	} while (1);
 
+	if (initcall_debug && system_state == SYSTEM_BOOTING) {
+		ktime_t delta = ktime_sub(ktime_get(), starttime);
 		pr_debug("async_continuing @ %i after %lli usec\n",
 			task_pid_nr(current),
 			(long long)ktime_to_ns(delta) >> 10);
@@ -327,3 +367,11 @@ bool current_is_async(void)
 	return worker && worker->current_func == async_run_entry_fn;
 }
 EXPORT_SYMBOL_GPL(current_is_async);
+
+void init_async_domain(struct async_domain *domain, bool registered)
+{
+	memset(domain, 0, sizeof(*domain));
+	INIT_LIST_HEAD(&domain->pending);
+	domain->registered = registered;
+}
+EXPORT_SYMBOL_GPL(init_async_domain);
diff --git a/lib/test-async-domain.c b/lib/test-async-domain.c
index 558a71414fb6..ecbeba9cd65b 100644
--- a/lib/test-async-domain.c
+++ b/lib/test-async-domain.c
@@ -7,6 +7,19 @@
 #include <linux/async.h>
 #include <linux/module.h>
 #include <linux/delay.h>
+#include <linux/slab.h>
+
+static struct kfence *alloc_kfence(void)
+{
+	struct kfence *fence;
+
+	fence = kmalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return NULL;
+
+	kfence_init(fence, NULL);
+	return fence;
+}
 
 static void task_A(void *data, async_cookie_t cookie)
 {
@@ -21,6 +34,269 @@ static void task_B(void *data, async_cookie_t cookie)
 	smp_store_mb(*result, 'B');
 }
 
+static int __init test_x(const char *name,
+			 struct async_domain *domain,
+			 async_func_t func,
+			 const long expected)
+{
+	struct async_work *A;
+	long result = 0;
+
+	A = async_work_create(func, &result, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	async_work_wait(A);
+	async_work_put(A);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			name, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_A(struct async_domain *domain)
+{
+	return test_x(__func__, domain, task_A, 'A');
+}
+
+static int __init test_B(struct async_domain *domain)
+{
+	return test_x(__func__, domain, task_B, 'B');
+}
+
+static int __init test_x_fence(const char *name,
+			       struct async_domain *domain,
+			       async_func_t func,
+			       const long expected)
+{
+	struct async_work *A;
+	struct kfence *fence;
+	long result = 0;
+
+	A = async_work_create(func, &result, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	fence = alloc_kfence();
+	if (!fence)
+		return -ENOMEM;
+
+	queue_async_work(domain, A, GFP_KERNEL);
+
+	kfence_await_kfence(fence, &A->fence, GFP_KERNEL);
+	kfence_complete(fence);
+
+	kfence_wait(fence);
+
+	async_work_put(A);
+	kfence_put(fence);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			name, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_A_fence(struct async_domain *domain)
+{
+	return test_x_fence(__func__, domain, task_A, 'A');
+}
+
+static int __init test_B_fence(struct async_domain *domain)
+{
+	return test_x_fence(__func__, domain, task_B, 'B');
+}
+
+static int __init test_x_fence_y(const char *name,
+				 struct async_domain *domain,
+				 async_func_t x,
+				 async_func_t y,
+				 const long expected)
+{
+	struct async_work *A, *B;
+	struct kfence *fence;
+	long result = 0;
+
+	A = async_work_create(x, &result, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	B = async_work_create(y, &result, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	fence = alloc_kfence();
+	if (!fence)
+		return -ENOMEM;
+
+	kfence_await_kfence(fence, &A->fence, GFP_KERNEL);
+	kfence_complete(fence);
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	async_work_put(A);
+
+	async_work_after(B, fence);
+	queue_async_work(domain, B, GFP_KERNEL);
+	kfence_put(fence);
+
+	async_work_wait(B);
+	async_work_put(B);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			name, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_A_fence_B(struct async_domain *domain)
+{
+	return test_x_fence_y(__func__, domain, task_A, task_B, 'B');
+}
+
+static int __init test_B_fence_A(struct async_domain *domain)
+{
+	return test_x_fence_y(__func__, domain, task_B, task_A, 'A');
+}
+
+struct long_context {
+	struct kfence *barrier;
+	long *src;
+	long result;
+};
+
+static void task_wait(void *data, async_cookie_t cookie)
+{
+	struct long_context *ctx = data;
+
+	kfence_wait(ctx->barrier);
+	smp_store_mb(ctx->result, READ_ONCE(*ctx->src));
+}
+
+static int __init test_pause(struct async_domain *domain)
+{
+	struct long_context ctx;
+	struct async_work *A, *B;
+	const long expected = 'B';
+	long out_B = 'A';
+
+	ctx.result = 0;
+	ctx.src = &out_B;
+
+	A = async_work_create(task_wait, &ctx, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	B = async_work_create(task_B, &out_B, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	ctx.barrier = kfence_get(&B->fence);
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	queue_async_work(domain, B, GFP_KERNEL);
+	async_work_put(B);
+
+	async_work_wait(A);
+	async_work_put(A);
+
+	if (READ_ONCE(ctx.result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, ctx.result);
+		return -EINVAL;
+	}
+
+	kfence_put(ctx.barrier);
+
+	return 0;
+}
+
+static void task_signal(void *data, async_cookie_t cookie)
+{
+	struct long_context *ctx = data;
+
+	kfence_complete(ctx->barrier);
+}
+
+static int __init test_manual(struct async_domain *domain)
+{
+	struct long_context ctx;
+	struct async_work *A, *B, *C;
+	const long expected = 'B';
+	long out_B = 'A';
+
+	ctx.result = 0;
+	ctx.src = &out_B;
+	ctx.barrier = alloc_kfence();
+
+	A = async_work_create(task_wait, &ctx, GFP_KERNEL);
+	if (!A)
+		return -ENOMEM;
+
+	B = async_work_create(task_B, &out_B, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	C = async_work_create(task_signal, &ctx, GFP_KERNEL);
+	if (!C)
+		return -ENOMEM;
+
+	async_work_after(C, &B->fence);
+
+	queue_async_work(domain, A, GFP_KERNEL);
+	queue_async_work(domain, B, GFP_KERNEL);
+	queue_async_work(domain, C, GFP_KERNEL);
+
+	async_work_wait(A);
+
+	async_work_put(C);
+	async_work_put(B);
+	async_work_put(A);
+	kfence_put(ctx.barrier);
+
+	if (READ_ONCE(ctx.result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, ctx.result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init test_sync(struct async_domain *domain)
+{
+	struct async_work *B;
+	const long expected = 'B';
+	long result = 0;
+
+	B = async_work_create(task_B, &result, GFP_KERNEL);
+	if (!B)
+		return -ENOMEM;
+
+	queue_async_work(domain, B, GFP_KERNEL);
+	async_work_put(B);
+
+	async_synchronize_full_domain(domain);
+
+	if (READ_ONCE(result) != expected) {
+		pr_warn("%s expected %c [%ld], got %ld\n",
+			__func__, (char)expected, expected, result);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int __init test_implicit(struct async_domain *domain)
 {
 	const long expected = 'B';
@@ -99,24 +375,62 @@ static int __init test_async_domain_init(void)
 
 	pr_info("Testing async-domains\n");
 
-	ret = test_implicit(&domain);
+	ret = test_A(&domain);
 	if (ret)
 		return ret;
 
+	ret = test_A_fence(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_A_fence_B(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_B(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_B_fence(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_B_fence_A(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_pause(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_manual(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_sync(&domain);
+	if (ret)
+		goto err;
+
+	ret = test_implicit(&domain);
+	if (ret)
+		goto err;
+
 	ret = test_registered(&domain);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = perf_nop(1, 100);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = perf_nop(128, 1000);
 	if (ret)
-		return ret;
+		goto err;
 
+err:
+	async_synchronize_full_domain(&domain);
 	async_unregister_domain(&domain);
-	return 0;
+	return ret;
 }
 
 static void __exit test_async_domain_cleanup(void)
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 6/7] async: Add execution barriers
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
                     ` (4 preceding siblings ...)
  2016-07-17 12:58   ` [PATCH v2 5/7] async: Add support for explicit fine-grained barriers Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  2016-07-17 12:58   ` [PATCH v2 7/7] async: Introduce a dependency resolver for parallel execution Chris Wilson
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo,
	Daniel Vetter, Andrew Morton, Ingo Molnar, Kees Cook,
	Thomas Gleixner, Paul E. McKenney, Dan Williams, Andrey Ryabinin,
	Davidlohr Bueso, Nikolay Aleksandrov, David S. Miller,
	Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, linaro-mm-sig

A frequent mode of operation is fanning out N tasks to execute in
parallel, collating results, fanning out M tasks, rinse and repeat. This
is also common to the notion of the async/sync kernel domain split.
A barrier provides a mechanism by which all work queued after the
barrier must wait (i.e. not be scheduled) until all work queued before the
barrier is completed, as the sketch below illustrates.
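
A hedged sketch of the fan-out/collate/fan-out pattern (do_phase1(),
do_phase2(), n and data[] are illustrative names):

	ASYNC_DOMAIN_EXCLUSIVE(domain);
	int i;

	for (i = 0; i < n; i++)
		async_schedule_domain(do_phase1, &data[i], &domain);

	/* Nothing queued after this point runs until phase1 is done. */
	async_barrier_domain(&domain);

	for (i = 0; i < n; i++)
		async_schedule_domain(do_phase2, &data[i], &domain);

	async_synchronize_full_domain(&domain);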

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/async.h |  4 +++
 kernel/async.c        | 72 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 76 insertions(+)

diff --git a/include/linux/async.h b/include/linux/async.h
index e7d7289a9889..de44306f8cb7 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -26,6 +26,7 @@ struct async_work {
 
 struct async_domain {
 	struct list_head pending;
+	struct kfence *barrier;
 	unsigned registered:1;
 };
 
@@ -59,6 +60,9 @@ extern void async_synchronize_cookie(async_cookie_t cookie);
 extern void async_synchronize_cookie_domain(async_cookie_t cookie,
 					    struct async_domain *domain);
 
+extern void async_barrier(void);
+extern void async_barrier_domain(struct async_domain *domain);
+
 extern bool current_is_async(void);
 
 extern struct async_work *
diff --git a/kernel/async.c b/kernel/async.c
index 0d695919a60d..5cfa398a19b2 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -154,6 +154,15 @@ struct async_work *async_work_create(async_func_t func, void *data, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(async_work_create);
 
+static void async_barrier_delete(struct async_domain *domain)
+{
+	if (!domain->barrier)
+		return;
+
+	kfence_put(domain->barrier);
+	domain->barrier = NULL;
+}
+
 async_cookie_t queue_async_work(struct async_domain *domain,
 				struct async_work *work,
 				gfp_t gfp)
@@ -174,6 +183,10 @@ async_cookie_t queue_async_work(struct async_domain *domain,
 	async_pending_count++;
 	spin_unlock_irqrestore(&async_lock, flags);
 
+	if (domain->barrier &&
+	    !kfence_await_kfence(&entry->base.fence, domain->barrier, gfp))
+		async_barrier_delete(domain);
+
 	/* mark that this task has queued an async job, used by module init */
 	current->flags |= PF_USED_ASYNC;
 
@@ -241,6 +254,63 @@ async_cookie_t async_schedule_domain(async_func_t func, void *data,
 }
 EXPORT_SYMBOL_GPL(async_schedule_domain);
 
+static struct kfence *__async_barrier_create(struct async_domain *domain)
+{
+	struct kfence *fence;
+	struct async_entry *entry;
+	unsigned long flags;
+	int ret;
+
+	fence = kmalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		goto out_sync;
+
+	kfence_init(fence, NULL);
+
+	ret = 0;
+	spin_lock_irqsave(&async_lock, flags);
+	list_for_each_entry(entry, &domain->pending, pending_link[0]) {
+		ret |= kfence_await_kfence(fence,
+					   &entry->base.fence,
+					   GFP_ATOMIC);
+		if (ret < 0)
+			break;
+	}
+	spin_unlock_irqrestore(&async_lock, flags);
+	if (ret <= 0)
+		goto out_put;
+
+	if (domain->barrier)
+		kfence_await_kfence(fence, domain->barrier, GFP_KERNEL);
+
+	kfence_complete(fence);
+	return fence;
+
+out_put:
+	kfence_complete(fence);
+	kfence_put(fence);
+out_sync:
+	async_synchronize_full_domain(domain);
+	return NULL;
+}
+
+void async_barrier(void)
+{
+	async_barrier_domain(&async_dfl_domain);
+}
+EXPORT_SYMBOL_GPL(async_barrier);
+
+void async_barrier_domain(struct async_domain *domain)
+{
+	struct kfence *barrier = __async_barrier_create(domain);
+
+	if (domain->barrier)
+		kfence_put(domain->barrier);
+
+	domain->barrier = barrier;
+}
+EXPORT_SYMBOL_GPL(async_barrier_domain);
+
 /**
  * async_synchronize_full - synchronize all asynchronous function calls
  *
@@ -264,6 +334,8 @@ EXPORT_SYMBOL_GPL(async_synchronize_full);
 void async_unregister_domain(struct async_domain *domain)
 {
 	WARN_ON(!list_empty(&domain->pending));
+
+	async_barrier_delete(domain);
 	domain->registered = 0;
 }
 EXPORT_SYMBOL_GPL(async_unregister_domain);
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 7/7] async: Introduce a dependency resolver for parallel execution
  2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
                     ` (5 preceding siblings ...)
  2016-07-17 12:58   ` [PATCH v2 6/7] async: Add execution barriers Chris Wilson
@ 2016-07-17 12:58   ` Chris Wilson
  6 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2016-07-17 12:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Chris Wilson, Sumit Semwal, Shuah Khan, Tejun Heo,
	Daniel Vetter, Andrew Morton, Ingo Molnar, Kees Cook,
	Thomas Gleixner, Paul E. McKenney, Dan Williams, Andrey Ryabinin,
	Davidlohr Bueso, Nikolay Aleksandrov, David S. Miller,
	Peter Zijlstra (Intel),
	Rasmus Villemoes, Andy Shevchenko, Dmitry Vyukov,
	Alexander Potapenko, linux-media, linaro-mm-sig

A challenge in driver initialisation is the coordination of many small
tasks, some independent, some interdependent. We would like to
schedule the independent tasks for execution in parallel across as many
cores as possible for rapid initialisation, and then schedule all the
dependent tasks once they have completed, again running as many of those
in parallel as is possible.

Resolving the interdependencies by hand is time consuming and error
prone. Instead, we want to declare what dependencies a particular task
has, and what that task provides, and let a runtime dependency solver
work out what tasks to run and when, and which in parallel. To this end,
we introduce the struct async_dependency_graph building upon the kfence
and async_work from the previous patches to allow for the runtime
computation of the topological task ordering.

The graph is constructed with async_dependency_graph_build(), which
takes the task, its dependencies and what it provides, and builds the
graph of kfences required for ordering. Additional kfences can be
inserted through async_dependency_depends() and
async_dependency_provides() for manual control of the execution order,
and async_dependency_get() retrieves a kfence for inspection or waiting
upon.
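
A hedged sketch (stage names and task functions are illustrative):

	ASYNC_DEPENDENCY_GRAPH(adg);

	async_dependency_graph_build(&adg, setup_hw, dev, NULL, "hw");
	async_dependency_graph_build(&adg, setup_irq, dev, "hw", "irq");
	async_dependency_graph_build(&adg, load_fw, dev, "hw", "fw");
	async_dependency_graph_build(&adg, start_dev, dev, "irq,fw", NULL);

	/* setup_irq() and load_fw() may run in parallel after setup_hw();
	 * start_dev() runs only once both have completed.
	 */
	async_dependency_graph_execute(&adg);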

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
---
 include/linux/async.h                              |  38 +++
 kernel/async.c                                     | 250 ++++++++++++++++
 lib/Kconfig.debug                                  |  12 +
 lib/Makefile                                       |   1 +
 lib/test-async-dependency-graph.c                  | 316 +++++++++++++++++++++
 .../selftests/lib/async-dependency-graph.sh        |  10 +
 6 files changed, 627 insertions(+)
 create mode 100644 lib/test-async-dependency-graph.c
 create mode 100755 tools/testing/selftests/lib/async-dependency-graph.sh

diff --git a/include/linux/async.h b/include/linux/async.h
index de44306f8cb7..0a0040d3fc01 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -15,6 +15,7 @@
 #include <linux/types.h>
 #include <linux/kfence.h>
 #include <linux/list.h>
+#include <linux/rbtree.h>
 
 typedef u64 async_cookie_t;
 typedef void (*async_func_t) (void *data, async_cookie_t cookie);
@@ -101,4 +102,41 @@ extern async_cookie_t queue_async_work(struct async_domain *domain,
 				       gfp_t gfp);
 extern async_cookie_t schedule_async_work(struct async_work *work);
 
+/* Build a graph of work based on dependencies generated by keywords.
+ * The graph must be acyclic. Can be used to both generate a topological
+ * ordering of tasks, and to execute independent chains of tasks in
+ * parallel.
+ */
+
+struct async_dependency_graph {
+	struct rb_root root;
+	struct list_head list;
+};
+
+#define ASYNC_DEPENDENCY_GRAPH_INIT(_name) {				\
+	.root = RB_ROOT,						\
+	.list = LIST_HEAD_INIT(_name.list),				\
+}
+
+#define ASYNC_DEPENDENCY_GRAPH(_name) \
+	struct async_dependency_graph _name = ASYNC_DEPENDENCY_GRAPH_INIT(_name)
+
+extern int async_dependency_graph_build(struct async_dependency_graph *adg,
+					async_func_t fn, void *data,
+					const char *depends,
+					const char *provides);
+
+extern int async_dependency_depends(struct async_dependency_graph *adg,
+				    struct kfence *fence,
+				    const char *depends);
+
+extern int async_dependency_provides(struct async_dependency_graph *adg,
+				     struct kfence *fence,
+				     const char *provides);
+
+extern struct kfence *async_dependency_get(struct async_dependency_graph *adg,
+					   const char *name);
+
+extern void async_dependency_graph_execute(struct async_dependency_graph *adg);
+
 #endif
diff --git a/kernel/async.c b/kernel/async.c
index 5cfa398a19b2..ac12566f2e0b 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -447,3 +447,253 @@ void init_async_domain(struct async_domain *domain, bool registered)
 	domain->registered = registered;
 }
 EXPORT_SYMBOL_GPL(init_async_domain);
+
+struct async_dependency {
+	struct kfence fence;
+	struct rb_node node;
+	struct list_head link;
+	char name[0];
+};
+
+static struct async_dependency *
+__lookup_dependency(struct async_dependency_graph *adg, const char *name)
+{
+	struct rb_node **p, *parent;
+	struct async_dependency *d;
+	int len;
+
+	parent = NULL;
+	p = &adg->root.rb_node;
+	while (*p) {
+		int cmp;
+
+		parent = *p;
+		d = container_of(parent, typeof(*d), node);
+
+		cmp = strcmp(name, d->name);
+		if (cmp < 0)
+			p = &parent->rb_left;
+		else if (cmp > 0)
+			p = &parent->rb_right;
+		else
+			return d;
+	}
+
+	len = strlen(name) + 1;
+	d = kmalloc(sizeof(*d) + len, GFP_KERNEL);
+	if (!d)
+		return ERR_PTR(-ENOMEM);
+
+	kfence_init(&d->fence, NULL);
+	memcpy(d->name, name, len);
+
+	rb_link_node(&d->node, parent, p);
+	rb_insert_color(&d->node, &adg->root);
+	list_add_tail(&d->link, &adg->list);
+
+	return d;
+}
+
+/**
+ * async_dependency_depends - declare a prerequisite fence for a named stage
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @fence: the kfence to add that depends upon the named stage completing
+ * @depends: the named stage
+ *
+ * This function appends @fence into the async_dependency_graph @adg after
+ * the @depends stage is completed. That is, the @fence is signaled once
+ * the chain of dependencies up to and including @depends is complete.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ * In particular, note that if CONFIG_KFENCE_CHECK_DAG is enabled, the
+ * dependency graph will be checked for cycles, and -EINVAL reported
+ * in such cases. A dependency cycle leads to unexecutable code.
+ */
+int async_dependency_depends(struct async_dependency_graph *adg,
+			     struct kfence *fence,
+			     const char *depends)
+{
+	struct async_dependency *d;
+
+	d = __lookup_dependency(adg, depends);
+	if (IS_ERR(d))
+		return PTR_ERR(d);
+
+	return kfence_await_kfence(fence, &d->fence, GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(async_dependency_depends);
+
+/**
+ * async_dependency_provides - declare a named stage that should follow
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @fence: the kfence to add that provides the named stage with a signal
+ * @provides: the named stage
+ *
+ * This function inserts @fence into the async_dependency_graph @adg before
+ * the @provides stage is signaled. That is, the @fence signals the
+ * @provides stage once it completes (and once all providers have
+ * completed, work from the @provides stage commences).
+ *
+ * Returns: 0 on success, negative error code on failure.
+ * In particular, note that if CONFIG_KFENCE_CHECK_DAG is enabled, the
+ * dependency graph will be checked for cycles, and -EINVAL reported
+ * in such cases. A dependency cycle leads to unexecutable code.
+ */
+int async_dependency_provides(struct async_dependency_graph *adg,
+			      struct kfence *fence,
+			      const char *provides)
+{
+	struct async_dependency *d;
+
+	d = __lookup_dependency(adg, provides);
+	if (IS_ERR(d))
+		return PTR_ERR(d);
+
+	return kfence_await_kfence(&d->fence, fence, GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(async_dependency_provides);
+
+/**
+ * async_dependency_get - lookup the kfence for a named stage
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @name: the named stage
+ *
+ * This function looks up the kfence associated with the named stage. This
+ * fence will be signaled once the named stage is ready. For example,
+ * waiting on that fence will wait until all prior dependencies of that
+ * named stage have been completed.
+ *
+ * Returns: a new reference on the kfence. The caller must release the
+ * reference with kfence_put() when finished.
+ */
+struct kfence *async_dependency_get(struct async_dependency_graph *adg,
+				    const char *name)
+{
+	struct async_dependency *d;
+
+	d = __lookup_dependency(adg, name);
+	if (IS_ERR(d))
+		return ERR_CAST(d);
+
+	return kfence_get(&d->fence);
+}
+EXPORT_SYMBOL_GPL(async_dependency_get);
+
+static int __adg_for_each_token(struct async_dependency_graph *adg,
+				struct kfence *fence,
+				const char *string,
+				int (*fn)(struct async_dependency_graph *,
+					  struct kfence *,
+					  const char *))
+{
+	char *tmp, *s, *t;
+	int ret = 0;
+
+	if (!string)
+		return 0;
+
+	tmp = kstrdup(string, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	for (s = tmp; (t = strsep(&s, ",")); ) {
+		if (*t == '\0')
+			continue;
+
+		ret = fn(adg, fence, t);
+		if (ret < 0)
+			break;
+	}
+
+	kfree(tmp);
+	return ret;
+}
+
+/**
+ * async_dependency_graph_build - insert a task into the dependency graph
+ * @adg: the async_dependency_graph for tracking the named stages
+ * @fn: the async_func_t to execute
+ * @data: the data to pass to the @fn
+ * @depends: a comma-separated list of named stages that must complete
+ *           before the task can execute
+ * @provides: a comma-separated list of named stages that will be signaled
+ *            when this task completes
+ *
+ * This function inserts the task @fn into the async_dependency_graph @adg
+ * so that it executes only after all the named stages in @depends have
+ * completed. Upon completion of the task, all the named stages in
+ * @provides are signaled (and once every provider of a stage has finished,
+ * the tasks depending upon that stage may execute).
+ *
+ * If a task has no dependency (@depends is NULL or an empty string), it will
+ * be scheduled for execution as soon as it is inserted into the graph @adg.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ * In particular, note that if CONFIG_KFENCE_CHECK_DAG is enabled, the
+ * dependency graph will be checked for cycles and -EINVAL returned in
+ * such cases: a dependency cycle can never be executed.
+ */
+int
+async_dependency_graph_build(struct async_dependency_graph *adg,
+			     async_func_t fn, void *data,
+			     const char *depends, const char *provides)
+{
+	struct async_work *work;
+	int ret;
+
+	work = async_work_create(fn, data, GFP_KERNEL);
+	if (!work)
+		return -ENOMEM;
+
+	ret = __adg_for_each_token(adg, &work->fence, depends,
+				   async_dependency_depends);
+	if (ret < 0)
+		goto err;
+
+	ret = __adg_for_each_token(adg, &work->fence, provides,
+				   async_dependency_provides);
+	if (ret < 0)
+		goto err;
+
+	if (!schedule_async_work(work)) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	ret = 0;
+out:
+	async_work_put(work);
+	return ret;
+
+err:
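+	/*
+	 * Complete the fence even on failure so that any stage already
+	 * awaiting it is not stalled forever; clearing the flags first is
+	 * assumed to disarm the task callback.
+	 */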
+	work->fence.flags = 0;
+	kfence_complete(&work->fence);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(async_dependency_graph_build);
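+
+/*
+ * Illustrative sketch (not part of this patch): build a diamond,
+ * A -> {B, C} -> D, using comma-separated stage lists; do_a() and
+ * friends are hypothetical async_func_t callbacks.
+ *
+ *	ASYNC_DEPENDENCY_GRAPH(adg);
+ *
+ *	async_dependency_graph_build(&adg, do_a, NULL, NULL, "A");
+ *	async_dependency_graph_build(&adg, do_b, NULL, "A", "B");
+ *	async_dependency_graph_build(&adg, do_c, NULL, "A", "C");
+ *	async_dependency_graph_build(&adg, do_d, NULL, "B,C", NULL);
+ */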
+
+/**
+ * async_dependency_graph_execute - execute the dependency graph
+ * @adg: the async_dependency_graph
+ *
+ * This function marks the @adg as ready for execution. As soon as all the
+ * dependencies of a task have been completed, that task is executed. Upon
+ * completion it signals each stage it @provides, and any task whose
+ * @depends are then all complete is in turn scheduled for execution.
+ *
+ * Tasks are executed in the topological order of their dependencies. If two,
+ * or more, tasks are not dependent on each other they may be run concurrently.
+ *
+ * The graph @adg is freed upon execution.
+ */
+void async_dependency_graph_execute(struct async_dependency_graph *adg)
+{
+	struct async_dependency *d, *next;
+
+	list_for_each_entry_safe(d, next, &adg->list, link) {
+		kfence_complete(&d->fence);
+		kfence_put(&d->fence);
+	}
+}
+EXPORT_SYMBOL_GPL(async_dependency_graph_execute);
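+
+/*
+ * Illustrative sketch (not part of this patch): kick off the diamond
+ * built above and wait for it to drain; do_b() and do_c() may run
+ * concurrently, do_d() only after both have completed.
+ *
+ *	async_dependency_graph_execute(&adg);
+ *	async_synchronize_full();
+ */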
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 4b180aed88b6..ad3c94ec909e 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1795,6 +1795,18 @@ config ASYNC_DOMAIN_SELFTEST
 
 	  Say N if you are unsure.
 
+config ASYNC_DEPENDENCY_GRAPH_SELFTEST
+	tristate "Asynchronous dependency graph self tests"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option provides a kernel module that can be used to test
+	  the asynchronous dependency graph. This option is not useful for
+	  distributions or general kernels, but only for kernel developers
+	  working on the async_dependency_graph facility.
+
+	  Say N if you are unsure.
+
 config BACKTRACE_SELF_TEST
 	tristate "Self test for the backtrace code"
 	depends on DEBUG_KERNEL
diff --git a/lib/Makefile b/lib/Makefile
index 5864053cf63e..fd43aaa8846d 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -29,6 +29,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
 obj-$(CONFIG_ASYNC_DOMAIN_SELFTEST) += test-async-domain.o
+obj-$(CONFIG_ASYNC_DEPENDENCY_GRAPH_SELFTEST) += test-async-dependency-graph.o
 obj-$(CONFIG_KFENCE_SELFTEST) += test-kfence.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/test-async-dependency-graph.c b/lib/test-async-dependency-graph.c
new file mode 100644
index 000000000000..3bf2d91a67e6
--- /dev/null
+++ b/lib/test-async-dependency-graph.c
@@ -0,0 +1,316 @@
+/*
+ * Test cases for async-dependency-graph facility.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/async.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+struct chain {
+	atomic_t idx;
+	unsigned long values[];
+};
+
+struct task_write {
+	struct chain *chain;
+	unsigned long value;
+};
+
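+/*
+ * Record execution order: each call claims the next free slot in the
+ * shared chain and writes its value there.
+ */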
+static void __init task_write(void *arg, async_cookie_t cookie)
+{
+	struct task_write *t = arg;
+	int idx = atomic_inc_return(&t->chain->idx) - 1;
+	WRITE_ONCE(t->chain->values[idx], t->value);
+}
+
+static void __init task_nop(void *data, async_cookie_t cookie)
+{
+}
+
+static int __init test_ordering(int nchain, int nwide)
+{
+	ASYNC_DEPENDENCY_GRAPH(adg);
+	struct chain **chains;
+	struct task_write *tests, *t;
+	int c, w, ret;
+
+	/* Test implementation of simple chains within the dependency graphs */
+	pr_debug("%s(nchain=%d, nwide=%d)\n", __func__, nchain, nwide);
+
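+	/*
+	 * Each of the nwide chains is a linear sequence of named stages,
+	 * "0.w" -> "1.w" -> ... -> "(nchain-1).w"; every link writes its
+	 * position into the shared chain, so finding the values in order
+	 * proves the dependencies were honoured.
+	 */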
+	chains = kmalloc_array(nwide, sizeof(struct chain *), GFP_KERNEL);
+	tests = kmalloc_array(nchain * nwide, sizeof(struct task_write),
+			      GFP_KERNEL);
+	if (!chains || !tests) {
+		kfree(chains);
+		kfree(tests);
+		return -ENOMEM;
+	}
+
+	t = tests;
+	for (w = 0; w < nwide; w++) {
+		char *depends = NULL, *provides;
+
+		chains[w] = kzalloc(sizeof(struct chain) +
+				    nchain*sizeof(unsigned long),
+				    GFP_KERNEL);
+
+		for (c = 0; c < nchain; c++) {
+			t->chain = chains[w];
+			t->value = c;
+			provides = kasprintf(GFP_KERNEL, "%d.%d", c, w);
+			async_dependency_graph_build(&adg, task_write, t,
+						     depends, provides);
+			kfree(depends);
+			depends = provides;
+			t++;
+		}
+
+		kfree(depends);
+	}
+	async_dependency_graph_execute(&adg);
+	async_synchronize_full();
+
+	ret = 0;
+	kfree(tests);
+	for (w = 0; w < nwide; w++) {
+		for (c = 0; c < nchain; c++) {
+			if (chains[w]->values[c] != c) {
+				pr_err("%s(%d, %d): Invalid execution order (chain %d, position %d): found %d\n",
+				       __func__, nchain, nwide,
+				       w, c, (int)chains[w]->values[c]);
+
+				ret = -EINVAL;
+			}
+		}
+		kfree(chains[w]);
+	}
+	kfree(chains);
+
+	return ret;
+}
+
+static int __init test_barrier(int nwide)
+{
+	ASYNC_DEPENDENCY_GRAPH(adg);
+	struct chain **chains;
+	struct task_write *tests, *t;
+	int c, w, ret;
+
+	/* Test implementation of barriers within the dependency graphs */
+	pr_debug("%s(nwide=%d)\n", __func__, nwide);
+
+	chains = kmalloc_array(nwide, sizeof(struct chain *), GFP_KERNEL);
+	tests = kmalloc_array(2 * nwide, sizeof(struct task_write), GFP_KERNEL);
+	if (!chains || !tests) {
+		kfree(chains);
+		kfree(tests);
+		return -ENOMEM;
+	}
+
+	t = tests;
+
+	/* A,B act as a barrier running between the nops */
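+	/*
+	 * Per column w the intended graph is:
+	 *   {nop, nop} -> "nop1.w" -> A -> "A.w" -> {nop2, nop3} -> B
+	 * B waits on both "nop2.w" and "nop3.w", so A (value 0) must be
+	 * recorded before B (value 1).
+	 */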
+	for (w = 0; w < nwide; w++) {
+		char *provides, *depends;
+
+		chains[w] = kzalloc(sizeof(struct chain) +
+				    2*sizeof(unsigned long),
+				    GFP_KERNEL);
+
+		depends = NULL;
+
+		provides = kasprintf(GFP_KERNEL, "nop1.%d", w);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+
+		kfree(depends);
+		depends = provides;
+
+		provides = kasprintf(GFP_KERNEL, "A.%d", w);
+		t->chain = chains[w];
+		t->value = 0;
+		async_dependency_graph_build(&adg, task_write, t,
+					     depends, provides);
+		t++;
+
+		kfree(depends);
+		depends = provides;
+
+		provides = kasprintf(GFP_KERNEL, "nop2.%d", w);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+		kfree(provides);
+
+		provides = kasprintf(GFP_KERNEL, "nop3.%d", w);
+		async_dependency_graph_build(&adg, task_nop, NULL,
+					     depends, provides);
+		kfree(provides);
+
+		kfree(depends);
+		depends = kasprintf(GFP_KERNEL, "nop2.%d,nop3.%d", w, w);
+		t->chain = chains[w];
+		t->value = 1;
+		async_dependency_graph_build(&adg, task_write, t,
+					     depends, NULL);
+		kfree(depends);
+		t++;
+	}
+	async_dependency_graph_execute(&adg);
+	async_synchronize_full();
+
+	ret = 0;
+	kfree(tests);
+	for (w = 0; w < nwide; w++) {
+		for (c = 0; c < 2; c++) {
+			if (chains[w]->values[c] != c) {
+				pr_err("%s(%d): Invalid execution order (chain %d, position %d): found %d\n",
+				       __func__, nwide,
+				       w, c, (int)chains[w]->values[c]);
+
+				ret = -EINVAL;
+			}
+		}
+		kfree(chains[w]);
+	}
+	kfree(chains);
+
+	return ret;
+}
+
+static int __init test_dag(void)
+{
+	ASYNC_DEPENDENCY_GRAPH(adg);
+
+	/* Test detection of cycles within the dependency graphs */
+	pr_debug("%s\n", __func__);
+
+	if (!IS_ENABLED(CONFIG_KFENCE_CHECK_DAG))
+		return 0;
+
+	async_dependency_graph_build(&adg, task_nop, NULL, "__start__", "A");
+	if (async_dependency_graph_build(&adg, task_nop, NULL, "A", "A") != -EINVAL) {
+		pr_err("Failed to detect AA cycle\n");
+		return -EINVAL;
+	}
+
+	async_dependency_graph_build(&adg, task_nop, NULL, "A", "B");
+	if (async_dependency_graph_build(&adg, task_nop, NULL, "B", "A") != -EINVAL) {
+		pr_err("Failed to detect ABA cycle\n");
+		return -EINVAL;
+	}
+
+	async_dependency_graph_build(&adg, task_nop, NULL, "B", "C");
+	if (async_dependency_graph_build(&adg, task_nop, NULL, "C", "A") != -EINVAL) {
+		pr_err("Failed to detect ABCA cycle\n");
+		return -EINVAL;
+	}
+
+	async_dependency_graph_execute(&adg);
+	async_synchronize_full();
+
+	return 0;
+}
+
+static int __init perf_nop(int chain, int width, long timeout_us)
+{
+	ktime_t start;
+	long count, delay;
+
+	count = 0;
+	start = ktime_get();
+	do {
+		ASYNC_DEPENDENCY_GRAPH(adg);
+		ktime_t delta;
+		int c, w;
+
+		for (w = 0; w < width; w++) {
+			char *depends = NULL, *provides;
+
+			for (c = 0; c < chain; c++) {
+				provides = kasprintf(GFP_KERNEL, "%d.%d", c, w);
+				async_dependency_graph_build(&adg,
+							     task_nop, NULL,
+							     depends, provides);
+				kfree(depends);
+				depends = provides;
+			}
+
+			kfree(depends);
+		}
+		async_dependency_graph_execute(&adg);
+		async_synchronize_full();
+		delta = ktime_sub(ktime_get(), start);
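+		/* cheap ns -> us approximation: 2^10 ~= 1000 */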
+		delay = ktime_to_ns(delta) >> 10;
+		count += width * chain;
+	} while (delay < timeout_us);
+
+	pr_info("%ld nop tasks (in chains of %d, %d chains in parallel) completed in %ldus\n",
+		count, chain, width, delay);
+	return 0;
+}
+
+static int __init test_async_dependency_graph_init(void)
+{
+	int ret;
+
+	pr_info("Testing async-dependency-graph\n");
+
+	ret = test_ordering(1, 1);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(2, 1);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(1, 2);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(2, 2);
+	if (ret)
+		return ret;
+
+	ret = test_ordering(26, 26);
+	if (ret)
+		return ret;
+
+	ret = test_dag();
+	if (ret)
+		return ret;
+
+	ret = test_barrier(1);
+	if (ret)
+		return ret;
+
+	ret = test_barrier(16);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(1, 1, 100);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(256, 1, 2000);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(128, 2, 2000);
+	if (ret)
+		return ret;
+
+	ret = perf_nop(16, 16, 2000);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void __exit test_async_dependency_graph_cleanup(void)
+{
+}
+
+module_init(test_async_dependency_graph_init);
+module_exit(test_async_dependency_graph_cleanup);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/selftests/lib/async-dependency-graph.sh b/tools/testing/selftests/lib/async-dependency-graph.sh
new file mode 100755
index 000000000000..ea4bbc76f60f
--- /dev/null
+++ b/tools/testing/selftests/lib/async-dependency-graph.sh
@@ -0,0 +1,10 @@
+#!/bin/sh
+# Runs infrastructure tests using test-async-dependency-graph kernel module
+
+if /sbin/modprobe -q test-async-dependency-graph; then
+	/sbin/modprobe -q -r test-async-dependency-graph
+	echo "async-dependency-graph: ok"
+else
+	echo "async-dependency-graph: [FAIL]"
+	exit 1
+fi
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

end of thread

Thread overview: 24+ messages
2016-06-24  9:08 Introduce fences for N:M completion variables Chris Wilson
2016-06-24  9:08 ` [PATCH 1/9] lib: Add kselftests for async-domains Chris Wilson
2016-06-24  9:08 ` [PATCH 2/9] async: Introduce kfence, a N:M completion mechanism Chris Wilson
2016-07-13  9:38   ` Peter Zijlstra
2016-07-13 10:20     ` Chris Wilson
2016-07-13 11:02       ` Daniel Vetter
2016-07-13 10:26   ` Peter Zijlstra
2016-06-24  9:08 ` [PATCH 3/9] async: Extend kfence to allow struct embedding Chris Wilson
2016-07-13 10:31   ` Peter Zijlstra
2016-06-24  9:08 ` [PATCH 4/9] async: Extend kfences for listening on DMA fences Chris Wilson
2016-06-24  9:08 ` [PATCH 5/9] async: Wrap hrtimer to provide a time source for a kfence Chris Wilson
2016-07-13 10:32   ` Peter Zijlstra
2016-06-24  9:08 ` [PATCH 6/9] async: Add a convenience wrapper for waiting on implicit dma-buf Chris Wilson
2016-06-24  9:08 ` [PATCH 7/9] async: Add support for explicit fine-grained barriers Chris Wilson
2016-06-24  9:08 ` [PATCH 8/9] async: Add execution barriers Chris Wilson
2016-06-24  9:08 ` [PATCH 9/9] async: Introduce a dependency resolver for parallel execution Chris Wilson
2016-07-17 12:58 ` Introduce fences for N:M completion variables [v2] Chris Wilson
2016-07-17 12:58   ` [PATCH v2 1/7] kfence: Introduce kfence, a N:M completion mechanism Chris Wilson
2016-07-17 12:58   ` [PATCH v2 2/7] kfence: Wrap hrtimer to provide a time source for a kfence Chris Wilson
2016-07-17 12:58   ` [PATCH v2 3/7] kfence: Extend kfences for listening on DMA fences Chris Wilson
2016-07-17 12:58   ` [PATCH v2 4/7] async: Add kselftests for async-domains Chris Wilson
2016-07-17 12:58   ` [PATCH v2 5/7] async: Add support for explicit fine-grained barriers Chris Wilson
2016-07-17 12:58   ` [PATCH v2 6/7] async: Add execution barriers Chris Wilson
2016-07-17 12:58   ` [PATCH v2 7/7] async: Introduce a dependency resolver for parallel execution Chris Wilson
