linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes
@ 2016-12-21 18:46 Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 01/12] drm/vgem: Use ww_mutex_(un)lock even with a NULL context Nicolai Hähnle
                   ` (11 more replies)
  0 siblings, 12 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel

Here's a v3 of the series. Some comments:

Patch #1 is already in drm-misc, but I left it here for now for completeness.

Patch #2 is new and affects all types of locks, not just the w/w case. It's
a race that is exceedingly unlikely: basically, we have to be interrupted
right between checking our wait_list position and setting the task state for
long enough that two other tasks can conspire against us in the meantime
(and one of them must have received a signal).

Patch #5 is also new; it gets rid of the static inline ww_mutex_lock
wrappers.

Patch #6 (the heart of the series): Changed to be much less invasive to the
trylock calls. I've also changed the __ww_mutex_add_waiter loop to scan the
wait list in reverse order. It's probably a wash overall.

I kept the logic for setting ww_ctx in mutex_waiter as it was.

About the changes to optimistic spin: I've moved them towards the end so
that they're easy to take or leave. They didn't change much in my
measurements, so I'm not very attached to them, but they do seem more
"right" to me for the w/w case.

Is there a verdict on the whole "use_ww_ctx doesn't work for
mutex_spin_on_owner" question?

Thanks,
Nicolai
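
For reference, the users affected by the "stampede" look roughly like the
classic w/w acquire pattern sketched below (following the pattern from
Documentation/locking/ww-mutex-design.txt; struct obj, my_class and the
abbreviated error handling are illustrative only, not taken from this
series). Before this series, whenever such a transaction acquired a
contended lock and set its context, every sleeping waiter was woken just to
re-check whether it must back off; with the sorted wait list, at most the
first waiter with a context needs that wakeup.

    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(my_class);

    struct obj {
            struct ww_mutex lock;
            /* ... */
    };

    static int lock_both(struct obj *a, struct obj *b,
                         struct ww_acquire_ctx *ctx)
    {
            int ret;

            ww_acquire_init(ctx, &my_class);

            ret = ww_mutex_lock(&a->lock, ctx);
            if (ret)
                    goto out_fini;

            ret = ww_mutex_lock(&b->lock, ctx);
            if (ret == -EDEADLK) {
                    /* A transaction with an earlier stamp holds b and also
                     * wants one of our locks: back off and wait for b. */
                    ww_mutex_unlock(&a->lock);
                    ww_mutex_lock_slow(&b->lock, ctx);
                    ret = ww_mutex_lock(&a->lock, ctx);
                    if (ret) {
                            /* A real caller would loop; bail for brevity. */
                            ww_mutex_unlock(&b->lock);
                            goto out_fini;
                    }
            } else if (ret) {
                    ww_mutex_unlock(&a->lock);
                    goto out_fini;
            }

            ww_acquire_done(ctx);
            return 0;       /* caller unlocks both, then ww_acquire_fini() */

    out_fini:
            ww_acquire_fini(ctx);
            return ret;
    }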

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v3 01/12] drm/vgem: Use ww_mutex_(un)lock even with a NULL context
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 02/12] locking/mutex: Fix a race with handoffs and interruptible waits Nicolai Hähnle
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

Lock the reservation object through the ww_mutex interface with a NULL
acquire context instead of reaching into lock.base directly.

v2: use resv->lock instead of resv->lock.base (Christian König)

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 drivers/gpu/drm/vgem/vgem_fence.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
index 488909a..9cb00a5 100644
--- a/drivers/gpu/drm/vgem/vgem_fence.c
+++ b/drivers/gpu/drm/vgem/vgem_fence.c
@@ -191,12 +191,12 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
 
 	/* Expose the fence via the dma-buf */
 	ret = 0;
-	mutex_lock(&resv->lock.base);
+	ww_mutex_lock(&resv->lock, NULL);
 	if (arg->flags & VGEM_FENCE_WRITE)
 		reservation_object_add_excl_fence(resv, fence);
 	else if ((ret = reservation_object_reserve_shared(resv)) == 0)
 		reservation_object_add_shared_fence(resv, fence);
-	mutex_unlock(&resv->lock.base);
+	ww_mutex_unlock(&resv->lock);
 
 	/* Record the fence in our idr for later signaling */
 	if (ret == 0) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 02/12] locking/mutex: Fix a race with handoffs and interruptible waits
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 01/12] drm/vgem: Use ww_mutex_(un)lock even with a NULL context Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after Nicolai Hähnle
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar, dri-devel,
	Nicolai Hähnle

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

There's a possible race where the waiter in front of us leaves the wait list
due to a signal, and the current owner subsequently hands the lock off to us
even though we never observed ourselves at the front of the list.

Set the task state before checking our position in the list, so that the
race is handled by falling through the next schedule().

Found by inspection.
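
For reference, the fix restores the canonical sleep/wakeup ordering
sketched below (an illustration of the pattern only, with wait_lock,
handoff and signal handling omitted): publish the sleeping state first,
then test the condition, so a wakeup that races with the test either sees
the updated condition or resets us to TASK_RUNNING and the schedule()
falls through.

    for (;;) {
            /* Publish the sleeping state first ... */
            set_current_state(TASK_UNINTERRUPTIBLE);
            /* ... then test the wakeup condition. */
            if (__mutex_waiter_is_first(lock, &waiter))
                    break;
            /* A racing wakeup turns this into a no-op. */
            schedule();
    }
    __set_current_state(TASK_RUNNING);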

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b34961..c02c566 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -697,17 +697,18 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
 
-		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
-			first = true;
-			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
-
 		set_task_state(task, state);
 		/*
 		 * Here we order against unlock; we must either see it change
 		 * state back to RUNNING and fall through the next schedule(),
 		 * or we must see its unlock and acquire.
 		 */
+
+		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
+			first = true;
+			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+		}
+
 		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
 		     __mutex_trylock(lock, first))
 			break;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 01/12] drm/vgem: Use ww_mutex_(un)lock even with a NULL context Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 02/12] locking/mutex: Fix a race with handoffs and interruptible waits Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-22  1:58   ` zhoucm1
  2016-12-21 18:46 ` [PATCH v3 04/12] locking/ww_mutex: Set use_ww_ctx even when locking without a context Nicolai Hähnle
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

The function will be re-used in subsequent patches.

v3: rename to __ww_ctx_stamp_after (Chris Wilson)

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 kernel/locking/mutex.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index c02c566..66718d6 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -277,6 +277,13 @@ static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
 	ww_ctx->acquired++;
 }
 
+static inline bool __sched
+__ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+{
+	return a->stamp - b->stamp <= LONG_MAX &&
+	       (a->stamp != b->stamp || a > b);
+}
+
 /*
  * After acquiring lock with fastpath or when we lost out in contested
  * slowpath, set ctx and wake up any waiters so they can recheck.
@@ -602,8 +609,7 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
 	if (!hold_ctx)
 		return 0;
 
-	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
-	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
+	if (__ww_ctx_stamp_after(ctx, hold_ctx)) {
 #ifdef CONFIG_DEBUG_MUTEXES
 		DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
 		ctx->contending_lock = ww;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 04/12] locking/ww_mutex: Set use_ww_ctx even when locking without a context
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (2 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers Nicolai Hähnle
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

We will add a new field to struct mutex_waiter.  This field must be
initialized for all waiters if any waiter uses the use_ww_ctx path.

So there is a trade-off: either keep ww_mutex locking without a context on
the faster non-use_ww_ctx path, at the cost of adding the initialization to
all mutex locks (including non-ww_mutexes), or avoid the additional cost
for non-ww_mutex locks, at the cost of adding extra checks to the
use_ww_ctx path.

We take the latter choice.  It may be worth eliminating the users of
ww_mutex_lock(lock, NULL), but there are a lot of them.
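
With this change, locking without a context is no longer diverted to
mutex_lock(&lock->base); both calling styles below share the ww-aware slow
path, with the NULL checks on ww_ctx covering the context-less case
(hypothetical caller; obj->resv, ctx and ret are only for illustration):

    /* No acquire context: semantically a plain mutex lock. */
    ww_mutex_lock(&obj->resv, NULL);
    /* ... */
    ww_mutex_unlock(&obj->resv);

    /* With an acquire context: may return -EDEADLK and require back-off. */
    ret = ww_mutex_lock(&obj->resv, &ctx);
    if (ret == -EDEADLK) {
            /* drop the other held ww_mutexes, then ww_mutex_lock_slow() */
    }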

v3:
- undo the moving around of ww in __mutex_lock_common

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 include/linux/ww_mutex.h | 11 ++---------
 kernel/locking/mutex.c   | 29 +++++++++++++++++------------
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 2bb5deb..a5960e5 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -222,11 +222,7 @@ extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
  */
 static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
-	if (ctx)
-		return __ww_mutex_lock(lock, ctx);
-
-	mutex_lock(&lock->base);
-	return 0;
+	return __ww_mutex_lock(lock, ctx);
 }
 
 /**
@@ -262,10 +258,7 @@ static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ct
 static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
 							   struct ww_acquire_ctx *ctx)
 {
-	if (ctx)
-		return __ww_mutex_lock_interruptible(lock, ctx);
-	else
-		return mutex_lock_interruptible(&lock->base);
+	return __ww_mutex_lock_interruptible(lock, ctx);
 }
 
 /**
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 66718d6..a41bec2 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -467,7 +467,7 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 	for (;;) {
 		struct task_struct *owner;
 
-		if (use_ww_ctx && ww_ctx->acquired > 0) {
+		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
 			struct ww_mutex *ww;
 
 			ww = container_of(lock, struct ww_mutex, base);
@@ -635,8 +635,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	struct ww_mutex *ww;
 	int ret;
 
-	if (use_ww_ctx) {
-		ww = container_of(lock, struct ww_mutex, base);
+	ww = container_of(lock, struct ww_mutex, base);
+
+	if (use_ww_ctx && ww_ctx) {
 		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
 			return -EALREADY;
 	}
@@ -648,7 +649,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, false)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
-		if (use_ww_ctx)
+		if (use_ww_ctx && ww_ctx)
 			ww_mutex_set_context_fastpath(ww, ww_ctx);
 		preempt_enable();
 		return 0;
@@ -694,7 +695,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 			goto err;
 		}
 
-		if (use_ww_ctx && ww_ctx->acquired > 0) {
+		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
 			ret = __ww_mutex_lock_check_stamp(lock, ww_ctx);
 			if (ret)
 				goto err;
@@ -735,7 +736,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
 
-	if (use_ww_ctx)
+	if (use_ww_ctx && ww_ctx)
 		ww_mutex_set_context_slowpath(ww, ww_ctx);
 
 	spin_unlock_mutex(&lock->wait_lock, flags);
@@ -823,8 +824,9 @@ __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	might_sleep();
 	ret =  __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
-				   0, &ctx->dep_map, _RET_IP_, ctx, 1);
-	if (!ret && ctx->acquired > 1)
+				   0, ctx ? &ctx->dep_map : NULL, _RET_IP_,
+				   ctx, 1);
+	if (!ret && ctx && ctx->acquired > 1)
 		return ww_mutex_deadlock_injection(lock, ctx);
 
 	return ret;
@@ -838,9 +840,10 @@ __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	might_sleep();
 	ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
-				  0, &ctx->dep_map, _RET_IP_, ctx, 1);
+				  0, ctx ? &ctx->dep_map : NULL, _RET_IP_,
+				  ctx, 1);
 
-	if (!ret && ctx->acquired > 1)
+	if (!ret && ctx && ctx->acquired > 1)
 		return ww_mutex_deadlock_injection(lock, ctx);
 
 	return ret;
@@ -1027,7 +1030,8 @@ __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	might_sleep();
 
 	if (__mutex_trylock_fast(&lock->base)) {
-		ww_mutex_set_context_fastpath(lock, ctx);
+		if (ctx)
+			ww_mutex_set_context_fastpath(lock, ctx);
 		return 0;
 	}
 
@@ -1041,7 +1045,8 @@ __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	might_sleep();
 
 	if (__mutex_trylock_fast(&lock->base)) {
-		ww_mutex_set_context_fastpath(lock, ctx);
+		if (ctx)
+			ww_mutex_set_context_fastpath(lock, ctx);
 		return 0;
 	}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (3 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 04/12] locking/ww_mutex: Set use_ww_ctx even when locking without a context Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-23 10:48   ` Peter Zijlstra
  2016-12-21 18:46 ` [PATCH v3 06/12] locking/ww_mutex: Add waiters in stamp order Nicolai Hähnle
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel,
	Nicolai Hähnle

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

Keep the documentation in the header file since there is no good
place for it in mutex.c: there are two rather different
implementations with different EXPORT_SYMBOLs for each function.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
---
 include/linux/ww_mutex.h | 18 ++++--------------
 kernel/locking/mutex.c   | 16 ++++++++--------
 2 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index a5960e5..b2eaaab 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -186,11 +186,6 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
 #endif
 }
 
-extern int __must_check __ww_mutex_lock(struct ww_mutex *lock,
-					struct ww_acquire_ctx *ctx);
-extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
-						      struct ww_acquire_ctx *ctx);
-
 /**
  * ww_mutex_lock - acquire the w/w mutex
  * @lock: the mutex to be acquired
@@ -220,10 +215,8 @@ extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
  *
  * A mutex acquired with this function must be released with ww_mutex_unlock.
  */
-static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
-{
-	return __ww_mutex_lock(lock, ctx);
-}
+extern int __must_check ww_mutex_lock(struct ww_mutex *lock,
+				      struct ww_acquire_ctx *ctx);
 
 /**
  * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
@@ -255,11 +248,8 @@ static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ct
  *
  * A mutex acquired with this function must be released with ww_mutex_unlock.
  */
-static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
-							   struct ww_acquire_ctx *ctx)
-{
-	return __ww_mutex_lock_interruptible(lock, ctx);
-}
+extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
+						    struct ww_acquire_ctx *ctx);
 
 /**
  * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a41bec2..282c6de 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -818,7 +818,7 @@ ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 }
 
 int __sched
-__ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	int ret;
 
@@ -831,10 +831,10 @@ __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(__ww_mutex_lock);
+EXPORT_SYMBOL_GPL(ww_mutex_lock);
 
 int __sched
-__ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	int ret;
 
@@ -848,7 +848,7 @@ __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
+EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
 
 #endif
 
@@ -1025,7 +1025,7 @@ EXPORT_SYMBOL(mutex_trylock);
 
 #ifndef CONFIG_DEBUG_LOCK_ALLOC
 int __sched
-__ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	might_sleep();
 
@@ -1037,10 +1037,10 @@ __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	return __ww_mutex_lock_slowpath(lock, ctx);
 }
-EXPORT_SYMBOL(__ww_mutex_lock);
+EXPORT_SYMBOL(ww_mutex_lock);
 
 int __sched
-__ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	might_sleep();
 
@@ -1052,7 +1052,7 @@ __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	return __ww_mutex_lock_interruptible_slowpath(lock, ctx);
 }
-EXPORT_SYMBOL(__ww_mutex_lock_interruptible);
+EXPORT_SYMBOL(ww_mutex_lock_interruptible);
 
 #endif
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 06/12] locking/ww_mutex: Add waiters in stamp order
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (4 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 07/12] locking/ww_mutex: Notify waiters that have to back off while adding tasks to wait list Nicolai Hähnle
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel,
	Nicolai Hähnle

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

Add regular waiters in stamp order. Keep adding waiters that have no
context in FIFO order and take care not to starve them.

While adding our task as a waiter, back off if we detect that there is a
waiter with a lower stamp in front of us.

Make sure to call lock_contended even when we back off early.

For w/w mutexes, being first in the wait list is only stable when taking the
lock without a context. Therefore, the purpose of the first flag is split into
two: 'first' remains to indicate whether we want to spin optimistically, while
'handoff' indicates that we should be prepared to accept a handoff.

For w/w locking with a context, we always accept handoffs after the first
schedule(), to handle the following sequence of events:

1. Task #0 unlocks and hands off to Task #2 which is first in line
2. Task #1 adds itself in front of Task #2
3. Task #2 wakes up and must accept the handoff even though it is no longer
   first in line

v2:
- rein in the indentation of __ww_mutex_add_waiter a bit
- set contending_lock in __ww_mutex_add_waiter (Chris Wilson)

v3:
- split 'first' into 'first' and 'handoff' to avoid moving the trylock calls
  around so much
- scan the wait_list in reverse order in __ww_mutex_add_waiter

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
---
 include/linux/mutex.h  |  3 ++
 kernel/locking/mutex.c | 97 +++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 91 insertions(+), 9 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index b97870f..118a3b6 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -20,6 +20,8 @@
 #include <linux/osq_lock.h>
 #include <linux/debug_locks.h>
 
+struct ww_acquire_ctx;
+
 /*
  * Simple, straightforward mutexes with strict semantics:
  *
@@ -75,6 +77,7 @@ static inline struct task_struct *__mutex_owner(struct mutex *lock)
 struct mutex_waiter {
 	struct list_head	list;
 	struct task_struct	*task;
+	struct ww_acquire_ctx	*ww_ctx;
 #ifdef CONFIG_DEBUG_MUTEXES
 	void			*magic;
 #endif
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 282c6de..5b1ca20 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -620,6 +620,52 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
 	return 0;
 }
 
+static inline int __sched
+__ww_mutex_add_waiter(struct mutex_waiter *waiter,
+		      struct mutex *lock,
+		      struct ww_acquire_ctx *ww_ctx)
+{
+	struct mutex_waiter *cur;
+	struct list_head *pos;
+
+	if (!ww_ctx) {
+		list_add_tail(&waiter->list, &lock->wait_list);
+		return 0;
+	}
+
+	/*
+	 * Add the waiter before the first waiter with a higher stamp.
+	 * Waiters without a context are skipped to avoid starving
+	 * them.
+	 */
+	pos = &lock->wait_list;
+	list_for_each_entry_reverse(cur, &lock->wait_list, list) {
+		if (!cur->ww_ctx)
+			continue;
+
+		if (__ww_ctx_stamp_after(ww_ctx, cur->ww_ctx)) {
+			/* Back off immediately if necessary. */
+			if (ww_ctx->acquired > 0) {
+#ifdef CONFIG_DEBUG_MUTEXES
+				struct ww_mutex *ww;
+
+				ww = container_of(lock, struct ww_mutex, base);
+				DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock);
+				ww_ctx->contending_lock = ww;
+#endif
+				return -EDEADLK;
+			}
+
+			break;
+		}
+
+		pos = &cur->list;
+	}
+
+	list_add_tail(&waiter->list, pos);
+	return 0;
+}
+
 /*
  * Lock a mutex (possibly interruptible), slowpath:
  */
@@ -632,6 +678,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	struct mutex_waiter waiter;
 	unsigned long flags;
 	bool first = false;
+	bool handoff = false;
 	struct ww_mutex *ww;
 	int ret;
 
@@ -665,15 +712,25 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	debug_mutex_lock_common(lock, &waiter);
 	debug_mutex_add_waiter(lock, &waiter, task);
 
-	/* add waiting tasks to the end of the waitqueue (FIFO): */
-	list_add_tail(&waiter.list, &lock->wait_list);
+	lock_contended(&lock->dep_map, ip);
+
+	if (!use_ww_ctx) {
+		/* add waiting tasks to the end of the waitqueue (FIFO): */
+		list_add_tail(&waiter.list, &lock->wait_list);
+	} else {
+		/* Add in stamp order, waking up waiters that must back off. */
+		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
+		if (ret)
+			goto err_early_backoff;
+
+		waiter.ww_ctx = ww_ctx;
+	}
+
 	waiter.task = task;
 
 	if (__mutex_waiter_is_first(lock, &waiter))
 		__mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
 
-	lock_contended(&lock->dep_map, ip);
-
 	set_task_state(task, state);
 	for (;;) {
 		/*
@@ -682,7 +739,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * before testing the error conditions to make sure we pick up
 		 * the handoff.
 		 */
-		if (__mutex_trylock(lock, first))
+		if (__mutex_trylock(lock, handoff))
 			goto acquired;
 
 		/*
@@ -711,13 +768,34 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * or we must see its unlock and acquire.
 		 */
 
-		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
-			first = true;
+		if (use_ww_ctx && ww_ctx) {
+			/*
+			 * Always re-check whether we're in first position. We
+			 * don't want to spin if another task with a lower
+			 * stamp has taken our position.
+			 *
+			 * We also may have to set the handoff flag again, if
+			 * our position at the head was temporarily taken away.
+			 */
+			first = __mutex_waiter_is_first(lock, &waiter);
+
+			if (first)
+				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+
+			/*
+			 * Always be prepared to accept a handoff after the
+			 * first wait, because we may have been the first
+			 * waiter during unlock.
+			 */
+			handoff = true;
+		} else if (!first && __mutex_waiter_is_first(lock, &waiter)) {
+			first = handoff = true;
 			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
 		}
 
-		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
-		     __mutex_trylock(lock, first))
+		if ((first &&
+		     mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
+		    __mutex_trylock(lock, handoff))
 			break;
 
 		spin_lock_mutex(&lock->wait_lock, flags);
@@ -746,6 +824,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 err:
 	__set_task_state(task, TASK_RUNNING);
 	mutex_remove_waiter(lock, &waiter, task);
+err_early_backoff:
 	spin_unlock_mutex(&lock->wait_lock, flags);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, 1, ip);
-- 
2.7.4
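
A hypothetical, userspace-only simulation of the insertion order
established by __ww_mutex_add_waiter() above (plain comparisons are used,
i.e. the wrap-around handling of __ww_ctx_stamp_after() and the -EDEADLK
back-off for ww_ctx->acquired > 0 are omitted):

    #include <stdio.h>

    struct waiter { long stamp; int has_ctx; };

    /* Index before which a new waiter with 'stamp' would be inserted. */
    static int insert_pos(const struct waiter *w, int n, long stamp)
    {
            int pos = n;    /* default: append at the tail */
            int i;

            for (i = n - 1; i >= 0; i--) {
                    if (!w[i].has_ctx)
                            continue;   /* context-less waiters keep FIFO order */
                    if (stamp > w[i].stamp)
                            break;      /* older (lower-stamp) waiter stays ahead */
                    pos = i;            /* we go in front of this waiter */
            }
            return pos;
    }

    int main(void)
    {
            /* wait list: [no-ctx, stamp 3, no-ctx, stamp 7, stamp 9] */
            struct waiter w[] = { {0, 0}, {3, 1}, {0, 0}, {7, 1}, {9, 1} };

            printf("stamp 5 -> index %d\n", insert_pos(w, 5, 5)); /* 3 */
            printf("stamp 1 -> index %d\n", insert_pos(w, 5, 1)); /* 1 */
            return 0;
    }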

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 07/12] locking/ww_mutex: Notify waiters that have to back off while adding tasks to wait list
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (5 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 06/12] locking/ww_mutex: Add waiters in stamp order Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 08/12] locking/ww_mutex: Wake at most one waiter for back off when acquiring the lock Nicolai Hähnle
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

While adding our task as a waiter, detect if another task should back off
because of us.

With this patch, we establish the invariant that the wait list contains
at most one (sleeping) waiter with ww_ctx->acquired > 0, and this waiter
will be the first waiter with a context.

Since only waiters with ww_ctx->acquired > 0 have to back off, this allows
us to be much more economical with wakeups.
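
A hypothetical debug-only walk (not part of this patch) that would assert
the invariant stated above; it assumes it runs under lock->wait_lock and
uses the mutex_waiter::ww_ctx field introduced earlier in the series:

    #ifdef CONFIG_DEBUG_MUTEXES
    /* Must be called with lock->wait_lock held. */
    static void __ww_mutex_check_waiter_invariant(struct mutex *lock)
    {
            struct mutex_waiter *cur;
            bool seen_ctx = false;

            list_for_each_entry(cur, &lock->wait_list, list) {
                    if (!cur->ww_ctx)
                            continue;
                    /* Only the first waiter with a context may already
                     * hold other locks of the same class. */
                    if (cur->ww_ctx->acquired > 0)
                            DEBUG_LOCKS_WARN_ON(seen_ctx);
                    seen_ctx = true;
            }
    }
    #endif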

v2: rebase on v2 of earlier patches

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 kernel/locking/mutex.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5b1ca20..ee4d152 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -601,23 +601,34 @@ void __sched ww_mutex_unlock(struct ww_mutex *lock)
 EXPORT_SYMBOL(ww_mutex_unlock);
 
 static inline int __sched
-__ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
+__ww_mutex_lock_check_stamp(struct mutex *lock, struct mutex_waiter *waiter,
+			    struct ww_acquire_ctx *ctx)
 {
 	struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
 	struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
+	struct mutex_waiter *cur;
 
-	if (!hold_ctx)
-		return 0;
+	if (hold_ctx && __ww_ctx_stamp_after(ctx, hold_ctx))
+		goto deadlock;
 
-	if (__ww_ctx_stamp_after(ctx, hold_ctx)) {
-#ifdef CONFIG_DEBUG_MUTEXES
-		DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
-		ctx->contending_lock = ww;
-#endif
-		return -EDEADLK;
+	/*
+	 * If there is a waiter in front of us that has a context, then its
+	 * stamp is earlier than ours and we must back off.
+	 */
+	cur = waiter;
+	list_for_each_entry_continue_reverse(cur, &lock->wait_list, list) {
+		if (cur->ww_ctx)
+			goto deadlock;
 	}
 
 	return 0;
+
+deadlock:
+#ifdef CONFIG_DEBUG_MUTEXES
+	DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
+	ctx->contending_lock = ww;
+#endif
+	return -EDEADLK;
 }
 
 static inline int __sched
@@ -660,6 +671,15 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
 		}
 
 		pos = &cur->list;
+
+		/*
+		 * Wake up the waiter so that it gets a chance to back
+		 * off.
+		 */
+		if (cur->ww_ctx->acquired > 0) {
+			debug_mutex_wake_waiter(lock, cur);
+			wake_up_process(cur->task);
+		}
 	}
 
 	list_add_tail(&waiter->list, pos);
@@ -753,7 +773,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		}
 
 		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
-			ret = __ww_mutex_lock_check_stamp(lock, ww_ctx);
+			ret = __ww_mutex_lock_check_stamp(lock, &waiter,
+							  ww_ctx);
 			if (ret)
 				goto err;
 		}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 08/12] locking/ww_mutex: Wake at most one waiter for back off when acquiring the lock
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (6 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 07/12] locking/ww_mutex: Notify waiters that have to back off while adding tasks to wait list Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 09/12] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop Nicolai Hähnle
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

The wait list is sorted by stamp order, and the only waiting task that may
have to back off is the first waiter with a context.

The regular slow path does not have to wake any other tasks at all, since
all other waiters that would have to back off were either woken up when
the waiter was added to the list, or detected the condition before they
added themselves.

Median timings taken of a contention-heavy GPU workload:

Without this series:
real    0m59.900s
user    0m7.516s
sys     2m16.076s

With changes up to and including this patch:
real    0m52.946s
user    0m7.272s
sys     1m55.964s

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 kernel/locking/mutex.c | 58 +++++++++++++++++++++++++++++++++-----------------
 1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ee4d152..c3f70dd 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -285,6 +285,35 @@ __ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
 }
 
 /*
+ * Wake up any waiters that may have to back off when the lock is held by the
+ * given context.
+ *
+ * Due to the invariants on the wait list, this can only affect the first
+ * waiter with a context.
+ *
+ * Must be called with wait_lock held. The current task must not be on the
+ * wait list.
+ */
+static void __sched
+__ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
+{
+	struct mutex_waiter *cur;
+
+	list_for_each_entry(cur, &lock->wait_list, list) {
+		if (!cur->ww_ctx)
+			continue;
+
+		if (cur->ww_ctx->acquired > 0 &&
+		    __ww_ctx_stamp_after(cur->ww_ctx, ww_ctx)) {
+			debug_mutex_wake_waiter(lock, cur);
+			wake_up_process(cur->task);
+		}
+
+		break;
+	}
+}
+
+/*
  * After acquiring lock with fastpath or when we lost out in contested
  * slowpath, set ctx and wake up any waiters so they can recheck.
  */
@@ -293,7 +322,6 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 			       struct ww_acquire_ctx *ctx)
 {
 	unsigned long flags;
-	struct mutex_waiter *cur;
 
 	ww_mutex_lock_acquired(lock, ctx);
 
@@ -319,16 +347,15 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 	 * so they can see the new lock->ctx.
 	 */
 	spin_lock_mutex(&lock->base.wait_lock, flags);
-	list_for_each_entry(cur, &lock->base.wait_list, list) {
-		debug_mutex_wake_waiter(&lock->base, cur);
-		wake_up_process(cur->task);
-	}
+	__ww_mutex_wakeup_for_backoff(&lock->base, ctx);
 	spin_unlock_mutex(&lock->base.wait_lock, flags);
 }
 
 /*
- * After acquiring lock in the slowpath set ctx and wake up any
- * waiters so they can recheck.
+ * After acquiring lock in the slowpath set ctx.
+ *
+ * Unlike for the fast path, the caller ensures that waiters are woken up where
+ * necessary.
  *
  * Callers must hold the mutex wait_lock.
  */
@@ -336,19 +363,8 @@ static __always_inline void
 ww_mutex_set_context_slowpath(struct ww_mutex *lock,
 			      struct ww_acquire_ctx *ctx)
 {
-	struct mutex_waiter *cur;
-
 	ww_mutex_lock_acquired(lock, ctx);
 	lock->ctx = ctx;
-
-	/*
-	 * Give any possible sleeping processes the chance to wake up,
-	 * so they can recheck if they have to back off.
-	 */
-	list_for_each_entry(cur, &lock->base.wait_list, list) {
-		debug_mutex_wake_waiter(&lock->base, cur);
-		wake_up_process(cur->task);
-	}
 }
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
@@ -726,8 +742,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	/*
 	 * After waiting to acquire the wait_lock, try again.
 	 */
-	if (__mutex_trylock(lock, false))
+	if (__mutex_trylock(lock, false)) {
+		if (use_ww_ctx && ww_ctx)
+			__ww_mutex_wakeup_for_backoff(lock, ww_ctx);
+
 		goto skip_wait;
+	}
 
 	debug_mutex_lock_common(lock, &waiter);
 	debug_mutex_add_waiter(lock, &waiter, task);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 09/12] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (7 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 08/12] locking/ww_mutex: Wake at most one waiter for back off when acquiring the lock Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 10/12] locking/ww_mutex: Yield to other waiters from optimistic spin Nicolai Hähnle
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

In the following scenario, thread #1 should back off its attempt to lock
ww1 and unlock ww2 (assuming the acquire context stamps are ordered
accordingly).

    Thread #0               Thread #1
    ---------               ---------
                            successfully lock ww2
    set ww1->base.owner
                            attempt to lock ww1
                            confirm ww1->ctx == NULL
                            enter mutex_spin_on_owner
    set ww1->ctx

What was likely to happen previously is:

    attempt to lock ww2
    refuse to spin because
      ww2->ctx != NULL
    schedule()
                            detect thread #0 is off CPU
                            stop optimistic spin
                            return -EDEADLK
                            unlock ww2
                            wakeup thread #0
    lock ww2

Now, we are more likely to see:

                            detect ww1->ctx != NULL
                            stop optimistic spin
                            return -EDEADLK
                            unlock ww2
    successfully lock ww2

... because thread #1 will stop its optimistic spin as soon as possible.

The whole scenario is quite unlikely, since it requires thread #1 to get
between thread #0 setting the owner and setting the ctx. But since we're
idling here anyway, the additional check is basically free.

Found by inspection.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 kernel/locking/mutex.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index c3f70dd..6f62695 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -373,7 +373,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
  * access and not reliable.
  */
 static noinline
-bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
 {
 	bool ret = true;
 
@@ -396,6 +397,28 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 			break;
 		}
 
+		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
+			struct ww_mutex *ww;
+
+			ww = container_of(lock, struct ww_mutex, base);
+
+			/*
+			 * If ww->ctx is set the contents are undefined, only
+			 * by acquiring wait_lock there is a guarantee that
+			 * they are not invalid when reading.
+			 *
+			 * As such, when deadlock detection needs to be
+			 * performed the optimistic spinning cannot be done.
+			 *
+			 * Check this in every inner iteration because we may
+			 * be racing against another thread's ww_mutex_lock.
+			 */
+			if (READ_ONCE(ww->ctx)) {
+				ret = false;
+				break;
+			}
+		}
+
 		cpu_relax();
 	}
 	rcu_read_unlock();
@@ -483,22 +506,6 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 	for (;;) {
 		struct task_struct *owner;
 
-		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
-			struct ww_mutex *ww;
-
-			ww = container_of(lock, struct ww_mutex, base);
-			/*
-			 * If ww->ctx is set the contents are undefined, only
-			 * by acquiring wait_lock there is a guarantee that
-			 * they are not invalid when reading.
-			 *
-			 * As such, when deadlock detection needs to be
-			 * performed the optimistic spinning cannot be done.
-			 */
-			if (READ_ONCE(ww->ctx))
-				goto fail_unlock;
-		}
-
 		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
@@ -510,7 +517,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 				break;
 			}
 
-			if (!mutex_spin_on_owner(lock, owner))
+			if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
+						 ww_ctx))
 				goto fail_unlock;
 		}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 10/12] locking/ww_mutex: Yield to other waiters from optimistic spin
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (8 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 09/12] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2017-01-05 17:47   ` Peter Zijlstra
  2016-12-21 18:46 ` [PATCH v3 11/12] locking/mutex: Initialize mutex_waiter::ww_ctx with poison when debugging Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 12/12] Documentation/locking/ww_mutex: Update the design document Nicolai Hähnle
  11 siblings, 1 reply; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

Lock stealing is less beneficial for w/w mutexes since we may just end up
backing off if we stole from a thread with an earlier acquire stamp that
already holds another w/w mutex that we also need. So don't spin
optimistically unless we are sure that there is no other waiter that might
cause us to back off.

Median timings taken of a contention-heavy GPU workload:

Before:
real    0m52.946s
user    0m7.272s
sys     1m55.964s

After:
real    0m53.086s
user    0m7.360s
sys     1m46.204s

This particular workload still spends 20%-25% of CPU in mutex_spin_on_owner
according to perf, but my attempts to further reduce this spinning based on
various heuristics all lead to an increase in measured wall time despite
the decrease in sys time.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 kernel/locking/mutex.c | 45 ++++++++++++++++++++++++++++++++++++---------
 1 file changed, 36 insertions(+), 9 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 6f62695..0bafb37 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -374,7 +374,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
  */
 static noinline
 bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
-			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
+			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx,
+			 struct mutex_waiter *waiter)
 {
 	bool ret = true;
 
@@ -397,7 +398,7 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 			break;
 		}
 
-		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
+		if (use_ww_ctx && ww_ctx) {
 			struct ww_mutex *ww;
 
 			ww = container_of(lock, struct ww_mutex, base);
@@ -413,7 +414,30 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 			 * Check this in every inner iteration because we may
 			 * be racing against another thread's ww_mutex_lock.
 			 */
-			if (READ_ONCE(ww->ctx)) {
+			if (ww_ctx->acquired > 0 && READ_ONCE(ww->ctx)) {
+				ret = false;
+				break;
+			}
+
+			/*
+			 * If we aren't on the wait list yet, cancel the spin
+			 * if there are waiters. We want  to avoid stealing the
+			 * lock from a waiter with an earlier stamp, since the
+			 * other thread may already own a lock that we also
+			 * need.
+			 */
+			if (!waiter &&
+			    (atomic_long_read(&lock->owner) &
+			     MUTEX_FLAG_WAITERS)) {
+				ret = false;
+				break;
+			}
+
+			/*
+			 * Similarly, stop spinning if we are no longer the
+			 * first waiter.
+			 */
+			if (waiter && !__mutex_waiter_is_first(lock, waiter)) {
 				ret = false;
 				break;
 			}
@@ -479,7 +503,8 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
  */
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx,
-				  const bool use_ww_ctx, const bool waiter)
+				  const bool use_ww_ctx,
+				  struct mutex_waiter *waiter)
 {
 	struct task_struct *task = current;
 
@@ -518,12 +543,12 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 			}
 
 			if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
-						 ww_ctx))
+						 ww_ctx, waiter))
 				goto fail_unlock;
 		}
 
 		/* Try to acquire the mutex if it is unlocked. */
-		if (__mutex_trylock(lock, waiter))
+		if (__mutex_trylock(lock, waiter != NULL))
 			break;
 
 		/*
@@ -565,7 +590,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 #else
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx,
-				  const bool use_ww_ctx, const bool waiter)
+				  const bool use_ww_ctx,
+				  struct mutex_waiter *waiter)
 {
 	return false;
 }
@@ -737,7 +763,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
 	if (__mutex_trylock(lock, false) ||
-	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, false)) {
+	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
 		if (use_ww_ctx && ww_ctx)
@@ -843,7 +869,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		}
 
 		if ((first &&
-		     mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
+		     mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
+					   &waiter)) ||
 		    __mutex_trylock(lock, handoff))
 			break;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 11/12] locking/mutex: Initialize mutex_waiter::ww_ctx with poison when debugging
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (9 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 10/12] locking/ww_mutex: Yield to other waiters from optimistic spin Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  2016-12-21 18:46 ` [PATCH v3 12/12] Documentation/locking/ww_mutex: Update the design document Nicolai Hähnle
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

Help catch cases where mutex_lock is used directly on w/w mutexes, which
otherwise result in the w/w tasks reading uninitialized data.
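
For illustration, the kind of misuse this catches (obj->resv and ctx are
hypothetical):

    /* Wrong: bypasses the w/w machinery, so waiter.ww_ctx is never set.
     * With this patch it reads as MUTEX_POISON_WW_CTX, and a contextful
     * waiter that walks the wait list and dereferences it faults at a
     * recognizable poison address instead of reading random garbage. */
    mutex_lock(&obj->resv.base);

    /* Right: go through the w/w API, with or without a context. */
    ww_mutex_lock(&obj->resv, &ctx);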

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 include/linux/poison.h | 1 +
 kernel/locking/mutex.c | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/include/linux/poison.h b/include/linux/poison.h
index 51334ed..a395403 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -80,6 +80,7 @@
 /********** kernel/mutexes **********/
 #define MUTEX_DEBUG_INIT	0x11
 #define MUTEX_DEBUG_FREE	0x22
+#define MUTEX_POISON_WW_CTX	((void *) 0x500 + POISON_POINTER_DELTA)
 
 /********** lib/flex_array.c **********/
 #define FLEX_ARRAY_FREE	0x6c	/* for use-after-free poisoning */
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0bafb37..3e46a12 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -791,6 +791,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	if (!use_ww_ctx) {
 		/* add waiting tasks to the end of the waitqueue (FIFO): */
 		list_add_tail(&waiter.list, &lock->wait_list);
+
+#ifdef CONFIG_DEBUG_MUTEXES
+		waiter.ww_ctx = MUTEX_POISON_WW_CTX;
+#endif
 	} else {
 		/* Add in stamp order, waking up waiters that must back off. */
 		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 12/12] Documentation/locking/ww_mutex: Update the design document
  2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
                   ` (10 preceding siblings ...)
  2016-12-21 18:46 ` [PATCH v3 11/12] locking/mutex: Initialize mutex_waiter::ww_ctx with poison when debugging Nicolai Hähnle
@ 2016-12-21 18:46 ` Nicolai Hähnle
  11 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-21 18:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

Document the invariants we maintain for the wait list of ww_mutexes.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 Documentation/locking/ww-mutex-design.txt | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/Documentation/locking/ww-mutex-design.txt b/Documentation/locking/ww-mutex-design.txt
index 8a112dc..34c3a1b 100644
--- a/Documentation/locking/ww-mutex-design.txt
+++ b/Documentation/locking/ww-mutex-design.txt
@@ -309,11 +309,15 @@ Design:
   normal mutex locks, which are far more common. As such there is only a small
   increase in code size if wait/wound mutexes are not used.
 
+  We maintain the following invariants for the wait list:
+  (1) Waiters with an acquire context are sorted by stamp order; waiters
+      without an acquire context are interspersed in FIFO order.
+  (2) Among waiters with contexts, only the first one can have other locks
+      acquired already (ctx->acquired > 0). Note that this waiter may come
+      after other waiters without contexts in the list.
+
   In general, not much contention is expected. The locks are typically used to
-  serialize access to resources for devices. The only way to make wakeups
-  smarter would be at the cost of adding a field to struct mutex_waiter. This
-  would add overhead to all cases where normal mutexes are used, and
-  ww_mutexes are generally less performance sensitive.
+  serialize access to resources for devices.
 
 Lockdep:
   Special care has been taken to warn for as many cases of api abuse
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after
  2016-12-21 18:46 ` [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after Nicolai Hähnle
@ 2016-12-22  1:58   ` zhoucm1
  2016-12-22  8:43     ` Nicolai Hähnle
  0 siblings, 1 reply; 19+ messages in thread
From: zhoucm1 @ 2016-12-22  1:58 UTC (permalink / raw)
  To: Nicolai Hähnle, linux-kernel
  Cc: Maarten Lankhorst, Nicolai Hähnle, Peter Zijlstra,
	dri-devel, Ingo Molnar



On 2016-12-22 02:46, Nicolai Hähnle wrote:
> +static inline bool __sched
> +__ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
> +{
> +	return a->stamp - b->stamp <= LONG_MAX &&
> +	       (a->stamp != b->stamp || a > b);
I want to ask a stupid question: why can a be compared with b? They are 
pointers to structures. Isn't the stamp enough for the comparison?

Thanks,
David Zhou

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after
  2016-12-22  1:58   ` zhoucm1
@ 2016-12-22  8:43     ` Nicolai Hähnle
  0 siblings, 0 replies; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-22  8:43 UTC (permalink / raw)
  To: zhoucm1, linux-kernel
  Cc: Maarten Lankhorst, Nicolai Hähnle, Peter Zijlstra,
	dri-devel, Ingo Molnar

On 22.12.2016 02:58, zhoucm1 wrote:
On 2016-12-22 02:46, Nicolai Hähnle wrote:
>> +static inline bool __sched
>> +__ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
>> +{
>> +    return a->stamp - b->stamp <= LONG_MAX &&
>> +           (a->stamp != b->stamp || a > b);
> I want to ask a stupid question: why can a be compared with b? They are
> pointers to structures. Isn't the stamp enough for the comparison?

As far as I understand, the idea is to provide a tie-breaker to ensure 
that there is a strict order between contexts even if the stamps happen 
to be equal. Since we get stamps from atomic increments, this really 
only matters if (a) someone makes a mistake and confuses ww_classes 
(which CONFIG_DEBUG_MUTEXES would flag) or (b) the ww_class stamp 
counter wraps around fully during the lifetime of the acquire context. 
This is extremely unlikely of course.

Nicolai
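
A hypothetical stand-alone illustration of the wrap-safe part of the
comparison (plain unsigned longs instead of acquire contexts, with the
pointer tie-break for equal stamps omitted):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool stamp_after(unsigned long a, unsigned long b)
    {
            return a - b <= LONG_MAX && a != b;
    }

    int main(void)
    {
            /* Around a wrap: ULONG_MAX was handed out just before the
             * counter wrapped and handed out 1. */
            printf("%d\n", stamp_after(1UL, ULONG_MAX)); /* 1: 1 - ULONG_MAX == 2 */
            printf("%d\n", stamp_after(ULONG_MAX, 1UL)); /* 0: difference > LONG_MAX */
            return 0;
    }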

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers
  2016-12-21 18:46 ` [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers Nicolai Hähnle
@ 2016-12-23 10:48   ` Peter Zijlstra
  2016-12-23 11:16     ` Nicolai Hähnle
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2016-12-23 10:48 UTC (permalink / raw)
  To: Nicolai Hähnle
  Cc: linux-kernel, Nicolai Hähnle, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

On Wed, Dec 21, 2016 at 07:46:33PM +0100, Nicolai Hähnle wrote:
> diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
> index a5960e5..b2eaaab 100644
> --- a/include/linux/ww_mutex.h
> +++ b/include/linux/ww_mutex.h
> @@ -186,11 +186,6 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
>  #endif
>  }
>  
> -extern int __must_check __ww_mutex_lock(struct ww_mutex *lock,
> -					struct ww_acquire_ctx *ctx);
> -extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
> -						      struct ww_acquire_ctx *ctx);
> -
>  /**
>   * ww_mutex_lock - acquire the w/w mutex
>   * @lock: the mutex to be acquired
> @@ -220,10 +215,8 @@ extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
>   *
>   * A mutex acquired with this function must be released with ww_mutex_unlock.
>   */
> -static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
> -{
> -	return __ww_mutex_lock(lock, ctx);
> -}
> +extern int __must_check ww_mutex_lock(struct ww_mutex *lock,
> +				      struct ww_acquire_ctx *ctx);
>  
>  /**
>   * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
> @@ -255,11 +248,8 @@ static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ct
>   *
>   * A mutex acquired with this function must be released with ww_mutex_unlock.
>   */
> -static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
> -							   struct ww_acquire_ctx *ctx)
> -{
> -	return __ww_mutex_lock_interruptible(lock, ctx);
> -}
> +extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
> +						    struct ww_acquire_ctx *ctx);
>  
>  /**
>   * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index a41bec2..282c6de 100644

For some reason this patch appears to make lib/locking-selftest.c really
unhappy.

I get endless streams of:

../lib/locking-selftest.c: In function ‘ww_test_fail_acquire’:
../lib/locking-selftest.c:1141:6: error: void value not ignored as it ought to be
  ret = WWL(&o, &t);
      ^

Apparently GCC gets confused about __must_check on inline functions or
something, or I got the patch wrong.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers
  2016-12-23 10:48   ` Peter Zijlstra
@ 2016-12-23 11:16     ` Nicolai Hähnle
  2016-12-23 12:13       ` Peter Zijlstra
  0 siblings, 1 reply; 19+ messages in thread
From: Nicolai Hähnle @ 2016-12-23 11:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Nicolai Hähnle, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

On 23.12.2016 11:48, Peter Zijlstra wrote:
> On Wed, Dec 21, 2016 at 07:46:33PM +0100, Nicolai Hähnle wrote:
>> diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
>> index a5960e5..b2eaaab 100644
>> --- a/include/linux/ww_mutex.h
>> +++ b/include/linux/ww_mutex.h
>> @@ -186,11 +186,6 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
>>  #endif
>>  }
>>
>> -extern int __must_check __ww_mutex_lock(struct ww_mutex *lock,
>> -					struct ww_acquire_ctx *ctx);
>> -extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
>> -						      struct ww_acquire_ctx *ctx);
>> -
>>  /**
>>   * ww_mutex_lock - acquire the w/w mutex
>>   * @lock: the mutex to be acquired
>> @@ -220,10 +215,8 @@ extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
>>   *
>>   * A mutex acquired with this function must be released with ww_mutex_unlock.
>>   */
>> -static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
>> -{
>> -	return __ww_mutex_lock(lock, ctx);
>> -}
>> +extern int __must_check ww_mutex_lock(struct ww_mutex *lock,
>> +				      struct ww_acquire_ctx *ctx);
>>
>>  /**
>>   * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
>> @@ -255,11 +248,8 @@ static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ct
>>   *
>>   * A mutex acquired with this function must be released with ww_mutex_unlock.
>>   */
>> -static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
>> -							   struct ww_acquire_ctx *ctx)
>> -{
>> -	return __ww_mutex_lock_interruptible(lock, ctx);
>> -}
>> +extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
>> +						    struct ww_acquire_ctx *ctx);
>>
>>  /**
>>   * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index a41bec2..282c6de 100644
>
> For some reason this patch appears to make lib/locking-selftest.c really
> unhappy.
>
> I get endless streams of:
>
> ../lib/locking-selftest.c: In function ‘ww_test_fail_acquire’:
> ../lib/locking-selftest.c:1141:6: error: void value not ignored as it ought to be
>   ret = WWL(&o, &t);
>       ^
>
> Apparently GCC gets confused about __must_check on inline functions or
> something, or I got the patch wrong.

Weird, I'm not getting that, and it makes no sense either from a quick 
glimpse of the code. Is there anything besides 
CONFIG_DEBUG_LOCKING_API_SELFTESTS I would have to enable to trigger 
this? FWIW:

$ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
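
The error itself would normally point at something else, though: "void
value not ignored" is what GCC emits when the value of a void expression
is used, independent of __must_check, which only ever produces a warning.
A minimal stand-alone sketch of the two diagnostics (hypothetical names,
not the selftest code):

/*
 * Hypothetical reduction, not lib/locking-selftest.c: __must_check
 * (warn_unused_result) only warns when a result is dropped; the
 * "void value not ignored" error can only come from assigning a call
 * that actually returns void in that configuration.
 */
#include <stdio.h>

#define __must_check __attribute__((warn_unused_result))

static __must_check int try_lock(int *lock)
{
	/* pretend 0 means "unlocked, now taken" */
	return *lock == 0 ? (*lock = 1, 1) : 0;
}

int main(void)
{
	int lock = 0;

	try_lock(&lock);                /* warning: ignoring return value ... */

	int ret = try_lock(&lock);      /* fine: result is consumed */
	printf("ret=%d\n", ret);
	return 0;
}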

>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers
  2016-12-23 11:16     ` Nicolai Hähnle
@ 2016-12-23 12:13       ` Peter Zijlstra
  0 siblings, 0 replies; 19+ messages in thread
From: Peter Zijlstra @ 2016-12-23 12:13 UTC (permalink / raw)
  To: Nicolai Hähnle
  Cc: linux-kernel, Nicolai Hähnle, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel

On Fri, Dec 23, 2016 at 12:16:03PM +0100, Nicolai Hähnle wrote:

> >For some reason this patch appears to make lib/locking-selftest.c really
> >unhappy.
> >
> >I get endless streams of:
> >
> >../lib/locking-selftest.c: In function ‘ww_test_fail_acquire’:
> >../lib/locking-selftest.c:1141:6: error: void value not ignored as it ought to be
> >  ret = WWL(&o, &t);
> >      ^
> >
> >Apparently GCC gets confused about __must_check on inline functions or
> >something, or I got the patch wrong.
> 
> Weird, I'm not getting that, and it makes no sense either from a quick
> glimpse of the code. Is there anything besides
> CONFIG_DEBUG_LOCKING_API_SELFTESTS I would have to enable to trigger this?

Not entirely sure, I'm building an allmodconfig.

> FWIW:
> 
> $ gcc --version
> gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

gcc (Debian 6.2.1-5) 6.2.1 20161124

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 10/12] locking/ww_mutex: Yield to other waiters from optimistic spin
  2016-12-21 18:46 ` [PATCH v3 10/12] locking/ww_mutex: Yield to other waiters from optimistic spin Nicolai Hähnle
@ 2017-01-05 17:47   ` Peter Zijlstra
  0 siblings, 0 replies; 19+ messages in thread
From: Peter Zijlstra @ 2017-01-05 17:47 UTC (permalink / raw)
  To: Nicolai Hähnle
  Cc: linux-kernel, Nicolai Hähnle, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel



OK, so I got this far, although I stuffed two patches in the middle
(which I should probably pull to the beginning) and did this one patch
differently.

I've not really tested the result yet, will attempt to do so tomorrow.

Please have a look at the current state of things here:

  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2017-01-05 17:48 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-21 18:46 [PATCH v3 00/12] locking/ww_mutex: Keep sorted wait list to avoid stampedes Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 01/12] drm/vgem: Use ww_mutex_(un)lock even with a NULL context Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 02/12] locking/mutex: Fix a race with handoffs and interruptible waits Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 03/12] locking/ww_mutex: Extract stamp comparison to __ww_mutex_stamp_after Nicolai Hähnle
2016-12-22  1:58   ` zhoucm1
2016-12-22  8:43     ` Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 04/12] locking/ww_mutex: Set use_ww_ctx even when locking without a context Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 05/12] locking/ww_mutex: Remove the __ww_mutex_lock inline wrappers Nicolai Hähnle
2016-12-23 10:48   ` Peter Zijlstra
2016-12-23 11:16     ` Nicolai Hähnle
2016-12-23 12:13       ` Peter Zijlstra
2016-12-21 18:46 ` [PATCH v3 06/12] locking/ww_mutex: Add waiters in stamp order Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 07/12] locking/ww_mutex: Notify waiters that have to back off while adding tasks to wait list Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 08/12] locking/ww_mutex: Wake at most one waiter for back off when acquiring the lock Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 09/12] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 10/12] locking/ww_mutex: Yield to other waiters from optimistic spin Nicolai Hähnle
2017-01-05 17:47   ` Peter Zijlstra
2016-12-21 18:46 ` [PATCH v3 11/12] locking/mutex: Initialize mutex_waiter::ww_ctx with poison when debugging Nicolai Hähnle
2016-12-21 18:46 ` [PATCH v3 12/12] Documentation/locking/ww_mutex: Update the design document Nicolai Hähnle
