From: Waiman Long <Waiman.Long@hpe.com>
To: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
Scott J Norton <scott.norton@hpe.com>,
Douglas Hatch <doug.hatch@hpe.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Waiman Long <Waiman.Long@hpe.com>
Subject: [PATCH tip/locking/core v9 1/6] locking/qspinlock: Use _acquire/_release versions of cmpxchg & xchg
Date: Fri, 30 Oct 2015 19:26:32 -0400
Message-ID: <1446247597-61863-2-git-send-email-Waiman.Long@hpe.com>
In-Reply-To: <1446247597-61863-1-git-send-email-Waiman.Long@hpe.com>

This patch replaces the cmpxchg() and xchg() calls in the native
qspinlock code with their more relaxed _acquire and _release variants,
enabling other architectures to adopt queued spinlocks with less
memory-barrier overhead.
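For readers less familiar with the distinction, the following
user-space sketch shows the ordering a lock actually needs
(illustrative only: it uses C11 atomics rather than the kernel
atomic_t API, and toy_trylock()/toy_unlock() are invented names):

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Taking a lock needs only ACQUIRE ordering on the successful
 * compare-and-swap: accesses inside the critical section cannot be
 * reordered before it.  A failed attempt needs no ordering at all.
 */
static bool toy_trylock(atomic_int *lock)
{
        int expected = 0;

        return atomic_compare_exchange_strong_explicit(lock, &expected, 1,
                        memory_order_acquire,   /* success: acquire */
                        memory_order_relaxed);  /* failure: unordered */
}

/*
 * Releasing a lock needs only RELEASE ordering: accesses inside the
 * critical section cannot be reordered past the store that frees it.
 */
static void toy_unlock(atomic_int *lock)
{
        atomic_store_explicit(lock, 0, memory_order_release);
}

On x86 the acquire/release and fully ordered forms generate the same
code, so this change is a no-op there; the saving shows up on weakly
ordered architectures such as arm64 and PowerPC, where the fully
ordered primitives need extra barriers.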
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 include/asm-generic/qspinlock.h |  9 +++++----
 kernel/locking/qspinlock.c      | 29 ++++++++++++++++++++++++-----
 2 files changed, 29 insertions(+), 9 deletions(-)

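Note: both xchg_tail() hunks below rely on the same publish pattern:
fully initialize the MCS node first, then swap it into the tail with a
release operation so that no CPU can observe a half-initialized node
through the tail pointer.  A rough user-space analogue of that pattern
(C11 atomics; struct toy_node and toy_publish_tail() are invented for
illustration):

#include <stdatomic.h>
#include <stddef.h>

struct toy_node {
        int locked;
        struct toy_node *next;
};

/*
 * Initialize the node, then publish it as the new queue tail.  The
 * release ordering on the exchange guarantees the initializing stores
 * are visible to any CPU that loads the tail pointer and walks to
 * this node.  Returns the previous tail (NULL if the queue was empty).
 */
static struct toy_node *toy_publish_tail(_Atomic(struct toy_node *) *tail,
                                         struct toy_node *node)
{
        node->locked = 0;
        node->next = NULL;

        return atomic_exchange_explicit(tail, node, memory_order_release);
}
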
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index e2aadbc..39e1cb2 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -12,8 +12,9 @@
  * GNU General Public License for more details.
  *
  * (C) Copyright 2013-2015 Hewlett-Packard Development Company, L.P.
+ * (C) Copyright 2015 Hewlett-Packard Enterprise Development LP
  *
- * Authors: Waiman Long <waiman.long@hp.com>
+ * Authors: Waiman Long <waiman.long@hpe.com>
  */
 #ifndef __ASM_GENERIC_QSPINLOCK_H
 #define __ASM_GENERIC_QSPINLOCK_H
@@ -62,7 +63,7 @@ static __always_inline int queued_spin_is_contended(struct qspinlock *lock)
 static __always_inline int queued_spin_trylock(struct qspinlock *lock)
 {
        if (!atomic_read(&lock->val) &&
-          (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+          (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
                return 1;
        return 0;
 }
@@ -77,7 +78,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 {
        u32 val;
 
-       val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+       val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
        if (likely(val == 0))
                return;
        queued_spin_lock_slowpath(lock, val);
@@ -93,7 +94,7 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
        /*
         * smp_mb__before_atomic() in order to guarantee release semantics
         */
-       smp_mb__before_atomic_dec();
+       smp_mb__before_atomic();
        atomic_sub(_Q_LOCKED_VAL, &lock->val);
 }
 #endif
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 87e9ce6..7868418 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -14,8 +14,9 @@
  * (C) Copyright 2013-2015 Hewlett-Packard Development Company, L.P.
  * (C) Copyright 2013-2014 Red Hat, Inc.
  * (C) Copyright 2015 Intel Corp.
+ * (C) Copyright 2015 Hewlett-Packard Enterprise Development LP
  *
- * Authors: Waiman Long <waiman.long@hp.com>
+ * Authors: Waiman Long <waiman.long@hpe.com>
  *          Peter Zijlstra <peterz@infradead.org>
  */
 
@@ -176,7 +177,12 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 {
        struct __qspinlock *l = (void *)lock;
 
-       return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+       /*
+        * Use release semantics to make sure that the MCS node is properly
+        * initialized before changing the tail code.
+        */
+       return (u32)xchg_release(&l->tail,
+                                tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
 }
 
 #else /* _Q_PENDING_BITS == 8 */
@@ -208,7 +214,11 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 
        for (;;) {
                new = (val & _Q_LOCKED_PENDING_MASK) | tail;
-               old = atomic_cmpxchg(&lock->val, val, new);
+               /*
+                * Use release semantics to make sure that the MCS node is
+                * properly initialized before changing the tail code.
+                */
+               old = atomic_cmpxchg_release(&lock->val, val, new);
                if (old == val)
                        break;
 
@@ -319,7 +329,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
                if (val == new)
                        new |= _Q_PENDING_VAL;
 
-               old = atomic_cmpxchg(&lock->val, val, new);
+               /*
+                * Acquire semantics are required here as the function may
+                * return immediately if the lock was free.
+                */
+               old = atomic_cmpxchg_acquire(&lock->val, val, new);
                if (old == val)
                        break;
 
@@ -426,7 +440,12 @@ queue:
                        set_locked(lock);
                        break;
                }
-               old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
+               /*
+                * The smp_load_acquire() call above has provided the necessary
+                * acquire semantics required for locking. At most two
+                * iterations of this loop may be run.
+                */
+               old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
                if (old == val)
                        goto release;   /* No contention */
 
--
1.7.1