From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com,
	paulmck@linux.vnet.ibm.com, longman@redhat.com,
	Will Deacon <will.deacon@arm.com>
Subject: [PATCH v2 06/13] locking/qspinlock: Use atomic_cond_read_acquire
Date: Wed, 11 Apr 2018 19:01:13 +0100	[thread overview]
Message-ID: <1523469680-17699-7-git-send-email-will.deacon@arm.com> (raw)
In-Reply-To: <1523469680-17699-1-git-send-email-will.deacon@arm.com>

Rather than digging into the counter field of the atomic_t inside the
qspinlock structure so that we can call smp_cond_load_acquire(), use
atomic_cond_read_acquire() instead, which operates on the atomic_t
directly.
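
For reference, atomic_cond_read_acquire() is just a thin wrapper that
performs the ->counter dereference on the caller's behalf, so callers
never have to know the atomic_t's internal layout. A minimal sketch of
the relationship, assuming the include/linux/atomic.h definition
current at the time of this series:

	/*
	 * Spin with ACQUIRE ordering until the condition expression 'c'
	 * (which may refer to the loaded value as VAL) holds on the
	 * atomic_t, by forwarding to smp_cond_load_acquire() on the
	 * underlying counter field.
	 */
	#define atomic_cond_read_acquire(v, c) \
		smp_cond_load_acquire(&(v)->counter, (c))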

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 01b660442d87..648a16a2cd23 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -377,8 +377,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		 * barriers.
 		 */
 		if (val & _Q_LOCKED_MASK) {
-			smp_cond_load_acquire(&lock->val.counter,
-					      !(VAL & _Q_LOCKED_MASK));
+			atomic_cond_read_acquire(&lock->val,
+						 !(VAL & _Q_LOCKED_MASK));
 		}
 
 		/*
@@ -481,8 +481,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 *
 	 * The PV pv_wait_head_or_lock function, if active, will acquire
 	 * the lock and return a non-zero value. So we have to skip the
-	 * smp_cond_load_acquire() call. As the next PV queue head hasn't been
-	 * designated yet, there is no way for the locked value to become
+	 * atomic_cond_read_acquire() call. As the next PV queue head hasn't
+	 * been designated yet, there is no way for the locked value to become
 	 * _Q_SLOW_VAL. So both the set_locked() and the
 	 * atomic_cmpxchg_relaxed() calls will be safe.
 	 *
@@ -492,7 +492,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if ((val = pv_wait_head_or_lock(lock, node)))
 		goto locked;
 
-	val = smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_PENDING_MASK));
+	val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));
 
 locked:
 	/*
@@ -509,7 +509,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	/* In the PV case we might already have _Q_LOCKED_VAL set */
 	if ((val & _Q_TAIL_MASK) == tail) {
 		/*
-		 * The smp_cond_load_acquire() call above has provided the
+		 * The atomic_cond_read_acquire() call above has provided the
 		 * necessary acquire semantics required for locking.
 		 */
 		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
-- 
2.1.4
