From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, oleg@redhat.com
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
	paulmck@linux.vnet.ibm.com, boqun.feng@gmail.com, corbet@lwn.net,
	mhocko@kernel.org, dhowells@redhat.com,
	torvalds@linux-foundation.org, will.deacon@arm.com,
	waiman.long@hpe.com, pjt@google.com
Subject: [PATCH 3/4] locking: Introduce smp_cond_acquire()
Date: Thu, 03 Dec 2015 13:40:13 +0100
Message-ID: <20151203124339.552838970@infradead.org>
In-Reply-To: <20151203124010.627312076@infradead.org>


Introduce smp_cond_acquire() which combines a control dependency and a
read barrier to form acquire semantics.

This primitive has two benefits:
 - it documents control dependencies,
 - it's typically cheaper than using smp_load_acquire() in a loop (sketched below).
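
For illustration only, a minimal before/after sketch of the usage pattern;
the flags variable and its DONE bit are invented for this example and are
not part of the patch:

	/* before: a full ACQUIRE is paid on every loop iteration */
	while (!(smp_load_acquire(&flags) & DONE))
		cpu_relax();

	/* after: plain loads while spinning, a single smp_rmb() once the
	 * condition holds; the control dependency supplies the LOAD->STORE
	 * half of the ordering */
	smp_cond_acquire(READ_ONCE(flags) & DONE);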

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/compiler.h   |   17 +++++++++++++++++
 kernel/locking/qspinlock.c |    3 +--
 kernel/sched/core.c        |    8 +-------
 kernel/sched/sched.h       |    2 +-
 4 files changed, 20 insertions(+), 10 deletions(-)

--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -299,6 +299,23 @@ static __always_inline void __write_once
 	__u.__val;					\
 })
 
+/**
+ * smp_cond_acquire() - Spin wait for cond with ACQUIRE ordering
+ * @cond: boolean expression to wait for
+ *
+ * Equivalent to using smp_load_acquire() on the condition variable but employs
+ * the control dependency of the wait to reduce the barrier on many platforms.
+ *
+ * The control dependency provides a LOAD->STORE order, the additional RMB
+ * provides LOAD->LOAD order, together they provide LOAD->{LOAD,STORE} order,
+ * aka. ACQUIRE.
+ */
+#define smp_cond_acquire(cond)	do {		\
+	while (!(cond))				\
+		cpu_relax();			\
+	smp_rmb(); /* ctrl + rmb := acquire */	\
+} while (0)
+
 #endif /* __KERNEL__ */
 
 #endif /* __ASSEMBLY__ */
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -446,8 +446,7 @@ void queued_spin_lock_slowpath(struct qs
 	if ((val = pv_wait_head_or_lock(lock, node)))
 		goto locked;
 
-	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
-		cpu_relax();
+	smp_cond_acquire(!((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK));
 
 locked:
 	/*
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1979,19 +1979,13 @@ try_to_wake_up(struct task_struct *p, un
 	/*
 	 * If the owning (remote) cpu is still in the middle of schedule() with
 	 * this task as prev, wait until it's done referencing the task.
-	 */
-	while (p->on_cpu)
-		cpu_relax();
-	/*
-	 * Combined with the control dependency above, we have an effective
-	 * smp_load_acquire() without the need for full barriers.
 	 *
 	 * Pairs with the smp_store_release() in finish_lock_switch().
 	 *
 	 * This ensures that tasks getting woken will be fully ordered against
 	 * their previous state and preserve Program Order.
 	 */
-	smp_rmb();
+	smp_cond_acquire(!p->on_cpu);
 
 	p->sched_contributes_to_load = !!task_contributes_to_load(p);
 	p->state = TASK_WAKING;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1079,7 +1079,7 @@ static inline void finish_lock_switch(st
 	 * In particular, the load of prev->state in finish_task_switch() must
 	 * happen before this.
 	 *
-	 * Pairs with the control dependency and rmb in try_to_wake_up().
+	 * Pairs with the smp_cond_acquire() in try_to_wake_up().
 	 */
 	smp_store_release(&prev->on_cpu, 0);
 #endif
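
The core.c and sched.h hunks rely on smp_cond_acquire() pairing with
smp_store_release(); a minimal message-passing sketch of that pairing,
with both variables invented for the example and initially zero:

	int data, done;

	/* writer, cf. finish_lock_switch() */
	data = 42;
	smp_store_release(&done, 1);		/* orders the store to data first */

	/* reader, cf. try_to_wake_up() */
	smp_cond_acquire(READ_ONCE(done));	/* ctrl dep + rmb := ACQUIRE */
	BUG_ON(data != 42);			/* cannot fire: the acquire orders
						 * this load after done is seen */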




Thread overview: 18+ messages
2015-12-03 12:40 [PATCH 0/4] scheduler ordering bits -v2 Peter Zijlstra
2015-12-03 12:40 ` [PATCH 1/4] sched: Better document the try_to_wake_up() barriers Peter Zijlstra
2015-12-03 12:40 ` [PATCH 2/4] sched: Fix a race in try_to_wake_up() vs schedule() Peter Zijlstra
2015-12-03 12:40 ` Peter Zijlstra [this message]
2015-12-03 16:37   ` [PATCH 3/4] locking: Introduce smp_cond_acquire() Will Deacon
2015-12-03 20:26     ` Peter Zijlstra
2015-12-03 21:16       ` Peter Zijlstra
2015-12-04 14:57       ` Will Deacon
2015-12-04 20:51       ` Waiman Long
2015-12-04 22:05         ` Linus Torvalds
2015-12-04 22:48           ` Waiman Long
2015-12-04 23:43           ` Peter Zijlstra
2015-12-07 15:18             ` Will Deacon
2015-12-03 19:41   ` Davidlohr Bueso
2015-12-03 20:31     ` Peter Zijlstra
2015-12-03 12:40 ` [PATCH 4/4] sched: Document Program-Order guarantees Peter Zijlstra
2015-12-03 13:16   ` Boqun Feng
2015-12-03 13:29     ` Peter Zijlstra
