From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Kirill Tkhai <ktkhai@parallels.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	oleg@redhat.com
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles
Date: Tue, 17 Feb 2015 10:01:54 -0800
Message-ID: <20150217180154.GA28082@linux.vnet.ibm.com>
In-Reply-To: <20150217160532.GW4166@linux.vnet.ibm.com>

On Tue, Feb 17, 2015 at 08:05:32AM -0800, Paul E. McKenney wrote:
> On Tue, Feb 17, 2015 at 02:05:23PM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 17, 2015 at 01:12:58PM +0100, Peter Zijlstra wrote:
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -341,6 +341,22 @@ static struct rq *task_rq_lock(struct ta
> > >  		raw_spin_lock_irqsave(&p->pi_lock, *flags);
> > >  		rq = task_rq(p);
> > >  		raw_spin_lock(&rq->lock);
> > > +		/*
> > > +		 *	move_queued_task()		task_rq_lock()
> > > +		 *
> > > +		 *	ACQUIRE (rq->lock)
> > > +		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
> > > +		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
> > > +		 *	[S] ->cpu = new_cpu		[L] task_rq()
> > > +		 *					[L] ->on_rq
> > > +		 *	RELEASE (rq->lock)
> > > +		 *
> > > +		 * If we observe the old cpu in task_rq_lock, the acquire of
> > > +		 * the old rq->lock will fully serialize against the stores.
> > > +		 *
> > > +		 * If we observe the new cpu in task_rq_lock, the acquire will
> > > +		 * pair with the WMB to ensure we must then also see migrating.
> > > +		 */
> > >  		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
> > >  			return rq;
> > >  		raw_spin_unlock(&rq->lock);
> > 
> > Hey Paul, remember this: https://lkml.org/lkml/2014/7/16/310
> 
> I do now.  ;-)
> 
> > I just used a creative one :-)
> 
> The scenario above?
> 
> > BTW, should we attempt to include that table in memory-barriers.txt like
> > Mathieu said? As a cheat sheet with references to longer explanations
> > for the 'interesting' ones?
> > 
> > FWIW, we should probably update that table to include control
> > dependencies too; we didn't (formally) have those back then I think.
> > 
> > The blob under SMP BARRIER PAIRING does not mention pairing with control
> > dependencies; and I'm rather sure I've done so.

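As an aside, the ordering pattern in the quoted comment can be sketched outside
the kernel roughly as follows.  This is an illustrative, standalone translation
only: C11 atomics and pthread mutexes stand in for the kernel's loads, stores,
barriers, and rq->lock, and the fields, locks, and function names below are
made up for the example rather than taken from the scheduler.

#include <pthread.h>
#include <stdatomic.h>

#define ON_RQ_QUEUED	1
#define ON_RQ_MIGRATING	2

static pthread_mutex_t rq_lock[2] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static _Atomic int task_cpu;				/* stands in for ->cpu    */
static _Atomic int task_on_rq = ON_RQ_QUEUED;		/* stands in for ->on_rq  */

/* Roughly move_queued_task(): migrate the task from cpu 0 to cpu 1. */
static void *migrate(void *arg)
{
	pthread_mutex_lock(&rq_lock[0]);		/* ACQUIRE (old rq->lock)  */
	atomic_store_explicit(&task_on_rq, ON_RQ_MIGRATING,
			      memory_order_relaxed);	/* [S] ->on_rq = MIGRATING */
	atomic_thread_fence(memory_order_release);	/* WMB (__set_task_cpu())  */
	atomic_store_explicit(&task_cpu, 1,
			      memory_order_relaxed);	/* [S] ->cpu = new_cpu     */
	pthread_mutex_unlock(&rq_lock[0]);		/* RELEASE (old rq->lock)  */

	pthread_mutex_lock(&rq_lock[1]);		/* ACQUIRE (new rq->lock)  */
	atomic_store_explicit(&task_on_rq, ON_RQ_QUEUED,
			      memory_order_relaxed);	/* enqueue on the new rq   */
	pthread_mutex_unlock(&rq_lock[1]);
	return NULL;
}

/* Roughly task_rq_lock(): retry until the observed rq is stable. */
static void *lock_task_rq(void *arg)
{
	int cpu;

	for (;;) {
		cpu = atomic_load_explicit(&task_cpu,
					   memory_order_relaxed);	/* [L] rq = task_rq() */
		pthread_mutex_lock(&rq_lock[cpu]);			/* ACQUIRE (rq->lock) */
		if (cpu == atomic_load_explicit(&task_cpu,
						memory_order_relaxed) &&	/* [L] task_rq()  */
		    atomic_load_explicit(&task_on_rq,
					 memory_order_relaxed) != ON_RQ_MIGRATING) /* [L] ->on_rq */
			break;			/* stable rq observed, lock held */
		pthread_mutex_unlock(&rq_lock[cpu]);	/* raced with migration, retry */
	}
	/* ... work on the task's rq would go here ... */
	pthread_mutex_unlock(&rq_lock[cpu]);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, migrate, NULL);
	pthread_create(&b, NULL, lock_task_rq, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

This is only a shape-of-the-code sketch, not a memory-model argument; whether
the lock acquire alone is enough on the reader side is exactly what the
smp_rmb() discussion in this thread is about.
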
And here is a patch for the control-dependency pairing.  Thoughts?

							Thanx, Paul

------------------------------------------------------------------------

documentation: Clarify control-dependency pairing

This commit explicitly states that control dependencies pair normally
with other barriers, and gives an example of such pairing.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index ca2387ef27ab..b09880086d96 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -592,9 +592,9 @@ See also the subsection on "Cache Coherency" for a more thorough example.
 CONTROL DEPENDENCIES
 --------------------
 
-A control dependency requires a full read memory barrier, not simply a data
-dependency barrier to make it work correctly.  Consider the following bit of
-code:
+A load-load control dependency requires a full read memory barrier, not
+simply a data dependency barrier to make it work correctly.  Consider the
+following bit of code:
 
 	q = ACCESS_ONCE(a);
 	if (q) {
@@ -615,14 +615,15 @@ case what's actually required is:
 	}
 
 However, stores are not speculated.  This means that ordering -is- provided
-in the following example:
+for load-store control dependencies, as in the following example:
 
 	q = ACCESS_ONCE(a);
 	if (q) {
 		ACCESS_ONCE(b) = p;
 	}
 
-Please note that ACCESS_ONCE() is not optional!  Without the
+Control dependencies pair normally with other types of barriers.
+That said, please note that ACCESS_ONCE() is not optional!  Without the
 ACCESS_ONCE(), the compiler might combine the load from 'a' with other loads from
 'a', and the store to 'b' with other stores to 'b', with possible highly
 counterintuitive effects on ordering.
@@ -813,6 +814,8 @@ In summary:
       barrier() can help to preserve your control dependency.  Please
       see the Compiler Barrier section for more information.
 
+  (*) Control dependencies pair normally with other types of barriers.
+
   (*) Control dependencies do -not- provide transitivity.  If you
       need transitivity, use smp_mb().
 
@@ -850,6 +853,19 @@ Or:
 			      <data dependency barrier>
 			      y = *x;
 
+Or even:
+
+	CPU 1		      CPU 2
+	===============	      ===============================
+	r1 = ACCESS_ONCE(y);
+	<general barrier>
+	ACCESS_ONCE(x) = 1;   if (r2 = ACCESS_ONCE(x)) {
+			         <implicit control dependency>
+			         ACCESS_ONCE(y) = 1;
+			      }
+
+	assert(r1 == 0 || r2 == 0);
+
 Basically, the read barrier always has to be there, even though it can be of
 the "weaker" type.
 


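For a rough sanity check of the new pairing example, it can also be written as
a runnable litmus test.  This is an illustrative translation only: C11 atomics
and a seq_cst fence stand in for ACCESS_ONCE() and the general barrier, and the
C11 memory model does not formally recognize control dependencies, so the
conditional store on the second thread relies on what compilers and hardware do
in practice, just as the kernel pattern does.

#include <pthread.h>
#include <stdatomic.h>
#include <assert.h>

static _Atomic int x, y;
static int r1, r2;

/* CPU 1: read y, full barrier, then write x. */
static void *cpu1(void *arg)
{
	r1 = atomic_load_explicit(&y, memory_order_relaxed);	/* r1 = ACCESS_ONCE(y)   */
	atomic_thread_fence(memory_order_seq_cst);		/* <general barrier>     */
	atomic_store_explicit(&x, 1, memory_order_relaxed);	/* ACCESS_ONCE(x) = 1    */
	return NULL;
}

/* CPU 2: write y only if x was seen non-zero (control dependency). */
static void *cpu2(void *arg)
{
	r2 = atomic_load_explicit(&x, memory_order_relaxed);	/* if (r2 = ACCESS_ONCE(x))  */
	if (r2)
		atomic_store_explicit(&y, 1, memory_order_relaxed); /* ACCESS_ONCE(y) = 1  */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, cpu1, NULL);
	pthread_create(&b, NULL, cpu2, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	assert(r1 == 0 || r2 == 0);	/* should never fire */
	return 0;
}

A single run only exercises one interleaving, so in practice such tests get run
in a loop or fed to a litmus tool such as herd; the point here is just to show
which accesses the barrier and the control dependency are meant to order.
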
