From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Alan Stern" <stern@rowland.harvard.edu>,
	"Andrea Parri" <parri.andrea@gmail.com>,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
	priyalee.kushwaha@intel.com,
	"Stanisław Drozd" <drozdziak1@gmail.com>,
	"Arnd Bergmann" <arnd@arndb.de>,
	ldr709@gmail.com, "Thomas Gleixner" <tglx@linutronix.de>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Josh Triplett" <josh@joshtriplett.org>,
	"Nicolas Pitre" <nico@linaro.org>,
	"Krister Johansen" <kjlx@templeofstupid.com>,
	"Vegard Nossum" <vegard.nossum@oracle.com>,
	dcb314@hotmail.com, "Wu Fengguang" <fengguang.wu@intel.com>,
	"Frederic Weisbecker" <fweisbec@gmail.com>,
	"Rik van Riel" <riel@redhat.com>,
	"Steven Rostedt" <rostedt@goodmis.org>,
	"Ingo Molnar" <mingo@kernel.org>,
	"Luc Maranget" <luc.maranget@inria.fr>,
	"Jade Alglave" <j.alglave@ucl.ac.uk>
Subject: Re: [GIT PULL rcu/next] RCU commits for 4.13
Date: Wed, 28 Jun 2017 17:45:56 -0700
Message-ID: <20170629004556.GD3721@linux.vnet.ibm.com>
In-Reply-To: <CA+55aFwLq5oPvY5HwpKq9LkCuQ5No7O5g=+ij1T68ONu-gOm_Q@mail.gmail.com>

On Wed, Jun 28, 2017 at 05:05:46PM -0700, Linus Torvalds wrote:
> On Wed, Jun 28, 2017 at 4:54 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> >
> > Linus, are you dead-set against defining spin_unlock_wait() to be
> > spin_lock + spin_unlock?  For example, is the current x86 implementation
> > of spin_unlock_wait() really a non-negotiable hard requirement?  Or
> > would you be willing to live with the spin_lock + spin_unlock semantics?
> 
> So I think the "same as spin_lock + spin_unlock" semantics are kind of insane.
> 
> One of the issues is that "the same as spin_lock + spin_unlock" is
> basically now architecture-dependent. Is it really the
> architecture-dependent ordering you want to define this as?
> 
> So I just think it's a *bad* definition. If somebody wants something
> that is exactly equivalent to spin_lock+spin_unlock, then dammit, just
> do *THAT*. It's completely pointless to me to define
> spin_unlock_wait() in those terms.
> 
> And if it's not equivalent to the *architecture* behavior of
> spin_lock+spin_unlock, then I think it should be described in terms
> that aren't about the architecture implementation (so you shouldn't
> describe it as "spin_lock+spin_unlock", you should describe it in
> terms of memory barrier semantics).
> 
> And if we really have to use the spin_lock+spin_unlock semantics for
> this, then what is the advantage of spin_unlock_wait at all, if it
> doesn't fundamentally avoid some locking overhead of just taking the
> spinlock in the first place?
> 
> And if we can't use a cheaper model, maybe we should just get rid of
> it entirely?
> 
> Finally: if the memory barrier semantics are exactly the same, and
> it's purely about avoiding some nasty contention case, I think the
> concept is broken - contention is almost never an actual issue, and if
> it is, the problem is much deeper than spin_unlock_wait().

All good points!

I must confess that your sentence about getting rid of spin_unlock_wait()
entirely does resonate with me, especially given the repeated bouts of
"but what -exactly- is it -supposed- to do?" over the past 18 months
or so.  ;-)

Just for completeness, here is a list of the definitions that have been
put forward, in case it inspires someone to come up with something
better:

1.	spin_unlock_wait() provides only acquire semantics.  Code
	placed after the spin_unlock_wait() will see the effects of
	all previous critical sections, but there are no guarantees
	about subsequent critical sections.  The x86 implementation
	provides this.  I -think- that the ARM and PowerPC
	implementations could get rid of a memory-barrier instruction
	and still provide this.  (A sketch contrasting #1 and #2
	appears after this list.)

2.	As #1 above, but a "smp_mb(); spin_unlock_wait();" provides the
	additional guarantee that code placed before this construct is
	seen by all subsequent critical sections.  The x86 implementation
	provides this, as do ARM and PowerPC, but it is not clear that all
	architectures do.  As Alan noted, this is an extremely unnatural
	definition for the current memory model.

3.	[ Just for completeness, yes, this is off the table! ]  The
	spin_unlock_wait() has the same semantics as a spin_lock()
	followed immediately by a spin_unlock().

4.	spin_unlock_wait() is analogous to synchronize_rcu(), where
	spin_unlock_wait()'s "read-side critical sections" are the lock's
	normal critical sections.  This was the first definition I heard
	that made any sense to me, but it turns out to be equivalent
	to #3.	Thus, also off the table.
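
To make the difference between #1 and #2 concrete, here is a rough
two-CPU sketch.  The lock, the variables x and y, and the split of
work across CPUs are invented purely for illustration, not taken from
any in-tree code:

	DEFINE_SPINLOCK(lock);		/* Hypothetical lock and data. */
	int x, y;

	/* CPU 0: an ordinary critical section. */
	void cpu0(void)
	{
		spin_lock(&lock);
		WRITE_ONCE(x, 1);
		spin_unlock(&lock);
	}

	/* CPU 1: the waiter. */
	void cpu1(void)
	{
		int r1;

		WRITE_ONCE(y, 1);		/* Guaranteed visible to later
						 * critical sections only under #2. */
		smp_mb();			/* Needed for #2, irrelevant to #1. */
		spin_unlock_wait(&lock);	/* Wait until &lock is not held. */
		r1 = READ_ONCE(x);		/* #1: sees x == 1 if CPU 0's critical
						 * section completed before the wait
						 * returned. */
	}

Under #1, once spin_unlock_wait() returns, CPU 1's subsequent load of
x is guaranteed to see the store from any critical section that
completed beforehand, but nothing is said about CPU 1's earlier store
to y.  Under #2, the smp_mb() additionally guarantees that the store
to y is seen by any critical section that begins afterward.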

Does anyone know of any other possible definitions?

							Thanx, Paul

Thread overview: 27+ messages
2017-06-12 21:37 [GIT PULL rcu/next] RCU commits for 4.13 Paul E. McKenney
2017-06-13  6:41 ` Ingo Molnar
2017-06-14  2:54 ` Andrea Parri
2017-06-14  4:33   ` Paul E. McKenney
2017-06-14 14:33     ` Andrea Parri
2017-06-14 20:23       ` Paul E. McKenney
2017-06-19 16:24         ` Andrea Parri
2017-06-27 20:58           ` Paul E. McKenney
2017-06-27 21:48             ` Linus Torvalds
2017-06-27 23:37               ` Paul E. McKenney
2017-06-28 15:31                 ` Alan Stern
2017-06-28 17:03                   ` Paul E. McKenney
2017-06-28 20:16                     ` Alan Stern
2017-06-28 23:54                       ` Paul E. McKenney
2017-06-29  0:05                         ` Linus Torvalds
2017-06-29  0:45                           ` Paul E. McKenney [this message]
2017-06-29  3:17                             ` Boqun Feng
2017-06-29 18:47                               ` Paul E. McKenney
2017-06-29 11:36                             ` Will Deacon
2017-06-29 11:38                           ` Will Deacon
2017-06-29 15:59                             ` Alan Stern
2017-06-29 18:11                               ` Paul E. McKenney
2017-06-30  2:51                                 ` Boqun Feng
2017-06-30  4:02                                   ` Paul E. McKenney
2017-06-30  5:16                                     ` Boqun Feng
2017-06-30 17:31                                       ` Paul E. McKenney
2017-06-29 18:46                             ` Paul E. McKenney
