Date: Sat, 10 Jun 2017 17:02:21 +0200
From: Andrea Parri
To: "Paul E. McKenney", peterz@infradead.org
Cc: mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, tglx@linutronix.de, rostedt@goodmis.org,
	dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com,
	oleg@redhat.com, bobby.prani@gmail.com, stern@rowland.harvard.edu,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH tip/core/rcu 20/88] atomics: Add header comment so spin_unlock_wait()
Message-ID: <20170610150221.GA7128@andrea>
References: <20170525215934.GA11578@linux.vnet.ibm.com>
	<1495749601-21574-20-git-send-email-paulmck@linux.vnet.ibm.com>
In-Reply-To: <1495749601-21574-20-git-send-email-paulmck@linux.vnet.ibm.com>

On Thu, May 25, 2017 at 02:58:53PM -0700, Paul E. McKenney wrote:
> There is material describing the ordering guarantees provided by
> spin_unlock_wait(), but it is not necessarily easy to find.  This
> commit therefore adds a docbook header comment to this function
> informally describing its semantics.
> 
> Signed-off-by: Paul E. McKenney
> Acked-by: Peter Zijlstra
> ---
>  include/linux/spinlock.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 59248dcc6ef3..d9510e8522d4 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -369,6 +369,26 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
>  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
>  })
> 
> +/**
> + * spin_unlock_wait - Interpose between successive critical sections
> + * @lock: the spinlock whose critical sections are to be interposed.
> + *
> + * Semantically this is equivalent to a spin_lock() immediately
> + * followed by a spin_unlock().  However, most architectures have
> + * more efficient implementations in which the spin_unlock_wait()
> + * cannot block concurrent lock acquisition, and in some cases
> + * where spin_unlock_wait() does not write to the lock variable.
> + * Nevertheless, spin_unlock_wait() can have high overhead, so if
> + * you feel the need to use it, please check to see if there is
> + * a better way to get your job done.
> + *
> + * The ordering guarantees provided by spin_unlock_wait() are:
> + *
> + * 1.  All accesses preceding the spin_unlock_wait() happen before
> + *     any accesses in later critical sections for this same lock.
> + * 2.  All accesses following the spin_unlock_wait() happen after
> + *     any accesses in earlier critical sections for this same lock.
> + */
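For concreteness, here is how I read guarantee 1, as a two-CPU sketch
(the snippet is mine, not from the patch; demo_lock, flag, and the
cpuN() functions are made-up names):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* made-up lock */
static int flag;			/* made-up shared variable */

void cpu0(void)
{
	WRITE_ONCE(flag, 1);		/* access preceding the wait */
	spin_unlock_wait(&demo_lock);	/* interpose after any current CS */
}

void cpu1(void)
{
	int r0;

	spin_lock(&demo_lock);		/* a "later" critical section */
	r0 = READ_ONCE(flag);
	spin_unlock(&demo_lock);

	/*
	 * Guarantee 1: if this critical section begins after cpu0()'s
	 * spin_unlock_wait() has returned, then r0 == 1.
	 */
}

Guarantee 2 is the symmetric case, ordering cpu0()'s accesses after
the wait against any critical section that completed before it.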
[From a discussion with Paul, Alan]

I understand that some implementations would need to be "strengthened"
to meet these "spin_lock(); spin_unlock()" semantics; please compare
with commit 726328d92a42b6d4b76078e2659f43067f82c4e8 ("locking/spinlock,
arch: Update and fix spin_unlock_wait() implementations").

Should we "relax" this description?  Should we integrate it with
changes to the implementation(s)?  [...]

What do you think?

  Andrea


>  static __always_inline void spin_unlock_wait(spinlock_t *lock)
>  {
>  	raw_spin_unlock_wait(&lock->rlock);
>  }
> -- 
> 2.5.2
> 
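P.S.  For reference, the strengthened wait loop after 726328d92a42 has
roughly this shape, as I recall it (per-architecture details differ,
and "locked" is a made-up field name):

static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
	/*
	 * Spin until the lock is observed free; smp_cond_load_acquire()
	 * upgrades the control dependency on the final load to ACQUIRE,
	 * ordering this CPU's subsequent accesses after the previous
	 * critical section (guarantee 2 above).
	 */
	smp_cond_load_acquire(&lock->locked, !VAL);
}

As I understand it, an acquire-only loop of this kind does not by
itself give guarantee 1, which is why the full "spin_lock();
spin_unlock()" semantics may require additional barriers.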