From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S970370AbdDTPIs (ORCPT );
	Thu, 20 Apr 2017 11:08:48 -0400
Received: from merlin.infradead.org ([205.233.59.134]:54184 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S970284AbdDTPIq (ORCPT );
	Thu, 20 Apr 2017 11:08:46 -0400
Date: Thu, 20 Apr 2017 17:08:26 +0200
From: Peter Zijlstra
To: "Paul E. McKenney"
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com,
	bobby.prani@gmail.com, dvyukov@google.com, will.deacon@arm.com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to
	sync_exp_work_done()
Message-ID: <20170420150826.n7r3omoy5hxbmtjw@hirez.programming.kicks-ass.net>
References: <20170413091832.phnfppqjjy6sislo@hirez.programming.kicks-ass.net>
	<20170413161042.GA3956@linux.vnet.ibm.com>
	<20170413162409.q5gsqfytjyirgfep@hirez.programming.kicks-ass.net>
	<20170413165755.GJ3956@linux.vnet.ibm.com>
	<20170413171027.snjqn4u54t2kdzgx@hirez.programming.kicks-ass.net>
	<20170413173951.GM3956@linux.vnet.ibm.com>
	<20170413175136.5qnzvqrmzyuvlqsj@hirez.programming.kicks-ass.net>
	<20170419232352.GC3956@linux.vnet.ibm.com>
	<20170420111743.qyn3zwcmwbx4kngu@hirez.programming.kicks-ass.net>
	<20170420150321.GM3956@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170420150321.GM3956@linux.vnet.ibm.com>
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 20, 2017 at 08:03:21AM -0700, Paul E. McKenney wrote:
> On Thu, Apr 20, 2017 at 01:17:43PM +0200, Peter Zijlstra wrote:
> > > +/**
> > > + * spin_is_locked - Conditionally interpose after prior critical sections
> > > + * @lock: the spinlock whose critical sections are to be interposed.
> > > + *
> > > + * Semantically this is equivalent to a spin_trylock(), and, if
> > > + * the spin_trylock() succeeds, immediately followed by a (mythical)
> > > + * spin_unlock_relaxed(). The return value from spin_trylock() is returned
> > > + * by spin_is_locked(). Note that all current architectures have extremely
> > > + * efficient implementations in which the spin_is_locked() does not even
> > > + * write to the lock variable.
> > > + *
> > > + * A successful spin_is_locked() primitive in some sense "takes its place"
> > > + * after some critical section for the lock in question. Any accesses
> > > + * following a successful spin_is_locked() call will therefore happen
> > > + * after any accesses by any of the preceding critical sections for that
> > > + * same lock. Note however, that spin_is_locked() provides absolutely no
> > > + * ordering guarantees for code preceding the call to that spin_is_locked().
> > > + */
> > > static __always_inline int spin_is_locked(spinlock_t *lock)
> > > {
> > > 	return raw_spin_is_locked(&lock->rlock);
> > 
> > I'm currently confused on this one. The case listed in the qspinlock
> > code doesn't appear to exist in the kernel anymore (or at least, I'm
> > having trouble finding it).
> > 
> > That said, I'm also not sure spin_is_locked() provides an acquire, as
> > that comment has an explicit smp_acquire__after_ctrl_dep();
> 
> OK, I have dropped this portion of the patch for the moment.
> 
> Going forward, exactly what semantics do you believe spin_is_locked()
> provides?
> 
> Do any of the current implementations need to change to provide the
> semantics expected by the various use cases?

I don't have anything other than the comment I wrote back then. I would
have to go audit all spin_is_locked() implementations and users (again).
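
For reference, the generic qspinlock flavour of spin_is_locked() at the
time boiled down to a plain (relaxed) read of the lock word, which is
what makes the acquire question above non-obvious. An abridged sketch
(comment paraphrased, not quoted verbatim from the tree):

/* include/asm-generic/qspinlock.h, roughly as of that era */
static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
{
	/*
	 * Any non-zero value of lock->val means the lock is held or
	 * contended; this is a relaxed read and by itself provides no
	 * acquire ordering.
	 */
	return atomic_read(&lock->val);
}

Any stronger ordering would have to come from an explicit barrier such as
smp_acquire__after_ctrl_dep() on the control dependency, which is the
pattern the qspinlock comment mentioned above relies on.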