From: Dmitry Vyukov
Date: Mon, 12 Jun 2017 16:51:43 +0200
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to sync_exp_work_done()
To: Paul McKenney
Cc: Peter Zijlstra, LKML, Ingo Molnar, Lai Jiangshan, dipankar@in.ibm.com, Andrew Morton, Mathieu Desnoyers, Josh Triplett, Thomas Gleixner, Steven Rostedt, David Howells, Eric Dumazet, fweisbec@gmail.com, Oleg Nesterov, bobby.prani@gmail.com, Will Deacon, Andrea Parri, hiralpat@cisco.com, satishkh@cisco.com, sebaddel@cisco.com, kartilak@cisco.com

On Sat, Jun 10, 2017 at 12:56 AM, Paul E. McKenney wrote:
>> > > > +/**
>> > > > + * spin_is_locked - Conditionally interpose after prior critical sections
>> > > > + * @lock: the spinlock whose critical sections are to be interposed.
>> > > > + *
>> > > > + * Semantically this is equivalent to a spin_trylock(), and, if
>> > > > + * the spin_trylock() succeeds, immediately followed by a (mythical)
>> > > > + * spin_unlock_relaxed(). The return value from spin_trylock() is returned
>> > > > + * by spin_is_locked(). Note that all current architectures have extremely
>> > > > + * efficient implementations in which the spin_is_locked() does not even
>> > > > + * write to the lock variable.
>> > > > + *
>> > > > + * A successful spin_is_locked() primitive in some sense "takes its place"
>> > > > + * after some critical section for the lock in question. Any accesses
>> > > > + * following a successful spin_is_locked() call will therefore happen
>> > > > + * after any accesses by any of the preceding critical section for that
>> > > > + * same lock. Note however, that spin_is_locked() provides absolutely no
>> > > > + * ordering guarantees for code preceding the call to that spin_is_locked().
>> > > > + */
>> > > >  static __always_inline int spin_is_locked(spinlock_t *lock)
>> > > >  {
>> > > >  	return raw_spin_is_locked(&lock->rlock);
>> > >
>> > > I'm currently confused on this one. The case listed in the qspinlock code
>> > > doesn't appear to exist in the kernel anymore (or at least, I'm having
>> > > trouble finding it).
>> > >
>> > > That said, I'm also not sure spin_is_locked() provides an acquire, as
>> > > that comment has an explicit smp_acquire__after_ctrl_dep();
>> >
>> > OK, I have dropped this portion of the patch for the moment.
>> >
>> > Going forward, exactly what semantics do you believe spin_is_locked()
>> > provides?
>> >
>> > Do any of the current implementations need to change to provide the
>> > semantics expected by the various use cases?
>>
>> I don't have anything other than the comment I wrote back then. I would
>> have to go audit all spin_is_locked() implementations and users (again).
>
> And Andrea (CCed) and I did a review of the v4.11 uses of
> spin_is_locked(), and none of the current uses requires any particular
> ordering.
>
> There is one very strange use of spin_is_locked() in __fnic_set_state_flags()
> in drivers/scsi/fnic/fnic_scsi.c. This code checks spin_is_locked(),
> and then acquires the lock only if it wasn't held. I am having a very
> hard time imagining a situation where this would do something useful.
> My guess is that the author thought that spin_is_locked() meant that
> the current CPU holds the lock, when it instead means that some CPU
> (possibly the current one, possibly not) holds the lock.
>
> Adding the FNIC guys on CC so that they can enlighten me.
>
> Ignoring the FNIC use case for the moment, anyone believe that
> spin_is_locked() needs to provide any ordering guarantees?

Not providing any ordering guarantees for spin_is_locked() sounds good
to me.

Restricting all types of mutexes/locks to the simple canonical use case
(protecting a critical section of code) makes it easier to reason about
code, enables a range of static/dynamic correctness checks, and relieves
the lock/unlock functions from providing unnecessary ordering (i.e. an
acquire in spin_is_locked() pairing with a release in spin_lock()).
Tricky uses of is_locked and try_lock can resort to atomic operations
(or maybe be removed).
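
To make the FNIC concern above concrete, here is a rough sketch of the
"check spin_is_locked(), then take the lock only if it wasn't held"
shape that Paul describes. This is not the actual
drivers/scsi/fnic/fnic_scsi.c code; the structure and field names below
are invented for illustration.

#include <linux/spinlock.h>

/* Illustrative only -- NOT the real __fnic_set_state_flags(). */
struct foo_dev {
	spinlock_t lock;
	unsigned long state_flags;
};

static void foo_set_state_flags(struct foo_dev *dev, unsigned long set)
{
	unsigned long flags = 0;
	int already_locked = spin_is_locked(&dev->lock);

	/*
	 * spin_is_locked() returns true when *some* CPU holds the lock,
	 * not necessarily this one.  So if another CPU is in its
	 * critical section right now, already_locked is true, the
	 * lock/unlock below are skipped, and state_flags is updated
	 * with no lock held at all, racing with that other CPU.
	 */
	if (!already_locked)
		spin_lock_irqsave(&dev->lock, flags);

	dev->state_flags |= set;

	if (!already_locked)
		spin_unlock_irqrestore(&dev->lock, flags);
}

If the intent really is "set these flags whether or not the caller
already holds the lock", that is the kind of tricky use that, per the
suggestion above, is probably better expressed with an atomic bitmask
(e.g. set_bit() on an unsigned long) or with separate locked/unlocked
variants of the function, rather than by inferring lock ownership from
spin_is_locked().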