Date: Mon, 12 Jun 2017 14:54:29 -0700
From: "Paul E. McKenney"
To: Dmitry Vyukov
Cc: Peter Zijlstra, LKML, Ingo Molnar, Lai Jiangshan, dipankar@in.ibm.com,
	Andrew Morton, Mathieu Desnoyers, Josh Triplett, Thomas Gleixner,
	Steven Rostedt, David Howells, Eric Dumazet, fweisbec@gmail.com,
	Oleg Nesterov, bobby.prani@gmail.com, Will Deacon, Andrea Parri,
	hiralpat@cisco.com, satishkh@cisco.com, sebaddel@cisco.com,
	kartilak@cisco.com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to sync_exp_work_done()
Reply-To: paulmck@linux.vnet.ibm.com
Message-Id: <20170612215429.GH3721@linux.vnet.ibm.com>
References: <20170413165755.GJ3956@linux.vnet.ibm.com>
 <20170413171027.snjqn4u54t2kdzgx@hirez.programming.kicks-ass.net>
 <20170413173951.GM3956@linux.vnet.ibm.com>
 <20170413175136.5qnzvqrmzyuvlqsj@hirez.programming.kicks-ass.net>
 <20170419232352.GC3956@linux.vnet.ibm.com>
 <20170420111743.qyn3zwcmwbx4kngu@hirez.programming.kicks-ass.net>
 <20170420150321.GM3956@linux.vnet.ibm.com>
 <20170420150826.n7r3omoy5hxbmtjw@hirez.programming.kicks-ass.net>
 <20170609225646.GR3721@linux.vnet.ibm.com>

On Mon, Jun 12, 2017 at 04:51:43PM +0200, Dmitry Vyukov wrote:
> On Sat, Jun 10, 2017 at 12:56 AM, Paul E. McKenney
> wrote:
> >> > > > +/**
> >> > > > + * spin_is_locked - Conditionally interpose after prior critical sections
> >> > > > + * @lock: the spinlock whose critical sections are to be interposed.
> >> > > > + *
> >> > > > + * Semantically this is equivalent to a spin_trylock(), and, if
> >> > > > + * the spin_trylock() succeeds, immediately followed by a (mythical)
> >> > > > + * spin_unlock_relaxed().  The return value from spin_trylock() is
> >> > > > + * returned by spin_is_locked().  Note that all current architectures
> >> > > > + * have extremely efficient implementations in which the
> >> > > > + * spin_is_locked() does not even write to the lock variable.
> >> > > > + *
> >> > > > + * A successful spin_is_locked() primitive in some sense "takes its
> >> > > > + * place" after some critical section for the lock in question.  Any
> >> > > > + * accesses following a successful spin_is_locked() call will therefore
> >> > > > + * happen after any accesses by any of the preceding critical sections
> >> > > > + * for that same lock.  Note, however, that spin_is_locked() provides
> >> > > > + * absolutely no ordering guarantees for code preceding the call to
> >> > > > + * that spin_is_locked().
> >> > > > + */
> >> > > >  static __always_inline int spin_is_locked(spinlock_t *lock)
> >> > > >  {
> >> > > >  	return raw_spin_is_locked(&lock->rlock);
> >> > >
> >> > > I'm currently confused on this one.  The case listed in the qspinlock
> >> > > code doesn't appear to exist in the kernel anymore (or at least, I'm
> >> > > having trouble finding it).
> >> > >
> >> > > That said, I'm also not sure spin_is_locked() provides an acquire, as
> >> > > that comment has an explicit smp_acquire__after_ctrl_dep();
> >> >
> >> > OK, I have dropped this portion of the patch for the moment.
> >> >
> >> > Going forward, exactly what semantics do you believe spin_is_locked()
> >> > provides?
> >> >
> >> > Do any of the current implementations need to change to provide the
> >> > semantics expected by the various use cases?
> >>
> >> I don't have anything other than the comment I wrote back then.  I would
> >> have to go audit all spin_is_locked() implementations and users (again).
> >
> > And Andrea (CCed) and I did a review of the v4.11 uses of
> > spin_is_locked(), and none of the current uses requires any particular
> > ordering.
> >
> > There is one very strange use of spin_is_locked() in
> > __fnic_set_state_flags() in drivers/scsi/fnic/fnic_scsi.c.  This code
> > checks spin_is_locked(), and then acquires the lock only if it wasn't
> > held.  I am having a very hard time imagining a situation where this
> > would do something useful.  My guess is that the author thought that
> > spin_is_locked() meant that the current CPU holds the lock, when it
> > instead means that some CPU (possibly the current one, possibly not)
> > holds the lock.
> >
> > Adding the FNIC guys on CC so that they can enlighten me.

And if my guess is correct, the usual fix is to use a variable to track
which CPU is holding the lock, with -1 indicating that no CPU holds it.
Then the spin_is_locked() check becomes a test of whether the value of
this variable is equal to the ID of the current CPU, with preemption
disabled across the test.

If this makes no sense, let me know, and I can supply a prototype patch.
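For concreteness, here is a rough and untested sketch of the shape such
a patch might take.  The example_lock, example_owner_cpu, and helper
names are made up for illustration; an actual patch would of course use
fnic's own lock and structures:

#include <linux/preempt.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/* -1 when no CPU holds example_lock, otherwise the owner's CPU ID. */
static int example_owner_cpu = -1;

static void example_acquire(void)
{
	spin_lock(&example_lock);
	/* spin_lock() disables preemption (non-RT), so the ID is stable. */
	WRITE_ONCE(example_owner_cpu, smp_processor_id());
}

static void example_release(void)
{
	WRITE_ONCE(example_owner_cpu, -1);
	spin_unlock(&example_lock);
}

/*
 * Unlike spin_is_locked(), this returns true only if the *current* CPU
 * holds example_lock.  Preemption is disabled across the test so that
 * smp_processor_id() cannot change out from under the comparison.  A
 * true return remains meaningful after preempt_enable() because the
 * lock holder runs with preemption disabled anyway.
 */
static bool example_held_by_current_cpu(void)
{
	bool held;

	preempt_disable();
	held = READ_ONCE(example_owner_cpu) == smp_processor_id();
	preempt_enable();
	return held;
}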
> > Ignoring the FNIC use case for the moment, anyone believe that
> > spin_is_locked() needs to provide any ordering guarantees?
>
> Not providing any ordering guarantees for spin_is_locked() sounds good
> to me.  Restricting all types of mutexes/locks to the simple canonical
> use case (protecting a critical section of code) makes it easier to
> reason about code, enables a bunch of possible static/dynamic
> correctness checking, and relieves the lock/unlock functions from
> providing unnecessary ordering (i.e., acquire in spin_is_locked()
> pairing with release in spin_lock()).
> Tricky uses of is_locked and try_lock can resort to atomic operations
> (or maybe be removed).

One vote in favor of dropping ordering guarantees.  Any objections?

							Thanx, Paul