Date: Fri, 25 Sep 2015 14:30:00 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Martin Schwidefsky
Cc: Boqun Feng, Peter Zijlstra, Davidlohr Bueso, Ingo Molnar,
	Thomas Gleixner, linux-kernel@vger.kernel.org,
	heiko.carstens@de.ibm.com
Subject: Re: [PATCH -tip 2/3] sched/wake_q: Relax to acquire semantics
Message-ID: <20150925212959.GD30373@linux.vnet.ibm.com>
In-Reply-To: <20150923084321.56f598bb@mschwide>

On Wed, Sep 23, 2015 at 08:43:21AM +0200, Martin Schwidefsky wrote:
> On Tue, 22 Sep 2015 08:28:22 -0700
> "Paul E. McKenney" wrote:
> 
> > On Tue, Sep 22, 2015 at 04:33:07PM +0200, Martin Schwidefsky wrote:
> > > On Tue, 22 Sep 2015 21:29:14 +0800
> > > Boqun Feng wrote:
> > > 
> > > > On Tue, Sep 22, 2015 at 02:51:36PM +0200, Martin Schwidefsky wrote:
> > > > > On Tue, 22 Sep 2015 20:23:26 +0800
> > > > > Boqun Feng wrote:
> > > > > 
> > > > > > Hi Martin,
> > > > > > 
> > > > > > On Tue, Sep 22, 2015 at 12:27:35PM +0200, Martin Schwidefsky wrote:
> > > > > > > On Mon, 21 Sep 2015 11:22:52 +0200
> > > > > > > Martin Schwidefsky wrote:
> > > > > > > 
> > > > > > > > On Fri, 18 Sep 2015 14:41:20 -0700
> > > > > > > > "Paul E. McKenney" wrote:
> > > > > > > > 
> > > > > > > > > On Tue, Sep 15, 2015 at 10:09:41AM -0700, Paul E. McKenney wrote:
> > > > > > > > > > On Tue, Sep 15, 2015 at 06:30:28PM +0200, Peter Zijlstra wrote:
> > > > > > > > > > > On Tue, Sep 15, 2015 at 08:34:48AM -0700, Paul E. McKenney wrote:
> > > > > > > > > > > > On Tue, Sep 15, 2015 at 04:14:39PM +0200, Peter Zijlstra wrote:
> > > > > > > > > > > > > On Tue, Sep 15, 2015 at 07:09:22AM -0700, Paul E. McKenney wrote:
> > > > > > > > > > > > > > On Tue, Sep 15, 2015 at 02:48:00PM +0200, Peter Zijlstra wrote:
> > > > > > > > > > > > > > > On Tue, Sep 15, 2015 at 05:41:42AM -0700, Paul E. McKenney wrote:
> > > > > > > > > > > > > > > > > Never mind, the PPC people will implement this with lwsync
> > > > > > > > > > > > > > > > > and that is very much not transitive IIRC.
> > > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > > I am probably lost on context, but...
> > > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > > It turns out that lwsync is transitive in special cases.
> > > > > > > > > > > > > > > > One of them is a series of release-acquire pairs, which
> > > > > > > > > > > > > > > > can extend indefinitely.
> > > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > > Does that help in this case?
> > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > Probably not, but good to know.
> > > > > > > > > > > > > > > I still don't think we want to rely on
> > > > > > > > > > > > > > > ACQUIRE/RELEASE being transitive in general though.
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > OK, I will bite...  Why not?
> > > > > > > > > > > > > 
> > > > > > > > > > > > > It would mean us reviewing all archs (again) and
> > > > > > > > > > > > > documenting it, I suppose.  Which is of course
> > > > > > > > > > > > > entirely possible.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > That said, I don't think the case at hand requires
> > > > > > > > > > > > > it, so let's postpone this for now ;-)
> > > > > > > > > > > > 
> > > > > > > > > > > > True enough, but in my experience smp_store_release()
> > > > > > > > > > > > and smp_load_acquire() are a -lot- easier to use than
> > > > > > > > > > > > other barriers, and transitivity will help promote
> > > > > > > > > > > > their use.  So...
> > > > > > > > > > > > 
> > > > > > > > > > > > All the TSO architectures (x86, s390, SPARC, HPPA, ...)
> > > > > > > > > > > > support transitive smp_store_release()/smp_load_acquire()
> > > > > > > > > > > > via their native ordering in combination with barrier()
> > > > > > > > > > > > macros.  x86 with CONFIG_X86_PPRO_FENCE=y, which is not
> > > > > > > > > > > > TSO, uses an mfence instruction.  Power supports this via
> > > > > > > > > > > > lwsync's partial cumulativity.  ARM64 supports it in SMP
> > > > > > > > > > > > via the new ldar and stlr instructions (in non-SMP, it
> > > > > > > > > > > > uses barrier(), which suffices in that case).  IA64
> > > > > > > > > > > > supports this via total ordering of all release
> > > > > > > > > > > > instructions in theory and by the actual full-barrier
> > > > > > > > > > > > implementation in practice (and the fact that gcc emits
> > > > > > > > > > > > st.rel and ld.acq instructions for volatile stores and
> > > > > > > > > > > > loads).  All other architectures use smp_mb(), which is
> > > > > > > > > > > > transitive.
> > > > > > > > > > > > 
> > > > > > > > > > > > Did I miss anything?
> > > > > > > > > > > 
> > > > > > > > > > > I think that about covers it..  The only odd duckling
> > > > > > > > > > > might be s390, which is documented as TSO but recently
> > > > > > > > > > > grew smp_mb__{before,after}_atomic(), which seems to
> > > > > > > > > > > confuse matters.
> > > > > > > > > > 
> > > > > > > > > > Fair point, adding Martin and Heiko on CC for their
> > > > > > > > > > thoughts.
> > > > > > > > 
> > > > > > > > Well, we always had the full memory barrier for the various
> > > > > > > > versions of smp_mb__xxx; they have just moved around and been
> > > > > > > > renamed several times.
> > > > > > > > 
> > > > > > > > After discussing this with Heiko we came to the conclusion
> > > > > > > > that we can use a simple barrier() for smp_mb__before_atomic()
> > > > > > > > and smp_mb__after_atomic().
> > > > > > > > 
> > > > > > > > > > It looks like this applies to recent mainframes that have
> > > > > > > > > > new atomic instructions, which, yes, might need something
> > > > > > > > > > to make them work with fully transitive smp_load_acquire()
> > > > > > > > > > and smp_store_release().
> > > > > > > > > > 
> > > > > > > > > > Martin, Heiko, the question is whether or not the current
> > > > > > > > > > s390 smp_store_release() and smp_load_acquire() can be
> > > > > > > > > > transitive.  For example, if all the Xi variables below
> > > > > > > > > > are initially zero, is it possible for all the r0, r1,
> > > > > > > > > > r2, ... rN variables to have the value 1 at the end of
> > > > > > > > > > the test?
> > > > > > > > > 
> > > > > > > > > Right...  This time actually adding Martin and Heiko on
> > > > > > > > > CC...
> > > > > > > > > 
> > > > > > > > > 							Thanx, Paul
> > > > > > > > > 
> > > > > > > > > > CPU 0
> > > > > > > > > > r0 = smp_load_acquire(&X0);
> > > > > > > > > > smp_store_release(&X1, 1);
> > > > > > > > > > 
> > > > > > > > > > CPU 1
> > > > > > > > > > r1 = smp_load_acquire(&X1);
> > > > > > > > > > smp_store_release(&X2, 1);
> > > > > > > > > > 
> > > > > > > > > > CPU 2
> > > > > > > > > > r2 = smp_load_acquire(&X2);
> > > > > > > > > > smp_store_release(&X3, 1);
> > > > > > > > > > 
> > > > > > > > > > ...
> > > > > > > > > > CPU N
> > > > > > > > > > rN = smp_load_acquire(&XN);
> > > > > > > > > > smp_store_release(&X0, 1);
> > > > > > > > > > 
> > > > > > > > > > If smp_store_release() and smp_load_acquire() are
> > > > > > > > > > transitive, the answer would be "no".
> > > > > > > > 
> > > > > > > > The answer is "no".  Christian recently summarized what the
> > > > > > > > principles of operation has to say about the CPU read/write
> > > > > > > > behavior.  If you consider the sequential order of
> > > > > > > > instructions, then:
> > > > > > > > 
> > > > > > > > 1) reads are in order
> > > > > > > > 2) writes are in order
> > > > > > > > 3) reads can happen earlier
> > > > > > > > 4) writes can happen later
> > > > > > > 
> > > > > > > Correction.  The principles of operation states this:
> > > > > > > 
> > > > > > > "A storage-operand store specified by one instruction appears
> > > > > > > to precede all storage-operand stores specified by conceptually
> > > > > > > subsequent instructions, but it does not necessarily precede
> > > > > > > storage-operand fetches specified by conceptually subsequent
> > > > > > > instructions.  However, a storage-operand store appears to
> > > > > > > precede a conceptually subsequent storage-operand fetch from
> > > > > > > the same main-storage location."
> > > > > > > 
> > > > > > > > Confused...
> > > > > > 
> > > > > > Yeah, it seems like I'm confused as well.  This stuff always
> > > > > > makes my head hurt..
> > > > 
> > > > IIUC, the previous paragraph actually means that a STORE-LOAD pair
> > > > can be reordered.  But the reasoning below is saying that a
> > > > LOAD-STORE pair can be reordered.  Am I missing something here?
> > > 
> > > True, the above paragraph allows a store to move past a load and not
> > > the other way around.
> > > 
> > > > > > > As observed by other CPUs, a write to one memory location can
> > > > > > > "overtake" a read of another memory location if there is no
> > > > > > > explicit memory barrier between the load and the store
> > > > > > > instruction.
> > > > > > > In the above example X0, X1, ... XN are different memory
> > > > > > > locations, so architecturally the answer is "yes": all r0,
> > > > > > > r1, ... rN variables can have the value of 1 after the test.
> > > > > > > I doubt that any existing machine will show this behavior,
> > > > > > > though.
> > > > > > 
> > > > > > Just curious, how about when N == 1?  The test then becomes:
> > > > > > 
> > > > > > CPU 0
> > > > > > r0 = smp_load_acquire(&X0);
> > > > > > smp_store_release(&X1, 1);
> > > > > > 
> > > > > > CPU 1
> > > > > > r1 = smp_load_acquire(&X1);
> > > > > > smp_store_release(&X0, 1);
> > > > > > 
> > > > > > Is it possible that r0 == 1 and r1 == 1 at the end, due to the
> > > > > > same reason?
> > > > > 
> > > > > Yes, that is possible for the same reason.  To change that we
> > > > > would have to replace the barrier() in smp_load_acquire()/
> > > > > smp_store_release() with smp_mb().
> > > > 
> > > > I thought that s390 is TSO, so this is prohibited.  If that is
> > > > possible, I think that means the current implementation of
> > > > smp_load_acquire() and smp_store_release() on s390 is incorrect...
> > > 
> > > Ok, further reading of chapter 5 of the principles revealed this:
> > > 
> > > "As observed by other CPUs and by channel programs, storage-operand
> > > fetches associated with one instruction execution appear to precede
> > > all storage-operand references for conceptually subsequent
> > > instructions."
> > > 
> > > So no writes before reads.  Correction to the correction: all r0,
> > > r1, ... rN equal to one cannot happen after all.  Got me worried
> > > there ;-)
> > 
> > Whew!!!
> > 
> > So s390's current smp_store_release() and smp_load_acquire() provide
> > ordering as needed, then, right?  For example, suppose that there was
> > a long chain of smp_load_acquire()/smp_store_release() pairs involving
> > many CPUs.  Would the all-ones case still be impossible?
> 
> Yes, that is impossible.  Very good!
Release-acquire chains can be transitive, then.  ;-)

							Thanx, Paul