From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753082AbbJTV2k (ORCPT );
	Tue, 20 Oct 2015 17:28:40 -0400
Received: from e38.co.us.ibm.com ([32.97.110.159]:48536 "EHLO e38.co.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752180AbbJTV2h (ORCPT );
	Tue, 20 Oct 2015 17:28:37 -0400
X-IBM-Helo: d03dlp01.boulder.ibm.com
X-IBM-MailFrom: paulmck@linux.vnet.ibm.com
X-IBM-RcptTo: linux-kernel@vger.kernel.org;stable@vger.kernel.org
Date: Tue, 20 Oct 2015 14:28:35 -0700
From: "Paul E. McKenney" 
To: Peter Zijlstra 
Cc: Boqun Feng , linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, Ingo Molnar ,
	Benjamin Herrenschmidt , Paul Mackerras ,
	Michael Ellerman , Thomas Gleixner , Will Deacon ,
	Waiman Long , Davidlohr Bueso , stable@vger.kernel.org
Subject: Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier
Message-ID: <20151020212835.GH5105@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <1444838161-17209-1-git-send-email-boqun.feng@gmail.com>
 <1444838161-17209-2-git-send-email-boqun.feng@gmail.com>
 <20151014201916.GB3910@linux.vnet.ibm.com>
 <20151020071532.GB17714@fixme-laptop.cn.ibm.com>
 <20151020092147.GX17308@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151020092147.GX17308@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 15102021-0029-0000-0000-00000D84FEE6
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 20, 2015 at 11:21:47AM +0200, Peter Zijlstra wrote:
> On Tue, Oct 20, 2015 at 03:15:32PM +0800, Boqun Feng wrote:
> > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
> > >
> > > Am I missing something here?  If not, it seems to me that you need
> > > the leading lwsync to instead be a sync.
> > >
> > > Of course, if I am not missing something, then this applies also to the
> > > value-returning RMW atomic operations that you pulled this pattern from.
> > > If so, it would seem that I didn't think through all the possibilities
> > > back when PPC_ATOMIC_EXIT_BARRIER moved to sync...  In fact, I believe
> > > that I worried about the RMW atomic operation acting as a barrier,
> > > but not as the load/store itself.  :-/
> >
> > Paul, I know this may be difficult, but could you recall why
> > __futex_atomic_op() and futex_atomic_cmpxchg_inatomic() also got
> > involved in the movement of PPC_ATOMIC_EXIT_BARRIER to "sync"?
> >
> > I did some searching, but couldn't find the discussion of that patch.
> >
> > I ask this because I recall Peter once brought up a discussion:
> >
> > https://lkml.org/lkml/2015/8/26/596
> >
> > Peter's conclusion seems to be that we could (though didn't want to) live
> > with futex atomics not being full barriers.

I have heard of user-level applications relying on unlock-lock being a
full barrier.  So paranoia would argue for the full barrier.

> > Peter, just to be clear, I'm not in favor of relaxing futex atomics. But if
> > I make PPC_ATOMIC_ENTRY_BARRIER a "sync", it will also strengthen
> > the futex atomics; I just wonder whether such strengthening is a -fix- or
> > not, considering that I want this patch to go to the -stable tree.
>
> So Linus argued that since we only need to order against user accesses
> (true) and priv changes typically imply strong barriers (open) we might
> want to allow archs to rely on those instead of mandating they have
> explicit barriers in the futex primitives.
>
> And I indeed forgot to follow up on that discussion.
>
> So; does PPC imply full barriers on user<->kernel boundaries? If so, it's
> not critical to the futex atomic implementations what extra barriers are
> added.
>
> If not; then strengthening the futex ops is indeed (probably) a good
> thing :-)

I am not seeing a sync there, but I really have to defer to the
maintainers on this one.  I could easily have missed one.

							Thanx, Paul