Date: Tue, 22 Sep 2015 17:33:12 +0200 (CEST)
From: Thomas Gleixner
To: Davidlohr Bueso
Cc: Peter Zijlstra, Ingo Molnar, Andrew Morton, Linus Torvalds,
    Will Deacon, "Paul E. McKenney", linux-kernel@vger.kernel.org,
    Davidlohr Bueso
Subject: Re: [PATCH 3/5] locking/rtmutex: Use acquire/release semantics
In-Reply-To: <1442866676-10359-4-git-send-email-dave@stgolabs.net>
References: <1442866676-10359-1-git-send-email-dave@stgolabs.net>
 <1442866676-10359-4-git-send-email-dave@stgolabs.net>
List-ID: linux-kernel@vger.kernel.org

On Mon, 21 Sep 2015, Davidlohr Bueso wrote:

> As such, weakly ordered archs can benefit from more relaxed use
> of barriers when locking/unlocking.
>
> Signed-off-by: Davidlohr Bueso
> ---
>  kernel/locking/rtmutex.c | 30 +++++++++++++++++++++---------
>  1 file changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> index 7781d80..226a629 100644
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -74,14 +74,23 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
>   * set up.
>   */
>  #ifndef CONFIG_DEBUG_RT_MUTEXES
> -# define rt_mutex_cmpxchg(l,c,n) (cmpxchg(&l->owner, c, n) == c)
> +# define rt_mutex_cmpxchg_relaxed(l,c,n) (cmpxchg_relaxed(&l->owner, c, n) == c)
> +# define rt_mutex_cmpxchg_acquire(l,c,n) (cmpxchg_acquire(&l->owner, c, n) == c)
> +# define rt_mutex_cmpxchg_release(l,c,n) (cmpxchg_release(&l->owner, c, n) == c)
> +
> +/*
> + * Callers must hold the ->wait_lock -- which is the whole purpose as we force
> + * all future threads that attempt to [Rmw] the lock to the slowpath. As such
> + * relaxed semantics suffice.
> + */
>  static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
>  {
>  	unsigned long owner, *p = (unsigned long *) &lock->owner;
>
>  	do {
>  		owner = *p;
> -	} while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
> +	} while (cmpxchg_relaxed(p, owner,
> +				 owner | RT_MUTEX_HAS_WAITERS) != owner);
>  }
>
>  /*
> @@ -121,11 +130,14 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
>   *	lock(wait_lock);
>   *	acquire(lock);
>   */
> -	return rt_mutex_cmpxchg(lock, owner, NULL);
> +	return rt_mutex_cmpxchg_acquire(lock, owner, NULL);

Why is this acquire?

Thanks,

	tglx