Date: Thu, 13 Oct 2016 16:14:47 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: Linus Torvalds, Waiman Long, Jason Low, Ding Tianhong,
	Thomas Gleixner, Ingo Molnar, Imre Deak,
	Linux Kernel Mailing List, Davidlohr Bueso, Tim Chen,
	Terry Rudd, "Paul E. McKenney", Jason Low, Chris Wilson,
	Daniel Vetter
Subject: Re: [PATCH -v4 5/8] locking/mutex: Add lock handoff to avoid starvation
Message-ID: <20161013151447.GA13138@arm.com>
References: <20161007145243.361481786@infradead.org>
	<20161007150211.196801561@infradead.org>
In-Reply-To: <20161007150211.196801561@infradead.org>

Hi Peter,

Just one comment below.

On Fri, Oct 07, 2016 at 04:52:48PM +0200, Peter Zijlstra wrote:
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -54,8 +54,10 @@ EXPORT_SYMBOL(__mutex_init);
>   * bits to store extra state.
>   *
>   * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
> + * Bit1 indicates unlock needs to hand the lock to the top-waiter
>   */
>  #define MUTEX_FLAG_WAITERS	0x01
> +#define MUTEX_FLAG_HANDOFF	0x02
>  
>  #define MUTEX_FLAGS		0x03
>  
> @@ -71,20 +73,48 @@ static inline unsigned long __owner_flag
>  
>  /*
>   * Actual trylock that will work on any unlocked state.
> + *
> + * When setting the owner field, we must preserve the low flag bits.
> + *
> + * Be careful with @handoff, only set that in a wait-loop (where you set
> + * HANDOFF) to avoid recursive lock attempts.
>   */
> -static inline bool __mutex_trylock(struct mutex *lock)
> +static inline bool __mutex_trylock(struct mutex *lock, const bool handoff)
>  {
>  	unsigned long owner, curr = (unsigned long)current;
>  
>  	owner = atomic_long_read(&lock->owner);
>  	for (;;) { /* must loop, can race against a flag */
> -		unsigned long old;
> +		unsigned long old, flags = __owner_flags(owner);
> +
> +		if (__owner_task(owner)) {
> +			if (handoff && unlikely(__owner_task(owner) == current)) {
> +				/*
> +				 * Provide ACQUIRE semantics for the lock-handoff.
> +				 *
> +				 * We cannot easily use load-acquire here, since
> +				 * the actual load is a failed cmpxchg, which
> +				 * doesn't imply any barriers.
> +				 *
> +				 * Also, this is a fairly unlikely scenario, and
> +				 * this contains the cost.
> +				 */
> +				smp_mb(); /* ACQUIRE */

As we discussed on another thread recently, a failed cmpxchg_acquire
will always give you ACQUIRE semantics in practice. Maybe we should
update the documentation to allow this? The only special case is the
full-barrier version.

Will
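
For context on the encoding the quoted hunks rely on: the owner field
packs the flag bits into the low two bits of the task_struct pointer,
which are free because task_struct is word-aligned. A minimal sketch of
the accessors (named after the __owner_flags reference truncated in the
second hunk header above; illustrative, not quoted from the series):

	static inline struct task_struct *__owner_task(unsigned long owner)
	{
		/* Mask off the low flag bits to recover the task pointer. */
		return (struct task_struct *)(owner & ~MUTEX_FLAGS);
	}

	static inline unsigned long __owner_flags(unsigned long owner)
	{
		/* Keep only MUTEX_FLAG_WAITERS / MUTEX_FLAG_HANDOFF. */
		return owner & MUTEX_FLAGS;
	}

With this split, __mutex_trylock() can test ownership via
__owner_task() and preserve the flag bits across the owner update via
__owner_flags(), as the hunk's "flags = __owner_flags(owner)" does.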
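
To make the suggestion above concrete: if a failed cmpxchg_acquire were
documented to provide ACQUIRE semantics, the explicit smp_mb() in the
handoff branch could be dropped and the ordering taken from the failed
exchange itself. An untested sketch of the loop body under that
assumption (not what the series does):

	/*
	 * Attempt to grab the lock; a successful cmpxchg_acquire
	 * provides the ACQUIRE for lock acquisition as usual.
	 */
	old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
	if (old == owner)
		return true;

	/*
	 * Handoff: unlock wrote our task pointer into the owner field
	 * while we raced, so the cmpxchg above failed. Under the
	 * proposed documentation, that failed cmpxchg_acquire already
	 * orders the handed-over critical section, replacing the
	 * explicit smp_mb() in the patch.
	 */
	if (handoff && unlikely(__owner_task(old) == current))
		return true;

	owner = old;	/* otherwise, go around the loop again */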