Date: Tue, 18 Oct 2016 14:36:48 +0200
From: Peter Zijlstra
To: Waiman Long
Cc: Linus Torvalds, Jason Low, Ding Tianhong, Thomas Gleixner, Will Deacon, Ingo Molnar, Imre Deak, Linux Kernel Mailing List, Davidlohr Bueso, Tim Chen, Terry Rudd, "Paul E. McKenney", Chris Wilson, Daniel Vetter
Subject: Re: [PATCH -v4 5/8] locking/mutex: Add lock handoff to avoid starvation
Message-ID: <20161018123648.GV3117@twins.programming.kicks-ass.net>
References: <20161007145243.361481786@infradead.org> <20161007150211.196801561@infradead.org> <58051C5E.3030204@hpe.com>
In-Reply-To: <58051C5E.3030204@hpe.com>

On Mon, Oct 17, 2016 at 02:45:50PM -0400, Waiman Long wrote:
> On 10/07/2016 10:52 AM, Peter Zijlstra wrote:
> > /*
> >  * Actual trylock that will work on any unlocked state.
> >+ *
> >+ * When setting the owner field, we must preserve the low flag bits.
> >+ *
> >+ * Be careful with @handoff, only set that in a wait-loop (where you set
> >+ * HANDOFF) to avoid recursive lock attempts.
> > */
> >-static inline bool __mutex_trylock(struct mutex *lock)
> >+static inline bool __mutex_trylock(struct mutex *lock, const bool handoff)
> > {
> > 	unsigned long owner, curr = (unsigned long)current;
> >
> > 	owner = atomic_long_read(&lock->owner);
> > 	for (;;) { /* must loop, can race against a flag */
> >-		unsigned long old;
> >+		unsigned long old, flags = __owner_flags(owner);
> >+
> >+		if (__owner_task(owner)) {
> >+			if (handoff && unlikely(__owner_task(owner) == current)) {
> >+				/*
> >+				 * Provide ACQUIRE semantics for the lock-handoff.
> >+				 *
> >+				 * We cannot easily use load-acquire here, since
> >+				 * the actual load is a failed cmpxchg, which
> >+				 * doesn't imply any barriers.
> >+				 *
> >+				 * Also, this is a fairly unlikely scenario, and
> >+				 * this contains the cost.
> >+				 */
>
> I am not so sure about your comment here. I guess you are referring to
> the atomic_long_cmpxchg_acquire() below for the failed cmpxchg. However,
> it is also possible that this path can be triggered on the first round,
> without any cmpxchg. Maybe we can do a load-acquire on the owner again to
> satisfy this requirement without an smp_mb().

Yes, I refer to the atomic_long_cmpxchg_acquire() below. If that cmpxchg
fails, no barriers are implied.

Yes, we could fix that, but it would make all cmpxchg_acquire() loops more
expensive. And only fixing the initial load doesn't help, since we still
need to deal with the case where the cmpxchg fails.

We _could_ re-issue the load, I suppose, because if owner == current nobody
else is going to change it, but I feel slightly uneasy with that. Also,
it's a relatively slow path, so for now let's keep it simple and explicit
like so.

> >+				smp_mb(); /* ACQUIRE */
> >+				return true;
> >+			}
> >
> >-		if (__owner_task(owner))
> > 			return false;
> >+		}
> >
> >-		old = atomic_long_cmpxchg_acquire(&lock->owner, owner,
> >-						  curr | __owner_flags(owner));
> >+		/*
> >+		 * We set the HANDOFF bit, we must make sure it doesn't live
> >+		 * past the point where we acquire it.
> >+		 * This would be possible if we (accidentally) set the bit on
> >+		 * an unlocked mutex.
> >+		 */
> >+		if (handoff)
> >+			flags &= ~MUTEX_FLAG_HANDOFF;
> >+
> >+		old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
> > 		if (old == owner)
> > 			return true;
>
> Other than that, the code is fine.
>
> Reviewed-by: Waiman Long

Thanks!
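[Editor's illustration, not part of the mail.] The core of the exchange above is that a *failed* cmpxchg_acquire implies no ordering, so the handoff fast path needs an explicit full barrier. The same point can be sketched in portable C11: a plain relaxed load observes the handoff, and an explicit acquire fence supplies the ordering that the failed compare-exchange would not. All names here (`toy_handoff_acquired`, `TOY_FLAGS_MASK`) are hypothetical stand-ins, not kernel API:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical owner-word layout mirroring the patch:
 * low two bits are flags, the rest identifies the owner. */
#define TOY_FLAGS_MASK 0x3UL

/* Handoff fast path: the load that observes owner == self may be a
 * plain load (or the reloaded value of a failed compare-exchange),
 * which carries no ordering by itself. The explicit acquire fence
 * plays the role of the kernel's smp_mb() in the quoted patch. */
static bool toy_handoff_acquired(_Atomic unsigned long *owner,
                                 unsigned long self)
{
	unsigned long cur = atomic_load_explicit(owner, memory_order_relaxed);

	if ((cur & ~TOY_FLAGS_MASK) == self) {
		/* Pairs with the release store that published the handoff. */
		atomic_thread_fence(memory_order_acquire);
		return true;
	}
	return false;
}
```

This mirrors Peter's choice: rather than strengthening the failure ordering of every compare-exchange loop, the unlikely handoff path pays for its own barrier, containing the cost.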
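[Editor's illustration, not part of the mail.] The overall shape of the patched `__mutex_trylock()`, including the HANDOFF-clearing discussed above, can be sketched as a toy C11 loop. This is a simplified model under assumed names (`toy_mutex`, `toy_trylock`, `TOY_FLAG_HANDOFF`), not the kernel implementation: there is no waiter queue, and `curr` is just an aligned integer standing in for the task pointer:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical flag layout: low two bits of the owner word are flags. */
#define TOY_FLAG_WAITERS 0x1UL
#define TOY_FLAG_HANDOFF 0x2UL
#define TOY_FLAGS_MASK   0x3UL

struct toy_mutex { _Atomic unsigned long owner; };

/* curr must have its low TOY_FLAGS_MASK bits clear (aligned pointer). */
static bool toy_trylock(struct toy_mutex *lock, unsigned long curr,
                        bool handoff)
{
	unsigned long owner =
		atomic_load_explicit(&lock->owner, memory_order_relaxed);

	for (;;) { /* must loop, can race against a flag */
		unsigned long flags = owner & TOY_FLAGS_MASK;

		if (owner & ~TOY_FLAGS_MASK) {
			if (handoff && (owner & ~TOY_FLAGS_MASK) == curr) {
				/* Handed off to us; the fence provides the
				 * ACQUIRE that a failed CAS does not. */
				atomic_thread_fence(memory_order_acquire);
				return true;
			}
			return false;
		}

		/* Don't let HANDOFF live past the point of acquisition,
		 * which could happen if the bit were (accidentally) set
		 * on an unlocked mutex. */
		if (handoff)
			flags &= ~TOY_FLAG_HANDOFF;

		/* On failure, 'owner' is reloaded and the loop retests. */
		if (atomic_compare_exchange_weak_explicit(&lock->owner, &owner,
							  curr | flags,
							  memory_order_acquire,
							  memory_order_relaxed))
			return true;
	}
}
```

Note the two points from the thread in one place: the handoff early return needs its own acquire fence, and the HANDOFF flag is stripped from the flags preserved across the compare-exchange so it cannot outlive the acquisition.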