Date: Tue, 27 Dec 2016 13:55:41 +0000
From: Chris Wilson
To: Peter Zijlstra
Cc: Linus Torvalds, Waiman Long, Jason Low, Ding Tianhong,
	Thomas Gleixner, Will Deacon, Ingo Molnar, Imre Deak,
	Linux Kernel Mailing List, Davidlohr Bueso, Tim Chen,
	Terry Rudd, "Paul E. McKenney", Jason Low, Daniel Vetter
Subject: Re: [PATCH -v4 5/8] locking/mutex: Add lock handoff to avoid starvation
Message-ID: <20161227135541.GB1717@nuc-i3427.alporthouse.com>
References: <20161007145243.361481786@infradead.org>
	<20161007150211.196801561@infradead.org>
In-Reply-To: <20161007150211.196801561@infradead.org>

On Fri, Oct 07, 2016 at 04:52:48PM +0200, Peter Zijlstra wrote:
> Implement lock handoff to avoid lock starvation.
>
> Lock starvation is possible because mutex_lock() allows lock stealing,
> where a running (or optimistic spinning) task beats the woken waiter
> to the acquire.
>
> Lock stealing is an important performance optimization because waiting
> for a waiter to wake up and get runtime can take a significant time,
> during which everybody would stall on the lock.
>
> The down-side is of course that it allows for starvation.
>
> This patch has the waiter requesting a handoff if it fails to acquire
> the lock upon waking. This re-introduces some of the wait time,
> because once we do a handoff we have to wait for the waiter to wake up
> again.
>
> A future patch will add a round of optimistic spinning to attempt to
> alleviate this penalty, but if that turns out to not be enough, we can
> add a counter and only request handoff after multiple failed wakeups.
>
> There are a few tricky implementation details:
>
>  - accepting a handoff must only be done in the wait-loop. Since the
>    handoff condition is owner == current, it can easily cause
>    recursive locking trouble.
>
>  - accepting the handoff must be careful to provide the ACQUIRE
>    semantics.
>
>  - having the HANDOFF bit set on unlock requires care, we must not
>    clear the owner.
>
>  - we must be careful to not leave HANDOFF set after we've acquired
>    the lock. The tricky scenario is setting the HANDOFF bit on an
>    unlocked mutex.

There's a hole along the interruptible path - we leave the HANDOFF bit
set even though the first waiter returns with -EINTR. The unlock then
sees the HANDOFF, assigns it to the next waiter, but that waiter does a
racy check to decide if it is first, decides it is not, and so skips
the trylock and also returns with -EINTR. (I.e. we invalidate the

	/*
	 * Here we order against unlock; we must either see it change
	 * state back to RUNNING and fall through the next schedule(),
	 * or we must see its unlock and acquire.
	 */

since we may never reach the next schedule().) Repeating the
__mutex_waiter_is_first() check after acquiring the wait_lock is
sufficient, as is clearing the HANDOFF bit before returning -EINTR.
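For the former, a rough sketch of what I mean (entirely untested, and
I'm paraphrasing the shape of the wait-loop rather than quoting it, so
the exact placement and locals are approximate) would be to refresh
'first' each time we come back around holding the wait_lock, so the
subsequent trylock can pick up a pending handoff:

	for (;;) {
		/*
		 * Re-check our position in the wait list now that we
		 * hold the wait_lock again; the previous first waiter
		 * may have dropped out with -EINTR and the unlock may
		 * have handed the lock to us in the meantime.
		 */
		first = __mutex_waiter_is_first(lock, &waiter);

		if (__mutex_trylock(lock, first))
			break; /* acquired, possibly via handoff */

		if (unlikely(signal_pending_state(state, task))) {
			ret = -EINTR;
			goto err;
		}

		/* ... drop wait_lock, schedule(), re-take wait_lock ... */
	}

For the latter, the small change below stops a stale HANDOFF from being
passed on to a waiter that no longer expects it: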
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b349619f431..6f7e3bf0d595 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -684,6 +684,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * against mutex_unlock() and wake-ups do not go missing.
 		 */
 		if (unlikely(signal_pending_state(state, task))) {
+			if (first)
+				__mutex_clear_flag(lock, MUTEX_FLAG_HANDOFF);
 			ret = -EINTR;
 			goto err;
 		}

Though I expect you will be able to find a better solution.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre