Date: Mon, 17 Oct 2016 15:24:08 +0200
From: Peter Zijlstra
To: Will Deacon
Cc: Linus Torvalds, Waiman Long, Jason Low, Ding Tianhong, Thomas Gleixner,
	Ingo Molnar, Imre Deak, Linux Kernel Mailing List, Davidlohr Bueso,
	Tim Chen, Terry Rudd, "Paul E. McKenney", Jason Low, Chris Wilson,
	Daniel Vetter
Subject: Re: [PATCH -v4 6/8] locking/mutex: Restructure wait loop
Message-ID: <20161017132408.GF3157@twins.programming.kicks-ass.net>
References: <20161007145243.361481786@infradead.org> <20161007150211.271490994@infradead.org> <20161013151720.GB13138@arm.com> <20161017104449.GO3117@twins.programming.kicks-ass.net>
In-Reply-To: <20161017104449.GO3117@twins.programming.kicks-ass.net>
List-ID: linux-kernel@vger.kernel.org

On Mon, Oct 17, 2016 at 12:44:49PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 13, 2016 at 04:17:21PM +0100, Will Deacon wrote:
> > Hi Peter,
> > 
> > I'm struggling to get my head around the handoff code after this change...
> > 
> > On Fri, Oct 07, 2016 at 04:52:49PM +0200, Peter Zijlstra wrote:
> > > --- a/kernel/locking/mutex.c
> > > +++ b/kernel/locking/mutex.c
> > > @@ -631,13 +631,21 @@ __mutex_lock_common(struct mutex *lock,
> > > 
> > >  	lock_contended(&lock->dep_map, ip);
> > > 
> > > +	set_task_state(task, state);
> > >  	for (;;) {
> > > +		/*
> > > +		 * Once we hold wait_lock, we're serialized against
> > > +		 * mutex_unlock() handing the lock off to us, do a trylock
> > > +		 * before testing the error conditions to make sure we pick up
> > > +		 * the handoff.
> > > +		 */
> > >  		if (__mutex_trylock(lock, first))
> > > -			break;
> > > +			goto acquired;
> > > 
> > >  		/*
> > > -		 * got a signal? (This code gets eliminated in the
> > > -		 * TASK_UNINTERRUPTIBLE case.)
> > > +		 * Check for signals and wound conditions while holding
> > > +		 * wait_lock. This ensures the lock cancellation is ordered
> > > +		 * against mutex_unlock() and wake-ups do not go missing.
> > >  		 */
> > >  		if (unlikely(signal_pending_state(state, task))) {
> > >  			ret = -EINTR;
> > > @@ -650,16 +658,27 @@ __mutex_lock_common(struct mutex *lock,
> > >  			goto err;
> > >  		}
> > > 
> > > -		__set_task_state(task, state);
> > >  		spin_unlock_mutex(&lock->wait_lock, flags);
> > >  		schedule_preempt_disabled();
> > > -		spin_lock_mutex(&lock->wait_lock, flags);
> > > 
> > >  		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
> > >  			first = true;
> > >  			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
> > >  		}
> > > +
> > > +		set_task_state(task, state);
> > 
> > With this change, we no longer hold the lock when we set the task
> > state, and it's ordered strictly *after* setting the HANDOFF flag.
> > Doesn't that mean that the unlock code can see the HANDOFF flag, issue
> > the wakeup, but then we come in and overwrite the task state?
> > 
> > I'm struggling to work out whether that's an issue, but it certainly
> > feels odd and is a change from the previous behaviour.
> 
> Right, so I think the code is fine, since in that case the
> __mutex_trylock() must see the handoff and we'll break the loop and
> (re)set the state to RUNNING.
> 
> But you're right in that it's slightly odd. I'll reorder them and put the
> set_task_state() above the !first thing.

Humm,.. we might actually rely on this order, since the MB implied by
set_task_state() is the only thing that separates the store of
__mutex_set_flag() from the load of __mutex_trylock(), and those should
be ordered I think.

Argh, completely messed up my brain. I'll not touch it and think on this
again tomorrow.