From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755336AbdCGNmw (ORCPT );
	Tue, 7 Mar 2017 08:42:52 -0500
Received: from Galois.linutronix.de ([146.0.238.70]:41544 "EHLO
	Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751270AbdCGNmm (ORCPT );
	Tue, 7 Mar 2017 08:42:42 -0500
Date: Tue, 7 Mar 2017 14:22:14 +0100 (CET)
From: Thomas Gleixner
To: Peter Zijlstra
cc: mingo@kernel.org, juri.lelli@arm.com, rostedt@goodmis.org,
	xlpang@redhat.com, bigeasy@linutronix.de, linux-kernel@vger.kernel.org,
	mathieu.desnoyers@efficios.com, jdesfossez@efficios.com,
	bristot@redhat.com, dvhart@infradead.org
Subject: Re: [PATCH -v5 07/14] futex: Change locking rules
In-Reply-To: <20170304093559.216725723@infradead.org>
Message-ID:
References: <20170304092717.762954142@infradead.org> <20170304093559.216725723@infradead.org>
User-Agent: Alpine 2.20 (DEB 67 2015-01-07)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, 4 Mar 2017, Peter Zijlstra wrote:
> @@ -2166,36 +2252,43 @@ static int fixup_pi_state_owner(u32 __us
>  	/*
> -	 * To handle the page fault we need to drop the hash bucket
> -	 * lock here. That gives the other task (either the highest priority
> -	 * waiter itself or the task which stole the rtmutex) the
> -	 * chance to try the fixup of the pi_state. So once we are
> -	 * back from handling the fault we need to check the pi_state
> -	 * after reacquiring the hash bucket lock and before trying to
> -	 * do another fixup. When the fixup has been done already we
> -	 * simply return.
> +	 * To handle the page fault we need to drop the locks here. That gives
> +	 * the other task (either the highest priority waiter itself or the
> +	 * task which stole the rtmutex) the chance to try the fixup of the
> +	 * pi_state. So once we are back from handling the fault we need to
> +	 * check the pi_state after reacquiring the locks and before trying to
> +	 * do another fixup. When the fixup has been done already we simply
> +	 * return.
> +	 *
> +	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
> +	 * drop hb->lock since the caller owns the hb -> futex_q relation.
> +	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
> +	 */
>  handle_fault:
> +	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
>  	spin_unlock(q->lock_ptr);
>
>  	ret = fault_in_user_writeable(uaddr);
>
>  	spin_lock(q->lock_ptr);
> +	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
>
>  	/*
>  	 * Check if someone else fixed it for us:

Adding context:

	 */
	if (pi_state->owner != oldowner)
		return 0;

	if (ret)
		return ret;

	goto retry;

Both 'return' statements leak &pi_state->pi_mutex.wait_lock ....

Thanks,

	tglx
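The leak tglx points out can be illustrated with a minimal userspace sketch. This is not the kernel code: pthread mutexes stand in for hb->lock (q->lock_ptr) and pi_state->pi_mutex.wait_lock, and the function and parameter names are hypothetical. The point is the shape of the fix: both early exits route through a common unlock label so wait_lock is always released along with hb->lock.

```c
#include <pthread.h>

/* Stand-ins for hb->lock (q->lock_ptr) and pi_state->pi_mutex.wait_lock. */
static pthread_mutex_t hb_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Hypothetical fixup with both locks held at the check, as in
 * fixup_pi_state_owner() after the handle_fault path reacquires them.
 * The early exits go through out_unlock so wait_lock cannot leak.
 */
static int fixup_sketch(int owner_changed, int fault_ret)
{
	int ret = 0;

	pthread_mutex_lock(&hb_lock);   /* spin_lock(q->lock_ptr) */
	pthread_mutex_lock(&wait_lock); /* raw_spin_lock_irq(&..wait_lock) */

	if (owner_changed)              /* pi_state->owner != oldowner */
		goto out_unlock;        /* was: return 0 (leaked wait_lock) */

	if (fault_ret) {                /* fault_in_user_writeable() failed */
		ret = fault_ret;
		goto out_unlock;        /* was: return ret (leaked wait_lock) */
	}

	/* ... otherwise retry the fixup ... */

out_unlock:
	pthread_mutex_unlock(&wait_lock);
	pthread_mutex_unlock(&hb_lock);
	return ret;
}
```

Funneling every exit through a single out label is the usual kernel idiom for keeping lock acquire/release paths balanced, which is what the quoted returns fail to do.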