From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751422AbcETUxR (ORCPT );
	Fri, 20 May 2016 16:53:17 -0400
Received: from merlin.infradead.org ([205.233.59.134]:40477 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751009AbcETUxQ (ORCPT );
	Fri, 20 May 2016 16:53:16 -0400
Date: Fri, 20 May 2016 22:53:00 +0200
From: Peter Zijlstra
To: Waiman Long
Cc: Davidlohr Bueso, manfred@colorfullife.com, mingo@kernel.org,
	torvalds@linux-foundation.org, ggherdovich@suse.com,
	mgorman@techsingularity.net, linux-kernel@vger.kernel.org,
	Paul McKenney, Will Deacon
Subject: Re: sem_lock() vs qspinlocks
Message-ID: <20160520205300.GJ3193@twins.programming.kicks-ass.net>
References: <20160520053926.GC31084@linux-uzut.site>
	<20160520115819.GF3193@twins.programming.kicks-ass.net>
	<573F7723.8030201@hpe.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <573F7723.8030201@hpe.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 20, 2016 at 04:44:19PM -0400, Waiman Long wrote:
> On 05/20/2016 07:58 AM, Peter Zijlstra wrote:
> >On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> >>As such, the following restores the behavior of the ticket locks and 'fixes'
> >>(or hides?) the bug in sems. Naturally incorrect approach:
> >>
> >>@@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
> >>
> >>	for (i = 0; i < sma->sem_nsems; i++) {
> >>		sem = sma->sem_base + i;
> >>-		spin_unlock_wait(&sem->lock);
> >>+		while (atomic_read(&sem->lock))
> >>+			cpu_relax();
> >>	}
> >>	ipc_smp_acquire__after_spin_is_unlocked();
> >>}
> >
> >The actual bug is clear_pending_set_locked() not having acquire
> >semantics. And the above 'fixes' things because it will observe the old
> >pending bit or the locked bit, so it doesn't matter if the store
> >flipping them is delayed.
>
> The clear_pending_set_locked() is not the only place where the lock is set.
> If there is more than one waiter, the queuing path will be used instead.
> The set_locked(), which is also an unordered store, will then be used to
> set the lock.

Ah yes. I didn't get that far. One case was enough :-)