Hi Peter,

On Fri, May 20, 2016 at 01:58:19PM +0200, Peter Zijlstra wrote:
> On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> > As such, the following restores the behavior of the ticket locks and 'fixes'
> > (or hides?) the bug in sems. Naturally incorrect approach:
> >
> > @@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
> >
> >  	for (i = 0; i < sma->sem_nsems; i++) {
> >  		sem = sma->sem_base + i;
> > -		spin_unlock_wait(&sem->lock);
> > +		while (atomic_read(&sem->lock))
> > +			cpu_relax();
> >  	}
> >  	ipc_smp_acquire__after_spin_is_unlocked();
> > }
>
> The actual bug is clear_pending_set_locked() not having acquire
> semantics. And the above 'fixes' things because it will observe the old
> pending bit or the locked bit, so it doesn't matter if the store
> flipping them is delayed.
>
> The comment in queued_spin_lock_slowpath() above the smp_cond_acquire()
> states that that acquire is sufficient, but this is incorrect in the
> face of spin_is_locked()/spin_unlock_wait() usage only looking at the
> lock byte.
>
> The problem is that the clear_pending_set_locked() is an unordered
> store, therefore this store can be delayed until no later than
> spin_unlock() (which orders against it due to the address dependency).
>
> This opens numerous races; for example:
>
> 	ipc_lock_object(&sma->sem_perm);
> 	sem_wait_array(sma);
>
> 	false -> spin_is_locked(&sma->sem_perm.lock)
>
> is entirely possible, because sem_wait_array() consists of pure reads,
> so the store can pass all that, even on x86.
>
> The below 'hack' seems to solve the problem.
>
> _However_ this also means the atomic_cmpxchg_relaxed() in the locked:
> branch is equally wrong -- although not visible on x86. And note that
> atomic_cmpxchg_acquire() would not in fact be sufficient either, since
> the acquire is on the LOAD not the STORE of the LL/SC.
>
> I need a break of sorts, because after twisting my head around the sem
> code and then the qspinlock code I'm wrecked. I'll try and make a proper
> patch if people can indeed confirm my thinking here.
>

I think your analysis is right; however, the problem only exists if we
have the following use pattern, right?

	CPU 0                       CPU 1
	====================        ==================
	spin_lock(A);               spin_lock(B);
	spin_unlock_wait(B);        spin_unlock_wait(A);
	do_something();             do_something();

, which can end up with CPU 0 and CPU 1 both running do_something().
And this can actually be fixed simply by adding an smp_mb() between
spin_lock() and spin_unlock_wait() on both CPUs, or by adding an
smp_mb() inside spin_unlock_wait() itself, as PPC does in commit
51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()").
(A userspace sketch of this pattern is at the bottom of this mail.)

So if the relaxed/acquire atomics and clear_pending_set_locked() work
fine in every other situation, wouldn't the proper fix be to fix
spin_is_locked()/spin_unlock_wait(), or their users?

Regards,
Boqun

> ---
>  kernel/locking/qspinlock.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index ce2f75e32ae1..348e172e774f 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -366,6 +366,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  	 * *,1,0 -> *,0,1
>  	 */
>  	clear_pending_set_locked(lock);
> +	smp_mb();
>  	return;
>
>  	/*
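
Below is a minimal userspace sketch of the store-buffering pattern in
the CPU 0 / CPU 1 example above, using C11 atomics and pthreads rather
than the kernel primitives; all names here (lock_a, saw_b_free, etc.)
are made up for illustration. The two seq_cst fences stand in for the
smp_mb()s being discussed: with them in place, both threads can never
see the other "lock" as free; drop or weaken them and that outcome
becomes reachable (even on x86, which permits store-load reordering),
which is exactly the do_something()/do_something() race.

	/* sb_sketch.c -- illustration only, not kernel code */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int lock_a, lock_b;	/* stand-ins for the locks' locked bytes */
	static atomic_int saw_a_free, saw_b_free;

	static void *cpu0(void *unused)
	{
		/* "spin_lock(A)": the store that sets the locked byte */
		atomic_store_explicit(&lock_a, 1, memory_order_relaxed);
		/*
		 * The smp_mb() under discussion: without a full barrier the
		 * store above can still sit in the store buffer while the
		 * load below executes.
		 */
		atomic_thread_fence(memory_order_seq_cst);
		/* "spin_unlock_wait(B)": a pure read of the other lock */
		if (atomic_load_explicit(&lock_b, memory_order_relaxed) == 0)
			atomic_store_explicit(&saw_b_free, 1, memory_order_relaxed);
		return NULL;
	}

	static void *cpu1(void *unused)
	{
		atomic_store_explicit(&lock_b, 1, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);
		if (atomic_load_explicit(&lock_a, memory_order_relaxed) == 0)
			atomic_store_explicit(&saw_a_free, 1, memory_order_relaxed);
		return NULL;
	}

	int main(void)
	{
		int both = 0;

		for (int i = 0; i < 100000; i++) {
			pthread_t t0, t1;

			atomic_store(&lock_a, 0);
			atomic_store(&lock_b, 0);
			atomic_store(&saw_a_free, 0);
			atomic_store(&saw_b_free, 0);

			pthread_create(&t0, NULL, cpu0, NULL);
			pthread_create(&t1, NULL, cpu1, NULL);
			pthread_join(t0, NULL);
			pthread_join(t1, NULL);

			/*
			 * Both sides seeing the other lock free corresponds to
			 * both CPUs entering do_something() in the example above.
			 */
			if (atomic_load(&saw_a_free) && atomic_load(&saw_b_free))
				both++;
		}
		printf("both saw the other lock free: %d times\n", both);
		return 0;
	}

Built with something like "cc -O2 -pthread sb_sketch.c", this should
report 0 as long as the fences are there; it is only meant to make the
ordering requirement above concrete, not to model the qspinlock code.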