From: Peter Zijlstra <peterz@infradead.org>
To: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: will.deacon@arm.com, mingo@kernel.org,
	linux-kernel@vger.kernel.org, longman@redhat.com,
	tglx@linutronix.de
Subject: Re: [RFC][PATCH 3/3] locking/qspinlock: Optimize for x86
Date: Thu, 27 Sep 2018 09:17:47 +0200
Message-ID: <20180927071747.GD5254@hirez.programming.kicks-ass.net>
In-Reply-To: <20180926205208.GA4864@andrea>

On Wed, Sep 26, 2018 at 10:52:08PM +0200, Andrea Parri wrote:
> On Wed, Sep 26, 2018 at 01:01:20PM +0200, Peter Zijlstra wrote:
> > On x86 we cannot do fetch_or with a single instruction and end up
> > using a cmpxchg loop, which reduces determinism. Replace the fetch_or
> > with a very tricky composite xchg8 + load.
> > 
> > The basic idea is that we use xchg8 to test-and-set the pending bit
> > (when it is a byte) and then a load to fetch the whole word. Using
> > two instructions of course opens a window we previously did not have.
> > In particular the ordering between pending and tail is of interest,
> > because that is where the split happens.
> > 
> > The claim is that if we order them, it all works out just fine. There
> > are two specific cases where the pending,tail state changes:
> > 
> >  - when the 3rd lock(er) comes in and finds pending set, it'll queue
> >    and set tail; since we set tail while pending is set, the ordering
> >    of the split is not important (and not fundamentally different from
> >    fetch_or). [*]
> > 
> >  - when the last queued lock holder acquires the lock (uncontended),
> >    we clear the tail and set the lock byte. Because the pending
> >    bit was set first, this cmpxchg will fail and the later load
> >    must then see the remaining tail.
> > 
> > Another interesting scenario is where there are only 2 threads:
> > 
> > 	lock := (0,0,0)
> > 
> > 	CPU 0			CPU 1
> > 
> > 	lock()			lock()
> > 	  trylock(-> 0,0,1)       trylock() /* fail */
> > 	    return;               xchg_relaxed(pending, 1) (-> 0,1,1)
> > 				  mb()
> > 				  val = smp_load_acquire(*lock);
> > 
> > Where, without the mb(), the load would have been allowed to return
> > 0 for the locked byte.
> 
> If this were true, we would have a violation of "coherence":

The thing is, this is mixed size, see:

  https://www.cl.cam.ac.uk/~pes20/popl17/mixed-size.pdf

If I remember things correctly (I've not reread that paper recently) it
is allowed for:

	old = xchg(pending,1);
	val = smp_load_acquire(*lock);

to be re-ordered like:

	val = smp_load_acquire(*lock);
	old = xchg(pending, 1);

with the exception that it will forward the pending byte we just
stored into the later load, so we get:

	val = (val & ~_Q_PENDING_MASK) | _Q_PENDING_VAL;

for 'free'.
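
To make that concrete, here is a sketch of the reordered execution on
CPU 1 in the two-thread scenario above (hypothetical values; using
atomic_read_acquire() in place of the smp_load_acquire(*lock)
shorthand, and the (tail,pending,locked) notation as before):

	/* CPU 1, reordered: the full-word load is satisfied early */
	u32 val = atomic_read_acquire(&lock->val);	/* sees (0,0,0) */
	u8  old = xchg_relaxed(&lock->pending, 1);	/* old == 0 */

	/* store-to-load forwarding patches in our own pending byte */
	val = (val & ~_Q_PENDING_MASK) | _Q_PENDING_VAL; /* now (0,1,0) */

	/* locked byte reads 0 even though CPU 0 already owns the lock */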

LKMM in particular does _NOT_ deal with mixed-size atomics _at_all_.

With the addition of smp_mb__after_atomic(), we disallow the load from
being done prior to the xchg(). It might still forward the more recent
pending byte from its store buffer, but at least the other bytes must
not come from before the xchg().
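
For completeness, a sketch of how the composite might look with that
barrier added (field and macro names follow the qspinlock code; this
is an illustration of the idea described above, not the actual patch):

	static inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock)
	{
		u32 val;

		/* single xchgb: test-and-set the pending byte */
		val = xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;

		/* forbid the full-word load below from being done early */
		smp_mb__after_atomic();

		/* fetch the rest of the word, masking out our pending byte */
		val |= atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK;

		return val;
	}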


Thread overview: 32+ messages
2018-09-26 11:01 [RFC][PATCH 0/3] locking/qspinlock: Improve determinism for x86 Peter Zijlstra
2018-09-26 11:01 ` [RFC][PATCH 1/3] locking/qspinlock: Re-order code Peter Zijlstra
2018-10-01 17:17   ` Will Deacon
2018-09-26 11:01 ` [RFC][PATCH 2/3] locking/qspinlock: Rework some comments Peter Zijlstra
2018-10-01 17:17   ` Will Deacon
2018-10-01 19:10     ` Peter Zijlstra
2018-10-02 13:20       ` Will Deacon
2018-10-02 13:43         ` Peter Zijlstra
2018-09-26 11:01 ` [RFC][PATCH 3/3] locking/qspinlock: Optimize for x86 Peter Zijlstra
2018-09-26 16:30   ` Waiman Long
2018-09-26 17:54     ` Peter Zijlstra
2018-09-27  7:29       ` Peter Zijlstra
2018-09-26 20:52   ` Andrea Parri
2018-09-27  7:17     ` Peter Zijlstra [this message]
2018-09-27  7:47       ` Andrea Parri
2018-09-27  7:59         ` Peter Zijlstra
2018-09-27  8:13           ` Andrea Parri
2018-09-27  8:57             ` Peter Zijlstra
2018-09-27 12:16   ` David Laight
2018-10-01 17:17   ` Will Deacon
2018-10-01 20:00     ` Peter Zijlstra
2018-10-02 13:19       ` Will Deacon
2018-10-02 14:14         ` Peter Zijlstra
2018-10-02 12:31     ` Andrea Parri
2018-10-02 13:22       ` Will Deacon
2018-10-02 13:44         ` Andrea Parri
2018-09-26 15:01 ` [RFC][PATCH 0/3] locking/qspinlock: Improve determinism " Sebastian Andrzej Siewior
2018-09-26 15:08   ` Thomas Gleixner
2018-09-26 15:38     ` Sebastian Andrzej Siewior
2018-09-26 16:20 ` Waiman Long
2018-09-26 17:51   ` Peter Zijlstra
2018-09-26 23:21     ` Waiman Long
