* RE: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
       [not found] <3344e7aeb09644758860ac343bd757a1@AcuMS.aculab.com>
@ 2018-07-11 17:36 ` Alan Stern
  0 siblings, 0 replies; 10+ messages in thread
From: Alan Stern @ 2018-07-11 17:36 UTC (permalink / raw)
  To: David Laight
  Cc: Paul E. McKenney, LKMM Maintainers -- Akira Yokosawa,
	Andrea Parri, Boqun Feng, Daniel Lustig, David Howells,
	Jade Alglave, Luc Maranget, Nicholas Piggin, Peter Zijlstra,
	Will Deacon, Kernel development list

On Wed, 11 Jul 2018, David Laight wrote:

> > From: Alan Stern
> > Sent: 10 July 2018 19:18
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking. In other words, given
> > the following code:
> >
> > WRITE_ONCE(x, 1);
> > spin_unlock(&s);
> > spin_lock(&s);
> > WRITE_ONCE(y, 1);
> >
> > the stores to x and y should be propagated in order to all other CPUs,
> > even though those other CPUs might not access the lock s. In terms of
> > the memory model, this means expanding the cumul-fence relation.
> 
> The usual 'elephant in the room' is Alpha.
> I don't claim to understand the alpha memory model but it wouldn't
> surprise me if the above is impossible to implement on alpha.

It's not impossible, since Alpha does have a full memory barrier 
instruction (and the implementation uses it).
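
For reference, a simplified sketch of that implementation, modeled on
arch/alpha/include/asm/spinlock.h (details trimmed, so treat it as an
illustration rather than the verbatim source):

	/*
	 * The full "mb" barrier ahead of the store is what lets an
	 * unlock/lock pair order writes for all observers, not just
	 * for the next holder of the lock.
	 */
	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		mb();			/* full memory barrier */
		lock->lock = 0;		/* release the lock word */
	}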

Alan



* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-11 16:34     ` Peter Zijlstra
@ 2018-07-11 18:10       ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2018-07-11 18:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Alan Stern, LKMM Maintainers -- Akira Yokosawa,
	Andrea Parri, Boqun Feng, Daniel Lustig, David Howells,
	Jade Alglave, Luc Maranget, Nicholas Piggin,
	Kernel development list

On Wed, Jul 11, 2018 at 06:34:56PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 11, 2018 at 10:43:45AM +0100, Will Deacon wrote:
> > Hi Alan,
> > 
> > On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.  In other words, given
> > > the following code:
> > > 
> > > 	WRITE_ONCE(x, 1);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	WRITE_ONCE(y, 1);
> > > 
> > > the stores to x and y should be propagated in order to all other CPUs,
> > > even though those other CPUs might not access the lock s.  In terms of
> > > the memory model, this means expanding the cumul-fence relation.
> > > 
> > > Locks should also provide read-read (and read-write) ordering in a
> > > similar way.  Given:
> > > 
> > > 	READ_ONCE(x);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > > 
> > > the load of x should be executed before the load of (or store to) y.
> > > The LKMM already provides this ordering, but it provides it even in
> > > the case where the two accesses are separated by a release/acquire
> > > pair of fences rather than unlock/lock.  This would prevent
> > > architectures from using weakly ordered implementations of release and
> > > acquire, which seems like an unnecessary restriction.  The patch
> > > therefore removes the ordering requirement from the LKMM for that
> > > case.
> > > 
> > > All the architectures supported by the Linux kernel (including RISC-V)
> > > do provide this ordering for locks, albeit for varying reasons.
> > > Therefore this patch changes the model in accordance with the
> > > developers' wishes.
> > > 
> > > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> > 
> > Thanks, I'm happy with this version of the patch:
> > 
> > Reviewed-by: Will Deacon <will.deacon@arm.com>
> 
> Me too! Thanks Alan.
> 
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

And I applied your ack as well, thank you!

							Thanx, Paul



* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-11 16:17       ` Andrea Parri
@ 2018-07-11 18:03         ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2018-07-11 18:03 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Will Deacon, Alan Stern, LKMM Maintainers -- Akira Yokosawa,
	Boqun Feng, Daniel Lustig, David Howells, Jade Alglave,
	Luc Maranget, Nicholas Piggin, Peter Zijlstra,
	Kernel development list

On Wed, Jul 11, 2018 at 06:17:17PM +0200, Andrea Parri wrote:
> On Wed, Jul 11, 2018 at 08:42:11AM -0700, Paul E. McKenney wrote:
> > On Wed, Jul 11, 2018 at 10:43:45AM +0100, Will Deacon wrote:
> > > Hi Alan,
> > > 
> > > On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > > > More than one kernel developer has expressed the opinion that the LKMM
> > > > should enforce ordering of writes by locking.  In other words, given
> > > > the following code:
> > > > 
> > > > 	WRITE_ONCE(x, 1);
> > > > 	spin_unlock(&s);
> > > > 	spin_lock(&s);
> > > > 	WRITE_ONCE(y, 1);
> > > > 
> > > > the stores to x and y should be propagated in order to all other CPUs,
> > > > even though those other CPUs might not access the lock s.  In terms of
> > > > the memory model, this means expanding the cumul-fence relation.
> > > > 
> > > > Locks should also provide read-read (and read-write) ordering in a
> > > > similar way.  Given:
> > > > 
> > > > 	READ_ONCE(x);
> > > > 	spin_unlock(&s);
> > > > 	spin_lock(&s);
> > > > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > > > 
> > > > the load of x should be executed before the load of (or store to) y.
> > > > The LKMM already provides this ordering, but it provides it even in
> > > > the case where the two accesses are separated by a release/acquire
> > > > pair of fences rather than unlock/lock.  This would prevent
> > > > architectures from using weakly ordered implementations of release and
> > > > acquire, which seems like an unnecessary restriction.  The patch
> > > > therefore removes the ordering requirement from the LKMM for that
> > > > case.
> > > > 
> > > > All the architectures supported by the Linux kernel (including RISC-V)
> > > > do provide this ordering for locks, albeit for varying reasons.
> > > > Therefore this patch changes the model in accordance with the
> > > > developers' wishes.
> > > > 
> > > > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> > > 
> > > Thanks, I'm happy with this version of the patch:
> > > 
> > > Reviewed-by: Will Deacon <will.deacon@arm.com>
> > 
> > I have applied your Reviewed-by, and thank you both!
> > 
> > Given that this is a non-trivial change and given that I am posting
> > for -tip acceptance in a few days, I intend to send this one not
> > to the upcoming merge window, but to the one after that.
> > 
> > Please let me know if there is an urgent need for this to go into the
> > v4.19 merge window.
> 
> I raised some concerns in my review of v2; AFAICT, these concerns have
> not been resolved: so, until then, please feel free to add my NAK. ;-)

I will be keeping the prototype in my -rcu tree for the time being,
but I will not be submitting it into the v4.19 merge window.  I expect
that you all will be able to come to agreement in the couple of months
until the v4.20/v5.0 merge window.  ;-)

I will apply Peter's ack and at the same time mark it EXP so that its
state will be apparent.

							Thanx, Paul



* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-11  9:43   ` Will Deacon
  2018-07-11 15:42     ` Paul E. McKenney
@ 2018-07-11 16:34     ` Peter Zijlstra
  2018-07-11 18:10       ` Paul E. McKenney
  1 sibling, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2018-07-11 16:34 UTC (permalink / raw)
  To: Will Deacon
  Cc: Alan Stern, Paul E. McKenney, LKMM Maintainers -- Akira Yokosawa,
	Andrea Parri, Boqun Feng, Daniel Lustig, David Howells,
	Jade Alglave, Luc Maranget, Nicholas Piggin,
	Kernel development list

On Wed, Jul 11, 2018 at 10:43:45AM +0100, Will Deacon wrote:
> Hi Alan,
> 
> On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking.  In other words, given
> > the following code:
> > 
> > 	WRITE_ONCE(x, 1);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	WRITE_ONCE(y, 1);
> > 
> > the stores to x and y should be propagated in order to all other CPUs,
> > even though those other CPUs might not access the lock s.  In terms of
> > the memory model, this means expanding the cumul-fence relation.
> > 
> > Locks should also provide read-read (and read-write) ordering in a
> > similar way.  Given:
> > 
> > 	READ_ONCE(x);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > 
> > the load of x should be executed before the load of (or store to) y.
> > The LKMM already provides this ordering, but it provides it even in
> > the case where the two accesses are separated by a release/acquire
> > pair of fences rather than unlock/lock.  This would prevent
> > architectures from using weakly ordered implementations of release and
> > acquire, which seems like an unnecessary restriction.  The patch
> > therefore removes the ordering requirement from the LKMM for that
> > case.
> > 
> > All the architectures supported by the Linux kernel (including RISC-V)
> > do provide this ordering for locks, albeit for varying reasons.
> > Therefore this patch changes the model in accordance with the
> > developers' wishes.
> > 
> > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> 
> Thanks, I'm happy with this version of the patch:
> 
> Reviewed-by: Will Deacon <will.deacon@arm.com>

Me too! Thanks Alan.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-11 15:42     ` Paul E. McKenney
@ 2018-07-11 16:17       ` Andrea Parri
  2018-07-11 18:03         ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Andrea Parri @ 2018-07-11 16:17 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Will Deacon, Alan Stern, LKMM Maintainers -- Akira Yokosawa,
	Boqun Feng, Daniel Lustig, David Howells, Jade Alglave,
	Luc Maranget, Nicholas Piggin, Peter Zijlstra,
	Kernel development list

On Wed, Jul 11, 2018 at 08:42:11AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 11, 2018 at 10:43:45AM +0100, Will Deacon wrote:
> > Hi Alan,
> > 
> > On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.  In other words, given
> > > the following code:
> > > 
> > > 	WRITE_ONCE(x, 1);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	WRITE_ONCE(y, 1);
> > > 
> > > the stores to x and y should be propagated in order to all other CPUs,
> > > even though those other CPUs might not access the lock s.  In terms of
> > > the memory model, this means expanding the cumul-fence relation.
> > > 
> > > Locks should also provide read-read (and read-write) ordering in a
> > > similar way.  Given:
> > > 
> > > 	READ_ONCE(x);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > > 
> > > the load of x should be executed before the load of (or store to) y.
> > > The LKMM already provides this ordering, but it provides it even in
> > > the case where the two accesses are separated by a release/acquire
> > > pair of fences rather than unlock/lock.  This would prevent
> > > architectures from using weakly ordered implementations of release and
> > > acquire, which seems like an unnecessary restriction.  The patch
> > > therefore removes the ordering requirement from the LKMM for that
> > > case.
> > > 
> > > All the architectures supported by the Linux kernel (including RISC-V)
> > > do provide this ordering for locks, albeit for varying reasons.
> > > Therefore this patch changes the model in accordance with the
> > > developers' wishes.
> > > 
> > > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> > 
> > Thanks, I'm happy with this version of the patch:
> > 
> > Reviewed-by: Will Deacon <will.deacon@arm.com>
> 
> I have applied your Reviewed-by, and thank you both!
> 
> Given that this is a non-trivial change and given that I am posting
> for -tip acceptance in a few days, I intend to send this one not
> to the upcoming merge window, but to the one after that.
> 
> Please let me know if there is an urgent need for this to go into the
> v4.19 merge window.

I raised some concerns in my review of v2; AFAICT, these concerns have
not been resolved: so, until then, please feel free to add my NAK. ;-)

  Andrea


> 
> 							Thanx, Paul
> 


* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-11  9:43   ` Will Deacon
@ 2018-07-11 15:42     ` Paul E. McKenney
  2018-07-11 16:17       ` Andrea Parri
  2018-07-11 16:34     ` Peter Zijlstra
  1 sibling, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2018-07-11 15:42 UTC (permalink / raw)
  To: Will Deacon
  Cc: Alan Stern, LKMM Maintainers -- Akira Yokosawa, Andrea Parri,
	Boqun Feng, Daniel Lustig, David Howells, Jade Alglave,
	Luc Maranget, Nicholas Piggin, Peter Zijlstra,
	Kernel development list

On Wed, Jul 11, 2018 at 10:43:45AM +0100, Will Deacon wrote:
> Hi Alan,
> 
> On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking.  In other words, given
> > the following code:
> > 
> > 	WRITE_ONCE(x, 1);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	WRITE_ONCE(y, 1);
> > 
> > the stores to x and y should be propagated in order to all other CPUs,
> > even though those other CPUs might not access the lock s.  In terms of
> > the memory model, this means expanding the cumul-fence relation.
> > 
> > Locks should also provide read-read (and read-write) ordering in a
> > similar way.  Given:
> > 
> > 	READ_ONCE(x);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > 
> > the load of x should be executed before the load of (or store to) y.
> > The LKMM already provides this ordering, but it provides it even in
> > the case where the two accesses are separated by a release/acquire
> > pair of fences rather than unlock/lock.  This would prevent
> > architectures from using weakly ordered implementations of release and
> > acquire, which seems like an unnecessary restriction.  The patch
> > therefore removes the ordering requirement from the LKMM for that
> > case.
> > 
> > All the architectures supported by the Linux kernel (including RISC-V)
> > do provide this ordering for locks, albeit for varying reasons.
> > Therefore this patch changes the model in accordance with the
> > developers' wishes.
> > 
> > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> 
> Thanks, I'm happy with this version of the patch:
> 
> Reviewed-by: Will Deacon <will.deacon@arm.com>

I have applied your Reviewed-by, and thank you both!

Given that this is a non-trivial change and given that I am posting
for -tip acceptance in a few days, I intend to send this one not
to the upcoming merge window, but to the one after that.

Please let me know if there is an urgent need for this to go into the
v4.19 merge window.

							Thanx, Paul



* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
       [not found] ` <Pine.LNX.4.44L0.1807101416390.1449-100000@iolanthe.rowland.org>
  2018-07-10 19:58   ` [PATCH v3] " Paul E. McKenney
@ 2018-07-11  9:43   ` Will Deacon
  2018-07-11 15:42     ` Paul E. McKenney
  2018-07-11 16:34     ` Peter Zijlstra
  1 sibling, 2 replies; 10+ messages in thread
From: Will Deacon @ 2018-07-11  9:43 UTC (permalink / raw)
  To: Alan Stern
  Cc: Paul E. McKenney, LKMM Maintainers -- Akira Yokosawa,
	Andrea Parri, Boqun Feng, Daniel Lustig, David Howells,
	Jade Alglave, Luc Maranget, Nicholas Piggin, Peter Zijlstra,
	Kernel development list

Hi Alan,

On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> More than one kernel developer has expressed the opinion that the LKMM
> should enforce ordering of writes by locking.  In other words, given
> the following code:
> 
> 	WRITE_ONCE(x, 1);
> 	spin_unlock(&s);
> 	spin_lock(&s);
> 	WRITE_ONCE(y, 1);
> 
> the stores to x and y should be propagated in order to all other CPUs,
> even though those other CPUs might not access the lock s.  In terms of
> the memory model, this means expanding the cumul-fence relation.
> 
> Locks should also provide read-read (and read-write) ordering in a
> similar way.  Given:
> 
> 	READ_ONCE(x);
> 	spin_unlock(&s);
> 	spin_lock(&s);
> 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> 
> the load of x should be executed before the load of (or store to) y.
> The LKMM already provides this ordering, but it provides it even in
> the case where the two accesses are separated by a release/acquire
> pair of fences rather than unlock/lock.  This would prevent
> architectures from using weakly ordered implementations of release and
> acquire, which seems like an unnecessary restriction.  The patch
> therefore removes the ordering requirement from the LKMM for that
> case.
> 
> All the architectures supported by the Linux kernel (including RISC-V)
> do provide this ordering for locks, albeit for varying reasons.
> Therefore this patch changes the model in accordance with the
> developers' wishes.
> 
> Signed-off-by: Alan Stern <stern@rowland.harvard.edu>

Thanks, I'm happy with this version of the patch:

Reviewed-by: Will Deacon <will.deacon@arm.com>

Will


* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-10 20:24     ` Alan Stern
@ 2018-07-10 20:31       ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2018-07-10 20:31 UTC (permalink / raw)
  To: Alan Stern
  Cc: LKMM Maintainers -- Akira Yokosawa, Andrea Parri, Boqun Feng,
	Daniel Lustig, David Howells, Jade Alglave, Luc Maranget,
	Nicholas Piggin, Peter Zijlstra, Will Deacon,
	Kernel development list

On Tue, Jul 10, 2018 at 04:24:34PM -0400, Alan Stern wrote:
> On Tue, 10 Jul 2018, Paul E. McKenney wrote:
> 
> > On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.  In other words, given
> > > the following code:
> > > 
> > > 	WRITE_ONCE(x, 1);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	WRITE_ONCE(y, 1);
> > > 
> > > the stores to x and y should be propagated in order to all other CPUs,
> > > even though those other CPUs might not access the lock s.  In terms of
> > > the memory model, this means expanding the cumul-fence relation.
> > > 
> > > Locks should also provide read-read (and read-write) ordering in a
> > > similar way.  Given:
> > > 
> > > 	READ_ONCE(x);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > > 
> > > the load of x should be executed before the load of (or store to) y.
> > > The LKMM already provides this ordering, but it provides it even in
> > > the case where the two accesses are separated by a release/acquire
> > > pair of fences rather than unlock/lock.  This would prevent
> > > architectures from using weakly ordered implementations of release and
> > > acquire, which seems like an unnecessary restriction.  The patch
> > > therefore removes the ordering requirement from the LKMM for that
> > > case.
> > > 
> > > All the architectures supported by the Linux kernel (including RISC-V)
> > > do provide this ordering for locks, albeit for varying reasons.
> > > Therefore this patch changes the model in accordance with the
> > > developers' wishes.
> > > 
> > > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> > 
> > It now applies, thank you very much!
> > 
> > Is this something that you are comfortable pushing into the upcoming
> > merge window, or should I hold off until the next one?
> 
> Given the concerns that Andrea raised, and given that neither Peter, 
> Will, nor Daniel has commented on v.3 of the patch, I think we should 
> hold off for a little while.

Works for me!

							Thanx, Paul



* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
  2018-07-10 19:58   ` [PATCH v3] " Paul E. McKenney
@ 2018-07-10 20:24     ` Alan Stern
  2018-07-10 20:31       ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Alan Stern @ 2018-07-10 20:24 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKMM Maintainers -- Akira Yokosawa, Andrea Parri, Boqun Feng,
	Daniel Lustig, David Howells, Jade Alglave, Luc Maranget,
	Nicholas Piggin, Peter Zijlstra, Will Deacon,
	Kernel development list

On Tue, 10 Jul 2018, Paul E. McKenney wrote:

> On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking.  In other words, given
> > the following code:
> > 
> > 	WRITE_ONCE(x, 1);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	WRITE_ONCE(y, 1);
> > 
> > the stores to x and y should be propagated in order to all other CPUs,
> > even though those other CPUs might not access the lock s.  In terms of
> > the memory model, this means expanding the cumul-fence relation.
> > 
> > Locks should also provide read-read (and read-write) ordering in a
> > similar way.  Given:
> > 
> > 	READ_ONCE(x);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > 
> > the load of x should be executed before the load of (or store to) y.
> > The LKMM already provides this ordering, but it provides it even in
> > the case where the two accesses are separated by a release/acquire
> > pair of fences rather than unlock/lock.  This would prevent
> > architectures from using weakly ordered implementations of release and
> > acquire, which seems like an unnecessary restriction.  The patch
> > therefore removes the ordering requirement from the LKMM for that
> > case.
> > 
> > All the architectures supported by the Linux kernel (including RISC-V)
> > do provide this ordering for locks, albeit for varying reasons.
> > Therefore this patch changes the model in accordance with the
> > developers' wishes.
> > 
> > Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
> 
> It now applies, thank you very much!
> 
> Is this something that you are comfortable pushing into the upcoming
> merge window, or should I hold off until the next one?

Given the concerns that Andrea raised, and given that neither Peter, 
Will, nor Daniel has commented on v.3 of the patch, I think we should 
hold off for a little while.

Alan



* Re: [PATCH v3] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
       [not found] ` <Pine.LNX.4.44L0.1807101416390.1449-100000@iolanthe.rowland.org>
@ 2018-07-10 19:58   ` Paul E. McKenney
  2018-07-10 20:24     ` Alan Stern
  2018-07-11  9:43   ` Will Deacon
  1 sibling, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2018-07-10 19:58 UTC (permalink / raw)
  To: Alan Stern
  Cc: LKMM Maintainers -- Akira Yokosawa, Andrea Parri, Boqun Feng,
	Daniel Lustig, David Howells, Jade Alglave, Luc Maranget,
	Nicholas Piggin, Peter Zijlstra, Will Deacon,
	Kernel development list

On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> More than one kernel developer has expressed the opinion that the LKMM
> should enforce ordering of writes by locking.  In other words, given
> the following code:
> 
> 	WRITE_ONCE(x, 1);
> 	spin_unlock(&s);
> 	spin_lock(&s);
> 	WRITE_ONCE(y, 1);
> 
> the stores to x and y should be propagated in order to all other CPUs,
> even though those other CPUs might not access the lock s.  In terms of
> the memory model, this means expanding the cumul-fence relation.
> 
> Locks should also provide read-read (and read-write) ordering in a
> similar way.  Given:
> 
> 	READ_ONCE(x);
> 	spin_unlock(&s);
> 	spin_lock(&s);
> 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> 
> the load of x should be executed before the load of (or store to) y.
> The LKMM already provides this ordering, but it provides it even in
> the case where the two accesses are separated by a release/acquire
> pair of fences rather than unlock/lock.  This would prevent
> architectures from using weakly ordered implementations of release and
> acquire, which seems like an unnecessary restriction.  The patch
> therefore removes the ordering requirement from the LKMM for that
> case.
> 
> All the architectures supported by the Linux kernel (including RISC-V)
> do provide this ordering for locks, albeit for varying reasons.
> Therefore this patch changes the model in accordance with the
> developers' wishes.
> 
> Signed-off-by: Alan Stern <stern@rowland.harvard.edu>

It now applies, thank you very much!

Is this something that you are comfortable pushing into the upcoming
merge window, or should I hold off until the next one?

							Thanx, Paul

> ---
> 
> 
> v.3: Rebased against the dev branch of Paul's linux-rcu tree.
> Changed unlock-rf-lock-po to po-unlock-rf-lock-po, making it more
> symmetrical and more in accordance with the use of fence.tso for
> the release on RISC-V.
> 
> v.2: Restrict the ordering to lock operations, not general release
> and acquire fences.
> 
> [as1871c]
> 
> 
>  tools/memory-model/Documentation/explanation.txt                           |  186 +++++++---
>  tools/memory-model/linux-kernel.cat                                        |    8 
>  tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus |    7 
>  3 files changed, 150 insertions(+), 51 deletions(-)
> 
> Index: usb-4.x/tools/memory-model/linux-kernel.cat
> ===================================================================
> --- usb-4.x.orig/tools/memory-model/linux-kernel.cat
> +++ usb-4.x/tools/memory-model/linux-kernel.cat
> @@ -38,7 +38,7 @@ let strong-fence = mb | gp
>  (* Release Acquire *)
>  let acq-po = [Acquire] ; po ; [M]
>  let po-rel = [M] ; po ; [Release]
> -let rfi-rel-acq = [Release] ; rfi ; [Acquire]
> +let po-unlock-rf-lock-po = po ; [UL] ; rf ; [LKR] ; po
> 
>  (**********************************)
>  (* Fundamental coherence ordering *)
> @@ -60,13 +60,13 @@ let dep = addr | data
>  let rwdep = (dep | ctrl) ; [W]
>  let overwrite = co | fr
>  let to-w = rwdep | (overwrite & int)
> -let to-r = addr | (dep ; rfi) | rfi-rel-acq
> +let to-r = addr | (dep ; rfi)
>  let fence = strong-fence | wmb | po-rel | rmb | acq-po
> -let ppo = to-r | to-w | fence
> +let ppo = to-r | to-w | fence | (po-unlock-rf-lock-po & int)
> 
>  (* Propagation: Ordering from release operations and strong fences. *)
>  let A-cumul(r) = rfe? ; r
> -let cumul-fence = A-cumul(strong-fence | po-rel) | wmb
> +let cumul-fence = A-cumul(strong-fence | po-rel) | wmb | po-unlock-rf-lock-po
>  let prop = (overwrite & ext)? ; cumul-fence* ; rfe?
> 
>  (*
> Index: usb-4.x/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
> ===================================================================
> --- usb-4.x.orig/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
> +++ usb-4.x/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
> @@ -1,11 +1,10 @@
>  C ISA2+pooncelock+pooncelock+pombonce
> 
>  (*
> - * Result: Sometimes
> + * Result: Never
>   *
> - * This test shows that the ordering provided by a lock-protected S
> - * litmus test (P0() and P1()) are not visible to external process P2().
> - * This is likely to change soon.
> + * This test shows that write-write ordering provided by locks
> + * (in P0() and P1()) is visible to external process P2().
>   *)
> 
>  {}
> Index: usb-4.x/tools/memory-model/Documentation/explanation.txt
> ===================================================================
> --- usb-4.x.orig/tools/memory-model/Documentation/explanation.txt
> +++ usb-4.x/tools/memory-model/Documentation/explanation.txt
> @@ -28,7 +28,8 @@ Explanation of the Linux-Kernel Memory C
>    20. THE HAPPENS-BEFORE RELATION: hb
>    21. THE PROPAGATES-BEFORE RELATION: pb
>    22. RCU RELATIONS: rcu-link, gp, rscs, rcu-fence, and rb
> -  23. ODDS AND ENDS
> +  23. LOCKING
> +  24. ODDS AND ENDS
> 
> 
> 
> @@ -1067,28 +1068,6 @@ allowing out-of-order writes like this t
>  violating the write-write coherence rule by requiring the CPU not to
>  send the W write to the memory subsystem at all!)
> 
> -There is one last example of preserved program order in the LKMM: when
> -a load-acquire reads from an earlier store-release.  For example:
> -
> -	smp_store_release(&x, 123);
> -	r1 = smp_load_acquire(&x);
> -
> -If the smp_load_acquire() ends up obtaining the 123 value that was
> -stored by the smp_store_release(), the LKMM says that the load must be
> -executed after the store; the store cannot be forwarded to the load.
> -This requirement does not arise from the operational model, but it
> -yields correct predictions on all architectures supported by the Linux
> -kernel, although for differing reasons.
> -
> -On some architectures, including x86 and ARMv8, it is true that the
> -store cannot be forwarded to the load.  On others, including PowerPC
> -and ARMv7, smp_store_release() generates object code that starts with
> -a fence and smp_load_acquire() generates object code that ends with a
> -fence.  The upshot is that even though the store may be forwarded to
> -the load, it is still true that any instruction preceding the store
> -will be executed before the load or any following instructions, and
> -the store will be executed before any instruction following the load.
> -
> 
>  AND THEN THERE WAS ALPHA
>  ------------------------
> @@ -1766,6 +1745,147 @@ before it does, and the critical section
>  grace period does and ends after it does.
> 
> 
> +LOCKING
> +-------
> +
> +The LKMM includes locking.  In fact, there is special code for locking
> +in the formal model, added in order to make tools run faster.
> +However, this special code is intended to be more or less equivalent
> +to concepts we have already covered.  A spinlock_t variable is treated
> +the same as an int, and spin_lock(&s) is treated almost the same as:
> +
> +	while (cmpxchg_acquire(&s, 0, 1) != 0)
> +		cpu_relax();
> +
> +This waits until s is equal to 0 and then atomically sets it to 1,
> +and the read part of the cmpxchg operation acts as an acquire fence.
> +An alternate way to express the same thing would be:
> +
> +	r = xchg_acquire(&s, 1);
> +
> +along with a requirement that at the end, r = 0.  Similarly,
> +spin_trylock(&s) is treated almost the same as:
> +
> +	return !cmpxchg_acquire(&s, 0, 1);
> +
> +which atomically sets s to 1 if it is currently equal to 0 and returns
> +true if it succeeds (the read part of the cmpxchg operation acts as an
> +acquire fence only if the operation is successful).  spin_unlock(&s)
> +is treated almost the same as:
> +
> +	smp_store_release(&s, 0);
> +
> +The "almost" qualifiers above need some explanation.  In the LKMM, the
> +store-release in a spin_unlock() and the load-acquire which forms the
> +first half of the atomic rmw update in a spin_lock() or a successful
> +spin_trylock() -- we can call these things lock-releases and
> +lock-acquires -- have two properties beyond those of ordinary releases
> +and acquires.
> +
> +First, when a lock-acquire reads from a lock-release, the LKMM
> +requires that every instruction po-before the lock-release must
> +execute before any instruction po-after the lock-acquire.  This would
> +naturally hold if the release and acquire operations were on different
> +CPUs, but the LKMM says it holds even when they are on the same CPU.
> +For example:
> +
> +	int x, y;
> +	spinlock_t s;
> +
> +	P0()
> +	{
> +		int r1, r2;
> +
> +		spin_lock(&s);
> +		r1 = READ_ONCE(x);
> +		spin_unlock(&s);
> +		spin_lock(&s);
> +		r2 = READ_ONCE(y);
> +		spin_unlock(&s);
> +	}
> +
> +	P1()
> +	{
> +		WRITE_ONCE(y, 1);
> +		smp_wmb();
> +		WRITE_ONCE(x, 1);
> +	}
> +
> +Here the second spin_lock() reads from the first spin_unlock(), and
> +therefore the load of x must execute before the load of y.  Thus we
> +cannot have r1 = 1 and r2 = 0 at the end (this is an instance of the
> +MP pattern).
> +
> +This requirement does not apply to ordinary release and acquire
> +fences, only to lock-related operations.  For instance, suppose P0()
> +in the example had been written as:
> +
> +	P0()
> +	{
> +		int r1, r2, r3;
> +
> +		r1 = READ_ONCE(x);
> +		smp_store_release(&s, 1);
> +		r3 = smp_load_acquire(&s);
> +		r2 = READ_ONCE(y);
> +	}
> +
> +Then the CPU would be allowed to forward the s = 1 value from the
> +smp_store_release() to the smp_load_acquire(), executing the
> +instructions in the following order:
> +
> +		r3 = smp_load_acquire(&s);	// Obtains r3 = 1
> +		r2 = READ_ONCE(y);
> +		r1 = READ_ONCE(x);
> +		smp_store_release(&s, 1);	// Value is forwarded
> +
> +and thus it could load y before x, obtaining r2 = 0 and r1 = 1.
> +
> +Second, when a lock-acquire reads from a lock-release, and some other
> +stores W and W' occur po-before the lock-release and po-after the
> +lock-acquire respectively, the LKMM requires that W must propagate to
> +each CPU before W' does.  For example, consider:
> +
> +	int x, y;
> +	spinlock_t s;
> +
> +	P0()
> +	{
> +		spin_lock(&s);
> +		WRITE_ONCE(x, 1);
> +		spin_unlock(&s);
> +	}
> +
> +	P1()
> +	{
> +		int r1;
> +
> +		spin_lock(&s);
> +		r1 = READ_ONCE(x);
> +		WRITE_ONCE(y, 1);
> +		spin_unlock(&s);
> +	}
> +
> +	P2()
> +	{
> +		int r2, r3;
> +
> +		r2 = READ_ONCE(y);
> +		smp_rmb();
> +		r3 = READ_ONCE(x);
> +	}
> +
> +If r1 = 1 at the end then the spin_lock() in P1 must have read from
> +the spin_unlock() in P0.  Hence the store to x must propagate to P2
> +before the store to y does, so we cannot have r2 = 1 and r3 = 0.
> +
> +These two special requirements for lock-release and lock-acquire do
> +not arise from the operational model.  Nevertheless, kernel developers
> +have come to expect and rely on them because they do hold on all
> +architectures supported by the Linux kernel, albeit for various
> +differing reasons.
> +
> +
>  ODDS AND ENDS
>  -------------
> 
> @@ -1831,26 +1951,6 @@ they behave as follows:
>  	events and the events preceding them against all po-later
>  	events.
> 
> -The LKMM includes locking.  In fact, there is special code for locking
> -in the formal model, added in order to make tools run faster.
> -However, this special code is intended to be exactly equivalent to
> -concepts we have already covered.  A spinlock_t variable is treated
> -the same as an int, and spin_lock(&s) is treated the same as:
> -
> -	while (cmpxchg_acquire(&s, 0, 1) != 0)
> -		cpu_relax();
> -
> -which waits until s is equal to 0 and then atomically sets it to 1,
> -and where the read part of the atomic update is also an acquire fence.
> -An alternate way to express the same thing would be:
> -
> -	r = xchg_acquire(&s, 1);
> -
> -along with a requirement that at the end, r = 0.  spin_unlock(&s) is
> -treated the same as:
> -
> -	smp_store_release(&s, 0);
> -
>  Interestingly, RCU and locking each introduce the possibility of
>  deadlock.  When faced with code sequences such as:
> 
> 
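
For readers who want to check the new write-write guarantee directly, a
minimal litmus test in the style of tools/memory-model/litmus-tests/
might look like this (an illustrative sketch, not part of the patch; the
test name is made up).  Under the patched model the "exists" clause
should never be satisfied, because the store to x must propagate to P1
before the store to y does:

	C MP+pooncelocks+poonceonces

	{}

	P0(int *x, int *y, spinlock_t *s)
	{
		spin_lock(s);
		WRITE_ONCE(*x, 1);
		spin_unlock(s);
		spin_lock(s);
		WRITE_ONCE(*y, 1);
		spin_unlock(s);
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);
		smp_rmb();
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)

Such a test can be run against the model with herd7, for example
"herd7 -conf linux-kernel.cfg MP+pooncelocks+poonceonces.litmus", as
described in tools/memory-model/README.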


