From: Alan Stern <stern@rowland.harvard.edu>
To: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: linux-kernel@vger.kernel.org, <linux-doc@vger.kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>, Will Deacon <will.deacon@arm.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	Nicholas Piggin <npiggin@gmail.com>,
	David Howells <dhowells@redhat.com>,
	Jade Alglave <j.alglave@ucl.ac.uk>,
	Luc Maranget <luc.maranget@inria.fr>,
	"Paul E . McKenney" <paulmck@linux.vnet.ibm.com>,
	Akira Yokosawa <akiyks@gmail.com>,
	Daniel Lustig <dlustig@nvidia.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Randy Dunlap <rdunlap@infradead.org>
Subject: Re: [PATCH 2/3] locking: Clarify requirements for smp_mb__after_spinlock()
Date: Thu, 28 Jun 2018 09:49:58 -0400 (EDT)
Message-ID: <Pine.LNX.4.44L0.1806280947080.1812-100000@iolanthe.rowland.org>
In-Reply-To: <1530182480-13205-3-git-send-email-andrea.parri@amarulasolutions.com>

On Thu, 28 Jun 2018, Andrea Parri wrote:

> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -114,29 +114,8 @@ do {								\
>  #endif /*arch_spin_is_contended*/
>  
>  /*
> - * This barrier must provide two things:
> - *
> - *   - it must guarantee a STORE before the spin_lock() is ordered against a
> - *     LOAD after it, see the comments at its two usage sites.
> - *
> - *   - it must ensure the critical section is RCsc.
> - *
> - * The latter is important for cases where we observe values written by other
> - * CPUs in spin-loops, without barriers, while being subject to scheduling.
> - *
> - * CPU0			CPU1			CPU2
> - *
> - *			for (;;) {
> - *			  if (READ_ONCE(X))
> - *			    break;
> - *			}
> - * X=1
> - *			<sched-out>
> - *						<sched-in>
> - *						r = X;
> - *
> - * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
> - * we get migrated and CPU2 sees X==0.
> + * smp_mb__after_spinlock() provides a full memory barrier between po-earlier
> + * lock acquisitions and po-later memory accesses.

How about saying "provides the equivalent of a full memory barrier"?

The point being that smp_mb__after_spinlock() doesn't have to provide
an actual barrier instruction; it just has to ensure the behavior is
the same as if a full barrier were present.
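
To make the new wording concrete, the behavior being guaranteed is
the classic store-buffering pattern (an illustrative sketch, not
taken from the patch; x, y, s, and t are made-up names):

	CPU 0				CPU 1

	WRITE_ONCE(x, 1);		WRITE_ONCE(y, 1);
	spin_lock(&s);			spin_lock(&t);
	smp_mb__after_spinlock();	smp_mb__after_spinlock();
	r0 = READ_ONCE(y);		r1 = READ_ONCE(x);

With the barriers in place, the outcome r0 == 0 && r1 == 0 is
forbidden.  Without them, spin_lock() is only an ACQUIRE, so nothing
prevents the po-earlier store from being reordered past the po-later
load on a weakly ordered architecture.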
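
And this is why "equivalent of" matters: an architecture whose lock
operation is already strong enough need not emit any extra
instruction.  Sketching from memory (the generic fallback in
include/linux/spinlock.h and the architecture overrides, as I recall
them -- please check the tree rather than relying on this):

	/*
	 * Generic fallback: a no-op suffices on TSO machines such as
	 * x86, where the atomic RMW in spin_lock() already acts as a
	 * full barrier.
	 */
	#ifndef smp_mb__after_spinlock
	#define smp_mb__after_spinlock()	do { } while (0)
	#endif

	/* Weakly ordered architectures (e.g. powerpc, arm64) override
	 * it with a real barrier: */
	#define smp_mb__after_spinlock()	smp_mb()

So the new comment describes what callers may assume, not how each
architecture has to get there.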

Alan


Thread overview:
2018-06-28 10:41 [PATCH 0/3] sched/locking/doc: Miscellaneous fixes Andrea Parri
2018-06-28 10:41 ` [PATCH 1/3] sched: Use smp_mb() in wake_woken_function() Andrea Parri
2018-06-28 10:41 ` [PATCH 2/3] locking: Clarify requirements for smp_mb__after_spinlock() Andrea Parri
2018-06-28 13:02   ` Matthew Wilcox
2018-06-28 13:10     ` Andrea Parri
2018-06-28 13:49   ` Alan Stern [this message]
2018-06-28 13:52     ` Andrea Parri
2018-06-28 15:05   ` Peter Zijlstra
2018-06-28 17:30     ` Andrea Parri
2018-07-02 12:50       ` Peter Zijlstra
2018-07-02 15:11   ` [PATCH v2 " Andrea Parri
2018-07-02 15:37     ` Peter Zijlstra
2018-07-03  8:49       ` Andrea Parri
2018-07-03 14:53     ` [PATCH v3 " Andrea Parri
2018-07-03 15:39       ` Paul E. McKenney
2018-07-03 17:07         ` Andrea Parri
2018-06-28 10:41 ` [PATCH 3/3] doc: Update wake_up() & co. memory-barrier guarantees Andrea Parri
2018-07-05 22:28 ` [PATCH 0/3] sched/locking/doc: Miscellaneous fixes Andrea Parri
2018-07-06 10:36   ` Peter Zijlstra
2018-07-06 14:43     ` Paul E. McKenney
