From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Will Deacon <will@kernel.org>,
	Sami Tolvanen <samitolvanen@google.com>,
	Nick Desaulniers <ndesaulniers@google.com>,
	Kees Cook <keescook@chromium.org>, Marco Elver <elver@google.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Josh Triplett <josh@joshtriplett.org>,
	Matt Turner <mattst88@gmail.com>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Richard Henderson <rth@twiddle.net>,
	Peter Zijlstra <peterz@infradead.org>,
	Alan Stern <stern@rowland.harvard.edu>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
	Boqun Feng <boqun.feng@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-alpha@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	kernel-team@android.com
Subject: [PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc
Date: Tue, 30 Jun 2020 18:37:27 +0100	[thread overview]
Message-ID: <20200630173734.14057-12-will@kernel.org> (raw)
In-Reply-To: <20200630173734.14057-1-will@kernel.org>

smp_read_barrier_depends() has gone the way of mmiowb() and so many
esoteric memory barriers before it. Drop the two mentions of this
deceased barrier from the LKMM informal explanation document.
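
For context (a hypothetical sketch, not code from this series), the
pattern the old barrier served looks like this; since v4.15,
READ_ONCE() implies the fence on Alpha (and the barrier was a no-op
everywhere else), so the explicit call in the reader is redundant:

	struct foo {
		int data;
	};

	static struct foo *gp;

	void writer(struct foo *p)
	{
		p->data = 42;
		smp_wmb();			/* order ->data before publication */
		WRITE_ONCE(gp, p);
	}

	int reader(void)
	{
		struct foo *p = READ_ONCE(gp);	/* implies the fence on Alpha */

		/* smp_read_barrier_depends();	-- no longer needed */
		return p ? p->data : -1;
	}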

Acked-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 .../Documentation/explanation.txt             | 26 +++++++++----------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt
index e91a2eb19592..01adf9e0ebac 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -1122,12 +1122,10 @@ maintain at least the appearance of FIFO order.
 In practice, this difficulty is solved by inserting a special fence
 between P1's two loads when the kernel is compiled for the Alpha
 architecture.  In fact, as of version 4.15, the kernel automatically
-adds this fence (called smp_read_barrier_depends() and defined as
-nothing at all on non-Alpha builds) after every READ_ONCE() and atomic
-load.  The effect of the fence is to cause the CPU not to execute any
-po-later instructions until after the local cache has finished
-processing all the stores it has already received.  Thus, if the code
-was changed to:
+adds this fence after every READ_ONCE() and atomic load on Alpha.  The
+effect of the fence is to cause the CPU not to execute any po-later
+instructions until after the local cache has finished processing all
+the stores it has already received.  Thus, if the code was changed to:
 
 	P1()
 	{
@@ -1146,14 +1144,14 @@ READ_ONCE() or another synchronization primitive rather than accessed
 directly.
 
 The LKMM requires that smp_rmb(), acquire fences, and strong fences
-share this property with smp_read_barrier_depends(): They do not allow
-the CPU to execute any po-later instructions (or po-later loads in the
-case of smp_rmb()) until all outstanding stores have been processed by
-the local cache.  In the case of a strong fence, the CPU first has to
-wait for all of its po-earlier stores to propagate to every other CPU
-in the system; then it has to wait for the local cache to process all
-the stores received as of that time -- not just the stores received
-when the strong fence began.
+share this property: They do not allow the CPU to execute any po-later
+instructions (or po-later loads in the case of smp_rmb()) until all
+outstanding stores have been processed by the local cache.  In the
+case of a strong fence, the CPU first has to wait for all of its
+po-earlier stores to propagate to every other CPU in the system; then
+it has to wait for the local cache to process all the stores received
+as of that time -- not just the stores received when the strong fence
+began.
 
 And of course, none of this matters for any architecture other than
 Alpha.
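
As an aside for reviewers, the guarantee described above can be
exercised with a message-passing litmus test in the style of
tools/memory-model/litmus-tests.  The sketch below is illustrative and
not one of the shipped tests: because READ_ONCE() now implies the
Alpha fence, the address dependency in P1 is honoured and the "exists"
clause is forbidden:

	C MP-pointer-publish-sketch

	{
		y=z;
		z=0;
	}

	P0(int *x, int **y)
	{
		WRITE_ONCE(*x, 1);
		smp_store_release(y, x);
	}

	P1(int *x, int **y)
	{
		int *r1;
		int r2;

		r1 = READ_ONCE(*y);	/* implied fence on Alpha */
		r2 = READ_ONCE(*r1);	/* ordered by the address dependency */
	}

	exists (1:r1=x /\ 1:r2=0)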
-- 
2.27.0.212.ge8ba1cc988-goog


Thread overview: 200+ messages

2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
2020-06-30 17:37 ` [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h Will Deacon
2020-07-01 16:38   ` Alexei Starovoitov
2020-06-30 17:37 ` [PATCH 02/18] compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h Will Deacon
2020-06-30 19:11   ` Arnd Bergmann
2020-07-01 10:16     ` Will Deacon
2020-07-01 11:33       ` Arnd Bergmann
2020-06-30 17:37 ` [PATCH 03/18] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture Will Deacon
2020-06-30 17:37 ` [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation Will Deacon
2020-07-02  9:32   ` Mark Rutland
2020-07-02  9:48     ` Will Deacon
2020-07-02 10:08       ` Arnd Bergmann
2020-07-02 11:18         ` Will Deacon
2020-07-02 11:39           ` Arnd Bergmann
2020-07-02 14:43   ` Joel Fernandes
2020-07-02 14:55     ` Will Deacon
2020-07-02 15:07       ` Joel Fernandes
2020-06-30 17:37 ` [PATCH 05/18] asm/rwonce: Remove smp_read_barrier_depends() invocation Will Deacon
2020-06-30 17:37 ` [PATCH 06/18] vhost: Remove redundant use of read_barrier_depends() barrier Will Deacon
2020-06-30 17:37 ` [PATCH 07/18] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb() Will Deacon
2020-06-30 17:37 ` [PATCH 08/18] locking/barriers: Remove definitions for [smp_]read_barrier_depends() Will Deacon
2020-06-30 17:37 ` [PATCH 09/18] Documentation/barriers: Remove references to [smp_]read_barrier_depends() Will Deacon
2020-06-30 17:37 ` [PATCH 10/18] Documentation/barriers/kokr: " Will Deacon
2020-06-30 17:37 ` [PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc Will Deacon [this message]
2020-06-30 17:37 ` [PATCH 12/18] include/linux: Remove smp_read_barrier_depends() from comments Will Deacon
2020-06-30 17:37 ` [PATCH 13/18] checkpatch: Remove checks relating to [smp_]read_barrier_depends() Will Deacon
2020-06-30 17:37 ` [PATCH 14/18] arm64: Reduce the number of header files pulled into vmlinux.lds.S Will Deacon
2020-06-30 17:37 ` [PATCH 15/18] arm64: alternatives: Split up alternative.h Will Deacon
2020-06-30 17:37 ` [PATCH 16/18] arm64: cpufeatures: Add capability for LDAPR instruction Will Deacon
2020-06-30 17:37 ` [PATCH 17/18] arm64: alternatives: Remove READ_ONCE() usage during patch operation Will Deacon
2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
2020-06-30 19:25   ` Arnd Bergmann
2020-07-01 10:19     ` Will Deacon
2020-07-01 10:59       ` Arnd Bergmann
2020-06-30 19:47   ` Marco Elver
2020-06-30 20:20     ` Peter Zijlstra
2020-06-30 22:57     ` Sami Tolvanen
2020-07-01 10:25       ` Will Deacon
2020-07-01 10:24     ` Will Deacon
2020-07-01 17:07   ` Dave P Martin
2020-07-02  7:23     ` Will Deacon
2020-07-06 16:00       ` Dave Martin
2020-07-06 16:34         ` Paul E. McKenney
2020-07-06 17:05           ` Dave Martin
2020-07-06 17:36             ` Paul E. McKenney
2020-07-07 10:29               ` Dave Martin
2020-07-07 22:51                 ` Paul E. McKenney
2020-07-07 23:01                   ` Nick Desaulniers
2020-07-08  7:15                     ` Marco Elver
2020-07-08  9:16                     ` Peter Zijlstra
2020-07-08 18:20                       ` Paul E. McKenney
2020-07-06 18:35         ` Will Deacon
2020-07-06 19:23           ` Marco Elver
2020-07-06 19:42             ` Paul E. McKenney
2020-07-06 16:08   ` Dave Martin
2020-07-06 18:35     ` Will Deacon
2020-07-07 10:10       ` Dave Martin
2020-07-01  7:38 ` [PATCH 00/18] Allow architectures to override __READ_ONCE() Josh Triplett
