linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 00/18] Allow architectures to override __READ_ONCE()
@ 2020-06-30 17:37 Will Deacon
  2020-06-30 17:37 ` [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h Will Deacon
                   ` (18 more replies)
  0 siblings, 19 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Hi everyone,

This is the long-awaited version two of the patches I previously
posted in November last year:

  https://lore.kernel.org/lkml/20191108170120.22331-1-will@kernel.org/

I ended up parking the series while the READ_ONCE() implementation was
being overhauled, but with that work merged during the recent merge
window and the LTO patches being posted again [1], it was time for a
refresh.

The patches allow architectures to provide their own implementation of
__READ_ONCE(). This serves two main purposes:

  1. It finally allows us to remove [smp_]read_barrier_depends() from the
     Linux memory model and make it an implementation detail of the Alpha
     back-end.

  2. It allows arm64 to upgrade __READ_ONCE() to have RCpc acquire
     semantics when compiling with LTO, since LTO may enable compiler
     optimisations that break dependency ordering, and we therefore
     require fencing to ensure ordering within the CPU.

Both of these are implemented by this series.
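
To illustrate the mechanism with a minimal sketch (the 'foo'
architecture is made up, and the smp_mb() is just a stand-in for
whatever fence a real architecture would actually need), an
architecture opts in by providing its own asm/rwonce.h that defines
__READ_ONCE() before pulling in the generic header:

  /* arch/foo/include/asm/rwonce.h -- hypothetical example */
  #ifndef __ASM_RWONCE_H
  #define __ASM_RWONCE_H

  #include <asm/barrier.h>

  /*
   * Perform the plain volatile load, then fence so that loads issued
   * after READ_ONCE() are ordered against it.
   */
  #define __READ_ONCE(x)						\
  ({									\
  	__unqual_scalar_typeof(x) ___x =				\
  		(*(const volatile __unqual_scalar_typeof(x) *)&(x));	\
  	smp_mb();	/* stand-in for an arch-specific fence */	\
  	(typeof(x))___x;						\
  })

  #include <asm-generic/rwonce.h>

  #endif /* __ASM_RWONCE_H */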

I've kept Paul's acks from v1 since, although the series has changed
somewhat, the patches with his Ack have not changed materially in my
opinion. I will drop them if anybody objects.

In terms of merging this, my preference would be a stable branch in the
arm64 tree, which others can pull in as they need it.

Cheers,

Will

[1] https://lore.kernel.org/r/20200624203200.78870-1-samitolvanen@google.com

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Marco Elver <elver@google.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-alpha@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kernel-team@android.com

--->8

SeongJae Park (1):
  Documentation/barriers/kokr: Remove references to
    [smp_]read_barrier_depends()

Will Deacon (17):
  tools: bpf: Use local copy of headers including uapi/linux/filter.h
  compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h
  asm/rwonce: Allow __READ_ONCE to be overridden by the architecture
  alpha: Override READ_ONCE() with barriered implementation
  asm/rwonce: Remove smp_read_barrier_depends() invocation
  vhost: Remove redundant use of read_barrier_depends() barrier
  alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
  locking/barriers: Remove definitions for [smp_]read_barrier_depends()
  Documentation/barriers: Remove references to
    [smp_]read_barrier_depends()
  tools/memory-model: Remove smp_read_barrier_depends() from informal
    doc
  include/linux: Remove smp_read_barrier_depends() from comments
  checkpatch: Remove checks relating to [smp_]read_barrier_depends()
  arm64: Reduce the number of header files pulled into vmlinux.lds.S
  arm64: alternatives: Split up alternative.h
  arm64: cpufeatures: Add capability for LDAPR instruction
  arm64: alternatives: Remove READ_ONCE() usage during patch operation
  arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y

 .../RCU/Design/Requirements/Requirements.rst  |   2 +-
 Documentation/memory-barriers.txt             | 156 +---------
 .../translations/ko_KR/memory-barriers.txt    | 146 +--------
 arch/alpha/include/asm/atomic.h               |  16 +-
 arch/alpha/include/asm/barrier.h              |  61 +---
 arch/alpha/include/asm/pgtable.h              |  10 +-
 arch/alpha/include/asm/rwonce.h               |  19 ++
 arch/arm64/Kconfig                            |   3 +
 arch/arm64/include/asm/alternative-macros.h   | 276 ++++++++++++++++++
 arch/arm64/include/asm/alternative.h          | 267 +----------------
 arch/arm64/include/asm/cpucaps.h              |   3 +-
 arch/arm64/include/asm/insn.h                 |   3 +-
 arch/arm64/include/asm/kernel-pgtable.h       |   2 +-
 arch/arm64/include/asm/memory.h               |  11 +-
 arch/arm64/include/asm/rwonce.h               |  63 ++++
 arch/arm64/include/asm/uaccess.h              |   1 +
 arch/arm64/kernel/alternative.c               |   7 +-
 arch/arm64/kernel/cpufeature.c                |  10 +
 arch/arm64/kernel/entry.S                     |   1 +
 arch/arm64/kernel/vdso/Makefile               |   2 +-
 arch/arm64/kernel/vdso32/Makefile             |   2 +-
 arch/arm64/kernel/vmlinux.lds.S               |   1 -
 arch/arm64/kvm/hyp-init.S                     |   1 +
 drivers/vhost/vhost.c                         |   5 -
 include/asm-generic/Kbuild                    |   1 +
 include/asm-generic/barrier.h                 |  17 --
 include/asm-generic/rwonce.h                  |  82 ++++++
 include/linux/compiler.h                      |  83 +-----
 include/linux/percpu-refcount.h               |   2 +-
 include/linux/ptr_ring.h                      |   2 +-
 mm/memory.c                                   |   2 +-
 scripts/checkpatch.pl                         |   9 +-
 tools/bpf/Makefile                            |   3 +-
 tools/include/uapi/linux/filter.h             |  90 ++++++
 .../Documentation/explanation.txt             |  26 +-
 35 files changed, 617 insertions(+), 768 deletions(-)
 create mode 100644 arch/alpha/include/asm/rwonce.h
 create mode 100644 arch/arm64/include/asm/alternative-macros.h
 create mode 100644 arch/arm64/include/asm/rwonce.h
 create mode 100644 include/asm-generic/rwonce.h
 create mode 100644 tools/include/uapi/linux/filter.h

-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-07-01 16:38   ` Alexei Starovoitov
  2020-06-30 17:37 ` [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h Will Deacon
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, Xiao Yang, Alexei Starovoitov,
	virtualization, Masahiro Yamada, Will Deacon, Arnd Bergmann,
	Daniel Borkmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Pulling header files directly out of the kernel sources for inclusion
in userspace programs is highly error-prone, not least because it
bypasses the kbuild infrastructure entirely and so may end up
referencing other header files that have not been generated.

Subsequent patches will cause compiler.h to pull in the ungenerated
asm/rwonce.h file via filter.h, breaking the build for tools/bpf:

  | $ make -C tools/bpf
  | make: Entering directory '/linux/tools/bpf'
  |   CC       bpf_jit_disasm.o
  |   LINK     bpf_jit_disasm
  |   CC       bpf_dbg.o
  | In file included from /linux/include/uapi/linux/filter.h:9,
  |                  from /linux/tools/bpf/bpf_dbg.c:41:
  | /linux/include/linux/compiler.h:247:10: fatal error: asm/rwonce.h: No such file or directory
  |  #include <asm/rwonce.h>
  |           ^~~~~~~~~~~~~~
  | compilation terminated.
  | make: *** [Makefile:61: bpf_dbg.o] Error 1
  | make: Leaving directory '/linux/tools/bpf'

Take a copy of the installed version of linux/filter.h (i.e. the one
created by the 'headers_install' target) into tools/include/uapi/linux/
and adjust the BPF tool Makefile to reference the local include
directories instead of those in the main source tree.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Reported-by: Xiao Yang <ice_yangxiao@163.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 tools/bpf/Makefile                |  3 +-
 tools/include/uapi/linux/filter.h | 90 +++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 tools/include/uapi/linux/filter.h

diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile
index 6df1850f8353..8a69258fd8aa 100644
--- a/tools/bpf/Makefile
+++ b/tools/bpf/Makefile
@@ -9,7 +9,8 @@ MAKE = make
 INSTALL ?= install
 
 CFLAGS += -Wall -O2
-CFLAGS += -D__EXPORTED_HEADERS__ -I$(srctree)/include/uapi -I$(srctree)/include
+CFLAGS += -D__EXPORTED_HEADERS__ -I$(srctree)/tools/include/uapi \
+	  -I$(srctree)/tools/include
 
 # This will work when bpf is built in tools env. where srctree
 # isn't set and when invoked from selftests build, where srctree
diff --git a/tools/include/uapi/linux/filter.h b/tools/include/uapi/linux/filter.h
new file mode 100644
index 000000000000..eaef459e7bd4
--- /dev/null
+++ b/tools/include/uapi/linux/filter.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Linux Socket Filter Data Structures
+ */
+
+#ifndef __LINUX_FILTER_H__
+#define __LINUX_FILTER_H__
+
+
+#include <linux/types.h>
+#include <linux/bpf_common.h>
+
+/*
+ * Current version of the filter code architecture.
+ */
+#define BPF_MAJOR_VERSION 1
+#define BPF_MINOR_VERSION 1
+
+/*
+ *	Try and keep these values and structures similar to BSD, especially
+ *	the BPF code definitions which need to match so you can share filters
+ */
+ 
+struct sock_filter {	/* Filter block */
+	__u16	code;   /* Actual filter code */
+	__u8	jt;	/* Jump true */
+	__u8	jf;	/* Jump false */
+	__u32	k;      /* Generic multiuse field */
+};
+
+struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
+	unsigned short		len;	/* Number of filter blocks */
+	struct sock_filter *filter;
+};
+
+/* ret - BPF_K and BPF_X also apply */
+#define BPF_RVAL(code)  ((code) & 0x18)
+#define         BPF_A           0x10
+
+/* misc */
+#define BPF_MISCOP(code) ((code) & 0xf8)
+#define         BPF_TAX         0x00
+#define         BPF_TXA         0x80
+
+/*
+ * Macros for filter block array initializers.
+ */
+#ifndef BPF_STMT
+#define BPF_STMT(code, k) { (unsigned short)(code), 0, 0, k }
+#endif
+#ifndef BPF_JUMP
+#define BPF_JUMP(code, k, jt, jf) { (unsigned short)(code), jt, jf, k }
+#endif
+
+/*
+ * Number of scratch memory words for: BPF_ST and BPF_STX
+ */
+#define BPF_MEMWORDS 16
+
+/* RATIONALE. Negative offsets are invalid in BPF.
+   We use them to reference ancillary data.
+   Unlike introduction new instructions, it does not break
+   existing compilers/optimizers.
+ */
+#define SKF_AD_OFF    (-0x1000)
+#define SKF_AD_PROTOCOL 0
+#define SKF_AD_PKTTYPE 	4
+#define SKF_AD_IFINDEX 	8
+#define SKF_AD_NLATTR	12
+#define SKF_AD_NLATTR_NEST	16
+#define SKF_AD_MARK 	20
+#define SKF_AD_QUEUE	24
+#define SKF_AD_HATYPE	28
+#define SKF_AD_RXHASH	32
+#define SKF_AD_CPU	36
+#define SKF_AD_ALU_XOR_X	40
+#define SKF_AD_VLAN_TAG	44
+#define SKF_AD_VLAN_TAG_PRESENT 48
+#define SKF_AD_PAY_OFFSET	52
+#define SKF_AD_RANDOM	56
+#define SKF_AD_VLAN_TPID	60
+#define SKF_AD_MAX	64
+
+#define SKF_NET_OFF	(-0x100000)
+#define SKF_LL_OFF	(-0x200000)
+
+#define BPF_NET_OFF	SKF_NET_OFF
+#define BPF_LL_OFF	SKF_LL_OFF
+
+#endif /* __LINUX_FILTER_H__ */
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
  2020-06-30 17:37 ` [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 19:11   ` Arnd Bergmann
  2020-06-30 17:37 ` [PATCH 03/18] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture Will Deacon
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

In preparation for allowing architectures to define their own
implementation of the READ_ONCE() macro, move the generic
{READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
file and into a new 'rwonce.h' header under 'asm-generic'.
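
As a reminder of what these macros buy us, here is a minimal sketch of
use case (1) from the comment block being moved: process-level code
communicating with an interrupt handler on the same CPU. The foo_*
names and the 'done' flag are invented for illustration, and cpu_relax()
comes from asm/processor.h:

  #include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
  #include <linux/interrupt.h>	/* irqreturn_t, IRQ_HANDLED */

  static int done;

  static irqreturn_t foo_irq_handler(int irq, void *dev)
  {
  	/* WRITE_ONCE() stops the compiler tearing or folding the store. */
  	WRITE_ONCE(done, 1);
  	return IRQ_HANDLED;
  }

  static void foo_wait(void)
  {
  	/*
  	 * READ_ONCE() forces a fresh load on every iteration, so the
  	 * loop cannot be collapsed into a test of a cached register.
  	 */
  	while (!READ_ONCE(done))
  		cpu_relax();
  }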

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/asm-generic/Kbuild   |  1 +
 include/asm-generic/rwonce.h | 91 ++++++++++++++++++++++++++++++++++++
 include/linux/compiler.h     | 83 +-------------------------------
 3 files changed, 94 insertions(+), 81 deletions(-)
 create mode 100644 include/asm-generic/rwonce.h

diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild
index 44ec80e70518..74b0612601dd 100644
--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -45,6 +45,7 @@ mandatory-y += pci.h
 mandatory-y += percpu.h
 mandatory-y += pgalloc.h
 mandatory-y += preempt.h
+mandatory-y += rwonce.h
 mandatory-y += sections.h
 mandatory-y += serial.h
 mandatory-y += shmparam.h
diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
new file mode 100644
index 000000000000..92cc2f223cb3
--- /dev/null
+++ b/include/asm-generic/rwonce.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Prevent the compiler from merging or refetching reads or writes. The
+ * compiler is also forbidden from reordering successive instances of
+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
+ * particular ordering. One way to make the compiler aware of ordering is to
+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C
+ * statements.
+ *
+ * These two macros will also work on aggregate data types like structs or
+ * unions.
+ *
+ * Their two major use cases are: (1) Mediating communication between
+ * process-level code and irq/NMI handlers, all running on the same CPU,
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * mutilate accesses that either do not require ordering or that interact
+ * with an explicit memory barrier or atomic instruction that provides the
+ * required ordering.
+ */
+#ifndef __ASM_GENERIC_RWONCE_H
+#define __ASM_GENERIC_RWONCE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/compiler_types.h>
+#include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+
+#include <asm/barrier.h>
+
+/*
+ * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
+ * atomicity or dependency ordering guarantees. Note that this may result
+ * in tears!
+ */
+#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
+
+#define __READ_ONCE_SCALAR(x)						\
+({									\
+	__unqual_scalar_typeof(x) __x = __READ_ONCE(x);			\
+	smp_read_barrier_depends();					\
+	(typeof(x))__x;							\
+})
+
+#define READ_ONCE(x)							\
+({									\
+	compiletime_assert_rwonce_type(x);				\
+	__READ_ONCE_SCALAR(x);						\
+})
+
+#define __WRITE_ONCE(x, val)						\
+do {									\
+	*(volatile typeof(x) *)&(x) = (val);				\
+} while (0)
+
+#define WRITE_ONCE(x, val)						\
+do {									\
+	compiletime_assert_rwonce_type(x);				\
+	__WRITE_ONCE(x, val);						\
+} while (0)
+
+static __no_sanitize_or_inline
+unsigned long __read_once_word_nocheck(const void *addr)
+{
+	return __READ_ONCE(*(unsigned long *)addr);
+}
+
+/*
+ * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
+ * word from memory atomically but without telling KASAN/KCSAN. This is
+ * usually used by unwinding code when walking the stack of a running process.
+ */
+#define READ_ONCE_NOCHECK(x)						\
+({									\
+	unsigned long __x;						\
+	compiletime_assert(sizeof(x) == sizeof(__x),			\
+		"Unsupported access size for READ_ONCE_NOCHECK().");	\
+	__x = __read_once_word_nocheck(&(x));				\
+	smp_read_barrier_depends();					\
+	(typeof(x))__x;							\
+})
+
+static __no_kasan_or_inline
+unsigned long read_word_at_a_time(const void *addr)
+{
+	kasan_check_read(addr, 1);
+	return *(unsigned long *)addr;
+}
+
+#endif /* __ASSEMBLY__ */
+#endif	/* __ASM_GENERIC_RWONCE_H */
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 204e76856435..718b4357af32 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -230,28 +230,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 # define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __LINE__)
 #endif
 
-/*
- * Prevent the compiler from merging or refetching reads or writes. The
- * compiler is also forbidden from reordering successive instances of
- * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
- * particular ordering. One way to make the compiler aware of ordering is to
- * put the two invocations of READ_ONCE or WRITE_ONCE in different C
- * statements.
- *
- * These two macros will also work on aggregate data types like structs or
- * unions.
- *
- * Their two major use cases are: (1) Mediating communication between
- * process-level code and irq/NMI handlers, all running on the same CPU,
- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
- * mutilate accesses that either do not require ordering or that interact
- * with an explicit memory barrier or atomic instruction that provides the
- * required ordering.
- */
-#include <asm/barrier.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
 /**
  * data_race - mark an expression as containing intentional data races
  *
@@ -272,65 +250,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 	__v;								\
 })
 
-/*
- * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
- * atomicity or dependency ordering guarantees. Note that this may result
- * in tears!
- */
-#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
-
-#define __READ_ONCE_SCALAR(x)						\
-({									\
-	__unqual_scalar_typeof(x) __x = __READ_ONCE(x);			\
-	smp_read_barrier_depends();					\
-	(typeof(x))__x;							\
-})
-
-#define READ_ONCE(x)							\
-({									\
-	compiletime_assert_rwonce_type(x);				\
-	__READ_ONCE_SCALAR(x);						\
-})
-
-#define __WRITE_ONCE(x, val)						\
-do {									\
-	*(volatile typeof(x) *)&(x) = (val);				\
-} while (0)
-
-#define WRITE_ONCE(x, val)						\
-do {									\
-	compiletime_assert_rwonce_type(x);				\
-	__WRITE_ONCE(x, val);						\
-} while (0)
-
-static __no_sanitize_or_inline
-unsigned long __read_once_word_nocheck(const void *addr)
-{
-	return __READ_ONCE(*(unsigned long *)addr);
-}
-
-/*
- * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN/KCSAN. This is
- * usually used by unwinding code when walking the stack of a running process.
- */
-#define READ_ONCE_NOCHECK(x)						\
-({									\
-	unsigned long __x;						\
-	compiletime_assert(sizeof(x) == sizeof(__x),			\
-		"Unsupported access size for READ_ONCE_NOCHECK().");	\
-	__x = __read_once_word_nocheck(&(x));				\
-	smp_read_barrier_depends();					\
-	(typeof(x))__x;							\
-})
-
-static __no_kasan_or_inline
-unsigned long read_word_at_a_time(const void *addr)
-{
-	kasan_check_read(addr, 1);
-	return *(unsigned long *)addr;
-}
-
 #endif /* __KERNEL__ */
 
 /*
@@ -414,4 +333,6 @@ static inline void *offset_to_ptr(const int *off)
  */
 #define prevent_tail_call_optimization()	mb()
 
+#include <asm/rwonce.h>
+
 #endif /* __LINUX_COMPILER_H */
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 03/18] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
  2020-06-30 17:37 ` [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h Will Deacon
  2020-06-30 17:37 ` [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation Will Deacon
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

The meat and potatoes of READ_ONCE() is defined by the __READ_ONCE()
macro, which uses a volatile cast in an attempt to avoid tearing of
byte, halfword, word and double-word accesses. Allow this to be
overridden by the architecture code for cases where things like memory
barriers are also required.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/asm-generic/rwonce.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
index 92cc2f223cb3..f9dfa88fc04d 100644
--- a/include/asm-generic/rwonce.h
+++ b/include/asm-generic/rwonce.h
@@ -33,7 +33,9 @@
  * atomicity or dependency ordering guarantees. Note that this may result
  * in tears!
  */
+#ifndef __READ_ONCE
 #define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#endif
 
 #define __READ_ONCE_SCALAR(x)						\
 ({									\
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (2 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 03/18] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-07-02  9:32   ` Mark Rutland
  2020-07-02 14:43   ` Joel Fernandes
  2020-06-30 17:37 ` [PATCH 05/18] asm/rwonce: Remove smp_read_barrier_depends() invocation Will Deacon
                   ` (14 subsequent siblings)
  18 siblings, 2 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Rather than relying on the core code to use smp_read_barrier_depends()
as part of the READ_ONCE() definition, override __READ_ONCE() in the
Alpha code so that it is treated the same way as smp_load_acquire().
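
For background, the hazard being addressed is the classic
address-dependency case, sketched below ('struct foo' and 'gp' are
invented for illustration):

  #include <linux/compiler.h>	/* READ_ONCE() */
  #include <asm/barrier.h>	/* smp_store_release() */

  struct foo {
  	int data;
  };

  static struct foo *gp;

  /* Publisher: initialise the object before making it visible. */
  void publish(struct foo *p)
  {
  	p->data = 42;
  	smp_store_release(&gp, p);
  }

  /*
   * Consumer: most CPUs order the load of q->data after the load of
   * 'gp' by virtue of the address dependency alone. Alpha's split
   * cache does not guarantee this, which is why its __READ_ONCE()
   * now implies a barrier.
   */
  int consume(void)
  {
  	struct foo *q = READ_ONCE(gp);

  	return q ? q->data : -1;
  }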

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/alpha/include/asm/barrier.h | 61 ++++----------------------------
 arch/alpha/include/asm/rwonce.h  | 19 ++++++++++
 2 files changed, 26 insertions(+), 54 deletions(-)
 create mode 100644 arch/alpha/include/asm/rwonce.h

diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
index 92ec486a4f9e..2ecd068d91d1 100644
--- a/arch/alpha/include/asm/barrier.h
+++ b/arch/alpha/include/asm/barrier.h
@@ -2,64 +2,17 @@
 #ifndef __BARRIER_H
 #define __BARRIER_H
 
-#include <asm/compiler.h>
-
 #define mb()	__asm__ __volatile__("mb": : :"memory")
 #define rmb()	__asm__ __volatile__("mb": : :"memory")
 #define wmb()	__asm__ __volatile__("wmb": : :"memory")
 
-/**
- * read_barrier_depends - Flush all pending reads that subsequents reads
- * depend on.
- *
- * No data-dependent reads from memory-like regions are ever reordered
- * over this barrier.  All reads preceding this primitive are guaranteed
- * to access memory (but not necessarily other CPUs' caches) before any
- * reads following this primitive that depend on the data return by
- * any of the preceding reads.  This primitive is much lighter weight than
- * rmb() on most CPUs, and is never heavier weight than is
- * rmb().
- *
- * These ordering constraints are respected by both the local CPU
- * and the compiler.
- *
- * Ordering is not guaranteed by anything other than these primitives,
- * not even by data dependencies.  See the documentation for
- * memory_barrier() for examples and URLs to more information.
- *
- * For example, the following code would force ordering (the initial
- * value of "a" is zero, "b" is one, and "p" is "&a"):
- *
- * <programlisting>
- *	CPU 0				CPU 1
- *
- *	b = 2;
- *	memory_barrier();
- *	p = &b;				q = p;
- *					read_barrier_depends();
- *					d = *q;
- * </programlisting>
- *
- * because the read of "*q" depends on the read of "p" and these
- * two reads are separated by a read_barrier_depends().  However,
- * the following code, with the same initial values for "a" and "b":
- *
- * <programlisting>
- *	CPU 0				CPU 1
- *
- *	a = 2;
- *	memory_barrier();
- *	b = 3;				y = b;
- *					read_barrier_depends();
- *					x = a;
- * </programlisting>
- *
- * does not enforce ordering, since there is no data dependency between
- * the read of "a" and the read of "b".  Therefore, on some CPUs, such
- * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
- * in cases like this where there are no data dependencies.
- */
-#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
+#define __smp_load_acquire(p)						\
+({									\
+	__unqual_scalar_typeof(*p) ___p1 =				\
+		(*(volatile typeof(___p1) *)(p));			\
+	compiletime_assert_atomic_type(*p);				\
+	___p1;								\
+})
 
 #ifdef CONFIG_SMP
 #define __ASM_SMP_MB	"\tmb\n"
diff --git a/arch/alpha/include/asm/rwonce.h b/arch/alpha/include/asm/rwonce.h
new file mode 100644
index 000000000000..83a92e49a615
--- /dev/null
+++ b/arch/alpha/include/asm/rwonce.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Google LLC.
+ */
+#ifndef __ASM_RWONCE_H
+#define __ASM_RWONCE_H
+
+#include <asm/barrier.h>
+
+/*
+ * Alpha is apparently daft enough to reorder address-dependent loads
+ * on some CPU implementations. Knock some common sense into it with
+ * a memory barrier in READ_ONCE().
+ */
+#define __READ_ONCE(x)	__smp_load_acquire(&(x))
+
+#include <asm-generic/rwonce.h>
+
+#endif /* __ASM_RWONCE_H */
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 05/18] asm/rwonce: Remove smp_read_barrier_depends() invocation
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (3 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 06/18] vhost: Remove redundant use of read_barrier_depends() barrier Will Deacon
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Alpha overrides __READ_ONCE() directly, so there's no need to use
smp_read_barrier_depends() in the core code. This also means that
__READ_ONCE() can be relied upon to provide dependency ordering.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/asm-generic/rwonce.h | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
index f9dfa88fc04d..cc810f1f18ca 100644
--- a/include/asm-generic/rwonce.h
+++ b/include/asm-generic/rwonce.h
@@ -30,24 +30,16 @@
 
 /*
  * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
- * atomicity or dependency ordering guarantees. Note that this may result
- * in tears!
+ * atomicity. Note that this may result in tears!
  */
 #ifndef __READ_ONCE
 #define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
 #endif
 
-#define __READ_ONCE_SCALAR(x)						\
-({									\
-	__unqual_scalar_typeof(x) __x = __READ_ONCE(x);			\
-	smp_read_barrier_depends();					\
-	(typeof(x))__x;							\
-})
-
 #define READ_ONCE(x)							\
 ({									\
 	compiletime_assert_rwonce_type(x);				\
-	__READ_ONCE_SCALAR(x);						\
+	__READ_ONCE(x);							\
 })
 
 #define __WRITE_ONCE(x, val)						\
@@ -74,12 +66,9 @@ unsigned long __read_once_word_nocheck(const void *addr)
  */
 #define READ_ONCE_NOCHECK(x)						\
 ({									\
-	unsigned long __x;						\
-	compiletime_assert(sizeof(x) == sizeof(__x),			\
+	compiletime_assert(sizeof(x) == sizeof(unsigned long),		\
 		"Unsupported access size for READ_ONCE_NOCHECK().");	\
-	__x = __read_once_word_nocheck(&(x));				\
-	smp_read_barrier_depends();					\
-	(typeof(x))__x;							\
+	(typeof(x))__read_once_word_nocheck(&(x));			\
 })
 
 static __no_kasan_or_inline
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 06/18] vhost: Remove redundant use of read_barrier_depends() barrier
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (4 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 05/18] asm/rwonce: Remove smp_read_barrier_depends() invocation Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 07/18] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb() Will Deacon
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Since commit 76ebbe78f739 ("locking/barriers: Add implicit
smp_read_barrier_depends() to READ_ONCE()"), there is no need to use
smp_read_barrier_depends() outside of the Alpha architecture code.

Unfortunately, there is precisely _one_ user in the vhost code, and
there isn't an obvious READ_ONCE() access making the barrier
redundant. However, on closer inspection (thanks, Jason), it appears
that vring synchronisation between the producer and consumer occurs via
the 'avail_idx' field, the read of which is followed by an rmb() in
vhost_get_vq_desc(), making the read_barrier_depends() redundant on
Alpha.

Jason says:

  | I'm also confused about the barrier here, basically in driver side
  | we did:
  |
  | 1) allocate pages
  | 2) store pages in indirect->addr
  | 3) smp_wmb()
  | 4) increase the avail idx (somehow a tail pointer of vring)
  |
  | in vhost we did:
  |
  | 1) read avail idx
  | 2) smp_rmb()
  | 3) read indirect->addr
  | 4) read from indirect->addr
  |
  | It looks to me even the data dependency barrier is not necessary
  | since we have rmb() which is sufficient for us to the correct
  | indirect->addr and driver are not expected to do any writing to
  | indirect->addr after avail idx is increased

Remove the redundant barrier invocation.
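
Condensing the two lists above into a sketch (struct demo_ring and the
demo_* functions are simplified, invented stand-ins for the real vring
types and vhost code):

  #include <linux/types.h>	/* u16, u64 */
  #include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
  #include <asm/barrier.h>	/* smp_wmb(), smp_rmb() */

  struct demo_ring {
  	u16 avail_idx;	/* tail index published by the driver */
  	u64 addr;	/* payload address written by the driver */
  };

  /* Driver (producer) side: */
  static void demo_produce(struct demo_ring *r, u64 buf)
  {
  	r->addr = buf;				/* steps 1)-2) */
  	smp_wmb();				/* step 3) */
  	WRITE_ONCE(r->avail_idx, r->avail_idx + 1);	/* step 4) */
  }

  /* vhost (consumer) side: */
  static u64 demo_consume(struct demo_ring *r, u16 last_seen)
  {
  	if (READ_ONCE(r->avail_idx) == last_seen)	/* step 1) */
  		return 0;			/* nothing new */
  	/*
  	 * Step 2): pairs with the smp_wmb() above and already orders
  	 * the payload reads in steps 3)-4), so the removed
  	 * read_barrier_depends() provided nothing extra.
  	 */
  	smp_rmb();
  	return r->addr;				/* steps 3)-4) */
  }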

Suggested-by: Jason Wang <jasowang@redhat.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 drivers/vhost/vhost.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d7b8df3edffc..74d135ee7e26 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2092,11 +2092,6 @@ static int get_indirect(struct vhost_virtqueue *vq,
 		return ret;
 	}
 	iov_iter_init(&from, READ, vq->indirect, ret, len);
-
-	/* We will use the result as an address to read from, so most
-	 * architectures only need a compiler barrier here. */
-	read_barrier_depends();
-
 	count = len / sizeof desc;
 	/* Buffers are chained via a 16 bit next field, so
 	 * we can have at most 2^16 of these. */
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 07/18] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (5 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 06/18] vhost: Remove redundant use of read_barrier_depends() barrier Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 08/18] locking/barriers: Remove definitions for [smp_]read_barrier_depends() Will Deacon
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

In preparation for removing smp_read_barrier_depends() altogether,
move the Alpha code over to using smp_rmb() and smp_mb() directly.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/alpha/include/asm/atomic.h  | 16 ++++++++--------
 arch/alpha/include/asm/pgtable.h | 10 +++++-----
 mm/memory.c                      |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 2144530d1428..2f8f7e54792f 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -16,10 +16,10 @@
 
 /*
  * To ensure dependency ordering is preserved for the _relaxed and
- * _release atomics, an smp_read_barrier_depends() is unconditionally
- * inserted into the _relaxed variants, which are used to build the
- * barriered versions. Avoid redundant back-to-back fences in the
- * _acquire and _fence versions.
+ * _release atomics, an smp_mb() is unconditionally inserted into the
+ * _relaxed variants, which are used to build the barriered versions.
+ * Avoid redundant back-to-back fences in the _acquire and _fence
+ * versions.
  */
 #define __atomic_acquire_fence()
 #define __atomic_post_full_fence()
@@ -70,7 +70,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
-	smp_read_barrier_depends();					\
+	smp_mb();							\
 	return result;							\
 }
 
@@ -88,7 +88,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
-	smp_read_barrier_depends();					\
+	smp_mb();							\
 	return result;							\
 }
 
@@ -123,7 +123,7 @@ static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
-	smp_read_barrier_depends();					\
+	smp_mb();							\
 	return result;							\
 }
 
@@ -141,7 +141,7 @@ static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
-	smp_read_barrier_depends();					\
+	smp_mb();							\
 	return result;							\
 }
 
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 162c17b2631f..660b14ce1317 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -277,9 +277,9 @@ extern inline pte_t pte_mkdirty(pte_t pte)	{ pte_val(pte) |= __DIRTY_BITS; retur
 extern inline pte_t pte_mkyoung(pte_t pte)	{ pte_val(pte) |= __ACCESS_BITS; return pte; }
 
 /*
- * The smp_read_barrier_depends() in the following functions are required to
- * order the load of *dir (the pointer in the top level page table) with any
- * subsequent load of the returned pmd_t *ret (ret is data dependent on *dir).
+ * The smp_rmb() in the following functions are required to order the load of
+ * *dir (the pointer in the top level page table) with any subsequent load of
+ * the returned pmd_t *ret (ret is data dependent on *dir).
  *
  * If this ordering is not enforced, the CPU might load an older value of
  * *ret, which may be uninitialized data. See mm/memory.c:__pte_alloc for
@@ -293,7 +293,7 @@ extern inline pte_t pte_mkyoung(pte_t pte)	{ pte_val(pte) |= __ACCESS_BITS; retu
 extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address)
 {
 	pmd_t *ret = (pmd_t *) pud_page_vaddr(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PAGE - 1));
-	smp_read_barrier_depends(); /* see above */
+	smp_rmb(); /* see above */
 	return ret;
 }
 #define pmd_offset pmd_offset
@@ -303,7 +303,7 @@ extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)
 {
 	pte_t *ret = (pte_t *) pmd_page_vaddr(*dir)
 		+ ((address >> PAGE_SHIFT) & (PTRS_PER_PAGE - 1));
-	smp_read_barrier_depends(); /* see above */
+	smp_rmb(); /* see above */
 	return ret;
 }
 #define pte_offset_kernel pte_offset_kernel
diff --git a/mm/memory.c b/mm/memory.c
index 87ec87cdc1ff..e1f2c730d8bb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -437,7 +437,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 	 * of a chain of data-dependent loads, meaning most CPUs (alpha
 	 * being the notable exception) will already guarantee loads are
 	 * seen in-order. See the alpha page table accessors for the
-	 * smp_read_barrier_depends() barriers in page table walking code.
+	 * smp_rmb() barriers in page table walking code.
 	 */
 	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
 
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 08/18] locking/barriers: Remove definitions for [smp_]read_barrier_depends()
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (6 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 07/18] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb() Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 09/18] Documentation/barriers: Remove references to [smp_]read_barrier_depends() Will Deacon
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

There are no remaining users of [smp_]read_barrier_depends(), so
remove it from the generic implementation of 'barrier.h'.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/asm-generic/barrier.h | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 2eacaf7d62f6..24f3f63f23e7 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -46,10 +46,6 @@
 #define dma_wmb()	wmb()
 #endif
 
-#ifndef read_barrier_depends
-#define read_barrier_depends()		do { } while (0)
-#endif
-
 #ifndef __smp_mb
 #define __smp_mb()	mb()
 #endif
@@ -62,10 +58,6 @@
 #define __smp_wmb()	wmb()
 #endif
 
-#ifndef __smp_read_barrier_depends
-#define __smp_read_barrier_depends()	read_barrier_depends()
-#endif
-
 #ifdef CONFIG_SMP
 
 #ifndef smp_mb
@@ -80,10 +72,6 @@
 #define smp_wmb()	__smp_wmb()
 #endif
 
-#ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends()	__smp_read_barrier_depends()
-#endif
-
 #else	/* !CONFIG_SMP */
 
 #ifndef smp_mb
@@ -98,10 +86,6 @@
 #define smp_wmb()	barrier()
 #endif
 
-#ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends()	do { } while (0)
-#endif
-
 #endif	/* CONFIG_SMP */
 
 #ifndef __smp_store_mb
@@ -196,7 +180,6 @@ do {									\
 #define virt_mb() __smp_mb()
 #define virt_rmb() __smp_rmb()
 #define virt_wmb() __smp_wmb()
-#define virt_read_barrier_depends() __smp_read_barrier_depends()
 #define virt_store_mb(var, value) __smp_store_mb(var, value)
 #define virt_mb__before_atomic() __smp_mb__before_atomic()
 #define virt_mb__after_atomic()	__smp_mb__after_atomic()
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 09/18] Documentation/barriers: Remove references to [smp_]read_barrier_depends()
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (7 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 08/18] locking/barriers: Remove definitions for [smp_]read_barrier_depends() Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 10/18] Documentation/barriers/kokr: " Will Deacon
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

The [smp_]read_barrier_depends() barrier macros no longer exist as
part of the Linux memory model, so remove all references to them from
the Documentation/ directory.

Although this is fairly mechanical on the whole, we drop the "CACHE
COHERENCY" section entirely from 'memory-barriers.txt' as it doesn't
make any sense now that the dependency barriers have been removed.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 .../RCU/Design/Requirements/Requirements.rst  |   2 +-
 Documentation/memory-barriers.txt             | 156 +-----------------
 2 files changed, 9 insertions(+), 149 deletions(-)

diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst
index 75b8ca007a11..50d5c43c48b0 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.rst
+++ b/Documentation/RCU/Design/Requirements/Requirements.rst
@@ -463,7 +463,7 @@ again without disrupting RCU readers.
 This guarantee was only partially premeditated. DYNIX/ptx used an
 explicit memory barrier for publication, but had nothing resembling
 ``rcu_dereference()`` for subscription, nor did it have anything
-resembling the ``smp_read_barrier_depends()`` that was later subsumed
+resembling the dependency-ordering barrier that was later subsumed
 into ``rcu_dereference()`` and later still into ``READ_ONCE()``. The
 need for these operations made itself known quite suddenly at a
 late-1990s meeting with the DEC Alpha architects, back in the days when
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index eaabc3134294..4e55aba3eb4a 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -553,12 +553,12 @@ There are certain things that the Linux kernel memory barriers do not guarantee:
 DATA DEPENDENCY BARRIERS (HISTORICAL)
 -------------------------------------
 
-As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
-added to READ_ONCE(), which means that about the only people who
-need to pay attention to this section are those working on DEC Alpha
-architecture-specific code and those working on READ_ONCE() itself.
-For those who need it, and for those who are interested in the history,
-here is the story of data-dependency barriers.
+As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
+DEC Alpha, which means that about the only people who need to pay attention
+to this section are those working on DEC Alpha architecture-specific code
+and those working on READ_ONCE() itself.  For those who need it, and for
+those who are interested in the history, here is the story of
+data-dependency barriers.
 
 The usage requirements of data dependency barriers are a little subtle, and
 it's not always obvious that they're needed.  To illustrate, consider the
@@ -2708,144 +2708,6 @@ the properties of the memory window through which devices are accessed and/or
 the use of any special device communication instructions the CPU may have.
 
 
-CACHE COHERENCY
----------------
-
-Life isn't quite as simple as it may appear above, however: for while the
-caches are expected to be coherent, there's no guarantee that that coherency
-will be ordered.  This means that while changes made on one CPU will
-eventually become visible on all CPUs, there's no guarantee that they will
-become apparent in the same order on those other CPUs.
-
-
-Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
-has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
-
-	            :
-	            :                          +--------+
-	            :      +---------+         |        |
-	+--------+  : +--->| Cache A |<------->|        |
-	|        |  : |    +---------+         |        |
-	|  CPU 1 |<---+                        |        |
-	|        |  : |    +---------+         |        |
-	+--------+  : +--->| Cache B |<------->|        |
-	            :      +---------+         |        |
-	            :                          | Memory |
-	            :      +---------+         | System |
-	+--------+  : +--->| Cache C |<------->|        |
-	|        |  : |    +---------+         |        |
-	|  CPU 2 |<---+                        |        |
-	|        |  : |    +---------+         |        |
-	+--------+  : +--->| Cache D |<------->|        |
-	            :      +---------+         |        |
-	            :                          +--------+
-	            :
-
-Imagine the system has the following properties:
-
- (*) an odd-numbered cache line may be in cache A, cache C or it may still be
-     resident in memory;
-
- (*) an even-numbered cache line may be in cache B, cache D or it may still be
-     resident in memory;
-
- (*) while the CPU core is interrogating one cache, the other cache may be
-     making use of the bus to access the rest of the system - perhaps to
-     displace a dirty cacheline or to do a speculative load;
-
- (*) each cache has a queue of operations that need to be applied to that cache
-     to maintain coherency with the rest of the system;
-
- (*) the coherency queue is not flushed by normal loads to lines already
-     present in the cache, even though the contents of the queue may
-     potentially affect those loads.
-
-Imagine, then, that two writes are made on the first CPU, with a write barrier
-between them to guarantee that they will appear to reach that CPU's caches in
-the requisite order:
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-					u == 0, v == 1 and p == &u, q == &u
-	v = 2;
-	smp_wmb();			Make sure change to v is visible before
-					 change to p
-	<A:modify v=2>			v is now in cache A exclusively
-	p = &v;
-	<B:modify p=&v>			p is now in cache B exclusively
-
-The write memory barrier forces the other CPUs in the system to perceive that
-the local CPU's caches have apparently been updated in the correct order.  But
-now imagine that the second CPU wants to read those values:
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-	...
-			q = p;
-			x = *q;
-
-The above pair of reads may then fail to happen in the expected order, as the
-cacheline holding p may get updated in one of the second CPU's caches while
-the update to the cacheline holding v is delayed in the other of the second
-CPU's caches by some other cache event:
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-					u == 0, v == 1 and p == &u, q == &u
-	v = 2;
-	smp_wmb();
-	<A:modify v=2>	<C:busy>
-			<C:queue v=2>
-	p = &v;		q = p;
-			<D:request p>
-	<B:modify p=&v>	<D:commit p=&v>
-			<D:read p>
-			x = *q;
-			<C:read *q>	Reads from v before v updated in cache
-			<C:unbusy>
-			<C:commit v=2>
-
-Basically, while both cachelines will be updated on CPU 2 eventually, there's
-no guarantee that, without intervention, the order of update will be the same
-as that committed on CPU 1.
-
-
-To intervene, we need to interpolate a data dependency barrier or a read
-barrier between the loads (which as of v4.15 is supplied unconditionally
-by the READ_ONCE() macro).  This will force the cache to commit its
-coherency queue before processing any further requests:
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-					u == 0, v == 1 and p == &u, q == &u
-	v = 2;
-	smp_wmb();
-	<A:modify v=2>	<C:busy>
-			<C:queue v=2>
-	p = &v;		q = p;
-			<D:request p>
-	<B:modify p=&v>	<D:commit p=&v>
-			<D:read p>
-			smp_read_barrier_depends()
-			<C:unbusy>
-			<C:commit v=2>
-			x = *q;
-			<C:read *q>	Reads from v after v updated in cache
-
-
-This sort of problem can be encountered on DEC Alpha processors as they have a
-split cache that improves performance by making better use of the data bus.
-While most CPUs do imply a data dependency barrier on the read when a memory
-access depends on a read, not all do, so it may not be relied on.
-
-Other CPUs may also have split caches, but must coordinate between the various
-cachelets for normal memory accesses.  The semantics of the Alpha removes the
-need for hardware coordination in the absence of memory barriers, which
-permitted Alpha to sport higher CPU clock rates back in the day.  However,
-please note that (again, as of v4.15) smp_read_barrier_depends() should not
-be used except in Alpha arch-specific code and within the READ_ONCE() macro.
-
-
 CACHE COHERENCY VS DMA
 ----------------------
 
@@ -3009,10 +2871,8 @@ caches with the memory coherence system, thus making it seem like pointer
 changes vs new data occur in the right order.
 
 The Alpha defines the Linux kernel's memory model, although as of v4.15
-the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE()
-greatly reduced Alpha's impact on the memory model.
-
-See the subsection on "Cache Coherency" above.
+the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly
+reduced its impact on the memory model.
 
 
 VIRTUAL MACHINE GUESTS
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 10/18] Documentation/barriers/kokr: Remove references to [smp_]read_barrier_depends()
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (8 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 09/18] Documentation/barriers: Remove references to [smp_]read_barrier_depends() Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc Will Deacon
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, SeongJae Park, virtualization,
	Will Deacon, Arnd Bergmann, Yunjae Lee, Alan Stern,
	Sami Tolvanen, Matt Turner, kernel-team, Marco Elver, Kees Cook,
	Paul E. McKenney, Boqun Feng, Josh Triplett, SeongJae Park,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-alpha

From: SeongJae Park <sj38.park@gmail.com>

This commit translates commit ("Documentation/barriers: Remove references to
[smp_]read_barrier_depends()") into Korean.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reviewed-by: Yunjae Lee <lyj7694@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 .../translations/ko_KR/memory-barriers.txt    | 146 +-----------------
 1 file changed, 3 insertions(+), 143 deletions(-)

diff --git a/Documentation/translations/ko_KR/memory-barriers.txt b/Documentation/translations/ko_KR/memory-barriers.txt
index 34d041d68f78..a1f772ef622c 100644
--- a/Documentation/translations/ko_KR/memory-barriers.txt
+++ b/Documentation/translations/ko_KR/memory-barriers.txt
@@ -577,7 +577,7 @@ ACQUIRE 는 해당 오퍼레이션의 로드 부분에만 적용되고 RELEASE 
 데이터 의존성 배리어 (역사적)
 -----------------------------
 
-리눅스 커널 v4.15 기준으로, smp_read_barrier_depends() 가 READ_ONCE() 에
+리눅스 커널 v4.15 기준으로, smp_mb() 가 DEC Alpha 용 READ_ONCE() 코드에
 추가되었는데, 이는 이 섹션에 주의를 기울여야 하는 사람들은 DEC Alpha 아키텍쳐
 전용 코드를 만드는 사람들과 READ_ONCE() 자체를 만드는 사람들 뿐임을 의미합니다.
 그런 분들을 위해, 그리고 역사에 관심 있는 분들을 위해, 여기 데이터 의존성
@@ -2664,144 +2664,6 @@ CPU 코어는 프로그램의 인과성이 유지된다고만 여겨진다면 
 수도 있습니다.
 
 
-캐시 일관성
------------
-
-하지만 삶은 앞에서 이야기한 것처럼 단순하지 않습니다: 캐시들은 일관적일 것으로
-기대되지만, 그 일관성이 순서에도 적용될 거라는 보장은 없습니다.  한 CPU 에서
-만들어진 변경 사항은 최종적으로는 시스템의 모든 CPU 에게 보여지게 되지만, 다른
-CPU 들에게도 같은 순서로 보이게 될 거라는 보장은 없다는 뜻입니다.
-
-
-두개의 CPU (1 & 2) 가 달려 있고, 각 CPU 에 두개의 데이터 캐시(CPU 1 은 A/B 를,
-CPU 2 는 C/D 를 갖습니다)가 병렬로 연결되어 있는 시스템을 다룬다고 생각해
-봅시다:
-
-	            :
-	            :                          +--------+
-	            :      +---------+         |        |
-	+--------+  : +--->| Cache A |<------->|        |
-	|        |  : |    +---------+         |        |
-	|  CPU 1 |<---+                        |        |
-	|        |  : |    +---------+         |        |
-	+--------+  : +--->| Cache B |<------->|        |
-	            :      +---------+         |        |
-	            :                          | Memory |
-	            :      +---------+         | System |
-	+--------+  : +--->| Cache C |<------->|        |
-	|        |  : |    +---------+         |        |
-	|  CPU 2 |<---+                        |        |
-	|        |  : |    +---------+         |        |
-	+--------+  : +--->| Cache D |<------->|        |
-	            :      +---------+         |        |
-	            :                          +--------+
-	            :
-
-이 시스템이 다음과 같은 특성을 갖는다 생각해 봅시다:
-
- (*) 홀수번 캐시라인은 캐시 A, 캐시 C 또는 메모리에 위치할 수 있음;
-
- (*) 짝수번 캐시라인은 캐시 B, 캐시 D 또는 메모리에 위치할 수 있음;
-
- (*) CPU 코어가 한개의 캐시에 접근하는 동안, 다른 캐시는 - 더티 캐시라인을
-     메모리에 내리거나 추측성 로드를 하거나 하기 위해 - 시스템의 다른 부분에
-     액세스 하기 위해 버스를 사용할 수 있음;
-
- (*) 각 캐시는 시스템의 나머지 부분들과 일관성을 맞추기 위해 해당 캐시에
-     적용되어야 할 오퍼레이션들의 큐를 가짐;
-
- (*) 이 일관성 큐는 캐시에 이미 존재하는 라인에 가해지는 평범한 로드에 의해서는
-     비워지지 않는데, 큐의 오퍼레이션들이 이 로드의 결과에 영향을 끼칠 수 있다
-     할지라도 그러함.
-
-이제, 첫번째 CPU 에서 두개의 쓰기 오퍼레이션을 만드는데, 해당 CPU 의 캐시에
-요청된 순서로 오퍼레이션이 도달됨을 보장하기 위해 두 오퍼레이션 사이에 쓰기
-배리어를 사용하는 상황을 상상해 봅시다:
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-					u == 0, v == 1 and p == &u, q == &u
-	v = 2;
-	smp_wmb();			v 의 변경이 p 의 변경 전에 보일 것을
-					 분명히 함
-	<A:modify v=2>			v 는 이제 캐시 A 에 독점적으로 존재함
-	p = &v;
-	<B:modify p=&v>			p 는 이제 캐시 B 에 독점적으로 존재함
-
-여기서의 쓰기 메모리 배리어는 CPU 1 의 캐시가 올바른 순서로 업데이트 된 것으로
-시스템의 다른 CPU 들이 인지하게 만듭니다.  하지만, 이제 두번째 CPU 가 그 값들을
-읽으려 하는 상황을 생각해 봅시다:
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-	...
-			q = p;
-			x = *q;
-
-위의 두개의 읽기 오퍼레이션은 예상된 순서로 일어나지 못할 수 있는데, 두번째 CPU
-의 한 캐시에 다른 캐시 이벤트가 발생해 v 를 담고 있는 캐시라인의 해당 캐시에의
-업데이트가 지연되는 사이, p 를 담고 있는 캐시라인은 두번째 CPU 의 다른 캐시에
-업데이트 되어버렸을 수 있기 때문입니다.
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-					u == 0, v == 1 and p == &u, q == &u
-	v = 2;
-	smp_wmb();
-	<A:modify v=2>	<C:busy>
-			<C:queue v=2>
-	p = &v;		q = p;
-			<D:request p>
-	<B:modify p=&v>	<D:commit p=&v>
-			<D:read p>
-			x = *q;
-			<C:read *q>	캐시에 업데이트 되기 전의 v 를 읽음
-			<C:unbusy>
-			<C:commit v=2>
-
-기본적으로, 두개의 캐시라인 모두 CPU 2 에 최종적으로는 업데이트 될 것이지만,
-별도의 개입 없이는, 업데이트의 순서가 CPU 1 에서 만들어진 순서와 동일할
-것이라는 보장이 없습니다.
-
-
-여기에 개입하기 위해선, 데이터 의존성 배리어나 읽기 배리어를 로드 오퍼레이션들
-사이에 넣어야 합니다 (v4.15 부터는 READ_ONCE() 매크로에 의해 무조건적으로
-그렇게 됩니다).  이렇게 함으로써 캐시가 다음 요청을 처리하기 전에 일관성 큐를
-처리하도록 강제하게 됩니다.
-
-	CPU 1		CPU 2		COMMENT
-	===============	===============	=======================================
-					u == 0, v == 1 and p == &u, q == &u
-	v = 2;
-	smp_wmb();
-	<A:modify v=2>	<C:busy>
-			<C:queue v=2>
-	p = &v;		q = p;
-			<D:request p>
-	<B:modify p=&v>	<D:commit p=&v>
-			<D:read p>
-			smp_read_barrier_depends()
-			<C:unbusy>
-			<C:commit v=2>
-			x = *q;
-			<C:read *q>	캐시에 업데이트 된 v 를 읽음
-
-
-이런 부류의 문제는 DEC Alpha 계열 프로세서들에서 발견될 수 있는데, 이들은
-데이터 버스를 좀 더 잘 사용해 성능을 개선할 수 있는, 분할된 캐시를 가지고 있기
-때문입니다.  대부분의 CPU 는 하나의 읽기 오퍼레이션의 메모리 액세스가 다른 읽기
-오퍼레이션에 의존적이라면 데이터 의존성 배리어를 내포시킵니다만, 모두가 그런건
-아니기 때문에 이점에 의존해선 안됩니다.
-
-다른 CPU 들도 분할된 캐시를 가지고 있을 수 있지만, 그런 CPU 들은 평범한 메모리
-액세스를 위해서도 이 분할된 캐시들 사이의 조정을 해야만 합니다.  Alpha 는 가장
-약한 메모리 순서 시맨틱 (semantic) 을 선택함으로써 메모리 배리어가 명시적으로
-사용되지 않았을 때에는 그런 조정이 필요하지 않게 했으며, 이는 Alpha 가 당시에
-더 높은 CPU 클락 속도를 가질 수 있게 했습니다.  하지만, (다시 말하건대, v4.15
-이후부터는) Alpha 아키텍쳐 전용 코드와 READ_ONCE() 매크로 내부에서를 제외하고는
-smp_read_barrier_depends() 가 사용되지 않아야 함을 알아두시기 바랍니다.
-
-
 캐시 일관성 VS DMA
 ------------------
 
@@ -2962,10 +2824,8 @@ Alpha CPU 의 일부 버전은 분할된 데이터 캐시를 가지고 있어서
 데이터의 발견을 올바른 순서로 일어나게 하기 때문입니다.
 
 리눅스 커널의 메모리 배리어 모델은 Alpha 에 기초해서 정의되었습니다만, v4.15
-부터는 리눅스 커널이 READ_ONCE() 내에 smp_read_barrier_depends() 를 추가해서
-Alpha 의 메모리 모델로의 영향력이 크게 줄어들긴 했습니다.
-
-위의 "캐시 일관성" 서브섹션을 참고하세요.
+부터는 Alpha 용 READ_ONCE() 코드 내에 smp_mb() 가 추가되어서 메모리 모델로의
+Alpha 의 영향력이 크게 줄어들었습니다.
 
 
 가상 머신 게스트
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (9 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 10/18] Documentation/barriers/kokr: " Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 12/18] include/linux: Remove smp_read_barrier_depends() from comments Will Deacon
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

smp_read_barrier_depends() has gone the way of mmiowb() and so many
esoteric memory barriers before it. Drop the two mentions of this
deceased barrier from the LKMM informal explanation document.
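
For context, the pattern the document discusses is the classic
message-passing case (a sketch, not taken from this patch):

	/* P0 */			/* P1 */
	WRITE_ONCE(x, 1);		r1 = READ_ONCE(ptr);
	smp_wmb();			r2 = READ_ONCE(*r1);
	WRITE_ONCE(ptr, &x);

On Alpha, P1 needs a fence between its two loads for r2 to be
guaranteed to observe P0's write to x; since v4.15 that fence is
supplied by READ_ONCE() itself.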

Acked-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 .../Documentation/explanation.txt             | 26 +++++++++----------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt
index e91a2eb19592..01adf9e0ebac 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -1122,12 +1122,10 @@ maintain at least the appearance of FIFO order.
 In practice, this difficulty is solved by inserting a special fence
 between P1's two loads when the kernel is compiled for the Alpha
 architecture.  In fact, as of version 4.15, the kernel automatically
-adds this fence (called smp_read_barrier_depends() and defined as
-nothing at all on non-Alpha builds) after every READ_ONCE() and atomic
-load.  The effect of the fence is to cause the CPU not to execute any
-po-later instructions until after the local cache has finished
-processing all the stores it has already received.  Thus, if the code
-was changed to:
+adds this fence after every READ_ONCE() and atomic load on Alpha.  The
+effect of the fence is to cause the CPU not to execute any po-later
+instructions until after the local cache has finished processing all
+the stores it has already received.  Thus, if the code was changed to:
 
 	P1()
 	{
@@ -1146,14 +1144,14 @@ READ_ONCE() or another synchronization primitive rather than accessed
 directly.
 
 The LKMM requires that smp_rmb(), acquire fences, and strong fences
-share this property with smp_read_barrier_depends(): They do not allow
-the CPU to execute any po-later instructions (or po-later loads in the
-case of smp_rmb()) until all outstanding stores have been processed by
-the local cache.  In the case of a strong fence, the CPU first has to
-wait for all of its po-earlier stores to propagate to every other CPU
-in the system; then it has to wait for the local cache to process all
-the stores received as of that time -- not just the stores received
-when the strong fence began.
+share this property: They do not allow the CPU to execute any po-later
+instructions (or po-later loads in the case of smp_rmb()) until all
+outstanding stores have been processed by the local cache.  In the
+case of a strong fence, the CPU first has to wait for all of its
+po-earlier stores to propagate to every other CPU in the system; then
+it has to wait for the local cache to process all the stores received
+as of that time -- not just the stores received when the strong fence
+began.
 
 And of course, none of this matters for any architecture other than
 Alpha.
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 12/18] include/linux: Remove smp_read_barrier_depends() from comments
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (10 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 13/18] checkpatch: Remove checks relating to [smp_]read_barrier_depends() Will Deacon
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

smp_read_barrier_depends() doesn't exist any more, so reword the two
comments that mention it to refer to "dependency ordering" instead.
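
As a reminder of the pattern these comments describe (sketch only, not
part of this patch), the dependency ordering on the reader side pairs
with a release on the writer side:

	/* writer */			/* reader */
	p->data = 1;			q = READ_ONCE(gp);
	smp_store_release(&gp, p);	x = q->data;	/* ordered by the
							   address dependency */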

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/linux/percpu-refcount.h | 2 +-
 include/linux/ptr_ring.h        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 22d9d183950d..87d8a38bdea1 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -155,7 +155,7 @@ static inline bool __ref_is_percpu(struct percpu_ref *ref,
 	 * between contaminating the pointer value, meaning that
 	 * READ_ONCE() is required when fetching it.
 	 *
-	 * The smp_read_barrier_depends() implied by READ_ONCE() pairs
+	 * The dependency ordering from the READ_ONCE() pairs
 	 * with smp_store_release() in __percpu_ref_switch_to_percpu().
 	 */
 	percpu_ptr = READ_ONCE(ref->percpu_count_ptr);
diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index 417db0a79a62..808f9d3ee546 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -107,7 +107,7 @@ static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr)
 		return -ENOSPC;
 
 	/* Make sure the pointer we are storing points to a valid data. */
-	/* Pairs with smp_read_barrier_depends in __ptr_ring_consume. */
+	/* Pairs with the dependency ordering in __ptr_ring_consume. */
 	smp_wmb();
 
 	WRITE_ONCE(r->queue[r->producer++], ptr);
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 13/18] checkpatch: Remove checks relating to [smp_]read_barrier_depends()
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (11 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 12/18] include/linux: Remove smp_read_barrier_depends() from comments Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 14/18] arm64: Reduce the number of header files pulled into vmlinux.lds.S Will Deacon
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

The [smp_]read_barrier_depends() macros no longer exist, so we don't
need to deal with them in the checkpatch script.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 scripts/checkpatch.pl | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 4c820607540b..8032f80c5bc7 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -5903,8 +5903,7 @@ sub process {
 		my $barriers = qr{
 			mb|
 			rmb|
-			wmb|
-			read_barrier_depends
+			wmb
 		}x;
 		my $barrier_stems = qr{
 			mb__before_atomic|
@@ -5953,12 +5952,6 @@ sub process {
 			}
 		}
 
-# check for smp_read_barrier_depends and read_barrier_depends
-		if (!$file && $line =~ /\b(smp_|)read_barrier_depends\s*\(/) {
-			WARN("READ_BARRIER_DEPENDS",
-			     "$1read_barrier_depends should only be used in READ_ONCE or DEC Alpha code\n" . $herecurr);
-		}
-
 # check of hardware specific defines
 		if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ && $realfile !~ m@include/asm-@) {
 			CHK("ARCH_DEFINES",
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 14/18] arm64: Reduce the number of header files pulled into vmlinux.lds.S
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (12 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 13/18] checkpatch: Remove checks relating to [smp_]read_barrier_depends() Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 15/18] arm64: alternatives: Split up alternative.h Will Deacon
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Although vmlinux.lds.S smells like an assembly file and is compiled
with __ASSEMBLY__ defined, it's actually just fed to the preprocessor to
create our linker script. This means that any assembly macros defined
by headers that it includes will result in a helpful link error:

| aarch64-linux-gnu-ld:./arch/arm64/kernel/vmlinux.lds:1: syntax error

In preparation for an arm64-private asm/rwonce.h implementation, which
will end up pulling assembly macros into linux/compiler.h, reduce the
number of headers we include directly and transitively in vmlinux.lds.S.
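
As a hypothetical illustration of the failure mode (macro name
invented), an assembly macro defined by a transitively included header:

	.macro	do_nothing
	nop
	.endm

passes through the preprocessor untouched and lands verbatim in the
generated vmlinux.lds, which the linker then rejects with a syntax
error like the one above.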

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/kernel-pgtable.h |  2 +-
 arch/arm64/include/asm/memory.h         | 11 ++++++-----
 arch/arm64/include/asm/uaccess.h        |  1 +
 arch/arm64/kernel/entry.S               |  1 +
 arch/arm64/kernel/vmlinux.lds.S         |  1 -
 arch/arm64/kvm/hyp-init.S               |  1 +
 6 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 3bf626f6fe0c..329fb15f6bac 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -8,7 +8,7 @@
 #ifndef __ASM_KERNEL_PGTABLE_H
 #define __ASM_KERNEL_PGTABLE_H
 
-#include <linux/pgtable.h>
+#include <asm/pgtable-hwdef.h>
 #include <asm/sparsemem.h>
 
 /*
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a1871bb32bb1..9d4bf58cf7b3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -10,11 +10,8 @@
 #ifndef __ASM_MEMORY_H
 #define __ASM_MEMORY_H
 
-#include <linux/compiler.h>
 #include <linux/const.h>
 #include <linux/sizes.h>
-#include <linux/types.h>
-#include <asm/bug.h>
 #include <asm/page-def.h>
 
 /*
@@ -157,11 +154,15 @@
 #endif
 
 #ifndef __ASSEMBLY__
-extern u64			vabits_actual;
-#define PAGE_END		(_PAGE_END(vabits_actual))
 
 #include <linux/bitops.h>
+#include <linux/compiler.h>
 #include <linux/mmdebug.h>
+#include <linux/types.h>
+#include <asm/bug.h>
+
+extern u64			vabits_actual;
+#define PAGE_END		(_PAGE_END(vabits_actual))
 
 extern s64			physvirt_offset;
 extern s64			memstart_addr;
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index bc5c7b091152..8d7c466f809b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -19,6 +19,7 @@
 #include <linux/string.h>
 
 #include <asm/cpufeature.h>
+#include <asm/mmu.h>
 #include <asm/ptrace.h>
 #include <asm/memory.h>
 #include <asm/extable.h>
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5304d193c79d..b668aad3b762 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -15,6 +15,7 @@
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
 #include <asm/asm_pointer_auth.h>
+#include <asm/bug.h>
 #include <asm/cpufeature.h>
 #include <asm/errno.h>
 #include <asm/esr.h>
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 6827da7f3aa5..e1e7c0431b4d 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -10,7 +10,6 @@
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
 #include <asm/kernel-pgtable.h>
-#include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/page.h>
 
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 6e6ed5581eed..076544393c3c 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -6,6 +6,7 @@
 
 #include <linux/linkage.h>
 
+#include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_mmu.h>
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 15/18] arm64: alternatives: Split up alternative.h
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (13 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 14/18] arm64: Reduce the number of header files pulled into vmlinux.lds.S Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 16/18] arm64: cpufeatures: Add capability for LDAPR instruction Will Deacon
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

asm/alternative.h contains both the macros needed to use alternatives,
as well as the type definitions and function prototypes for applying
them.

Split the header in two, so that alternatives can be used from core
header files such as linux/compiler.h without the risk of circular
includes.
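
As a rough sketch of the resulting layering (illustrative only):

	asm/alternative-macros.h	ALTERNATIVE() and the alternative_*
					assembly macros; safe to include
					anywhere
	asm/alternative.h		includes alternative-macros.h and
					adds struct alt_instr plus the
					patching prototypes
	linux/compiler.h		may now pull in
					asm/alternative-macros.h without
					creating an include cycle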

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/alternative-macros.h | 276 ++++++++++++++++++++
 arch/arm64/include/asm/alternative.h        | 267 +------------------
 arch/arm64/include/asm/insn.h               |   3 +-
 3 files changed, 279 insertions(+), 267 deletions(-)
 create mode 100644 arch/arm64/include/asm/alternative-macros.h

diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
new file mode 100644
index 000000000000..9f697bef7958
--- /dev/null
+++ b/arch/arm64/include/asm/alternative-macros.h
@@ -0,0 +1,276 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ALTERNATIVE_MACROS_H
+#define __ASM_ALTERNATIVE_MACROS_H
+
+#include <asm/cpucaps.h>
+
+#define ARM64_CB_PATCH ARM64_NCAPS
+
+/* A64 instructions are always 32 bits. */
+#define	AARCH64_INSN_SIZE		4
+
+#ifndef __ASSEMBLY__
+
+#include <linux/stringify.h>
+
+#define ALTINSTR_ENTRY(feature)					              \
+	" .word 661b - .\n"				/* label           */ \
+	" .word 663f - .\n"				/* new instruction */ \
+	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
+	" .byte 662b-661b\n"				/* source len      */ \
+	" .byte 664f-663f\n"				/* replacement len */
+
+#define ALTINSTR_ENTRY_CB(feature, cb)					      \
+	" .word 661b - .\n"				/* label           */ \
+	" .word " __stringify(cb) "- .\n"		/* callback */	      \
+	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
+	" .byte 662b-661b\n"				/* source len      */ \
+	" .byte 664f-663f\n"				/* replacement len */
+
+/*
+ * alternative assembly primitive:
+ *
+ * If any of these .org directive fail, it means that insn1 and insn2
+ * don't have the same length. This used to be written as
+ *
+ * .if ((664b-663b) != (662b-661b))
+ * 	.error "Alternatives instruction length mismatch"
+ * .endif
+ *
+ * but most assemblers die if insn1 or insn2 have a .inst. This should
+ * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
+ * containing commit 4e4d08cf7399b606 or c1baaddf8861).
+ *
+ * Alternatives with callbacks do not generate replacement instructions.
+ */
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+	".if "__stringify(cfg_enabled)" == 1\n"				\
+	"661:\n\t"							\
+	oldinstr "\n"							\
+	"662:\n"							\
+	".pushsection .altinstructions,\"a\"\n"				\
+	ALTINSTR_ENTRY(feature)						\
+	".popsection\n"							\
+	".pushsection .altinstr_replacement, \"a\"\n"			\
+	"663:\n\t"							\
+	newinstr "\n"							\
+	"664:\n\t"							\
+	".popsection\n\t"						\
+	".org	. - (664b-663b) + (662b-661b)\n\t"			\
+	".org	. - (662b-661b) + (664b-663b)\n"			\
+	".endif\n"
+
+#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb)	\
+	".if "__stringify(cfg_enabled)" == 1\n"				\
+	"661:\n\t"							\
+	oldinstr "\n"							\
+	"662:\n"							\
+	".pushsection .altinstructions,\"a\"\n"				\
+	ALTINSTR_ENTRY_CB(feature, cb)					\
+	".popsection\n"							\
+	"663:\n\t"							\
+	"664:\n\t"							\
+	".endif\n"
+
+#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
+
+#define ALTERNATIVE_CB(oldinstr, cb) \
+	__ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb)
+#else
+
+#include <asm/assembler.h>
+
+.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+	.word \orig_offset - .
+	.word \alt_offset - .
+	.hword \feature
+	.byte \orig_len
+	.byte \alt_len
+.endm
+
+.macro alternative_insn insn1, insn2, cap, enable = 1
+	.if \enable
+661:	\insn1
+662:	.pushsection .altinstructions, "a"
+	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
+	.popsection
+	.pushsection .altinstr_replacement, "ax"
+663:	\insn2
+664:	.popsection
+	.org	. - (664b-663b) + (662b-661b)
+	.org	. - (662b-661b) + (664b-663b)
+	.endif
+.endm
+
+/*
+ * Alternative sequences
+ *
+ * The code for the case where the capability is not present will be
+ * assembled and linked as normal. There are no restrictions on this
+ * code.
+ *
+ * The code for the case where the capability is present will be
+ * assembled into a special section to be used for dynamic patching.
+ * Code for that case must:
+ *
+ * 1. Be exactly the same length (in bytes) as the default code
+ *    sequence.
+ *
+ * 2. Not contain a branch target that is used outside of the
+ *    alternative sequence it is defined in (branches into an
+ *    alternative sequence are not fixed up).
+ */
+
+/*
+ * Begin an alternative code sequence.
+ */
+.macro alternative_if_not cap
+	.set .Lasm_alt_mode, 0
+	.pushsection .altinstructions, "a"
+	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
+	.popsection
+661:
+.endm
+
+.macro alternative_if cap
+	.set .Lasm_alt_mode, 1
+	.pushsection .altinstructions, "a"
+	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
+	.popsection
+	.pushsection .altinstr_replacement, "ax"
+	.align 2	/* So GAS knows label 661 is suitably aligned */
+661:
+.endm
+
+.macro alternative_cb cb
+	.set .Lasm_alt_mode, 0
+	.pushsection .altinstructions, "a"
+	altinstruction_entry 661f, \cb, ARM64_CB_PATCH, 662f-661f, 0
+	.popsection
+661:
+.endm
+
+/*
+ * Provide the other half of the alternative code sequence.
+ */
+.macro alternative_else
+662:
+	.if .Lasm_alt_mode==0
+	.pushsection .altinstr_replacement, "ax"
+	.else
+	.popsection
+	.endif
+663:
+.endm
+
+/*
+ * Complete an alternative code sequence.
+ */
+.macro alternative_endif
+664:
+	.if .Lasm_alt_mode==0
+	.popsection
+	.endif
+	.org	. - (664b-663b) + (662b-661b)
+	.org	. - (662b-661b) + (664b-663b)
+.endm
+
+/*
+ * Callback-based alternative epilogue
+ */
+.macro alternative_cb_end
+662:
+.endm
+
+/*
+ * Provides a trivial alternative or default sequence consisting solely
+ * of NOPs. The number of NOPs is chosen automatically to match the
+ * previous case.
+ */
+.macro alternative_else_nop_endif
+alternative_else
+	nops	(662b-661b) / AARCH64_INSN_SIZE
+alternative_endif
+.endm
+
+#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
+	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
+
+.macro user_alt, label, oldinstr, newinstr, cond
+9999:	alternative_insn "\oldinstr", "\newinstr", \cond
+	_asm_extable 9999b, \label
+.endm
+
+/*
+ * Generate the assembly for UAO alternatives with exception table entries.
+ * This is complicated as there is no post-increment or pair versions of the
+ * unprivileged instructions, and USER() only works for single instructions.
+ */
+#ifdef CONFIG_ARM64_UAO
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			ldtr	\reg1, [\addr];
+			ldtr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+		_asm_extable	8889b,\l;
+	.endm
+
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			stp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			sttr	\reg1, [\addr];
+			sttr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+		_asm_extable	8889b,\l;
+	.endm
+
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			\inst	\reg, [\addr], \post_inc;
+			nop;
+		alternative_else
+			\alt_inst	\reg, [\addr];
+			add		\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+	.endm
+#else
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		USER(\l, \inst \reg, [\addr], \post_inc)
+	.endm
+#endif
+
+#endif  /*  __ASSEMBLY__  */
+
+/*
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature));
+ *
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO));
+ * N.B. If CONFIG_FOO is specified, but not selected, the whole block
+ *      will be omitted, including oldinstr.
+ */
+#define ALTERNATIVE(oldinstr, newinstr, ...)   \
+	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
+
+#endif /* __ASM_ALTERNATIVE_MACROS_H */
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 5e5dc05d63a0..a38b92e11811 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -2,17 +2,13 @@
 #ifndef __ASM_ALTERNATIVE_H
 #define __ASM_ALTERNATIVE_H
 
-#include <asm/cpucaps.h>
-#include <asm/insn.h>
-
-#define ARM64_CB_PATCH ARM64_NCAPS
+#include <asm/alternative-macros.h>
 
 #ifndef __ASSEMBLY__
 
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/stddef.h>
-#include <linux/stringify.h>
 
 struct alt_instr {
 	s32 orig_offset;	/* offset to original instruction */
@@ -35,264 +31,5 @@ void apply_alternatives_module(void *start, size_t length);
 static inline void apply_alternatives_module(void *start, size_t length) { }
 #endif
 
-#define ALTINSTR_ENTRY(feature)					              \
-	" .word 661b - .\n"				/* label           */ \
-	" .word 663f - .\n"				/* new instruction */ \
-	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
-	" .byte 662b-661b\n"				/* source len      */ \
-	" .byte 664f-663f\n"				/* replacement len */
-
-#define ALTINSTR_ENTRY_CB(feature, cb)					      \
-	" .word 661b - .\n"				/* label           */ \
-	" .word " __stringify(cb) "- .\n"		/* callback */	      \
-	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
-	" .byte 662b-661b\n"				/* source len      */ \
-	" .byte 664f-663f\n"				/* replacement len */
-
-/*
- * alternative assembly primitive:
- *
- * If any of these .org directive fail, it means that insn1 and insn2
- * don't have the same length. This used to be written as
- *
- * .if ((664b-663b) != (662b-661b))
- * 	.error "Alternatives instruction length mismatch"
- * .endif
- *
- * but most assemblers die if insn1 or insn2 have a .inst. This should
- * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
- * containing commit 4e4d08cf7399b606 or c1baaddf8861).
- *
- * Alternatives with callbacks do not generate replacement instructions.
- */
-#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
-	".if "__stringify(cfg_enabled)" == 1\n"				\
-	"661:\n\t"							\
-	oldinstr "\n"							\
-	"662:\n"							\
-	".pushsection .altinstructions,\"a\"\n"				\
-	ALTINSTR_ENTRY(feature)						\
-	".popsection\n"							\
-	".pushsection .altinstr_replacement, \"a\"\n"			\
-	"663:\n\t"							\
-	newinstr "\n"							\
-	"664:\n\t"							\
-	".popsection\n\t"						\
-	".org	. - (664b-663b) + (662b-661b)\n\t"			\
-	".org	. - (662b-661b) + (664b-663b)\n"			\
-	".endif\n"
-
-#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb)	\
-	".if "__stringify(cfg_enabled)" == 1\n"				\
-	"661:\n\t"							\
-	oldinstr "\n"							\
-	"662:\n"							\
-	".pushsection .altinstructions,\"a\"\n"				\
-	ALTINSTR_ENTRY_CB(feature, cb)					\
-	".popsection\n"							\
-	"663:\n\t"							\
-	"664:\n\t"							\
-	".endif\n"
-
-#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
-	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
-
-#define ALTERNATIVE_CB(oldinstr, cb) \
-	__ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb)
-#else
-
-#include <asm/assembler.h>
-
-.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
-	.word \orig_offset - .
-	.word \alt_offset - .
-	.hword \feature
-	.byte \orig_len
-	.byte \alt_len
-.endm
-
-.macro alternative_insn insn1, insn2, cap, enable = 1
-	.if \enable
-661:	\insn1
-662:	.pushsection .altinstructions, "a"
-	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
-	.popsection
-	.pushsection .altinstr_replacement, "ax"
-663:	\insn2
-664:	.popsection
-	.org	. - (664b-663b) + (662b-661b)
-	.org	. - (662b-661b) + (664b-663b)
-	.endif
-.endm
-
-/*
- * Alternative sequences
- *
- * The code for the case where the capability is not present will be
- * assembled and linked as normal. There are no restrictions on this
- * code.
- *
- * The code for the case where the capability is present will be
- * assembled into a special section to be used for dynamic patching.
- * Code for that case must:
- *
- * 1. Be exactly the same length (in bytes) as the default code
- *    sequence.
- *
- * 2. Not contain a branch target that is used outside of the
- *    alternative sequence it is defined in (branches into an
- *    alternative sequence are not fixed up).
- */
-
-/*
- * Begin an alternative code sequence.
- */
-.macro alternative_if_not cap
-	.set .Lasm_alt_mode, 0
-	.pushsection .altinstructions, "a"
-	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
-	.popsection
-661:
-.endm
-
-.macro alternative_if cap
-	.set .Lasm_alt_mode, 1
-	.pushsection .altinstructions, "a"
-	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
-	.popsection
-	.pushsection .altinstr_replacement, "ax"
-	.align 2	/* So GAS knows label 661 is suitably aligned */
-661:
-.endm
-
-.macro alternative_cb cb
-	.set .Lasm_alt_mode, 0
-	.pushsection .altinstructions, "a"
-	altinstruction_entry 661f, \cb, ARM64_CB_PATCH, 662f-661f, 0
-	.popsection
-661:
-.endm
-
-/*
- * Provide the other half of the alternative code sequence.
- */
-.macro alternative_else
-662:
-	.if .Lasm_alt_mode==0
-	.pushsection .altinstr_replacement, "ax"
-	.else
-	.popsection
-	.endif
-663:
-.endm
-
-/*
- * Complete an alternative code sequence.
- */
-.macro alternative_endif
-664:
-	.if .Lasm_alt_mode==0
-	.popsection
-	.endif
-	.org	. - (664b-663b) + (662b-661b)
-	.org	. - (662b-661b) + (664b-663b)
-.endm
-
-/*
- * Callback-based alternative epilogue
- */
-.macro alternative_cb_end
-662:
-.endm
-
-/*
- * Provides a trivial alternative or default sequence consisting solely
- * of NOPs. The number of NOPs is chosen automatically to match the
- * previous case.
- */
-.macro alternative_else_nop_endif
-alternative_else
-	nops	(662b-661b) / AARCH64_INSN_SIZE
-alternative_endif
-.endm
-
-#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
-	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
-
-.macro user_alt, label, oldinstr, newinstr, cond
-9999:	alternative_insn "\oldinstr", "\newinstr", \cond
-	_asm_extable 9999b, \label
-.endm
-
-/*
- * Generate the assembly for UAO alternatives with exception table entries.
- * This is complicated as there is no post-increment or pair versions of the
- * unprivileged instructions, and USER() only works for single instructions.
- */
-#ifdef CONFIG_ARM64_UAO
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			ldtr	\reg1, [\addr];
-			ldtr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
-	.endm
-
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			stp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			sttr	\reg1, [\addr];
-			sttr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
-	.endm
-
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			\inst	\reg, [\addr], \post_inc;
-			nop;
-		alternative_else
-			\alt_inst	\reg, [\addr];
-			add		\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-	.endm
-#else
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		USER(\l, \inst \reg, [\addr], \post_inc)
-	.endm
-#endif
-
-#endif  /*  __ASSEMBLY__  */
-
-/*
- * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature));
- *
- * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO));
- * N.B. If CONFIG_FOO is specified, but not selected, the whole block
- *      will be omitted, including oldinstr.
- */
-#define ALTERNATIVE(oldinstr, newinstr, ...)   \
-	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
-
+#endif /* __ASSEMBLY__ */
 #endif /* __ASM_ALTERNATIVE_H */
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 0bc46149e491..01da70ba2fb9 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -10,8 +10,7 @@
 #include <linux/build_bug.h>
 #include <linux/types.h>
 
-/* A64 instructions are always 32 bits. */
-#define	AARCH64_INSN_SIZE		4
+#include <asm/alternative.h>
 
 #ifndef __ASSEMBLY__
 /*
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 16/18] arm64: cpufeatures: Add capability for LDAPR instruction
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (14 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 15/18] arm64: alternatives: Split up alternative.h Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 17/18] arm64: alternatives: Remove READ_ONCE() usage during patch operation Will Deacon
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

Armv8.3 introduced the LDAPR instruction, which provides weaker memory
ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an
RCsc implementation when implementing the Linux memory model, but LDAPR
is a useful alternative to dependency ordering, particularly when the
compiler is capable of breaking the dependencies.

Since LDAPR is not available on all CPUs, add a cpufeature to detect it at
runtime and allow the instruction to be used with alternative code
patching.
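
For illustration only (not part of this patch), the difference for a
64-bit acquire load is roughly:

	ldar	x0, [x1]	// RCsc: also ordered after a preceding
				// store-release to a different location
	ldapr	x0, [x1]	// RCpc: may be reordered before such a
				// store-release, matching C11 acquire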

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/Kconfig               |  3 +++
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 10 ++++++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 66dc41fd49f2..e1073210e70b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1409,6 +1409,9 @@ config ARM64_PAN
 	 The feature is detected at runtime, and will remain as a 'nop'
 	 instruction if the cpu does not implement the feature.
 
+config AS_HAS_LDAPR
+	def_bool $(as-instr,.arch_extension rcpc)
+
 config ARM64_LSE_ATOMICS
 	bool
 	default ARM64_USE_LSE_ATOMICS
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d7b3bb0cb180..3ff0103d4dfd 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -62,7 +62,8 @@
 #define ARM64_HAS_GENERIC_AUTH			52
 #define ARM64_HAS_32BIT_EL1			53
 #define ARM64_BTI				54
+#define ARM64_HAS_LDAPR				55
 
-#define ARM64_NCAPS				55
+#define ARM64_NCAPS				56
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f63053a63a9..a29256a782e9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2056,6 +2056,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 	},
 #endif
+	{
+		.desc = "RCpc load-acquire (LDAPR)",
+		.capability = ARM64_HAS_LDAPR,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_LRCPC_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+	},
 	{},
 };
 
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 17/18] arm64: alternatives: Remove READ_ONCE() usage during patch operation
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (15 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 16/18] arm64: cpufeatures: Add capability for LDAPR instruction Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
  2020-07-01  7:38 ` [PATCH 00/18] Allow architectures to override __READ_ONCE() Josh Triplett
  18 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

In preparation for patching the internals of READ_ONCE() itself, replace
its use on the alternatives patching path with a volatile variable.
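
For context (sketch, not part of the patch): once READ_ONCE() may
itself expand to alternative-patched instructions, the code applying
the alternatives cannot depend on it, but a volatile int access still
compiles to a single load on each pass around the loop:

	static volatile int all_alternatives_applied;

	while (!all_alternatives_applied)
		cpu_relax();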

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/alternative.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d1757ef1b1e7..87bca8d44084 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -21,7 +21,8 @@
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
 #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
 
-static int all_alternatives_applied;
+/* Volatile, as we may be patching the guts of READ_ONCE() */
+static volatile int all_alternatives_applied;
 
 static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
 
@@ -217,7 +218,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 
 	/* We always have a CPU 0 at this point (__init) */
 	if (smp_processor_id()) {
-		while (!READ_ONCE(all_alternatives_applied))
+		while (!all_alternatives_applied)
 			cpu_relax();
 		isb();
 	} else {
@@ -229,7 +230,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 		BUG_ON(all_alternatives_applied);
 		__apply_alternatives(&region, false, remaining_capabilities);
 		/* Barriers provided by the cache flushing */
-		WRITE_ONCE(all_alternatives_applied, 1);
+		all_alternatives_applied = 1;
 	}
 
 	return 0;
-- 
2.27.0.212.ge8ba1cc988-goog



* [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (16 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 17/18] arm64: alternatives: Remove READ_ONCE() usage during patch operation Will Deacon
@ 2020-06-30 17:37 ` Will Deacon
  2020-06-30 19:25   ` Arnd Bergmann
                     ` (3 more replies)
  2020-07-01  7:38 ` [PATCH 00/18] Allow architectures to override __READ_ONCE() Josh Triplett
  18 siblings, 4 replies; 58+ messages in thread
From: Will Deacon @ 2020-06-30 17:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, linux-alpha

When building with LTO, there is an increased risk of the compiler
converting an address dependency headed by a READ_ONCE() invocation
into a control dependency and consequently allowing for harmful
reordering by the CPU.
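
As a hypothetical example of such a transformation (identifiers are
invented), the address dependency in:

	struct foo *p = READ_ONCE(gp);
	return p->val;

can become a control dependency if, say after cross-unit inlining, the
compiler proves that gp only ever points to foo_a or foo_b:

	struct foo *p = READ_ONCE(gp);
	if (p == &foo_a)
		return foo_a.val;	/* load no longer carries the */
	return foo_b.val;		/* address dependency	      */

The CPU may then speculate the loads past the branch, defeating the
ordering that the address dependency used to provide.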

Ensure that such transformations are harmless by overriding the generic
READ_ONCE() definition with one that provides acquire semantics when
building with LTO.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/vdso/Makefile   |  2 +-
 arch/arm64/kernel/vdso32/Makefile |  2 +-
 3 files changed, 65 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/rwonce.h

diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
new file mode 100644
index 000000000000..515e360b01a1
--- /dev/null
+++ b/arch/arm64/include/asm/rwonce.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+#ifndef __ASM_RWONCE_H
+#define __ASM_RWONCE_H
+
+#ifdef CONFIG_CLANG_LTO
+
+#include <linux/compiler_types.h>
+#include <asm/alternative-macros.h>
+
+#ifndef BUILD_VDSO
+
+#ifdef CONFIG_AS_HAS_LDAPR
+#define __LOAD_RCPC(sfx, regs...)					\
+	ALTERNATIVE(							\
+		"ldar"	#sfx "\t" #regs,				\
+		".arch_extension rcpc\n"				\
+		"ldapr"	#sfx "\t" #regs,				\
+	ARM64_HAS_LDAPR)
+#else
+#define __LOAD_RCPC(sfx, regs...)	"ldar" #sfx "\t" #regs
+#endif /* CONFIG_AS_HAS_LDAPR */
+
+#define __READ_ONCE(x)							\
+({									\
+	int atomic = 1;							\
+	union { __unqual_scalar_typeof(x) __val; char __c[1]; } __u;	\
+	typeof(&(x)) __x = &(x);					\
+	switch (sizeof(x)) {						\
+	case 1:								\
+		asm volatile(__LOAD_RCPC(b, %w0, %1)			\
+			: "=r" (*(__u8 *)__u.__c)			\
+			: "Q" (*__x) : "memory");			\
+		break;							\
+	case 2:								\
+		asm volatile(__LOAD_RCPC(h, %w0, %1)			\
+			: "=r" (*(__u16 *)__u.__c)			\
+			: "Q" (*__x) : "memory");			\
+		break;							\
+	case 4:								\
+		asm volatile(__LOAD_RCPC(, %w0, %1)			\
+			: "=r" (*(__u32 *)__u.__c)			\
+			: "Q" (*__x) : "memory");			\
+		break;							\
+	case 8:								\
+		asm volatile(__LOAD_RCPC(, %0, %1)			\
+			: "=r" (*(__u64 *)__u.__c)			\
+			: "Q" (*__x) : "memory");			\
+		break;							\
+	default:							\
+		atomic = 0;						\
+	}								\
+	atomic ? (typeof(x))__u.__val : (*(volatile typeof(x) *)__x);	\
+})
+
+#endif	/* !BUILD_VDSO */
+#endif	/* CONFIG_CLANG_LTO */
+
+#include <asm-generic/rwonce.h>
+
+#endif	/* __ASM_RWONCE_H */
diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index 45d5cfe46429..60df97f2e7de 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -28,7 +28,7 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv	\
 	     $(btildflags-y) -T
 
 ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
-ccflags-y += -DDISABLE_BRANCH_PROFILING
+ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
 
 CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS)
 KBUILD_CFLAGS			+= $(DISABLE_LTO)
diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
index d88148bef6b0..4fdf3754a058 100644
--- a/arch/arm64/kernel/vdso32/Makefile
+++ b/arch/arm64/kernel/vdso32/Makefile
@@ -43,7 +43,7 @@ cc32-as-instr = $(call try-run,\
 # As a result we set our own flags here.
 
 # KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
-VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include)
+VDSO_CPPFLAGS := -DBUILD_VDSO -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include)
 VDSO_CPPFLAGS += $(LINUXINCLUDE)
 
 # Common C and assembly flags
-- 
2.27.0.212.ge8ba1cc988-goog



* Re: [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h
  2020-06-30 17:37 ` [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h Will Deacon
@ 2020-06-30 19:11   ` Arnd Bergmann
  2020-07-01 10:16     ` [PATCH 02/18] compiler.h: Split {READ,WRITE}_ONCE " Will Deacon
  0 siblings, 1 reply; 58+ messages in thread
From: Arnd Bergmann @ 2020-06-30 19:11 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Tue, Jun 30, 2020 at 7:37 PM Will Deacon <will@kernel.org> wrote:
>
> In preparation for allowing architectures to define their own
> implementation of the READ_ONCE() macro, move the generic
> {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
> file and into a new 'rwonce.h' header under 'asm-generic'.
>
> Acked-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  include/asm-generic/Kbuild   |  1 +
>  include/asm-generic/rwonce.h | 91 ++++++++++++++++++++++++++++++++++++
>  include/linux/compiler.h     | 83 +-------------------------------

Very nice, this has the added benefit of allowing us to stop including
asm/barrier.h once linux/compiler.h gets changed to not include
asm/rwonce.h.

The asm/barrier.h header has a circular dependency, pulling in
linux/compiler.h itself.

       Arnd


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
@ 2020-06-30 19:25   ` Arnd Bergmann
  2020-07-01 10:19     ` Will Deacon
  2020-06-30 19:47   ` Marco Elver
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 58+ messages in thread
From: Arnd Bergmann @ 2020-06-30 19:25 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Tue, Jun 30, 2020 at 7:39 PM Will Deacon <will@kernel.org> wrote:
> +#define __READ_ONCE(x)                                                 \
> +({                                                                     \
> +       int atomic = 1;                                                 \
> +       union { __unqual_scalar_typeof(x) __val; char __c[1]; } __u;    \
> +       typeof(&(x)) __x = &(x);                                        \
> +       switch (sizeof(x)) {                                            \
...
> +       atomic ? (typeof(x))__u.__val : (*(volatile typeof(x) *)__x);   \
> +})

This expands (x) nine times (five in __unqual_scalar_typeof()), which can
lead to significant code bloat after preprocessing if something passes a
compound expression into READ_ONCE().
The compiler works it out eventually, but we've seen an actual slowdown
in compile speed from this recently, especially on clang.

I think if you move the

        typeof(&(x)) __x = &(x);

line first, all other instances can use typeof(*__x) instead of typeof(x)
and avoid this problem. Once we make gcc-4.9 the minimum version,
this could be further improved to

       __auto_type __x = &(x);
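
With that change, the whole thing could look roughly like this
(completely untested):

	#define __READ_ONCE(x)						\
	({								\
		typeof(&(x)) __x = &(x);				\
		int atomic = 1;						\
		union { __unqual_scalar_typeof(*__x) __val;		\
			char __c[1]; } __u;				\
		...							\
		atomic ? (typeof(*__x))__u.__val			\
		       : (*(volatile typeof(*__x) *)__x);		\
	})

so that (x) only gets expanded twice.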

       Arnd


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
  2020-06-30 19:25   ` Arnd Bergmann
@ 2020-06-30 19:47   ` Marco Elver
  2020-06-30 20:20     ` Peter Zijlstra
                       ` (2 more replies)
  2020-07-01 17:07   ` Dave P Martin
  2020-07-06 16:08   ` Dave Martin
  3 siblings, 3 replies; 58+ messages in thread
From: Marco Elver @ 2020-06-30 19:47 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Kees Cook, Paul E. McKenney, Michael S. Tsirkin,
	Peter Zijlstra, Catalin Marinas, Jason Wang, Nick Desaulniers,
	Josh Triplett, LKML, Ivan Kokshaysky, linux-arm-kernel,
	Sami Tolvanen, linux-alpha, Alan Stern, Matt Turner,
	virtualization, Android Kernel Team, Boqun Feng, Arnd Bergmann,
	Richard Henderson

On Tue, 30 Jun 2020 at 19:39, Will Deacon <will@kernel.org> wrote:
>
> When building with LTO, there is an increased risk of the compiler
> converting an address dependency headed by a READ_ONCE() invocation
> into a control dependency and consequently allowing for harmful
> reordering by the CPU.
>
> Ensure that such transformations are harmless by overriding the generic
> READ_ONCE() definition with one that provides acquire semantics when
> building with LTO.
>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/vdso/Makefile   |  2 +-
>  arch/arm64/kernel/vdso32/Makefile |  2 +-
>  3 files changed, 65 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/rwonce.h

This seems reasonable, given we can't realistically tell the compiler
about dependent loads. What (if any) is the performance impact? I
guess this also heavily depends on the actual silicon.

I do wonder, though, if there is some way to make the compiler do
something better for us. Clearly, implementing real
memory_order_consume hasn't worked out to this day. But maybe the
compiler could promote dependent loads to acquires if it recognizes it
lost dependencies during optimizations. Just thinking out loud, it
probably still has some weird corner case that will break. ;-)

The other thing is that I'd be cautious blaming LTO, as I tried to
summarize here:
https://lore.kernel.org/kernel-hardening/20200630191931.GA884155@elver.google.com/

The main thing is that, yes, this might be something to be worried
about, but if we are worried about it, we need to be worried about it
in *all* builds (LTO or not). My guess is that's not acceptable. Would
it be better to just guard the promotion of READ_ONCE() to acquire
behind a config option like CONFIG_ACQUIRE_READ_DEPENDENCIES, and then
make LTO select that (or maybe leave it optional?). In future, for
very aggressive non-LTO compilers even, one may then also select that
if there is substantiated worry things do actually break.

Thanks,
-- Marco


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 19:47   ` Marco Elver
@ 2020-06-30 20:20     ` Peter Zijlstra
  2020-06-30 22:57     ` Sami Tolvanen
  2020-07-01 10:24     ` Will Deacon
  2 siblings, 0 replies; 58+ messages in thread
From: Peter Zijlstra @ 2020-06-30 20:20 UTC (permalink / raw)
  To: Marco Elver
  Cc: Mark Rutland, Android Kernel Team, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Catalin Marinas, Jason Wang,
	Nick Desaulniers, LKML, Josh Triplett, Ivan Kokshaysky,
	linux-arm-kernel, Sami Tolvanen, linux-alpha, Alan Stern,
	Matt Turner, virtualization, Will Deacon, Boqun Feng,
	Arnd Bergmann, Richard Henderson

On Tue, Jun 30, 2020 at 09:47:30PM +0200, Marco Elver wrote:
> I do wonder, though, if there is some way to make the compiler do
> something better for us. Clearly, implementing real
> memory_order_consume hasn't worked out to this day. But maybe the
> compiler could promote dependent loads to acquires if it recognizes
> that it lost dependencies during optimizations. Just thinking out
> loud; it probably still has some weird corner case that will break. ;-)

I'd be very hesitant to let the compiler upgrade the ordering for us,
specifically because we're not using C11 crud and are using a lot of
inline asm.


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 19:47   ` Marco Elver
  2020-06-30 20:20     ` Peter Zijlstra
@ 2020-06-30 22:57     ` Sami Tolvanen
  2020-07-01 10:25       ` Will Deacon
  2020-07-01 10:24     ` Will Deacon
  2 siblings, 1 reply; 58+ messages in thread
From: Sami Tolvanen @ 2020-06-30 22:57 UTC (permalink / raw)
  To: Marco Elver
  Cc: Mark Rutland, Android Kernel Team, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, Josh Triplett, LKML, Ivan Kokshaysky,
	linux-arm-kernel, linux-alpha, Alan Stern, Matt Turner,
	virtualization, Will Deacon, Boqun Feng, Arnd Bergmann,
	Richard Henderson

On Tue, Jun 30, 2020 at 12:47 PM Marco Elver <elver@google.com> wrote:
>
> On Tue, 30 Jun 2020 at 19:39, Will Deacon <will@kernel.org> wrote:
> >
> > When building with LTO, there is an increased risk of the compiler
> > converting an address dependency headed by a READ_ONCE() invocation
> > into a control dependency and consequently allowing for harmful
> > reordering by the CPU.
> >
> > Ensure that such transformations are harmless by overriding the generic
> > READ_ONCE() definition with one that provides acquire semantics when
> > building with LTO.
> >
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> >  3 files changed, 65 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/rwonce.h
>
> This seems reasonable, given we can't realistically tell the compiler
> about dependent loads. What, if any, is the performance impact? I
> guess this also heavily depends on the actual silicon.
>
> I do wonder, though, if there is some way to make the compiler do
> something better for us. Clearly, implementing real
> memory_order_consume hasn't worked out to this day. But maybe the
> compiler could promote dependent loads to acquires if it recognizes
> that it lost dependencies during optimizations. Just thinking out
> loud; it probably still has some weird corner case that will break. ;-)
>
> The other thing is that I'd be cautious blaming LTO, as I tried to
> summarize here:
> https://lore.kernel.org/kernel-hardening/20200630191931.GA884155@elver.google.com/
>
> The main thing is that, yes, this might be something to be worried
> about, but if we are worried about it, we need to be worried about it
> in *all* builds (LTO or not). My guess is that's not acceptable. Would
> it be better to just guard the promotion of READ_ONCE() to acquire
> behind a config option like CONFIG_ACQUIRE_READ_DEPENDENCIES, and then
> make LTO select that (or maybe leave it optional)? In future, even for
> very aggressive non-LTO compilers, one may then also select that option
> if there is substantiated worry that things do actually break.

I agree, a separate config option would be better here.

Also, Will, the LTO patches use CONFIG_LTO_CLANG instead of CLANG_LTO.

Sami


* Re: [PATCH 00/18] Allow architectures to override __READ_ONCE()
  2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
                   ` (17 preceding siblings ...)
  2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
@ 2020-07-01  7:38 ` Josh Triplett
  18 siblings, 0 replies; 58+ messages in thread
From: Josh Triplett @ 2020-07-01  7:38 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, virtualization, Ivan Kokshaysky,
	linux-arm-kernel, Sami Tolvanen, linux-alpha, Alan Stern,
	Matt Turner, kernel-team, Boqun Feng, Arnd Bergmann,
	Richard Henderson

On Tue, Jun 30, 2020 at 06:37:16PM +0100, Will Deacon wrote:
> The patches allow architectures to provide their own implementation of
> __READ_ONCE(). This serves two main purposes:
> 
>   1. It finally allows us to remove [smp_]read_barrier_depends() from the
>      Linux memory model and make it an implementation detail of the Alpha
>      back-end.

And there was much rejoicing. Thank you.


* Re: [PATCH 02/18] compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h
  2020-06-30 19:11   ` Arnd Bergmann
@ 2020-07-01 10:16     ` Will Deacon
  2020-07-01 11:33       ` [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE " Arnd Bergmann
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-01 10:16 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

Hi Arnd,

On Tue, Jun 30, 2020 at 09:11:32PM +0200, Arnd Bergmann wrote:
> On Tue, Jun 30, 2020 at 7:37 PM Will Deacon <will@kernel.org> wrote:
> >
> > In preparation for allowing architectures to define their own
> > implementation of the READ_ONCE() macro, move the generic
> > {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
> > file and into a new 'rwonce.h' header under 'asm-generic'.
> >
> > Acked-by: Paul E. McKenney <paulmck@kernel.org>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  include/asm-generic/Kbuild   |  1 +
> >  include/asm-generic/rwonce.h | 91 ++++++++++++++++++++++++++++++++++++
> >  include/linux/compiler.h     | 83 +-------------------------------
> 
> Very nice, this has the added benefit of allowing us to stop including
> asm/barrier.h once linux/compiler.h gets changed to not include
> asm/rwonce.h.

Yeah, with this series linux/compiler.h _does_ include asm/rwonce.h because
otherwise there are many callers to fix up, but that could be addressed
subsequently, I suppose.

> The asm/barrier.h header has a circular dependency, pulling in
> linux/compiler.h itself.

Hmm. Once smp_read_barrier_depends() disappears, I could actually remove
the include of <asm/barrier.h> from asm-generic/rwonce.h. It would have to
remain for arch/alpha/, however, since we need the barrier definitions to
implement READ_ONCE(). I can probably also replace the include of
<linux/compiler.h> in asm-generic/barrier.h with <asm/rwonce.h> too (so it's
still circular, but at least a lot simpler).

I'll have a play...
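
Roughly, the include graph I have in mind would end up like this (just
a sketch, not the actual patch):

/* include/asm-generic/rwonce.h */
#include <linux/compiler_types.h>	/* no more <asm/barrier.h> here */

/* arch/alpha/include/asm/rwonce.h */
#include <asm/barrier.h>		/* alpha still needs mb() */

/* include/asm-generic/barrier.h */
#include <asm/rwonce.h>			/* instead of <linux/compiler.h> */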

Will


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 19:25   ` Arnd Bergmann
@ 2020-07-01 10:19     ` Will Deacon
  2020-07-01 10:59       ` Arnd Bergmann
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-01 10:19 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Tue, Jun 30, 2020 at 09:25:03PM +0200, Arnd Bergmann wrote:
> On Tue, Jun 30, 2020 at 7:39 PM Will Deacon <will@kernel.org> wrote:
> > +#define __READ_ONCE(x)                                                 \
> > +({                                                                     \
> > +       int atomic = 1;                                                 \
> > +       union { __unqual_scalar_typeof(x) __val; char __c[1]; } __u;    \
> > +       typeof(&(x)) __x = &(x);                                        \
> > +       switch (sizeof(x)) {                                            \
> ...
> > +       atomic ? (typeof(x))__u.__val : (*(volatile typeof(x) *)__x);   \
> > +})
> 
> This expands (x) nine times (five in __unqual_scalar_typeof()), which can
> lead to significant code bloat after preprocessing if something passes a
> compound expression into READ_ONCE().
> The compiler works it out eventually, but we've seen an actual slowdown
> in compile speed from this recently, especially on clang.
> 
> I think if you move the
> 
>         typeof(&(x)) __x = &(x);
> 
> line first, all other instances can use typeof(*__x) instead of typeof(x)
> and avoid this problem.

Cheers, I was only thinking about side-effects when I wrote this, but
bloating build time is very unpopular, so I'll go with your suggestion.
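
i.e. something along these lines (just a sketch; the "..." elides the
switch body as in the hunk quoted above):

#define __READ_ONCE(x)							\
({									\
	typeof(&(x)) __x = &(x);	/* (x) expanded only once */	\
	int atomic = 1;							\
	union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u;	\
	switch (sizeof(*__x)) {						\
...
	atomic ? (typeof(*__x))__u.__val				\
	       : (*(volatile typeof(*__x) *)__x);			\
})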

> Once we make gcc-4.9 the minimum version,
> this could be further improved to
> 
>        __auto_type __x = &(x);

Is anybody working on moving to 4.9? I've seen the mails from Linus
championing it, but I thought there was a RHEL still in support that
people might care about?

Will


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 19:47   ` Marco Elver
  2020-06-30 20:20     ` Peter Zijlstra
  2020-06-30 22:57     ` Sami Tolvanen
@ 2020-07-01 10:24     ` Will Deacon
  2 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-07-01 10:24 UTC (permalink / raw)
  To: Marco Elver
  Cc: Mark Rutland, Kees Cook, Paul E. McKenney, Michael S. Tsirkin,
	Peter Zijlstra, Catalin Marinas, Jason Wang, Nick Desaulniers,
	Josh Triplett, Arnd Bergmann, LKML, Ivan Kokshaysky,
	Sami Tolvanen, linux-alpha, Alan Stern, Matt Turner,
	virtualization, Android Kernel Team, Boqun Feng,
	linux-arm-kernel, Richard Henderson

On Tue, Jun 30, 2020 at 09:47:30PM +0200, Marco Elver wrote:
> On Tue, 30 Jun 2020 at 19:39, Will Deacon <will@kernel.org> wrote:
> >
> > When building with LTO, there is an increased risk of the compiler
> > converting an address dependency headed by a READ_ONCE() invocation
> > into a control dependency and consequently allowing for harmful
> > reordering by the CPU.
> >
> > Ensure that such transformations are harmless by overriding the generic
> > READ_ONCE() definition with one that provides acquire semantics when
> > building with LTO.
> >
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> >  3 files changed, 65 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/rwonce.h
> 
> This seems reasonable, given we can't realistically tell the compiler
> about dependent loads. What, if any, is the performance impact? I
> guess this also heavily depends on the actual silicon.

Right, it depends both on the CPU micro-architecture and the workload.
When we ran some basic tests, the overhead wasn't greater than the benefit
seen by enabling LTO, so it seems like a reasonable trade-off (given that
LTO is a dependency for CFI, so it's not just about performance).

Will


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 22:57     ` Sami Tolvanen
@ 2020-07-01 10:25       ` Will Deacon
  0 siblings, 0 replies; 58+ messages in thread
From: Will Deacon @ 2020-07-01 10:25 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, Josh Triplett, LKML, Ivan Kokshaysky,
	linux-arm-kernel, linux-alpha, Alan Stern, Matt Turner,
	virtualization, Android Kernel Team, Boqun Feng, Arnd Bergmann,
	Richard Henderson

On Tue, Jun 30, 2020 at 03:57:54PM -0700, Sami Tolvanen wrote:
> On Tue, Jun 30, 2020 at 12:47 PM Marco Elver <elver@google.com> wrote:
> >
> > On Tue, 30 Jun 2020 at 19:39, Will Deacon <will@kernel.org> wrote:
> > >
> > > When building with LTO, there is an increased risk of the compiler
> > > converting an address dependency headed by a READ_ONCE() invocation
> > > into a control dependency and consequently allowing for harmful
> > > reordering by the CPU.
> > >
> > > Ensure that such transformations are harmless by overriding the generic
> > > READ_ONCE() definition with one that provides acquire semantics when
> > > building with LTO.
> > >
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> > >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> > >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> > >  3 files changed, 65 insertions(+), 2 deletions(-)
> > >  create mode 100644 arch/arm64/include/asm/rwonce.h
> >
> > This seems reasonable, given we can't realistically tell the compiler
> > about dependent loads. What, if any, is the performance impact? I
> > guess this also heavily depends on the actual silicon.
> >
> > I do wonder, though, if there is some way to make the compiler do
> > something better for us. Clearly, implementing real
> > memory_order_consume hasn't worked out to this day. But maybe the
> > compiler could promote dependent loads to acquires if it recognizes
> > that it lost dependencies during optimizations. Just thinking out
> > loud; it probably still has some weird corner case that will break. ;-)
> >
> > The other thing is that I'd be cautious blaming LTO, as I tried to
> > summarize here:
> > https://lore.kernel.org/kernel-hardening/20200630191931.GA884155@elver.google.com/
> >
> > The main thing is that, yes, this might be something to be worried
> > about, but if we are worried about it, we need to be worried about it
> > in *all* builds (LTO or not). My guess is that's not acceptable. Would
> > it be better to just guard the promotion of READ_ONCE() to acquire
> > behind a config option like CONFIG_ACQUIRE_READ_DEPENDENCIES, and then
> > make LTO select that (or maybe leave it optional)? In future, even for
> > very aggressive non-LTO compilers, one may then also select that option
> > if there is substantiated worry that things do actually break.
> 
> I agree, a separate config option would be better here.
> 
> Also, Will, the LTO patches use CONFIG_LTO_CLANG instead of CLANG_LTO.

D'oh, sorry. I'll fix that (I had that #ifdef commented out for my testing).

Will


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-01 10:19     ` Will Deacon
@ 2020-07-01 10:59       ` Arnd Bergmann
  0 siblings, 0 replies; 58+ messages in thread
From: Arnd Bergmann @ 2020-07-01 10:59 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Wed, Jul 1, 2020 at 12:19 PM Will Deacon <will@kernel.org> wrote:
> On Tue, Jun 30, 2020 at 09:25:03PM +0200, Arnd Bergmann wrote:
> > On Tue, Jun 30, 2020 at 7:39 PM Will Deacon <will@kernel.org> wrote:
> > Once we make gcc-4.9 the minimum version,
> > this could be further improved to
> >
> >        __auto_type __x = &(x);
>
> Is anybody working on moving to 4.9? I've seen the mails from Linus
> championing it, but I thought there was a RHEL still in support that
> people might care about?

I don't think there has been a serious discussion about it so far, and
we only just moved to gcc-4.8.

I think moving to gnu11 (gcc-4.9 or clang) instead of gnu99 has other
benefits as well, so we may well want to do it anyway when something
else comes up.

For __auto_type(), we could do it like

#if (clang or gcc-4.9+)
#define auto_typeof(x) __auto_type
#else
#define auto_typeof(x) typeof(x)
#endif

which could be used in a lot of macros.
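
Fleshed out with nothing but compiler builtins, that might look like
this (untested sketch; 'deref_once' is just a made-up example user):

#if defined(__clang__) || \
    (defined(__GNUC__) && (__GNUC__ > 4 || \
			   (__GNUC__ == 4 && __GNUC_MINOR__ >= 9)))
#define auto_typeof(x)	__auto_type	/* argument unused */
#else
#define auto_typeof(x)	typeof(x)
#endif

/* Evaluates (p) only once, whichever branch was taken above. */
#define deref_once(p)				\
({						\
	auto_typeof(p) __p = (p);		\
	*__p;					\
})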

     Arnd


* Re: [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h
  2020-07-01 10:16     ` [PATCH 02/18] compiler.h: Split {READ,WRITE}_ONCE " Will Deacon
@ 2020-07-01 11:33       ` Arnd Bergmann
  0 siblings, 0 replies; 58+ messages in thread
From: Arnd Bergmann @ 2020-07-01 11:33 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Wed, Jul 1, 2020 at 12:16 PM Will Deacon <will@kernel.org> wrote:
> On Tue, Jun 30, 2020 at 09:11:32PM +0200, Arnd Bergmann wrote:
> > On Tue, Jun 30, 2020 at 7:37 PM Will Deacon <will@kernel.org> wrote:
> > >
> > > In preparation for allowing architectures to define their own
> > > implementation of the READ_ONCE() macro, move the generic
> > > {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
> > > file and into a new 'rwonce.h' header under 'asm-generic'.
> > >
> > > Acked-by: Paul E. McKenney <paulmck@kernel.org>
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  include/asm-generic/Kbuild   |  1 +
> > >  include/asm-generic/rwonce.h | 91 ++++++++++++++++++++++++++++++++++++
> > >  include/linux/compiler.h     | 83 +-------------------------------
> >
> > Very nice, this has the added benefit of allowing us to stop including
> > asm/barrier.h once linux/compiler.h gets changed to not include
> > asm/rwonce.h.
>
> Yeah, with this series linux/compiler.h _does_ include asm/rwonce.h because
> otherwise there are many callers to fix up, but that could be addressed
> subsequently, I suppose.

Right, I didn't mean you should change that right away. I actually
have a work-in-progress patch series that removes a ton of
indirect header inclusions to improve build speed, and a script to
add the explicit includes into .c files where needed.

> > The asm/barrier.h header has a circular dependency, pulling in
> > linux/compiler.h itself.
>
> Hmm. Once smp_read_barrier_depends() disappears, I could actually remove
> the include of <asm/barrier.h> from asm-generic/rwonce.h. It would have to
> remain for arch/alpha/, however, since we need the barrier definitions to
> implement READ_ONCE(). I can probably also replace the include of
> <linux/compiler.h> in asm-generic/barrier.h with <asm/rwonce.h> too (so it's
> still circular, but at least a lot simpler).
>
> I'll have a play...

I think this would require the same kind of fixup for anything that depends on
asm/barrier.h being included implicitly at the moment.

      Arnd


* Re: [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h
  2020-06-30 17:37 ` [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h Will Deacon
@ 2020-07-01 16:38   ` Alexei Starovoitov
  0 siblings, 0 replies; 58+ messages in thread
From: Alexei Starovoitov @ 2020-07-01 16:38 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, Xiao Yang, Alexei Starovoitov,
	virtualization, Masahiro Yamada, Arnd Bergmann, Daniel Borkmann,
	Alan Stern, Sami Tolvanen, Matt Turner, Android Kernel Team,
	Marco Elver, Kees Cook, Paul E. McKenney, Boqun Feng,
	Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Richard Henderson, Nick Desaulniers, LKML, linux-alpha

On Tue, Jun 30, 2020 at 10:37 AM Will Deacon <will@kernel.org> wrote:
>
> Pulling header files directly out of the kernel sources for inclusion in
> userspace programs is highly error prone, not least because it bypasses
> the kbuild infrastructure entirely and so may end up referencing other
> header files that have not been generated.
>
> Subsequent patches will cause compiler.h to pull in the ungenerated
> asm/rwonce.h file via filter.h, breaking the build for tools/bpf:
>
>   | $ make -C tools/bpf
>   | make: Entering directory '/linux/tools/bpf'
>   |   CC       bpf_jit_disasm.o
>   |   LINK     bpf_jit_disasm
>   |   CC       bpf_dbg.o
>   | In file included from /linux/include/uapi/linux/filter.h:9,
>   |                  from /linux/tools/bpf/bpf_dbg.c:41:
>   | /linux/include/linux/compiler.h:247:10: fatal error: asm/rwonce.h: No such file or directory
>   |  #include <asm/rwonce.h>
>   |           ^~~~~~~~~~~~~~
>   | compilation terminated.
>   | make: *** [Makefile:61: bpf_dbg.o] Error 1
>   | make: Leaving directory '/linux/tools/bpf'
>
> Take a copy of the installed version of linux/filter.h (i.e. the one
> created by the 'headers_install' target) into tools/include/uapi/linux/
> and adjust the BPF tool Makefile to reference the local include
> directories instead of those in the main source tree.
>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
> Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
> Reported-by: Xiao Yang <ice_yangxiao@163.com>
> Signed-off-by: Will Deacon <will@kernel.org>

Acked-by: Alexei Starovoitov <ast@kernel.org>


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
  2020-06-30 19:25   ` Arnd Bergmann
  2020-06-30 19:47   ` Marco Elver
@ 2020-07-01 17:07   ` Dave P Martin
  2020-07-02  7:23     ` Will Deacon
  2020-07-06 16:08   ` Dave Martin
  3 siblings, 1 reply; 58+ messages in thread
From: Dave P Martin @ 2020-07-01 17:07 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> When building with LTO, there is an increased risk of the compiler
> converting an address dependency headed by a READ_ONCE() invocation
> into a control dependency and consequently allowing for harmful
> reordering by the CPU.
> 
> Ensure that such transformations are harmless by overriding the generic
> READ_ONCE() definition with one that provides acquire semantics when
> building with LTO.
> 
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/vdso/Makefile   |  2 +-
>  arch/arm64/kernel/vdso32/Makefile |  2 +-
>  3 files changed, 65 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/rwonce.h
> 
> diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> new file mode 100644
> index 000000000000..515e360b01a1
> --- /dev/null
> +++ b/arch/arm64/include/asm/rwonce.h
> @@ -0,0 +1,63 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Google LLC.
> + */
> +#ifndef __ASM_RWONCE_H
> +#define __ASM_RWONCE_H
> +
> +#ifdef CONFIG_CLANG_LTO

Don't we have a generic option for LTO that's not specific to Clang?

Also, can you illustrate code that can only be unsafe with Clang LTO?

[...]

Cheers
---Dave


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-01 17:07   ` Dave P Martin
@ 2020-07-02  7:23     ` Will Deacon
  2020-07-06 16:00       ` Dave Martin
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-02  7:23 UTC (permalink / raw)
  To: Dave P Martin
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > When building with LTO, there is an increased risk of the compiler
> > converting an address dependency headed by a READ_ONCE() invocation
> > into a control dependency and consequently allowing for harmful
> > reordering by the CPU.
> > 
> > Ensure that such transformations are harmless by overriding the generic
> > READ_ONCE() definition with one that provides acquire semantics when
> > building with LTO.
> > 
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> >  3 files changed, 65 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/rwonce.h
> > 
> > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > new file mode 100644
> > index 000000000000..515e360b01a1
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/rwonce.h
> > @@ -0,0 +1,63 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (C) 2020 Google LLC.
> > + */
> > +#ifndef __ASM_RWONCE_H
> > +#define __ASM_RWONCE_H
> > +
> > +#ifdef CONFIG_CLANG_LTO
> 
> Don't we have a generic option for LTO that's not specific to Clang?

/me looks at the LTO series some more

Oh yeah, there's CONFIG_LTO which is selected by CONFIG_LTO_CLANG, which is
the non-typoed version of the above. I can switch this to CONFIG_LTO.

> Also, can you illustrate code that can only be unsafe with Clang LTO?

I don't have a concrete example, but it's an ongoing concern over on the LTO
thread [1], so I cooked this to show one way we could deal with it. The main
concern is that the whole-program optimisations enabled by LTO may allow the
compiler to enumerate possible values for a pointer at link time and replace
an address dependency between two loads with a control dependency instead,
defeating the dependency ordering within the CPU.

We likely won't realise if/when this goes wrong, other than through
impossible-to-debug, subtle breakage that crops up seemingly randomly.
Ideally, we'd be able to detect this sort of thing happening at build
time, and perhaps even prevent it with compiler options or annotations,
but none of that is close to being available and I'm keen to progress
the LTO patches in the meantime because they are a requirement for CFI.
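
As a purely hypothetical illustration (not code from the series), the
transformation we're worried about is along these lines:

struct foo { int val; };
extern struct foo a, b;
struct foo *gp;

/* As written: address dependency from the load of 'gp' to the
 * load of '->val', which the CPU must respect. */
int reader(void)
{
	return READ_ONCE(gp)->val;
}

/* What a whole-program optimiser might emit if it can prove that
 * 'gp' only ever points to 'a' or 'b': the address dependency has
 * become a control dependency, so the CPU may speculate the data
 * load ahead of the load of 'gp'. */
int reader_lto(void)
{
	struct foo *p = READ_ONCE(gp);

	if (p == &a)
		return a.val;
	return b.val;
}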

Will

[1] https://lore.kernel.org/r/20200624203200.78870-1-samitolvanen@google.com


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-06-30 17:37 ` [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation Will Deacon
@ 2020-07-02  9:32   ` Mark Rutland
  2020-07-02  9:48     ` Will Deacon
  2020-07-02 14:43   ` Joel Fernandes
  1 sibling, 1 reply; 58+ messages in thread
From: Mark Rutland @ 2020-07-02  9:32 UTC (permalink / raw)
  To: Will Deacon
  Cc: Marco Elver, Kees Cook, Paul E. McKenney, Michael S. Tsirkin,
	Peter Zijlstra, Catalin Marinas, Jason Wang, Nick Desaulniers,
	linux-kernel, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Sami Tolvanen, linux-alpha, Alan Stern, Matt Turner,
	virtualization, kernel-team, Boqun Feng, Arnd Bergmann,
	Richard Henderson

On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> Rather then relying on the core code to use smp_read_barrier_depends()
> as part of the READ_ONCE() definition, instead override __READ_ONCE()
> in the Alpha code so that it is treated the same way as
> smp_load_acquire().
> 
> Acked-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/alpha/include/asm/barrier.h | 61 ++++----------------------------
>  arch/alpha/include/asm/rwonce.h  | 19 ++++++++++
>  2 files changed, 26 insertions(+), 54 deletions(-)
>  create mode 100644 arch/alpha/include/asm/rwonce.h
> 
> diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> index 92ec486a4f9e..2ecd068d91d1 100644
> --- a/arch/alpha/include/asm/barrier.h
> +++ b/arch/alpha/include/asm/barrier.h
> @@ -2,64 +2,17 @@
>  #ifndef __BARRIER_H
>  #define __BARRIER_H
>  
> -#include <asm/compiler.h>
> -
>  #define mb()	__asm__ __volatile__("mb": : :"memory")
>  #define rmb()	__asm__ __volatile__("mb": : :"memory")
>  #define wmb()	__asm__ __volatile__("wmb": : :"memory")

> -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p)						\
> +({									\
> +	__unqual_scalar_typeof(*p) ___p1 =				\
> +		(*(volatile typeof(___p1) *)(p));			\
> +	compiletime_assert_atomic_type(*p);				\
> +	___p1;								\
> +})

Sorry if I'm being thick, but doesn't this need a barrier after the
volatile access to provide the acquire semantic?

IIUC prior to this commit alpha would have used the asm-generic
__smp_load_acquire, i.e.

| #ifndef __smp_load_acquire
| #define __smp_load_acquire(p)                                           \
| ({                                                                      \
|         __unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);               \
|         compiletime_assert_atomic_type(*p);                             \
|         __smp_mb();                                                     \
|         (typeof(*p))___p1;                                              \
| })
| #endif

... where the __smp_mb() would be alpha's mb() from earlier in the patch
context, i.e.

| #define mb() __asm__ __volatile__("mb": : :"memory")

... so don't we need similar before returning ___p1 above in
__smp_load_acquire() (and also matching the old read_barrier_depends())?

[...]

> +#include <asm/barrier.h>
> +
> +/*
> + * Alpha is apparently daft enough to reorder address-dependent loads
> + * on some CPU implementations. Knock some common sense into it with
> + * a memory barrier in READ_ONCE().
> + */
> +#define __READ_ONCE(x)	__smp_load_acquire(&(x))

As above, I don't see a memory barrier implied here, so this doesn't
look quite right.

Thanks,
Mark.


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-07-02  9:32   ` Mark Rutland
@ 2020-07-02  9:48     ` Will Deacon
  2020-07-02 10:08       ` Arnd Bergmann
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-02  9:48 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marco Elver, Kees Cook, Paul E. McKenney, Michael S. Tsirkin,
	Peter Zijlstra, Catalin Marinas, Jason Wang, Nick Desaulniers,
	linux-kernel, Josh Triplett, Ivan Kokshaysky, linux-arm-kernel,
	Sami Tolvanen, linux-alpha, Alan Stern, Matt Turner,
	virtualization, kernel-team, Boqun Feng, Arnd Bergmann,
	Richard Henderson

On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > +#define __smp_load_acquire(p)						\
> > +({									\
> > +	__unqual_scalar_typeof(*p) ___p1 =				\
> > +		(*(volatile typeof(___p1) *)(p));			\
> > +	compiletime_assert_atomic_type(*p);				\
> > +	___p1;								\
> > +})
> 
> Sorry if I'm being thick, but doesn't this need a barrier after the
> volatile access to provide the acquire semantic?
> 
> IIUC prior to this commit alpha would have used the asm-generic
> __smp_load_acquire, i.e.
> 
> | #ifndef __smp_load_acquire
> | #define __smp_load_acquire(p)                                           \
> | ({                                                                      \
> |         __unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);               \
> |         compiletime_assert_atomic_type(*p);                             \
> |         __smp_mb();                                                     \
> |         (typeof(*p))___p1;                                              \
> | })
> | #endif
> 
> ... where the __smp_mb() would be alpha's mb() from earlier in the patch
> context, i.e.
> 
> | #define mb() __asm__ __volatile__("mb": : :"memory")
> 
> ... so don't we need similar before returning ___p1 above in
> __smp_load_acquire() (and also matching the old read_barrier_depends())?
> 
> [...]
> 
> > +#include <asm/barrier.h>
> > +
> > +/*
> > + * Alpha is apparently daft enough to reorder address-dependent loads
> > + * on some CPU implementations. Knock some common sense into it with
> > + * a memory barrier in READ_ONCE().
> > + */
> > +#define __READ_ONCE(x)	__smp_load_acquire(&(x))
> 
> As above, I don't see a memory barrier implied here, so this doesn't
> look quite right.

You're right, and Peter spotted the same thing off-list. I've reworked
things locally so that the mb() ends up in __READ_ONCE() and
__smp_load_acquire() calls __READ_ONCE() instead of the other way round
(which made more sense before the rework in the merge window).
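
Concretely, something like this for alpha (a sketch; the final patch
may differ):

#define __READ_ONCE(x)						\
({								\
	__unqual_scalar_typeof(x) __x =				\
		(*(volatile typeof(__x) *)&(x));		\
	mb();							\
	(typeof(x))__x;						\
})

so that alpha's __smp_load_acquire() can then simply wrap __READ_ONCE()
and inherit the barrier.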

Will


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-07-02  9:48     ` Will Deacon
@ 2020-07-02 10:08       ` Arnd Bergmann
  2020-07-02 11:18         ` Will Deacon
  0 siblings, 1 reply; 58+ messages in thread
From: Arnd Bergmann @ 2020-07-02 10:08 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Thu, Jul 2, 2020 at 11:48 AM Will Deacon <will@kernel.org> wrote:
> On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> > On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > > +#define __smp_load_acquire(p)                                              \
> > > +({                                                                 \
> > > +   __unqual_scalar_typeof(*p) ___p1 =                              \
> > > +           (*(volatile typeof(___p1) *)(p));                       \
> > > +   compiletime_assert_atomic_type(*p);                             \
> > > +   ___p1;                                                          \
> > > +})
> >
> > Sorry if I'm being thick, but doesn't this need a barrier after the
> > volatile access to provide the acquire semantic?
> >
> > IIUC prior to this commit alpha would have used the asm-generic
> > __smp_load_acquire, i.e.
> >
> > | #ifndef __smp_load_acquire
> > | #define __smp_load_acquire(p)                                           \
> > | ({                                                                      \
> > |         __unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);               \
> > |         compiletime_assert_atomic_type(*p);                             \
> > |         __smp_mb();                                                     \
> > |         (typeof(*p))___p1;                                              \
> > | })
> > | #endif

I also have a question that I didn't dare ask when the same
code came up before (I guess it's also what's in the kernel today):

With the cast to 'typeof(*p)' at the end, doesn't that mean we
lose the effect of __unqual_scalar_typeof() again, so any "volatile"
pointer passed into __READ_ONCE_SCALAR() or
__smp_load_acquire() still leads to a volatile load of the original
variable, plus another volatile access to ___p1 after
spilling it to the stack as a non-volatile variable?

I hope I'm missing something obvious here.

        Arnd


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-07-02 10:08       ` Arnd Bergmann
@ 2020-07-02 11:18         ` Will Deacon
  2020-07-02 11:39           ` Arnd Bergmann
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-02 11:18 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Thu, Jul 02, 2020 at 12:08:41PM +0200, Arnd Bergmann wrote:
> On Thu, Jul 2, 2020 at 11:48 AM Will Deacon <will@kernel.org> wrote:
> > On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> > > On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > > > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > > > +#define __smp_load_acquire(p)                                              \
> > > > +({                                                                 \
> > > > +   __unqual_scalar_typeof(*p) ___p1 =                              \
> > > > +           (*(volatile typeof(___p1) *)(p));                       \
> > > > +   compiletime_assert_atomic_type(*p);                             \
> > > > +   ___p1;                                                          \
> > > > +})
> > >
> > > Sorry if I'm being thick, but doesn't this need a barrier after the
> > > volatile access to provide the acquire semantic?
> > >
> > > IIUC prior to this commit alpha would have used the asm-generic
> > > __smp_load_acquire, i.e.
> > >
> > > | #ifndef __smp_load_acquire
> > > | #define __smp_load_acquire(p)                                           \
> > > | ({                                                                      \
> > > |         __unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);               \
> > > |         compiletime_assert_atomic_type(*p);                             \
> > > |         __smp_mb();                                                     \
> > > |         (typeof(*p))___p1;                                              \
> > > | })
> > > | #endif
> 
> I also have a question that I didn't dare ask when the same
> code came up before (I guess it's also what's in the kernel today):
> 
> With the cast to 'typeof(*p)' at the end, doesn't that mean we
> lose the effect of __unqual_scalar_typeof() again, so any "volatile"
> pointer passed into __READ_ONCE_SCALAR() or
> __smp_load_acquire() still leads to a volatile load of the original
> variable, plus another volatile access to ___p1 after
> spilling it to the stack as a non-volatile variable?

Not sure I follow you here, but I can confirm that what you're worried
about doesn't happen for the usual case of a pointer-to-volatile scalar.

For example, ignoring dependency ordering:

unsigned long foo(volatile unsigned long *p)
{
	return smp_load_acquire(p) + 1;
}

Ends up looking like:

	unsigned long ___p1 = *(const volatile unsigned long *)p;
	smp_mb();
	(volatile unsigned long)___p1;

My understanding is that casting a non-pointer type to volatile doesn't
do anything, so we're good.
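
To spell that out (hypothetical example, not kernel code):

volatile int v;

int foo(void)
{
	int x = v;		/* one volatile load */
	return (volatile int)x;	/* cast on an rvalue: no extra access */
}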

On the other hand, you can still cause the stack reload if you use volatile
pointers to volatile:

volatile unsigned long *bar(volatile unsigned long * volatile *ptr)
{
	return READ_ONCE(*ptr);
}

but this is pretty weird code, I think.

Will


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-07-02 11:18         ` Will Deacon
@ 2020-07-02 11:39           ` Arnd Bergmann
  0 siblings, 0 replies; 58+ messages in thread
From: Arnd Bergmann @ 2020-07-02 11:39 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Marco Elver, Kees Cook, Paul E. McKenney,
	Michael S. Tsirkin, Peter Zijlstra, Catalin Marinas, Jason Wang,
	Nick Desaulniers, linux-kernel, Josh Triplett, Ivan Kokshaysky,
	Sami Tolvanen, alpha, Alan Stern, Matt Turner, virtualization,
	Android Kernel Team, Boqun Feng, Linux ARM, Richard Henderson

On Thu, Jul 2, 2020 at 1:18 PM Will Deacon <will@kernel.org> wrote:
> On Thu, Jul 02, 2020 at 12:08:41PM +0200, Arnd Bergmann wrote:
> > On Thu, Jul 2, 2020 at 11:48 AM Will Deacon <will@kernel.org> wrote:
> > > On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:

> Not sure I follow you here, but I can confirm that what you're worried
> about doesn't happen for the usual case of a pointer-to-volatile scalar.
>
> For example, ignoring dependency ordering:
>
> unsigned long foo(volatile unsigned long *p)
> {
>         return smp_load_acquire(p) + 1;
> }
>
> Ends up looking like:
>
>         unsigned long ___p1 = *(const volatile unsigned long *)p;
>         smp_mb();
>         (volatile unsigned long)___p1;
>
> My understanding is that casting a non-pointer type to volatile doesn't
> do anything, so we're good.

Right, I mixed up the correct

        (typeof(*p))___p;

with the incorrect

       *(typeof(p))&___p;

which would dereference a volatile pointer and cause the
problem.

The code is all fine then.

    Arnd


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-06-30 17:37 ` [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation Will Deacon
  2020-07-02  9:32   ` Mark Rutland
@ 2020-07-02 14:43   ` Joel Fernandes
  2020-07-02 14:55     ` Will Deacon
  1 sibling, 1 reply; 58+ messages in thread
From: Joel Fernandes @ 2020-07-02 14:43 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization,
	Joel Fernandes (Google),
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	Cc: Android Kernel, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky,
	moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Richard Henderson, Nick Desaulniers, LKML, linux-alpha

On Tue, Jun 30, 2020 at 1:38 PM Will Deacon <will@kernel.org> wrote:
>
> Rather then relying on the core code to use smp_read_barrier_depends()
> as part of the READ_ONCE() definition, instead override __READ_ONCE()
> in the Alpha code so that it is treated the same way as
> smp_load_acquire().
>
> Acked-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/alpha/include/asm/barrier.h | 61 ++++----------------------------
>  arch/alpha/include/asm/rwonce.h  | 19 ++++++++++
>  2 files changed, 26 insertions(+), 54 deletions(-)
>  create mode 100644 arch/alpha/include/asm/rwonce.h
>
> diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> index 92ec486a4f9e..2ecd068d91d1 100644
> --- a/arch/alpha/include/asm/barrier.h
> +++ b/arch/alpha/include/asm/barrier.h
> @@ -2,64 +2,17 @@
>  #ifndef __BARRIER_H
>  #define __BARRIER_H
>
> -#include <asm/compiler.h>
> -
>  #define mb()   __asm__ __volatile__("mb": : :"memory")
>  #define rmb()  __asm__ __volatile__("mb": : :"memory")
>  #define wmb()  __asm__ __volatile__("wmb": : :"memory")
>
> -/**
> - * read_barrier_depends - Flush all pending reads that subsequents reads
> - * depend on.
> - *
> - * No data-dependent reads from memory-like regions are ever reordered
> - * over this barrier.  All reads preceding this primitive are guaranteed
> - * to access memory (but not necessarily other CPUs' caches) before any
> - * reads following this primitive that depend on the data return by
> - * any of the preceding reads.  This primitive is much lighter weight than
> - * rmb() on most CPUs, and is never heavier weight than is
> - * rmb().
> - *
> - * These ordering constraints are respected by both the local CPU
> - * and the compiler.
> - *
> - * Ordering is not guaranteed by anything other than these primitives,
> - * not even by data dependencies.  See the documentation for
> - * memory_barrier() for examples and URLs to more information.
> - *
> - * For example, the following code would force ordering (the initial
> - * value of "a" is zero, "b" is one, and "p" is "&a"):
> - *
> - * <programlisting>
> - *     CPU 0                           CPU 1
> - *
> - *     b = 2;
> - *     memory_barrier();
> - *     p = &b;                         q = p;
> - *                                     read_barrier_depends();
> - *                                     d = *q;
> - * </programlisting>
> - *
> - * because the read of "*q" depends on the read of "p" and these
> - * two reads are separated by a read_barrier_depends().  However,
> - * the following code, with the same initial values for "a" and "b":
> - *

Would it be Ok to keep this example in the kernel sources? I think it
serves as good documentation and highlights the issue in the Alpha
architecture well.

> - * <programlisting>
> - *     CPU 0                           CPU 1
> - *
> - *     a = 2;
> - *     memory_barrier();
> - *     b = 3;                          y = b;
> - *                                     read_barrier_depends();
> - *                                     x = a;
> - * </programlisting>
> - *
> - * does not enforce ordering, since there is no data dependency between
> - * the read of "a" and the read of "b".  Therefore, on some CPUs, such
> - * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
> - * in cases like this where there are no data dependencies.
> - */
> -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p)                                          \
> +({                                                                     \
> +       __unqual_scalar_typeof(*p) ___p1 =                              \
> +               (*(volatile typeof(___p1) *)(p));                       \
> +       compiletime_assert_atomic_type(*p);                             \
> +       ___p1;                                                          \
> +})

I had the same question as Mark about the need for a memory barrier
here; otherwise Alpha will break again, right? Looking forward to the
future fix you mentioned.

BTW, do you know of any architecture where speculative execution of
address-dependent loads can cause similar misorderings? That would be
pretty insane, though. In Alpha's case it is not speculation but rather
the split local cache design, as the docs mention. The reason I ask
is that it is pretty amusing that control-dependent loads do have such
misordering issues due to speculative branch execution, and I wondered
what other games the CPUs are playing. FWIW I ran into [1], which talks
about the analogy between memory dependence and control dependence.

[1] https://en.wikipedia.org/wiki/Memory_dependence_prediction


 - Joel


>
>  #ifdef CONFIG_SMP
>  #define __ASM_SMP_MB   "\tmb\n"
> diff --git a/arch/alpha/include/asm/rwonce.h b/arch/alpha/include/asm/rwonce.h
> new file mode 100644
> index 000000000000..83a92e49a615
> --- /dev/null
> +++ b/arch/alpha/include/asm/rwonce.h
> @@ -0,0 +1,19 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Google LLC.
> + */
> +#ifndef __ASM_RWONCE_H
> +#define __ASM_RWONCE_H
> +
> +#include <asm/barrier.h>
> +
> +/*
> + * Alpha is apparently daft enough to reorder address-dependent loads
> + * on some CPU implementations. Knock some common sense into it with
> + * a memory barrier in READ_ONCE().
> + */
> +#define __READ_ONCE(x) __smp_load_acquire(&(x))
> +
> +#include <asm-generic/rwonce.h>
> +
> +#endif /* __ASM_RWONCE_H */
> --
> 2.27.0.212.ge8ba1cc988-goog
>


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-07-02 14:43   ` Joel Fernandes
@ 2020-07-02 14:55     ` Will Deacon
  2020-07-02 15:07       ` Joel Fernandes
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-02 14:55 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization,
	Joel Fernandes (Google),
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	Cc: Android Kernel, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky,
	moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Richard Henderson, Nick Desaulniers, LKML, linux-alpha

Hi Joel,

On Thu, Jul 02, 2020 at 10:43:55AM -0400, Joel Fernandes wrote:
> On Tue, Jun 30, 2020 at 1:38 PM Will Deacon <will@kernel.org> wrote:
> > diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> > index 92ec486a4f9e..2ecd068d91d1 100644
> > --- a/arch/alpha/include/asm/barrier.h
> > +++ b/arch/alpha/include/asm/barrier.h
> > - * For example, the following code would force ordering (the initial
> > - * value of "a" is zero, "b" is one, and "p" is "&a"):
> > - *
> > - * <programlisting>
> > - *     CPU 0                           CPU 1
> > - *
> > - *     b = 2;
> > - *     memory_barrier();
> > - *     p = &b;                         q = p;
> > - *                                     read_barrier_depends();
> > - *                                     d = *q;
> > - * </programlisting>
> > - *
> > - * because the read of "*q" depends on the read of "p" and these
> > - * two reads are separated by a read_barrier_depends().  However,
> > - * the following code, with the same initial values for "a" and "b":
> > - *
> 
> Would it be Ok to keep this example in the kernel sources? I think it
> serves as good documentation and highlights the issue in the Alpha
> architecture well.

I'd _really_ like to remove it, as I think it only serves to confuse people
on a topic that is confusing enough already. Paul's perfbook [1] already has
plenty of information about this, so I don't think we need to repeat that
here. I could add a citation, perhaps?

> > - * <programlisting>
> > - *     CPU 0                           CPU 1
> > - *
> > - *     a = 2;
> > - *     memory_barrier();
> > - *     b = 3;                          y = b;
> > - *                                     read_barrier_depends();
> > - *                                     x = a;
> > - * </programlisting>
> > - *
> > - * does not enforce ordering, since there is no data dependency between
> > - * the read of "a" and the read of "b".  Therefore, on some CPUs, such
> > - * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
> > - * in cases like this where there are no data dependencies.
> > - */
> > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > +#define __smp_load_acquire(p)                                          \
> > +({                                                                     \
> > +       __unqual_scalar_typeof(*p) ___p1 =                              \
> > +               (*(volatile typeof(___p1) *)(p));                       \
> > +       compiletime_assert_atomic_type(*p);                             \
> > +       ___p1;                                                          \
> > +})
> 
> I had the same question as Mark about the need for a memory barrier
> here; otherwise Alpha will break again, right? Looking forward to the
> future fix you mentioned.

Yeah, sorry about that. It went missing somehow during the rebase, it seems.

> BTW, do you know of any architecture where speculative execution of
> address-dependent loads can cause similar misorderings? That would be
> pretty insane, though. In Alpha's case it is not speculation but rather
> the split local cache design, as the docs mention. The reason I ask
> is that it is pretty amusing that control-dependent loads do have such
> misordering issues due to speculative branch execution, and I wondered
> what other games the CPUs are playing. FWIW I ran into [1], which talks
> about the analogy between memory dependence and control dependence.

I think you're asking about value prediction, and the implications it would
have on address-dependent loads where the address can itself be predicted.
I'm not aware of any CPUs where that is observable architecturally.

arm64 has some load instructions that do not honour address dependencies,
but I believe that's mainly to enable alternative cache designs for things
like non-temporal and large vector loads.

Will

[1] https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html


* Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
  2020-07-02 14:55     ` Will Deacon
@ 2020-07-02 15:07       ` Joel Fernandes
  0 siblings, 0 replies; 58+ messages in thread
From: Joel Fernandes @ 2020-07-02 15:07 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization,
	Joel Fernandes (Google),
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	Cc: Android Kernel, Marco Elver, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Ivan Kokshaysky,
	moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Richard Henderson, Nick Desaulniers, LKML, linux-alpha

On Thu, Jul 2, 2020 at 10:55 AM Will Deacon <will@kernel.org> wrote:
> On Thu, Jul 02, 2020 at 10:43:55AM -0400, Joel Fernandes wrote:
> > On Tue, Jun 30, 2020 at 1:38 PM Will Deacon <will@kernel.org> wrote:
> > > diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> > > index 92ec486a4f9e..2ecd068d91d1 100644
> > > --- a/arch/alpha/include/asm/barrier.h
> > > +++ b/arch/alpha/include/asm/barrier.h
> > > - * For example, the following code would force ordering (the initial
> > > - * value of "a" is zero, "b" is one, and "p" is "&a"):
> > > - *
> > > - * <programlisting>
> > > - *     CPU 0                           CPU 1
> > > - *
> > > - *     b = 2;
> > > - *     memory_barrier();
> > > - *     p = &b;                         q = p;
> > > - *                                     read_barrier_depends();
> > > - *                                     d = *q;
> > > - * </programlisting>
> > > - *
> > > - * because the read of "*q" depends on the read of "p" and these
> > > - * two reads are separated by a read_barrier_depends().  However,
> > > - * the following code, with the same initial values for "a" and "b":
> > > - *
> >
> > Would it be Ok to keep this example in the kernel sources? I think it
> > serves as good documentation and highlights the issue in the Alpha
> > architecture well.
>
> I'd _really_ like to remove it, as I think it only serves to confuse people
> on a topic that is confusing enough already. Paul's perfbook [1] already has
> plenty of information about this, so I don't think we need to repeat that
> here. I could add a citation, perhaps?

True, and I also found that the LKMM docs and memory-barriers.txt talk
about it, so removing it here sounds good to me. Maybe a reference
here to either document would be OK.

> > BTW, do you know of any architecture where speculative execution of
> > address-dependent loads can cause similar misorderings? That would be
> > pretty insane, though. In Alpha's case it is not speculation but rather
> > the split local cache design, as the docs mention. The reason I ask
> > is that it is pretty amusing that control-dependent loads do have such
> > misordering issues due to speculative branch execution, and I wondered
> > what other games the CPUs are playing. FWIW I ran into [1], which talks
> > about the analogy between memory dependence and control dependence.
>
> I think you're asking about value prediction, and the implications it would
> have on address-dependent loads where the address can itself be predicted.

Yes.

> I'm not aware of any CPUs where that is observable architecturally.

I see.

> arm64 has some load instructions that do not honour address dependencies,
> but I believe that's mainly to enable alternative cache designs for things
> like non-temporal and large vector loads.

Good to know this, thanks.

 - Joel


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-02  7:23     ` Will Deacon
@ 2020-07-06 16:00       ` Dave Martin
  2020-07-06 16:34         ` Paul E. McKenney
  2020-07-06 18:35         ` Will Deacon
  0 siblings, 2 replies; 58+ messages in thread
From: Dave Martin @ 2020-07-06 16:00 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > > When building with LTO, there is an increased risk of the compiler
> > > converting an address dependency headed by a READ_ONCE() invocation
> > > into a control dependency and consequently allowing for harmful
> > > reordering by the CPU.
> > > 
> > > Ensure that such transformations are harmless by overriding the generic
> > > READ_ONCE() definition with one that provides acquire semantics when
> > > building with LTO.
> > > 
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> > >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> > >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> > >  3 files changed, 65 insertions(+), 2 deletions(-)
> > >  create mode 100644 arch/arm64/include/asm/rwonce.h
> > > 
> > > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > > new file mode 100644
> > > index 000000000000..515e360b01a1
> > > --- /dev/null
> > > +++ b/arch/arm64/include/asm/rwonce.h
> > > @@ -0,0 +1,63 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright (C) 2020 Google LLC.
> > > + */
> > > +#ifndef __ASM_RWONCE_H
> > > +#define __ASM_RWONCE_H
> > > +
> > > +#ifdef CONFIG_CLANG_LTO
> > 
> > Don't we have a generic option for LTO that's not specific to Clang?
> 
> /me looks at the LTO series some more
> 
> Oh yeah, there's CONFIG_LTO which is selected by CONFIG_LTO_CLANG, which is
> the non-typoed version of the above. I can switch this to CONFIG_LTO.
> 
> > Also, can you illustrate code that can only be unsafe with Clang LTO?
> 
> I don't have a concrete example, but it's an ongoing concern over on the LTO
> thread [1], so I cooked this to show one way we could deal with it. The main
> concern is that the whole-program optimisations enabled by LTO may allow the
> compiler to enumerate possible values for a pointer at link time and replace
> an address dependency between two loads with a control dependency instead,
> defeating the dependency ordering within the CPU.

Why can't that happen without LTO?

> We likely won't realise if/when this goes wrong, other than impossible to
> debug, subtle breakage that crops up seemingly randomly. Ideally, we'd be
> able to detect this sort of thing happening at build time, and perhaps
> even prevent it with compiler options or annotations, but none of that is
> close to being available and I'm keen to progress the LTO patches in the
> meantime because they are a requirement for CFI.

My concern was not so much why LTO makes things dangerous, as why !LTO
makes things safe...

Cheers
---Dave


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
                     ` (2 preceding siblings ...)
  2020-07-01 17:07   ` Dave P Martin
@ 2020-07-06 16:08   ` Dave Martin
  2020-07-06 18:35     ` Will Deacon
  3 siblings, 1 reply; 58+ messages in thread
From: Dave Martin @ 2020-07-06 16:08 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> When building with LTO, there is an increased risk of the compiler
> converting an address dependency headed by a READ_ONCE() invocation
> into a control dependency and consequently allowing for harmful
> reordering by the CPU.
> 
> Ensure that such transformations are harmless by overriding the generic
> READ_ONCE() definition with one that provides acquire semantics when
> building with LTO.
> 
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/vdso/Makefile   |  2 +-
>  arch/arm64/kernel/vdso32/Makefile |  2 +-
>  3 files changed, 65 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/rwonce.h
> 
> diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> new file mode 100644
> index 000000000000..515e360b01a1
> --- /dev/null
> +++ b/arch/arm64/include/asm/rwonce.h
> @@ -0,0 +1,63 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Google LLC.
> + */
> +#ifndef __ASM_RWONCE_H
> +#define __ASM_RWONCE_H
> +
> +#ifdef CONFIG_CLANG_LTO
> +
> +#include <linux/compiler_types.h>
> +#include <asm/alternative-macros.h>
> +
> +#ifndef BUILD_VDSO
> +
> +#ifdef CONFIG_AS_HAS_LDAPR
> +#define __LOAD_RCPC(sfx, regs...)					\
> +	ALTERNATIVE(							\
> +		"ldar"	#sfx "\t" #regs,				\

^ Should this be here?  It seems that READ_ONCE() will actually read
twice... even if that doesn't actually conflict with the required
semantics of READ_ONCE(), it looks odd.

Making a direct link between LTO and the memory model also seems highly
spurious (as discussed in the other subthread), so can we have a comment
explaining the reasoning?

> +		".arch_extension rcpc\n"				\
> +		"ldapr"	#sfx "\t" #regs,				\
> +	ARM64_HAS_LDAPR)
> +#else
> +#define __LOAD_RCPC(sfx, regs...)	"ldar" #sfx "\t" #regs
> +#endif /* CONFIG_AS_HAS_LDAPR */

[...]

Cheers
---Dave


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 16:00       ` Dave Martin
@ 2020-07-06 16:34         ` Paul E. McKenney
  2020-07-06 17:05           ` Dave Martin
  2020-07-06 18:35         ` Will Deacon
  1 sibling, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2020-07-06 16:34 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Arnd Bergmann, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > > > When building with LTO, there is an increased risk of the compiler
> > > > converting an address dependency headed by a READ_ONCE() invocation
> > > > into a control dependency and consequently allowing for harmful
> > > > reordering by the CPU.
> > > > 
> > > > Ensure that such transformations are harmless by overriding the generic
> > > > READ_ONCE() definition with one that provides acquire semantics when
> > > > building with LTO.
> > > > 
> > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > ---
> > > >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> > > >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> > > >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> > > >  3 files changed, 65 insertions(+), 2 deletions(-)
> > > >  create mode 100644 arch/arm64/include/asm/rwonce.h
> > > > 
> > > > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > > > new file mode 100644
> > > > index 000000000000..515e360b01a1
> > > > --- /dev/null
> > > > +++ b/arch/arm64/include/asm/rwonce.h
> > > > @@ -0,0 +1,63 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright (C) 2020 Google LLC.
> > > > + */
> > > > +#ifndef __ASM_RWONCE_H
> > > > +#define __ASM_RWONCE_H
> > > > +
> > > > +#ifdef CONFIG_CLANG_LTO
> > > 
> > > Don't we have a generic option for LTO that's not specific to Clang?
> > 
> > /me looks at the LTO series some more
> > 
> > Oh yeah, there's CONFIG_LTO which is selected by CONFIG_LTO_CLANG, which is
> > the non-typoed version of the above. I can switch this to CONFIG_LTO.
> > 
> > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > 
> > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > thread [1], so I cooked this to show one way we could deal with it. The main
> > concern is that the whole-program optimisations enabled by LTO may allow the
> > compiler to enumerate possible values for a pointer at link time and replace
> > an address dependency between two loads with a control dependency instead,
> > defeating the dependency ordering within the CPU.
> 
> Why can't that happen without LTO?

Because without LTO, the compiler cannot see all the pointers all at
the same time due to their being in different translation units.

But yes, if the compiler could see all the pointer values and further
-know- that it was seeing all the pointer values, these optimizations
could happen even without LTO.  But it is quite easy to make sure that
the compiler thinks that there are additional pointer values that it
does not know about.
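
For example, a sketch using the kernel's OPTIMIZER_HIDE_VAR() helper
(with "gp" standing in for whatever shared pointer is involved):

	int *p = READ_ONCE(gp);

	OPTIMIZER_HIDE_VAR(p);	/* empty asm that launders p, so the
				 * compiler can no longer enumerate
				 * its possible values */
	r = *p;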

> > We likely won't realise if/when this goes wrong, other than impossible to
> > debug, subtle breakage that crops up seemingly randomly. Ideally, we'd be
> > able to detect this sort of thing happening at build time, and perhaps
> > even prevent it with compiler options or annotations, but none of that is
> > close to being available and I'm keen to progress the LTO patches in the
> > meantime because they are a requirement for CFI.
> 
> My concern was not so much why LTO makes things dangerous, as why !LTO
> makes things safe...

Because ignorant compilers are safe compilers!  ;-)

							Thanx, Paul


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 16:34         ` Paul E. McKenney
@ 2020-07-06 17:05           ` Dave Martin
  2020-07-06 17:36             ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Martin @ 2020-07-06 17:05 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Arnd Bergmann, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 09:34:55AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> > On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > > On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > > > > When building with LTO, there is an increased risk of the compiler
> > > > > converting an address dependency headed by a READ_ONCE() invocation
> > > > > into a control dependency and consequently allowing for harmful
> > > > > reordering by the CPU.
> > > > > 
> > > > > Ensure that such transformations are harmless by overriding the generic
> > > > > READ_ONCE() definition with one that provides acquire semantics when
> > > > > building with LTO.
> > > > > 
> > > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > > ---
> > > > >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> > > > >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> > > > >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> > > > >  3 files changed, 65 insertions(+), 2 deletions(-)
> > > > >  create mode 100644 arch/arm64/include/asm/rwonce.h
> > > > > 
> > > > > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > > > > new file mode 100644
> > > > > index 000000000000..515e360b01a1
> > > > > --- /dev/null
> > > > > +++ b/arch/arm64/include/asm/rwonce.h
> > > > > @@ -0,0 +1,63 @@
> > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > +/*
> > > > > + * Copyright (C) 2020 Google LLC.
> > > > > + */
> > > > > +#ifndef __ASM_RWONCE_H
> > > > > +#define __ASM_RWONCE_H
> > > > > +
> > > > > +#ifdef CONFIG_CLANG_LTO
> > > > 
> > > > Don't we have a generic option for LTO that's not specific to Clang?
> > > 
> > > /me looks at the LTO series some more
> > > 
> > > Oh yeah, there's CONFIG_LTO which is selected by CONFIG_LTO_CLANG, which is
> > > the non-typoed version of the above. I can switch this to CONFIG_LTO.
> > > 
> > > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > > 
> > > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > > thread [1], so I cooked this to show one way we could deal with it. The main
> > > concern is that the whole-program optimisations enabled by LTO may allow the
> > > compiler to enumerate possible values for a pointer at link time and replace
> > > an address dependency between two loads with a control dependency instead,
> > > defeating the dependency ordering within the CPU.
> > 
> > Why can't that happen without LTO?
> 
> Because without LTO, the compiler cannot see all the pointers all at
> the same time due to their being in different translation units.
> 
> But yes, if the compiler could see all the pointer values and further
> -know- that it was seeing all the pointer values, these optimizations
> could happen even without LTO.  But it is quite easy to make sure that
> the compiler thinks that there are additional pointer values that it
> does not know about.

Yes of course, but even without LTO the compiler can still apply this
optimisation to everything visible in the translation unit, and that can
drift as people refactor code over time.

Convincing the compiler there are other possible values doesn't help.
Even in

int foo(int *p)
{
	asm ("" : "+r" (p));
	return *p;
}

Can't the compiler still generate something like this (where "a" and "b"
are globals whose addresses it happens to know):

	switch (p) {
	case &a:
		return a;

	case &b:
		return b;

	default:
		return *p;
	}

...in which case we still have the same lost ordering guarantee that
we were trying to enforce.

If a and b already happen to be in registers and profiling shows
that &a and &b are the most likely values of p, then this might be
a reasonable optimisation in some situations, irrespective of LTO.

The underlying problem here seems to be that the necessary ordering
rule is not part of what passes for the C memory model prior to C11.
If we want to control the data flow, don't we have to wrap the entire
dereference in a macro?

> > > We likely won't realise if/when this goes wrong, other than impossible to
> > > debug, subtle breakage that crops up seemingly randomly. Ideally, we'd be
> > > able to detect this sort of thing happening at build time, and perhaps
> > > even prevent it with compiler options or annotations, but none of that is
> > > close to being available and I'm keen to progress the LTO patches in the
> > > meantime because they are a requirement for CFI.
> > 
> > My concern was not so much why LTO makes things dangerous, as why !LTO
> > makes things safe...
> 
> Because ignorant compilers are safe compilers!  ;-)

AFAICT ignorance is no guarantee of ordering in general -- the compiler
is free to speculatively invent knowledge any place that the language
spec allows it to.  !LTO doesn't stop this happening.

Hopefully some of the knowledge I invented in my reply is valid...

Cheers
---Dave


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 17:05           ` Dave Martin
@ 2020-07-06 17:36             ` Paul E. McKenney
  2020-07-07 10:29               ` Dave Martin
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2020-07-06 17:36 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Arnd Bergmann, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 06:05:57PM +0100, Dave Martin wrote:
> On Mon, Jul 06, 2020 at 09:34:55AM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> > > On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > > > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > > > On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > > > > > When building with LTO, there is an increased risk of the compiler
> > > > > > converting an address dependency headed by a READ_ONCE() invocation
> > > > > > into a control dependency and consequently allowing for harmful
> > > > > > reordering by the CPU.
> > > > > > 
> > > > > > Ensure that such transformations are harmless by overriding the generic
> > > > > > READ_ONCE() definition with one that provides acquire semantics when
> > > > > > building with LTO.
> > > > > > 
> > > > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > > > ---
> > > > > >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> > > > > >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> > > > > >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> > > > > >  3 files changed, 65 insertions(+), 2 deletions(-)
> > > > > >  create mode 100644 arch/arm64/include/asm/rwonce.h
> > > > > > 
> > > > > > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > > > > > new file mode 100644
> > > > > > index 000000000000..515e360b01a1
> > > > > > --- /dev/null
> > > > > > +++ b/arch/arm64/include/asm/rwonce.h
> > > > > > @@ -0,0 +1,63 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > +/*
> > > > > > + * Copyright (C) 2020 Google LLC.
> > > > > > + */
> > > > > > +#ifndef __ASM_RWONCE_H
> > > > > > +#define __ASM_RWONCE_H
> > > > > > +
> > > > > > +#ifdef CONFIG_CLANG_LTO
> > > > > 
> > > > > Don't we have a generic option for LTO that's not specific to Clang?
> > > > 
> > > > /me looks at the LTO series some more
> > > > 
> > > > Oh yeah, there's CONFIG_LTO which is selected by CONFIG_LTO_CLANG, which is
> > > > the non-typoed version of the above. I can switch this to CONFIG_LTO.
> > > > 
> > > > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > > > 
> > > > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > > > thread [1], so I cooked this to show one way we could deal with it. The main
> > > > concern is that the whole-program optimisations enabled by LTO may allow the
> > > > compiler to enumerate possible values for a pointer at link time and replace
> > > > an address dependency between two loads with a control dependency instead,
> > > > defeating the dependency ordering within the CPU.
> > > 
> > > Why can't that happen without LTO?
> > 
> > Because without LTO, the compiler cannot see all the pointers all at
> > the same time due to their being in different translation units.
> > 
> > But yes, if the compiler could see all the pointer values and further
> > -know- that it was seeing all the pointer values, these optimizations
> > could happen even without LTO.  But it is quite easy to make sure that
> > the compiler thinks that there are additional pointer values that it
> > does not know about.
> 
> Yes of course, but even without LTO the compiler can still apply this
> optimisation to everything visible in the translation unit, and that can
> drift as people refactor code over time.
> 
> Convincing the compiler there are other possible values doesn't help.
> Even in
> 
> int foo(int *p)
> {
> 	asm ("" : "+r" (p));
> 	return *p;
> }
> 
> Can't the compiler still generate something like this (where "a" and "b"
> are globals whose addresses it happens to know):
> 
> 	switch (p) {
> 	case &a:
> 		return a;
> 
> 	case &b:
> 		return b;
> 
> 	default:
> 		return *p;
> 	}
> 
> ...in which case we still have the same lost ordering guarantee that
> we were trying to enforce.
> 
> If a and b already happen to be in registers and profiling shows
> that &a and &b are the most likely values of p, then this might be
> a reasonable optimisation in some situations, irrespective of LTO.

Agreed, the additional information from profile-driven optimization
can be just as damaging as that from LTO.

> The underlying problem here seems to be that the necessary ordering
> rule is not part of what passes for the C memory model prior to C11.
> If we want to control the data flow, don't we have to wrap the entire
> dereference in a macro?

Yes, exactly.  Because we are relying on things that are not guaranteed
by the C memory model, we need to pay attention to the implementations.
As I have said elsewhere, the price of control dependencies is eternal
vigilance.

And this also applies, to a lesser extent, to address and data
dependencies, which are also not well supported by the C standard.

There is one important case in which the C memory model -does- support
control dependencies, and that is when the dependent write is a normal
C-language write that is not involved in a data race.  In that case,
if the compiler broke the control dependency, it might have introduced
a data race, which it is forbidden to do.  However, this rule can also
be broken when the compiler knows too much, as it might be able to prove
that breaking the dependency won't introduce a data race.  In that case,
according to the standard, it is free to break the dependency.
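
As a sketch of that case:

	if (READ_ONCE(a))
		b = 1;	/* normal C-language write */

Hoisting the store to "b" above the marked load would make it happen on
executions where it otherwise would not, which could introduce a data
race with another thread accessing "b" -- so the compiler has to keep
the order. Once it can prove that no other thread touches "b", that
argument evaporates and the standard lets it break the dependency.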

> > > > We likely won't realise if/when this goes wrong, other than impossible to
> > > > debug, subtle breakage that crops up seemingly randomly. Ideally, we'd be
> > > > able to detect this sort of thing happening at build time, and perhaps
> > > > even prevent it with compiler options or annotations, but none of that is
> > > > close to being available and I'm keen to progress the LTO patches in the
> > > > meantime because they are a requirement for CFI.
> > > 
> > > My concern was not so much why LTO makes things dangerous, as why !LTO
> > > makes things safe...
> > 
> > Because ignorant compilers are safe compilers!  ;-)
> 
> AFAICT ignorance is no guarantee of ordering in general -- the compiler
> is free to speculatively invent knowledge any place that the language
> spec allows it to.  !LTO doesn't stop this happening.

Agreed, according to the standard, the compiler has great freedom.

We have two choices: (1) Restrict ourselves to live within the confines of
the standard or (2) Pay continued close attention to the implementation.
We have made different choices at different times, but for many ordering
situations we have gone with door #2.

Me, I have been working to get the standard to better support our
use case.  This is at best slow going.  But don't take my word for it,
ask Will.

> Hopefully some of the knowledge I invented in my reply is valid...

It is.  It is just that there are multiple valid strategies, and the
Linux kernel is currently taking a mixed-strategy approach.

							Thanx, Paul


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 16:08   ` Dave Martin
@ 2020-07-06 18:35     ` Will Deacon
  2020-07-07 10:10       ` Dave Martin
  0 siblings, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-06 18:35 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 05:08:20PM +0100, Dave Martin wrote:
> On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > new file mode 100644
> > index 000000000000..515e360b01a1
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/rwonce.h
> > @@ -0,0 +1,63 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (C) 2020 Google LLC.
> > + */
> > +#ifndef __ASM_RWONCE_H
> > +#define __ASM_RWONCE_H
> > +
> > +#ifdef CONFIG_CLANG_LTO
> > +
> > +#include <linux/compiler_types.h>
> > +#include <asm/alternative-macros.h>
> > +
> > +#ifndef BUILD_VDSO
> > +
> > +#ifdef CONFIG_AS_HAS_LDAPR
> > +#define __LOAD_RCPC(sfx, regs...)					\
> > +	ALTERNATIVE(							\
> > +		"ldar"	#sfx "\t" #regs,				\
> 
> ^ Should this be here?  It seems that READ_ONCE() will actually read
> twice... even if that doesn't actually conflict with the required
> semantics of READ_ONCE(), it looks odd.

It's patched at runtime, so it's either LDAR or LDAPR.
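
Spelling that out, with illustrative registers (sketch):

	without FEAT_LRCPC:	ldar	w0, [x1]	// RCsc acquire
	with FEAT_LRCPC:	ldapr	w0, [x1]	// RCpc acquire

The ALTERNATIVE() assembles the LDAR by default, and boot-time patching
rewrites it to LDAPR on CPUs that advertise the capability, so only one
of the two loads is ever live.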

> Making a direct link between LTO and the memory model also seems highly
> spurious (as discussed in the other subthread), so can we have a comment
> explaining the reasoning?

Sure, although like I say, this is more about helping to progress that
conversation.

Will


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 16:00       ` Dave Martin
  2020-07-06 16:34         ` Paul E. McKenney
@ 2020-07-06 18:35         ` Will Deacon
  2020-07-06 19:23           ` Marco Elver
  1 sibling, 1 reply; 58+ messages in thread
From: Will Deacon @ 2020-07-06 18:35 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > 
> > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > thread [1], so I cooked this to show one way we could deal with it. The main
> > concern is that the whole-program optimisations enabled by LTO may allow the
> > compiler to enumerate possible values for a pointer at link time and replace
> > an address dependency between two loads with a control dependency instead,
> > defeating the dependency ordering within the CPU.
> 
> Why can't that happen without LTO?

It could, but I'd argue that it's considerably less likely because there
is less information available to the compiler to perform these sorts of
optimisations. It also doesn't appear to be happening in practice.

The current state of affairs is that, if/when we catch the compiler
performing harmful optimisations, we look for a way to disable them.
However, there are good reasons to enable LTO, so this is one way to
do that without having to worry about the potential impact on dependency
ordering.

Will


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 18:35         ` Will Deacon
@ 2020-07-06 19:23           ` Marco Elver
  2020-07-06 19:42             ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Marco Elver @ 2020-07-06 19:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, Android Kernel Team,
	Dave Martin, Kees Cook, Paul E. McKenney, Boqun Feng,
	Josh Triplett, Ivan Kokshaysky, Linux ARM, Richard Henderson,
	Nick Desaulniers, LKML, linux-alpha

On Mon, 6 Jul 2020 at 20:35, Will Deacon <will@kernel.org> wrote:
> On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> > On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > >
> > > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > > thread [1], so I cooked this to show one way we could deal with it. The main
> > > concern is that the whole-program optimisations enabled by LTO may allow the
> > > compiler to enumerate possible values for a pointer at link time and replace
> > > an address dependency between two loads with a control dependency instead,
> > > defeating the dependency ordering within the CPU.
> >
> > Why can't that happen without LTO?
>
> It could, but I'd argue that it's considerably less likely because there
> is less information available to the compiler to perform these sorts of
> optimisations. It also doesn't appear to be happening in practice.
>
> The current state of affairs is that, if/when we catch the compiler
> performing harmful optimisations, we look for a way to disable them.
> However, there are good reasons to enable LTO, so this is one way to
> do that without having to worry about the potential impact on dependency
> ordering.

If it's of any help, I'll see if we can implement a warning in LLVM
for when data dependencies somehow disappear (although I don't have
any cycles to pursue it right now myself). Until then, short of manual
inspection or encountering a bug in the wild, there is no proof that
any of this happens or doesn't happen.

Also, as some anecdotal evidence that it's extremely unlikely, even
with LTO: looking at the passes that LLVM runs, there are a number of
passes that seem to want to eliminate basic blocks, thereby getting
rid of branches. Intuitively, it makes sense, because branches are
expensive on most architectures (for GPU targets, I think it tries
even harder to get rid of branches). If we extend our reasoning and
assumptions of LTO's aggressiveness in that direction, we might
actually end up with fewer branches. That might be beneficial for the
data dependencies we worry about (but not so much for control
dependencies we want to keep). Still, no point in speculating (no pun
intended) until we have hard data on what actually happens. :-)

Thanks,
-- Marco


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 19:23           ` Marco Elver
@ 2020-07-06 19:42             ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2020-07-06 19:42 UTC (permalink / raw)
  To: Marco Elver
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Alan Stern, Sami Tolvanen, Matt Turner, Android Kernel Team,
	Dave Martin, Kees Cook, Arnd Bergmann, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, Linux ARM, Richard Henderson, Nick Desaulniers,
	LKML, linux-alpha

On Mon, Jul 06, 2020 at 09:23:26PM +0200, Marco Elver wrote:
> On Mon, 6 Jul 2020 at 20:35, Will Deacon <will@kernel.org> wrote:
> > On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> > > On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > > > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > > >
> > > > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > > > thread [1], so I cooked this to show one way we could deal with it. The main
> > > > concern is that the whole-program optimisations enabled by LTO may allow the
> > > > compiler to enumerate possible values for a pointer at link time and replace
> > > > an address dependency between two loads with a control dependency instead,
> > > > defeating the dependency ordering within the CPU.
> > >
> > > Why can't that happen without LTO?
> >
> > It could, but I'd argue that it's considerably less likely because there
> > is less information available to the compiler to perform these sorts of
> > optimisations. It also doesn't appear to be happening in practice.
> >
> > The current state of affairs is that, if/when we catch the compiler
> > performing harmful optimisations, we look for a way to disable them.
> > However, there are good reasons to enable LTO, so this is one way to
> > do that without having to worry about the potential impact on dependency
> > ordering.
> 
> If it's of any help, I'll see if we can implement a warning in LLVM
> for when data dependencies somehow disappear (although I don't have
> any cycles to pursue it right now myself). Until then, short of manual
> inspection or encountering a bug in the wild, there is no proof that
> any of this happens or doesn't happen.
> 
> Also, as some anecdotal evidence that it's extremely unlikely, even
> with LTO: looking at the passes that LLVM runs, there are a number of
> passes that seem to want to eliminate basic blocks, thereby getting
> rid of branches. Intuitively, it makes sense, because branches are
> expensive on most architectures (for GPU targets, I think it tries
> even harder to get rid of branches). If we extend our reasoning and
> assumptions of LTO's aggressiveness in that direction, we might
> actually end up with fewer branches. That might be beneficial for the
> data dependencies we worry about (but not so much for control
> dependencies we want to keep). Still, no point in speculating (no pun
> intended) until we have hard data on what actually happens. :-)

Anything along these lines would be very welcome!!!

							Thanx, Paul


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 18:35     ` Will Deacon
@ 2020-07-07 10:10       ` Dave Martin
  0 siblings, 0 replies; 58+ messages in thread
From: Dave Martin @ 2020-07-07 10:10 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Arnd Bergmann,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Paul E. McKenney, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 07:35:11PM +0100, Will Deacon wrote:
> On Mon, Jul 06, 2020 at 05:08:20PM +0100, Dave Martin wrote:
> > On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > > new file mode 100644
> > > index 000000000000..515e360b01a1
> > > --- /dev/null
> > > +++ b/arch/arm64/include/asm/rwonce.h
> > > @@ -0,0 +1,63 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright (C) 2020 Google LLC.
> > > + */
> > > +#ifndef __ASM_RWONCE_H
> > > +#define __ASM_RWONCE_H
> > > +
> > > +#ifdef CONFIG_CLANG_LTO
> > > +
> > > +#include <linux/compiler_types.h>
> > > +#include <asm/alternative-macros.h>
> > > +
> > > +#ifndef BUILD_VDSO
> > > +
> > > +#ifdef CONFIG_AS_HAS_LDAPR
> > > +#define __LOAD_RCPC(sfx, regs...)					\
> > > +	ALTERNATIVE(							\
> > > +		"ldar"	#sfx "\t" #regs,				\
> > 
> > ^ Should this be here?  It seems that READ_ONCE() will actually read
> > twice... even if that doesn't actually conflict with the required
> > semantics of READ_ONCE(), it looks odd.
> 
> It's patched at runtime, so it's either LDAR or LDAPR.

Agh, ignore me, I somehow failed to spot the ALTERNATIVE().

For my understanding -- my background here is a bit shaky -- the LDAPR
gives us load-to-load order even if there is just a control dependency?

If so (possibly dumb question): why can't we just turn this on
unconditionally?  Is there a significant performance impact?

I'm still confused (or ignorant) though.  If both loads are READ_ONCE()
then switching to LDAPR presumably helps, but otherwise, once the
compiler has reduced the address dependency to a control dependency,
can't it then go one step further and reverse the order of the loads?
LDAPR wouldn't rescue us from that.

Or does the "memory" clobber in READ_ONCE() fix that for all important
cases?  I can't see this mattering for local variables (where it
definitely won't work), but I wonder whether static variables might not
count as "memory" in some situations.

Discounting ridiculous things like static register variables, I think
the only way for a static variable not to count as memory would be if
there are no writes to it that are reachable from any translation unit
entry point (possibly after dead code removal).  If so, maybe that's
enough.

> > Making a direct link between LTO and the memory model also seems highly
> > spurious (as discussed in the other subthread), so can we have a comment
> > explaining the reasoning?
> 
> Sure, although like I say, this is more about helping to progress that
> conversation.

That's fair enough, but when there is a consensus it would be good to
see it documented in the code, _especially_ if we know that the fix won't
address all instances of the problem and in any case works partly by
accident.  That doesn't mean it's not a good practical compromise, but
it could be very confusing to unpick later on.

Cheers
---Dave


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-06 17:36             ` Paul E. McKenney
@ 2020-07-07 10:29               ` Dave Martin
  2020-07-07 22:51                 ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Martin @ 2020-07-07 10:29 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Arnd Bergmann, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Mon, Jul 06, 2020 at 10:36:28AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 06, 2020 at 06:05:57PM +0100, Dave Martin wrote:
> > On Mon, Jul 06, 2020 at 09:34:55AM -0700, Paul E. McKenney wrote:
> > > On Mon, Jul 06, 2020 at 05:00:23PM +0100, Dave Martin wrote:
> > > > On Thu, Jul 02, 2020 at 08:23:02AM +0100, Will Deacon wrote:
> > > > > On Wed, Jul 01, 2020 at 06:07:25PM +0100, Dave P Martin wrote:
> > > > > > On Tue, Jun 30, 2020 at 06:37:34PM +0100, Will Deacon wrote:
> > > > > > > When building with LTO, there is an increased risk of the compiler
> > > > > > > converting an address dependency headed by a READ_ONCE() invocation
> > > > > > > into a control dependency and consequently allowing for harmful
> > > > > > > reordering by the CPU.
> > > > > > > 
> > > > > > > Ensure that such transformations are harmless by overriding the generic
> > > > > > > READ_ONCE() definition with one that provides acquire semantics when
> > > > > > > building with LTO.
> > > > > > > 
> > > > > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > > > > ---
> > > > > > >  arch/arm64/include/asm/rwonce.h   | 63 +++++++++++++++++++++++++++++++
> > > > > > >  arch/arm64/kernel/vdso/Makefile   |  2 +-
> > > > > > >  arch/arm64/kernel/vdso32/Makefile |  2 +-
> > > > > > >  3 files changed, 65 insertions(+), 2 deletions(-)
> > > > > > >  create mode 100644 arch/arm64/include/asm/rwonce.h
> > > > > > > 
> > > > > > > diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..515e360b01a1
> > > > > > > --- /dev/null
> > > > > > > +++ b/arch/arm64/include/asm/rwonce.h
> > > > > > > @@ -0,0 +1,63 @@
> > > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > > +/*
> > > > > > > + * Copyright (C) 2020 Google LLC.
> > > > > > > + */
> > > > > > > +#ifndef __ASM_RWONCE_H
> > > > > > > +#define __ASM_RWONCE_H
> > > > > > > +
> > > > > > > +#ifdef CONFIG_CLANG_LTO
> > > > > > 
> > > > > > Don't we have a generic option for LTO that's not specific to Clang?
> > > > > 
> > > > > /me looks at the LTO series some more
> > > > > 
> > > > > Oh yeah, there's CONFIG_LTO which is selected by CONFIG_LTO_CLANG, which is
> > > > > the non-typoed version of the above. I can switch this to CONFIG_LTO.
> > > > > 
> > > > > > Also, can you illustrate code that can only be unsafe with Clang LTO?
> > > > > 
> > > > > I don't have a concrete example, but it's an ongoing concern over on the LTO
> > > > > thread [1], so I cooked this to show one way we could deal with it. The main
> > > > > concern is that the whole-program optimisations enabled by LTO may allow the
> > > > > compiler to enumerate possible values for a pointer at link time and replace
> > > > > an address dependency between two loads with a control dependency instead,
> > > > > defeating the dependency ordering within the CPU.
> > > > 
> > > > Why can't that happen without LTO?
> > > 
> > > Because without LTO, the compiler cannot see all the pointers all at
> > > the same time due to their being in different translation units.
> > > 
> > > But yes, if the compiler could see all the pointer values and further
> > > -know- that it was seeing all the pointer values, these optimizations
> > > could happen even without LTO.  But it is quite easy to make sure that
> > > the compiler thinks that there are additional pointer values that it
> > > does not know about.
> > 
> > Yes of course, but even without LTO the compiler can still apply this
> > optimisation to everything visible in the translation unit, and that can
> > drift as people refactor code over time.
> > 
> > Convincing the compiler there are other possible values doesn't help.
> > Even in
> > 
> > int foo(int *p)
> > {
> > 	asm ("" : "+r" (p));
> > 	return *p;
> > }
> > 
> > Can't the compiler still generate something like this (where "a" and "b"
> > are globals whose addresses it happens to know):
> > 
> > 	switch (p) {
> > 	case &a:
> > 		return a;
> > 
> > 	case &b:
> > 		return b;
> > 
> > 	default:
> > 		return *p;
> > 	}
> > 
> > ...in which case we still have the same lost ordering guarantee that
> > we were trying to enforce.
> > 
> > If a and b already happen to be in registers and profiling shows
> > that &a and &b are the most likely values of p, then this might be
> > a reasonable optimisation in some situations, irrespective of LTO.
> 
> Agreed, the additional information from profile-driven optimization
> can be just as damaging as that from LTO.
> 
> > The underlying problem here seems to be that the necessary ordering
> > rule is not part of what passes for the C memory model prior to C11.
> > If we want to control the data flow, don't we have to wrap the entire
> > dereference in a macro?
> 
> Yes, exactly.  Because we are relying on things that are not guaranteed
> by the C memory model, we need to pay attention to the implementations.
> As I have said elsewhere, the price of control dependencies is eternal
> vigilance.
> 
> And this also applies, to a lesser extent, to address and data
> dependencies, which are also not well supported by the C standard.
> 
> There is one important case in which the C memory model -does- support
> control dependencies, and that is when the dependent write is a normal
> C-language write that is not involved in a data race.  In that case,
> if the compiler broke the control dependency, it might have introduced
> a data race, which it is forbidden to do.  However, this rule can also
> be broken when the compiler knows too much, as it might be able to prove
> that breaking the dependency won't introduce a data race.  In that case,
> according to the standard, it is free to break the dependency.

Which only matters because the C abstract machine may not match reality.

LTO has no bearing on the abstract machine though.

If specific compiler options etc. can be added to inhibit the
problematic optimisations, that would be ideal.  I guess that can't
happen overnight though.

> > > > > We likely won't realise if/when this goes wrong, other than impossible to
> > > > > debug, subtle breakage that crops up seemingly randomly. Ideally, we'd be
> > > > > able to detect this sort of thing happening at build time, and perhaps
> > > > > even prevent it with compiler options or annotations, but none of that is
> > > > > close to being available and I'm keen to progress the LTO patches in the
> > > > > meantime because they are a requirement for CFI.
> > > > 
> > > > My concern was not so much why LTO makes things dangerous, as why !LTO
> > > > makes things safe...
> > > 
> > > Because ignorant compilers are safe compilers!  ;-)
> > 
> > AFAICT ignorance is no guarantee of ordering in general -- the compiler
> > is free to speculatively invent knowledge any place that the language
> > spec allows it to.  !LTO doesn't stop this happening.
> 
> Agreed, according to the standard, the compiler has great freedom.
> 
> We have two choices: (1) Restrict ourselves to live within the confines of
> the standard or (2) Pay continued close attention to the implementation.
> We have made different choices at different times, but for many ordering
> situations we have gone with door #2.
> 
> Me, I have been working to get the standard to better support our
> use case.  This is at best slow going.  But don't take my word for it,
> ask Will.

I can believe it.  They want to enable optimisations rather than prevent
them...

> > Hopefully some of the knowledge I invented in my reply is valid...
> 
> It is.  It is just that there are multiple valid strategies, and the
> Linux kernel is currently taking a mixed-strategy approach.

Ack.  The hope that there is a correct way to fix everything dies
hard ;)

Life was cosier before I started trying to reason about language specs.

Cheers
---Dave


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-07 10:29               ` Dave Martin
@ 2020-07-07 22:51                 ` Paul E. McKenney
  2020-07-07 23:01                   ` Nick Desaulniers
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2020-07-07 22:51 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Alan Stern, Sami Tolvanen, Matt Turner, kernel-team, Marco Elver,
	Kees Cook, Arnd Bergmann, Boqun Feng, Josh Triplett,
	Ivan Kokshaysky, linux-arm-kernel, Richard Henderson,
	Nick Desaulniers, linux-kernel, linux-alpha

On Tue, Jul 07, 2020 at 11:29:15AM +0100, Dave Martin wrote:
> On Mon, Jul 06, 2020 at 10:36:28AM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 06, 2020 at 06:05:57PM +0100, Dave Martin wrote:

[ . . . ]

> > > The underlying problem here seems to be that the necessary ordering
> > > rule is not part of what passes for the C memory model prior to C11.
> > > If we want to control the data flow, don't we have to wrap the entire
> > > dereference in a macro?
> > 
> > Yes, exactly.  Because we are relying on things that are not guaranteed
> > by the C memory model, we need to pay attention to the implementations.
> > As I have said elsewhere, the price of control dependencies is eternal
> > vigilance.
> > 
> > And this also applies, to a lesser extent, to address and data
> > dependencies, which are also not well supported by the C standard.
> > 
> > There is one important case in which the C memory model -does- support
> > control dependencies, and that is when the dependent write is a normal
> > C-language write that is not involved in a data race.  In that case,
> > if the compiler broke the control dependency, it might have introduced
> > a data race, which it is forbidden to do.  However, this rule can also
> > be broken when the compiler knows too much, as it might be able to prove
> > that breaking the dependency won't introduce a data race.  In that case,
> > according to the standard, it is free to break the dependency.
> 
> Which only matters because the C abstract machine may not match reality.
> 
> LTO has no bearing on the abstract machine though.
> 
> If specific compiler options etc. can be added to inhibit the
> problematic optimisations, that would be ideal.  I guess that can't
> happen overnight though.

Sadly, I must agree.

> > > > > > We likely won't realise if/when this goes wrong, other than impossible to
> > > > > > debug, subtle breakage that crops up seemingly randomly. Ideally, we'd be
> > > > > > able to detect this sort of thing happening at build time, and perhaps
> > > > > > even prevent it with compiler options or annotations, but none of that is
> > > > > > close to being available and I'm keen to progress the LTO patches in the
> > > > > > meantime because they are a requirement for CFI.
> > > > > 
> > > > > My concern was not so much why LTO makes things dangerous, as why !LTO
> > > > > makes things safe...
> > > > 
> > > > Because ignorant compilers are safe compilers!  ;-)
> > > 
> > > AFAICT ignorance is no guarantee of ordering in general -- the compiler
> > > is free to speculatively invent knowledge any place that the language
> > > spec allows it to.  !LTO doesn't stop this happening.
> > 
> > Agreed, according to the standard, the compiler has great freedom.
> > 
> > We have two choices: (1) Restrict ourselves to live within the confines of
> > the standard or (2) Pay continued close attention to the implementation.
> > We have made different choices at different times, but for many ordering
> > situations we have gone with door #2.
> > 
> > Me, I have been working to get the standard to better support our
> > use case.  This is at best slow going.  But don't take my word for it,
> > ask Will.
> 
> I can believe it.  They want to enable optimisations rather than prevent
> them...

Right in one!  ;-)

> > > Hopefully some of the knowledge I invented in my reply is valid...
> > 
> > It is.  It is just that there are multiple valid strategies, and the
> > Linux kernel is currently taking a mixed-strategy approach.
> 
> Ack.  The hope that there is a correct way to fix everything dies
> hard ;)

Either that, or one slowly degrades one's definition of "correct".  :-/

> Life was cosier before I started trying to reason about language specs.

Same here!

							Thanx, Paul


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-07 22:51                 ` Paul E. McKenney
@ 2020-07-07 23:01                   ` Nick Desaulniers
  2020-07-08  7:15                     ` Marco Elver
  2020-07-08  9:16                     ` Peter Zijlstra
  0 siblings, 2 replies; 58+ messages in thread
From: Nick Desaulniers @ 2020-07-07 23:01 UTC (permalink / raw)
  To: Paul E. McKenney, Dave Martin, Peter Zijlstra, Will Deacon,
	Sami Tolvanen, Marco Elver
  Cc: Mark Rutland, LKML, Kees Cook, Arnd Bergmann, Michael S. Tsirkin,
	Catalin Marinas, Jason Wang, Josh Triplett, Steven Rostedt,
	virtualization, Alan Stern, linux-alpha, Ivan Kokshaysky,
	Matt Turner, kernel-team, Boqun Feng, Linux ARM,
	Richard Henderson

I'm trying to put together a Micro Conference for the Linux Plumbers
Conference focused on "make LLVM slightly less shitty."  Do you all
plan on attending the conference? Would it be worthwhile to hold a
session focused on discussing this (LTO and memory models)?


On Tue, Jul 7, 2020 at 3:51 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> [ . . . ]

-- 
Thanks,
~Nick Desaulniers


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-07 23:01                   ` Nick Desaulniers
@ 2020-07-08  7:15                     ` Marco Elver
  2020-07-08  9:16                     ` Peter Zijlstra
  1 sibling, 0 replies; 58+ messages in thread
From: Marco Elver @ 2020-07-08  7:15 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Mark Rutland, Michael S. Tsirkin, Peter Zijlstra,
	Catalin Marinas, Jason Wang, virtualization, Will Deacon,
	Arnd Bergmann, Alan Stern, Sami Tolvanen, Matt Turner,
	kernel-team, Dave Martin, Kees Cook, Paul E. McKenney,
	Boqun Feng, Josh Triplett, Steven Rostedt, Ivan Kokshaysky,
	Linux ARM, Richard Henderson, LKML, linux-alpha

On Wed, 8 Jul 2020 at 01:01, Nick Desaulniers <ndesaulniers@google.com> wrote:
>
> I'm trying to put together a Micro Conference for the Linux Plumbers
> Conference focused on "make LLVM slightly less shitty."  Do you all
> plan on attending the conference? Would it be worthwhile to hold a
> session focused on discussing this (LTO and memory models)?

I would welcome sessions on LLVM, and would try to attend. Apart from
general improvements to the LLVM ecosystem, we should also emphasize
the benefits LLVM provides and how we can enable them (one reason we
want LTO is to get CFI).

Regarding LTO and memory models, I'm not sure. Given the current state
of things, such a discussion needs to be carefully framed so it does
not go in circles: we're trying to figure out things at the
intersection of the architecture, what the compiler does, the C
standard, and what the kernel wants. And because some of these boxes
are difficult to change (standard, arch, compiler) or have behaviour
that is difficult to define precisely (compiler), we might end up
going in circles anyway. From what I see there are efforts to fix the
situation at the root (standard), and we might have means to get the
compiler to tell us what it's doing. But these things happen extremely
slowly.

So, if we do this, we need to be careful to not end up re-discussing
what we discussed here, but rather try and make it a continuation that
hopefully leads to some constructive output.

Thanks,
-- Marco


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-07 23:01                   ` Nick Desaulniers
  2020-07-08  7:15                     ` Marco Elver
@ 2020-07-08  9:16                     ` Peter Zijlstra
  2020-07-08 18:20                       ` Paul E. McKenney
  1 sibling, 1 reply; 58+ messages in thread
From: Peter Zijlstra @ 2020-07-08  9:16 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Mark Rutland, Michael S. Tsirkin, Catalin Marinas, Jason Wang,
	virtualization, Will Deacon, Arnd Bergmann, Alan Stern,
	Sami Tolvanen, Matt Turner, kernel-team, Dave Martin,
	Marco Elver, Kees Cook, Paul E. McKenney, Boqun Feng,
	Josh Triplett, Steven Rostedt, Ivan Kokshaysky, Linux ARM,
	Richard Henderson, LKML, linux-alpha

On Tue, Jul 07, 2020 at 04:01:28PM -0700, Nick Desaulniers wrote:
> I'm trying to put together a Micro Conference for the Linux Plumbers
> Conference focused on "make LLVM slightly less shitty."  Do you all
> plan on attending the conference? Would it be worthwhile to hold a
> session focused on discussing this (LTO and memory models)?

I'd love to have a session about compilers and memory ordering with both
GCC and CLANG in attendance.

We need a solution for dependent-loads and control-dependencies for both
toolchains.
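
For the dependent-load side, the pattern at risk is something like the
following minimal sketch ("gp", "p", "r" and the "val" member are
made-up names for illustration):

	struct foo *p = rcu_dereference(gp);	/* load the pointer */
	int r = 0;

	if (p)
		r = p->val;	/* address depends on the value of p */

The address of the second load is computed from the value returned by
the first, and the CPU preserves that ordering in hardware (Alpha
aside) -- but only if the dependency survives compilation.  If
whole-program analysis (e.g. under LTO) lets the compiler prove that
"gp" only ever points at one particular object, it can replace
"p->val" with a load from that object's known address, and the
dependency, along with the ordering it provided, is gone.
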


* Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y
  2020-07-08  9:16                     ` Peter Zijlstra
@ 2020-07-08 18:20                       ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2020-07-08 18:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Michael S. Tsirkin, Catalin Marinas, Jason Wang,
	virtualization, Will Deacon, Alan Stern, Sami Tolvanen,
	Matt Turner, kernel-team, Dave Martin, Marco Elver, Kees Cook,
	Arnd Bergmann, Boqun Feng, Josh Triplett, Steven Rostedt,
	Ivan Kokshaysky, Linux ARM, Richard Henderson, Nick Desaulniers,
	LKML, linux-alpha

On Wed, Jul 08, 2020 at 11:16:20AM +0200, Peter Zijlstra wrote:
> On Tue, Jul 07, 2020 at 04:01:28PM -0700, Nick Desaulniers wrote:
> > I'm trying to put together a Micro Conference for the Linux Plumbers
> > Conference focused on "make LLVM slightly less shitty."  Do you all
> > plan on attending the conference? Would it be worthwhile to hold a
> > session focused on discussing this (LTO and memory models)?
> 
> I'd love to have a session about compilers and memory ordering with both
> GCC and CLANG in attendance.
> 
> We need a solution for dependent-loads and control-dependencies for both
> toolchains.

What Peter said!  ;-)

							Thanx, Paul


end of thread

Thread overview: 58+ messages
2020-06-30 17:37 [PATCH 00/18] Allow architectures to override __READ_ONCE() Will Deacon
2020-06-30 17:37 ` [PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h Will Deacon
2020-07-01 16:38   ` Alexei Starovoitov
2020-06-30 17:37 ` [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h Will Deacon
2020-06-30 19:11   ` Arnd Bergmann
2020-07-01 10:16     ` [PATCH 02/18] compiler.h: Split {READ,WRITE}_ONCE " Will Deacon
2020-07-01 11:33       ` [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE " Arnd Bergmann
2020-06-30 17:37 ` [PATCH 03/18] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture Will Deacon
2020-06-30 17:37 ` [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation Will Deacon
2020-07-02  9:32   ` Mark Rutland
2020-07-02  9:48     ` Will Deacon
2020-07-02 10:08       ` Arnd Bergmann
2020-07-02 11:18         ` Will Deacon
2020-07-02 11:39           ` Arnd Bergmann
2020-07-02 14:43   ` Joel Fernandes
2020-07-02 14:55     ` Will Deacon
2020-07-02 15:07       ` Joel Fernandes
2020-06-30 17:37 ` [PATCH 05/18] asm/rwonce: Remove smp_read_barrier_depends() invocation Will Deacon
2020-06-30 17:37 ` [PATCH 06/18] vhost: Remove redundant use of read_barrier_depends() barrier Will Deacon
2020-06-30 17:37 ` [PATCH 07/18] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb() Will Deacon
2020-06-30 17:37 ` [PATCH 08/18] locking/barriers: Remove definitions for [smp_]read_barrier_depends() Will Deacon
2020-06-30 17:37 ` [PATCH 09/18] Documentation/barriers: Remove references to [smp_]read_barrier_depends() Will Deacon
2020-06-30 17:37 ` [PATCH 10/18] Documentation/barriers/kokr: " Will Deacon
2020-06-30 17:37 ` [PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc Will Deacon
2020-06-30 17:37 ` [PATCH 12/18] include/linux: Remove smp_read_barrier_depends() from comments Will Deacon
2020-06-30 17:37 ` [PATCH 13/18] checkpatch: Remove checks relating to [smp_]read_barrier_depends() Will Deacon
2020-06-30 17:37 ` [PATCH 14/18] arm64: Reduce the number of header files pulled into vmlinux.lds.S Will Deacon
2020-06-30 17:37 ` [PATCH 15/18] arm64: alternatives: Split up alternative.h Will Deacon
2020-06-30 17:37 ` [PATCH 16/18] arm64: cpufeatures: Add capability for LDAPR instruction Will Deacon
2020-06-30 17:37 ` [PATCH 17/18] arm64: alternatives: Remove READ_ONCE() usage during patch operation Will Deacon
2020-06-30 17:37 ` [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y Will Deacon
2020-06-30 19:25   ` Arnd Bergmann
2020-07-01 10:19     ` Will Deacon
2020-07-01 10:59       ` Arnd Bergmann
2020-06-30 19:47   ` Marco Elver
2020-06-30 20:20     ` Peter Zijlstra
2020-06-30 22:57     ` Sami Tolvanen
2020-07-01 10:25       ` Will Deacon
2020-07-01 10:24     ` Will Deacon
2020-07-01 17:07   ` Dave P Martin
2020-07-02  7:23     ` Will Deacon
2020-07-06 16:00       ` Dave Martin
2020-07-06 16:34         ` Paul E. McKenney
2020-07-06 17:05           ` Dave Martin
2020-07-06 17:36             ` Paul E. McKenney
2020-07-07 10:29               ` Dave Martin
2020-07-07 22:51                 ` Paul E. McKenney
2020-07-07 23:01                   ` Nick Desaulniers
2020-07-08  7:15                     ` Marco Elver
2020-07-08  9:16                     ` Peter Zijlstra
2020-07-08 18:20                       ` Paul E. McKenney
2020-07-06 18:35         ` Will Deacon
2020-07-06 19:23           ` Marco Elver
2020-07-06 19:42             ` Paul E. McKenney
2020-07-06 16:08   ` Dave Martin
2020-07-06 18:35     ` Will Deacon
2020-07-07 10:10       ` Dave Martin
2020-07-01  7:38 ` [PATCH 00/18] Allow architectures to override __READ_ONCE() Josh Triplett
