* [PATCH v4 0/7] powerpc: Add KCSAN Support
@ 2023-02-08  3:21 Rohan McLure
  2023-02-08  3:21 ` [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems Rohan McLure
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:21 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

Add Kernel Concurrency Sanitiser support for PPC64. Doing so involves
exclusion of a number of compilation units from instrumentation, as was
done with KASAN.

KCSAN uses watchpoints on memory accesses to enforce the semantics of
the Linux kernel memory model, notifying the user of observed data races
that have not been marked as intentional in the source with the
data_race() macro, which exists to suppress such false positives.
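
For illustration, an intentionally racy access would be annotated roughly
as follows (a minimal sketch, not taken from this series; the function and
counter are made up):

    #include <linux/compiler.h>

    /* A best-effort statistics read that is allowed to race with writers;
     * annotated with data_race() so KCSAN does not report it. */
    static unsigned long stats_snapshot(unsigned long *counter)
    {
            return data_race(*counter);
    }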

A number of such race conditions have been identified. This patch series
provides support for the instrumentation; bug fixes and the removal of
false positives will be issued in future patches.

As of v4, provide stubs for the __atomic_* builtins for 8-byte values on
32-bit systems without toolchain support. This generalises xtensa's
stubs and causes xtensa to use the generic implementation.

v3: Restrict support to PPC64 as kcsan code expects support for
__atomic* builtins for 64-bit atomic types.
Link: https://lore.kernel.org/linuxppc-dev/449b9d60-18f6-ebf3-9878-ae54a61d1e49@csgroup.eu/

v2: Implement __smp_mb() in terms of __mb() to avoid multiple calls to
kcsan_mb().
Link: https://lore.kernel.org/linuxppc-dev/20230201043438.1301212-4-rmclure@linux.ibm.com/

v1: https://lore.kernel.org/linuxppc-dev/20230131234859.1275125-1-rmclure@linux.ibm.com/

Rohan McLure (7):
  kcsan: Add atomic builtin stubs for 32-bit systems
  xtensa: kcsan: Remove kcsan stubs for atomic builtins
  powerpc: kcsan: Add exclusions from instrumentation
  powerpc: kcsan: Exclude udelay to prevent recursive instrumentation
  powerpc: kcsan: Memory barriers semantics
  powerpc: kcsan: Prevent recursive instrumentation with IRQ
    save/restores
  powerpc: kcsan: Add KCSAN Support

 arch/powerpc/Kconfig                          |  1 +
 arch/powerpc/include/asm/barrier.h            | 12 ++++-----
 arch/powerpc/kernel/Makefile                  | 10 +++++++
 arch/powerpc/kernel/irq_64.c                  |  6 ++---
 arch/powerpc/kernel/time.c                    |  4 +--
 arch/powerpc/kernel/trace/Makefile            |  1 +
 arch/powerpc/kernel/vdso/Makefile             |  1 +
 arch/powerpc/lib/Makefile                     |  2 ++
 arch/powerpc/purgatory/Makefile               |  1 +
 arch/powerpc/xmon/Makefile                    |  1 +
 arch/xtensa/lib/Makefile                      |  1 -
 kernel/kcsan/Makefile                         |  2 +-
 .../lib/kcsan-stubs.c => kernel/kcsan/stubs.c | 26 ++++++++++++++++++-
 13 files changed, 54 insertions(+), 14 deletions(-)
 rename arch/xtensa/lib/kcsan-stubs.c => kernel/kcsan/stubs.c (58%)

-- 
2.37.2



* [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
@ 2023-02-08  3:21 ` Rohan McLure
  2023-02-08  4:23   ` Max Filippov
  2023-02-08 12:23   ` Christophe Leroy
  2023-02-08  3:21 ` [PATCH v4 2/7] xtensa: kcsan: Remove kcsan stubs for atomic builtins Rohan McLure
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:21 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

KCSAN instruments calls to atomic builtins, and will in turn call these
builtins itself. As such, architectures supporting KCSAN must have
compiler support for these atomic primitives.

Since 32-bit systems are unlikely to have 64-bit compiler builtins,
provide a stub for each missing builtin, and use BUG() to assert
unreachability.
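
As a rough sketch of why (simplified from what kernel/kcsan/core.c does;
exact signatures may differ), the instrumentation hook for a 64-bit atomic
load is itself implemented with the corresponding compiler builtin, which
on 32-bit targets may lower to an out-of-line __atomic_load_8() call:

    u64 __tsan_atomic64_load(const u64 *ptr, int memorder)
    {
            /* ... race checking ... */
            return __atomic_load_n(ptr, memorder); /* may become __atomic_load_8() */
    }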

In commit 725aea873261 ("xtensa: enable KCSAN"), xtensa implements these
locally, but does not advertise the fact with preprocessor macros. To
avoid production of duplicate symbols, do not build the stubs on xtensa.
A future patch will remove the xtensa implementation of these stubs.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v4: New patch
---
 kernel/kcsan/Makefile |  3 ++
 kernel/kcsan/stubs.c  | 78 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)
 create mode 100644 kernel/kcsan/stubs.c

diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
index 8cf70f068d92..5dfc5825aae9 100644
--- a/kernel/kcsan/Makefile
+++ b/kernel/kcsan/Makefile
@@ -12,6 +12,9 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
 	-fno-stack-protector -DDISABLE_BRANCH_PROFILING
 
 obj-y := core.o debugfs.o report.o
+ifndef XTENSA
+	obj-y += stubs.o
+endif
 
 KCSAN_INSTRUMENT_BARRIERS_selftest.o := y
 obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
diff --git a/kernel/kcsan/stubs.c b/kernel/kcsan/stubs.c
new file mode 100644
index 000000000000..ec5cd99be422
--- /dev/null
+++ b/kernel/kcsan/stubs.c
@@ -0,0 +1,78 @@
+// SPDX-License Identifier: GPL-2.0
+
+#include <linux/bug.h>
+#include <linux/types.h>
+
+#ifdef CONFIG_32BIT
+
+#if !__has_builtin(__atomic_store_8)
+void __atomic_store_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_load_8)
+u64 __atomic_load_8(const volatile void *p, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_exchange_8)
+u64 __atomic_exchange_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_compare_exchange_8)
+bool __atomic_compare_exchange_8(volatile void *p1, void *p2, u64 v, bool b, int i1, int i2)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_fetch_add_8)
+u64 __atomic_fetch_add_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_fetch_sub_8)
+u64 __atomic_fetch_sub_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_fetch_and_8)
+u64 __atomic_fetch_and_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_fetch_or_8)
+u64 __atomic_fetch_or_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_fetch_xor_8)
+u64 __atomic_fetch_xor_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#if !__has_builtin(__atomic_fetch_nand_8)
+u64 __atomic_fetch_nand_8(volatile void *p, u64 v, int i)
+{
+	BUG();
+}
+#endif
+
+#endif /* CONFIG_32BIT */
-- 
2.37.2



* [PATCH v4 2/7] xtensa: kcsan: Remove kcsan stubs for atomic builtins
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
  2023-02-08  3:21 ` [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems Rohan McLure
@ 2023-02-08  3:21 ` Rohan McLure
  2023-02-08  4:24   ` Max Filippov
  2023-02-08  3:21 ` [PATCH v4 3/7] powerpc: kcsan: Add exclusions from instrumentation Rohan McLure
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:21 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

A prior patch implemented stubs in place of the __atomic_* builtins in
generic code, as they are useful for other 32-bit architectures such as
32-bit powerpc.

Remove the kcsan-stubs.c translation unit and instead use
the generic implementation.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
V4: New patch
---
 arch/xtensa/lib/Makefile      |  1 -
 arch/xtensa/lib/kcsan-stubs.c | 54 -----------------------------------
 kernel/kcsan/Makefile         |  5 +---
 3 files changed, 1 insertion(+), 59 deletions(-)
 delete mode 100644 arch/xtensa/lib/kcsan-stubs.c

diff --git a/arch/xtensa/lib/Makefile b/arch/xtensa/lib/Makefile
index 7ecef0519a27..d69356dc97df 100644
--- a/arch/xtensa/lib/Makefile
+++ b/arch/xtensa/lib/Makefile
@@ -8,5 +8,4 @@ lib-y	+= memcopy.o memset.o checksum.o \
 	   divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o umulsidi3.o \
 	   usercopy.o strncpy_user.o strnlen_user.o
 lib-$(CONFIG_PCI) += pci-auto.o
-lib-$(CONFIG_KCSAN) += kcsan-stubs.o
 KCSAN_SANITIZE_kcsan-stubs.o := n
diff --git a/arch/xtensa/lib/kcsan-stubs.c b/arch/xtensa/lib/kcsan-stubs.c
deleted file mode 100644
index 2b08faa62b86..000000000000
--- a/arch/xtensa/lib/kcsan-stubs.c
+++ /dev/null
@@ -1,54 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-
-#include <linux/bug.h>
-#include <linux/types.h>
-
-void __atomic_store_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-u64 __atomic_load_8(const volatile void *p, int i)
-{
-	BUG();
-}
-
-u64 __atomic_exchange_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-bool __atomic_compare_exchange_8(volatile void *p1, void *p2, u64 v, bool b, int i1, int i2)
-{
-	BUG();
-}
-
-u64 __atomic_fetch_add_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-u64 __atomic_fetch_sub_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-u64 __atomic_fetch_and_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-u64 __atomic_fetch_or_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-u64 __atomic_fetch_xor_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
-
-u64 __atomic_fetch_nand_8(volatile void *p, u64 v, int i)
-{
-	BUG();
-}
diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
index 5dfc5825aae9..377b81be94fa 100644
--- a/kernel/kcsan/Makefile
+++ b/kernel/kcsan/Makefile
@@ -11,10 +11,7 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
 	$(call cc-option,-mno-outline-atomics) \
 	-fno-stack-protector -DDISABLE_BRANCH_PROFILING
 
-obj-y := core.o debugfs.o report.o
-ifndef XTENSA
-	obj-y += stubs.o
-endif
+obj-y := core.o debugfs.o report.o stubs.o
 
 KCSAN_INSTRUMENT_BARRIERS_selftest.o := y
 obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
-- 
2.37.2



* [PATCH v4 3/7] powerpc: kcsan: Add exclusions from instrumentation
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
  2023-02-08  3:21 ` [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems Rohan McLure
  2023-02-08  3:21 ` [PATCH v4 2/7] xtensa: kcsan: Remove kcsan stubs for atomic builtins Rohan McLure
@ 2023-02-08  3:21 ` Rohan McLure
  2023-02-08  3:21 ` [PATCH v4 4/7] powerpc: kcsan: Exclude udelay to prevent recursive instrumentation Rohan McLure
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:21 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

Exclude various incompatible compilation units from KCSAN
instrumentation.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/kernel/Makefile       | 10 ++++++++++
 arch/powerpc/kernel/trace/Makefile |  1 +
 arch/powerpc/kernel/vdso/Makefile  |  1 +
 arch/powerpc/lib/Makefile          |  2 ++
 arch/powerpc/purgatory/Makefile    |  1 +
 arch/powerpc/xmon/Makefile         |  1 +
 6 files changed, 16 insertions(+)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 9b6146056e48..9bf2be123093 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -54,6 +54,13 @@ CFLAGS_cputable.o += -DDISABLE_BRANCH_PROFILING
 CFLAGS_btext.o += -DDISABLE_BRANCH_PROFILING
 endif
 
+KCSAN_SANITIZE_early_32.o := n
+KCSAN_SANITIZE_early_64.o := n
+KCSAN_SANITIZE_cputable.o := n
+KCSAN_SANITIZE_btext.o := n
+KCSAN_SANITIZE_paca.o := n
+KCSAN_SANITIZE_setup_64.o := n
+
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
 # Remove stack protector to avoid triggering unneeded stack canary
 # checks due to randomize_kstack_offset.
@@ -177,12 +184,15 @@ obj-$(CONFIG_PPC_SECVAR_SYSFS)	+= secvar-sysfs.o
 # Disable GCOV, KCOV & sanitizers in odd or sensitive code
 GCOV_PROFILE_prom_init.o := n
 KCOV_INSTRUMENT_prom_init.o := n
+KCSAN_SANITIZE_prom_init.o := n
 UBSAN_SANITIZE_prom_init.o := n
 GCOV_PROFILE_kprobes.o := n
 KCOV_INSTRUMENT_kprobes.o := n
+KCSAN_SANITIZE_kprobes.o := n
 UBSAN_SANITIZE_kprobes.o := n
 GCOV_PROFILE_kprobes-ftrace.o := n
 KCOV_INSTRUMENT_kprobes-ftrace.o := n
+KCSAN_SANITIZE_kprobes-ftrace.o := n
 UBSAN_SANITIZE_kprobes-ftrace.o := n
 GCOV_PROFILE_syscall_64.o := n
 KCOV_INSTRUMENT_syscall_64.o := n
diff --git a/arch/powerpc/kernel/trace/Makefile b/arch/powerpc/kernel/trace/Makefile
index af8527538fe4..b16a9f9c0b35 100644
--- a/arch/powerpc/kernel/trace/Makefile
+++ b/arch/powerpc/kernel/trace/Makefile
@@ -23,4 +23,5 @@ obj-$(CONFIG_PPC32)			+= $(obj32-y)
 # Disable GCOV, KCOV & sanitizers in odd or sensitive code
 GCOV_PROFILE_ftrace.o := n
 KCOV_INSTRUMENT_ftrace.o := n
+KCSAN_SANITIZE_ftrace.o := n
 UBSAN_SANITIZE_ftrace.o := n
diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
index 6a977b0d8ffc..3a2f32929fcf 100644
--- a/arch/powerpc/kernel/vdso/Makefile
+++ b/arch/powerpc/kernel/vdso/Makefile
@@ -46,6 +46,7 @@ GCOV_PROFILE := n
 KCOV_INSTRUMENT := n
 UBSAN_SANITIZE := n
 KASAN_SANITIZE := n
+KCSAN_SANITIZE := n
 
 ccflags-y := -shared -fno-common -fno-builtin -nostdlib -Wl,--hash-style=both
 ccflags-$(CONFIG_LD_IS_LLD) += $(call cc-option,--ld-path=$(LD),-fuse-ld=lld)
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 4de71cbf6e8e..c4db459d304a 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -16,6 +16,8 @@ KASAN_SANITIZE_feature-fixups.o := n
 # restart_table.o contains functions called in the NMI interrupt path
 # which can be in real mode. Disable KASAN.
 KASAN_SANITIZE_restart_table.o := n
+KCSAN_SANITIZE_code-patching.o := n
+KCSAN_SANITIZE_feature-fixups.o := n
 
 ifdef CONFIG_KASAN
 CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING
diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
index a81d155b89ae..6f5e2727963c 100644
--- a/arch/powerpc/purgatory/Makefile
+++ b/arch/powerpc/purgatory/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
 KASAN_SANITIZE := n
+KCSAN_SANITIZE := n
 
 targets += trampoline_$(BITS).o purgatory.ro
 
diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
index eb25d7554ffd..d334de392e6c 100644
--- a/arch/powerpc/xmon/Makefile
+++ b/arch/powerpc/xmon/Makefile
@@ -5,6 +5,7 @@ GCOV_PROFILE := n
 KCOV_INSTRUMENT := n
 UBSAN_SANITIZE := n
 KASAN_SANITIZE := n
+KCSAN_SANITIZE := n
 
 # Disable ftrace for the entire directory
 ccflags-remove-$(CONFIG_FUNCTION_TRACER) += $(CC_FLAGS_FTRACE)
-- 
2.37.2



* [PATCH v4 4/7] powerpc: kcsan: Exclude udelay to prevent recursive instrumentation
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
                   ` (2 preceding siblings ...)
  2023-02-08  3:21 ` [PATCH v4 3/7] powerpc: kcsan: Add exclusions from instrumentation Rohan McLure
@ 2023-02-08  3:21 ` Rohan McLure
  2023-02-08  3:22 ` [PATCH v4 5/7] powerpc: kcsan: Memory barriers semantics Rohan McLure
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:21 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

In order for KCSAN to increase its likelihood of observing a data race,
it sets a watchpoint on memory accesses and stalls, allowing for
detection of conflicting accesses by other kernel threads or interrupts.

Stalls are implemented by injecting a call to udelay in instrumented code.
To prevent recursive instrumentation, exclude udelay from being instrumented.
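
A minimal sketch of the recursion being avoided (the calling function is
made up; the call chain is simplified from the KCSAN runtime):

    static int example_reader(int *p)
    {
            /* Plain access: the compiler inserts a KCSAN check here. That
             * check may set a watchpoint and stall via udelay(); if
             * udelay() were itself instrumented, its own memory accesses
             * would re-enter the KCSAN runtime. */
            return *p;
    }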

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/kernel/time.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index d68de3618741..b894029f53db 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -356,7 +356,7 @@ void vtime_flush(struct task_struct *tsk)
 }
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
 
-void __delay(unsigned long loops)
+void __no_kcsan __delay(unsigned long loops)
 {
 	unsigned long start;
 
@@ -377,7 +377,7 @@ void __delay(unsigned long loops)
 }
 EXPORT_SYMBOL(__delay);
 
-void udelay(unsigned long usecs)
+void __no_kcsan udelay(unsigned long usecs)
 {
 	__delay(tb_ticks_per_usec * usecs);
 }
-- 
2.37.2



* [PATCH v4 5/7] powerpc: kcsan: Memory barriers semantics
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
                   ` (3 preceding siblings ...)
  2023-02-08  3:21 ` [PATCH v4 4/7] powerpc: kcsan: Exclude udelay to prevent recursive instrumentation Rohan McLure
@ 2023-02-08  3:22 ` Rohan McLure
  2023-02-08  3:22 ` [PATCH v4 6/7] powerpc: kcsan: Prevent recursive instrumentation with IRQ save/restores Rohan McLure
  2023-02-08  3:22 ` [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support Rohan McLure
  6 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:22 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

Annotate the memory barriers *mb() with calls to kcsan_mb(), signaling to
compilers supporting KCSAN that the respective memory barrier has been
issued. Rename the memory barriers *mb() to __*mb() so that
asm-generic/barrier.h generates the respective *mb() macros.
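
For reference, a simplified sketch of what asm-generic/barrier.h then
generates once __mb() is defined (paraphrased, not a verbatim copy of that
header):

    #ifdef __mb
    #define mb() do { kcsan_mb(); __mb(); } while (0)
    #endif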

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v2: Implement __smp_mb() in terms of __mb() to avoid duplicate calls to
kcsan_mb()
---
 arch/powerpc/include/asm/barrier.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index e80b2c0e9315..b95b666f0374 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -35,9 +35,9 @@
  * However, on CPUs that don't support lwsync, lwsync actually maps to a
  * heavy-weight sync, so smp_wmb() can be a lighter-weight eieio.
  */
-#define mb()   __asm__ __volatile__ ("sync" : : : "memory")
-#define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
-#define wmb()  __asm__ __volatile__ ("sync" : : : "memory")
+#define __mb()   __asm__ __volatile__ ("sync" : : : "memory")
+#define __rmb()  __asm__ __volatile__ ("sync" : : : "memory")
+#define __wmb()  __asm__ __volatile__ ("sync" : : : "memory")
 
 /* The sub-arch has lwsync */
 #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
@@ -51,12 +51,12 @@
 /* clang defines this macro for a builtin, which will not work with runtime patching */
 #undef __lwsync
 #define __lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
-#define dma_rmb()	__lwsync()
-#define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
+#define __dma_rmb()	__lwsync()
+#define __dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 
 #define __smp_lwsync()	__lwsync()
 
-#define __smp_mb()	mb()
+#define __smp_mb()	__mb()
 #define __smp_rmb()	__lwsync()
 #define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 
-- 
2.37.2



* [PATCH v4 6/7] powerpc: kcsan: Prevent recursive instrumentation with IRQ save/restores
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
                   ` (4 preceding siblings ...)
  2023-02-08  3:22 ` [PATCH v4 5/7] powerpc: kcsan: Memory barriers semantics Rohan McLure
@ 2023-02-08  3:22 ` Rohan McLure
  2023-02-08  3:22 ` [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support Rohan McLure
  6 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:22 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

Instrumented memory accesses provided by KCSAN will access core-local
memories (which save and restore IRQs) as well as restore IRQs
directly. Avoid recursive instrumentation by applying the __no_kcsan
annotation to the IRQ restore routines.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/kernel/irq_64.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index eb2b380e52a0..3a1e0bffe9e0 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -97,7 +97,7 @@ static inline bool irq_happened_test_and_clear(u8 irq)
 	return false;
 }
 
-void replay_soft_interrupts(void)
+__no_kcsan void replay_soft_interrupts(void)
 {
 	struct pt_regs regs;
 
@@ -185,7 +185,7 @@ void replay_soft_interrupts(void)
 }
 
 #if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_KUAP)
-static inline void replay_soft_interrupts_irqrestore(void)
+__no_kcsan static inline void replay_soft_interrupts_irqrestore(void)
 {
 	unsigned long kuap_state = get_kuap();
 
@@ -209,7 +209,7 @@ static inline void replay_soft_interrupts_irqrestore(void)
 #define replay_soft_interrupts_irqrestore() replay_soft_interrupts()
 #endif
 
-notrace void arch_local_irq_restore(unsigned long mask)
+notrace __no_kcsan void arch_local_irq_restore(unsigned long mask)
 {
 	unsigned char irq_happened;
 
-- 
2.37.2



* [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support
  2023-02-08  3:21 [PATCH v4 0/7] powerpc: Add KCSAN Support Rohan McLure
                   ` (5 preceding siblings ...)
  2023-02-08  3:22 ` [PATCH v4 6/7] powerpc: kcsan: Prevent recursive instrumentation with IRQ save/restores Rohan McLure
@ 2023-02-08  3:22 ` Rohan McLure
  2023-02-08 10:10   ` Marco Elver
  2023-02-08 12:25   ` Christophe Leroy
  6 siblings, 2 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-08  3:22 UTC (permalink / raw)
  To: linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc, Rohan McLure

Enable HAVE_ARCH_KCSAN on all powerpc platforms, permitting use of the
kernel concurrency sanitiser through the CONFIG_KCSAN_* kconfig options.
KCSAN requires compiler builtins __atomic_* for 64-bit values, and so only
report support on PPC64.

See documentation in Documentation/dev-tools/kcsan.rst for more
information.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v3: Restrict support to 64-bit, as TSAN expects 64-bit __atomic_* compiler
built-ins.
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b8c4ac56bddc..55bc2d724c73 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -198,6 +198,7 @@ config PPC
 	select HAVE_ARCH_KASAN			if PPC_RADIX_MMU
 	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64
 	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
+	select HAVE_ARCH_KCSAN            if PPC64
 	select HAVE_ARCH_KFENCE			if ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 	select HAVE_ARCH_KGDB
-- 
2.37.2



* Re: [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems
  2023-02-08  3:21 ` [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems Rohan McLure
@ 2023-02-08  4:23   ` Max Filippov
  2023-02-08 12:23   ` Christophe Leroy
  1 sibling, 0 replies; 15+ messages in thread
From: Max Filippov @ 2023-02-08  4:23 UTC (permalink / raw)
  To: Rohan McLure; +Cc: chris, elver, linux-xtensa, npiggin, linuxppc-dev

On Tue, Feb 7, 2023 at 7:22 PM Rohan McLure <rmclure@linux.ibm.com> wrote:
>
> KCSAN instruments calls to atomic builtins, and will in turn call these
> builtins itself. As such, architectures supporting KCSAN must have
> compiler support for these atomic primitives.
>
> Since 32-bit systems are unlikely to have 64-bit compiler builtins,
> provide a stub for each missing builtin, and use BUG() to assert
> unreachability.
>
> In commit 725aea873261 ("xtensa: enable KCSAN"), xtensa implements these
> locally, but does not advertise the fact with preprocessor macros. To
> avoid production of duplicate symbols, do not build the stubs on xtensa.
> A future patch will remove the xtensa implementation of these stubs.
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v4: New patch
> ---
>  kernel/kcsan/Makefile |  3 ++
>  kernel/kcsan/stubs.c  | 78 +++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 81 insertions(+)
>  create mode 100644 kernel/kcsan/stubs.c

Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>

-- 
Thanks.
-- Max


* Re: [PATCH v4 2/7] xtensa: kcsan: Remove kcsan stubs for atomic builtins
  2023-02-08  3:21 ` [PATCH v4 2/7] xtensa: kcsan: Remove kcsan stubs for atomic builtins Rohan McLure
@ 2023-02-08  4:24   ` Max Filippov
  0 siblings, 0 replies; 15+ messages in thread
From: Max Filippov @ 2023-02-08  4:24 UTC (permalink / raw)
  To: Rohan McLure; +Cc: chris, elver, linux-xtensa, npiggin, linuxppc-dev

On Tue, Feb 7, 2023 at 7:22 PM Rohan McLure <rmclure@linux.ibm.com> wrote:
>
> A prior patch implemented stubs in place of the __atomic_* builtins in
> generic code, as they are useful for other 32-bit architectures such as
> 32-bit powerpc.
>
> Remove the kcsan-stubs.c translation unit and instead use
> the generic implementation.
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> V4: New patch
> ---
>  arch/xtensa/lib/Makefile      |  1 -
>  arch/xtensa/lib/kcsan-stubs.c | 54 -----------------------------------
>  kernel/kcsan/Makefile         |  5 +---
>  3 files changed, 1 insertion(+), 59 deletions(-)
>  delete mode 100644 arch/xtensa/lib/kcsan-stubs.c

Acked-by: Max Filippov <jcmvbkbc@gmail.com>

-- 
Thanks.
-- Max


* Re: [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support
  2023-02-08  3:22 ` [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support Rohan McLure
@ 2023-02-08 10:10   ` Marco Elver
  2023-02-08 12:25   ` Christophe Leroy
  1 sibling, 0 replies; 15+ messages in thread
From: Marco Elver @ 2023-02-08 10:10 UTC (permalink / raw)
  To: Rohan McLure; +Cc: chris, linux-xtensa, npiggin, jcmvbkbc, linuxppc-dev

On Wed, 8 Feb 2023 at 04:23, Rohan McLure <rmclure@linux.ibm.com> wrote:
>
> Enable HAVE_ARCH_KCSAN on all powerpc platforms, permitting use of the
> kernel concurrency sanitiser through the CONFIG_KCSAN_* kconfig options.
> KCSAN requires compiler builtins __atomic_* for 64-bit values, and so only
> report support on PPC64.

I thought the stubs solve that?

In general the whole builtin atomic support is only there to support
some odd corner cases today (Clang GCOV generates builtin atomic ops,
occasionally some driver sneaks in a builtin atomic although it's
technically strongly discouraged, and probably Rust stuff in future).
So I'd like to keep them, although they shouldn't cause issues.

> See documentation in Documentation/dev-tools/kcsan.rst for more
> information.

Do the tests pass?

> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v3: Restrict support to 64-bit, as TSAN expects 64-bit __atomic_* compiler
> built-ins.
> ---
>  arch/powerpc/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index b8c4ac56bddc..55bc2d724c73 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -198,6 +198,7 @@ config PPC
>         select HAVE_ARCH_KASAN                  if PPC_RADIX_MMU
>         select HAVE_ARCH_KASAN                  if PPC_BOOK3E_64
>         select HAVE_ARCH_KASAN_VMALLOC          if HAVE_ARCH_KASAN
> +       select HAVE_ARCH_KCSAN            if PPC64
>         select HAVE_ARCH_KFENCE                 if ARCH_SUPPORTS_DEBUG_PAGEALLOC
>         select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
>         select HAVE_ARCH_KGDB
> --
> 2.37.2
>


* Re: [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems
  2023-02-08  3:21 ` [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems Rohan McLure
  2023-02-08  4:23   ` Max Filippov
@ 2023-02-08 12:23   ` Christophe Leroy
  2023-02-08 23:14     ` Rohan McLure
  1 sibling, 1 reply; 15+ messages in thread
From: Christophe Leroy @ 2023-02-08 12:23 UTC (permalink / raw)
  To: Rohan McLure, linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc



On 08/02/2023 at 04:21, Rohan McLure wrote:
> KCSAN instruments calls to atomic builtins, and will in turn call these
> builtins itself. As such, architectures supporting KCSAN must have
> compiler support for these atomic primitives.
> 
> Since 32-bit systems are unlikely to have 64-bit compiler builtins,
> provide a stub for each missing builtin, and use BUG() to assert
> unreachability.
> 
> In commit 725aea873261 ("xtensa: enable KCSAN"), xtensa implements these
> locally, but does not advertise the fact with preprocessor macros. To
> avoid production of duplicate symbols, do not build the stubs on xtensa.
> A future patch will remove the xtensa implementation of these stubs.
> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v4: New patch
> ---
>   kernel/kcsan/Makefile |  3 ++
>   kernel/kcsan/stubs.c  | 78 +++++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 81 insertions(+)
>   create mode 100644 kernel/kcsan/stubs.c

I think it would be better to merge patch 1 and patch 2, that way we 
would keep the history and see that stubs.c almost comes from xtensa.

The summary would then be:

  arch/xtensa/lib/Makefile                              |  1 -
  kernel/kcsan/Makefile                                 |  2 +-
 arch/xtensa/lib/kcsan-stubs.c => kernel/kcsan/stubs.c | 26 +++++++++++++++++++++++++-
  3 files changed, 26 insertions(+), 3 deletions(-)


> 
> diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
> index 8cf70f068d92..5dfc5825aae9 100644
> --- a/kernel/kcsan/Makefile
> +++ b/kernel/kcsan/Makefile
> @@ -12,6 +12,9 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
>   	-fno-stack-protector -DDISABLE_BRANCH_PROFILING
>   
>   obj-y := core.o debugfs.o report.o
> +ifndef XTENSA
> +	obj-y += stubs.o

obj-$(CONFIG_32BIT) += stubs.o

That would avoid the #if CONFIG_32BIT in stubs.c

> +endif
>   
>   KCSAN_INSTRUMENT_BARRIERS_selftest.o := y
>   obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
> diff --git a/kernel/kcsan/stubs.c b/kernel/kcsan/stubs.c
> new file mode 100644
> index 000000000000..ec5cd99be422
> --- /dev/null
> +++ b/kernel/kcsan/stubs.c
> @@ -0,0 +1,78 @@
> +// SPDX-License Identifier: GPL-2.0

Missing - between License and Identifier ?

> +
> +#include <linux/bug.h>
> +#include <linux/types.h>
> +
> +#ifdef CONFIG_32BIT

Should be handled in Makefile

> +
> +#if !__has_builtin(__atomic_store_8)

Does any 32 bit ARCH support that ? Is that #if required ?

If yes, do we really need the #if for each and every function, can't we 
just check for one and assume that if we don't have __atomic_store_8 we 
don't have any of the functions ?


> +void __atomic_store_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_load_8)
> +u64 __atomic_load_8(const volatile void *p, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_exchange_8)
> +u64 __atomic_exchange_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_compare_exchange_8)
> +bool __atomic_compare_exchange_8(volatile void *p1, void *p2, u64 v, bool b, int i1, int i2)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_fetch_add_8)
> +u64 __atomic_fetch_add_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_fetch_sub_8)
> +u64 __atomic_fetch_sub_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_fetch_and_8)
> +u64 __atomic_fetch_and_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_fetch_or_8)
> +u64 __atomic_fetch_or_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_fetch_xor_8)
> +u64 __atomic_fetch_xor_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#if !__has_builtin(__atomic_fetch_nand_8)
> +u64 __atomic_fetch_nand_8(volatile void *p, u64 v, int i)
> +{
> +	BUG();
> +}
> +#endif
> +
> +#endif /* CONFIG_32BIT */


* Re: [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support
  2023-02-08  3:22 ` [PATCH v4 7/7] powerpc: kcsan: Add KCSAN Support Rohan McLure
  2023-02-08 10:10   ` Marco Elver
@ 2023-02-08 12:25   ` Christophe Leroy
  1 sibling, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2023-02-08 12:25 UTC (permalink / raw)
  To: Rohan McLure, linuxppc-dev, linux-xtensa; +Cc: chris, elver, npiggin, jcmvbkbc



On 08/02/2023 at 04:22, Rohan McLure wrote:
> Enable HAVE_ARCH_KCSAN on all powerpc platforms, permitting use of the
> kernel concurrency sanitiser through the CONFIG_KCSAN_* kconfig options.
> KCSAN requires compiler builtins __atomic_* for 64-bit values, and so only
> report support on PPC64.

Copy/pasted from v3 ?

In v4 PPC32 is supported as well.

> 
> See documentation in Documentation/dev-tools/kcsan.rst for more
> information.
> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v3: Restrict support to 64-bit, as TSAN expects 64-bit __atomic_* compiler
> built-ins.
> ---
>   arch/powerpc/Kconfig | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index b8c4ac56bddc..55bc2d724c73 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -198,6 +198,7 @@ config PPC
>   	select HAVE_ARCH_KASAN			if PPC_RADIX_MMU
>   	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64
>   	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
> +	select HAVE_ARCH_KCSAN            if PPC64

That if PPC64 should go away as we now have the builtins.

>   	select HAVE_ARCH_KFENCE			if ARCH_SUPPORTS_DEBUG_PAGEALLOC
>   	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
>   	select HAVE_ARCH_KGDB


* Re: [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems
  2023-02-08 12:23   ` Christophe Leroy
@ 2023-02-08 23:14     ` Rohan McLure
  2023-02-09 23:36       ` Rohan McLure
  0 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-02-08 23:14 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: chris, elver, linux-xtensa, npiggin, jcmvbkbc, linuxppc-dev



> On 8 Feb 2023, at 11:23 pm, Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
> 
> 
> 
> On 08/02/2023 at 04:21, Rohan McLure wrote:
>> KCSAN instruments calls to atomic builtins, and will in turn call these
>> builtins itself. As such, architectures supporting KCSAN must have
>> compiler support for these atomic primitives.
>> 
>> Since 32-bit systems are unlikely to have 64-bit compiler builtins,
>> provide a stub for each missing builtin, and use BUG() to assert
>> unreachability.
>> 
>> In commit 725aea873261 ("xtensa: enable KCSAN"), xtensa implements these
>> locally, but does not advertise the fact with preprocessor macros. To
>> avoid production of duplicate symbols, do not build the stubs on xtensa.
>> A future patch will remove the xtensa implementation of these stubs.
>> 
>> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
>> ---
>> v4: New patch
>> ---
>>  kernel/kcsan/Makefile |  3 ++
>>  kernel/kcsan/stubs.c  | 78 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 81 insertions(+)
>>  create mode 100644 kernel/kcsan/stubs.c
> 
> I think it would be better to merge patch 1 and patch 2, that way we 
> would keep the history and see that stubs.c almost comes from xtensa.
> 
> The summary would then be:
> 
>  arch/xtensa/lib/Makefile                              |  1 -
>  kernel/kcsan/Makefile                                 |  2 +-
>  arch/xtensa/lib/kcsan-stubs.c => kernel/kcsan/stubs.c | 26 +++++++++++++++++++++++++-
>  3 files changed, 26 insertions(+), 3 deletions(-)
> 
> 
>> 
>> diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
>> index 8cf70f068d92..5dfc5825aae9 100644
>> --- a/kernel/kcsan/Makefile
>> +++ b/kernel/kcsan/Makefile
>> @@ -12,6 +12,9 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
>>   -fno-stack-protector -DDISABLE_BRANCH_PROFILING
>> 
>>  obj-y := core.o debugfs.o report.o
>> +ifndef XTENSA
>> + obj-y += stubs.o
> 
> obj-$(CONFIG_32BIT) += stubs.o
> 
> That would avoid the #if CONFIG_32BIT in stubs.c

Thanks. Yes happy to do this.

> 
>> +endif
>> 
>>  KCSAN_INSTRUMENT_BARRIERS_selftest.o := y
>>  obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
>> diff --git a/kernel/kcsan/stubs.c b/kernel/kcsan/stubs.c
>> new file mode 100644
>> index 000000000000..ec5cd99be422
>> --- /dev/null
>> +++ b/kernel/kcsan/stubs.c
>> @@ -0,0 +1,78 @@
>> +// SPDX-License Identifier: GPL-2.0
> 
> Missing - between License and Identifier ?
> 
>> +
>> +#include <linux/bug.h>
>> +#include <linux/types.h>
>> +
>> +#ifdef CONFIG_32BIT
> 
> Should be handled in Makefile
> 
>> +
>> +#if !__has_builtin(__atomic_store_8)
> 
> Does any 32 bit ARCH support that ? Is that #if required ?
> 
> If yes, do we really need the #if for each and every function, can't we 
> just check for one and assume that if we don't have __atomic_store_8 we 
> don't have any of the functions ?

It turns out that, in testing, gcc provides 8-byte atomic builtins on 32-bit
x86 and arm. However, I believe it should suffice to check for
__atomic_store_8 or any other such builtin, i.e. if an arch implements one it
likely implements them all, from what I’ve seen.

> 
> 
>> +void __atomic_store_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_load_8)
>> +u64 __atomic_load_8(const volatile void *p, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_exchange_8)
>> +u64 __atomic_exchange_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_compare_exchange_8)
>> +bool __atomic_compare_exchange_8(volatile void *p1, void *p2, u64 v, bool b, int i1, int i2)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_fetch_add_8)
>> +u64 __atomic_fetch_add_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_fetch_sub_8)
>> +u64 __atomic_fetch_sub_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_fetch_and_8)
>> +u64 __atomic_fetch_and_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_fetch_or_8)
>> +u64 __atomic_fetch_or_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_fetch_xor_8)
>> +u64 __atomic_fetch_xor_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#if !__has_builtin(__atomic_fetch_nand_8)
>> +u64 __atomic_fetch_nand_8(volatile void *p, u64 v, int i)
>> +{
>> + BUG();
>> +}
>> +#endif
>> +
>> +#endif /* CONFIG_32BIT */




* Re: [PATCH v4 1/7] kcsan: Add atomic builtin stubs for 32-bit systems
  2023-02-08 23:14     ` Rohan McLure
@ 2023-02-09 23:36       ` Rohan McLure
  0 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-02-09 23:36 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: chris, elver, linux-xtensa, npiggin, jcmvbkbc, linuxppc-dev



> On 9 Feb 2023, at 10:14 am, Rohan McLure <rmclure@linux.ibm.com> wrote:
> 
> 
> 
>> On 8 Feb 2023, at 11:23 pm, Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
>> 
>> 
>> 
>> On 08/02/2023 at 04:21, Rohan McLure wrote:
>>> KCSAN instruments calls to atomic builtins, and will in turn call these
>>> builtins itself. As such, architectures supporting KCSAN must have
>>> compiler support for these atomic primitives.
>>> 
>>> Since 32-bit systems are unlikely to have 64-bit compiler builtins,
>>> provide a stub for each missing builtin, and use BUG() to assert
>>> unreachability.
>>> 
>>> In commit 725aea873261 ("xtensa: enable KCSAN"), xtensa implements these
>>> locally, but does not advertise the fact with preprocessor macros. To
>>> avoid production of duplicate symbols, do not build the stubs on xtensa.
>>> A future patch will remove the xtensa implementation of these stubs.
>>> 
>>> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
>>> ---
>>> v4: New patch
>>> ---
>>> kernel/kcsan/Makefile |  3 ++
>>> kernel/kcsan/stubs.c  | 78 +++++++++++++++++++++++++++++++++++++++++++
>>> 2 files changed, 81 insertions(+)
>>> create mode 100644 kernel/kcsan/stubs.c
>> 
>> I think it would be better to merge patch 1 and patch 2, that way we 
>> would keep the history and see that stubs.c almost comes from xtensa.
>> 
>> The summary would then be:
>> 
>> arch/xtensa/lib/Makefile                              |  1 -
>> kernel/kcsan/Makefile                                 |  2 +-
>> arch/xtensa/lib/kcsan-stubs.c => kernel/kcsan/stubs.c | 26 +++++++++++++++++++++++++-
>> 3 files changed, 26 insertions(+), 3 deletions(-)
>> 
>> 
>>> 
>>> diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
>>> index 8cf70f068d92..5dfc5825aae9 100644
>>> --- a/kernel/kcsan/Makefile
>>> +++ b/kernel/kcsan/Makefile
>>> @@ -12,6 +12,9 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \
>>>  -fno-stack-protector -DDISABLE_BRANCH_PROFILING
>>> 
>>> obj-y := core.o debugfs.o report.o
>>> +ifndef XTENSA
>>> + obj-y += stubs.o
>> 
>> obj-$(CONFIG_32BIT) += stubs.o
>> 
>> That would avoid the #if CONFIG_32BIT in stubs.c
> 
> Thanks. Yes happy to do this.
> 
>> 
>>> +endif
>>> 
>>> KCSAN_INSTRUMENT_BARRIERS_selftest.o := y
>>> obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
>>> diff --git a/kernel/kcsan/stubs.c b/kernel/kcsan/stubs.c
>>> new file mode 100644
>>> index 000000000000..ec5cd99be422
>>> --- /dev/null
>>> +++ b/kernel/kcsan/stubs.c
>>> @@ -0,0 +1,78 @@
>>> +// SPDX-License Identifier: GPL-2.0
>> 
>> Missing - between License and Identifier ?
>> 
>>> +
>>> +#include <linux/bug.h>
>>> +#include <linux/types.h>
>>> +
>>> +#ifdef CONFIG_32BIT
>> 
>> Should be handled in Makefile
>> 
>>> +
>>> +#if !__has_builtin(__atomic_store_8)
>> 
>> Does any 32 bit ARCH support that ? Is that #if required ?
>> 
>> If yes, do we really need the #if for each and every function, can't we 
>> just check for one and assume that if we don't have __atomic_store_8 we 
>> don't have any of the functions ?
> 
> It turns out that, in testing, gcc provides 8-byte atomic builtins on 32-bit
> x86 and arm. However, I believe it should suffice to check for
> __atomic_store_8 or any other such builtin, i.e. if an arch implements one it
> likely implements them all, from what I’ve seen.

In reality, __has_builtin only specifies that GCC is aware of the existence of
the builtin, but linking against libatomic may still be required. Let’s
remove this check, and have ppc32 and xtensa opt into compiling this stubs file.

Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108734
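
(Purely illustrative, hypothetical snippet of that situation; load64() is
made up:)

    #if __has_builtin(__atomic_load_8)
    static u64 load64(const volatile u64 *p)
    {
            /* Reported as available, yet gcc may still emit a call to the
             * out-of-line __atomic_load_8() provided by libatomic. */
            return __atomic_load_n(p, __ATOMIC_RELAXED);
    }
    #endif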

> 
>> 
>> 
>>> +void __atomic_store_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_load_8)
>>> +u64 __atomic_load_8(const volatile void *p, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_exchange_8)
>>> +u64 __atomic_exchange_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_compare_exchange_8)
>>> +bool __atomic_compare_exchange_8(volatile void *p1, void *p2, u64 v, bool b, int i1, int i2)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_fetch_add_8)
>>> +u64 __atomic_fetch_add_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_fetch_sub_8)
>>> +u64 __atomic_fetch_sub_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_fetch_and_8)
>>> +u64 __atomic_fetch_and_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_fetch_or_8)
>>> +u64 __atomic_fetch_or_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_fetch_xor_8)
>>> +u64 __atomic_fetch_xor_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#if !__has_builtin(__atomic_fetch_nand_8)
>>> +u64 __atomic_fetch_nand_8(volatile void *p, u64 v, int i)
>>> +{
>>> + BUG();
>>> +}
>>> +#endif
>>> +
>>> +#endif /* CONFIG_32BIT */



