linux-hardening.vger.kernel.org archive mirror
* [PATCH 0/4] Annotate atomics for signed integer wrap-around
@ 2024-04-24 19:17 Kees Cook
  2024-04-24 19:17 ` [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition Kees Cook
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: Kees Cook @ 2024-04-24 19:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Kees Cook, Jakub Kicinski, Will Deacon, Peter Zijlstra,
	Boqun Feng, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Catalin Marinas, Arnd Bergmann,
	Andrew Morton, David S. Miller, David Ahern, Eric Dumazet,
	Paolo Abeni, Paul E. McKenney, Uros Bizjak, linux-kernel, x86,
	linux-arm-kernel, linux-arch, netdev, linux-hardening

Hi,

As part of enabling the signed integer overflow sanitizer for production
use, we have to annotate the atomics which expect to use wrapping signed
values. Do this for x86, arm64, and the fallbacks. Additionally annotate
the first place anyone will trip over signed integer wrap-around: ipv4,
which has traditionally carried the comment hint about how to debug
sanitizer issues.
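
For reference, wrapping_add() and wrapping_sub() are the helpers from
<linux/overflow.h>, built on the compiler's overflow-checking builtins.
A rough sketch of their shape (see the header for the canonical
definitions):

	/* Sketch of the <linux/overflow.h> helper: do the addition via
	 * the overflow builtin and keep only the wrapped value, so the
	 * operation is not flagged by the signed-overflow sanitizer.
	 */
	#define wrapping_add(type, a, b)			\
		({						\
			type __val;				\
			__builtin_add_overflow(a, b, &__val);	\
			__val;					\
		})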

Since this touches 2 architectures and netdev, I think it might be
easiest if I carry this in the hardening tree, or maybe via the netdev
tree. Thoughts?

Thanks!

-Kees

Kees Cook (4):
  locking/atomic/x86: Silence intentional wrapping addition
  arm64: atomics: lse: Silence intentional wrapping addition
  locking/atomic: Annotate generic atomics with wrapping
  ipv4: Silence intentional wrapping addition

 arch/arm64/include/asm/atomic_lse.h          | 10 ++++++----
 arch/x86/include/asm/atomic.h                |  3 ++-
 arch/x86/include/asm/atomic64_32.h           |  2 +-
 arch/x86/include/asm/atomic64_64.h           |  2 +-
 include/asm-generic/atomic.h                 |  6 +++---
 include/asm-generic/atomic64.h               |  6 +++---
 include/linux/atomic/atomic-arch-fallback.h  | 19 ++++++++++---------
 include/linux/atomic/atomic-instrumented.h   |  3 ++-
 include/linux/atomic/atomic-long.h           |  3 ++-
 include/net/ip.h                             |  4 ++--
 lib/atomic64.c                               | 10 +++++-----
 net/ipv4/route.c                             | 10 +++++-----
 scripts/atomic/fallbacks/dec_if_positive     |  2 +-
 scripts/atomic/fallbacks/dec_unless_positive |  2 +-
 scripts/atomic/fallbacks/fetch_add_unless    |  2 +-
 scripts/atomic/fallbacks/inc_unless_negative |  2 +-
 scripts/atomic/gen-atomic-fallback.sh        |  1 +
 scripts/atomic/gen-atomic-instrumented.sh    |  1 +
 scripts/atomic/gen-atomic-long.sh            |  1 +
 19 files changed, 49 insertions(+), 40 deletions(-)

-- 
2.34.1



* [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 19:17 [PATCH 0/4] Annotate atomics for signed integer wrap-around Kees Cook
@ 2024-04-24 19:17 ` Kees Cook
  2024-04-24 22:41   ` Peter Zijlstra
  2024-04-24 19:17 ` [PATCH 2/4] arm64: atomics: lse: " Kees Cook
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2024-04-24 19:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Kees Cook, Will Deacon, Peter Zijlstra, Boqun Feng,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Jakub Kicinski, Catalin Marinas, Arnd Bergmann,
	Andrew Morton, David S. Miller, David Ahern, Eric Dumazet,
	Paolo Abeni, Paul E. McKenney, Uros Bizjak, linux-kernel,
	linux-arm-kernel, linux-arch, netdev, linux-hardening

Use wrapping_add() to annotate the additions in arch_atomic_add_return()
and arch_atomic64_{fetch_add,add_return}() as expecting to wrap around.
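
As a standalone illustration (hypothetical code, not part of the patch,
assuming <linux/overflow.h> is available for wrapping_add()):

	#include <limits.h>

	/* Both additions evaluate to INT_MIN under -fno-strict-overflow;
	 * only the bare "+" would be reported by the signed-overflow
	 * sanitizer.
	 */
	static int wrap_demo(void)
	{
		int x = INT_MAX;
		int plain  = x + 1;			/* instrumented */
		int marked = wrapping_add(int, x, 1);	/* silenced */

		return plain == marked;			/* always 1 */
	}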

Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/include/asm/atomic.h      | 3 ++-
 arch/x86/include/asm/atomic64_32.h | 2 +-
 arch/x86/include/asm/atomic64_64.h | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 55a55ec04350..a5862a258760 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -3,6 +3,7 @@
 #define _ASM_X86_ATOMIC_H
 
 #include <linux/compiler.h>
+#include <linux/overflow.h>
 #include <linux/types.h>
 #include <asm/alternative.h>
 #include <asm/cmpxchg.h>
@@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 
 static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
 {
-	return i + xadd(&v->counter, i);
+	return wrapping_add(int, i, xadd(&v->counter, i));
 }
 #define arch_atomic_add_return arch_atomic_add_return
 
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 3486d91b8595..608b100e8ffe 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -254,7 +254,7 @@ static __always_inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
 	s64 old, c = 0;
 
-	while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, wrapping_add(s64, c, i))) != c)
 		c = old;
 
 	return old;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 3165c0feedf7..f1dc8aa54b52 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -76,7 +76,7 @@ static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
 
 static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
-	return i + xadd(&v->counter, i);
+	return wrapping_add(s64, i, xadd(&v->counter, i));
 }
 #define arch_atomic64_add_return arch_atomic64_add_return
 
-- 
2.34.1



* [PATCH 2/4] arm64: atomics: lse: Silence intentional wrapping addition
  2024-04-24 19:17 [PATCH 0/4] Annotate atomics for signed integer wrap-around Kees Cook
  2024-04-24 19:17 ` [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition Kees Cook
@ 2024-04-24 19:17 ` Kees Cook
  2024-05-02 11:21   ` Will Deacon
  2024-04-24 19:17 ` [PATCH 3/4] locking/atomic: Annotate generic atomics with wrapping Kees Cook
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2024-04-24 19:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Kees Cook, Will Deacon, Peter Zijlstra, Boqun Feng,
	Catalin Marinas, linux-arm-kernel, Jakub Kicinski,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin, Arnd Bergmann, Andrew Morton, David S. Miller,
	David Ahern, Eric Dumazet, Paolo Abeni, Paul E. McKenney,
	Uros Bizjak, linux-kernel, x86, linux-arch, netdev,
	linux-hardening

Annotate the LSE atomic_add_return() and atomic_sub_return()
implementations (and their atomic64 counterparts) to avoid signed
overflow instrumentation; they are expected to wrap around.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/atomic_lse.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 87f568a94e55..a33576b20b52 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -10,6 +10,8 @@
 #ifndef __ASM_ATOMIC_LSE_H
 #define __ASM_ATOMIC_LSE_H
 
+#include <linux/overflow.h>
+
 #define ATOMIC_OP(op, asm_op)						\
 static __always_inline void						\
 __lse_atomic_##op(int i, atomic_t *v)					\
@@ -82,13 +84,13 @@ ATOMIC_FETCH_OP_SUB(        )
 static __always_inline int						\
 __lse_atomic_add_return##name(int i, atomic_t *v)			\
 {									\
-	return __lse_atomic_fetch_add##name(i, v) + i;			\
+	return wrapping_add(int, __lse_atomic_fetch_add##name(i, v), i);\
 }									\
 									\
 static __always_inline int						\
 __lse_atomic_sub_return##name(int i, atomic_t *v)			\
 {									\
-	return __lse_atomic_fetch_sub(i, v) - i;			\
+	return wrapping_sub(int, __lse_atomic_fetch_sub(i, v), i);	\
 }
 
 ATOMIC_OP_ADD_SUB_RETURN(_relaxed)
@@ -189,13 +191,13 @@ ATOMIC64_FETCH_OP_SUB(        )
 static __always_inline long						\
 __lse_atomic64_add_return##name(s64 i, atomic64_t *v)			\
 {									\
-	return __lse_atomic64_fetch_add##name(i, v) + i;		\
+	return wrapping_add(s64, __lse_atomic64_fetch_add##name(i, v), i); \
 }									\
 									\
 static __always_inline long						\
 __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)			\
 {									\
-	return __lse_atomic64_fetch_sub##name(i, v) - i;		\
+	return wrapping_sub(s64, __lse_atomic64_fetch_sub##name(i, v), i); \
 }
 
 ATOMIC64_OP_ADD_SUB_RETURN(_relaxed)
-- 
2.34.1



* [PATCH 3/4] locking/atomic: Annotate generic atomics with wrapping
  2024-04-24 19:17 [PATCH 0/4] Annotate atomics for signed integer wrap-around Kees Cook
  2024-04-24 19:17 ` [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition Kees Cook
  2024-04-24 19:17 ` [PATCH 2/4] arm64: atomics: lse: " Kees Cook
@ 2024-04-24 19:17 ` Kees Cook
  2024-04-24 19:17 ` [PATCH 4/4] ipv4: Silence intentional wrapping addition Kees Cook
  2024-04-26  7:40 ` [PATCH 1/4] locking/atomic/x86: " David Howells
  4 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2024-04-24 19:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Kees Cook, Will Deacon, Peter Zijlstra, Boqun Feng,
	Arnd Bergmann, Andrew Morton, linux-arch, Jakub Kicinski,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin, Catalin Marinas, David S. Miller, David Ahern,
	Eric Dumazet, Paolo Abeni, Paul E. McKenney, Uros Bizjak,
	linux-kernel, x86, linux-arm-kernel, netdev, linux-hardening

Because atomics depend on signed wrap-around, we need to use helpers to
perform the operations so that they are not instrumented by the signed
wrap-around sanitizer.

Refresh generated files by running scripts/atomic/gen-atomics.sh.
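
For readers following along: __signed_wrap is the function annotation
from the signed integer overflow sanitizer work, expected to expand
along these lines (a sketch; see the tree for the canonical definition):

	/* Sketch: suppress only the signed-integer-overflow sanitizer
	 * for the annotated function; a no-op when the sanitizer is
	 * not enabled.
	 */
	#ifdef CONFIG_UBSAN_SIGNED_WRAP
	# define __signed_wrap \
		__attribute__((no_sanitize("signed-integer-overflow")))
	#else
	# define __signed_wrap
	#endif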

Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
---
 include/asm-generic/atomic.h                 |  6 +++---
 include/asm-generic/atomic64.h               |  6 +++---
 include/linux/atomic/atomic-arch-fallback.h  | 19 ++++++++++---------
 include/linux/atomic/atomic-instrumented.h   |  3 ++-
 include/linux/atomic/atomic-long.h           |  3 ++-
 lib/atomic64.c                               | 10 +++++-----
 scripts/atomic/fallbacks/dec_if_positive     |  2 +-
 scripts/atomic/fallbacks/dec_unless_positive |  2 +-
 scripts/atomic/fallbacks/fetch_add_unless    |  2 +-
 scripts/atomic/fallbacks/inc_unless_negative |  2 +-
 scripts/atomic/gen-atomic-fallback.sh        |  1 +
 scripts/atomic/gen-atomic-instrumented.sh    |  1 +
 scripts/atomic/gen-atomic-long.sh            |  1 +
 13 files changed, 32 insertions(+), 26 deletions(-)

diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 22142c71d35a..1b54e9b1cd02 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -55,7 +55,7 @@ static inline int generic_atomic_fetch_##op(int i, atomic_t *v)		\
 #include <linux/irqflags.h>
 
 #define ATOMIC_OP(op, c_op)						\
-static inline void generic_atomic_##op(int i, atomic_t *v)		\
+static inline void __signed_wrap generic_atomic_##op(int i, atomic_t *v)\
 {									\
 	unsigned long flags;						\
 									\
@@ -65,7 +65,7 @@ static inline void generic_atomic_##op(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_OP_RETURN(op, c_op)					\
-static inline int generic_atomic_##op##_return(int i, atomic_t *v)	\
+static inline int __signed_wrap generic_atomic_##op##_return(int i, atomic_t *v)\
 {									\
 	unsigned long flags;						\
 	int ret;							\
@@ -78,7 +78,7 @@ static inline int generic_atomic_##op##_return(int i, atomic_t *v)	\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op)					\
-static inline int generic_atomic_fetch_##op(int i, atomic_t *v)		\
+static inline int __signed_wrap generic_atomic_fetch_##op(int i, atomic_t *v)\
 {									\
 	unsigned long flags;						\
 	int ret;							\
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index 100d24b02e52..0084867fe399 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -19,13 +19,13 @@ extern s64 generic_atomic64_read(const atomic64_t *v);
 extern void generic_atomic64_set(atomic64_t *v, s64 i);
 
 #define ATOMIC64_OP(op)							\
-extern void generic_atomic64_##op(s64 a, atomic64_t *v);
+extern void __signed_wrap generic_atomic64_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OP_RETURN(op)						\
-extern s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v);
+extern s64 __signed_wrap generic_atomic64_##op##_return(s64 a, atomic64_t *v);
 
 #define ATOMIC64_FETCH_OP(op)						\
-extern s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v);
+extern s64 __signed_wrap generic_atomic64_fetch_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OPS(op)	ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
 
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 956bcba5dbf2..2d2ebb4e0f8f 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -7,6 +7,7 @@
 #define _LINUX_ATOMIC_FALLBACK_H
 
 #include <linux/compiler.h>
+#include <linux/overflow.h>
 
 #if defined(arch_xchg)
 #define raw_xchg arch_xchg
@@ -2428,7 +2429,7 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!raw_atomic_try_cmpxchg(v, &c, c + a));
+	} while (!raw_atomic_try_cmpxchg(v, &c, wrapping_add(int, c, a)));
 
 	return c;
 #endif
@@ -2500,7 +2501,7 @@ raw_atomic_inc_unless_negative(atomic_t *v)
 	do {
 		if (unlikely(c < 0))
 			return false;
-	} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
+	} while (!raw_atomic_try_cmpxchg(v, &c, wrapping_add(int, c, 1)));
 
 	return true;
 #endif
@@ -2528,7 +2529,7 @@ raw_atomic_dec_unless_positive(atomic_t *v)
 	do {
 		if (unlikely(c > 0))
 			return false;
-	} while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
+	} while (!raw_atomic_try_cmpxchg(v, &c, wrapping_sub(int, c, 1)));
 
 	return true;
 #endif
@@ -2554,7 +2555,7 @@ raw_atomic_dec_if_positive(atomic_t *v)
 	int dec, c = raw_atomic_read(v);
 
 	do {
-		dec = c - 1;
+		dec = wrapping_sub(int, c, 1);
 		if (unlikely(dec < 0))
 			break;
 	} while (!raw_atomic_try_cmpxchg(v, &c, dec));
@@ -4554,7 +4555,7 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!raw_atomic64_try_cmpxchg(v, &c, c + a));
+	} while (!raw_atomic64_try_cmpxchg(v, &c, wrapping_add(s64, c, a)));
 
 	return c;
 #endif
@@ -4626,7 +4627,7 @@ raw_atomic64_inc_unless_negative(atomic64_t *v)
 	do {
 		if (unlikely(c < 0))
 			return false;
-	} while (!raw_atomic64_try_cmpxchg(v, &c, c + 1));
+	} while (!raw_atomic64_try_cmpxchg(v, &c, wrapping_add(s64, c, 1)));
 
 	return true;
 #endif
@@ -4654,7 +4655,7 @@ raw_atomic64_dec_unless_positive(atomic64_t *v)
 	do {
 		if (unlikely(c > 0))
 			return false;
-	} while (!raw_atomic64_try_cmpxchg(v, &c, c - 1));
+	} while (!raw_atomic64_try_cmpxchg(v, &c, wrapping_sub(s64, c, 1)));
 
 	return true;
 #endif
@@ -4680,7 +4681,7 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
 	s64 dec, c = raw_atomic64_read(v);
 
 	do {
-		dec = c - 1;
+		dec = wrapping_sub(s64, c, 1);
 		if (unlikely(dec < 0))
 			break;
 	} while (!raw_atomic64_try_cmpxchg(v, &c, dec));
@@ -4690,4 +4691,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
 }
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 14850c0b0db20c62fdc78ccd1d42b98b88d76331
+// 1278e3a674d0a36c2f0eb9f5fd0ddfcbf3690406
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index debd487fe971..af103189bd7d 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -15,6 +15,7 @@
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
 #include <linux/instrumented.h>
+#include <linux/overflow.h>
 
 /**
  * atomic_read() - atomic load with relaxed ordering
@@ -5050,4 +5051,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
 
 
 #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// ce5b65e0f1f8a276268b667194581d24bed219d4
+// b9cd8314e11c4c818fb469dbd18c7390fcaf9b3c
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 3ef844b3ab8a..07c1625a2d92 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -7,6 +7,7 @@
 #define _LINUX_ATOMIC_LONG_H
 
 #include <linux/compiler.h>
+#include <linux/overflow.h>
 #include <asm/types.h>
 
 #ifdef CONFIG_64BIT
@@ -1809,4 +1810,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
 }
 
 #endif /* _LINUX_ATOMIC_LONG_H */
-// 1c4a26fc77f345342953770ebe3c4d08e7ce2f9a
+// 01a5fe70d091e84c1de5eea7e9c09ebeaf7799b3
diff --git a/lib/atomic64.c b/lib/atomic64.c
index caf895789a1e..25cc8993d7da 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -67,7 +67,7 @@ void generic_atomic64_set(atomic64_t *v, s64 i)
 EXPORT_SYMBOL(generic_atomic64_set);
 
 #define ATOMIC64_OP(op, c_op)						\
-void generic_atomic64_##op(s64 a, atomic64_t *v)			\
+void __signed_wrap generic_atomic64_##op(s64 a, atomic64_t *v)		\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -79,7 +79,7 @@ void generic_atomic64_##op(s64 a, atomic64_t *v)			\
 EXPORT_SYMBOL(generic_atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(op, c_op)					\
-s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v)		\
+s64 __signed_wrap generic_atomic64_##op##_return(s64 a, atomic64_t *v)	\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -93,7 +93,7 @@ s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v)		\
 EXPORT_SYMBOL(generic_atomic64_##op##_return);
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v)			\
+s64 __signed_wrap generic_atomic64_fetch_##op(s64 a, atomic64_t *v)	\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -135,7 +135,7 @@ s64 generic_atomic64_dec_if_positive(atomic64_t *v)
 	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
-	val = v->counter - 1;
+	val = wrapping_sub(typeof(val), v->counter, 1);
 	if (val >= 0)
 		v->counter = val;
 	raw_spin_unlock_irqrestore(lock, flags);
@@ -181,7 +181,7 @@ s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
 	if (val != u)
-		v->counter += a;
+		wrapping_assign_add(v->counter, a);
 	raw_spin_unlock_irqrestore(lock, flags);
 
 	return val;
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index f65c11b4b85b..910a6d4ef398 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -2,7 +2,7 @@ cat <<EOF
 	${int} dec, c = raw_${atomic}_read(v);
 
 	do {
-		dec = c - 1;
+		dec = wrapping_sub(${int}, c, 1);
 		if (unlikely(dec < 0))
 			break;
 	} while (!raw_${atomic}_try_cmpxchg(v, &c, dec));
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index d025361d7b85..327451527825 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -4,7 +4,7 @@ cat <<EOF
 	do {
 		if (unlikely(c > 0))
 			return false;
-	} while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1));
+	} while (!raw_${atomic}_try_cmpxchg(v, &c, wrapping_sub(${int}, c, 1)));
 
 	return true;
 EOF
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 8db7e9e17fac..a9a11675a4d7 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -4,7 +4,7 @@ cat << EOF
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!raw_${atomic}_try_cmpxchg(v, &c, c + a));
+	} while (!raw_${atomic}_try_cmpxchg(v, &c, wrapping_add(${int}, c, a)));
 
 	return c;
 EOF
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 7b4b09868842..0275d3c683eb 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -4,7 +4,7 @@ cat <<EOF
 	do {
 		if (unlikely(c < 0))
 			return false;
-	} while (!raw_${atomic}_try_cmpxchg(v, &c, c + 1));
+	} while (!raw_${atomic}_try_cmpxchg(v, &c, wrapping_add(${int}, c, 1)));
 
 	return true;
 EOF
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index f80d69cfeb1f..60f5adf3a022 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -297,6 +297,7 @@ cat << EOF
 #define _LINUX_ATOMIC_FALLBACK_H
 
 #include <linux/compiler.h>
+#include <linux/overflow.h>
 
 EOF
 
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 592f3ec89b5f..fbc6c2f0abd3 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -146,6 +146,7 @@ cat << EOF
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
 #include <linux/instrumented.h>
+#include <linux/overflow.h>
 
 EOF
 
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 9826be3ba986..ae6d549c9079 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -75,6 +75,7 @@ cat << EOF
 #define _LINUX_ATOMIC_LONG_H
 
 #include <linux/compiler.h>
+#include <linux/overflow.h>
 #include <asm/types.h>
 
 #ifdef CONFIG_64BIT
-- 
2.34.1



* [PATCH 4/4] ipv4: Silence intentional wrapping addition
  2024-04-24 19:17 [PATCH 0/4] Annotate atomics for signed integer wrap-around Kees Cook
                   ` (2 preceding siblings ...)
  2024-04-24 19:17 ` [PATCH 3/4] locking/atomic: Annotate generic atomics with wrapping Kees Cook
@ 2024-04-24 19:17 ` Kees Cook
  2024-04-26  7:40 ` [PATCH 1/4] locking/atomic/x86: " David Howells
  4 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2024-04-24 19:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Kees Cook, Jakub Kicinski, David S. Miller, David Ahern,
	Eric Dumazet, Paolo Abeni, netdev, Will Deacon, Peter Zijlstra,
	Boqun Feng, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Catalin Marinas, Arnd Bergmann,
	Andrew Morton, Paul E. McKenney, Uros Bizjak, linux-kernel, x86,
	linux-arm-kernel, linux-arch, linux-hardening

The overflow sanitizer quickly noticed what appears to have been an old
sore spot involving intended wrap-around:

[   22.192362] ------------[ cut here ]------------
[   22.193329] UBSAN: signed-integer-overflow in ../arch/x86/include/asm/atomic.h:85:11
[   22.194844] 1469769800 + 1671667352 cannot be represented in type 'int'
[   22.195975] CPU: 2 PID: 2260 Comm: nmbd Not tainted 6.7.0 #1
[   22.196927] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
[   22.198231] Call Trace:
[   22.198641]  <TASK>
[   22.198641]  dump_stack_lvl+0x64/0x80
[   22.199533]  handle_overflow+0x152/0x1a0
[   22.200382]  __ip_select_ident+0xe3/0x100
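
To make the report concrete: the operands in the splat exceed INT_MAX
and wrap in 2s complement, which is exactly the behavior the ident
generator relies on:

	1469769800 + 1671667352 = 3141437152    (> INT_MAX = 2147483647)
	3141437152 - 2^32       = -1153530144   (the wrapped int result)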

Explicitly mark ip_idents_reserve() as performing wrapping signed
arithmetic. Update the type of the passed segs argument to u32, since
that is how it is used (callers pass either a u16 or a literal "1", but
it is used with a wrapping int, so it is effectively a u32). Update the
comment to mention the annotation instead of -fno-strict-overflow, which
is no longer the issue.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: David Ahern <dsahern@kernel.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: netdev@vger.kernel.org
---
 include/net/ip.h |  4 ++--
 net/ipv4/route.c | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/net/ip.h b/include/net/ip.h
index 25cb688bdc62..09d502a0ae30 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -537,10 +537,10 @@ void ip_dst_metrics_put(struct dst_entry *dst)
 		kfree(p);
 }
 
-void __ip_select_ident(struct net *net, struct iphdr *iph, int segs);
+void __ip_select_ident(struct net *net, struct iphdr *iph, u32 segs);
 
 static inline void ip_select_ident_segs(struct net *net, struct sk_buff *skb,
-					struct sock *sk, int segs)
+					struct sock *sk, u32 segs)
 {
 	struct iphdr *iph = ip_hdr(skb);
 
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index c8f76f56dc16..400e7a16fdba 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -458,7 +458,7 @@ static u32 *ip_tstamps __read_mostly;
  * if one generator is seldom used. This makes hard for an attacker
  * to infer how many packets were sent between two points in time.
  */
-static u32 ip_idents_reserve(u32 hash, int segs)
+static __signed_wrap u32 ip_idents_reserve(u32 hash, u32 segs)
 {
 	u32 bucket, old, now = (u32)jiffies;
 	atomic_t *p_id;
@@ -473,14 +473,14 @@ static u32 ip_idents_reserve(u32 hash, int segs)
 	if (old != now && cmpxchg(p_tstamp, old, now) == old)
 		delta = get_random_u32_below(now - old);
 
-	/* If UBSAN reports an error there, please make sure your compiler
-	 * supports -fno-strict-overflow before reporting it that was a bug
-	 * in UBSAN, and it has been fixed in GCC-8.
+	/* If UBSAN reports an error here, please make sure your arch's
+	 * atomic_add_return() implementation has been annotated with
+	 * __signed_wrap or uses wrapping_add() internally.
 	 */
 	return atomic_add_return(segs + delta, p_id) - segs;
 }
 
-void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
+void __ip_select_ident(struct net *net, struct iphdr *iph, u32 segs)
 {
 	u32 hash, id;
 
-- 
2.34.1



* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 19:17 ` [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition Kees Cook
@ 2024-04-24 22:41   ` Peter Zijlstra
  2024-04-24 22:45     ` Kees Cook
  2024-04-24 22:51     ` Peter Zijlstra
  0 siblings, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2024-04-24 22:41 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:

> @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>  
>  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
>  {
> -	return i + xadd(&v->counter, i);
> +	return wrapping_add(int, i, xadd(&v->counter, i));
>  }
>  #define arch_atomic_add_return arch_atomic_add_return

this is going to get old *real* quick :-/

This must be the ugliest possible way to annotate all this, and then
litter the kernel with all this... urgh.



* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 22:41   ` Peter Zijlstra
@ 2024-04-24 22:45     ` Kees Cook
  2024-04-24 22:54       ` Peter Zijlstra
  2024-04-25 10:15       ` Mark Rutland
  2024-04-24 22:51     ` Peter Zijlstra
  1 sibling, 2 replies; 22+ messages in thread
From: Kees Cook @ 2024-04-24 22:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> 
> > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> >  
> >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> >  {
> > -	return i + xadd(&v->counter, i);
> > +	return wrapping_add(int, i, xadd(&v->counter, i));
> >  }
> >  #define arch_atomic_add_return arch_atomic_add_return
> 
> this is going to get old *real* quick :-/
> 
> This must be the ugliest possible way to annotate all this, and then
> litter the kernel with all this... urgh.

I'm expecting to have explicit wrapping type annotations soon[1], but for
the atomics, it's kind of a wash on how intrusive the annotations get. I
had originally wanted to mark the function (as I did in other cases)
rather than using the helper, but Mark preferred it this way. I'm happy
to do whatever! :)

-Kees

[1] https://github.com/llvm/llvm-project/pull/86618

-- 
Kees Cook


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 22:41   ` Peter Zijlstra
  2024-04-24 22:45     ` Kees Cook
@ 2024-04-24 22:51     ` Peter Zijlstra
  1 sibling, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2024-04-24 22:51 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> 
> > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> >  
> >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> >  {
> > -	return i + xadd(&v->counter, i);
> > +	return wrapping_add(int, i, xadd(&v->counter, i));
> >  }
> >  #define arch_atomic_add_return arch_atomic_add_return
> 
> this is going to get old *real* quick :-/
> 
> This must be the ugliest possible way to annotate all this, and then
> litter the kernel with all this... urgh.

Also, what drugs are involved with __builtin_add_overflow()? Per
-fno-strict-overflow everything is 2s complement and you can just do the
unsigned add.

Over the years we've been writing code with the express knowledge that
everything wraps properly, this annotation is going to be utter pain.

As I've said before, add an explicit non-wrapping type and use that for
the cases you care about actually not wrapping.

NAK


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 22:45     ` Kees Cook
@ 2024-04-24 22:54       ` Peter Zijlstra
  2024-04-24 23:05         ` Peter Zijlstra
  2024-04-24 23:20         ` Kees Cook
  2024-04-25 10:15       ` Mark Rutland
  1 sibling, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2024-04-24 22:54 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > 
> > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > >  
> > >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > >  {
> > > -	return i + xadd(&v->counter, i);
> > > +	return wrapping_add(int, i, xadd(&v->counter, i));
> > >  }
> > >  #define arch_atomic_add_return arch_atomic_add_return
> > 
> > this is going to get old *real* quick :-/
> > 
> > This must be the ugliest possible way to annotate all this, and then
> > litter the kernel with all this... urgh.
> 
> I'm expecting to have explicit wrapping type annotations soon[1], but for
> the atomics, it's kind of a wash on how intrusive the annotations get. I
> had originally wanted to mark the function (as I did in other cases)
> rather than using the helper, but Mark preferred it this way. I'm happy
> to do whatever! :)
> 
> -Kees
> 
> [1] https://github.com/llvm/llvm-project/pull/86618

This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
We've been writing code for years under that assumption.

You want to mark the non-wrapping case.





* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 22:54       ` Peter Zijlstra
@ 2024-04-24 23:05         ` Peter Zijlstra
  2024-04-24 23:30           ` Kees Cook
  2024-04-24 23:20         ` Kees Cook
  1 sibling, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2024-04-24 23:05 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > > 
> > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > > >  
> > > >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > >  {
> > > > -	return i + xadd(&v->counter, i);
> > > > +	return wrapping_add(int, i, xadd(&v->counter, i));
> > > >  }
> > > >  #define arch_atomic_add_return arch_atomic_add_return
> > > 
> > > this is going to get old *real* quick :-/
> > > 
> > > This must be the ugliest possible way to annotate all this, and then
> > > litter the kernel with all this... urgh.
> > 
> > I'm expecting to have explicit wrapping type annotations soon[1], but for
> > the atomics, it's kind of a wash on how intrusive the annotations get. I
> > had originally wanted to mark the function (as I did in other cases)
> > rather than using the helper, but Mark preferred it this way. I'm happy
> > to do whatever! :)
> > 
> > -Kees
> > 
> > [1] https://github.com/llvm/llvm-project/pull/86618
> 
> This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> We've been writing code for years under that assumption.
> 
> You want to mark the non-wrapping case.

That is, anything that actively warns about signed overflow when built
with -fno-strict-overflow is a bug. If you want this warning you have to
explicitly mark things.

Signed overflow is not UB, is not a bug.

Now, it might be unexpected in some places, but fundamentally we run on
2s complement and expect 2s complement. If you want more, mark it so.


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 22:54       ` Peter Zijlstra
  2024-04-24 23:05         ` Peter Zijlstra
@ 2024-04-24 23:20         ` Kees Cook
  2024-04-25  9:17           ` Peter Zijlstra
  1 sibling, 1 reply; 22+ messages in thread
From: Kees Cook @ 2024-04-24 23:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > > 
> > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > > >  
> > > >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > >  {
> > > > -	return i + xadd(&v->counter, i);
> > > > +	return wrapping_add(int, i, xadd(&v->counter, i));
> > > >  }
> > > >  #define arch_atomic_add_return arch_atomic_add_return
> > > 
> > > this is going to get old *real* quick :-/
> > > 
> > > This must be the ugliest possible way to annotate all this, and then
> > > litter the kernel with all this... urgh.
> > 
> > I'm expecting to have explicit wrapping type annotations soon[1], but for
> > the atomics, it's kind of a wash on how intrusive the annotations get. I
> > had originally wanted to mark the function (as I did in other cases)
> > rather than using the helper, but Mark preferred it this way. I'm happy
> > to do whatever! :)
> > 
> > -Kees
> > 
> > [1] https://github.com/llvm/llvm-project/pull/86618
> 
> This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> We've been writing code for years under that assumption.

Right, which is why this is going to take time to roll out. :) What we
were really doing with -fno-strict-overflow was getting rid of undefined
behavior. That was really really horrible; we don't need the compiler
hallucinating.

> You want to mark the non-wrapping case.

What we want is lack of ambiguity. Having done these kinds of things in
the kernel for a while now, I have strong evidence that we get much better
results with the "fail safe" approach, but start by making it non-fatal.
That way we get full coverage, but we don't melt the world for anyone
that doesn't want it, and we can shake things out over a few years. For
example, it has worked well for CONFIG_FORTIFY, CONFIG_UBSAN_BOUNDS,
KCFI, etc.

The riskier condition is having something wrap when it wasn't expected
(e.g. allocations, pointer offsets, etc), so we start by defining our
regular types as non-wrapping, and annotate the wrapping types (or
specific calculations or functions).

For signed types in particular, wrapping is overwhelmingly the
uncommon case, so from a purely "how much annotations is needed"
perspective, marking wrapping is also easiest. Yes, there are cases of
expected wrapping, but we'll track them all down and get them marked
unambiguously. One thing on the short list is atomics, so here we are. :)

-Kees

-- 
Kees Cook


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 23:05         ` Peter Zijlstra
@ 2024-04-24 23:30           ` Kees Cook
  2024-04-25  9:28             ` Peter Zijlstra
  0 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2024-04-24 23:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 01:05:00AM +0200, Peter Zijlstra wrote:
> On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > > > 
> > > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > > > >  
> > > > >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > > >  {
> > > > > -	return i + xadd(&v->counter, i);
> > > > > +	return wrapping_add(int, i, xadd(&v->counter, i));
> > > > >  }
> > > > >  #define arch_atomic_add_return arch_atomic_add_return
> > > > 
> > > > this is going to get old *real* quick :-/
> > > > 
> > > > This must be the ugliest possible way to annotate all this, and then
> > > > litter the kernel with all this... urgh.
> > > 
> > > I'm expecting to have explicit wrapping type annotations soon[1], but for
> > > the atomics, it's kind of a wash on how intrusive the annotations get. I
> > > had originally wanted to mark the function (as I did in other cases)
> > > rather than using the helper, but Mark preferred it this way. I'm happy
> > > to do whatever! :)
> > > 
> > > -Kees
> > > 
> > > [1] https://github.com/llvm/llvm-project/pull/86618
> > 
> > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > We've been writing code for years under that assumption.
> > 
> > You want to mark the non-wrapping case.
> 
> That is, anything that actively warns about signed overflow when built
> with -fno-strict-overflow is a bug. If you want this warning you have to
> explicitly mark things.

This is confusing UB with "overflow detection". We're doing the latter.

> Signed overflow is not UB, is not a bug.
> 
> Now, it might be unexpected in some places, but fundamentally we run on
> 2s complement and expect 2s complement. If you want more, mark it so.

Regular C never provided us with enough choice in types to be able to
select the overflow resolution strategy. :( So we're stuck mixing
expectations into our types. (One early defense you were involved in
touched on this too: refcount_t uses a saturating overflow strategy, as
that works best for how it gets used.)
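
For contrast, a minimal sketch of the saturating strategy (hypothetical
code; the real refcount_t saturates at a different boundary and with
more machinery):

	#include <limits.h>

	/* Hypothetical saturating add: pin at the maximum instead of
	 * wrapping, so a leaked counter can never wrap back to zero.
	 */
	static inline unsigned int saturating_add(unsigned int a,
						  unsigned int b)
	{
		unsigned int sum;

		if (__builtin_add_overflow(a, b, &sum))
			return UINT_MAX;
		return sum;
	}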

Regardless, yes, someone intent on wrapping gets their expected 2s
complement results, but in the cases were a few values started collecting
in some dark corner of protocol handling, having a calculation wrap around
is at best a behavioral bug and at worst a total system compromise.
Wrapping is the uncommon case here, so we mark those.

-- 
Kees Cook


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 23:20         ` Kees Cook
@ 2024-04-25  9:17           ` Peter Zijlstra
  2024-04-25 17:39             ` Kees Cook
  0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2024-04-25  9:17 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Wed, Apr 24, 2024 at 04:20:20PM -0700, Kees Cook wrote:

> > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > We've been writing code for years under that assumption.
> 
> Right, which is why this is going to take time to roll out. :) What we
> were really doing with -fno-strict-overflow was getting rid of undefined
> behavior. That was really really horrible; we don't need the compiler
> hallucinating.

Right, but that then got us well defined semantics for signed overflow.

> > You want to mark the non-wrapping case.
> 
> What we want is lack of ambiguity. Having done these kinds of things in
> the kernel for a while now, I have strong evidence that we get much better
> results with the "fail safe" approach, but start by making it non-fatal.
> That way we get full coverage, but we don't melt the world for anyone
> that doesn't want it, and we can shake things out over a few years. For
> example, it has worked well for CONFIG_FORTIFY, CONFIG_UBSAN_BOUNDS,
> KCFI, etc.

The non-fatal argument doesn't have bearing on the mark-wrap versus
mark-non-wrap argument though.

> The riskier condition is having something wrap when it wasn't expected
> (e.g. allocations, pointer offsets, etc), so we start by defining our
> regular types as non-wrapping, and annotate the wrapping types (or
> specific calculations or functions).

But most of those you mention are unsigned. Are you saying you're
making all unsigned variables non-wrap by default too? That's bloody
insane.

> For signed types in particular, wrapping is overwhelmingly the
> uncommon case, so from a purely "how much annotations is needed"
> perspective, marking wrapping is also easiest. Yes, there are cases of
> expected wrapping, but we'll track them all down and get them marked
> unambiguously. 

But I am confused now, because above you seem to imply you're making
unsigned non-wrap too, and there wrapping is *far* more common, and I
must say I hate this wrapping_add() thing with a passion.

> One thing on the short list is atomics, so here we are. :)

Well, there are wrapping and non-wrapping users of atomic. If only C had
generics etc.. (and yeah, _Generic doesn't really count).


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 23:30           ` Kees Cook
@ 2024-04-25  9:28             ` Peter Zijlstra
  2024-04-25 10:19               ` Mark Rutland
  0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2024-04-25  9:28 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Wed, Apr 24, 2024 at 04:30:50PM -0700, Kees Cook wrote:

> > That is, anything that actively warns about signed overflow when built
> > with -fno-strict-overflow is a bug. If you want this warning you have to
> > explicitly mark things.
> 
> This is confusing UB with "overflow detection". We're doing the latter.

Well, all of this is confusing to me because it is not presented
coherently.

The traditional 'must not let signed overflow' is because of the UB
nonsense, which we fixed.

> > Signed overflow is not UB, is not a bug.
> > 
> > Now, it might be unexpected in some places, but fundamentally we run on
> > 2s complement and expect 2s complement. If you want more, mark it so.
> 
> Regular C never provided us with enough choice in types to be able to
> select the overflow resolution strategy. :( So we're stuck mixing
> expectations into our types.

Traditionally C has explicit wrapping for unsigned and UB on signed. We
fixed the UB, so now expect wrapping for everything.

You want to add overflow detection, so you should make that a special
case and preserve semantics for existing code.

Also I would very strongly suggest you add an overflow qualifier to the
type system and please provide sane means of qualifier manipulation --
stripping qualifiers is painful :/

> Regardless, yes, someone intent on wrapping gets their expected 2s
> complement results, but in the cases were a few values started collecting
> in some dark corner of protocol handling, having a calculation wrap around
> is at best a behavioral bug and at worst a total system compromise.
> Wrapping is the uncommon case here, so we mark those.

Then feel free to sprinkle copious amounts of 'overflow' qualifiers in
the protocol handling code.


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 22:45     ` Kees Cook
  2024-04-24 22:54       ` Peter Zijlstra
@ 2024-04-25 10:15       ` Mark Rutland
  2024-04-25 17:19         ` Kees Cook
  1 sibling, 1 reply; 22+ messages in thread
From: Mark Rutland @ 2024-04-25 10:15 UTC (permalink / raw)
  To: Kees Cook
  Cc: Peter Zijlstra, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > 
> > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > >  
> > >  static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > >  {
> > > -	return i + xadd(&v->counter, i);
> > > +	return wrapping_add(int, i, xadd(&v->counter, i));
> > >  }
> > >  #define arch_atomic_add_return arch_atomic_add_return
> > 
> > this is going to get old *real* quick :-/
> > 
> > This must be the ugliest possible way to annotate all this, and then
> > litter the kernel with all this... urgh.
> 
> I'm expecting to have explicit wrapping type annotations soon[1], but for
> the atomics, it's kind of a wash on how intrusive the annotations get. I
> had originally wanted to mark the function (as I did in other cases)
> rather than using the helper, but Mark preferred it this way. I'm happy
> to do whatever! :)

To be clear, I dislike the function annotation because then it applies to
*everything* within the function, which is overly broad and the intent becomes
unclear. That makes it painful to refactor the code (since e.g. if we want to
add another operation to the function which *should not* wrap, that gets
silenced too).

I'm happy with something that applies to specific types/variables or specific
operations (which is what these patches do).

As to whether or not we do this at all I'll have to defer to Peter.

Mark.


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-25  9:28             ` Peter Zijlstra
@ 2024-04-25 10:19               ` Mark Rutland
  0 siblings, 0 replies; 22+ messages in thread
From: Mark Rutland @ 2024-04-25 10:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Will Deacon, Boqun Feng, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 11:28:12AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 04:30:50PM -0700, Kees Cook wrote:
> 
> > > That is, anything that actively warns about signed overflow when built
> > > with -fno-strict-overflow is a bug. If you want this warning you have to
> > > explicitly mark things.
> > 
> > This is confusing UB with "overflow detection". We're doing the latter.
> 
> Well, all of this is confusing to me because it is not presented
> coherently.
> 
> The traditional 'must not let signed overflow' is because of the UB
> nonsense, which we fixed.
> 
> > > Signed overflow is not UB, is not a bug.
> > > 
> > > Now, it might be unexpected in some places, but fundamentally we run on
> > > 2s complement and expect 2s complement. If you want more, mark it so.
> > 
> > Regular C never provided us with enough choice in types to be able to
> > select the overflow resolution strategy. :( So we're stuck mixing
> > expectations into our types.
> 
> Traditionally C has explicit wrapping for unsigned and UB on signed. We
> fixed the UB, so now expect wrapping for everything.
> 
> You want to add overflow, so you should make that a special and preserve
> semantics for existing code.
> 
> Also I would very strongly suggest you add an overflow qualifier to the
> type system and please provide sane means of qualifier manipulation --
> stripping qualifiers is painful :/

I agree that having an overflow/nooverflow qualifier that's separate from
signed/unsigned would make more sense than inferring that from signed vs
unsigned.

Mark.


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-25 10:15       ` Mark Rutland
@ 2024-04-25 17:19         ` Kees Cook
  0 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2024-04-25 17:19 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Peter Zijlstra, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 11:15:17AM +0100, Mark Rutland wrote:
> To be clear, I dislike the function annotation because then it applies to
> *everything* within the function, which is overly broad and the intent becomes
> unclear. That makes it painful to refactor the code (since e.g. if we want to
> add another operation to the function which *should not* wrap, that gets
> silenced too).

Yeah, I find that a convincing argument for larger functions, but it
seemed to me that for these 1-line implementations it was okay. But
regardless, yup, no function-level annotation here.

> I'm happy with something that applies to specific types/variables or specific
> operations (which is what these patches do).

Thanks!

-- 
Kees Cook


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-25  9:17           ` Peter Zijlstra
@ 2024-04-25 17:39             ` Kees Cook
  0 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2024-04-25 17:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Will Deacon, Boqun Feng, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Jakub Kicinski, Catalin Marinas, Arnd Bergmann, Andrew Morton,
	David S. Miller, David Ahern, Eric Dumazet, Paolo Abeni,
	Paul E. McKenney, Uros Bizjak, linux-kernel, linux-arm-kernel,
	linux-arch, netdev, linux-hardening

On Thu, Apr 25, 2024 at 11:17:52AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 04:20:20PM -0700, Kees Cook wrote:
> 
> > > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > > We've been writing code for years under that assumption.
> > 
> > Right, which is why this is going to take time to roll out. :) What we
> > were really doing with -fno-strict-overflow was getting rid of undefined
> > behavior. That was really really horrible; we don't need the compiler
> > hallucinating.
> 
> Right, but that then got us well defined semantics for signed overflow.

Yes, and this gets us to the next step: disambiguation for general
users. It's good that we have a well-defined overflow resolution strategy,
but our decades of persistent wrap-around flaws in the kernel show
that many devs (even experienced ones) produce code with unexpected and
unwanted (to the logic of the code) wrap-around. So we have to find a
way to distinguish wrapping and non-wrapping operations or types up
front and in a clear way.

> 
> > > You want to mark the non-wrapping case.
> > 
> > What we want is lack of ambiguity. Having done these kinds of things in
> > the kernel for a while now, I have strong evidence that we get much better
> > results with the "fail safe" approach, but start by making it non-fatal.
> > That way we get full coverage, but we don't melt the world for anyone
> > that doesn't want it, and we can shake things out over a few years. For
> > example, it has worked well for CONFIG_FORTIFY, CONFIG_UBSAN_BOUNDS,
> > KCFI, etc.
> 
> The non-fatal argument doesn't have bearing on the mark warp or mark
> non-wrap argument though.

It bears on the strategy for refactoring our code to gain unambiguous
coverage. Since we can't sanely have a flag day, we have to go
piecemeal, and there will continue to be places the coverage missed,
so we want to work through marking the wrapping cases without BUGing
the kernel. (We don't care about catching the non-wrapping cases --
the exceptional condition is hitting an overflow.)

> > The riskier condition is having something wrap when it wasn't expected
> > (e.g. allocations, pointer offsets, etc), so we start by defining our
> > regular types as non-wrapping, and annotate the wrapping types (or
> > specific calculations or functions).
> 
> But but most of those you mention are unsigned. Are you saying you're
> making all unsigned variables non-wrap by default too? That's bloody
> insane.

We have a mix (and regular confusion, even in core code) where "int"
gets passed around even though at one end or the other of a call chain
it's actually a u32 or u16 or whatever. Regardless, yes, the next step
after signed overflow mitigation would be unsigned overflow mitigation,
and as you suggest, that's much trickier.
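To illustrate the kind of int/u32 mix I mean, a contrived sketch (all
names here are hypothetical):

	static u16 hdr_len(const struct pkt *p);	/* really 16 bits */

	static int payload_total(const struct pkt *p, int nr_frags)
	{
		int len = hdr_len(p);	/* u16 silently promoted to "int" */

		return len * nr_frags;	/* signed "int" math on what looked
					 * unsigned at the source; can wrap */
	}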

> > For signed types in particular, wrapping is overwhelmingly the
> > uncommon case, so from a purely "how much annotations is needed"
> > perspective, marking wrapping is also easiest. Yes, there are cases of
> > expected wrapping, but we'll track them all down and get them marked
> > unambiguously. 
> 
> But I am confused now, because above you seem to imply you're making
> unsigned non-wrap too, and there wrapping is *far* more common, and I
> must say I hate this wrapping_add() thing with a passion.

Yes, most people are not fans of the wrapping_*() helpers, which is why
I'm trying to get a typedef attribute created. But again, to gain the
"fail safe by default" coverage, we have to start with the assumption
that the default is non-wrapping, and mark the types that aren't.
(Otherwise we're not actually catching the unexpected cases.) And no,
it's not going to happen overnight. It's taken almost 5 years to
disambiguate array bounds and we're still not done. :)

> > One thing on the short list is atomics, so here we are. :)
> 
> Well, there are wrapping and non-wrapping users of atomic. If only C had
> generics etc.. (and yeah, _Generic doesn't really count).

Non-wrapping users of atomics should be using refcount_t, which is
our non-wrapping atomic type. But regardless, atomics are internally
wrapping, yes?
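For the refcount_t case, a minimal sketch (the struct and names are
made up):

	struct obj {
		refcount_t users;
	};

	/* Given a heap-allocated struct obj *obj: */
	refcount_set(&obj->users, 1);	/* starts life with one reference */
	refcount_inc(&obj->users);	/* saturates and WARNs on overflow
					 * instead of wrapping like
					 * atomic_inc() would */
	if (refcount_dec_and_test(&obj->users))
		kfree(obj);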

Anyway, I suspect this whole plan needs wider discussion. I will write
up a more complete RFC that covers my plans, including the rationale for
why we should adopt this in a particular way. (These kinds of strategic
RFCs don't usually get much traction since our development style is much
more "show the patches", which is why I've just been sending patches.
But since it's a pretty big topic, I'll give it a shot...)

-- 
Kees Cook


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-24 19:17 [PATCH 0/4] Annotate atomics for signed integer wrap-around Kees Cook
                   ` (3 preceding siblings ...)
  2024-04-24 19:17 ` [PATCH 4/4] ipv4: Silence intentional wrapping addition Kees Cook
@ 2024-04-26  7:40 ` David Howells
  2024-05-02 14:57   ` Kees Cook
  4 siblings, 1 reply; 22+ messages in thread
From: David Howells @ 2024-04-26  7:40 UTC (permalink / raw)
  To: Kees Cook
  Cc: dhowells, Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Jakub Kicinski, Catalin Marinas, Arnd Bergmann,
	Andrew Morton, David S. Miller, David Ahern, Eric Dumazet,
	Paolo Abeni, Paul E. McKenney, Uros Bizjak, linux-kernel,
	linux-arm-kernel, linux-arch, netdev, linux-hardening

Kees Cook <keescook@chromium.org> wrote:

> -	return i + xadd(&v->counter, i);
> +	return wrapping_add(int, i, xadd(&v->counter, i));

Ewww.  Can't you just mark the variable as wrapping in some way, either by:

	unsigned int __cyclic counter;

or, to use rxrpc packet sequence numbers as an example:

	typedef unsigned int __cyclic rxrpc_seq_t;

	rxrpc_seq_t	tx_top;

As an added bonus, you can then get the compiler to spit out a warning
if you use <, <=, > or >= on the numbers.  (You should use before() and
after() instead.)
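
For reference, such helpers are conventionally built on wrap-safe signed
subtraction (compare before()/after() for TCP sequence numbers);
roughly:

	static inline bool before(rxrpc_seq_t a, rxrpc_seq_t b)
	{
		return (s32)(a - b) < 0;	/* ordering survives wrap */
	}
	#define after(a, b)	before((b), (a))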

David



* Re: [PATCH 2/4] arm64: atomics: lse: Silence intentional wrapping addition
  2024-04-24 19:17 ` [PATCH 2/4] arm64: atomics: lse: " Kees Cook
@ 2024-05-02 11:21   ` Will Deacon
  2024-05-02 15:00     ` Kees Cook
  0 siblings, 1 reply; 22+ messages in thread
From: Will Deacon @ 2024-05-02 11:21 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Peter Zijlstra, Boqun Feng, Catalin Marinas,
	linux-arm-kernel, Jakub Kicinski, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Arnd Bergmann,
	Andrew Morton, David S. Miller, David Ahern, Eric Dumazet,
	Paolo Abeni, Paul E. McKenney, Uros Bizjak, linux-kernel, x86,
	linux-arch, netdev, linux-hardening

On Wed, Apr 24, 2024 at 12:17:35PM -0700, Kees Cook wrote:
> Annotate atomic_add_return() and atomic_sub_return() to avoid signed
> overflow instrumentation. They are expected to wrap around.
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> Cc: Will Deacon <will@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/include/asm/atomic_lse.h | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)

How come the ll/sc routines (in atomic_ll_sc.h) don't need the same
treatment? If that's just an oversight, then maybe it's better to
instrument the higher-level wrappers in asm/atomic.h?

Will


* Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
  2024-04-26  7:40 ` [PATCH 1/4] locking/atomic/x86: " David Howells
@ 2024-05-02 14:57   ` Kees Cook
  0 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2024-05-02 14:57 UTC (permalink / raw)
  To: David Howells
  Cc: Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Jakub Kicinski, Catalin Marinas, Arnd Bergmann,
	Andrew Morton, David S. Miller, David Ahern, Eric Dumazet,
	Paolo Abeni, Paul E. McKenney, Uros Bizjak, linux-kernel,
	linux-arm-kernel, linux-arch, netdev, linux-hardening

On Fri, Apr 26, 2024 at 08:40:50AM +0100, David Howells wrote:
> Kees Cook <keescook@chromium.org> wrote:
> 
> > -	return i + xadd(&v->counter, i);
> > +	return wrapping_add(int, i, xadd(&v->counter, i));
> 
> Ewww.  Can't you just mark the variable as wrapping in some way, either by:
> 
> 	unsigned int __cyclic counter;

Yeah, that's the plan now. Justin is currently working on the "wraps"
attribute for Clang:
https://github.com/llvm/llvm-project/pull/86618
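
If that lands, the annotation moves to the type and plain operators work
again. A sketch based on that PR (the exact spelling is still being
settled there):

	typedef int __attribute__((wraps)) wrapping_int;

	static inline wrapping_int seq_advance(wrapping_int seq, int n)
	{
		return seq + n;	/* no sanitizer trap: the type is
				 * declared as wrapping */
	}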

-- 
Kees Cook


* Re: [PATCH 2/4] arm64: atomics: lse: Silence intentional wrapping addition
  2024-05-02 11:21   ` Will Deacon
@ 2024-05-02 15:00     ` Kees Cook
  0 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2024-05-02 15:00 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Peter Zijlstra, Boqun Feng, Catalin Marinas,
	linux-arm-kernel, Jakub Kicinski, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Arnd Bergmann,
	Andrew Morton, David S. Miller, David Ahern, Eric Dumazet,
	Paolo Abeni, Paul E. McKenney, Uros Bizjak, linux-kernel, x86,
	linux-arch, netdev, linux-hardening

On Thu, May 02, 2024 at 12:21:28PM +0100, Will Deacon wrote:
> On Wed, Apr 24, 2024 at 12:17:35PM -0700, Kees Cook wrote:
> > Annotate atomic_add_return() and atomic_sub_return() to avoid signed
> > overflow instrumentation. They are expected to wrap around.
> > 
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > ---
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: linux-arm-kernel@lists.infradead.org
> > ---
> >  arch/arm64/include/asm/atomic_lse.h | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> How come the ll/sc routines (in atomic_ll_sc.h) don't need the same
> treatment? If that's just an oversight, then maybe it's better to
> instrument the higher-level wrappers in asm/atomic.h?

Those are all written in asm, so there's no open-coded C arithmetic that
the sanitizers will notice. All is well there! :)
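
Roughly, and heavily simplified from the arm64 headers:

	/* LSE: the final add happens in C, so UBSan instruments it
	 * (which is why this series annotates it). */
	static inline int __lse_atomic_add_return(int i, atomic_t *v)
	{
		return __lse_atomic_fetch_add(i, v) + i;
	}

	/* LL/SC: the add lives inside the asm template, invisible
	 * to the sanitizer. */
	static inline int __ll_sc_atomic_add_return(int i, atomic_t *v)
	{
		int result, tmp;

		asm volatile(
		"1:	ldxr	%w0, %2\n"
		"	add	%w0, %w0, %w3\n"
		"	stlxr	%w1, %w0, %2\n"
		"	cbnz	%w1, 1b\n"
		: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
		: "r" (i)
		: "memory");

		return result;
	}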

-- 
Kees Cook


end of thread

Thread overview: 22+ messages
2024-04-24 19:17 [PATCH 0/4] Annotate atomics for signed integer wrap-around Kees Cook
2024-04-24 19:17 ` [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition Kees Cook
2024-04-24 22:41   ` Peter Zijlstra
2024-04-24 22:45     ` Kees Cook
2024-04-24 22:54       ` Peter Zijlstra
2024-04-24 23:05         ` Peter Zijlstra
2024-04-24 23:30           ` Kees Cook
2024-04-25  9:28             ` Peter Zijlstra
2024-04-25 10:19               ` Mark Rutland
2024-04-24 23:20         ` Kees Cook
2024-04-25  9:17           ` Peter Zijlstra
2024-04-25 17:39             ` Kees Cook
2024-04-25 10:15       ` Mark Rutland
2024-04-25 17:19         ` Kees Cook
2024-04-24 22:51     ` Peter Zijlstra
2024-04-24 19:17 ` [PATCH 2/4] arm64: atomics: lse: " Kees Cook
2024-05-02 11:21   ` Will Deacon
2024-05-02 15:00     ` Kees Cook
2024-04-24 19:17 ` [PATCH 3/4] locking/atomic: Annotate generic atomics with wrapping Kees Cook
2024-04-24 19:17 ` [PATCH 4/4] ipv4: Silence intentional wrapping addition Kees Cook
2024-04-26  7:40 ` [PATCH 1/4] locking/atomic/x86: " David Howells
2024-05-02 14:57   ` Kees Cook
