linux-kernel.vger.kernel.org archive mirror
* [PATCH kcsan 0/8] KCSAN updates for v5.15
@ 2021-07-21 21:07 Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 1/8] kcsan: Improve some Kconfig comments Paul E. McKenney
                   ` (11 more replies)
  0 siblings, 12 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:07 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng

Hello!

This series contains KCSAN updates:

1.	Improve some Kconfig comments, courtesy of Marco Elver.

2.	Remove CONFIG_KCSAN_DEBUG, courtesy of Marco Elver.

3.	Introduce CONFIG_KCSAN_STRICT, courtesy of Marco Elver.

4.	Reduce get_ctx() uses in kcsan_found_watchpoint(), courtesy of
	Marco Elver.

5.	Rework atomic.h into permissive.h, courtesy of Marco Elver.

6.	Print if strict or non-strict during init, courtesy of Marco
	Elver.

7.	permissive: Ignore data-racy 1-bit value changes, courtesy of
	Marco Elver.

8.	Make strict mode imply interruptible watchers, courtesy of
	Marco Elver.

						Thanx, Paul

------------------------------------------------------------------------

 Documentation/dev-tools/kcsan.rst   |    8 ++++
 b/Documentation/dev-tools/kcsan.rst |    4 ++
 b/kernel/kcsan/core.c               |    9 ----
 b/kernel/kcsan/kcsan_test.c         |   32 ++++++++++++++++
 b/kernel/kcsan/permissive.h         |   47 ++++++++++++++++++++++++
 b/lib/Kconfig.kcsan                 |   16 ++++----
 kernel/kcsan/atomic.h               |   23 ------------
 kernel/kcsan/core.c                 |   68 +++++++++++++++++++++++++-----------
 kernel/kcsan/permissive.h           |   49 +++++++++++++++++++++++++
 lib/Kconfig.kcsan                   |   26 +++++++++++--
 10 files changed, 218 insertions(+), 64 deletions(-)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH kcsan 1/8] kcsan: Improve some Kconfig comments
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 2/8] kcsan: Remove CONFIG_KCSAN_DEBUG Paul E. McKenney
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

Improve comment for CC_HAS_TSAN_COMPOUND_READ_BEFORE_WRITE. Also shorten
the comment above the "strictness" configuration options.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 lib/Kconfig.kcsan | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index 0440f373248eb..6152fbd5cbb43 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -40,10 +40,14 @@ menuconfig KCSAN
 
 if KCSAN
 
-# Compiler capabilities that should not fail the test if they are unavailable.
 config CC_HAS_TSAN_COMPOUND_READ_BEFORE_WRITE
 	def_bool (CC_IS_CLANG && $(cc-option,-fsanitize=thread -mllvm -tsan-compound-read-before-write=1)) || \
 		 (CC_IS_GCC && $(cc-option,-fsanitize=thread --param tsan-compound-read-before-write=1))
+	help
+	  The compiler instruments plain compound read-write operations
+	  differently (++, --, +=, -=, |=, &=, etc.), which allows KCSAN to
+	  distinguish them from other plain accesses. This is currently
+	  supported by Clang 12 or later.
 
 config KCSAN_VERBOSE
 	bool "Show verbose reports with more information about system state"
@@ -169,13 +173,9 @@ config KCSAN_REPORT_ONCE_IN_MS
 	  reporting to avoid flooding the console with reports.  Setting this
 	  to 0 disables rate limiting.
 
-# The main purpose of the below options is to control reported data races (e.g.
-# in fuzzer configs), and are not expected to be switched frequently by other
-# users. We could turn some of them into boot parameters, but given they should
-# not be switched normally, let's keep them here to simplify configuration.
-#
-# The defaults below are chosen to be very conservative, and may miss certain
-# bugs.
+# The main purpose of the below options is to control reported data races, and
+# are not expected to be switched frequently by non-testers or at runtime.
+# The defaults are chosen to be conservative, and can miss certain bugs.
 
 config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
 	bool "Report races of unknown origin"
-- 
2.31.1.189.g2e36527f23



* [PATCH kcsan 2/8] kcsan: Remove CONFIG_KCSAN_DEBUG
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 1/8] kcsan: Improve some Kconfig comments Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 3/8] kcsan: Introduce CONFIG_KCSAN_STRICT Paul E. McKenney
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

By this point CONFIG_KCSAN_DEBUG is pretty useless, as the system just
isn't usable with it enabled due to it spamming the console (I imagine a
randconfig test robot will run into this sooner or later). Remove it.

Back in 2019 I used it occasionally to record traces of watchpoints and
verify the encoding is correct, but these days we have proper tests. If
something similar is needed in future, just add it back ad-hoc.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/kcsan/core.c | 9 ---------
 lib/Kconfig.kcsan   | 3 ---
 2 files changed, 12 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 26709ea65c715..d92977ede7e17 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -479,15 +479,6 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
 		break; /* ignore; we do not diff the values */
 	}
 
-	if (IS_ENABLED(CONFIG_KCSAN_DEBUG)) {
-		kcsan_disable_current();
-		pr_err("watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n",
-		       is_write ? "write" : "read", size, ptr,
-		       watchpoint_slot((unsigned long)ptr),
-		       encode_watchpoint((unsigned long)ptr, size, is_write));
-		kcsan_enable_current();
-	}
-
 	/*
 	 * Delay this thread, to increase probability of observing a racy
 	 * conflicting access.
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index 6152fbd5cbb43..5304f211f81f1 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -62,9 +62,6 @@ config KCSAN_VERBOSE
 	  generated from any one of them, system stability may suffer due to
 	  deadlocks or recursion.  If in doubt, say N.
 
-config KCSAN_DEBUG
-	bool "Debugging of KCSAN internals"
-
 config KCSAN_SELFTEST
 	bool "Perform short selftests on boot"
 	default y
-- 
2.31.1.189.g2e36527f23



* [PATCH kcsan 3/8] kcsan: Introduce CONFIG_KCSAN_STRICT
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 1/8] kcsan: Improve some Kconfig comments Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 2/8] kcsan: Remove CONFIG_KCSAN_DEBUG Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 4/8] kcsan: Reduce get_ctx() uses in kcsan_found_watchpoint() Paul E. McKenney
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

Add a simpler Kconfig variable to configure KCSAN's "strict" mode. This
makes it simpler in documentation or messages to suggest just a single
configuration option to select the strictest checking mode (vs.
currently having to list several options).

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/dev-tools/kcsan.rst |  4 ++++
 lib/Kconfig.kcsan                 | 10 ++++++++++
 2 files changed, 14 insertions(+)

diff --git a/Documentation/dev-tools/kcsan.rst b/Documentation/dev-tools/kcsan.rst
index 6a600cf8430b1..69dc9c502ccc5 100644
--- a/Documentation/dev-tools/kcsan.rst
+++ b/Documentation/dev-tools/kcsan.rst
@@ -127,6 +127,10 @@ Kconfig options:
   causes KCSAN to not report data races due to conflicts where the only plain
   accesses are aligned writes up to word size.
 
+To use the strictest possible rules, select ``CONFIG_KCSAN_STRICT=y``, which
+configures KCSAN to follow the Linux-kernel memory consistency model (LKMM) as
+closely as possible.
+
 DebugFS interface
 ~~~~~~~~~~~~~~~~~
 
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index 5304f211f81f1..c76fbb3ee09ec 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -183,9 +183,17 @@ config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
 	  reported if it was only possible to infer a race due to a data value
 	  change while an access is being delayed on a watchpoint.
 
+config KCSAN_STRICT
+	bool "Strict data-race checking"
+	help
+	  KCSAN will report data races with the strictest possible rules, which
+	  closely aligns with the rules defined by the Linux-kernel memory
+	  consistency model (LKMM).
+
 config KCSAN_REPORT_VALUE_CHANGE_ONLY
 	bool "Only report races where watcher observed a data value change"
 	default y
+	depends on !KCSAN_STRICT
 	help
 	  If enabled and a conflicting write is observed via a watchpoint, but
 	  the data value of the memory location was observed to remain
@@ -194,6 +202,7 @@ config KCSAN_REPORT_VALUE_CHANGE_ONLY
 config KCSAN_ASSUME_PLAIN_WRITES_ATOMIC
 	bool "Assume that plain aligned writes up to word size are atomic"
 	default y
+	depends on !KCSAN_STRICT
 	help
 	  Assume that plain aligned writes up to word size are atomic by
 	  default, and also not subject to other unsafe compiler optimizations
@@ -206,6 +215,7 @@ config KCSAN_ASSUME_PLAIN_WRITES_ATOMIC
 
 config KCSAN_IGNORE_ATOMICS
 	bool "Do not instrument marked atomic accesses"
+	depends on !KCSAN_STRICT
 	help
 	  Never instrument marked atomic accesses. This option can be used for
 	  additional filtering. Conflicting marked atomic reads and plain
-- 
2.31.1.189.g2e36527f23



* [PATCH kcsan 4/8] kcsan: Reduce get_ctx() uses in kcsan_found_watchpoint()
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (2 preceding siblings ...)
  2021-07-21 21:08 ` [PATCH kcsan 3/8] kcsan: Introduce CONFIG_KCSAN_STRICT Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 5/8] kcsan: Rework atomic.h into permissive.h Paul E. McKenney
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

There are a number of get_ctx() calls that are close to each other, which
results in poor codegen (repeated preempt_count loads).

Specifically in kcsan_found_watchpoint() (even though it's a slow-path)
it is beneficial to keep the race-window small until the watchpoint has
actually been consumed to avoid missed opportunities to report a race.

Let's clean it up a bit before we add more code in
kcsan_found_watchpoint().

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/kcsan/core.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index d92977ede7e17..906100923b888 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -301,9 +301,9 @@ static inline void reset_kcsan_skip(void)
 	this_cpu_write(kcsan_skip, skip_count);
 }
 
-static __always_inline bool kcsan_is_enabled(void)
+static __always_inline bool kcsan_is_enabled(struct kcsan_ctx *ctx)
 {
-	return READ_ONCE(kcsan_enabled) && get_ctx()->disable_count == 0;
+	return READ_ONCE(kcsan_enabled) && !ctx->disable_count;
 }
 
 /* Introduce delay depending on context and configuration. */
@@ -353,10 +353,17 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 					    atomic_long_t *watchpoint,
 					    long encoded_watchpoint)
 {
+	struct kcsan_ctx *ctx = get_ctx();
 	unsigned long flags;
 	bool consumed;
 
-	if (!kcsan_is_enabled())
+	/*
+	 * We know a watchpoint exists. Let's try to keep the race-window
+	 * between here and finally consuming the watchpoint below as small as
+	 * possible -- avoid unnecessarily complex code until consumed.
+	 */
+
+	if (!kcsan_is_enabled(ctx))
 		return;
 
 	/*
@@ -364,14 +371,12 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 	 * reporting a race where e.g. the writer set up the watchpoint, but the
 	 * reader has access_mask!=0, we have to ignore the found watchpoint.
 	 */
-	if (get_ctx()->access_mask != 0)
+	if (ctx->access_mask)
 		return;
 
 	/*
-	 * Consume the watchpoint as soon as possible, to minimize the chances
-	 * of !consumed. Consuming the watchpoint must always be guarded by
-	 * kcsan_is_enabled() check, as otherwise we might erroneously
-	 * triggering reports when disabled.
+	 * Consuming the watchpoint must be guarded by kcsan_is_enabled() to
+	 * avoid erroneously triggering reports if the context is disabled.
 	 */
 	consumed = try_consume_watchpoint(watchpoint, encoded_watchpoint);
 
@@ -409,6 +414,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
 	unsigned long access_mask;
 	enum kcsan_value_change value_change = KCSAN_VALUE_CHANGE_MAYBE;
 	unsigned long ua_flags = user_access_save();
+	struct kcsan_ctx *ctx = get_ctx();
 	unsigned long irq_flags = 0;
 
 	/*
@@ -417,7 +423,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
 	 */
 	reset_kcsan_skip();
 
-	if (!kcsan_is_enabled())
+	if (!kcsan_is_enabled(ctx))
 		goto out;
 
 	/*
@@ -489,7 +495,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
 	 * Re-read value, and check if it is as expected; if not, we infer a
 	 * racy access.
 	 */
-	access_mask = get_ctx()->access_mask;
+	access_mask = ctx->access_mask;
 	new = 0;
 	switch (size) {
 	case 1:
-- 
2.31.1.189.g2e36527f23



* [PATCH kcsan 5/8] kcsan: Rework atomic.h into permissive.h
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (3 preceding siblings ...)
  2021-07-21 21:08 ` [PATCH kcsan 4/8] kcsan: Reduce get_ctx() uses in kcsan_found_watchpoint() Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 6/8] kcsan: Print if strict or non-strict during init Paul E. McKenney
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

Rework atomic.h into permissive.h to better reflect its purpose, and
introduce kcsan_ignore_address() and kcsan_ignore_data_race().

Introduce CONFIG_KCSAN_PERMISSIVE and update the stub functions in
preparation for subsequent changes.

As before, developers who choose to use KCSAN in "strict" mode will see
all data races and are not affected. Furthermore, by relying on the
value-change filter logic for kcsan_ignore_data_race(), even if the
permissive rules are enabled, the opt-outs in report.c:skip_report()
override them (such as for RCU-related functions by default).

The option CONFIG_KCSAN_PERMISSIVE is disabled by default, so that the
documented default behaviour of KCSAN does not change. Instead, like
CONFIG_KCSAN_IGNORE_ATOMICS, the option needs to be explicitly opted in.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/dev-tools/kcsan.rst |  8 ++++++
 kernel/kcsan/atomic.h             | 23 ---------------
 kernel/kcsan/core.c               | 33 ++++++++++++++++------
 kernel/kcsan/permissive.h         | 47 +++++++++++++++++++++++++++++++
 lib/Kconfig.kcsan                 | 10 +++++++
 5 files changed, 89 insertions(+), 32 deletions(-)
 delete mode 100644 kernel/kcsan/atomic.h
 create mode 100644 kernel/kcsan/permissive.h

diff --git a/Documentation/dev-tools/kcsan.rst b/Documentation/dev-tools/kcsan.rst
index 69dc9c502ccc5..7db43c7c09b8c 100644
--- a/Documentation/dev-tools/kcsan.rst
+++ b/Documentation/dev-tools/kcsan.rst
@@ -127,6 +127,14 @@ Kconfig options:
   causes KCSAN to not report data races due to conflicts where the only plain
   accesses are aligned writes up to word size.
 
+* ``CONFIG_KCSAN_PERMISSIVE``: Enable additional permissive rules to ignore
+  certain classes of common data races. Unlike the above, the rules are more
+  complex involving value-change patterns, access type, and address. This
+  option depends on ``CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=y``. For details
+  please see ``kernel/kcsan/permissive.h``. Testers and maintainers who
+  only focus on reports from specific subsystems and not the whole kernel are
+  recommended to disable this option.
+
 To use the strictest possible rules, select ``CONFIG_KCSAN_STRICT=y``, which
 configures KCSAN to follow the Linux-kernel memory consistency model (LKMM) as
 closely as possible.
diff --git a/kernel/kcsan/atomic.h b/kernel/kcsan/atomic.h
deleted file mode 100644
index 530ae1bda8e75..0000000000000
--- a/kernel/kcsan/atomic.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Rules for implicitly atomic memory accesses.
- *
- * Copyright (C) 2019, Google LLC.
- */
-
-#ifndef _KERNEL_KCSAN_ATOMIC_H
-#define _KERNEL_KCSAN_ATOMIC_H
-
-#include <linux/types.h>
-
-/*
- * Special rules for certain memory where concurrent conflicting accesses are
- * common, however, the current convention is to not mark them; returns true if
- * access to @ptr should be considered atomic. Called from slow-path.
- */
-static bool kcsan_is_atomic_special(const volatile void *ptr)
-{
-	return false;
-}
-
-#endif /* _KERNEL_KCSAN_ATOMIC_H */
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 906100923b888..439edb9dcbb13 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -20,9 +20,9 @@
 #include <linux/sched.h>
 #include <linux/uaccess.h>
 
-#include "atomic.h"
 #include "encoding.h"
 #include "kcsan.h"
+#include "permissive.h"
 
 static bool kcsan_early_enable = IS_ENABLED(CONFIG_KCSAN_EARLY_ENABLE);
 unsigned int kcsan_udelay_task = CONFIG_KCSAN_UDELAY_TASK;
@@ -353,6 +353,7 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 					    atomic_long_t *watchpoint,
 					    long encoded_watchpoint)
 {
+	const bool is_assert = (type & KCSAN_ACCESS_ASSERT) != 0;
 	struct kcsan_ctx *ctx = get_ctx();
 	unsigned long flags;
 	bool consumed;
@@ -374,6 +375,16 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 	if (ctx->access_mask)
 		return;
 
+	/*
+	 * If the other thread does not want to ignore the access, and there was
+	 * a value change as a result of this thread's operation, we will still
+	 * generate a report of unknown origin.
+	 *
+	 * Use CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=n to filter.
+	 */
+	if (!is_assert && kcsan_ignore_address(ptr))
+		return;
+
 	/*
 	 * Consuming the watchpoint must be guarded by kcsan_is_enabled() to
 	 * avoid erroneously triggering reports if the context is disabled.
@@ -396,7 +407,7 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 		atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_REPORT_RACES]);
 	}
 
-	if ((type & KCSAN_ACCESS_ASSERT) != 0)
+	if (is_assert)
 		atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]);
 	else
 		atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_DATA_RACES]);
@@ -427,12 +438,10 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
 		goto out;
 
 	/*
-	 * Special atomic rules: unlikely to be true, so we check them here in
-	 * the slow-path, and not in the fast-path in is_atomic(). Call after
-	 * kcsan_is_enabled(), as we may access memory that is not yet
-	 * initialized during early boot.
+	 * Check to-ignore addresses after kcsan_is_enabled(), as we may access
+	 * memory that is not yet initialized during early boot.
 	 */
-	if (!is_assert && kcsan_is_atomic_special(ptr))
+	if (!is_assert && kcsan_ignore_address(ptr))
 		goto out;
 
 	if (!check_encodable((unsigned long)ptr, size)) {
@@ -518,8 +527,14 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
 	if (access_mask)
 		diff &= access_mask;
 
-	/* Were we able to observe a value-change? */
-	if (diff != 0)
+	/*
+	 * Check if we observed a value change.
+	 *
+	 * Also check if the data race should be ignored (the rules depend on
+	 * non-zero diff); if it is to be ignored, the below rules for
+	 * KCSAN_VALUE_CHANGE_MAYBE apply.
+	 */
+	if (diff && !kcsan_ignore_data_race(size, type, old, new, diff))
 		value_change = KCSAN_VALUE_CHANGE_TRUE;
 
 	/* Check if this access raced with another. */
diff --git a/kernel/kcsan/permissive.h b/kernel/kcsan/permissive.h
new file mode 100644
index 0000000000000..f90e30800c11b
--- /dev/null
+++ b/kernel/kcsan/permissive.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Special rules for ignoring entire classes of data-racy memory accesses. None
+ * of the rules here imply that such data races are generally safe!
+ *
+ * All rules in this file can be configured via CONFIG_KCSAN_PERMISSIVE. Keep
+ * them separate from core code to make it easier to audit.
+ *
+ * Copyright (C) 2019, Google LLC.
+ */
+
+#ifndef _KERNEL_KCSAN_PERMISSIVE_H
+#define _KERNEL_KCSAN_PERMISSIVE_H
+
+#include <linux/types.h>
+
+/*
+ * Access ignore rules based on address.
+ */
+static __always_inline bool kcsan_ignore_address(const volatile void *ptr)
+{
+	if (!IS_ENABLED(CONFIG_KCSAN_PERMISSIVE))
+		return false;
+
+	return false;
+}
+
+/*
+ * Data race ignore rules based on access type and value change patterns.
+ */
+static bool
+kcsan_ignore_data_race(size_t size, int type, u64 old, u64 new, u64 diff)
+{
+	if (!IS_ENABLED(CONFIG_KCSAN_PERMISSIVE))
+		return false;
+
+	/*
+	 * Rules here are only for plain read accesses, so that we still report
+	 * data races between plain read-write accesses.
+	 */
+	if (type || size > sizeof(long))
+		return false;
+
+	return false;
+}
+
+#endif /* _KERNEL_KCSAN_PERMISSIVE_H */
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index c76fbb3ee09ec..26f03c754d39b 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -231,4 +231,14 @@ config KCSAN_IGNORE_ATOMICS
 	  due to two conflicting plain writes will be reported (aligned and
 	  unaligned, if CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n).
 
+config KCSAN_PERMISSIVE
+	bool "Enable all additional permissive rules"
+	depends on KCSAN_REPORT_VALUE_CHANGE_ONLY
+	help
+	  Enable additional permissive rules to ignore certain classes of data
+	  races (also see kernel/kcsan/permissive.h). None of the permissive
+	  rules imply that such data races are generally safe, but can be used
+	  to further reduce reported data races due to data-racy patterns
+	  common across the kernel.
+
 endif # KCSAN
-- 
2.31.1.189.g2e36527f23



* [PATCH kcsan 6/8] kcsan: Print if strict or non-strict during init
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (4 preceding siblings ...)
  2021-07-21 21:08 ` [PATCH kcsan 5/8] kcsan: Rework atomic.h into permissive.h Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 7/8] kcsan: permissive: Ignore data-racy 1-bit value changes Paul E. McKenney
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

Show a brief message if KCSAN is strict or non-strict, and if non-strict
also say that CONFIG_KCSAN_STRICT=y can be used to see all data races.

This is to hint to users of KCSAN who blindly use the default config
that their configuration might miss data races of interest.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/kcsan/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 439edb9dcbb13..76e67d1e02d48 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -656,6 +656,15 @@ void __init kcsan_init(void)
 		pr_info("enabled early\n");
 		WRITE_ONCE(kcsan_enabled, true);
 	}
+
+	if (IS_ENABLED(CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY) ||
+	    IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) ||
+	    IS_ENABLED(CONFIG_KCSAN_PERMISSIVE) ||
+	    IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {
+		pr_warn("non-strict mode configured - use CONFIG_KCSAN_STRICT=y to see all data races\n");
+	} else {
+		pr_info("strict mode configured\n");
+	}
 }
 
 /* === Exported interface =================================================== */
-- 
2.31.1.189.g2e36527f23



* [PATCH kcsan 7/8] kcsan: permissive: Ignore data-racy 1-bit value changes
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (5 preceding siblings ...)
  2021-07-21 21:08 ` [PATCH kcsan 6/8] kcsan: Print if strict or non-strict during init Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:08 ` [PATCH kcsan 8/8] kcsan: Make strict mode imply interruptible watchers Paul E. McKenney
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng,
	Mark Rutland, Paul E . McKenney

From: Marco Elver <elver@google.com>

Add rules to ignore data-racy reads with only 1-bit value changes.
Details about the rules are captured in comments in
kernel/kcsan/permissive.h. More background follows.

While investigating a number of data races, we've encountered data-racy
accesses on flags variables to be very common. The typical pattern is a
reader masking all but one bit, and/or the writer setting/clearing only
1 bit (current->flags being a frequently encountered case; more examples
in mm/sl[au]b.c, which disable KCSAN for this reason).

Since these types of data-racy accesses are common (with the assumption
they are intentional and hard to miscompile), having the option (with
CONFIG_KCSAN_PERMISSIVE=y) to filter them avoids forcing everyone to
mark them; this is deliberately left to preference at this time.

One important motivation for having this option built-in is to move
closer to being able to enable KCSAN on CI systems or for testers
wishing to test the whole kernel, while more easily filtering out
less interesting data races with high probability.

For the implementation, we considered several alternatives, but had one
major requirement: that the rules be kept together with the Linux-kernel
tree. Adding them to the compiler would preclude us from making changes
quickly; if the rules require tweaks, having them part of the compiler
requires waiting another ~1 year for the next release -- that's not
realistic. We are left with the following options:

	1. Maintain compiler plugins as part of the kernel-tree that
	   removes instrumentation for some accesses (e.g. plain-& with
	   1-bit mask). The analysis would be reader-side focused, as
	   no assumption can be made about racing writers.

Because it seems unrealistic to maintain 2 plugins, one for LLVM and
GCC, we would likely pick LLVM. Furthermore, no kernel infrastructure
exists to maintain LLVM plugins, and the build-system implications and
maintenance overheads do not look great (historically, plugins written
against old LLVM APIs are not guaranteed to work with newer LLVM APIs).

	2. Find a set of rules that can be expressed in terms of
	   observed value changes, and make it part of the KCSAN runtime.
	   The analysis is writer-side focused, given we rely on observed
	   value changes.

The approach taken here is (2). While a complete approach requires both
(1) and (2), experiments show that the majority of data races involving
trivial bit operations on flags variables can be removed with (2) alone.

It goes without saying that the filtering of data races using (1) or (2)
does _not_ guarantee they are safe! Therefore, limiting ourselves to (2)
for now is the conservative choice for setups that wish to enable
CONFIG_KCSAN_PERMISSIVE=y.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/kcsan/kcsan_test.c | 32 +++++++++++++++++++++++++
 kernel/kcsan/permissive.h | 49 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
index 8bcffbdef3d36..dc55fd5a36fcc 100644
--- a/kernel/kcsan/kcsan_test.c
+++ b/kernel/kcsan/kcsan_test.c
@@ -414,6 +414,14 @@ static noinline void test_kernel_atomic_builtins(void)
 	__atomic_load_n(&test_var, __ATOMIC_RELAXED);
 }
 
+static noinline void test_kernel_xor_1bit(void)
+{
+	/* Do not report data races between the read-writes. */
+	kcsan_nestable_atomic_begin();
+	test_var ^= 0x10000;
+	kcsan_nestable_atomic_end();
+}
+
 /* ===== Test cases ===== */
 
 /* Simple test with normal data race. */
@@ -952,6 +960,29 @@ static void test_atomic_builtins(struct kunit *test)
 	KUNIT_EXPECT_FALSE(test, match_never);
 }
 
+__no_kcsan
+static void test_1bit_value_change(struct kunit *test)
+{
+	const struct expect_report expect = {
+		.access = {
+			{ test_kernel_read, &test_var, sizeof(test_var), 0 },
+			{ test_kernel_xor_1bit, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) },
+		},
+	};
+	bool match = false;
+
+	begin_test_checks(test_kernel_read, test_kernel_xor_1bit);
+	do {
+		match = IS_ENABLED(CONFIG_KCSAN_PERMISSIVE)
+				? report_available()
+				: report_matches(&expect);
+	} while (!end_test_checks(match));
+	if (IS_ENABLED(CONFIG_KCSAN_PERMISSIVE))
+		KUNIT_EXPECT_FALSE(test, match);
+	else
+		KUNIT_EXPECT_TRUE(test, match);
+}
+
 /*
  * Generate thread counts for all test cases. Values generated are in interval
  * [2, 5] followed by exponentially increasing thread counts from 8 to 32.
@@ -1024,6 +1055,7 @@ static struct kunit_case kcsan_test_cases[] = {
 	KCSAN_KUNIT_CASE(test_jiffies_noreport),
 	KCSAN_KUNIT_CASE(test_seqlock_noreport),
 	KCSAN_KUNIT_CASE(test_atomic_builtins),
+	KCSAN_KUNIT_CASE(test_1bit_value_change),
 	{},
 };
 
diff --git a/kernel/kcsan/permissive.h b/kernel/kcsan/permissive.h
index f90e30800c11b..2c01fe4a59ee7 100644
--- a/kernel/kcsan/permissive.h
+++ b/kernel/kcsan/permissive.h
@@ -12,6 +12,8 @@
 #ifndef _KERNEL_KCSAN_PERMISSIVE_H
 #define _KERNEL_KCSAN_PERMISSIVE_H
 
+#include <linux/bitops.h>
+#include <linux/sched.h>
 #include <linux/types.h>
 
 /*
@@ -22,7 +24,11 @@ static __always_inline bool kcsan_ignore_address(const volatile void *ptr)
 	if (!IS_ENABLED(CONFIG_KCSAN_PERMISSIVE))
 		return false;
 
-	return false;
+	/*
+	 * Data-racy bitops on current->flags are too common, ignore completely
+	 * for now.
+	 */
+	return ptr == &current->flags;
 }
 
 /*
@@ -41,6 +47,47 @@ kcsan_ignore_data_race(size_t size, int type, u64 old, u64 new, u64 diff)
 	if (type || size > sizeof(long))
 		return false;
 
+	/*
+	 * A common pattern is checking/setting just 1 bit in a variable; for
+	 * example:
+	 *
+	 *	if (flags & SOME_FLAG) { ... }
+	 *
+	 * and elsewhere flags is updated concurrently:
+	 *
+	 *	flags |= SOME_OTHER_FLAG; // just 1 bit
+	 *
+	 * While it is still recommended that such accesses be marked
+	 * appropriately, in many cases these types of data races are so common
+	 * that marking them all is often unrealistic and left to maintainer
+	 * preference.
+	 *
+	 * The assumption in all cases is that with all known compiler
+	 * optimizations (including those that tear accesses), because no more
+	 * than 1 bit changed, the plain accesses are safe despite the presence
+	 * of data races.
+	 *
+	 * The rules here will ignore the data races if we observe no more than
+	 * 1 bit changed.
+	 *
+	 * Of course, many operations can effectively change just 1 bit, but the
+	 * general assumption that data races involving 1-bit changes can be
+	 * tolerated still applies.
+	 *
+	 * And in case a true bug is missed, the bug likely manifests as a
+	 * reportable data race elsewhere.
+	 */
+	if (hweight64(diff) == 1) {
+		/*
+		 * Exception: Report data races where the values look like
+		 * ordinary booleans (one of them was 0 and the 0th bit was
+		 * changed). More often than not, they come with interesting
+		 * memory ordering requirements, so let's report them.
+		 */
+		if (!((!old || !new) && diff == 1))
+			return true;
+	}
+
 	return false;
 }
 
-- 
2.31.1.189.g2e36527f23
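As a rough userspace illustration, the value-change filter implemented in the hunk above can be condensed to the following sketch. This is a hypothetical stand-in, not kernel code: KCSAN passes old/new/diff into the real kcsan_ignore_data_race(), the type/size gating is omitted here, and the compiler's popcount builtin models hweight64().

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical userspace stand-in for kcsan_ignore_data_race(). */
static bool ignore_data_race(uint64_t old, uint64_t new)
{
	uint64_t diff = old ^ new;

	/* Ignore the data race when exactly one bit changed... */
	if (__builtin_popcountll(diff) == 1) {
		/*
		 * ...except when the values look like an ordinary boolean
		 * transition (one side is 0 and only bit 0 changed), which
		 * tends to carry memory-ordering requirements.
		 */
		if (!((!old || !new) && diff == 1))
			return true;
	}
	return false;	/* everything else remains reportable */
}
```

Under this sketch, the test_kernel_xor_1bit() flip of bit 16 would be ignored, while a 0 to 1 transition of bit 0 would still be reported.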


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH kcsan 8/8] kcsan: Make strict mode imply interruptible watchers
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (6 preceding siblings ...)
  2021-07-21 21:08 ` [PATCH kcsan 7/8] kcsan: permissive: Ignore data-racy 1-bit value changes Paul E. McKenney
@ 2021-07-21 21:08 ` Paul E. McKenney
  2021-07-21 21:10 ` [PATCH memory-model 1/4] tools/memory-model: Make read_foo_diagnostic() more clearly diagnostic Paul E. McKenney
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:08 UTC (permalink / raw)
  To: linux-kernel, kasan-dev, kernel-team, mingo
  Cc: elver, andreyknvl, glider, dvyukov, cai, boqun.feng, Paul E . McKenney

From: Marco Elver <elver@google.com>

If CONFIG_KCSAN_STRICT=y, select CONFIG_KCSAN_INTERRUPT_WATCHER as well.

With interruptible watchers, we'll also report same-CPU data races; if
we requested strict mode, we might as well show these, too.

Suggested-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 lib/Kconfig.kcsan | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index 26f03c754d39b..e0a93ffdef30e 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -150,7 +150,8 @@ config KCSAN_SKIP_WATCH_RANDOMIZE
 	  KCSAN_WATCH_SKIP.
 
 config KCSAN_INTERRUPT_WATCHER
-	bool "Interruptible watchers"
+	bool "Interruptible watchers" if !KCSAN_STRICT
+	default KCSAN_STRICT
 	help
 	  If enabled, a task that set up a watchpoint may be interrupted while
 	  delayed. This option will allow KCSAN to detect races between
-- 
2.31.1.189.g2e36527f23



* [PATCH memory-model 1/4] tools/memory-model: Make read_foo_diagnostic() more clearly diagnostic
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (7 preceding siblings ...)
  2021-07-21 21:08 ` [PATCH kcsan 8/8] kcsan: Make strict mode imply interruptible watchers Paul E. McKenney
@ 2021-07-21 21:10 ` Paul E. McKenney
  2021-07-21 21:10 ` [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads Paul E. McKenney
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:10 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney,
	Manfred Spraul

The current definition of read_foo_diagnostic() in the "Lock Protection
With Lockless Diagnostic Access" section returns a value, which could
be used for any purpose.  This could mislead people into incorrectly
using data_race() in cases where READ_ONCE() is required.  This commit
therefore makes read_foo_diagnostic() simply print the value read.

Reported-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/access-marking.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index 1ab189f51f55d..58bff26198767 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -259,9 +259,9 @@ diagnostic purposes.  The code might look as follows:
 		return ret;
 	}
 
-	int read_foo_diagnostic(void)
+	void read_foo_diagnostic(void)
 	{
-		return data_race(foo);
+		pr_info("Current value of foo: %d\n", data_race(foo));
 	}
 
 The reader-writer lock prevents the compiler from introducing concurrency
-- 
2.31.1.189.g2e36527f23



* [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (8 preceding siblings ...)
  2021-07-21 21:10 ` [PATCH memory-model 1/4] tools/memory-model: Make read_foo_diagnostic() more clearly diagnostic Paul E. McKenney
@ 2021-07-21 21:10 ` Paul E. McKenney
  2021-07-23  2:08   ` Alan Stern
  2021-07-28 17:40   ` [PATCH v2 " Paul E. McKenney
  2021-07-21 21:10 ` [PATCH memory-model 3/4] tools/memory-model: Heuristics using data_race() must handle all values Paul E. McKenney
  2021-07-21 21:10 ` [PATCH memory-model 4/4] tools/memory-model: Document data_race(READ_ONCE()) Paul E. McKenney
  11 siblings, 2 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:10 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney,
	Manfred Spraul

This commit adds example code for heuristic lockless reads, based loosely
on the sem_lock() and sem_unlock() functions.

Reported-by: Manfred Spraul <manfred@colorfullife.com>
[ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index 58bff26198767..be7d507997cf8 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
 concurrent lockless write.
 
 
+Lock-Protected Writes With Heuristic Lockless Reads
+---------------------------------------------------
+
+For another example, suppose that the code can normally make use of
+a per-data-structure lock, but there are times when a global lock
+is required.  These times are indicated via a global flag.  The code
+might look as follows, and is based loosely on nf_conntrack_lock(),
+nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
+
+	bool global_flag;
+	DEFINE_SPINLOCK(global_lock);
+	struct foo {
+		spinlock_t f_lock;
+		int f_data;
+	};
+
+	/* All foo structures are in the following array. */
+	int nfoo;
+	struct foo *foo_array;
+
+	void do_something_locked(struct foo *fp)
+	{
+		bool gf = true;
+
+		/* IMPORTANT: Heuristic plus spin_lock()! */
+		if (!data_race(global_flag)) {
+			spin_lock(&fp->f_lock);
+			if (!smp_load_acquire(&global_flag)) {
+				do_something(fp);
+				spin_unlock(&fp->f_lock);
+				return;
+			}
+			spin_unlock(&fp->f_lock);
+		}
+		spin_lock(&global_lock);
+		/* Lock held, thus global flag cannot change. */
+		if (!global_flag) {
+			spin_lock(&fp->f_lock);
+			spin_unlock(&global_lock);
+			gf = false;
+		}
+		do_something(fp);
+		if (fg)
+			spin_unlock(&global_lock);
+		else
+			spin_lock(&fp->f_lock);
+	}
+
+	void begin_global(void)
+	{
+		int i;
+
+		spin_lock(&global_lock);
+		WRITE_ONCE(global_flag, true);
+		for (i = 0; i < nfoo; i++) {
+			/* Wait for pre-existing local locks. */
+			spin_lock(&fp->f_lock);
+			spin_unlock(&fp->f_lock);
+		}
+	}
+
+	void end_global(void)
+	{
+		smp_store_release(&global_flag, false);
+		/* Pre-existing global lock acquisitions will recheck. */
+		spin_unlock(&global_lock);
+	}
+
+All code paths leading from the do_something_locked() function's first
+read from global_flag acquire a lock, so endless load fusing cannot
+happen.
+
+If the value read from global_flag is true, then global_flag is rechecked
+while holding global_lock, which prevents global_flag from changing.
+If this recheck finds that global_flag is now false, the acquisition
+of ->f_lock prior to the release of global_lock will result in any subsequent
+begin_global() invocation waiting to acquire ->f_lock.
+
+On the other hand, if the value read from global_flag is false, then
+global_flag, then rechecking under ->f_lock combined with synchronization
+with begin_global() guarantees than any erroneous read will cause the
+do_something_locked() function's first do_something() invocation to happen
+before begin_global() returns.  The combination of the smp_load_acquire()
+in do_something_locked() and the smp_store_release() in end_global()
+guarantees that either the do_something_locked() function's first
+do_something() invocation happens after the call to end_global() or that
+do_something_locked() acquires global_lock() and rechecks under the lock.
+
+For this to work, only those foo structures in foo_array[] may be
+passed to do_something_locked().  The reason for this is that the
+synchronization with begin_global() relies on momentarily locking each
+and every foo structure.
+
+
 Lockless Reads and Writes
 -------------------------
 
-- 
2.31.1.189.g2e36527f23
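The pattern above can be exercised in userspace. The sketch below is a loose transliteration, not the documented code itself: pthread mutexes stand in for spinlocks, C11 atomics stand in for data_race()/smp_load_acquire()/smp_store_release(), and three apparent slips in the example are corrected so that it runs (gf vs. fg, fp vs. foo_array[i] in begin_global(), and the final spin_lock() that is presumably meant to be an unlock).

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NFOO  4
#define ITERS 1000

static atomic_bool global_flag;
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

struct foo {
	pthread_mutex_t f_lock;
	int f_data;
};

static struct foo foo_array[NFOO];

static void do_something(struct foo *fp)
{
	fp->f_data++;		/* stands in for the real work */
}

static void do_something_locked(struct foo *fp)
{
	bool gf = true;

	/* Heuristic: racy relaxed read, i.e. data_race(global_flag). */
	if (!atomic_load_explicit(&global_flag, memory_order_relaxed)) {
		pthread_mutex_lock(&fp->f_lock);
		/* Recheck: smp_load_acquire(&global_flag). */
		if (!atomic_load_explicit(&global_flag, memory_order_acquire)) {
			do_something(fp);
			pthread_mutex_unlock(&fp->f_lock);
			return;
		}
		pthread_mutex_unlock(&fp->f_lock);
	}
	pthread_mutex_lock(&global_lock);
	/* global_lock held, thus global_flag cannot change. */
	if (!atomic_load_explicit(&global_flag, memory_order_relaxed)) {
		pthread_mutex_lock(&fp->f_lock);
		pthread_mutex_unlock(&global_lock);
		gf = false;
	}
	do_something(fp);
	if (gf)
		pthread_mutex_unlock(&global_lock);
	else
		pthread_mutex_unlock(&fp->f_lock);
}

static void begin_global(void)
{
	pthread_mutex_lock(&global_lock);
	/* WRITE_ONCE(global_flag, true) in the original. */
	atomic_store_explicit(&global_flag, true, memory_order_relaxed);
	for (int i = 0; i < NFOO; i++) {
		/* Wait out pre-existing per-foo critical sections. */
		pthread_mutex_lock(&foo_array[i].f_lock);
		pthread_mutex_unlock(&foo_array[i].f_lock);
	}
}

static void end_global(void)
{
	/* smp_store_release(&global_flag, false) in the original. */
	atomic_store_explicit(&global_flag, false, memory_order_release);
	pthread_mutex_unlock(&global_lock);
}

static void *worker(void *arg)
{
	(void)arg;
	for (int it = 0; it < ITERS; it++)
		for (int i = 0; i < NFOO; i++)
			do_something_locked(&foo_array[i]);
	return NULL;
}

static long run_demo(void)
{
	pthread_t t1, t2;
	long total = 0;

	for (int i = 0; i < NFOO; i++)
		pthread_mutex_init(&foo_array[i].f_lock, NULL);
	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	for (int k = 0; k < 10; k++) {
		begin_global();	/* everyone now funnels through global_lock */
		end_global();
	}
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	for (int i = 0; i < NFOO; i++)
		total += foo_array[i].f_data;
	return total;	/* every increment ran under a mutex */
}
```

Because every f_data increment ends up protected by either ->f_lock or global_lock, the final sum is exact regardless of thread interleaving, which makes the sketch checkable.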



* [PATCH memory-model 3/4] tools/memory-model: Heuristics using data_race() must handle all values
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (9 preceding siblings ...)
  2021-07-21 21:10 ` [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads Paul E. McKenney
@ 2021-07-21 21:10 ` Paul E. McKenney
  2021-07-21 21:10 ` [PATCH memory-model 4/4] tools/memory-model: Document data_race(READ_ONCE()) Paul E. McKenney
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:10 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Manfred Spraul,
	Paul E . McKenney

From: Manfred Spraul <manfred@colorfullife.com>

Data loaded for use by some sorts of heuristics can tolerate the
occasional erroneous value.  In this case the loads may use data_race()
to give the compiler full freedom to optimize while also informing KCSAN
of the intent.  However, for this to work, the heuristic needs to be
able to tolerate any erroneous value that could possibly arise.  This
commit therefore adds a paragraph spelling this out.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/access-marking.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index be7d507997cf8..fe4ad6d12d24c 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -126,6 +126,11 @@ consistent errors, which in turn are quite capable of breaking heuristics.
 Therefore use of data_race() should be limited to cases where some other
 code (such as a barrier() call) will force the occasional reload.
 
+Note that this use case requires that the heuristic be able to handle
+any possible error.  In contrast, if the heuristics might be fatally
+confused by one or more of the possible erroneous values, use READ_ONCE()
+instead of data_race().
+
 In theory, plain C-language loads can also be used for this use case.
 However, in practice this will have the disadvantage of causing KCSAN
 to generate false positives because KCSAN will have no way of knowing
-- 
2.31.1.189.g2e36527f23
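A minimal illustration of the added paragraph, using invented names (nwaiters, pick_path() are not kernel APIs). In userspace, data_race() reduces to a plain read, since suppressing KCSAN instrumentation is its only effect on generated code.

```c
#include <assert.h>

/* Userspace stand-in; the kernel's data_race() also silences KCSAN. */
#define data_race(x) (x)

static int nwaiters;	/* normally updated under some lock elsewhere */

/*
 * Heuristic path selection: ANY stale or torn value of nwaiters merely
 * selects the other, still-correct, path, so a racy read is tolerable.
 * If a bogus value could instead cause a crash or a deadlock, READ_ONCE()
 * would be required rather than data_race().
 */
static int pick_path(void)
{
	return data_race(nwaiters) > 8 ? 1 /* batched */ : 0 /* direct */;
}
```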



* [PATCH memory-model 4/4] tools/memory-model: Document data_race(READ_ONCE())
  2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
                   ` (10 preceding siblings ...)
  2021-07-21 21:10 ` [PATCH memory-model 3/4] tools/memory-model: Heuristics using data_race() must handle all values Paul E. McKenney
@ 2021-07-21 21:10 ` Paul E. McKenney
  11 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-21 21:10 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

It is possible to cause KCSAN to ignore marked accesses by applying
__no_kcsan to the function or applying data_race() to the marked accesses.
These approaches allow the developer to restrict compiler optimizations
while also causing KCSAN to ignore diagnostic accesses.

This commit therefore updates the documentation accordingly.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 .../Documentation/access-marking.txt          | 49 +++++++++++++------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index fe4ad6d12d24c..a3dcc32e27b44 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -37,7 +37,9 @@ compiler's use of code-motion and common-subexpression optimizations.
 Therefore, if a given access is involved in an intentional data race,
 using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
 preferable to data_race(), which in turn is usually preferable to plain
-C-language accesses.
+C-language accesses.  It is permissible to combine #2 and #3, for example,
+data_race(READ_ONCE(a)), which will both restrict compiler optimizations
+and disable KCSAN diagnostics.
 
 KCSAN will complain about many types of data races involving plain
 C-language accesses, but marking all accesses involved in a given data
@@ -86,6 +88,10 @@ that fail to exclude the updates.  In this case, it is important to use
 data_race() for the diagnostic reads because otherwise KCSAN would give
 false-positive warnings about these diagnostic reads.
 
+If it is necessary to both restrict compiler optimizations and disable
+KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
+data_race(READ_ONCE(a)).
+
 In theory, plain C-language loads can also be used for this use case.
 However, in practice this will have the disadvantage of causing KCSAN
 to generate false positives because KCSAN will have no way of knowing
@@ -279,19 +285,34 @@ tells KCSAN that data races are expected, and should be silently
 ignored.  This data_race() also tells the human reading the code that
 read_foo_diagnostic() might sometimes return a bogus value.
 
-However, please note that your kernel must be built with
-CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n in order for KCSAN to
-detect a buggy lockless write.  If you need KCSAN to detect such a
-write even if that write did not change the value of foo, you also
-need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.  If you need KCSAN to
-detect such a write happening in an interrupt handler running on the
-same CPU doing the legitimate lock-protected write, you also need
-CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these Kconfig
-options set properly, KCSAN can be quite helpful, although it is not
-necessarily a full replacement for hardware watchpoints.  On the other
-hand, neither are hardware watchpoints a full replacement for KCSAN
-because it is not always easy to tell hardware watchpoint to conditionally
-trap on accesses.
+If it is necessary to suppress compiler optimization and also detect
+buggy lockless writes, read_foo_diagnostic() can be updated as follows:
+
+	void read_foo_diagnostic(void)
+	{
+		pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
+	}
+
+Alternatively, given that KCSAN is to ignore all accesses in this function,
+this function can be marked __no_kcsan and the data_race() can be dropped:
+
+	void __no_kcsan read_foo_diagnostic(void)
+	{
+		pr_info("Current value of foo: %d\n", READ_ONCE(foo));
+	}
+
+However, in order for KCSAN to detect buggy lockless writes, your kernel
+must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
+need KCSAN to detect such a write even if that write did not change
+the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
+If you need KCSAN to detect such a write happening in an interrupt handler
+running on the same CPU doing the legitimate lock-protected write, you
+also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
+Kconfig options set properly, KCSAN can be quite helpful, although
+it is not necessarily a full replacement for hardware watchpoints.
+On the other hand, neither are hardware watchpoints a full replacement
+for KCSAN because it is not always easy to tell hardware watchpoint to
+conditionally trap on accesses.
 
 
 Lock-Protected Writes With Lockless Reads
-- 
2.31.1.189.g2e36527f23
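The data_race(READ_ONCE()) combination can be approximated in userspace as follows. These macros are simplifications for illustration only: the kernel's READ_ONCE() does more (size checks, dependency handling), and its data_race() additionally marks the access so KCSAN ignores it; here it is an identity wrapper because silencing instrumentation is its only effect on code generation.

```c
#include <assert.h>
#include <stdio.h>

/* Simplified userspace approximations of the kernel macros. */
#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))
#define data_race(expr)	(expr)

static int foo;

/* Volatile load restricts the compiler; data_race() silences KCSAN. */
static void read_foo_diagnostic(void)
{
	printf("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
}
```

The volatile qualifier prevents the compiler from fusing or omitting the load, which is exactly the compiler-optimization restriction the documentation update describes.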



* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-21 21:10 ` [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads Paul E. McKenney
@ 2021-07-23  2:08   ` Alan Stern
  2021-07-23  6:52     ` Manfred Spraul
  2021-07-23 16:24     ` Paul E. McKenney
  2021-07-28 17:40   ` [PATCH v2 " Paul E. McKenney
  1 sibling, 2 replies; 28+ messages in thread
From: Alan Stern @ 2021-07-23  2:08 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks, Manfred Spraul

On Wed, Jul 21, 2021 at 02:10:01PM -0700, Paul E. McKenney wrote:
> This commit adds example code for heuristic lockless reads, based loosely
> on the sem_lock() and sem_unlock() functions.
> 
> Reported-by: Manfred Spraul <manfred@colorfullife.com>
> [ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
>  1 file changed, 94 insertions(+)
> 
> diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
> index 58bff26198767..be7d507997cf8 100644
> --- a/tools/memory-model/Documentation/access-marking.txt
> +++ b/tools/memory-model/Documentation/access-marking.txt
> @@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
>  concurrent lockless write.
>  
>  
> +Lock-Protected Writes With Heuristic Lockless Reads
> +---------------------------------------------------
> +
> +For another example, suppose that the code can normally make use of
> +a per-data-structure lock, but there are times when a global lock
> +is required.  These times are indicated via a global flag.  The code
> +might look as follows, and is based loosely on nf_conntrack_lock(),
> +nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
> +
> +	bool global_flag;
> +	DEFINE_SPINLOCK(global_lock);
> +	struct foo {
> +		spinlock_t f_lock;
> +		int f_data;
> +	};
> +
> +	/* All foo structures are in the following array. */
> +	int nfoo;
> +	struct foo *foo_array;
> +
> +	void do_something_locked(struct foo *fp)
> +	{
> +		bool gf = true;
> +
> +		/* IMPORTANT: Heuristic plus spin_lock()! */
> +		if (!data_race(global_flag)) {
> +			spin_lock(&fp->f_lock);
> +			if (!smp_load_acquire(&global_flag)) {
> +				do_something(fp);
> +				spin_unlock(&fp->f_lock);
> +				return;
> +			}
> +			spin_unlock(&fp->f_lock);
> +		}
> +		spin_lock(&global_lock);
> +		/* Lock held, thus global flag cannot change. */
> +		if (!global_flag) {

How can global_flag ever be true at this point?  The only line of code 
that sets it is in begin_global() below, it only runs while global_lock 
is held, and global_flag is set back to false before the lock is 
released.

> +			spin_lock(&fp->f_lock);
> +			spin_unlock(&global_lock);
> +			gf = false;
> +		}
> +		do_something(fp);
> +		if (fg)

Should be gf, not fg.

> +			spin_unlock(&global_lock);
> +		else
> +			spin_lock(&fp->f_lock);
> +	}
> +
> +	void begin_global(void)
> +	{
> +		int i;
> +
> +		spin_lock(&global_lock);
> +		WRITE_ONCE(global_flag, true);

Why does this need to be WRITE_ONCE?  It still races with the first read 
of global_flag above.

> +		for (i = 0; i < nfoo; i++) {
> +			/* Wait for pre-existing local locks. */
> +			spin_lock(&fp->f_lock);
> +			spin_unlock(&fp->f_lock);

Why not acquire all the locks here and release all of them in 
end_global()?  Then global_flag wouldn't need acquire-release 
sychronization.

> +		}
> +	}
> +
> +	void end_global(void)
> +	{
> +		smp_store_release(&global_flag, false);
> +		/* Pre-existing global lock acquisitions will recheck. */

What does that comment mean?  How can there be any pre-existing global 
lock acquisitions when we hold the lock right now?

> +		spin_unlock(&global_lock);
> +	}
> +
> +All code paths leading from the do_something_locked() function's first
> +read from global_flag acquire a lock, so endless load fusing cannot
> +happen.
> +
> +If the value read from global_flag is true, then global_flag is rechecked
> +while holding global_lock, which prevents global_flag from changing.
> +If this recheck finds that global_flag is now false, the acquisition

Again, how can't global_flag be false now?

Did you originally have in mind some sort of scheme in which 
begin_global() would release global_lock before returning and 
end_global() would acquire global_lock before clearing global_flag?  But 
I don't see how that could work without changes to do_something_locked().

> +of ->f_lock prior to the release of global_lock will result in any subsequent
> +begin_global() invocation waiting to acquire ->f_lock.
> +
> +On the other hand, if the value read from global_flag is false, then
> +global_flag, then rechecking under ->f_lock combined with synchronization
---^^^^^^^^^^^^^^^^^^

Typo?

> +with begin_global() guarantees than any erroneous read will cause the
> +do_something_locked() function's first do_something() invocation to happen
> +before begin_global() returns.  The combination of the smp_load_acquire()
> +in do_something_locked() and the smp_store_release() in end_global()
> +guarantees that either the do_something_locked() function's first
> +do_something() invocation happens after the call to end_global() or that
> +do_something_locked() acquires global_lock() and rechecks under the lock.

This last sentence also makes no sense unless you imagine dropping 
global_lock between begin_global() and end_global().

Alan


* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23  2:08   ` Alan Stern
@ 2021-07-23  6:52     ` Manfred Spraul
  2021-07-23 13:05       ` Alan Stern
  2021-07-23 16:24     ` Paul E. McKenney
  1 sibling, 1 reply; 28+ messages in thread
From: Manfred Spraul @ 2021-07-23  6:52 UTC (permalink / raw)
  To: Alan Stern, Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

Hi Alan,

On 7/23/21 4:08 AM, Alan Stern wrote:
> On Wed, Jul 21, 2021 at 02:10:01PM -0700, Paul E. McKenney wrote:
>> This commit adds example code for heuristic lockless reads, based loosely
>> on the sem_lock() and sem_unlock() functions.
>>
>> Reported-by: Manfred Spraul <manfred@colorfullife.com>
>> [ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>> ---
>>   .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
>>   1 file changed, 94 insertions(+)
>>
>> diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
>> index 58bff26198767..be7d507997cf8 100644
>> --- a/tools/memory-model/Documentation/access-marking.txt
>> +++ b/tools/memory-model/Documentation/access-marking.txt
>> @@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
>>   concurrent lockless write.
>>   
>>   
>> +Lock-Protected Writes With Heuristic Lockless Reads
>> +---------------------------------------------------
>> +
>> +For another example, suppose that the code can normally make use of
>> +a per-data-structure lock, but there are times when a global lock
>> +is required.  These times are indicated via a global flag.  The code
>> +might look as follows, and is based loosely on nf_conntrack_lock(),
>> +nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
>> +
>> +	bool global_flag;
>> +	DEFINE_SPINLOCK(global_lock);
>> +	struct foo {
>> +		spinlock_t f_lock;
>> +		int f_data;
>> +	};
>> +
>> +	/* All foo structures are in the following array. */
>> +	int nfoo;
>> +	struct foo *foo_array;
>> +
>> +	void do_something_locked(struct foo *fp)
>> +	{
>> +		bool gf = true;
>> +
>> +		/* IMPORTANT: Heuristic plus spin_lock()! */
>> +		if (!data_race(global_flag)) {
>> +			spin_lock(&fp->f_lock);
>> +			if (!smp_load_acquire(&global_flag)) {
>> +				do_something(fp);
>> +				spin_unlock(&fp->f_lock);
>> +				return;
>> +			}
>> +			spin_unlock(&fp->f_lock);
>> +		}
>> +		spin_lock(&global_lock);
>> +		/* Lock held, thus global flag cannot change. */
>> +		if (!global_flag) {
> How can global_flag ever be true at this point?  The only line of code
> that sets it is in begin_global() below, it only runs while global_lock
> is held, and global_flag is set back to false before the lock is
> released.

It can't be true. The code is a simplified version of the algorithm in 
ipc/sem.c.

For the ipc/sem.c, global_flag can remain true even after dropping 
global_lock.

When transferring the approach to nf_conntrack_core, I didn't notice 
that nf_conntrack doesn't need a persistent global_flag.

Thus the recheck after spin_lock(&global_lock) is not needed.


>> +			spin_lock(&fp->f_lock);
>> +			spin_unlock(&global_lock);
>> +			gf = false;
>> +		}
>> +		do_something(fp);
>> +		if (fg)
> Should be gf, not fg.
>
>> +			spin_unlock(&global_lock);
>> +		else
>> +			spin_lock(&fp->f_lock);
>> +	}
>> +
>> +	void begin_global(void)
>> +	{
>> +		int i;
>> +
>> +		spin_lock(&global_lock);
>> +		WRITE_ONCE(global_flag, true);
> Why does this need to be WRITE_ONCE?  It still races with the first read
> of global_flag above.
>
>> +		for (i = 0; i < nfoo; i++) {
>> +			/* Wait for pre-existing local locks. */
>> +			spin_lock(&fp->f_lock);
>> +			spin_unlock(&fp->f_lock);
> Why not acquire all the locks here and release all of them in
> end_global()?  Then global_flag wouldn't need acquire-release
> sychronization.

From my understanding:
spin_lock contains preempt_count_add, thus you can't acquire more than 
255 spinlocks (actually 245, the warning limit is 10 below 255)

>> +		}
>> +	}
>> +
>> +	void end_global(void)
>> +	{
>> +		smp_store_release(&global_flag, false);
>> +		/* Pre-existing global lock acquisitions will recheck. */
> What does that comment mean?  How can there be any pre-existing global
> lock acquisitions when we hold the lock right now?

>> +		spin_unlock(&global_lock);
>> +	}
>> +
>> +All code paths leading from the do_something_locked() function's first
>> +read from global_flag acquire a lock, so endless load fusing cannot
>> +happen.
>> +
>> +If the value read from global_flag is true, then global_flag is rechecked
>> +while holding global_lock, which prevents global_flag from changing.
>> +If this recheck finds that global_flag is now false, the acquisition
> Again, how can't global_flag be false now?
>
> Did you originally have in mind some sort of scheme in which
> begin_global() would release global_lock before returning and
> end_global() would acquire global_lock before clearing global_flag?  But
> I don't see how that could work without changes to do_something_locked().
>
>> +of ->f_lock prior to the release of global_lock will result in any subsequent
>> +begin_global() invocation waiting to acquire ->f_lock.
>> +
>> +On the other hand, if the value read from global_flag is false, then
>> +global_flag, then rechecking under ->f_lock combined with synchronization
> ---^^^^^^^^^^^^^^^^^^
>
> Typo?
>
>> +with begin_global() guarantees than any erroneous read will cause the
>> +do_something_locked() function's first do_something() invocation to happen
>> +before begin_global() returns.  The combination of the smp_load_acquire()
>> +in do_something_locked() and the smp_store_release() in end_global()
>> +guarantees that either the do_something_locked() function's first
>> +do_something() invocation happens after the call to end_global() or that
>> +do_something_locked() acquires global_lock() and rechecks under the lock.
> This last sentence also makes no sense unless you imagine dropping
> global_lock between begin_global() and end_global().

ipc/sem.c does that and needs that, nf_conntrack doesn't use this.


--

     Manfred



* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23  6:52     ` Manfred Spraul
@ 2021-07-23 13:05       ` Alan Stern
  2021-07-23 13:57         ` Manfred Spraul
  2021-07-23 16:30         ` Paul E. McKenney
  0 siblings, 2 replies; 28+ messages in thread
From: Alan Stern @ 2021-07-23 13:05 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: Paul E. McKenney, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Fri, Jul 23, 2021 at 08:52:50AM +0200, Manfred Spraul wrote:
> Hi Alan,

Hi.

> On 7/23/21 4:08 AM, Alan Stern wrote:
> > On Wed, Jul 21, 2021 at 02:10:01PM -0700, Paul E. McKenney wrote:
> > > This commit adds example code for heuristic lockless reads, based loosely
> > > on the sem_lock() and sem_unlock() functions.
> > > 
> > > Reported-by: Manfred Spraul <manfred@colorfullife.com>
> > > [ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
> > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > ---
> > >   .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
> > >   1 file changed, 94 insertions(+)
> > > 
> > > diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
> > > index 58bff26198767..be7d507997cf8 100644
> > > --- a/tools/memory-model/Documentation/access-marking.txt
> > > +++ b/tools/memory-model/Documentation/access-marking.txt
> > > @@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
> > >   concurrent lockless write.
> > > +Lock-Protected Writes With Heuristic Lockless Reads
> > > +---------------------------------------------------
> > > +
> > > +For another example, suppose that the code can normally make use of
> > > +a per-data-structure lock, but there are times when a global lock
> > > +is required.  These times are indicated via a global flag.  The code
> > > +might look as follows, and is based loosely on nf_conntrack_lock(),
> > > +nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
> > > +
> > > +	bool global_flag;
> > > +	DEFINE_SPINLOCK(global_lock);
> > > +	struct foo {
> > > +		spinlock_t f_lock;
> > > +		int f_data;
> > > +	};
> > > +
> > > +	/* All foo structures are in the following array. */
> > > +	int nfoo;
> > > +	struct foo *foo_array;
> > > +
> > > +	void do_something_locked(struct foo *fp)
> > > +	{
> > > +		bool gf = true;
> > > +
> > > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > > +		if (!data_race(global_flag)) {
> > > +			spin_lock(&fp->f_lock);
> > > +			if (!smp_load_acquire(&global_flag)) {
> > > +				do_something(fp);
> > > +				spin_unlock(&fp->f_lock);
> > > +				return;
> > > +			}
> > > +			spin_unlock(&fp->f_lock);
> > > +		}
> > > +		spin_lock(&global_lock);
> > > +		/* Lock held, thus global flag cannot change. */
> > > +		if (!global_flag) {
> > How can global_flag ever be true at this point?  The only line of code
> > that sets it is in begin_global() below, it only runs while global_lock
> > is held, and global_flag is set back to false before the lock is
> > released.
> 
> It can't be true. The code is a simplified version of the algorithm in
> ipc/sem.c.
> 
> For the ipc/sem.c, global_flag can remain true even after dropping
> global_lock.
> 
> When transferring the approach to nf_conntrack_core, I didn't notice that
> nf_conntrack doesn't need a persistent global_flag.
> 
> Thus the recheck after spin_lock(&global_lock) is not needed.

In fact, since global_flag is true if and only if global_lock is locked, 
perhaps it can be removed entirely and replaced with 
spin_is_locked(&global_lock).

> > > +			spin_lock(&fp->f_lock);
> > > +			spin_unlock(&global_lock);
> > > +			gf = false;
> > > +		}
> > > +		do_something(fp);
> > > +		if (fg)
> > Should be gf, not fg.
> > 
> > > +			spin_unlock(&global_lock);
> > > +		else
> > > +			spin_lock(&fp->f_lock);
> > > +	}
> > > +
> > > +	void begin_global(void)
> > > +	{
> > > +		int i;
> > > +
> > > +		spin_lock(&global_lock);
> > > +		WRITE_ONCE(global_flag, true);
> > Why does this need to be WRITE_ONCE?  It still races with the first read
> > of global_flag above.
> > 
> > > +		for (i = 0; i < nfoo; i++) {
> > > +			/* Wait for pre-existing local locks. */
> > > +			spin_lock(&fp->f_lock);
> > > +			spin_unlock(&fp->f_lock);
> > Why not acquire all the locks here and release all of them in
> > end_global()?  Then global_flag wouldn't need acquire-release
> > synchronization.
> 
> From my understanding:
> spin_lock contains preempt_count_add, thus you can't acquire more than 255
> spinlocks (actually 245, the warning limit is 10 below 255)

It might be worth mentioning this in a code comment.  Or in the 
accompanying text.

> > > +		}
> > > +	}
> > > +
> > > +	void end_global(void)
> > > +	{
> > > +		smp_store_release(&global_flag, false);
> > > +		/* Pre-existing global lock acquisitions will recheck. */
> > What does that comment mean?  How can there be any pre-existing global
> > lock acquisitions when we hold the lock right now?
> 
> > > +		spin_unlock(&global_lock);
> > > +	}
> > > +
> > > +All code paths leading from the do_something_locked() function's first
> > > +read from global_flag acquire a lock, so endless load fusing cannot
> > > +happen.
> > > +
> > > +If the value read from global_flag is true, then global_flag is rechecked
> > > +while holding global_lock, which prevents global_flag from changing.
> > > +If this recheck finds that global_flag is now false, the acquisition
> > Again, how can't global_flag be false now?
> > 
> > Did you originally have in mind some sort of scheme in which
> > begin_global() would release global_lock before returning and
> > end_global() would acquire global_lock before clearing global_flag?  But
> > I don't see how that could work without changes to do_something_locked().
> > 
> > > +of ->f_lock prior to the release of global_lock will result in any subsequent
> > > +begin_global() invocation waiting to acquire ->f_lock.
> > > +
> > > +On the other hand, if the value read from global_flag is false, then
> > > +global_flag, then rechecking under ->f_lock combined with synchronization
> > ---^^^^^^^^^^^^^^^^^^
> > 
> > Typo?
> > 
> > > +with begin_global() guarantees than any erroneous read will cause the
> > > +do_something_locked() function's first do_something() invocation to happen
> > > +before begin_global() returns.  The combination of the smp_load_acquire()
> > > +in do_something_locked() and the smp_store_release() in end_global()
> > > +guarantees that either the do_something_locked() function's first
> > > +do_something() invocation happens after the call to end_global() or that
> > > +do_something_locked() acquires global_lock() and rechecks under the lock.
> > This last sentence also makes no sense unless you imagine dropping
> > global_lock between begin_global() and end_global().
> 
> ipc/sem.c does that and needs that, nf_conntrack doesn't use this.

Given all these issues, it seems like this patch needs to be re-written.

Alan Stern


* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 13:05       ` Alan Stern
@ 2021-07-23 13:57         ` Manfred Spraul
  2021-07-23 16:30         ` Paul E. McKenney
  1 sibling, 0 replies; 28+ messages in thread
From: Manfred Spraul @ 2021-07-23 13:57 UTC (permalink / raw)
  To: Alan Stern
  Cc: Paul E. McKenney, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

Hi Alan,

On 7/23/21 3:05 PM, Alan Stern wrote:
> On Fri, Jul 23, 2021 at 08:52:50AM +0200, Manfred Spraul wrote:
>> Hi Alan,
> Hi.
>
>> On 7/23/21 4:08 AM, Alan Stern wrote:
>>> On Wed, Jul 21, 2021 at 02:10:01PM -0700, Paul E. McKenney wrote:
>>>> This commit adds example code for heuristic lockless reads, based loosely
>>>> on the sem_lock() and sem_unlock() functions.
>>>>
>>>> Reported-by: Manfred Spraul <manfred@colorfullife.com>
>>>> [ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
>>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>>> ---
>>>>    .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
>>>>    1 file changed, 94 insertions(+)
>>>>
>>>> diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
>>>> index 58bff26198767..be7d507997cf8 100644
>>>> --- a/tools/memory-model/Documentation/access-marking.txt
>>>> +++ b/tools/memory-model/Documentation/access-marking.txt
>>>> @@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
>>>>    concurrent lockless write.
>>>> +Lock-Protected Writes With Heuristic Lockless Reads
>>>> +---------------------------------------------------
>>>> +
>>>> +For another example, suppose that the code can normally make use of
>>>> +a per-data-structure lock, but there are times when a global lock
>>>> +is required.  These times are indicated via a global flag.  The code
>>>> +might look as follows, and is based loosely on nf_conntrack_lock(),
>>>> +nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
>>>> +
>>>> +	bool global_flag;
>>>> +	DEFINE_SPINLOCK(global_lock);
>>>> +	struct foo {
>>>> +		spinlock_t f_lock;
>>>> +		int f_data;
>>>> +	};
>>>> +
>>>> +	/* All foo structures are in the following array. */
>>>> +	int nfoo;
>>>> +	struct foo *foo_array;
>>>> +
>>>> +	void do_something_locked(struct foo *fp)
>>>> +	{
>>>> +		bool gf = true;
>>>> +
>>>> +		/* IMPORTANT: Heuristic plus spin_lock()! */
>>>> +		if (!data_race(global_flag)) {
>>>> +			spin_lock(&fp->f_lock);
>>>> +			if (!smp_load_acquire(&global_flag)) {
>>>> +				do_something(fp);
>>>> +				spin_unlock(&fp->f_lock);
>>>> +				return;
>>>> +			}
>>>> +			spin_unlock(&fp->f_lock);
>>>> +		}
>>>> +		spin_lock(&global_lock);
>>>> +		/* Lock held, thus global flag cannot change. */
>>>> +		if (!global_flag) {
>>> How can global_flag ever be true at this point?  The only line of code
>>> that sets it is in begin_global() below, it only runs while global_lock
>>> is held, and global_flag is set back to false before the lock is
>>> released.
>> It can't be true. The code is a simplified version of the algorithm in
>> ipc/sem.c.
>>
>> For the ipc/sem.c, global_flag can remain true even after dropping
>> global_lock.
>>
>> When transferring the approach to nf_conntrack_core, I didn't notice that
>> nf_conntrack doesn't need a persistent global_flag.
>>
>> Thus the recheck after spin_lock(&global_lock) is not needed.
> In fact, since global_flag is true if and only if global_lock is locked,
> perhaps it can be removed entirely and replaced with
> spin_is_locked(&global_lock).

I try to avoid spin_is_locked():

- spin_is_locked() is not a memory barrier

- spin_lock() is an acquire memory barrier - for the read part. There is 
no barrier at all related to the write part.

With an explicit variable, the memory barriers can be controlled much 
better - and it is guaranteed to work in the same way on all architectures.


--

     Manfred



* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23  2:08   ` Alan Stern
  2021-07-23  6:52     ` Manfred Spraul
@ 2021-07-23 16:24     ` Paul E. McKenney
  2021-07-23 16:59       ` Alan Stern
  1 sibling, 1 reply; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-23 16:24 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks, Manfred Spraul

On Thu, Jul 22, 2021 at 10:08:46PM -0400, Alan Stern wrote:
> On Wed, Jul 21, 2021 at 02:10:01PM -0700, Paul E. McKenney wrote:
> > This commit adds example code for heuristic lockless reads, based loosely
> > on the sem_lock() and sem_unlock() functions.
> > 
> > Reported-by: Manfred Spraul <manfred@colorfullife.com>
> > [ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >  .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
> >  1 file changed, 94 insertions(+)
> > 
> > diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
> > index 58bff26198767..be7d507997cf8 100644
> > --- a/tools/memory-model/Documentation/access-marking.txt
> > +++ b/tools/memory-model/Documentation/access-marking.txt
> > @@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
> >  concurrent lockless write.
> >  
> >  
> > +Lock-Protected Writes With Heuristic Lockless Reads
> > +---------------------------------------------------
> > +
> > +For another example, suppose that the code can normally make use of
> > +a per-data-structure lock, but there are times when a global lock
> > +is required.  These times are indicated via a global flag.  The code
> > +might look as follows, and is based loosely on nf_conntrack_lock(),
> > +nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
> > +
> > +	bool global_flag;
> > +	DEFINE_SPINLOCK(global_lock);
> > +	struct foo {
> > +		spinlock_t f_lock;
> > +		int f_data;
> > +	};
> > +
> > +	/* All foo structures are in the following array. */
> > +	int nfoo;
> > +	struct foo *foo_array;
> > +
> > +	void do_something_locked(struct foo *fp)
> > +	{
> > +		bool gf = true;
> > +
> > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > +		if (!data_race(global_flag)) {
> > +			spin_lock(&fp->f_lock);
> > +			if (!smp_load_acquire(&global_flag)) {
> > +				do_something(fp);
> > +				spin_unlock(&fp->f_lock);
> > +				return;
> > +			}
> > +			spin_unlock(&fp->f_lock);
> > +		}
> > +		spin_lock(&global_lock);
> > +		/* Lock held, thus global flag cannot change. */
> > +		if (!global_flag) {
> 
> How can global_flag ever be true at this point?  The only line of code 
> that sets it is in begin_global() below, it only runs while global_lock 
> is held, and global_flag is set back to false before the lock is 
> released.

Good point.  The fact that we hold global_lock means that global_flag
cannot be set, which means that we can unconditionally acquire the
per-foo lock and release global_lock.

> > +			spin_lock(&fp->f_lock);
> > +			spin_unlock(&global_lock);
> > +			gf = false;
> > +		}
> > +		do_something(fp);
> > +		if (fg)
> 
> Should be gf, not fg.

And we can also eliminate gf and its typo.

> > +			spin_unlock(&global_lock);
> > +		else
> > +			spin_lock(&fp->f_lock);
> > +	}
> > +
> > +	void begin_global(void)
> > +	{
> > +		int i;
> > +
> > +		spin_lock(&global_lock);
> > +		WRITE_ONCE(global_flag, true);
> 
> Why does this need to be WRITE_ONCE?  It still races with the first read 
> of global_flag above.

But also with the smp_load_acquire() of global_flag, right?

> > +		for (i = 0; i < nfoo; i++) {
> > +			/* Wait for pre-existing local locks. */
> > +			spin_lock(&fp->f_lock);
> > +			spin_unlock(&fp->f_lock);
> 
> Why not acquire all the locks here and release all of them in 
> end_global()?  Then global_flag wouldn't need acquire-release 
> > synchronization.

As suggested later in this thread, I have added a comment.

> > +		}
> > +	}
> > +
> > +	void end_global(void)
> > +	{
> > +		smp_store_release(&global_flag, false);
> > +		/* Pre-existing global lock acquisitions will recheck. */
> 
> What does that comment mean?  How can there be any pre-existing global 
> lock acquisitions when we hold the lock right now?

I have removed this comment.  The last shred of reason for it went away
with the gf local variable.

> > +		spin_unlock(&global_lock);
> > +	}
> > +
> > +All code paths leading from the do_something_locked() function's first
> > +read from global_flag acquire a lock, so endless load fusing cannot
> > +happen.
> > +
> > +If the value read from global_flag is true, then global_flag is rechecked
> > +while holding global_lock, which prevents global_flag from changing.
> > +If this recheck finds that global_flag is now false, the acquisition
> 
> Again, how can't global_flag be false now?
> 
> Did you originally have in mind some sort of scheme in which 
> begin_global() would release global_lock before returning and 
> end_global() would acquire global_lock before clearing global_flag?  But 
> I don't see how that could work without changes to do_something_locked().

I was thinking along those lines, but I clearly wasn't thinking very
clearly.  :-/

> > +of ->f_lock prior to the release of global_lock will result in any subsequent
> > +begin_global() invocation waiting to acquire ->f_lock.
> > +
> > +On the other hand, if the value read from global_flag is false, then
> > +global_flag, then rechecking under ->f_lock combined with synchronization
> ---^^^^^^^^^^^^^^^^^^
> 
> Typo?

Good catch, and I took care of this by rewriting this paragraph.

Likely introducing other typos in the process, but so it goes.

> > +with begin_global() guarantees than any erroneous read will cause the
> > +do_something_locked() function's first do_something() invocation to happen
> > +before begin_global() returns.  The combination of the smp_load_acquire()
> > +in do_something_locked() and the smp_store_release() in end_global()
> > +guarantees that either the do_something_locked() function's first
> > +do_something() invocation happens after the call to end_global() or that
> > +do_something_locked() acquires global_lock() and rechecks under the lock.
> 
> This last sentence also makes no sense unless you imagine dropping 
> global_lock between begin_global() and end_global().

Agreed.

						Thanx, Paul


* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 13:05       ` Alan Stern
  2021-07-23 13:57         ` Manfred Spraul
@ 2021-07-23 16:30         ` Paul E. McKenney
  2021-07-23 17:08           ` Alan Stern
  1 sibling, 1 reply; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-23 16:30 UTC (permalink / raw)
  To: Alan Stern
  Cc: Manfred Spraul, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Fri, Jul 23, 2021 at 09:05:54AM -0400, Alan Stern wrote:
> On Fri, Jul 23, 2021 at 08:52:50AM +0200, Manfred Spraul wrote:
> > Hi Alan,
> 
> Hi.
> 
> > On 7/23/21 4:08 AM, Alan Stern wrote:
> > > On Wed, Jul 21, 2021 at 02:10:01PM -0700, Paul E. McKenney wrote:
> > > > This commit adds example code for heuristic lockless reads, based loosely
> > > > on the sem_lock() and sem_unlock() functions.
> > > > 
> > > > Reported-by: Manfred Spraul <manfred@colorfullife.com>
> > > > [ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
> > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > > ---
> > > >   .../Documentation/access-marking.txt          | 94 +++++++++++++++++++
> > > >   1 file changed, 94 insertions(+)
> > > > 
> > > > diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
> > > > index 58bff26198767..be7d507997cf8 100644
> > > > --- a/tools/memory-model/Documentation/access-marking.txt
> > > > +++ b/tools/memory-model/Documentation/access-marking.txt
> > > > @@ -319,6 +319,100 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
> > > >   concurrent lockless write.
> > > > +Lock-Protected Writes With Heuristic Lockless Reads
> > > > +---------------------------------------------------
> > > > +
> > > > +For another example, suppose that the code can normally make use of
> > > > +a per-data-structure lock, but there are times when a global lock
> > > > +is required.  These times are indicated via a global flag.  The code
> > > > +might look as follows, and is based loosely on nf_conntrack_lock(),
> > > > +nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
> > > > +
> > > > +	bool global_flag;
> > > > +	DEFINE_SPINLOCK(global_lock);
> > > > +	struct foo {
> > > > +		spinlock_t f_lock;
> > > > +		int f_data;
> > > > +	};
> > > > +
> > > > +	/* All foo structures are in the following array. */
> > > > +	int nfoo;
> > > > +	struct foo *foo_array;
> > > > +
> > > > +	void do_something_locked(struct foo *fp)
> > > > +	{
> > > > +		bool gf = true;
> > > > +
> > > > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > > > +		if (!data_race(global_flag)) {
> > > > +			spin_lock(&fp->f_lock);
> > > > +			if (!smp_load_acquire(&global_flag)) {
> > > > +				do_something(fp);
> > > > +				spin_unlock(&fp->f_lock);
> > > > +				return;
> > > > +			}
> > > > +			spin_unlock(&fp->f_lock);
> > > > +		}
> > > > +		spin_lock(&global_lock);
> > > > +		/* Lock held, thus global flag cannot change. */
> > > > +		if (!global_flag) {
> > > How can global_flag ever be true at this point?  The only line of code
> > > that sets it is in begin_global() below, it only runs while global_lock
> > > is held, and global_flag is set back to false before the lock is
> > > released.
> > 
> > It can't be true. The code is a simplified version of the algorithm in
> > ipc/sem.c.
> > 
> > For the ipc/sem.c, global_flag can remain true even after dropping
> > global_lock.
> > 
> > When transferring the approach to nf_conntrack_core, I didn't notice that
> > nf_conntrack doesn't need a persistent global_flag.
> > 
> > Thus the recheck after spin_lock(&global_lock) is not needed.
> 
> In fact, since global_flag is true if and only if global_lock is locked, 
> perhaps it can be removed entirely and replaced with 
> spin_is_locked(&global_lock).
> 
> > > > +			spin_lock(&fp->f_lock);
> > > > +			spin_unlock(&global_lock);
> > > > +			gf = false;
> > > > +		}
> > > > +		do_something(fp);
> > > > +		if (fg)
> > > Should be gf, not fg.
> > > 
> > > > +			spin_unlock(&global_lock);
> > > > +		else
> > > > +			spin_lock(&fp->f_lock);
> > > > +	}
> > > > +
> > > > +	void begin_global(void)
> > > > +	{
> > > > +		int i;
> > > > +
> > > > +		spin_lock(&global_lock);
> > > > +		WRITE_ONCE(global_flag, true);
> > > Why does this need to be WRITE_ONCE?  It still races with the first read
> > > of global_flag above.
> > > 
> > > > +		for (i = 0; i < nfoo; i++) {
> > > > +			/* Wait for pre-existing local locks. */
> > > > +			spin_lock(&fp->f_lock);
> > > > +			spin_unlock(&fp->f_lock);
> > > Why not acquire all the locks here and release all of them in
> > > end_global()?  Then global_flag wouldn't need acquire-release
> > > synchronization.
> > 
> > From my understanding:
> > spin_lock contains preempt_count_add, thus you can't acquire more than 255
> > spinlocks (actually 245, the warning limit is 10 below 255)
> 
> It might be worth mentioning this in a code comment.  Or in the 
> accompanying text.

As noted earlier, done.

> > > > +		}
> > > > +	}
> > > > +
> > > > +	void end_global(void)
> > > > +	{
> > > > +		smp_store_release(&global_flag, false);
> > > > +		/* Pre-existing global lock acquisitions will recheck. */
> > > What does that comment mean?  How can there be any pre-existing global
> > > lock acquisitions when we hold the lock right now?
> > 
> > > > +		spin_unlock(&global_lock);
> > > > +	}
> > > > +
> > > > +All code paths leading from the do_something_locked() function's first
> > > > +read from global_flag acquire a lock, so endless load fusing cannot
> > > > +happen.
> > > > +
> > > > +If the value read from global_flag is true, then global_flag is rechecked
> > > > +while holding global_lock, which prevents global_flag from changing.
> > > > +If this recheck finds that global_flag is now false, the acquisition
> > > Again, how can't global_flag be false now?
> > > 
> > > Did you originally have in mind some sort of scheme in which
> > > begin_global() would release global_lock before returning and
> > > end_global() would acquire global_lock before clearing global_flag?  But
> > > I don't see how that could work without changes to do_something_locked().
> > > 
> > > > +of ->f_lock prior to the release of global_lock will result in any subsequent
> > > > +begin_global() invocation waiting to acquire ->f_lock.
> > > > +
> > > > +On the other hand, if the value read from global_flag is false, then
> > > > +global_flag, then rechecking under ->f_lock combined with synchronization
> > > ---^^^^^^^^^^^^^^^^^^
> > > 
> > > Typo?
> > > 
> > > > +with begin_global() guarantees than any erroneous read will cause the
> > > > +do_something_locked() function's first do_something() invocation to happen
> > > > +before begin_global() returns.  The combination of the smp_load_acquire()
> > > > +in do_something_locked() and the smp_store_release() in end_global()
> > > > +guarantees that either the do_something_locked() function's first
> > > > +do_something() invocation happens after the call to end_global() or that
> > > > +do_something_locked() acquires global_lock() and rechecks under the lock.
> > > This last sentence also makes no sense unless you imagine dropping
> > > global_lock between begin_global() and end_global().
> > 
> > ipc/sem.c does that and needs that, nf_conntrack doesn't use this.
> 
> Given all these issues, it seems like this patch needs to be re-written.

How about like this?

							Thanx, Paul

------------------------------------------------------------------------

Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* IMPORTANT: Heuristic plus spin_lock()! */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_lock(&fp->f_lock);
}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks.  One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&fp->f_lock);
			spin_unlock(&fp->f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock; if it is still false, the held
->f_lock prevents begin_global() from completing.  It is therefore safe
to invoke do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.


* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 16:24     ` Paul E. McKenney
@ 2021-07-23 16:59       ` Alan Stern
  2021-07-23 17:30         ` Paul E. McKenney
  0 siblings, 1 reply; 28+ messages in thread
From: Alan Stern @ 2021-07-23 16:59 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks, Manfred Spraul

On Fri, Jul 23, 2021 at 09:24:31AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 22, 2021 at 10:08:46PM -0400, Alan Stern wrote:

> > > +	void do_something_locked(struct foo *fp)
> > > +	{
> > > +		bool gf = true;
> > > +
> > > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > > +		if (!data_race(global_flag)) {
> > > +			spin_lock(&fp->f_lock);
> > > +			if (!smp_load_acquire(&global_flag)) {

> > > +	void begin_global(void)
> > > +	{
> > > +		int i;
> > > +
> > > +		spin_lock(&global_lock);
> > > +		WRITE_ONCE(global_flag, true);
> > 
> > Why does this need to be WRITE_ONCE?  It still races with the first read 
> > of global_flag above.
> 
> But also with the smp_load_acquire() of global_flag, right?

What I'm curious about is why, given these two races, you notate one of 
them by changing a normal write to WRITE_ONCE and you notate the other 
by changing a normal read to a data_race() read.  Why not handle them 
both the same way?

Alan


* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 16:30         ` Paul E. McKenney
@ 2021-07-23 17:08           ` Alan Stern
  2021-07-23 20:32             ` Paul E. McKenney
  0 siblings, 1 reply; 28+ messages in thread
From: Alan Stern @ 2021-07-23 17:08 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Manfred Spraul, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Fri, Jul 23, 2021 at 09:30:08AM -0700, Paul E. McKenney wrote:
> How about like this?
> 
> 							Thanx, Paul

Generally a lot better, but still at least one issue.

> ------------------------------------------------------------------------
> 
> Lock-Protected Writes With Heuristic Lockless Reads
> ---------------------------------------------------
> 
> For another example, suppose that the code can normally make use of
> a per-data-structure lock, but there are times when a global lock
> is required.  These times are indicated via a global flag.  The code
> might look as follows, and is based loosely on nf_conntrack_lock(),
> nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
> 
> 	bool global_flag;
> 	DEFINE_SPINLOCK(global_lock);
> 	struct foo {
> 		spinlock_t f_lock;
> 		int f_data;
> 	};
> 
> 	/* All foo structures are in the following array. */
> 	int nfoo;
> 	struct foo *foo_array;
> 
> 	void do_something_locked(struct foo *fp)
> 	{
> 		/* IMPORTANT: Heuristic plus spin_lock()! */
> 		if (!data_race(global_flag)) {
> 			spin_lock(&fp->f_lock);
> 			if (!smp_load_acquire(&global_flag)) {
> 				do_something(fp);
> 				spin_unlock(&fp->f_lock);
> 				return;
> 			}
> 			spin_unlock(&fp->f_lock);
> 		}
> 		spin_lock(&global_lock);
> 		/* global_lock held, thus global flag cannot be set. */
> 		spin_lock(&fp->f_lock);
> 		spin_unlock(&global_lock);
> 		/*
> 		 * global_flag might be set here, but begin_global()
> 		 * will wait for ->f_lock to be released.
> 		 */
> 		do_something(fp);
> 		spin_lock(&fp->f_lock);

spin_unlock.

> }
> 
> 	void begin_global(void)
> 	{
> 		int i;
> 
> 		spin_lock(&global_lock);
> 		WRITE_ONCE(global_flag, true);
> 		for (i = 0; i < nfoo; i++) {
> 			/*
> 			 * Wait for pre-existing local locks.  One at
> 			 * a time to avoid lockdep limitations.
> 			 */
> 			spin_lock(&fp->f_lock);
> 			spin_unlock(&fp->f_lock);
> 		}
> 	}
> 
> 	void end_global(void)
> 	{
> 		smp_store_release(&global_flag, false);
> 		spin_unlock(&global_lock);
> 	}
> 
> All code paths leading from the do_something_locked() function's first
> read from global_flag acquire a lock, so endless load fusing cannot
> happen.
> 
> If the value read from global_flag is false, then global_flag is
> rechecked while holding ->f_lock; if it is still false, the held
> ->f_lock prevents begin_global() from completing.  It is therefore safe
> to invoke do_something().
> 
> Otherwise, if either value read from global_flag is true, then after
> global_lock is acquired global_flag must be false.  The acquisition of
> ->f_lock will prevent any call to begin_global() from returning, which
> means that it is safe to release global_lock and invoke do_something().
> 
> For this to work, only those foo structures in foo_array[] may be passed
> to do_something_locked().  The reason for this is that the synchronization
> with begin_global() relies on momentarily holding the lock of each and
> every foo structure.

This doesn't mention the reason for the acquire-release
synchronization of global_flag.  It's needed because work done between
begin_global() and end_global() can affect a foo structure without
holding its private f_lock member, and we want all such work to be
visible to other threads when they call do_something_locked() later.

Alan
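
The quoted pattern can be exercised in user space.  The sketch below is
an editor's illustration rather than kernel code: pthread mutexes stand
in for spinlocks, C11 atomics approximate data_race(),
smp_load_acquire(), and smp_store_release(), and names such as NFOO and
init_foos() are invented for the sketch.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NFOO 4

static atomic_bool global_flag;
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

struct foo {
	pthread_mutex_t f_lock;
	int f_data;
};

static struct foo foo_array[NFOO];

static void init_foos(void)
{
	for (int i = 0; i < NFOO; i++)
		pthread_mutex_init(&foo_array[i].f_lock, NULL);
}

static void do_something(struct foo *fp)
{
	fp->f_data++;	/* stand-in for the real work */
}

static void do_something_locked(struct foo *fp)
{
	/* Heuristic: a relaxed load plays the role of data_race(). */
	if (!atomic_load_explicit(&global_flag, memory_order_relaxed)) {
		pthread_mutex_lock(&fp->f_lock);
		if (!atomic_load_explicit(&global_flag,
					  memory_order_acquire)) {
			do_something(fp);
			pthread_mutex_unlock(&fp->f_lock);
			return;
		}
		pthread_mutex_unlock(&fp->f_lock);
	}
	pthread_mutex_lock(&global_lock);
	/* global_lock held, thus global_flag cannot be set. */
	pthread_mutex_lock(&fp->f_lock);
	pthread_mutex_unlock(&global_lock);
	do_something(fp);
	pthread_mutex_unlock(&fp->f_lock);
}

static void begin_global(void)
{
	pthread_mutex_lock(&global_lock);
	atomic_store_explicit(&global_flag, true, memory_order_relaxed);
	for (int i = 0; i < NFOO; i++) {
		/* Wait for pre-existing local locks, one at a time. */
		pthread_mutex_lock(&foo_array[i].f_lock);
		pthread_mutex_unlock(&foo_array[i].f_lock);
	}
}

static void end_global(void)
{
	atomic_store_explicit(&global_flag, false, memory_order_release);
	pthread_mutex_unlock(&global_lock);
}
```

A single-threaded run cannot exhibit the races, but it does exercise
both paths: call do_something_locked() before begin_global() (fast
path), update f_data directly between begin_global() and end_global()
as the globally locked work would, then call do_something_locked()
again.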

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 16:59       ` Alan Stern
@ 2021-07-23 17:30         ` Paul E. McKenney
  2021-07-23 18:11           ` Alan Stern
  0 siblings, 1 reply; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-23 17:30 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks, Manfred Spraul

On Fri, Jul 23, 2021 at 12:59:47PM -0400, Alan Stern wrote:
> On Fri, Jul 23, 2021 at 09:24:31AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 22, 2021 at 10:08:46PM -0400, Alan Stern wrote:
> 
> > > > +	void do_something_locked(struct foo *fp)
> > > > +	{
> > > > +		bool gf = true;
> > > > +
> > > > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > > > +		if (!data_race(global_flag)) {
> > > > +			spin_lock(&fp->f_lock);
> > > > +			if (!smp_load_acquire(&global_flag)) {
> 
> > > > +	void begin_global(void)
> > > > +	{
> > > > +		int i;
> > > > +
> > > > +		spin_lock(&global_lock);
> > > > +		WRITE_ONCE(global_flag, true);
> > > 
> > > Why does this need to be WRITE_ONCE?  It still races with the first read 
> > > of global_flag above.
> > 
> > But also with the smp_load_acquire() of global_flag, right?
> 
> What I'm curious about is why, given these two races, you notate one of 
> them by changing a normal write to WRITE_ONCE and you notate the other 
> by changing a normal read to a data_race() read.  Why not handle them 
> both the same way?

Because the code can tolerate the first read returning complete nonsense,
but needs the value from the second read to be exact at that point in
time.  (If the value changes immediately after being read, the fact that
->f_lock is held prevents begin_global() from completing.)

							Thanx, Paul
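
The distinction drawn here can be made concrete with the classic
volatile-cast definitions of the marking macros.  This is an editor's
user-space sketch; the kernel's real definitions (in
include/asm-generic/rwonce.h and include/linux/compiler.h) are more
elaborate.

```c
#include <stdbool.h>

/*
 * User-space sketch of the marking macros (editor's illustration).
 * The volatile cast forbids the compiler from fusing, tearing, or
 * repeating the access, so a READ_ONCE() racing with a WRITE_ONCE()
 * observes either the old value or the new one -- never nonsense.
 * A data_race() read makes no such promise; it merely tells KCSAN
 * that the race is intentional.
 */
#define READ_ONCE(x)	 (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static bool my_global_flag;	/* hypothetical flag for the sketch */

bool read_flag(void)
{
	return READ_ONCE(my_global_flag);
}

void set_flag(bool v)
{
	WRITE_ONCE(my_global_flag, v);
}
```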

* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 17:30         ` Paul E. McKenney
@ 2021-07-23 18:11           ` Alan Stern
  2021-07-23 20:28             ` Paul E. McKenney
  0 siblings, 1 reply; 28+ messages in thread
From: Alan Stern @ 2021-07-23 18:11 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks, Manfred Spraul

On Fri, Jul 23, 2021 at 10:30:10AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 23, 2021 at 12:59:47PM -0400, Alan Stern wrote:
> > On Fri, Jul 23, 2021 at 09:24:31AM -0700, Paul E. McKenney wrote:
> > > On Thu, Jul 22, 2021 at 10:08:46PM -0400, Alan Stern wrote:
> > 
> > > > > +	void do_something_locked(struct foo *fp)
> > > > > +	{
> > > > > +		bool gf = true;
> > > > > +
> > > > > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > > > > +		if (!data_race(global_flag)) {
> > > > > +			spin_lock(&fp->f_lock);
> > > > > +			if (!smp_load_acquire(&global_flag)) {
> > 
> > > > > +	void begin_global(void)
> > > > > +	{
> > > > > +		int i;
> > > > > +
> > > > > +		spin_lock(&global_lock);
> > > > > +		WRITE_ONCE(global_flag, true);
> > > > 
> > > > Why does this need to be WRITE_ONCE?  It still races with the first read 
> > > > of global_flag above.
> > > 
> > > But also with the smp_load_acquire() of global_flag, right?
> > 
> > What I'm curious about is why, given these two races, you notate one of 
> > them by changing a normal write to WRITE_ONCE and you notate the other 
> > by changing a normal read to a data_race() read.  Why not handle them 
> > both the same way?
> 
> Because the code can tolerate the first read returning complete nonsense,
> but needs the value from the second read to be exact at that point in
> time.

In other words, if the second read races with the WRITE_ONCE, it needs to 
get either the value before the write or the value after the write; 
nothing else will do because it isn't a heuristic here.  Fair point.

>  (If the value changes immediately after being read, the fact that
> ->f_lock is held prevents begin_global() from completing.)

This seems like something worth explaining in the document.  That 
"IMPORTANT" comment doesn't really get the full point across.

Alan

* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 18:11           ` Alan Stern
@ 2021-07-23 20:28             ` Paul E. McKenney
  0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-23 20:28 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks, Manfred Spraul

On Fri, Jul 23, 2021 at 02:11:38PM -0400, Alan Stern wrote:
> On Fri, Jul 23, 2021 at 10:30:10AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 23, 2021 at 12:59:47PM -0400, Alan Stern wrote:
> > > On Fri, Jul 23, 2021 at 09:24:31AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Jul 22, 2021 at 10:08:46PM -0400, Alan Stern wrote:
> > > 
> > > > > > +	void do_something_locked(struct foo *fp)
> > > > > > +	{
> > > > > > +		bool gf = true;
> > > > > > +
> > > > > > +		/* IMPORTANT: Heuristic plus spin_lock()! */
> > > > > > +		if (!data_race(global_flag)) {
> > > > > > +			spin_lock(&fp->f_lock);
> > > > > > +			if (!smp_load_acquire(&global_flag)) {
> > > 
> > > > > > +	void begin_global(void)
> > > > > > +	{
> > > > > > +		int i;
> > > > > > +
> > > > > > +		spin_lock(&global_lock);
> > > > > > +		WRITE_ONCE(global_flag, true);
> > > > > 
> > > > > Why does this need to be WRITE_ONCE?  It still races with the first read 
> > > > > of global_flag above.
> > > > 
> > > > But also with the smp_load_acquire() of global_flag, right?
> > > 
> > > What I'm curious about is why, given these two races, you notate one of 
> > > them by changing a normal write to WRITE_ONCE and you notate the other 
> > > by changing a normal read to a data_race() read.  Why not handle them 
> > > both the same way?
> > 
> > Because the code can tolerate the first read returning complete nonsense,
> > but needs the value from the second read to be exact at that point in
> > time.
> 
> In other words, if the second read races with the WRITE_ONCE, it needs to 
> get either the value before the write or the value after the write; 
> nothing else will do because it isn't a heuristic here.  Fair point.
> 
> >  (If the value changes immediately after being read, the fact that
> > ->f_lock is held prevents begin_global() from completing.)
> 
> This seems like something worth explaining in the document.  That 
> "IMPORTANT" comment doesn't really get the full point across.

How about this comment instead?

	/* This works even if data_race() returns nonsense. */

							Thanx, Paul

* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 17:08           ` Alan Stern
@ 2021-07-23 20:32             ` Paul E. McKenney
  2021-07-23 21:03               ` Alan Stern
  0 siblings, 1 reply; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-23 20:32 UTC (permalink / raw)
  To: Alan Stern
  Cc: Manfred Spraul, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Fri, Jul 23, 2021 at 01:08:20PM -0400, Alan Stern wrote:
> On Fri, Jul 23, 2021 at 09:30:08AM -0700, Paul E. McKenney wrote:
> > How about like this?
> > 
> > 							Thanx, Paul
> 
> Generally a lot better, but still at least one issue.
> 
> > ------------------------------------------------------------------------
> > 
> > Lock-Protected Writes With Heuristic Lockless Reads
> > ---------------------------------------------------
> > 
> > For another example, suppose that the code can normally make use of
> > a per-data-structure lock, but there are times when a global lock
> > is required.  These times are indicated via a global flag.  The code
> > might look as follows, and is based loosely on nf_conntrack_lock(),
> > nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
> > 
> > 	bool global_flag;
> > 	DEFINE_SPINLOCK(global_lock);
> > 	struct foo {
> > 		spinlock_t f_lock;
> > 		int f_data;
> > 	};
> > 
> > 	/* All foo structures are in the following array. */
> > 	int nfoo;
> > 	struct foo *foo_array;
> > 
> > 	void do_something_locked(struct foo *fp)
> > 	{
> > 		/* IMPORTANT: Heuristic plus spin_lock()! */
> > 		if (!data_race(global_flag)) {
> > 			spin_lock(&fp->f_lock);
> > 			if (!smp_load_acquire(&global_flag)) {
> > 				do_something(fp);
> > 				spin_unlock(&fp->f_lock);
> > 				return;
> > 			}
> > 			spin_unlock(&fp->f_lock);
> > 		}
> > 		spin_lock(&global_lock);
> > 		/* global_lock held, thus global flag cannot be set. */
> > 		spin_lock(&fp->f_lock);
> > 		spin_unlock(&global_lock);
> > 		/*
> > 		 * global_flag might be set here, but begin_global()
> > 		 * will wait for ->f_lock to be released.
> > 		 */
> > 		do_something(fp);
> > 		spin_lock(&fp->f_lock);
> 
> spin_unlock.

Good eyes, fixed.

> > }
> > 
> > 	void begin_global(void)
> > 	{
> > 		int i;
> > 
> > 		spin_lock(&global_lock);
> > 		WRITE_ONCE(global_flag, true);
> > 		for (i = 0; i < nfoo; i++) {
> > 			/*
> > 			 * Wait for pre-existing local locks.  One at
> > 			 * a time to avoid lockdep limitations.
> > 			 */
> > 			spin_lock(&foo_array[i].f_lock);
> > 			spin_unlock(&foo_array[i].f_lock);
> > 		}
> > 	}
> > 
> > 	void end_global(void)
> > 	{
> > 		smp_store_release(&global_flag, false);
> > 		spin_unlock(&global_lock);
> > 	}
> > 
> > All code paths leading from the do_something_locked() function's first
> > read from global_flag acquire a lock, so endless load fusing cannot
> > happen.
> > 
> > If the value read from global_flag is false, then global_flag is
> > rechecked while holding ->f_lock, which, if global_flag is still false,
> > prevents begin_global() from completing.  It is therefore safe to invoke
> > do_something().
> > 
> > Otherwise, if either value read from global_flag is true, then after
> > global_lock is acquired global_flag must be false.  The acquisition of
> > ->f_lock will prevent any call to begin_global() from returning, which
> > means that it is safe to release global_lock and invoke do_something().
> > 
> > For this to work, only those foo structures in foo_array[] may be passed
> > to do_something_locked().  The reason for this is that the synchronization
> > with begin_global() relies on momentarily holding the lock of each and
> > every foo structure.
> 
> This doesn't mention the reason for the acquire-release
> synchronization of global_flag.  It's needed because work done between
> begin_global() and end_global() can affect a foo structure without
> holding its private f_lock member, and we want all such work to be
> visible to other threads when they call do_something_locked() later.

Like this added paragraph at the end?

	The smp_load_acquire() and smp_store_release() are required
	because changes to a foo structure between calls to begin_global()
	and end_global() are carried out without holding that structure's
	->f_lock.  The smp_load_acquire() and smp_store_release()
	ensure that the next invocation of do_something() from the call
	to do_something_locked() that acquires that ->f_lock will see
	those changes.

							Thanx, Paul

* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 20:32             ` Paul E. McKenney
@ 2021-07-23 21:03               ` Alan Stern
  2021-07-23 22:29                 ` Paul E. McKenney
  0 siblings, 1 reply; 28+ messages in thread
From: Alan Stern @ 2021-07-23 21:03 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Manfred Spraul, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Fri, Jul 23, 2021 at 01:28:11PM -0700, Paul E. McKenney wrote:
> On Fri, Jul 23, 2021 at 02:11:38PM -0400, Alan Stern wrote:
> > In other words, if the second read races with the WRITE_ONCE, it needs to
> > get either the value before the write or the value after the write;
> > nothing else will do because it isn't a heuristic here.  Fair point.
> >
> > >  (If the value changes immediately after being read, the fact that
> > > ->f_lock is held prevents begin_global() from completing.)
> >
> > This seems like something worth explaining in the document.  That
> > "IMPORTANT" comment doesn't really get the full point across.
>
> How about this comment instead?
>
>       /* This works even if data_race() returns nonsense. */

That's somewhat better.


On Fri, Jul 23, 2021 at 01:32:48PM -0700, Paul E. McKenney wrote:
> On Fri, Jul 23, 2021 at 01:08:20PM -0400, Alan Stern wrote:
> > This doesn't mention the reason for the acquire-release
> > synchronization of global_flag.  It's needed because work done between
> > begin_global() and end_global() can affect a foo structure without
> > holding its private f_lock member, and we want all such work to be
> > visible to other threads when they call do_something_locked() later.
> 
> Like this added paragraph at the end?
> 
> 	The smp_load_acquire() and smp_store_release() are required
> 	because changes to a foo structure between calls to begin_global()
> 	and end_global() are carried out without holding that structure's
> 	->f_lock.  The smp_load_acquire() and smp_store_release()
> 	ensure that the next invocation of do_something() from the call
> 	to do_something_locked() that acquires that ->f_lock will see
> 	those changes.

I'd shorten the last sentence:

	The smp_load_acquire() and smp_store_release() ensure that the
	next invocation of do_something() from do_something_locked()
	will see those changes.

Alan
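
The acquire/release pairing described in that paragraph can be
demonstrated in miniature with C11 atomics.  This is an editor's
sketch, not part of the patch: a release store publishes a plain write
made beforehand, and an acquire load of the same flag guarantees the
reader sees that write, just as end_global() publishes work done under
the global lock to a later do_something_locked().

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

static int f_data;		/* plain shared data, like foo::f_data */
static atomic_bool done;	/* plays the role of global_flag */

static void *global_worker(void *arg)
{
	(void)arg;
	f_data = 42;	/* plain write, no per-structure lock held... */
	/* ...published by the release store, like end_global(). */
	atomic_store_explicit(&done, true, memory_order_release);
	return NULL;
}

static int wait_and_read(void)
{
	/*
	 * The acquire load pairs with the release store, like the
	 * smp_load_acquire() in do_something_locked(): once it
	 * observes the flag, the earlier plain write is visible too.
	 */
	while (!atomic_load_explicit(&done, memory_order_acquire))
		;
	return f_data;
}
```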

* Re: [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-23 21:03               ` Alan Stern
@ 2021-07-23 22:29                 ` Paul E. McKenney
  0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-23 22:29 UTC (permalink / raw)
  To: Alan Stern
  Cc: Manfred Spraul, linux-kernel, linux-arch, kernel-team, mingo,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Fri, Jul 23, 2021 at 05:03:47PM -0400, Alan Stern wrote:
> On Fri, Jul 23, 2021 at 01:28:11PM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 23, 2021 at 02:11:38PM -0400, Alan Stern wrote:
> > > In other words, if the second read races with the WRITE_ONCE, it needs to
> > > get either the value before the write or the value after the write;
> > > nothing else will do because it isn't a heuristic here.  Fair point.
> > >
> > > >  (If the value changes immediately after being read, the fact that
> > > > ->f_lock is held prevents begin_global() from completing.)
> > >
> > > This seems like something worth explaining in the document.  That
> > > "IMPORTANT" comment doesn't really get the full point across.
> >
> > How about this comment instead?
> >
> >       /* This works even if data_race() returns nonsense. */
> 
> That's somewhat better.
> 
> 
> On Fri, Jul 23, 2021 at 01:32:48PM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 23, 2021 at 01:08:20PM -0400, Alan Stern wrote:
> > > This doesn't mention the reason for the acquire-release
> > > synchronization of global_flag.  It's needed because work done between
> > > begin_global() and end_global() can affect a foo structure without
> > > holding its private f_lock member, and we want all such work to be
> > > visible to other threads when they call do_something_locked() later.
> > 
> > Like this added paragraph at the end?
> > 
> > 	The smp_load_acquire() and smp_store_release() are required
> > 	because changes to a foo structure between calls to begin_global()
> > 	and end_global() are carried out without holding that structure's
> > 	->f_lock.  The smp_load_acquire() and smp_store_release()
> > 	ensure that the next invocation of do_something() from the call
> > 	to do_something_locked() that acquires that ->f_lock will see
> > 	those changes.
> 
> I'd shorten the last sentence:
> 
> 	The smp_load_acquire() and smp_store_release() ensure that the
> 	next invocation of do_something() from do_something_locked()
> 	will see those changes.

Sold!  ;-)

							Thanx, Paul

* [PATCH v2 memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
  2021-07-21 21:10 ` [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads Paul E. McKenney
  2021-07-23  2:08   ` Alan Stern
@ 2021-07-28 17:40   ` Paul E. McKenney
  1 sibling, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2021-07-28 17:40 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Manfred Spraul

This commit adds example code for heuristic lockless reads, based loosely
on the sem_lock() and sem_unlock() functions.

[ paulmck: Apply Alan Stern and Manfred Spraul feedback. ]

Reported-by: Manfred Spraul <manfred@colorfullife.com>
[ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index 58bff26198767..d96fe20ed582a 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -319,6 +319,99 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
 concurrent lockless write.
 
 
+Lock-Protected Writes With Heuristic Lockless Reads
+---------------------------------------------------
+
+For another example, suppose that the code can normally make use of
+a per-data-structure lock, but there are times when a global lock
+is required.  These times are indicated via a global flag.  The code
+might look as follows, and is based loosely on nf_conntrack_lock(),
+nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
+
+	bool global_flag;
+	DEFINE_SPINLOCK(global_lock);
+	struct foo {
+		spinlock_t f_lock;
+		int f_data;
+	};
+
+	/* All foo structures are in the following array. */
+	int nfoo;
+	struct foo *foo_array;
+
+	void do_something_locked(struct foo *fp)
+	{
+		/* This works even if data_race() returns nonsense. */
+		if (!data_race(global_flag)) {
+			spin_lock(&fp->f_lock);
+			if (!smp_load_acquire(&global_flag)) {
+				do_something(fp);
+				spin_unlock(&fp->f_lock);
+				return;
+			}
+			spin_unlock(&fp->f_lock);
+		}
+		spin_lock(&global_lock);
+		/* global_lock held, thus global flag cannot be set. */
+		spin_lock(&fp->f_lock);
+		spin_unlock(&global_lock);
+		/*
+		 * global_flag might be set here, but begin_global()
+		 * will wait for ->f_lock to be released.
+		 */
+		do_something(fp);
+		spin_unlock(&fp->f_lock);
+	}
+
+	void begin_global(void)
+	{
+		int i;
+
+		spin_lock(&global_lock);
+		WRITE_ONCE(global_flag, true);
+		for (i = 0; i < nfoo; i++) {
+			/*
+			 * Wait for pre-existing local locks.  One at
+			 * a time to avoid lockdep limitations.
+			 */
+			spin_lock(&foo_array[i].f_lock);
+			spin_unlock(&foo_array[i].f_lock);
+		}
+	}
+
+	void end_global(void)
+	{
+		smp_store_release(&global_flag, false);
+		spin_unlock(&global_lock);
+	}
+
+All code paths leading from the do_something_locked() function's first
+read from global_flag acquire a lock, so endless load fusing cannot
+happen.
+
+If the value read from global_flag is false, then global_flag is
+rechecked while holding ->f_lock, which, if global_flag is still false,
+prevents begin_global() from completing.  It is therefore safe to invoke
+do_something().
+
+Otherwise, if either value read from global_flag is true, then after
+global_lock is acquired global_flag must be false.  The acquisition of
+->f_lock will prevent any call to begin_global() from returning, which
+means that it is safe to release global_lock and invoke do_something().
+
+For this to work, only those foo structures in foo_array[] may be passed
+to do_something_locked().  The reason for this is that the synchronization
+with begin_global() relies on momentarily holding the lock of each and
+every foo structure.
+
+The smp_load_acquire() and smp_store_release() are required because
+changes to a foo structure between calls to begin_global() and
+end_global() are carried out without holding that structure's ->f_lock.
+The smp_load_acquire() and smp_store_release() ensure that the next
+invocation of do_something() from do_something_locked() will see those
+changes.
+
+
 Lockless Reads and Writes
 -------------------------
 

end of thread, other threads:[~2021-07-28 17:40 UTC | newest]

Thread overview: 28+ messages
2021-07-21 21:07 [PATCH kcsan 0/8] KCSAN updates for v5.15 Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 1/8] kcsan: Improve some Kconfig comments Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 2/8] kcsan: Remove CONFIG_KCSAN_DEBUG Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 3/8] kcsan: Introduce CONFIG_KCSAN_STRICT Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 4/8] kcsan: Reduce get_ctx() uses in kcsan_found_watchpoint() Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 5/8] kcsan: Rework atomic.h into permissive.h Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 6/8] kcsan: Print if strict or non-strict during init Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 7/8] kcsan: permissive: Ignore data-racy 1-bit value changes Paul E. McKenney
2021-07-21 21:08 ` [PATCH kcsan 8/8] kcsan: Make strict mode imply interruptible watchers Paul E. McKenney
2021-07-21 21:10 ` [PATCH memory-model 1/4] tools/memory-model: Make read_foo_diagnostic() more clearly diagnostic Paul E. McKenney
2021-07-21 21:10 ` [PATCH memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads Paul E. McKenney
2021-07-23  2:08   ` Alan Stern
2021-07-23  6:52     ` Manfred Spraul
2021-07-23 13:05       ` Alan Stern
2021-07-23 13:57         ` Manfred Spraul
2021-07-23 16:30         ` Paul E. McKenney
2021-07-23 17:08           ` Alan Stern
2021-07-23 20:32             ` Paul E. McKenney
2021-07-23 21:03               ` Alan Stern
2021-07-23 22:29                 ` Paul E. McKenney
2021-07-23 16:24     ` Paul E. McKenney
2021-07-23 16:59       ` Alan Stern
2021-07-23 17:30         ` Paul E. McKenney
2021-07-23 18:11           ` Alan Stern
2021-07-23 20:28             ` Paul E. McKenney
2021-07-28 17:40   ` [PATCH v2 " Paul E. McKenney
2021-07-21 21:10 ` [PATCH memory-model 3/4] tools/memory-model: Heuristics using data_race() must handle all values Paul E. McKenney
2021-07-21 21:10 ` [PATCH memory-model 4/4] tools/memory-model: Document data_race(READ_ONCE()) Paul E. McKenney
