linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 rcu 01/11] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 02/11] srcu: Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() Paul E. McKenney
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Frederic Weisbecker, Thomas Gleixner, John Ogness, Petr Mladek

NMI-safe variants of srcu_read_lock() and srcu_read_unlock() are needed
by printk(), and providing them entails read-modify-write atomic
operations on many architectures.  This commit prepares Tree SRCU for
this change by making both ->srcu_lock_count and ->srcu_unlock_count
be of type atomic_long_t.
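
As an illustrative sketch (not part of the patch itself), the
conversion leaves the fast path's plain per-CPU increment intact by
operating on the atomic_long_t's embedded counter, while allowing a
later NMI-safe path to perform an atomic RMW on the same memory:

    /* Names as in the Tree SRCU code; sketch only. */
    idx = READ_ONCE(ssp->srcu_idx) & 0x1;

    /* Fast path (process/IRQ context): plain per-CPU increment,
     * now applied to the atomic_long_t's ->counter field. */
    this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);

    /* NMI-safe path added by a later patch in this series:
     * atomic RMW on the very same per-CPU counter. */
    atomic_long_inc(&raw_cpu_ptr(ssp->sda)->srcu_lock_count[idx]);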

[ paulmck: Apply feedback from John Ogness. ]

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
---
 include/linux/srcutree.h |  4 ++--
 kernel/rcu/srcutree.c    | 24 ++++++++++++------------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index e3014319d1ade..0c4eca07d78d5 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -23,8 +23,8 @@ struct srcu_struct;
  */
 struct srcu_data {
 	/* Read-side state. */
-	unsigned long srcu_lock_count[2];	/* Locks per CPU. */
-	unsigned long srcu_unlock_count[2];	/* Unlocks per CPU. */
+	atomic_long_t srcu_lock_count[2];	/* Locks per CPU. */
+	atomic_long_t srcu_unlock_count[2];	/* Unlocks per CPU. */
 
 	/* Update-side state. */
 	spinlock_t __private lock ____cacheline_internodealigned_in_smp;
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 1c304fec89c02..25e9458da6a26 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -417,7 +417,7 @@ static unsigned long srcu_readers_lock_idx(struct srcu_struct *ssp, int idx)
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-		sum += READ_ONCE(cpuc->srcu_lock_count[idx]);
+		sum += atomic_long_read(&cpuc->srcu_lock_count[idx]);
 	}
 	return sum;
 }
@@ -434,7 +434,7 @@ static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx)
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-		sum += READ_ONCE(cpuc->srcu_unlock_count[idx]);
+		sum += atomic_long_read(&cpuc->srcu_unlock_count[idx]);
 	}
 	return sum;
 }
@@ -503,10 +503,10 @@ static bool srcu_readers_active(struct srcu_struct *ssp)
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-		sum += READ_ONCE(cpuc->srcu_lock_count[0]);
-		sum += READ_ONCE(cpuc->srcu_lock_count[1]);
-		sum -= READ_ONCE(cpuc->srcu_unlock_count[0]);
-		sum -= READ_ONCE(cpuc->srcu_unlock_count[1]);
+		sum += atomic_long_read(&cpuc->srcu_lock_count[0]);
+		sum += atomic_long_read(&cpuc->srcu_lock_count[1]);
+		sum -= atomic_long_read(&cpuc->srcu_unlock_count[0]);
+		sum -= atomic_long_read(&cpuc->srcu_unlock_count[1]);
 	}
 	return sum;
 }
@@ -636,7 +636,7 @@ int __srcu_read_lock(struct srcu_struct *ssp)
 	int idx;
 
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
-	this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
+	this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
 	return idx;
 }
@@ -650,7 +650,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
 void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
-	this_cpu_inc(ssp->sda->srcu_unlock_count[idx]);
+	this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -1687,8 +1687,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
 			struct srcu_data *sdp;
 
 			sdp = per_cpu_ptr(ssp->sda, cpu);
-			u0 = data_race(sdp->srcu_unlock_count[!idx]);
-			u1 = data_race(sdp->srcu_unlock_count[idx]);
+			u0 = data_race(atomic_long_read(&sdp->srcu_unlock_count[!idx]));
+			u1 = data_race(atomic_long_read(&sdp->srcu_unlock_count[idx]));
 
 			/*
 			 * Make sure that a lock is always counted if the corresponding
@@ -1696,8 +1696,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
 			 */
 			smp_rmb();
 
-			l0 = data_race(sdp->srcu_lock_count[!idx]);
-			l1 = data_race(sdp->srcu_lock_count[idx]);
+			l0 = data_race(atomic_long_read(&sdp->srcu_lock_count[!idx]));
+			l1 = data_race(atomic_long_read(&sdp->srcu_lock_count[idx]));
 
 			c0 = l0 - u0;
 			c1 = l1 - u1;
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 02/11] srcu: Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe()
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 01/11] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 03/11] srcu: Check for consistent per-CPU per-srcu_struct NMI safety Paul E. McKenney
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Randy Dunlap, Frederic Weisbecker, Thomas Gleixner, John Ogness,
	Petr Mladek

On strict load-store architectures, the use of this_cpu_inc() by
srcu_read_lock() and srcu_read_unlock() is not NMI-safe in TREE SRCU.
To see this suppose that an NMI arrives in the middle of srcu_read_lock(),
just after it has read ->srcu_lock_count, but before it has written
the incremented value back to memory.  If that NMI handler also does
srcu_read_lock() and srcu_read_unlock() on that same srcu_struct structure,
then upon return from that NMI handler, the interrupted srcu_read_lock()
will overwrite the NMI handler's update to ->srcu_lock_count, but
leave unchanged the NMI handler's update by srcu_read_unlock() to
->srcu_unlock_count.

This can result in a too-short SRCU grace period, which can in turn
result in arbitrary memory corruption.

If the NMI handler instead interrupts the srcu_read_unlock(), this
can result in eternal SRCU grace periods, which is not much better.
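
For concreteness, here is a rough sketch of the lost update on a
strict load-store architecture, where this_cpu_inc() compiles to
separate load, add, and store instructions:

    tmp = sdp->srcu_lock_count[idx];      /* loads, say, 0        */

    /* NMI arrives; its handler runs a complete reader:
     *   srcu_read_lock():   ->srcu_lock_count[idx]   becomes 1
     *   srcu_read_unlock(): ->srcu_unlock_count[idx] becomes 1  */

    sdp->srcu_lock_count[idx] = tmp + 1;  /* stores 1, overwriting
                                           * the NMI handler's
                                           * lock-count increment */

The lock and unlock counts now balance even though the interrupted
reader is still inside its critical section, which is exactly what
permits the too-short grace period described above.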

This commit therefore creates a pair of new srcu_read_lock_nmisafe()
and srcu_read_unlock_nmisafe() functions, which allow SRCU readers in
NMI handlers as well as in process and IRQ context.  It is bad practice
to mix the existing and the new _nmisafe() primitives on the same
srcu_struct structure.  Use one set or the other, not both.

Just to underline that "bad practice" point, using srcu_read_lock() at
process level and srcu_read_lock_nmisafe() in your NMI handler will not,
repeat NOT, work.  If you do not immediately understand why this is the
case, please review the earlier paragraphs in this commit log.
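
For illustration, a hypothetical reader that follows this rule, using
the _nmisafe() API in every context that touches the same srcu_struct
(my_srcu and my_nmi_handler() are made-up names, not from this patch):

    DEFINE_STATIC_SRCU(my_srcu);

    /* NMI context and process context alike must use _nmisafe(): */
    void my_nmi_handler(void)
    {
        int idx;

        idx = srcu_read_lock_nmisafe(&my_srcu);
        /* ... access my_srcu-protected data ... */
        srcu_read_unlock_nmisafe(&my_srcu, idx);
    }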

[ paulmck: Apply kernel test robot feedback. ]
[ paulmck: Apply feedback from Randy Dunlap. ]
[ paulmck: Apply feedback from John Ogness. ]

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
---
 arch/Kconfig             |  3 +++
 include/linux/srcu.h     | 39 ++++++++++++++++++++++++++++++++++++
 include/linux/srcutiny.h | 11 ++++++++++
 include/linux/srcutree.h |  3 +++
 kernel/rcu/Kconfig       |  3 +++
 kernel/rcu/rcutorture.c  | 11 ++++++++--
 kernel/rcu/srcutree.c    | 43 ++++++++++++++++++++++++++++++++++++----
 7 files changed, 107 insertions(+), 6 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 8f138e580d1ae..6b95244c3057d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -468,6 +468,9 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
+config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
+	bool
+
 config HAVE_ALIGNED_STRUCT_PAGE
 	bool
 	help
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 01226e4d960a0..2cc8321c0c86a 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -52,6 +52,8 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #else
 /* Dummy definition for things like notifiers.  Actual use gets link error. */
 struct srcu_struct { };
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 #endif
 
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
@@ -166,6 +168,25 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 	return retval;
 }
 
+/**
+ * srcu_read_lock_nmisafe - register a new reader for an SRCU-protected structure.
+ * @ssp: srcu_struct in which to register the new reader.
+ *
+ * Enter an SRCU read-side critical section, but in an NMI-safe manner.
+ * See srcu_read_lock() for more information.
+ */
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+{
+	int retval;
+
+	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
+		retval = __srcu_read_lock_nmisafe(ssp);
+	else
+		retval = __srcu_read_lock(ssp);
+	rcu_lock_acquire(&(ssp)->dep_map);
+	return retval;
+}
+
 /* Used by tracing, cannot be traced and cannot invoke lockdep. */
 static inline notrace int
 srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
@@ -191,6 +212,24 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 	__srcu_read_unlock(ssp, idx);
 }
 
+/**
+ * srcu_read_unlock_nmisafe - unregister an old reader from an SRCU-protected structure.
+ * @ssp: srcu_struct in which to unregister the old reader.
+ * @idx: return value from corresponding srcu_read_lock().
+ *
+ * Exit an SRCU read-side critical section, but in an NMI-safe manner.
+ */
+static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+	__releases(ssp)
+{
+	WARN_ON_ONCE(idx & ~0x1);
+	rcu_lock_release(&(ssp)->dep_map);
+	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
+		__srcu_read_unlock_nmisafe(ssp, idx);
+	else
+		__srcu_read_unlock(ssp, idx);
+}
+
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
 srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 5aa5e0faf6a12..278331bd77660 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -90,4 +90,15 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
 		 data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
 
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+	BUG();
+	return 0;
+}
+
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+	BUG();
+}
+
 #endif
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 0c4eca07d78d5..d45dd507f4a56 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -154,4 +154,7 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+
 #endif
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index d471d22a5e21b..f53ad63b2bc63 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -72,6 +72,9 @@ config TREE_SRCU
 	help
 	  This option selects the full-fledged version of SRCU.
 
+config NEED_SRCU_NMI_SAFE
+	def_bool HAVE_NMI && !ARCH_HAS_NMI_SAFE_THIS_CPU_OPS && !TINY_SRCU
+
 config TASKS_RCU_GENERIC
 	def_bool TASKS_RCU || TASKS_RUDE_RCU || TASKS_TRACE_RCU
 	select SRCU
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 503c2aa845a4a..b4c74ce102256 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -615,10 +615,14 @@ static struct rcu_torture_ops rcu_busted_ops = {
 DEFINE_STATIC_SRCU(srcu_ctl);
 static struct srcu_struct srcu_ctld;
 static struct srcu_struct *srcu_ctlp = &srcu_ctl;
+static struct rcu_torture_ops srcud_ops;
 
 static int srcu_torture_read_lock(void) __acquires(srcu_ctlp)
 {
-	return srcu_read_lock(srcu_ctlp);
+	if (cur_ops == &srcud_ops)
+		return srcu_read_lock_nmisafe(srcu_ctlp);
+	else
+		return srcu_read_lock(srcu_ctlp);
 }
 
 static void
@@ -642,7 +646,10 @@ srcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
 
 static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp)
 {
-	srcu_read_unlock(srcu_ctlp, idx);
+	if (cur_ops == &srcud_ops)
+		srcu_read_unlock_nmisafe(srcu_ctlp, idx);
+	else
+		srcu_read_unlock(srcu_ctlp, idx);
 }
 
 static int torture_srcu_read_lock_held(void)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 25e9458da6a26..32a94b254d29f 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -654,6 +654,41 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
+#ifdef CONFIG_NEED_SRCU_NMI_SAFE
+
+/*
+ * Counts the new reader in the appropriate per-CPU element of the
+ * srcu_struct, but in an NMI-safe manner using RMW atomics.
+ * Returns an index that must be passed to the matching srcu_read_unlock().
+ */
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+	int idx;
+	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
+
+	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
+	atomic_long_inc(&sdp->srcu_lock_count[idx]);
+	smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
+	return idx;
+}
+EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
+
+/*
+ * Removes the count for the old reader from the appropriate per-CPU
+ * element of the srcu_struct.  Note that this may well be a different
+ * CPU than that which was incremented by the corresponding srcu_read_lock().
+ */
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
+
+	smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
+	atomic_long_inc(&sdp->srcu_unlock_count[idx]);
+}
+EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
+
+#endif // CONFIG_NEED_SRCU_NMI_SAFE
+
 /*
  * Start an SRCU grace period.
  */
@@ -1090,7 +1125,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	int ss_state;
 
 	check_init_srcu_struct(ssp);
-	idx = srcu_read_lock(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1123,7 +1158,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp_mynode, s);
-	srcu_read_unlock(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 	return s;
 }
 
@@ -1427,13 +1462,13 @@ void srcu_barrier(struct srcu_struct *ssp)
 	/* Initial count prevents reaching zero until all CBs are posted. */
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
-	idx = srcu_read_lock(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
 	else
 		for_each_possible_cpu(cpu)
 			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-	srcu_read_unlock(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 
 	/* Remove the initial count, at which point reaching zero can happen. */
 	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
-- 
2.31.1.189.g2e36527f23



* [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2
@ 2022-10-19 22:58 Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 01/11] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic Paul E. McKenney
                   ` (10 more replies)
  0 siblings, 11 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu; +Cc: linux-kernel, kernel-team, rostedt

Hello!

This series provides the second version of an NMI-safe SRCU reader API in
the guise of srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe().
A given srcu_struct structure must use either the traditional
srcu_read_lock() and srcu_read_unlock() API or the new _nmisafe() API:
Mixing and matching is not permitted.  So much so that kernels built
with CONFIG_PROVE_RCU=y will complain if you try it.

The reason for this restriction is that I have yet to find a use case
that is not an accident waiting to happen.  And if free intermixing
were permitted, it is pretty much a given that someone somewhere will
get confused and use srcu_read_lock_nmisafe() within NMI handlers and
srcu_read_lock() elsewhere, which will not (repeat, NOT) provide NMI
safety.
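
For concreteness, the forbidden mixture looks like this (hypothetical
sketch; my_srcu is a made-up name):

    /* Process context uses the plain API... */
    idx = srcu_read_lock(&my_srcu);

    /* ...while an NMI handler uses _nmisafe() on the SAME
     * srcu_struct.  This is the broken pattern, and PROVE_RCU
     * kernels will emit "Mixed NMI-safe readers" warnings. */
    idx = srcu_read_lock_nmisafe(&my_srcu);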

The series is as follows:

1.	Convert ->srcu_lock_count and ->srcu_unlock_count to atomic.

2.	Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe().

3.	Check for consistent per-CPU per-srcu_struct NMI safety.

4.	Check for consistent global per-srcu_struct NMI safety.

5.	arch/x86: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option.

6.	arch/arm64: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option.

7.	arch/loongarch: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option.

8.	arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option.

9.	Warn when NMI-unsafe API is used in NMI, courtesy of Frederic
	Weisbecker.

10.	Explain the reason behind the read side critical section on GP
	start, courtesy of Frederic Weisbecker.

11.	Debug NMI safety even on archs that don't require it, courtesy
	of Frederic Weisbecker.

Changes since v2:

1.	Added ability for architectures having implicitly NMI-safe
	SRCU readers to warn about uses that cause trouble for other
	architectures.

Changes since v1:

1.	Added enabling patches for arm64, loongarch, s390, and x86.
	These have what appear to me to be NMI-safe this_cpu_inc()
	implementations.

2.	Fix a build error on !SMP kernels built without SRCU.

3.	Fix a build error on !SMP kernels.

						Thanx, Paul

------------------------------------------------------------------------

 b/arch/Kconfig             |    3 +
 b/arch/arm64/Kconfig       |    1 
 b/arch/loongarch/Kconfig   |    1 
 b/arch/s390/Kconfig        |    1 
 b/arch/x86/Kconfig         |    1 
 b/include/linux/srcu.h     |   39 ++++++++++++++
 b/include/linux/srcutiny.h |   11 ++++
 b/include/linux/srcutree.h |    4 -
 b/kernel/rcu/Kconfig       |    3 +
 b/kernel/rcu/rcutorture.c  |   11 +++-
 b/kernel/rcu/srcutree.c    |   24 ++++-----
 include/linux/srcu.h       |   48 +++++++++++++-----
 include/linux/srcutiny.h   |   16 ------
 include/linux/srcutree.h   |   19 +++----
 kernel/rcu/srcutree.c      |  118 +++++++++++++++++++++++++++++++++++----------
 15 files changed, 224 insertions(+), 76 deletions(-)


* [PATCH v3 rcu 03/11] srcu: Check for consistent per-CPU per-srcu_struct NMI safety
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 01/11] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 02/11] srcu: Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 04/11] srcu: Check for consistent global " Paul E. McKenney
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Frederic Weisbecker, Thomas Gleixner, John Ogness, Petr Mladek

This commit adds runtime checks to verify that a given srcu_struct uses
consistent NMI-safe (or not) read-side primitives on a per-CPU basis.
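
As a reading aid derived from the diff below, each srcu_data structure
records which flavor of reader this CPU has seen, encoded so that a
mismatch is a simple comparison:

    #define SRCU_NMI_UNKNOWN     0x0  /* no reader observed yet      */
    #define SRCU_NMI_NMI_UNSAFE  0x1  /* 1 << false: plain API seen  */
    #define SRCU_NMI_NMI_SAFE    0x2  /* 1 << true: _nmisafe() seen  */

srcu_check_nmi_safety() computes 1 << nmi_safe, records it on first
use, and WARNs if this CPU previously recorded the other value.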

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
---
 include/linux/srcu.h     |  4 ++--
 include/linux/srcutiny.h |  4 ++--
 include/linux/srcutree.h |  9 +++++++--
 kernel/rcu/srcutree.c    | 38 ++++++++++++++++++++++++++++++++------
 4 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 2cc8321c0c86a..565f60d574847 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -180,7 +180,7 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
 	int retval;
 
 	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		retval = __srcu_read_lock_nmisafe(ssp);
+		retval = __srcu_read_lock_nmisafe(ssp, true);
 	else
 		retval = __srcu_read_lock(ssp);
 	rcu_lock_acquire(&(ssp)->dep_map);
@@ -225,7 +225,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 	WARN_ON_ONCE(idx & ~0x1);
 	rcu_lock_release(&(ssp)->dep_map);
 	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		__srcu_read_unlock_nmisafe(ssp, idx);
+		__srcu_read_unlock_nmisafe(ssp, idx, true);
 	else
 		__srcu_read_unlock(ssp, idx);
 }
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 278331bd77660..f890301f123df 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -90,13 +90,13 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
 		 data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
 
-static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 {
 	BUG();
 	return 0;
 }
 
-static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
 {
 	BUG();
 }
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index d45dd507f4a56..35ffdedf86ccb 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -25,6 +25,7 @@ struct srcu_data {
 	/* Read-side state. */
 	atomic_long_t srcu_lock_count[2];	/* Locks per CPU. */
 	atomic_long_t srcu_unlock_count[2];	/* Unlocks per CPU. */
+	int srcu_nmi_safety;			/* NMI-safe srcu_struct structure? */
 
 	/* Update-side state. */
 	spinlock_t __private lock ____cacheline_internodealigned_in_smp;
@@ -42,6 +43,10 @@ struct srcu_data {
 	struct srcu_struct *ssp;
 };
 
+#define SRCU_NMI_UNKNOWN	0x0
+#define SRCU_NMI_NMI_UNSAFE	0x1
+#define SRCU_NMI_NMI_SAFE	0x2
+
 /*
  * Node in SRCU combining tree, similar in function to rcu_data.
  */
@@ -154,7 +159,7 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 
 #endif
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 32a94b254d29f..30575864fcfa3 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -626,6 +626,26 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
+/*
+ * Check for consistent NMI safety.
+ */
+static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
+{
+	int nmi_safe_mask = 1 << nmi_safe;
+	int old_nmi_safe_mask;
+	struct srcu_data *sdp;
+
+	if (!IS_ENABLED(CONFIG_PROVE_RCU))
+		return;
+	sdp = raw_cpu_ptr(ssp->sda);
+	old_nmi_safe_mask = READ_ONCE(sdp->srcu_nmi_safety);
+	if (!old_nmi_safe_mask) {
+		WRITE_ONCE(sdp->srcu_nmi_safety, nmi_safe_mask);
+		return;
+	}
+	WARN_ONCE(old_nmi_safe_mask != nmi_safe_mask, "CPU %d old state %d new state %d\n", sdp->cpu, old_nmi_safe_mask, nmi_safe_mask);
+}
+
 /*
  * Counts the new reader in the appropriate per-CPU element of the
  * srcu_struct.
@@ -638,6 +658,7 @@ int __srcu_read_lock(struct srcu_struct *ssp)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
+	srcu_check_nmi_safety(ssp, false);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock);
@@ -651,6 +672,7 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
 	this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
+	srcu_check_nmi_safety(ssp, false);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -661,7 +683,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
  * srcu_struct, but in an NMI-safe manner using RMW atomics.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 {
 	int idx;
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
@@ -669,6 +691,8 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	atomic_long_inc(&sdp->srcu_lock_count[idx]);
 	smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
+	if (chknmisafe)
+		srcu_check_nmi_safety(ssp, true);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
@@ -678,12 +702,14 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
  * element of the srcu_struct.  Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
  */
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
 {
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
 
 	smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
 	atomic_long_inc(&sdp->srcu_unlock_count[idx]);
+	if (chknmisafe)
+		srcu_check_nmi_safety(ssp, true);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
 
@@ -1125,7 +1151,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	int ss_state;
 
 	check_init_srcu_struct(ssp);
-	idx = __srcu_read_lock_nmisafe(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp, false);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1158,7 +1184,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp_mynode, s);
-	__srcu_read_unlock_nmisafe(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx, false);
 	return s;
 }
 
@@ -1462,13 +1488,13 @@ void srcu_barrier(struct srcu_struct *ssp)
 	/* Initial count prevents reaching zero until all CBs are posted. */
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
-	idx = __srcu_read_lock_nmisafe(ssp);
+	idx = __srcu_read_lock_nmisafe(ssp, false);
 	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
 	else
 		for_each_possible_cpu(cpu)
 			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-	__srcu_read_unlock_nmisafe(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx, false);
 
 	/* Remove the initial count, at which point reaching zero can happen. */
 	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 04/11] srcu: Check for consistent global per-srcu_struct NMI safety
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (2 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 03/11] srcu: Check for consistent per-CPU per-srcu_struct NMI safety Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 05/11] arch/x86: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option Paul E. McKenney
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Frederic Weisbecker, Thomas Gleixner, John Ogness, Petr Mladek

This commit adds runtime checks to verify that a given srcu_struct uses
consistent NMI-safe (or not) read-side primitives globally, but based
on the per-CPU data.  These global checks are made by the grace-period
code that must scan the srcu_data structures anyway, and are done only
in kernels built with CONFIG_PROVE_RCU=y.
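
The heart of the global check is a small bit trick; a worked
illustration using the per-CPU states defined by the previous patch:

    /* mask is the OR of every CPU's ->srcu_nmi_safety:     */
    /*   all-unsafe: mask == 0x1, 0x1 & (0x1 >> 1) == 0     */
    /*   all-safe:   mask == 0x2, 0x2 & (0x2 >> 1) == 0     */
    /*   mixed:      mask == 0x3, 0x3 & (0x3 >> 1) == 1     */
    WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && (mask & (mask >> 1)),
              "Mixed NMI-safe readers for srcu_struct at %ps.\n", ssp);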

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
---
 kernel/rcu/srcutree.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 30575864fcfa3..87ae6f5c1edae 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -429,13 +429,18 @@ static unsigned long srcu_readers_lock_idx(struct srcu_struct *ssp, int idx)
 static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx)
 {
 	int cpu;
+	unsigned long mask = 0;
 	unsigned long sum = 0;
 
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
 		sum += atomic_long_read(&cpuc->srcu_unlock_count[idx]);
+		if (IS_ENABLED(CONFIG_PROVE_RCU))
+			mask = mask | READ_ONCE(cpuc->srcu_nmi_safety);
 	}
+	WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && (mask & (mask >> 1)),
+		  "Mixed NMI-safe readers for srcu_struct at %ps.\n", ssp);
 	return sum;
 }
 
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 05/11] arch/x86: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (3 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 04/11] srcu: Check for consistent global " Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 06/11] arch/arm64: " Paul E. McKenney
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Frederic Weisbecker, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, John Ogness,
	Petr Mladek, x86

The x86 architecture uses an add-to-memory instruction to implement
this_cpu_add(), which is NMI safe.  This means that the old and
more-efficient srcu_read_lock() may be used in NMI context, without
the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/x86/Kconfig, which will
cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
srcu_read_lock() behavior.
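
As a conceptual contrast (pseudo-assembly, not the actual generated
code), the difference between a strict load-store sequence and the
x86 add-to-memory form:

    /* Strict load-store architecture: an NMI landing between the
     * load and the store can lose an increment. */
    load   r1, [counter]
    add    r1, r1, 1
    store  r1, [counter]

    /* x86: a single instruction, so an NMI observes the counter
     * either before or after the add, never in between. */
    addq   $1, %gs:counter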

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
Cc: <x86@kernel.org>
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6d1879ef933a2..bcb3190eaa266 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -81,6 +81,7 @@ config X86
 	select ARCH_HAS_KCOV			if X86_64
 	select ARCH_HAS_MEM_ENCRYPT
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
+	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_PTE_DEVMAP		if X86_64
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 06/11] arch/arm64: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (4 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 05/11] arch/x86: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 07/11] arch/loongarch: " Paul E. McKenney
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Neeraj Upadhyay, Frederic Weisbecker, Boqun Feng,
	Catalin Marinas, Will Deacon, Thomas Gleixner, John Ogness,
	Petr Mladek, linux-arm-kernel

The arm64 architecture uses either an LL/SC loop (old systems) or an LSE
stadd instruction (new systems) to implement this_cpu_add(), both of which
are NMI safe.  This means that the old and more-efficient srcu_read_lock()
may be used in NMI context, without the need for srcu_read_lock_nmisafe().
Therefore, add the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to
arch/arm64/Kconfig, which will cause NEED_SRCU_NMI_SAFE to be deselected,
thus preserving the current srcu_read_lock() behavior.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
Cc: <linux-arm-kernel@lists.infradead.org>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 505c8a1ccbe0c..099ee812f3f18 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -31,6 +31,7 @@ config ARM64
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_KEEPINITRD
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
+	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 07/11] arch/loongarch: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (5 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 06/11] arch/arm64: " Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 08/11] arch/s390: " Paul E. McKenney
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Neeraj Upadhyay, Frederic Weisbecker, Boqun Feng, Huacai Chen,
	WANG Xuerui, Thomas Gleixner, John Ogness, Petr Mladek,
	loongarch

The loongarch architecture uses the atomic read-modify-write amadd
instruction to implement this_cpu_add(), which is NMI safe.  This means
that the old and more-efficient srcu_read_lock() may be used in NMI
context, without the need for srcu_read_lock_nmisafe().  Therefore, add
the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/loongarch/Kconfig,
which will cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving
the current srcu_read_lock() behavior.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
Cc: <loongarch@lists.linux.dev>
---
 arch/loongarch/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 903096bd87f88..386adde2feffb 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -10,6 +10,7 @@ config LOONGARCH
 	select ARCH_ENABLE_MEMORY_HOTPLUG
 	select ARCH_ENABLE_MEMORY_HOTREMOVE
 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
+	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_INLINE_READ_LOCK if !PREEMPTION
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 08/11] arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (6 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 07/11] arch/loongarch: " Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-20  5:16   ` Christian Borntraeger
  2022-10-19 22:58 ` [PATCH v3 rcu 09/11] srcu: Warn when NMI-unsafe API is used in NMI Paul E. McKenney
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
	Neeraj Upadhyay, Frederic Weisbecker, Boqun Feng, Heiko Carstens,
	Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
	Sven Schnelle, Thomas Gleixner, John Ogness, Petr Mladek,
	linux-s390

The s390 architecture uses either a cmpxchg loop (old systems)
or the laa add-to-memory instruction (new systems) to implement
this_cpu_add(), both of which are NMI safe.  This means that the old
and more-efficient srcu_read_lock() may be used in NMI context, without
the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/arm64/Kconfig, which will
cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
srcu_read_lock() behavior.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/

Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Petr Mladek <pmladek@suse.com>
Cc: <linux-s390@vger.kernel.org>
---
 arch/s390/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 318fce77601d3..0acdfda332908 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -73,6 +73,7 @@ config S390
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEM_ENCRYPT
+	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SCALED_CPUTIME
 	select ARCH_HAS_SET_MEMORY
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 09/11] srcu: Warn when NMI-unsafe API is used in NMI
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (7 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 08/11] arch/s390: " Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 10/11] srcu: Explain the reason behind the read side critical section on GP start Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 11/11] srcu: Debug NMI safety even on archs that don't require it Paul E. McKenney
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Frederic Weisbecker,
	Paul E . McKenney

From: Frederic Weisbecker <frederic@kernel.org>

Using the NMI-unsafe reader API from within an NMI handler is very likely
to be buggy for three reasons:

1) NMIs aren't strictly re-entrant (a pending nested NMI will execute at
   the end of the current one), so a non-atomic increment might seem
   safe here.  However, breakpoints can still interrupt NMIs, and if a
   breakpoint callback has a reader on that same ssp, a racy increment
   can happen (see the sketch after this list).

2) If the only reader site for a given srcu_struct structure is in an
   NMI handler, then RCU should be used instead of SRCU.

3) Because of the previous reason (2), an srcu_struct structure having
   an SRCU read side critical section in an NMI handler is likely to
   have another one from a task context.

For all these reasons, warn if an NMI-unsafe reader API is used from an
NMI handler.
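
A rough sketch of the race in reason (1), assuming an architecture
where this_cpu_inc() is not a single instruction:

    /* NMI handler using the NMI-unsafe srcu_read_lock(): */
    tmp = sdp->srcu_lock_count[idx];      /* load                  */

    /* Breakpoint fires here; its callback runs a reader on the
     * same ssp, incrementing this same counter...                 */

    sdp->srcu_lock_count[idx] = tmp + 1;  /* ...and that increment
                                           * is now overwritten    */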

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/srcutree.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 87ae6f5c1edae..18bb696cff8ca 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -642,6 +642,8 @@ static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
 
 	if (!IS_ENABLED(CONFIG_PROVE_RCU))
 		return;
+	/* NMI-unsafe use in NMI is a bad sign */
+	WARN_ON_ONCE(!nmi_safe && in_nmi());
 	sdp = raw_cpu_ptr(ssp->sda);
 	old_nmi_safe_mask = READ_ONCE(sdp->srcu_nmi_safety);
 	if (!old_nmi_safe_mask) {
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 10/11] srcu: Explain the reason behind the read side critical section on GP start
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (8 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 09/11] srcu: Warn when NMI-unsafe API is used in NMI Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  2022-10-19 22:58 ` [PATCH v3 rcu 11/11] srcu: Debug NMI safety even on archs that don't require it Paul E. McKenney
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Frederic Weisbecker,
	Paul E . McKenney

From: Frederic Weisbecker <frederic@kernel.org>

Explain the need to protect against concurrent updaters that might
overflow the grace-period (GP) counter behind the current update.
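
A hedged sketch of the hazard (my summary, not text from the patch):
grace-period cookies are snapshots of ->srcu_gp_seq compared with
wraparound-sensitive arithmetic, roughly:

    s = rcu_seq_snap(&ssp->srcu_gp_seq);
    /* ...potentially many grace periods elapse here... */
    if (ULONG_CMP_GE(READ_ONCE(sdp->srcu_gp_seq_needed), s))
        /* stale answer if the counter wrapped in the interim */

Holding an SRCU read lock across this window prevents grace periods
from completing, and therefore prevents the sequence number from
wrapping behind the current update.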

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/srcutree.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 18bb696cff8ca..272830a87e566 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1158,6 +1158,11 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	int ss_state;
 
 	check_init_srcu_struct(ssp);
+	/*
+	 * While starting a new grace period, make sure we are in an
+	 * SRCU read-side critical section so that the grace-period
+	 * sequence number cannot wrap around in the meantime.
+	 */
 	idx = __srcu_read_lock_nmisafe(ssp, false);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
-- 
2.31.1.189.g2e36527f23



* [PATCH v3 rcu 11/11] srcu: Debug NMI safety even on archs that don't require it
  2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
                   ` (9 preceding siblings ...)
  2022-10-19 22:58 ` [PATCH v3 rcu 10/11] srcu: Explain the reason behind the read side critical section on GP start Paul E. McKenney
@ 2022-10-19 22:58 ` Paul E. McKenney
  10 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:58 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, rostedt, Frederic Weisbecker,
	Paul E . McKenney

From: Frederic Weisbecker <frederic@kernel.org>

Currently the NMI safety debugging is only performed on architectures
that don't support NMI-safe this_cpu_inc().

Reorder the code so that other architectures like x86 also detect bad
uses.
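
In rough terms (derived from the diff below), the check moves from the
slow-path helpers, which NEED_SRCU_NMI_SAFE=n architectures never
call, into the always-built wrappers:

    /* Before: x86 and similar architectures skip the check.  */
    srcu_read_lock_nmisafe()
      -> __srcu_read_lock()                 /* no check run   */

    /* After: every architecture runs the check first.        */
    srcu_read_lock_nmisafe()
      -> srcu_check_nmi_safety(ssp, true)
      -> __srcu_read_lock_nmisafe()         /* may itself map
                                             * to __srcu_read_lock() */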

[ paulmck: Apply kernel test robot, Stephen Rothwell, and Zqiang feedback. ]

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/srcu.h     | 44 +++++++++++++++++++++++++++++++---------
 include/linux/srcutiny.h | 12 -----------
 include/linux/srcutree.h |  7 -------
 kernel/rcu/srcutree.c    | 25 +++++++++--------------
 4 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 565f60d574847..f0814ffca34bb 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -52,8 +52,6 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #else
 /* Dummy definition for things like notifiers.  Actual use gets link error. */
 struct srcu_struct { };
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 #endif
 
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
@@ -66,6 +64,20 @@ unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
 unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
 bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
 
+#ifdef CONFIG_NEED_SRCU_NMI_SAFE
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+#else
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+	return __srcu_read_lock(ssp);
+}
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+	__srcu_read_unlock(ssp, idx);
+}
+#endif /* CONFIG_NEED_SRCU_NMI_SAFE */
+
 #ifdef CONFIG_SRCU
 void srcu_init(void);
 #else /* #ifdef CONFIG_SRCU */
@@ -106,6 +118,18 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+#define SRCU_NMI_UNKNOWN	0x0
+#define SRCU_NMI_UNSAFE		0x1
+#define SRCU_NMI_SAFE		0x2
+
+#if defined(CONFIG_PROVE_RCU) && defined(CONFIG_TREE_SRCU)
+void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe);
+#else
+static inline void srcu_check_nmi_safety(struct srcu_struct *ssp,
+					 bool nmi_safe) { }
+#endif
+
+
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
  * @p: the pointer to fetch and protect for later dereferencing
@@ -163,6 +187,7 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 {
 	int retval;
 
+	srcu_check_nmi_safety(ssp, false);
 	retval = __srcu_read_lock(ssp);
 	rcu_lock_acquire(&(ssp)->dep_map);
 	return retval;
@@ -179,10 +204,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
 {
 	int retval;
 
-	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		retval = __srcu_read_lock_nmisafe(ssp, true);
-	else
-		retval = __srcu_read_lock(ssp);
+	srcu_check_nmi_safety(ssp, true);
+	retval = __srcu_read_lock_nmisafe(ssp);
 	rcu_lock_acquire(&(ssp)->dep_map);
 	return retval;
 }
@@ -193,6 +216,7 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
 {
 	int retval;
 
+	srcu_check_nmi_safety(ssp, false);
 	retval = __srcu_read_lock(ssp);
 	return retval;
 }
@@ -208,6 +232,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 	__releases(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
+	srcu_check_nmi_safety(ssp, false);
 	rcu_lock_release(&(ssp)->dep_map);
 	__srcu_read_unlock(ssp, idx);
 }
@@ -223,17 +248,16 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 	__releases(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
+	srcu_check_nmi_safety(ssp, true);
 	rcu_lock_release(&(ssp)->dep_map);
-	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		__srcu_read_unlock_nmisafe(ssp, idx, true);
-	else
-		__srcu_read_unlock(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 }
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
 srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 {
+	srcu_check_nmi_safety(ssp, false);
 	__srcu_read_unlock(ssp, idx);
 }
 
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index f890301f123df..f3a4d65b91efd 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -89,16 +89,4 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
 		 data_race(READ_ONCE(ssp->srcu_idx)),
 		 data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
-
-static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
-{
-	BUG();
-	return 0;
-}
-
-static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
-{
-	BUG();
-}
-
 #endif
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 35ffdedf86ccb..c689a81752c9a 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -43,10 +43,6 @@ struct srcu_data {
 	struct srcu_struct *ssp;
 };
 
-#define SRCU_NMI_UNKNOWN	0x0
-#define SRCU_NMI_NMI_UNSAFE	0x1
-#define SRCU_NMI_NMI_SAFE	0x2
-
 /*
  * Node in SRCU combining tree, similar in function to rcu_data.
  */
@@ -159,7 +155,4 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
-
 #endif
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 272830a87e566..ca4b5dcec675b 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -631,17 +631,16 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
+#ifdef CONFIG_PROVE_RCU
 /*
  * Check for consistent NMI safety.
  */
-static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
+void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
 {
 	int nmi_safe_mask = 1 << nmi_safe;
 	int old_nmi_safe_mask;
 	struct srcu_data *sdp;
 
-	if (!IS_ENABLED(CONFIG_PROVE_RCU))
-		return;
 	/* NMI-unsafe use in NMI is a bad sign */
 	WARN_ON_ONCE(!nmi_safe && in_nmi());
 	sdp = raw_cpu_ptr(ssp->sda);
@@ -652,6 +651,8 @@ static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
 	}
 	WARN_ONCE(old_nmi_safe_mask != nmi_safe_mask, "CPU %d old state %d new state %d\n", sdp->cpu, old_nmi_safe_mask, nmi_safe_mask);
 }
+EXPORT_SYMBOL_GPL(srcu_check_nmi_safety);
+#endif /* CONFIG_PROVE_RCU */
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
@@ -665,7 +666,6 @@ int __srcu_read_lock(struct srcu_struct *ssp)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
-	srcu_check_nmi_safety(ssp, false);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock);
@@ -679,7 +679,6 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
 	this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
-	srcu_check_nmi_safety(ssp, false);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -690,7 +689,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
  * srcu_struct, but in an NMI-safe manner using RMW atomics.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
 {
 	int idx;
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
@@ -698,8 +697,6 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	atomic_long_inc(&sdp->srcu_lock_count[idx]);
 	smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
-	if (chknmisafe)
-		srcu_check_nmi_safety(ssp, true);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
@@ -709,14 +706,12 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
  * element of the srcu_struct.  Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
  */
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 {
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
 
 	smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
 	atomic_long_inc(&sdp->srcu_unlock_count[idx]);
-	if (chknmisafe)
-		srcu_check_nmi_safety(ssp, true);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
 
@@ -1163,7 +1158,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	 * SRCU read-side critical section so that the grace-period
 	 * sequence number cannot wrap around in the meantime.
 	 */
-	idx = __srcu_read_lock_nmisafe(ssp, false);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1196,7 +1191,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp_mynode, s);
-	__srcu_read_unlock_nmisafe(ssp, idx, false);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 	return s;
 }
 
@@ -1500,13 +1495,13 @@ void srcu_barrier(struct srcu_struct *ssp)
 	/* Initial count prevents reaching zero until all CBs are posted. */
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
-	idx = __srcu_read_lock_nmisafe(ssp, false);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
 	else
 		for_each_possible_cpu(cpu)
 			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-	__srcu_read_unlock_nmisafe(ssp, idx, false);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 
 	/* Remove the initial count, at which point reaching zero can happen. */
 	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
-- 
2.31.1.189.g2e36527f23



* Re: [PATCH v3 rcu 08/11] arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-19 22:58 ` [PATCH v3 rcu 08/11] arch/s390: " Paul E. McKenney
@ 2022-10-20  5:16   ` Christian Borntraeger
  2022-10-20  7:23     ` Heiko Carstens
  0 siblings, 1 reply; 16+ messages in thread
From: Christian Borntraeger @ 2022-10-20  5:16 UTC (permalink / raw)
  To: Paul E. McKenney, rcu
  Cc: linux-kernel, kernel-team, rostedt, Neeraj Upadhyay,
	Frederic Weisbecker, Boqun Feng, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Sven Schnelle, Thomas Gleixner, John Ogness,
	Petr Mladek, linux-s390



Am 20.10.22 um 00:58 schrieb Paul E. McKenney:
> The s390 architecture uses either a cmpxchg loop (old systems)
> or the laa add-to-memory instruction (new systems) to implement
> this_cpu_add(), both of which are NMI safe.  This means that the old
> and more-efficient srcu_read_lock() may be used in NMI context, without
> the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
> option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/arm64/Kconfig, which will
						s390 ?
> cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
> srcu_read_lock() behavior.
> 
> Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
> 
> Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> Suggested-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: Alexander Gordeev <agordeev@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
> Cc: Sven Schnelle <svens@linux.ibm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: John Ogness <john.ogness@linutronix.de>
> Cc: Petr Mladek <pmladek@suse.com>
> Cc: <linux-s390@vger.kernel.org>
> ---
>   arch/s390/Kconfig | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index 318fce77601d3..0acdfda332908 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -73,6 +73,7 @@ config S390
>   	select ARCH_HAS_GIGANTIC_PAGE
>   	select ARCH_HAS_KCOV
>   	select ARCH_HAS_MEM_ENCRYPT
> +	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
>   	select ARCH_HAS_PTE_SPECIAL
>   	select ARCH_HAS_SCALED_CPUTIME
>   	select ARCH_HAS_SET_MEMORY
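
For context on what this one-line select buys: with ARCH_HAS_NMI_SAFE_THIS_CPU_OPS set, the plain SRCU read-side fast path stays usable from NMI, because its only state update is a this_cpu increment. Below is a simplified sketch of that fast path, condensed from kernel/rcu/srcutree.c after the atomic_long_t conversion in patch 01/11; details may differ slightly from the exact source:

	/*
	 * Simplified SRCU read-side fast path: one per-CPU counter
	 * increment plus a full barrier.  The NMI safety of
	 * this_cpu_inc() is exactly what the new Kconfig option asserts.
	 */
	int __srcu_read_lock(struct srcu_struct *ssp)
	{
		int idx;

		idx = READ_ONCE(ssp->srcu_idx) & 0x1;
		this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
		smp_mb(); /* B */  /* Avoid leaking the critical section. */
		return idx;
	}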


* Re: [PATCH v3 rcu 08/11] arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-20  5:16   ` Christian Borntraeger
@ 2022-10-20  7:23     ` Heiko Carstens
  2022-10-20  7:27       ` Heiko Carstens
  0 siblings, 1 reply; 16+ messages in thread
From: Heiko Carstens @ 2022-10-20  7:23 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Paul E. McKenney, rcu, linux-kernel, kernel-team, rostedt,
	Neeraj Upadhyay, Frederic Weisbecker, Boqun Feng, Vasily Gorbik,
	Alexander Gordeev, Sven Schnelle, Thomas Gleixner, John Ogness,
	Petr Mladek, linux-s390

On Thu, Oct 20, 2022 at 07:16:44AM +0200, Christian Borntraeger wrote:
> 
> 
> Am 20.10.22 um 00:58 schrieb Paul E. McKenney:
> > The s390 architecture uses either a cmpxchg loop (old systems)
> > or the laa add-to-memory instruction (new systems) to implement
> > this_cpu_add(), both of which are NMI safe.  This means that the old
> > and more-efficient srcu_read_lock() may be used in NMI context, without
> > the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
> > option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/arm64/Kconfig, which will
> 						s390 ?
> > cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
> > srcu_read_lock() behavior.
> > 
> > Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
> > 
> > Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> > Suggested-by: Boqun Feng <boqun.feng@gmail.com>
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
...
> > ---
> >   arch/s390/Kconfig | 1 +
> >   1 file changed, 1 insertion(+)

Not sure what Christian was trying to say with his empty reply :)
In any case:
Acked-by: Heiko Carstens <hca@linux.ibm.com>


* Re: [PATCH v3 rcu 08/11] arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-20  7:23     ` Heiko Carstens
@ 2022-10-20  7:27       ` Heiko Carstens
  2022-10-20 16:35         ` Paul E. McKenney
  0 siblings, 1 reply; 16+ messages in thread
From: Heiko Carstens @ 2022-10-20  7:27 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Christian Borntraeger, Paul E. McKenney, rcu, linux-kernel,
	kernel-team, rostedt, Neeraj Upadhyay, Frederic Weisbecker,
	Boqun Feng, Vasily Gorbik, Alexander Gordeev, Sven Schnelle,
	Thomas Gleixner, John Ogness, Petr Mladek, linux-s390

On Thu, Oct 20, 2022 at 09:23:49AM +0200, Heiko Carstens wrote:
> On Thu, Oct 20, 2022 at 07:16:44AM +0200, Christian Borntraeger wrote:
> > Am 20.10.22 um 00:58 schrieb Paul E. McKenney:
> > > The s390 architecture uses either a cmpxchg loop (old systems)
> > > or the laa add-to-memory instruction (new systems) to implement
> > > this_cpu_add(), both of which are NMI safe.  This means that the old
> > > and more-efficient srcu_read_lock() may be used in NMI context, without
> > > the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
> > > option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/arm64/Kconfig, which will
> > 						s390 ?

Ah, this typo is what Christian pointed out; missed that.

> > > cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
> > > srcu_read_lock() behavior.
> > > 
> > > Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
> > > 
> > > Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > > Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> > > Suggested-by: Boqun Feng <boqun.feng@gmail.com>
> > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ...
> > > ---
> > >   arch/s390/Kconfig | 1 +
> > >   1 file changed, 1 insertion(+)
> 
> Not sure what Christian was trying to say with his empty reply :)
> In any case:
> Acked-by: Heiko Carstens <hca@linux.ibm.com>


* Re: [PATCH v3 rcu 08/11] arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  2022-10-20  7:27       ` Heiko Carstens
@ 2022-10-20 16:35         ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2022-10-20 16:35 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Christian Borntraeger, rcu, linux-kernel, kernel-team, rostedt,
	Neeraj Upadhyay, Frederic Weisbecker, Boqun Feng, Vasily Gorbik,
	Alexander Gordeev, Sven Schnelle, Thomas Gleixner, John Ogness,
	Petr Mladek, linux-s390

On Thu, Oct 20, 2022 at 09:27:37AM +0200, Heiko Carstens wrote:
> On Thu, Oct 20, 2022 at 09:23:49AM +0200, Heiko Carstens wrote:
> > On Thu, Oct 20, 2022 at 07:16:44AM +0200, Christian Borntraeger wrote:
> > > Am 20.10.22 um 00:58 schrieb Paul E. McKenney:
> > > > The s390 architecture uses either a cmpxchg loop (old systems)
> > > > or the laa add-to-memory instruction (new systems) to implement
> > > > this_cpu_add(), both of which are NMI safe.  This means that the old
> > > > and more-efficient srcu_read_lock() may be used in NMI context, without
> > > > the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
> > > > option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/arm64/Kconfig, which will
> > > 						s390 ?
> 
> Ah, this typo is what Christian pointed out; missed that.
> 
> > > > cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
> > > > srcu_read_lock() behavior.
> > > > 
> > > > Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
> > > > 
> > > > Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > > > Suggested-by: Frederic Weisbecker <frederic@kernel.org>
> > > > Suggested-by: Boqun Feng <boqun.feng@gmail.com>
> > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ...
> > > > ---
> > > >   arch/s390/Kconfig | 1 +
> > > >   1 file changed, 1 insertion(+)
> > 
> > Not sure what Christian was trying to say with his empty reply :)
> > In any case:
> > Acked-by: Heiko Carstens <hca@linux.ibm.com>

I will apply the fix and the ack on my next rebase, thank you both!

							Thanx, Paul


Thread overview: 16+ messages
2022-10-19 22:58 [PATCH rcu 0/11] NMI-safe SRCU readers for v6.2 Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 01/11] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 02/11] srcu: Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 03/11] srcu: Check for consistent per-CPU per-srcu_struct NMI safety Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 04/11] srcu: Check for consistent global " Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 05/11] arch/x86: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 06/11] arch/arm64: " Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 07/11] arch/loongarch: " Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 08/11] arch/s390: " Paul E. McKenney
2022-10-20  5:16   ` Christian Borntraeger
2022-10-20  7:23     ` Heiko Carstens
2022-10-20  7:27       ` Heiko Carstens
2022-10-20 16:35         ` Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 09/11] srcu: Warn when NMI-unsafe API is used in NMI Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 10/11] srcu: Explain the reason behind the read side critical section on GP start Paul E. McKenney
2022-10-19 22:58 ` [PATCH v3 rcu 11/11] srcu: Debug NMI safety even on archs that don't require it Paul E. McKenney
