linux-kernel.vger.kernel.org archive mirror
* [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2
@ 2019-08-01 23:16 Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 01/10] rcu/nocb: Enable re-awakening under high callback load Paul E. McKenney
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel

Hello!

This series partially addresses lock-contention increases caused by the
move to the ->cblist segmented callback list.

1.	Enable re-awakening under high callback load.

2.	Never downgrade ->nocb_defer_wakeup in wake_nocb_gp_defer().

3.	Make __call_rcu_nocb_wake() safe for many callbacks.

4.	Avoid needless wakeups of no-CBs grace-period kthread.

5.	Avoid ->nocb_lock capture by corresponding CPU.

6.	Round down for number of no-CBs grace-period kthreads.

7.	Reduce contention at no-CBs registry-time CB advancement.

8.	Reduce contention at no-CBs invocation-done time.

9.	Reduce ->nocb_lock contention with separate ->nocb_gp_lock.

10.	Unconditionally advance and wake for excessive CBs.

							Thanx, Paul

------------------------------------------------------------------------

 tree.c        |   20 ++++++++-
 tree.h        |   21 ++++++++-
 tree_plugin.h |  128 ++++++++++++++++++++++++++++++++++++----------------------
 3 files changed, 118 insertions(+), 51 deletions(-)



* [PATCH tip/core/rcu 01/10] rcu/nocb: Enable re-awakening under high callback load
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 02/10] rcu/nocb: Never downgrade ->nocb_defer_wakeup in wake_nocb_gp_defer() Paul E. McKenney
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

The __call_rcu_nocb_wake() function and its predecessors set
->qlen_last_fqs_check to zero for the first callback and to LONG_MAX / 2
for forced reawakenings.  The former can result in a too-quick reawakening
when there are many callbacks ready to invoke and the latter prevents a
second reawakening.  This commit therefore sets ->qlen_last_fqs_check
to the current number of callbacks in both cases.  While in the area,
this commit also moves both assignments under ->nocb_lock.
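
For illustration only (not part of this patch), here is a small user-space
sketch of how the overload test "len > rdp->qlen_last_fqs_check + qhimark"
behaves under the old and new settings, assuming qhimark's default of 10000
and 50,000 callbacks already queued when the wakeup happens:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	long qhimark = 10000;	/* default value of the qhimark module parameter */
	long len = 50000;	/* callbacks queued when the wakeup happens */

	/* Old first-callback case: ->qlen_last_fqs_check reset to 0. */
	printf("old (first CB): next forced wake once len > %ld\n", 0L + qhimark);
	/* Old forced-rewake case: ->qlen_last_fqs_check set to LONG_MAX / 2. */
	printf("old (forced):   next forced wake once len > %ld\n", LONG_MAX / 2 + qhimark);
	/* New: both cases record the current callback count. */
	printf("new (both):     next forced wake once len > %ld\n", len + qhimark);
	return 0;
}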

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree_plugin.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index c1dfbac8cd39..297b38732e28 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1626,6 +1626,7 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 	// Need to actually to a wakeup.
 	len = rcu_segcblist_n_cbs(&rdp->cblist);
 	if (was_alldone) {
+		rdp->qlen_last_fqs_check = len;
 		if (!irqs_disabled_flags(flags)) {
 			/* ... if queue was empty ... */
 			wake_nocb_gp(rdp, false, flags);
@@ -1636,9 +1637,9 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 					   TPS("WakeEmptyIsDeferred"));
 			rcu_nocb_unlock_irqrestore(rdp, flags);
 		}
-		rdp->qlen_last_fqs_check = 0;
 	} else if (len > rdp->qlen_last_fqs_check + qhimark) {
 		/* ... or if many callbacks queued. */
+		rdp->qlen_last_fqs_check = len;
 		if (!irqs_disabled_flags(flags)) {
 			wake_nocb_gp(rdp, true, flags);
 			trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
@@ -1648,7 +1649,6 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 					   TPS("WakeOvfIsDeferred"));
 			rcu_nocb_unlock_irqrestore(rdp, flags);
 		}
-		rdp->qlen_last_fqs_check = LONG_MAX / 2;
 	} else {
 		trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot"));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
-- 
2.17.1



* [PATCH tip/core/rcu 02/10] rcu/nocb: Never downgrade ->nocb_defer_wakeup in wake_nocb_gp_defer()
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 01/10] rcu/nocb: Enable re-awakening under high callback load Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 03/10] rcu/nocb: Make __call_rcu_nocb_wake() safe for many callbacks Paul E. McKenney
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

Currently, wake_nocb_gp_defer() simply stores whatever waketype was
passed in, which can result in an RCU_NOCB_WAKE_FORCE being downgraded
to RCU_NOCB_WAKE, which could in turn delay callback processing.
This commit therefore adds a check so that wake_nocb_gp_defer() only
updates ->nocb_defer_wakeup when the update increases the forcefulness,
thus avoiding downgrades.
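
For reference (not part of this patch), the waketype levels are defined in
kernel/rcu/tree.h as integers that increase with forcefulness, which is what
makes the new "<" comparison a monotonic upgrade check.  A toy sketch, with
those values assumed, showing that an upgrade sticks while a downgrade is
ignored:

#include <stdio.h>

#define RCU_NOCB_WAKE_NOT	0	/* values assumed from kernel/rcu/tree.h */
#define RCU_NOCB_WAKE		1
#define RCU_NOCB_WAKE_FORCE	2

static int nocb_defer_wakeup = RCU_NOCB_WAKE_NOT;

static void wake_defer(int waketype)
{
	if (nocb_defer_wakeup < waketype)	/* upgrade-only, as in the patch */
		nocb_defer_wakeup = waketype;
}

int main(void)
{
	wake_defer(RCU_NOCB_WAKE_FORCE);	/* upgrade: 0 -> 2 */
	wake_defer(RCU_NOCB_WAKE);		/* ignored: would downgrade 2 -> 1 */
	printf("deferred waketype: %d\n", nocb_defer_wakeup);	/* prints 2 */
	return 0;
}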

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree_plugin.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 297b38732e28..f93603ca1672 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1598,7 +1598,8 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
 {
 	if (rdp->nocb_defer_wakeup == RCU_NOCB_WAKE_NOT)
 		mod_timer(&rdp->nocb_timer, jiffies + 1);
-	WRITE_ONCE(rdp->nocb_defer_wakeup, waketype);
+	if (rdp->nocb_defer_wakeup < waketype)
+		WRITE_ONCE(rdp->nocb_defer_wakeup, waketype);
 	trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, reason);
 }
 
-- 
2.17.1



* [PATCH tip/core/rcu 03/10] rcu/nocb: Make __call_rcu_nocb_wake() safe for many callbacks
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 01/10] rcu/nocb: Enable re-awakening under high callback load Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 02/10] rcu/nocb: Never downgrade ->nocb_defer_wakeup in wake_nocb_gp_defer() Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 04/10] rcu/nocb: Avoid needless wakeups of no-CBs grace-period kthread Paul E. McKenney
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

It might be hard to imagine having more than two billion callbacks
queued on a single CPU's ->cblist, but someone will do it sometime.
This commit therefore makes __call_rcu_nocb_wake() handle this situation
by upgrading local variable "len" from "int" to "long".
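
As a minimal illustration (not part of this patch), on typical LP64 systems
truncating a long callback count into an int wraps it negative once the count
exceeds INT_MAX, at which point the "len > qlen_last_fqs_check + qhimark"
comparison can no longer detect the overload:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	long n_cbs = (long)INT_MAX + 1000;	/* more than two billion callbacks */
	int as_int = (int)n_cbs;	/* old: truncated, typically negative */
	long as_long = n_cbs;		/* new: value preserved */

	printf("as int:  %d\n", as_int);
	printf("as long: %ld\n", as_long);
	return 0;
}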

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree_plugin.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index f93603ca1672..fa511e306f4d 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1613,7 +1613,7 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 				 unsigned long flags)
 				 __releases(rdp->nocb_lock)
 {
-	int len;
+	long len;
 	struct task_struct *t;
 
 	// If we are being polled or there is no kthread, just leave.
-- 
2.17.1



* [PATCH tip/core/rcu 04/10] rcu/nocb: Avoid needless wakeups of no-CBs grace-period kthread
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (2 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 03/10] rcu/nocb: Make __call_rcu_nocb_wake() safe for many callbacks Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 05/10] rcu/nocb: Avoid ->nocb_lock capture by corresponding CPU Paul E. McKenney
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

Currently, the code provides an extra wakeup for the no-CBs grace-period
kthread if one of its CPUs is generating excessive numbers of callbacks.
But satisfying though it is to wake something up when things are going
south, unless the thing being awakened can actually help solve the
problem, that extra wakeup does nothing but consume additional CPU time,
which is exactly what you don't want during a call_rcu() flood.

This commit therefore avoids doing anything if the corresponding
no-CBs callback kthread is going full tilt.  Otherwise, if advancing
callbacks immediately might help and if the leaf rcu_node structure's
lock is immediately available, this commit invokes a new variant of
rcu_advance_cbs() that advances callbacks only if doing so won't require
awakening the grace-period kthread (not to be confused with any of the
no-CBs grace-period kthreads).

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree.c        | 15 +++++++++++++++
 kernel/rcu/tree_plugin.h | 13 +++++++++----
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index fb6b80aa34f6..a6ddfae6978d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1334,6 +1334,19 @@ static bool rcu_advance_cbs(struct rcu_node *rnp, struct rcu_data *rdp)
 	return rcu_accelerate_cbs(rnp, rdp);
 }
 
+/*
+ * Move and classify callbacks, but only if doing so won't require
+ * that the RCU grace-period kthread be awakened.
+ */
+static void __maybe_unused rcu_advance_cbs_nowake(struct rcu_node *rnp,
+						  struct rcu_data *rdp)
+{
+	raw_lockdep_assert_held_rcu_node(rnp);
+	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)))
+		return;
+	WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
+}
+
 /*
  * Update CPU-local rcu_data state to record the beginnings and ends of
  * grace periods.  The caller must hold the ->lock of the leaf rcu_node
@@ -2118,6 +2131,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
 			      rcu_segcblist_n_lazy_cbs(&rdp->cblist),
 			      rcu_segcblist_n_cbs(&rdp->cblist), bl);
 	rcu_segcblist_extract_done_cbs(&rdp->cblist, &rcl);
+	if (offloaded)
+		rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
 	rcu_nocb_unlock_irqrestore(rdp, flags);
 
 	/* Invoke callbacks. */
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index fa511e306f4d..bda86098ca38 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1641,10 +1641,15 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 	} else if (len > rdp->qlen_last_fqs_check + qhimark) {
 		/* ... or if many callbacks queued. */
 		rdp->qlen_last_fqs_check = len;
-		if (!irqs_disabled_flags(flags)) {
-			wake_nocb_gp(rdp, true, flags);
-			trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
-					    TPS("WakeOvf"));
+		if (!rdp->nocb_cb_sleep &&
+		    rcu_segcblist_ready_cbs(&rdp->cblist)) {
+			// Already going full tilt, so don't try to rewake.
+			rcu_nocb_unlock_irqrestore(rdp, flags);
+		} else if (rcu_segcblist_pend_cbs(&rdp->cblist) &&
+			   raw_spin_trylock_rcu_node(rdp->mynode)) {
+			rcu_advance_cbs_nowake(rdp->mynode, rdp);
+			raw_spin_unlock_rcu_node(rdp->mynode);
+			rcu_nocb_unlock_irqrestore(rdp, flags);
 		} else {
 			wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE,
 					   TPS("WakeOvfIsDeferred"));
-- 
2.17.1



* [PATCH tip/core/rcu 05/10] rcu/nocb: Avoid ->nocb_lock capture by corresponding CPU
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (3 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 04/10] rcu/nocb: Avoid needless wakeups of no-CBs grace-period kthread Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 06/10] rcu/nocb: Round down for number of no-CBs grace-period kthreads Paul E. McKenney
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

A given rcu_data structure's ->nocb_lock can be acquired very frequently
by the corresponding CPU and occasionally by the corresponding no-CBs
grace-period and callbacks kthreads.  In particular, these two kthreads
will have frequent gaps between ->nocb_lock acquisitions that are roughly
a grace period in duration.  This means that any excessive ->nocb_lock
contention will be due to the CPU's acquisitions, and this in turn
enables a very naive contention-avoidance strategy to be quite effective.

This commit therefore modifies rcu_nocb_lock() to first
attempt a raw_spin_trylock() and, if that fails, to atomically increment
a separate ->nocb_lock_contended counter around the raw_spin_lock().  This new
->nocb_lock_contended field is checked in __call_rcu_nocb_wake() when
interrupts are enabled, with a spin-wait for contending acquisitions
to complete, thus allowing the kthreads a chance to acquire the lock.
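
For illustration only (not part of this patch), here is a minimal user-space
analogue of this pattern, with C11 atomics and a pthread spinlock standing in
for the kernel's atomic_t and raw spinlock:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

struct nocb_like {
	pthread_spinlock_t lock;	/* stands in for ->nocb_lock */
	atomic_int contended;		/* stands in for ->nocb_lock_contended */
};

static void nocb_like_lock(struct nocb_like *p)
{
	if (pthread_spin_trylock(&p->lock) == 0)
		return;				/* uncontended fast path */
	atomic_fetch_add(&p->contended, 1);	/* flag the contention */
	pthread_spin_lock(&p->lock);
	atomic_fetch_sub(&p->contended, 1);
}

/* Analogue of rcu_nocb_wait_contended(): the flooding side backs off
 * until the kthreads have had their turn at the lock. */
static void nocb_like_wait_contended(struct nocb_like *p)
{
	while (atomic_load(&p->contended))
		sched_yield();
}

int main(void)
{
	struct nocb_like nl = { .contended = 0 };

	pthread_spin_init(&nl.lock, PTHREAD_PROCESS_PRIVATE);
	nocb_like_lock(&nl);		/* trylock succeeds, nothing flagged */
	pthread_spin_unlock(&nl.lock);
	nocb_like_wait_contended(&nl);	/* returns immediately here */
	printf("contended count: %d\n", atomic_load(&nl.contended));
	return 0;
}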

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree.h        | 18 ++++++++++-
 kernel/rcu/tree_plugin.h | 68 ++++++++++++++++++++++++++--------------
 2 files changed, 62 insertions(+), 24 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index c12e85c12310..7062f9d9c053 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -197,6 +197,7 @@ struct rcu_data {
 	struct swait_queue_head nocb_cb_wq; /* For nocb kthreads to sleep on. */
 	struct task_struct *nocb_gp_kthread;
 	raw_spinlock_t nocb_lock;	/* Guard following pair of fields. */
+	atomic_t nocb_lock_contended;	/* Contention experienced. */
 	int nocb_defer_wakeup;		/* Defer wakeup of nocb_kthread. */
 	struct timer_list nocb_timer;	/* Enforce finite deferral. */
 
@@ -430,7 +431,22 @@ static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
 				       unsigned long flags);
 #ifdef CONFIG_RCU_NOCB_CPU
 static void __init rcu_organize_nocb_kthreads(void);
-#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
+#define rcu_nocb_lock_irqsave(rdp, flags)				\
+do {									\
+	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist)) {		\
+		local_irq_save(flags);					\
+	} else if (!raw_spin_trylock_irqsave(&(rdp)->nocb_lock, (flags))) {\
+		atomic_inc(&(rdp)->nocb_lock_contended);		\
+		smp_mb__after_atomic(); /* atomic_inc() before lock. */	\
+		raw_spin_lock_irqsave(&(rdp)->nocb_lock, (flags));	\
+		smp_mb__before_atomic(); /* atomic_dec() after lock. */	\
+		atomic_dec(&(rdp)->nocb_lock_contended);		\
+	}								\
+} while (0)
+#else /* #ifdef CONFIG_RCU_NOCB_CPU */
+#define rcu_nocb_lock_irqsave(rdp, flags) local_irq_save(flags)
+#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
+
 static void rcu_bind_gp_kthread(void);
 static bool rcu_nohz_full_cpu(void);
 static void rcu_dynticks_task_enter(void);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index bda86098ca38..b6d9ed169edc 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1496,14 +1496,36 @@ early_param("rcu_nocb_poll", parse_rcu_nocb_poll);
 
 /*
  * Acquire the specified rcu_data structure's ->nocb_lock, but only
- * if it corresponds to a no-CBs CPU.
+ * if it corresponds to a no-CBs CPU.  If the lock isn't immediately
+ * available, increment ->nocb_lock_contended to flag the contention.
  */
 static void rcu_nocb_lock(struct rcu_data *rdp)
 {
-	if (rcu_segcblist_is_offloaded(&rdp->cblist)) {
-		lockdep_assert_irqs_disabled();
-		raw_spin_lock(&rdp->nocb_lock);
-	}
+	lockdep_assert_irqs_disabled();
+	if (!rcu_segcblist_is_offloaded(&rdp->cblist) ||
+	    raw_spin_trylock(&rdp->nocb_lock))
+		return;
+	atomic_inc(&rdp->nocb_lock_contended);
+	smp_mb__after_atomic(); /* atomic_inc() before lock. */
+	raw_spin_lock(&rdp->nocb_lock);
+	smp_mb__before_atomic(); /* atomic_dec() after lock. */
+	atomic_dec(&rdp->nocb_lock_contended);
+}
+
+/*
+ * Spinwait until the specified rcu_data structure's ->nocb_lock is
+ * not contended.  Please note that this is extremely special-purpose,
+ * relying on the fact that at most two kthreads and one CPU contend for
+ * this lock, and also that the two kthreads are guaranteed to have frequent
+ * grace-period-duration time intervals between successive acquisitions
+ * of the lock.  This allows us to use an extremely simple throttling
+ * mechanism, and further to apply it only to the CPU doing floods of
+ * call_rcu() invocations.  Don't try this at home!
+ */
+static void rcu_nocb_wait_contended(struct rcu_data *rdp)
+{
+	while (atomic_read(&rdp->nocb_lock_contended))
+		cpu_relax();
 }
 
 /*
@@ -1573,19 +1595,19 @@ static void wake_nocb_gp(struct rcu_data *rdp, bool force,
 
 	lockdep_assert_held(&rdp->nocb_lock);
 	if (!READ_ONCE(rdp_gp->nocb_gp_kthread)) {
-		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 		return;
 	}
 	if (READ_ONCE(rdp_gp->nocb_gp_sleep) || force) {
 		del_timer(&rdp->nocb_timer);
-		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 		smp_mb(); /* enqueue before ->nocb_gp_sleep. */
-		raw_spin_lock_irqsave(&rdp_gp->nocb_lock, flags);
+		rcu_nocb_lock_irqsave(rdp_gp, flags);
 		WRITE_ONCE(rdp_gp->nocb_gp_sleep, false);
-		raw_spin_unlock_irqrestore(&rdp_gp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp_gp, flags);
 		wake_up_process(rdp_gp->nocb_gp_kthread);
 	} else {
-		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 	}
 }
 
@@ -1644,23 +1666,23 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 		if (!rdp->nocb_cb_sleep &&
 		    rcu_segcblist_ready_cbs(&rdp->cblist)) {
 			// Already going full tilt, so don't try to rewake.
-			rcu_nocb_unlock_irqrestore(rdp, flags);
 		} else if (rcu_segcblist_pend_cbs(&rdp->cblist) &&
 			   raw_spin_trylock_rcu_node(rdp->mynode)) {
 			rcu_advance_cbs_nowake(rdp->mynode, rdp);
 			raw_spin_unlock_rcu_node(rdp->mynode);
-			rcu_nocb_unlock_irqrestore(rdp, flags);
 		} else {
 			wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE,
 					   TPS("WakeOvfIsDeferred"));
-			rcu_nocb_unlock_irqrestore(rdp, flags);
 		}
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 	} else {
 		trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot"));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 	}
-	if (!irqs_disabled_flags(flags))
+	if (!irqs_disabled_flags(flags)) {
 		lockdep_assert_irqs_enabled();
+		rcu_nocb_wait_contended(rdp);
+	}
 	return;
 }
 
@@ -1690,7 +1712,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		if (rcu_segcblist_empty(&rdp->cblist))
 			continue; /* No callbacks here, try next. */
 		rnp = rdp->mynode;
-		raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
+		rcu_nocb_lock_irqsave(rdp, flags);
 		WRITE_ONCE(my_rdp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT);
 		del_timer(&my_rdp->nocb_timer);
 		raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
@@ -1710,7 +1732,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		} else {
 			needwake = false;
 		}
-		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 		if (needwake) {
 			swake_up_one(&rdp->nocb_cb_wq);
 			gotcbs = true;
@@ -1739,9 +1761,9 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		trace_rcu_this_gp(rnp, my_rdp, wait_gp_seq, TPS("EndWait"));
 	}
 	if (!rcu_nocb_poll) {
-		raw_spin_lock_irqsave(&my_rdp->nocb_lock, flags);
+		rcu_nocb_lock_irqsave(my_rdp, flags);
 		WRITE_ONCE(my_rdp->nocb_gp_sleep, true);
-		raw_spin_unlock_irqrestore(&my_rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(my_rdp, flags);
 	}
 	WARN_ON(signal_pending(current));
 }
@@ -1782,12 +1804,12 @@ static void nocb_cb_wait(struct rcu_data *rdp)
 	rcu_do_batch(rdp);
 	local_bh_enable();
 	lockdep_assert_irqs_enabled();
-	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
+	rcu_nocb_lock_irqsave(rdp, flags);
 	raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
 	needwake_gp = rcu_advance_cbs(rdp->mynode, rdp);
 	raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
 	if (rcu_segcblist_ready_cbs(&rdp->cblist)) {
-		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 		if (needwake_gp)
 			rcu_gp_kthread_wake();
 		return;
@@ -1795,7 +1817,7 @@ static void nocb_cb_wait(struct rcu_data *rdp)
 
 	trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("CBSleep"));
 	WRITE_ONCE(rdp->nocb_cb_sleep, true);
-	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+	rcu_nocb_unlock_irqrestore(rdp, flags);
 	if (needwake_gp)
 		rcu_gp_kthread_wake();
 	swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
@@ -1837,9 +1859,9 @@ static void do_nocb_deferred_wakeup_common(struct rcu_data *rdp)
 	unsigned long flags;
 	int ndw;
 
-	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
+	rcu_nocb_lock_irqsave(rdp, flags);
 	if (!rcu_nocb_need_deferred_wakeup(rdp)) {
-		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
+		rcu_nocb_unlock_irqrestore(rdp, flags);
 		return;
 	}
 	ndw = READ_ONCE(rdp->nocb_defer_wakeup);
-- 
2.17.1



* [PATCH tip/core/rcu 06/10] rcu/nocb: Round down for number of no-CBs grace-period kthreads
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (4 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 05/10] rcu/nocb: Avoid ->nocb_lock capture by corresponding CPU Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 07/10] rcu/nocb: Reduce contention at no-CBs registry-time CB advancement Paul E. McKenney
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

Currently, when the square root of the number of CPUs is rounded down
by int_sqrt(), this round-down is applied to the number of callback
kthreads per grace-period kthread.  This makes almost no difference
for large systems, but results in oddities such as three no-CBs
grace-period kthreads for a five-CPU system, which is a bit excessive.
This commit therefore causes the round-down to apply to the number of
no-CBs grace-period kthreads, so that systems with from four to eight
CPUs have only two no-CBs grace-period kthreads.
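
For illustration only (not part of this patch), a quick user-space computation
of the old and new strides, with isqrt() standing in for the kernel's
int_sqrt() and the grace-period-kthread count approximated as the CPU count
divided by the stride (rounded up), assuming every CPU is offloaded:

#include <stdio.h>

static unsigned long isqrt(unsigned long x)	/* stand-in for int_sqrt() */
{
	unsigned long r = 0;

	while ((r + 1) * (r + 1) <= x)
		r++;
	return r;
}

int main(void)
{
	unsigned long cpus[] = { 8, 72, 4096 };

	for (unsigned long i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
		unsigned long n = cpus[i];
		unsigned long old_ls = isqrt(n);	/* old stride */
		unsigned long new_ls = n / isqrt(n);	/* new stride */

		printf("%4lu CPUs: old stride %3lu (~%3lu GP kthreads), new stride %3lu (~%3lu GP kthreads)\n",
		       n, old_ls, (n + old_ls - 1) / old_ls,
		       new_ls, (n + new_ls - 1) / new_ls);
	}
	return 0;
}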

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree_plugin.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index b6d9ed169edc..06b4fe275b3a 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -2026,7 +2026,7 @@ static void __init rcu_organize_nocb_kthreads(void)
 	if (!cpumask_available(rcu_nocb_mask))
 		return;
 	if (ls == -1) {
-		ls = int_sqrt(nr_cpu_ids);
+		ls = nr_cpu_ids / int_sqrt(nr_cpu_ids);
 		rcu_nocb_gp_stride = ls;
 	}
 
-- 
2.17.1



* [PATCH tip/core/rcu 07/10] rcu/nocb: Reduce contention at no-CBs registry-time CB advancement
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (5 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 06/10] rcu/nocb: Round down for number of no-CBs grace-period kthreads Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 08/10] rcu/nocb: Reduce contention at no-CBs invocation-done time Paul E. McKenney
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

Currently, __call_rcu_nocb_wake() conditionally acquires the leaf rcu_node
structure's ->lock, and only afterwards does rcu_advance_cbs_nowake()
check to see if it is possible to advance callbacks without potentially
needing to awaken the grace-period kthread.  Given that the no-awaken
check can be done locklessly, this commit reverses the order:
rcu_advance_cbs_nowake() is now invoked without holding the leaf rcu_node
structure's ->lock, checks the grace-period state first, and only then
conditionally acquires that lock, thus reducing the number of needless
acquisitions of the leaf rcu_node structure's ->lock.
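
For reference (not part of this patch), the lockless check is just a masked
read of ->gp_seq: rcu_seq_state() extracts the low-order phase bits, and a
nonzero phase means a grace period is in progress.  A sketch of that check,
with the two-bit encoding assumed from kernel/rcu/rcu.h:

#include <stdio.h>

#define RCU_SEQ_CTR_SHIFT	2	/* assumed from kernel/rcu/rcu.h */
#define RCU_SEQ_STATE_MASK	((1UL << RCU_SEQ_CTR_SHIFT) - 1)

static unsigned long rcu_seq_state(unsigned long s)
{
	return s & RCU_SEQ_STATE_MASK;
}

int main(void)
{
	unsigned long idle = 8UL << RCU_SEQ_CTR_SHIFT;		/* no GP in progress */
	unsigned long busy = (8UL << RCU_SEQ_CTR_SHIFT) | 1;	/* GP in progress */

	/* Only the "busy" case makes rcu_advance_cbs_nowake() try the lock. */
	printf("idle gp_seq: state %lu -> return without locking\n", rcu_seq_state(idle));
	printf("busy gp_seq: state %lu -> trylock and advance\n", rcu_seq_state(busy));
	return 0;
}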

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree.c        | 5 +++--
 kernel/rcu/tree_plugin.h | 4 +---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index a6ddfae6978d..ec320658aeef 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1341,10 +1341,11 @@ static bool rcu_advance_cbs(struct rcu_node *rnp, struct rcu_data *rdp)
 static void __maybe_unused rcu_advance_cbs_nowake(struct rcu_node *rnp,
 						  struct rcu_data *rdp)
 {
-	raw_lockdep_assert_held_rcu_node(rnp);
-	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)))
+	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)) ||
+	    !raw_spin_trylock_rcu_node(rnp))
 		return;
 	WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
+	raw_spin_unlock_rcu_node(rnp);
 }
 
 /*
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 06b4fe275b3a..a1a2fc9df6d8 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1666,10 +1666,8 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 		if (!rdp->nocb_cb_sleep &&
 		    rcu_segcblist_ready_cbs(&rdp->cblist)) {
 			// Already going full tilt, so don't try to rewake.
-		} else if (rcu_segcblist_pend_cbs(&rdp->cblist) &&
-			   raw_spin_trylock_rcu_node(rdp->mynode)) {
+		} else if (rcu_segcblist_pend_cbs(&rdp->cblist)) {
 			rcu_advance_cbs_nowake(rdp->mynode, rdp);
-			raw_spin_unlock_rcu_node(rdp->mynode);
 		} else {
 			wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE,
 					   TPS("WakeOvfIsDeferred"));
-- 
2.17.1



* [PATCH tip/core/rcu 08/10] rcu/nocb: Reduce contention at no-CBs invocation-done time
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (6 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 07/10] rcu/nocb: Reduce contention at no-CBs registry-time CB advancement Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 09/10] rcu/nocb: Reduce ->nocb_lock contention with separate ->nocb_gp_lock Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 10/10] rcu/nocb: Unconditionally advance and wake for excessive CBs Paul E. McKenney
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

Currently, nocb_cb_wait() unconditionally acquires the leaf rcu_node
->lock to advance callbacks when done invoking the previous batch.
It does this while holding ->nocb_lock, which means that contention on
the leaf rcu_node ->lock visits itself on the ->nocb_lock.  This commit
therefore makes this lock acquisition conditional, forgoing callback
advancement when the leaf rcu_node ->lock is not immediately available.
(In this case, the no-CBs grace-period kthread will eventually do any
needed callback advancement.)

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree_plugin.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index a1a2fc9df6d8..7fbf2c4411a1 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1803,9 +1803,10 @@ static void nocb_cb_wait(struct rcu_data *rdp)
 	local_bh_enable();
 	lockdep_assert_irqs_enabled();
 	rcu_nocb_lock_irqsave(rdp, flags);
-	raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
-	needwake_gp = rcu_advance_cbs(rdp->mynode, rdp);
-	raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
+	if (raw_spin_trylock_rcu_node(rnp)) { /* irqs already disabled. */
+		needwake_gp = rcu_advance_cbs(rdp->mynode, rdp);
+		raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
+	}
 	if (rcu_segcblist_ready_cbs(&rdp->cblist)) {
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 		if (needwake_gp)
-- 
2.17.1



* [PATCH tip/core/rcu 09/10] rcu/nocb: Reduce ->nocb_lock contention with separate ->nocb_gp_lock
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (7 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 08/10] rcu/nocb: Reduce contention at no-CBs invocation-done time Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  2019-08-01 23:16 ` [PATCH tip/core/rcu 10/10] rcu/nocb: Unconditionally advance and wake for excessive CBs Paul E. McKenney
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

The sleep/wakeup of the no-CBs grace-period kthreads is synchronized
using the ->nocb_lock of the first CPU corresponding to that kthread.
This commit provides a separate ->nocb_gp_lock for this purpose, thus
reducing contention on ->nocb_lock.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree.h        | 3 ++-
 kernel/rcu/tree_plugin.h | 9 +++++----
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 7062f9d9c053..2c3e9068671c 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -202,7 +202,8 @@ struct rcu_data {
 	struct timer_list nocb_timer;	/* Enforce finite deferral. */
 
 	/* The following fields are used by GP kthread, hence own cacheline. */
-	bool nocb_gp_sleep ____cacheline_internodealigned_in_smp;
+	raw_spinlock_t nocb_gp_lock ____cacheline_internodealigned_in_smp;
+	bool nocb_gp_sleep;
 					/* Is the nocb GP thread asleep? */
 	struct swait_queue_head nocb_gp_wq; /* For nocb kthreads to sleep on. */
 	bool nocb_cb_sleep;		/* Is the nocb CB thread asleep? */
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 7fbf2c4411a1..af9cbc7d4784 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1602,9 +1602,9 @@ static void wake_nocb_gp(struct rcu_data *rdp, bool force,
 		del_timer(&rdp->nocb_timer);
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 		smp_mb(); /* enqueue before ->nocb_gp_sleep. */
-		rcu_nocb_lock_irqsave(rdp_gp, flags);
+		raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags);
 		WRITE_ONCE(rdp_gp->nocb_gp_sleep, false);
-		rcu_nocb_unlock_irqrestore(rdp_gp, flags);
+		raw_spin_unlock_irqrestore(&rdp_gp->nocb_gp_lock, flags);
 		wake_up_process(rdp_gp->nocb_gp_kthread);
 	} else {
 		rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1759,9 +1759,9 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		trace_rcu_this_gp(rnp, my_rdp, wait_gp_seq, TPS("EndWait"));
 	}
 	if (!rcu_nocb_poll) {
-		rcu_nocb_lock_irqsave(my_rdp, flags);
+		raw_spin_lock_irqsave(&my_rdp->nocb_gp_lock, flags);
 		WRITE_ONCE(my_rdp->nocb_gp_sleep, true);
-		rcu_nocb_unlock_irqrestore(my_rdp, flags);
+		raw_spin_unlock_irqrestore(&my_rdp->nocb_gp_lock, flags);
 	}
 	WARN_ON(signal_pending(current));
 }
@@ -1941,6 +1941,7 @@ static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp)
 	init_swait_queue_head(&rdp->nocb_cb_wq);
 	init_swait_queue_head(&rdp->nocb_gp_wq);
 	raw_spin_lock_init(&rdp->nocb_lock);
+	raw_spin_lock_init(&rdp->nocb_gp_lock);
 	timer_setup(&rdp->nocb_timer, do_nocb_deferred_wakeup_timer, 0);
 }
 
-- 
2.17.1



* [PATCH tip/core/rcu 10/10] rcu/nocb: Unconditionally advance and wake for excessive CBs
  2019-08-01 23:16 [PATCH tip/core/rcu 0/10] No-CBs contention-reduction updates for v5.3-rc2 Paul E. McKenney
                   ` (8 preceding siblings ...)
  2019-08-01 23:16 ` [PATCH tip/core/rcu 09/10] rcu/nocb: Reduce ->nocb_lock contention with separate ->nocb_gp_lock Paul E. McKenney
@ 2019-08-01 23:16 ` Paul E. McKenney
  9 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2019-08-01 23:16 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

When there are excessive numbers of callbacks, and when either the
corresponding no-CBs callback kthread is asleep or there are no more
ready-to-invoke callbacks, and when at least one callback is pending,
__call_rcu_nocb_wake() will advance the callbacks, but refrain from
awakening the corresponding no-CBs grace-period kthread.  However,
because rcu_advance_cbs_nowake() is used, it is possible (if a bit
unlikely) that the needed advancement could not happen due to a grace
period not being in progress.  Plus there will always be at least one
pending callback due to one having just now been enqueued.

This commit therefore attempts to advance callbacks and awakens the
no-CBs grace-period kthread when there are excessive numbers of callbacks
posted and when the no-CBs callback kthread is not in a position to do
anything helpful.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree_plugin.h | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index af9cbc7d4784..e164d2c5fa93 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1666,13 +1666,19 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 		if (!rdp->nocb_cb_sleep &&
 		    rcu_segcblist_ready_cbs(&rdp->cblist)) {
 			// Already going full tilt, so don't try to rewake.
-		} else if (rcu_segcblist_pend_cbs(&rdp->cblist)) {
-			rcu_advance_cbs_nowake(rdp->mynode, rdp);
+			rcu_nocb_unlock_irqrestore(rdp, flags);
 		} else {
-			wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE,
-					   TPS("WakeOvfIsDeferred"));
+			rcu_advance_cbs_nowake(rdp->mynode, rdp);
+			if (!irqs_disabled_flags(flags)) {
+				wake_nocb_gp(rdp, false, flags);
+				trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
+						    TPS("WakeOvf"));
+			} else {
+				wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE,
+						   TPS("WakeOvfIsDeferred"));
+				rcu_nocb_unlock_irqrestore(rdp, flags);
+			}
 		}
-		rcu_nocb_unlock_irqrestore(rdp, flags);
 	} else {
 		trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot"));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
-- 
2.17.1

