* [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes
@ 2023-03-29 16:01 Frederic Weisbecker
  2023-03-29 16:02 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 16:01 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

Changes since v1 (https://lore.kernel.org/lkml/20230322194456.2331527-1-frederic@kernel.org/):

* Use mutex_trylock() to avoid an inverted dependency chain against
  allocations.

* WARN if an rdp is part of the nocb mask but is not offloaded.

Tested through the shrinker debugfs interface.

Frederic Weisbecker (4):
  rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  rcu/nocb: Fix shrinker race against callback enqueuer
  rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  rcu/nocb: Make shrinker to iterate only NOCB CPUs

 kernel/rcu/tree_nocb.h | 52 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 47 insertions(+), 5 deletions(-)

-- 
2.34.1



* [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-29 16:01 [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
@ 2023-03-29 16:02 ` Frederic Weisbecker
  2023-03-29 20:44   ` Paul E. McKenney
  2023-03-29 16:02 ` [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer Frederic Weisbecker
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 16:02 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

The shrinker may run concurrently with callback (de-)offloading. As
such, calling rcu_nocb_lock() is dangerous because it performs
conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
lock while rcu_nocb_unlock() eventually unlocks, or the reverse,
creating an imbalance.
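
To illustrate, here is a simplified sketch of what the conditional
locking boils down to (hypothetical struct and helpers, not the
kernel's exact code):

	/* Sketch only: stands in for rcu_data and the nocb lock helpers. */
	struct rdp_sketch {
		bool offloaded;			/* flipped by (de-)offloading */
		raw_spinlock_t nocb_lock;
	};

	static void sketch_nocb_lock(struct rdp_sketch *rdp)
	{
		if (rdp->offloaded)		/* lock is only taken while offloaded */
			raw_spin_lock(&rdp->nocb_lock);
	}

	static void sketch_nocb_unlock(struct rdp_sketch *rdp)
	{
		if (rdp->offloaded)		/* state may have changed since locking */
			raw_spin_unlock(&rdp->nocb_lock);
	}

If a (de-)offloading operation flips the offloaded state between the
two calls, the lock is either left held or released while not held.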

Fix this by protecting against (de-)offloading using the barrier mutex.
However, if the barrier mutex is contended, which should be rare, step
aside so as not to trigger a mutex vs. allocation dependency chain.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index f2280616f9d5..1a86883902ce 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1336,13 +1336,33 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	/*
+	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
+	 * may be ignored or imbalanced.
+	 */
+	if (!mutex_trylock(&rcu_state.barrier_mutex)) {
+		/*
+		 * But really don't insist if barrier_mutex is contended since we
+		 * can't guarantee that it will never engage in a dependency
+		 * chain involving memory allocation. The lock is seldom contended
+		 * anyway.
+		 */
+		return 0;
+	}
+
 	/* Snapshot count of all CPUs */
 	for_each_possible_cpu(cpu) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-		int _count = READ_ONCE(rdp->lazy_len);
+		int _count;
+
+		if (!rcu_rdp_is_offloaded(rdp))
+			continue;
+
+		_count = READ_ONCE(rdp->lazy_len);
 
 		if (_count == 0)
 			continue;
+
 		rcu_nocb_lock_irqsave(rdp, flags);
 		WRITE_ONCE(rdp->lazy_len, 0);
 		rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1352,6 +1372,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (sc->nr_to_scan <= 0)
 			break;
 	}
+
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_STOP;
 }
 
-- 
2.34.1



* [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer
  2023-03-29 16:01 [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
  2023-03-29 16:02 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
@ 2023-03-29 16:02 ` Frederic Weisbecker
  2023-03-29 20:47   ` Paul E. McKenney
  2023-03-29 16:02 ` [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker Frederic Weisbecker
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 16:02 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

The shrinker resets the lazy callbacks counter in order to trigger the
pending lazy queue flush through the rcuog kthread. The counter reset is
protected by the ->nocb_lock against concurrent accesses... except for
one of them. Here is a list of the existing synchronized readers/writers:

1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under
   ->nocb_lock and ->nocb_bypass_lock.

2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so
   under ->nocb_bypass_lock _only_.

3) The lazy flush checks and resets to 0 under ->nocb_lock and
   ->nocb_bypass_lock.

The shrinker protects its ->lazy_len reset against cases 1) and 3) but
not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock
may be cancelled right away by an overwrite from an enqueuer, leading
rcuog to ignore the flush.
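
A simplified sketch of the race (hypothetical names; the real code
lives in the bypass enqueue and flush paths):

	/* Sketch only: the two updaters hold *different* locks. */
	struct rdp_sketch {
		unsigned long lazy_len;
		raw_spinlock_t nocb_lock;
		raw_spinlock_t nocb_bypass_lock;
	};

	/* Case 2) above: a further lazy enqueuer, ->nocb_bypass_lock only. */
	static void sketch_lazy_enqueue(struct rdp_sketch *rdp)
	{
		raw_spin_lock(&rdp->nocb_bypass_lock);
		WRITE_ONCE(rdp->lazy_len, READ_ONCE(rdp->lazy_len) + 1);
		raw_spin_unlock(&rdp->nocb_bypass_lock);
	}

	/* The shrinker's reset: ->nocb_lock only. */
	static void sketch_shrinker_reset(struct rdp_sketch *rdp)
	{
		raw_spin_lock(&rdp->nocb_lock);
		WRITE_ONCE(rdp->lazy_len, 0);
		raw_spin_unlock(&rdp->nocb_lock);
	}

If the enqueuer samples ->lazy_len just before the reset and stores the
incremented stale value just after it, the zero never sticks and rcuog
never sees the flush request.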

To avoid that, use the proper bypass flush API which takes care of all
those details.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 1a86883902ce..c321fce2af8e 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1364,7 +1364,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 			continue;
 
 		rcu_nocb_lock_irqsave(rdp, flags);
-		WRITE_ONCE(rdp->lazy_len, 0);
+		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 		wake_nocb_gp(rdp, false);
 		sc->nr_to_scan -= _count;
-- 
2.34.1



* [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  2023-03-29 16:01 [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
  2023-03-29 16:02 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
  2023-03-29 16:02 ` [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer Frederic Weisbecker
@ 2023-03-29 16:02 ` Frederic Weisbecker
  2023-03-29 20:54   ` Paul E. McKenney
  2023-03-29 16:02 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
  2023-04-24 17:35 ` [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Paul E. McKenney
  4 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 16:02 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

The ->lazy_len is currently only checked locklessly. Recheck it under
the ->nocb_lock to avoid spending more time on flushing/waking when not
necessary. The ->lazy_len can still increment concurrently (from 1 to
infinity), but under the ->nocb_lock we at least know for sure whether
there are lazy callbacks at all (->lazy_len > 0).
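
The shape of the change, as a generic sketch (hypothetical names; the
actual code is in the diff below):

	/* Hypothetical object, for illustration only. */
	struct obj_sketch {
		unsigned long pending;
		spinlock_t lock;
	};

	static void sketch_maybe_flush(struct obj_sketch *obj)
	{
		if (!READ_ONCE(obj->pending))	/* lockless hint only: cheap skip */
			return;

		spin_lock(&obj->lock);
		if (READ_ONCE(obj->pending))	/* authoritative recheck under the lock */
			do_expensive_flush(obj);
		spin_unlock(&obj->lock);
	}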

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index c321fce2af8e..dfa9c10d6727 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1358,12 +1358,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (!rcu_rdp_is_offloaded(rdp))
 			continue;
 
+		if (!READ_ONCE(rdp->lazy_len))
+			continue;
+
+		rcu_nocb_lock_irqsave(rdp, flags);
+		/*
+		 * Recheck under the nocb lock. Since we are not holding the bypass
+		 * lock we may still race with increments from the enqueuer but still
+		 * we know for sure if there is at least one lazy callback.
+		 */
 		_count = READ_ONCE(rdp->lazy_len);
-
-		if (_count == 0)
+		if (!_count) {
+			rcu_nocb_unlock_irqrestore(rdp, flags);
 			continue;
-
-		rcu_nocb_lock_irqsave(rdp, flags);
+		}
 		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 		wake_nocb_gp(rdp, false);
-- 
2.34.1



* [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-29 16:01 [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2023-03-29 16:02 ` [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker Frederic Weisbecker
@ 2023-03-29 16:02 ` Frederic Weisbecker
  2023-03-29 20:58   ` Paul E. McKenney
  2023-04-24 17:35 ` [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Paul E. McKenney
  4 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 16:02 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

Callbacks can only be queued as lazy on NOCB CPUs, so iterating over
the NOCB mask is enough for both counting and scanning. Also lock the
mostly uncontended barrier mutex when counting in order to keep
rcu_nocb_mask stable.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index dfa9c10d6727..43229d2b0c44 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1319,13 +1319,22 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 	int cpu;
 	unsigned long count = 0;
 
+	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+		return 0;
+
+	/*  Protect rcu_nocb_mask against concurrent (de-)offloading. */
+	if (!mutex_trylock(&rcu_state.barrier_mutex))
+		return 0;
+
 	/* Snapshot count of all CPUs */
-	for_each_possible_cpu(cpu) {
+	for_each_cpu(cpu, rcu_nocb_mask) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 
 		count +=  READ_ONCE(rdp->lazy_len);
 	}
 
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_EMPTY;
 }
 
@@ -1336,6 +1345,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+		return 0;
 	/*
 	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
 	 * may be ignored or imbalanced.
@@ -1351,11 +1362,11 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	}
 
 	/* Snapshot count of all CPUs */
-	for_each_possible_cpu(cpu) {
+	for_each_cpu(cpu, rcu_nocb_mask) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 		int _count;
 
-		if (!rcu_rdp_is_offloaded(rdp))
+		if (WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp)))
 			continue;
 
 		if (!READ_ONCE(rdp->lazy_len))
-- 
2.34.1



* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-29 16:02 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
@ 2023-03-29 20:44   ` Paul E. McKenney
  2023-03-29 21:18     ` Frederic Weisbecker
  0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 20:44 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 06:02:00PM +0200, Frederic Weisbecker wrote:
> The shrinker may run concurrently with callbacks (de-)offloading. As
> such, calling rcu_nocb_lock() is very dangerous because it does a
> conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> an imbalance.
> 
> Fix this with protecting against (de-)offloading using the barrier mutex.
> Although if the barrier mutex is contended, which should be rare, then
> step aside so as not to trigger a mutex VS allocation
> dependency chain.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 25 ++++++++++++++++++++++++-
>  1 file changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f2280616f9d5..1a86883902ce 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1336,13 +1336,33 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	/*
> +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> +	 * may be ignored or imbalanced.
> +	 */
> +	if (!mutex_trylock(&rcu_state.barrier_mutex)) {

This looks much better, thank you!

> +		/*
> +		 * But really don't insist if barrier_mutex is contended since we
> +		 * can't guarantee that it will never engage in a dependency
> +		 * chain involving memory allocation. The lock is seldom contended
> +		 * anyway.
> +		 */
> +		return 0;
> +	}
> +
>  	/* Snapshot count of all CPUs */
>  	for_each_possible_cpu(cpu) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> -		int _count = READ_ONCE(rdp->lazy_len);
> +		int _count;
> +
> +		if (!rcu_rdp_is_offloaded(rdp))
> +			continue;
> +
> +		_count = READ_ONCE(rdp->lazy_len);
>  
>  		if (_count == 0)
>  			continue;
> +

And I just might have unconfused myself here.  We get here only if this
CPU is offloaded, in which case it might also have non-zero ->lazy_len,
so this is in fact *not* dead code.

>  		rcu_nocb_lock_irqsave(rdp, flags);
>  		WRITE_ONCE(rdp->lazy_len, 0);
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
> @@ -1352,6 +1372,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  		if (sc->nr_to_scan <= 0)
>  			break;
>  	}
> +
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_STOP;
>  }
>  
> -- 
> 2.34.1
> 


* Re: [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer
  2023-03-29 16:02 ` [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer Frederic Weisbecker
@ 2023-03-29 20:47   ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 20:47 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 06:02:01PM +0200, Frederic Weisbecker wrote:
> The shrinker resets the lazy callbacks counter in order to trigger the
> pending lazy queue flush though the rcuog kthread. The counter reset is
> protected by the ->nocb_lock against concurrent accesses...except
> for one of them. Here is a list of existing synchronized readers/writer:
> 
> 1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under
>    ->nocb_lock and ->nocb_bypass_lock.
> 
> 2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so
>    under ->nocb_bypass_lock _only_.
> 
> 3) The lazy flush checks and resets to 0 under ->nocb_lock and
> 	->nocb_bypass_lock.
> 
> The shrinker protects its ->lazy_len reset against cases 1) and 3) but
> not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock
> may be cancelled right away by an overwrite from an enqueuer, leading
> rcuog to ignore the flush.
> 
> To avoid that, use the proper bypass flush API which takes care of all
> those details.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 1a86883902ce..c321fce2af8e 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1364,7 +1364,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  			continue;
>  
>  		rcu_nocb_lock_irqsave(rdp, flags);
> -		WRITE_ONCE(rdp->lazy_len, 0);
> +		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));

And I do feel much better about this version.  ;-)

>  		rcu_nocb_unlock_irqrestore(rdp, flags);
>  		wake_nocb_gp(rdp, false);
>  		sc->nr_to_scan -= _count;
> -- 
> 2.34.1
> 


* Re: [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  2023-03-29 16:02 ` [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker Frederic Weisbecker
@ 2023-03-29 20:54   ` Paul E. McKenney
  2023-03-29 21:22     ` Frederic Weisbecker
  0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 20:54 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 06:02:02PM +0200, Frederic Weisbecker wrote:
> The ->lazy_len is only checked locklessly. Recheck again under the
> ->nocb_lock to avoid spending more time on flushing/waking if not
> necessary. The ->lazy_len can still increment concurrently (from 1 to
> infinity) but under the ->nocb_lock we at least know for sure if there
> are lazy callbacks at all (->lazy_len > 0).
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index c321fce2af8e..dfa9c10d6727 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1358,12 +1358,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  		if (!rcu_rdp_is_offloaded(rdp))
>  			continue;
>  
> +		if (!READ_ONCE(rdp->lazy_len))
> +			continue;

Do you depend on the ordering of the above read of ->lazy_len against
anything in the following, aside from the re-read of ->lazy_len?  (Same
variable, both READ_ONCE() or stronger, so you do get that ordering.)

If you do need that ordering, the above READ_ONCE() needs to instead
be smp_load_acquire() or similar.  If you don't need that ordering,
what you have is good.
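
As a generic illustration of the distinction (made-up variables, not
this patch's code):

	/*
	 * Same variable: program order plus cache coherence already
	 * orders the two loads, so READ_ONCE() on both sides is enough.
	 */
	a = READ_ONCE(x);
	b = READ_ONCE(x);		/* cannot see an older value of x than 'a' did */

	/*
	 * Different variable: making the later load ordered after the
	 * earlier one needs an acquire (or stronger).
	 */
	a = smp_load_acquire(&x);
	c = READ_ONCE(y);		/* ordered after the load of x */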

> +		rcu_nocb_lock_irqsave(rdp, flags);
> +		/*
> +		 * Recheck under the nocb lock. Since we are not holding the bypass
> +		 * lock we may still race with increments from the enqueuer but still
> +		 * we know for sure if there is at least one lazy callback.
> +		 */
>  		_count = READ_ONCE(rdp->lazy_len);
> -
> -		if (_count == 0)
> +		if (!_count) {
> +			rcu_nocb_unlock_irqrestore(rdp, flags);
>  			continue;
> -
> -		rcu_nocb_lock_irqsave(rdp, flags);
> +		}
>  		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
>  		wake_nocb_gp(rdp, false);
> -- 
> 2.34.1
> 


* Re: [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-29 16:02 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
@ 2023-03-29 20:58   ` Paul E. McKenney
  2023-03-29 21:35     ` Frederic Weisbecker
  0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 20:58 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 06:02:03PM +0200, Frederic Weisbecker wrote:
> Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
> over the NOCB mask is enough for both counting and scanning. Just lock
> the mostly uncontended barrier mutex on counting as well in order to
> keep rcu_nocb_mask stable.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Looks plausible.  ;-)

What are you doing to test this?  For that matter, what should rcutorture
be doing to test this?  My guess is that the current callback flooding in
rcu_torture_fwd_prog_cr() should do the trick, but figured I should ask.

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index dfa9c10d6727..43229d2b0c44 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1319,13 +1319,22 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  	int cpu;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
> +
> +	/*  Protect rcu_nocb_mask against concurrent (de-)offloading. */
> +	if (!mutex_trylock(&rcu_state.barrier_mutex))
> +		return 0;
> +
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  
>  		count +=  READ_ONCE(rdp->lazy_len);
>  	}
>  
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_EMPTY;
>  }
>  
> @@ -1336,6 +1345,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
>  	/*
>  	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
>  	 * may be ignored or imbalanced.
> @@ -1351,11 +1362,11 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	}
>  
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  		int _count;
>  
> -		if (!rcu_rdp_is_offloaded(rdp))
> +		if (WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp)))
>  			continue;
>  
>  		if (!READ_ONCE(rdp->lazy_len))
> -- 
> 2.34.1
> 


* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-29 20:44   ` Paul E. McKenney
@ 2023-03-29 21:18     ` Frederic Weisbecker
  0 siblings, 0 replies; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 21:18 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 01:44:53PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 29, 2023 at 06:02:00PM +0200, Frederic Weisbecker wrote:
> > +		/*
> > +		 * But really don't insist if barrier_mutex is contended since we
> > +		 * can't guarantee that it will never engage in a dependency
> > +		 * chain involving memory allocation. The lock is seldom contended
> > +		 * anyway.
> > +		 */
> > +		return 0;
> > +	}
> > +
> >  	/* Snapshot count of all CPUs */
> >  	for_each_possible_cpu(cpu) {
> >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > -		int _count = READ_ONCE(rdp->lazy_len);
> > +		int _count;
> > +
> > +		if (!rcu_rdp_is_offloaded(rdp))
> > +			continue;
> > +
> > +		_count = READ_ONCE(rdp->lazy_len);
> >  
> >  		if (_count == 0)
> >  			continue;
> > +
> 
> And I just might have unconfused myself here.  We get here only if this
> CPU is offloaded, in which case it might also have non-zero ->lazy_len,
> so this is in fact *not* dead code.

Right. Now whether it's really alive remains to be proven ;)


* Re: [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  2023-03-29 20:54   ` Paul E. McKenney
@ 2023-03-29 21:22     ` Frederic Weisbecker
  2023-03-29 21:38       ` Paul E. McKenney
  0 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 21:22 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 01:54:20PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 29, 2023 at 06:02:02PM +0200, Frederic Weisbecker wrote:
> > The ->lazy_len is only checked locklessly. Recheck again under the
> > ->nocb_lock to avoid spending more time on flushing/waking if not
> > necessary. The ->lazy_len can still increment concurrently (from 1 to
> > infinity) but under the ->nocb_lock we at least know for sure if there
> > are lazy callbacks at all (->lazy_len > 0).
> > 
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > ---
> >  kernel/rcu/tree_nocb.h | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> > 
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index c321fce2af8e..dfa9c10d6727 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -1358,12 +1358,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  		if (!rcu_rdp_is_offloaded(rdp))
> >  			continue;
> >  
> > +		if (!READ_ONCE(rdp->lazy_len))
> > +			continue;
> 
> Do you depend on the ordering of the above read of ->lazy_len against
> anything in the following, aside from the re-read of ->lazy_len?  (Same
> variable, both READ_ONCE() or stronger, so you do get that ordering.)
> 
> If you do need that ordering, the above READ_ONCE() needs to instead
> be smp_load_acquire() or similar.  If you don't need that ordering,
> what you have is good.

No ordering dependency intended here. The early ->lazy_len read is really just
an optimization to avoid locking if it *seems* there is nothing to do with this
rdp. But what follows doesn't depend on that read.

Thanks.


* Re: [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-29 20:58   ` Paul E. McKenney
@ 2023-03-29 21:35     ` Frederic Weisbecker
  2023-03-29 23:12       ` Paul E. McKenney
  0 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 21:35 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 01:58:06PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 29, 2023 at 06:02:03PM +0200, Frederic Weisbecker wrote:
> > Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
> > over the NOCB mask is enough for both counting and scanning. Just lock
> > the mostly uncontended barrier mutex on counting as well in order to
> > keep rcu_nocb_mask stable.
> > 
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> 
> Looks plausible.  ;-)
> 
> What are you doing to test this?  For that matter, what should rcutorture
> be doing to test this?  My guess is that the current callback flooding in
> rcu_torture_fwd_prog_cr() should do the trick, but figured I should ask.

All I did was to trigger these shrinker callbacks through debugfs
(https://docs.kernel.org/admin-guide/mm/shrinker_debugfs.html)

But rcutorture isn't testing it because:

- No torture config has CONFIG_RCU_LAZY
- rcutorture doesn't do any lazy call_rcu() (always calls hurry for the
  main RCU flavour).

And I suspect rcutorture isn't ready to accept the lazy delay; that would
require some special treatment.

Thanks.


* Re: [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  2023-03-29 21:22     ` Frederic Weisbecker
@ 2023-03-29 21:38       ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 21:38 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 11:22:45PM +0200, Frederic Weisbecker wrote:
> On Wed, Mar 29, 2023 at 01:54:20PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 29, 2023 at 06:02:02PM +0200, Frederic Weisbecker wrote:
> > > The ->lazy_len is only checked locklessly. Recheck again under the
> > > ->nocb_lock to avoid spending more time on flushing/waking if not
> > > necessary. The ->lazy_len can still increment concurrently (from 1 to
> > > infinity) but under the ->nocb_lock we at least know for sure if there
> > > are lazy callbacks at all (->lazy_len > 0).
> > > 
> > > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > > ---
> > >  kernel/rcu/tree_nocb.h | 16 ++++++++++++----
> > >  1 file changed, 12 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > > index c321fce2af8e..dfa9c10d6727 100644
> > > --- a/kernel/rcu/tree_nocb.h
> > > +++ b/kernel/rcu/tree_nocb.h
> > > @@ -1358,12 +1358,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  		if (!rcu_rdp_is_offloaded(rdp))
> > >  			continue;
> > >  
> > > +		if (!READ_ONCE(rdp->lazy_len))
> > > +			continue;
> > 
> > Do you depend on the ordering of the above read of ->lazy_len against
> > anything in the following, aside from the re-read of ->lazy_len?  (Same
> > variable, both READ_ONCE() or stronger, so you do get that ordering.)
> > 
> > If you do need that ordering, the above READ_ONCE() needs to instead
> > be smp_load_acquire() or similar.  If you don't need that ordering,
> > what you have is good.
> 
> No ordering dependency intended here. The early ->lazy_len read is really just
> an optimization here to avoid locking if it *seems* there is nothing to do with
> this rdp. But what follows doesn't depend on that read.

Full steam ahead with READ_ONCE(), then!  ;-)

							Thanx, Paul


* Re: [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-29 21:35     ` Frederic Weisbecker
@ 2023-03-29 23:12       ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 23:12 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 11:35:36PM +0200, Frederic Weisbecker wrote:
> On Wed, Mar 29, 2023 at 01:58:06PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 29, 2023 at 06:02:03PM +0200, Frederic Weisbecker wrote:
> > > Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
> > > over the NOCB mask is enough for both counting and scanning. Just lock
> > > the mostly uncontended barrier mutex on counting as well in order to
> > > keep rcu_nocb_mask stable.
> > > 
> > > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > 
> > Looks plausible.  ;-)
> > 
> > What are you doing to test this?  For that matter, what should rcutorture
> > be doing to test this?  My guess is that the current callback flooding in
> > rcu_torture_fwd_prog_cr() should do the trick, but figured I should ask.
> 
> All I did was to trigger these shrinker callbacks through debugfs
> (https://docs.kernel.org/admin-guide/mm/shrinker_debugfs.html)
> 
> But rcutorture isn't testing it because:
> 
> - No torture config has CONFIG_RCU_LAZY
> - rcutorture doesn't do any lazy call_rcu() (always calls hurry for the
>   main RCU flavour).
> 
> And I suspect rcutorture isn't ready for accepting the lazy delay, that would
> require some special treatment.

All fair points!

And yes, any non-lazy callback would delazify everything, so as it
is currently constituted, it would not be testing very much of the
lazy-callback state space.

							Thanx, Paul


* Re: [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes
  2023-03-29 16:01 [PATCH 0/4 v2] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2023-03-29 16:02 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
@ 2023-04-24 17:35 ` Paul E. McKenney
  4 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-04-24 17:35 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 06:01:59PM +0200, Frederic Weisbecker wrote:
> Changes since v1 (https://lore.kernel.org/lkml/20230322194456.2331527-1-frederic@kernel.org/):
> 
> * Use mutex_trylock() to avoid inverted dependency chain against
>   allocations.
> 
> * WARN if an rdp is part of nocb mask but is not offloaded
> 
> Tested through shrinker debugfs interface.

I pulled this one in, thank you!

As discussed, we do need some way to test lazy callbacks, but that should
not block this series.  And it might well be a separate test.

							Thanx, Paul

> Frederic Weisbecker (4):
>   rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
>   rcu/nocb: Fix shrinker race against callback enqueuer
>   rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
>   rcu/nocb: Make shrinker to iterate only NOCB CPUs
> 
>  kernel/rcu/tree_nocb.h | 52 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 47 insertions(+), 5 deletions(-)
> 
> -- 
> 2.34.1
> 


* Re: [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-22 19:44 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
@ 2023-03-24  0:41   ` Joel Fernandes
  0 siblings, 0 replies; 17+ messages in thread
From: Joel Fernandes @ 2023-03-24  0:41 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Paul E . McKenney, LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng

On Wed, Mar 22, 2023 at 08:44:56PM +0100, Frederic Weisbecker wrote:
> Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
> over the NOCB mask is enough for both counting and scanning. Just lock
> the mostly uncontended barrier mutex on counting as well in order to
> keep rcu_nocb_mask stable.
> 

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

thanks,

 - Joel


> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index a3dc7465b0b2..185c0c9a60d4 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  	int cpu;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
> +
> +	/*  Protect rcu_nocb_mask against concurrent (de-)offloading. */
> +	mutex_lock(&rcu_state.barrier_mutex);
> +
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  
>  		count +=  READ_ONCE(rdp->lazy_len);
>  	}
>  
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_EMPTY;
>  }
>  
> @@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
>  	/*
>  	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
>  	 * may be ignored or imbalanced.
> @@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	mutex_lock(&rcu_state.barrier_mutex);
>  
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  		int _count;
>  
> -- 
> 2.34.1
> 


* [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-22 19:44 [PATCH 0/4] " Frederic Weisbecker
@ 2023-03-22 19:44 ` Frederic Weisbecker
  2023-03-24  0:41   ` Joel Fernandes
  0 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-22 19:44 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
over the NOCB mask is enough for both counting and scanning. Just lock
the mostly uncontended barrier mutex on counting as well in order to
keep rcu_nocb_mask stable.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index a3dc7465b0b2..185c0c9a60d4 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 	int cpu;
 	unsigned long count = 0;
 
+	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+		return 0;
+
+	/*  Protect rcu_nocb_mask against concurrent (de-)offloading. */
+	mutex_lock(&rcu_state.barrier_mutex);
+
 	/* Snapshot count of all CPUs */
-	for_each_possible_cpu(cpu) {
+	for_each_cpu(cpu, rcu_nocb_mask) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 
 		count +=  READ_ONCE(rdp->lazy_len);
 	}
 
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_EMPTY;
 }
 
@@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+		return 0;
 	/*
 	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
 	 * may be ignored or imbalanced.
@@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	mutex_lock(&rcu_state.barrier_mutex);
 
 	/* Snapshot count of all CPUs */
-	for_each_possible_cpu(cpu) {
+	for_each_cpu(cpu, rcu_nocb_mask) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 		int _count;
 
-- 
2.34.1



