* [PATCH 0/4] rcu/nocb: Shrinker related boring fixes
@ 2023-03-22 19:44 Frederic Weisbecker
  2023-03-22 19:44 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-22 19:44 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

Hi,

Just a couple of boring fixes related to the shrinker.

1/4: is related to (de-)offloading, so the issue currently only concerns
     rcutorture (which doesn't seem to test lazy callbacks, so it is hardly
     possible to trigger).

2/4: fixes an ignored flush, making the shrinker more efficient, but nothing
     critical.

3/4: optimization

4/4: optimization

As such, none of these patches carries a "Fixes:" tag because none of them
fixes an actual regression. Nothing worth backporting.

Frederic Weisbecker (4):
  rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  rcu/nocb: Fix shrinker race against callback enqueuer
  rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  rcu/nocb: Make shrinker to iterate only NOCB CPUs

 kernel/rcu/tree_nocb.h | 43 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

-- 
2.34.1


* [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-22 19:44 [PATCH 0/4] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
@ 2023-03-22 19:44 ` Frederic Weisbecker
  2023-03-22 23:18   ` Paul E. McKenney
  2023-03-22 19:44 ` [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer Frederic Weisbecker
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-22 19:44 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

The shrinker may run concurrently with callbacks (de-)offloading. As
such, calling rcu_nocb_lock() is very dangerous because it performs
conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
lock while rcu_nocb_unlock() eventually unlocks, or the reverse,
creating an imbalance.

Fix this by protecting against (de-)offloading with the barrier mutex.
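
For reference, the conditional locking boils down to roughly the
following (a simplified sketch, not the exact tree_nocb.h code):

	/* Sketch: the nocb lock is only taken when the rdp is offloaded. */
	static void rcu_nocb_lock(struct rcu_data *rdp)
	{
		if (rcu_rdp_is_offloaded(rdp))	/* state sampled here... */
			raw_spin_lock(&rdp->nocb_lock);
	}

	static void rcu_nocb_unlock(struct rcu_data *rdp)
	{
		if (rcu_rdp_is_offloaded(rdp))	/* ...may have changed here */
			raw_spin_unlock(&rdp->nocb_lock);
	}

If the offloaded state flips between the two calls, one of them becomes
a no-op while the other one doesn't.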

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index f2280616f9d5..dd9b655ae533 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	/*
+	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
+	 * may be ignored or imbalanced.
+	 */
+	mutex_lock(&rcu_state.barrier_mutex);
+
 	/* Snapshot count of all CPUs */
 	for_each_possible_cpu(cpu) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-		int _count = READ_ONCE(rdp->lazy_len);
+		int _count;
+
+		if (!rcu_rdp_is_offloaded(rdp))
+			continue;
+
+		_count = READ_ONCE(rdp->lazy_len);
 
 		if (_count == 0)
 			continue;
+
 		rcu_nocb_lock_irqsave(rdp, flags);
 		WRITE_ONCE(rdp->lazy_len, 0);
 		rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (sc->nr_to_scan <= 0)
 			break;
 	}
+
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_STOP;
 }
 
-- 
2.34.1


* [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer
  2023-03-22 19:44 [PATCH 0/4] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
  2023-03-22 19:44 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
@ 2023-03-22 19:44 ` Frederic Weisbecker
  2023-03-22 23:19   ` Paul E. McKenney
  2023-03-22 19:44 ` [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker Frederic Weisbecker
  2023-03-22 19:44 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
  3 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-22 19:44 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

The shrinker resets the lazy callbacks counter in order to trigger the
pending lazy queue flush through the rcuog kthread. The counter reset is
protected by the ->nocb_lock against concurrent accesses... except for
one of them. Here is a list of the existing synchronized readers/writers:

1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under
   ->nocb_lock and ->nocb_bypass_lock.

2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so
   under ->nocb_bypass_lock _only_.

3) The lazy flush checks and resets ->lazy_len to 0 under both ->nocb_lock
   and ->nocb_bypass_lock.

The shrinker protects its ->lazy_len reset against cases 1) and 3) but
not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock
may be cancelled right away by an overwrite from an enqueuer, leading
rcuog to ignore the flush.
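
For illustration, the lost reset amounts to this assumed interleaving
(a simplification, not the actual tree_nocb.h code):

	/*
	 *   Enqueuer                            Shrinker
	 *   (->nocb_bypass_lock held)           (->nocb_lock held)
	 *   -------------------------           ------------------
	 *   reads ->lazy_len == N
	 *                                       WRITE_ONCE(rdp->lazy_len, 0);
	 *   WRITE_ONCE(rdp->lazy_len, N + 1);
	 *
	 * The enqueuer's non-atomic read-modify-write overwrites the reset,
	 * so rcuog never notices that a flush was requested.
	 */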

To avoid that, use the proper bypass flush API which takes care of all
those details.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index dd9b655ae533..cb57e8312231 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1356,7 +1356,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 			continue;
 
 		rcu_nocb_lock_irqsave(rdp, flags);
-		WRITE_ONCE(rdp->lazy_len, 0);
+		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 		wake_nocb_gp(rdp, false);
 		sc->nr_to_scan -= _count;
-- 
2.34.1


* [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  2023-03-22 19:44 [PATCH 0/4] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
  2023-03-22 19:44 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
  2023-03-22 19:44 ` [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer Frederic Weisbecker
@ 2023-03-22 19:44 ` Frederic Weisbecker
  2023-03-22 23:21   ` Paul E. McKenney
  2023-03-22 19:44 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
  3 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-22 19:44 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

The ->lazy_len is currently only checked locklessly. Recheck it under the
->nocb_lock to avoid spending time on flushing/waking when it is not
necessary. The ->lazy_len can still be incremented concurrently (from 1 to
infinity), but under the ->nocb_lock we at least know for sure whether
there are lazy callbacks at all (->lazy_len > 0).

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index cb57e8312231..a3dc7465b0b2 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1350,12 +1350,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (!rcu_rdp_is_offloaded(rdp))
 			continue;
 
+		if (!READ_ONCE(rdp->lazy_len))
+			continue;
+
+		rcu_nocb_lock_irqsave(rdp, flags);
+		/*
+		 * Recheck under the nocb lock. Since we are not holding the bypass
+		 * lock we may still race with increments from the enqueuer but still
+		 * we know for sure if there is at least one lazy callback.
+		 */
 		_count = READ_ONCE(rdp->lazy_len);
-
-		if (_count == 0)
+		if (!_count) {
+			rcu_nocb_unlock_irqrestore(rdp, flags);
 			continue;
-
-		rcu_nocb_lock_irqsave(rdp, flags);
+		}
 		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
 		wake_nocb_gp(rdp, false);
-- 
2.34.1


* [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-22 19:44 [PATCH 0/4] rcu/nocb: Shrinker related boring fixes Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2023-03-22 19:44 ` [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker Frederic Weisbecker
@ 2023-03-22 19:44 ` Frederic Weisbecker
  2023-03-24  0:41   ` Joel Fernandes
  3 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-22 19:44 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng, Joel Fernandes

Callbacks can only be queued as lazy on NOCB CPUs; therefore iterating
over the NOCB mask is enough for both counting and scanning. Also take
the mostly uncontended barrier mutex on the counting side, in order to
keep rcu_nocb_mask stable.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index a3dc7465b0b2..185c0c9a60d4 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 	int cpu;
 	unsigned long count = 0;
 
+	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+		return 0;
+
+	/*  Protect rcu_nocb_mask against concurrent (de-)offloading. */
+	mutex_lock(&rcu_state.barrier_mutex);
+
 	/* Snapshot count of all CPUs */
-	for_each_possible_cpu(cpu) {
+	for_each_cpu(cpu, rcu_nocb_mask) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 
 		count +=  READ_ONCE(rdp->lazy_len);
 	}
 
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_EMPTY;
 }
 
@@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+		return 0;
 	/*
 	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
 	 * may be ignored or imbalanced.
@@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	mutex_lock(&rcu_state.barrier_mutex);
 
 	/* Snapshot count of all CPUs */
-	for_each_possible_cpu(cpu) {
+	for_each_cpu(cpu, rcu_nocb_mask) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 		int _count;
 
-- 
2.34.1


* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-22 19:44 ` [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading Frederic Weisbecker
@ 2023-03-22 23:18   ` Paul E. McKenney
  2023-03-24  0:55     ` Joel Fernandes
  2023-03-24 22:09     ` Frederic Weisbecker
  0 siblings, 2 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-22 23:18 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> The shrinker may run concurrently with callbacks (de-)offloading. As
> such, calling rcu_nocb_lock() is very dangerous because it does a
> conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> an imbalance.
> 
> Fix this with protecting against (de-)offloading using the barrier mutex.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Good catch!!!  A few questions, comments, and speculations below.

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f2280616f9d5..dd9b655ae533 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	/*
> +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> +	 * may be ignored or imbalanced.
> +	 */
> +	mutex_lock(&rcu_state.barrier_mutex);

I was worried about this possibly leading to out-of-memory deadlock,
but if I recall correctly, the (de-)offloading process never allocates
memory, so this should be OK?

The other concern was that the (de-)offloading operation might take a
long time, but the usual cause for that is huge numbers of callbacks,
in which case letting them free their memory is not necessarily a bad
strategy.

> +
>  	/* Snapshot count of all CPUs */
>  	for_each_possible_cpu(cpu) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> -		int _count = READ_ONCE(rdp->lazy_len);
> +		int _count;
> +
> +		if (!rcu_rdp_is_offloaded(rdp))
> +			continue;

If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?

Or can it contain garbage after a de-offloading operation?

> +		_count = READ_ONCE(rdp->lazy_len);
>  
>  		if (_count == 0)
>  			continue;
> +
>  		rcu_nocb_lock_irqsave(rdp, flags);
>  		WRITE_ONCE(rdp->lazy_len, 0);
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
> @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  		if (sc->nr_to_scan <= 0)
>  			break;
>  	}
> +
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_STOP;
>  }
>  
> -- 
> 2.34.1
> 

* Re: [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer
  2023-03-22 19:44 ` [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer Frederic Weisbecker
@ 2023-03-22 23:19   ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-22 23:19 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 22, 2023 at 08:44:54PM +0100, Frederic Weisbecker wrote:
> The shrinker resets the lazy callbacks counter in order to trigger the
> pending lazy queue flush though the rcuog kthread. The counter reset is
> protected by the ->nocb_lock against concurrent accesses...except
> for one of them. Here is a list of existing synchronized readers/writer:
> 
> 1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under
>    ->nocb_lock and ->nocb_bypass_lock.
> 
> 2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so
>    under ->nocb_bypass_lock _only_.
> 
> 3) The lazy flush checks and resets to 0 under ->nocb_lock and
> 	->nocb_bypass_lock.
> 
> The shrinker protects its ->lazy_len reset against cases 1) and 3) but
> not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock
> may be cancelled right away by an overwrite from an enqueuer, leading
> rcuog to ignore the flush.
> 
> To avoid that, use the proper bypass flush API which takes care of all
> those details.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Again, good catch, and this one looks good to me.

So what am I missing?  ;-)

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index dd9b655ae533..cb57e8312231 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1356,7 +1356,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  			continue;
>  
>  		rcu_nocb_lock_irqsave(rdp, flags);
> -		WRITE_ONCE(rdp->lazy_len, 0);
> +		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
>  		wake_nocb_gp(rdp, false);
>  		sc->nr_to_scan -= _count;
> -- 
> 2.34.1
> 

* Re: [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
  2023-03-22 19:44 ` [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker Frederic Weisbecker
@ 2023-03-22 23:21   ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-22 23:21 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 22, 2023 at 08:44:55PM +0100, Frederic Weisbecker wrote:
> The ->lazy_len is only checked locklessly. Recheck again under the
> ->nocb_lock to avoid spending more time on flushing/waking if not
> necessary. The ->lazy_len can still increment concurrently (from 1 to
> infinity) but under the ->nocb_lock we at least know for sure if there
> are lazy callbacks at all (->lazy_len > 0).
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

This one looks plausible, and might hold the answer to earlier
questions.

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index cb57e8312231..a3dc7465b0b2 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1350,12 +1350,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  		if (!rcu_rdp_is_offloaded(rdp))
>  			continue;
>  
> +		if (!READ_ONCE(rdp->lazy_len))
> +			continue;
> +
> +		rcu_nocb_lock_irqsave(rdp, flags);
> +		/*
> +		 * Recheck under the nocb lock. Since we are not holding the bypass
> +		 * lock we may still race with increments from the enqueuer but still
> +		 * we know for sure if there is at least one lazy callback.
> +		 */
>  		_count = READ_ONCE(rdp->lazy_len);
> -
> -		if (_count == 0)
> +		if (!_count) {
> +			rcu_nocb_unlock_irqrestore(rdp, flags);
>  			continue;
> -
> -		rcu_nocb_lock_irqsave(rdp, flags);
> +		}
>  		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
>  		wake_nocb_gp(rdp, false);
> -- 
> 2.34.1
> 

* Re: [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs
  2023-03-22 19:44 ` [PATCH 4/4] rcu/nocb: Make shrinker to iterate only NOCB CPUs Frederic Weisbecker
@ 2023-03-24  0:41   ` Joel Fernandes
  0 siblings, 0 replies; 17+ messages in thread
From: Joel Fernandes @ 2023-03-24  0:41 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Paul E . McKenney, LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay,
	Boqun Feng

On Wed, Mar 22, 2023 at 08:44:56PM +0100, Frederic Weisbecker wrote:
> Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
> over the NOCB mask is enough for both counting and scanning. Just lock
> the mostly uncontended barrier mutex on counting as well in order to
> keep rcu_nocb_mask stable.
> 

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

thanks,

 - Joel


> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index a3dc7465b0b2..185c0c9a60d4 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  	int cpu;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
> +
> +	/*  Protect rcu_nocb_mask against concurrent (de-)offloading. */
> +	mutex_lock(&rcu_state.barrier_mutex);
> +
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  
>  		count +=  READ_ONCE(rdp->lazy_len);
>  	}
>  
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_EMPTY;
>  }
>  
> @@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
>  	/*
>  	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
>  	 * may be ignored or imbalanced.
> @@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	mutex_lock(&rcu_state.barrier_mutex);
>  
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  		int _count;
>  
> -- 
> 2.34.1
> 

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-22 23:18   ` Paul E. McKenney
@ 2023-03-24  0:55     ` Joel Fernandes
  2023-03-24  1:06       ` Paul E. McKenney
  2023-03-24 22:09     ` Frederic Weisbecker
  1 sibling, 1 reply; 17+ messages in thread
From: Joel Fernandes @ 2023-03-24  0:55 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Frederic Weisbecker, LKML, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng

On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> > The shrinker may run concurrently with callbacks (de-)offloading. As
> > such, calling rcu_nocb_lock() is very dangerous because it does a
> > conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> > lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> > an imbalance.
> > 
> > Fix this with protecting against (de-)offloading using the barrier mutex.
> > 
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> 
> Good catch!!!  A few questions, comments, and speculations below.

Added a few more. ;)

> > ---
> >  kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index f2280616f9d5..dd9b655ae533 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  	unsigned long flags;
> >  	unsigned long count = 0;
> >  
> > +	/*
> > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > +	 * may be ignored or imbalanced.
> > +	 */
> > +	mutex_lock(&rcu_state.barrier_mutex);
> 
> I was worried about this possibly leading to out-of-memory deadlock,
> but if I recall correctly, the (de-)offloading process never allocates
> memory, so this should be OK?

Maybe trylock is better then? If we can't make progress, it may be better
to let kswapd free memory by other means than blocking on the mutex.

ISTR from my Android days that there are weird lockdep issues that can
happen when locking in a shrinker (due to the 'fake lock' dependency added
during reclaim).

> The other concern was that the (de-)offloading operation might take a
> long time, but the usual cause for that is huge numbers of callbacks,
> in which case letting them free their memory is not necessarily a bad
> strategy.
> 
> > +
> >  	/* Snapshot count of all CPUs */
> >  	for_each_possible_cpu(cpu) {
> >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > -		int _count = READ_ONCE(rdp->lazy_len);
> > +		int _count;
> > +
> > +		if (!rcu_rdp_is_offloaded(rdp))
> > +			continue;
> 
> If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?

Did you mean de-offloaded? If it is offloaded, that means nocb is active so
there could be lazy CBs queued. Or did I miss something?

thanks,

 - Joel


> Or can it contain garbage after a de-offloading operation?
> 
> > +		_count = READ_ONCE(rdp->lazy_len);
> >  
> >  		if (_count == 0)
> >  			continue;
> > +
> >  		rcu_nocb_lock_irqsave(rdp, flags);
> >  		WRITE_ONCE(rdp->lazy_len, 0);
> >  		rcu_nocb_unlock_irqrestore(rdp, flags);
> > @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  		if (sc->nr_to_scan <= 0)
> >  			break;
> >  	}
> > +
> > +	mutex_unlock(&rcu_state.barrier_mutex);
> > +
> >  	return count ? count : SHRINK_STOP;
> >  }
> >  
> > -- 
> > 2.34.1
> > 

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-24  0:55     ` Joel Fernandes
@ 2023-03-24  1:06       ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-24  1:06 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Frederic Weisbecker, LKML, rcu, Uladzislau Rezki,
	Neeraj Upadhyay, Boqun Feng

On Fri, Mar 24, 2023 at 12:55:23AM +0000, Joel Fernandes wrote:
> On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> > > The shrinker may run concurrently with callbacks (de-)offloading. As
> > > such, calling rcu_nocb_lock() is very dangerous because it does a
> > > conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> > > lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> > > an imbalance.
> > > 
> > > Fix this with protecting against (de-)offloading using the barrier mutex.
> > > 
> > > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > 
> > Good catch!!!  A few questions, comments, and speculations below.
> 
> Added a few more. ;)
> 
> > > ---
> > >  kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> > >  1 file changed, 16 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > > index f2280616f9d5..dd9b655ae533 100644
> > > --- a/kernel/rcu/tree_nocb.h
> > > +++ b/kernel/rcu/tree_nocb.h
> > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  	unsigned long flags;
> > >  	unsigned long count = 0;
> > >  
> > > +	/*
> > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > +	 * may be ignored or imbalanced.
> > > +	 */
> > > +	mutex_lock(&rcu_state.barrier_mutex);
> > 
> > I was worried about this possibly leading to out-of-memory deadlock,
> > but if I recall correctly, the (de-)offloading process never allocates
> > memory, so this should be OK?
> 
> Maybe trylock is better then? If we can't make progress, may be better to let
> kswapd free memory by other means than blocking on the mutex.
> 
> ISTR, from my Android days that there are weird lockdep issues that happen
> when locking in a shrinker (due to the 'fake lock' dependency added during
> reclaim).

This stuff gets tricky quickly.  ;-)

> > The other concern was that the (de-)offloading operation might take a
> > long time, but the usual cause for that is huge numbers of callbacks,
> > in which case letting them free their memory is not necessarily a bad
> > strategy.
> > 
> > > +
> > >  	/* Snapshot count of all CPUs */
> > >  	for_each_possible_cpu(cpu) {
> > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > +		int _count;
> > > +
> > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > +			continue;
> > 
> > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> 
> Did you mean de-offloaded? If it is offloaded, that means nocb is active so
> there could be lazy CBs queued. Or did I miss something?

You are quite right, de-offloaded is what guarantees ->lazy_len to be zero.

							Thanx, Paul.

> thanks,
> 
>  - Joel
> 
> 
> > Or can it contain garbage after a de-offloading operation?
> > 
> > > +		_count = READ_ONCE(rdp->lazy_len);
> > >  
> > >  		if (_count == 0)
> > >  			continue;
> > > +
> > >  		rcu_nocb_lock_irqsave(rdp, flags);
> > >  		WRITE_ONCE(rdp->lazy_len, 0);
> > >  		rcu_nocb_unlock_irqrestore(rdp, flags);
> > > @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  		if (sc->nr_to_scan <= 0)
> > >  			break;
> > >  	}
> > > +
> > > +	mutex_unlock(&rcu_state.barrier_mutex);
> > > +
> > >  	return count ? count : SHRINK_STOP;
> > >  }
> > >  
> > > -- 
> > > 2.34.1
> > > 

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-22 23:18   ` Paul E. McKenney
  2023-03-24  0:55     ` Joel Fernandes
@ 2023-03-24 22:09     ` Frederic Weisbecker
  2023-03-24 22:51       ` Paul E. McKenney
  1 sibling, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-24 22:09 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  	unsigned long flags;
> >  	unsigned long count = 0;
> >  
> > +	/*
> > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > +	 * may be ignored or imbalanced.
> > +	 */
> > +	mutex_lock(&rcu_state.barrier_mutex);
> 
> I was worried about this possibly leading to out-of-memory deadlock,
> but if I recall correctly, the (de-)offloading process never allocates
> memory, so this should be OK?

Good point. It _should_ be fine, but as you, Joel and Hillf pointed out,
it's asking for trouble.

We could try Joel's idea of using mutex_trylock() as a best effort, which
should be fine since the mutex is mostly uncontended.
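
Something like this rough sketch (untested, and the count side would
probably just return 0 when the trylock fails):

	if (!mutex_trylock(&rcu_state.barrier_mutex))
		return SHRINK_STOP;
	//count and flush stuff
	mutex_unlock(&rcu_state.barrier_mutex);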

The alternative is to force nocb locking and check the offloading state
right after. So instead of:

	rcu_nocb_lock_irqsave(rdp, flags);
	//flush stuff
	rcu_nocb_unlock_irqrestore(rdp, flags);

Have:

	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
	if (!rcu_rdp_is_offloaded(rdp)) {
		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
		continue;
	}
	//flush stuff
	rcu_nocb_unlock_irqrestore(rdp, flags);

But it's not pretty and also disqualifies the last two patches as
rcu_nocb_mask can't be iterated safely anymore.

What do you think?

> >  	/* Snapshot count of all CPUs */
> >  	for_each_possible_cpu(cpu) {
> >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > -		int _count = READ_ONCE(rdp->lazy_len);
> > +		int _count;
> > +
> > +		if (!rcu_rdp_is_offloaded(rdp))
> > +			continue;
> 
> If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> 
> Or can it contain garbage after a de-offloading operation?

If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
Bypass is flushed and disabled atomically early on de-offloading and the
flush resets ->lazy_len.

Thanks.

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-24 22:09     ` Frederic Weisbecker
@ 2023-03-24 22:51       ` Paul E. McKenney
  2023-03-26 20:01         ` Frederic Weisbecker
  0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-24 22:51 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> Le Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney a écrit :
> > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  	unsigned long flags;
> > >  	unsigned long count = 0;
> > >  
> > > +	/*
> > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > +	 * may be ignored or imbalanced.
> > > +	 */
> > > +	mutex_lock(&rcu_state.barrier_mutex);
> > 
> > I was worried about this possibly leading to out-of-memory deadlock,
> > but if I recall correctly, the (de-)offloading process never allocates
> > memory, so this should be OK?
> 
> Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> it's asking for trouble.
> 
> We could try Joel's idea to use mutex_trylock() as a best effort, which
> should be fine as it's mostly uncontended.
> 
> The alternative is to force nocb locking and check the offloading state
> right after. So instead of:
> 
> 	rcu_nocb_lock_irqsave(rdp, flags);
> 	//flush stuff
> 	rcu_nocb_unlock_irqrestore(rdp, flags);
> 
> Have:
> 
> 	raw_spin_lock_irqsave(rdp->nocb_lock, flags);
> 	if (!rcu_rdp_is_offloaded(rdp))
> 		raw_spin_unlock_irqrestore(rdp->nocb_lock, flags);
> 		continue;
> 	}
> 	//flush stuff
> 	rcu_nocb_unlock_irqrestore(rdp, flags);
> 
> But it's not pretty and also disqualifies the last two patches as
> rcu_nocb_mask can't be iterated safely anymore.
> 
> What do you think?

The mutex_trylock() approach does have the advantage of simplicity,
and as you say should do well given low contention.

Which reminds me, what sort of test strategy did you have in mind?
Memory exhaustion can have surprising effects.

> > >  	/* Snapshot count of all CPUs */
> > >  	for_each_possible_cpu(cpu) {
> > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > +		int _count;
> > > +
> > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > +			continue;
> > 
> > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > 
> > Or can it contain garbage after a de-offloading operation?
> 
> If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> Bypass is flushed and disabled atomically early on de-offloading and the
> flush resets ->lazy_len.

Whew!  At the moment, I don't feel strongly about whether or not
the following code should (1) read the value, (2) warn on non-zero,
(3) assume zero without reading, or (4) some other option that is not
occurring to me.  Your choice!

							Thanx, Paul

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-24 22:51       ` Paul E. McKenney
@ 2023-03-26 20:01         ` Frederic Weisbecker
  2023-03-26 21:45           ` Paul E. McKenney
  0 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-26 20:01 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Fri, Mar 24, 2023 at 03:51:54PM -0700, Paul E. McKenney wrote:
> On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> > Le Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney a écrit :
> > > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > >  	unsigned long flags;
> > > >  	unsigned long count = 0;
> > > >  
> > > > +	/*
> > > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > > +	 * may be ignored or imbalanced.
> > > > +	 */
> > > > +	mutex_lock(&rcu_state.barrier_mutex);
> > > 
> > > I was worried about this possibly leading to out-of-memory deadlock,
> > > but if I recall correctly, the (de-)offloading process never allocates
> > > memory, so this should be OK?
> > 
> > Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> > it's asking for trouble.
> > 
> > We could try Joel's idea to use mutex_trylock() as a best effort, which
> > should be fine as it's mostly uncontended.
> > 
> > The alternative is to force nocb locking and check the offloading state
> > right after. So instead of:
> > 
> > 	rcu_nocb_lock_irqsave(rdp, flags);
> > 	//flush stuff
> > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > 
> > Have:
> > 
> > 	raw_spin_lock_irqsave(rdp->nocb_lock, flags);
> > 	if (!rcu_rdp_is_offloaded(rdp))
> > 		raw_spin_unlock_irqrestore(rdp->nocb_lock, flags);
> > 		continue;
> > 	}
> > 	//flush stuff
> > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > 
> > But it's not pretty and also disqualifies the last two patches as
> > rcu_nocb_mask can't be iterated safely anymore.
> > 
> > What do you think?
> 
> The mutex_trylock() approach does have the advantage of simplicity,
> and as you say should do well given low contention.
> 
> Which reminds me, what sort of test strategy did you have in mind?
> Memory exhaustion can have surprising effects.

The best I can do is to trigger the count and scan callbacks through
the shrinker debugfs and see if it crashes or not :-)

> 
> > > >  	/* Snapshot count of all CPUs */
> > > >  	for_each_possible_cpu(cpu) {
> > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > +		int _count;
> > > > +
> > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > +			continue;
> > > 
> > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > 
> > > Or can it contain garbage after a de-offloading operation?
> > 
> > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > Bypass is flushed and disabled atomically early on de-offloading and the
> > flush resets ->lazy_len.
> 
> Whew!  At the moment, I don't feel strongly about whether or not
> the following code should (1) read the value, (2) warn on non-zero,
> (3) assume zero without reading, or (4) some other option that is not
> occurring to me.  Your choice!

(2) looks like a good idea!

Thanks.

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-26 20:01         ` Frederic Weisbecker
@ 2023-03-26 21:45           ` Paul E. McKenney
  2023-03-29 16:07             ` Frederic Weisbecker
  0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-26 21:45 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> Le Fri, Mar 24, 2023 at 03:51:54PM -0700, Paul E. McKenney a écrit :
> > On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> > > Le Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney a écrit :
> > > > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > > >  	unsigned long flags;
> > > > >  	unsigned long count = 0;
> > > > >  
> > > > > +	/*
> > > > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > > > +	 * may be ignored or imbalanced.
> > > > > +	 */
> > > > > +	mutex_lock(&rcu_state.barrier_mutex);
> > > > 
> > > > I was worried about this possibly leading to out-of-memory deadlock,
> > > > but if I recall correctly, the (de-)offloading process never allocates
> > > > memory, so this should be OK?
> > > 
> > > Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> > > it's asking for trouble.
> > > 
> > > We could try Joel's idea to use mutex_trylock() as a best effort, which
> > > should be fine as it's mostly uncontended.
> > > 
> > > The alternative is to force nocb locking and check the offloading state
> > > right after. So instead of:
> > > 
> > > 	rcu_nocb_lock_irqsave(rdp, flags);
> > > 	//flush stuff
> > > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > > 
> > > Have:
> > > 
> > > 	raw_spin_lock_irqsave(rdp->nocb_lock, flags);
> > > 	if (!rcu_rdp_is_offloaded(rdp))
> > > 		raw_spin_unlock_irqrestore(rdp->nocb_lock, flags);
> > > 		continue;
> > > 	}
> > > 	//flush stuff
> > > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > > 
> > > But it's not pretty and also disqualifies the last two patches as
> > > rcu_nocb_mask can't be iterated safely anymore.
> > > 
> > > What do you think?
> > 
> > The mutex_trylock() approach does have the advantage of simplicity,
> > and as you say should do well given low contention.
> > 
> > Which reminds me, what sort of test strategy did you have in mind?
> > Memory exhaustion can have surprising effects.
> 
> The best I can do is to trigger the count and scan callbacks through
> the shrinker debugfs and see if it crashes or not :-)

Sounds like a good start.  Maybe also a good finish?  ;-)

> > > > >  	/* Snapshot count of all CPUs */
> > > > >  	for_each_possible_cpu(cpu) {
> > > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > > +		int _count;
> > > > > +
> > > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > > +			continue;
> > > > 
> > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > 
> > > > Or can it contain garbage after a de-offloading operation?
> > > 
> > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > flush resets ->lazy_len.
> > 
> > Whew!  At the moment, I don't feel strongly about whether or not
> > the following code should (1) read the value, (2) warn on non-zero,
> > (3) assume zero without reading, or (4) some other option that is not
> > occurring to me.  Your choice!
> 
> (2) looks like a good idea!

Sounds good to me!

							Thanx, Paul

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-26 21:45           ` Paul E. McKenney
@ 2023-03-29 16:07             ` Frederic Weisbecker
  2023-03-29 20:45               ` Paul E. McKenney
  0 siblings, 1 reply; 17+ messages in thread
From: Frederic Weisbecker @ 2023-03-29 16:07 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Sun, Mar 26, 2023 at 02:45:18PM -0700, Paul E. McKenney wrote:
> On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> > > > > >  	/* Snapshot count of all CPUs */
> > > > > >  	for_each_possible_cpu(cpu) {
> > > > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > > > +		int _count;
> > > > > > +
> > > > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > > > +			continue;
> > > > > 
> > > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > > 
> > > > > Or can it contain garbage after a de-offloading operation?
> > > > 
> > > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > > flush resets ->lazy_len.
> > > 
> > > Whew!  At the moment, I don't feel strongly about whether or not
> > > the following code should (1) read the value, (2) warn on non-zero,
> > > (3) assume zero without reading, or (4) some other option that is not
> > > occurring to me.  Your choice!
> > 
> > (2) looks like a good idea!
> 
> Sounds good to me!

So since we now iterate rcu_nocb_mask after this patchset, there are no
more de-offloaded rdps to check. Meanwhile I put a WARN in the new series
to make sure that an rdp in rcu_nocb_mask is also offloaded (heh!)
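
Roughly like this (a sketch only, the actual v2 may differ):

	for_each_cpu(cpu, rcu_nocb_mask) {
		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

		/* An rdp within rcu_nocb_mask is expected to be offloaded. */
		WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp));
		//count/flush as in the posted patches
	}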

* Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
  2023-03-29 16:07             ` Frederic Weisbecker
@ 2023-03-29 20:45               ` Paul E. McKenney
  0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2023-03-29 20:45 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes

On Wed, Mar 29, 2023 at 06:07:58PM +0200, Frederic Weisbecker wrote:
> On Sun, Mar 26, 2023 at 02:45:18PM -0700, Paul E. McKenney wrote:
> > On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> > > > > > >  	/* Snapshot count of all CPUs */
> > > > > > >  	for_each_possible_cpu(cpu) {
> > > > > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > > > > +		int _count;
> > > > > > > +
> > > > > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > > > > +			continue;
> > > > > > 
> > > > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > > > 
> > > > > > Or can it contain garbage after a de-offloading operation?
> > > > > 
> > > > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > > > flush resets ->lazy_len.
> > > > 
> > > > Whew!  At the moment, I don't feel strongly about whether or not
> > > > the following code should (1) read the value, (2) warn on non-zero,
> > > > (3) assume zero without reading, or (4) some other option that is not
> > > > occurring to me.  Your choice!
> > > 
> > > (2) looks like a good idea!
> > 
> > Sounds good to me!
> 
> So since we now iterate rcu_nocb_mask after the patchset, there is no more
> deoffloaded rdp to check. Meanwhile I put a WARN in the new series making
> sure that an rdp in rcu_nocb_mask is also offloaded (heh!)

Sounds good, thank you!

							Thanx, Paul
