From: Boqun Feng <boqun.feng@gmail.com>
To: "Paul E. McKenney" <paulmck@kernel.org>
Cc: rcu@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org,
	rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
	fweisbec@gmail.com, oleg@redhat.com, joel@joelfernandes.org,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [PATCH rcu 04/18] rcu: Weaken ->dynticks accesses and updates
Date: Thu, 29 Jul 2021 15:58:04 +0800
Message-ID: <YQJfjFv8lOnkUkhs@boqun-archlinux>
In-Reply-To: <20210721202127.2129660-4-paulmck@kernel.org>

Hi Paul,

On Wed, Jul 21, 2021 at 01:21:12PM -0700, Paul E. McKenney wrote:
> Accesses to the rcu_data structure's ->dynticks field have always been
> fully ordered because it was not possible to prove that weaker ordering
> was safe.  However, with the removal of the rcu_eqs_special_set() function
> and the advent of the Linux-kernel memory model, it is now easy to show
> that two of the four original full memory barriers can be weakened to
> acquire and release operations.  The remaining pair must remain full
> memory barriers.  This change makes the memory ordering requirements
> more evident, and it might well also speed up the to-idle and from-idle
> fastpaths on some architectures.
> 
> The following litmus test, adapted from one supplied off-list by Frederic
> Weisbecker, models the RCU grace-period kthread detecting an idle CPU
> that is concurrently transitioning to non-idle:
> 
> 	C dynticks-from-idle
> 
> 	{
> 		DYNTICKS=0; (* Initially idle. *)
> 	}
> 
> 	P0(int *X, int *DYNTICKS)
> 	{
> 		int dynticks;
> 		int x;
> 
> 		// Idle.
> 		dynticks = READ_ONCE(*DYNTICKS);
> 		smp_store_release(DYNTICKS, dynticks + 1);
> 		smp_mb();

This smp_mb() is needed, because there is an ->fr (from-read) edge in
the cycle we want to avoid (see the sketch below), however...
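
A minimal sketch of what goes wrong without it (my adaptation, not
part of the patch): with P0's smp_mb() dropped, the release store to
DYNTICKS does not order the later read from X, so I would expect
herd7 to report the store-buffering outcome (both CPUs reading the
initial values) as Sometimes:

	C dynticks-from-idle-no-mb

	{
		DYNTICKS=0; (* Initially idle. *)
	}

	P0(int *X, int *DYNTICKS)
	{
		int dynticks;
		int x;

		// Idle.
		dynticks = READ_ONCE(*DYNTICKS);
		smp_store_release(DYNTICKS, dynticks + 1);
		// smp_mb() dropped: nothing orders the release store
		// above against the read from X below.
		x = READ_ONCE(*X);
	}

	P1(int *X, int *DYNTICKS)
	{
		int dynticks;

		WRITE_ONCE(*X, 1);
		smp_mb();
		dynticks = smp_load_acquire(DYNTICKS);
	}

	exists (1:dynticks=0 /\ 0:x=0)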

> 		// Now non-idle
> 		x = READ_ONCE(*X);
> 	}
> 
> 	P1(int *X, int *DYNTICKS)
> 	{
> 		int dynticks;
> 
> 		WRITE_ONCE(*X, 1);
> 		smp_mb();
> 		dynticks = smp_load_acquire(DYNTICKS);
> 	}
> 
> 	exists (1:dynticks=0 /\ 0:x=1)
> 
> Running "herd7 -conf linux-kernel.cfg dynticks-from-idle.litmus" verifies
> this transition, namely, showing that if the RCU grace-period kthread (P1)
> sees another CPU as idle (P0), then any memory access prior to the start
> of the grace period (P1's write to X) will be seen by any RCU read-side
> critical section following the to-non-idle transition (P0's read from X).
> This is a straightforward use of full memory barriers to force ordering
> in a store-buffering (SB) litmus test.
> 
> The following litmus test, also adapted from the one supplied off-list
> by Frederic Weisbecker, models the RCU grace-period kthread detecting
> a non-idle CPU that is concurrently transitioning to idle:
> 
> 	C dynticks-into-idle
> 
> 	{
> 		DYNTICKS=1; (* Initially non-idle. *)
> 	}
> 
> 	P0(int *X, int *DYNTICKS)
> 	{
> 		int dynticks;
> 
> 		// Non-idle.
> 		WRITE_ONCE(*X, 1);
> 		dynticks = READ_ONCE(*DYNTICKS);
> 		smp_store_release(DYNTICKS, dynticks + 1);
> 		smp_mb();

this smp_mb() is not needed, as the release-acquire pair already
provides all the ordering this direction requires.

This means that if we used two implementations of rcu_dynticks_inc()
(one with the trailing smp_mb(), another without) for the
idle-to-nonidle and nonidle-to-idle transitions respectively, we could
save an smp_mb() on the idle-entry path. Thoughts?
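
To double-check that intuition (again my adaptation, not from the
patch), the into-idle test with P0's smp_mb() removed should still be
forbidden, because the MP pattern is fully ordered by the
release-acquire pair alone:

	C dynticks-into-idle-no-mb

	{
		DYNTICKS=1; (* Initially non-idle. *)
	}

	P0(int *X, int *DYNTICKS)
	{
		int dynticks;

		// Non-idle.
		WRITE_ONCE(*X, 1);
		dynticks = READ_ONCE(*DYNTICKS);
		smp_store_release(DYNTICKS, dynticks + 1);
		// smp_mb() dropped: the write to X is already ordered
		// before the release store to DYNTICKS.
		// Now idle.
	}

	P1(int *X, int *DYNTICKS)
	{
		int x;
		int dynticks;

		smp_mb();
		dynticks = smp_load_acquire(DYNTICKS);
		x = READ_ONCE(*X);
	}

	exists (1:dynticks=2 /\ 1:x=0)

I would expect herd7 to report this exists clause as never satisfied.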

> 		// Now idle.
> 	}
> 
> 	P1(int *X, int *DYNTICKS)
> 	{
> 		int x;
> 		int dynticks;
> 
> 		smp_mb();
> 		dynticks = smp_load_acquire(DYNTICKS);
> 		x = READ_ONCE(*X);
> 	}
> 
> 	exists (1:dynticks=2 /\ 1:x=0)
> 
> Running "herd7 -conf linux-kernel.cfg dynticks-into-idle.litmus" verifies
> this transition, namely, showing that if the RCU grace-period kthread
> (P1) sees another CPU as newly idle (P0), then any pre-idle memory access
> (P0's write to X) will be seen by any code following the grace period
> (P1's read from X).  This is a simple release-acquire pair forcing
> ordering in a message-passing (MP) litmus test.
> 
> Of course, if the grace-period kthread detects the CPU as non-idle,
> it will refrain from reporting a quiescent state on behalf of that CPU,
> so there are no ordering requirements from the grace-period kthread in
> that case.  However, other subsystems call rcu_is_idle_cpu() to check
> for CPUs being non-idle from an RCU perspective.  That case is also
> verified by the above litmus tests with the proviso that the sense of
> the low-order bit of the DYNTICKS counter be inverted.
> 
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  kernel/rcu/tree.c | 50 +++++++++++++++++++++++++++++++----------------
>  kernel/rcu/tree.h |  2 +-
>  2 files changed, 34 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 42a0032dd99f7..bc6ccf0ba3b24 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -77,7 +77,7 @@
>  static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = {
>  	.dynticks_nesting = 1,
>  	.dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE,
> -	.dynticks = ATOMIC_INIT(1),
> +	.dynticks = 1UL,
>  #ifdef CONFIG_RCU_NOCB_CPU
>  	.cblist.flags = SEGCBLIST_SOFTIRQ_ONLY,
>  #endif
> @@ -251,6 +251,21 @@ void rcu_softirq_qs(void)
>  	rcu_tasks_qs(current, false);
>  }
>  
> +/*
> + * Increment the current CPU's rcu_data structure's ->dynticks field
> + * with ordering.  Return the new value.
> + */
> +static noinstr unsigned long rcu_dynticks_inc(int incby)
> +{
> +	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> +	int seq;
> +	
> +	seq = READ_ONCE(rdp->dynticks) + incby;
> +	smp_store_release(&rdp->dynticks, seq);
> +	smp_mb();  // Fundamental RCU ordering guarantee.
> +	return seq;
> +}

So say we have an rcu_dynticks_inc_no_mb() whose body is the same
except for the trailing smp_mb() (a sketch follows), and ...
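
Something like this (a sketch only: the name and exact form are my
assumption, and I have used "unsigned long seq" so the local type
matches the ->dynticks and return types):

	static noinstr unsigned long rcu_dynticks_inc_no_mb(int incby)
	{
		struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
		unsigned long seq;

		seq = READ_ONCE(rdp->dynticks) + incby;
		smp_store_release(&rdp->dynticks, seq);
		// No trailing smp_mb(): callers on the nonidle-to-idle
		// path rely on the release-acquire pair alone.
		return seq;
	}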

> +
>  /*
>   * Record entry into an extended quiescent state.  This is only to be
>   * called when not already in an extended quiescent state, that is,
> @@ -267,7 +282,7 @@ static noinstr void rcu_dynticks_eqs_enter(void)
>  	 * next idle sojourn.
>  	 */
>  	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
> -	seq = arch_atomic_inc_return(&this_cpu_ptr(&rcu_data)->dynticks);
> +	seq = rcu_dynticks_inc(1);

...we can actually change this call site to the no-smp_mb() version,
because we are exiting RCU (entering an extended quiescent state)
here:

	seq = rcu_dynticks_inc_no_mb(1);

>  	// RCU is no longer watching.  Better be in extended quiescent state!
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1));
>  }
> @@ -286,7 +301,7 @@ static noinstr void rcu_dynticks_eqs_exit(void)
>  	 * and we also must force ordering with the next RCU read-side
>  	 * critical section.
>  	 */
> -	seq = arch_atomic_inc_return(&this_cpu_ptr(&rcu_data)->dynticks);
> +	seq = rcu_dynticks_inc(1);
>  	// RCU is now watching.  Better not be in an extended quiescent state!
>  	rcu_dynticks_task_trace_exit();  // After ->dynticks update!
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1));
> @@ -306,9 +321,9 @@ static void rcu_dynticks_eqs_online(void)
>  {
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
>  
> -	if (atomic_read(&rdp->dynticks) & 0x1)
> +	if (READ_ONCE(rdp->dynticks) & 0x1)
>  		return;
> -	atomic_inc(&rdp->dynticks);
> +	rcu_dynticks_inc(1);
>  }
>  
>  /*
> @@ -318,7 +333,7 @@ static void rcu_dynticks_eqs_online(void)
>   */
>  static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
>  {
> -	return !(arch_atomic_read(&this_cpu_ptr(&rcu_data)->dynticks) & 0x1);
> +	return !(READ_ONCE(this_cpu_ptr(&rcu_data)->dynticks) & 0x1);
>  }
>  
>  /*
> @@ -327,7 +342,8 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
>   */
>  static int rcu_dynticks_snap(struct rcu_data *rdp)
>  {
> -	return atomic_add_return(0, &rdp->dynticks);
> +	smp_mb();  // Fundamental RCU ordering guarantee.
> +	return smp_load_acquire(&rdp->dynticks);
>  }
>  
>  /*
> @@ -367,7 +383,7 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
>  	int snap;
>  
>  	// If not quiescent, force back to earlier extended quiescent state.
> -	snap = atomic_read(&rdp->dynticks) & ~0x1;
> +	snap = READ_ONCE(rdp->dynticks) & ~0x1;
>  
>  	smp_rmb(); // Order ->dynticks and *vp reads.
>  	if (READ_ONCE(*vp))
> @@ -375,7 +391,7 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
>  	smp_rmb(); // Order *vp read and ->dynticks re-read.
>  
>  	// If still in the same extended quiescent state, we are good!
> -	return snap == atomic_read(&rdp->dynticks);
> +	return snap == READ_ONCE(rdp->dynticks);
>  }
>  
>  /*
> @@ -391,12 +407,12 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
>   */
>  notrace void rcu_momentary_dyntick_idle(void)
>  {
> -	int special;
> +	int seq;
>  
>  	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);
> -	special = atomic_add_return(2, &this_cpu_ptr(&rcu_data)->dynticks);
> +	seq = rcu_dynticks_inc(2);
>  	/* It is illegal to call this from idle state. */
> -	WARN_ON_ONCE(!(special & 0x1));
> +	WARN_ON_ONCE(!(seq & 0x1));
>  	rcu_preempt_deferred_qs(current);
>  }
>  EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
> @@ -612,7 +628,7 @@ static noinstr void rcu_eqs_enter(bool user)
>  
>  	lockdep_assert_irqs_disabled();
>  	instrumentation_begin();
> -	trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, atomic_read(&rdp->dynticks));
> +	trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, READ_ONCE(rdp->dynticks));
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
>  	rcu_prepare_for_idle();
>  	rcu_preempt_deferred_qs(current);
> @@ -747,7 +763,7 @@ noinstr void rcu_nmi_exit(void)
>  	 */
>  	if (rdp->dynticks_nmi_nesting != 1) {
>  		trace_rcu_dyntick(TPS("--="), rdp->dynticks_nmi_nesting, rdp->dynticks_nmi_nesting - 2,
> -				  atomic_read(&rdp->dynticks));
> +				  READ_ONCE(rdp->dynticks));
>  		WRITE_ONCE(rdp->dynticks_nmi_nesting, /* No store tearing. */
>  			   rdp->dynticks_nmi_nesting - 2);
>  		instrumentation_end();
> @@ -755,7 +771,7 @@ noinstr void rcu_nmi_exit(void)
>  	}
>  
>  	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
> -	trace_rcu_dyntick(TPS("Startirq"), rdp->dynticks_nmi_nesting, 0, atomic_read(&rdp->dynticks));
> +	trace_rcu_dyntick(TPS("Startirq"), rdp->dynticks_nmi_nesting, 0, READ_ONCE(rdp->dynticks));
>  	WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
>  
>  	if (!in_nmi())
> @@ -863,7 +879,7 @@ static void noinstr rcu_eqs_exit(bool user)
>  	instrument_atomic_write(&rdp->dynticks, sizeof(rdp->dynticks));
>  
>  	rcu_cleanup_after_idle();
> -	trace_rcu_dyntick(TPS("End"), rdp->dynticks_nesting, 1, atomic_read(&rdp->dynticks));
> +	trace_rcu_dyntick(TPS("End"), rdp->dynticks_nesting, 1, READ_ONCE(rdp->dynticks));
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
>  	WRITE_ONCE(rdp->dynticks_nesting, 1);
>  	WARN_ON_ONCE(rdp->dynticks_nmi_nesting);
> @@ -1026,7 +1042,7 @@ noinstr void rcu_nmi_enter(void)
>  
>  	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),
>  			  rdp->dynticks_nmi_nesting,
> -			  rdp->dynticks_nmi_nesting + incby, atomic_read(&rdp->dynticks));
> +			  rdp->dynticks_nmi_nesting + incby, READ_ONCE(rdp->dynticks));
>  	instrumentation_end();
>  	WRITE_ONCE(rdp->dynticks_nmi_nesting, /* Prevent store tearing. */
>  		   rdp->dynticks_nmi_nesting + incby);
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index 305cf6aeb4086..ce611da2ff6b3 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -184,7 +184,7 @@ struct rcu_data {
>  	int dynticks_snap;		/* Per-GP tracking for dynticks. */
>  	long dynticks_nesting;		/* Track process nesting level. */
>  	long dynticks_nmi_nesting;	/* Track irq/NMI nesting level. */
> -	atomic_t dynticks;		/* Even value for idle, else odd. */
> +	unsigned long dynticks;		/* Even value for idle, else odd. */

Here the type of ->dynticks is changed, so we should change the type
taken by the rcu_dyntick trace point as well. Or, alternatively, use
"unsigned int" as the type.
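
For illustration, a minimal sketch of the kind of trace-event change I
mean (hypothetical and simplified; the real rcu_dyntick event in
include/trace/events/rcu.h has its own fields and format string):

	TRACE_EVENT(rcu_dyntick,
		TP_PROTO(const char *polarity, long oldnesting,
			 long newnesting, unsigned long dynticks),
		TP_ARGS(polarity, oldnesting, newnesting, dynticks),
		TP_STRUCT__entry(
			__field(const char *, polarity)
			__field(long, oldnesting)
			__field(long, newnesting)
			// Widened to match the new ->dynticks type.
			__field(unsigned long, dynticks)
		),
		TP_fast_assign(
			__entry->polarity = polarity;
			__entry->oldnesting = oldnesting;
			__entry->newnesting = newnesting;
			__entry->dynticks = dynticks;
		),
		TP_printk("%s %lx %lx %#lx", __entry->polarity,
			  __entry->oldnesting, __entry->newnesting,
			  __entry->dynticks)
	);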

Regards,
Boqun

>  	bool rcu_need_heavy_qs;		/* GP old, so heavy quiescent state! */
>  	bool rcu_urgent_qs;		/* GP old need light quiescent state. */
>  	bool rcu_forced_tick;		/* Forced tick to provide QS. */
> -- 
> 2.31.1.189.g2e36527f23
> 
