From: "Paul E. McKenney" <paulmck@linux.ibm.com>
To: Scott Wood <swood@redhat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Juri Lelli <juri.lelli@redhat.com>,
	Clark Williams <williams@redhat.com>,
	linux-rt-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH RT 4/4] rcutorture: Avoid problematic critical section nesting
Date: Thu, 20 Jun 2019 14:18:26 -0700	[thread overview]
Message-ID: <20190620211826.GX26519@linux.ibm.com> (raw)
In-Reply-To: <20190619011908.25026-5-swood@redhat.com>

On Tue, Jun 18, 2019 at 08:19:08PM -0500, Scott Wood wrote:
> rcutorture was generating some nesting scenarios that are not
> reasonable.  Constrain the state selection to avoid them.
> 
> Example #1:
> 
> 1. preempt_disable()
> 2. local_bh_disable()
> 3. preempt_enable()
> 4. local_bh_enable()
> 
> On PREEMPT_RT, BH disabling takes a local lock only when called in
> non-atomic context.  Thus, atomic context must be retained until after BH
> is re-enabled.  Likewise, if BH is initially disabled in non-atomic
> context, it cannot be re-enabled in atomic context.
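> 
> For illustration only (a sketch, not part of this patch), an ordering
> that stays legal on PREEMPT_RT keeps the BH section entirely inside the
> atomic section, so the disable and enable paths match:
> 
> 	preempt_disable();
> 	local_bh_disable();	/* atomic context: the RT local lock is not taken */
> 	/* ... */
> 	local_bh_enable();	/* still atomic: matches the disable path */
> 	preempt_enable();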
> 
> Example #2:
> 
> 1. rcu_read_lock()
> 2. local_irq_disable()
> 3. rcu_read_unlock()
> 4. local_irq_enable()
> 
> If the thread is preempted between steps 1 and 2,
> rcu_read_unlock_special.b.blocked will be set, but it won't be
> acted on in step 3 because IRQs are disabled.  Thus, reporting of the
> quiescent state will be delayed beyond the local_irq_enable().
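> 
> A sketch of an ordering that avoids the delay (illustration only, not
> from this patch) re-enables IRQs before dropping the RCU read lock, so
> any deferred unlock handling can run immediately:
> 
> 	rcu_read_lock();
> 	local_irq_disable();
> 	/* ... */
> 	local_irq_enable();
> 	rcu_read_unlock();	/* special handling runs with IRQs enabled */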
> 
> Example #3:
> 
> 1. preempt_disable()
> 2. local_irq_disable()
> 3. preempt_enable()
> 4. local_irq_enable()
> 
> If need_resched is set between steps 1 and 2, then the reschedule
> in step 3 will not happen.
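> 
> As an illustrative sketch (not from this patch), nesting the IRQ-disabled
> region fully inside the preempt-disabled region preserves the preemption
> point:
> 
> 	preempt_disable();
> 	local_irq_disable();
> 	/* ... */
> 	local_irq_enable();
> 	preempt_enable();	/* reschedules here if need_resched was set */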
> 
> Signed-off-by: Scott Wood <swood@redhat.com>

OK for -rt, but as long as people can code those sequences without getting
their wrists slapped, RCU needs to deal with it.  So I cannot accept
this in mainline at the current time.  Yes, I will know that it is safe
to accept it once rcutorture's virtual wrist gets slapped in mainline.
Why do you ask?  ;-)

But I have to ask...  With this elaboration, is it time to make this a
data-driven state machine?  Or is the complexity not yet to the point
where that would constitute a simplification?
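
To make the question concrete, the sort of thing I have in mind might
look like a per-flag rule table (a rough sketch only; the names are
made up):

	struct rdr_rule {
		int flag;		/* RCUTORTURE_RDR_* bit this rule covers */
		int forbid_release_under;	/* releasing is invalid while any
						 * of these bits are still held */
	};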

							Thanx, Paul

> ---
> TODO: Document restrictions and add debug checks for invalid sequences.
> 
> I had been planning to resolve #1 (only as shown, not the case of
> disabling preemption while non-atomic and enabling while atomic) by
> changing how migrate_disable() works to avoid the split behavior, but
> recently BH disabling was changed to do the same thing.  I still plan to
> send the migrate disable changes as a separate patchset, for the sake of
> the significant performance improvement I saw.
> ---
>  kernel/rcu/rcutorture.c | 92 +++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 78 insertions(+), 14 deletions(-)
> 
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index 584b0d1da0a3..0523d9e78246 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -73,10 +73,13 @@
>  #define RCUTORTURE_RDR_RBH	 0x08	/*  ... rcu_read_lock_bh(). */
>  #define RCUTORTURE_RDR_SCHED	 0x10	/*  ... rcu_read_lock_sched(). */
>  #define RCUTORTURE_RDR_RCU	 0x20	/*  ... entering another RCU reader. */
> -#define RCUTORTURE_RDR_NBITS	 6	/* Number of bits defined above. */
> +#define RCUTORTURE_RDR_ATOM_BH	 0x40	/*  ... disabling bh while atomic */
> +#define RCUTORTURE_RDR_ATOM_RBH	 0x80	/*  ... RBH while atomic */
> +#define RCUTORTURE_RDR_NBITS	 8	/* Number of bits defined above. */
>  #define RCUTORTURE_MAX_EXTEND	 \
>  	(RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ | RCUTORTURE_RDR_PREEMPT | \
> -	 RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED)
> +	 RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED | \
> +	 RCUTORTURE_RDR_ATOM_BH | RCUTORTURE_RDR_ATOM_RBH)
>  #define RCUTORTURE_RDR_MAX_LOOPS 0x7	/* Maximum reader extensions. */
>  					/* Must be power of two minus one. */
>  #define RCUTORTURE_RDR_MAX_SEGS (RCUTORTURE_RDR_MAX_LOOPS + 3)
> @@ -1111,31 +1114,52 @@ static void rcutorture_one_extend(int *readstate, int newstate,
>  	WARN_ON_ONCE((idxold >> RCUTORTURE_RDR_SHIFT) > 1);
>  	rtrsp->rt_readstate = newstate;
>  
> -	/* First, put new protection in place to avoid critical-section gap. */
> +	/*
> +	 * First, put new protection in place to avoid critical-section gap.
> +	 * Disable preemption around the ATOM disables to ensure that
> +	 * in_atomic() is true.
> +	 */
>  	if (statesnew & RCUTORTURE_RDR_BH)
>  		local_bh_disable();
> +	if (statesnew & RCUTORTURE_RDR_RBH)
> +		rcu_read_lock_bh();
>  	if (statesnew & RCUTORTURE_RDR_IRQ)
>  		local_irq_disable();
>  	if (statesnew & RCUTORTURE_RDR_PREEMPT)
>  		preempt_disable();
> -	if (statesnew & RCUTORTURE_RDR_RBH)
> -		rcu_read_lock_bh();
>  	if (statesnew & RCUTORTURE_RDR_SCHED)
>  		rcu_read_lock_sched();
> +	preempt_disable();
> +	if (statesnew & RCUTORTURE_RDR_ATOM_BH)
> +		local_bh_disable();
> +	if (statesnew & RCUTORTURE_RDR_ATOM_RBH)
> +		rcu_read_lock_bh();
> +	preempt_enable();
>  	if (statesnew & RCUTORTURE_RDR_RCU)
>  		idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
>  
> -	/* Next, remove old protection, irq first due to bh conflict. */
> +	/*
> +	 * Next, remove old protection, in decreasing order of strength
> +	 * to avoid unlock paths that aren't safe in the stronger
> +	 * context.  Disable preemption around the ATOM enables in
> +	 * case the context was only atomic due to IRQ disabling.
> +	 */
> +	preempt_disable();
>  	if (statesold & RCUTORTURE_RDR_IRQ)
>  		local_irq_enable();
> -	if (statesold & RCUTORTURE_RDR_BH)
> +	if (statesold & RCUTORTURE_RDR_ATOM_BH)
>  		local_bh_enable();
> +	if (statesold & RCUTORTURE_RDR_ATOM_RBH)
> +		rcu_read_unlock_bh();
> +	preempt_enable();
>  	if (statesold & RCUTORTURE_RDR_PREEMPT)
>  		preempt_enable();
> -	if (statesold & RCUTORTURE_RDR_RBH)
> -		rcu_read_unlock_bh();
>  	if (statesold & RCUTORTURE_RDR_SCHED)
>  		rcu_read_unlock_sched();
> +	if (statesold & RCUTORTURE_RDR_BH)
> +		local_bh_enable();
> +	if (statesold & RCUTORTURE_RDR_RBH)
> +		rcu_read_unlock_bh();
>  	if (statesold & RCUTORTURE_RDR_RCU)
>  		cur_ops->readunlock(idxold >> RCUTORTURE_RDR_SHIFT);
>  
> @@ -1171,6 +1195,12 @@ static int rcutorture_extend_mask_max(void)
>  	int mask = rcutorture_extend_mask_max();
>  	unsigned long randmask1 = torture_random(trsp) >> 8;
>  	unsigned long randmask2 = randmask1 >> 3;
> +	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
> +	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
> +	unsigned long nonatomic_bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> +	unsigned long atomic_bhs = RCUTORTURE_RDR_ATOM_BH |
> +				   RCUTORTURE_RDR_ATOM_RBH;
> +	unsigned long tmp;
>  
>  	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
>  	/* Most of the time lots of bits, half the time only one bit. */
> @@ -1178,11 +1208,45 @@ static int rcutorture_extend_mask_max(void)
>  		mask = mask & randmask2;
>  	else
>  		mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
> -	/* Can't enable bh w/irq disabled. */
> -	if ((mask & RCUTORTURE_RDR_IRQ) &&
> -	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
> -	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
> -		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> +
> +	/*
> +	 * Can't enable bh w/irq disabled.
> +	 *
> +	 * Can't enable preemption with irqs disabled, if irqs had ever
> +	 * been enabled during this preempt critical section (could miss
> +	 * a reschedule).
> +	 */
> +	tmp = atomic_bhs | nonatomic_bhs | preempts;
> +	if (mask & RCUTORTURE_RDR_IRQ)
> +		mask |= oldmask & tmp;
> +
> +	/*
> +	 * Can't release the outermost rcu lock in an irq disabled
> +	 * section without preemption also being disabled, if irqs had
> +	 * ever been enabled during this RCU critical section (could leak
> +	 * a special flag and delay reporting the qs).
> +	 */
> +	if ((oldmask & RCUTORTURE_RDR_RCU) && (mask & RCUTORTURE_RDR_IRQ) &&
> +	    !(mask & preempts))
> +		mask |= RCUTORTURE_RDR_RCU;
> +
> +	/* Can't modify atomic bh in non-atomic context */
> +	if ((oldmask & atomic_bhs) && (mask & atomic_bhs) &&
> +	    !(mask & preempts_irq)) {
> +		mask |= oldmask & preempts_irq;
> +		if (mask & RCUTORTURE_RDR_IRQ)
> +			mask |= oldmask & tmp;
> +	}
> +	if ((mask & atomic_bhs) && !(mask & preempts_irq))
> +		mask |= RCUTORTURE_RDR_PREEMPT;
> +
> +	/* Can't modify non-atomic bh in atomic context */
> +	tmp = nonatomic_bhs;
> +	if (oldmask & preempts_irq)
> +		mask &= ~tmp;
> +	if ((oldmask | mask) & preempts_irq)
> +		mask |= oldmask & tmp;
> +
>  	if ((mask & RCUTORTURE_RDR_IRQ) &&
>  	    !(mask & cur_ops->ext_irq_conflict) &&
>  	    (oldmask & cur_ops->ext_irq_conflict))
> -- 
> 1.8.3.1
> 

Thread overview: 31+ messages
2019-06-19  1:19 [PATCH RT 0/4] Address rcutorture issues Scott Wood
2019-06-19  1:19 ` [PATCH RT 1/4] rcu: Acquire RCU lock when disabling BHs Scott Wood
2019-06-20 20:53   ` Paul E. McKenney
2019-06-20 21:06     ` Scott Wood
2019-06-20 21:20       ` Paul E. McKenney
2019-06-20 21:38         ` Scott Wood
2019-06-20 22:16           ` Paul E. McKenney
2019-06-19  1:19 ` [PATCH RT 2/4] sched: migrate_enable: Use sleeping_lock to indicate involuntary sleep Scott Wood
2019-06-19  1:19 ` [RFC PATCH RT 3/4] rcu: unlock special: Treat irq and preempt disabled the same Scott Wood
2019-06-20 21:10   ` Paul E. McKenney
2019-06-20 21:59     ` Scott Wood
2019-06-20 22:25       ` Paul E. McKenney
2019-06-20 23:08         ` Scott Wood
2019-06-22  0:26           ` Paul E. McKenney
2019-06-22 19:13             ` Paul E. McKenney
2019-06-24 17:40               ` Scott Wood
2019-06-19  1:19 ` [RFC PATCH RT 4/4] rcutorture: Avoid problematic critical section nesting Scott Wood
2019-06-20 21:18   ` Paul E. McKenney [this message]
2019-06-20 21:43     ` Scott Wood
2019-06-21 16:38     ` Sebastian Andrzej Siewior
2019-06-21 23:59       ` Paul E. McKenney
2019-06-26 15:08         ` Steven Rostedt
2019-06-26 16:49           ` Scott Wood
2019-06-27 18:00             ` Paul E. McKenney
2019-06-27 20:16               ` Scott Wood
2019-06-27 20:50                 ` Paul E. McKenney
2019-06-27 22:46                   ` Scott Wood
2019-06-28  0:52                     ` Paul E. McKenney
2019-06-28 19:37                       ` Scott Wood
2019-06-28 20:24                         ` Paul E. McKenney
2019-06-20 19:12 ` [PATCH RT 0/4] Address rcutorture issues Paul E. McKenney
