From: "Paul E. McKenney" <paulmck@kernel.org>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Valentin Schneider <valentin.schneider@arm.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, rcu@vger.kernel.org,
	linux-rt-users@vger.kernel.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Steven Rostedt <rostedt@goodmis.org>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Frederic Weisbecker <frederic@kernel.org>,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Vincenzo Frascino <vincenzo.frascino@arm.com>,
	Steven Price <steven.price@arm.com>,
	Ard Biesheuvel <ardb@kernel.org>,
	Boqun Feng <boqun.feng@gmail.com>, Mike Galbraith <efault@gmx.de>,
	Scott Wood <swood@redhat.com>
Subject: Re: [PATCH] rcutorture: Avoid problematic critical section nesting on RT
Date: Wed, 18 Aug 2021 15:46:51 -0700	[thread overview]
Message-ID: <20210818224651.GY4126399@paulmck-ThinkPad-P17-Gen-1> (raw)
In-Reply-To: <20210817144018.nqssoq475vitrqlv@linutronix.de>

On Tue, Aug 17, 2021 at 04:40:18PM +0200, Sebastian Andrzej Siewior wrote:
> From: Scott Wood <swood@redhat.com>
> 
> rcutorture was generating some nesting scenarios that are not
> reasonable.  Constrain the state selection to avoid them.
> 
> Example:
> 
> 1. rcu_read_lock()
> 2. local_irq_disable()
> 3. rcu_read_unlock()
> 4. local_irq_enable()
> 
> If the thread is preempted between steps 1 and 2,
> rcu_read_unlock_special.b.blocked will be set, but it won't be
> acted on in step 3 because IRQs are disabled.  Thus, reporting of the
> quiescent state will be delayed beyond the local_irq_enable().
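
A minimal C sketch of that sequence, purely for illustration (the comments
restate the explanation above; this is not code from the patch):

	rcu_read_lock();	/* 1: if the reader is preempted here,        */
				/*    rcu_read_unlock_special.b.blocked is set */
	local_irq_disable();	/* 2                                           */
	rcu_read_unlock();	/* 3: deferred QS handling is skipped because  */
				/*    irqs are disabled                        */
	local_irq_enable();	/* 4: so the QS report is delayed beyond here  */
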
> 
> For now, these scenarios will continue to be tested on non-PREEMPT_RT
> kernels, until debug checks are added to ensure that they are not
> happening elsewhere.
> 
> Signed-off-by: Scott Wood <swood@redhat.com>
> [valentin.schneider@arm.com: Don't disable BH in atomic context]
> [bigeasy: remove 'preempt_disable(); local_bh_disable(); preempt_enable();
>  local_bh_enable();' from the examples because this works on RT now. ]
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

This looks close to being ready for mainline, actually.

One comment below.

							Thanx, Paul

> ---
> I folded Valentin's bits.
> I removed the unbalanced preempt_disable()/migrate_disable() part from
> the description because it is now supported by the migrate-disable
> implementation. I didn't find it explicit in the code/patch except as
> part of local_bh_disable().
> 
> 
>  kernel/rcu/rcutorture.c |   94 ++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 80 insertions(+), 14 deletions(-)
> ---
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -61,10 +61,13 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck
>  #define RCUTORTURE_RDR_RBH	 0x08	/*  ... rcu_read_lock_bh(). */
>  #define RCUTORTURE_RDR_SCHED	 0x10	/*  ... rcu_read_lock_sched(). */
>  #define RCUTORTURE_RDR_RCU	 0x20	/*  ... entering another RCU reader. */
> -#define RCUTORTURE_RDR_NBITS	 6	/* Number of bits defined above. */
> +#define RCUTORTURE_RDR_ATOM_BH	 0x40	/*  ... disabling bh while atomic */
> +#define RCUTORTURE_RDR_ATOM_RBH	 0x80	/*  ... RBH while atomic */
> +#define RCUTORTURE_RDR_NBITS	 8	/* Number of bits defined above. */
>  #define RCUTORTURE_MAX_EXTEND	 \
>  	(RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ | RCUTORTURE_RDR_PREEMPT | \
> -	 RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED)
> +	 RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED | \
> +	 RCUTORTURE_RDR_ATOM_BH | RCUTORTURE_RDR_ATOM_RBH)
>  #define RCUTORTURE_RDR_MAX_LOOPS 0x7	/* Maximum reader extensions. */
>  					/* Must be power of two minus one. */
>  #define RCUTORTURE_RDR_MAX_SEGS (RCUTORTURE_RDR_MAX_LOOPS + 3)
> @@ -1429,31 +1432,53 @@ static void rcutorture_one_extend(int *r
>  	WARN_ON_ONCE((idxold >> RCUTORTURE_RDR_SHIFT) > 1);
>  	rtrsp->rt_readstate = newstate;
>  
> -	/* First, put new protection in place to avoid critical-section gap. */
> +	/*
> +	 * First, put new protection in place to avoid critical-section gap.
> +	 * Disable preemption around the ATOM disables to ensure that
> +	 * in_atomic() is true.
> +	 */
>  	if (statesnew & RCUTORTURE_RDR_BH)
>  		local_bh_disable();
> +	if (statesnew & RCUTORTURE_RDR_RBH)
> +		rcu_read_lock_bh();
>  	if (statesnew & RCUTORTURE_RDR_IRQ)
>  		local_irq_disable();
>  	if (statesnew & RCUTORTURE_RDR_PREEMPT)
>  		preempt_disable();
> -	if (statesnew & RCUTORTURE_RDR_RBH)
> -		rcu_read_lock_bh();
>  	if (statesnew & RCUTORTURE_RDR_SCHED)
>  		rcu_read_lock_sched();
> +	preempt_disable();
> +	if (statesnew & RCUTORTURE_RDR_ATOM_BH)
> +		local_bh_disable();
> +	if (statesnew & RCUTORTURE_RDR_ATOM_RBH)
> +		rcu_read_lock_bh();
> +	preempt_enable();
>  	if (statesnew & RCUTORTURE_RDR_RCU)
>  		idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
>  
> -	/* Next, remove old protection, irq first due to bh conflict. */
> +	/*
> +	 * Next, remove old protection, in decreasing order of strength
> +	 * to avoid unlock paths that aren't safe in the stronger
> +	 * context.  Disable preemption around the ATOM enables in
> +	 * case the context was only atomic due to IRQ disabling.
> +	 */
> +	preempt_disable();
>  	if (statesold & RCUTORTURE_RDR_IRQ)
>  		local_irq_enable();
> -	if (statesold & RCUTORTURE_RDR_BH)
> +	if (statesold & RCUTORTURE_RDR_ATOM_BH)
>  		local_bh_enable();
> +	if (statesold & RCUTORTURE_RDR_ATOM_RBH)
> +		rcu_read_unlock_bh();
> +	preempt_enable();

The addition of preempt_enable() here prevents rcutorture from covering
an important part of the mainline RCU state space, namely when an RCU
read-side section ends with just local_irq_enable().  This situation
is a challenge for RCU because it must indirectly detect the end of the
critical section.

Would it work for RT if the preempt_disable() and preempt_enable()
were executed only when either the kernel is built with PREEMPT_RT or
statesold has the RCUTORTURE_RDR_ATOM_BH or RCUTORTURE_RDR_ATOM_RBH bit
set?
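
Something like the following untested sketch is what I have in mind (the
"atom" local and its exact condition are my assumption about how this
could be expressed, not a requirement):

	bool atom = IS_ENABLED(CONFIG_PREEMPT_RT) ||
		    (statesold & (RCUTORTURE_RDR_ATOM_BH |
				  RCUTORTURE_RDR_ATOM_RBH));

	if (atom)
		preempt_disable();
	if (statesold & RCUTORTURE_RDR_IRQ)
		local_irq_enable();
	if (statesold & RCUTORTURE_RDR_ATOM_BH)
		local_bh_enable();
	if (statesold & RCUTORTURE_RDR_ATOM_RBH)
		rcu_read_unlock_bh();
	if (atom)
		preempt_enable();

That way, a non-RT reader whose statesold is just RCUTORTURE_RDR_IRQ
still ends with a bare local_irq_enable(), which is the case rcutorture
needs to keep covering.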

>  	if (statesold & RCUTORTURE_RDR_PREEMPT)
>  		preempt_enable();
> -	if (statesold & RCUTORTURE_RDR_RBH)
> -		rcu_read_unlock_bh();
>  	if (statesold & RCUTORTURE_RDR_SCHED)
>  		rcu_read_unlock_sched();
> +	if (statesold & RCUTORTURE_RDR_BH)
> +		local_bh_enable();
> +	if (statesold & RCUTORTURE_RDR_RBH)
> +		rcu_read_unlock_bh();
> +
>  	if (statesold & RCUTORTURE_RDR_RCU) {
>  		bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
>  
> @@ -1496,6 +1521,12 @@ rcutorture_extend_mask(int oldmask, stru
>  	int mask = rcutorture_extend_mask_max();
>  	unsigned long randmask1 = torture_random(trsp) >> 8;
>  	unsigned long randmask2 = randmask1 >> 3;
> +	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
> +	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
> +	unsigned long nonatomic_bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> +	unsigned long atomic_bhs = RCUTORTURE_RDR_ATOM_BH |
> +				   RCUTORTURE_RDR_ATOM_RBH;
> +	unsigned long tmp;
>  
>  	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
>  	/* Mostly only one bit (need preemption!), sometimes lots of bits. */
> @@ -1503,11 +1534,46 @@ rcutorture_extend_mask(int oldmask, stru
>  		mask = mask & randmask2;
>  	else
>  		mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
> -	/* Can't enable bh w/irq disabled. */
> -	if ((mask & RCUTORTURE_RDR_IRQ) &&
> -	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
> -	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
> -		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> +
> +	/*
> +	 * Can't enable bh w/irq disabled.
> +	 */
> +	tmp = atomic_bhs | nonatomic_bhs;
> +	if (mask & RCUTORTURE_RDR_IRQ)
> +		mask |= oldmask & tmp;

This is more straightforward than my original, good!
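
For example (keeping the bit names symbolic; a worked illustration of the
fixup, not anything from the patch):

	oldmask = RCUTORTURE_RDR_BH;	/* reader currently has bh disabled */
	mask    = RCUTORTURE_RDR_IRQ;	/* random pick: add irq, drop bh */

	tmp = atomic_bhs | nonatomic_bhs;
	if (mask & RCUTORTURE_RDR_IRQ)	/* true */
		mask |= oldmask & tmp;	/* mask is now IRQ | BH */

Without the fixup, rcutorture_one_extend() would local_irq_disable() for
the new state and then call local_bh_enable() for the old one with irqs
still disabled; with it, bh simply stays disabled across the transition.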

> +
> +	/*
> +	 * Ideally these sequences would be detected in debug builds
> +	 * (regardless of RT), but until then don't stop testing
> +	 * them on non-RT.
> +	 */
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> +		/*
> +		 * Can't disable bh in atomic context if bh was already
> +		 * disabled by another task on the same CPU. Instead of
> +		 * attempting to track this, just avoid disabling bh in atomic
> +		 * context.
> +		 */
> +		mask &= ~atomic_bhs;

At some point, we will need to test disabling bh in atomic context,
correct?  Or am I missing something here?

> +		/*
> +		 * Can't release the outermost rcu lock in an irq disabled
> +		 * section without preemption also being disabled, if irqs
> +		 * had ever been enabled during this RCU critical section
> +		 * (could leak a special flag and delay reporting the qs).
> +		 */
> +		if ((oldmask & RCUTORTURE_RDR_RCU) &&
> +		    (mask & RCUTORTURE_RDR_IRQ) &&
> +		    !(mask & preempts))
> +			mask |= RCUTORTURE_RDR_RCU;
> +
> +		/* Can't modify non-atomic bh in atomic context */
> +		tmp = nonatomic_bhs;
> +		if (oldmask & preempts_irq)
> +			mask &= ~tmp;
> +		if ((oldmask | mask) & preempts_irq)
> +			mask |= oldmask & tmp;
> +	}
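
And a worked example of that last pair of statements, as I read it (again
symbolic, illustration only):

	oldmask = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_BH;
	mask    = RCUTORTURE_RDR_PREEMPT;	/* random pick would drop bh */

	tmp = nonatomic_bhs;
	if (oldmask & preempts_irq)		/* true: reader is atomic */
		mask &= ~tmp;			/* don't change bh bits ... */
	if ((oldmask | mask) & preempts_irq)	/* true */
		mask |= oldmask & tmp;		/* ... keep the one already held */

So while the reader is atomic before or after the transition, the
non-atomic bh state is left unchanged.
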
> +
>  	return mask ?: RCUTORTURE_RDR_RCU;
>  }
>  
