* timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
@ 2016-01-12 20:03 Sasha Levin
  2016-01-12 20:18 ` Peter Zijlstra
  2016-01-13  9:05 ` Thomas Gleixner
  0 siblings, 2 replies; 15+ messages in thread
From: Sasha Levin @ 2016-01-12 20:03 UTC (permalink / raw)
  To: Thomas Gleixner, LKML

Hi all,

While fuzzing with trinity inside a KVM tools guest, running the latest -next
kernel, I've hit the following lockdep warning:

[ 3408.411602] ======================================================

[ 3408.418607] [ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]

[ 3408.419322] 4.4.0-rc8-next-20160111-sasha-00024-g376a9c2-dirty #2782 Not tainted

[ 3408.420171] ------------------------------------------------------

[ 3408.420984] trinity-c65/30907 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:

[ 3408.421869] (&lock->wait_lock){+.+...}, at: rt_mutex_slowunlock (kernel/locking/rtmutex.c:1266)
[ 3408.423182]

[ 3408.423182] and this task is already holding:

[ 3408.423916] (&(&new_timer->it_lock)->rlock){-.-...}, at: __lock_timer (kernel/time/posix-timers.c:707)
[ 3408.425281] which would create a new lock dependency:

[ 3408.425942]  (&(&new_timer->it_lock)->rlock){-.-...} -> (&lock->wait_lock){+.+...}

[ 3408.427118]

[ 3408.427118] but this new dependency connects a HARDIRQ-irq-safe lock:

[ 3408.428093]  (&(&new_timer->it_lock)->rlock){-.-...}

... which became HARDIRQ-irq-safe at:

[ 3408.429232] __lock_acquire (kernel/locking/lockdep.c:2796 kernel/locking/lockdep.c:3162)
[ 3408.430075] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.430814] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.431584] run_posix_cpu_timers (include/linux/list.h:156 kernel/time/posix-cpu-timers.c:1231)
[ 3408.437683] update_process_times (kernel/time/timer.c:1427)
[ 3408.438528] tick_sched_handle (kernel/time/tick-sched.c:152)
[ 3408.439305] tick_sched_timer (kernel/time/tick-sched.c:1089)
[ 3408.440086] __hrtimer_run_queues (kernel/time/hrtimer.c:1233 kernel/time/hrtimer.c:1295)
[ 3408.446607] hrtimer_interrupt (kernel/time/hrtimer.c:1332)
[ 3408.447427] smp_trace_apic_timer_interrupt (./arch/x86/include/asm/apic.h:659 arch/x86/kernel/apic/apic.c:953)
[ 3408.448406] smp_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:956)
[ 3408.460706] apic_timer_interrupt (arch/x86/entry/entry_64.S:687)
[ 3408.461517] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.462259] sync_inodes_sb (fs/fs-writeback.c:2133 fs/fs-writeback.c:2292)
[ 3408.463042] sync_inodes_one_sb (fs/sync.c:74)
[ 3408.463826] iterate_supers (fs/super.c:535)
[ 3408.464583] sys_sync (fs/sync.c:113)
[ 3408.465239] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:186)
[ 3408.466074]

[ 3408.466074] to a HARDIRQ-irq-unsafe lock:

[ 3408.466732]  (&lock->wait_lock){+.+...}

... which became HARDIRQ-irq-unsafe at:

[ 3408.467701] ... __lock_acquire (kernel/locking/lockdep.c:2813 kernel/locking/lockdep.c:3162)
[ 3408.468551] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.469240] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.469987] rt_mutex_slowlock (kernel/locking/rtmutex.c:1184)
[ 3408.470722] rt_mutex_lock (kernel/locking/rtmutex.c:1402)
[ 3408.471416] rcu_boost_kthread (kernel/rcu/tree_plugin.h:1033 kernel/rcu/tree_plugin.h:1055)
[ 3408.472253] kthread (kernel/kthread.c:209)
[ 3408.472937] ret_from_fork (arch/x86/entry/entry_64.S:469)
[ 3408.473642]

[ 3408.473642] other info that might help us debug this:

[ 3408.473642]

[ 3408.474461]  Possible interrupt unsafe locking scenario:

[ 3408.474461]

[ 3408.475239]        CPU0                    CPU1

[ 3408.475809]        ----                    ----

[ 3408.476380]   lock(&lock->wait_lock);

[ 3408.476925]                                local_irq_disable();

[ 3408.477640]                                lock(&(&new_timer->it_lock)->rlock);

[ 3408.478607]                                lock(&lock->wait_lock);

[ 3408.479445]   <Interrupt>

[ 3408.479796]     lock(&(&new_timer->it_lock)->rlock);

[ 3408.480504]

[ 3408.480504]  *** DEADLOCK ***

[ 3408.480504]

[ 3408.481270] 2 locks held by trinity-c65/30907:

[ 3408.481826] #0: (rcu_read_lock){......}, at: __lock_timer (include/linux/rcupdate.h:872 kernel/time/posix-timers.c:704)
[ 3408.483007] #1: (&(&new_timer->it_lock)->rlock){-.-...}, at: __lock_timer (kernel/time/posix-timers.c:707)
[ 3408.484369]

the dependencies between HARDIRQ-irq-safe lock and the holding lock:

[ 3408.485317] -> (&(&new_timer->it_lock)->rlock){-.-...} ops: 3448 {

[ 3408.486264]    IN-HARDIRQ-W at:

[ 3408.486707] __lock_acquire (kernel/locking/lockdep.c:2796 kernel/locking/lockdep.c:3162)
[ 3408.487670] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.504632] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.505511] run_posix_cpu_timers (include/linux/list.h:156 kernel/time/posix-cpu-timers.c:1231)
[ 3408.506496] update_process_times (kernel/time/timer.c:1427)
[ 3408.507473] tick_sched_handle (kernel/time/tick-sched.c:152)
[ 3408.508447] tick_sched_timer (kernel/time/tick-sched.c:1089)
[ 3408.509366] __hrtimer_run_queues (kernel/time/hrtimer.c:1233 kernel/time/hrtimer.c:1295)
[ 3408.510356] hrtimer_interrupt (kernel/time/hrtimer.c:1332)
[ 3408.511308] smp_trace_apic_timer_interrupt (./arch/x86/include/asm/apic.h:659 arch/x86/kernel/apic/apic.c:953)
[ 3408.512386] smp_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:956)
[ 3408.513380] apic_timer_interrupt (arch/x86/entry/entry_64.S:687)
[ 3408.514351] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.515112] sync_inodes_sb (fs/fs-writeback.c:2133 fs/fs-writeback.c:2292)
[ 3408.528093] sync_inodes_one_sb (fs/sync.c:74)
[ 3408.529101] iterate_supers (fs/super.c:535)
[ 3408.530052] sys_sync (fs/sync.c:113)
[ 3408.530926] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:186)
[ 3408.531988]    IN-SOFTIRQ-W at:

[ 3408.532442] __lock_acquire (kernel/locking/lockdep.c:2799 kernel/locking/lockdep.c:3162)
[ 3408.533415] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.534345] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.535239] run_posix_cpu_timers (include/linux/list.h:156 kernel/time/posix-cpu-timers.c:1231)
[ 3408.552300] update_process_times (kernel/time/timer.c:1427)
[ 3408.553305] tick_sched_handle (kernel/time/tick-sched.c:152)
[ 3408.554283] tick_sched_timer (kernel/time/tick-sched.c:1089)
[ 3408.555241] __hrtimer_run_queues (kernel/time/hrtimer.c:1233 kernel/time/hrtimer.c:1295)
[ 3408.556226] hrtimer_interrupt (kernel/time/hrtimer.c:1332)
[ 3408.557101] smp_trace_apic_timer_interrupt (./arch/x86/include/asm/apic.h:659 arch/x86/kernel/apic/apic.c:953)
[ 3408.558131] smp_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:956)
[ 3408.559122] apic_timer_interrupt (arch/x86/entry/entry_64.S:687)
[ 3408.560128] irq_exit (kernel/softirq.c:350 kernel/softirq.c:391)
[ 3408.561008] smp_trace_apic_timer_interrupt (./arch/x86/include/asm/irq_regs.h:26 arch/x86/kernel/apic/apic.c:955)
[ 3408.562123] smp_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:956)
[ 3408.563153] apic_timer_interrupt (arch/x86/entry/entry_64.S:687)
[ 3408.566483]    INITIAL USE at:

[ 3408.566917] __lock_acquire (kernel/locking/lockdep.c:3166)
[ 3408.567903] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.568826] _raw_spin_lock_irqsave (include/linux/spinlock_api_smp.h:119 kernel/locking/spinlock.c:159)
[ 3408.569824] exit_itimers (kernel/time/posix-timers.c:936 kernel/time/posix-timers.c:983 kernel/time/posix-timers.c:1008)
[ 3408.570725] do_exit (kernel/exit.c:724)
[ 3408.571606] do_group_exit (kernel/exit.c:862)
[ 3408.572539] get_signal (kernel/signal.c:2327)
[ 3408.573458] do_signal (arch/x86/kernel/signal.c:781)
[ 3408.574357] exit_to_usermode_loop (arch/x86/entry/common.c:249)
[ 3408.575335] syscall_return_slowpath (./arch/x86/include/asm/jump_label.h:35 include/linux/context_tracking_state.h:30 include/linux/context_tracking.h:24 arch/x86/entry/common.c:284 arch/x86/entry/common.c:344)
[ 3408.576356] int_ret_from_sys_call (arch/x86/entry/entry_64.S:282)
[ 3408.577326]  }

[ 3408.577574] ... key at: __key.35766 (??:?)
[ 3408.578477]  ... acquired at:

[ 3408.578863] check_irq_usage (kernel/locking/lockdep.c:1649)
[ 3408.579640] __lock_acquire (kernel/locking/lockdep_states.h:7 kernel/locking/lockdep.c:1857 kernel/locking/lockdep.c:1958 kernel/locking/lockdep.c:2144 kernel/locking/lockdep.c:3206)
[ 3408.596505] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.597191] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.597965] rt_mutex_slowunlock (kernel/locking/rtmutex.c:1266)
[ 3408.598720] rt_mutex_unlock (kernel/locking/rtmutex.c:1384 kernel/locking/rtmutex.c:1486)
[ 3408.599499] rcu_read_unlock_special (kernel/rcu/tree_plugin.h:503)
[ 3408.603731] __rcu_read_unlock (kernel/rcu/update.c:223)
[ 3408.604452] __lock_timer (include/linux/rcupdate.h:495 include/linux/rcupdate.h:930 kernel/time/posix-timers.c:709)
[ 3408.605221] SyS_timer_settime (kernel/time/posix-timers.c:903 kernel/time/posix-timers.c:881)
[ 3408.605982] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:186)
[ 3408.606791]

[ 3408.607008]

the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:

[ 3408.608038] -> (&lock->wait_lock){+.+...} ops: 134 {

[ 3408.608843]    HARDIRQ-ON-W at:

[ 3408.609209] __lock_acquire (kernel/locking/lockdep.c:2813 kernel/locking/lockdep.c:3162)
[ 3408.610199] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.611069] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.611996] rt_mutex_slowlock (kernel/locking/rtmutex.c:1184)
[ 3408.612972] rt_mutex_lock (kernel/locking/rtmutex.c:1402)
[ 3408.613856] rcu_boost_kthread (kernel/rcu/tree_plugin.h:1033 kernel/rcu/tree_plugin.h:1055)
[ 3408.614733] kthread (kernel/kthread.c:209)
[ 3408.615599] ret_from_fork (arch/x86/entry/entry_64.S:469)
[ 3408.628604]    SOFTIRQ-ON-W at:

[ 3408.629055] __lock_acquire (kernel/locking/lockdep.c:2817 kernel/locking/lockdep.c:3162)
[ 3408.630046] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.630992] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.638480] rt_mutex_slowlock (kernel/locking/rtmutex.c:1184)
[ 3408.640811] rt_mutex_lock (kernel/locking/rtmutex.c:1402)
[ 3408.644207] rcu_boost_kthread (kernel/rcu/tree_plugin.h:1033 kernel/rcu/tree_plugin.h:1055)
[ 3408.647236] kthread (kernel/kthread.c:209)
[ 3408.650140] ret_from_fork (arch/x86/entry/entry_64.S:469)
[ 3408.653122]    INITIAL USE at:

[ 3408.654723] __lock_acquire (kernel/locking/lockdep.c:3166)
[ 3408.656792] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.658136] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.659612] rt_mutex_slowlock (kernel/locking/rtmutex.c:1184)
[ 3408.660998] rt_mutex_lock (kernel/locking/rtmutex.c:1402)
[ 3408.663768] rcu_boost_kthread (kernel/rcu/tree_plugin.h:1033 kernel/rcu/tree_plugin.h:1055)
[ 3408.672030] kthread (kernel/kthread.c:209)
[ 3408.675053] ret_from_fork (arch/x86/entry/entry_64.S:469)
[ 3408.678075]  }

[ 3408.678996] ... key at: __key.19737 (??:?)
[ 3408.681194]  ... acquired at:

[ 3408.682967] check_irq_usage (kernel/locking/lockdep.c:1649)
[ 3408.684098] __lock_acquire (kernel/locking/lockdep_states.h:7 kernel/locking/lockdep.c:1857 kernel/locking/lockdep.c:1958 kernel/locking/lockdep.c:2144 kernel/locking/lockdep.c:3206)
[ 3408.685496] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.686597] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.687693] rt_mutex_slowunlock (kernel/locking/rtmutex.c:1266)
[ 3408.688916] rt_mutex_unlock (kernel/locking/rtmutex.c:1384 kernel/locking/rtmutex.c:1486)
[ 3408.690042] rcu_read_unlock_special (kernel/rcu/tree_plugin.h:503)
[ 3408.691451] __rcu_read_unlock (kernel/rcu/update.c:223)
[ 3408.692629] __lock_timer (include/linux/rcupdate.h:495 include/linux/rcupdate.h:930 kernel/time/posix-timers.c:709)
[ 3408.693743] SyS_timer_settime (kernel/time/posix-timers.c:903 kernel/time/posix-timers.c:881)
[ 3408.694978] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:186)
[ 3408.696162]

[ 3408.696629]

[ 3408.696629] stack backtrace:

[ 3408.697558] CPU: 0 PID: 30907 Comm: trinity-c65 Not tainted 4.4.0-rc8-next-20160111-sasha-00024-g376a9c2-dirty #2782

[ 3408.699544]  1ffff100188b2ecd 000000003efba18a ffff8800c45976e8 ffffffffa401ab02

[ 3408.701068]  0000000041b58ab3 ffffffffaf1b73b8 ffffffffa401aa37 ffff8800c45976e8

[ 3408.702589]  ffffffffa245a127 000000003efba18a 000000005046ad48 0000000000000001

[ 3408.703754] Call Trace:

[ 3408.704185] dump_stack (lib/dump_stack.c:52)
[ 3408.707084] check_usage (kernel/locking/lockdep.c:1561)
[ 3408.714764] check_irq_usage (kernel/locking/lockdep.c:1649)
[ 3408.715747] __lock_acquire (kernel/locking/lockdep_states.h:7 kernel/locking/lockdep.c:1857 kernel/locking/lockdep.c:1958 kernel/locking/lockdep.c:2144 kernel/locking/lockdep.c:3206)
[ 3408.722807] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3587)
[ 3408.726723] _raw_spin_lock (include/linux/spinlock_api_smp.h:145 kernel/locking/spinlock.c:151)
[ 3408.728982] rt_mutex_slowunlock (kernel/locking/rtmutex.c:1266)
[ 3408.730066] rt_mutex_unlock (kernel/locking/rtmutex.c:1384 kernel/locking/rtmutex.c:1486)
[ 3408.733192] rcu_read_unlock_special (kernel/rcu/tree_plugin.h:503)
[ 3408.735155] __rcu_read_unlock (kernel/rcu/update.c:223)
[ 3408.736090] __lock_timer (include/linux/rcupdate.h:495 include/linux/rcupdate.h:930 kernel/time/posix-timers.c:709)
[ 3408.737947] SyS_timer_settime (kernel/time/posix-timers.c:903 kernel/time/posix-timers.c:881)
[ 3408.744580] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:186)


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-12 20:03 timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected Sasha Levin
@ 2016-01-12 20:18 ` Peter Zijlstra
  2016-01-12 20:52   ` Paul E. McKenney
  2016-01-13  9:05 ` Thomas Gleixner
  1 sibling, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2016-01-12 20:18 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Thomas Gleixner, LKML, Paul McKenney

On Tue, Jan 12, 2016 at 03:03:27PM -0500, Sasha Levin wrote:
> [ 3408.703754] Call Trace:

> [ 3408.733192] rcu_read_unlock_special (kernel/rcu/tree_plugin.h:503)
> [ 3408.735155] __rcu_read_unlock (kernel/rcu/update.c:223)
> [ 3408.736090] __lock_timer (include/linux/rcupdate.h:495 include/linux/rcupdate.h:930 kernel/time/posix-timers.c:709)

I'm thinking this is one of those magic preemptible RCU bits..

---
 kernel/time/posix-timers.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 31d11ac9fa47..09e28733e725 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -701,17 +701,25 @@ static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags)
 	if ((unsigned long long)timer_id > INT_MAX)
 		return NULL;
 
+	/*
+	 * One of the few rules of preemptible RCU is that one cannot do
+	 * rcu_read_unlock() while holding a scheduler (or nested) lock when
+	 * part of the read side critical section was irqs-enabled -- see
+	 * rcu_read_unlock_special().
+	 */
+	local_irq_save(*flags);
 	rcu_read_lock();
 	timr = posix_timer_by_id(timer_id);
 	if (timr) {
-		spin_lock_irqsave(&timr->it_lock, *flags);
+		spin_lock(&timr->it_lock);
 		if (timr->it_signal == current->signal) {
 			rcu_read_unlock();
 			return timr;
 		}
-		spin_unlock_irqrestore(&timr->it_lock, *flags);
+		spin_unlock(&timr->it_lock);
 	}
 	rcu_read_unlock();
+	local_irq_restore(*flags);
 
 	return NULL;
 }


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-12 20:18 ` Peter Zijlstra
@ 2016-01-12 20:52   ` Paul E. McKenney
  0 siblings, 0 replies; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-12 20:52 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Sasha Levin, Thomas Gleixner, LKML

On Tue, Jan 12, 2016 at 09:18:56PM +0100, Peter Zijlstra wrote:
> On Tue, Jan 12, 2016 at 03:03:27PM -0500, Sasha Levin wrote:
> > [ 3408.703754] Call Trace:
> 
> > [ 3408.733192] rcu_read_unlock_special (kernel/rcu/tree_plugin.h:503)
> > [ 3408.735155] __rcu_read_unlock (kernel/rcu/update.c:223)
> > [ 3408.736090] __lock_timer (include/linux/rcupdate.h:495 include/linux/rcupdate.h:930 kernel/time/posix-timers.c:709)
> 
> I'm thinking this is one of those magic preemptible RCU bits..

Hmmm...  Looking back at Sasha's original email, RCU doesn't have much
choice about making ->wait_lock HARDIRQ-irq-unsafe, since it acquires
it via a call to rt_mutex_lock(), which cannot be invoked with irqs
disabled.  In fact, it seems a bit odd to acquire something named
->wait_lock with irqs disabled.

That said...

> ---
>  kernel/time/posix-timers.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> index 31d11ac9fa47..09e28733e725 100644
> --- a/kernel/time/posix-timers.c
> +++ b/kernel/time/posix-timers.c
> @@ -701,17 +701,25 @@ static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags)
>  	if ((unsigned long long)timer_id > INT_MAX)
>  		return NULL;
> 
> +	/*
> +	 * One of the few rules of preemptible RCU is that one cannot do
> +	 * rcu_read_unlock() while holding a scheduler (or nested) lock when
> +	 * part of the read side critical section was irqs-enabled -- see
> +	 * rcu_read_unlock_special().
> +	 */
> +	local_irq_save(*flags);
>  	rcu_read_lock();
>  	timr = posix_timer_by_id(timer_id);
>  	if (timr) {
> -		spin_lock_irqsave(&timr->it_lock, *flags);
> +		spin_lock(&timr->it_lock);
>  		if (timr->it_signal == current->signal) {
>  			rcu_read_unlock();

If ->it_lock is ever acquired while one of the rq or pi locks was held,
Peter's patch is needed.

It is just that I am not seeing what I would expect to see in Sasha's
lockdep splat if that were the case.

							Thanx, Paul

>  			return timr;
>  		}
> -		spin_unlock_irqrestore(&timr->it_lock, *flags);
> +		spin_unlock(&timr->it_lock);
>  	}
>  	rcu_read_unlock();
> +	local_irq_restore(*flags);
> 
>  	return NULL;
>  }
> 


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-12 20:03 timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected Sasha Levin
  2016-01-12 20:18 ` Peter Zijlstra
@ 2016-01-13  9:05 ` Thomas Gleixner
  2016-01-13 16:16   ` Paul E. McKenney
  1 sibling, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2016-01-13  9:05 UTC (permalink / raw)
  To: Sasha Levin; +Cc: LKML, Paul E. McKenney, Peter Zijlstra

Sasha,

On Tue, 12 Jan 2016, Sasha Levin wrote:

Cc'ing Paul, Peter

> While fuzzing with trinity inside a KVM tools guest, running the latest -next
> kernel, I've hit the following lockdep warning:

> [ 3408.474461]  Possible interrupt unsafe locking scenario:
> 
> [ 3408.474461]
> 
> [ 3408.475239]        CPU0                    CPU1
> 
> [ 3408.475809]        ----                    ----
> 
> [ 3408.476380]   lock(&lock->wait_lock);
> 
> [ 3408.476925]                                local_irq_disable();
> 
> [ 3408.477640]                                lock(&(&new_timer->it_lock)->rlock);
>
> [ 3408.478607]                                lock(&lock->wait_lock);

That comes from rcu_read_unlock:

    						rcu_read_unlock()
						 rcu_read_unlock_special()
						 ...
						  rt_mutex_unlock(&rnp->boost_mtx);
   					           raw_spin_lock(&boost_mtx->wait_lock);

> [ 3408.479445]   <Interrupt>
> 
> [ 3408.479796]     lock(&(&new_timer->it_lock)->rlock);

So the task on CPU0 holds rnp->boost_mtx.wait_lock and then the interrupt
deadlocks on the timer->it_lock.

We can fix that particular issue in the posix-timer code by making the
locking symmetric:

	rcu_read_lock();
	spin_lock_irq(timer->lock);

...

	spin_unlock_irq(timer->lock);
	rcu_read_unlock();

instead of:

	rcu_read_lock();
	spin_lock_irq(timer->lock);
	rcu_read_unlock();

...

	spin_unlock_irq(timer->lock);

But the question is, whether this is the only offending code path in tree. We
can avoid the hassle by making rtmutex->wait_lock irq safe.

Thoughts?

Thanks,

	tglx


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-13  9:05 ` Thomas Gleixner
@ 2016-01-13 16:16   ` Paul E. McKenney
  2016-01-14 17:43     ` Thomas Gleixner
  0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-13 16:16 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Wed, Jan 13, 2016 at 10:05:49AM +0100, Thomas Gleixner wrote:
> Sasha,
> 
> On Tue, 12 Jan 2016, Sasha Levin wrote:
> 
> Cc'ing Paul, Peter
> 
> > While fuzzing with trinity inside a KVM tools guest, running the latest -next
> > kernel, I've hit the following lockdep warning:
> 
> > [ 3408.474461]  Possible interrupt unsafe locking scenario:
> > 
> > [ 3408.474461]
> > 
> > [ 3408.475239]        CPU0                    CPU1
> > 
> > [ 3408.475809]        ----                    ----
> > 
> > [ 3408.476380]   lock(&lock->wait_lock);
> > 
> > [ 3408.476925]                                local_irq_disable();
> > 
> > [ 3408.477640]                                lock(&(&new_timer->it_lock)->rlock);
> >
> > [ 3408.478607]                                lock(&lock->wait_lock);
> 
> That comes from rcu_read_unlock:
> 
>     						rcu_read_unlock()
> 						 rcu_read_unlock_special()
> 						 ...
> 						  rt_mutex_unlock(&rnp->boost_mtx);
>    					           raw_spin_lock(&boost_mtx->wait_lock);
> 
> > [ 3408.479445]   <Interrupt>
> > 
> > [ 3408.479796]     lock(&(&new_timer->it_lock)->rlock);
> 
> So the task on CPU0 holds rnp->boost_mtx.wait_lock and then the interrupt
> deadlocks on the timer->it_lock.
> 
> We can fix that particular issue in the posix-timer code by making the
> locking symmetric:
> 
> 	rcu_read_lock();
> 	spin_lock_irq(timer->lock);
> 
> ...
> 
> 	spin_unlock_irq(timer->lock);
> 	rcu_read_unlock();
> 
> instead of:
> 
> 	rcu_read_lock();
> 	spin_lock_irq(timer->lock);
> 	rcu_read_unlock();
> 
> ...
> 
> 	spin_unlock_irq(timer->lock);
> 
> But the question is, whether this is the only offending code path in tree. We
> can avoid the hassle by making rtmutex->wait_lock irq safe.
> 
> Thoughts?

Given that the lock is disabling irq, I don't see a problem with
extending the RCU read-side critical section to cover the entire
irq-disabled region.  Your point about the hassle of finding and fixing
all the other instances of this sort is well taken, however.

							Thanx, Paul


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-13 16:16   ` Paul E. McKenney
@ 2016-01-14 17:43     ` Thomas Gleixner
  2016-01-14 18:18       ` Paul E. McKenney
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2016-01-14 17:43 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Wed, 13 Jan 2016, Paul E. McKenney wrote:
> On Wed, Jan 13, 2016 at 10:05:49AM +0100, Thomas Gleixner wrote:
> > We can fix that particular issue in the posix-timer code by making the
> > locking symmetric:
> > 
> > 	rcu_read_lock();
> > 	spin_lock_irq(timer->lock);
> > 
> > ...
> > 
> > 	spin_unlock_irq(timer->lock);
> > 	rcu_read_unlock();
> > 
> > instead of:
> > 
> > 	rcu_read_lock();
> > 	spin_lock_irq(timer->lock);
> > 	rcu_read_unlock();
> > 
> > ...
> > 
> > 	spin_unlock_irq(timer->lock);
> > 
> > But the question is, whether this is the only offending code path in tree. We
> > can avoid the hassle by making rtmutex->wait_lock irq safe.
> > 
> > Thoughts?
> 
> Given that the lock is disabling irq, I don't see a problem with
> extending the RCU read-side critical section to cover the entire
> irq-disabled region.

I cannot follow here. What would be different if the lock would not disable
irqs? I mean you can get preempted right after rcu_read_lock() before
acquiring the spinlock.

> Your point about the hassle of finding and fixing all the other instances of
> this sort is well taken, however.

Right. We have the pattern 

     rcu_read_lock();
     x = lookup();
     if (x)
	   keep_hold(x)
     rcu_read_unlock();
     return x;

all over the place. Now, keep_hold() can be anything from a refcount to a
spinlock, and I'm not sure that we can force stuff depending on the mechanism
to be completely symmetric. So we are probably better off by making that rcu
unlock machinery more robust.
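
For illustration, a minimal hypothetical instance of that pattern (the struct
and the lookup helper below are made up, it just mirrors what __lock_timer()
ends up doing), to make explicit where rcu_read_unlock() runs with a lock held
and irqs off:

	#include <linux/rcupdate.h>
	#include <linux/spinlock.h>

	struct obj {
		spinlock_t lock;
	};

	struct obj *obj_lookup(int id);	/* some RCU-protected lookup */

	static struct obj *lookup_and_lock(int id, unsigned long *flags)
	{
		struct obj *x;

		rcu_read_lock();
		x = obj_lookup(id);
		if (x)
			spin_lock_irqsave(&x->lock, *flags); /* keep_hold(x) */
		/*
		 * If this task was boosted inside the read-side critical
		 * section, the unlock below can end up in
		 * rcu_read_unlock_special() and take the rt_mutex
		 * ->wait_lock, now with irqs disabled by the lock above.
		 */
		rcu_read_unlock();
		return x;
	}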

Thanks,

	tglx


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-14 17:43     ` Thomas Gleixner
@ 2016-01-14 18:18       ` Paul E. McKenney
  2016-01-14 19:47         ` Thomas Gleixner
  0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-14 18:18 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Thu, Jan 14, 2016 at 06:43:16PM +0100, Thomas Gleixner wrote:
> On Wed, 13 Jan 2016, Paul E. McKenney wrote:
> > On Wed, Jan 13, 2016 at 10:05:49AM +0100, Thomas Gleixner wrote:
> > > We can fix that particular issue in the posix-timer code by making the
> > > locking symmetric:
> > > 
> > > 	rcu_read_lock();
> > > 	spin_lock_irq(timer->lock);
> > > 
> > > ...
> > > 
> > > 	spin_unlock_irq(timer->lock);
> > > 	rcu_read_unlock();
> > > 
> > > instead of:
> > > 
> > > 	rcu_read_lock();
> > > 	spin_lock_irq(timer->lock);
> > > 	rcu_read_unlock();
> > > 
> > > ...
> > > 
> > > 	spin_unlock_irq(timer->lock);
> > > 
> > > But the question is, whether this is the only offending code path in tree. We
> > > can avoid the hassle by making rtmutex->wait_lock irq safe.
> > > 
> > > Thoughts?
> > 
> > Given that the lock is disabling irq, I don't see a problem with
> > extending the RCU read-side critical section to cover the entire
> > irq-disabled region.
> 
> I cannot follow here. What would be different if the lock would not disable
> irqs? I mean you can get preempted right after rcu_read_lock() before
> acquiring the spinlock.

I was thinking in terms of the fact that disabling irqs would block the
grace period for the current implementation of RCU (but -not- SRCU, just
for the record).  You are right that the new version can be preempted
just after the rcu_read_lock() but the same is true of the old pattern
as well.  To avoid this possibility of preemption, the code would need
to look something like this:

	local_irq_disable();
	rcu_read_lock();
	spin_lock(timer->lock);

...

	spin_unlock(timer->lock);
	rcu_read_unlock();
	local_irq_enable();

> > Your point about the hassle of finding and fixing all the other instances of
> > this sort is well taken, however.
> 
> Right. We have the pattern 
> 
>      rcu_read_lock();
>      x = lookup();
>      if (x)
> 	   keep_hold(x)
>      rcu_read_unlock();
>      return x;
> 
> all over the place. Now, keep_hold() can be anything from a refcount to a
> spinlock, and I'm not sure that we can force stuff depending on the mechanism
> to be completely symmetric. So we are probably better off by making that rcu
> unlock machinery more robust.

OK.  If I read the lockdep reports correctly, the issue occurs
when rcu_read_unlock_special() finds that it needs to unboost,
which means doing an rt_mutex_unlock().  This is done outside of
rcu_read_unlock_special()'s irq-disabled region, but of course the caller
might have disabled irqs.

If I remember correctly, disabling irqs across rt_mutex_unlock() gets
me lockdep splats.

I could imagine having a per-CPU pointer to rt_mutex that
rcu_read_unlock() sets, and that is checked at every point that irqs
are enabled, with a call to rt_mutex_unlock() if that pointer is non-NULL.
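
As a very rough sketch (purely hypothetical symbols, nothing like this exists
in the tree), that deferred-unboost idea would look something like:

	#include <linux/percpu.h>
	#include <linux/rtmutex.h>

	/* Hypothetical sketch only. */
	static DEFINE_PER_CPU(struct rt_mutex *, rcu_deferred_unboost);

	/* rcu_read_unlock_special() records the mutex instead of unlocking. */
	static void rcu_defer_unboost(struct rt_mutex *mtx)
	{
		this_cpu_write(rcu_deferred_unboost, mtx);
	}

	/* Would need to be called from every place that (re)enables irqs. */
	static void rcu_do_deferred_unboost(void)
	{
		struct rt_mutex *mtx = this_cpu_read(rcu_deferred_unboost);

		if (mtx) {
			this_cpu_write(rcu_deferred_unboost, NULL);
			rt_mutex_unlock(mtx);
		}
	}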

But perhaps you had something else in mind?

							Thanx, Paul


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-14 18:18       ` Paul E. McKenney
@ 2016-01-14 19:47         ` Thomas Gleixner
  2016-01-15  1:42           ` Paul E. McKenney
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2016-01-14 19:47 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> On Thu, Jan 14, 2016 at 06:43:16PM +0100, Thomas Gleixner wrote:
> > Right. We have the pattern 
> > 
> >      rcu_read_lock();
> >      x = lookup();
> >      if (x)
> > 	   keep_hold(x)
> >      rcu_read_unlock();
> >      return x;
> > 
> > all over the place. Now, keep_hold() can be anything from a refcount to a
> > spinlock, and I'm not sure that we can force stuff depending on the mechanism
> > to be completely symmetric. So we are probably better off by making that rcu
> > unlock machinery more robust.
> 
> OK.  If I read the lockdep reports correctly, the issue occurs
> when rcu_read_unlock_special() finds that it needs to unboost,
> which means doing an rt_mutex_unlock().  This is done outside of
> rcu_read_unlock_special()'s irq-disabled region, but of course the caller
> might have disabled irqs.
> 
> If I remember correctly, disabling irqs across rt_mutex_unlock() gets
> me lockdep splats.

That shouldn't be the case. The splats come from this scenario:

CPU0 	       	      	    CPU1
rtmutex_lock(rcu)		    
  raw_spin_lock(&rcu->lock)
			    rcu_read_lock()
Interrupt		    spin_lock_irq(some->lock);
			    rcu_read_unlock()
			      rcu_read_unlock_special()
			        rtmutex_unlock(rcu)
				  raw_spin_lock(&rcu->lock)
  spin_lock(some->lock)

Now we are deadlocked.
 
> I could imagine having a per-CPU pointer to rt_mutex that
> rcu_read_unlock() sets, and that is checked at every point that irqs
> are enabled, with a call to rt_mutex_unlock() if that pointer is non-NULL.
> 
> But perhaps you had something else in mind?

We can solve that issue by taking rtmutex->wait_lock with irqsave. So the
above becomes:

CPU0				CPU1
rtmutex_lock(rcu)		    
  raw_spin_lock_irq(&rcu->lock)
				rcu_read_lock()
		    	    	spin_lock_irq(some->lock);
			    	rcu_read_unlock()
				  rcu_read_unlock_special()
			          rtmutex_unlock(rcu)
				    raw_spin_lock_irqsave(&rcu->lock, flags)
  raw_spin_unlock_irq(&rcu->lock)
Interrupt			    ...
  spin_lock(some->lock)		    raw_spin_unlock_irqrestore(&rcu->lock, flags)
				...
		    	    	spin_unlock_irq(some->lock);
				
Untested patch below.

Thanks,

	tglx

8<-------------------------

Subject: rtmutex: Make wait_lock irq safe
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 13 Jan 2016 11:25:38 +0100

Add some blurb here.

Not-Yet-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/futex.c           |   18 +++----
 kernel/locking/rtmutex.c |  111 +++++++++++++++++++++++++----------------------
 2 files changed, 69 insertions(+), 60 deletions(-)

--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1226,7 +1226,7 @@ static int wake_futex_pi(u32 __user *uad
 	if (pi_state->owner != current)
 		return -EINVAL;
 
-	raw_spin_lock(&pi_state->pi_mutex.wait_lock);
+	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
 	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
 
 	/*
@@ -1252,22 +1252,22 @@ static int wake_futex_pi(u32 __user *uad
 	else if (curval != uval)
 		ret = -EINVAL;
 	if (ret) {
-		raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 		return ret;
 	}
 
-	raw_spin_lock_irq(&pi_state->owner->pi_lock);
+	raw_spin_lock(&pi_state->owner->pi_lock);
 	WARN_ON(list_empty(&pi_state->list));
 	list_del_init(&pi_state->list);
-	raw_spin_unlock_irq(&pi_state->owner->pi_lock);
+	raw_spin_unlock(&pi_state->owner->pi_lock);
 
-	raw_spin_lock_irq(&new_owner->pi_lock);
+	raw_spin_lock(&new_owner->pi_lock);
 	WARN_ON(!list_empty(&pi_state->list));
 	list_add(&pi_state->list, &new_owner->pi_state_list);
 	pi_state->owner = new_owner;
-	raw_spin_unlock_irq(&new_owner->pi_lock);
+	raw_spin_unlock(&new_owner->pi_lock);
 
-	raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 
 	deboost = rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
 
@@ -2162,11 +2162,11 @@ static int fixup_owner(u32 __user *uaddr
 		 * we returned due to timeout or signal without taking the
 		 * rt_mutex. Too late.
 		 */
-		raw_spin_lock(&q->pi_state->pi_mutex.wait_lock);
+		raw_spin_lock_irq(&q->pi_state->pi_mutex.wait_lock);
 		owner = rt_mutex_owner(&q->pi_state->pi_mutex);
 		if (!owner)
 			owner = rt_mutex_next_owner(&q->pi_state->pi_mutex);
-		raw_spin_unlock(&q->pi_state->pi_mutex.wait_lock);
+		raw_spin_unlock_irq(&q->pi_state->pi_mutex.wait_lock);
 		ret = fixup_pi_state_owner(uaddr, q, owner);
 		goto out;
 	}
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -99,13 +99,14 @@ static inline void mark_rt_mutex_waiters
  * 2) Drop lock->wait_lock
  * 3) Try to unlock the lock with cmpxchg
  */
-static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
+static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
+					unsigned long flags)
 	__releases(lock->wait_lock)
 {
 	struct task_struct *owner = rt_mutex_owner(lock);
 
 	clear_rt_mutex_waiters(lock);
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	/*
 	 * If a new waiter comes in between the unlock and the cmpxchg
 	 * we have two situations:
@@ -147,11 +148,12 @@ static inline void mark_rt_mutex_waiters
 /*
  * Simple slow path only version: lock->owner is protected by lock->wait_lock.
  */
-static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
+static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
+					unsigned long flags)
 	__releases(lock->wait_lock)
 {
 	lock->owner = NULL;
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	return true;
 }
 #endif
@@ -591,7 +593,7 @@ static int rt_mutex_adjust_prio_chain(st
 		/*
 		 * No requeue[7] here. Just release @task [8]
 		 */
-		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+		raw_spin_unlock(&task->pi_lock);
 		put_task_struct(task);
 
 		/*
@@ -599,14 +601,14 @@ static int rt_mutex_adjust_prio_chain(st
 		 * If there is no owner of the lock, end of chain.
 		 */
 		if (!rt_mutex_owner(lock)) {
-			raw_spin_unlock(&lock->wait_lock);
+			raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 			return 0;
 		}
 
 		/* [10] Grab the next task, i.e. owner of @lock */
 		task = rt_mutex_owner(lock);
 		get_task_struct(task);
-		raw_spin_lock_irqsave(&task->pi_lock, flags);
+		raw_spin_lock(&task->pi_lock);
 
 		/*
 		 * No requeue [11] here. We just do deadlock detection.
@@ -621,8 +623,8 @@ static int rt_mutex_adjust_prio_chain(st
 		top_waiter = rt_mutex_top_waiter(lock);
 
 		/* [13] Drop locks */
-		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock(&task->pi_lock);
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 		/* If owner is not blocked, end of chain. */
 		if (!next_lock)
@@ -643,7 +645,7 @@ static int rt_mutex_adjust_prio_chain(st
 	rt_mutex_enqueue(lock, waiter);
 
 	/* [8] Release the task */
-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+	raw_spin_unlock(&task->pi_lock, flags);
 	put_task_struct(task);
 
 	/*
@@ -661,14 +663,14 @@ static int rt_mutex_adjust_prio_chain(st
 		 */
 		if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
 			wake_up_process(rt_mutex_top_waiter(lock)->task);
-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 		return 0;
 	}
 
 	/* [10] Grab the next task, i.e. the owner of @lock */
 	task = rt_mutex_owner(lock);
 	get_task_struct(task);
-	raw_spin_lock_irqsave(&task->pi_lock, flags);
+	raw_spin_lock(&task->pi_lock);
 
 	/* [11] requeue the pi waiters if necessary */
 	if (waiter == rt_mutex_top_waiter(lock)) {
@@ -722,8 +724,8 @@ static int rt_mutex_adjust_prio_chain(st
 	top_waiter = rt_mutex_top_waiter(lock);
 
 	/* [13] Drop the locks */
-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock(&task->pi_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/*
 	 * Make the actual exit decisions [12], based on the stored
@@ -766,8 +768,6 @@ static int rt_mutex_adjust_prio_chain(st
 static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
 				struct rt_mutex_waiter *waiter)
 {
-	unsigned long flags;
-
 	/*
 	 * Before testing whether we can acquire @lock, we set the
 	 * RT_MUTEX_HAS_WAITERS bit in @lock->owner. This forces all
@@ -852,7 +852,7 @@ static int try_to_take_rt_mutex(struct r
 	 * case, but conditionals are more expensive than a redundant
 	 * store.
 	 */
-	raw_spin_lock_irqsave(&task->pi_lock, flags);
+	raw_spin_lock(&task->pi_lock);
 	task->pi_blocked_on = NULL;
 	/*
 	 * Finish the lock acquisition. @task is the new owner. If
@@ -861,7 +861,7 @@ static int try_to_take_rt_mutex(struct r
 	 */
 	if (rt_mutex_has_waiters(lock))
 		rt_mutex_enqueue_pi(task, rt_mutex_top_waiter(lock));
-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+	raw_spin_unlock(&task->pi_lock);
 
 takeit:
 	/* We got the lock. */
@@ -894,7 +894,6 @@ static int task_blocks_on_rt_mutex(struc
 	struct rt_mutex_waiter *top_waiter = waiter;
 	struct rt_mutex *next_lock;
 	int chain_walk = 0, res;
-	unsigned long flags;
 
 	/*
 	 * Early deadlock detection. We really don't want the task to
@@ -908,7 +907,7 @@ static int task_blocks_on_rt_mutex(struc
 	if (owner == task)
 		return -EDEADLK;
 
-	raw_spin_lock_irqsave(&task->pi_lock, flags);
+	raw_spin_lock(&task->pi_lock);
 	__rt_mutex_adjust_prio(task);
 	waiter->task = task;
 	waiter->lock = lock;
@@ -921,12 +920,12 @@ static int task_blocks_on_rt_mutex(struc
 
 	task->pi_blocked_on = waiter;
 
-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+	raw_spin_unlock(&task->pi_lock);
 
 	if (!owner)
 		return 0;
 
-	raw_spin_lock_irqsave(&owner->pi_lock, flags);
+	raw_spin_lock(&owner->pi_lock);
 	if (waiter == rt_mutex_top_waiter(lock)) {
 		rt_mutex_dequeue_pi(owner, top_waiter);
 		rt_mutex_enqueue_pi(owner, waiter);
@@ -941,7 +940,7 @@ static int task_blocks_on_rt_mutex(struc
 	/* Store the lock on which owner is blocked or NULL */
 	next_lock = task_blocked_on_lock(owner);
 
-	raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
+	raw_spin_unlock(&owner->pi_lock);
 	/*
 	 * Even if full deadlock detection is on, if the owner is not
 	 * blocked itself, we can avoid finding this out in the chain
@@ -957,12 +956,12 @@ static int task_blocks_on_rt_mutex(struc
 	 */
 	get_task_struct(owner);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irq(&lock->wait_lock);
 
 	res = rt_mutex_adjust_prio_chain(owner, chwalk, lock,
 					 next_lock, waiter, task);
 
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irq(&lock->wait_lock);
 
 	return res;
 }
@@ -977,9 +976,8 @@ static void mark_wakeup_next_waiter(stru
 				    struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter *waiter;
-	unsigned long flags;
 
-	raw_spin_lock_irqsave(&current->pi_lock, flags);
+	raw_spin_lock(&current->pi_lock);
 
 	waiter = rt_mutex_top_waiter(lock);
 
@@ -1001,7 +999,7 @@ static void mark_wakeup_next_waiter(stru
 	 */
 	lock->owner = (void *) RT_MUTEX_HAS_WAITERS;
 
-	raw_spin_unlock_irqrestore(&current->pi_lock, flags);
+	raw_spin_unlock(&current->pi_lock);
 
 	wake_q_add(wake_q, waiter->task);
 }
@@ -1018,12 +1016,11 @@ static void remove_waiter(struct rt_mute
 	bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
 	struct task_struct *owner = rt_mutex_owner(lock);
 	struct rt_mutex *next_lock;
-	unsigned long flags;
 
-	raw_spin_lock_irqsave(&current->pi_lock, flags);
+	raw_spin_lock(&current->pi_lock);
 	rt_mutex_dequeue(lock, waiter);
 	current->pi_blocked_on = NULL;
-	raw_spin_unlock_irqrestore(&current->pi_lock, flags);
+	raw_spin_unlock(&current->pi_lock);
 
 	/*
 	 * Only update priority if the waiter was the highest priority
@@ -1032,7 +1029,7 @@ static void remove_waiter(struct rt_mute
 	if (!owner || !is_top_waiter)
 		return;
 
-	raw_spin_lock_irqsave(&owner->pi_lock, flags);
+	raw_spin_lock(&owner->pi_lock);
 
 	rt_mutex_dequeue_pi(owner, waiter);
 
@@ -1044,7 +1041,7 @@ static void remove_waiter(struct rt_mute
 	/* Store the lock on which owner is blocked or NULL */
 	next_lock = task_blocked_on_lock(owner);
 
-	raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
+	raw_spin_unlock(&owner->pi_lock);
 
 	/*
 	 * Don't walk the chain, if the owner task is not blocked
@@ -1056,12 +1053,12 @@ static void remove_waiter(struct rt_mute
 	/* gets dropped in rt_mutex_adjust_prio_chain()! */
 	get_task_struct(owner);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irq(&lock->wait_lock);
 
 	rt_mutex_adjust_prio_chain(owner, RT_MUTEX_MIN_CHAINWALK, lock,
 				   next_lock, NULL, current);
 
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irq(&lock->wait_lock);
 }
 
 /*
@@ -1129,13 +1126,13 @@ static int __sched
 				break;
 		}
 
-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock_irq(&lock->wait_lock);
 
 		debug_rt_mutex_print_deadlock(waiter);
 
 		schedule();
 
-		raw_spin_lock(&lock->wait_lock);
+		raw_spin_lock_irq(&lock->wait_lock);
 		set_current_state(state);
 	}
 
@@ -1172,17 +1169,26 @@ rt_mutex_slowlock(struct rt_mutex *lock,
 		  enum rtmutex_chainwalk chwalk)
 {
 	struct rt_mutex_waiter waiter;
+	unsigned long flags;
 	int ret = 0;
 
 	debug_rt_mutex_init_waiter(&waiter);
 	RB_CLEAR_NODE(&waiter.pi_tree_entry);
 	RB_CLEAR_NODE(&waiter.tree_entry);
 
-	raw_spin_lock(&lock->wait_lock);
+	/*
+	 * Technically we could use raw_spin_[un]lock_irq() here, but this can
+	 * be called in early boot if the cmpxchg() fast path is disabled
+	 * (debug, no architecture support). In this case we will acquire the
+	 * rtmutex with lock->wait_lock held. But we cannot unconditionally
+	 * enable interrupts in that early boot case. So we need to use the
+	 * irqsave/restore variants.
+	 */
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 
 	/* Try to acquire the lock again: */
 	if (try_to_take_rt_mutex(lock, current, NULL)) {
-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 		return 0;
 	}
 
@@ -1211,7 +1217,7 @@ rt_mutex_slowlock(struct rt_mutex *lock,
 	 */
 	fixup_rt_mutex_waiters(lock);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* Remove pending timer: */
 	if (unlikely(timeout))
@@ -1227,6 +1233,7 @@ rt_mutex_slowlock(struct rt_mutex *lock,
  */
 static inline int rt_mutex_slowtrylock(struct rt_mutex *lock)
 {
+	unsigned long flags;
 	int ret;
 
 	/*
@@ -1241,7 +1248,7 @@ static inline int rt_mutex_slowtrylock(s
 	 * The mutex has currently no owner. Lock the wait lock and
 	 * try to acquire the lock.
 	 */
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 
 	ret = try_to_take_rt_mutex(lock, current, NULL);
 
@@ -1251,7 +1258,7 @@ static inline int rt_mutex_slowtrylock(s
 	 */
 	fixup_rt_mutex_waiters(lock);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	return ret;
 }
@@ -1263,7 +1270,9 @@ static inline int rt_mutex_slowtrylock(s
 static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 					struct wake_q_head *wake_q)
 {
-	raw_spin_lock(&lock->wait_lock);
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 
 	debug_rt_mutex_unlock(lock);
 
@@ -1302,10 +1311,10 @@ static bool __sched rt_mutex_slowunlock(
 	 */
 	while (!rt_mutex_has_waiters(lock)) {
 		/* Drops lock->wait_lock ! */
-		if (unlock_rt_mutex_safe(lock) == true)
+		if (unlock_rt_mutex_safe(lock, flags) == true)
 			return false;
 		/* Relock the rtmutex and try again */
-		raw_spin_lock(&lock->wait_lock);
+		raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	}
 
 	/*
@@ -1316,7 +1325,7 @@ static bool __sched rt_mutex_slowunlock(
 	 */
 	mark_wakeup_next_waiter(wake_q, lock);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* check PI boosting */
 	return true;
@@ -1596,10 +1605,10 @@ int rt_mutex_start_proxy_lock(struct rt_
 {
 	int ret;
 
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irq(&lock->wait_lock);
 
 	if (try_to_take_rt_mutex(lock, task, NULL)) {
-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock_irq(&lock->wait_lock);
 		return 1;
 	}
 
@@ -1620,7 +1629,7 @@ int rt_mutex_start_proxy_lock(struct rt_
 	if (unlikely(ret))
 		remove_waiter(lock, waiter);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irq(&lock->wait_lock);
 
 	debug_rt_mutex_print_deadlock(waiter);
 
@@ -1668,7 +1677,7 @@ int rt_mutex_finish_proxy_lock(struct rt
 {
 	int ret;
 
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irq(&lock->wait_lock);
 
 	set_current_state(TASK_INTERRUPTIBLE);
 
@@ -1684,7 +1693,7 @@ int rt_mutex_finish_proxy_lock(struct rt
 	 */
 	fixup_rt_mutex_waiters(lock);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irq(&lock->wait_lock);
 
 	return ret;
 }


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-14 19:47         ` Thomas Gleixner
@ 2016-01-15  1:42           ` Paul E. McKenney
  2016-01-15 10:03             ` Thomas Gleixner
  0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-15  1:42 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Thu, Jan 14, 2016 at 08:47:41PM +0100, Thomas Gleixner wrote:
> On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > On Thu, Jan 14, 2016 at 06:43:16PM +0100, Thomas Gleixner wrote:
> > > Right. We have the pattern 
> > > 
> > >      rcu_read_lock();
> > >      x = lookup();
> > >      if (x)
> > > 	   keep_hold(x)
> > >      rcu_read_unlock();
> > >      return x;
> > > 
> > > all over the place. Now, keep_hold() can be anything from a refcount to a
> > > spinlock, and I'm not sure that we can force stuff depending on the mechanism
> > > to be completely symmetric. So we are probably better off by making that rcu
> > > unlock machinery more robust.
> > 
> > OK.  If I read the lockdep reports correctly, the issue occurs
> > when rcu_read_unlock_special() finds that it needs to unboost,
> > which means doing an rt_mutex_unlock().  This is done outside of
> > rcu_read_unlock_special()'s irq-disabled region, but of course the caller
> > might have disabled irqs.
> > 
> > If I remember correctly, disabling irqs across rt_mutex_unlock() gets
> > me lockdep splats.
> 
> That shouldn't be the case. The splats come from this scenario:
> 
> CPU0 	       	      	    CPU1
> rtmutex_lock(rcu)		    
>   raw_spin_lock(&rcu->lock)
> 			    rcu_read_lock()
> Interrupt		    spin_lock_irq(some->lock);
> 			    rcu_read_unlock()
> 			      rcu_read_unlock_special()
> 			        rtmutex_unlock(rcu)
> 				  raw_spin_lock(&rcu->lock)
>   spin_lock(some->lock)
> 
> Now we are deadlocked.
> 
> > I could imagine having a per-CPU pointer to rt_mutex that
> > rcu_read_unlock() sets, and that is checked at every point that irqs
> > are enabled, with a call to rt_mutex_unlock() if that pointer is non-NULL.
> > 
> > But perhaps you had something else in mind?
> 
> We can solve that issue by taking rtmutex->wait_lock with irqsave. So the
> above becomes:
> 
> CPU0				CPU1
> rtmutex_lock(rcu)		    
>   raw_spin_lock_irq(&rcu->lock)
> 				rcu_read_lock()
> 		    	    	spin_lock_irq(some->lock);
> 			    	rcu_read_unlock()
> 				  rcu_read_unlock_special()
> 			          rtmutex_unlock(rcu)
> 				    raw_spin_lock_irqsave(&rcu->lock, flags)
>   raw_spin_unlock_irq(&rcu->lock)
> Interrupt			    ...
>   spin_lock(some->lock)		    raw_spin_unlock_irqrestore(&rcu->lock, flags)
> 				...
> 		    	    	spin_unlock_irq(some->lock);
> 				
> Untested patch below.

One small fix to make it build below.  Started rcutorture, somewhat
pointlessly given that the splat doesn't appear on my setup.

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 18e8b41ff796..1fdf6470dfd1 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -645,7 +645,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 	rt_mutex_enqueue(lock, waiter);
 
 	/* [8] Release the task */
-	raw_spin_unlock(&task->pi_lock, flags);
+	raw_spin_unlock(&task->pi_lock);
 	put_task_struct(task);
 
 	/*


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-15  1:42           ` Paul E. McKenney
@ 2016-01-15 10:03             ` Thomas Gleixner
  2016-01-15 21:11               ` Paul E. McKenney
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2016-01-15 10:03 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > Untested patch below.
> 
> One small fix to make it build below.  Started rcutorture, somewhat
> pointlessly given that the splat doesn't appear on my setup.

Well, at least it tells us whether the change explodes by itself.


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-15 10:03             ` Thomas Gleixner
@ 2016-01-15 21:11               ` Paul E. McKenney
  2016-01-15 22:10                 ` Paul E. McKenney
  0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-15 21:11 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Fri, Jan 15, 2016 at 11:03:24AM +0100, Thomas Gleixner wrote:
> On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > > Untested patch below.
> > 
> > One small fix to make it build below.  Started rcutorture, somewhat
> > pointlessly given that the splat doesn't appear on my setup.
> 
> Well, at least it tells us whether the change explodes by itself.

Hmmm...

So this is a strange one.  I have been seeing increasing instability
in mainline over the past couple of releases, with the main symptom
being that the kernel decides that awakening RCU's grace-period kthreads
is an optional activity.  The usual situation is that the kthread is
blocked for tens of seconds in a wait_event_interruptible_timeout(),
despite having a three-jiffy timeout.  Doing periodic wakeups from
the scheduling-clock interrupt seems to clear things up, but such hacks
should not be necessary.

Normally, I have to run for some hours to have a good chance of seeing
this happen.  This change triggered in a 30-minute run.  Not only that,
but in a .config scenario that is normally very hard to trigger.  This
scenario does involve CPU hotplug, and I am re-running with CPU hotplug
disabled.

That said, I am starting to hear reports of people hitting this without
CPU hotplug operations...

							Thanx, Paul


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-15 21:11               ` Paul E. McKenney
@ 2016-01-15 22:10                 ` Paul E. McKenney
  2016-01-15 23:14                   ` Paul E. McKenney
  0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-15 22:10 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Fri, Jan 15, 2016 at 01:11:25PM -0800, Paul E. McKenney wrote:
> On Fri, Jan 15, 2016 at 11:03:24AM +0100, Thomas Gleixner wrote:
> > On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > > > Untested patch below.
> > > 
> > > One small fix to make it build below.  Started rcutorture, somewhat
> > > pointlessly given that the splat doesn't appear on my setup.
> > 
> > Well, at least it tells us whether the change explodes by itself.
> 
> Hmmm...
> 
> So this is a strange one.  I have been seeing increasing instability
> in mainline over the past couple of releases, with the main symptom
> being that the kernel decides that awakening RCU's grace-period kthreads
> is an optional activity.  The usual situation is that the kthread is
> blocked for tens of seconds in a wait_event_interruptible_timeout(),
> despite having a three-jiffy timeout.  Doing periodic wakeups from
> the scheduling-clock interrupt seems to clear things up, but such hacks
> should not be necessary.
> 
> Normally, I have to run for some hours to have a good chance of seeing
> this happen.  This change triggered in a 30-minute run.  Not only that,
> but in a .config scenario that is normally very hard to trigger.  This
> scenario does involve CPU hotplug, and I am re-running with CPU hotplug
> disabled.
> 
> That said, I am starting to hear reports of people hitting this without
> CPU hotplug operations...

And without hotplug operations, instead of dying repeatedly in 30 minutes,
it goes four hours with no complaints.  Next trying wakeups.

							Thanx, Paul


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-15 22:10                 ` Paul E. McKenney
@ 2016-01-15 23:14                   ` Paul E. McKenney
  2016-01-29 15:27                     ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-15 23:14 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Sasha Levin, LKML, Peter Zijlstra

On Fri, Jan 15, 2016 at 02:10:45PM -0800, Paul E. McKenney wrote:
> On Fri, Jan 15, 2016 at 01:11:25PM -0800, Paul E. McKenney wrote:
> > On Fri, Jan 15, 2016 at 11:03:24AM +0100, Thomas Gleixner wrote:
> > > On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > > > > Untested patch below.
> > > > 
> > > > One small fix to make it build below.  Started rcutorture, somewhat
> > > > pointlessly given that the splat doesn't appear on my setup.
> > > 
> > > Well, at least it tells us whether the change explodes by itself.
> > 
> > Hmmm...
> > 
> > So this is a strange one.  I have been seeing increasing instability
> > in mainline over the past couple of releases, with the main symptom
> > being that the kernel decides that awakening RCU's grace-period kthreads
> > is an optional activity.  The usual situation is that the kthread is
> > blocked for tens of seconds in a wait_event_interruptible_timeout(),
> > despite having a three-jiffy timeout.  Doing periodic wakeups from
> > the scheduling-clock interrupt seems to clear things up, but such hacks
> > should not be necessary.
> > 
> > Normally, I have to run for some hours to have a good chance of seeing
> > this happen.  This change triggered in a 30-minute run.  Not only that,
> > but in a .config scenario that is normally very hard to trigger.  This
> > scenario does involve CPU hotplug, and I am re-running with CPU hotplug
> > disabled.
> > 
> > That said, I am starting to hear reports of people hitting this without
> > CPU hotplug operations...
> 
> And without hotplug operations, instead of dying repeatedly in 30 minutes,
> it goes four hours with no complaints.  Next trying wakeups.

And if I make the scheduling-clock interrupt send extra wakeups to the RCU
grace-period kthread when needed, things work even with CPU hotplug going.

The "when needed" means any time that the RCU grace-period kthread has
been sleeping three times as long as the timeout interval.  If the first
wakeup does nothing, it does another wakeup once per second.
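
In rough sketch form (hypothetical symbols only, not the actual hack), the
check done from the scheduling-clock interrupt is along these lines:

	#include <linux/jiffies.h>
	#include <linux/wait.h>

	/* Hypothetical state, just to illustrate the heuristic. */
	static unsigned long gp_wait_start;	/* when the kthread went to sleep */
	static unsigned long gp_timeout;	/* its (three-jiffy) timeout */
	static unsigned long last_forced_wakeup;

	static void rcu_check_forced_gp_wakeup(wait_queue_head_t *gp_wq)
	{
		unsigned long j = jiffies;

		/* Asleep for more than three timeout intervals? */
		if (!time_after(j, gp_wait_start + 3 * gp_timeout))
			return;

		/* First wakeup right away, then retry at most once per second. */
		if (!last_forced_wakeup || time_after(j, last_forced_wakeup + HZ)) {
			last_forced_wakeup = j;
			wake_up(gp_wq);
		}
	}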

So it looks like this change makes an existing problem much worse, as
opposed to introducing a new problem.

							Thanx, Paul


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-15 23:14                   ` Paul E. McKenney
@ 2016-01-29 15:27                     ` Peter Zijlstra
  2016-01-31  0:28                       ` Paul E. McKenney
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2016-01-29 15:27 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: Thomas Gleixner, Sasha Levin, LKML

On Fri, Jan 15, 2016 at 03:14:10PM -0800, Paul E. McKenney wrote:
> And if I make the scheduling-clock interrupt send extra wakeups to the RCU
> grace-period kthread when needed, things work even with CPU hotplug going.
> 
> The "when needed" means any time that the RCU grace-period kthread has
> been sleeping three times as long as the timeout interval.  If the first
> wakeup does nothing, it does another wakeup once per second.
> 
> So it looks like this change makes an existing problem much worse, as
> opposed to introducing a new problem.

I have a vague idea about a possible race window. Have you been
observing this on PPC or x86?

The reason I'm asking is that PPC (obviously) allows for more races :-)


* Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  2016-01-29 15:27                     ` Peter Zijlstra
@ 2016-01-31  0:28                       ` Paul E. McKenney
  0 siblings, 0 replies; 15+ messages in thread
From: Paul E. McKenney @ 2016-01-31  0:28 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Thomas Gleixner, Sasha Levin, LKML

On Fri, Jan 29, 2016 at 04:27:35PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 15, 2016 at 03:14:10PM -0800, Paul E. McKenney wrote:
> > And if I make the scheduling-clock interrupt send extra wakeups to the RCU
> > grace-period kthread when needed, things work even with CPU hotplug going.
> > 
> > The "when needed" means any time that the RCU grace-period kthread has
> > been sleeping three times as long as the timeout interval.  If the first
> > wakeup does nothing, it does another wakeup once per second.
> > 
> > So it looks like this change makes an existing problem much worse, as
> > opposed to introducing a new problem.
> 
> I have a vague idea about a possible race window. Have you been
> observing this on PPC or x86?
> 
> The reason I'm asking is that PPC (obviously) allows for more races :-)

;-)

I have been seeing this on x86.

							Thanx, Paul

