linux-rt-users.vger.kernel.org archive mirror
* [PATCH RT 0/7] Linux 3.2.77-rt111-rc1
@ 2016-02-26 21:44 Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 1/7] rtmutex: Handle non enqueued waiters gracefully Steven Rostedt
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 3.2.77-rt111-rc1.

Please scream at me if I messed something up. Please test the patches too.

Note: I'm bringing this tree up to date with the stable patches in 4.1.7-rt8.
Then I'll be pulling 4.1-rt into stable maintenance, as development is now on 4.4-rt.
After that, I'll be pulling the 4.1-rt stable changes into the stable trees.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this release candidate will be converted to the next
main release on 2/29/2016.

Enjoy,

-- Steve


To build 3.2.77-rt111-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.2.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.2.77.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.2/patch-3.2.77-rt111-rc1.patch.xz

You can also build from 3.2.77-rt110 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.2/incr/patch-3.2.77-rt110-rt111-rc1.patch.xz


Changes from 3.2.77-rt110:

---


Josh Cartwright (1):
      net: Make synchronize_rcu_expedited() conditional on !RT_FULL

Peter Zijlstra (1):
      sched: Introduce the trace_sched_waking tracepoint

Sebastian Andrzej Siewior (1):
      dump stack: don't disable preemption during trace

Steven Rostedt (Red Hat) (2):
      rtmutex: Have slowfn of rt_mutex_timed_fastlock() use enum
      Linux 3.2.77-rt111-rc1

Thomas Gleixner (1):
      rtmutex: Handle non enqueued waiters gracefully

bmouring@ni.com (1):
      rtmutex: Use chainwalking control enum

----
 arch/x86/kernel/dumpstack_64.c    |  8 ++++----
 include/trace/events/sched.h      | 30 +++++++++++++++++++++---------
 kernel/rtmutex.c                  |  6 +++---
 kernel/sched.c                    |  7 +++++--
 kernel/trace/trace_sched_switch.c |  2 +-
 kernel/trace/trace_sched_wakeup.c |  2 +-
 localversion-rt                   |  2 +-
 net/core/dev.c                    |  2 +-
 8 files changed, 37 insertions(+), 22 deletions(-)


* [PATCH RT 1/7] rtmutex: Handle non enqueued waiters gracefully
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 2/7] rtmutex: Use chainwalking control enum Steven Rostedt
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0001-rtmutex-Handle-non-enqueued-waiters-gracefully.patch --]
[-- Type: text/plain, Size: 1230 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Yimin debugged that, when a PI wakeup is in progress while
rt_mutex_start_proxy_lock() calls task_blocks_on_rt_mutex(), the latter
returns -EAGAIN, and in consequence the remove_waiter() call runs into
a BUG_ON() because there is nothing to remove.

Guard it with rt_mutex_has_waiters(). This is a quick fix which is
easy to backport. The proper fix is to have a central check in
remove_waiter() so we can call it unconditionally.

Reported-and-debugged-by: Yimin Deng <yimin11.deng@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 52cab2720a06..f074834c1ad6 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -1935,7 +1935,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 		ret = 0;
 	}
 
-	if (unlikely(ret))
+	if (ret && rt_mutex_has_waiters(lock))
 		remove_waiter(lock, waiter);
 
 	raw_spin_unlock(&lock->wait_lock);
-- 
2.7.0




* [PATCH RT 2/7] rtmutex: Use chainwalking control enum
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 1/7] rtmutex: Handle non enqueued waiters gracefully Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 3/7] dump stack: dont disable preemption during trace Steven Rostedt
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Brad Mouring

[-- Attachment #1: 0002-rtmutex-Use-chainwalking-control-enum.patch --]
[-- Type: text/plain, Size: 1154 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "bmouring@ni.com" <bmouring@ni.com>

In 8930ed80 (rtmutex: Cleanup deadlock detector debug logic),
chainwalking control enums were introduced to limit the deadlock
detection logic. One of the calls to task_blocks_on_rt_mutex was
missed when converting to use the enums.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index f074834c1ad6..2a4c34bb586f 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -1179,7 +1179,7 @@ static void  noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
 	__set_current_state(TASK_UNINTERRUPTIBLE);
 	pi_unlock(&self->pi_lock);
 
-	ret = task_blocks_on_rt_mutex(lock, &waiter, self, 0);
+	ret = task_blocks_on_rt_mutex(lock, &waiter, self, RT_MUTEX_MIN_CHAINWALK);
 	BUG_ON(ret);
 
 	for (;;) {
-- 
2.7.0


* [PATCH RT 3/7] dump stack: dont disable preemption during trace
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 1/7] rtmutex: Handle non enqueued waiters gracefully Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 2/7] rtmutex: Use chainwalking control enum Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 4/7] net: Make synchronize_rcu_expedited() conditional on !RT_FULL Steven Rostedt
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0003-dump-stack-don-t-disable-preemption-during-trace.patch --]
[-- Type: text/plain, Size: 2235 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

I see large latencies here during a stack dump on x86. The
preempt_disable() and get_cpu() calls are meant to keep the task from
moving to another CPU during the stack dump and to avoid two stack
traces running in parallel on the same CPU. However, a stack trace from
a second CPU may still happen in parallel, and since nesting is allowed,
one stack trace can happen in process context while another runs from
IRQ context. With migrate_disable() we keep this code preemptible and
allow a second backtrace on the same CPU by another task.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/dumpstack_64.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 5e890ccd5429..530fdb80d250 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -114,7 +114,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 		unsigned long *stack, unsigned long bp,
 		const struct stacktrace_ops *ops, void *data)
 {
-	const unsigned cpu = get_cpu();
+	const unsigned cpu = get_cpu_light();
 	unsigned long *irq_stack_end =
 		(unsigned long *)per_cpu(irq_stack_ptr, cpu);
 	unsigned used = 0;
@@ -191,7 +191,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 	 * This handles the process stack:
 	 */
 	bp = ops->walk_stack(tinfo, stack, bp, ops, data, NULL, &graph);
-	put_cpu();
+	put_cpu_light();
 }
 EXPORT_SYMBOL(dump_trace);
 
@@ -205,7 +205,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 	int cpu;
 	int i;
 
-	preempt_disable();
+	migrate_disable();
 	cpu = smp_processor_id();
 
 	irq_stack_end	= (unsigned long *)(per_cpu(irq_stack_ptr, cpu));
@@ -238,7 +238,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 		printk(KERN_CONT " %016lx", *stack++);
 		touch_nmi_watchdog();
 	}
-	preempt_enable();
+	migrate_enable();
 
 	printk(KERN_CONT "\n");
 	show_trace_log_lvl(task, regs, sp, bp, log_lvl);
-- 
2.7.0


* [PATCH RT 4/7] net: Make synchronize_rcu_expedited() conditional on !RT_FULL
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2016-02-26 21:44 ` [PATCH RT 3/7] dump stack: dont disable preemption during trace Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 5/7] sched: Introduce the trace_sched_waking tracepoint Steven Rostedt
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Paul E. McKenney, Josh Cartwright,
	Eric Dumazet, David S. Miller

[-- Attachment #1: 0004-net-Make-synchronize_rcu_expedited-conditional-on-RT.patch --]
[-- Type: text/plain, Size: 1447 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Josh Cartwright <joshc@ni.com>

While the use of synchronize_rcu_expedited() might make
synchronize_net() "faster", it does so at significant cost on RT
systems, as expediting a grace period forcibly preempts any
high-priority RT tasks (via the stop_machine() mechanism).

Without this change, we can observe latency spikes of up to 30us with
cyclictest by rapidly unplugging and reestablishing an ethernet link.

Suggested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Josh Cartwright <joshc@ni.com>
Cc: bigeasy@linutronix.de
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20151027123153.GG8245@jcartwri.amer.corp.natinst.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index f7f91c97933b..b6fa0b2fd713 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6169,7 +6169,7 @@ EXPORT_SYMBOL(free_netdev);
 void synchronize_net(void)
 {
 	might_sleep();
-	if (rtnl_is_locked())
+	if (rtnl_is_locked() && !IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
 		synchronize_rcu_expedited();
 	else
 		synchronize_rcu();
-- 
2.7.0


* [PATCH RT 5/7] sched: Introduce the trace_sched_waking tracepoint
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2016-02-26 21:44 ` [PATCH RT 4/7] net: Make synchronize_rcu_expedited() conditional on !RT_FULL Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 6/7] rtmutex: Have slowfn of rt_mutex_timed_fastlock() use enum Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 7/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mathieu Desnoyers, Julien Desfossez,
	Peter Zijlstra (Intel),
	Francis Giraldeau, Linus Torvalds, Mike Galbraith, Ingo Molnar

[-- Attachment #1: 0005-sched-Introduce-the-trace_sched_waking-tracepoint.patch --]
[-- Type: text/plain, Size: 6045 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@infradead.org>

Upstream commit fbd705a0c6184580d0e2fbcbd47a37b6e5822511

Mathieu reported that since 317f394160e9 ("sched: Move the second half
of ttwu() to the remote cpu") trace_sched_wakeup() can happen out of
context of the waker.

This is a problem when you want to analyse wakeup paths because it is
now very hard to correlate the wakeup event to whoever issued the
wakeup.

OTOH trace_sched_wakeup() is issued at the point where we set
p->state = TASK_RUNNING, which is right where we hand the task off to
the scheduler, so this is an important point when looking at scheduling
behaviour: up to here it has been the wakeup path, and everything
hereafter is due to scheduler policy.

To bridge this gap, introduce a second tracepoint: trace_sched_waking.
It is guaranteed to be called in the waker context.

[ Ported to linux-4.1.y-rt kernel by Mathieu Desnoyers. Resolved
  conflict: try_to_wake_up_local() does not exist in -rt kernel. Removed
  its instrumentation hunk. ]

Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
CC: Julien Desfossez <jdesfossez@efficios.com>
CC: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Francis Giraldeau <francis.giraldeau@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/20150609091336.GQ3644@twins.programming.kicks-ass.net
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/events/sched.h      | 30 +++++++++++++++++++++---------
 kernel/sched.c                    |  7 +++++--
 kernel/trace/trace_sched_switch.c |  2 +-
 kernel/trace/trace_sched_wakeup.c |  2 +-
 4 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 959ff18b63b6..29cfc3fe68ad 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -54,9 +54,9 @@ TRACE_EVENT(sched_kthread_stop_ret,
  */
 DECLARE_EVENT_CLASS(sched_wakeup_template,
 
-	TP_PROTO(struct task_struct *p, int success),
+	TP_PROTO(struct task_struct *p),
 
-	TP_ARGS(p, success),
+	TP_ARGS(p),
 
 	TP_STRUCT__entry(
 		__array(	char,	comm,	TASK_COMM_LEN	)
@@ -70,25 +70,37 @@ DECLARE_EVENT_CLASS(sched_wakeup_template,
 		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
 		__entry->pid		= p->pid;
 		__entry->prio		= p->prio;
-		__entry->success	= success;
+		__entry->success	= 1; /* rudiment, kill when possible */
 		__entry->target_cpu	= task_cpu(p);
 	),
 
-	TP_printk("comm=%s pid=%d prio=%d success=%d target_cpu=%03d",
+	TP_printk("comm=%s pid=%d prio=%d target_cpu=%03d",
 		  __entry->comm, __entry->pid, __entry->prio,
-		  __entry->success, __entry->target_cpu)
+		  __entry->target_cpu)
 );
 
+/*
+ * Tracepoint called when waking a task; this tracepoint is guaranteed to be
+ * called from the waking context.
+ */
+DEFINE_EVENT(sched_wakeup_template, sched_waking,
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
+
+ /*
+  * Tracepoint called when the task is actually woken; p->state == TASK_RUNNING.
+  * It is not always called from the waking context.
+  */
 DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
-	     TP_PROTO(struct task_struct *p, int success),
-	     TP_ARGS(p, success));
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
 
 /*
  * Tracepoint for waking up a new task:
  */
 DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new,
-	     TP_PROTO(struct task_struct *p, int success),
-	     TP_ARGS(p, success));
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
 
 #ifdef CREATE_TRACE_POINTS
 static inline long __trace_sched_switch_state(struct task_struct *p)
diff --git a/kernel/sched.c b/kernel/sched.c
index 1615ca209afd..a9f6d6c0ab93 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2661,10 +2661,11 @@ static void ttwu_activate(struct rq *rq, struct task_struct *p, int en_flags)
 static void
 ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 {
-	trace_sched_wakeup(p, true);
 	check_preempt_curr(rq, p, wake_flags);
 
 	p->state = TASK_RUNNING;
+	trace_sched_wakeup(p);
+
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken)
 		p->sched_class->task_woken(rq, p);
@@ -2852,6 +2853,8 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	if (!(wake_flags & WF_LOCK_SLEEPER))
 		p->saved_state = TASK_RUNNING;
 
+	trace_sched_waking(p);
+
 	success = 1; /* we're going to change ->state */
 	cpu = task_cpu(p);
 
@@ -3070,7 +3073,7 @@ void wake_up_new_task(struct task_struct *p)
 	rq = __task_rq_lock(p);
 	activate_task(rq, p, 0);
 	p->on_rq = 1;
-	trace_sched_wakeup_new(p, true);
+	trace_sched_wakeup_new(p);
 	check_preempt_curr(rq, p, WF_FORK);
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken)
diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 7e62c0a18456..5982146ee863 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -108,7 +108,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
 }
 
 static void
-probe_sched_wakeup(void *ignore, struct task_struct *wakee, int success)
+probe_sched_wakeup(void *ignore, struct task_struct *wakee)
 {
 	struct trace_array_cpu *data;
 	unsigned long flags;
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 6857e0c99656..a3f6ef0642f1 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -403,7 +403,7 @@ static void wakeup_reset(struct trace_array *tr)
 }
 
 static void
-probe_wakeup(void *ignore, struct task_struct *p, int success)
+probe_wakeup(void *ignore, struct task_struct *p)
 {
 	struct trace_array_cpu *data;
 	int cpu = smp_processor_id();
-- 
2.7.0


* [PATCH RT 6/7] rtmutex: Have slowfn of rt_mutex_timed_fastlock() use enum
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2016-02-26 21:44 ` [PATCH RT 5/7] sched: Introduce the trace_sched_waking tracepoint Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  2016-02-26 21:44 ` [PATCH RT 7/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0006-rtmutex-Have-slowfn-of-rt_mutex_timed_fastlock-use-e.patch --]
[-- Type: text/plain, Size: 1057 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

The backport of commit 8930ed80f970 "rtmutex: Cleanup deadlock detector
debug logic" had conflicts, and the conflict resolution changed
rt_mutex_timed_fastlock()'s parameter to an enum, but missed changing its
slowfn's prototype.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 2a4c34bb586f..e836b05093a1 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -1657,7 +1657,7 @@ rt_mutex_timed_fastlock(struct rt_mutex *lock, int state,
 			enum rtmutex_chainwalk chwalk,
 			int (*slowfn)(struct rt_mutex *lock, int state,
 				      struct hrtimer_sleeper *timeout,
-				      int detect_deadlock))
+				      enum rtmutex_chainwalk chwalk))
 {
 	if (chwalk == RT_MUTEX_MIN_CHAINWALK &&
 	    likely(rt_mutex_cmpxchg(lock, NULL, current))) {
-- 
2.7.0


* [PATCH RT 7/7] Linux 3.2.77-rt111-rc1
  2016-02-26 21:44 [PATCH RT 0/7] Linux 3.2.77-rt111-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2016-02-26 21:44 ` [PATCH RT 6/7] rtmutex: Have slowfn of rt_mutex_timed_fastlock() use enum Steven Rostedt
@ 2016-02-26 21:44 ` Steven Rostedt
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:44 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0007-Linux-3.2.77-rt111-rc1.patch --]
[-- Type: text/plain, Size: 414 bytes --]

3.2.77-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index b3e668a8fb94..ff68eff1428c 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt110
+-rt111-rc1
-- 
2.7.0


* [PATCH RT 1/7] rtmutex: Handle non enqueued waiters gracefully
  2016-02-26 21:42 [PATCH RT 0/7] Linux 3.4.110-rt139-rc1 Steven Rostedt
@ 2016-02-26 21:42 ` Steven Rostedt
  0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:42 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0001-rtmutex-Handle-non-enqueued-waiters-gracefully.patch --]
[-- Type: text/plain, Size: 1229 bytes --]

3.4.110-rt139-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Yimin debugged that, when a PI wakeup is in progress while
rt_mutex_start_proxy_lock() calls task_blocks_on_rt_mutex(), the latter
returns -EAGAIN, and in consequence the remove_waiter() call runs into
a BUG_ON() because there is nothing to remove.

Guard it with rt_mutex_has_waiters(). This is a quick fix which is
easy to backport. The proper fix is to have a central check in
remove_waiter() so we can call it unconditionally.

Reported-and-debugged-by: Yimin Deng <yimin11.deng@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 4cc273b85beb..d353e9191955 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -1911,7 +1911,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 		ret = 0;
 	}
 
-	if (unlikely(ret))
+	if (ret && rt_mutex_has_waiters(lock))
 		remove_waiter(lock, waiter);
 
 	raw_spin_unlock(&lock->wait_lock);
-- 
2.7.0

