* [PATCH RT 00/13] Linux 3.12.54-rt74-rc1
@ 2016-03-02 15:48 Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 01/13] ptrace: dont open IRQs in ptrace_freeze_traced() too early Steven Rostedt
                   ` (13 more replies)
  0 siblings, 14 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 3.12.54-rt74-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this patch will be converted to the next main release
on 3/7/2016.

Enjoy,

-- Steve


To build 3.12.54-rt74-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.54.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.54-rt74-rc1.patch.xz

You can also build from 3.12.54-rt73 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/incr/patch-3.12.54-rt73-rt74-rc1.patch.xz


Changes from 3.12.54-rt73:

---


Clark Williams (1):
      rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL

Mike Galbraith (1):
      sched,rt: __always_inline preemptible_lazy()

Sebastian Andrzej Siewior (9):
      ptrace: don't open IRQs in ptrace_freeze_traced() too early
      preempt-lazy: Add the lazy-preemption check to preempt_schedule()
      softirq: split timer softirqs out of ksoftirqd
      net: provide a way to delegate processing a softirq to ksoftirqd
      latencyhist: disable jump-labels
      kernel: migrate_disable() do fastpath in atomic & irqs-off
      kernel: softirq: unlock with irqs on
      kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks"
      kernel: sched: Fix preempt_disable_ip recodring for preempt_disable()

Steven Rostedt (Red Hat) (1):
      Linux 3.12.54-rt74-rc1

Yang Shi (1):
      trace: Use rcuidle version for preemptoff_hist trace point

----
 arch/Kconfig                 |   1 +
 include/linux/ftrace.h       |  12 +++++
 include/linux/interrupt.h    |   8 ++++
 include/linux/sched.h        |   2 -
 include/trace/events/hist.h  |   1 +
 kernel/ptrace.c              |   6 ++-
 kernel/rcutorture.c          |   7 +++
 kernel/sched/core.c          |  52 ++++++++++++---------
 kernel/softirq.c             | 109 ++++++++++++++++++++++++++++++++++++++-----
 kernel/stop_machine.c        |  38 +++------------
 kernel/trace/trace_irqsoff.c |   8 ++--
 localversion-rt              |   2 +-
 net/core/dev.c               |   2 +-
 13 files changed, 173 insertions(+), 75 deletions(-)


* [PATCH RT 01/13] ptrace: dont open IRQs in ptrace_freeze_traced() too early
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 02/13] preempt-lazy: Add the lazy-preemption check to preempt_schedule() Steven Rostedt
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, kernel test robot, stable-rt

[-- Attachment #1: 0001-ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch --]
[-- Type: text/plain, Size: 1327 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

In the non-RT case, spin_lock_irq() here disables interrupts just as
raw_spin_lock_irq() does. The inner raw_spin_unlock_irq() therefore
re-enables interrupts too early, while the siglock is still held. Use the
irqsave/irqrestore variants so the inner unlock restores the previous
interrupt state instead.
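
As a plain-C illustration (not kernel code; all names below are
hypothetical stand-ins), a minimal model of why the inner lock section
must save and restore the interrupt state instead of unconditionally
re-enabling it:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;        /* models the CPU interrupt flag */

static void local_irq_disable(void) { irqs_enabled = false; }
static void local_irq_enable(void)  { irqs_enabled = true; }

/* Buggy inner section: unconditionally re-enables interrupts on unlock,
 * like raw_spin_unlock_irq(), even though the outer siglock section
 * still needs them off. */
static void inner_unconditional(void)
{
	local_irq_disable();
	/* ... pi_lock critical section ... */
	local_irq_enable();             /* too early: outer lock still held */
}

/* Fixed inner section: save and restore the previous state, like
 * raw_spin_lock_irqsave()/raw_spin_unlock_irqrestore(). */
static void inner_save_restore(void)
{
	bool flags = irqs_enabled;
	local_irq_disable();
	/* ... pi_lock critical section ... */
	irqs_enabled = flags;
}

int main(void)
{
	local_irq_disable();            /* outer spin_lock_irq(&siglock) */
	inner_save_restore();
	assert(!irqs_enabled);          /* state preserved for the outer unlock */
	inner_unconditional();
	printf("after buggy unlock: irqs_enabled=%d (should be 0)\n",
	       irqs_enabled);
	local_irq_enable();             /* outer spin_unlock_irq(&siglock) */
	return 0;
}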

Reported-by: kernel test robot <ying.huang@linux.intel.com>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/ptrace.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index c8cd8ffab511..fe11653fb005 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -135,12 +135,14 @@ static bool ptrace_freeze_traced(struct task_struct *task)
 
 	spin_lock_irq(&task->sighand->siglock);
 	if (task_is_traced(task) && !__fatal_signal_pending(task)) {
-		raw_spin_lock_irq(&task->pi_lock);
+		unsigned long flags;
+
+		raw_spin_lock_irqsave(&task->pi_lock, flags);
 		if (task->state & __TASK_TRACED)
 			task->state = __TASK_TRACED;
 		else
 			task->saved_state = __TASK_TRACED;
-		raw_spin_unlock_irq(&task->pi_lock);
+		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
 		ret = true;
 	}
 	spin_unlock_irq(&task->sighand->siglock);
-- 
2.7.0


* [PATCH RT 02/13] preempt-lazy: Add the lazy-preemption check to preempt_schedule()
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 01/13] ptrace: dont open IRQs in ptrace_freeze_traced() too early Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 03/13] softirq: split timer softirqs out of ksoftirqd Steven Rostedt
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mike Galbraith

[-- Attachment #1: 0002-preempt-lazy-Add-the-lazy-preemption-check-to-preemp.patch --]
[-- Type: text/plain, Size: 1823 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

During the rebase onto v4.1 this check was probably moved into the less
commonly used preempt_schedule_notrace(). This patch ensures that both
functions use it.
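
A standalone sketch of the gate both entry points now share, with plain
globals standing in for the thread_info fields (illustrative only):

#include <stdbool.h>
#include <stdio.h>

static int  preempt_lazy_count;         /* >0: lazy preemption disabled */
static bool tif_need_resched;           /* set when an RT task needs the CPU */

/* Mirrors the preemptible_lazy() logic: an RT wakeup (TIF_NEED_RESCHED)
 * always wins; otherwise a non-zero lazy count keeps the current task
 * on the CPU. */
static int preemptible_lazy(void)
{
	if (tif_need_resched)
		return 1;
	if (preempt_lazy_count)
		return 0;
	return 1;
}

int main(void)
{
	preempt_lazy_count = 1;
	printf("lazy section, no RT waker -> preempt? %d\n", preemptible_lazy());
	tif_need_resched = true;
	printf("lazy section, RT waker    -> preempt? %d\n", preemptible_lazy());
	return 0;
}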

Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 14b5ba66fa72..a936937c3535 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2728,6 +2728,30 @@ void __sched schedule_preempt_disabled(void)
 	preempt_disable();
 }
 
+#ifdef CONFIG_PREEMPT_LAZY
+/*
+ * If TIF_NEED_RESCHED is set then we allow being scheduled away since it is
+ * set by an RT task. Otherwise we try to avoid being scheduled out as long as
+ * the preempt_lazy_count counter is >0.
+ */
+static int preemptible_lazy(void)
+{
+	if (test_thread_flag(TIF_NEED_RESCHED))
+		return 1;
+	if (current_thread_info()->preempt_lazy_count)
+		return 0;
+	return 1;
+}
+
+#else
+
+static int preemptible_lazy(void)
+{
+	return 1;
+}
+
+#endif
+
 #ifdef CONFIG_PREEMPT
 /*
  * this is the entry point to schedule() from in-kernel preemption
@@ -2742,15 +2766,9 @@ asmlinkage void __sched notrace preempt_schedule(void)
 	 */
 	if (likely(!preemptible()))
 		return;
-
-#ifdef CONFIG_PREEMPT_LAZY
-	/*
-	 * Check for lazy preemption
-	 */
-	if (current_thread_info()->preempt_lazy_count &&
-			!test_thread_flag(TIF_NEED_RESCHED))
+	if (!preemptible_lazy())
 		return;
-#endif
+
 	do {
 		add_preempt_count_notrace(PREEMPT_ACTIVE);
 		/*
-- 
2.7.0


* [PATCH RT 03/13] softirq: split timer softirqs out of ksoftirqd
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 01/13] ptrace: dont open IRQs in ptrace_freeze_traced() too early Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 02/13] preempt-lazy: Add the lazy-preemption check to preempt_schedule() Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 04/13] net: provide a way to delegate processing a softirq to ksoftirqd Steven Rostedt
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0003-softirq-split-timer-softirqs-out-of-ksoftirqd.patch --]
[-- Type: text/plain, Size: 6647 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

On -RT ksoftirqd runs at SCHED_FIFO (prio 1) and deals mostly with timer
wakeups, which cannot happen in hardirq context. The priority was raised
from the normal SCHED_OTHER so the timer wakeups do not happen too late.
With enough networking load it is possible that the system never goes
idle and keeps scheduling ksoftirqd and everything else at a higher
priority. One of the tasks left behind is one of RCU's threads, so we see
RCU stalls and eventually run out of memory.
This patch moves the TIMER and HRTIMER softirqs out of the `ksoftirqd`
thread into its own `ktimersoftd` thread. The former can now run at
SCHED_OTHER (same as mainline) and the latter at SCHED_FIFO due to the
wakeups.

From a networking point of view: the NAPI callback runs after the network
interrupt thread completes. If its run time takes too long, the NAPI code
itself schedules `ksoftirqd`. There it can run at SCHED_OTHER priority
and no longer defers RCU.
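
A small sketch of the routing decision this split introduces, using the
softirq numbering from softirq_to_name[] (illustrative only, not kernel
code):

#include <stdio.h>

enum { HI_SOFTIRQ, TIMER_SOFTIRQ, NET_TX_SOFTIRQ, NET_RX_SOFTIRQ,
       BLOCK_SOFTIRQ, BLOCK_IOPOLL_SOFTIRQ, TASKLET_SOFTIRQ,
       SCHED_SOFTIRQ, HRTIMER_SOFTIRQ, RCU_SOFTIRQ, NR_SOFTIRQS };

#define TIMER_SOFTIRQS ((1U << TIMER_SOFTIRQ) | (1U << HRTIMER_SOFTIRQ))

/* TIMER and HRTIMER go to ktimersoftd (SCHED_FIFO); everything else
 * stays with ksoftirqd (now SCHED_OTHER, as in mainline). */
static const char *softirq_owner(unsigned int nr)
{
	return ((1U << nr) & TIMER_SOFTIRQS) ? "ktimersoftd" : "ksoftirqd";
}

int main(void)
{
	unsigned int nr;

	for (nr = 0; nr < NR_SOFTIRQS; nr++)
		printf("softirq %u -> %s\n", nr, softirq_owner(nr));
	return 0;
}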

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/softirq.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 72 insertions(+), 10 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 1c8ad53d1353..c10594c778fe 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -58,6 +58,10 @@ EXPORT_SYMBOL(irq_stat);
 static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
 
 DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
+#ifdef CONFIG_PREEMPT_RT_FULL
+#define TIMER_SOFTIRQS	((1 << TIMER_SOFTIRQ) | (1 << HRTIMER_SOFTIRQ))
+DEFINE_PER_CPU(struct task_struct *, ktimer_softirqd);
+#endif
 
 char *softirq_to_name[NR_SOFTIRQS] = {
 	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "BLOCK_IOPOLL",
@@ -171,6 +175,17 @@ static void wakeup_softirqd(void)
 		wake_up_process(tsk);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static void wakeup_timer_softirqd(void)
+{
+	/* Interrupts are disabled: no need to stop preemption */
+	struct task_struct *tsk = __this_cpu_read(ktimer_softirqd);
+
+	if (tsk && tsk->state != TASK_RUNNING)
+		wake_up_process(tsk);
+}
+#endif
+
 static void handle_softirq(unsigned int vec_nr, int cpu, int need_rcu_bh_qs)
 {
 	struct softirq_action *h = softirq_vec + vec_nr;
@@ -488,7 +503,6 @@ void __raise_softirq_irqoff(unsigned int nr)
 static inline void local_bh_disable_nort(void) { local_bh_disable(); }
 static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
 static void ksoftirqd_set_sched_params(unsigned int cpu) { }
-static void ksoftirqd_clr_sched_params(unsigned int cpu, bool online) { }
 
 #else /* !PREEMPT_RT_FULL */
 
@@ -638,8 +652,12 @@ void thread_do_softirq(void)
 
 static void do_raise_softirq_irqoff(unsigned int nr)
 {
+	unsigned int mask;
+
+	mask = 1UL << nr;
+
 	trace_softirq_raise(nr);
-	or_softirq_pending(1UL << nr);
+	or_softirq_pending(mask);
 
 	/*
 	 * If we are not in a hard interrupt and inside a bh disabled
@@ -648,16 +666,29 @@ static void do_raise_softirq_irqoff(unsigned int nr)
 	 * delegate it to ksoftirqd.
 	 */
 	if (!in_irq() && current->softirq_nestcnt)
-		current->softirqs_raised |= (1U << nr);
-	else if (__this_cpu_read(ksoftirqd))
-		__this_cpu_read(ksoftirqd)->softirqs_raised |= (1U << nr);
+		current->softirqs_raised |= mask;
+	else if (!__this_cpu_read(ksoftirqd) || !__this_cpu_read(ktimer_softirqd))
+		return;
+
+	if (mask & TIMER_SOFTIRQS)
+		__this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
+	else
+		__this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
+}
+
+static void wakeup_proper_softirq(unsigned int nr)
+{
+	if ((1UL << nr) & TIMER_SOFTIRQS)
+		wakeup_timer_softirqd();
+	else
+		wakeup_softirqd();
 }
 
 void __raise_softirq_irqoff(unsigned int nr)
 {
 	do_raise_softirq_irqoff(nr);
 	if (!in_irq() && !current->softirq_nestcnt)
-		wakeup_softirqd();
+		wakeup_proper_softirq(nr);
 }
 
 /*
@@ -697,22 +728,37 @@ static inline void _local_bh_enable_nort(void) { }
 
 static inline void ksoftirqd_set_sched_params(unsigned int cpu)
 {
+	/* Take over all but timer pending softirqs when starting */
+	local_irq_disable();
+	current->softirqs_raised = local_softirq_pending() & ~TIMER_SOFTIRQS;
+	local_irq_enable();
+}
+
+static inline void ktimer_softirqd_set_sched_params(unsigned int cpu)
+{
 	struct sched_param param = { .sched_priority = 1 };
 
 	sched_setscheduler(current, SCHED_FIFO, &param);
-	/* Take over all pending softirqs when starting */
+
+	/* Take over timer pending softirqs when starting */
 	local_irq_disable();
-	current->softirqs_raised = local_softirq_pending();
+	current->softirqs_raised = local_softirq_pending() & TIMER_SOFTIRQS;
 	local_irq_enable();
 }
 
-static inline void ksoftirqd_clr_sched_params(unsigned int cpu, bool online)
+static inline void ktimer_softirqd_clr_sched_params(unsigned int cpu,
+						    bool online)
 {
 	struct sched_param param = { .sched_priority = 0 };
 
 	sched_setscheduler(current, SCHED_NORMAL, &param);
 }
 
+static int ktimer_softirqd_should_run(unsigned int cpu)
+{
+	return current->softirqs_raised;
+}
+
 #endif /* PREEMPT_RT_FULL */
 /*
  * Enter an interrupt context.
@@ -759,6 +805,9 @@ static inline void invoke_softirq(void)
 	if (__this_cpu_read(ksoftirqd) &&
 			__this_cpu_read(ksoftirqd)->softirqs_raised)
 		wakeup_softirqd();
+	if (__this_cpu_read(ktimer_softirqd) &&
+			__this_cpu_read(ktimer_softirqd)->softirqs_raised)
+		wakeup_timer_softirqd();
 	local_irq_restore(flags);
 #endif
 }
@@ -1333,17 +1382,30 @@ static struct notifier_block cpu_nfb = {
 static struct smp_hotplug_thread softirq_threads = {
 	.store			= &ksoftirqd,
 	.setup			= ksoftirqd_set_sched_params,
-	.cleanup		= ksoftirqd_clr_sched_params,
 	.thread_should_run	= ksoftirqd_should_run,
 	.thread_fn		= run_ksoftirqd,
 	.thread_comm		= "ksoftirqd/%u",
 };
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static struct smp_hotplug_thread softirq_timer_threads = {
+	.store			= &ktimer_softirqd,
+	.setup			= ktimer_softirqd_set_sched_params,
+	.cleanup		= ktimer_softirqd_clr_sched_params,
+	.thread_should_run	= ktimer_softirqd_should_run,
+	.thread_fn		= run_ksoftirqd,
+	.thread_comm		= "ktimersoftd/%u",
+};
+#endif
+
 static __init int spawn_ksoftirqd(void)
 {
 	register_cpu_notifier(&cpu_nfb);
 
 	BUG_ON(smpboot_register_percpu_thread(&softirq_threads));
+#ifdef CONFIG_PREEMPT_RT_FULL
+	BUG_ON(smpboot_register_percpu_thread(&softirq_timer_threads));
+#endif
 
 	return 0;
 }
-- 
2.7.0


* [PATCH RT 04/13] net: provide a way to delegate processing a softirq to ksoftirqd
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 03/13] softirq: split timer softirqs out of ksoftirqd Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 05/13] latencyhist: disable jump-labels Steven Rostedt
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0004-net-provide-a-way-to-delegate-processing-a-softirq-t.patch --]
[-- Type: text/plain, Size: 2676 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

If NET_RX uses up all of its budget, mainline moves the subsequent NAPI
invocations into `ksoftirqd`. On -RT it does not do so; instead it raises
the NET_RX softirq in its current context again.

In order to get closer to mainline's behaviour, this patch provides
__raise_softirq_irqoff_ksoft(), which raises the softirq in ksoftirqd.
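
A rough standalone model of the budget-exhaustion path (hypothetical
helper names; it only illustrates the control flow, not the real NAPI
machinery):

#include <stdio.h>

static void process_one_packet(void) { /* ... */ }

/* Models __raise_softirq_irqoff_ksoft(NET_RX_SOFTIRQ): mark the softirq
 * pending in ksoftirqd and wake it, instead of re-raising it in the
 * current (IRQ thread) context. */
static void raise_net_rx_in_ksoftirqd(void)
{
	printf("NET_RX delegated to ksoftirqd\n");
}

/* Simplified net_rx_action(): poll until the budget is spent, then hand
 * the remaining work to ksoftirqd, as mainline does (softnet_break). */
static void net_rx_action(int budget, int backlog)
{
	while (budget-- && backlog) {
		process_one_packet();
		backlog--;
	}
	if (backlog)
		raise_net_rx_in_ksoftirqd();
}

int main(void)
{
	net_rx_action(64, 100);         /* backlog exceeds the budget */
	return 0;
}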

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/interrupt.h |  8 ++++++++
 kernel/softirq.c          | 21 +++++++++++++++++++++
 net/core/dev.c            |  2 +-
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 750d1e6cbe36..5cd89e325324 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -428,6 +428,14 @@ extern void thread_do_softirq(void);
 extern void open_softirq(int nr, void (*action)(struct softirq_action *));
 extern void softirq_init(void);
 extern void __raise_softirq_irqoff(unsigned int nr);
+#ifdef CONFIG_PREEMPT_RT_FULL
+extern void __raise_softirq_irqoff_ksoft(unsigned int nr);
+#else
+static inline void __raise_softirq_irqoff_ksoft(unsigned int nr)
+{
+	__raise_softirq_irqoff(nr);
+}
+#endif
 
 extern void raise_softirq_irqoff(unsigned int nr);
 extern void raise_softirq(unsigned int nr);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index c10594c778fe..cce9723c5a18 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -692,6 +692,27 @@ void __raise_softirq_irqoff(unsigned int nr)
 }
 
 /*
+ * Same as __raise_softirq_irqoff() but will process them in ksoftirqd
+ */
+void __raise_softirq_irqoff_ksoft(unsigned int nr)
+{
+	unsigned int mask;
+
+	if (WARN_ON_ONCE(!__this_cpu_read(ksoftirqd) ||
+			 !__this_cpu_read(ktimer_softirqd)))
+		return;
+	mask = 1UL << nr;
+
+	trace_softirq_raise(nr);
+	or_softirq_pending(mask);
+	if (mask & TIMER_SOFTIRQS)
+		__this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
+	else
+		__this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
+	wakeup_proper_softirq(nr);
+}
+
+/*
  * This function must run with irqs disabled!
  */
 void raise_softirq_irqoff(unsigned int nr)
diff --git a/net/core/dev.c b/net/core/dev.c
index cfec8a542d35..ee08da0fe0d7 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4428,7 +4428,7 @@ out:
 
 softnet_break:
 	sd->time_squeeze++;
-	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
+	__raise_softirq_irqoff_ksoft(NET_RX_SOFTIRQ);
 	goto out;
 }
 
-- 
2.7.0


* [PATCH RT 05/13] latencyhist: disable jump-labels
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 04/13] net: provide a way to delegate processing a softirq to ksoftirqd Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 06/13] kernel: migrate_disable() do fastpath in atomic & irqs-off Steven Rostedt
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Christoph Mathys, stable-rt

[-- Attachment #1: 0005-latencyhist-disable-jump-labels.patch --]
[-- Type: text/plain, Size: 2912 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

At least on x86 we die a recursive death:

|CPU: 3 PID: 585 Comm: bash Not tainted 4.4.1-rt4+ #198
|Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014
|task: ffff88007ab4cd00 ti: ffff88007ab94000 task.ti: ffff88007ab94000
|RIP: 0010:[<ffffffff81684870>]  [<ffffffff81684870>] int3+0x0/0x10
|RSP: 0018:ffff88013c107fd8  EFLAGS: 00010082
|RAX: ffff88007ab4cd00 RBX: ffffffff8100ceab RCX: 0000000080202001
|RDX: 0000000000000000 RSI: ffffffff8100ceab RDI: ffffffff810c78b2
|RBP: ffff88007ab97c10 R08: ffffffffff57b000 R09: 0000000000000000
|R10: ffff88013bb64790 R11: ffff88007ab4cd68 R12: ffffffff8100ceab
|R13: ffffffff810c78b2 R14: ffffffff810f8158 R15: ffffffff810f9120
|FS:  0000000000000000(0000) GS:ffff88013c100000(0063) knlGS:00000000f74e3940
|CS:  0010 DS: 002b ES: 002b CR0: 000000008005003b
|CR2: 0000000008cf6008 CR3: 000000013b169000 CR4: 00000000000006e0
|Call Trace:
| <#DB>
| [<ffffffff810f8158>] ? trace_preempt_off+0x18/0x170
| <<EOE>>
| [<ffffffff81077745>] preempt_count_add+0xa5/0xc0
| [<ffffffff810c78b2>] on_each_cpu+0x22/0x90
| [<ffffffff8100ceab>] text_poke_bp+0x5b/0xc0
| [<ffffffff8100a29c>] arch_jump_label_transform+0x8c/0xf0
| [<ffffffff8111c77c>] __jump_label_update+0x6c/0x80
| [<ffffffff8111c83a>] jump_label_update+0xaa/0xc0
| [<ffffffff8111ca54>] static_key_slow_inc+0x94/0xa0
| [<ffffffff810e0d8d>] tracepoint_probe_register_prio+0x26d/0x2c0
| [<ffffffff810e0df3>] tracepoint_probe_register+0x13/0x20
| [<ffffffff810fca78>] trace_event_reg+0x98/0xd0
| [<ffffffff810fcc8b>] __ftrace_event_enable_disable+0x6b/0x180
| [<ffffffff810fd5b8>] event_enable_write+0x78/0xc0
| [<ffffffff8117a768>] __vfs_write+0x28/0xe0
| [<ffffffff8117b025>] vfs_write+0xa5/0x180
| [<ffffffff8117bb76>] SyS_write+0x46/0xa0
| [<ffffffff81002c91>] do_fast_syscall_32+0xa1/0x1d0
| [<ffffffff81684d57>] sysenter_flags_fixed+0xd/0x17

during
 echo 1 > /sys/kernel/debug/tracing/events/hist/preemptirqsoff_hist/enable

Reported-By: Christoph Mathys <eraserix@gmail.com>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 77e7e80963d5..b1a399585d52 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -50,6 +50,7 @@ config KPROBES
 config JUMP_LABEL
        bool "Optimize very unlikely/likely branches"
        depends on HAVE_ARCH_JUMP_LABEL
+       depends on (!INTERRUPT_OFF_HIST && !PREEMPT_OFF_HIST && !WAKEUP_LATENCY_HIST && !MISSED_TIMER_OFFSETS_HIST)
        help
          This option enables a transparent branch optimization that
 	 makes certain almost-always-true or almost-always-false branch
-- 
2.7.0


* [PATCH RT 06/13] kernel: migrate_disable() do fastpath in atomic & irqs-off
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 05/13] latencyhist: disable jump-labels Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 07/13] kernel: softirq: unlock with irqs on Steven Rostedt
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0006-kernel-migrate_disable-do-fastpath-in-atomic-irqs-of.patch --]
[-- Type: text/plain, Size: 1105 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

With interrupts off it makes no sense to take the long path since we
cannot leave the CPU anyway. Also, we might end up in a recursion with
lockdep.
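
A tiny standalone model of the new fastpath condition (plain globals
stand in for in_atomic()/irqs_disabled(); not the scheduler code):

#include <stdbool.h>
#include <stdio.h>

static int  preempt_count_val;
static bool irqs_off;
static int  migrate_disable_atomic;     /* debug counter, fastpath hits */
static int  migrate_disable_count;      /* slow path: task pinned to CPU */

static bool in_atomic(void)     { return preempt_count_val != 0; }
static bool irqs_disabled(void) { return irqs_off; }

/* After this patch the fastpath also covers irqs-off context: the task
 * cannot leave the CPU anyway, so only bump the debug counter and
 * return, avoiding the slow path (which can recurse with lockdep). */
static void migrate_disable(void)
{
	if (in_atomic() || irqs_disabled()) {
		migrate_disable_atomic++;
		return;
	}
	migrate_disable_count++;
}

int main(void)
{
	irqs_off = true;
	migrate_disable();
	printf("fastpath: %d, slowpath: %d\n",
	       migrate_disable_atomic, migrate_disable_count);
	return 0;
}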

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a936937c3535..d02b025f434c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2429,7 +2429,7 @@ void migrate_disable(void)
 {
 	struct task_struct *p = current;
 
-	if (in_atomic()) {
+	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic++;
 #endif
@@ -2463,7 +2463,7 @@ void migrate_enable(void)
 	unsigned long flags;
 	struct rq *rq;
 
-	if (in_atomic()) {
+	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic--;
 #endif
-- 
2.7.0


* [PATCH RT 07/13] kernel: softirq: unlock with irqs on
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 06/13] kernel: migrate_disable() do fastpath in atomic & irqs-off Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 08/13] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks" Steven Rostedt
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0007-kernel-softirq-unlock-with-irqs-on.patch --]
[-- Type: text/plain, Size: 983 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

We unlock the lock while interrupts are off. This isn't a problem now,
but it will become one because migrate_disable() and migrate_enable()
are not symmetrical with regard to the status of interrupts.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/softirq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index cce9723c5a18..7c8e36bf9e2a 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -578,8 +578,10 @@ static void do_current_softirqs(int need_rcu_bh_qs)
 			do_single_softirq(i, need_rcu_bh_qs);
 		}
 		softirq_clr_runner(i);
-		unlock_softirq(i);
 		WARN_ON(current->softirq_nestcnt != 1);
+		local_irq_enable();
+		unlock_softirq(i);
+		local_irq_disable();
 	}
 }
 
-- 
2.7.0


* [PATCH RT 08/13] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks"
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (6 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 07/13] kernel: softirq: unlock with irqs on Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 09/13] sched,rt: __always_inline preemptible_lazy() Steven Rostedt
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0008-kernel-stop_machine-partly-revert-stop_machine-Use-r.patch --]
[-- Type: text/plain, Size: 5190 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

With completions now using swait (and thus raw locks) we don't need this
anymore. Further, bisect thinks this patch is responsible for:

|BUG: unable to handle kernel NULL pointer dereference at           (null)
|IP: [<ffffffff81082123>] sched_cpu_active+0x53/0x70
|PGD 0
|Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
|Dumping ftrace buffer:
|   (ftrace buffer empty)
|Modules linked in:
|CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.4.1+ #330
|Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014
|task: ffff88013ae64b00 ti: ffff88013ae74000 task.ti: ffff88013ae74000
|RIP: 0010:[<ffffffff81082123>]  [<ffffffff81082123>] sched_cpu_active+0x53/0x70
|RSP: 0000:ffff88013ae77eb8  EFLAGS: 00010082
|RAX: 0000000000000001 RBX: ffffffff81c2cf20 RCX: 0000001050fb52fb
|RDX: 0000001050fb52fb RSI: 000000105117ca1e RDI: 00000000001c7723
|RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
|R10: 0000000000000000 R11: 0000000000000001 R12: 00000000ffffffff
|R13: ffffffff81c2cee0 R14: 0000000000000000 R15: 0000000000000001
|FS:  0000000000000000(0000) GS:ffff88013b200000(0000) knlGS:0000000000000000
|CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
|CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0
|Stack:
| ffffffff810c446d ffff88013ae77f00 ffffffff8107d8dd 000000000000000a
| 0000000000000001 0000000000000000 0000000000000000 0000000000000000
| 0000000000000000 ffff88013ae77f10 ffffffff8107d90e ffff88013ae77f20
|Call Trace:
| [<ffffffff810c446d>] ? debug_lockdep_rcu_enabled+0x1d/0x20
| [<ffffffff8107d8dd>] ? notifier_call_chain+0x5d/0x80
| [<ffffffff8107d90e>] ? __raw_notifier_call_chain+0xe/0x10
| [<ffffffff810598a3>] ? cpu_notify+0x23/0x40
| [<ffffffff8105a7b8>] ? notify_cpu_starting+0x28/0x30

during CPU hotplug. The raw spinlocks, however, need to remain.
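
A userspace sketch of the reworked protocol, modeling the completion
with a pthread mutex/condvar (names mirror the patch, but everything
here is an illustrative stand-in; build with -lpthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct completion {
	pthread_mutex_t lock;
	pthread_cond_t  wait;
	int             done;
};

struct cpu_stop_done {
	atomic_int        nr_todo;      /* nr left to execute */
	struct completion completion;   /* fired when nr_todo reaches 0 */
};

static void init_completion(struct completion *c)
{
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->wait, NULL);
	c->done = 0;
}

static void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = 1;
	pthread_cond_signal(&c->wait);
	pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->wait, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

/* Stopper side: the last unit of work fires the completion; no
 * hand-rolled waiter pointer, so no wakeup/clear race to wait out. */
static void *stopper_thread(void *arg)
{
	struct cpu_stop_done *done = arg;

	if (atomic_fetch_sub(&done->nr_todo, 1) == 1)
		complete(&done->completion);
	return NULL;
}

int main(void)
{
	struct cpu_stop_done done = { .nr_todo = 1 };
	pthread_t t;

	init_completion(&done.completion);
	pthread_create(&t, NULL, stopper_thread, &done);
	wait_for_completion(&done.completion);  /* stop_one_cpu() side */
	pthread_join(&t, NULL);
	printf("stop work completed\n");
	return 0;
}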

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/stop_machine.c | 38 +++++++-------------------------------
 1 file changed, 7 insertions(+), 31 deletions(-)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 5f02a3fe729f..652fea4b0c83 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -29,7 +29,7 @@ struct cpu_stop_done {
 	atomic_t		nr_todo;	/* nr left to execute */
 	bool			executed;	/* actually executed? */
 	int			ret;		/* collected return value */
-	struct task_struct	*waiter;	/* woken when nr_todo reaches 0 */
+	struct completion	completion;	/* fired if nr_todo reaches 0 */
 };
 
 /* the actual stopper, one per every possible cpu, enabled on online cpus */
@@ -47,7 +47,7 @@ static void cpu_stop_init_done(struct cpu_stop_done *done, unsigned int nr_todo)
 {
 	memset(done, 0, sizeof(*done));
 	atomic_set(&done->nr_todo, nr_todo);
-	done->waiter = current;
+	init_completion(&done->completion);
 }
 
 /* signal completion unless @done is NULL */
@@ -56,10 +56,8 @@ static void cpu_stop_signal_done(struct cpu_stop_done *done, bool executed)
 	if (done) {
 		if (executed)
 			done->executed = true;
-		if (atomic_dec_and_test(&done->nr_todo)) {
-			wake_up_process(done->waiter);
-			done->waiter = NULL;
-		}
+		if (atomic_dec_and_test(&done->nr_todo))
+			complete(&done->completion);
 	}
 }
 
@@ -82,22 +80,6 @@ static void cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
 	raw_spin_unlock_irqrestore(&stopper->lock, flags);
 }
 
-static void wait_for_stop_done(struct cpu_stop_done *done)
-{
-	set_current_state(TASK_UNINTERRUPTIBLE);
-	while (atomic_read(&done->nr_todo)) {
-		schedule();
-		set_current_state(TASK_UNINTERRUPTIBLE);
-	}
-	/*
-	 * We need to wait until cpu_stop_signal_done() has cleared
-	 * done->waiter.
-	 */
-	while (done->waiter)
-		cpu_relax();
-	set_current_state(TASK_RUNNING);
-}
-
 /**
  * stop_one_cpu - stop a cpu
  * @cpu: cpu to stop
@@ -129,7 +111,7 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
 
 	cpu_stop_init_done(&done, 1);
 	cpu_stop_queue_work(cpu, &work);
-	wait_for_stop_done(&done);
+	wait_for_completion(&done.completion);
 	return done.executed ? done.ret : -ENOENT;
 }
 
@@ -195,7 +177,7 @@ static int __stop_cpus(const struct cpumask *cpumask,
 
 	cpu_stop_init_done(&done, cpumask_weight(cpumask));
 	queue_stop_cpus_work(cpumask, fn, arg, &done, false);
-	wait_for_stop_done(&done);
+	wait_for_completion(&done.completion);
 	return done.executed ? done.ret : -ENOENT;
 }
 
@@ -326,13 +308,7 @@ repeat:
 			  kallsyms_lookup((unsigned long)fn, NULL, NULL, NULL,
 					  ksym_buf), arg);
 
-		/*
-		 * Make sure that the wakeup and setting done->waiter
-		 * to NULL is atomic.
-		 */
-		local_irq_disable();
 		cpu_stop_signal_done(done, true);
-		local_irq_enable();
 		goto repeat;
 	}
 }
@@ -573,7 +549,7 @@ int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
 	ret = stop_machine_cpu_stop(&smdata);
 
 	/* Busy wait for completion. */
-	while (atomic_read(&done.nr_todo))
+	while (!completion_done(&done.completion))
 		cpu_relax();
 
 	mutex_unlock(&stop_cpus_mutex);
-- 
2.7.0


* [PATCH RT 09/13] sched,rt: __always_inline preemptible_lazy()
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (7 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 08/13] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks" Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 10/13] trace: Use rcuidle version for preemptoff_hist trace point Steven Rostedt
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mike Galbraith, Hillf Danton

[-- Attachment #1: 0009-sched-rt-__always_inline-preemptible_lazy.patch --]
[-- Type: text/plain, Size: 1267 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <umgwanakikbuti@gmail.com>

homer: # nm kernel/sched/core.o|grep preemptible_lazy
00000000000000b5 t preemptible_lazy

echo wakeup_rt > current_tracer ==> Welcome to infinity.

The nm output shows that preemptible_lazy() was emitted as a real, and
therefore ftrace-traceable, function. Once the wakeup_rt tracer hooks it,
every call from the preemption path recurses back into the tracer.
Marking it __always_inline removes the traceable symbol.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Link: http://lkml.kernel.org/r/1456067490.3771.2.camel@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d02b025f434c..7e56dc293669 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2734,7 +2734,7 @@ void __sched schedule_preempt_disabled(void)
 * set by an RT task. Otherwise we try to avoid being scheduled out as long as
 * the preempt_lazy_count counter is >0.
  */
-static int preemptible_lazy(void)
+static __always_inline int preemptible_lazy(void)
 {
 	if (test_thread_flag(TIF_NEED_RESCHED))
 		return 1;
-- 
2.7.0


* [PATCH RT 10/13] trace: Use rcuidle version for preemptoff_hist trace point
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (8 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 09/13] sched,rt: __always_inline preemptible_lazy() Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 11/13] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL Steven Rostedt
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Yang Shi

[-- Attachment #1: 0010-trace-Use-rcuidle-version-for-preemptoff_hist-trace-.patch --]
[-- Type: text/plain, Size: 3973 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Yang Shi <yang.shi@windriver.com>

When running -rt kernel with both PREEMPT_OFF_HIST and LOCKDEP enabled,
the below error is reported:

 [ INFO: suspicious RCU usage. ]
 4.4.1-rt6 #1 Not tainted
 include/trace/events/hist.h:31 suspicious rcu_dereference_check() usage!

 other info that might help us debug this:

 RCU used illegally from idle CPU!
 rcu_scheduler_active = 1, debug_locks = 0
 RCU used illegally from extended quiescent state!
 no locks held by swapper/0/0.

 stack backtrace:
 CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.1-rt6-WR8.0.0.0_standard #1
 Stack : 0000000000000006 0000000000000000 ffffffff81ca8c38 ffffffff81c8fc80
    ffffffff811bdd68 ffffffff81cb0000 0000000000000000 ffffffff81cb0000
    0000000000000000 0000000000000000 0000000000000004 0000000000000000
    0000000000000004 ffffffff811bdf50 0000000000000000 ffffffff82b60000
    0000000000000000 ffffffff812897ac ffffffff819f0000 000000000000000b
    ffffffff811be460 ffffffff81b7c588 ffffffff81c8fc80 0000000000000000
    0000000000000000 ffffffff81ec7f88 ffffffff81d70000 ffffffff81b70000
    ffffffff81c90000 ffffffff81c3fb00 ffffffff81c3fc28 ffffffff815e6f98
    0000000000000000 ffffffff81c8fa87 ffffffff81b70958 ffffffff811bf2c4
    0707fe32e8d60ca5 ffffffff81126d60 0000000000000000 0000000000000000
    ...
 Call Trace:
 [<ffffffff81126d60>] show_stack+0xe8/0x108
 [<ffffffff815e6f98>] dump_stack+0x88/0xb0
 [<ffffffff8124b88c>] time_hardirqs_off+0x204/0x300
 [<ffffffff811aa5dc>] trace_hardirqs_off_caller+0x24/0xe8
 [<ffffffff811a4ec4>] cpu_startup_entry+0x39c/0x508
 [<ffffffff81d7dc68>] start_kernel+0x584/0x5a0

Replace the regular trace_preemptirqsoff_hist with the rcuidle version to
avoid the error.
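
A toy model of why the _rcuidle variant is needed: tracepoints
dereference RCU-protected probe lists, which is only legal while RCU is
"watching" the CPU, and the idle loop is an extended quiescent state.
The wrapper below stands in for the rcu_irq_enter()/rcu_irq_exit()
bracketing that the real _rcuidle tracepoints perform (illustrative
only):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool rcu_watching = true;

static void trace_preemptirqsoff_hist(const char *what)
{
	assert(rcu_watching);           /* the splat above, as an assert */
	printf("hist event: %s\n", what);
}

static void trace_preemptirqsoff_hist_rcuidle(const char *what)
{
	bool saved = rcu_watching;

	rcu_watching = true;            /* rcu_irq_enter() */
	trace_preemptirqsoff_hist(what);
	rcu_watching = saved;           /* rcu_irq_exit() */
}

int main(void)
{
	rcu_watching = false;           /* idle loop: RCU not watching */
	trace_preemptirqsoff_hist_rcuidle("TRACE_START from idle");
	return 0;
}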

Signed-off-by: Yang Shi <yang.shi@windriver.com>
Cc: bigeasy@linutronix.de
Cc: rostedt@goodmis.org
Cc: linux-rt-users@vger.kernel.org
Link: http://lkml.kernel.org/r/1456262603-10075-1-git-send-email-yang.shi@windriver.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/events/hist.h  | 1 +
 kernel/trace/trace_irqsoff.c | 8 ++++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/hist.h b/include/trace/events/hist.h
index 6122e4286177..f7710de1b1f3 100644
--- a/include/trace/events/hist.h
+++ b/include/trace/events/hist.h
@@ -9,6 +9,7 @@
 
 #if !defined(CONFIG_PREEMPT_OFF_HIST) && !defined(CONFIG_INTERRUPT_OFF_HIST)
 #define trace_preemptirqsoff_hist(a, b)
+#define trace_preemptirqsoff_hist_rcuidle(a, b)
 #else
 TRACE_EVENT(preemptirqsoff_hist,
 
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 2f4eb37815d8..77bdfa55ce90 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -440,13 +440,13 @@ void start_critical_timings(void)
 {
 	if (preempt_trace() || irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
-	trace_preemptirqsoff_hist(TRACE_START, 1);
+	trace_preemptirqsoff_hist_rcuidle(TRACE_START, 1);
 }
 EXPORT_SYMBOL_GPL(start_critical_timings);
 
 void stop_critical_timings(void)
 {
-	trace_preemptirqsoff_hist(TRACE_STOP, 0);
+	trace_preemptirqsoff_hist_rcuidle(TRACE_STOP, 0);
 	if (preempt_trace() || irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
@@ -491,7 +491,7 @@ inline void print_irqtrace_events(struct task_struct *curr)
  */
 void trace_hardirqs_on(void)
 {
-	trace_preemptirqsoff_hist(IRQS_ON, 0);
+	trace_preemptirqsoff_hist_rcuidle(IRQS_ON, 0);
 	if (!preempt_trace() && irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
@@ -501,7 +501,7 @@ void trace_hardirqs_off(void)
 {
 	if (!preempt_trace() && irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
-	trace_preemptirqsoff_hist(IRQS_OFF, 1);
+	trace_preemptirqsoff_hist_rcuidle(IRQS_OFF, 1);
 }
 EXPORT_SYMBOL(trace_hardirqs_off);
 
-- 
2.7.0


* [PATCH RT 11/13] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (9 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 10/13] trace: Use rcuidle version for preemptoff_hist trace point Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:48 ` [PATCH RT 12/13] kernel: sched: Fix preempt_disable_ip recodring for preempt_disable() Steven Rostedt
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Clark Williams

[-- Attachment #1: 0011-rcu-torture-Comment-out-rcu_bh-ops-on-PREEMPT_RT_FUL.patch --]
[-- Type: text/plain, Size: 1076 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Clark Williams <williams@redhat.com>

RT has dropped support for rcu_bh; comment out its use in rcutorture.

Signed-off-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcutorture.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c
index be63101c6175..a31bab6c70ad 100644
--- a/kernel/rcutorture.c
+++ b/kernel/rcutorture.c
@@ -473,6 +473,7 @@ static struct rcu_torture_ops rcu_ops = {
 	.name		= "rcu"
 };
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 /*
  * Definitions for rcu_bh torture testing.
  */
@@ -515,6 +516,12 @@ static struct rcu_torture_ops rcu_bh_ops = {
 	.name		= "rcu_bh"
 };
 
+#else
+static struct rcu_torture_ops rcu_bh_ops = {
+	.ttype		= INVALID_RCU_FLAVOR,
+};
+#endif
+
 /*
  * Definitions for srcu torture testing.
  */
-- 
2.7.0


* [PATCH RT 12/13] kernel: sched: Fix preempt_disable_ip recodring for preempt_disable()
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (10 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 11/13] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL Steven Rostedt
@ 2016-03-02 15:48 ` Steven Rostedt
  2016-03-02 15:49 ` [PATCH RT 13/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
  2016-03-04  4:04 ` [PATCH RT 14/13] tracing: Fix probe_wakeup_latency_hist_start() prototype Mike Galbraith
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0012-kernel-sched-Fix-preempt_disable_ip-recodring-for-pr.patch --]
[-- Type: text/plain, Size: 3899 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

preempt_disable() invokes preempt_count_add(), which saves the caller in
current->preempt_disable_ip. It uses CALLER_ADDR1, which does not look up
its caller but the parent of its caller. That yields the correct caller
for something like spin_lock() unless the architecture inlines those
invocations; it is always wrong for preempt_disable() or
local_bh_disable().

This patch adds get_lock_parent_ip(), which tries CALLER_ADDR0, 1 and 2
in turn, skipping past locking functions. This records the caller
properly for preempt_disable() itself as well as for get_cpu_var() or
local_bh_disable().
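
A standalone sketch of the lookup over a simulated call stack (strings
stand in for CALLER_ADDR0/1/2; illustrative only):

#include <stdio.h>
#include <string.h>

/* Simulated return-address stack as seen from the recording function:
 * entry 0 is its immediate caller, entry 1 that caller's caller, ... */
static const char *call_stack[] = { "spin_lock", "my_driver_fn", "do_work" };

static int in_lock_functions(const char *fn)
{
	return strcmp(fn, "spin_lock") == 0 ||
	       strcmp(fn, "local_bh_disable") == 0;
}

/* Mirrors the new helper: walk outward past lock wrappers so the
 * recorded preempt_disable_ip points at the real caller. */
static const char *get_lock_parent_ip(void)
{
	if (!in_lock_functions(call_stack[0]))
		return call_stack[0];   /* CALLER_ADDR0 */
	if (!in_lock_functions(call_stack[1]))
		return call_stack[1];   /* CALLER_ADDR1 */
	return call_stack[2];           /* CALLER_ADDR2 */
}

int main(void)
{
	printf("preempt disabled at: %s\n", get_lock_parent_ip());
	return 0;
}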

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ftrace.h | 12 ++++++++++++
 include/linux/sched.h  |  2 --
 kernel/sched/core.c    | 14 ++------------
 kernel/softirq.c       |  2 +-
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index e68db4d534cb..9b7b9341f886 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -627,6 +627,18 @@ static inline void __ftrace_enabled_restore(int enabled)
 # endif
 #endif /* ifndef HAVE_ARCH_CALLER_ADDR */
 
+static inline unsigned long get_lock_parent_ip(void)
+{
+	unsigned long addr = CALLER_ADDR0;
+
+	if (!in_lock_functions(addr))
+		return addr;
+	addr = CALLER_ADDR1;
+	if (!in_lock_functions(addr))
+		return addr;
+	return CALLER_ADDR2;
+}
+
 #ifdef CONFIG_IRQSOFF_TRACER
   extern void time_hardirqs_on(unsigned long a0, unsigned long a1);
   extern void time_hardirqs_off(unsigned long a0, unsigned long a1);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 3ce814c4ef42..4006703785b2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -113,8 +113,6 @@ extern unsigned long this_cpu_load(void);
 extern void calc_global_load(unsigned long ticks);
 extern void update_cpu_load_nohz(void);
 
-extern unsigned long get_parent_ip(unsigned long addr);
-
 extern void dump_cpu_task(int cpu);
 
 struct seq_file;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7e56dc293669..c66db09aaf12 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2279,16 +2279,6 @@ u64 scheduler_tick_max_deferment(void)
 }
 #endif
 
-notrace unsigned long get_parent_ip(unsigned long addr)
-{
-	if (in_lock_functions(addr)) {
-		addr = CALLER_ADDR2;
-		if (in_lock_functions(addr))
-			addr = CALLER_ADDR3;
-	}
-	return addr;
-}
-
 #if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
 				defined(CONFIG_PREEMPT_TRACER))
 
@@ -2310,7 +2300,7 @@ void __kprobes add_preempt_count(int val)
 				PREEMPT_MASK - 10);
 #endif
 	if (preempt_count() == val) {
-		unsigned long ip = get_parent_ip(CALLER_ADDR1);
+		unsigned long ip = get_lock_parent_ip();
 #ifdef CONFIG_DEBUG_PREEMPT
 		current->preempt_disable_ip = ip;
 #endif
@@ -2336,7 +2326,7 @@ void __kprobes sub_preempt_count(int val)
 #endif
 
 	if (preempt_count() == val)
-		trace_preempt_on(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
 	preempt_count() -= val;
 }
 EXPORT_SYMBOL(sub_preempt_count);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 7c8e36bf9e2a..315b048423af 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -277,7 +277,7 @@ static void __local_bh_disable(unsigned long ip, unsigned int cnt)
 	raw_local_irq_restore(flags);
 
 	if (preempt_count() == cnt)
-		trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
+		trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
 }
 #else /* !CONFIG_TRACE_IRQFLAGS */
 static inline void __local_bh_disable(unsigned long ip, unsigned int cnt)
-- 
2.7.0


* [PATCH RT 13/13] Linux 3.12.54-rt74-rc1
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (11 preceding siblings ...)
  2016-03-02 15:48 ` [PATCH RT 12/13] kernel: sched: Fix preempt_disable_ip recodring for preempt_disable() Steven Rostedt
@ 2016-03-02 15:49 ` Steven Rostedt
  2016-03-04  4:04 ` [PATCH RT 14/13] tracing: Fix probe_wakeup_latency_hist_start() prototype Mike Galbraith
  13 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0013-Linux-3.12.54-rt74-rc1.patch --]
[-- Type: text/plain, Size: 412 bytes --]

3.12.54-rt74-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index e8ada8cdb471..8dd12f3c29cb 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt73
+-rt74-rc1
-- 
2.7.0


* [PATCH RT 14/13] tracing: Fix probe_wakeup_latency_hist_start() prototype
  2016-03-02 15:48 [PATCH RT 00/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
                   ` (12 preceding siblings ...)
  2016-03-02 15:49 ` [PATCH RT 13/13] Linux 3.12.54-rt74-rc1 Steven Rostedt
@ 2016-03-04  4:04 ` Mike Galbraith
  2016-03-07 19:42   ` Steven Rostedt
  13 siblings, 1 reply; 16+ messages in thread
From: Mike Galbraith @ 2016-03-04  4:04 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Drop 'success' arg from probe_wakeup_latency_hist_start().

Fixes: cf1dd658 ("sched: Introduce the trace_sched_waking tracepoint")
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
---
 kernel/trace/latency_hist.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/latency_hist.c b/kernel/trace/latency_hist.c
index 66a69eb5329c..b6c1d14b71c4 100644
--- a/kernel/trace/latency_hist.c
+++ b/kernel/trace/latency_hist.c
@@ -115,7 +115,7 @@ static DEFINE_PER_CPU(struct hist_data, wakeup_latency_hist_sharedprio);
 static char *wakeup_latency_hist_dir = "wakeup";
 static char *wakeup_latency_hist_dir_sharedprio = "sharedprio";
 static notrace void probe_wakeup_latency_hist_start(void *v,
-	struct task_struct *p, int success);
+	struct task_struct *p);
 static notrace void probe_wakeup_latency_hist_stop(void *v,
 	struct task_struct *prev, struct task_struct *next);
 static notrace void probe_sched_migrate_task(void *,
@@ -869,7 +869,7 @@ static notrace void probe_sched_migrate_task(void *v, struct task_struct *task,
 }
 
 static notrace void probe_wakeup_latency_hist_start(void *v,
-	struct task_struct *p, int success)
+	struct task_struct *p)
 {
 	unsigned long flags;
 	struct task_struct *curr = current;


* Re: [PATCH RT 14/13] tracing: Fix probe_wakeup_latency_hist_start() prototype
  2016-03-04  4:04 ` [PATCH RT 14/13] tracing: Fix probe_wakeup_latency_hist_start() prototype Mike Galbraith
@ 2016-03-07 19:42   ` Steven Rostedt
  0 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2016-03-07 19:42 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On Fri, 04 Mar 2016 05:04:06 +0100
Mike Galbraith <umgwanakikbuti@gmail.com> wrote:

> Drop 'success' arg from probe_wakeup_latency_hist_start().
> 
> Fixes: cf1dd658 ("sched: Introduce the trace_sched_waking tracepoint")
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>

Thanks, I applied this to 3.18-rt, 3.14-rt, 3.12-rt, 3.10-rt, 3.4-rt,
and 3.2-rt.

-- Steve

> ---
>  kernel/trace/latency_hist.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/latency_hist.c b/kernel/trace/latency_hist.c
> index 66a69eb5329c..b6c1d14b71c4 100644
> --- a/kernel/trace/latency_hist.c
> +++ b/kernel/trace/latency_hist.c
> @@ -115,7 +115,7 @@ static DEFINE_PER_CPU(struct hist_data, wakeup_latency_hist_sharedprio);
>  static char *wakeup_latency_hist_dir = "wakeup";
>  static char *wakeup_latency_hist_dir_sharedprio = "sharedprio";
>  static notrace void probe_wakeup_latency_hist_start(void *v,
> -	struct task_struct *p, int success);
> +	struct task_struct *p);
>  static notrace void probe_wakeup_latency_hist_stop(void *v,
>  	struct task_struct *prev, struct task_struct *next);
>  static notrace void probe_sched_migrate_task(void *,
> @@ -869,7 +869,7 @@ static notrace void probe_sched_migrate_task(void *v, struct task_struct *task,
>  }
>  
>  static notrace void probe_wakeup_latency_hist_start(void *v,
> -	struct task_struct *p, int success)
> +	struct task_struct *p)
>  {
>  	unsigned long flags;
>  	struct task_struct *curr = current;

