linux-kernel.vger.kernel.org archive mirror
* [PATCH RT 00/10] Linux 3.14.61-rt63-rc1
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 3.14.61-rt63-rc1.

Please scream at me if I messed something up. Please test the patches too.

Note, I'm bringing this tree up to date with the stable patches in 4.1.7-rt8.
Then I'll pull 4.1-rt into the stable series, as development is now on 4.4-rt.
After that, I'll pull the 4.1-rt stable changes into the older stable trees.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this release candidate will be converted to the next main
release on 2/29/2016.

Enjoy,

-- Steve


To build 3.14.61-rt63-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.14.61.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.14/patch-3.14.61-rt63-rc1.patch.xz

You can also build from 3.14.61-rt62 by applying the incremental patch:

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.14/incr/patch-3.14.61-rt62-rt63-rc1.patch.xz


Changes from 3.14.61-rt62:

---


Grygorii Strashko (2):
      ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die()
      net/core/cpuhotplug: Drain input_pkt_queue lockless

Josh Cartwright (1):
      net: Make synchronize_rcu_expedited() conditional on !RT_FULL

Peter Zijlstra (1):
      sched: Introduce the trace_sched_waking tracepoint

Sebastian Andrzej Siewior (2):
      cpufreq: Remove cpufreq_rwsem
      dump stack: don't disable preemption during trace

Steven Rostedt (Red Hat) (1):
      Linux 3.14.61-rt63-rc1

Thomas Gleixner (2):
      rtmutex: Handle non enqueued waiters gracefully
      irqwork: Move irq safe work to irq context

bmouring@ni.com (1):
      rtmutex: Use chainwalking control enum

----
 arch/arm/kernel/smp.c             |  5 +++--
 arch/x86/kernel/dumpstack_64.c    |  8 ++++----
 drivers/cpufreq/cpufreq.c         | 34 +++-------------------------------
 include/linux/irq_work.h          |  6 ++++++
 include/trace/events/sched.h      | 30 +++++++++++++++++++++---------
 kernel/irq_work.c                 |  9 +++++++++
 kernel/locking/rtmutex.c          |  4 ++--
 kernel/sched/core.c               |  8 +++++---
 kernel/timer.c                    |  6 ++----
 kernel/trace/trace_sched_switch.c |  2 +-
 kernel/trace/trace_sched_wakeup.c |  2 +-
 lib/dump_stack.c                  |  4 ++--
 localversion-rt                   |  2 +-
 net/core/dev.c                    |  4 ++--
 14 files changed, 62 insertions(+), 62 deletions(-)

* [PATCH RT 01/10] cpufreq: Remove cpufreq_rwsem
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

cpufreq_rwsem was introduced in commit 6eed9404ab3c4 ("cpufreq: Use
rwsem for protecting critical sections") in order to replace
try_module_get() on the cpu-freq driver. That try_module_get() worked
well until the refcount was so heavily used that module removal became
more or less impossible.

Though when looking at the various (undocumented) protection
mechanisms in that code, the randomly sprinkled cpufreq_rwsem
locking sites are superfluous.

The policy, which is acquired in cpufreq_cpu_get() and released in
cpufreq_cpu_put(), is sufficiently protected already.

  cpufreq_cpu_get(cpu)
    /* Protects against concurrent driver removal */
    read_lock_irqsave(&cpufreq_driver_lock, flags);
    policy = per_cpu(cpufreq_cpu_data, cpu);
    kobject_get(&policy->kobj);
    read_unlock_irqrestore(&cpufreq_driver_lock, flags);

The reference on the policy serializes versus module unload already:

  cpufreq_unregister_driver()
    subsys_interface_unregister()
      __cpufreq_remove_dev_finish()
        per_cpu(cpufreq_cpu_data) = NULL;
	cpufreq_policy_put_kobj()

If there is a reference held on the policy, i.e. obtained prior to the
unregister call, then cpufreq_policy_put_kobj() will wait until that
reference is dropped. So once subsys_interface_unregister() returns
there is no policy pointer in flight and no new reference can be
obtained. So that rwsem protection is useless.
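
For reference, that wait is implemented with a completion that the kobject
release callback fires once the last reference is gone. A rough sketch of
the 3.14 code, details elided:

  cpufreq_policy_put_kobj(policy)
    kobject_put(&policy->kobj);
    /* returns only when the refcount has dropped to zero: */
    wait_for_completion(&policy->kobj_unregister);

  cpufreq_sysfs_release(kobj)    /* kobject release callback */
    complete(&policy->kobj_unregister);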

The other usage of cpufreq_rwsem in show()/store() of the sysfs
interface is redundant as well because sysfs already does the proper
kobject_get()/put() pairs.

That leaves CPU hotplug versus module removal. The current
down_write() around the write_lock() in cpufreq_unregister_driver() is
silly at best as it actually protects nothing.

The trivial solution to this is to prevent hotplug across
cpufreq_unregister_driver() completely.

[upstream: rafael/linux-pm 454d3a2500a4eb33be85dde3bfba9e5f6b5efadc]
[fixes: "cpufreq_stat_notifier_trans: No policy found" since v4.0-rt]
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 drivers/cpufreq/cpufreq.c | 34 +++-------------------------------
 1 file changed, 3 insertions(+), 31 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index ef3b8adb9d47..885d441c0af3 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -52,12 +52,6 @@ static inline bool has_target(void)
 	return cpufreq_driver->target_index || cpufreq_driver->target;
 }
 
-/*
- * rwsem to guarantee that cpufreq driver module doesn't unload during critical
- * sections
- */
-static DECLARE_RWSEM(cpufreq_rwsem);
-
 /* internal prototypes */
 static int __cpufreq_governor(struct cpufreq_policy *policy,
 		unsigned int event);
@@ -198,9 +192,6 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
 	if (cpufreq_disabled() || (cpu >= nr_cpu_ids))
 		return NULL;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return NULL;
-
 	/* get the cpufreq driver */
 	read_lock_irqsave(&cpufreq_driver_lock, flags);
 
@@ -213,9 +204,6 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
 
 	read_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-	if (!policy)
-		up_read(&cpufreq_rwsem);
-
 	return policy;
 }
 EXPORT_SYMBOL_GPL(cpufreq_cpu_get);
@@ -226,7 +214,6 @@ void cpufreq_cpu_put(struct cpufreq_policy *policy)
 		return;
 
 	kobject_put(&policy->kobj);
-	up_read(&cpufreq_rwsem);
 }
 EXPORT_SYMBOL_GPL(cpufreq_cpu_put);
 
@@ -710,9 +697,6 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
 	struct freq_attr *fattr = to_attr(attr);
 	ssize_t ret;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return -EINVAL;
-
 	down_read(&policy->rwsem);
 
 	if (fattr->show)
@@ -721,7 +705,6 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
 		ret = -EIO;
 
 	up_read(&policy->rwsem);
-	up_read(&cpufreq_rwsem);
 
 	return ret;
 }
@@ -738,9 +721,6 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
 	if (!cpu_online(policy->cpu))
 		goto unlock;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		goto unlock;
-
 	down_write(&policy->rwsem);
 
 	if (fattr->store)
@@ -749,8 +729,6 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
 		ret = -EIO;
 
 	up_write(&policy->rwsem);
-
-	up_read(&cpufreq_rwsem);
 unlock:
 	put_online_cpus();
 
@@ -1065,9 +1043,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 	}
 #endif
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return 0;
-
 #ifdef CONFIG_HOTPLUG_CPU
 	/* Check if this cpu was hot-unplugged earlier and has siblings */
 	read_lock_irqsave(&cpufreq_driver_lock, flags);
@@ -1075,7 +1050,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 		if (cpumask_test_cpu(cpu, tpolicy->related_cpus)) {
 			read_unlock_irqrestore(&cpufreq_driver_lock, flags);
 			ret = cpufreq_add_policy_cpu(tpolicy, cpu, dev);
-			up_read(&cpufreq_rwsem);
 			return ret;
 		}
 	}
@@ -1223,7 +1197,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 	up_write(&policy->rwsem);
 
 	kobject_uevent(&policy->kobj, KOBJ_ADD);
-	up_read(&cpufreq_rwsem);
 
 	pr_debug("initialization complete\n");
 
@@ -1249,8 +1222,6 @@ err_set_policy_cpu:
 	cpufreq_policy_free(policy);
 
 nomem_out:
-	up_read(&cpufreq_rwsem);
-
 	return ret;
 }
 
@@ -2399,19 +2370,20 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
 
 	pr_debug("unregistering driver %s\n", driver->name);
 
+	/* Protect against concurrent cpu hotplug */
+	get_online_cpus();
 	subsys_interface_unregister(&cpufreq_interface);
 	if (cpufreq_boost_supported())
 		cpufreq_sysfs_remove_file(&boost.attr);
 
 	unregister_hotcpu_notifier(&cpufreq_cpu_notifier);
 
-	down_write(&cpufreq_rwsem);
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 
 	cpufreq_driver = NULL;
 
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
-	up_write(&cpufreq_rwsem);
+	put_online_cpus();
 
 	return 0;
 }
-- 
2.7.0

* [PATCH RT 02/10] ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die()
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Grygorii Strashko, linux-arm-kernel,
	Sekhar Nori, Austin Schuh, philipp, Russell King, stable-rt

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Grygorii Strashko <grygorii.strashko@ti.com>

When running with the RT-kernel (4.1.5-rt5) on TI OMAP dra7-evm and trying
to do Suspend to RAM, the following backtrace occurs:

 Disabling non-boot CPUs ...
 PM: noirq suspend of devices complete after 7.295 msecs
 Disabling non-boot CPUs ...
 BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
 in_atomic(): 1, irqs_disabled(): 128, pid: 18, name: migration/1
 INFO: lockdep is turned off.
 irq event stamp: 122
 hardirqs last  enabled at (121): [<c06ac0ac>] _raw_spin_unlock_irqrestore+0x88/0x90
 hardirqs last disabled at (122): [<c06abed0>] _raw_spin_lock_irq+0x28/0x5c
 softirqs last  enabled at (0): [<c003d294>] copy_process.part.52+0x410/0x19d8
 softirqs last disabled at (0): [<  (null)>]   (null)
 Preemption disabled at:[<  (null)>]   (null)
  CPU: 1 PID: 18 Comm: migration/1 Tainted: G        W       4.1.4-rt3-01046-g96ac8da #204
 Hardware name: Generic DRA74X (Flattened Device Tree)
 [<c0019134>] (unwind_backtrace) from [<c0014774>] (show_stack+0x20/0x24)
 [<c0014774>] (show_stack) from [<c06a70f4>] (dump_stack+0x88/0xdc)
 [<c06a70f4>] (dump_stack) from [<c006cab8>] (___might_sleep+0x198/0x2a8)
 [<c006cab8>] (___might_sleep) from [<c06ac4dc>] (rt_spin_lock+0x30/0x70)
 [<c06ac4dc>] (rt_spin_lock) from [<c013f790>] (find_lock_task_mm+0x9c/0x174)
 [<c013f790>] (find_lock_task_mm) from [<c00409ac>] (clear_tasks_mm_cpumask+0xb4/0x1ac)
 [<c00409ac>] (clear_tasks_mm_cpumask) from [<c00166a4>] (__cpu_disable+0x98/0xbc)
 [<c00166a4>] (__cpu_disable) from [<c06a2e8c>] (take_cpu_down+0x1c/0x50)
 [<c06a2e8c>] (take_cpu_down) from [<c00f2600>] (multi_cpu_stop+0x11c/0x158)
 [<c00f2600>] (multi_cpu_stop) from [<c00f2a9c>] (cpu_stopper_thread+0xc4/0x184)
 [<c00f2a9c>] (cpu_stopper_thread) from [<c0069058>] (smpboot_thread_fn+0x18c/0x324)
 [<c0069058>] (smpboot_thread_fn) from [<c00649c4>] (kthread+0xe8/0x104)
 [<c00649c4>] (kthread) from [<c0010058>] (ret_from_fork+0x14/0x3c)
 CPU1: shutdown
 PM: Calling sched_clock_suspend+0x0/0x40
 PM: Calling timekeeping_suspend+0x0/0x2e0
 PM: Calling irq_gc_suspend+0x0/0x68
 PM: Calling fw_suspend+0x0/0x2c
 PM: Calling cpu_pm_suspend+0x0/0x28

Also, the system sometimes gets stuck right after displaying "Disabling
non-boot CPUs ...". The root cause of the above backtrace is task_lock(),
which takes a sleeping lock on -RT.

To fix the issue, move the clear_tasks_mm_cpumask() call from
__cpu_disable() to __cpu_die(), which is called on the thread that asked
for the target CPU to be shut down. In addition, this change restores CPU
hotplug functionality on TI OMAP dra7-evm: CPU1 can now be
unplugged/plugged many times.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: <linux-arm-kernel@lists.infradead.org>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Austin Schuh <austin@peloton-tech.com>
Cc: <philipp@peloton-tech.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: <bigeasy@linutronix.de>
Cc: stable-rt@vger.kernel.org
Link: http://lkml.kernel.org/r/1441995683-30817-1-git-send-email-grygorii.strashko@ti.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/arm/kernel/smp.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 8cd3724714fe..0480b2685da0 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -209,8 +209,6 @@ int __cpu_disable(void)
 	flush_cache_louis();
 	local_flush_tlb_all();
 
-	clear_tasks_mm_cpumask(cpu);
-
 	return 0;
 }
 
@@ -226,6 +224,9 @@ void __cpu_die(unsigned int cpu)
 		pr_err("CPU%u: cpu didn't die\n", cpu);
 		return;
 	}
+
+	clear_tasks_mm_cpumask(cpu);
+
 	printk(KERN_NOTICE "CPU%u: shutdown\n", cpu);
 
 	/*
-- 
2.7.0

* [PATCH RT 03/10] rtmutex: Handle non enqueued waiters gracefully
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Yimin debugged that when a PI wakeup is in progress and
rt_mutex_start_proxy_lock() calls task_blocks_on_rt_mutex(), the latter
returns -EAGAIN; in consequence the remove_waiter() call runs into a
BUG_ON() because there is nothing to remove.

Guard it with rt_mutex_has_waiters(). This is a quick fix which is
easy to backport. The proper fix is to have a central check in
remove_waiter() so we can call it unconditionally.
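
A minimal sketch of what such a central check could look like (hypothetical,
not part of this patch):

  static void remove_waiter(struct rt_mutex *lock,
                            struct rt_mutex_waiter *waiter)
  {
          /* Hypothetical guard: nothing enqueued, nothing to remove */
          if (!rt_mutex_has_waiters(lock))
                  return;
          ...
  }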

Reported-and-debugged-by: Yimin Deng <yimin11.deng@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index ee4e7e747e06..2f61c66d6f05 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -2138,7 +2138,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 		ret = 0;
 	}
 
-	if (unlikely(ret))
+	if (ret && rt_mutex_has_waiters(lock))
 		remove_waiter(lock, waiter);
 
 	raw_spin_unlock(&lock->wait_lock);
-- 
2.7.0

* [PATCH RT 04/10] rtmutex: Use chainwalking control enum
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Brad Mouring

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "bmouring@ni.com" <bmouring@ni.com>

In 8930ed80 (rtmutex: Cleanup deadlock detector debug logic),
chainwalking control enums were introduced to limit the deadlock
detection logic. One of the calls to task_blocks_on_rt_mutex was
missed when converting to use the enums.
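
For reference, the control enum introduced by 8930ed80 looks roughly like
this:

  enum rtmutex_chainwalk {
          RT_MUTEX_MIN_CHAINWALK,   /* Stop the chain walk as early as possible */
          RT_MUTEX_FULL_CHAINWALK,  /* Walk the full dependency chain */
  };

Passing the literal 0 still compiled and happens to equal
RT_MUTEX_MIN_CHAINWALK, so the change makes the chain-walk intent explicit
rather than fixing a behavioural bug.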

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 2f61c66d6f05..4bd9790dece5 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1251,7 +1251,7 @@ static void  noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
 	__set_current_state(TASK_UNINTERRUPTIBLE);
 	pi_unlock(&self->pi_lock);
 
-	ret = task_blocks_on_rt_mutex(lock, &waiter, self, 0);
+	ret = task_blocks_on_rt_mutex(lock, &waiter, self, RT_MUTEX_MIN_CHAINWALK);
 	BUG_ON(ret);
 
 	for (;;) {
-- 
2.7.0

* [PATCH RT 05/10] dump stack: dont disable preemption during trace
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

I see large latencies here during a stack dump on x86. The
preempt_disable() and get_cpu() should forbid moving the task to another
CPU during a stack dump and avoid two stack traces running in parallel on
the same CPU. However, a stack trace from a second CPU may still happen in
parallel. Also, nesting is allowed, so a stack trace may happen in process
context while another one arrives from IRQ context. With migrate_disable()
we keep this code preemptible and allow a second backtrace on the same CPU
by another task.
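
For reference, get_cpu_light() is the -rt counterpart of get_cpu(): it pins
the task to the current CPU without disabling preemption. A rough sketch of
both (the -rt definition is an approximation of the -rt patch set):

  /* mainline */
  #define get_cpu()        ({ preempt_disable(); smp_processor_id(); })
  #define put_cpu()        preempt_enable()

  /* -rt, sketch */
  #define get_cpu_light()  ({ migrate_disable(); smp_processor_id(); })
  #define put_cpu_light()  migrate_enable()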

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/dumpstack_64.c | 8 ++++----
 lib/dump_stack.c               | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 66e274a3d968..37aee503a7ba 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -114,7 +114,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 		unsigned long *stack, unsigned long bp,
 		const struct stacktrace_ops *ops, void *data)
 {
-	const unsigned cpu = get_cpu();
+	const unsigned cpu = get_cpu_light();
 	unsigned long *irq_stack_end =
 		(unsigned long *)per_cpu(irq_stack_ptr, cpu);
 	unsigned used = 0;
@@ -191,7 +191,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 	 * This handles the process stack:
 	 */
 	bp = ops->walk_stack(tinfo, stack, bp, ops, data, NULL, &graph);
-	put_cpu();
+	put_cpu_light();
 }
 EXPORT_SYMBOL(dump_trace);
 
@@ -205,7 +205,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 	int cpu;
 	int i;
 
-	preempt_disable();
+	migrate_disable();
 	cpu = smp_processor_id();
 
 	irq_stack_end	= (unsigned long *)(per_cpu(irq_stack_ptr, cpu));
@@ -238,7 +238,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 		pr_cont(" %016lx", *stack++);
 		touch_nmi_watchdog();
 	}
-	preempt_enable();
+	migrate_enable();
 
 	pr_cont("\n");
 	show_trace_log_lvl(task, regs, sp, bp, log_lvl);
diff --git a/lib/dump_stack.c b/lib/dump_stack.c
index f23b63f0a1c3..b39c60b1f12c 100644
--- a/lib/dump_stack.c
+++ b/lib/dump_stack.c
@@ -33,7 +33,7 @@ asmlinkage void dump_stack(void)
 	 * Permit this cpu to perform nested stack dumps while serialising
 	 * against other CPUs
 	 */
-	preempt_disable();
+	migrate_disable();
 
 retry:
 	cpu = smp_processor_id();
@@ -52,7 +52,7 @@ retry:
 	if (!was_locked)
 		atomic_set(&dump_lock, -1);
 
-	preempt_enable();
+	migrate_enable();
 }
 #else
 asmlinkage void dump_stack(void)
-- 
2.7.0

* [PATCH RT 06/10] net: Make synchronize_rcu_expedited() conditional on !RT_FULL
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Paul E. McKenney, Josh Cartwright,
	Eric Dumazet, David S. Miller

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Josh Cartwright <joshc@ni.com>

While the use of synchronize_rcu_expedited() might make
synchronize_net() "faster", it does so at significant cost on RT
systems, as expediting a grace period forcibly preempts any
high-priority RT tasks (via the stop_machine() mechanism).

Without this change, we can observe a latency spike of up to 30us with
cyclictest by rapidly unplugging/reestablishing an ethernet link.

Suggested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Josh Cartwright <joshc@ni.com>
Cc: bigeasy@linutronix.de
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20151027123153.GG8245@jcartwri.amer.corp.natinst.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 6febdc96835f..ddb8578b2f34 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6638,7 +6638,7 @@ EXPORT_SYMBOL(free_netdev);
 void synchronize_net(void)
 {
 	might_sleep();
-	if (rtnl_is_locked())
+	if (rtnl_is_locked() && !IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
 		synchronize_rcu_expedited();
 	else
 		synchronize_rcu();
-- 
2.7.0

* [PATCH RT 07/10] net/core/cpuhotplug: Drain input_pkt_queue lockless
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Grygorii Strashko <grygorii.strashko@ti.com>

I consistently see the error report below with the 4.1 RT kernel on TI ARM
dra7-evm when trying to unplug cpu1:

[   57.737589] CPU1: shutdown
[   57.767537] BUG: spinlock bad magic on CPU#0, sh/137
[   57.767546]  lock: 0xee994730, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[   57.767552] CPU: 0 PID: 137 Comm: sh Not tainted 4.1.10-rt8-01700-g2c38702-dirty #55
[   57.767555] Hardware name: Generic DRA74X (Flattened Device Tree)
[   57.767568] [<c001acd0>] (unwind_backtrace) from [<c001534c>] (show_stack+0x20/0x24)
[   57.767579] [<c001534c>] (show_stack) from [<c075560c>] (dump_stack+0x84/0xa0)
[   57.767593] [<c075560c>] (dump_stack) from [<c00aca48>] (spin_dump+0x84/0xac)
[   57.767603] [<c00aca48>] (spin_dump) from [<c00acaa4>] (spin_bug+0x34/0x38)
[   57.767614] [<c00acaa4>] (spin_bug) from [<c00acc10>] (do_raw_spin_lock+0x168/0x1c0)
[   57.767624] [<c00acc10>] (do_raw_spin_lock) from [<c075b4cc>] (_raw_spin_lock+0x4c/0x54)
[   57.767631] [<c075b4cc>] (_raw_spin_lock) from [<c07599fc>] (rt_spin_lock_slowlock+0x5c/0x374)
[   57.767638] [<c07599fc>] (rt_spin_lock_slowlock) from [<c075bcf4>] (rt_spin_lock+0x38/0x70)
[   57.767649] [<c075bcf4>] (rt_spin_lock) from [<c06333c0>] (skb_dequeue+0x28/0x7c)
[   57.767662] [<c06333c0>] (skb_dequeue) from [<c06476ec>] (dev_cpu_callback+0x1b8/0x240)
[   57.767673] [<c06476ec>] (dev_cpu_callback) from [<c007566c>] (notifier_call_chain+0x3c/0xb4)

The reason is that skb_dequeue() takes the queue's sk_buff_head lock, but
RT changed the core code to use a raw spinlock for this queue. The non-raw
lock is left uninitialized on purpose to catch exactly this kind of
problem.
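
For reference, skb_dequeue() is the locking variant; __skb_dequeue() only
unlinks the head without touching list->lock, which is safe here because
the queue belongs to a CPU that is already offline. Roughly, from
net/core/skbuff.c:

  struct sk_buff *skb_dequeue(struct sk_buff_head *list)
  {
          unsigned long flags;
          struct sk_buff *result;

          spin_lock_irqsave(&list->lock, flags);
          result = __skb_dequeue(list);   /* lockless unlink */
          spin_unlock_irqrestore(&list->lock, flags);
          return result;
  }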

Fixes: 91df05da13a6 'net: Use skbufhead with raw lock'
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index ddb8578b2f34..055dc9c98b10 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6890,7 +6890,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}
-	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
+	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}
-- 
2.7.0

* [PATCH RT 08/10] irqwork: Move irq safe work to irq context
From: Steven Rostedt @ 2016-02-26 21:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

On architectures where arch_irq_work_has_interrupt() returns false, we
end up running the irq-safe work from the softirq context. That results
in a potential deadlock in the scheduler irq work, which expects to be
run with interrupts disabled.

Split the irq_work_tick() function into a hard and soft variant. Call
the hard variant from the tick interrupt and add the soft variant to
the timer softirq.
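
After the split, the call flow is roughly:

  update_process_times()    /* tick, hard interrupt context */
    irq_work_tick();        /* raised list; lazy list only on !RT_FULL */

  run_timer_softirq()       /* timer softirq, preemptible on RT */
    irq_work_tick_soft();   /* RT_FULL: run the lazy list here */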

Reported-and-tested-by: Yanjiang Jin <yanjiang.jin@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/irq_work.h | 6 ++++++
 kernel/irq_work.c        | 9 +++++++++
 kernel/timer.c           | 6 ++----
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 4a8c7a2df480..ccd736ebee9e 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -44,4 +44,10 @@ bool irq_work_needs_cpu(void);
 static inline bool irq_work_needs_cpu(void) { return false; }
 #endif
 
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
+void irq_work_tick_soft(void);
+#else
+static inline void irq_work_tick_soft(void) { }
+#endif
+
 #endif /* _LINUX_IRQ_WORK_H */
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index af8ceafc94e4..883bb73698b9 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -163,8 +163,17 @@ void irq_work_tick(void)
 
 	if (!llist_empty(raised) && !arch_irq_work_has_interrupt())
 		irq_work_run_list(raised);
+
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
+		irq_work_run_list(this_cpu_ptr(&lazy_list));
+}
+
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
+void irq_work_tick_soft(void)
+{
 	irq_work_run_list(this_cpu_ptr(&lazy_list));
 }
+#endif
 
 /*
  * Synchronize against the irq_work @entry, ensures the entry is not
diff --git a/kernel/timer.c b/kernel/timer.c
index 300870358b4f..3796af212c95 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -1450,7 +1450,7 @@ void update_process_times(int user_tick)
 	scheduler_tick();
 	run_local_timers();
 	rcu_check_callbacks(cpu, user_tick);
-#if defined(CONFIG_IRQ_WORK) && !defined(CONFIG_PREEMPT_RT_FULL)
+#if defined(CONFIG_IRQ_WORK)
 	if (in_irq())
 		irq_work_run();
 #endif
@@ -1466,9 +1466,7 @@ static void run_timer_softirq(struct softirq_action *h)
 
 	hrtimer_run_pending();
 
-#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
-	irq_work_tick();
-#endif
+	irq_work_tick_soft();
 
 	if (time_after_eq(jiffies, base->timer_jiffies))
 		__run_timers(base);
-- 
2.7.0

* [PATCH RT 09/10] sched: Introduce the trace_sched_waking tracepoint
From: Steven Rostedt @ 2016-02-26 21:36 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mathieu Desnoyers, Julien Desfossez,
	Peter Zijlstra (Intel),
	Francis Giraldeau, Linus Torvalds, Mike Galbraith, Ingo Molnar

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@infradead.org>

Upstream commit fbd705a0c6184580d0e2fbcbd47a37b6e5822511

Mathieu reported that since 317f394160e9 ("sched: Move the second half
of ttwu() to the remote cpu") trace_sched_wakeup() can happen out of
context of the waker.

This is a problem when you want to analyse wakeup paths because it is
now very hard to correlate the wakeup event to whoever issued the
wakeup.

OTOH trace_sched_wakeup() is issued at the point where we set
p->state = TASK_RUNNING, which is right where we hand the task off to
the scheduler, so this is an important point when looking at
scheduling behaviour: up to here it's been the wakeup path; everything
hereafter is due to scheduler policy.

To bridge this gap, introduce a second tracepoint: trace_sched_waking.
It is guaranteed to be called in the waker context.
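
With this patch the two tracepoints relate as follows (sketch of the call
flow, per the hunks below):

  try_to_wake_up(p)              /* waker's context */
    trace_sched_waking(p);
    ...
    ttwu_do_wakeup(rq, p)        /* may run on a remote CPU */
      p->state = TASK_RUNNING;
      trace_sched_wakeup(p);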

[ Ported to linux-4.1.y-rt kernel by Mathieu Desnoyers. Resolved
  conflict: try_to_wake_up_local() does not exist in -rt kernel. Removed
  its instrumentation hunk. ]

Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
CC: Julien Desfossez <jdesfossez@efficios.com>
CC: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Francis Giraldeau <francis.giraldeau@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/20150609091336.GQ3644@twins.programming.kicks-ass.net
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/events/sched.h      | 30 +++++++++++++++++++++---------
 kernel/sched/core.c               |  8 +++++---
 kernel/trace/trace_sched_switch.c |  2 +-
 kernel/trace/trace_sched_wakeup.c |  2 +-
 4 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index dc7bb01f580f..67b26cdf3f52 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -55,9 +55,9 @@ TRACE_EVENT(sched_kthread_stop_ret,
  */
 DECLARE_EVENT_CLASS(sched_wakeup_template,
 
-	TP_PROTO(struct task_struct *p, int success),
+	TP_PROTO(struct task_struct *p),
 
-	TP_ARGS(__perf_task(p), success),
+	TP_ARGS(__perf_task(p)),
 
 	TP_STRUCT__entry(
 		__array(	char,	comm,	TASK_COMM_LEN	)
@@ -71,25 +71,37 @@ DECLARE_EVENT_CLASS(sched_wakeup_template,
 		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
 		__entry->pid		= p->pid;
 		__entry->prio		= p->prio;
-		__entry->success	= success;
+		__entry->success	= 1; /* rudiment, kill when possible */
 		__entry->target_cpu	= task_cpu(p);
 	),
 
-	TP_printk("comm=%s pid=%d prio=%d success=%d target_cpu=%03d",
+	TP_printk("comm=%s pid=%d prio=%d target_cpu=%03d",
 		  __entry->comm, __entry->pid, __entry->prio,
-		  __entry->success, __entry->target_cpu)
+		  __entry->target_cpu)
 );
 
+/*
+ * Tracepoint called when waking a task; this tracepoint is guaranteed to be
+ * called from the waking context.
+ */
+DEFINE_EVENT(sched_wakeup_template, sched_waking,
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
+
+/*
+ * Tracepoint called when the task is actually woken; p->state == TASK_RUNNING.
+ * It is not always called from the waking context.
+ */
 DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
-	     TP_PROTO(struct task_struct *p, int success),
-	     TP_ARGS(p, success));
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
 
 /*
  * Tracepoint for waking up a new task:
  */
 DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new,
-	     TP_PROTO(struct task_struct *p, int success),
-	     TP_ARGS(p, success));
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
 
 #ifdef CREATE_TRACE_POINTS
 static inline long __trace_sched_switch_state(struct task_struct *p)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a570a5846af2..edacb6d1d2bf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1481,9 +1481,9 @@ static void
 ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 {
 	check_preempt_curr(rq, p, wake_flags);
-	trace_sched_wakeup(p, true);
-
 	p->state = TASK_RUNNING;
+	trace_sched_wakeup(p);
+
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken)
 		p->sched_class->task_woken(rq, p);
@@ -1676,6 +1676,8 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	if (!(wake_flags & WF_LOCK_SLEEPER))
 		p->saved_state = TASK_RUNNING;
 
+	trace_sched_waking(p);
+
 	success = 1; /* we're going to change ->state */
 	cpu = task_cpu(p);
 
@@ -2078,7 +2080,7 @@ void wake_up_new_task(struct task_struct *p)
 	rq = __task_rq_lock(p);
 	activate_task(rq, p, 0);
 	p->on_rq = 1;
-	trace_sched_wakeup_new(p, true);
+	trace_sched_wakeup_new(p);
 	check_preempt_curr(rq, p, WF_FORK);
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken)
diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 3f34dc9b40f3..9586cde520b0 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -106,7 +106,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
 }
 
 static void
-probe_sched_wakeup(void *ignore, struct task_struct *wakee, int success)
+probe_sched_wakeup(void *ignore, struct task_struct *wakee)
 {
 	struct trace_array_cpu *data;
 	unsigned long flags;
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 6e32635e5e57..32eea7ba8c4a 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -462,7 +462,7 @@ static void wakeup_reset(struct trace_array *tr)
 }
 
 static void
-probe_wakeup(void *ignore, struct task_struct *p, int success)
+probe_wakeup(void *ignore, struct task_struct *p)
 {
 	struct trace_array_cpu *data;
 	int cpu = smp_processor_id();
-- 
2.7.0

* [PATCH RT 10/10] Linux 3.14.61-rt63-rc1
From: Steven Rostedt @ 2016-02-26 21:36 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

3.14.61-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 40d81d8e61b6..9c4ed2db9626 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt62
+-rt63-rc1
-- 
2.7.0

* [PATCH RT 03/10] rtmutex: Handle non enqueued waiters gracefully
From: Steven Rostedt @ 2016-02-26 21:39 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

3.12.54-rt73-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Yimin debugged that when a PI wakeup is in progress and
rt_mutex_start_proxy_lock() calls task_blocks_on_rt_mutex(), the latter
returns -EAGAIN; in consequence the remove_waiter() call runs into a
BUG_ON() because there is nothing to remove.

Guard it with rt_mutex_has_waiters(). This is a quick fix which is
easy to backport. The proper fix is to have a central check in
remove_waiter() so we can call it unconditionally.

Reported-and-debugged-by: Yimin Deng <yimin11.deng@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 7601c1332a88..03439f1bc5ea 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -2065,7 +2065,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 		ret = 0;
 	}
 
-	if (unlikely(ret))
+	if (ret && rt_mutex_has_waiters(lock))
 		remove_waiter(lock, waiter);
 
 	raw_spin_unlock(&lock->wait_lock);
-- 
2.7.0
