linux-rt-users.vger.kernel.org archive mirror
* [PATCH RT 00/12] Linux
@ 2016-02-26 21:32 Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 01/12] cpufreq: Remove cpufreq_rwsem Steven Rostedt
                   ` (12 more replies)
  0 siblings, 13 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 3.18.27-rt26-rc1.

Please scream at me if I messed something up. Please test the patches too.

Note, I'm bringing this tree up to stable patches in 4.1.7-rt8.
Then I'll be pulling 4.1-rt into stable, as development is now on 4.4-rt.
After that, I'll be pulling the 4.1-rt stable changes into the stable trees.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository, only the
final release is.

If all goes well, this patch series will be converted to the next main
release on 2/29/2016.

Enjoy,

-- Steve


To build 3.18.27-rt26-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.18.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.18.27.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.18/patch-3.18.27-rt26-rc1.patch.xz

You can also build from 3.18.27-rt25 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.18/incr/patch-3.18.27-rt25-rt26-rc1.patch.xz


Changes from 3.18.27-rt25:

---


Grygorii Strashko (2):
      ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die()
      net/core/cpuhotplug: Drain input_pkt_queue lockless

Josh Cartwright (1):
      net: Make synchronize_rcu_expedited() conditional on !RT_FULL

Peter Zijlstra (1):
      sched: Introduce the trace_sched_waking tracepoint

Sebastian Andrzej Siewior (2):
      cpufreq: Remove cpufreq_rwsem
      dump stack: don't disable preemption during trace

Steven Rostedt (Red Hat) (1):
      Linux 3.18.27-rt26-rc1

Thomas Gleixner (3):
      genirq: Handle force threading of interrupts with primary and thread handler
      rtmutex: Handle non enqueued waiters gracefully
      irqwork: Move irq safe work to irq context

Wolfgang M. Reimer (1):
      locking: locktorture: Do NOT include rwlock.h directly

bmouring@ni.com (1):
      rtmutex: Use chainwalking control enum

----
 arch/arm/kernel/smp.c             |   5 +-
 arch/x86/kernel/dumpstack_32.c    |   4 +-
 arch/x86/kernel/dumpstack_64.c    |   8 +-
 drivers/cpufreq/cpufreq.c         |  34 +-------
 include/linux/interrupt.h         |   2 +
 include/linux/irq_work.h          |   6 ++
 include/trace/events/sched.h      |  30 +++++---
 kernel/irq/manage.c               | 158 ++++++++++++++++++++++++++++----------
 kernel/irq_work.c                 |   9 +++
 kernel/locking/locktorture.c      |   1 -
 kernel/locking/rtmutex.c          |   4 +-
 kernel/sched/core.c               |   8 +-
 kernel/time/timer.c               |   6 +-
 kernel/trace/trace_sched_switch.c |   2 +-
 kernel/trace/trace_sched_wakeup.c |   2 +-
 lib/dump_stack.c                  |   6 +-
 localversion-rt                   |   2 +-
 net/core/dev.c                    |   4 +-
 18 files changed, 183 insertions(+), 108 deletions(-)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH RT 01/12] cpufreq: Remove cpufreq_rwsem
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 03/12] ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die() Steven Rostedt
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0001-cpufreq-Remove-cpufreq_rwsem.patch --]
[-- Type: text/plain, Size: 6230 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

cpufreq_rwsem was introduced in commit 6eed9404ab3c4 ("cpufreq: Use
rwsem for protecting critical sections") in order to replace
try_module_get() on the cpufreq driver. That try_module_get() worked
well until the refcount was so heavily used that module removal became
more or less impossible.

However, when looking at the various (undocumented) protection
mechanisms in that code, the cpufreq_rwsem locking sites sprinkled
randomly around turn out to be superfluous.

The policy, which is acquired in cpufreq_cpu_get() and released in
cpufreq_cpu_put(), is sufficiently protected already:

  cpufreq_cpu_get(cpu)
    /* Protects against concurrent driver removal */
    read_lock_irqsave(&cpufreq_driver_lock, flags);
    policy = per_cpu(cpufreq_cpu_data, cpu);
    kobject_get(&policy->kobj);
    read_unlock_irqrestore(&cpufreq_driver_lock, flags);

The reference on the policy serializes versus module unload already:

  cpufreq_unregister_driver()
    subsys_interface_unregister()
      __cpufreq_remove_dev_finish()
        per_cpu(cpufreq_cpu_data) = NULL;
	cpufreq_policy_put_kobj()

If there is a reference held on the policy, i.e. obtained prior to the
unregister call, then cpufreq_policy_put_kobj() will wait until that
reference is dropped. So once subsys_interface_unregister() returns
there is no policy pointer in flight and no new reference can be
obtained. So that rwsem protection is useless.

The other usage of cpufreq_rwsem in show()/store() of the sysfs
interface is redundant as well because sysfs already does the proper
kobject_get()/put() pairs.

That leaves CPU hotplug versus module removal. The current
down_write() around the write_lock() in cpufreq_unregister_driver() is
silly at best as it protects actually nothing.

The trivial solution to this is to prevent hotplug across
cpufreq_unregister_driver() completely.

[upstream: rafael/linux-pm 454d3a2500a4eb33be85dde3bfba9e5f6b5efadc]
[fixes: "cpufreq_stat_notifier_trans: No policy found" since v4.0-rt]
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 drivers/cpufreq/cpufreq.c | 34 +++-------------------------------
 1 file changed, 3 insertions(+), 31 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 90e8deb6c15e..7a9c1a7ecfe5 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -53,12 +53,6 @@ static inline bool has_target(void)
 	return cpufreq_driver->target_index || cpufreq_driver->target;
 }
 
-/*
- * rwsem to guarantee that cpufreq driver module doesn't unload during critical
- * sections
- */
-static DECLARE_RWSEM(cpufreq_rwsem);
-
 /* internal prototypes */
 static int __cpufreq_governor(struct cpufreq_policy *policy,
 		unsigned int event);
@@ -205,9 +199,6 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
 	if (cpufreq_disabled() || (cpu >= nr_cpu_ids))
 		return NULL;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return NULL;
-
 	/* get the cpufreq driver */
 	read_lock_irqsave(&cpufreq_driver_lock, flags);
 
@@ -220,9 +211,6 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
 
 	read_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-	if (!policy)
-		up_read(&cpufreq_rwsem);
-
 	return policy;
 }
 EXPORT_SYMBOL_GPL(cpufreq_cpu_get);
@@ -233,7 +221,6 @@ void cpufreq_cpu_put(struct cpufreq_policy *policy)
 		return;
 
 	kobject_put(&policy->kobj);
-	up_read(&cpufreq_rwsem);
 }
 EXPORT_SYMBOL_GPL(cpufreq_cpu_put);
 
@@ -762,9 +749,6 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
 	struct freq_attr *fattr = to_attr(attr);
 	ssize_t ret;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return -EINVAL;
-
 	down_read(&policy->rwsem);
 
 	if (fattr->show)
@@ -773,7 +757,6 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
 		ret = -EIO;
 
 	up_read(&policy->rwsem);
-	up_read(&cpufreq_rwsem);
 
 	return ret;
 }
@@ -790,9 +773,6 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
 	if (!cpu_online(policy->cpu))
 		goto unlock;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		goto unlock;
-
 	down_write(&policy->rwsem);
 
 	if (fattr->store)
@@ -801,8 +781,6 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
 		ret = -EIO;
 
 	up_write(&policy->rwsem);
-
-	up_read(&cpufreq_rwsem);
 unlock:
 	put_online_cpus();
 
@@ -1142,9 +1120,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
 	}
 #endif
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return 0;
-
 #ifdef CONFIG_HOTPLUG_CPU
 	/* Check if this cpu was hot-unplugged earlier and has siblings */
 	read_lock_irqsave(&cpufreq_driver_lock, flags);
@@ -1152,7 +1127,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
 		if (cpumask_test_cpu(cpu, tpolicy->related_cpus)) {
 			read_unlock_irqrestore(&cpufreq_driver_lock, flags);
 			ret = cpufreq_add_policy_cpu(tpolicy, cpu, dev);
-			up_read(&cpufreq_rwsem);
 			return ret;
 		}
 	}
@@ -1288,7 +1262,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
 	up_write(&policy->rwsem);
 
 	kobject_uevent(&policy->kobj, KOBJ_ADD);
-	up_read(&cpufreq_rwsem);
 
 	pr_debug("initialization complete\n");
 
@@ -1314,8 +1287,6 @@ err_set_policy_cpu:
 	cpufreq_policy_free(policy);
 
 nomem_out:
-	up_read(&cpufreq_rwsem);
-
 	return ret;
 }
 
@@ -2528,19 +2499,20 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
 
 	pr_debug("unregistering driver %s\n", driver->name);
 
+	/* Protect against concurrent cpu hotplug */
+	get_online_cpus();
 	subsys_interface_unregister(&cpufreq_interface);
 	if (cpufreq_boost_supported())
 		cpufreq_sysfs_remove_file(&boost.attr);
 
 	unregister_hotcpu_notifier(&cpufreq_cpu_notifier);
 
-	down_write(&cpufreq_rwsem);
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 
 	cpufreq_driver = NULL;
 
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
-	up_write(&cpufreq_rwsem);
+	put_online_cpus();
 
 	return 0;
 }
-- 
2.7.0


* [PATCH RT 03/12] ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die()
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 01/12] cpufreq: Remove cpufreq_rwsem Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 04/12] rtmutex: Handle non enqueued waiters gracefully Steven Rostedt
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Grygorii Strashko, Russell King, stable-rt,
	Sebastian Andrzej Siewior, Sekhar Nori, Carsten Emde,
	Paul Gortmaker, philipp, Austin Schuh, John Kacur,
	Thomas Gleixner, linux-arm-kernel

[-- Attachment #1: 0003-ARM-smp-Move-clear_tasks_mm_cpumask-call-to-__cpu_di.patch --]
[-- Type: text/plain, Size: 3906 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Grygorii Strashko <grygorii.strashko@ti.com>

When running with the RT-kernel (4.1.5-rt5) on TI OMAP dra7-evm and trying
to do Suspend to RAM, the following backtrace occurs:

 Disabling non-boot CPUs ...
 PM: noirq suspend of devices complete after 7.295 msecs
 Disabling non-boot CPUs ...
 BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
 in_atomic(): 1, irqs_disabled(): 128, pid: 18, name: migration/1
 INFO: lockdep is turned off.
 irq event stamp: 122
 hardirqs last  enabled at (121): [<c06ac0ac>] _raw_spin_unlock_irqrestore+0x88/0x90
 hardirqs last disabled at (122): [<c06abed0>] _raw_spin_lock_irq+0x28/0x5c
 softirqs last  enabled at (0): [<c003d294>] copy_process.part.52+0x410/0x19d8
 softirqs last disabled at (0): [<  (null)>]   (null)
 Preemption disabled at:[<  (null)>]   (null)
  CPU: 1 PID: 18 Comm: migration/1 Tainted: G        W       4.1.4-rt3-01046-g96ac8da #204
 Hardware name: Generic DRA74X (Flattened Device Tree)
 [<c0019134>] (unwind_backtrace) from [<c0014774>] (show_stack+0x20/0x24)
 [<c0014774>] (show_stack) from [<c06a70f4>] (dump_stack+0x88/0xdc)
 [<c06a70f4>] (dump_stack) from [<c006cab8>] (___might_sleep+0x198/0x2a8)
 [<c006cab8>] (___might_sleep) from [<c06ac4dc>] (rt_spin_lock+0x30/0x70)
 [<c06ac4dc>] (rt_spin_lock) from [<c013f790>] (find_lock_task_mm+0x9c/0x174)
 [<c013f790>] (find_lock_task_mm) from [<c00409ac>] (clear_tasks_mm_cpumask+0xb4/0x1ac)
 [<c00409ac>] (clear_tasks_mm_cpumask) from [<c00166a4>] (__cpu_disable+0x98/0xbc)
 [<c00166a4>] (__cpu_disable) from [<c06a2e8c>] (take_cpu_down+0x1c/0x50)
 [<c06a2e8c>] (take_cpu_down) from [<c00f2600>] (multi_cpu_stop+0x11c/0x158)
 [<c00f2600>] (multi_cpu_stop) from [<c00f2a9c>] (cpu_stopper_thread+0xc4/0x184)
 [<c00f2a9c>] (cpu_stopper_thread) from [<c0069058>] (smpboot_thread_fn+0x18c/0x324)
 [<c0069058>] (smpboot_thread_fn) from [<c00649c4>] (kthread+0xe8/0x104)
 [<c00649c4>] (kthread) from [<c0010058>] (ret_from_fork+0x14/0x3c)
 CPU1: shutdown
 PM: Calling sched_clock_suspend+0x0/0x40
 PM: Calling timekeeping_suspend+0x0/0x2e0
 PM: Calling irq_gc_suspend+0x0/0x68
 PM: Calling fw_suspend+0x0/0x2c
 PM: Calling cpu_pm_suspend+0x0/0x28

Also, the system sometimes gets stuck right after displaying "Disabling
non-boot CPUs ...". The root cause of the above backtrace is task_lock(),
which takes a sleeping lock on -RT.

To fix the issue, move the clear_tasks_mm_cpumask() call from __cpu_disable()
to __cpu_die(), which runs in the thread that is asking for the target
CPU to be shut down. In addition, this change restores CPU hotplug
functionality on TI OMAP dra7-evm: CPU1 can be unplugged/plugged many times.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: <linux-arm-kernel@lists.infradead.org>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Austin Schuh <austin@peloton-tech.com>
Cc: <philipp@peloton-tech.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: <bigeasy@linutronix.de>
Cc: stable-rt@vger.kernel.org
Link: http://lkml.kernel.org/r/1441995683-30817-1-git-send-email-grygorii.strashko@ti.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/arm/kernel/smp.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index a8e32aaf0383..6e9b81666a23 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -208,8 +208,6 @@ int __cpu_disable(void)
 	flush_cache_louis();
 	local_flush_tlb_all();
 
-	clear_tasks_mm_cpumask(cpu);
-
 	return 0;
 }
 
@@ -225,6 +223,9 @@ void __cpu_die(unsigned int cpu)
 		pr_err("CPU%u: cpu didn't die\n", cpu);
 		return;
 	}
+
+	clear_tasks_mm_cpumask(cpu);
+
 	printk(KERN_NOTICE "CPU%u: shutdown\n", cpu);
 
 	/*
-- 
2.7.0


* [PATCH RT 04/12] rtmutex: Handle non enqueued waiters gracefully
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 01/12] cpufreq: Remove cpufreq_rwsem Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 03/12] ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die() Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 05/12] locking: locktorture: Do NOT include rwlock.h directly Steven Rostedt
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0004-rtmutex-Handle-non-enqueued-waiters-gracefully.patch --]
[-- Type: text/plain, Size: 1270 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Yimin debugged that, when a PI wakeup is in progress while
rt_mutex_start_proxy_lock() calls task_blocks_on_rt_mutex(), the latter
returns -EAGAIN, and in consequence the remove_waiter() call runs into
a BUG_ON() because there is nothing to remove.

Guard it with rt_mutex_has_waiters(). This is a quick fix which is
easy to backport. The proper fix is to have a central check in
remove_waiter() so we can call it unconditionally.

Reported-and-debugged-by: Yimin Deng <yimin11.deng@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 64973df0c686..c1b7d5b1be7e 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -2144,7 +2144,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 		ret = 0;
 	}
 
-	if (unlikely(ret))
+	if (ret && rt_mutex_has_waiters(lock))
 		remove_waiter(lock, waiter);
 
 	raw_spin_unlock(&lock->wait_lock);
-- 
2.7.0




* [PATCH RT 05/12] locking: locktorture: Do NOT include rwlock.h directly
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (2 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 04/12] rtmutex: Handle non enqueued waiters gracefully Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 06/12] rtmutex: Use chainwalking control enum Steven Rostedt
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt, Wolfgang M. Reimer

[-- Attachment #1: 0005-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch --]
[-- Type: text/plain, Size: 1058 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Wolfgang M. Reimer" <linuxball@gmail.com>

Including rwlock.h directly will cause kernel builds to fail
if CONFIG_PREEMPT_RT_FULL is defined. The correct header file
(rwlock_rt.h OR rwlock.h) will be included by spinlock.h which
is included by locktorture.c anyway.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Wolfgang M. Reimer <linuxball@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/locking/locktorture.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index ec8cce259779..aa60d919e336 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -24,7 +24,6 @@
 #include <linux/module.h>
 #include <linux/kthread.h>
 #include <linux/spinlock.h>
-#include <linux/rwlock.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/smp.h>
-- 
2.7.0




* [PATCH RT 06/12] rtmutex: Use chainwalking control enum
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (3 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 05/12] locking: locktorture: Do NOT include rwlock.h directly Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 07/12] dump stack: dont disable preemption during trace Steven Rostedt
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Brad Mouring

[-- Attachment #1: 0006-rtmutex-Use-chainwalking-control-enum.patch --]
[-- Type: text/plain, Size: 1194 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "bmouring@ni.com" <bmouring@ni.com>

In 8930ed80 (rtmutex: Cleanup deadlock detector debug logic),
chainwalking control enums were introduced to limit the deadlock
detection logic. One of the calls to task_blocks_on_rt_mutex was
missed when converting to use the enums.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index c1b7d5b1be7e..8d950b4521fc 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1008,7 +1008,7 @@ static void  noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
 	__set_current_state(TASK_UNINTERRUPTIBLE);
 	pi_unlock(&self->pi_lock);
 
-	ret = task_blocks_on_rt_mutex(lock, &waiter, self, 0);
+	ret = task_blocks_on_rt_mutex(lock, &waiter, self, RT_MUTEX_MIN_CHAINWALK);
 	BUG_ON(ret);
 
 	for (;;) {
-- 
2.7.0


* [PATCH RT 07/12] dump stack: dont disable preemption during trace
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (4 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 06/12] rtmutex: Use chainwalking control enum Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-27 10:32   ` Sebastian Andrzej Siewior
  2016-02-26 21:32 ` [PATCH RT 08/12] net: Make synchronize_rcu_expedited() conditional on !RT_FULL Steven Rostedt
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0007-dump-stack-don-t-disable-preemption-during-trace.patch --]
[-- Type: text/plain, Size: 3933 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

I see large latencies here during a stack dump on x86. The
preempt_disable() and get_cpu() should forbid moving the task to another
CPU during a stack dump and avoid two stack traces in parallel on the
same CPU. However, a stack trace from a second CPU may still happen in
parallel. Also, nesting is allowed, so a stack trace may happen in
process context while another one comes from IRQ context. With
migrate_disable() we keep this code preemptible and allow a second
backtrace on the same CPU by another task.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/dumpstack_32.c | 4 ++--
 arch/x86/kernel/dumpstack_64.c | 8 ++++----
 lib/dump_stack.c               | 6 ++----
 3 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/dumpstack_32.c b/arch/x86/kernel/dumpstack_32.c
index 5abd4cd4230c..1282817bb4c3 100644
--- a/arch/x86/kernel/dumpstack_32.c
+++ b/arch/x86/kernel/dumpstack_32.c
@@ -42,7 +42,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 		unsigned long *stack, unsigned long bp,
 		const struct stacktrace_ops *ops, void *data)
 {
-	const unsigned cpu = get_cpu();
+	const unsigned cpu = get_cpu_light();
 	int graph = 0;
 	u32 *prev_esp;
 
@@ -86,7 +86,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 			break;
 		touch_nmi_watchdog();
 	}
-	put_cpu();
+	put_cpu_light();
 }
 EXPORT_SYMBOL(dump_trace);
 
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index ff86f19b5758..4821f291890f 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -152,7 +152,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 		unsigned long *stack, unsigned long bp,
 		const struct stacktrace_ops *ops, void *data)
 {
-	const unsigned cpu = get_cpu();
+	const unsigned cpu = get_cpu_light();
 	struct thread_info *tinfo;
 	unsigned long *irq_stack = (unsigned long *)per_cpu(irq_stack_ptr, cpu);
 	unsigned long dummy;
@@ -241,7 +241,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 	 * This handles the process stack:
 	 */
 	bp = ops->walk_stack(tinfo, stack, bp, ops, data, NULL, &graph);
-	put_cpu();
+	put_cpu_light();
 }
 EXPORT_SYMBOL(dump_trace);
 
@@ -255,7 +255,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 	int cpu;
 	int i;
 
-	preempt_disable();
+	migrate_disable();
 	cpu = smp_processor_id();
 
 	irq_stack_end	= (unsigned long *)(per_cpu(irq_stack_ptr, cpu));
@@ -288,7 +288,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 		pr_cont(" %016lx", *stack++);
 		touch_nmi_watchdog();
 	}
-	preempt_enable();
+	migrate_enable();
 
 	pr_cont("\n");
 	show_trace_log_lvl(task, regs, sp, bp, log_lvl);
diff --git a/lib/dump_stack.c b/lib/dump_stack.c
index c30d07e99dba..01ca6dae9414 100644
--- a/lib/dump_stack.c
+++ b/lib/dump_stack.c
@@ -25,7 +25,6 @@ static atomic_t dump_lock = ATOMIC_INIT(-1);
 
 asmlinkage __visible void dump_stack(void)
 {
-	unsigned long flags;
 	int was_locked;
 	int old;
 	int cpu;
@@ -34,8 +33,8 @@ asmlinkage __visible void dump_stack(void)
 	 * Permit this cpu to perform nested stack dumps while serialising
 	 * against other CPUs
 	 */
+	migrate_disable();
 retry:
-	local_irq_save(flags);
 	cpu = smp_processor_id();
 	old = atomic_cmpxchg(&dump_lock, -1, cpu);
 	if (old == -1) {
@@ -43,7 +42,6 @@ retry:
 	} else if (old == cpu) {
 		was_locked = 1;
 	} else {
-		local_irq_restore(flags);
 		cpu_relax();
 		goto retry;
 	}
@@ -53,7 +51,7 @@ retry:
 	if (!was_locked)
 		atomic_set(&dump_lock, -1);
 
-	local_irq_restore(flags);
+	migrate_enable();
 }
 #else
 asmlinkage __visible void dump_stack(void)
-- 
2.7.0


* [PATCH RT 08/12] net: Make synchronize_rcu_expedited() conditional on !RT_FULL
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (5 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 07/12] dump stack: dont disable preemption during trace Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 09/12] net/core/cpuhotplug: Drain input_pkt_queue lockless Steven Rostedt
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Paul E. McKenney, Josh Cartwright,
	Eric Dumazet, David S. Miller

[-- Attachment #1: 0008-net-Make-synchronize_rcu_expedited-conditional-on-RT.patch --]
[-- Type: text/plain, Size: 1447 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Josh Cartwright <joshc@ni.com>

While the use of synchronize_rcu_expedited() might make
synchronize_net() "faster", it does so at significant cost on RT
systems, as expediting a grace period forcibly preempts any
high-priority RT tasks (via the stop_machine() mechanism).

Without this change, we can observe a latency spike up to 30us with
cyclictest by rapidly unplugging/reestablishing an ethernet link.

Suggested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Josh Cartwright <joshc@ni.com>
Cc: bigeasy@linutronix.de
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20151027123153.GG8245@jcartwri.amer.corp.natinst.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 1cbcf08cc224..48872718d8e2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6813,7 +6813,7 @@ EXPORT_SYMBOL(free_netdev);
 void synchronize_net(void)
 {
 	might_sleep();
-	if (rtnl_is_locked())
+	if (rtnl_is_locked() && !IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
 		synchronize_rcu_expedited();
 	else
 		synchronize_rcu();
-- 
2.7.0


* [PATCH RT 09/12] net/core/cpuhotplug: Drain input_pkt_queue lockless
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (6 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 08/12] net: Make synchronize_rcu_expedited() conditional on !RT_FULL Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 10/12] irqwork: Move irq safe work to irq context Steven Rostedt
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0009-net-core-cpuhotplug-Drain-input_pkt_queue-lockless.patch --]
[-- Type: text/plain, Size: 2489 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Grygorii Strashko <grygorii.strashko@ti.com>

I can constantly see the below error report with the 4.1 RT kernel on
TI ARM dra7-evm when trying to unplug cpu1:

[   57.737589] CPU1: shutdown
[   57.767537] BUG: spinlock bad magic on CPU#0, sh/137
[   57.767546]  lock: 0xee994730, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[   57.767552] CPU: 0 PID: 137 Comm: sh Not tainted 4.1.10-rt8-01700-g2c38702-dirty #55
[   57.767555] Hardware name: Generic DRA74X (Flattened Device Tree)
[   57.767568] [<c001acd0>] (unwind_backtrace) from [<c001534c>] (show_stack+0x20/0x24)
[   57.767579] [<c001534c>] (show_stack) from [<c075560c>] (dump_stack+0x84/0xa0)
[   57.767593] [<c075560c>] (dump_stack) from [<c00aca48>] (spin_dump+0x84/0xac)
[   57.767603] [<c00aca48>] (spin_dump) from [<c00acaa4>] (spin_bug+0x34/0x38)
[   57.767614] [<c00acaa4>] (spin_bug) from [<c00acc10>] (do_raw_spin_lock+0x168/0x1c0)
[   57.767624] [<c00acc10>] (do_raw_spin_lock) from [<c075b4cc>] (_raw_spin_lock+0x4c/0x54)
[   57.767631] [<c075b4cc>] (_raw_spin_lock) from [<c07599fc>] (rt_spin_lock_slowlock+0x5c/0x374)
[   57.767638] [<c07599fc>] (rt_spin_lock_slowlock) from [<c075bcf4>] (rt_spin_lock+0x38/0x70)
[   57.767649] [<c075bcf4>] (rt_spin_lock) from [<c06333c0>] (skb_dequeue+0x28/0x7c)
[   57.767662] [<c06333c0>] (skb_dequeue) from [<c06476ec>] (dev_cpu_callback+0x1b8/0x240)
[   57.767673] [<c06476ec>] (dev_cpu_callback) from [<c007566c>] (notifier_call_chain+0x3c/0xb4)

The reason is that skb_dequeue() takes skb->lock, but RT changed the
core code to use a raw spinlock for this queue. The non-raw lock is
deliberately left uninitialized to catch exactly this kind of problem.

Fixes: 91df05da13a6 'net: Use skbufhead with raw lock'
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 48872718d8e2..39d2a0ba38ed 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7065,7 +7065,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}
-	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
+	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH RT 10/12] irqwork: Move irq safe work to irq context
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (7 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 09/12] net/core/cpuhotplug: Drain input_pkt_queue lockless Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 11/12] sched: Introduce the trace_sched_waking tracepoint Steven Rostedt
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0010-irqwork-Move-irq-safe-work-to-irq-context.patch --]
[-- Type: text/plain, Size: 2760 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

On architectures where arch_irq_work_has_interrupt() returns false, we
end up running the irq safe work from the softirq context. That
results in a potential deadlock in the scheduler irq work, which
expects to be invoked with interrupts disabled.

Split the irq_work_tick() function into a hard and soft variant. Call
the hard variant from the tick interrupt and add the soft variant to
the timer softirq.

Reported-and-tested-by: Yanjiang Jin <yanjiang.jin@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/irq_work.h | 6 ++++++
 kernel/irq_work.c        | 9 +++++++++
 kernel/time/timer.c      | 6 ++----
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 30ef6c214e6f..af7ed9ad52c3 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -51,4 +51,10 @@ bool irq_work_needs_cpu(void);
 static inline bool irq_work_needs_cpu(void) { return false; }
 #endif
 
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
+void irq_work_tick_soft(void);
+#else
+static inline void irq_work_tick_soft(void) { }
+#endif
+
 #endif /* _LINUX_IRQ_WORK_H */
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 9678fd1382a7..3d5a476b58b9 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -200,8 +200,17 @@ void irq_work_tick(void)
 
 	if (!llist_empty(raised) && !arch_irq_work_has_interrupt())
 		irq_work_run_list(raised);
+
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
+		irq_work_run_list(this_cpu_ptr(&lazy_list));
+}
+
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
+void irq_work_tick_soft(void)
+{
 	irq_work_run_list(this_cpu_ptr(&lazy_list));
 }
+#endif
 
 /*
  * Synchronize against the irq_work @entry, ensures the entry is not
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 3a978d000fce..78e39b644780 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1451,7 +1451,7 @@ void update_process_times(int user_tick)
 	run_local_timers();
 	rcu_check_callbacks(cpu, user_tick);
 
-#if defined(CONFIG_IRQ_WORK) && !defined(CONFIG_PREEMPT_RT_FULL)
+#if defined(CONFIG_IRQ_WORK)
 	if (in_irq())
 		irq_work_tick();
 #endif
@@ -1467,9 +1467,7 @@ static void run_timer_softirq(struct softirq_action *h)
 
 	hrtimer_run_pending();
 
-#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
-	irq_work_tick();
-#endif
+	irq_work_tick_soft();
 
 	if (time_after_eq(jiffies, base->timer_jiffies))
 		__run_timers(base);
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH RT 11/12] sched: Introduce the trace_sched_waking tracepoint
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (8 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 10/12] irqwork: Move irq safe work to irq context Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:32 ` [PATCH RT 12/12] Linux 3.18.27-rt26-rc1 Steven Rostedt
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mathieu Desnoyers, Julien Desfossez,
	Peter Zijlstra (Intel),
	Francis Giraldeau, Linus Torvalds, Mike Galbraith, Ingo Molnar

[-- Attachment #1: 0011-sched-Introduce-the-trace_sched_waking-tracepoint.patch --]
[-- Type: text/plain, Size: 6023 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@infradead.org>

Upstream commit fbd705a0c6184580d0e2fbcbd47a37b6e5822511

Mathieu reported that since 317f394160e9 ("sched: Move the second half
of ttwu() to the remote cpu") trace_sched_wakeup() can happen out of
context of the waker.

This is a problem when you want to analyse wakeup paths because it is
now very hard to correlate the wakeup event to whoever issued the
wakeup.

OTOH trace_sched_wakeup() is issued at the point where we set
p->state = TASK_RUNNING, which is right where we hand the task off to
the scheduler, so this is an important point when looking at
scheduling behaviour: up to here it has been the wakeup path,
everything hereafter is due to scheduler policy.

To bridge this gap, introduce a second tracepoint: trace_sched_waking.
It is guaranteed to be called in the waker context.

[ Ported to linux-4.1.y-rt kernel by Mathieu Desnoyers. Resolved
  conflict: try_to_wake_up_local() does not exist in -rt kernel. Removed
  its instrumentation hunk. ]

Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
CC: Julien Desfossez <jdesfossez@efficios.com>
CC: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Francis Giraldeau <francis.giraldeau@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/20150609091336.GQ3644@twins.programming.kicks-ass.net
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/events/sched.h      | 30 +++++++++++++++++++++---------
 kernel/sched/core.c               |  8 +++++---
 kernel/trace/trace_sched_switch.c |  2 +-
 kernel/trace/trace_sched_wakeup.c |  2 +-
 4 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index a7d67bc14906..09f27eb85ef8 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -55,9 +55,9 @@ TRACE_EVENT(sched_kthread_stop_ret,
  */
 DECLARE_EVENT_CLASS(sched_wakeup_template,
 
-	TP_PROTO(struct task_struct *p, int success),
+	TP_PROTO(struct task_struct *p),
 
-	TP_ARGS(__perf_task(p), success),
+	TP_ARGS(__perf_task(p)),
 
 	TP_STRUCT__entry(
 		__array(	char,	comm,	TASK_COMM_LEN	)
@@ -71,25 +71,37 @@ DECLARE_EVENT_CLASS(sched_wakeup_template,
 		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
 		__entry->pid		= p->pid;
 		__entry->prio		= p->prio;
-		__entry->success	= success;
+		__entry->success	= 1; /* rudiment, kill when possible */
 		__entry->target_cpu	= task_cpu(p);
 	),
 
-	TP_printk("comm=%s pid=%d prio=%d success=%d target_cpu=%03d",
+	TP_printk("comm=%s pid=%d prio=%d target_cpu=%03d",
 		  __entry->comm, __entry->pid, __entry->prio,
-		  __entry->success, __entry->target_cpu)
+		  __entry->target_cpu)
 );
 
+/*
+ * Tracepoint called when waking a task; this tracepoint is guaranteed to be
+ * called from the waking context.
+ */
+DEFINE_EVENT(sched_wakeup_template, sched_waking,
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
+
+/*
+ * Tracepoint called when the task is actually woken; p->state == TASK_RUNNING.
+ * It is not always called from the waking context.
+ */
 DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
-	     TP_PROTO(struct task_struct *p, int success),
-	     TP_ARGS(p, success));
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
 
 /*
  * Tracepoint for waking up a new task:
  */
 DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new,
-	     TP_PROTO(struct task_struct *p, int success),
-	     TP_ARGS(p, success));
+	     TP_PROTO(struct task_struct *p),
+	     TP_ARGS(p));
 
 #ifdef CREATE_TRACE_POINTS
 static inline long __trace_sched_switch_state(struct task_struct *p)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7e844b4f1701..9e01a8f358f8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1606,9 +1606,9 @@ static void
 ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 {
 	check_preempt_curr(rq, p, wake_flags);
-	trace_sched_wakeup(p, true);
-
 	p->state = TASK_RUNNING;
+	trace_sched_wakeup(p);
+
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken)
 		p->sched_class->task_woken(rq, p);
@@ -1832,6 +1832,8 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	if (!(wake_flags & WF_LOCK_SLEEPER))
 		p->saved_state = TASK_RUNNING;
 
+	trace_sched_waking(p);
+
 	success = 1; /* we're going to change ->state */
 	cpu = task_cpu(p);
 
@@ -2247,7 +2249,7 @@ void wake_up_new_task(struct task_struct *p)
 	rq = __task_rq_lock(p);
 	activate_task(rq, p, 0);
 	p->on_rq = TASK_ON_RQ_QUEUED;
-	trace_sched_wakeup_new(p, true);
+	trace_sched_wakeup_new(p);
 	check_preempt_curr(rq, p, WF_FORK);
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken)
diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 3f34dc9b40f3..9586cde520b0 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -106,7 +106,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
 }
 
 static void
-probe_sched_wakeup(void *ignore, struct task_struct *wakee, int success)
+probe_sched_wakeup(void *ignore, struct task_struct *wakee)
 {
 	struct trace_array_cpu *data;
 	unsigned long flags;
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 19bd8928ce94..808258ccf6c5 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -460,7 +460,7 @@ static void wakeup_reset(struct trace_array *tr)
 }
 
 static void
-probe_wakeup(void *ignore, struct task_struct *p, int success)
+probe_wakeup(void *ignore, struct task_struct *p)
 {
 	struct trace_array_cpu *data;
 	int cpu = smp_processor_id();
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH RT 12/12] Linux 3.18.27-rt26-rc1
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (9 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 11/12] sched: Introduce the trace_sched_waking tracepoint Steven Rostedt
@ 2016-02-26 21:32 ` Steven Rostedt
  2016-02-26 21:36 ` [PATCH RT 00/12] " Steven Rostedt
       [not found] ` <20160226213340.259403556@goodmis.org>
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:32 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0012-Linux-3.18.27-rt26-rc1.patch --]
[-- Type: text/plain, Size: 414 bytes --]

3.18.27-rt26-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index c5b71f9a229d..02556f47e3e1 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt25
+-rt26-rc1
-- 
2.7.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH RT 00/12] Linux 3.18.27-rt26-rc1
  2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
                   ` (10 preceding siblings ...)
  2016-02-26 21:32 ` [PATCH RT 12/12] Linux 3.18.27-rt26-rc1 Steven Rostedt
@ 2016-02-26 21:36 ` Steven Rostedt
       [not found] ` <20160226213340.259403556@goodmis.org>
  12 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:36 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Screw up in scripts, lost the version in the subject.

-- Steve


On Fri, 26 Feb 2016 16:32:35 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> Dear RT Folks,
> 
> This is the RT stable review cycle of patch 3.18.27-rt26-rc1.
> 
> Please scream at me if I messed something up. Please test the patches too.
> 
> Note, I'm bringing this tree up to stable patches in 4.1.7-rt8.
> Then I'll be pulling 4.1-rt into stable, as development is now on 4.4-rt.
> After that, I'll be pulling the 4.1-rt stable changes into the stable trees.
> 
> The -rc release will be uploaded to kernel.org and will be deleted when
> the final release is out. This is just a review release (or release candidate).
> 
> The pre-releases will not be pushed to the git repository, only the
> final release is.
> 
> If all goes well, this patch will be converted to the next main release
> on 2/29/2016.
> 
> Enjoy,
> 
> -- Steve
> 
> 
> To build 3.18.27-rt26-rc1 directly, the following patches should be applied:
> 
>   http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.18.tar.xz
> 
>   http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.18.27.xz
> 
>   http://www.kernel.org/pub/linux/kernel/projects/rt/3.18/patch-3.18.27-rt26-rc1.patch.xz
> 
> You can also build from 3.18.27-rt25 by applying the incremental patch:
> 
> http://www.kernel.org/pub/linux/kernel/projects/rt/3.18/incr/patch-3.18.27-rt25-rt26-rc1.patch.xz
> 
> 
> Changes from 3.18.27-rt25:
> 
> ---
> 
> 
> Grygorii Strashko (2):
>       ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die()
>       net/core/cpuhotplug: Drain input_pkt_queue lockless
> 
> Josh Cartwright (1):
>       net: Make synchronize_rcu_expedited() conditional on !RT_FULL
> 
> Peter Zijlstra (1):
>       sched: Introduce the trace_sched_waking tracepoint
> 
> Sebastian Andrzej Siewior (2):
>       cpufreq: Remove cpufreq_rwsem
>       dump stack: don't disable preemption during trace
> 
> Steven Rostedt (Red Hat) (1):
>       Linux 3.18.27-rt26-rc1
> 
> Thomas Gleixner (3):
>       genirq: Handle force threading of interrupts with primary and thread handler
>       rtmutex: Handle non enqueued waiters gracefully
>       irqwork: Move irq safe work to irq context
> 
> Wolfgang M. Reimer (1):
>       locking: locktorture: Do NOT include rwlock.h directly
> 
> bmouring@ni.com (1):
>       rtmutex: Use chainwalking control enum
> 
> ----
>  arch/arm/kernel/smp.c             |   5 +-
>  arch/x86/kernel/dumpstack_32.c    |   4 +-
>  arch/x86/kernel/dumpstack_64.c    |   8 +-
>  drivers/cpufreq/cpufreq.c         |  34 +-------
>  include/linux/interrupt.h         |   2 +
>  include/linux/irq_work.h          |   6 ++
>  include/trace/events/sched.h      |  30 +++++---
>  kernel/irq/manage.c               | 158 ++++++++++++++++++++++++++++----------
>  kernel/irq_work.c                 |   9 +++
>  kernel/locking/locktorture.c      |   1 -
>  kernel/locking/rtmutex.c          |   4 +-
>  kernel/sched/core.c               |   8 +-
>  kernel/time/timer.c               |   6 +-
>  kernel/trace/trace_sched_switch.c |   2 +-
>  kernel/trace/trace_sched_wakeup.c |   2 +-
>  lib/dump_stack.c                  |   6 +-
>  localversion-rt                   |   2 +-
>  net/core/dev.c                    |   4 +-
>  18 files changed, 183 insertions(+), 108 deletions(-)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RT 02/12] genirq: Handle force threading of interrupts with primary and thread handler
       [not found] ` <20160226213340.259403556@goodmis.org>
@ 2016-02-26 21:48   ` Steven Rostedt
  0 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-02-26 21:48 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Kohji Okuno, Michal Šmucr,
	stable-rt

Quilt mail doesn't seem to handle Š well, and vger.kernel.org blocked
it.

-- Steve


On Fri, 26 Feb 2016 16:32:37 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> 3.18.27-rt26-rc1 stable review patch.
> If anyone has any objections, please let me know.
> 
> ------------------
> 
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Force threading of interrupts does not deal with interrupts which are
> requested with a primary and a threaded handler. The current policy is
> to leave them alone and let the primary handler run in interrupt
> context, but we set the ONESHOT flag for those interrupts as well.
> 
> Kohji Okuno debugged a problem with the SDHCI driver where the
> interrupt thread waits for a hardware interrupt to trigger, which can't
> work well because the hardware interrupt is masked due to the ONESHOT
> flag being set. He proposed to set the ONESHOT flag only if the
> interrupt does not provide a thread handler.
> 
> Though that does not work either because these interrupts can be
> shared. So the other interrupt would rightfully get the ONESHOT flag
> set and therefore the same situation would happen again.
> 
> To deal with this properly, we need to force-thread the primary handler
> of such interrupts as well. That means that the primary interrupt
> handler is treated as any other primary interrupt handler which is not
> marked IRQF_NO_THREAD. The threaded handler becomes a separate thread
> so the SDHCI flow logic can be handled gracefully.
> 
> The same issue was reported against 4.1-rt.
> 
> Reported-by: Kohji Okuno <okuno.kohji@jp.panasonic.com>
> Reported-By: Michal Šmucr <msmucr@gmail.com>
> Reported-and-tested-by: Nathan Sullivan <nathan.sullivan@ni.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: stable-rt@vger.kernel.org
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> ---
>  include/linux/interrupt.h |   2 +
>  kernel/irq/manage.c       | 158 ++++++++++++++++++++++++++++++++++------------
>  2 files changed, 119 insertions(+), 41 deletions(-)
> 
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 33cfbc085a94..86628c733be7 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -100,6 +100,7 @@ typedef irqreturn_t (*irq_handler_t)(int, void *);
>   * @flags:	flags (see IRQF_* above)
>   * @thread_fn:	interrupt handler function for threaded interrupts
>   * @thread:	thread pointer for threaded interrupts
> + * @secondary:	pointer to secondary irqaction (force threading)
>   * @thread_flags:	flags related to @thread
>   * @thread_mask:	bitmask for keeping track of @thread activity
>   * @dir:	pointer to the proc/irq/NN/name entry
> @@ -111,6 +112,7 @@ struct irqaction {
>  	struct irqaction	*next;
>  	irq_handler_t		thread_fn;
>  	struct task_struct	*thread;
> +	struct irqaction	*secondary;
>  	unsigned int		irq;
>  	unsigned int		flags;
>  	unsigned long		thread_flags;
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 382cbe57abf3..70f59992c201 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -735,6 +735,12 @@ static irqreturn_t irq_nested_primary_handler(int irq, void *dev_id)
>  	return IRQ_NONE;
>  }
>  
> +static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id)
> +{
> +	WARN(1, "Secondary action handler called for irq %d\n", irq);
> +	return IRQ_NONE;
> +}
> +
>  static int irq_wait_for_interrupt(struct irqaction *action)
>  {
>  	set_current_state(TASK_INTERRUPTIBLE);
> @@ -761,7 +767,8 @@ static int irq_wait_for_interrupt(struct irqaction *action)
>  static void irq_finalize_oneshot(struct irq_desc *desc,
>  				 struct irqaction *action)
>  {
> -	if (!(desc->istate & IRQS_ONESHOT))
> +	if (!(desc->istate & IRQS_ONESHOT) ||
> +	    action->handler == irq_forced_secondary_handler)
>  		return;
>  again:
>  	chip_bus_lock(desc);
> @@ -923,6 +930,18 @@ static void irq_thread_dtor(struct callback_head *unused)
>  	irq_finalize_oneshot(desc, action);
>  }
>  
> +static void irq_wake_secondary(struct irq_desc *desc, struct irqaction *action)
> +{
> +	struct irqaction *secondary = action->secondary;
> +
> +	if (WARN_ON_ONCE(!secondary))
> +		return;
> +
> +	raw_spin_lock_irq(&desc->lock);
> +	__irq_wake_thread(desc, secondary);
> +	raw_spin_unlock_irq(&desc->lock);
> +}
> +
>  /*
>   * Interrupt handler thread
>   */
> @@ -953,6 +972,8 @@ static int irq_thread(void *data)
>  		action_ret = handler_fn(desc, action);
>  		if (action_ret == IRQ_HANDLED)
>  			atomic_inc(&desc->threads_handled);
> +		if (action_ret == IRQ_WAKE_THREAD)
> +			irq_wake_secondary(desc, action);
>  
>  #ifdef CONFIG_PREEMPT_RT_FULL
>  		migrate_disable();
> @@ -1003,20 +1024,36 @@ void irq_wake_thread(unsigned int irq, void *dev_id)
>  }
>  EXPORT_SYMBOL_GPL(irq_wake_thread);
>  
> -static void irq_setup_forced_threading(struct irqaction *new)
> +static int irq_setup_forced_threading(struct irqaction *new)
>  {
>  	if (!force_irqthreads)
> -		return;
> +		return 0;
>  	if (new->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT))
> -		return;
> +		return 0;
>  
>  	new->flags |= IRQF_ONESHOT;
>  
> -	if (!new->thread_fn) {
> -		set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
> -		new->thread_fn = new->handler;
> -		new->handler = irq_default_primary_handler;
> +	/*
> +	 * Handle the case where we have a real primary handler and a
> +	 * thread handler. We force thread them as well by creating a
> +	 * secondary action.
> +	 */
> +	if (new->handler != irq_default_primary_handler && new->thread_fn) {
> +		/* Allocate the secondary action */
> +		new->secondary = kzalloc(sizeof(struct irqaction), GFP_KERNEL);
> +		if (!new->secondary)
> +			return -ENOMEM;
> +		new->secondary->handler = irq_forced_secondary_handler;
> +		new->secondary->thread_fn = new->thread_fn;
> +		new->secondary->dev_id = new->dev_id;
> +		new->secondary->irq = new->irq;
> +		new->secondary->name = new->name;
>  	}
> +	/* Deal with the primary handler */
> +	set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
> +	new->thread_fn = new->handler;
> +	new->handler = irq_default_primary_handler;
> +	return 0;
>  }
>  
>  static int irq_request_resources(struct irq_desc *desc)
> @@ -1036,6 +1073,48 @@ static void irq_release_resources(struct irq_desc *desc)
>  		c->irq_release_resources(d);
>  }
>  
> +static int
> +setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
> +{
> +	struct task_struct *t;
> +	struct sched_param param = {
> +		.sched_priority = MAX_USER_RT_PRIO/2,
> +	};
> +
> +	if (!secondary) {
> +		t = kthread_create(irq_thread, new, "irq/%d-%s", irq,
> +				   new->name);
> +	} else {
> +		t = kthread_create(irq_thread, new, "irq/%d-s-%s", irq,
> +				   new->name);
> +		param.sched_priority += 1;
> +	}
> +
> +	if (IS_ERR(t))
> +		return PTR_ERR(t);
> +
> +	sched_setscheduler_nocheck(t, SCHED_FIFO, &param);
> +
> +	/*
> +	 * We keep the reference to the task struct even if
> +	 * the thread dies to avoid that the interrupt code
> +	 * references an already freed task_struct.
> +	 */
> +	get_task_struct(t);
> +	new->thread = t;
> +	/*
> +	 * Tell the thread to set its affinity. This is
> +	 * important for shared interrupt handlers as we do
> +	 * not invoke setup_affinity() for the secondary
> +	 * handlers as everything is already set up. Even for
> +	 * interrupts marked with IRQF_NO_BALANCE this is
> +	 * correct as we want the thread to move to the cpu(s)
> +	 * on which the requesting code placed the interrupt.
> +	 */
> +	set_bit(IRQTF_AFFINITY, &new->thread_flags);
> +	return 0;
> +}
> +
>  /*
>   * Internal function to register an irqaction - typically used to
>   * allocate special interrupts that are part of the architecture.
> @@ -1056,6 +1135,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
>  	if (!try_module_get(desc->owner))
>  		return -ENODEV;
>  
> +	new->irq = irq;
> +
>  	/*
>  	 * Check whether the interrupt nests into another interrupt
>  	 * thread.
> @@ -1073,8 +1154,11 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
>  		 */
>  		new->handler = irq_nested_primary_handler;
>  	} else {
> -		if (irq_settings_can_thread(desc))
> -			irq_setup_forced_threading(new);
> +		if (irq_settings_can_thread(desc)) {
> +			ret = irq_setup_forced_threading(new);
> +			if (ret)
> +				goto out_mput;
> +		}
>  	}
>  
>  	/*
> @@ -1083,37 +1167,14 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
>  	 * thread.
>  	 */
>  	if (new->thread_fn && !nested) {
> -		struct task_struct *t;
> -		static const struct sched_param param = {
> -			.sched_priority = MAX_USER_RT_PRIO/2,
> -		};
> -
> -		t = kthread_create(irq_thread, new, "irq/%d-%s", irq,
> -				   new->name);
> -		if (IS_ERR(t)) {
> -			ret = PTR_ERR(t);
> +		ret = setup_irq_thread(new, irq, false);
> +		if (ret)
>  			goto out_mput;
> +		if (new->secondary) {
> +			ret = setup_irq_thread(new->secondary, irq, true);
> +			if (ret)
> +				goto out_thread;
>  		}
> -
> -		sched_setscheduler_nocheck(t, SCHED_FIFO, &param);
> -
> -		/*
> -		 * We keep the reference to the task struct even if
> -		 * the thread dies to avoid that the interrupt code
> -		 * references an already freed task_struct.
> -		 */
> -		get_task_struct(t);
> -		new->thread = t;
> -		/*
> -		 * Tell the thread to set its affinity. This is
> -		 * important for shared interrupt handlers as we do
> -		 * not invoke setup_affinity() for the secondary
> -		 * handlers as everything is already set up. Even for
> -		 * interrupts marked with IRQF_NO_BALANCE this is
> -		 * correct as we want the thread to move to the cpu(s)
> -		 * on which the requesting code placed the interrupt.
> -		 */
> -		set_bit(IRQTF_AFFINITY, &new->thread_flags);
>  	}
>  
>  	if (!alloc_cpumask_var(&mask, GFP_KERNEL)) {
> @@ -1289,7 +1350,6 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
>  				   irq, nmsk, omsk);
>  	}
>  
> -	new->irq = irq;
>  	*old_ptr = new;
>  
>  	irq_pm_install_action(desc, new);
> @@ -1315,6 +1375,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
>  	 */
>  	if (new->thread)
>  		wake_up_process(new->thread);
> +	if (new->secondary)
> +		wake_up_process(new->secondary->thread);
>  
>  	register_irq_proc(irq, desc);
>  	new->dir = NULL;
> @@ -1345,6 +1407,13 @@ out_thread:
>  		kthread_stop(t);
>  		put_task_struct(t);
>  	}
> +	if (new->secondary && new->secondary->thread) {
> +		struct task_struct *t = new->secondary->thread;
> +
> +		new->secondary->thread = NULL;
> +		kthread_stop(t);
> +		put_task_struct(t);
> +	}
>  out_mput:
>  	module_put(desc->owner);
>  	return ret;
> @@ -1452,9 +1521,14 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
>  	if (action->thread) {
>  		kthread_stop(action->thread);
>  		put_task_struct(action->thread);
> +		if (action->secondary && action->secondary->thread) {
> +			kthread_stop(action->secondary->thread);
> +			put_task_struct(action->secondary->thread);
> +		}
>  	}
>  
>  	module_put(desc->owner);
> +	kfree(action->secondary);
>  	return action;
>  }
>  
> @@ -1593,8 +1667,10 @@ int request_threaded_irq(unsigned int irq, irq_handler_t handler,
>  	retval = __setup_irq(irq, desc, action);
>  	chip_bus_sync_unlock(desc);
>  
> -	if (retval)
> +	if (retval) {
> +		kfree(action->secondary);
>  		kfree(action);
> +	}
>  
>  #ifdef CONFIG_DEBUG_SHIRQ_FIXME
>  	if (!retval && (irqflags & IRQF_SHARED)) {

--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RT 07/12] dump stack: dont disable preemption during trace
  2016-02-26 21:32 ` [PATCH RT 07/12] dump stack: dont disable preemption during trace Steven Rostedt
@ 2016-02-27 10:32   ` Sebastian Andrzej Siewior
  2016-02-29 15:06     ` Steven Rostedt
  2016-03-01 14:45     ` Steven Rostedt
  0 siblings, 2 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-02-27 10:32 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On 2016-02-26 16:32:42 [-0500], Steven Rostedt wrote:
> 3.18.27-rt26-rc1 stable review patch.
> If anyone has any objections, please let me know.

Please merge this one along with "kernel: migrate_disable() do fastpath
in atomic & irqs-off". Otherwise we might get recursive with lockdep and
CPU-hotplug (if we go after hp->lock in pin_current_cpu()).

Sebastian

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RT 07/12] dump stack: dont disable preemption during trace
  2016-02-27 10:32   ` Sebastian Andrzej Siewior
@ 2016-02-29 15:06     ` Steven Rostedt
  2016-02-29 15:07       ` Thomas Gleixner
  2016-03-01 14:45     ` Steven Rostedt
  1 sibling, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2016-02-29 15:06 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On Sat, 27 Feb 2016 11:32:06 +0100
Sebastian Andrzej Siewior <sebastian@breakpoint.cc> wrote:

> On 2016-02-26 16:32:42 [-0500], Steven Rostedt wrote:
> > 3.18.27-rt26-rc1 stable review patch.
> > If anyone has any objections, please let me know.  
> 
> Please merge this one along with "kernel: migrate_disable() do fastpath
> in atomic & irqs-off". Otherwise we might get recursive with lockdep and
> CPU-hotplug (if we go after hp->lock in pin_current_cpu()).

This is in 4.4-rt but not 4.1-rt. I'm currently bringing everything up
to 4.1-rt, then I'm going to pull 4.1-rt into stable, and then create a
new round with the changes that happened in 4.4-rt.

-- Steve

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RT 07/12] dump stack: dont disable preemption during trace
  2016-02-29 15:06     ` Steven Rostedt
@ 2016-02-29 15:07       ` Thomas Gleixner
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Gleixner @ 2016-02-29 15:07 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Sebastian Andrzej Siewior, linux-kernel, linux-rt-users,
	Carsten Emde, Sebastian Andrzej Siewior, John Kacur,
	Paul Gortmaker

On Mon, 29 Feb 2016, Steven Rostedt wrote:

> On Sat, 27 Feb 2016 11:32:06 +0100
> Sebastian Andrzej Siewior <sebastian@breakpoint.cc> wrote:
> 
> > On 2016-02-26 16:32:42 [-0500], Steven Rostedt wrote:
> > > 3.18.27-rt26-rc1 stable review patch.
> > > If anyone has any objections, please let me know.  
> > 
> > Please merge this one along with "kernel: migrate_disable() do fastpath
> > in atomic & irqs-off". Otherwise we might get recursive with lockdep and
> > CPU-hotplug (if we go after hp->lock in pin_current_cpu()).
> 
> This is in 4.4-rt but not 4.1-rt. I'm currently bringing everything up
> to 4.1-rt, then I'm going to pull 4.1-rt into stable, and then create a
> new round with the changes that happened in 4.4-rt.

Oh nice, I was trying to dig out a time slot to bring 4.1 back up to date ...

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RT 07/12] dump stack: dont disable preemption during trace
  2016-02-27 10:32   ` Sebastian Andrzej Siewior
  2016-02-29 15:06     ` Steven Rostedt
@ 2016-03-01 14:45     ` Steven Rostedt
  2016-03-01 18:25       ` Steven Rostedt
  1 sibling, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2016-03-01 14:45 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On Sat, 27 Feb 2016 11:32:06 +0100
Sebastian Andrzej Siewior <sebastian@breakpoint.cc> wrote:

> On 2016-02-26 16:32:42 [-0500], Steven Rostedt wrote:
> > 3.18.27-rt26-rc1 stable review patch.
> > If anyone has any objections, please let me know.  
> 
> Please merge this one along with "kernel: migrate_disable() do fastpath
> in atomic & irqs-off". Otherwise we might get recursive with lockdep and
> CPU-hotplug (if we go after hp->lock in pin_current_cpu()).

Interesting. When I pulled this patch into 4.1-rt, it caused the system
to lock up. I'm looking into it now.

-- Steve


* Re: [PATCH RT 07/12] dump stack: dont disable preemption during trace
  2016-03-01 14:45     ` Steven Rostedt
@ 2016-03-01 18:25       ` Steven Rostedt
  0 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2016-03-01 18:25 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On Tue, 1 Mar 2016 09:45:05 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Sat, 27 Feb 2016 11:32:06 +0100
> Sebastian Andrzej Siewior <sebastian@breakpoint.cc> wrote:
> 
> > On 2016-02-26 16:32:42 [-0500], Steven Rostedt wrote:  
> > > 3.18.27-rt26-rc1 stable review patch.
> > > If anyone has any objections, please let me know.    
> > 
> > Please merge this one along with "kernel: migrate_disable() do fastpath
> > in atomic & irqs-off". Otherwise we might get recursive with lockdep and
> > CPU-hotplug (if we go after hp->lock in pin_current_cpu()).  
> 
> Interesting. When I pulled this patch into 4.1-rt, it caused the system
> to lock up. I'm looking into it now.

And that's because this commit also needs:

"kernel: softirq: unlock with irqs on"

And due to the WARN_ON_ONCE() issue (for which I just posted a fix to
mainline), it caused my system to reboot.

http://lkml.kernel.org/r/20160301110939.1b786f49@gandalf.local.home

-- Steve


end of thread, other threads:[~2016-03-01 18:25 UTC | newest]

Thread overview: 19+ messages
2016-02-26 21:32 [PATCH RT 00/12] Linux Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 01/12] cpufreq: Remove cpufreq_rwsem Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 03/12] ARM: smp: Move clear_tasks_mm_cpumask() call to __cpu_die() Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 04/12] rtmutex: Handle non enqueued waiters gracefully Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 05/12] locking: locktorture: Do NOT include rwlock.h directly Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 06/12] rtmutex: Use chainwalking control enum Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 07/12] dump stack: dont disable preemption during trace Steven Rostedt
2016-02-27 10:32   ` Sebastian Andrzej Siewior
2016-02-29 15:06     ` Steven Rostedt
2016-02-29 15:07       ` Thomas Gleixner
2016-03-01 14:45     ` Steven Rostedt
2016-03-01 18:25       ` Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 08/12] net: Make synchronize_rcu_expedited() conditional on !RT_FULL Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 09/12] net/core/cpuhotplug: Drain input_pkt_queue lockless Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 10/12] irqwork: Move irq safe work to irq context Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 11/12] sched: Introduce the trace_sched_waking tracepoint Steven Rostedt
2016-02-26 21:32 ` [PATCH RT 12/12] Linux 3.18.27-rt26-rc1 Steven Rostedt
2016-02-26 21:36 ` [PATCH RT 00/12] " Steven Rostedt
     [not found] ` <20160226213340.259403556@goodmis.org>
2016-02-26 21:48   ` [PATCH RT 02/12] genirq: Handle force threading of interrupts with primary and thread handler Steven Rostedt
