linux-rt-users.vger.kernel.org archive mirror
* [PATCH RT 00/22] Linux 4.1.15-rt18-rc1
@ 2016-03-02 15:08 Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 01/22] sched: reset tasks lockless wake-queues on fork() Steven Rostedt
                   ` (21 more replies)
  0 siblings, 22 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:08 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 4.1.15-rt18-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this patch will be converted to the next main release
on 3/7/2016.

Enjoy,

-- Steve


To build 4.1.15-rt18-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.1.tar.xz

  http://www.kernel.org/pub/linux/kernel/v4.x/patch-4.1.15.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.15-rt18-rc1.patch.xz

You can also build from 4.1.15-rt17 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.15-rt17-rt18-rc1.patch.xz


Changes from 4.1.15-rt17:

---


Clark Williams (1):
      rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL

Mike Galbraith (3):
      sched,rt: __always_inline preemptible_lazy()
      drm,radeon,i915: Use preempt_disable/enable_rt() where recommended
      drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end()

Sebastian Andrzej Siewior (13):
      sched: reset task's lockless wake-queues on fork()
      ptrace: don't open IRQs in ptrace_freeze_traced() too early
      net: move xmit_recursion to per-task variable on -RT
      kernel/softirq: use cond_resched_rcu_qs() on -RT as well (run_ksoftirqd())
      net/core: protect users of napi_alloc_cache against reentrance
      preempt-lazy: Add the lazy-preemption check to preempt_schedule()
      softirq: split timer softirqs out of ksoftirqd
      net: provide a way to delegate processing a softirq to ksoftirqd
      latencyhist: disable jump-labels
      kernel: migrate_disable() do fastpath in atomic & irqs-off
      kernel: softirq: unlock with irqs on
      kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks"
      kernel: sched: Fix preempt_disable_ip recodring for preempt_disable()

Steven Rostedt (Red Hat) (1):
      Linux 4.1.15-rt18-rc1

Thomas Gleixner (1):
      tick/broadcast: Make broadcast hrtimer irqsafe

Yang Shi (3):
      arm64: replace read_lock to rcu lock in call_step_hook
      trace: Use rcuidle version for preemptoff_hist trace point
      f2fs: Mutex can't be used by down_write_nest_lock()

----
 arch/Kconfig                            |   1 +
 arch/arm64/kernel/debug-monitors.c      |  21 +++---
 drivers/gpu/drm/i915/i915_irq.c         |   2 +
 drivers/gpu/drm/i915/intel_sprite.c     |  11 +--
 drivers/gpu/drm/radeon/radeon_display.c |   2 +
 fs/f2fs/f2fs.h                          |   4 +-
 include/linux/ftrace.h                  |  12 ++++
 include/linux/interrupt.h               |   8 +++
 include/linux/netdevice.h               |   9 +++
 include/linux/sched.h                   |   5 +-
 kernel/fork.c                           |   1 +
 kernel/ptrace.c                         |   6 +-
 kernel/rcu/rcutorture.c                 |   7 ++
 kernel/sched/core.c                     |  54 +++++++++------
 kernel/softirq.c                        | 116 +++++++++++++++++++++++++++-----
 kernel/stop_machine.c                   |  40 +++--------
 kernel/time/tick-broadcast-hrtimer.c    |   1 +
 kernel/trace/trace_irqsoff.c            |   8 +--
 localversion-rt                         |   2 +-
 net/core/dev.c                          |  43 ++++++++++--
 net/core/skbuff.c                       |   8 ++-
 21 files changed, 261 insertions(+), 100 deletions(-)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH RT 01/22] sched: reset tasks lockless wake-queues on fork()
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 02/22] ptrace: dont open IRQs in ptrace_freeze_traced() too early Steven Rostedt
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0001-sched-reset-task-s-lockless-wake-queues-on-fork.patch --]
[-- Type: text/plain, Size: 1423 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

In 7675104990ed ("sched: Implement lockless wake-queues") we gained
lockless wake-queues. -RT managed to lock itself up with those: there
could be multiple attempts to enqueue task X for a wakeup _even_ if
task X is already running.
The reason is that task X could be runnable but not yet on a CPU. If
the task performing the wakeup did not leave the CPU, it could perform
multiple wakeups.
With the proper timing, task X could be running and enqueued for a
wakeup at the same time. If this happens while X is performing a fork(),
its child will have a !NULL `wake_q` member copied.
This is not a problem as long as the child task does not participate in
lockless wakeups :)

Fixes: 7675104990ed ("sched: Implement lockless wake-queues")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/fork.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 1b0e656f60e8..8f8a0a13d212 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -387,6 +387,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig)
 #endif
 	tsk->splice_pipe = NULL;
 	tsk->task_frag.page = NULL;
+	tsk->wake_q.next = NULL;
 
 	account_kernel_stack(ti, 1);
 
-- 
2.7.0


* [PATCH RT 02/22] ptrace: dont open IRQs in ptrace_freeze_traced() too early
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 01/22] sched: reset tasks lockless wake-queues on fork() Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 03/22] net: move xmit_recursion to per-task variable on -RT Steven Rostedt
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, kernel test robot, stable-rt

[-- Attachment #1: 0002-ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch --]
[-- Type: text/plain, Size: 1326 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

In the non-RT case, spin_lock_irq() disables interrupts here just like
raw_spin_lock_irq() does. The inner raw_spin_unlock_irq() therefore
re-enables interrupts too early, while the siglock section is still held.

Reported-by: kernel test robot <ying.huang@linux.intel.com>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/ptrace.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index b8d001b89047..b90be3ae3a10 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -129,12 +129,14 @@ static bool ptrace_freeze_traced(struct task_struct *task)
 
 	spin_lock_irq(&task->sighand->siglock);
 	if (task_is_traced(task) && !__fatal_signal_pending(task)) {
-		raw_spin_lock_irq(&task->pi_lock);
+		unsigned long flags;
+
+		raw_spin_lock_irqsave(&task->pi_lock, flags);
 		if (task->state & __TASK_TRACED)
 			task->state = __TASK_TRACED;
 		else
 			task->saved_state = __TASK_TRACED;
-		raw_spin_unlock_irq(&task->pi_lock);
+		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
 		ret = true;
 	}
 	spin_unlock_irq(&task->sighand->siglock);
-- 
2.7.0


* [PATCH RT 03/22] net: move xmit_recursion to per-task variable on -RT
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 01/22] sched: reset tasks lockless wake-queues on fork() Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 02/22] ptrace: dont open IRQs in ptrace_freeze_traced() too early Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 04/22] kernel/softirq: use cond_resched_rcu_qs() on -RT as well (run_ksoftirqd()) Steven Rostedt
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0003-net-move-xmit_recursion-to-per-task-variable-on-RT.patch --]
[-- Type: text/plain, Size: 3700 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

A softirq on -RT can be preempted. That means one task can be in
__dev_queue_xmit(), get preempted, and another task may enter
__dev_queue_xmit() as well. netperf together with a bridge device
will then trigger the `recursion alert` because each task increments
the xmit_recursion variable, which is per-CPU.
A virtual device like br0 is required to trigger this warning.

This patch moves the counter to per-task instead of per-CPU so it
counts the recursion properly on -RT.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/netdevice.h |  9 +++++++++
 include/linux/sched.h     |  3 +++
 net/core/dev.c            | 41 ++++++++++++++++++++++++++++++++++++++---
 3 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 7a289e802a23..d24fe5d9980d 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2192,11 +2192,20 @@ void netdev_freemem(struct net_device *dev);
 void synchronize_net(void);
 int init_dummy_netdev(struct net_device *dev);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static inline int dev_recursion_level(void)
+{
+	return current->xmit_recursion;
+}
+
+#else
+
 DECLARE_PER_CPU(int, xmit_recursion);
 static inline int dev_recursion_level(void)
 {
 	return this_cpu_read(xmit_recursion);
 }
+#endif
 
 struct net_device *dev_get_by_index(struct net *net, int ifindex);
 struct net_device *__dev_get_by_index(struct net *net, int ifindex);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1a56c0512491..4d995add9497 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1806,6 +1806,9 @@ struct task_struct {
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 	unsigned long	task_state_change;
 #endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+	int xmit_recursion;
+#endif
 	int pagefault_disabled;
 };
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 16fbef81024d..7d8ad99de55f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2883,9 +2883,44 @@ static void skb_update_prio(struct sk_buff *skb)
 #define skb_update_prio(skb)
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+
+static inline int xmit_rec_read(void)
+{
+       return current->xmit_recursion;
+}
+
+static inline void xmit_rec_inc(void)
+{
+       current->xmit_recursion++;
+}
+
+static inline void xmit_rec_dec(void)
+{
+       current->xmit_recursion--;
+}
+
+#else
+
 DEFINE_PER_CPU(int, xmit_recursion);
 EXPORT_SYMBOL(xmit_recursion);
 
+static inline int xmit_rec_read(void)
+{
+	return __this_cpu_read(xmit_recursion);
+}
+
+static inline void xmit_rec_inc(void)
+{
+	__this_cpu_inc(xmit_recursion);
+}
+
+static inline void xmit_rec_dec(void)
+{
+	__this_cpu_dec(xmit_recursion);
+}
+#endif
+
 #define RECURSION_LIMIT 10
 
 /**
@@ -2987,7 +3022,7 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
 
 		if (txq->xmit_lock_owner != cpu) {
 
-			if (__this_cpu_read(xmit_recursion) > RECURSION_LIMIT)
+			if (xmit_rec_read() > RECURSION_LIMIT)
 				goto recursion_alert;
 
 			skb = validate_xmit_skb(skb, dev);
@@ -2997,9 +3032,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
 			HARD_TX_LOCK(dev, txq, cpu);
 
 			if (!netif_xmit_stopped(txq)) {
-				__this_cpu_inc(xmit_recursion);
+				xmit_rec_inc();
 				skb = dev_hard_start_xmit(skb, dev, txq, &rc);
-				__this_cpu_dec(xmit_recursion);
+				xmit_rec_dec();
 				if (dev_xmit_complete(rc)) {
 					HARD_TX_UNLOCK(dev, txq);
 					goto out;
-- 
2.7.0


* [PATCH RT 04/22] kernel/softirq: use cond_resched_rcu_qs() on -RT as well (run_ksoftirqd())
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 03/22] net: move xmit_recursion to per-task variable on -RT Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 05/22] net/core: protect users of napi_alloc_cache against reentrance Steven Rostedt
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Grygorii Strashko

[-- Attachment #1: 0004-kernel-softirq-use-cond_resched_rcu_qs-on-RT-as-well.patch --]
[-- Type: text/plain, Size: 927 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The upstream changes were:
6047967 ksoftirqd: Use new cond_resched_rcu_qs() function
28423ad ksoftirqd: Enable IRQs and call cond_resched() before poking RCU

Reported-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/softirq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 0fd93311536f..f4c2e679a7d7 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -599,8 +599,8 @@ static void run_ksoftirqd(unsigned int cpu)
 
 	do_current_softirqs();
 	current->softirq_nestcnt--;
-	rcu_note_context_switch();
 	local_irq_enable();
+	cond_resched_rcu_qs();
 }
 
 /*
-- 
2.7.0


* [PATCH RT 05/22] net/core: protect users of napi_alloc_cache against reentrance
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 04/22] kernel/softirq: use cond_resched_rcu_qs() on -RT as well (run_ksoftirqd()) Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 06/22] preempt-lazy: Add the lazy-preemption check to preempt_schedule() Steven Rostedt
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0005-net-core-protect-users-of-napi_alloc_cache-against-r.patch --]
[-- Type: text/plain, Size: 1657 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

On -RT the code running in BH can not be moved to another CPU, so
CPU-local variables remain local. However, the code can be preempted,
and another task may then enter BH on the same CPU and access the same
napi_alloc_cache variable.
This patch ensures that each user of napi_alloc_cache takes a local lock.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/skbuff.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 36c138197f37..df293d45e0cd 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -358,6 +358,7 @@ struct netdev_alloc_cache {
 static DEFINE_PER_CPU(struct netdev_alloc_cache, netdev_alloc_cache);
 static DEFINE_PER_CPU(struct netdev_alloc_cache, napi_alloc_cache);
 static DEFINE_LOCAL_IRQ_LOCK(netdev_alloc_lock);
+static DEFINE_LOCAL_IRQ_LOCK(napi_alloc_cache_lock);
 
 static struct page *__page_frag_refill(struct netdev_alloc_cache *nc,
 				       gfp_t gfp_mask)
@@ -456,7 +457,12 @@ EXPORT_SYMBOL(netdev_alloc_frag);
 
 static void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
 {
-	return __alloc_page_frag(&napi_alloc_cache, fragsz, gfp_mask);
+	void *data;
+
+	local_lock(napi_alloc_cache_lock);
+	data = __alloc_page_frag(&napi_alloc_cache, fragsz, gfp_mask);
+	local_unlock(napi_alloc_cache_lock);
+	return data;
 }
 
 void *napi_alloc_frag(unsigned int fragsz)
-- 
2.7.0




* [PATCH RT 06/22] preempt-lazy: Add the lazy-preemption check to preempt_schedule()
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 05/22] net/core: protect users of napi_alloc_cache against reentrance Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 07/22] softirq: split timer softirqs out of ksoftirqd Steven Rostedt
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mike Galbraith

[-- Attachment #1: 0006-preempt-lazy-Add-the-lazy-preemption-check-to-preemp.patch --]
[-- Type: text/plain, Size: 2058 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Probably during the rebase onto v4.1 this check got moved into the less
commonly used preempt_schedule_notrace(). This patch ensures that both
functions use it.

Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c | 36 ++++++++++++++++++++++++++++--------
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d21091f2fd1f..5cbfb0548574 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3116,6 +3116,30 @@ static void __sched notrace preempt_schedule_common(void)
 	} while (need_resched());
 }
 
+#ifdef CONFIG_PREEMPT_LAZY
+/*
+ * If TIF_NEED_RESCHED is set then we allow being scheduled away, since
+ * it is set by an RT task. Otherwise we try to avoid being scheduled
+ * out as long as the preempt_lazy_count counter is >0.
+ */
+static int preemptible_lazy(void)
+{
+	if (test_thread_flag(TIF_NEED_RESCHED))
+		return 1;
+	if (current_thread_info()->preempt_lazy_count)
+		return 0;
+	return 1;
+}
+
+#else
+
+static int preemptible_lazy(void)
+{
+	return 1;
+}
+
+#endif
+
 #ifdef CONFIG_PREEMPT
 /*
  * this is the entry point to schedule() from in-kernel preemption
@@ -3130,6 +3154,8 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
 	 */
 	if (likely(!preemptible()))
 		return;
+	if (!preemptible_lazy())
+		return;
 
 	preempt_schedule_common();
 }
@@ -3157,15 +3183,9 @@ asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 
 	if (likely(!preemptible()))
 		return;
-
-#ifdef CONFIG_PREEMPT_LAZY
-	/*
-	 * Check for lazy preemption
-	 */
-	if (current_thread_info()->preempt_lazy_count &&
-			!test_thread_flag(TIF_NEED_RESCHED))
+	if (!preemptible_lazy())
 		return;
-#endif
+
 	do {
 		__preempt_count_add(PREEMPT_ACTIVE);
 		/*
-- 
2.7.0


* [PATCH RT 07/22] softirq: split timer softirqs out of ksoftirqd
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 06/22] preempt-lazy: Add the lazy-preemption check to preempt_schedule() Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 08/22] net: provide a way to delegate processing a softirq to ksoftirqd Steven Rostedt
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0007-softirq-split-timer-softirqs-out-of-ksoftirqd.patch --]
[-- Type: text/plain, Size: 6889 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The softirqd runs on -RT with SCHED_FIFO (prio 1) and deals mostly with
timer wakeups, which can not happen in hardirq context. The prio has
been raised from the normal SCHED_OTHER so the timer wakeups do not
happen too late.
With enough networking load it is possible that the system never goes
idle but schedules ksoftirqd and everything else with a higher priority.
One of the tasks left behind is one of RCU's threads and so we see stalls
and eventually run out of memory.
This patch moves the TIMER and HRTIMER softirqs out of the `ksoftirqd`
thread into its own `ktimersoftd` thread. The former can now run at
SCHED_OTHER (same as mainline) and the latter at SCHED_FIFO due to the
wakeups.

From a networking point of view: the NAPI callback runs after the
network interrupt thread completes. If its run time takes too long, the
NAPI code itself schedules the `ksoftirqd`. There, in the thread, it can
run at SCHED_OTHER priority and it won't defer RCU anymore.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/softirq.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 74 insertions(+), 11 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index f4c2e679a7d7..aff764fad236 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -58,6 +58,10 @@ EXPORT_SYMBOL(irq_stat);
 static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
 
 DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
+#ifdef CONFIG_PREEMPT_RT_FULL
+#define TIMER_SOFTIRQS	((1 << TIMER_SOFTIRQ) | (1 << HRTIMER_SOFTIRQ))
+DEFINE_PER_CPU(struct task_struct *, ktimer_softirqd);
+#endif
 
 const char * const softirq_to_name[NR_SOFTIRQS] = {
 	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "BLOCK_IOPOLL",
@@ -171,6 +175,17 @@ static void wakeup_softirqd(void)
 		wake_up_process(tsk);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static void wakeup_timer_softirqd(void)
+{
+	/* Interrupts are disabled: no need to stop preemption */
+	struct task_struct *tsk = __this_cpu_read(ktimer_softirqd);
+
+	if (tsk && tsk->state != TASK_RUNNING)
+		wake_up_process(tsk);
+}
+#endif
+
 static void handle_softirq(unsigned int vec_nr)
 {
 	struct softirq_action *h = softirq_vec + vec_nr;
@@ -473,7 +488,6 @@ void __raise_softirq_irqoff(unsigned int nr)
 static inline void local_bh_disable_nort(void) { local_bh_disable(); }
 static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
 static void ksoftirqd_set_sched_params(unsigned int cpu) { }
-static void ksoftirqd_clr_sched_params(unsigned int cpu, bool online) { }
 
 #else /* !PREEMPT_RT_FULL */
 
@@ -618,8 +632,12 @@ void thread_do_softirq(void)
 
 static void do_raise_softirq_irqoff(unsigned int nr)
 {
+	unsigned int mask;
+
+	mask = 1UL << nr;
+
 	trace_softirq_raise(nr);
-	or_softirq_pending(1UL << nr);
+	or_softirq_pending(mask);
 
 	/*
 	 * If we are not in a hard interrupt and inside a bh disabled
@@ -628,16 +646,30 @@ static void do_raise_softirq_irqoff(unsigned int nr)
 	 * delegate it to ksoftirqd.
 	 */
 	if (!in_irq() && current->softirq_nestcnt)
-		current->softirqs_raised |= (1U << nr);
-	else if (__this_cpu_read(ksoftirqd))
-		__this_cpu_read(ksoftirqd)->softirqs_raised |= (1U << nr);
+		current->softirqs_raised |= mask;
+	else if (!__this_cpu_read(ksoftirqd) || !__this_cpu_read(ktimer_softirqd))
+		return;
+
+	if (mask & TIMER_SOFTIRQS)
+		__this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
+	else
+		__this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
+}
+
+static void wakeup_proper_softirq(unsigned int nr)
+{
+	if ((1UL << nr) & TIMER_SOFTIRQS)
+		wakeup_timer_softirqd();
+	else
+		wakeup_softirqd();
 }
 
+
 void __raise_softirq_irqoff(unsigned int nr)
 {
 	do_raise_softirq_irqoff(nr);
 	if (!in_irq() && !current->softirq_nestcnt)
-		wakeup_softirqd();
+		wakeup_proper_softirq(nr);
 }
 
 /*
@@ -663,7 +695,7 @@ void raise_softirq_irqoff(unsigned int nr)
 	 * raise a WARN() if the condition is met.
 	 */
 	if (!current->softirq_nestcnt)
-		wakeup_softirqd();
+		wakeup_proper_softirq(nr);
 }
 
 static inline int ksoftirqd_softirq_pending(void)
@@ -676,22 +708,37 @@ static inline void _local_bh_enable_nort(void) { }
 
 static inline void ksoftirqd_set_sched_params(unsigned int cpu)
 {
+	/* Take over all but timer pending softirqs when starting */
+	local_irq_disable();
+	current->softirqs_raised = local_softirq_pending() & ~TIMER_SOFTIRQS;
+	local_irq_enable();
+}
+
+static inline void ktimer_softirqd_set_sched_params(unsigned int cpu)
+{
 	struct sched_param param = { .sched_priority = 1 };
 
 	sched_setscheduler(current, SCHED_FIFO, &param);
-	/* Take over all pending softirqs when starting */
+
+	/* Take over timer pending softirqs when starting */
 	local_irq_disable();
-	current->softirqs_raised = local_softirq_pending();
+	current->softirqs_raised = local_softirq_pending() & TIMER_SOFTIRQS;
 	local_irq_enable();
 }
 
-static inline void ksoftirqd_clr_sched_params(unsigned int cpu, bool online)
+static inline void ktimer_softirqd_clr_sched_params(unsigned int cpu,
+						    bool online)
 {
 	struct sched_param param = { .sched_priority = 0 };
 
 	sched_setscheduler(current, SCHED_NORMAL, &param);
 }
 
+static int ktimer_softirqd_should_run(unsigned int cpu)
+{
+	return current->softirqs_raised;
+}
+
 #endif /* PREEMPT_RT_FULL */
 /*
  * Enter an interrupt context.
@@ -741,6 +788,9 @@ static inline void invoke_softirq(void)
 	if (__this_cpu_read(ksoftirqd) &&
 			__this_cpu_read(ksoftirqd)->softirqs_raised)
 		wakeup_softirqd();
+	if (__this_cpu_read(ktimer_softirqd) &&
+			__this_cpu_read(ktimer_softirqd)->softirqs_raised)
+		wakeup_timer_softirqd();
 	local_irq_restore(flags);
 #endif
 }
@@ -1173,17 +1223,30 @@ static struct notifier_block cpu_nfb = {
 static struct smp_hotplug_thread softirq_threads = {
 	.store			= &ksoftirqd,
 	.setup			= ksoftirqd_set_sched_params,
-	.cleanup		= ksoftirqd_clr_sched_params,
 	.thread_should_run	= ksoftirqd_should_run,
 	.thread_fn		= run_ksoftirqd,
 	.thread_comm		= "ksoftirqd/%u",
 };
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static struct smp_hotplug_thread softirq_timer_threads = {
+	.store			= &ktimer_softirqd,
+	.setup			= ktimer_softirqd_set_sched_params,
+	.cleanup		= ktimer_softirqd_clr_sched_params,
+	.thread_should_run	= ktimer_softirqd_should_run,
+	.thread_fn		= run_ksoftirqd,
+	.thread_comm		= "ktimersoftd/%u",
+};
+#endif
+
 static __init int spawn_ksoftirqd(void)
 {
 	register_cpu_notifier(&cpu_nfb);
 
 	BUG_ON(smpboot_register_percpu_thread(&softirq_threads));
+#ifdef CONFIG_PREEMPT_RT_FULL
+	BUG_ON(smpboot_register_percpu_thread(&softirq_timer_threads));
+#endif
 
 	return 0;
 }
-- 
2.7.0




* [PATCH RT 08/22] net: provide a way to delegate processing a softirq to ksoftirqd
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (6 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 07/22] softirq: split timer softirqs out of ksoftirqd Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 09/22] latencyhist: disable jump-labels Steven Rostedt
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0008-net-provide-a-way-to-delegate-processing-a-softirq-t.patch --]
[-- Type: text/plain, Size: 2818 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

If NET_RX uses up all of its budget, it moves the following NAPI
invocations into `ksoftirqd`. On -RT it does not do so; instead it
raises the NET_RX softirq in its current context again.

In order to get closer to mainline's behaviour, this patch provides
__raise_softirq_irqoff_ksoft(), which raises the softirq in ksoftirqd.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/interrupt.h |  8 ++++++++
 kernel/softirq.c          | 21 +++++++++++++++++++++
 net/core/dev.c            |  2 +-
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index fe254555cf95..d11fd0a440ff 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -463,6 +463,14 @@ extern void thread_do_softirq(void);
 extern void open_softirq(int nr, void (*action)(struct softirq_action *));
 extern void softirq_init(void);
 extern void __raise_softirq_irqoff(unsigned int nr);
+#ifdef CONFIG_PREEMPT_RT_FULL
+extern void __raise_softirq_irqoff_ksoft(unsigned int nr);
+#else
+static inline void __raise_softirq_irqoff_ksoft(unsigned int nr)
+{
+	__raise_softirq_irqoff(nr);
+}
+#endif
 
 extern void raise_softirq_irqoff(unsigned int nr);
 extern void raise_softirq(unsigned int nr);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index aff764fad236..d1e999e74d23 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -673,6 +673,27 @@ void __raise_softirq_irqoff(unsigned int nr)
 }
 
 /*
+ * Same as __raise_softirq_irqoff() but will process them in ksoftirqd
+ */
+void __raise_softirq_irqoff_ksoft(unsigned int nr)
+{
+	unsigned int mask;
+
+	if (WARN_ON_ONCE(!__this_cpu_read(ksoftirqd) ||
+			 !__this_cpu_read(ktimer_softirqd)))
+		return;
+	mask = 1UL << nr;
+
+	trace_softirq_raise(nr);
+	or_softirq_pending(mask);
+	if (mask & TIMER_SOFTIRQS)
+		__this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
+	else
+		__this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
+	wakeup_proper_softirq(nr);
+}
+
+/*
  * This function must run with irqs disabled!
  */
 void raise_softirq_irqoff(unsigned int nr)
diff --git a/net/core/dev.c b/net/core/dev.c
index 7d8ad99de55f..90c4c45b206c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4790,7 +4790,7 @@ static void net_rx_action(struct softirq_action *h)
 	list_splice_tail(&repoll, &list);
 	list_splice(&list, &sd->poll_list);
 	if (!list_empty(&sd->poll_list))
-		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
+		__raise_softirq_irqoff_ksoft(NET_RX_SOFTIRQ);
 
 	net_rps_action_and_irq_enable(sd);
 }
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 09/22] latencyhist: disable jump-labels
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (7 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 08/22] net: provide a way to delegate processing a softirq to ksoftirqd Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 10/22] arm64: replace read_lock to rcu lock in call_step_hook Steven Rostedt
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Christoph Mathys, stable-rt

[-- Attachment #1: 0009-latencyhist-disable-jump-labels.patch --]
[-- Type: text/plain, Size: 2911 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

At least on x86 we die a recursive death:

|CPU: 3 PID: 585 Comm: bash Not tainted 4.4.1-rt4+ #198
|Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014
|task: ffff88007ab4cd00 ti: ffff88007ab94000 task.ti: ffff88007ab94000
|RIP: 0010:[<ffffffff81684870>]  [<ffffffff81684870>] int3+0x0/0x10
|RSP: 0018:ffff88013c107fd8  EFLAGS: 00010082
|RAX: ffff88007ab4cd00 RBX: ffffffff8100ceab RCX: 0000000080202001
|RDX: 0000000000000000 RSI: ffffffff8100ceab RDI: ffffffff810c78b2
|RBP: ffff88007ab97c10 R08: ffffffffff57b000 R09: 0000000000000000
|R10: ffff88013bb64790 R11: ffff88007ab4cd68 R12: ffffffff8100ceab
|R13: ffffffff810c78b2 R14: ffffffff810f8158 R15: ffffffff810f9120
|FS:  0000000000000000(0000) GS:ffff88013c100000(0063) knlGS:00000000f74e3940
|CS:  0010 DS: 002b ES: 002b CR0: 000000008005003b
|CR2: 0000000008cf6008 CR3: 000000013b169000 CR4: 00000000000006e0
|Call Trace:
| <#DB>
| [<ffffffff810f8158>] ? trace_preempt_off+0x18/0x170
| <<EOE>>
| [<ffffffff81077745>] preempt_count_add+0xa5/0xc0
| [<ffffffff810c78b2>] on_each_cpu+0x22/0x90
| [<ffffffff8100ceab>] text_poke_bp+0x5b/0xc0
| [<ffffffff8100a29c>] arch_jump_label_transform+0x8c/0xf0
| [<ffffffff8111c77c>] __jump_label_update+0x6c/0x80
| [<ffffffff8111c83a>] jump_label_update+0xaa/0xc0
| [<ffffffff8111ca54>] static_key_slow_inc+0x94/0xa0
| [<ffffffff810e0d8d>] tracepoint_probe_register_prio+0x26d/0x2c0
| [<ffffffff810e0df3>] tracepoint_probe_register+0x13/0x20
| [<ffffffff810fca78>] trace_event_reg+0x98/0xd0
| [<ffffffff810fcc8b>] __ftrace_event_enable_disable+0x6b/0x180
| [<ffffffff810fd5b8>] event_enable_write+0x78/0xc0
| [<ffffffff8117a768>] __vfs_write+0x28/0xe0
| [<ffffffff8117b025>] vfs_write+0xa5/0x180
| [<ffffffff8117bb76>] SyS_write+0x46/0xa0
| [<ffffffff81002c91>] do_fast_syscall_32+0xa1/0x1d0
| [<ffffffff81684d57>] sysenter_flags_fixed+0xd/0x17

during
 echo 1 > /sys/kernel/debug/tracing/events/hist/preemptirqsoff_hist/enable

Reported-by: Christoph Mathys <eraserix@gmail.com>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index cb27d367b24a..78d3ed24484a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -50,6 +50,7 @@ config KPROBES
 config JUMP_LABEL
        bool "Optimize very unlikely/likely branches"
        depends on HAVE_ARCH_JUMP_LABEL
+       depends on (!INTERRUPT_OFF_HIST && !PREEMPT_OFF_HIST && !WAKEUP_LATENCY_HIST && !MISSED_TIMER_OFFSETS_HIST)
        help
          This option enables a transparent branch optimization that
 	 makes certain almost-always-true or almost-always-false branch
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 10/22] arm64: replace read_lock to rcu lock in call_step_hook
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (8 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 09/22] latencyhist: disable jump-labels Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 11/22] kernel: migrate_disable() do fastpath in atomic & irqs-off Steven Rostedt
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt, Yang Shi

[-- Attachment #1: 0010-arm64-replace-read_lock-to-rcu-lock-in-call_step_hoo.patch --]
[-- Type: text/plain, Size: 3602 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Yang Shi <yang.shi@linaro.org>

BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 1, irqs_disabled(): 128, pid: 383, name: sh
Preemption disabled at:[<ffff800000124c18>] kgdb_cpu_enter+0x158/0x6b8

CPU: 3 PID: 383 Comm: sh Tainted: G        W       4.1.13-rt13 #2
Hardware name: Freescale Layerscape 2085a RDB Board (DT)
Call trace:
[<ffff8000000885e8>] dump_backtrace+0x0/0x128
[<ffff800000088734>] show_stack+0x24/0x30
[<ffff80000079a7c4>] dump_stack+0x80/0xa0
[<ffff8000000bd324>] ___might_sleep+0x18c/0x1a0
[<ffff8000007a20ac>] __rt_spin_lock+0x2c/0x40
[<ffff8000007a2268>] rt_read_lock+0x40/0x58
[<ffff800000085328>] single_step_handler+0x38/0xd8
[<ffff800000082368>] do_debug_exception+0x58/0xb8
Exception stack(0xffff80834a1e7c80 to 0xffff80834a1e7da0)
7c80: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7e40 ffff8083 001bfcc4 ffff8000
7ca0: f2000400 00000000 00000000 00000000 4a1e7d80 ffff8083 0049501c ffff8000
7cc0: 00005402 00000000 00aaa210 ffff8000 4a1e7ea0 ffff8083 000833f4 ffff8000
7ce0: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7ea0 ffff8083 001bfcc0 ffff8000
7d00: 4a0fc400 ffff8083 00005402 00000000 4a1e7d40 ffff8083 00490324 ffff8000
7d20: ffffff9c 00000000 92c23ba0 0000ffff 000a0000 00000000 00000000 00000000
7d40: 00000008 00000000 00080000 00000000 92c23b8b 0000ffff 92c23b8e 0000ffff
7d60: 00000038 00000000 00001cb2 00000000 00000005 00000000 92d7b498 0000ffff
7d80: 01010101 01010101 92be9000 0000ffff 00000000 00000000 00000030 00000000
[<ffff8000000833f4>] el1_dbg+0x18/0x6c

This issue is similar to commit 62c6c61 ("arm64: replace read_lock to rcu lock
in call_break_hook"), but shows up in single_step_handler().

This also solves kgdbts boot test silent hang issue on 4.4 -rt kernel.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/arm64/kernel/debug-monitors.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 70654d843d9b..0d1d675f2cce 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -184,20 +184,21 @@ static void clear_regs_spsr_ss(struct pt_regs *regs)
 
 /* EL1 Single Step Handler hooks */
 static LIST_HEAD(step_hook);
-static DEFINE_RWLOCK(step_hook_lock);
+static DEFINE_SPINLOCK(step_hook_lock);
 
 void register_step_hook(struct step_hook *hook)
 {
-	write_lock(&step_hook_lock);
-	list_add(&hook->node, &step_hook);
-	write_unlock(&step_hook_lock);
+	spin_lock(&step_hook_lock);
+	list_add_rcu(&hook->node, &step_hook);
+	spin_unlock(&step_hook_lock);
 }
 
 void unregister_step_hook(struct step_hook *hook)
 {
-	write_lock(&step_hook_lock);
-	list_del(&hook->node);
-	write_unlock(&step_hook_lock);
+	spin_lock(&step_hook_lock);
+	list_del_rcu(&hook->node);
+	spin_unlock(&step_hook_lock);
+	synchronize_rcu();
 }
 
 /*
@@ -211,15 +212,15 @@ static int call_step_hook(struct pt_regs *regs, unsigned int esr)
 	struct step_hook *hook;
 	int retval = DBG_HOOK_ERROR;
 
-	read_lock(&step_hook_lock);
+	rcu_read_lock();
 
-	list_for_each_entry(hook, &step_hook, node)	{
+	list_for_each_entry_rcu(hook, &step_hook, node)	{
 		retval = hook->fn(regs, esr);
 		if (retval == DBG_HOOK_HANDLED)
 			break;
 	}
 
-	read_unlock(&step_hook_lock);
+	rcu_read_unlock();
 
 	return retval;
 }
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 11/22] kernel: migrate_disable() do fastpath in atomic & irqs-off
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (9 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 10/22] arm64: replace read_lock to rcu lock in call_step_hook Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 12/22] kernel: softirq: unlock with irqs on Steven Rostedt
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0011-kernel-migrate_disable-do-fastpath-in-atomic-irqs-of.patch --]
[-- Type: text/plain, Size: 1104 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

With interrupts off it makes no sense to take the long path, since we can't
leave the CPU anyway. It also avoids a possible recursion with lockdep.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5cbfb0548574..56b6d73f9154 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2798,7 +2798,7 @@ void migrate_disable(void)
 {
 	struct task_struct *p = current;
 
-	if (in_atomic()) {
+	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic++;
 #endif
@@ -2832,7 +2832,7 @@ void migrate_enable(void)
 	unsigned long flags;
 	struct rq *rq;
 
-	if (in_atomic()) {
+	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic--;
 #endif
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 12/22] kernel: softirq: unlock with irqs on
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (10 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 11/22] kernel: migrate_disable() do fastpath in atomic & irqs-off Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 13/22] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks" Steven Rostedt
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0012-kernel-softirq-unlock-with-irqs-on.patch --]
[-- Type: text/plain, Size: 954 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

We unlock the lock while interrupts are off. This isn't a problem now, but it
will become one because migrate_disable() + migrate_enable() are not
symmetrical in regard to the status of interrupts.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/softirq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index d1e999e74d23..2ca63cc1469e 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -563,8 +563,10 @@ static void do_current_softirqs(void)
 			do_single_softirq(i);
 		}
 		softirq_clr_runner(i);
-		unlock_softirq(i);
 		WARN_ON(current->softirq_nestcnt != 1);
+		local_irq_enable();
+		unlock_softirq(i);
+		local_irq_disable();
 	}
 }
 
-- 
2.7.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 13/22] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks"
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (11 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 12/22] kernel: softirq: unlock with irqs on Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 14/22] tick/broadcast: Make broadcast hrtimer irqsafe Steven Rostedt
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0013-kernel-stop_machine-partly-revert-stop_machine-Use-r.patch --]
[-- Type: text/plain, Size: 5468 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

With completions now using swait, and therefore raw locks, we don't need this
anymore. Further, bisect blames this patch for:

|BUG: unable to handle kernel NULL pointer dereference at           (null)
|IP: [<ffffffff81082123>] sched_cpu_active+0x53/0x70
|PGD 0
|Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
|Dumping ftrace buffer:
|   (ftrace buffer empty)
|Modules linked in:
|CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.4.1+ #330
|Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014
|task: ffff88013ae64b00 ti: ffff88013ae74000 task.ti: ffff88013ae74000
|RIP: 0010:[<ffffffff81082123>]  [<ffffffff81082123>] sched_cpu_active+0x53/0x70
|RSP: 0000:ffff88013ae77eb8  EFLAGS: 00010082
|RAX: 0000000000000001 RBX: ffffffff81c2cf20 RCX: 0000001050fb52fb
|RDX: 0000001050fb52fb RSI: 000000105117ca1e RDI: 00000000001c7723
|RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
|R10: 0000000000000000 R11: 0000000000000001 R12: 00000000ffffffff
|R13: ffffffff81c2cee0 R14: 0000000000000000 R15: 0000000000000001
|FS:  0000000000000000(0000) GS:ffff88013b200000(0000) knlGS:0000000000000000
|CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
|CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0
|Stack:
| ffffffff810c446d ffff88013ae77f00 ffffffff8107d8dd 000000000000000a
| 0000000000000001 0000000000000000 0000000000000000 0000000000000000
| 0000000000000000 ffff88013ae77f10 ffffffff8107d90e ffff88013ae77f20
|Call Trace:
| [<ffffffff810c446d>] ? debug_lockdep_rcu_enabled+0x1d/0x20
| [<ffffffff8107d8dd>] ? notifier_call_chain+0x5d/0x80
| [<ffffffff8107d90e>] ? __raw_notifier_call_chain+0xe/0x10
| [<ffffffff810598a3>] ? cpu_notify+0x23/0x40
| [<ffffffff8105a7b8>] ? notify_cpu_starting+0x28/0x30

during hotplug. The raw spinlocks, however, need to remain.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/stop_machine.c | 40 ++++++++--------------------------------
 1 file changed, 8 insertions(+), 32 deletions(-)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 1af29ad20970..d3ea2452e291 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -30,7 +30,7 @@ struct cpu_stop_done {
 	atomic_t		nr_todo;	/* nr left to execute */
 	bool			executed;	/* actually executed? */
 	int			ret;		/* collected return value */
-	struct task_struct	*waiter;	/* woken when nr_todo reaches 0 */
+	struct completion	completion;	/* fired if nr_todo reaches 0 */
 };
 
 /* the actual stopper, one per every possible cpu, enabled on online cpus */
@@ -56,7 +56,7 @@ static void cpu_stop_init_done(struct cpu_stop_done *done, unsigned int nr_todo)
 {
 	memset(done, 0, sizeof(*done));
 	atomic_set(&done->nr_todo, nr_todo);
-	done->waiter = current;
+	init_completion(&done->completion);
 }
 
 /* signal completion unless @done is NULL */
@@ -65,10 +65,8 @@ static void cpu_stop_signal_done(struct cpu_stop_done *done, bool executed)
 	if (done) {
 		if (executed)
 			done->executed = true;
-		if (atomic_dec_and_test(&done->nr_todo)) {
-			wake_up_process(done->waiter);
-			done->waiter = NULL;
-		}
+		if (atomic_dec_and_test(&done->nr_todo))
+			complete(&done->completion);
 	}
 }
 
@@ -91,22 +89,6 @@ static void cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
 	raw_spin_unlock_irqrestore(&stopper->lock, flags);
 }
 
-static void wait_for_stop_done(struct cpu_stop_done *done)
-{
-	set_current_state(TASK_UNINTERRUPTIBLE);
-	while (atomic_read(&done->nr_todo)) {
-		schedule();
-		set_current_state(TASK_UNINTERRUPTIBLE);
-	}
-	/*
-	 * We need to wait until cpu_stop_signal_done() has cleared
-	 * done->waiter.
-	 */
-	while (done->waiter)
-		cpu_relax();
-	set_current_state(TASK_RUNNING);
-}
-
 /**
  * stop_one_cpu - stop a cpu
  * @cpu: cpu to stop
@@ -138,7 +120,7 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
 
 	cpu_stop_init_done(&done, 1);
 	cpu_stop_queue_work(cpu, &work);
-	wait_for_stop_done(&done);
+	wait_for_completion(&done.completion);
 	return done.executed ? done.ret : -ENOENT;
 }
 
@@ -315,7 +297,7 @@ int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *
 	lg_local_unlock(&stop_cpus_lock);
 	preempt_enable_nort();
 
-	wait_for_stop_done(&done);
+	wait_for_completion(&done.completion);
 
 	return done.executed ? done.ret : -ENOENT;
 }
@@ -380,7 +362,7 @@ static int __stop_cpus(const struct cpumask *cpumask,
 
 	cpu_stop_init_done(&done, cpumask_weight(cpumask));
 	queue_stop_cpus_work(cpumask, fn, arg, &done, false);
-	wait_for_stop_done(&done);
+	wait_for_completion(&done.completion);
 	return done.executed ? done.ret : -ENOENT;
 }
 
@@ -511,13 +493,7 @@ repeat:
 			  kallsyms_lookup((unsigned long)fn, NULL, NULL, NULL,
 					  ksym_buf), arg);
 
-		/*
-		 * Make sure that the wakeup and setting done->waiter
-		 * to NULL is atomic.
-		 */
-		local_irq_disable();
 		cpu_stop_signal_done(done, true);
-		local_irq_enable();
 		goto repeat;
 	}
 }
@@ -676,7 +652,7 @@ int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
 	ret = multi_cpu_stop(&msdata);
 
 	/* Busy wait for completion. */
-	while (atomic_read(&done.nr_todo))
+	while (!completion_done(&done.completion))
 		cpu_relax();
 
 	mutex_unlock(&stop_cpus_mutex);
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 14/22] tick/broadcast: Make broadcast hrtimer irqsafe
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (12 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 13/22] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks" Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 15/22] sched,rt: __always_inline preemptible_lazy() Steven Rostedt
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0014-tick-broadcast-Make-broadcast-hrtimer-irqsafe.patch --]
[-- Type: text/plain, Size: 2782 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Otherwise we end up with the following:

|=================================
|[ INFO: inconsistent lock state ]
|4.4.2-rt7+ #5 Not tainted
|---------------------------------
|inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
|ktimersoftd/0/4 [HC0[0]:SC0[0]:HE1:SE1] takes:
| (tick_broadcast_lock){?.....}, at: [<ffffffc000150db4>] tick_handle_oneshot_broadcast+0x58/0x27c
|{IN-HARDIRQ-W} state was registered at:
|  [<ffffffc000118198>] mark_lock+0x19c/0x6a0
|  [<ffffffc000119728>] __lock_acquire+0xb1c/0x2100
|  [<ffffffc00011b560>] lock_acquire+0xf8/0x230
|  [<ffffffc00061bf08>] _raw_spin_lock_irqsave+0x50/0x68
|  [<ffffffc000152188>] tick_broadcast_switch_to_oneshot+0x20/0x60
|  [<ffffffc0001529f4>] tick_switch_to_oneshot+0x64/0xd8
|  [<ffffffc000152b00>] tick_init_highres+0x1c/0x24
|  [<ffffffc000141e58>] hrtimer_run_queues+0x78/0x100
|  [<ffffffc00013f804>] update_process_times+0x38/0x74
|  [<ffffffc00014fc5c>] tick_periodic+0x60/0x140
|  [<ffffffc00014fd68>] tick_handle_periodic+0x2c/0x94
|  [<ffffffc00052b878>] arch_timer_handler_phys+0x3c/0x48
|  [<ffffffc00012d078>] handle_percpu_devid_irq+0x100/0x390
|  [<ffffffc000127f34>] generic_handle_irq+0x34/0x4c
|  [<ffffffc000128300>] __handle_domain_irq+0x90/0xf8
|  [<ffffffc000082554>] gic_handle_irq+0x5c/0xa4
|  [<ffffffc0000855ac>] el1_irq+0x6c/0xec
|  [<ffffffc000112bec>] default_idle_call+0x2c/0x44
|  [<ffffffc000113058>] cpu_startup_entry+0x3cc/0x410
|  [<ffffffc0006169f8>] rest_init+0x158/0x168
|  [<ffffffc000888954>] start_kernel+0x3a0/0x3b4
|  [<0000000080621000>] 0x80621000
|irq event stamp: 18723
|hardirqs last  enabled at (18723): [<ffffffc00061c188>] _raw_spin_unlock_irq+0x38/0x80
|hardirqs last disabled at (18722): [<ffffffc000140a4c>] run_hrtimer_softirq+0x2c/0x2f4
|softirqs last  enabled at (0): [<ffffffc0000c4744>] copy_process.isra.50+0x300/0x16d4
|softirqs last disabled at (0): [<          (null)>]           (null)

Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/time/tick-broadcast-hrtimer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/time/tick-broadcast-hrtimer.c b/kernel/time/tick-broadcast-hrtimer.c
index 6aac4beedbbe..943c03395e46 100644
--- a/kernel/time/tick-broadcast-hrtimer.c
+++ b/kernel/time/tick-broadcast-hrtimer.c
@@ -109,5 +109,6 @@ void tick_setup_hrtimer_broadcast(void)
 {
 	hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
 	bctimer.function = bc_handler;
+	bctimer.irqsafe = true;
 	clockevents_register_device(&ce_broadcast_hrtimer);
 }
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 15/22] sched,rt: __always_inline preemptible_lazy()
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (13 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 14/22] tick/broadcast: Make broadcast hrtimer irqsafe Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 16/22] drm,radeon,i915: Use preempt_disable/enable_rt() where recommended Steven Rostedt
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mike Galbraith, Hillf Danton

[-- Attachment #1: 0015-sched-rt-__always_inline-preemptible_lazy.patch --]
[-- Type: text/plain, Size: 1279 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <umgwanakikbuti@gmail.com>

homer: # nm kernel/sched/core.o|grep preemptible_lazy
00000000000000b5 t preemptible_lazy

echo wakeup_rt > current_tracer ==> Welcome to infinity.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Link: http://lkml.kernel.org/r/1456067490.3771.2.camel@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 56b6d73f9154..62f1a6e26a25 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3122,7 +3122,7 @@ static void __sched notrace preempt_schedule_common(void)
  * set by a RT task. Oterwise we try to avoid beeing scheduled out as long as
  * preempt_lazy_count counter >0.
  */
-static int preemptible_lazy(void)
+static __always_inline int preemptible_lazy(void)
 {
 	if (test_thread_flag(TIF_NEED_RESCHED))
 		return 1;
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 16/22] drm,radeon,i915: Use preempt_disable/enable_rt() where recommended
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (14 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 15/22] sched,rt: __always_inline preemptible_lazy() Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 17/22] drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end() Steven Rostedt
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mike Galbraith

[-- Attachment #1: 0016-drm-radeon-i915-Use-preempt_disable-enable_rt-where-.patch --]
[-- Type: text/plain, Size: 2264 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <umgwanakikbuti@gmail.com>

DRM folks identified the spots, so use them.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 drivers/gpu/drm/i915/i915_irq.c         | 2 ++
 drivers/gpu/drm/radeon/radeon_display.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index b0df8d10482a..8d34df020842 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -676,6 +676,7 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
 	/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_disable_rt();
 
 	/* Get optional system timestamp before query. */
 	if (stime)
@@ -727,6 +728,7 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe,
 		*etime = ktime_get();
 
 	/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_enable_rt();
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index 6743174acdbc..8ad198bbc24d 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -1798,6 +1798,7 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, int crtc, unsigned int fl
 	struct radeon_device *rdev = dev->dev_private;
 
 	/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_disable_rt();
 
 	/* Get optional system timestamp before query. */
 	if (stime)
@@ -1890,6 +1891,7 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, int crtc, unsigned int fl
 		*etime = ktime_get();
 
 	/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_enable_rt();
 
 	/* Decode into vertical and horizontal scanout position. */
 	*vpos = position & 0x1fff;
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH RT 17/22] drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end()
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (15 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 16/22] drm,radeon,i915: Use preempt_disable/enable_rt() where recommended Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 18/22] trace: Use rcuidle version for preemptoff_hist trace point Steven Rostedt
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Mike Galbraith

[-- Attachment #1: 0017-drm-i915-Use-local_lock-unlock_irq-in-intel_pipe_upd.patch --]
[-- Type: text/plain, Size: 5633 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <umgwanakikbuti@gmail.com>

[    8.014039] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:918
[    8.014041] in_atomic(): 0, irqs_disabled(): 1, pid: 78, name: kworker/u4:4
[    8.014045] CPU: 1 PID: 78 Comm: kworker/u4:4 Not tainted 4.1.7-rt7 #5
[    8.014055] Workqueue: events_unbound async_run_entry_fn
[    8.014059]  0000000000000000 ffff880037153748 ffffffff815f32c9 0000000000000002
[    8.014063]  ffff88013a50e380 ffff880037153768 ffffffff815ef075 ffff8800372c06c8
[    8.014066]  ffff8800372c06c8 ffff880037153778 ffffffff8107c0b3 ffff880037153798
[    8.014067] Call Trace:
[    8.014074]  [<ffffffff815f32c9>] dump_stack+0x4a/0x61
[    8.014078]  [<ffffffff815ef075>] ___might_sleep.part.93+0xe9/0xee
[    8.014082]  [<ffffffff8107c0b3>] ___might_sleep+0x53/0x80
[    8.014086]  [<ffffffff815f9064>] rt_spin_lock+0x24/0x50
[    8.014090]  [<ffffffff8109368b>] prepare_to_wait+0x2b/0xa0
[    8.014152]  [<ffffffffa016c04c>] intel_pipe_update_start+0x17c/0x300 [i915]
[    8.014156]  [<ffffffff81093b40>] ? prepare_to_wait_event+0x120/0x120
[    8.014201]  [<ffffffffa0158f36>] intel_begin_crtc_commit+0x166/0x1e0 [i915]
[    8.014215]  [<ffffffffa00c806d>] drm_atomic_helper_commit_planes+0x5d/0x1a0 [drm_kms_helper]
[    8.014260]  [<ffffffffa0171e9b>] intel_atomic_commit+0xab/0xf0 [i915]
[    8.014288]  [<ffffffffa00654c7>] drm_atomic_commit+0x37/0x60 [drm]
[    8.014298]  [<ffffffffa00c6fcd>] drm_atomic_helper_plane_set_property+0x8d/0xd0 [drm_kms_helper]
[    8.014301]  [<ffffffff815f77d9>] ? __ww_mutex_lock+0x39/0x40
[    8.014319]  [<ffffffffa0053b3d>] drm_mode_plane_set_obj_prop+0x2d/0x90 [drm]
[    8.014328]  [<ffffffffa00c8edb>] restore_fbdev_mode+0x6b/0xf0 [drm_kms_helper]
[    8.014337]  [<ffffffffa00cae49>] drm_fb_helper_restore_fbdev_mode_unlocked+0x29/0x80 [drm_kms_helper]
[    8.014346]  [<ffffffffa00caec2>] drm_fb_helper_set_par+0x22/0x50 [drm_kms_helper]
[    8.014390]  [<ffffffffa016dfba>] intel_fbdev_set_par+0x1a/0x60 [i915]
[    8.014394]  [<ffffffff81327dc4>] fbcon_init+0x4f4/0x580
[    8.014398]  [<ffffffff8139ef4c>] visual_init+0xbc/0x120
[    8.014401]  [<ffffffff813a1623>] do_bind_con_driver+0x163/0x330
[    8.014405]  [<ffffffff813a1b2c>] do_take_over_console+0x11c/0x1c0
[    8.014408]  [<ffffffff813236e3>] do_fbcon_takeover+0x63/0xd0
[    8.014410]  [<ffffffff81328965>] fbcon_event_notify+0x785/0x8d0
[    8.014413]  [<ffffffff8107c12d>] ? __might_sleep+0x4d/0x90
[    8.014416]  [<ffffffff810775fe>] notifier_call_chain+0x4e/0x80
[    8.014419]  [<ffffffff810779cd>] __blocking_notifier_call_chain+0x4d/0x70
[    8.014422]  [<ffffffff81077a06>] blocking_notifier_call_chain+0x16/0x20
[    8.014425]  [<ffffffff8132b48b>] fb_notifier_call_chain+0x1b/0x20
[    8.014428]  [<ffffffff8132d8fa>] register_framebuffer+0x21a/0x350
[    8.014439]  [<ffffffffa00cb164>] drm_fb_helper_initial_config+0x274/0x3e0 [drm_kms_helper]
[    8.014483]  [<ffffffffa016f1cb>] intel_fbdev_initial_config+0x1b/0x20 [i915]
[    8.014486]  [<ffffffff8107912c>] async_run_entry_fn+0x4c/0x160
[    8.014490]  [<ffffffff81070ffa>] process_one_work+0x14a/0x470
[    8.014493]  [<ffffffff81071489>] worker_thread+0x169/0x4c0
[    8.014496]  [<ffffffff81071320>] ? process_one_work+0x470/0x470
[    8.014499]  [<ffffffff81076606>] kthread+0xc6/0xe0
[    8.014502]  [<ffffffff81070000>] ? queue_work_on+0x80/0x110
[    8.014506]  [<ffffffff81076540>] ? kthread_worker_fn+0x1c0/0x1c0

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 drivers/gpu/drm/i915/intel_sprite.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c
index a4c0a04b5044..6da459fe20b2 100644
--- a/drivers/gpu/drm/i915/intel_sprite.c
+++ b/drivers/gpu/drm/i915/intel_sprite.c
@@ -37,6 +37,7 @@
 #include "intel_drv.h"
 #include <drm/i915_drm.h>
 #include "i915_drv.h"
+#include <linux/locallock.h>
 
 static bool
 format_is_yuv(uint32_t format)
@@ -61,6 +62,8 @@ static int usecs_to_scanlines(const struct drm_display_mode *mode, int usecs)
 	return DIV_ROUND_UP(usecs * mode->crtc_clock, 1000 * mode->crtc_htotal);
 }
 
+static DEFINE_LOCAL_IRQ_LOCK(pipe_update_lock);
+
 /**
  * intel_pipe_update_start() - start update of a set of display registers
  * @crtc: the crtc of which the registers are going to be updated
@@ -101,7 +104,7 @@ bool intel_pipe_update_start(struct intel_crtc *crtc, uint32_t *start_vbl_count)
 	if (WARN_ON(drm_crtc_vblank_get(&crtc->base)))
 		return false;
 
-	local_irq_disable();
+	local_lock_irq(pipe_update_lock);
 
 	trace_i915_pipe_update_start(crtc, min, max);
 
@@ -123,11 +126,11 @@ bool intel_pipe_update_start(struct intel_crtc *crtc, uint32_t *start_vbl_count)
 			break;
 		}
 
-		local_irq_enable();
+		local_unlock_irq(pipe_update_lock);
 
 		timeout = schedule_timeout(timeout);
 
-		local_irq_disable();
+		local_lock_irq(pipe_update_lock);
 	}
 
 	finish_wait(wq, &wait);
@@ -158,7 +161,7 @@ void intel_pipe_update_end(struct intel_crtc *crtc, u32 start_vbl_count)
 
 	trace_i915_pipe_update_end(crtc, end_vbl_count);
 
-	local_irq_enable();
+	local_unlock_irq(pipe_update_lock);
 
 	if (start_vbl_count != end_vbl_count)
 		DRM_ERROR("Atomic update failure on pipe %c (start=%u end=%u)\n",
-- 
2.7.0


* [PATCH RT 18/22] trace: Use rcuidle version for preemptoff_hist trace point
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (16 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 17/22] drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end() Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 19/22] f2fs: Mutex can't be used by down_write_nest_lock() Steven Rostedt
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Yang Shi

[-- Attachment #1: 0018-trace-Use-rcuidle-version-for-preemptoff_hist-trace-.patch --]
[-- Type: text/plain, Size: 3527 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Yang Shi <yang.shi@windriver.com>

When running -rt kernel with both PREEMPT_OFF_HIST and LOCKDEP enabled,
the below error is reported:

 [ INFO: suspicious RCU usage. ]
 4.4.1-rt6 #1 Not tainted
 include/trace/events/hist.h:31 suspicious rcu_dereference_check() usage!

 other info that might help us debug this:

 RCU used illegally from idle CPU!
 rcu_scheduler_active = 1, debug_locks = 0
 RCU used illegally from extended quiescent state!
 no locks held by swapper/0/0.

 stack backtrace:
 CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.1-rt6-WR8.0.0.0_standard #1
 Stack : 0000000000000006 0000000000000000 ffffffff81ca8c38 ffffffff81c8fc80
    ffffffff811bdd68 ffffffff81cb0000 0000000000000000 ffffffff81cb0000
    0000000000000000 0000000000000000 0000000000000004 0000000000000000
    0000000000000004 ffffffff811bdf50 0000000000000000 ffffffff82b60000
    0000000000000000 ffffffff812897ac ffffffff819f0000 000000000000000b
    ffffffff811be460 ffffffff81b7c588 ffffffff81c8fc80 0000000000000000
    0000000000000000 ffffffff81ec7f88 ffffffff81d70000 ffffffff81b70000
    ffffffff81c90000 ffffffff81c3fb00 ffffffff81c3fc28 ffffffff815e6f98
    0000000000000000 ffffffff81c8fa87 ffffffff81b70958 ffffffff811bf2c4
    0707fe32e8d60ca5 ffffffff81126d60 0000000000000000 0000000000000000
    ...
 Call Trace:
 [<ffffffff81126d60>] show_stack+0xe8/0x108
 [<ffffffff815e6f98>] dump_stack+0x88/0xb0
 [<ffffffff8124b88c>] time_hardirqs_off+0x204/0x300
 [<ffffffff811aa5dc>] trace_hardirqs_off_caller+0x24/0xe8
 [<ffffffff811a4ec4>] cpu_startup_entry+0x39c/0x508
 [<ffffffff81d7dc68>] start_kernel+0x584/0x5a0

Replace the regular trace_preemptoff_hist with the rcuidle version to avoid the error.

Signed-off-by: Yang Shi <yang.shi@windriver.com>
Cc: bigeasy@linutronix.de
Cc: rostedt@goodmis.org
Cc: linux-rt-users@vger.kernel.org
Link: http://lkml.kernel.org/r/1456262603-10075-1-git-send-email-yang.shi@windriver.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_irqsoff.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index d0e1d0e48640..0f2d3e3545e8 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -434,13 +434,13 @@ void start_critical_timings(void)
 {
 	if (preempt_trace() || irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
-	trace_preemptirqsoff_hist(TRACE_START, 1);
+	trace_preemptirqsoff_hist_rcuidle(TRACE_START, 1);
 }
 EXPORT_SYMBOL_GPL(start_critical_timings);
 
 void stop_critical_timings(void)
 {
-	trace_preemptirqsoff_hist(TRACE_STOP, 0);
+	trace_preemptirqsoff_hist_rcuidle(TRACE_STOP, 0);
 	if (preempt_trace() || irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
@@ -485,7 +485,7 @@ inline void print_irqtrace_events(struct task_struct *curr)
  */
 void trace_hardirqs_on(void)
 {
-	trace_preemptirqsoff_hist(IRQS_ON, 0);
+	trace_preemptirqsoff_hist_rcuidle(IRQS_ON, 0);
 	if (!preempt_trace() && irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
@@ -495,7 +495,7 @@ void trace_hardirqs_off(void)
 {
 	if (!preempt_trace() && irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
-	trace_preemptirqsoff_hist(IRQS_OFF, 1);
+	trace_preemptirqsoff_hist_rcuidle(IRQS_OFF, 1);
 }
 EXPORT_SYMBOL(trace_hardirqs_off);
 
-- 
2.7.0


* [PATCH RT 19/22] f2fs: Mutex can't be used by down_write_nest_lock()
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (17 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 18/22] trace: Use rcuidle version for preemptoff_hist trace point Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 20/22] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL Steven Rostedt
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Yang Shi, cm224.lee, chao2.yu,
	linaro-kernel, linux-f2fs-devel, linux-fsdevel, jaegeuk

[-- Attachment #1: 0019-f2fs-Mutex-can-t-be-used-by-down_write_nest_lock.patch --]
[-- Type: text/plain, Size: 2382 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Yang Shi <yang.shi@linaro.org>

f2fs_lock_all() calls down_write_nest_lock() to acquire a rw_sem and check
a mutex, but down_write_nest_lock() is designed for two rw_sems according to
the comment in include/linux/rwsem.h. And, other than f2fs, it is only called
in mm/mmap.c with two rwsems.

So it looks like it is used wrongly by f2fs. And it causes the below compile
warning on -rt kernels too.

In file included from fs/f2fs/xattr.c:25:0:
fs/f2fs/f2fs.h: In function 'f2fs_lock_all':
fs/f2fs/f2fs.h:962:34: warning: passing argument 2 of 'down_write_nest_lock' from
		       incompatible pointer type [-Wincompatible-pointer-types]
  f2fs_down_write(&sbi->cp_rwsem, &sbi->cp_mutex);
                                  ^

The nest annotation was anyway bogus as nested annotations for lockdep are
only required if one nests two locks of the same lock class, which is not the
case here.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
Cc: cm224.lee@samsung.com
Cc: chao2.yu@samsung.com
Cc: linaro-kernel@lists.linaro.org
Cc: linux-rt-users@vger.kernel.org
Cc: bigeasy@linutronix.de
Cc: rostedt@goodmis.org
Cc: linux-f2fs-devel@lists.sourceforge.net
Cc: linux-fsdevel@vger.kernel.org
Cc: jaegeuk@kernel.org
Link: http://lkml.kernel.org/r/1456532725-4126-1-git-send-email-yang.shi@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 fs/f2fs/f2fs.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 8de34ab6d5b1..4e80270703a4 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -22,7 +22,6 @@
 
 #ifdef CONFIG_F2FS_CHECK_FS
 #define f2fs_bug_on(sbi, condition)	BUG_ON(condition)
-#define f2fs_down_write(x, y)	down_write_nest_lock(x, y)
 #else
 #define f2fs_bug_on(sbi, condition)					\
 	do {								\
@@ -31,7 +30,6 @@
 			set_sbi_flag(sbi, SBI_NEED_FSCK);		\
 		}							\
 	} while (0)
-#define f2fs_down_write(x, y)	down_write(x)
 #endif
 
 /*
@@ -838,7 +836,7 @@ static inline void f2fs_unlock_op(struct f2fs_sb_info *sbi)
 
 static inline void f2fs_lock_all(struct f2fs_sb_info *sbi)
 {
-	f2fs_down_write(&sbi->cp_rwsem, &sbi->cp_mutex);
+	down_write(&sbi->cp_rwsem);
 }
 
 static inline void f2fs_unlock_all(struct f2fs_sb_info *sbi)
-- 
2.7.0


* [PATCH RT 20/22] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (18 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 19/22] f2fs: Mutex can't be used by down_write_nest_lock() Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 21/22] kernel: sched: Fix preempt_disable_ip recording for preempt_disable() Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 22/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Clark Williams

[-- Attachment #1: 0020-rcu-torture-Comment-out-rcu_bh-ops-on-PREEMPT_RT_FUL.patch --]
[-- Type: text/plain, Size: 1170 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Clark Williams <williams@redhat.com>

RT has dropped support for rcu_bh, so comment out the rcu_bh ops in rcutorture.

Signed-off-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcu/rcutorture.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 8dbe27611ec3..7b6170a46409 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -389,6 +389,7 @@ static struct rcu_torture_ops rcu_ops = {
 	.name		= "rcu"
 };
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 /*
  * Definitions for rcu_bh torture testing.
  */
@@ -428,6 +429,12 @@ static struct rcu_torture_ops rcu_bh_ops = {
 	.name		= "rcu_bh"
 };
 
+#else
+static struct rcu_torture_ops rcu_bh_ops = {
+	.ttype		= INVALID_RCU_FLAVOR,
+};
+#endif
+
 /*
  * Don't even think about trying any of these in real life!!!
  * The names includes "busted", and they really means it!
-- 
2.7.0


* [PATCH RT 21/22] kernel: sched: Fix preempt_disable_ip recording for preempt_disable()
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (19 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 20/22] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  2016-03-02 15:09 ` [PATCH RT 22/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0021-kernel-sched-Fix-preempt_disable_ip-recodring-for-pr.patch --]
[-- Type: text/plain, Size: 4045 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

preempt_disable() invokes preempt_count_add() which saves the caller in
current->preempt_disable_ip. It uses CALLER_ADDR1, which does not look up its
own caller but the parent of the caller. That means we get the correct caller
for something like spin_lock() unless the architecture inlines those
invocations. It is always wrong for preempt_disable() or local_bh_disable().

This patch adds the function get_lock_parent_ip(), which tries CALLER_ADDR0,
1 and 2 until it finds an address outside the locking functions. This records
the preempt_disable() caller properly for preempt_disable() itself as well as
for get_cpu_var() or local_bh_disable().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ftrace.h | 12 ++++++++++++
 include/linux/sched.h  |  2 --
 kernel/sched/core.c    | 14 ++------------
 kernel/softirq.c       |  4 ++--
 4 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 6cd8c0ee4b6f..1ec37fef6355 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -682,6 +682,18 @@ static inline void __ftrace_enabled_restore(int enabled)
 #define CALLER_ADDR5 ((unsigned long)ftrace_return_address(5))
 #define CALLER_ADDR6 ((unsigned long)ftrace_return_address(6))
 
+static inline unsigned long get_lock_parent_ip(void)
+{
+	unsigned long addr = CALLER_ADDR0;
+
+	if (!in_lock_functions(addr))
+		return addr;
+	addr = CALLER_ADDR1;
+	if (!in_lock_functions(addr))
+		return addr;
+	return CALLER_ADDR2;
+}
+
 #ifdef CONFIG_IRQSOFF_TRACER
   extern void time_hardirqs_on(unsigned long a0, unsigned long a1);
   extern void time_hardirqs_off(unsigned long a0, unsigned long a1);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4d995add9497..0f4a133f0abd 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -176,8 +176,6 @@ extern void get_iowait_load(unsigned long *nr_waiters, unsigned long *load);
 extern void calc_global_load(unsigned long ticks);
 extern void update_cpu_load_nohz(void);
 
-extern unsigned long get_parent_ip(unsigned long addr);
-
 extern void dump_cpu_task(int cpu);
 
 struct seq_file;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 62f1a6e26a25..3b5d43a884e8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2643,16 +2643,6 @@ u64 scheduler_tick_max_deferment(void)
 }
 #endif
 
-notrace unsigned long get_parent_ip(unsigned long addr)
-{
-	if (in_lock_functions(addr)) {
-		addr = CALLER_ADDR2;
-		if (in_lock_functions(addr))
-			addr = CALLER_ADDR3;
-	}
-	return addr;
-}
-
 #if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
 				defined(CONFIG_PREEMPT_TRACER))
 
@@ -2674,7 +2664,7 @@ void preempt_count_add(int val)
 				PREEMPT_MASK - 10);
 #endif
 	if (preempt_count() == val) {
-		unsigned long ip = get_parent_ip(CALLER_ADDR1);
+		unsigned long ip = get_lock_parent_ip();
 #ifdef CONFIG_DEBUG_PREEMPT
 		current->preempt_disable_ip = ip;
 #endif
@@ -2701,7 +2691,7 @@ void preempt_count_sub(int val)
 #endif
 
 	if (preempt_count() == val)
-		trace_preempt_on(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
 	__preempt_count_sub(val);
 }
 EXPORT_SYMBOL(preempt_count_sub);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 2ca63cc1469e..cb9c1d5dee10 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -287,9 +287,9 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 
 	if (preempt_count() == cnt) {
 #ifdef CONFIG_DEBUG_PREEMPT
-		current->preempt_disable_ip = get_parent_ip(CALLER_ADDR1);
+		current->preempt_disable_ip = get_lock_parent_ip();
 #endif
-		trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
+		trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
 	}
 }
 EXPORT_SYMBOL(__local_bh_disable_ip);
-- 
2.7.0


* [PATCH RT 22/22] Linux 4.1.15-rt18-rc1
  2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
                   ` (20 preceding siblings ...)
  2016-03-02 15:09 ` [PATCH RT 21/22] kernel: sched: Fix preempt_disable_ip recording for preempt_disable() Steven Rostedt
@ 2016-03-02 15:09 ` Steven Rostedt
  21 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2016-03-02 15:09 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0022-Linux-4.1.15-rt18-rc1.patch --]
[-- Type: text/plain, Size: 411 bytes --]

4.1.15-rt18-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 1e584b47c987..26374fc600bc 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt17
+-rt18-rc1
-- 
2.7.0


Thread overview: 23+ messages
2016-03-02 15:08 [PATCH RT 00/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 01/22] sched: reset task's lockless wake-queues on fork() Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 02/22] ptrace: don't open IRQs in ptrace_freeze_traced() too early Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 03/22] net: move xmit_recursion to per-task variable on -RT Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 04/22] kernel/softirq: use cond_resched_rcu_qs() on -RT as well (run_ksoftirqd()) Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 05/22] net/core: protect users of napi_alloc_cache against reentrance Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 06/22] preempt-lazy: Add the lazy-preemption check to preempt_schedule() Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 07/22] softirq: split timer softirqs out of ksoftirqd Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 08/22] net: provide a way to delegate processing a softirq to ksoftirqd Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 09/22] latencyhist: disable jump-labels Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 10/22] arm64: replace read_lock to rcu lock in call_step_hook Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 11/22] kernel: migrate_disable() do fastpath in atomic & irqs-off Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 12/22] kernel: softirq: unlock with irqs on Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 13/22] kernel/stop_machine: partly revert "stop_machine: Use raw spinlocks" Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 14/22] tick/broadcast: Make broadcast hrtimer irqsafe Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 15/22] sched,rt: __always_inline preemptible_lazy() Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 16/22] drm,radeon,i915: Use preempt_disable/enable_rt() where recommended Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 17/22] drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end() Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 18/22] trace: Use rcuidle version for preemptoff_hist trace point Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 19/22] f2fs: Mutex can't be used by down_write_nest_lock() Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 20/22] rcu/torture: Comment out rcu_bh ops on PREEMPT_RT_FULL Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 21/22] kernel: sched: Fix preempt_disable_ip recording for preempt_disable() Steven Rostedt
2016-03-02 15:09 ` [PATCH RT 22/22] Linux 4.1.15-rt18-rc1 Steven Rostedt
