linux-kernel.vger.kernel.org archive mirror
* [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates
@ 2017-04-10 19:52 Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 1/7] ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines Steven Rostedt
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: d54b6eeb553c89ed8d4c5a2ed73df374a45b9562


Paul E. McKenney (1):
      rcu: Fix dyntick-idle tracing

Steven Rostedt (VMware) (6):
      ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines
      tracing: Replace the per_cpu() with __this_cpu*() in trace_stack.c
      tracing: Add stack_tracer_disable/enable() functions
      tracing: Rename trace_active to disable_stack_tracer and inline its modification
      rcu/tracing: Add rcu_disabled to denote when rcu_irq_enter() will not work
      tracing: Make sure rcu_irq_enter() can work for trace_*_rcuidle() trace events

----
 include/linux/ftrace.h     | 38 ++++++++++++++++++++++++++++
 include/linux/rcupdate.h   |  5 ++++
 include/linux/tracepoint.h | 19 ++++++++------
 kernel/rcu/tree.c          | 62 +++++++++++++++++++++++++++-------------------
 kernel/trace/Kconfig       |  3 ++-
 kernel/trace/ftrace.c      | 42 ++++++++++++++-----------------
 kernel/trace/trace_stack.c | 35 +++++++++++++-------------
 7 files changed, 128 insertions(+), 76 deletions(-)

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [for-next][PATCH 1/7] ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 2/7] tracing: Replace the per_cpu() with __this_cpu*() in trace_stack.c Steven Rostedt
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Paul E. McKenney

[-- Attachment #1: 0001-ftrace-Add-use-of-synchronize_rcu_tasks-with-dynamic.patch --]
[-- Type: text/plain, Size: 4464 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The function tracer needs to be more careful than other subsystems when it
comes to freeing data, especially when that data is actually executable code.
When a single function is traced, a trampoline can be dynamically allocated
which is called to jump to the function trace callback. When the callback is
no longer needed, the dynamically allocated trampoline needs to be freed. This
is where the issues arise: the dynamically allocated trampoline must not be
used again. As function tracing can trace all subsystems, including
subsystems that are used to serialize aspects of freeing (namely RCU), it
must take extra care when doing the freeing.

Before synchronize_rcu_tasks() was around, there was no way for the function
tracer to know that nothing was using the dynamically allocated trampoline
when CONFIG_PREEMPT was enabled. That's because a task could be indefinitely
preempted while sitting on the trampoline. Now, with synchronize_rcu_tasks(),
the function tracer can wait until all tasks have either voluntarily scheduled
(not on the trampoline) or entered userspace (not on the trampoline). Then it
is safe to free the trampoline even with CONFIG_PREEMPT set.
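
Roughly, the shutdown path for a dynamic ops then becomes (simplified
sketch; the real code is in the ftrace_shutdown() hunk below):

	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
		/* Flush callers that run with preemption disabled */
		schedule_on_each_cpu(ftrace_sync);

		/* Wait out tasks that were preempted on the trampoline */
		if (IS_ENABLED(CONFIG_PREEMPT))
			synchronize_rcu_tasks();

		/* Nothing can be executing the trampoline now */
		arch_ftrace_trampoline_free(ops);
	}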

Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/Kconfig  |  3 ++-
 kernel/trace/ftrace.c | 42 ++++++++++++++++++------------------------
 2 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index d4a06e714645..67b463b4f169 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -134,7 +134,8 @@ config FUNCTION_TRACER
 	select KALLSYMS
 	select GENERIC_TRACER
 	select CONTEXT_SWITCH_TRACER
-        select GLOB
+	select GLOB
+	select TASKS_RCU if PREEMPT
 	help
 	  Enable the kernel to trace every kernel function. This is done
 	  by using a compiler feature to insert a small, 5-byte No-Operation
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 8efd9fe7aec0..34f63e78d661 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2808,18 +2808,28 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	 * callers are done before leaving this function.
 	 * The same goes for freeing the per_cpu data of the per_cpu
 	 * ops.
-	 *
-	 * Again, normal synchronize_sched() is not good enough.
-	 * We need to do a hard force of sched synchronization.
-	 * This is because we use preempt_disable() to do RCU, but
-	 * the function tracers can be called where RCU is not watching
-	 * (like before user_exit()). We can not rely on the RCU
-	 * infrastructure to do the synchronization, thus we must do it
-	 * ourselves.
 	 */
 	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
+		/*
+		 * We need to do a hard force of sched synchronization.
+		 * This is because we use preempt_disable() to do RCU, but
+		 * the function tracers can be called where RCU is not watching
+		 * (like before user_exit()). We can not rely on the RCU
+		 * infrastructure to do the synchronization, thus we must do it
+		 * ourselves.
+		 */
 		schedule_on_each_cpu(ftrace_sync);
 
+		/*
+		 * When the kernel is preemptive, tasks can be preempted
+		 * while on an ftrace trampoline. Just scheduling a task on
+		 * a CPU is not good enough to flush them. Calling
+		 * synchronize_rcu_tasks() will wait for those tasks to
+		 * execute and either schedule voluntarily or enter user space.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT))
+			synchronize_rcu_tasks();
+
 		arch_ftrace_trampoline_free(ops);
 
 		if (ops->flags & FTRACE_OPS_FL_PER_CPU)
@@ -5366,22 +5376,6 @@ void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 
 static void ftrace_update_trampoline(struct ftrace_ops *ops)
 {
-
-/*
- * Currently there's no safe way to free a trampoline when the kernel
- * is configured with PREEMPT. That is because a task could be preempted
- * when it jumped to the trampoline, it may be preempted for a long time
- * depending on the system load, and currently there's no way to know
- * when it will be off the trampoline. If the trampoline is freed
- * too early, when the task runs again, it will be executing on freed
- * memory and crash.
- */
-#ifdef CONFIG_PREEMPT
-	/* Currently, only non dynamic ops can have a trampoline */
-	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
-		return;
-#endif
-
 	arch_ftrace_update_trampoline(ops);
 }
 
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [for-next][PATCH 2/7] tracing: Replace the per_cpu() with __this_cpu*() in trace_stack.c
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 1/7] ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 3/7] tracing: Add stack_tracer_disable/enable() functions Steven Rostedt
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Paul E. McKenney

[-- Attachment #1: 0002-tracing-Replace-the-per_cpu-with-__this_cpu-in-trace.patch --]
[-- Type: text/plain, Size: 2751 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The trace_active per-cpu variable can be updated with the __this_cpu_*()
functions, as it is only ever modified on the CPU that the variable
belongs to.

Thanks to Paul McKenney for suggesting __this_cpu_* instead of this_cpu_*.
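
The conversion is mechanical; the pattern (sketched from the hunks below) is:

	/* Before: explicit CPU lookup on the local CPU */
	cpu = raw_smp_processor_id();
	per_cpu(trace_active, cpu)++;
	/* ... */
	per_cpu(trace_active, cpu)--;

	/* After: the variable is only ever touched from its own CPU */
	__this_cpu_inc(trace_active);
	/* ... */
	__this_cpu_dec(trace_active);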

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_stack.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 5fb1f2c87e6b..338d076a06da 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -207,13 +207,12 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 		 struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	unsigned long stack;
-	int cpu;
 
 	preempt_disable_notrace();
 
-	cpu = raw_smp_processor_id();
 	/* no atomic needed, we only modify this variable by this cpu */
-	if (per_cpu(trace_active, cpu)++ != 0)
+	__this_cpu_inc(trace_active);
+	if (__this_cpu_read(trace_active) != 1)
 		goto out;
 
 	ip += MCOUNT_INSN_SIZE;
@@ -221,7 +220,7 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 	check_stack(ip, &stack);
 
  out:
-	per_cpu(trace_active, cpu)--;
+	__this_cpu_dec(trace_active);
 	/* prevent recursion in schedule */
 	preempt_enable_notrace();
 }
@@ -253,7 +252,6 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
 	long *ptr = filp->private_data;
 	unsigned long val, flags;
 	int ret;
-	int cpu;
 
 	ret = kstrtoul_from_user(ubuf, count, 10, &val);
 	if (ret)
@@ -266,14 +264,13 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
 	 * we will cause circular lock, so we also need to increase
 	 * the percpu trace_active here.
 	 */
-	cpu = smp_processor_id();
-	per_cpu(trace_active, cpu)++;
+	__this_cpu_inc(trace_active);
 
 	arch_spin_lock(&stack_trace_max_lock);
 	*ptr = val;
 	arch_spin_unlock(&stack_trace_max_lock);
 
-	per_cpu(trace_active, cpu)--;
+	__this_cpu_dec(trace_active);
 	local_irq_restore(flags);
 
 	return count;
@@ -307,12 +304,9 @@ t_next(struct seq_file *m, void *v, loff_t *pos)
 
 static void *t_start(struct seq_file *m, loff_t *pos)
 {
-	int cpu;
-
 	local_irq_disable();
 
-	cpu = smp_processor_id();
-	per_cpu(trace_active, cpu)++;
+	__this_cpu_inc(trace_active);
 
 	arch_spin_lock(&stack_trace_max_lock);
 
@@ -324,12 +318,9 @@ static void *t_start(struct seq_file *m, loff_t *pos)
 
 static void t_stop(struct seq_file *m, void *p)
 {
-	int cpu;
-
 	arch_spin_unlock(&stack_trace_max_lock);
 
-	cpu = smp_processor_id();
-	per_cpu(trace_active, cpu)--;
+	__this_cpu_dec(trace_active);
 
 	local_irq_enable();
 }
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [for-next][PATCH 3/7] tracing: Add stack_tracer_disable/enable() functions
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 1/7] ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 2/7] tracing: Replace the per_cpu() with __this_cpu*() in trace_stack.c Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification Steven Rostedt
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0003-tracing-Add-stack_tracer_disable-enable-functions.patch --]
[-- Type: text/plain, Size: 2484 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

There are certain parts of the kernel that cannot let stack tracing
proceed (namely in RCU), because the stack tracer uses RCU, and parts of RCU
internals cannot handle having RCU read side locks taken.

Add stack_tracer_disable() and stack_tracer_enable() functions to let RCU
stop stack tracing on the current CPU when it is in those critical sections.
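
The intended usage from RCU is along these lines (illustrative only; the
RCU-side caller is not part of this patch):

	/* Preemption or interrupts must already be disabled */
	stack_tracer_disable();
	/* ... short section where stack tracing must not run ... */
	stack_tracer_enable();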

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/ftrace.h     |  6 ++++++
 kernel/trace/trace_stack.c | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index ef7123219f14..7b4e6572ab21 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -286,6 +286,12 @@ int
 stack_trace_sysctl(struct ctl_table *table, int write,
 		   void __user *buffer, size_t *lenp,
 		   loff_t *ppos);
+
+void stack_tracer_disable(void);
+void stack_tracer_enable(void);
+#else
+static inline void stack_tracer_disable(void) { }
+static inline void stack_tracer_enable(void) { }
 #endif
 
 struct ftrace_func_command {
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 338d076a06da..21e536cf66e4 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -41,6 +41,38 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
 int stack_tracer_enabled;
 static int last_stack_tracer_enabled;
 
+/**
+ * stack_tracer_disable - temporarily disable the stack tracer
+ *
+ * There's a few locations (namely in RCU) where stack tracing
+ * cannot be executed. This function is used to disable stack
+ * tracing during those critical sections.
+ *
+ * This function must be called with preemption or interrupts
+ * disabled and stack_tracer_enable() must be called shortly after
+ * while preemption or interrupts are still disabled.
+ */
+void stack_tracer_disable(void)
+{
+	/* Preemption or interupts must be disabled */
+	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
+		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
+	this_cpu_inc(trace_active);
+}
+
+/**
+ * stack_tracer_enable - re-enable the stack tracer
+ *
+ * After stack_tracer_disable() is called, stack_tracer_enable()
+ * must be called shortly afterward.
+ */
+void stack_tracer_enable(void)
+{
+	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
+		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
+	this_cpu_dec(trace_active);
+}
+
 void stack_trace_print(void)
 {
 	long i;
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
                   ` (2 preceding siblings ...)
  2017-04-10 19:52 ` [for-next][PATCH 3/7] tracing: Add stack_tracer_disable/enable() functions Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  2017-04-13  7:58   ` Valentin Rothberg
  2017-04-10 19:52 ` [for-next][PATCH 5/7] rcu: Fix dyntick-idle tracing Steven Rostedt
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Paul E. McKenney

[-- Attachment #1: 0004-tracing-Rename-trace_active-to-disable_stack_tracer-.patch --]
[-- Type: text/plain, Size: 5367 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

In order to eliminate a function call, make "trace_active" into
"disable_stack_tracer" and convert stack_tracer_disable() and friends into
static inline functions.
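
The net effect on the API is roughly this (simplified; the full inlines with
the debug check are in the ftrace.h hunk below):

	/* Was an out-of-line call exported from trace_stack.c: */
	void stack_tracer_disable(void);

	/* Is now a static inline poking a per-cpu counter directly: */
	static inline void stack_tracer_disable(void)
	{
		this_cpu_inc(disable_stack_tracer);
	}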

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/ftrace.h     | 36 +++++++++++++++++++++++++++++++--
 kernel/trace/trace_stack.c | 50 +++++++++-------------------------------------
 2 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 7b4e6572ab21..06b2990a35e4 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -287,8 +287,40 @@ stack_trace_sysctl(struct ctl_table *table, int write,
 		   void __user *buffer, size_t *lenp,
 		   loff_t *ppos);
 
-void stack_tracer_disable(void);
-void stack_tracer_enable(void);
+/* DO NOT MODIFY THIS VARIABLE DIRECTLY! */
+DECLARE_PER_CPU(int, disable_stack_tracer);
+
+/**
+ * stack_tracer_disable - temporarily disable the stack tracer
+ *
+ * There's a few locations (namely in RCU) where stack tracing
+ * cannot be executed. This function is used to disable stack
+ * tracing during those critical sections.
+ *
+ * This function must be called with preemption or interrupts
+ * disabled and stack_tracer_enable() must be called shortly after
+ * while preemption or interrupts are still disabled.
+ */
+static inline void stack_tracer_disable(void)
+{
+	/* Preemption or interupts must be disabled */
+	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
+		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
+	this_cpu_inc(disable_stack_tracer);
+}
+
+/**
+ * stack_tracer_enable - re-enable the stack tracer
+ *
+ * After stack_tracer_disable() is called, stack_tracer_enable()
+ * must be called shortly afterward.
+ */
+static inline void stack_tracer_enable(void)
+{
+	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
+		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
+	this_cpu_dec(disable_stack_tracer);
+}
 #else
 static inline void stack_tracer_disable(void) { }
 static inline void stack_tracer_enable(void) { }
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 21e536cf66e4..f2f02ff350d4 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -35,44 +35,12 @@ unsigned long stack_trace_max_size;
 arch_spinlock_t stack_trace_max_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 
-static DEFINE_PER_CPU(int, trace_active);
+DEFINE_PER_CPU(int, disable_stack_tracer);
 static DEFINE_MUTEX(stack_sysctl_mutex);
 
 int stack_tracer_enabled;
 static int last_stack_tracer_enabled;
 
-/**
- * stack_tracer_disable - temporarily disable the stack tracer
- *
- * There's a few locations (namely in RCU) where stack tracing
- * cannot be executed. This function is used to disable stack
- * tracing during those critical sections.
- *
- * This function must be called with preemption or interrupts
- * disabled and stack_tracer_enable() must be called shortly after
- * while preemption or interrupts are still disabled.
- */
-void stack_tracer_disable(void)
-{
-	/* Preemption or interupts must be disabled */
-	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
-		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
-	this_cpu_inc(trace_active);
-}
-
-/**
- * stack_tracer_enable - re-enable the stack tracer
- *
- * After stack_tracer_disable() is called, stack_tracer_enable()
- * must be called shortly afterward.
- */
-void stack_tracer_enable(void)
-{
-	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
-		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
-	this_cpu_dec(trace_active);
-}
-
 void stack_trace_print(void)
 {
 	long i;
@@ -243,8 +211,8 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 	preempt_disable_notrace();
 
 	/* no atomic needed, we only modify this variable by this cpu */
-	__this_cpu_inc(trace_active);
-	if (__this_cpu_read(trace_active) != 1)
+	__this_cpu_inc(disable_stack_tracer);
+	if (__this_cpu_read(disable_stack_tracer) != 1)
 		goto out;
 
 	ip += MCOUNT_INSN_SIZE;
@@ -252,7 +220,7 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 	check_stack(ip, &stack);
 
  out:
-	__this_cpu_dec(trace_active);
+	__this_cpu_dec(disable_stack_tracer);
 	/* prevent recursion in schedule */
 	preempt_enable_notrace();
 }
@@ -294,15 +262,15 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
 	/*
 	 * In case we trace inside arch_spin_lock() or after (NMI),
 	 * we will cause circular lock, so we also need to increase
-	 * the percpu trace_active here.
+	 * the percpu disable_stack_tracer here.
 	 */
-	__this_cpu_inc(trace_active);
+	__this_cpu_inc(disable_stack_tracer);
 
 	arch_spin_lock(&stack_trace_max_lock);
 	*ptr = val;
 	arch_spin_unlock(&stack_trace_max_lock);
 
-	__this_cpu_dec(trace_active);
+	__this_cpu_dec(disable_stack_tracer);
 	local_irq_restore(flags);
 
 	return count;
@@ -338,7 +306,7 @@ static void *t_start(struct seq_file *m, loff_t *pos)
 {
 	local_irq_disable();
 
-	__this_cpu_inc(trace_active);
+	__this_cpu_inc(disable_stack_tracer);
 
 	arch_spin_lock(&stack_trace_max_lock);
 
@@ -352,7 +320,7 @@ static void t_stop(struct seq_file *m, void *p)
 {
 	arch_spin_unlock(&stack_trace_max_lock);
 
-	__this_cpu_dec(trace_active);
+	__this_cpu_dec(disable_stack_tracer);
 
 	local_irq_enable();
 }
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [for-next][PATCH 5/7] rcu: Fix dyntick-idle tracing
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
                   ` (3 preceding siblings ...)
  2017-04-10 19:52 ` [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 6/7] rcu/tracing: Add rcu_disabled to denote when rcu_irq_enter() will not work Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 7/7] tracing: Make sure rcu_irq_enter() can work for trace_*_rcuidle() trace events Steven Rostedt
  6 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Paul E. McKenney

[-- Attachment #1: 0005-rcu-Fix-dyntick-idle-tracing.patch --]
[-- Type: text/plain, Size: 4839 bytes --]

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The tracing subsystem started using rcu_irq_enter() and rcu_irq_exit()
(with my blessing) to allow the current _rcuidle alternative tracepoint
name to be dispensed with while still maintaining good performance.
Unfortunately, this causes RCU's dyntick-idle entry code's tracing to
appear to RCU like an interrupt that occurs where RCU is not designed
to handle interrupts.

This commit fixes this problem by moving the zeroing of ->dynticks_nesting
to after the offending trace_rcu_dyntick() statement, which narrows the
window of vulnerability to a pair of adjacent statements that are now
marked with comments to that effect.
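
The remaining window is just these two adjacent statements (sketch of
rcu_eqs_enter_common() after this patch; see the hunk below):

	rdtp->dynticks_nesting = 0;	/* Breaks tracing momentarily. */
	rcu_dynticks_eqs_enter();	/* After this, tracing works again. */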

Link: http://lkml.kernel.org/r/20170405093207.404f8deb@gandalf.local.home
Link: http://lkml.kernel.org/r/20170405193928.GM1600@linux.vnet.ibm.com

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/rcu/tree.c | 48 +++++++++++++++++++++++-------------------------
 1 file changed, 23 insertions(+), 25 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 50fee7689e71..8b4d273331e4 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -57,6 +57,7 @@
 #include <linux/random.h>
 #include <linux/trace_events.h>
 #include <linux/suspend.h>
+#include <linux/ftrace.h>
 
 #include "tree.h"
 #include "rcu.h"
@@ -771,25 +772,24 @@ cpu_needs_another_gp(struct rcu_state *rsp, struct rcu_data *rdp)
 }
 
 /*
- * rcu_eqs_enter_common - current CPU is moving towards extended quiescent state
+ * rcu_eqs_enter_common - current CPU is entering an extended quiescent state
  *
- * If the new value of the ->dynticks_nesting counter now is zero,
- * we really have entered idle, and must do the appropriate accounting.
- * The caller must have disabled interrupts.
+ * Enter idle, doing appropriate accounting.  The caller must have
+ * disabled interrupts.
  */
-static void rcu_eqs_enter_common(long long oldval, bool user)
+static void rcu_eqs_enter_common(bool user)
 {
 	struct rcu_state *rsp;
 	struct rcu_data *rdp;
-	RCU_TRACE(struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);)
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
 
-	trace_rcu_dyntick(TPS("Start"), oldval, rdtp->dynticks_nesting);
+	trace_rcu_dyntick(TPS("Start"), rdtp->dynticks_nesting, 0);
 	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 	    !user && !is_idle_task(current)) {
 		struct task_struct *idle __maybe_unused =
 			idle_task(smp_processor_id());
 
-		trace_rcu_dyntick(TPS("Error on entry: not idle task"), oldval, 0);
+		trace_rcu_dyntick(TPS("Error on entry: not idle task"), rdtp->dynticks_nesting, 0);
 		rcu_ftrace_dump(DUMP_ORIG);
 		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
 			  current->pid, current->comm,
@@ -800,7 +800,10 @@ static void rcu_eqs_enter_common(long long oldval, bool user)
 		do_nocb_deferred_wakeup(rdp);
 	}
 	rcu_prepare_for_idle();
-	rcu_dynticks_eqs_enter();
+	stack_tracer_disable();
+	rdtp->dynticks_nesting = 0; /* Breaks tracing momentarily. */
+	rcu_dynticks_eqs_enter(); /* After this, tracing works again. */
+	stack_tracer_enable();
 	rcu_dynticks_task_enter();
 
 	/*
@@ -821,19 +824,15 @@ static void rcu_eqs_enter_common(long long oldval, bool user)
  */
 static void rcu_eqs_enter(bool user)
 {
-	long long oldval;
 	struct rcu_dynticks *rdtp;
 
 	rdtp = this_cpu_ptr(&rcu_dynticks);
-	oldval = rdtp->dynticks_nesting;
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     (oldval & DYNTICK_TASK_NEST_MASK) == 0);
-	if ((oldval & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE) {
-		rdtp->dynticks_nesting = 0;
-		rcu_eqs_enter_common(oldval, user);
-	} else {
+		     (rdtp->dynticks_nesting & DYNTICK_TASK_NEST_MASK) == 0);
+	if ((rdtp->dynticks_nesting & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE)
+		rcu_eqs_enter_common(user);
+	else
 		rdtp->dynticks_nesting -= DYNTICK_TASK_NEST_VALUE;
-	}
 }
 
 /**
@@ -892,19 +891,18 @@ void rcu_user_enter(void)
  */
 void rcu_irq_exit(void)
 {
-	long long oldval;
 	struct rcu_dynticks *rdtp;
 
 	RCU_LOCKDEP_WARN(!irqs_disabled(), "rcu_irq_exit() invoked with irqs enabled!!!");
 	rdtp = this_cpu_ptr(&rcu_dynticks);
-	oldval = rdtp->dynticks_nesting;
-	rdtp->dynticks_nesting--;
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     rdtp->dynticks_nesting < 0);
-	if (rdtp->dynticks_nesting)
-		trace_rcu_dyntick(TPS("--="), oldval, rdtp->dynticks_nesting);
-	else
-		rcu_eqs_enter_common(oldval, true);
+		     rdtp->dynticks_nesting < 1);
+	if (rdtp->dynticks_nesting <= 1) {
+		rcu_eqs_enter_common(true);
+	} else {
+		trace_rcu_dyntick(TPS("--="), rdtp->dynticks_nesting, rdtp->dynticks_nesting - 1);
+		rdtp->dynticks_nesting--;
+	}
 	rcu_sysidle_enter(1);
 }
 
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [for-next][PATCH 6/7] rcu/tracing: Add rcu_disabled to denote when rcu_irq_enter() will not work
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
                   ` (4 preceding siblings ...)
  2017-04-10 19:52 ` [for-next][PATCH 5/7] rcu: Fix dyntick-idle tracing Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  2017-04-10 19:52 ` [for-next][PATCH 7/7] tracing: Make sure rcu_irq_enter() can work for trace_*_rcuidle() trace events Steven Rostedt
  6 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Paul E. McKenney

[-- Attachment #1: 0006-rcu-tracing-Add-rcu_disabled-to-denote-when-rcu_irq_.patch --]
[-- Type: text/plain, Size: 4664 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Tracing uses rcu_irq_enter() as a way to make sure that RCU is watching when
it needs to use rcu_read_lock() and friends. This is because tracing can
happen as RCU is about to enter user space, or about to go idle, and RCU
does not watch for RCU read side critical sections as it makes the
transition.

There is a small location within the RCU infrastructure in which
rcu_irq_enter() itself will not work. If tracing were to occur in that
section, it would break if it tried to use rcu_irq_enter().

Originally, this happened with the stack tracer, because it calls
save_stack_trace() when it encounters stack usage greater than any stack
usage it had encountered previously. There was a case where that happened in
the RCU section where rcu_irq_enter() did not work, and lockdep complained
loudly about it. To fix it, stack tracing gained a call to disable itself, and
RCU would disable stack tracing during the critical section in which
rcu_irq_enter() was inoperable. This solution worked, but there are other
users of rcu_irq_enter() (trace events, for example), and it would be good for
RCU to provide a way to let them know that rcu_irq_enter() will not work.

Another helpful aspect of this change is that it moves the per-cpu variable
used in the RCU critical section into the same cache locality as the other
RCU per-cpu variables used in that same location.

I'm keeping the stack_tracer_disable() code, as it could still be used in the
future by places that really need to disable stack tracing. And since it's
only a static inline, it won't take up any kernel text if it is not used.
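
A user of rcu_irq_enter() can then guard itself like this (sketch; the
stack-tracer hunk below and patch 7 follow this pattern):

	/* Bail out if RCU cannot be asked to start watching here */
	if (unlikely(rcu_irq_enter_disabled()))
		return;

	rcu_irq_enter_irqson();
	/* ... code that needs RCU to be watching ... */
	rcu_irq_exit_irqson();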

Link: http://lkml.kernel.org/r/20170405093207.404f8deb@gandalf.local.home

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/rcupdate.h   |  5 +++++
 kernel/rcu/tree.c          | 18 ++++++++++++++++--
 kernel/trace/trace_stack.c |  8 ++++++++
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index de88b33c0974..dea8f17b2fe3 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -97,6 +97,7 @@ void do_trace_rcu_torture_read(const char *rcutorturename,
 			       unsigned long secs,
 			       unsigned long c_old,
 			       unsigned long c);
+bool rcu_irq_enter_disabled(void);
 #else
 static inline void rcutorture_get_gp_data(enum rcutorture_type test_type,
 					  int *flags,
@@ -113,6 +114,10 @@ static inline void rcutorture_record_test_transition(void)
 static inline void rcutorture_record_progress(unsigned long vernum)
 {
 }
+static inline bool rcu_irq_enter_disabled(void)
+{
+	return false;
+}
 #ifdef CONFIG_RCU_TRACE
 void do_trace_rcu_torture_read(const char *rcutorturename,
 			       struct rcu_head *rhp,
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8b4d273331e4..a6dcf3bd244f 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -285,6 +285,20 @@ static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
 };
 
 /*
+ * There are a few places, currently just in the tracing infrastructure,
+ * that use rcu_irq_enter() to make sure RCU is watching. But there's
+ * a small location where that will not even work. In those cases
+ * rcu_irq_enter_disabled() needs to be checked to make sure rcu_irq_enter()
+ * can be called.
+ */
+static DEFINE_PER_CPU(bool, disable_rcu_irq_enter);
+
+bool rcu_irq_enter_disabled(void)
+{
+	return this_cpu_read(disable_rcu_irq_enter);
+}
+
+/*
  * Record entry into an extended quiescent state.  This is only to be
  * called when not already in an extended quiescent state.
  */
@@ -800,10 +814,10 @@ static void rcu_eqs_enter_common(bool user)
 		do_nocb_deferred_wakeup(rdp);
 	}
 	rcu_prepare_for_idle();
-	stack_tracer_disable();
+	__this_cpu_inc(disable_rcu_irq_enter);
 	rdtp->dynticks_nesting = 0; /* Breaks tracing momentarily. */
 	rcu_dynticks_eqs_enter(); /* After this, tracing works again. */
-	stack_tracer_enable();
+	__this_cpu_dec(disable_rcu_irq_enter);
 	rcu_dynticks_task_enter();
 
 	/*
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index f2f02ff350d4..76aa04d4c925 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -96,6 +96,14 @@ check_stack(unsigned long ip, unsigned long *stack)
 	if (in_nmi())
 		return;
 
+	/*
+	 * There's a slight chance that we are tracing inside the
+	 * RCU infrastructure, and rcu_irq_enter() will not work
+	 * as expected.
+	 */
+	if (unlikely(rcu_irq_enter_disabled()))
+		return;
+
 	local_irq_save(flags);
 	arch_spin_lock(&stack_trace_max_lock);
 
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [for-next][PATCH 7/7] tracing: Make sure rcu_irq_enter() can work for trace_*_rcuidle() trace events
  2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
                   ` (5 preceding siblings ...)
  2017-04-10 19:52 ` [for-next][PATCH 6/7] rcu/tracing: Add rcu_disabled to denote when rcu_irq_enter() will not work Steven Rostedt
@ 2017-04-10 19:52 ` Steven Rostedt
  6 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-10 19:52 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Mathieu Desnoyers, Paul E. McKenney

[-- Attachment #1: 0007-tracing-Make-sure-rcu_irq_enter-can-work-for-trace_-.patch --]
[-- Type: text/plain, Size: 3246 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Stack tracing discovered that there's a small location inside the RCU
infrastructure where calling rcu_irq_enter() does not work. As trace events
use rcu_irq_enter(), they must make sure that it is functional. A check
against rcu_irq_enter_disabled() is added, with a WARN_ON_ONCE(), as no trace
event should ever be used in that part of RCU. If the warning is triggered,
the trace event is ignored.

Restructure __DO_TRACE() a bit to get rid of the prercu and postrcu
arguments, and just have an rcucheck argument that does the work from within
the __DO_TRACE() macro. gcc optimization will compile out the rcucheck == 0
case.
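
For the _rcuidle variant the expansion then boils down to (sketch of the
macro below with rcucheck == 1):

	if (WARN_ON_ONCE(rcu_irq_enter_disabled()))
		return;			/* event silently dropped */
	rcu_irq_enter_irqson();
	rcu_read_lock_sched_notrace();
	/* ... iterate and call the registered probes ... */
	rcu_read_unlock_sched_notrace();
	rcu_irq_exit_irqson();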

Link: http://lkml.kernel.org/r/20170405093207.404f8deb@gandalf.local.home

Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/tracepoint.h | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index f72fcfe0e66a..cc48cb2ce209 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -128,7 +128,7 @@ extern void syscall_unregfunc(void);
  * as "(void *, void)". The DECLARE_TRACE_NOARGS() will pass in just
  * "void *data", where as the DECLARE_TRACE() will pass in "void *data, proto".
  */
-#define __DO_TRACE(tp, proto, args, cond, prercu, postrcu)		\
+#define __DO_TRACE(tp, proto, args, cond, rcucheck)			\
 	do {								\
 		struct tracepoint_func *it_func_ptr;			\
 		void *it_func;						\
@@ -136,7 +136,11 @@ extern void syscall_unregfunc(void);
 									\
 		if (!(cond))						\
 			return;						\
-		prercu;							\
+		if (rcucheck) {						\
+			if (WARN_ON_ONCE(rcu_irq_enter_disabled()))	\
+				return;					\
+			rcu_irq_enter_irqson();				\
+		}							\
 		rcu_read_lock_sched_notrace();				\
 		it_func_ptr = rcu_dereference_sched((tp)->funcs);	\
 		if (it_func_ptr) {					\
@@ -147,20 +151,19 @@ extern void syscall_unregfunc(void);
 			} while ((++it_func_ptr)->func);		\
 		}							\
 		rcu_read_unlock_sched_notrace();			\
-		postrcu;						\
+		if (rcucheck)						\
+			rcu_irq_exit_irqson();				\
 	} while (0)
 
 #ifndef MODULE
-#define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args)	\
+#define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args) \
 	static inline void trace_##name##_rcuidle(proto)		\
 	{								\
 		if (static_key_false(&__tracepoint_##name.key))		\
 			__DO_TRACE(&__tracepoint_##name,		\
 				TP_PROTO(data_proto),			\
 				TP_ARGS(data_args),			\
-				TP_CONDITION(cond),			\
-				rcu_irq_enter_irqson(),			\
-				rcu_irq_exit_irqson());			\
+				TP_CONDITION(cond), 1);			\
 	}
 #else
 #define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args)
@@ -186,7 +189,7 @@ extern void syscall_unregfunc(void);
 			__DO_TRACE(&__tracepoint_##name,		\
 				TP_PROTO(data_proto),			\
 				TP_ARGS(data_args),			\
-				TP_CONDITION(cond),,);			\
+				TP_CONDITION(cond), 0);			\
 		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
 			rcu_read_lock_sched_notrace();			\
 			rcu_dereference_sched(__tracepoint_##name.funcs);\
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification
  2017-04-10 19:52 ` [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification Steven Rostedt
@ 2017-04-13  7:58   ` Valentin Rothberg
  2017-04-13 14:00     ` Steven Rostedt
  0 siblings, 1 reply; 10+ messages in thread
From: Valentin Rothberg @ 2017-04-13  7:58 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, Paul E. McKenney,
	valentinrothberg

Hi Steven,

I just found this patch in linux-next commit 8aaf1ee70e19 with
scripts/checkkconfigsymbols.py complaining about an undefined
CONFIG_PREEMPT_DEBUG (see below). I guess it's just a typo, since
there is a symbol named DEBUG_PREEMPT.

If you want to, I can send a trivial patch to rename both references.

Best regards,
 Valentin

On Apr 10 '17 15:52, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
> 
> In order to eliminate a function call, make "trace_active" into
> "disable_stack_tracer" and convert stack_tracer_disable() and friends into
> static inline functions.
> 
> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  include/linux/ftrace.h     | 36 +++++++++++++++++++++++++++++++--
>  kernel/trace/trace_stack.c | 50 +++++++++-------------------------------------
>  2 files changed, 43 insertions(+), 43 deletions(-)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 7b4e6572ab21..06b2990a35e4 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -287,8 +287,40 @@ stack_trace_sysctl(struct ctl_table *table, int write,
>  		   void __user *buffer, size_t *lenp,
>  		   loff_t *ppos);
>  
> -void stack_tracer_disable(void);
> -void stack_tracer_enable(void);
> +/* DO NOT MODIFY THIS VARIABLE DIRECTLY! */
> +DECLARE_PER_CPU(int, disable_stack_tracer);
> +
> +/**
> + * stack_tracer_disable - temporarily disable the stack tracer
> + *
> + * There's a few locations (namely in RCU) where stack tracing
> + * cannot be executed. This function is used to disable stack
> + * tracing during those critical sections.
> + *
> + * This function must be called with preemption or interrupts
> + * disabled and stack_tracer_enable() must be called shortly after
> + * while preemption or interrupts are still disabled.
> + */
> +static inline void stack_tracer_disable(void)
> +{
> +	/* Preemption or interupts must be disabled */
> +	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
                   ^ undefined
> +		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
> +	this_cpu_inc(disable_stack_tracer);
> +}
> +
> +/**
> + * stack_tracer_enable - re-enable the stack tracer
> + *
> + * After stack_tracer_disable() is called, stack_tracer_enable()
> + * must be called shortly afterward.
> + */
> +static inline void stack_tracer_enable(void)
> +{
> +	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
                   ^ undefined
> +		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
> +	this_cpu_dec(disable_stack_tracer);
> +}
>  #else
>  static inline void stack_tracer_disable(void) { }
>  static inline void stack_tracer_enable(void) { }
> diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
> index 21e536cf66e4..f2f02ff350d4 100644
> --- a/kernel/trace/trace_stack.c
> +++ b/kernel/trace/trace_stack.c
> @@ -35,44 +35,12 @@ unsigned long stack_trace_max_size;
>  arch_spinlock_t stack_trace_max_lock =
>  	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
>  
> -static DEFINE_PER_CPU(int, trace_active);
> +DEFINE_PER_CPU(int, disable_stack_tracer);
>  static DEFINE_MUTEX(stack_sysctl_mutex);
>  
>  int stack_tracer_enabled;
>  static int last_stack_tracer_enabled;
>  
> -/**
> - * stack_tracer_disable - temporarily disable the stack tracer
> - *
> - * There's a few locations (namely in RCU) where stack tracing
> - * cannot be executed. This function is used to disable stack
> - * tracing during those critical sections.
> - *
> - * This function must be called with preemption or interrupts
> - * disabled and stack_tracer_enable() must be called shortly after
> - * while preemption or interrupts are still disabled.
> - */
> -void stack_tracer_disable(void)
> -{
> -	/* Preemption or interupts must be disabled */
> -	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
> -		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
> -	this_cpu_inc(trace_active);
> -}
> -
> -/**
> - * stack_tracer_enable - re-enable the stack tracer
> - *
> - * After stack_tracer_disable() is called, stack_tracer_enable()
> - * must be called shortly afterward.
> - */
> -void stack_tracer_enable(void)
> -{
> -	if (IS_ENABLED(CONFIG_PREEMPT_DEBUG))
> -		WARN_ON_ONCE(!preempt_count() || !irqs_disabled());
> -	this_cpu_dec(trace_active);
> -}
> -
>  void stack_trace_print(void)
>  {
>  	long i;
> @@ -243,8 +211,8 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
>  	preempt_disable_notrace();
>  
>  	/* no atomic needed, we only modify this variable by this cpu */
> -	__this_cpu_inc(trace_active);
> -	if (__this_cpu_read(trace_active) != 1)
> +	__this_cpu_inc(disable_stack_tracer);
> +	if (__this_cpu_read(disable_stack_tracer) != 1)
>  		goto out;
>  
>  	ip += MCOUNT_INSN_SIZE;
> @@ -252,7 +220,7 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
>  	check_stack(ip, &stack);
>  
>   out:
> -	__this_cpu_dec(trace_active);
> +	__this_cpu_dec(disable_stack_tracer);
>  	/* prevent recursion in schedule */
>  	preempt_enable_notrace();
>  }
> @@ -294,15 +262,15 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
>  	/*
>  	 * In case we trace inside arch_spin_lock() or after (NMI),
>  	 * we will cause circular lock, so we also need to increase
> -	 * the percpu trace_active here.
> +	 * the percpu disable_stack_tracer here.
>  	 */
> -	__this_cpu_inc(trace_active);
> +	__this_cpu_inc(disable_stack_tracer);
>  
>  	arch_spin_lock(&stack_trace_max_lock);
>  	*ptr = val;
>  	arch_spin_unlock(&stack_trace_max_lock);
>  
> -	__this_cpu_dec(trace_active);
> +	__this_cpu_dec(disable_stack_tracer);
>  	local_irq_restore(flags);
>  
>  	return count;
> @@ -338,7 +306,7 @@ static void *t_start(struct seq_file *m, loff_t *pos)
>  {
>  	local_irq_disable();
>  
> -	__this_cpu_inc(trace_active);
> +	__this_cpu_inc(disable_stack_tracer);
>  
>  	arch_spin_lock(&stack_trace_max_lock);
>  
> @@ -352,7 +320,7 @@ static void t_stop(struct seq_file *m, void *p)
>  {
>  	arch_spin_unlock(&stack_trace_max_lock);
>  
> -	__this_cpu_dec(trace_active);
> +	__this_cpu_dec(disable_stack_tracer);
>  
>  	local_irq_enable();
>  }
> -- 
> 2.10.2
> 
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification
  2017-04-13  7:58   ` Valentin Rothberg
@ 2017-04-13 14:00     ` Steven Rostedt
  0 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2017-04-13 14:00 UTC (permalink / raw)
  To: Valentin Rothberg
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, Paul E. McKenney

On Thu, 13 Apr 2017 09:58:17 +0200
Valentin Rothberg <valentinrothberg@gmail.com> wrote:

> Hi Steven,
> 
> I just found this patch in linux-next commit 8aaf1ee70e19 with
> scripts/checkkconfigsymbols.py complaining about an undefined
> CONFIG_PREEMPT_DEBUG (see below). I guess it's just a typo, since
> there is a symbol named DEBUG_PREEMPT.
> 
> If you want to, I can send a trivial patch to rename both references.

Please do!

Thanks for catching this.

-- Steve

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-04-13 14:00 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-10 19:52 [for-next][PATCH 0/7] tracing: Stack tracing and rcu modification updates Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 1/7] ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 2/7] tracing: Replace the per_cpu() with __this_cpu*() in trace_stack.c Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 3/7] tracing: Add stack_tracer_disable/enable() functions Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 4/7] tracing: Rename trace_active to disable_stack_tracer and inline its modification Steven Rostedt
2017-04-13  7:58   ` Valentin Rothberg
2017-04-13 14:00     ` Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 5/7] rcu: Fix dyntick-idle tracing Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 6/7] rcu/tracing: Add rcu_disabled to denote when rcu_irq_enter() will not work Steven Rostedt
2017-04-10 19:52 ` [for-next][PATCH 7/7] tracing: Make sure rcu_irq_enter() can work for trace_*_rcuidle() trace events Steven Rostedt
