* [for-next][PATCH 0/6] tracing: Last minute updates before pushing to linux-next
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, joel

Joel,

These are the last minute updates I have on top of your changes.
A couple of them I may have already posted to you. I'm trying to
get my tests to all pass without errors or new warnings. My tests
are still running, and hopefully this will have all the fixes.

-- Steve


  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
ftrace/core

Head SHA1: 05ceccdf48666c14eed1082439bf9d5cb50ff9ed


Colin Ian King (1):
      ftrace: Remove unused pointer ftrace_swapper_pid

Steven Rostedt (VMware) (5):
      tracing: Partial revert of "tracing: Centralize preemptirq tracepoints and unify their usage"
      tracing/irqsoff: Handle preempt_count for different configs
      tracing: More reverting of "tracing: Centralize preemptirq tracepoints and unify their usage"
      tracing: Fix synchronizing to event changes with tracepoint_synchronize_unregister()
      uprobes: Use synchronize_rcu() not synchronize_sched()

----
 include/linux/irqflags.h            |  8 ++-
 include/linux/lockdep.h             |  2 -
 init/main.c                         |  2 -
 kernel/locking/lockdep.c            | 14 +-----
 kernel/trace/ftrace.c               |  2 -
 kernel/trace/trace.h                | 15 ++++++
 kernel/trace/trace_events.c         |  8 +--
 kernel/trace/trace_events_filter.c  | 15 +++---
 kernel/trace/trace_events_hist.c    |  2 +-
 kernel/trace/trace_events_trigger.c |  6 ++-
 kernel/trace/trace_irqsoff.c        | 98 +++++++++++++------------------------
 kernel/trace/trace_preemptirq.c     | 53 +++++++++++++-------
 kernel/trace/trace_uprobe.c         |  2 +-
 13 files changed, 109 insertions(+), 118 deletions(-)


* [for-next][PATCH 1/6] tracing: Partial revert of "tracing: Centralize preemptirq tracepoints and unify their usage"
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, joel, Joel Fernandes, Peter Zijlstra

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Joel Fernandes created a nice patch that cleaned up the duplicate hooks used
by lockdep and the irqsoff latency tracer, making both use tracepoints. But it
caused lockdep to trigger several false positives. We have not yet figured
out why, but removing lockdep from the trace event hooks and having it call
its helper functions directly (as it used to) makes the problem go away.

This is a partial revert of the cleanup patch c3bc8fd637a9 ("tracing:
Centralize preemptirq tracepoints and unify their usage") that adds the
direct calls for lockdep back, while keeping most of the cleanup that got
rid of the horrible preprocessor #if statements.
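
In the resulting code, lockdep is invoked by a direct function call at the
end of each tracing hook; condensed from the trace_preemptirq.c hunk below:

	void trace_hardirqs_on(void)
	{
		if (this_cpu_read(tracing_irq_cpu)) {
			trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
			this_cpu_write(tracing_irq_cpu, 0);
		}

		/* Direct call, no tracepoint between tracing and lockdep */
		lockdep_hardirqs_on(CALLER_ADDR0);
	}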

Link: http://lkml.kernel.org/r/20180806155058.5ee875f4@gandalf.local.home

Cc: Joel Fernandes <joelaf@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints and unify their usage")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/irqflags.h        |  8 ++++++--
 include/linux/lockdep.h         |  2 --
 init/main.c                     |  2 --
 kernel/locking/lockdep.c        | 14 ++-----------
 kernel/trace/trace_preemptirq.c | 36 ++++++++++++++++++---------------
 5 files changed, 28 insertions(+), 34 deletions(-)

diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index 50edb9cbbd26..21619c92c377 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -19,9 +19,13 @@
 #ifdef CONFIG_PROVE_LOCKING
   extern void trace_softirqs_on(unsigned long ip);
   extern void trace_softirqs_off(unsigned long ip);
+  extern void lockdep_hardirqs_on(unsigned long ip);
+  extern void lockdep_hardirqs_off(unsigned long ip);
 #else
-# define trace_softirqs_on(ip)	do { } while (0)
-# define trace_softirqs_off(ip)	do { } while (0)
+  static inline void trace_softirqs_on(unsigned long ip) { }
+  static inline void trace_softirqs_off(unsigned long ip) { }
+  static inline void lockdep_hardirqs_on(unsigned long ip) { }
+  static inline void lockdep_hardirqs_off(unsigned long ip) { }
 #endif
 
 #ifdef CONFIG_TRACE_IRQFLAGS
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index a8113357ceeb..b0d0b51c4d85 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -267,7 +267,6 @@ struct held_lock {
  * Initialization, self-test and debugging-output methods:
  */
 extern void lockdep_init(void);
-extern void lockdep_init_early(void);
 extern void lockdep_reset(void);
 extern void lockdep_reset_lock(struct lockdep_map *lock);
 extern void lockdep_free_key_range(void *start, unsigned long size);
@@ -408,7 +407,6 @@ static inline void lockdep_on(void)
 # define lock_set_class(l, n, k, s, i)		do { } while (0)
 # define lock_set_subclass(l, s, i)		do { } while (0)
 # define lockdep_init()				do { } while (0)
-# define lockdep_init_early()			do { } while (0)
 # define lockdep_init_map(lock, name, key, sub) \
 		do { (void)(name); (void)(key); } while (0)
 # define lockdep_set_class(lock, key)		do { (void)(key); } while (0)
diff --git a/init/main.c b/init/main.c
index 44fe43be84c1..5d42e577643a 100644
--- a/init/main.c
+++ b/init/main.c
@@ -649,8 +649,6 @@ asmlinkage __visible void __init start_kernel(void)
 	call_function_init();
 	WARN(!irqs_disabled(), "Interrupts were enabled early\n");
 
-	lockdep_init_early();
-
 	early_boot_irqs_disabled = false;
 	local_irq_enable();
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 03bfaeb9f4e6..e406c5fdb41e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2840,8 +2840,7 @@ static void __trace_hardirqs_on_caller(unsigned long ip)
 	debug_atomic_inc(hardirqs_on_events);
 }
 
-static void lockdep_hardirqs_on(void *none, unsigned long ignore,
-				unsigned long ip)
+void lockdep_hardirqs_on(unsigned long ip)
 {
 	if (unlikely(!debug_locks || current->lockdep_recursion))
 		return;
@@ -2885,8 +2884,7 @@ static void lockdep_hardirqs_on(void *none, unsigned long ignore,
 /*
  * Hardirqs were disabled:
  */
-static void lockdep_hardirqs_off(void *none, unsigned long ignore,
-				 unsigned long ip)
+void lockdep_hardirqs_off(unsigned long ip)
 {
 	struct task_struct *curr = current;
 
@@ -4315,14 +4313,6 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 	raw_local_irq_restore(flags);
 }
 
-void __init lockdep_init_early(void)
-{
-#ifdef CONFIG_PROVE_LOCKING
-	register_trace_prio_irq_disable(lockdep_hardirqs_off, NULL, INT_MAX);
-	register_trace_prio_irq_enable(lockdep_hardirqs_on, NULL, INT_MIN);
-#endif
-}
-
 void __init lockdep_init(void)
 {
 	printk("Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar\n");
diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
index e76b78bf258e..fa656b25f427 100644
--- a/kernel/trace/trace_preemptirq.c
+++ b/kernel/trace/trace_preemptirq.c
@@ -19,41 +19,45 @@ static DEFINE_PER_CPU(int, tracing_irq_cpu);
 
 void trace_hardirqs_on(void)
 {
-	if (!this_cpu_read(tracing_irq_cpu))
-		return;
+	if (this_cpu_read(tracing_irq_cpu)) {
+		trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
+		this_cpu_write(tracing_irq_cpu, 0);
+	}
 
-	trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
-	this_cpu_write(tracing_irq_cpu, 0);
+	lockdep_hardirqs_on(CALLER_ADDR0);
 }
 EXPORT_SYMBOL(trace_hardirqs_on);
 
 void trace_hardirqs_off(void)
 {
-	if (this_cpu_read(tracing_irq_cpu))
-		return;
+	if (!this_cpu_read(tracing_irq_cpu)) {
+		this_cpu_write(tracing_irq_cpu, 1);
+		trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
+	}
 
-	this_cpu_write(tracing_irq_cpu, 1);
-	trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
+	lockdep_hardirqs_off(CALLER_ADDR0);
 }
 EXPORT_SYMBOL(trace_hardirqs_off);
 
 __visible void trace_hardirqs_on_caller(unsigned long caller_addr)
 {
-	if (!this_cpu_read(tracing_irq_cpu))
-		return;
+	if (this_cpu_read(tracing_irq_cpu)) {
+		trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
+		this_cpu_write(tracing_irq_cpu, 0);
+	}
 
-	trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
-	this_cpu_write(tracing_irq_cpu, 0);
+	lockdep_hardirqs_on(CALLER_ADDR0);
 }
 EXPORT_SYMBOL(trace_hardirqs_on_caller);
 
 __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
 {
-	if (this_cpu_read(tracing_irq_cpu))
-		return;
+	if (!this_cpu_read(tracing_irq_cpu)) {
+		this_cpu_write(tracing_irq_cpu, 1);
+		trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
+	}
 
-	this_cpu_write(tracing_irq_cpu, 1);
-	trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
+	lockdep_hardirqs_off(CALLER_ADDR0);
 }
 EXPORT_SYMBOL(trace_hardirqs_off_caller);
 #endif /* CONFIG_TRACE_IRQFLAGS */
-- 
2.18.0




* [for-next][PATCH 2/6] tracing/irqsoff: Handle preempt_count for different configs
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, joel

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

I was hitting the following warning:

WARNING: CPU: 0 PID: 1 at kernel/trace/trace_irqsoff.c:631 tracer_hardirqs_off+0x15/0x2a

Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.0-rc6-test+ #13
Hardware name: MSI MS-7823/CSM-H87M-G43 (MS-7823), BIOS V1.6 02/22/2014
EIP: tracer_hardirqs_off+0x15/0x2a
Code: ff 85 c0 74 0e 8b 45 00 8b 50 04 8b 45 04 e8 35 ff ff ff 5d c3 55 64 a1 cc 37 51 c1 a9 ff ff ff 7f 89 e5 53 89 d3 89 ca 75 02 <0f> 0b e8 90 fc ff ff 85 c0 74 07 89 d8 e8 0c ff ff ff 5b 5d c3 55
EAX: 80000000 EBX: c04337f0 ECX: c04338e3 EDX: c04338e3
ESI: c04337f0 EDI: c04338e3 EBP: f2aa1d68 ESP: f2aa1d64
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00210046
CR0: 80050033 CR2: 00000000 CR3: 01668000 CR4: 001406f0
Call Trace:
 trace_irq_disable_rcuidle+0x63/0x6c
 trace_hardirqs_off+0x26/0x30
 default_send_IPI_mask_allbutself_logical+0x31/0x93
 default_send_IPI_allbutself+0x37/0x48
 native_send_call_func_ipi+0x4d/0x6a
 smp_call_function_many+0x165/0x19d
 ? add_nops+0x34/0x34
 ? trace_hardirqs_on_caller+0x2d/0x2d
 ? add_nops+0x34/0x34
 smp_call_function+0x1f/0x23
 on_each_cpu+0x15/0x43
 ? trace_hardirqs_on_caller+0x2d/0x2d
 ? trace_hardirqs_on_caller+0x2d/0x2d
 ? trace_irq_disable_rcuidle+0x1/0x6c
 text_poke_bp+0xa0/0xc2
 ? trace_hardirqs_on_caller+0x2d/0x2d
 arch_jump_label_transform+0xa7/0xcb
 ? trace_irq_disable_rcuidle+0x5/0x6c
 __jump_label_update+0x3e/0x6d
 jump_label_update+0x7d/0x81
 static_key_slow_inc_cpuslocked+0x58/0x6d
 static_key_slow_inc+0x19/0x20
 tracepoint_probe_register_prio+0x19e/0x1d1
 ? start_critical_timings+0x1c/0x1c
 tracepoint_probe_register+0xf/0x11
 irqsoff_tracer_init+0x21/0xf2
 tracer_init+0x16/0x1a
 trace_selftest_startup_irqsoff+0x25/0xc4
 run_tracer_selftest+0xca/0x131
 register_tracer+0xd5/0x172
 ? trace_event_define_fields_preemptirq_template+0x45/0x45
 init_irqsoff_tracer+0xd/0x11
 do_one_initcall+0xab/0x1e8
 ? rcu_read_lock_sched_held+0x3d/0x44
 ? trace_initcall_level+0x52/0x86
 kernel_init_freeable+0x195/0x21a
 ? rest_init+0xb4/0xb4
 kernel_init+0xd/0xe4
 ret_from_fork+0x2e/0x38

It is due to running a CONFIG_PREEMPT_VOLUNTARY kernel, which triggers this
warning every time:

	WARN_ON_ONCE(!preempt_count());

Because CONFIG_PREEMPT_VOLUNTARY does not select CONFIG_PREEMPT_COUNT,
preempt_disable() does not modify preempt_count(), so preempt_count() is
always zero here.

This warning is there to make sure preempt_count is set, because
tracer_hardirqs_on() does a preempt_enable_notrace() to make preempt_trace()
work properly: when called from a trace event, the trace event code has
disabled preemption, and the tracer wants to know what the preemption state
was before it was called.

Instead of enabling preemption like this, just record the preempt_count,
subtract PREEMPT_DISABLE_OFFSET from it (which is zero when CONFIG_PREEMPT
is not set), and pass that value to the functions that need it, which then
use the passed-in parameter instead of calling preempt_count() directly.
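
For reference, PREEMPT_DISABLE_OFFSET is defined in include/linux/preempt.h
roughly as follows; since CONFIG_PREEMPT_VOLUNTARY does not select
CONFIG_PREEMPT_COUNT, the subtraction is a no-op there:

	/* The preempt_count offset after preempt_disable() */
	#if defined(CONFIG_PREEMPT_COUNT)
	# define PREEMPT_DISABLE_OFFSET	PREEMPT_OFFSET
	#else
	# define PREEMPT_DISABLE_OFFSET	0
	#endif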

Fixes: da5b3ebb45277 ("tracing: irqsoff: Account for additional preempt_disable")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_irqsoff.c | 66 ++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 29 deletions(-)

diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index ffbf1505d5bc..4af990e9c594 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -40,12 +40,12 @@ static int start_irqsoff_tracer(struct trace_array *tr, int graph);
 
 #ifdef CONFIG_PREEMPT_TRACER
 static inline int
-preempt_trace(void)
+preempt_trace(int pc)
 {
-	return ((trace_type & TRACER_PREEMPT_OFF) && preempt_count());
+	return ((trace_type & TRACER_PREEMPT_OFF) && pc);
 }
 #else
-# define preempt_trace() (0)
+# define preempt_trace(pc) (0)
 #endif
 
 #ifdef CONFIG_IRQSOFF_TRACER
@@ -366,7 +366,7 @@ check_critical_timing(struct trace_array *tr,
 }
 
 static inline void
-start_critical_timing(unsigned long ip, unsigned long parent_ip)
+start_critical_timing(unsigned long ip, unsigned long parent_ip, int pc)
 {
 	int cpu;
 	struct trace_array *tr = irqsoff_trace;
@@ -394,7 +394,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
 
 	local_save_flags(flags);
 
-	__trace_function(tr, ip, parent_ip, flags, preempt_count());
+	__trace_function(tr, ip, parent_ip, flags, pc);
 
 	per_cpu(tracing_cpu, cpu) = 1;
 
@@ -402,7 +402,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
 }
 
 static inline void
-stop_critical_timing(unsigned long ip, unsigned long parent_ip)
+stop_critical_timing(unsigned long ip, unsigned long parent_ip, int pc)
 {
 	int cpu;
 	struct trace_array *tr = irqsoff_trace;
@@ -428,7 +428,7 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 	atomic_inc(&data->disabled);
 
 	local_save_flags(flags);
-	__trace_function(tr, ip, parent_ip, flags, preempt_count());
+	__trace_function(tr, ip, parent_ip, flags, pc);
 	check_critical_timing(tr, data, parent_ip ? : ip, cpu);
 	data->critical_start = 0;
 	atomic_dec(&data->disabled);
@@ -437,15 +437,19 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 /* start and stop critical timings used to for stoppage (in idle) */
 void start_critical_timings(void)
 {
-	if (preempt_trace() || irq_trace())
-		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
+	int pc = preempt_count();
+
+	if (preempt_trace(pc) || irq_trace())
+		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1, pc);
 }
 EXPORT_SYMBOL_GPL(start_critical_timings);
 
 void stop_critical_timings(void)
 {
-	if (preempt_trace() || irq_trace())
-		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
+	int pc = preempt_count();
+
+	if (preempt_trace(pc) || irq_trace())
+		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1, pc);
 }
 EXPORT_SYMBOL_GPL(stop_critical_timings);
 
@@ -603,40 +607,40 @@ static void irqsoff_tracer_stop(struct trace_array *tr)
  */
 static void tracer_hardirqs_on(void *none, unsigned long a0, unsigned long a1)
 {
+	unsigned int pc = preempt_count();
+
 	/*
 	 * Tracepoint probes are expected to be called with preempt disabled,
 	 * We don't care about being called with preempt disabled but we need
 	 * to know in the future if that changes so we can remove the next
 	 * preempt_enable.
 	 */
-	WARN_ON_ONCE(!preempt_count());
-
-	/* Tracepoint probes disable preemption atleast once, account for that */
-	preempt_enable_notrace();
+	WARN_ON_ONCE(pc < PREEMPT_DISABLE_OFFSET);
 
-	if (!preempt_trace() && irq_trace())
-		stop_critical_timing(a0, a1);
+	/* Use PREEMPT_DISABLE_OFFSET to handle !CONFIG_PREEMPT cases */
+	pc -= PREEMPT_DISABLE_OFFSET;
 
-	preempt_disable_notrace();
+	if (!preempt_trace(pc) && irq_trace())
+		stop_critical_timing(a0, a1, pc);
 }
 
 static void tracer_hardirqs_off(void *none, unsigned long a0, unsigned long a1)
 {
+	unsigned int pc = preempt_count();
+
 	/*
 	 * Tracepoint probes are expected to be called with preempt disabled,
 	 * We don't care about being called with preempt disabled but we need
 	 * to know in the future if that changes so we can remove the next
 	 * preempt_enable.
 	 */
-	WARN_ON_ONCE(!preempt_count());
-
-	/* Tracepoint probes disable preemption atleast once, account for that */
-	preempt_enable_notrace();
+	WARN_ON_ONCE(pc < PREEMPT_DISABLE_OFFSET);
 
-	if (!preempt_trace() && irq_trace())
-		start_critical_timing(a0, a1);
+	/* Use PREEMPT_DISABLE_OFFSET to handle !CONFIG_PREEMPT cases */
+	pc -= PREEMPT_DISABLE_OFFSET;
 
-	preempt_disable_notrace();
+	if (!preempt_trace(pc) && irq_trace())
+		start_critical_timing(a0, a1, pc);
 }
 
 static int irqsoff_tracer_init(struct trace_array *tr)
@@ -679,14 +683,18 @@ static struct tracer irqsoff_tracer __read_mostly =
 #ifdef CONFIG_PREEMPT_TRACER
 static void tracer_preempt_on(void *none, unsigned long a0, unsigned long a1)
 {
-	if (preempt_trace() && !irq_trace())
-		stop_critical_timing(a0, a1);
+	int pc = preempt_count();
+
+	if (preempt_trace(pc) && !irq_trace())
+		stop_critical_timing(a0, a1, pc);
 }
 
 static void tracer_preempt_off(void *none, unsigned long a0, unsigned long a1)
 {
-	if (preempt_trace() && !irq_trace())
-		start_critical_timing(a0, a1);
+	int pc = preempt_count();
+
+	if (preempt_trace(pc) && !irq_trace())
+		start_critical_timing(a0, a1, pc);
 }
 
 static int preemptoff_tracer_init(struct trace_array *tr)
-- 
2.18.0




* [for-next][PATCH 3/6] tracing: More reverting of "tracing: Centralize preemptirq tracepoints and unify their usage"
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, joel, Joel Fernandes

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Joel Fernandes created a nice patch that cleaned up the duplicate hooks used
by lockdep and the irqsoff latency tracer, making both use tracepoints. But
calling into the latency tracer's routines through tracepoints triggers
warnings, mainly because those tracepoints can fire from NMI context. If
that happens, SRCU may not work properly, because on some architectures
SRCU is not safe to be called in both NMI and non-NMI context.

This is a partial revert of the cleanup patch c3bc8fd637a9 ("tracing:
Centralize preemptirq tracepoints and unify their usage") that adds back the
direct calls into the latency tracer. It also calls the trace events only
when not in NMI.
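
The resulting pattern, condensed from the trace_preemptirq.c hunk below: the
direct call into the latency tracer is always made, while the SRCU-protected
trace event is skipped in NMI context:

	void trace_preempt_off(unsigned long a0, unsigned long a1)
	{
		if (!in_nmi())
			trace_preempt_disable_rcuidle(a0, a1);
		tracer_preempt_off(a0, a1);
	}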

Cc: Joel Fernandes <joelaf@google.com>
Fixes: c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints and unify their usage")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.h            | 15 +++++++++++
 kernel/trace/trace_irqsoff.c    | 48 +++------------------------------
 kernel/trace/trace_preemptirq.c | 25 ++++++++++++-----
 3 files changed, 38 insertions(+), 50 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index d88cd9bb72f4..a62b678731e3 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1827,6 +1827,21 @@ static inline int tracing_alloc_snapshot_instance(struct trace_array *tr)
 }
 #endif
 
+#ifdef CONFIG_PREEMPT_TRACER
+void tracer_preempt_on(unsigned long a0, unsigned long a1);
+void tracer_preempt_off(unsigned long a0, unsigned long a1);
+#else
+static inline void tracer_preempt_on(unsigned long a0, unsigned long a1) { }
+static inline void tracer_preempt_off(unsigned long a0, unsigned long a1) { }
+#endif
+#ifdef CONFIG_IRQSOFF_TRACER
+void tracer_hardirqs_on(unsigned long a0, unsigned long a1);
+void tracer_hardirqs_off(unsigned long a0, unsigned long a1);
+#else
+static inline void tracer_hardirqs_on(unsigned long a0, unsigned long a1) { }
+static inline void tracer_hardirqs_off(unsigned long a0, unsigned long a1) { }
+#endif
+
 extern struct trace_iterator *tracepoint_print_iter;
 
 #endif /* _LINUX_KERNEL_TRACE_H */
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 4af990e9c594..94c1ba139b3b 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -605,40 +605,18 @@ static void irqsoff_tracer_stop(struct trace_array *tr)
 /*
  * We are only interested in hardirq on/off events:
  */
-static void tracer_hardirqs_on(void *none, unsigned long a0, unsigned long a1)
+void tracer_hardirqs_on(unsigned long a0, unsigned long a1)
 {
 	unsigned int pc = preempt_count();
 
-	/*
-	 * Tracepoint probes are expected to be called with preempt disabled,
-	 * We don't care about being called with preempt disabled but we need
-	 * to know in the future if that changes so we can remove the next
-	 * preempt_enable.
-	 */
-	WARN_ON_ONCE(pc < PREEMPT_DISABLE_OFFSET);
-
-	/* Use PREEMPT_DISABLE_OFFSET to handle !CONFIG_PREEMPT cases */
-	pc -= PREEMPT_DISABLE_OFFSET;
-
 	if (!preempt_trace(pc) && irq_trace())
 		stop_critical_timing(a0, a1, pc);
 }
 
-static void tracer_hardirqs_off(void *none, unsigned long a0, unsigned long a1)
+void tracer_hardirqs_off(unsigned long a0, unsigned long a1)
 {
 	unsigned int pc = preempt_count();
 
-	/*
-	 * Tracepoint probes are expected to be called with preempt disabled,
-	 * We don't care about being called with preempt disabled but we need
-	 * to know in the future if that changes so we can remove the next
-	 * preempt_enable.
-	 */
-	WARN_ON_ONCE(pc < PREEMPT_DISABLE_OFFSET);
-
-	/* Use PREEMPT_DISABLE_OFFSET to handle !CONFIG_PREEMPT cases */
-	pc -= PREEMPT_DISABLE_OFFSET;
-
 	if (!preempt_trace(pc) && irq_trace())
 		start_critical_timing(a0, a1, pc);
 }
@@ -647,15 +625,11 @@ static int irqsoff_tracer_init(struct trace_array *tr)
 {
 	trace_type = TRACER_IRQS_OFF;
 
-	register_trace_irq_disable(tracer_hardirqs_off, NULL);
-	register_trace_irq_enable(tracer_hardirqs_on, NULL);
 	return __irqsoff_tracer_init(tr);
 }
 
 static void irqsoff_tracer_reset(struct trace_array *tr)
 {
-	unregister_trace_irq_disable(tracer_hardirqs_off, NULL);
-	unregister_trace_irq_enable(tracer_hardirqs_on, NULL);
 	__irqsoff_tracer_reset(tr);
 }
 
@@ -681,7 +655,7 @@ static struct tracer irqsoff_tracer __read_mostly =
 #endif /*  CONFIG_IRQSOFF_TRACER */
 
 #ifdef CONFIG_PREEMPT_TRACER
-static void tracer_preempt_on(void *none, unsigned long a0, unsigned long a1)
+void tracer_preempt_on(unsigned long a0, unsigned long a1)
 {
 	int pc = preempt_count();
 
@@ -689,7 +663,7 @@ static void tracer_preempt_on(void *none, unsigned long a0, unsigned long a1)
 		stop_critical_timing(a0, a1, pc);
 }
 
-static void tracer_preempt_off(void *none, unsigned long a0, unsigned long a1)
+void tracer_preempt_off(unsigned long a0, unsigned long a1)
 {
 	int pc = preempt_count();
 
@@ -701,15 +675,11 @@ static int preemptoff_tracer_init(struct trace_array *tr)
 {
 	trace_type = TRACER_PREEMPT_OFF;
 
-	register_trace_preempt_disable(tracer_preempt_off, NULL);
-	register_trace_preempt_enable(tracer_preempt_on, NULL);
 	return __irqsoff_tracer_init(tr);
 }
 
 static void preemptoff_tracer_reset(struct trace_array *tr)
 {
-	unregister_trace_preempt_disable(tracer_preempt_off, NULL);
-	unregister_trace_preempt_enable(tracer_preempt_on, NULL);
 	__irqsoff_tracer_reset(tr);
 }
 
@@ -740,21 +710,11 @@ static int preemptirqsoff_tracer_init(struct trace_array *tr)
 {
 	trace_type = TRACER_IRQS_OFF | TRACER_PREEMPT_OFF;
 
-	register_trace_irq_disable(tracer_hardirqs_off, NULL);
-	register_trace_irq_enable(tracer_hardirqs_on, NULL);
-	register_trace_preempt_disable(tracer_preempt_off, NULL);
-	register_trace_preempt_enable(tracer_preempt_on, NULL);
-
 	return __irqsoff_tracer_init(tr);
 }
 
 static void preemptirqsoff_tracer_reset(struct trace_array *tr)
 {
-	unregister_trace_irq_disable(tracer_hardirqs_off, NULL);
-	unregister_trace_irq_enable(tracer_hardirqs_on, NULL);
-	unregister_trace_preempt_disable(tracer_preempt_off, NULL);
-	unregister_trace_preempt_enable(tracer_preempt_on, NULL);
-
 	__irqsoff_tracer_reset(tr);
 }
 
diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
index fa656b25f427..71f553cceb3c 100644
--- a/kernel/trace/trace_preemptirq.c
+++ b/kernel/trace/trace_preemptirq.c
@@ -9,6 +9,7 @@
 #include <linux/uaccess.h>
 #include <linux/module.h>
 #include <linux/ftrace.h>
+#include "trace.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/preemptirq.h>
@@ -20,7 +21,9 @@ static DEFINE_PER_CPU(int, tracing_irq_cpu);
 void trace_hardirqs_on(void)
 {
 	if (this_cpu_read(tracing_irq_cpu)) {
-		trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
+		if (!in_nmi())
+			trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
+		tracer_hardirqs_on(CALLER_ADDR0, CALLER_ADDR1);
 		this_cpu_write(tracing_irq_cpu, 0);
 	}
 
@@ -32,7 +35,9 @@ void trace_hardirqs_off(void)
 {
 	if (!this_cpu_read(tracing_irq_cpu)) {
 		this_cpu_write(tracing_irq_cpu, 1);
-		trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
+		tracer_hardirqs_off(CALLER_ADDR0, CALLER_ADDR1);
+		if (!in_nmi())
+			trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
 	}
 
 	lockdep_hardirqs_off(CALLER_ADDR0);
@@ -42,7 +47,9 @@ EXPORT_SYMBOL(trace_hardirqs_off);
 __visible void trace_hardirqs_on_caller(unsigned long caller_addr)
 {
 	if (this_cpu_read(tracing_irq_cpu)) {
-		trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
+		if (!in_nmi())
+			trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
+		tracer_hardirqs_on(CALLER_ADDR0, caller_addr);
 		this_cpu_write(tracing_irq_cpu, 0);
 	}
 
@@ -54,7 +61,9 @@ __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
 {
 	if (!this_cpu_read(tracing_irq_cpu)) {
 		this_cpu_write(tracing_irq_cpu, 1);
-		trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
+		tracer_hardirqs_off(CALLER_ADDR0, caller_addr);
+		if (!in_nmi())
+			trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
 	}
 
 	lockdep_hardirqs_off(CALLER_ADDR0);
@@ -66,11 +75,15 @@ EXPORT_SYMBOL(trace_hardirqs_off_caller);
 
 void trace_preempt_on(unsigned long a0, unsigned long a1)
 {
-	trace_preempt_enable_rcuidle(a0, a1);
+	if (!in_nmi())
+		trace_preempt_enable_rcuidle(a0, a1);
+	tracer_preempt_on(a0, a1);
 }
 
 void trace_preempt_off(unsigned long a0, unsigned long a1)
 {
-	trace_preempt_disable_rcuidle(a0, a1);
+	if (!in_nmi())
+		trace_preempt_disable_rcuidle(a0, a1);
+	tracer_preempt_off(a0, a1);
 }
 #endif
-- 
2.18.0




* [for-next][PATCH 4/6] ftrace: Remove unused pointer ftrace_swapper_pid
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, joel, Colin Ian King

From: Colin Ian King <colin.king@canonical.com>

The pointer ftrace_swapper_pid is defined but never used, hence it is
redundant and can be removed. Its last use was removed in commit
345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like events do").

Cleans up clang warning:
warning: 'ftrace_swapper_pid' defined but not used [-Wunused-const-variable=]

Link: http://lkml.kernel.org/r/20180809125609.13142-1-colin.king@canonical.com

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 2d795193024b..48b5b466ec7a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1020,8 +1020,6 @@ static __init void ftrace_profile_tracefs(struct dentry *d_tracer)
 }
 #endif /* CONFIG_FUNCTION_PROFILER */
 
-static struct pid * const ftrace_swapper_pid = &init_struct_pid;
-
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int ftrace_graph_active;
 #else
-- 
2.18.0




* [for-next][PATCH 5/6] tracing: Fix synchronizing to event changes with tracepoint_synchronize_unregister()
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, joel

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Now that some trace events can be protected by srcu_read_lock(tracepoint_srcu),
we need to make sure all locations that depend on this are also protected.
There were many places that did a synchronize_sched() thinking that it was
enough to protect against access to trace events. This used to be the case,
but now that we use SRCU for the _rcuidle() trace events, they may not be
protected by synchronize_sched(), as they may be called in paths that RCU is
not watching, where disabling preemption offers no protection.
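
For reference, after commit e6753f23d961d, tracepoint_synchronize_unregister()
waits for both RCU flavors (a rough sketch of the tracepoint.h definition):

	static inline void tracepoint_synchronize_unregister(void)
	{
		synchronize_srcu(&tracepoint_srcu);
		synchronize_sched();
	}

so replacing a bare synchronize_sched() with it also covers the SRCU readers
used by the _rcuidle() trace events.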

Fixes: e6753f23d961d ("tracepoint: Make rcuidle tracepoint callers use SRCU")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_events.c         |  8 ++++----
 kernel/trace/trace_events_filter.c  | 15 ++++++++-------
 kernel/trace/trace_events_hist.c    |  2 +-
 kernel/trace/trace_events_trigger.c |  6 ++++--
 4 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 7b508ce8ac44..808cf29febe2 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -636,7 +636,7 @@ static void __ftrace_clear_event_pids(struct trace_array *tr)
 	rcu_assign_pointer(tr->filtered_pids, NULL);
 
 	/* Wait till all users are no longer using pid filtering */
-	synchronize_sched();
+	tracepoint_synchronize_unregister();
 
 	trace_free_pid_list(pid_list);
 }
@@ -1622,7 +1622,7 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
 	}
 
 	if (filtered_pids) {
-		synchronize_sched();
+		tracepoint_synchronize_unregister();
 		trace_free_pid_list(filtered_pids);
 	} else if (pid_list) {
 		/*
@@ -3036,8 +3036,8 @@ int event_trace_del_tracer(struct trace_array *tr)
 	/* Disable any running events */
 	__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
 
-	/* Access to events are within rcu_read_lock_sched() */
-	synchronize_sched();
+	/* Make sure no more events are being executed */
+	tracepoint_synchronize_unregister();
 
 	down_write(&trace_event_sem);
 	__trace_remove_event_dirs(tr);
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 893a206bcba4..184c7685d5ea 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -899,7 +899,8 @@ int filter_match_preds(struct event_filter *filter, void *rec)
 	if (!filter)
 		return 1;
 
-	prog = rcu_dereference_sched(filter->prog);
+	/* Protected by either SRCU(tracepoint_srcu) or preempt_disable */
+	prog = rcu_dereference_raw(filter->prog);
 	if (!prog)
 		return 1;
 
@@ -1626,10 +1627,10 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
 
 	/*
 	 * The calls can still be using the old filters.
-	 * Do a synchronize_sched() to ensure all calls are
+	 * Do a tracepoint_synchronize_unregister() to ensure all calls are
 	 * done with them before we free them.
 	 */
-	synchronize_sched();
+	tracepoint_synchronize_unregister();
 	list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
 		__free_filter(filter_item->filter);
 		list_del(&filter_item->list);
@@ -1648,7 +1649,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
 	kfree(filter);
 	/* If any call succeeded, we still need to sync */
 	if (!fail)
-		synchronize_sched();
+		tracepoint_synchronize_unregister();
 	list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
 		__free_filter(filter_item->filter);
 		list_del(&filter_item->list);
@@ -1790,7 +1791,7 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
 		event_clear_filter(file);
 
 		/* Make sure the filter is not being used */
-		synchronize_sched();
+		tracepoint_synchronize_unregister();
 		__free_filter(filter);
 
 		return 0;
@@ -1817,7 +1818,7 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
 
 		if (tmp) {
 			/* Make sure the call is done with the filter */
-			synchronize_sched();
+			tracepoint_synchronize_unregister();
 			__free_filter(tmp);
 		}
 	}
@@ -1847,7 +1848,7 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
 		filter = system->filter;
 		system->filter = NULL;
 		/* Ensure all filters are no longer used */
-		synchronize_sched();
+		tracepoint_synchronize_unregister();
 		filter_free_subsystem_filters(dir, tr);
 		__free_filter(filter);
 		goto out_unlock;
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index aae18af94c94..c522b51d9909 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5141,7 +5141,7 @@ static void hist_clear(struct event_trigger_data *data)
 	if (data->name)
 		pause_named_trigger(data);
 
-	synchronize_sched();
+	tracepoint_synchronize_unregister();
 
 	tracing_map_clear(hist_data->map);
 
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 58d21fd52932..750044ef15e8 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -34,7 +34,9 @@ void trigger_data_free(struct event_trigger_data *data)
 	if (data->cmd_ops->set_filter)
 		data->cmd_ops->set_filter(NULL, data, NULL);
 
-	synchronize_sched(); /* make sure current triggers exit before free */
+	/* make sure current triggers exit before free */
+	tracepoint_synchronize_unregister();
+
 	kfree(data);
 }
 
@@ -752,7 +754,7 @@ int set_trigger_filter(char *filter_str,
 
 	if (tmp) {
 		/* Make sure the call is done with the filter */
-		synchronize_sched();
+		tracepoint_synchronize_unregister();
 		free_event_filter(tmp);
 	}
 
-- 
2.18.0




* [for-next][PATCH 6/6] uprobes: Use synchronize_rcu() not synchronize_sched()
From: Steven Rostedt @ 2018-08-09 21:03 UTC
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, joel, stable

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

While debugging another bug, I was looking at all the synchronize*()
functions used in kernel/trace, and noticed that trace_uprobes was using
synchronize_sched() with a comment saying it synchronizes with
u{,ret}probe_trace_func(). But those functions protect their data with
rcu_read_lock(), not rcu_read_lock_sched(), so synchronize_sched() is the
wrong synchronize_*() function to pair with them.
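
The reader side looks roughly like this (simplified from
u{,ret}probe_trace_func()); a plain rcu_read_lock() reader must be paired
with synchronize_rcu() on the update side:

	rcu_read_lock();
	list_for_each_entry_rcu(link, &tu->tp.files, list)
		__uprobe_trace_func(tu, 0, regs, ucb, dsize, link->file);
	rcu_read_unlock();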

Cc: stable@vger.kernel.org
Fixes: 70ed91c6ec7f8 ("tracing/uprobes: Support ftrace_event_file base multibuffer")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_uprobe.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index bf89a51e740d..ac02fafc9f1b 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -952,7 +952,7 @@ probe_event_disable(struct trace_uprobe *tu, struct trace_event_file *file)
 
 		list_del_rcu(&link->list);
 		/* synchronize with u{,ret}probe_trace_func */
-		synchronize_sched();
+		synchronize_rcu();
 		kfree(link);
 
 		if (!list_empty(&tu->tp.files))
-- 
2.18.0




* Re: [for-next][PATCH 1/6] tracing: Partial revert of "tracing: Centralize preemptirq tracepoints and unify their usage"
From: Joel Fernandes @ 2018-08-10 12:20 UTC
  To: Steven Rostedt
  Cc: LKML, Ingo Molnar, Andrew Morton, Joel Fernandes, Peter Zijlstra

On Thu, Aug 9, 2018 at 2:03 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
>
> Joel Fernandes created a nice patch that cleaned up the duplicate hooks used
> by lockdep and the irqsoff latency tracer, making both use tracepoints. But it
> caused lockdep to trigger several false positives. We have not yet figured
> out why, but removing lockdep from the trace event hooks and having it call
> its helper functions directly (as it used to) makes the problem go away.
>
> This is a partial revert of the cleanup patch c3bc8fd637a9 ("tracing:
> Centralize preemptirq tracepoints and unify their usage") that adds the
> direct calls for lockdep back, while keeping most of the cleanup that got
> rid of the horrible preprocessor #if statements.
>
> Link: http://lkml.kernel.org/r/20180806155058.5ee875f4@gandalf.local.home
>
> Cc: Joel Fernandes <joelaf@google.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Fixes: c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints and unify their usage")

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

thanks,

- Joel

> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  include/linux/irqflags.h        |  8 ++++++--
>  include/linux/lockdep.h         |  2 --
>  init/main.c                     |  2 --
>  kernel/locking/lockdep.c        | 14 ++-----------
>  kernel/trace/trace_preemptirq.c | 36 ++++++++++++++++++---------------
>  5 files changed, 28 insertions(+), 34 deletions(-)
>
> diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h


* Re: [for-next][PATCH 3/6] tracing: More reverting of "tracing: Centralize preemptirq tracepoints and unify their usage"
From: Joel Fernandes @ 2018-08-10 12:22 UTC
  To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Andrew Morton, Joel Fernandes

On Thu, Aug 9, 2018 at 2:03 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
>
> Joel Fernandes created a nice patch that cleaned up the duplicate hooks used
> by lockdep and the irqsoff latency tracer, making both use tracepoints. But
> calling into the latency tracer's routines through tracepoints triggers
> warnings, mainly because those tracepoints can fire from NMI context. If
> that happens, SRCU may not work properly, because on some architectures
> SRCU is not safe to be called in both NMI and non-NMI context.
>
> This is a partial revert of the cleanup patch c3bc8fd637a9 ("tracing:
> Centralize preemptirq tracepoints and unify their usage") that adds back the
> direct calls into the latency tracer. It also calls the trace events only
> when not in NMI.
>
> Cc: Joel Fernandes <joelaf@google.com>
> Fixes: c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints and unify their usage")
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

thanks,

- Joel

