* [PATCH v8 0/2] tracing: Add trace events for preemption and irq disable/enable
@ 2017-10-06  0:54 Joel Fernandes
  2017-10-06  0:54 ` [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events Joel Fernandes
  2017-10-06  0:54 ` [PATCH v8 2/2] tracing: Add support for preempt and irq enable/disable events Joel Fernandes
  0 siblings, 2 replies; 7+ messages in thread
From: Joel Fernandes @ 2017-10-06  0:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: kernel-team, Joel Fernandes, Steven Rostedt, Peter Zijlstra

These patches add trace events support for preempt and irq disable/enable
events.

Here's an example of how Android's systrace will use these events to show
atomic sections as a Gantt chart: http://imgur.com/download/TZplEVp
Other benefits of this initial work include the possibility of rewriting the
preemptirqsoff tracer to use trace events, and replacing kprobes with
tracepoint hooks for these events in the BPF samples (see
samples/bpf/lathist_kern.c).
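As a sketch of the kind of post-processing such a viewer performs (not part of
the patches; the trace lines and symbol names below are made up, loosely
following the caller=/parent= output format of the new events), pairing
disable/enable events per CPU yields the atomic-section durations:

```python
# Sketch (illustration only): pair irq_disable/irq_enable events per CPU
# to compute atomic-section durations, the same pairing a viewer like
# systrace would do to draw a Gantt chart. Trace lines are hypothetical.
import re

TRACE = """\
 <idle>-0     [000] d..1  100.000100: irq_disable: caller=schedule+0x4c parent=worker_thread+0xb0
 <idle>-0     [000] d..1  100.000250: irq_enable: caller=schedule+0x9f parent=worker_thread+0xb0
 bash-217     [001] d..1  100.000300: irq_disable: caller=do_page_fault+0x12 parent=page_fault+0x22
 bash-217     [001] d..1  100.000900: irq_enable: caller=do_page_fault+0x88 parent=page_fault+0x22
"""

LINE = re.compile(r"\[(\d+)\]\s+\S+\s+([\d.]+): (irq_disable|irq_enable): caller=(\S+)")

def atomic_sections(trace):
    """Return (cpu, start, end, caller) for each disable/enable pair."""
    open_section = {}          # cpu -> (start timestamp, caller symbol)
    sections = []
    for m in LINE.finditer(trace):
        cpu, ts = int(m.group(1)), float(m.group(2))
        event, caller = m.group(3), m.group(4)
        if event == "irq_disable":
            open_section[cpu] = (ts, caller)
        elif cpu in open_section:          # enable closing a seen disable
            start, caller = open_section.pop(cpu)
            sections.append((cpu, start, ts, caller))
    return sections

for cpu, start, end, caller in atomic_sections(TRACE):
    print(f"cpu{cpu}: {1e6 * (end - start):.0f} us irqs-off starting at {caller}")
```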

Changes since v7:
* Defining the tracepoints when they can never fire gives the user the false
impression that the tracepoints are present but unused. For this reason:
- preempt toggle tracepoints are unused when CONFIG_DEBUG_PREEMPT is off, so
  don't define them.
- irq toggle tracepoints are unused when CONFIG_PROVE_LOCKING is on, so don't
  define them.
In future patches we should do the same for the irqsoff tracer, since it can
appear that the tracer isn't working when the real issue is that its hooks
aren't being called.
Also in future patches we should unify the trace_hardirqs* paths of
trace_irqsoff.c and lockdep.c to avoid duplicating the two paths, but I'm
leaving that for a future patch set: I don't want to disrupt the irqsoff
tracer or lockdep in this set and risk breaking something else. The focus
here is getting the tracepoints in first. Hope that's OK.

* Clarified the comment about the per-CPU variable used for protection in
trace_hardirqs_*.
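To illustrate what that per-CPU variable protects against (a Python sketch,
not kernel code; the real implementation in patch 2/2 uses a per-CPU int with
this_cpu_read/this_cpu_write), the flag acts as a latch so that only real
on/off transitions emit events:

```python
# Illustration only: the transition latch used by trace_hardirqs_on/off.
# Redundant calls (disabling while already disabled, or enabling while
# already enabled) emit no event; only genuine transitions do.
events = []

irqs_marked_off = False   # stands in for the per-CPU tracing_irq_cpu flag

def trace_hardirqs_off():
    global irqs_marked_off
    if irqs_marked_off:        # already marked off: redundant, stay silent
        return
    irqs_marked_off = True
    events.append("irq_disable")

def trace_hardirqs_on():
    global irqs_marked_off
    if not irqs_marked_off:    # already on: redundant, stay silent
        return
    events.append("irq_enable")
    irqs_marked_off = False

trace_hardirqs_off()
trace_hardirqs_off()           # nested disable: suppressed
trace_hardirqs_on()
trace_hardirqs_on()            # redundant enable: suppressed
print(events)
```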

Joel Fernandes (2):
  tracing: Prepare to add preempt and irq trace events
  tracing: Add support for preempt and irq enable/disable events

 include/linux/ftrace.h            |   3 +-
 include/trace/events/preemptirq.h |  70 ++++++++++++++++++++
 kernel/trace/Kconfig              |  11 ++++
 kernel/trace/Makefile             |   1 +
 kernel/trace/trace_irqsoff.c      | 133 ++++++++++++++++++++++++++++++--------
 5 files changed, 191 insertions(+), 27 deletions(-)
 create mode 100644 include/trace/events/preemptirq.h

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@android.com
-- 
2.14.2.920.gcf0c67979c-goog


* [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events
  2017-10-06  0:54 [PATCH v8 0/2] tracing: Add trace events for preemption and irq disable/enable Joel Fernandes
@ 2017-10-06  0:54 ` Joel Fernandes
  2017-10-10 22:19   ` Steven Rostedt
  2017-10-06  0:54 ` [PATCH v8 2/2] tracing: Add support for preempt and irq enable/disable events Joel Fernandes
  1 sibling, 1 reply; 7+ messages in thread
From: Joel Fernandes @ 2017-10-06  0:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: kernel-team, Joel Fernandes, Steven Rostedt, Peter Zijlstra

In preparation for adding irqsoff and preemptoff enable and disable trace
events, move the required functions and code around to make it easier to add
these events in a later patch. This patch is pure code movement with no
functional change.

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@android.com
Signed-off-by: Joel Fernandes <joelaf@google.com>
---
 kernel/trace/trace_irqsoff.c | 100 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 74 insertions(+), 26 deletions(-)

diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 7758bc0617cb..0e3033c00474 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -16,6 +16,7 @@
 
 #include "trace.h"
 
+#if defined(CONFIG_IRQSOFF_TRACER) || defined(CONFIG_PREEMPT_TRACER)
 static struct trace_array		*irqsoff_trace __read_mostly;
 static int				tracer_enabled __read_mostly;
 
@@ -462,64 +463,44 @@ void time_hardirqs_off(unsigned long a0, unsigned long a1)
 
 #else /* !CONFIG_PROVE_LOCKING */
 
-/*
- * Stubs:
- */
-
-void trace_softirqs_on(unsigned long ip)
-{
-}
-
-void trace_softirqs_off(unsigned long ip)
-{
-}
-
-inline void print_irqtrace_events(struct task_struct *curr)
-{
-}
-
 /*
  * We are only interested in hardirq on/off events:
  */
-void trace_hardirqs_on(void)
+static inline void tracer_hardirqs_on(void)
 {
 	if (!preempt_trace() && irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
-EXPORT_SYMBOL(trace_hardirqs_on);
 
-void trace_hardirqs_off(void)
+static inline void tracer_hardirqs_off(void)
 {
 	if (!preempt_trace() && irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
-EXPORT_SYMBOL(trace_hardirqs_off);
 
-__visible void trace_hardirqs_on_caller(unsigned long caller_addr)
+static inline void tracer_hardirqs_on_caller(unsigned long caller_addr)
 {
 	if (!preempt_trace() && irq_trace())
 		stop_critical_timing(CALLER_ADDR0, caller_addr);
 }
-EXPORT_SYMBOL(trace_hardirqs_on_caller);
 
-__visible void trace_hardirqs_off_caller(unsigned long caller_addr)
+static inline void tracer_hardirqs_off_caller(unsigned long caller_addr)
 {
 	if (!preempt_trace() && irq_trace())
 		start_critical_timing(CALLER_ADDR0, caller_addr);
 }
-EXPORT_SYMBOL(trace_hardirqs_off_caller);
 
 #endif /* CONFIG_PROVE_LOCKING */
 #endif /*  CONFIG_IRQSOFF_TRACER */
 
 #ifdef CONFIG_PREEMPT_TRACER
-void trace_preempt_on(unsigned long a0, unsigned long a1)
+static inline void tracer_preempt_on(unsigned long a0, unsigned long a1)
 {
 	if (preempt_trace() && !irq_trace())
 		stop_critical_timing(a0, a1);
 }
 
-void trace_preempt_off(unsigned long a0, unsigned long a1)
+static inline void tracer_preempt_off(unsigned long a0, unsigned long a1)
 {
 	if (preempt_trace() && !irq_trace())
 		start_critical_timing(a0, a1);
@@ -781,3 +762,70 @@ __init static int init_irqsoff_tracer(void)
 	return 0;
 }
 core_initcall(init_irqsoff_tracer);
+#endif /* IRQSOFF_TRACER || PREEMPTOFF_TRACER */
+
+#ifndef CONFIG_IRQSOFF_TRACER
+static inline void tracer_hardirqs_on(void) { }
+static inline void tracer_hardirqs_off(void) { }
+static inline void tracer_hardirqs_on_caller(unsigned long caller_addr) { }
+static inline void tracer_hardirqs_off_caller(unsigned long caller_addr) { }
+#endif
+
+#ifndef CONFIG_PREEMPT_TRACER
+static inline void tracer_preempt_on(unsigned long a0, unsigned long a1) { }
+static inline void tracer_preempt_off(unsigned long a0, unsigned long a1) { }
+#endif
+
+#if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PROVE_LOCKING)
+void trace_hardirqs_on(void)
+{
+	tracer_hardirqs_on();
+}
+EXPORT_SYMBOL(trace_hardirqs_on);
+
+void trace_hardirqs_off(void)
+{
+	tracer_hardirqs_off();
+}
+EXPORT_SYMBOL(trace_hardirqs_off);
+
+__visible void trace_hardirqs_on_caller(unsigned long caller_addr)
+{
+	tracer_hardirqs_on_caller(caller_addr);
+}
+EXPORT_SYMBOL(trace_hardirqs_on_caller);
+
+__visible void trace_hardirqs_off_caller(unsigned long caller_addr)
+{
+	tracer_hardirqs_off_caller(caller_addr);
+}
+EXPORT_SYMBOL(trace_hardirqs_off_caller);
+
+/*
+ * Stubs:
+ */
+
+void trace_softirqs_on(unsigned long ip)
+{
+}
+
+void trace_softirqs_off(unsigned long ip)
+{
+}
+
+inline void print_irqtrace_events(struct task_struct *curr)
+{
+}
+#endif
+
+#ifdef CONFIG_PREEMPT_TRACER
+void trace_preempt_on(unsigned long a0, unsigned long a1)
+{
+	tracer_preempt_on(a0, a1);
+}
+
+void trace_preempt_off(unsigned long a0, unsigned long a1)
+{
+	tracer_preempt_off(a0, a1);
+}
+#endif
-- 
2.14.2.920.gcf0c67979c-goog


* [PATCH v8 2/2] tracing: Add support for preempt and irq enable/disable events
  2017-10-06  0:54 [PATCH v8 0/2] tracing: Add trace events for preemption and irq disable/enable Joel Fernandes
  2017-10-06  0:54 ` [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events Joel Fernandes
@ 2017-10-06  0:54 ` Joel Fernandes
  1 sibling, 0 replies; 7+ messages in thread
From: Joel Fernandes @ 2017-10-06  0:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: kernel-team, Joel Fernandes, Steven Rostedt, Peter Zijlstra

Preempt and irq trace events can be used to trace the start and end of an
atomic section. A trace viewer such as systrace can then display atomic
sections graphically and correlate them with latencies and scheduling
issues.

This also serves as a prelude to rewriting the preempt and irqsoff tracers
on top of synthetic events or probes, with the numerous benefits that the
trace events infrastructure brings to these events.
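One layout detail of the event class added in this patch: rather than
recording two full pointers per event, it records 32-bit offsets of the
caller and parent addresses from _stext and rebuilds the pointers at print
time. A sketch of that encoding (illustrative addresses only, not real
kernel symbols):

```python
# Sketch of the offset encoding used by the preemptirq_template event
# class: store u32 offsets from the kernel text base (_stext) instead of
# full 64-bit pointers, halving the per-event payload. Values are made up.
STEXT = 0xffffffff81000000   # hypothetical _stext

def encode(ip, parent_ip):
    """TP_fast_assign: keep only 32-bit offsets from _stext."""
    return (ip - STEXT) & 0xffffffff, (parent_ip - STEXT) & 0xffffffff

def decode(caller_offs, parent_offs):
    """TP_printk: rebuild the addresses by adding _stext back."""
    return STEXT + caller_offs, STEXT + parent_offs

ip, parent = 0xffffffff811234a0, 0xffffffff8145beef
assert decode(*encode(ip, parent)) == (ip, parent)   # round-trips exactly
```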

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@android.com
Signed-off-by: Joel Fernandes <joelaf@google.com>
---
 include/linux/ftrace.h            |  3 +-
 include/trace/events/preemptirq.h | 70 +++++++++++++++++++++++++++++++++++++++
 kernel/trace/Kconfig              | 11 ++++++
 kernel/trace/Makefile             |  1 +
 kernel/trace/trace_irqsoff.c      | 35 +++++++++++++++++++-
 5 files changed, 118 insertions(+), 2 deletions(-)
 create mode 100644 include/trace/events/preemptirq.h

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 2e028854bac7..768e49b8c80f 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -742,7 +742,8 @@ static inline unsigned long get_lock_parent_ip(void)
   static inline void time_hardirqs_off(unsigned long a0, unsigned long a1) { }
 #endif
 
-#ifdef CONFIG_PREEMPT_TRACER
+#if defined(CONFIG_PREEMPT_TRACER) || \
+	(defined(CONFIG_DEBUG_PREEMPT) && defined(CONFIG_PREEMPTIRQ_EVENTS))
   extern void trace_preempt_on(unsigned long a0, unsigned long a1);
   extern void trace_preempt_off(unsigned long a0, unsigned long a1);
 #else
diff --git a/include/trace/events/preemptirq.h b/include/trace/events/preemptirq.h
new file mode 100644
index 000000000000..f5024c560d8f
--- /dev/null
+++ b/include/trace/events/preemptirq.h
@@ -0,0 +1,70 @@
+#ifdef CONFIG_PREEMPTIRQ_EVENTS
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM preemptirq
+
+#if !defined(_TRACE_PREEMPTIRQ_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_PREEMPTIRQ_H
+
+#include <linux/ktime.h>
+#include <linux/tracepoint.h>
+#include <linux/string.h>
+#include <asm/sections.h>
+
+DECLARE_EVENT_CLASS(preemptirq_template,
+
+	TP_PROTO(unsigned long ip, unsigned long parent_ip),
+
+	TP_ARGS(ip, parent_ip),
+
+	TP_STRUCT__entry(
+		__field(u32, caller_offs)
+		__field(u32, parent_offs)
+	),
+
+	TP_fast_assign(
+		__entry->caller_offs = (u32)(ip - (unsigned long)_stext);
+		__entry->parent_offs = (u32)(parent_ip - (unsigned long)_stext);
+	),
+
+	TP_printk("caller=%pF parent=%pF",
+		  (void *)((unsigned long)(_stext) + __entry->caller_offs),
+		  (void *)((unsigned long)(_stext) + __entry->parent_offs))
+);
+
+#ifndef CONFIG_PROVE_LOCKING
+DEFINE_EVENT(preemptirq_template, irq_disable,
+	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
+	     TP_ARGS(ip, parent_ip));
+
+DEFINE_EVENT(preemptirq_template, irq_enable,
+	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
+	     TP_ARGS(ip, parent_ip));
+#endif
+
+#ifdef CONFIG_DEBUG_PREEMPT
+DEFINE_EVENT(preemptirq_template, preempt_disable,
+	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
+	     TP_ARGS(ip, parent_ip));
+
+DEFINE_EVENT(preemptirq_template, preempt_enable,
+	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
+	     TP_ARGS(ip, parent_ip));
+#endif
+
+#endif /* _TRACE_PREEMPTIRQ_H */
+
+#include <trace/define_trace.h>
+
+#else /* !CONFIG_PREEMPTIRQ_EVENTS */
+
+#define trace_irq_enable(...)
+#define trace_irq_disable(...)
+#define trace_preempt_enable(...)
+#define trace_preempt_disable(...)
+#define trace_irq_enable_rcuidle(...)
+#define trace_irq_disable_rcuidle(...)
+#define trace_preempt_enable_rcuidle(...)
+#define trace_preempt_disable_rcuidle(...)
+
+#endif
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 434c840e2d82..b8395a020821 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -160,6 +160,17 @@ config FUNCTION_GRAPH_TRACER
 	  address on the current task structure into a stack of calls.
 
 
+config PREEMPTIRQ_EVENTS
+	bool "Enable trace events for preempt and irq disable/enable"
+	select TRACE_IRQFLAGS
+	depends on DEBUG_PREEMPT || !PROVE_LOCKING
+	default n
+	help
+	  Enable tracing of disable and enable events for preemption and irqs.
+	  For tracing preempt disable/enable events, DEBUG_PREEMPT must be
+	  enabled. For tracing irq disable/enable events, PROVE_LOCKING must
+	  be disabled.
+
 config IRQSOFF_TRACER
 	bool "Interrupts-off Latency Tracer"
 	default n
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index 90f2701d92a7..9f62eee61f14 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_TRACING) += trace_printk.o
 obj-$(CONFIG_TRACING_MAP) += tracing_map.o
 obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
 obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
+obj-$(CONFIG_PREEMPTIRQ_EVENTS) += trace_irqsoff.o
 obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
 obj-$(CONFIG_PREEMPT_TRACER) += trace_irqsoff.o
 obj-$(CONFIG_SCHED_TRACER) += trace_sched_wakeup.o
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 0e3033c00474..b1219780599b 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -16,6 +16,9 @@
 
 #include "trace.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/preemptirq.h>
+
 #if defined(CONFIG_IRQSOFF_TRACER) || defined(CONFIG_PREEMPT_TRACER)
 static struct trace_array		*irqsoff_trace __read_mostly;
 static int				tracer_enabled __read_mostly;
@@ -776,27 +779,54 @@ static inline void tracer_preempt_on(unsigned long a0, unsigned long a1) { }
 static inline void tracer_preempt_off(unsigned long a0, unsigned long a1) { }
 #endif
 
+/* Per-cpu variable to prevent redundant calls when IRQs already off */
+static DEFINE_PER_CPU(int, tracing_irq_cpu);
+
 #if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PROVE_LOCKING)
 void trace_hardirqs_on(void)
 {
+	if (!this_cpu_read(tracing_irq_cpu))
+		return;
+
+	trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
 	tracer_hardirqs_on();
+
+	this_cpu_write(tracing_irq_cpu, 0);
 }
 EXPORT_SYMBOL(trace_hardirqs_on);
 
 void trace_hardirqs_off(void)
 {
+	if (this_cpu_read(tracing_irq_cpu))
+		return;
+
+	this_cpu_write(tracing_irq_cpu, 1);
+
+	trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
 	tracer_hardirqs_off();
 }
 EXPORT_SYMBOL(trace_hardirqs_off);
 
 __visible void trace_hardirqs_on_caller(unsigned long caller_addr)
 {
+	if (!this_cpu_read(tracing_irq_cpu))
+		return;
+
+	trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
 	tracer_hardirqs_on_caller(caller_addr);
+
+	this_cpu_write(tracing_irq_cpu, 0);
 }
 EXPORT_SYMBOL(trace_hardirqs_on_caller);
 
 __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
 {
+	if (this_cpu_read(tracing_irq_cpu))
+		return;
+
+	this_cpu_write(tracing_irq_cpu, 1);
+
+	trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
 	tracer_hardirqs_off_caller(caller_addr);
 }
 EXPORT_SYMBOL(trace_hardirqs_off_caller);
@@ -818,14 +848,17 @@ inline void print_irqtrace_events(struct task_struct *curr)
 }
 #endif
 
-#ifdef CONFIG_PREEMPT_TRACER
+#if defined(CONFIG_PREEMPT_TRACER) || \
+	(defined(CONFIG_DEBUG_PREEMPT) && defined(CONFIG_PREEMPTIRQ_EVENTS))
 void trace_preempt_on(unsigned long a0, unsigned long a1)
 {
+	trace_preempt_enable_rcuidle(a0, a1);
 	tracer_preempt_on(a0, a1);
 }
 
 void trace_preempt_off(unsigned long a0, unsigned long a1)
 {
+	trace_preempt_disable_rcuidle(a0, a1);
 	tracer_preempt_off(a0, a1);
 }
 #endif
-- 
2.14.2.920.gcf0c67979c-goog


* Re: [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events
  2017-10-06  0:54 ` [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events Joel Fernandes
@ 2017-10-10 22:19   ` Steven Rostedt
  2017-10-10 22:32     ` Joel Fernandes
  0 siblings, 1 reply; 7+ messages in thread
From: Steven Rostedt @ 2017-10-10 22:19 UTC (permalink / raw)
  To: Joel Fernandes; +Cc: linux-kernel, kernel-team, Peter Zijlstra

On Thu,  5 Oct 2017 17:54:31 -0700
Joel Fernandes <joelaf@google.com> wrote:

> In preparation for adding irqsoff and preemptoff enable and disable trace
> events, move the required functions and code around to make it easier to add
> these events in a later patch. This patch is pure code movement with no
> functional change.

New warning:

 kernel/trace/trace_irqsoff.c:783:69: warning: 'tracing_irq_cpu' defined but not used [-Wunused-variable]

Looks pretty obvious what the fix is. Care to send a v9?

-- Steve



* Re: [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events
  2017-10-10 22:19   ` Steven Rostedt
@ 2017-10-10 22:32     ` Joel Fernandes
  2017-10-10 22:44       ` Steven Rostedt
  0 siblings, 1 reply; 7+ messages in thread
From: Joel Fernandes @ 2017-10-10 22:32 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: LKML, kernel-team, Peter Zijlstra

On Tue, Oct 10, 2017 at 3:19 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Thu,  5 Oct 2017 17:54:31 -0700
> Joel Fernandes <joelaf@google.com> wrote:
>
>> In preparation for adding irqsoff and preemptoff enable and disable trace
>> events, move the required functions and code around to make it easier to add
>> these events in a later patch. This patch is pure code movement with no
>> functional change.
>
> New warning:
>
>  kernel/trace/trace_irqsoff.c:783:69: warning: 'tracing_irq_cpu' defined but not used [-Wunused-variable]
>
> Looks pretty obvious what the fix is. Care to send a v9?

Yes, I'll fix it and send a v9. Sorry about that.

thanks,

- Joel


* Re: [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events
  2017-10-10 22:32     ` Joel Fernandes
@ 2017-10-10 22:44       ` Steven Rostedt
  2017-10-10 22:54         ` Joel Fernandes
  0 siblings, 1 reply; 7+ messages in thread
From: Steven Rostedt @ 2017-10-10 22:44 UTC (permalink / raw)
  To: Joel Fernandes; +Cc: LKML, kernel-team, Peter Zijlstra

On Tue, 10 Oct 2017 15:32:27 -0700
Joel Fernandes <joelaf@google.com> wrote:

> On Tue, Oct 10, 2017 at 3:19 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > On Thu,  5 Oct 2017 17:54:31 -0700
> > Joel Fernandes <joelaf@google.com> wrote:
> >  
> >> In preparation for adding irqsoff and preemptoff enable and disable trace
> >> events, move the required functions and code around to make it easier to add
> >> these events in a later patch. This patch is pure code movement with no
> >> functional change.
> >
> > New warning:
> >
> >  kernel/trace/trace_irqsoff.c:783:69: warning: 'tracing_irq_cpu' defined but not used [-Wunused-variable]
> >
> > Looks pretty obvious what the fix is. Care to send a v9?  
> 
> Yes, I'll fix it and send a v9. Sorry about that.
> 

Just send the patch for this one. Don't need to send a series.

Heck, call it v8.1 ;-)

-- Steve


* Re: [PATCH v8 1/2] tracing: Prepare to add preempt and irq trace events
  2017-10-10 22:44       ` Steven Rostedt
@ 2017-10-10 22:54         ` Joel Fernandes
  0 siblings, 0 replies; 7+ messages in thread
From: Joel Fernandes @ 2017-10-10 22:54 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: LKML, kernel-team, Peter Zijlstra

On Tue, Oct 10, 2017 at 3:44 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Tue, 10 Oct 2017 15:32:27 -0700
> Joel Fernandes <joelaf@google.com> wrote:
>
>> On Tue, Oct 10, 2017 at 3:19 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>> > On Thu,  5 Oct 2017 17:54:31 -0700
>> > Joel Fernandes <joelaf@google.com> wrote:
>> >
>> >> In preparation for adding irqsoff and preemptoff enable and disable trace
>> >> events, move the required functions and code around to make it easier to add
>> >> these events in a later patch. This patch is pure code movement with no
>> >> functional change.
>> >
>> > New warning:
>> >
>> >  kernel/trace/trace_irqsoff.c:783:69: warning: 'tracing_irq_cpu' defined but not used [-Wunused-variable]
>> >
>> > Looks pretty obvious what the fix is. Care to send a v9?
>>
>> Yes, I'll fix it and send a v9. Sorry about that.
>>
>
> Just send the patch for this one. Don't need to send a series.
>
> Heck, call it v8.1 ;-)

Done. The variable was introduced in patch 2/2, so I fixed that up and
resent it as v8.1 ;-)

thanks,

- Joel

