linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/14] event tracing expose change and bugfix/cleanup
@ 2013-03-27  9:48 zhangwei(Jovi)
  2013-03-27  9:48 ` zhangwei(Jovi)
                   ` (15 more replies)
  0 siblings, 16 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Hi Steven,

This patchset contains:
1) event tracing expose work (v2)
   This expose work has been rewritten compared with v1; the new
   implementation is based on the multi-instance buffer work, and it also
   converts the syscall tracing code to use the same event backend store
   mechanism.
   The change covers patches 1-7 (patch 2 also fixes a long-standing minor bug).

2) some cleanup
   This covers patches 8-12.

3) patch 13 fixes a libtraceevent warning

4) patch 14 fixes a regression in perf function tracing

Note that these patches are based on the latest linux-trace git tree
(on top of the multi-instance buffer implementation):

    git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
    tip/perf/core

All patches pass basic testing.

zhangwei(Jovi) (14):
  tracing: move trace_array definition into include/linux/trace_array.h
  tracing: fix irqs-off tag display in syscall tracing
  tracing: expose event tracing infrastructure
  tracing: add private data field into struct ftrace_event_file
  tracing: switch syscall tracing to use event_trace_ops backend
  tracing: export syscall metadata
  tracing: expose structure ftrace_event_field
  tracing: remove TRACE_EVENT_TYPE enum definition
  tracing: remove obsolete macro guard _TRACE_PROFILE_INIT
  tracing: remove ftrace(...) function
  tracing: use per trace_array clock_id instead of global
    trace_clock_id
  tracing: guard tracing_selftest_disabled by
    CONFIG_FTRACE_STARTUP_TEST
  libtraceevent: add libtraceevent prefix in warning message
  tracing: fix regression of perf function tracing

 include/linux/ftrace_event.h       |   38 ++++++++++
 include/linux/trace_array.h        |  118 +++++++++++++++++++++++++++++
 include/trace/ftrace.h             |   71 ++++++------------
 include/trace/syscall.h            |    1 +
 kernel/trace/ftrace.c              |    7 +-
 kernel/trace/trace.c               |   27 +++----
 kernel/trace/trace.h               |  143 +-----------------------------------
 kernel/trace/trace_events.c        |   55 ++++++++++++++
 kernel/trace/trace_syscalls.c      |   51 ++++++-------
 tools/lib/traceevent/event-parse.c |    2 +-
 10 files changed, 277 insertions(+), 236 deletions(-)
 create mode 100644 include/linux/trace_array.h

-- 
1.7.9.7




* [PATCH 00/14] event tracing expose change and bugfix/cleanup
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 01/14] tracing: move trace_array definition into include/linux/trace_array.h zhangwei(Jovi)
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Hi Steven,

This patchset contains:
1) event tracing expose work (v2)
   This expose work has been rewritten compared with v1; the new
   implementation is based on the multi-instance buffer work, and it also
   converts the syscall tracing code to use the same event backend store
   mechanism.
   The change covers patches 1-7 (patch 2 also fixes a long-standing minor bug).

2) some cleanup
   This covers patches 8-12.

3) patch 13 fixes a libtraceevent warning

4) patch 14 fixes a regression in perf function tracing
   This patch also needs to be sent to the stable tree.

Note that these patches are based on the latest linux-trace git tree
(on top of the multi-instance buffer implementation):

    git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
    tip/perf/core

All patches pass basic testing.

zhangwei(Jovi) (14):
  tracing: move trace_array definition into include/linux/trace_array.h
  tracing: fix irqs-off tag display in syscall tracing
  tracing: expose event tracing infrastructure
  tracing: add private data field into struct ftrace_event_file
  tracing: switch syscall tracing to use event_trace_ops backend
  tracing: export syscall metadata
  tracing: expose structure ftrace_event_field
  tracing: remove TRACE_EVENT_TYPE enum definition
  tracing: remove obsolete macro guard _TRACE_PROFILE_INIT
  tracing: remove ftrace(...) function
  tracing: use per trace_array clock_id instead of global
    trace_clock_id
  tracing: guard tracing_selftest_disabled by
    CONFIG_FTRACE_STARTUP_TEST
  libtraceevent: add libtraceevent prefix in warning message
  tracing: fix regression of perf function tracing

 include/linux/ftrace_event.h       |   38 ++++++++++
 include/linux/trace_array.h        |  118 +++++++++++++++++++++++++++++
 include/trace/ftrace.h             |   71 ++++++------------
 include/trace/syscall.h            |    1 +
 kernel/trace/ftrace.c              |    7 +-
 kernel/trace/trace.c               |   27 +++----
 kernel/trace/trace.h               |  143 +-----------------------------------
 kernel/trace/trace_events.c        |   55 ++++++++++++++
 kernel/trace/trace_syscalls.c      |   51 ++++++-------
 tools/lib/traceevent/event-parse.c |    2 +-
 10 files changed, 277 insertions(+), 236 deletions(-)
 create mode 100644 include/linux/trace_array.h

-- 
1.7.9.7




* [PATCH 01/14] tracing: move trace_array definition into include/linux/trace_array.h
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
  2013-03-27  9:48 ` zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 02/14] tracing: fix irqs-off tag display in syscall tracing zhangwei(Jovi)
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Prepare for exposing the event tracing infrastructure.
(struct trace_array will be used by external modules.)

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 include/linux/trace_array.h |  117 +++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/trace.h        |  116 +-----------------------------------------
 2 files changed, 118 insertions(+), 115 deletions(-)
 create mode 100644 include/linux/trace_array.h

diff --git a/include/linux/trace_array.h b/include/linux/trace_array.h
new file mode 100644
index 0000000..c5b7a13
--- /dev/null
+++ b/include/linux/trace_array.h
@@ -0,0 +1,117 @@
+#ifndef _LINUX_KERNEL_TRACE_ARRAY_H
+#define _LINUX_KERNEL_TRACE_ARRAY_H
+
+#ifdef CONFIG_FTRACE_SYSCALLS
+#include <asm/unistd.h>		/* For NR_SYSCALLS	     */
+#include <asm/syscall.h>	/* some archs define it here */
+#endif
+
+struct trace_cpu {
+	struct trace_array	*tr;
+	struct dentry		*dir;
+	int			cpu;
+};
+
+/*
+ * The CPU trace array - it consists of thousands of trace entries
+ * plus some other descriptor data: (for example which task started
+ * the trace, etc.)
+ */
+struct trace_array_cpu {
+	struct trace_cpu	trace_cpu;
+	atomic_t		disabled;
+	void			*buffer_page;	/* ring buffer spare */
+
+	unsigned long		entries;
+	unsigned long		saved_latency;
+	unsigned long		critical_start;
+	unsigned long		critical_end;
+	unsigned long		critical_sequence;
+	unsigned long		nice;
+	unsigned long		policy;
+	unsigned long		rt_priority;
+	unsigned long		skipped_entries;
+	cycle_t			preempt_timestamp;
+	pid_t			pid;
+	kuid_t			uid;
+	char			comm[TASK_COMM_LEN];
+};
+
+struct tracer;
+
+struct trace_buffer {
+	struct trace_array		*tr;
+	struct ring_buffer		*buffer;
+	struct trace_array_cpu __percpu	*data;
+	cycle_t				time_start;
+	int				cpu;
+};
+
+/*
+ * The trace array - an array of per-CPU trace arrays. This is the
+ * highest level data structure that individual tracers deal with.
+ * They have on/off state as well:
+ */
+struct trace_array {
+	struct list_head	list;
+	char			*name;
+	struct trace_buffer	trace_buffer;
+#ifdef CONFIG_TRACER_MAX_TRACE
+	/*
+	 * The max_buffer is used to snapshot the trace when a maximum
+	 * latency is reached, or when the user initiates a snapshot.
+	 * Some tracers will use this to store a maximum trace while
+	 * it continues examining live traces.
+	 *
+	 * The buffers for the max_buffer are set up the same as the trace_buffer
+	 * When a snapshot is taken, the buffer of the max_buffer is swapped
+	 * with the buffer of the trace_buffer and the buffers are reset for
+	 * the trace_buffer so the tracing can continue.
+	 */
+	struct trace_buffer	max_buffer;
+	bool			allocated_snapshot;
+#endif
+	int			buffer_disabled;
+	struct trace_cpu	trace_cpu;	/* place holder */
+#ifdef CONFIG_FTRACE_SYSCALLS
+	int			sys_refcount_enter;
+	int			sys_refcount_exit;
+	DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls);
+	DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls);
+#endif
+	int			stop_count;
+	int			clock_id;
+	struct tracer		*current_trace;
+	unsigned int		flags;
+	raw_spinlock_t		start_lock;
+	struct dentry		*dir;
+	struct dentry		*options;
+	struct dentry		*percpu_dir;
+	struct dentry		*event_dir;
+	struct list_head	systems;
+	struct list_head	events;
+	struct task_struct	*waiter;
+	int			ref;
+};
+
+enum {
+	TRACE_ARRAY_FL_GLOBAL	= (1 << 0)
+};
+
+extern struct list_head ftrace_trace_arrays;
+
+/*
+ * The global tracer (top) should be the first trace array added,
+ * but we check the flag anyway.
+ */
+static inline struct trace_array *top_trace_array(void)
+{
+	struct trace_array *tr;
+
+	tr = list_entry(ftrace_trace_arrays.prev,
+			typeof(*tr), list);
+	WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));
+	return tr;
+}
+
+#endif /* _LINUX_KERNEL_TRACE_ARRAY_H */
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9e01458..a8acfcd 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -12,11 +12,7 @@
 #include <linux/hw_breakpoint.h>
 #include <linux/trace_seq.h>
 #include <linux/ftrace_event.h>
-
-#ifdef CONFIG_FTRACE_SYSCALLS
-#include <asm/unistd.h>		/* For NR_SYSCALLS	     */
-#include <asm/syscall.h>	/* some archs define it here */
-#endif
+#include <linux/trace_array.h>
 
 enum trace_type {
 	__TRACE_FIRST_TYPE = 0,
@@ -133,116 +129,6 @@ enum trace_flag_type {
 
 #define TRACE_BUF_SIZE		1024
 
-struct trace_array;
-
-struct trace_cpu {
-	struct trace_array	*tr;
-	struct dentry		*dir;
-	int			cpu;
-};
-
-/*
- * The CPU trace array - it consists of thousands of trace entries
- * plus some other descriptor data: (for example which task started
- * the trace, etc.)
- */
-struct trace_array_cpu {
-	struct trace_cpu	trace_cpu;
-	atomic_t		disabled;
-	void			*buffer_page;	/* ring buffer spare */
-
-	unsigned long		entries;
-	unsigned long		saved_latency;
-	unsigned long		critical_start;
-	unsigned long		critical_end;
-	unsigned long		critical_sequence;
-	unsigned long		nice;
-	unsigned long		policy;
-	unsigned long		rt_priority;
-	unsigned long		skipped_entries;
-	cycle_t			preempt_timestamp;
-	pid_t			pid;
-	kuid_t			uid;
-	char			comm[TASK_COMM_LEN];
-};
-
-struct tracer;
-
-struct trace_buffer {
-	struct trace_array		*tr;
-	struct ring_buffer		*buffer;
-	struct trace_array_cpu __percpu	*data;
-	cycle_t				time_start;
-	int				cpu;
-};
-
-/*
- * The trace array - an array of per-CPU trace arrays. This is the
- * highest level data structure that individual tracers deal with.
- * They have on/off state as well:
- */
-struct trace_array {
-	struct list_head	list;
-	char			*name;
-	struct trace_buffer	trace_buffer;
-#ifdef CONFIG_TRACER_MAX_TRACE
-	/*
-	 * The max_buffer is used to snapshot the trace when a maximum
-	 * latency is reached, or when the user initiates a snapshot.
-	 * Some tracers will use this to store a maximum trace while
-	 * it continues examining live traces.
-	 *
-	 * The buffers for the max_buffer are set up the same as the trace_buffer
-	 * When a snapshot is taken, the buffer of the max_buffer is swapped
-	 * with the buffer of the trace_buffer and the buffers are reset for
-	 * the trace_buffer so the tracing can continue.
-	 */
-	struct trace_buffer	max_buffer;
-	bool			allocated_snapshot;
-#endif
-	int			buffer_disabled;
-	struct trace_cpu	trace_cpu;	/* place holder */
-#ifdef CONFIG_FTRACE_SYSCALLS
-	int			sys_refcount_enter;
-	int			sys_refcount_exit;
-	DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls);
-	DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls);
-#endif
-	int			stop_count;
-	int			clock_id;
-	struct tracer		*current_trace;
-	unsigned int		flags;
-	raw_spinlock_t		start_lock;
-	struct dentry		*dir;
-	struct dentry		*options;
-	struct dentry		*percpu_dir;
-	struct dentry		*event_dir;
-	struct list_head	systems;
-	struct list_head	events;
-	struct task_struct	*waiter;
-	int			ref;
-};
-
-enum {
-	TRACE_ARRAY_FL_GLOBAL	= (1 << 0)
-};
-
-extern struct list_head ftrace_trace_arrays;
-
-/*
- * The global tracer (top) should be the first trace array added,
- * but we check the flag anyway.
- */
-static inline struct trace_array *top_trace_array(void)
-{
-	struct trace_array *tr;
-
-	tr = list_entry(ftrace_trace_arrays.prev,
-			typeof(*tr), list);
-	WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));
-	return tr;
-}
-
 #define FTRACE_CMP_TYPE(var, type) \
 	__builtin_types_compatible_p(typeof(var), type *)
 
-- 
1.7.9.7




* [PATCH 02/14] tracing: fix irqs-off tag display in syscall tracing
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
  2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 01/14] tracing: move trace_array definition into include/linux/trace_array.h zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 03/14] tracing: expose event tracing infrastructure zhangwei(Jovi)
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Currently the irqs-off tag shown for every syscall tracing entry is wrong:
a syscall enter entry does not actually disable irqs.

 [root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
 [root@jovi tracing]# cat trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 13/13   #P:2
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
       irqbalance-513   [000] d... 56115.496766: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
       irqbalance-513   [000] d... 56115.497008: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
         sendmail-771   [000] d... 56115.827982: sys_open(filename: b770e6d1, flags: 0, mode: 1b6)

The reason is that syscall tracing does not record irq_flags into the
ring buffer. With this patch applied:

 [root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
 [root@jovi tracing]# cat trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 14/14   #P:2
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
       irqbalance-514   [001] ....    46.213921: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
       irqbalance-514   [001] ....    46.214160: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
            <...>-920   [001] ....    47.307260: sys_open(filename: 4e82a0c5, flags: 80000, mode: 0)

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/trace_syscalls.c |   21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 8f2ac73..322e164 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -306,6 +306,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 	struct syscall_metadata *sys_data;
 	struct ring_buffer_event *event;
 	struct ring_buffer *buffer;
+	unsigned long irq_flags;
+	int pc;
 	int syscall_nr;
 	int size;
 
@@ -321,9 +323,12 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 
 	size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
 
+	local_save_flags(irq_flags);
+	pc = preempt_count();
+
 	buffer = tr->trace_buffer.buffer;
 	event = trace_buffer_lock_reserve(buffer,
-			sys_data->enter_event->event.type, size, 0, 0);
+			sys_data->enter_event->event.type, size, irq_flags, pc);
 	if (!event)
 		return;
 
@@ -333,7 +338,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 
 	if (!filter_current_check_discard(buffer, sys_data->enter_event,
 					  entry, event))
-		trace_current_buffer_unlock_commit(buffer, event, 0, 0);
+		trace_current_buffer_unlock_commit(buffer, event,
+						   irq_flags, pc);
 }
 
 static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
@@ -343,6 +349,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	struct syscall_metadata *sys_data;
 	struct ring_buffer_event *event;
 	struct ring_buffer *buffer;
+	unsigned long irq_flags;
+	int pc;
 	int syscall_nr;
 
 	syscall_nr = trace_get_syscall_nr(current, regs);
@@ -355,9 +363,13 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	if (!sys_data)
 		return;
 
+	local_save_flags(irq_flags);
+	pc = preempt_count();
+
 	buffer = tr->trace_buffer.buffer;
 	event = trace_buffer_lock_reserve(buffer,
-			sys_data->exit_event->event.type, sizeof(*entry), 0, 0);
+			sys_data->exit_event->event.type, sizeof(*entry),
+			irq_flags, pc);
 	if (!event)
 		return;
 
@@ -367,7 +379,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 
 	if (!filter_current_check_discard(buffer, sys_data->exit_event,
 					  entry, event))
-		trace_current_buffer_unlock_commit(buffer, event, 0, 0);
+		trace_current_buffer_unlock_commit(buffer, event,
+						   irq_flags, pc);
 }
 
 static int reg_event_syscall_enter(struct ftrace_event_file *file,
-- 
1.7.9.7




* [PATCH 03/14] tracing: expose event tracing infrastructure
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (2 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 02/14] tracing: fix irqs-off tag display in syscall tracing zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 04/14] tracing: add private data field into struct ftrace_event_file zhangwei(Jovi)
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Currently event tracing can only be used by ftrace and perf; there is
no mechanism to let modules (such as external tracing tools) register
their own tracing callback functions.

Event tracing is implemented on top of tracepoints. Compared with raw
tracepoints, the event tracing infrastructure provides a built-in,
structured event annotation format; this feature should be exposed to
external users.

For example, this simple pseudo ktap script demonstrates how this
event tracing expose change could be used:

function event_trace(e)
{
        printf("%s", e.annotate);
}

os.trace("sched:sched_switch", event_trace);
os.trace("irq:softirq_raise", event_trace);

The resulting output:
sched_switch: prev_comm=rcu_sched prev_pid=10 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
softirq_raise: vec=1 [action=TIMER]
...

This expose change could also be used by other tracing tools, such as
SystemTap or LTTng, if they chose to implement support for it.

This patch introduces struct event_trace_ops, referenced from struct
trace_array; it has two callback functions, pre_trace and do_trace.
When an ftrace_raw_event_<call> function is hit, it calls the
registered event_trace_ops.
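
For reference, here is a minimal sketch (not part of this series) of
what an external backend built on these two callbacks might look like.
The names my_pre_trace(), my_do_trace(), my_event_ops and
consume_entry() are hypothetical, and a real backend would reserve
space in its own lock-free per-CPU buffer instead of calling kmalloc()
from the trace path:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/ftrace_event.h>

/* hypothetical consumer of one raw, structured event payload */
static void consume_entry(struct ftrace_event_file *file,
                          void *entry, int entry_size)
{
        /* a real backend would copy the payload into its own ring buffer */
        pr_debug("captured event payload of %d bytes\n", entry_size);
}

static void *my_pre_trace(struct ftrace_event_file *file,
                          int entry_size, void *data)
{
        struct trace_descriptor_t *desc = data;

        /* reserve storage for the entry; desc->data carries it to do_trace */
        desc->data = kmalloc(entry_size, GFP_ATOMIC);
        return desc->data;      /* NULL makes the caller drop this event */
}

static void my_do_trace(struct ftrace_event_file *file, void *entry,
                        int entry_size, void *data)
{
        /* 'entry' has now been filled in by the generated assign code */
        consume_entry(file, entry, entry_size);
        kfree(entry);
}

static struct event_trace_ops my_event_ops = {
        .pre_trace = my_pre_trace,
        .do_trace  = my_do_trace,
};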

A side benefit of this change is that the kernel size shrinks by ~18K.

(The kernel size will shrink further when the perf tracing code is
converted to use this mechanism in the future.)

text    data     bss     dec     hex filename
7402131  804364 3149824 11356319         ad489f vmlinux.old
7383115  804684 3149824 11337623         acff97 vmlinux.new

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 include/linux/ftrace_event.h |   26 ++++++++++++++++
 include/linux/trace_array.h  |    1 +
 include/trace/ftrace.h       |   69 +++++++++++++-----------------------------
 kernel/trace/trace.c         |    4 ++-
 kernel/trace/trace.h         |    2 ++
 kernel/trace/trace_events.c  |   53 ++++++++++++++++++++++++++++++++
 6 files changed, 106 insertions(+), 49 deletions(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 4e28b01..27d7a4f 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -6,6 +6,7 @@
 #include <linux/percpu.h>
 #include <linux/hardirq.h>
 #include <linux/perf_event.h>
+#include <linux/trace_array.h>
 
 struct trace_array;
 struct trace_buffer;
@@ -245,6 +246,31 @@ struct ftrace_event_call {
 #endif
 };
 
+struct ftrace_trace_descriptor_t {
+	struct ring_buffer_event *event;
+	struct ring_buffer *buffer;
+	unsigned long irq_flags;
+	int pc;
+};
+
+/*
+ * trace_descriptor_t is purpose for passing arguments between
+ * pre_trace and do_trace function.
+ * this definition is ugly, change it in future.
+ */
+struct trace_descriptor_t {
+	struct ftrace_trace_descriptor_t	f;
+	void  *data;
+};
+
+/* callback function for tracing */
+struct event_trace_ops {
+	void *(*pre_trace)(struct ftrace_event_file *file,
+			   int entry_size, void *data);
+	void (*do_trace)(struct ftrace_event_file *file, void *entry,
+			 int entry_size, void *data);
+};
+
 struct trace_array;
 struct ftrace_subsystem_dir;
 
diff --git a/include/linux/trace_array.h b/include/linux/trace_array.h
index c5b7a13..b362c5f 100644
--- a/include/linux/trace_array.h
+++ b/include/linux/trace_array.h
@@ -56,6 +56,7 @@ struct trace_array {
 	struct list_head	list;
 	char			*name;
 	struct trace_buffer	trace_buffer;
+	struct event_trace_ops	*ops;
 #ifdef CONFIG_TRACER_MAX_TRACE
 	/*
 	 * The max_buffer is used to snapshot the trace when a maximum
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 4bda044..e4ea38c 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -401,41 +401,28 @@ static inline notrace int ftrace_get_offsets_##call(			\
  *
  * static struct ftrace_event_call event_<call>;
  *
- * static void ftrace_raw_event_<call>(void *__data, proto)
+ * static notrace void ftrace_raw_event_##call(void *__data, proto)
  * {
  *	struct ftrace_event_file *ftrace_file = __data;
- *	struct ftrace_event_call *event_call = ftrace_file->event_call;
- *	struct ftrace_data_offsets_<call> __maybe_unused __data_offsets;
- *	struct ring_buffer_event *event;
- *	struct ftrace_raw_<call> *entry; <-- defined in stage 1
- *	struct ring_buffer *buffer;
- *	unsigned long irq_flags;
- *	int __data_size;
- *	int pc;
+ *	struct ftrace_data_offsets_##call __maybe_unused __data_offsets;
+ *	struct trace_descriptor_t __desc;
+ *	struct event_trace_ops *ops = ftrace_file->tr->ops;
+ *	struct ftrace_raw_##call *entry; <-- defined in stage 1
+ *	int __data_size, __entry_size;
  *
- *	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
- *		     &ftrace_file->flags))
- *		return;
- *
- *	local_save_flags(irq_flags);
- *	pc = preempt_count();
- *
- *	__data_size = ftrace_get_offsets_<call>(&__data_offsets, args);
+ *	__data_size = ftrace_get_offsets_##call(&__data_offsets, args);
+ *	__entry_size = sizeof(*entry) + __data_size;
  *
- *	event = trace_event_buffer_lock_reserve(&buffer, ftrace_file,
- *				  event_<call>->event.type,
- *				  sizeof(*entry) + __data_size,
- *				  irq_flags, pc);
- *	if (!event)
+ *	entry = ops->pre_trace(ftrace_file, __entry_size, &__desc);
+ *	if (!entry)
  *		return;
- *	entry	= ring_buffer_event_data(event);
+ *
+ *	tstruct
  *
  *	{ <assign>; }  <-- Here we assign the entries by the __field and
  *			   __array macros.
  *
- *	if (!filter_current_check_discard(buffer, event_call, entry, event))
- *		trace_nowake_buffer_unlock_commit(buffer,
- *						   event, irq_flags, pc);
+ *	ops->do_trace(ftrace_file, entry, __entry_size, &__desc);
  * }
  *
  * static struct trace_event ftrace_event_type_<call> = {
@@ -513,38 +500,24 @@ static notrace void							\
 ftrace_raw_event_##call(void *__data, proto)				\
 {									\
 	struct ftrace_event_file *ftrace_file = __data;			\
-	struct ftrace_event_call *event_call = ftrace_file->event_call;	\
 	struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
-	struct ring_buffer_event *event;				\
+	struct trace_descriptor_t __desc;				\
+	struct event_trace_ops *ops = ftrace_file->tr->ops;		\
 	struct ftrace_raw_##call *entry;				\
-	struct ring_buffer *buffer;					\
-	unsigned long irq_flags;					\
-	int __data_size;						\
-	int pc;								\
+	int __data_size, __entry_size;					\
 									\
-	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,			\
-		     &ftrace_file->flags))				\
-		return;							\
-									\
-	local_save_flags(irq_flags);					\
-	pc = preempt_count();						\
+ 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
+	__entry_size = sizeof(*entry) + __data_size;			\
 									\
-	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
-									\
-	event = trace_event_buffer_lock_reserve(&buffer, ftrace_file,	\
-				 event_call->event.type,		\
-				 sizeof(*entry) + __data_size,		\
-				 irq_flags, pc);			\
-	if (!event)							\
+	entry = ops->pre_trace(ftrace_file, __entry_size, &__desc);	\
+	if (!entry)							\
 		return;							\
-	entry	= ring_buffer_event_data(event);			\
 									\
 	tstruct								\
 									\
 	{ assign; }							\
 									\
-	if (!filter_current_check_discard(buffer, event_call, entry, event)) \
-		trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
+	ops->do_trace(ftrace_file, entry, __entry_size, &__desc);	\
 }
 /*
  * The ftrace_test_probe is compiled out, it is only here as a build time check
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 829b2be..224b152 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -189,7 +189,7 @@ unsigned long long ns2usecs(cycle_t nsec)
  * pages for the buffer for that CPU. Each CPU has the same number
  * of pages allocated for its buffer.
  */
-static struct trace_array	global_trace;
+static struct trace_array	global_trace = {.ops = &ftrace_events_ops};
 
 LIST_HEAD(ftrace_trace_arrays);
 
@@ -5773,6 +5773,8 @@ static int new_instance_create(const char *name)
 
 	list_add(&tr->list, &ftrace_trace_arrays);
 
+	tr->ops = &ftrace_events_ops;
+
 	mutex_unlock(&trace_types_lock);
 
 	return 0;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index a8acfcd..0a1f4be 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -493,6 +493,8 @@ extern unsigned long nsecs_to_usecs(unsigned long nsecs);
 
 extern unsigned long tracing_thresh;
 
+extern struct event_trace_ops ftrace_events_ops;
+
 #ifdef CONFIG_TRACER_MAX_TRACE
 extern unsigned long tracing_max_latency;
 
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 53582e9..23f4cfc 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -241,6 +241,59 @@ void trace_event_enable_cmd_record(bool enable)
 	mutex_unlock(&event_mutex);
 }
 
+static void *ftrace_events_pre_trace(struct ftrace_event_file *file,
+				     int entry_size, void *data)
+{
+	struct ftrace_event_call *event_call = file->event_call;
+	struct ftrace_trace_descriptor_t *desc = &((struct trace_descriptor_t *)
+						 data)->f;
+	struct ring_buffer_event *event;
+	struct ring_buffer *buffer;
+	unsigned long irq_flags;
+	int pc;
+
+	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags))
+		return NULL;
+
+	local_save_flags(irq_flags);
+	pc = preempt_count();
+
+	event = trace_event_buffer_lock_reserve(&buffer, file,
+						event_call->event.type,
+						entry_size, irq_flags, pc);
+
+	if (!event)
+		return NULL;
+
+	desc->event = event;
+	desc->buffer = buffer;
+	desc->irq_flags = irq_flags;
+	desc->pc = pc;
+
+	return ring_buffer_event_data(event);
+}
+
+static void ftrace_events_do_trace(struct ftrace_event_file *file, void *entry,
+				   int entry_size, void *data)
+{
+	struct ftrace_event_call *event_call = file->event_call;
+	struct ftrace_trace_descriptor_t *desc = &((struct trace_descriptor_t *)
+						 data)->f;
+	struct ring_buffer_event *event = desc->event;
+	struct ring_buffer *buffer = desc->buffer;
+	unsigned long irq_flags = desc->irq_flags;
+	int pc = desc->pc;
+
+	if (!filter_current_check_discard(buffer, event_call, entry, event))
+		trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
+}
+
+struct event_trace_ops ftrace_events_ops = {
+	.pre_trace = ftrace_events_pre_trace,
+	.do_trace  = ftrace_events_do_trace,
+};
+
+
 static int __ftrace_event_enable_disable(struct ftrace_event_file *file,
 					 int enable, int soft_disable)
 {
-- 
1.7.9.7




* [PATCH 04/14] tracing: add private data field into struct ftrace_event_file
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (3 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 03/14] tracing: expose event tracing infrastructure zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 05/14] tracing: switch syscall tracing to use event_trace_ops backend zhangwei(Jovi)
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Since modules can now register their own event_trace_ops tracing
callback functions, it is reasonable to allow them to attach their own
private data to struct ftrace_event_file.
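
As an illustration only (my_attach()/my_detach() do not exist in this
series), a module could hang its own per-event state off the new field
like this:

#include <linux/slab.h>
#include <linux/ftrace_event.h>

/* hypothetical per-event state owned by an external module */
struct my_event_state {
        unsigned long hits;
};

static int my_attach(struct ftrace_event_file *file)
{
        struct my_event_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

        if (!state)
                return -ENOMEM;
        file->data = state;             /* the new private data field */
        return 0;
}

static void my_detach(struct ftrace_event_file *file)
{
        kfree(file->data);
        file->data = NULL;
}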

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 include/linux/ftrace_event.h |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 27d7a4f..38c272a 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -319,6 +319,8 @@ struct ftrace_event_file {
 	 * caching and such. Which is mostly OK ;-)
 	 */
 	unsigned long		flags;
+
+	void *data; /* private data */
 };
 
 #define __TRACE_EVENT_FLAGS(name, value)				\
-- 
1.7.9.7




* [PATCH 05/14] tracing: switch syscall tracing to use event_trace_ops backend
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (4 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 04/14] tracing: add private data field into struct ftrace_event_file zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 06/14] tracing: export syscall metadata zhangwei(Jovi)
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Other tracepoints have already switched to event_trace_ops as the
backend store mechanism; syscall tracing can use the same backend.

This change also exposes syscall tracing to external modules with the
same interface as other tracepoints.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/trace_syscalls.c |   49 ++++++++++++++---------------------------
 1 file changed, 16 insertions(+), 33 deletions(-)

diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 322e164..72675b1 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -302,12 +302,10 @@ static int __init syscall_exit_define_fields(struct ftrace_event_call *call)
 static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 {
 	struct trace_array *tr = data;
+	struct ftrace_event_file event_file;
+	struct trace_descriptor_t desc;
 	struct syscall_trace_enter *entry;
 	struct syscall_metadata *sys_data;
-	struct ring_buffer_event *event;
-	struct ring_buffer *buffer;
-	unsigned long irq_flags;
-	int pc;
 	int syscall_nr;
 	int size;
 
@@ -323,34 +321,26 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 
 	size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
 
-	local_save_flags(irq_flags);
-	pc = preempt_count();
-
-	buffer = tr->trace_buffer.buffer;
-	event = trace_buffer_lock_reserve(buffer,
-			sys_data->enter_event->event.type, size, irq_flags, pc);
-	if (!event)
+	event_file.tr = tr;
+	event_file.event_call = sys_data->enter_event;
+	event_file.flags = FTRACE_EVENT_FL_ENABLED;
+	entry = tr->ops->pre_trace(&event_file, size, &desc);
+	if (!entry)
 		return;
 
-	entry = ring_buffer_event_data(event);
 	entry->nr = syscall_nr;
 	syscall_get_arguments(current, regs, 0, sys_data->nb_args, entry->args);
 
-	if (!filter_current_check_discard(buffer, sys_data->enter_event,
-					  entry, event))
-		trace_current_buffer_unlock_commit(buffer, event,
-						   irq_flags, pc);
+	tr->ops->do_trace(&event_file, entry, size, &desc);
 }
 
 static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 {
 	struct trace_array *tr = data;
+	struct ftrace_event_file event_file;
+	struct trace_descriptor_t desc;
 	struct syscall_trace_exit *entry;
 	struct syscall_metadata *sys_data;
-	struct ring_buffer_event *event;
-	struct ring_buffer *buffer;
-	unsigned long irq_flags;
-	int pc;
 	int syscall_nr;
 
 	syscall_nr = trace_get_syscall_nr(current, regs);
@@ -363,24 +353,17 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	if (!sys_data)
 		return;
 
-	local_save_flags(irq_flags);
-	pc = preempt_count();
-
-	buffer = tr->trace_buffer.buffer;
-	event = trace_buffer_lock_reserve(buffer,
-			sys_data->exit_event->event.type, sizeof(*entry),
-			irq_flags, pc);
-	if (!event)
+	event_file.tr = tr;
+	event_file.event_call = sys_data->exit_event;
+	event_file.flags = FTRACE_EVENT_FL_ENABLED;
+	entry = tr->ops->pre_trace(&event_file, sizeof(*entry), &desc);
+	if (!entry)
 		return;
 
-	entry = ring_buffer_event_data(event);
 	entry->nr = syscall_nr;
 	entry->ret = syscall_get_return_value(current, regs);
 
-	if (!filter_current_check_discard(buffer, sys_data->exit_event,
-					  entry, event))
-		trace_current_buffer_unlock_commit(buffer, event,
-						   irq_flags, pc);
+	tr->ops->do_trace(&event_file, entry, sizeof(*entry), &desc);
 }
 
 static int reg_event_syscall_enter(struct ftrace_event_file *file,
-- 
1.7.9.7




* [PATCH 06/14] tracing: export syscall metadata
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (5 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 05/14] tracing: switch syscall tracing to use event_trace_ops backend zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 07/14] tracing: expose structure ftrace_event_field zhangwei(Jovi)
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Syscall metadata is important, even mandatory, for kernel syscall
tracing, and external modules (like the ktap tool) may need this
metadata when performing syscall tracing, so export it.

Exporting the function syscall_nr_to_meta is safer than exporting the
syscalls_metadata variable directly.

This patch also renames syscall_nr_to_meta to trace_syscall_nr_to_meta.
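
A minimal sketch of how a module might use the exported helper
(show_syscall() is an illustrative name; it only relies on the name,
nb_args, types and args members of struct syscall_metadata declared in
include/trace/syscall.h):

#include <linux/kernel.h>
#include <trace/syscall.h>

/* pretty-print one syscall signature from the exported metadata */
static void show_syscall(int nr)
{
        struct syscall_metadata *meta = trace_syscall_nr_to_meta(nr);
        char buf[128] = "";
        int i, len = 0;

        if (!meta)
                return;         /* unknown or untraceable syscall number */

        for (i = 0; i < meta->nb_args && len < sizeof(buf); i++)
                len += snprintf(buf + len, sizeof(buf) - len, "%s%s %s",
                                i ? ", " : "", meta->types[i], meta->args[i]);
        pr_info("%s(%s)\n", meta->name, buf);
}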

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 include/trace/syscall.h       |    1 +
 kernel/trace/trace_syscalls.c |   15 ++++++++-------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/trace/syscall.h b/include/trace/syscall.h
index 84bc419..428634e 100644
--- a/include/trace/syscall.h
+++ b/include/trace/syscall.h
@@ -31,4 +31,5 @@ struct syscall_metadata {
 	struct ftrace_event_call *exit_event;
 };
 
+struct syscall_metadata *trace_syscall_nr_to_meta(int nr);
 #endif /* _TRACE_SYSCALL_H */
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 72675b1..d739471 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -98,13 +98,14 @@ find_syscall_meta(unsigned long syscall)
 	return NULL;
 }
 
-static struct syscall_metadata *syscall_nr_to_meta(int nr)
+struct syscall_metadata *trace_syscall_nr_to_meta(int nr)
 {
 	if (!syscalls_metadata || nr >= NR_syscalls || nr < 0)
 		return NULL;
 
 	return syscalls_metadata[nr];
 }
+EXPORT_SYMBOL_GPL(trace_syscall_nr_to_meta);
 
 static enum print_line_t
 print_syscall_enter(struct trace_iterator *iter, int flags,
@@ -118,7 +119,7 @@ print_syscall_enter(struct trace_iterator *iter, int flags,
 
 	trace = (typeof(trace))ent;
 	syscall = trace->nr;
-	entry = syscall_nr_to_meta(syscall);
+	entry = trace_syscall_nr_to_meta(syscall);
 
 	if (!entry)
 		goto end;
@@ -172,7 +173,7 @@ print_syscall_exit(struct trace_iterator *iter, int flags,
 
 	trace = (typeof(trace))ent;
 	syscall = trace->nr;
-	entry = syscall_nr_to_meta(syscall);
+	entry = trace_syscall_nr_to_meta(syscall);
 
 	if (!entry) {
 		trace_seq_printf(s, "\n");
@@ -315,7 +316,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 	if (!test_bit(syscall_nr, tr->enabled_enter_syscalls))
 		return;
 
-	sys_data = syscall_nr_to_meta(syscall_nr);
+	sys_data = trace_syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
 		return;
 
@@ -349,7 +350,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	if (!test_bit(syscall_nr, tr->enabled_exit_syscalls))
 		return;
 
-	sys_data = syscall_nr_to_meta(syscall_nr);
+	sys_data = trace_syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
 		return;
 
@@ -545,7 +546,7 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
 	if (!test_bit(syscall_nr, enabled_perf_enter_syscalls))
 		return;
 
-	sys_data = syscall_nr_to_meta(syscall_nr);
+	sys_data = trace_syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
 		return;
 
@@ -621,7 +622,7 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
 	if (!test_bit(syscall_nr, enabled_perf_exit_syscalls))
 		return;
 
-	sys_data = syscall_nr_to_meta(syscall_nr);
+	sys_data = trace_syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
 		return;
 
-- 
1.7.9.7




* [PATCH 07/14] tracing: expose structure ftrace_event_field
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (6 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 06/14] tracing: export syscall metadata zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 08/14] tracing: remove TRACE_EVENT_TYPE enum definition zhangwei(Jovi)
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Currently event tracing field information is stored only in
struct ftrace_event_field, and this structure is defined in the
internal kernel/trace/trace.h.
Move ftrace_event_field into include/linux/ftrace_event.h so that
external modules (like ktap) can use this structure to parse event
fields.
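
For illustration, a sketch of how an external module might walk these
field descriptions for one event (dump_event_fields() is a made-up
name; it assumes the field list is reached through the event class's
get_fields() callback, the same way the tracing core does):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/ftrace_event.h>

/* print the layout of every field of one event */
static void dump_event_fields(struct ftrace_event_call *call)
{
        struct list_head *head = call->class->get_fields(call);
        struct ftrace_event_field *field;

        list_for_each_entry(field, head, link) {
                /* offset/size locate the value inside the raw entry payload */
                pr_info("  %s %s: offset=%d size=%d signed=%d\n",
                        field->type, field->name,
                        field->offset, field->size, field->is_signed);
        }
}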

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 include/linux/ftrace_event.h |   10 ++++++++++
 kernel/trace/trace.h         |   10 ----------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 38c272a..c0de961 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -176,6 +176,16 @@ enum trace_reg {
 #endif
 };
 
+struct ftrace_event_field {
+	struct list_head	link;
+	const char		*name;
+	const char		*type;
+	int			filter_type;
+	int			offset;
+	int			size;
+	int			is_signed;
+};
+
 struct ftrace_event_call;
 
 struct ftrace_event_class {
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 0a1f4be..ff28b5a 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -800,16 +800,6 @@ enum {
 	TRACE_EVENT_TYPE_RAW		= 2,
 };
 
-struct ftrace_event_field {
-	struct list_head	link;
-	const char		*name;
-	const char		*type;
-	int			filter_type;
-	int			offset;
-	int			size;
-	int			is_signed;
-};
-
 struct event_filter {
 	int			n_preds;	/* Number assigned */
 	int			a_preds;	/* allocated */
-- 
1.7.9.7




* [PATCH 08/14] tracing: remove TRACE_EVENT_TYPE enum definition
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (7 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 07/14] tracing: expose structure ftrace_event_field zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 09/14] tracing: remove obsolete macro guard _TRACE_PROFILE_INIT zhangwei(Jovi)
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

The TRACE_EVENT_TYPE enum is not used at present, so remove it.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/trace.h |    6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index ff28b5a..398ff9e 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -794,12 +794,6 @@ static inline void trace_branch_disable(void)
 /* set ring buffers to default size if not already done so */
 int tracing_update_buffers(void);
 
-/* trace event type bit fields, not numeric */
-enum {
-	TRACE_EVENT_TYPE_PRINTF		= 1,
-	TRACE_EVENT_TYPE_RAW		= 2,
-};
-
 struct event_filter {
 	int			n_preds;	/* Number assigned */
 	int			a_preds;	/* allocated */
-- 
1.7.9.7




* [PATCH 09/14] tracing: remove obsolete macro guard _TRACE_PROFILE_INIT
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (8 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 08/14] tracing: remove TRACE_EVENT_TYPE enum definition zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 10/14] tracing: remove ftrace(...) function zhangwei(Jovi)
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

The macro _TRACE_PROFILE_INIT was removed a long time ago, but its
"#undef" guard was left behind; remove it.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 include/trace/ftrace.h |    2 --
 1 file changed, 2 deletions(-)

diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index e4ea38c..cdf6a2c 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -677,5 +677,3 @@ static inline void perf_test_probe_##call(void)				\
 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
 #endif /* CONFIG_PERF_EVENTS */
 
-#undef _TRACE_PROFILE_INIT
-
-- 
1.7.9.7




* [PATCH 10/14] tracing: remove ftrace(...) function
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (9 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 09/14] tracing: remove obsolete macro guard _TRACE_PROFILE_INIT zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 11/14] tracing: use per trace_array clock_id instead of global trace_clock_id zhangwei(Jovi)
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

The only caller of the ftrace(...) function was removed a long time
ago, so remove the function body as well.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/trace.c |    9 ---------
 kernel/trace/trace.h |    5 -----
 2 files changed, 14 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 224b152..dd0c122 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1534,15 +1534,6 @@ trace_function(struct trace_array *tr,
 		__buffer_unlock_commit(buffer, event);
 }
 
-void
-ftrace(struct trace_array *tr, struct trace_array_cpu *data,
-       unsigned long ip, unsigned long parent_ip, unsigned long flags,
-       int pc)
-{
-	if (likely(!atomic_read(&data->disabled)))
-		trace_function(tr, ip, parent_ip, flags, pc);
-}
-
 #ifdef CONFIG_STACKTRACE
 
 #define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 398ff9e..a244fbc 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -445,11 +445,6 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu);
 
 void poll_wait_pipe(struct trace_iterator *iter);
 
-void ftrace(struct trace_array *tr,
-			    struct trace_array_cpu *data,
-			    unsigned long ip,
-			    unsigned long parent_ip,
-			    unsigned long flags, int pc);
 void tracing_sched_switch_trace(struct trace_array *tr,
 				struct task_struct *prev,
 				struct task_struct *next,
-- 
1.7.9.7




* [PATCH 11/14] tracing: use per trace_array clock_id instead of global trace_clock_id
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (10 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 10/14] tracing: remove ftrace(...) function zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 12/14] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST zhangwei(Jovi)
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

The tracing clock id has already been changed into a per trace_array
variable, but some code still uses the global trace_clock_id, whose
value is now always 0.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/trace.c |    8 +++-----
 kernel/trace/trace.h |    2 --
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index dd0c122..ee4e110 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -652,8 +652,6 @@ static struct {
 	ARCH_TRACE_CLOCKS
 };
 
-int trace_clock_id;
-
 /*
  * trace_parser_get_init - gets the buffer for trace parser
  */
@@ -2806,7 +2804,7 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
 		iter->iter_flags |= TRACE_FILE_ANNOTATE;
 
 	/* Output in nanoseconds only if we are using a clock in nanoseconds. */
-	if (trace_clocks[trace_clock_id].in_ns)
+	if (trace_clocks[tr->clock_id].in_ns)
 		iter->iter_flags |= TRACE_FILE_TIME_IN_NS;
 
 	/* stop the trace while dumping if we are not opening "snapshot" */
@@ -3805,7 +3803,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
 		iter->iter_flags |= TRACE_FILE_LAT_FMT;
 
 	/* Output in nanoseconds only if we are using a clock in nanoseconds. */
-	if (trace_clocks[trace_clock_id].in_ns)
+	if (trace_clocks[tr->clock_id].in_ns)
 		iter->iter_flags |= TRACE_FILE_TIME_IN_NS;
 
 	iter->cpu_file = tc->cpu;
@@ -5075,7 +5073,7 @@ tracing_stats_read(struct file *filp, char __user *ubuf,
 	cnt = ring_buffer_bytes_cpu(trace_buf->buffer, cpu);
 	trace_seq_printf(s, "bytes: %ld\n", cnt);
 
-	if (trace_clocks[trace_clock_id].in_ns) {
+	if (trace_clocks[tr->clock_id].in_ns) {
 		/* local or global for trace_clock */
 		t = ns2usecs(ring_buffer_oldest_event_ts(trace_buf->buffer, cpu));
 		usec_rem = do_div(t, USEC_PER_SEC);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index a244fbc..19e3da2 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -588,8 +588,6 @@ enum print_line_t print_trace_line(struct trace_iterator *iter);
 
 extern unsigned long trace_flags;
 
-extern int trace_clock_id;
-
 /* Standard output formatting function used for function return traces */
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 
-- 
1.7.9.7




* [PATCH 12/14] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (11 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 11/14] tracing: use per trace_array clock_id instead of global trace_clock_id zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 13/14] libtraceevent: add libtraceevent prefix in warning message zhangwei(Jovi)
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

The variable tracing_selftest_disabled is meaningless when
CONFIG_FTRACE_STARTUP_TEST is disabled.

This patch also removes the __read_mostly attribute, since
tracing_selftest_disabled is not really read mostly.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/trace.c        |    6 ++++--
 kernel/trace/trace.h        |    2 +-
 kernel/trace/trace_events.c |    2 ++
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ee4e110..09a3aa8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -58,10 +58,12 @@ bool ring_buffer_expanded;
  */
 static bool __read_mostly tracing_selftest_running;
 
+#ifdef CONFIG_FTRACE_STARTUP_TEST
 /*
  * If a tracer is running, we do not want to run SELFTEST.
  */
-bool __read_mostly tracing_selftest_disabled;
+bool tracing_selftest_disabled;
+#endif
 
 /* For tracers that don't implement custom flags */
 static struct tracer_opt dummy_tracer_opt[] = {
@@ -1069,8 +1071,8 @@ int register_tracer(struct tracer *type)
 	tracing_set_tracer(type->name);
 	default_bootup_tracer = NULL;
 	/* disable other selftests, since this will break it. */
-	tracing_selftest_disabled = true;
 #ifdef CONFIG_FTRACE_STARTUP_TEST
+	tracing_selftest_disabled = true;
 	printk(KERN_INFO "Disabling FTRACE selftests due to running tracer '%s'\n",
 	       type->name);
 #endif
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 19e3da2..573c1dc 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -546,10 +546,10 @@ extern int DYN_FTRACE_TEST_NAME(void);
 extern int DYN_FTRACE_TEST_NAME2(void);
 
 extern bool ring_buffer_expanded;
-extern bool tracing_selftest_disabled;
 DECLARE_PER_CPU(int, ftrace_cpu_disabled);
 
 #ifdef CONFIG_FTRACE_STARTUP_TEST
+extern bool tracing_selftest_disabled;
 extern int trace_selftest_startup_function(struct tracer *trace,
 					   struct trace_array *tr);
 extern int trace_selftest_startup_function_graph(struct tracer *trace,
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 23f4cfc..d5ca0c1 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2251,7 +2251,9 @@ static __init int setup_trace_event(char *str)
 {
 	strlcpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
 	ring_buffer_expanded = true;
+#ifdef CONFIG_FTRACE_STARTUP_TEST
 	tracing_selftest_disabled = true;
+#endif
 
 	return 1;
 }
-- 
1.7.9.7




* [PATCH 13/14] libtraceevent: add libtraceevent prefix in warning message
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (12 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 12/14] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` [PATCH 14/14] tracing: fix regression of perf function tracing zhangwei(Jovi)
  2013-03-27  9:48 ` zhangwei(Jovi)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

When using tracepoints with perf, perf outputs warning messages that
make it hard to understand what is wrong in perf:

[root@jovi perf]# ./perf stat -e timer:* ls
  Warning: unknown op '{'
  Warning: unknown op '{'
...

These warning messages are actually produced by the libtraceevent
format parsing code.

So add a "libtraceevent:" prefix to identify the source more clearly.

(In the future we should suppress these warnings for perf stat
entirely; it is not necessary to parse the event format when running
perf stat.)

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 tools/lib/traceevent/event-parse.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
index 82b0606..a3971d2 100644
--- a/tools/lib/traceevent/event-parse.c
+++ b/tools/lib/traceevent/event-parse.c
@@ -47,7 +47,7 @@ static int show_warning = 1;
 #define do_warning(fmt, ...)				\
 	do {						\
 		if (show_warning)			\
-			warning(fmt, ##__VA_ARGS__);	\
+			warning("libtraceevent: "fmt, ##__VA_ARGS__);	\
 	} while (0)
 
 static void init_input_buf(const char *buf, unsigned long long size)
-- 
1.7.9.7




* [PATCH 14/14] tracing: fix regression of perf function tracing
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (13 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 13/14] libtraceevent: add libtraceevent prefix in warning message zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  2013-03-27  9:48 ` zhangwei(Jovi)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML; +Cc: zhangwei(Jovi)

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Using the perf command: perf stat -e ftrace:function ls

this triggers a kernel warning and an oops.

[  797.828904] ------------[ cut here ]------------
[  797.828946] WARNING: at include/linux/ftrace.h:209 ftrace_ops_control_func+0xb1/0xc0()
[  797.829065] Pid: 6086, comm: perf Not tainted 3.8.0-rc4+ #88
[  797.829066] Call Trace:
[  797.829078]  [<c0447a42>] warn_slowpath_common+0x72/0xa0
[  797.829080]  [<c04bd6d1>] ? ftrace_ops_control_func+0xb1/0xc0
[  797.829082]  [<c04bd6d1>] ? ftrace_ops_control_func+0xb1/0xc0
[  797.829083]  [<c04b7453>] ? synchronize_sched+0x3/0x50
[  797.829085]  [<c04be50f>] ? __unregister_ftrace_function+0x8f/0x170
[  797.829087]  [<c0447a92>] warn_slowpath_null+0x22/0x30
[  797.829089]  [<c04bd6d1>] ftrace_ops_control_func+0xb1/0xc0
[  797.829099]  [<c08eba6b>] ftrace_call+0x5/0xb
[  797.829100]  [<c04b7458>] ? synchronize_sched+0x8/0x50
[  797.829102]  [<c04be50f>] __unregister_ftrace_function+0x8f/0x170
[  797.829104]  [<c04c00cf>] unregister_ftrace_function+0x1f/0x50
[  797.829109]  [<c04d31bd>] perf_ftrace_event_register+0x9d/0x140
[  797.829111]  [<c04d304b>] perf_trace_destroy+0x2b/0x50
[  797.829117]  [<c04db3c8>] tp_perf_event_destroy+0x8/0x10
[  797.829119]  [<c04dd672>] free_event+0x42/0x110
[  797.829121]  [<c04de446>] perf_event_release_kernel+0x56/0x90
[  797.829122]  [<c04de4fc>] put_event+0x7c/0xa0
[  797.829124]  [<c04de5cb>] perf_release+0xb/0x10
[  797.829128]  [<c0532ae6>] __fput+0xc6/0x1f0
[  797.829130]  [<c0532c1d>] ____fput+0xd/0x10
[  797.829134]  [<c04645b1>] task_work_run+0x81/0xa0
[  797.829142]  [<c0412819>] do_notify_resume+0x59/0x90
[  797.829150]  [<c08e4745>] work_notifysig+0x30/0x37
[  797.829152] ---[ end trace 4dbd63f12b55163f ]---

This bug was introduced by the commit below (in 3.8-rc4):
        commit 0a016409e42f273415f8225ddf2c58eb2df88034
        Author: Steven Rostedt <srostedt@redhat.com>
        Date:   Fri Nov 2 17:03:03 2012 -0400

        ftrace: Optimize the function tracer list loop

When op is ftrace_list_end, the sentinel cannot pass the control-ops check,
so the optimized loop is not suitable for ftrace_control_list; change it
back to an open-coded walk that stops at the sentinel.
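
For reference, here is a simplified userspace model of the failure and of
the fix. It is not the real ftrace code: there is no RCU, the structure
and the "control" flag are invented stand-ins, and the only assumption
(taken from the description above) is that the generic iteration helper
lets the list-terminating sentinel reach the control-ops check, while the
open-coded loop stops before it.

#include <assert.h>
#include <stdio.h>

struct ops {
	struct ops *next;
	int is_control;			/* stand-in for FTRACE_OPS_FL_CONTROL */
	void (*func)(struct ops *op);
};

static void stub(struct ops *op) { (void)op; }

/* list terminator: points at itself and is not a control ops */
static struct ops list_end = { .next = &list_end, .func = stub };

/* empty control list, as it looks right after the last control ops
 * has been unregistered: it points straight at the sentinel */
static struct ops *control_list = &list_end;

static void call_control(struct ops *op)
{
	/* models the control-ops check that emits the WARN_ON above */
	assert(op->is_control);
	op->func(op);
}

int main(void)
{
	struct ops *op = control_list;

	/* the fixed loop: test for the sentinel before touching op, so
	 * list_end never reaches the control-ops check */
	while (op != &list_end) {
		call_control(op);
		op = op->next;
	}

	/* a do { call_control(op); ... } while (...) style walk, which is
	 * roughly what the optimized helper expands to, would have called
	 * call_control(&list_end) once here and tripped the assert -- the
	 * userspace analogue of the warning in the trace above */
	printf("sentinel never reached the control-ops check\n");
	return 0;
}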

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
---
 kernel/trace/ftrace.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 2577082..2899974 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4185,11 +4185,14 @@ ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip,
 	 */
 	preempt_disable_notrace();
 	trace_recursion_set(TRACE_CONTROL_BIT);
-	do_for_each_ftrace_op(op, ftrace_control_list) {
+	op = rcu_dereference_raw(ftrace_control_list);
+	while (op != &ftrace_list_end) {
 		if (!ftrace_function_local_disabled(op) &&
 		    ftrace_ops_test(op, ip))
 			op->func(ip, parent_ip, op, regs);
-	} while_for_each_ftrace_op(op);
+
+		op = rcu_dereference_raw(op->next);
+	}
 	trace_recursion_clear(TRACE_CONTROL_BIT);
 	preempt_enable_notrace();
 }
-- 
1.7.9.7



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 14/14] tracing: fix regression of perf function tracing
  2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
                   ` (14 preceding siblings ...)
  2013-03-27  9:48 ` [PATCH 14/14] tracing: fix regression of perf function tracing zhangwei(Jovi)
@ 2013-03-27  9:48 ` zhangwei(Jovi)
  15 siblings, 0 replies; 17+ messages in thread
From: zhangwei(Jovi) @ 2013-03-27  9:48 UTC (permalink / raw)
  To: Steven Rostedt, Frederic Weisbecker, Ingo Molnar, LKML
  Cc: zhangwei(Jovi), stable

From: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>

Running the perf command:

    perf stat -e ftrace:function ls

triggers a kernel warning and an oops.

[  797.828904] ------------[ cut here ]------------
[  797.828946] WARNING: at include/linux/ftrace.h:209 ftrace_ops_control_func+0xb1/0xc0()
[  797.829065] Pid: 6086, comm: perf Not tainted 3.8.0-rc4+ #88
[  797.829066] Call Trace:
[  797.829078]  [<c0447a42>] warn_slowpath_common+0x72/0xa0
[  797.829080]  [<c04bd6d1>] ? ftrace_ops_control_func+0xb1/0xc0
[  797.829082]  [<c04bd6d1>] ? ftrace_ops_control_func+0xb1/0xc0
[  797.829083]  [<c04b7453>] ? synchronize_sched+0x3/0x50
[  797.829085]  [<c04be50f>] ? __unregister_ftrace_function+0x8f/0x170
[  797.829087]  [<c0447a92>] warn_slowpath_null+0x22/0x30
[  797.829089]  [<c04bd6d1>] ftrace_ops_control_func+0xb1/0xc0
[  797.829099]  [<c08eba6b>] ftrace_call+0x5/0xb
[  797.829100]  [<c04b7458>] ? synchronize_sched+0x8/0x50
[  797.829102]  [<c04be50f>] __unregister_ftrace_function+0x8f/0x170
[  797.829104]  [<c04c00cf>] unregister_ftrace_function+0x1f/0x50
[  797.829109]  [<c04d31bd>] perf_ftrace_event_register+0x9d/0x140
[  797.829111]  [<c04d304b>] perf_trace_destroy+0x2b/0x50
[  797.829117]  [<c04db3c8>] tp_perf_event_destroy+0x8/0x10
[  797.829119]  [<c04dd672>] free_event+0x42/0x110
[  797.829121]  [<c04de446>] perf_event_release_kernel+0x56/0x90
[  797.829122]  [<c04de4fc>] put_event+0x7c/0xa0
[  797.829124]  [<c04de5cb>] perf_release+0xb/0x10
[  797.829128]  [<c0532ae6>] __fput+0xc6/0x1f0
[  797.829130]  [<c0532c1d>] ____fput+0xd/0x10
[  797.829134]  [<c04645b1>] task_work_run+0x81/0xa0
[  797.829142]  [<c0412819>] do_notify_resume+0x59/0x90
[  797.829150]  [<c08e4745>] work_notifysig+0x30/0x37
[  797.829152] ---[ end trace 4dbd63f12b55163f ]---

This bug was introduced by the commit below (in 3.8-rc4):
        commit 0a016409e42f273415f8225ddf2c58eb2df88034
        Author: Steven Rostedt <srostedt@redhat.com>
        Date:   Fri Nov 2 17:03:03 2012 -0400

        ftrace: Optimize the function tracer list loop

When op is ftrace_list_end, the sentinel cannot pass the control-ops check,
so the optimized loop is not suitable for ftrace_control_list; change it
back to an open-coded walk that stops at the sentinel.

Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
Cc: stable@vger.kernel.org
---
 kernel/trace/ftrace.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 2577082..2899974 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4185,11 +4185,14 @@ ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip,
 	 */
 	preempt_disable_notrace();
 	trace_recursion_set(TRACE_CONTROL_BIT);
-	do_for_each_ftrace_op(op, ftrace_control_list) {
+	op = rcu_dereference_raw(ftrace_control_list);
+	while (op != &ftrace_list_end) {
 		if (!ftrace_function_local_disabled(op) &&
 		    ftrace_ops_test(op, ip))
 			op->func(ip, parent_ip, op, regs);
-	} while_for_each_ftrace_op(op);
+
+		op = rcu_dereference_raw(op->next);
+	}
 	trace_recursion_clear(TRACE_CONTROL_BIT);
 	preempt_enable_notrace();
 }
-- 
1.7.9.7



^ permalink raw reply related	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2013-03-27  9:53 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-03-27  9:48 [PATCH 00/14] event tracing expose change and bugfix/cleanup zhangwei(Jovi)
2013-03-27  9:48 ` zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 01/14] tracing: move trace_array definition into include/linux/trace_array.h zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 02/14] tracing: fix irqs-off tag display in syscall tracing zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 03/14] tracing: expose event tracing infrastructure zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 04/14] tracing: add private data field into struct ftrace_event_file zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 05/14] tracing: switch syscall tracing to use event_trace_ops backend zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 06/14] tracing: export syscall metadata zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 07/14] tracing: expose structure ftrace_event_field zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 08/14] tracing: remove TRACE_EVENT_TYPE enum definition zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 09/14] tracing: remove obsolete macro guard _TRACE_PROFILE_INIT zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 10/14] tracing: remove ftrace(...) function zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 11/14] tracing: use per trace_array clock_id instead of global trace_clock_id zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 12/14] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 13/14] libtraceevent: add libtraceevent prefix in warning message zhangwei(Jovi)
2013-03-27  9:48 ` [PATCH 14/14] tracing: fix regression of perf function tracing zhangwei(Jovi)
2013-03-27  9:48 ` zhangwei(Jovi)

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).