* [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size
@ 2014-02-06 17:39 Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 1/4] tracing: Move raw output code from macro to standalone function Steven Rostedt
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-06 17:39 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Peter Zijlstra,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan
I posted this a while ago (August 2012), and it seemed to have positive
feedback. But I forgot about it and it never went any further.
http://lkml.kernel.org/r/20120810034302.758092203@goodmis.org
It moves the tracepoint code out of the macros and into reusable
functions, which can save a whopping 73K of kernel memory (with just the
modules I used compiled in).
There have been some changes in mainline since I last posted this that
helped lower the tracepoint footprint, which makes the first patch not as
much of an improvement as it was in the past.
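The idea behind the savings can be sketched in plain user-space C. This is only an illustration of the pattern, not the kernel code: the names (output_call, DEFINE_EVENT_OUTPUT) are invented here, but the shape matches the series — each macro-generated per-event function shrinks to a single call into one shared helper instead of duplicating the formatting and error handling.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Shared helper: the formatting work lives here once, instead of
 * being duplicated in every macro expansion. */
static int output_call(char *buf, size_t len, const char *name,
		       const char *fmt, ...)
{
	va_list ap;
	int n;

	n = snprintf(buf, len, "%s: ", name);
	if (n < 0 || (size_t)n >= len)
		return -1;
	va_start(ap, fmt);
	n += vsnprintf(buf + n, len - n, fmt, ap);
	va_end(ap);
	return n;
}

/* Each "event" now expands to a one-line call into the helper. */
#define DEFINE_EVENT_OUTPUT(event)				\
static int event##_output(char *buf, size_t len, int val)	\
{								\
	return output_call(buf, len, #event, "val=%d", val);	\
}

DEFINE_EVENT_OUTPUT(sched_switch)
DEFINE_EVENT_OUTPUT(sched_wakeup)
```

With 500+ events, the per-expansion body is what multiplies, so every line moved from the macro into the helper is saved once per event.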
Anyway, hopefully this can get into 3.15.
-- Steve
Steven Rostedt (4):
tracing: Move raw output code from macro to standalone function
tracing: Move event storage for array from macro to standalone function
tracing: Use helper functions in event assignment to shrink macro size
perf/events: Use helper functions in event assignment to shrink macro size
----
include/linux/ftrace_event.h | 46 +++++++++++++++++++++--
include/trace/ftrace.h | 75 ++++++++++++-------------------------
kernel/trace/trace_event_perf.c | 51 +++++++++++++++++++++++++
kernel/trace/trace_events.c | 6 ---
kernel/trace/trace_export.c | 12 ++----
kernel/trace/trace_output.c | 83 +++++++++++++++++++++++++++++++++++++++++
6 files changed, 203 insertions(+), 70 deletions(-)
* [RFC][PATCH 1/4] tracing: Move raw output code from macro to standalone function
2014-02-06 17:39 [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size Steven Rostedt
@ 2014-02-06 17:39 ` Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 2/4] tracing: Move event storage for array " Steven Rostedt
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-06 17:39 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Peter Zijlstra,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan
[-- Attachment #1: 0001-tracing-Move-raw-output-code-from-macro-to-standalon.patch --]
[-- Type: text/plain, Size: 4286 bytes --]
From: Steven Rostedt <srostedt@redhat.com>
The code for trace events to format the raw recorded event data
into human readable format in the 'trace' file is repeated for every
event in the system. When you have over 500 events, this can add up
quite a bit.
By making helper functions in the core kernel to do the work
instead, we can shrink the size of the kernel down a bit.
With a kernel configured with 502 events, the change in size was:
text data bss dec hex filename
12991007 1913568 9785344 24689919 178bcff /tmp/vmlinux.orig
12990946 1913568 9785344 24689858 178bcc2 /tmp/vmlinux.patched
Note, this version does not save as much as the version of this patch
I had a few years ago. That is because in the meantime, commit
f71130de5c7f ("tracing: Add a helper function for event print functions")
did a lot of the work my original patch did. But this change helps
slightly, and is part of a larger clean up to reduce the size much further.
Link: http://lkml.kernel.org/r/20120810034707.378538034@goodmis.org
Cc: Li Zefan <lizefan@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 5 +++++
include/trace/ftrace.h | 10 +---------
kernel/trace/trace_output.c | 31 +++++++++++++++++++++++++++++++
3 files changed, 37 insertions(+), 9 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 8c9b7a1..16b063c 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -162,6 +162,8 @@ void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
void tracing_record_cmdline(struct task_struct *tsk);
+int ftrace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...);
+
struct event_filter;
enum trace_reg {
@@ -196,6 +198,9 @@ struct ftrace_event_class {
extern int ftrace_event_reg(struct ftrace_event_call *event,
enum trace_reg type, void *data);
+int ftrace_output_event(struct trace_iterator *iter, struct ftrace_event_call *event,
+ char *fmt, ...);
+
enum {
TRACE_EVENT_FL_FILTERED_BIT,
TRACE_EVENT_FL_CAP_ANY_BIT,
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 5c38606..b482700 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -265,11 +265,9 @@ static notrace enum print_line_t \
ftrace_raw_output_##call(struct trace_iterator *iter, int flags, \
struct trace_event *event) \
{ \
- struct trace_seq *s = &iter->seq; \
struct ftrace_raw_##template *field; \
struct trace_entry *entry; \
struct trace_seq *p = &iter->tmp_seq; \
- int ret; \
\
entry = iter->ent; \
\
@@ -281,13 +279,7 @@ ftrace_raw_output_##call(struct trace_iterator *iter, int flags, \
field = (typeof(field))entry; \
\
trace_seq_init(p); \
- ret = trace_seq_printf(s, "%s: ", #call); \
- if (ret) \
- ret = trace_seq_printf(s, print); \
- if (!ret) \
- return TRACE_TYPE_PARTIAL_LINE; \
- \
- return TRACE_TYPE_HANDLED; \
+ return ftrace_output_call(iter, #call, print); \
} \
static struct trace_event_functions ftrace_event_type_funcs_##call = { \
.trace = ftrace_raw_output_##call, \
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index ed32284..ca0e79e2 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -439,6 +439,37 @@ int ftrace_raw_output_prep(struct trace_iterator *iter,
}
EXPORT_SYMBOL(ftrace_raw_output_prep);
+static int ftrace_output_raw(struct trace_iterator *iter, char *name,
+ char *fmt, va_list ap)
+{
+ struct trace_seq *s = &iter->seq;
+ int ret;
+
+ ret = trace_seq_printf(s, "%s: ", name);
+ if (!ret)
+ return TRACE_TYPE_PARTIAL_LINE;
+
+ ret = trace_seq_vprintf(s, fmt, ap);
+
+ if (!ret)
+ return TRACE_TYPE_PARTIAL_LINE;
+
+ return TRACE_TYPE_HANDLED;
+}
+
+int ftrace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...)
+{
+ va_list ap;
+ int ret;
+
+ va_start(ap, fmt);
+ ret = ftrace_output_raw(iter, name, fmt, ap);
+ va_end(ap);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ftrace_output_call);
+
#ifdef CONFIG_KRETPROBES
static inline const char *kretprobed(const char *name)
{
--
1.8.4.3
* [RFC][PATCH 2/4] tracing: Move event storage for array from macro to standalone function
2014-02-06 17:39 [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 1/4] tracing: Move raw output code from macro to standalone function Steven Rostedt
@ 2014-02-06 17:39 ` Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 3/4] tracing: Use helper functions in event assignment to shrink macro size Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 4/4] perf/events: " Steven Rostedt
3 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-06 17:39 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Peter Zijlstra,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan
[-- Attachment #1: 0002-tracing-Move-event-storage-for-array-from-macro-to-s.patch --]
[-- Type: text/plain, Size: 5739 bytes --]
From: Steven Rostedt <srostedt@redhat.com>
The code that shows array fields for events is defined for all events.
This can add up quite a bit when you have over 500 events.
By making helper functions in the core kernel to do the work
instead, we can shrink the size of the kernel down a bit.
With a kernel configured with 502 events, the change in size was:
text data bss dec hex filename
12990946 1913568 9785344 24689858 178bcc2 /tmp/vmlinux
12987390 1913504 9785344 24686238 178ae9e /tmp/vmlinux.patched
That's a total of 3556 bytes, which comes down to 7 bytes per event.
Although it's not much, this code is just called at initialization of
the events.
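The pattern this patch introduces — one helper owning a shared scratch buffer and its lock, instead of each macro expansion carrying its own lock/snprintf/unlock sequence — can be sketched in user-space C. The names here (define_array_field, storage_lock) are illustrative, not the kernel API, and the copy-out stands in for the trace_define_field() call made while the lock is held:

```c
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define STORAGE_SIZE 128

/* One lock and one scratch buffer, shared by all events. */
static pthread_mutex_t storage_lock = PTHREAD_MUTEX_INITIALIZER;
static char storage[STORAGE_SIZE];

/* Build the "type[len]" string in the shared buffer under the lock;
 * copying it out here stands in for trace_define_field(), which in
 * the kernel consumes the string before the lock is dropped. */
static int define_array_field(char *dst, size_t dstlen,
			      const char *type, int len)
{
	pthread_mutex_lock(&storage_lock);
	snprintf(storage, sizeof(storage), "%s[%d]", type, len);
	snprintf(dst, dstlen, "%s", storage);
	pthread_mutex_unlock(&storage_lock);
	return 0;
}
```

The per-event macro body then collapses to one call, which is where the roughly 7 bytes per event come from.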
Link: http://lkml.kernel.org/r/20120810034708.084036335@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 8 ++++----
include/trace/ftrace.h | 12 ++++--------
kernel/trace/trace_events.c | 6 ------
kernel/trace/trace_export.c | 12 ++++--------
kernel/trace/trace_output.c | 21 +++++++++++++++++++++
5 files changed, 33 insertions(+), 26 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 16b063c..014090c 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -201,6 +201,10 @@ extern int ftrace_event_reg(struct ftrace_event_call *event,
int ftrace_output_event(struct trace_iterator *iter, struct ftrace_event_call *event,
char *fmt, ...);
+int ftrace_event_define_field(struct ftrace_event_call *call,
+ char *type, int len, char *item, int offset,
+ int field_size, int sign, int filter);
+
enum {
TRACE_EVENT_FL_FILTERED_BIT,
TRACE_EVENT_FL_CAP_ANY_BIT,
@@ -361,10 +365,6 @@ enum {
FILTER_TRACE_FN,
};
-#define EVENT_STORAGE_SIZE 128
-extern struct mutex event_storage_mutex;
-extern char event_storage[EVENT_STORAGE_SIZE];
-
extern int trace_event_raw_init(struct ftrace_event_call *call);
extern int trace_define_field(struct ftrace_event_call *call, const char *type,
const char *name, int offset, int size,
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index b482700..c9c991f 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -302,15 +302,11 @@ static struct trace_event_functions ftrace_event_type_funcs_##call = { \
#undef __array
#define __array(type, item, len) \
do { \
- mutex_lock(&event_storage_mutex); \
BUILD_BUG_ON(len > MAX_FILTER_STR_VAL); \
- snprintf(event_storage, sizeof(event_storage), \
- "%s[%d]", #type, len); \
- ret = trace_define_field(event_call, event_storage, #item, \
- offsetof(typeof(field), item), \
- sizeof(field.item), \
- is_signed_type(type), FILTER_OTHER); \
- mutex_unlock(&event_storage_mutex); \
+ ret = ftrace_event_define_field(event_call, #type, len, \
+ #item, offsetof(typeof(field), item), \
+ sizeof(field.item), \
+ is_signed_type(type), FILTER_OTHER); \
if (ret) \
return ret; \
} while (0);
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index a11800a..86a47d9 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -27,12 +27,6 @@
DEFINE_MUTEX(event_mutex);
-DEFINE_MUTEX(event_storage_mutex);
-EXPORT_SYMBOL_GPL(event_storage_mutex);
-
-char event_storage[EVENT_STORAGE_SIZE];
-EXPORT_SYMBOL_GPL(event_storage);
-
LIST_HEAD(ftrace_events);
static LIST_HEAD(ftrace_common_fields);
diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c
index 7c3e3e7..39c746c 100644
--- a/kernel/trace/trace_export.c
+++ b/kernel/trace/trace_export.c
@@ -96,14 +96,10 @@ static void __always_unused ____ftrace_check_##name(void) \
#define __array(type, item, len) \
do { \
BUILD_BUG_ON(len > MAX_FILTER_STR_VAL); \
- mutex_lock(&event_storage_mutex); \
- snprintf(event_storage, sizeof(event_storage), \
- "%s[%d]", #type, len); \
- ret = trace_define_field(event_call, event_storage, #item, \
- offsetof(typeof(field), item), \
- sizeof(field.item), \
- is_signed_type(type), filter_type); \
- mutex_unlock(&event_storage_mutex); \
+ ret = ftrace_event_define_field(event_call, #type, len, \
+ #item, offsetof(typeof(field), item), \
+ sizeof(field.item), \
+ is_signed_type(type), filter_type); \
if (ret) \
return ret; \
} while (0);
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index ca0e79e2..ee8d748 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -20,6 +20,10 @@ static struct hlist_head event_hash[EVENT_HASHSIZE] __read_mostly;
static int next_event_type = __TRACE_LAST_TYPE + 1;
+#define EVENT_STORAGE_SIZE 128
+static DEFINE_MUTEX(event_storage_mutex);
+static char event_storage[EVENT_STORAGE_SIZE];
+
int trace_print_seq(struct seq_file *m, struct trace_seq *s)
{
int len = s->len >= PAGE_SIZE ? PAGE_SIZE - 1 : s->len;
@@ -470,6 +474,23 @@ int ftrace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...)
}
EXPORT_SYMBOL_GPL(ftrace_output_call);
+int ftrace_event_define_field(struct ftrace_event_call *call,
+ char *type, int len, char *item, int offset,
+ int field_size, int sign, int filter)
+{
+ int ret;
+
+ mutex_lock(&event_storage_mutex);
+ snprintf(event_storage, sizeof(event_storage),
+ "%s[%d]", type, len);
+ ret = trace_define_field(call, event_storage, item, offset,
+ field_size, sign, filter);
+ mutex_unlock(&event_storage_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ftrace_event_define_field);
+
#ifdef CONFIG_KRETPROBES
static inline const char *kretprobed(const char *name)
{
--
1.8.4.3
* [RFC][PATCH 3/4] tracing: Use helper functions in event assignment to shrink macro size
2014-02-06 17:39 [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 1/4] tracing: Move raw output code from macro to standalone function Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 2/4] tracing: Move event storage for array " Steven Rostedt
@ 2014-02-06 17:39 ` Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 4/4] perf/events: " Steven Rostedt
3 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-06 17:39 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Peter Zijlstra,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan
[-- Attachment #1: 0003-tracing-Use-helper-functions-in-event-assignment-to-.patch --]
[-- Type: text/plain, Size: 5196 bytes --]
From: Steven Rostedt <srostedt@redhat.com>
The functions that assign the contents for the ftrace events are
defined by the TRACE_EVENT() macros. Each event has its own unique
way to assign data to its buffer. When you have over 500 events,
that means there's 500 functions assigning data uniquely for each
event (not really that many, as DECLARE_EVENT_CLASS() and multiple
DEFINE_EVENT()s will only need a single function).
By making helper functions in the core kernel to do some of the work
instead, we can shrink the size of the kernel down a bit.
With a kernel configured with 502 events, the change in size was:
text data bss dec hex filename
12987390 1913504 9785344 24686238 178ae9e /tmp/vmlinux
12959102 1913504 9785344 24657950 178401e /tmp/vmlinux.patched
That's a total of 28288 bytes, which comes down to 56 bytes per event.
Link: http://lkml.kernel.org/r/20120810034708.370808175@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 16 ++++++++++++++++
include/trace/ftrace.h | 20 ++++++--------------
kernel/trace/trace_output.c | 31 +++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 14 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 014090c..4cc6852 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -205,6 +205,22 @@ int ftrace_event_define_field(struct ftrace_event_call *call,
char *type, int len, char *item, int offset,
int field_size, int sign, int filter);
+struct ftrace_event_buffer {
+ struct ring_buffer *buffer;
+ struct ring_buffer_event *event;
+ struct ftrace_event_file *ftrace_file;
+ void *entry;
+ unsigned long flags;
+ int pc;
+};
+
+void *ftrace_event_buffer_reserve(struct ftrace_event_buffer *fbuffer,
+ struct ftrace_event_file *ftrace_file,
+ int type, unsigned long len);
+
+void ftrace_event_buffer_commit(struct ftrace_event_buffer *fbuffer,
+ struct ftrace_event_call *event_call);
+
enum {
TRACE_EVENT_FL_FILTERED_BIT,
TRACE_EVENT_FL_CAP_ANY_BIT,
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index c9c991f..dc883a3 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -520,36 +520,28 @@ ftrace_raw_event_##call(void *__data, proto) \
struct ftrace_event_file *ftrace_file = __data; \
struct ftrace_event_call *event_call = ftrace_file->event_call; \
struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
- struct ring_buffer_event *event; \
+ struct ftrace_event_buffer fbuffer; \
struct ftrace_raw_##call *entry; \
- struct ring_buffer *buffer; \
- unsigned long irq_flags; \
int __data_size; \
- int pc; \
\
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
&ftrace_file->flags)) \
return; \
\
- local_save_flags(irq_flags); \
- pc = preempt_count(); \
- \
__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
\
- event = trace_event_buffer_lock_reserve(&buffer, ftrace_file, \
+ entry = ftrace_event_buffer_reserve(&fbuffer, ftrace_file, \
event_call->event.type, \
- sizeof(*entry) + __data_size, \
- irq_flags, pc); \
- if (!event) \
+ sizeof(*entry) + __data_size); \
+ \
+ if (!entry) \
return; \
- entry = ring_buffer_event_data(event); \
\
tstruct \
\
{ assign; } \
\
- if (!filter_check_discard(ftrace_file, entry, buffer, event)) \
- trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
+ ftrace_event_buffer_commit(&fbuffer, event_call); \
}
/*
* The ftrace_test_probe is compiled out, it is only here as a build time check
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index ee8d748..fae6c9b 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -491,6 +491,37 @@ int ftrace_event_define_field(struct ftrace_event_call *call,
}
EXPORT_SYMBOL_GPL(ftrace_event_define_field);
+void *ftrace_event_buffer_reserve(struct ftrace_event_buffer *fbuffer,
+ struct ftrace_event_file *ftrace_file,
+ int type, unsigned long len)
+{
+ local_save_flags(fbuffer->flags);
+ fbuffer->pc = preempt_count();
+ fbuffer->ftrace_file = ftrace_file;
+
+ fbuffer->event =
+ trace_event_buffer_lock_reserve(&fbuffer->buffer, ftrace_file,
+ type, len,
+ fbuffer->flags, fbuffer->pc);
+ if (!fbuffer->event)
+ return NULL;
+
+ fbuffer->entry = ring_buffer_event_data(fbuffer->event);
+ return fbuffer->entry;
+}
+EXPORT_SYMBOL_GPL(ftrace_event_buffer_reserve);
+
+void ftrace_event_buffer_commit(struct ftrace_event_buffer *fbuffer,
+ struct ftrace_event_call *event_call)
+{
+ if (!filter_check_discard(fbuffer->ftrace_file, fbuffer->entry,
+ fbuffer->buffer, fbuffer->event))
+ trace_buffer_unlock_commit(fbuffer->buffer,
+ fbuffer->event,
+ fbuffer->flags, fbuffer->pc);
+}
+EXPORT_SYMBOL_GPL(ftrace_event_buffer_commit);
+
#ifdef CONFIG_KRETPROBES
static inline const char *kretprobed(const char *name)
{
--
1.8.4.3
* [RFC][PATCH 4/4] perf/events: Use helper functions in event assignment to shrink macro size
2014-02-06 17:39 [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size Steven Rostedt
` (2 preceding siblings ...)
2014-02-06 17:39 ` [RFC][PATCH 3/4] tracing: Use helper functions in event assignment to shrink macro size Steven Rostedt
@ 2014-02-06 17:39 ` Steven Rostedt
2014-02-06 18:47 ` Steven Rostedt
2014-02-12 19:58 ` Peter Zijlstra
3 siblings, 2 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-06 17:39 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Peter Zijlstra,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan,
Peter Zijlstra
[-- Attachment #1: 0004-perf-events-Use-helper-functions-in-event-assignment.patch --]
[-- Type: text/plain, Size: 7218 bytes --]
From: Steven Rostedt <srostedt@redhat.com>
The functions that assign the contents for the perf software events are
defined by the TRACE_EVENT() macros. Each event has its own unique
way to assign data to its buffer. When you have over 500 events,
that means there's 500 functions assigning data uniquely for each
event.
By making helper functions in the core kernel to do the work
instead, we can shrink the size of the kernel down a bit.
With a kernel configured with 502 events, the change in size was:
text data bss dec hex filename
12959102 1913504 9785344 24657950 178401e /tmp/vmlinux
12917629 1913568 9785344 24616541 1779e5d /tmp/vmlinux.patched
That's a total of 41473 bytes, which comes down to 82 bytes per event.
Note, most of the savings comes from moving the setup and final submit
into helper functions, where the setup does the work and stores the
data into a structure, and that structure is passed to the submit
function. This moves the setup of the parameters of
perf_trace_buf_submit() out of the inlined code.
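The setup/submit pairing can be sketched in user-space C. This mirrors, but is not, the kernel's perf_trace_event API — ctx_setup, ctx_submit, and trace_ctx are invented names: setup() fills a context structure once, submit() reads everything back out of it, so the inlined call site no longer marshals a long argument list.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct trace_ctx {
	char entry[64];	/* stand-in for the reserved buffer entry */
	int entry_size;
	int rctx;	/* recursion context, as from perf_trace_buf_prepare() */
};

static char submitted[64];	/* where submit() "delivers" the event */

static void *ctx_setup(struct trace_ctx *ctx, int payload_size)
{
	/* the same size fixup the macro used to do inline: round
	 * payload + u32 header up to u64, then drop the header */
	ctx->entry_size = ((payload_size + 4 + 7) / 8) * 8 - 4;
	ctx->rctx = 0;
	memset(ctx->entry, 0, sizeof(ctx->entry));
	return ctx->entry;
}

static void ctx_submit(struct trace_ctx *ctx)
{
	/* one pointer argument replaces an eight-parameter call
	 * at every inlined call site */
	memcpy(submitted, ctx->entry, sizeof(submitted));
}
```

The savings come from the call sites: building eight arguments per event is inlined code repeated 500+ times, while filling the struct happens inside the shared helpers.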
Link: http://lkml.kernel.org/r/20120810034708.589220175@goodmis.org
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace_event.h | 17 ++++++++++++++
include/trace/ftrace.h | 33 ++++++++++----------------
kernel/trace/trace_event_perf.c | 51 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 80 insertions(+), 21 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 4cc6852..f33162e 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -450,6 +450,23 @@ struct perf_event;
DECLARE_PER_CPU(struct pt_regs, perf_trace_regs);
+struct perf_trace_event {
+ struct pt_regs regs;
+ struct hlist_head __percpu *head;
+ struct task_struct *task;
+ struct ftrace_event_call *event_call;
+ void *entry;
+ u64 addr;
+ u64 count;
+ int entry_size;
+ int rctx;
+ int constant;
+};
+
+extern void *perf_trace_event_setup(struct ftrace_event_call *event_call,
+ struct perf_trace_event *pe);
+extern void perf_trace_event_submit(struct perf_trace_event *pe);
+
extern int perf_trace_init(struct perf_event *event);
extern void perf_trace_destroy(struct perf_event *event);
extern int perf_trace_add(struct perf_event *event, int flags);
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index dc883a3..ba9173a 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -629,13 +629,13 @@ __attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
#define __get_str(field) (char *)__get_dynamic_array(field)
#undef __perf_addr
-#define __perf_addr(a) (__addr = (a))
+#define __perf_addr(a) (__pe.addr = (a))
#undef __perf_count
-#define __perf_count(c) (__count = (c))
+#define __perf_count(c) (__pe.count = (c))
#undef __perf_task
-#define __perf_task(t) (__task = (t))
+#define __perf_task(t) (__pe.task = (t))
#undef DECLARE_EVENT_CLASS
#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
@@ -645,28 +645,20 @@ perf_trace_##call(void *__data, proto) \
struct ftrace_event_call *event_call = __data; \
struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
struct ftrace_raw_##call *entry; \
- struct pt_regs __regs; \
- u64 __addr = 0, __count = 1; \
- struct task_struct *__task = NULL; \
- struct hlist_head *head; \
- int __entry_size; \
+ struct perf_trace_event __pe; \
int __data_size; \
- int rctx; \
+ \
+ __pe.task = NULL; \
\
__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
\
- head = this_cpu_ptr(event_call->perf_events); \
- if (__builtin_constant_p(!__task) && !__task && \
- hlist_empty(head)) \
- return; \
+ __pe.constant = __builtin_constant_p(!__pe.task) && !__pe.task; \
\
- __entry_size = ALIGN(__data_size + sizeof(*entry) + sizeof(u32),\
- sizeof(u64)); \
- __entry_size -= sizeof(u32); \
+ __pe.entry_size = __data_size + sizeof(*entry); \
+ __pe.addr = 0; \
+ __pe.count = 1; \
\
- perf_fetch_caller_regs(&__regs); \
- entry = perf_trace_buf_prepare(__entry_size, \
- event_call->event.type, &__regs, &rctx); \
+ entry = perf_trace_event_setup(event_call, &__pe); \
if (!entry) \
return; \
\
@@ -674,8 +666,7 @@ perf_trace_##call(void *__data, proto) \
\
{ assign; } \
\
- perf_trace_buf_submit(entry, __entry_size, rctx, __addr, \
- __count, &__regs, head, __task); \
+ perf_trace_event_submit(&__pe); \
}
/*
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index e854f42..6b01559 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -21,6 +21,57 @@ typedef typeof(unsigned long [PERF_MAX_TRACE_SIZE / sizeof(unsigned long)])
/* Count the events in use (per event id, not per instance) */
static int total_ref_count;
+/**
+ * perf_trace_event_setup - set up for a perf sw event
+ * @event_call: The sw event that is to be recorded
+ * @pe: The perf event structure to pass to the submit function
+ *
+ * This is a helper function to keep the work to set up a perf sw
+ * event out of the inlined trace code. Since the same work needs to
+ * be done for the sw events, having a separate function helps keep
+ * from duplicating that code all over the kernel.
+ *
+ * The use of the perf event structure (@pe) is to store and pass the
+ * data to the perf_trace_event_submit() call and keep the setting
+ * up of the parameters of perf_trace_buf_submit() out of the inlined
+ * trace code.
+ */
+void *perf_trace_event_setup(struct ftrace_event_call *event_call,
+ struct perf_trace_event *pe)
+{
+ pe->head = this_cpu_ptr(event_call->perf_events);
+ if (pe->constant && hlist_empty(pe->head))
+ return NULL;
+
+ pe->entry_size = ALIGN(pe->entry_size + sizeof(u32), sizeof(u64));
+ pe->entry_size -= sizeof(u32);
+ pe->event_call = event_call;
+
+ perf_fetch_caller_regs(&pe->regs);
+
+ pe->entry = perf_trace_buf_prepare(pe->entry_size,
+ event_call->event.type, &pe->regs, &pe->rctx);
+ return pe->entry;
+}
+EXPORT_SYMBOL_GPL(perf_trace_event_setup);
+
+/**
+ * perf_trace_event_submit - submit from perf sw event
+ * @pe: perf event structure that holds all the necessary data
+ *
+ * This is a helper function that removes a lot of the setting up of
+ * the function parameters to call perf_trace_buf_submit() from the
+ * inlined code. Using the perf event structure @pe to store the
+ * information passed from perf_trace_event_setup() keeps the overhead
+ * of building the function call parameters out of the inlined functions.
+ */
+void perf_trace_event_submit(struct perf_trace_event *pe)
+{
+ perf_trace_buf_submit(pe->entry, pe->entry_size, pe->rctx, pe->addr,
+ pe->count, &pe->regs, pe->head, pe->task);
+}
+EXPORT_SYMBOL_GPL(perf_trace_event_submit);
+
static int perf_trace_event_perm(struct ftrace_event_call *tp_event,
struct perf_event *p_event)
{
--
1.8.4.3
* Re: [RFC][PATCH 4/4] perf/events: Use helper functions in event assignment to shrink macro size
2014-02-06 17:39 ` [RFC][PATCH 4/4] perf/events: " Steven Rostedt
@ 2014-02-06 18:47 ` Steven Rostedt
2014-02-12 19:58 ` Peter Zijlstra
1 sibling, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-06 18:47 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, Ingo Molnar, Andrew Morton, Thomas Gleixner,
Peter Zijlstra, Frederic Weisbecker, Namhyung Kim, Oleg Nesterov,
Li Zefan, Peter Zijlstra
On Thu, 06 Feb 2014 12:39:14 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> From: Steven Rostedt <srostedt@redhat.com>
>
> The functions that assign the contents for the perf software events are
> defined by the TRACE_EVENT() macros. Each event has its own unique
> way to assign data to its buffer. When you have over 500 events,
> that means there's 500 functions assigning data uniquely for each
> event.
>
> By making helper functions in the core kernel to do the work
> instead, we can shrink the size of the kernel down a bit.
>
> With a kernel configured with 502 events, the change in size was:
>
> text data bss dec hex filename
> 12959102 1913504 9785344 24657950 178401e /tmp/vmlinux
> 12917629 1913568 9785344 24616541 1779e5d /tmp/vmlinux.patched
>
> That's a total of 41473 bytes, which comes down to 82 bytes per event.
>
> Note, most of the savings comes from moving the setup and final submit
> into helper functions, where the setup does the work and stores the
> data into a structure, and that structure is passed to the submit
> function. This moves the setup of the parameters of
> perf_trace_buf_submit() out of the inlined code.
>
> Link: http://lkml.kernel.org/r/20120810034708.589220175@goodmis.org
>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
Peter, Frederic,
Can you give an ack to this? Peter, you pretty much gave your ack before,
except for one nit:
http://marc.info/?l=linux-kernel&m=134484533217124&w=2
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> ---
> include/linux/ftrace_event.h | 17 ++++++++++++++
> include/trace/ftrace.h | 33 ++++++++++----------------
> kernel/trace/trace_event_perf.c | 51 +++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 80 insertions(+), 21 deletions(-)
>
> +
> +/**
> + * perf_trace_event_submit - submit from perf sw event
> + * @pe: perf event structure that holds all the necessary data
> + *
> + * This is a helper function that removes a lot of the setting up of
> + * the function parameters to call perf_trace_buf_submit() from the
> + * inlined code. Using the perf event structure @pe to store the
> + * information passed from perf_trace_event_setup() keeps the overhead
> + * of building the function call parameters out of the inlined functions.
> + */
> +void perf_trace_event_submit(struct perf_trace_event *pe)
> +{
> + perf_trace_buf_submit(pe->entry, pe->entry_size, pe->rctx, pe->addr,
> + pe->count, &pe->regs, pe->head, pe->task);
> +}
> +EXPORT_SYMBOL_GPL(perf_trace_event_submit);
> +
You wanted the perf_trace_buf_submit() to go away. Now I could do that,
but that would require all other users to pass in the new
perf_trace_event structure. The only reason I did that was because this
structure is set up in perf_trace_event_setup() which passes in only
the event_call and the pe structure. In the setup function, the pe
structure is assigned all the information required for
perf_trace_event_submit().
What this does is remove the function parameter setup from the
inlined tracepoint callers, which is quite a lot!
This is what a perf tracepoint currently looks like:
0000000000000b44 <perf_trace_sched_pi_setprio>:
b44: 55 push %rbp
b45: 48 89 e5 mov %rsp,%rbp
b48: 41 56 push %r14
b4a: 41 89 d6 mov %edx,%r14d
b4d: 41 55 push %r13
b4f: 49 89 fd mov %rdi,%r13
b52: 41 54 push %r12
b54: 49 89 f4 mov %rsi,%r12
b57: 53 push %rbx
b58: 48 81 ec c0 00 00 00 sub $0xc0,%rsp
b5f: 48 8b 9f 80 00 00 00 mov 0x80(%rdi),%rbx
b66: e8 00 00 00 00 callq b6b <perf_trace_sched_pi_setprio+0x27>
b67: R_X86_64_PC32 debug_smp_processor_id-0x4
b6b: 89 c0 mov %eax,%eax
b6d: 48 03 1c c5 00 00 00 add 0x0(,%rax,8),%rbx
b74: 00
b71: R_X86_64_32S __per_cpu_offset
b75: 48 83 3b 00 cmpq $0x0,(%rbx)
b79: 0f 84 92 00 00 00 je c11 <perf_trace_sched_pi_setprio+0xcd>
b7f: 48 8d bd 38 ff ff ff lea -0xc8(%rbp),%rdi
b86: e8 ab fe ff ff callq a36 <perf_fetch_caller_regs>
b8b: 41 8b 75 40 mov 0x40(%r13),%esi
b8f: 48 8d 8d 34 ff ff ff lea -0xcc(%rbp),%rcx
b96: 48 8d 95 38 ff ff ff lea -0xc8(%rbp),%rdx
b9d: bf 24 00 00 00 mov $0x24,%edi
ba2: 81 e6 ff ff 00 00 and $0xffff,%esi
ba8: e8 00 00 00 00 callq bad <perf_trace_sched_pi_setprio+0x69>
ba9: R_X86_64_PC32 perf_trace_buf_prepare-0x4
bad: 48 85 c0 test %rax,%rax
bb0: 74 5f je c11 <perf_trace_sched_pi_setprio+0xcd>
bb2: 49 8b 94 24 b0 04 00 mov 0x4b0(%r12),%rdx
bb9: 00
bba: 4c 8d 85 38 ff ff ff lea -0xc8(%rbp),%r8
bc1: 49 89 d9 mov %rbx,%r9
bc4: b9 24 00 00 00 mov $0x24,%ecx
bc9: be 01 00 00 00 mov $0x1,%esi
bce: 31 ff xor %edi,%edi
bd0: 48 89 50 08 mov %rdx,0x8(%rax)
bd4: 49 8b 94 24 b8 04 00 mov 0x4b8(%r12),%rdx
bdb: 00
bdc: 48 89 50 10 mov %rdx,0x10(%rax)
be0: 41 8b 94 24 0c 03 00 mov 0x30c(%r12),%edx
be7: 00
be8: 89 50 18 mov %edx,0x18(%rax)
beb: 41 8b 54 24 50 mov 0x50(%r12),%edx
bf0: 44 89 70 20 mov %r14d,0x20(%rax)
bf4: 89 50 1c mov %edx,0x1c(%rax)
bf7: 8b 95 34 ff ff ff mov -0xcc(%rbp),%edx
bfd: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
c04: 00 00
c06: 89 14 24 mov %edx,(%rsp)
c09: 48 89 c2 mov %rax,%rdx
c0c: e8 00 00 00 00 callq c11 <perf_trace_sched_pi_setprio+0xcd>
c0d: R_X86_64_PC32 perf_tp_event-0x4
c11: 48 81 c4 c0 00 00 00 add $0xc0,%rsp
c18: 5b pop %rbx
c19: 41 5c pop %r12
c1b: 41 5d pop %r13
c1d: 41 5e pop %r14
c1f: 5d pop %rbp
c20: c3 retq
This is what it looks like after this patch:
0000000000000ab1 <perf_trace_sched_pi_setprio>:
ab1: 55 push %rbp
ab2: 48 89 e5 mov %rsp,%rbp
ab5: 41 54 push %r12
ab7: 41 89 d4 mov %edx,%r12d
aba: 53 push %rbx
abb: 48 89 f3 mov %rsi,%rbx
abe: 48 8d b5 08 ff ff ff lea -0xf8(%rbp),%rsi
ac5: 48 81 ec f0 00 00 00 sub $0xf0,%rsp
acc: 48 c7 45 b8 00 00 00 movq $0x0,-0x48(%rbp)
ad3: 00
ad4: c7 45 e8 01 00 00 00 movl $0x1,-0x18(%rbp)
adb: c7 45 e0 24 00 00 00 movl $0x24,-0x20(%rbp)
ae2: 48 c7 45 d0 00 00 00 movq $0x0,-0x30(%rbp)
ae9: 00
aea: 48 c7 45 d8 01 00 00 movq $0x1,-0x28(%rbp)
af1: 00
af2: e8 00 00 00 00 callq af7 <perf_trace_sched_pi_setprio+0x46>
af3: R_X86_64_PC32 perf_trace_event_setup-0x4
af7: 48 85 c0 test %rax,%rax
afa: 74 35 je b31 <perf_trace_sched_pi_setprio+0x80>
afc: 48 8b 93 b0 04 00 00 mov 0x4b0(%rbx),%rdx
b03: 48 8d bd 08 ff ff ff lea -0xf8(%rbp),%rdi
b0a: 48 89 50 08 mov %rdx,0x8(%rax)
b0e: 48 8b 93 b8 04 00 00 mov 0x4b8(%rbx),%rdx
b15: 48 89 50 10 mov %rdx,0x10(%rax)
b19: 8b 93 0c 03 00 00 mov 0x30c(%rbx),%edx
b1f: 89 50 18 mov %edx,0x18(%rax)
b22: 8b 53 50 mov 0x50(%rbx),%edx
b25: 44 89 60 20 mov %r12d,0x20(%rax)
b29: 89 50 1c mov %edx,0x1c(%rax)
b2c: e8 00 00 00 00 callq b31 <perf_trace_sched_pi_setprio+0x80>
b2d: R_X86_64_PC32 perf_trace_event_submit-0x4
b31: 48 81 c4 f0 00 00 00 add $0xf0,%rsp
b38: 5b pop %rbx
b39: 41 5c pop %r12
b3b: 5d pop %rbp
b3c: c3 retq
Thus, it's not really just a wrapper function, but a function that is
paired with the tracepoint setup version.
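To make the shape of that pairing concrete, here is a minimal userspace
sketch of the idea (not the kernel's real types or prototypes — the
`sketch_*` names, the fixed 64-byte buffer, and the two-int event payload
are all stand-ins for illustration): the shared setup/submit helpers exist
once, and the per-event caller shrinks to filling a parameter struct and
doing its event-specific field assignments.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the kernel's struct perf_trace_event: the
 * per-event caller fills this in and the shared helpers consume it. */
struct sketch_trace_event {
	size_t entry_size;
	int    count;
	char   entry[64];
	int    submitted;
};

/* Shared helper: one copy in the image, replacing the setup code that
 * the TRACE_EVENT() macros used to inline into every event caller. */
static void *sketch_event_setup(struct sketch_trace_event *pe)
{
	if (pe->entry_size > sizeof(pe->entry))
		return NULL;		/* nothing to trace */
	memset(pe->entry, 0, pe->entry_size);
	return pe->entry;
}

/* Shared helper paired with the setup above. */
static void sketch_event_submit(struct sketch_trace_event *pe)
{
	pe->submitted = 1;
}

/* What remains per event: fill the struct, call setup, do the
 * event-specific field assignments, submit.  Only the assignments
 * differ between events, so this is all the macro still expands. */
static int sketch_trace_sched_pi_setprio(int oldprio, int newprio,
					 struct sketch_trace_event *pe)
{
	int *fields;

	pe->entry_size = 2 * sizeof(int);
	pe->count = 1;
	fields = sketch_event_setup(pe);
	if (!fields)
		return -1;
	fields[0] = oldprio;		/* event-specific assignments */
	fields[1] = newprio;
	sketch_event_submit(pe);
	return 0;
}
```

The savings in the disassembly above come from exactly this split: the
register shuffling and stack setup for the helper's many parameters is
paid once inside the helper instead of at every tracepoint call site.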
-- Steve
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [RFC][PATCH 4/4] perf/events: Use helper functions in event assignment to shrink macro size
2014-02-06 17:39 ` [RFC][PATCH 4/4] perf/events: " Steven Rostedt
2014-02-06 18:47 ` Steven Rostedt
@ 2014-02-12 19:58 ` Peter Zijlstra
2014-02-21 18:53 ` Steven Rostedt
1 sibling, 1 reply; 9+ messages in thread
From: Peter Zijlstra @ 2014-02-12 19:58 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, Ingo Molnar, Andrew Morton, Thomas Gleixner,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan
> +void *perf_trace_event_setup(struct ftrace_event_call *event_call,
> + struct perf_trace_event *pe)
> +{
> + pe->head = this_cpu_ptr(event_call->perf_events);
> + if (pe->constant && hlist_empty(pe->head))
> + return NULL;
> +
> + pe->entry_size = ALIGN(pe->entry_size + sizeof(u32), sizeof(u64));
> + pe->entry_size -= sizeof(u32);
> + pe->event_call = event_call;
> +
> + perf_fetch_caller_regs(&pe->regs);
I think this one is wrong, we're getting the wrong caller now.
> + pe->entry = perf_trace_buf_prepare(pe->entry_size,
> + event_call->event.type, &pe->regs, &pe->rctx);
> + return pe->entry;
> +}
> +EXPORT_SYMBOL_GPL(perf_trace_event_setup);
> +void perf_trace_event_submit(struct perf_trace_event *pe)
> +{
> + perf_trace_buf_submit(pe->entry, pe->entry_size, pe->rctx, pe->addr,
> + pe->count, &pe->regs, pe->head, pe->task);
> +}
> +EXPORT_SYMBOL_GPL(perf_trace_event_submit);
This is ridiculous, perf_trace_buf_submit() is already a pointless
wrapper, which you now wrap moar...
It would be nice if we could reduce the api clutter a little and keep
this and the [ku]probe/syscall sites similar.
* Re: [RFC][PATCH 4/4] perf/events: Use helper functions in event assignment to shrink macro size
2014-02-12 19:58 ` Peter Zijlstra
@ 2014-02-21 18:53 ` Steven Rostedt
0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-02-21 18:53 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, Ingo Molnar, Andrew Morton, Thomas Gleixner,
Frederic Weisbecker, Namhyung Kim, Oleg Nesterov, Li Zefan
On Wed, 12 Feb 2014 20:58:27 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
> > +void *perf_trace_event_setup(struct ftrace_event_call *event_call,
> > + struct perf_trace_event *pe)
> > +{
> > + pe->head = this_cpu_ptr(event_call->perf_events);
> > + if (pe->constant && hlist_empty(pe->head))
> > + return NULL;
> > +
> > + pe->entry_size = ALIGN(pe->entry_size + sizeof(u32), sizeof(u64));
> > + pe->entry_size -= sizeof(u32);
> > + pe->event_call = event_call;
> > +
> > + perf_fetch_caller_regs(&pe->regs);
>
> I think this one is wrong, we're getting the wrong caller now.
Ah, I didn't see the magic CALLER_ADDR() usage there. I guess I could
pass that into this function too. But I think you had a fix for this,
so I'll wait.
>
> > + pe->entry = perf_trace_buf_prepare(pe->entry_size,
> > + event_call->event.type, &pe->regs, &pe->rctx);
> > + return pe->entry;
> > +}
> > +EXPORT_SYMBOL_GPL(perf_trace_event_setup);
>
>
> > +void perf_trace_event_submit(struct perf_trace_event *pe)
> > +{
> > + perf_trace_buf_submit(pe->entry, pe->entry_size, pe->rctx, pe->addr,
> > + pe->count, &pe->regs, pe->head, pe->task);
> > +}
> > +EXPORT_SYMBOL_GPL(perf_trace_event_submit);
>
> This is ridiculous, perf_trace_buf_submit() is already a pointless
> wrapper, which you now wrap moar...
I did a couple of git greps and found this:
$ git grep perf_trace_buf_submit
include/linux/ftrace_event.h:perf_trace_buf_submit(void *raw_data, int size, int rctx, u64 addr,
include/trace/ftrace.h: perf_trace_buf_submit(entry, __entry_size, rctx, __addr, \
kernel/trace/trace_event_perf.c: perf_trace_buf_submit(entry, ENTRY_SIZE, rctx, 0,
kernel/trace/trace_kprobe.c: perf_trace_buf_submit(entry, size, rctx, 0, 1, regs, head, NULL);
kernel/trace/trace_kprobe.c: perf_trace_buf_submit(entry, size, rctx, 0, 1, regs, head, NULL);
kernel/trace/trace_syscalls.c: perf_trace_buf_submit(rec, size, rctx, 0, 1, regs, head, NULL);
kernel/trace/trace_syscalls.c: perf_trace_buf_submit(rec, size, rctx, 0, 1, regs, head, NULL);
kernel/trace/trace_uprobe.c: perf_trace_buf_submit(entry, size, rctx, 0, 1, regs, head, NULL);
$ git grep perf_tp_event
include/linux/ftrace_event.h: perf_tp_event(addr, count, raw_data, size, regs, head, rctx, task);
include/linux/perf_event.h:extern void perf_tp_event(u64 addr, u64 count, void *record,
kernel/events/core.c:static int perf_tp_event_match(struct perf_event *event,
kernel/events/core.c:void perf_tp_event(u64 addr, u64 count, void *record, int entry_size,
kernel/events/core.c: if (perf_tp_event_match(event, &data, regs))
kernel/events/core.c: if (perf_tp_event_match(event, &data, regs))
kernel/events/core.c:EXPORT_SYMBOL_GPL(perf_tp_event);
kernel/events/core.c:static int perf_tp_event_init(struct perf_event *event)
kernel/events/core.c: .event_init = perf_tp_event_init,
As perf_tp_event() is only used internally, should we just rename it to
perf_trace_buf_submit, and remove the static inline of it?
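[The static inline in question is a pure forwarding shim, so the rename
would leave every call site in the grep output above unchanged. A sketch
of the pattern in plain C — the names and the one-line body here are
simplified stand-ins, not the kernel's real prototypes:]

```c
/* The exported function ("perf_tp_event" in the kernel). */
static void real_submit(char *record, int size)
{
	record[0] = (char)size;		/* placeholder for the real work */
}

/* The header-level static inline ("perf_trace_buf_submit"): it adds no
 * logic, so renaming real_submit() to this name and deleting the shim
 * drops one API layer without touching any caller. */
static inline void wrapper_submit(char *record, int size)
{
	real_submit(record, size);
}
```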
>
> It would be nice if we could reduce the api clutter a little and keep
> this and the [ku]probe/syscall sites similar.
Yeah, looks like this can use a bigger cleanup.
I think I'll just put my first three patches in my 3.15 queue, and we
can work on fixing the perf code for 3.16.
-- Steve
* [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size
@ 2012-08-10 3:43 Steven Rostedt
0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2012-08-10 3:43 UTC (permalink / raw)
To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker
This patch set is another effort to bring down the size of trace events.
Helper functions are used to remove duplicate code that is created
for each event by the TRACE_EVENT() macros. With a config that enables
707 events in the core kernel, the size is brought down 91,774 bytes!
Steven Rostedt (4):
tracing: Move print code from macro to standalone function
tracing: Move event storage for array from macro to standalone function
tracing: Use helper functions in event assignment to shrink macro size
perf/events: Use helper functions in event assignment to shrink macro size
----
include/linux/ftrace_event.h | 40 +++++++++++++++--
include/trace/ftrace.h | 85 +++++++++---------------------------
kernel/trace/trace_event_perf.c | 26 +++++++++++
kernel/trace/trace_events.c | 6 ---
kernel/trace/trace_export.c | 12 ++----
kernel/trace/trace_output.c | 90 +++++++++++++++++++++++++++++++++++++++
6 files changed, 176 insertions(+), 83 deletions(-)
end of thread, other threads:[~2014-02-21 18:54 UTC | newest]
Thread overview: 9+ messages
-- links below jump to the message on this page --
2014-02-06 17:39 [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 1/4] tracing: Move raw output code from macro to standalone function Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 2/4] tracing: Move event storage for array " Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 3/4] tracing: Use helper functions in event assignment to shrink macro size Steven Rostedt
2014-02-06 17:39 ` [RFC][PATCH 4/4] perf/events: " Steven Rostedt
2014-02-06 18:47 ` Steven Rostedt
2014-02-12 19:58 ` Peter Zijlstra
2014-02-21 18:53 ` Steven Rostedt
-- strict thread matches above, loose matches on Subject: below --
2012-08-10 3:43 [RFC][PATCH 0/4] tracing/perf: Use helper functions to help shrink kernel size Steven Rostedt