* [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
@ 2013-06-18 19:21 Oleg Nesterov
  2013-06-18 19:22 ` [PATCH 1/3] tracing/perf: expand TRACE_EVENT(sched_stat_runtime) Oleg Nesterov
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-18 19:21 UTC (permalink / raw)
  To: Steven Rostedt, Peter Zijlstra
  Cc: Frederic Weisbecker, Ingo Molnar, Masami Hiramatsu,
	Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

Hello.

On top of "PATCH 0/3] tracing: more list_empty(perf_events) checks"
series I sent yesterday.

Compile tested only, not for inclusion yet.

But I'd appreciate it if you could take a look. I'll try to test this
tomorrow somehow and let you know. Right now I am looking at the asm
code, and it looks correct...

I also compiled the kernel with the additional patch below, everything
compiles except sched/core.o as expected.

Oleg.

--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -664,6 +664,8 @@ perf_trace_##call(void *__data, proto)					\
 									\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
 									\
+	BUILD_BUG_ON(!(__builtin_constant_p(!__task) && !__task));	\
+									\
 	head = this_cpu_ptr(event_call->perf_events);			\
 	if (__builtin_constant_p(!__task) && !__task &&			\
 				hlist_empty(head))			\



* [PATCH 1/3] tracing/perf: expand TRACE_EVENT(sched_stat_runtime)
  2013-06-18 19:21 [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Oleg Nesterov
@ 2013-06-18 19:22 ` Oleg Nesterov
  2013-06-18 19:22 ` [PATCH 2/3] tracing/perf: reimplement TP_perf_assign() logic Oleg Nesterov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-18 19:22 UTC (permalink / raw)
  To: Steven Rostedt, Peter Zijlstra
  Cc: Frederic Weisbecker, Ingo Molnar, Masami Hiramatsu,
	Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

To simplify the review of the next patch:

1. We are going to reimplement __perf_task/counter and embed them
   into TP_ARGS(). Expand TRACE_EVENT(sched_stat_runtime) into
   DECLARE_EVENT_CLASS() + DEFINE_EVENT(), so that they can use
   different TP_ARGS's.

2. Change the perf_trace_##call() macro to do perf_fetch_caller_regs()
   right before perf_trace_buf_prepare(). This way it evaluates
   "args" as soon as possible.

   Note: after commit 87f44bbc, perf_trace_buf_prepare() doesn't need
   "struct pt_regs *regs", so perhaps it makes sense to remove this
   argument. And perhaps we can teach perf_trace_buf_submit()
   to accept regs == NULL and do fetch_caller_regs(CALLER_ADDR1)
   in this case.

3. Cosmetic, but the typecast from "void *" buys nothing. It just
   adds noise; remove it.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 include/trace/events/sched.h |    6 +++++-
 include/trace/ftrace.h       |    7 +++----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index e5586ca..249c024 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -372,7 +372,7 @@ DEFINE_EVENT(sched_stat_template, sched_stat_blocked,
  * Tracepoint for accounting runtime (time the task is executing
  * on a CPU).
  */
-TRACE_EVENT(sched_stat_runtime,
+DECLARE_EVENT_CLASS(sched_stat_runtime,
 
 	TP_PROTO(struct task_struct *tsk, u64 runtime, u64 vruntime),
 
@@ -401,6 +401,10 @@ TRACE_EVENT(sched_stat_runtime,
 			(unsigned long long)__entry->vruntime)
 );
 
+DEFINE_EVENT(sched_stat_runtime, sched_stat_runtime,
+	     TP_PROTO(struct task_struct *tsk, u64 runtime, u64 vruntime),
+	     TP_ARGS(tsk, runtime, vruntime));
+
 /*
  * Tracepoint for showing priority inheritance modifying a tasks
  * priority.
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index c162a57..aed594a 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -659,15 +659,14 @@ perf_trace_##call(void *__data, proto)					\
 	int __data_size;						\
 	int rctx;							\
 									\
-	perf_fetch_caller_regs(&__regs);				\
-									\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
 	__entry_size = ALIGN(__data_size + sizeof(*entry) + sizeof(u32),\
 			     sizeof(u64));				\
 	__entry_size -= sizeof(u32);					\
 									\
-	entry = (struct ftrace_raw_##call *)perf_trace_buf_prepare(	\
-		__entry_size, event_call->event.type, &__regs, &rctx);	\
+	perf_fetch_caller_regs(&__regs);				\
+	entry = perf_trace_buf_prepare(__entry_size,			\
+			event_call->event.type, &__regs, &rctx);	\
 	if (!entry)							\
 		return;							\
 									\
-- 
1.5.5.1



* [PATCH 2/3] tracing/perf: reimplement TP_perf_assign() logic
  2013-06-18 19:21 [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Oleg Nesterov
  2013-06-18 19:22 ` [PATCH 1/3] tracing/perf: expand TRACE_EVENT(sched_stat_runtime) Oleg Nesterov
@ 2013-06-18 19:22 ` Oleg Nesterov
  2013-06-18 19:22 ` [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible Oleg Nesterov
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-18 19:22 UTC (permalink / raw)
  To: Steven Rostedt, Peter Zijlstra
  Cc: Frederic Weisbecker, Ingo Molnar, Masami Hiramatsu,
	Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

TP_perf_assign()/__perf_xxx() are used to change the default values
of the __addr/__count/__task variables for perf_trace_buf_submit().

Unfortunately, TP_perf_assign() is called "too late"; we want to
have a fast-path "__task != NULL" check in perf_trace_##call() at
the start. So this patch simply embeds __perf_xxx() into TP_ARGS();
this way DECLARE_EVENT_CLASS() can use the result of the assignments,
hidden in "args", right after ftrace_get_offsets_##call(), which is
mostly trivial.
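
Just for illustration (this is not the kernel code, only a self-contained
userspace sketch that borrows the macro names): the same TP_ARGS() text
either passes the value through or records it as a side effect, depending
on which definition of __perf_count() is in scope, so the value is
captured as soon as the argument list is evaluated:

	#include <stdio.h>

	static unsigned long long __count = 1;	/* default, as in perf_trace_##call() */

	/* perf flavour; the ftrace flavour is simply "#define __perf_count(c) (c)" */
	#define __perf_count(c)	(__count = (c))

	#define TP_ARGS(args...)	args

	/* stands in for ftrace_get_offsets_##call(..., args) */
	static void eval_args(unsigned long long runtime, unsigned long long vruntime)
	{
		(void)runtime;
		(void)vruntime;
	}

	static void perf_probe(unsigned long long runtime, unsigned long long vruntime)
	{
		/* evaluating the argument list is what sets __count */
		eval_args(TP_ARGS(__perf_count(runtime), vruntime));
		printf("__count = %llu, early enough for a fast-path check\n", __count);
	}

	int main(void)
	{
		perf_probe(42, 1000);	/* prints: __count = 42, ... */
		return 0;
	}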

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 include/trace/events/sched.h |   16 +++-------------
 include/trace/ftrace.h       |   19 +++++++++++--------
 2 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 249c024..2e7d994 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -57,7 +57,7 @@ DECLARE_EVENT_CLASS(sched_wakeup_template,
 
 	TP_PROTO(struct task_struct *p, int success),
 
-	TP_ARGS(p, success),
+	TP_ARGS(__perf_task(p), success),
 
 	TP_STRUCT__entry(
 		__array(	char,	comm,	TASK_COMM_LEN	)
@@ -73,9 +73,6 @@ DECLARE_EVENT_CLASS(sched_wakeup_template,
 		__entry->prio		= p->prio;
 		__entry->success	= success;
 		__entry->target_cpu	= task_cpu(p);
-	)
-	TP_perf_assign(
-		__perf_task(p);
 	),
 
 	TP_printk("comm=%s pid=%d prio=%d success=%d target_cpu=%03d",
@@ -313,7 +310,7 @@ DECLARE_EVENT_CLASS(sched_stat_template,
 
 	TP_PROTO(struct task_struct *tsk, u64 delay),
 
-	TP_ARGS(tsk, delay),
+	TP_ARGS(__perf_task(tsk), __perf_count(delay)),
 
 	TP_STRUCT__entry(
 		__array( char,	comm,	TASK_COMM_LEN	)
@@ -325,10 +322,6 @@ DECLARE_EVENT_CLASS(sched_stat_template,
 		memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
 		__entry->pid	= tsk->pid;
 		__entry->delay	= delay;
-	)
-	TP_perf_assign(
-		__perf_count(delay);
-		__perf_task(tsk);
 	),
 
 	TP_printk("comm=%s pid=%d delay=%Lu [ns]",
@@ -376,7 +369,7 @@ DECLARE_EVENT_CLASS(sched_stat_runtime,
 
 	TP_PROTO(struct task_struct *tsk, u64 runtime, u64 vruntime),
 
-	TP_ARGS(tsk, runtime, vruntime),
+	TP_ARGS(tsk, __perf_count(runtime), vruntime),
 
 	TP_STRUCT__entry(
 		__array( char,	comm,	TASK_COMM_LEN	)
@@ -390,9 +383,6 @@ DECLARE_EVENT_CLASS(sched_stat_runtime,
 		__entry->pid		= tsk->pid;
 		__entry->runtime	= runtime;
 		__entry->vruntime	= vruntime;
-	)
-	TP_perf_assign(
-		__perf_count(runtime);
 	),
 
 	TP_printk("comm=%s pid=%d runtime=%Lu [ns] vruntime=%Lu [ns]",
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index aed594a..8886877 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -503,8 +503,14 @@ static inline notrace int ftrace_get_offsets_##call(			\
 #undef TP_fast_assign
 #define TP_fast_assign(args...) args
 
-#undef TP_perf_assign
-#define TP_perf_assign(args...)
+#undef __perf_addr
+#define __perf_addr(a)	(a)
+
+#undef __perf_count
+#define __perf_count(c)	(c)
+
+#undef __perf_task
+#define __perf_task(t)	(t)
 
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
@@ -632,16 +638,13 @@ __attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
 #define __get_str(field) (char *)__get_dynamic_array(field)
 
 #undef __perf_addr
-#define __perf_addr(a) __addr = (a)
+#define __perf_addr(a)	(__addr = (a))
 
 #undef __perf_count
-#define __perf_count(c) __count = (c)
+#define __perf_count(c)	(__count = (c))
 
 #undef __perf_task
-#define __perf_task(t) __task = (t)
-
-#undef TP_perf_assign
-#define TP_perf_assign(args...) args
+#define __perf_task(t)	(__task = (t))
 
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
-- 
1.5.5.1



* [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
  2013-06-18 19:21 [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Oleg Nesterov
  2013-06-18 19:22 ` [PATCH 1/3] tracing/perf: expand TRACE_EVENT(sched_stat_runtime) Oleg Nesterov
  2013-06-18 19:22 ` [PATCH 2/3] tracing/perf: reimplement TP_perf_assign() logic Oleg Nesterov
@ 2013-06-18 19:22 ` Oleg Nesterov
  2013-06-18 20:02   ` Steven Rostedt
  2013-06-19 12:10 ` [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Peter Zijlstra
  2013-07-18  3:06 ` Steven Rostedt
  4 siblings, 1 reply; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-18 19:22 UTC (permalink / raw)
  To: Steven Rostedt, Peter Zijlstra
  Cc: Frederic Weisbecker, Ingo Molnar, Masami Hiramatsu,
	Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

perf_trace_buf_prepare() + perf_trace_buf_submit(task => NULL)
make no sense if hlist_empty(head). Change perf_trace_##call()
to check ->perf_events beforehand and do nothing if it is empty.

However, we can only do this if __task == NULL, so we also add
the __builtin_constant_p(__task) check.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 include/trace/ftrace.h |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 8886877..04455b8 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -663,6 +663,12 @@ perf_trace_##call(void *__data, proto)					\
 	int rctx;							\
 									\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
+									\
+	head = this_cpu_ptr(event_call->perf_events);			\
+	if (__builtin_constant_p(!__task) && !__task &&			\
+				hlist_empty(head))			\
+		return;							\
+									\
 	__entry_size = ALIGN(__data_size + sizeof(*entry) + sizeof(u32),\
 			     sizeof(u64));				\
 	__entry_size -= sizeof(u32);					\
@@ -677,7 +683,6 @@ perf_trace_##call(void *__data, proto)					\
 									\
 	{ assign; }							\
 									\
-	head = this_cpu_ptr(event_call->perf_events);			\
 	perf_trace_buf_submit(entry, __entry_size, rctx, __addr,	\
 		__count, &__regs, head, __task);			\
 }
-- 
1.5.5.1



* Re: [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
  2013-06-18 19:22 ` [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible Oleg Nesterov
@ 2013-06-18 20:02   ` Steven Rostedt
  2013-06-19 18:12     ` Oleg Nesterov
  0 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2013-06-18 20:02 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Tue, 2013-06-18 at 21:22 +0200, Oleg Nesterov wrote:
> perf_trace_buf_prepare() + perf_trace_buf_submit(task => NULL)
> make no sense if hlist_empty(head). Change perf_trace_##call()
> to check ->perf_events beforehand and do nothing if it is empty.
> 
> However, we can only do this if __task == NULL, so we also add
> the __builtin_constant_p(__task) check.
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
> ---
>  include/trace/ftrace.h |    7 ++++++-
>  1 files changed, 6 insertions(+), 1 deletions(-)
> 
> diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
> index 8886877..04455b8 100644
> --- a/include/trace/ftrace.h
> +++ b/include/trace/ftrace.h
> @@ -663,6 +663,12 @@ perf_trace_##call(void *__data, proto)					\
>  	int rctx;							\
>  									\
>  	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
> +									\
> +	head = this_cpu_ptr(event_call->perf_events);			\
> +	if (__builtin_constant_p(!__task) && !__task &&			\


I'm trying to wrap my head around this:

  __builtin_constant_p(!task)

is this the same as:

  !__builtin_constant_p(task)

Or is it the same as:

  __builtin_constant_p(task)

?

Because that '!' is confusing the heck out of me.

If !task is a constant, wouldn't task be a constant too, and if task is
not a constant then I would also assume !task is not a constant as well.

If this is the case, can we nuke the '!' from the __builtin_constant_p()?
The code is confusing enough as is. Or is it that the code is very
confusing and, in keeping with the coding style, you are trying to come
up with new ways of adding to the confusion?

Or is this your way to confuse me as much as my code has confused
you? ;-)

-- Steve

> +				hlist_empty(head))			\
> +		return;							\
> +									\
>  	__entry_size = ALIGN(__data_size + sizeof(*entry) + sizeof(u32),\
>  			     sizeof(u64));				\
>  	__entry_size -= sizeof(u32);					\
> @@ -677,7 +683,6 @@ perf_trace_##call(void *__data, proto)					\
>  									\
>  	{ assign; }							\
>  									\
> -	head = this_cpu_ptr(event_call->perf_events);			\
>  	perf_trace_buf_submit(entry, __entry_size, rctx, __addr,	\
>  		__count, &__regs, head, __task);			\
>  }




* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-18 19:21 [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Oleg Nesterov
                   ` (2 preceding siblings ...)
  2013-06-18 19:22 ` [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible Oleg Nesterov
@ 2013-06-19 12:10 ` Peter Zijlstra
  2013-06-19 15:28   ` Oleg Nesterov
  2013-07-18  3:06 ` Steven Rostedt
  4 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2013-06-19 12:10 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Steven Rostedt, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Tue, Jun 18, 2013 at 09:21:47PM +0200, Oleg Nesterov wrote:
> Hello.
> 
> On top of "PATCH 0/3] tracing: more list_empty(perf_events) checks"
> series I sent yesterday.
> 
> Compile tested only, not for inclusion yet.
> 
> But I'd appreciate it if you could take a look. I'll try to test this
> tomorrow somehow and let you know. Right now I am looking at the asm
> code, and it looks correct...
> 
> I also compiled the kernel with the additional patch below, everything
> compiles except sched/core.o as expected.
> 

I'm probably missing something obvious, but what are we trying to do?


* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-19 12:10 ` [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Peter Zijlstra
@ 2013-06-19 15:28   ` Oleg Nesterov
  2013-06-19 17:51     ` Oleg Nesterov
  0 siblings, 1 reply; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-19 15:28 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Steven Rostedt, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 06/19, Peter Zijlstra wrote:
>
> I'm probably missing something obvious, but what are we trying to do?

Say, "perf record -e sched:sched_switch -p1".

Every task except /sbin/init will do perf_trace_sched_switch() and
perf_trace_buf_prepare() + perf_trace_buf_submit() for no reason;
it doesn't have a counter.

So it makes sense to add the fast-path check at the start of
perf_trace_##call(),

	if (hlist_empty(event_call->perf_events))
		return;

The problem is, we should not do this if __task != NULL (iow, if
DECLARE_EVENT_CLASS() uses __perf_task()); perf_tp_event() has
additional code for this case.

So we should do

	if (!__task && hlist_empty(event_call->perf_events))
		return;

But __task is changed by the "{ assign; }" block right before
perf_trace_buf_submit(). That is too late for the fast-path check;
we have already called perf_trace_buf_prepare/fetch_regs.

So, after 2/3, __perf_task() (and __perf_count/addr) is called
when ftrace_get_offsets_##call(args) evaluates the arguments,
and we can check !__task && hlist_empty() right after that.

Oleg.



* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-19 15:28   ` Oleg Nesterov
@ 2013-06-19 17:51     ` Oleg Nesterov
  2013-06-19 18:50       ` David Ahern
  0 siblings, 1 reply; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-19 17:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Steven Rostedt, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 06/19, Oleg Nesterov wrote:
>
> On 06/19, Peter Zijlstra wrote:
> >
> > I'm probably missing something obvious, but what are we trying to do?
>
> Say, "perf record -e sched:sched_switch -p1".
>
> Every task except /sbin/init will do perf_trace_sched_switch() and
> perf_trace_buf_prepare() + perf_trace_buf_submit() for no reason;
> it doesn't have a counter.

I did some testing under kvm, not sure these numbers actually mean
something, but still.

So, the test-case:

	#include <unistd.h>
	#include <assert.h>
	#include <pthread.h>

	int pipe1[2], pipe2[2];

	void *tfunc(void *arg)
	{
		for (;;) {
			char c;
			assert(read(pipe1[0], &c, 1) == 1);
			assert(write(pipe2[1], &c, 1) == 1);
		}
	}

	int main(void)
	{
		pthread_t thr;
		int nr;

		assert(pipe(pipe1) == 0);
		assert(pipe(pipe2) == 0);

		assert(pthread_create(&thr, NULL, tfunc, NULL) == 0);

		for (nr = 0; nr < 1000 * 1000; ++nr) {
			char c;

			assert(write(pipe1[1], &c, 1) == 1);
			assert(read(pipe2[0], &c, 1) == 1);
		}

		return 0;
	}

Idle machine, "/usr/bin/time -f "%e %S %U" taskset 1 ./pf" 3 times:

	20.73 20.05 0.66
	20.68 20.04 0.63
	20.68 20.02 0.65

Now with "perf record -e sched:sched_switch -p1" running,

before 3/3:

	21.59 20.77 0.80
	21.40 20.70 0.68
	21.50 20.72 0.78

after 3/3:

	21.00 20.23 0.76
	20.89 20.19 0.69
	20.94 20.26 0.66

Oleg.



* Re: [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
  2013-06-18 20:02   ` Steven Rostedt
@ 2013-06-19 18:12     ` Oleg Nesterov
  2013-06-19 18:24       ` Steven Rostedt
  0 siblings, 1 reply; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-19 18:12 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 06/18, Steven Rostedt wrote:
>
> On Tue, 2013-06-18 at 21:22 +0200, Oleg Nesterov wrote:
> > @@ -663,6 +663,12 @@ perf_trace_##call(void *__data, proto)					\
> >  	int rctx;							\
> >  									\
> >  	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
> > +									\
> > +	head = this_cpu_ptr(event_call->perf_events);			\
> > +	if (__builtin_constant_p(!__task) && !__task &&			\
>
>
> I'm trying to wrap my head around this:
>
>   __builtin_constant_p(!task)
>
> is this the same as:
>
>   !__builtin_constant_p(task)
>
> Or is it the same as:
>
>   __builtin_constant_p(task)
>
> ?
>
> Because that '!' is confusing the heck out of me.
>
> If !task is a constant, wouldn't task be a constant too, and if task is
> not a constant then I would also assume !task is not a constant as well.

!__task looks more explicit/symmetrical to me. We need

	if (is_compile_time_true(!__task) && hlist_empty(head))
		return;

is_compile_time_true(cond) could be defined as

	__builtin_constant_p(cond) && (cond)
or
	__builtin_constant_p(!cond) && (cond)

but the 1st one looks cleaner.
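
FWIW, here is a minimal userspace sketch (illustrative only; it needs
-O1 or higher, and the folding is ultimately up to gcc) of why the early
return is kept only when the event class never assigns __task:

	#include <stdio.h>

	/* mirrors the shape of perf_trace_##call(): __task starts out NULL
	 * and is only assigned when TP_ARGS() uses __perf_task() */
	#define PROBE(assign, out)					\
	do {								\
		void *__task = NULL;					\
		assign;							\
		if (__builtin_constant_p(!__task) && !__task)		\
			out = 1;	/* check folded: fast path */	\
		else							\
			out = 0;	/* __task may be set */		\
	} while (0)

	int main(void)
	{
		void *p = &p;
		int no_task, has_task;

		PROBE((void)0, no_task);	/* no __perf_task() user */
		PROBE(__task = p, has_task);	/* a __perf_task(p) user */

		printf("no task: %d, has task: %d\n", no_task, has_task);
		return 0;
	}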

However,

> If this is the case, can we nuke the '!' from the builtin_consant_p().

OK, I do not really mind, will do.

And,

> Or is this your way to confuse me as much as my code has confused
> you? ;-)

Of course! this was the main reason.


Steven, I convinced myself the patch should be correct. If you agree with
this hack:

	- anything else I should do apart from the change above?

	- should I resend the previous "[PATCH 0/3] tracing: more
	  list_empty(perf_events) checks" series?

	  This series depends on "[PATCH 3/3] tracing/perf: Move the
	  PERF_MAX_TRACE_SIZE check into perf_trace_buf_prepare()".

	  Or I can drop this patch if you do not like it and rediff.

	  Just in case, there are other pending patches in trace_kprobe.c
	  which I am going to resend, but they are orthogonal.

Oleg.



* Re: [PATCH 3/3] tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
  2013-06-19 18:12     ` Oleg Nesterov
@ 2013-06-19 18:24       ` Steven Rostedt
  0 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2013-06-19 18:24 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Wed, 2013-06-19 at 20:12 +0200, Oleg Nesterov wrote:

> > Or is this your way to confuse me as much as my code has confused
> > you? ;-)
> 
> Of course! this was the main reason.

I knew it!

> 
> 
> Steven, I convinced myself the patch should be correct. If you agree with
> this hack:
> 
> 	- anything else I should do apart from the change above?
> 
> 	- should I resend the previous "[PATCH 0/3] tracing: more
> 	  list_empty(perf_events) checks" series?
> 
> 	  This series depends on "[PATCH 3/3] tracing/perf: Move the
> 	  PERF_MAX_TRACE_SIZE check into perf_trace_buf_prepare()".
> 
> 	  Or I can drop this patch if you do not like it and rediff.
> 
> 	  Just in case, there are other pending patches in trace_kprobe.c
> 	  which I am going to resend, but they are orthogonal.

I'll pull in the patches and play with them. I'll let you know what I
find.

Thanks,


-- Steve




* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-19 17:51     ` Oleg Nesterov
@ 2013-06-19 18:50       ` David Ahern
  2013-06-19 19:58         ` Oleg Nesterov
  0 siblings, 1 reply; 19+ messages in thread
From: David Ahern @ 2013-06-19 18:50 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Peter Zijlstra, Steven Rostedt, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 6/19/13 11:51 AM, Oleg Nesterov wrote:
> On 06/19, Oleg Nesterov wrote:
>>
>> On 06/19, Peter Zijlstra wrote:
>>>
>>> I'm probably missing something obvious, but what are we trying to do?
>>
>> Say, "perf record -e sched:sched_switch -p1".
>>
>> Every task except /sbin/init will do perf_trace_sched_switch() and
>> perf_trace_buf_prepare() + perf_trace_buf_submit() for no reason;
>> it doesn't have a counter.
>
> I did some testing under kvm, not sure these numbers actually mean
> something, but still.
>
> So, the test-case:
>
> 	int pipe1[2], pipe2[2];
>
> 	void *tfunc(void *arg)
> 	{
> 		for (;;) {
> 			char c;
> 			assert(read(pipe1[0], &c, 1) == 1);
> 			assert(write(pipe2[1], &c, 1) == 1);
> 		}
> 	}
>
> 	int main(void)
> 	{
> 		pthread_t thr;
> 		int nr;
>
> 		assert(pipe(pipe1) == 0);
> 		assert(pipe(pipe2) == 0);
>
> 		assert(pthread_create(&thr, NULL, tfunc, NULL) == 0);
>
> 		for (nr = 0; nr < 1000 * 1000; ++nr) {
> 			char c;
>
> 			assert(write(pipe1[1], &c, 1) == 1);
> 			assert(read(pipe2[0], &c, 1) == 1);
> 		}
>
> 		return 0;
> 	}

Same as "perf bench sched pipe"

David


>
> Idle machine, "/usr/bin/time -f "%e %S %U" taskset 1 ./pf" 3 times:
>
> 	20.73 20.05 0.66
> 	20.68 20.04 0.63
> 	20.68 20.02 0.65
>
> Now with "perf record -e sched:sched_switch -p1" running,
>
> before 3/3:
>
> 	21.59 20.77 0.80
> 	21.40 20.70 0.68
> 	21.50 20.72 0.78
>
> after 3/3:
>
> 	21.00 20.23 0.76
> 	20.89 20.19 0.69
> 	20.94 20.26 0.66
>
> Oleg.
>



* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-19 18:50       ` David Ahern
@ 2013-06-19 19:58         ` Oleg Nesterov
  2013-06-20 18:23           ` Steven Rostedt
  0 siblings, 1 reply; 19+ messages in thread
From: Oleg Nesterov @ 2013-06-19 19:58 UTC (permalink / raw)
  To: David Ahern
  Cc: Peter Zijlstra, Steven Rostedt, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 06/19, David Ahern wrote:
>
> On 6/19/13 11:51 AM, Oleg Nesterov wrote:
>>
>> not sure these numbers actually mean
>> something, but still.

Yes.

>> So, the test-case:
>>
>> 	int pipe1[2], pipe2[2];
>
> Same as "perf bench sched pipe"

You just cruelly disclosed the fact that I do not use perf.

Thanks. So,

	# perf record -e sched:sched_switch -p1 &
	[1] 516
	# perf bench sched pipe

3 times.

before:

     Total time: 30.119 [sec]

      30.119501 usecs/op
          33201 ops/sec

     Total time: 30.634 [sec]

      30.634105 usecs/op
          32643 ops/sec

     Total time: 30.100 [sec]

      30.100209 usecs/op
          33222 ops/sec


after:

     Total time: 29.645 [sec]

      29.645941 usecs/op
          33731 ops/sec

     Total time: 29.759 [sec]

      29.759075 usecs/op
          33603 ops/sec

     Total time: 29.803 [sec]

      29.803522 usecs/op
          33553 ops/sec

Hmm. Actually sched-pipe.c is a bit more "heavy": it does switch_mm().
And I used taskset. But it seems that this test-case shows similar
results.

Oleg.



* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-19 19:58         ` Oleg Nesterov
@ 2013-06-20 18:23           ` Steven Rostedt
  2013-06-20 18:35             ` David Ahern
  0 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2013-06-20 18:23 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: David Ahern, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Wed, 2013-06-19 at 21:58 +0200, Oleg Nesterov wrote:
> On 06/19, David Ahern wrote:
> >
> > On 6/19/13 11:51 AM, Oleg Nesterov wrote:
> >>
> >> not sure these numbers actually mean
> >> something, but still.
> 
> Yes.
> 
> >> So, the test-case:
> >>
> >> 	int pipe1[2], pipe2[2];
> >
> > Same as "perf bench sched pipe"
> 
> You just cruelly disclosed the fact that I do not use perf.
> 
> Thanks. So,
> 
> 	# perf record -e sched:sched_switch -p1 &
> 	[1] 516
> 	# perf bench sched pipe
> 
> 3 times.
> 
> before:
> 
>      Total time: 30.119 [sec]
> 
>       30.119501 usecs/op
>           33201 ops/sec
> 
>      Total time: 30.634 [sec]
> 
>       30.634105 usecs/op
>           32643 ops/sec
> 
>      Total time: 30.100 [sec]
> 
>       30.100209 usecs/op
>           33222 ops/sec
> 
> 
> after:
> 
>      Total time: 29.645 [sec]
> 
>       29.645941 usecs/op
>           33731 ops/sec
> 
>      Total time: 29.759 [sec]
> 
>       29.759075 usecs/op
>           33603 ops/sec
> 
>      Total time: 29.803 [sec]
> 
>       29.803522 usecs/op
>           33553 ops/sec
> 
> Hmm. Actually sched-pipe.c is a bit more "heavy", it does switch_mm().
> And I used taskset. But it seems that this test-case shows the similar
> results.
> 

OK, I tested this against 3.10-rc6 and then applied your patches (I had
to modify them a little because they didn't apply cleanly).

I ran this:  

 perf stat --repeat 100 -- perf bench sched pipe > /tmp/perf-bench-sched.{before, after}

before:

# tail -20 perf-bench-sched.before 
      24.115329 usecs/op
          41467 ops/sec


 Performance counter stats for 'perf bench sched pipe' (100 runs):

      17851.057092 task-clock                #    0.741 CPUs utilized            ( +-  0.03% )
         1,996,681 context-switches          #    0.112 M/sec                    ( +-  0.00% )
                61 cpu-migrations            #    0.003 K/sec                    ( +-  2.13% )
             1,248 page-faults               #    0.070 K/sec                    ( +-  0.01% )
    29,738,460,230 cycles                    #    1.666 GHz                      ( +-  0.03% ) [50.91%]
   <not supported> stalled-cycles-frontend 
   <not supported> stalled-cycles-backend  
    22,108,278,276 instructions              #    0.74  insns per cycle          ( +-  0.01% ) [76.35%]
     5,275,965,301 branches                  #  295.555 M/sec                    ( +-  0.00% ) [74.14%]
        69,232,340 branch-misses             #    1.31% of all branches          ( +-  0.19% ) [74.95%]

      24.089150300 seconds time elapsed                                          ( +-  0.02% )


after:

# tail -20 perf-bench-sched.after 
      24.170945 usecs/op
          41371 ops/sec


 Performance counter stats for 'perf bench sched pipe' (100 runs):

      18060.703178 task-clock                #    0.747 CPUs utilized            ( +-  0.02% )
         1,996,865 context-switches          #    0.111 M/sec                    ( +-  0.00% )
                63 cpu-migrations            #    0.003 K/sec                    ( +-  3.07% )
             1,248 page-faults               #    0.069 K/sec                    ( +-  0.01% )
    29,596,801,452 cycles                    #    1.639 GHz                      ( +-  0.02% ) [49.13%]
   <not supported> stalled-cycles-frontend 
   <not supported> stalled-cycles-backend  
    22,033,684,587 instructions              #    0.74  insns per cycle          ( +-  0.01% ) [73.34%]
     5,281,256,193 branches                  #  292.417 M/sec                    ( +-  0.00% ) [75.84%]
        66,966,995 branch-misses             #    1.27% of all branches          ( +-  0.22% ) [75.04%]

      24.183738898 seconds time elapsed                                          ( +-  0.01% )



Maybe I did something wrong, but on this box, I didn't see any
significant improvement with the patches. Note, I did the test before
applying all patches, and then again after applying all patches.

-- Steve




* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-20 18:23           ` Steven Rostedt
@ 2013-06-20 18:35             ` David Ahern
  2013-06-20 18:47               ` Steven Rostedt
  0 siblings, 1 reply; 19+ messages in thread
From: David Ahern @ 2013-06-20 18:35 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Oleg Nesterov, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 6/20/13 12:23 PM, Steven Rostedt wrote:
>
> I ran this:
>
>   perf stat --repeat 100 -- perf bench sched pipe > /tmp/perf-bench-sched.{before, after}

You want to compare:
   perf stat --repeat 100 -p 1 -- perf bench sched pipe

so that event is tagged to pid 1 and not the perf-bench workload.

David


* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-20 18:35             ` David Ahern
@ 2013-06-20 18:47               ` Steven Rostedt
  2013-06-20 18:53                 ` David Ahern
  2013-06-20 18:53                 ` Steven Rostedt
  0 siblings, 2 replies; 19+ messages in thread
From: Steven Rostedt @ 2013-06-20 18:47 UTC (permalink / raw)
  To: David Ahern
  Cc: Oleg Nesterov, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Thu, 2013-06-20 at 12:35 -0600, David Ahern wrote:
> On 6/20/13 12:23 PM, Steven Rostedt wrote:
> >
> > I ran this:
> >
> >   perf stat --repeat 100 -- perf bench sched pipe > /tmp/perf-bench-sched.{before, after}
> 
> You want to compare:
>    perf stat --repeat 100 -p 1 -- perf bench sched pipe
> 
> so that event is tagged to pid 1 and not the perf-bench workload.

I guess I'm a bit confused. What's the significance of measuring pid 1
(init)?

-- Steve




* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-20 18:47               ` Steven Rostedt
@ 2013-06-20 18:53                 ` David Ahern
  2013-06-20 18:53                 ` Steven Rostedt
  1 sibling, 0 replies; 19+ messages in thread
From: David Ahern @ 2013-06-20 18:53 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Oleg Nesterov, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On 6/20/13 12:47 PM, Steven Rostedt wrote:
> On Thu, 2013-06-20 at 12:35 -0600, David Ahern wrote:
>> On 6/20/13 12:23 PM, Steven Rostedt wrote:
>>>
>>> I ran this:
>>>
>>>    perf stat --repeat 100 -- perf bench sched pipe > /tmp/perf-bench-sched.{before, after}
>>
>> You want to compare:
>>     perf stat --repeat 100 -p 1 -- perf bench sched pipe
>>
>> so that event is tagged to pid 1 and not the perf-bench workload.
>
> I guess I'm a bit confused. What's the significance of measuring pid 1
> (init)?

I believe Oleg's point is the overhead for tasks without events
associated with them. To show that, create an event tagged to the init
task and then run some workload -- like perf bench sched pipe. It shows
that all tasks take a hit, not just the one being profiled.

David


* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-20 18:47               ` Steven Rostedt
  2013-06-20 18:53                 ` David Ahern
@ 2013-06-20 18:53                 ` Steven Rostedt
  2013-06-20 22:20                   ` Steven Rostedt
  1 sibling, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2013-06-20 18:53 UTC (permalink / raw)
  To: David Ahern
  Cc: Oleg Nesterov, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Thu, 2013-06-20 at 14:47 -0400, Steven Rostedt wrote:
> On Thu, 2013-06-20 at 12:35 -0600, David Ahern wrote:
> > On 6/20/13 12:23 PM, Steven Rostedt wrote:
> > >
> > > I ran this:
> > >
> > >   perf stat --repeat 100 -- perf bench sched pipe > /tmp/perf-bench-sched.{before, after}
> > 
> > You want to compare:
> >    perf stat --repeat 100 -p 1 -- perf bench sched pipe
> > 
> > so that event is tagged to pid 1 and not the perf-bench workload.
> 
> I guess I'm a bit confused. What's the significance of measuring pid 1
> (init)?

Oh wait, I just noticed Oleg's:

# perf record -e sched:sched_switch -p1 &

I missed that.

bah, time to re-run it.

-- Steve




* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-20 18:53                 ` Steven Rostedt
@ 2013-06-20 22:20                   ` Steven Rostedt
  0 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2013-06-20 22:20 UTC (permalink / raw)
  To: David Ahern
  Cc: Oleg Nesterov, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Thu, 2013-06-20 at 14:53 -0400, Steven Rostedt wrote:

> Oh wait, I just noticed Oleg's:
> 
> # perf record -e sched:sched_switch -p1 &
> 
> I missed that.
> 
> bah, time to re-run it.
> 


New update, after running with the above command:

[root@ixf ~]# cat perf-bench-sched.before.2 | tail -20
     Total time: 11.833 [sec]

      11.833804 usecs/op
          84503 ops/sec

 Performance counter stats for process id '1' (100 runs):

     <not counted> task-clock              
     <not counted> context-switches        
     <not counted> cpu-migrations          
     <not counted> page-faults             
     <not counted> cycles                  
   <not supported> stalled-cycles-frontend 
   <not supported> stalled-cycles-backend  
     <not counted> instructions            
     <not counted> branches                
     <not counted> branch-misses           

      11.955755673 seconds time elapsed
( +-  0.05% )

[root@ixf ~]# cat perf-bench-sched.after-2 | tail -20
     Total time: 11.490 [sec]

      11.490764 usecs/op
          87026 ops/sec

 Performance counter stats for process id '1' (100 runs):

     <not counted> task-clock              
     <not counted> context-switches        
     <not counted> cpu-migrations          
     <not counted> page-faults             
     <not counted> cycles                  
   <not supported> stalled-cycles-frontend 
   <not supported> stalled-cycles-backend  
     <not counted> instructions            
     <not counted> branches                
     <not counted> branch-misses           

      11.511847099 seconds time elapsed
( +-  0.05% )


OK, this does give us a slight improvement. Approximately 4%.

-- Steve




* Re: [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks.
  2013-06-18 19:21 [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Oleg Nesterov
                   ` (3 preceding siblings ...)
  2013-06-19 12:10 ` [PATCH 0/3] tracing/perf: perf_trace_buf/perf_xxx hacks Peter Zijlstra
@ 2013-07-18  3:06 ` Steven Rostedt
  4 siblings, 0 replies; 19+ messages in thread
From: Steven Rostedt @ 2013-07-18  3:06 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar,
	Masami Hiramatsu, Srikar Dronamraju, zhangwei(Jovi),
	linux-kernel

On Tue, 2013-06-18 at 21:21 +0200, Oleg Nesterov wrote:
> Hello.
> 
> On top of "PATCH 0/3] tracing: more list_empty(perf_events) checks"
> series I sent yesterday.
> 
> Compile tested only, not for inclusion yet.

Oleg, I know you sent me an mbox with these patches, but I'd rather pull
the real emails. But this is the only one I have, and with a comment like
that, I wouldn't pull it in.

But as you sent them to me to include, I take it this is ready.

Peter, Frederic, can you review these patches and ack or nack them?

Thanks,

-- Steve

> 
> But I'd appreciate it if you could take a look. I'll try to test this
> tomorrow somehow and let you know. Right now I am looking at the asm
> code, and it looks correct...
> 
> I also compiled the kernel with the additional patch below, everything
> compiles except sched/core.o as expected.
> 
> Oleg.
> 
> --- a/include/trace/ftrace.h
> +++ b/include/trace/ftrace.h
> @@ -664,6 +664,8 @@ perf_trace_##call(void *__data, proto)					\
>  									\
>  	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
>  									\
> +	BUILD_BUG_ON(!(__builtin_constant_p(!__task) && !__task));	\
> +									\
>  	head = this_cpu_ptr(event_call->perf_events);			\
>  	if (__builtin_constant_p(!__task) && !__task &&			\
>  				hlist_empty(head))			\



