* [PATCH 00/15] [GIT PULL] tracing: various updates
@ 2011-06-09 17:27 Steven Rostedt
  2011-06-09 17:27 ` [PATCH 01/15] tracing: Schedule a delayed work to call wakeup() Steven Rostedt
                   ` (14 more replies)
  0 siblings, 15 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker


Ingo,

Most of these are updates for tracing with one exception:

The patch by Paul McQuade that updates async, which I told Arjan
I would be fine pulling.

Please pull the latest tip/perf/core tree, which can be found at:

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
tip/perf/core

Head SHA1: 2560176dcb6d776115ef5615dac4f566fb8de129


Jiri Olsa (5):
      tracing, function_graph: Remove dependency of abstime and duration fields on latency
      tracing, function_graph: Merge overhead and duration display functions
      tracing, function: Fix trace header to follow context-info option
      tracing, function_graph: Remove lock-depth from latency trace
      tracing, function_graph: Add context-info support for function_graph tracer

Masami Hiramatsu (3):
      x86: Swap save_stack_trace_regs parameters
      stack_trace: Add weak save_stack_trace_regs()
      tracing/kprobes: Fix kprobe-tracer to support stack trace

Paul McQuade (2):
      ftrace: Fixed an include coding style issue
      async: Fixed an include coding style issue

Peter Huewe (1):
      tracing: Convert to kstrtoul_from_user

Vaibhav Nagarnaik (4):
      tracing: Schedule a delayed work to call wakeup()
      tracing: Use NUMA allocation for per-cpu ring buffer pages
      tracing: Add a free on close control mechanism for buffer_size_kb
      ring-buffer: Set __GFP_NORETRY flag for ring buffer allocating process

----
 arch/x86/kernel/stacktrace.c         |    2 +-
 arch/x86/mm/kmemcheck/error.c        |    2 +-
 include/linux/ftrace.h               |    3 +
 include/linux/ftrace_event.h         |    4 +
 include/linux/ring_buffer.h          |    2 +-
 include/linux/stacktrace.h           |    4 +-
 kernel/async.c                       |   12 +-
 kernel/stacktrace.c                  |   13 ++-
 kernel/trace/ftrace.c                |   20 +--
 kernel/trace/ring_buffer.c           |   68 +++++----
 kernel/trace/ring_buffer_benchmark.c |    2 +-
 kernel/trace/trace.c                 |  272 ++++++++++++++++++++--------------
 kernel/trace/trace.h                 |    9 +
 kernel/trace/trace_events.c          |   26 +---
 kernel/trace/trace_functions_graph.c |  223 ++++++++++++++--------------
 kernel/trace/trace_irqsoff.c         |    4 +-
 kernel/trace/trace_kprobe.c          |    6 +-
 kernel/trace/trace_sched_wakeup.c    |    4 +-
 kernel/trace/trace_stack.c           |   13 +--
 19 files changed, 369 insertions(+), 320 deletions(-)

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH 01/15] tracing: Schedule a delayed work to call wakeup()
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-13 10:07   ` Ingo Molnar
  2011-06-09 17:27 ` [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages Steven Rostedt
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Linus Torvalds

[-- Attachment #1: 0001-tracing-Schedule-a-delayed-work-to-call-wakeup.patch --]
[-- Type: text/plain, Size: 4592 bytes --]

From: Vaibhav Nagarnaik <vnagarnaik@google.com>

When syscall tracing is used by concurrent processes, the wakeup() that
is called in the event commit function causes contention on the spin
lock of the wait queue. I enabled the sys_enter_getuid and
sys_exit_getuid tracepoints, ran getuid_microbench from autotest in
parallel, and found that the contention causes an exponential latency
increase in the tracing path.

The autotest binary getuid_microbench calls getuid() in a tight loop for
the given number of iterations and measures the average time required to
complete a single syscall invocation.
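
For reference, a minimal sketch of such a microbenchmark (this is not
the autotest source, only an illustration of the measurement loop; the
timing method and output format are assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	long i, iters = argc > 1 ? atol(argv[1]) : 1000000;
	struct timespec start, end;
	double secs;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++)
		getuid();	/* the traced syscall, in a tight loop */
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%ld calls in %.9f s (%.6f ns/call)\n",
	       iters, secs, secs * 1e9 / iters);
	return 0;
}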

This patch schedules a delayed work, 2 ms after an event commit asks to
wake up the trace wait_queue. This removes the delay caused by
contention on the spin lock in wakeup() and amortizes the wakeup() calls
over the 2 ms period.

In the following example, the script enables the sys_enter_getuid and
sys_exit_getuid tracepoints and runs getuid_microbench in parallel
with the given number of processes. The output clearly shows the latency
increase caused by the contention.

$ ~/getuid.sh 1
1000000 calls in 0.720974253 s (720.974253 ns/call)

$ ~/getuid.sh 2
1000000 calls in 1.166457554 s (1166.457554 ns/call)
1000000 calls in 1.168933765 s (1168.933765 ns/call)

$ ~/getuid.sh 3
1000000 calls in 1.783827516 s (1783.827516 ns/call)
1000000 calls in 1.795553270 s (1795.553270 ns/call)
1000000 calls in 1.796493376 s (1796.493376 ns/call)

$ ~/getuid.sh 4
1000000 calls in 4.483041796 s (4483.041796 ns/call)
1000000 calls in 4.484165388 s (4484.165388 ns/call)
1000000 calls in 4.484850762 s (4484.850762 ns/call)
1000000 calls in 4.485643576 s (4485.643576 ns/call)

$ ~/getuid.sh 5
1000000 calls in 6.497521653 s (6497.521653 ns/call)
1000000 calls in 6.502000236 s (6502.000236 ns/call)
1000000 calls in 6.501709115 s (6501.709115 ns/call)
1000000 calls in 6.502124100 s (6502.124100 ns/call)
1000000 calls in 6.502936358 s (6502.936358 ns/call)

After the patch, the latencies scale better.
1000000 calls in 0.728720455 s (728.720455 ns/call)

1000000 calls in 0.842782857 s (842.782857 ns/call)
1000000 calls in 0.883803135 s (883.803135 ns/call)

1000000 calls in 0.902077764 s (902.077764 ns/call)
1000000 calls in 0.902838202 s (902.838202 ns/call)
1000000 calls in 0.908896885 s (908.896885 ns/call)

1000000 calls in 0.932523515 s (932.523515 ns/call)
1000000 calls in 0.958009672 s (958.009672 ns/call)
1000000 calls in 0.986188020 s (986.188020 ns/call)
1000000 calls in 0.989771102 s (989.771102 ns/call)

1000000 calls in 0.933518391 s (933.518391 ns/call)
1000000 calls in 0.958897947 s (958.897947 ns/call)
1000000 calls in 1.031038897 s (1031.038897 ns/call)
1000000 calls in 1.089516025 s (1089.516025 ns/call)
1000000 calls in 1.141998347 s (1141.998347 ns/call)

Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: David Sharp <dhsharp@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1305059241-7629-1-git-send-email-vnagarnaik@google.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   23 ++++++++++++-----------
 1 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ee9c921..2af132e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -343,26 +343,27 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
 static int trace_stop_count;
 static DEFINE_SPINLOCK(tracing_start_lock);
 
+static void wakeup_work_handler(struct work_struct *work)
+{
+	wake_up(&trace_wait);
+}
+
+static DECLARE_DELAYED_WORK(wakeup_work, wakeup_work_handler);
+
 /**
  * trace_wake_up - wake up tasks waiting for trace input
  *
- * Simply wakes up any task that is blocked on the trace_wait
- * queue. These is used with trace_poll for tasks polling the trace.
+ * Schedules a delayed work to wake up any task that is blocked on the
+ * trace_wait queue. These is used with trace_poll for tasks polling the
+ * trace.
  */
 void trace_wake_up(void)
 {
-	int cpu;
+	const unsigned long delay = msecs_to_jiffies(2); //delay 2 ms
 
 	if (trace_flags & TRACE_ITER_BLOCK)
 		return;
-	/*
-	 * The runqueue_is_locked() can fail, but this is the best we
-	 * have for now:
-	 */
-	cpu = get_cpu();
-	if (!runqueue_is_locked(cpu))
-		wake_up(&trace_wait);
-	put_cpu();
+	schedule_delayed_work(&wakeup_work, delay);
 }
 
 static int __init set_buf_size(char *str)
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
  2011-06-09 17:27 ` [PATCH 01/15] tracing: Schedule a delayed work to call wakeup() Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-13 10:09   ` Ingo Molnar
  2011-06-09 17:27 ` [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb Steven Rostedt
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp

[-- Attachment #1: 0002-tracing-Use-NUMA-allocation-for-per-cpu-ring-buffer-.patch --]
[-- Type: text/plain, Size: 6890 bytes --]

From: Vaibhav Nagarnaik <vnagarnaik@google.com>

The tracing ring buffer is a group of per-cpu ring buffers where
allocation and logging are done on a per-cpu basis. The events that are
generated on a particular CPU are logged in the corresponding buffer.
This is to provide wait-free writes between CPUs and good NUMA node
locality while accessing the ring buffer.

However, the allocation routines consider NUMA locality only for buffer
page metadata and not for the actual buffer page. This causes the pages
to be allocated on the NUMA node local to the CPU where the allocation
routine is running at the time.

This patch fixes the problem by using a NUMA node specific allocation
routine so that the pages are allocated from a NUMA node local to the
logging CPU.
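
In condensed form, the change swaps the node-agnostic page allocation
for a node-aware one (variable names as in the hunks below, error
handling omitted):

	/* Before: page comes from whichever node the allocating CPU runs on */
	addr = __get_free_page(GFP_KERNEL);
	bpage->page = (void *)addr;

	/* After: page comes from the node that owns the buffer's CPU */
	page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu), GFP_KERNEL, 0);
	bpage->page = page_address(page);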

I tested with the getuid_microbench from autotest. It is a simple binary
that calls getuid() in a loop and measures the average time for the
syscall to complete. The following command was used to test:
$ getuid_microbench 1000000

I compared the numbers on kernels with and without this patch and
found that the logging latency decreases by 30-50 ns/call.
tracing with non-NUMA allocation - 569 ns/call
tracing with NUMA allocation     - 512 ns/call

Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: David Sharp <dhsharp@google.com>
Link: http://lkml.kernel.org/r/1304470602-20366-1-git-send-email-vnagarnaik@google.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ring_buffer.h          |    2 +-
 kernel/trace/ring_buffer.c           |   36 +++++++++++++++++----------------
 kernel/trace/ring_buffer_benchmark.c |    2 +-
 kernel/trace/trace.c                 |    6 +++-
 4 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index ab38ac8..b891de9 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -169,7 +169,7 @@ void ring_buffer_set_clock(struct ring_buffer *buffer,
 size_t ring_buffer_page_len(void *page);
 
 
-void *ring_buffer_alloc_read_page(struct ring_buffer *buffer);
+void *ring_buffer_alloc_read_page(struct ring_buffer *buffer, int cpu);
 void ring_buffer_free_read_page(struct ring_buffer *buffer, void *data);
 int ring_buffer_read_page(struct ring_buffer *buffer, void **data_page,
 			  size_t len, int cpu, int full);
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index b0c7aa4..2780e60 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -997,13 +997,13 @@ static int rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 			     unsigned nr_pages)
 {
 	struct buffer_page *bpage, *tmp;
-	unsigned long addr;
 	LIST_HEAD(pages);
 	unsigned i;
 
 	WARN_ON(!nr_pages);
 
 	for (i = 0; i < nr_pages; i++) {
+		struct page *page;
 		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
 				    GFP_KERNEL, cpu_to_node(cpu_buffer->cpu));
 		if (!bpage)
@@ -1013,10 +1013,11 @@ static int rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 
 		list_add(&bpage->list, &pages);
 
-		addr = __get_free_page(GFP_KERNEL);
-		if (!addr)
+		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
+					GFP_KERNEL, 0);
+		if (!page)
 			goto free_pages;
-		bpage->page = (void *)addr;
+		bpage->page = page_address(page);
 		rb_init_page(bpage->page);
 	}
 
@@ -1045,7 +1046,7 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	struct buffer_page *bpage;
-	unsigned long addr;
+	struct page *page;
 	int ret;
 
 	cpu_buffer = kzalloc_node(ALIGN(sizeof(*cpu_buffer), cache_line_size()),
@@ -1067,10 +1068,10 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)
 	rb_check_bpage(cpu_buffer, bpage);
 
 	cpu_buffer->reader_page = bpage;
-	addr = __get_free_page(GFP_KERNEL);
-	if (!addr)
+	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, 0);
+	if (!page)
 		goto fail_free_reader;
-	bpage->page = (void *)addr;
+	bpage->page = page_address(page);
 	rb_init_page(bpage->page);
 
 	INIT_LIST_HEAD(&cpu_buffer->reader_page->list);
@@ -1314,7 +1315,6 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size)
 	unsigned nr_pages, rm_pages, new_pages;
 	struct buffer_page *bpage, *tmp;
 	unsigned long buffer_size;
-	unsigned long addr;
 	LIST_HEAD(pages);
 	int i, cpu;
 
@@ -1375,16 +1375,18 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size)
 
 	for_each_buffer_cpu(buffer, cpu) {
 		for (i = 0; i < new_pages; i++) {
+			struct page *page;
 			bpage = kzalloc_node(ALIGN(sizeof(*bpage),
 						  cache_line_size()),
 					    GFP_KERNEL, cpu_to_node(cpu));
 			if (!bpage)
 				goto free_pages;
 			list_add(&bpage->list, &pages);
-			addr = __get_free_page(GFP_KERNEL);
-			if (!addr)
+			page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL,
+						0);
+			if (!page)
 				goto free_pages;
-			bpage->page = (void *)addr;
+			bpage->page = page_address(page);
 			rb_init_page(bpage->page);
 		}
 	}
@@ -3730,16 +3732,16 @@ EXPORT_SYMBOL_GPL(ring_buffer_swap_cpu);
  * Returns:
  *  The page allocated, or NULL on error.
  */
-void *ring_buffer_alloc_read_page(struct ring_buffer *buffer)
+void *ring_buffer_alloc_read_page(struct ring_buffer *buffer, int cpu)
 {
 	struct buffer_data_page *bpage;
-	unsigned long addr;
+	struct page *page;
 
-	addr = __get_free_page(GFP_KERNEL);
-	if (!addr)
+	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, 0);
+	if (!page)
 		return NULL;
 
-	bpage = (void *)addr;
+	bpage = page_address(page);
 
 	rb_init_page(bpage);
 
diff --git a/kernel/trace/ring_buffer_benchmark.c b/kernel/trace/ring_buffer_benchmark.c
index 302f8a6..a5457d5 100644
--- a/kernel/trace/ring_buffer_benchmark.c
+++ b/kernel/trace/ring_buffer_benchmark.c
@@ -106,7 +106,7 @@ static enum event_status read_page(int cpu)
 	int inc;
 	int i;
 
-	bpage = ring_buffer_alloc_read_page(buffer);
+	bpage = ring_buffer_alloc_read_page(buffer, cpu);
 	if (!bpage)
 		return EVENT_DROPPED;
 
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2af132e..6368eeb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3697,7 +3697,8 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
 		return 0;
 
 	if (!info->spare)
-		info->spare = ring_buffer_alloc_read_page(info->tr->buffer);
+		info->spare = ring_buffer_alloc_read_page(info->tr->buffer,
+								info->cpu);
 	if (!info->spare)
 		return -ENOMEM;
 
@@ -3854,7 +3855,8 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 
 		ref->ref = 1;
 		ref->buffer = info->tr->buffer;
-		ref->page = ring_buffer_alloc_read_page(ref->buffer);
+		ref->page = ring_buffer_alloc_read_page(ref->buffer,
+							info->cpu);
 		if (!ref->page) {
 			kfree(ref);
 			break;
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
  2011-06-09 17:27 ` [PATCH 01/15] tracing: Schedule a delayed work to call wakeup() Steven Rostedt
  2011-06-09 17:27 ` [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-13 10:12   ` Ingo Molnar
  2011-06-09 17:27 ` [PATCH 04/15] ftrace: Fixed an include coding style issue Steven Rostedt
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp

[-- Attachment #1: 0003-tracing-Add-a-free-on-close-control-mechanism-for-bu.patch --]
[-- Type: text/plain, Size: 7060 bytes --]

From: Vaibhav Nagarnaik <vnagarnaik@google.com>

The proc file entry buffer_size_kb is used to set the size of the tracing
buffer. The memory used to expand the buffer is kernel memory. Consider
a use case where tracing is handled by a user space utility, which acts
as a gatekeeper for tracing requests. In an OOM condition, tracing is
considered a low-priority task, and if the utility gets killed the ring
buffer memory cannot be released back to the kernel.

This patch adds an IOCTL on the buffer_size_kb file to set a boolean.
When this boolean is enabled, closing the buffer_size_kb file will cause
tracing to stop and the ring buffer memory to be freed.

The user space process can then open the buffer_size_kb file, set the
new buffer size for tracing, enable the boolean through the IOCTL, and
keep the file open. Under an OOM condition, if the process gets killed,
the kernel closes the file descriptor for buffer_size_kb, and the
release handler stops tracing and releases the kernel memory
automatically.
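
A minimal user-space sketch of this flow, assuming the debugfs mount
point /sys/kernel/debug/tracing and a 4096 KB buffer (both are example
values; error handling omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ioctl.h>

/* mirrors the definition added to include/linux/ftrace.h below */
#define TRACE_RINGBUF_FREE_ON_CLOSE _IOW('t', 0x01, int)

int main(void)
{
	int fd = open("/sys/kernel/debug/tracing/buffer_size_kb", O_RDWR);
	const char *size_kb = "4096";

	/* expand the ring buffer, then arm free-on-close */
	write(fd, size_kb, strlen(size_kb));
	ioctl(fd, TRACE_RINGBUF_FREE_ON_CLOSE, 1);

	/*
	 * Keep the fd open while tracing is in use.  If this process is
	 * OOM-killed, the kernel's release handler stops tracing and
	 * frees the ring buffer memory automatically.
	 */
	pause();
	return 0;
}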

Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: David Sharp <dhsharp@google.com>
Link: http://lkml.kernel.org/r/1306434886-16384-1-git-send-email-vnagarnaik@google.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ftrace.h |    3 +
 kernel/trace/trace.c   |  140 +++++++++++++++++++++++++++++++++++------------
 2 files changed, 107 insertions(+), 36 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 9d88e1c..72a2f0d 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -127,6 +127,9 @@ struct ftrace_func_command {
 					char *params, int enable);
 };
 
+/* enable/disable auto free ring buffer on file close */
+#define TRACE_RINGBUF_FREE_ON_CLOSE _IOW('t', 0x01, int)
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 
 int ftrace_arch_code_modify_prepare(void);
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6368eeb..b926578 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2768,7 +2768,7 @@ int tracer_init(struct tracer *t, struct trace_array *tr)
 	return t->init(tr);
 }
 
-static int tracing_resize_ring_buffer(unsigned long size)
+static int __tracing_resize_ring_buffer(unsigned long size)
 {
 	int ret;
 
@@ -2820,6 +2820,41 @@ static int tracing_resize_ring_buffer(unsigned long size)
 	return ret;
 }
 
+static ssize_t tracing_resize_ring_buffer(unsigned long size)
+{
+	int cpu, ret = size;
+
+	mutex_lock(&trace_types_lock);
+
+	tracing_stop();
+
+	/* disable all cpu buffers */
+	for_each_tracing_cpu(cpu) {
+		if (global_trace.data[cpu])
+			atomic_inc(&global_trace.data[cpu]->disabled);
+		if (max_tr.data[cpu])
+			atomic_inc(&max_tr.data[cpu]->disabled);
+	}
+
+	if (size != global_trace.entries)
+		ret = __tracing_resize_ring_buffer(size);
+
+	if (ret < 0)
+		ret = -ENOMEM;
+
+	for_each_tracing_cpu(cpu) {
+		if (global_trace.data[cpu])
+			atomic_dec(&global_trace.data[cpu]->disabled);
+		if (max_tr.data[cpu])
+			atomic_dec(&max_tr.data[cpu]->disabled);
+	}
+
+	tracing_start();
+	mutex_unlock(&trace_types_lock);
+
+	return ret;
+}
+
 
 /**
  * tracing_update_buffers - used by tracing facility to expand ring buffers
@@ -2837,7 +2872,7 @@ int tracing_update_buffers(void)
 
 	mutex_lock(&trace_types_lock);
 	if (!ring_buffer_expanded)
-		ret = tracing_resize_ring_buffer(trace_buf_size);
+		ret = __tracing_resize_ring_buffer(trace_buf_size);
 	mutex_unlock(&trace_types_lock);
 
 	return ret;
@@ -2861,7 +2896,7 @@ static int tracing_set_tracer(const char *buf)
 	mutex_lock(&trace_types_lock);
 
 	if (!ring_buffer_expanded) {
-		ret = tracing_resize_ring_buffer(trace_buf_size);
+		ret = __tracing_resize_ring_buffer(trace_buf_size);
 		if (ret < 0)
 			goto out;
 		ret = 0;
@@ -3410,11 +3445,37 @@ out_err:
 	goto out;
 }
 
+struct ftrace_entries_info {
+	struct trace_array	*tr;
+	int			free_buffer_on_close;
+};
+
+static int
+tracing_entries_open(struct inode *inode, struct file *filp)
+{
+	struct ftrace_entries_info *info;
+
+	if (tracing_disabled)
+		return -ENODEV;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	info->tr = (struct trace_array *)inode->i_private;
+	info->free_buffer_on_close = 0;
+
+	filp->private_data = info;
+
+	return 0;
+}
+
 static ssize_t
 tracing_entries_read(struct file *filp, char __user *ubuf,
 		     size_t cnt, loff_t *ppos)
 {
-	struct trace_array *tr = filp->private_data;
+	struct ftrace_entries_info *info = filp->private_data;
+	struct trace_array *tr = info->tr;
 	char buf[96];
 	int r;
 
@@ -3436,7 +3497,7 @@ tracing_entries_write(struct file *filp, const char __user *ubuf,
 {
 	unsigned long val;
 	char buf[64];
-	int ret, cpu;
+	int ret;
 
 	if (cnt >= sizeof(buf))
 		return -EINVAL;
@@ -3454,46 +3515,50 @@ tracing_entries_write(struct file *filp, const char __user *ubuf,
 	if (!val)
 		return -EINVAL;
 
-	mutex_lock(&trace_types_lock);
+	/* value is in KB */
+	val <<= 10;
 
-	tracing_stop();
+	ret = tracing_resize_ring_buffer(val);
+	if (ret < 0)
+		return ret;
 
-	/* disable all cpu buffers */
-	for_each_tracing_cpu(cpu) {
-		if (global_trace.data[cpu])
-			atomic_inc(&global_trace.data[cpu]->disabled);
-		if (max_tr.data[cpu])
-			atomic_inc(&max_tr.data[cpu]->disabled);
-	}
+	*ppos += cnt;
 
-	/* value is in KB */
-	val <<= 10;
+	return cnt;
+}
 
-	if (val != global_trace.entries) {
-		ret = tracing_resize_ring_buffer(val);
-		if (ret < 0) {
-			cnt = ret;
-			goto out;
-		}
+static long
+tracing_entries_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	int ret = -ENOIOCTLCMD;
+	struct ftrace_entries_info *info = filp->private_data;
+
+	switch (cmd) {
+	case TRACE_RINGBUF_FREE_ON_CLOSE: {
+		info->free_buffer_on_close = !!arg;
+		ret = 0;
+		break;
+	}
 	}
 
-	*ppos += cnt;
+	return ret;
+}
 
-	/* If check pages failed, return ENOMEM */
-	if (tracing_disabled)
-		cnt = -ENOMEM;
- out:
-	for_each_tracing_cpu(cpu) {
-		if (global_trace.data[cpu])
-			atomic_dec(&global_trace.data[cpu]->disabled);
-		if (max_tr.data[cpu])
-			atomic_dec(&max_tr.data[cpu]->disabled);
+static int
+tracing_entries_release(struct inode *inode, struct file *filp)
+{
+	struct ftrace_entries_info *info = filp->private_data;
+
+	if (info->free_buffer_on_close) {
+		/* disable tracing */
+		tracing_off();
+		/* resize the ring buffer to 0 */
+		tracing_resize_ring_buffer(0);
 	}
 
-	tracing_start();
-	mutex_unlock(&trace_types_lock);
+	kfree(info);
 
-	return cnt;
+	return 0;
 }
 
 static int mark_printk(const char *fmt, ...)
@@ -3635,9 +3700,12 @@ static const struct file_operations tracing_pipe_fops = {
 };
 
 static const struct file_operations tracing_entries_fops = {
-	.open		= tracing_open_generic,
+	.open		= tracing_entries_open,
 	.read		= tracing_entries_read,
 	.write		= tracing_entries_write,
+	.unlocked_ioctl	= tracing_entries_ioctl,
+	.compat_ioctl	= tracing_entries_ioctl,
+	.release	= tracing_entries_release,
 	.llseek		= generic_file_llseek,
 };
 
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 04/15] ftrace: Fixed an include coding style issue
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (2 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 05/15] async: " Steven Rostedt
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Paul McQuade

[-- Attachment #1: 0004-ftrace-Fixed-an-include-coding-style-issue.patch --]
[-- Type: text/plain, Size: 1269 bytes --]

From: Paul McQuade <tungstentide@gmail.com>

Removed <asm/ftrace.h> because <linux/ftrace.h> was already included.
Fixed the coding style of the struct initializer braces.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Paul McQuade <tungstentide@gmail.com>
Link: http://lkml.kernel.org/r/4DE59711.3090900@gmail.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 1ee417f..e153807 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -32,7 +32,6 @@
 
 #include <trace/events/sched.h>
 
-#include <asm/ftrace.h>
 #include <asm/setup.h>
 
 #include "trace_output.h"
@@ -82,8 +81,7 @@ static int ftrace_disabled __read_mostly;
 
 static DEFINE_MUTEX(ftrace_lock);
 
-static struct ftrace_ops ftrace_list_end __read_mostly =
-{
+static struct ftrace_ops ftrace_list_end __read_mostly = {
 	.func		= ftrace_stub,
 };
 
@@ -785,8 +783,7 @@ static void unregister_ftrace_profiler(void)
 	unregister_ftrace_graph();
 }
 #else
-static struct ftrace_ops ftrace_profile_ops __read_mostly =
-{
+static struct ftrace_ops ftrace_profile_ops __read_mostly = {
 	.func		= function_profile_call,
 };
 
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 05/15] async: Fixed an include coding style issue
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (3 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 04/15] ftrace: Fixed an include coding style issue Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 06/15] tracing, function_graph: Remove dependency of abstime and duration Steven Rostedt
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	Arjan van de Ven, Paul McQuade

[-- Attachment #1: 0005-async-Fixed-an-include-coding-style-issue.patch --]
[-- Type: text/plain, Size: 2432 bytes --]

From: Paul McQuade <tungstentide@gmail.com>

Added <linux/atomic.h> and <linux/ktime.h>, and removed <asm/atomic.h>.
Added KERN_DEBUG to the printk() calls.

Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Paul McQuade <tungstentide@gmail.com>
Link: http://lkml.kernel.org/r/4DE596B4.7030904@gmail.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/async.c |   12 +++++++-----
 1 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/async.c b/kernel/async.c
index cd9dbb9..d5fe7af 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -49,12 +49,13 @@ asynchronous and synchronous parts of the kernel.
 */
 
 #include <linux/async.h>
+#include <linux/atomic.h>
+#include <linux/ktime.h>
 #include <linux/module.h>
 #include <linux/wait.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
-#include <asm/atomic.h>
 
 static async_cookie_t next_cookie = 1;
 
@@ -128,7 +129,8 @@ static void async_run_entry_fn(struct work_struct *work)
 
 	/* 2) run (and print duration) */
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
-		printk("calling  %lli_%pF @ %i\n", (long long)entry->cookie,
+		printk(KERN_DEBUG "calling  %lli_%pF @ %i\n",
+			(long long)entry->cookie,
 			entry->func, task_pid_nr(current));
 		calltime = ktime_get();
 	}
@@ -136,7 +138,7 @@ static void async_run_entry_fn(struct work_struct *work)
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
 		rettime = ktime_get();
 		delta = ktime_sub(rettime, calltime);
-		printk("initcall %lli_%pF returned 0 after %lld usecs\n",
+		printk(KERN_DEBUG "initcall %lli_%pF returned 0 after %lld usecs\n",
 			(long long)entry->cookie,
 			entry->func,
 			(long long)ktime_to_ns(delta) >> 10);
@@ -270,7 +272,7 @@ void async_synchronize_cookie_domain(async_cookie_t cookie,
 	ktime_t starttime, delta, endtime;
 
 	if (initcall_debug && system_state == SYSTEM_BOOTING) {
-		printk("async_waiting @ %i\n", task_pid_nr(current));
+		printk(KERN_DEBUG "async_waiting @ %i\n", task_pid_nr(current));
 		starttime = ktime_get();
 	}
 
@@ -280,7 +282,7 @@ void async_synchronize_cookie_domain(async_cookie_t cookie,
 		endtime = ktime_get();
 		delta = ktime_sub(endtime, starttime);
 
-		printk("async_continuing @ %i after %lli usec\n",
+		printk(KERN_DEBUG "async_continuing @ %i after %lli usec\n",
 			task_pid_nr(current),
 			(long long)ktime_to_ns(delta) >> 10);
 	}
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 06/15] tracing, function_graph: Remove dependency of abstime and duration
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (4 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 05/15] async: " Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 07/15] tracing, function_graph: Merge overhead and duration display Steven Rostedt
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Jiri Olsa

[-- Attachment #1: 0006-tracing-function_graph-Remove-dependency-of-abstime-.patch --]
[-- Type: text/plain, Size: 5186 bytes --]

From: Jiri Olsa <jolsa@redhat.com>

The display of the absolute time and duration fields is based on the
latency field. This was added during the graph support changes for the
irqsoff/wakeup tracers.

It causes confusion about which fields will be displayed for the
function_graph tracer itself. So I'm removing this dependency and
adding the absolute time and duration fields to the preemptirqsoff,
preemptoff, irqsoff and wakeup tracers.

With the following commands:
	# echo function_graph > ./current_tracer
	# cat trace

This is what it looked like before:
# tracer: function_graph
#
#     TIME        CPU  DURATION                  FUNCTION CALLS
#      |          |     |   |                     |   |   |   |
 0)   0.068 us    |          } /* page_add_file_rmap */
 0)               |          _raw_spin_unlock() {
...

This is what it looks like now:
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 0)   0.068 us    |                } /* add_preempt_count */
 0)   0.993 us    |              } /* vfsmount_lock_local_lock */
...

For the preemptirqsoff, preemptoff, irqsoff and wakeup tracers,
this is what it looked like before:
SNIP
#                       _-----=> irqs-off
#                      / _----=> need-resched
#                     | / _---=> hardirq/softirq
#                     || / _--=> preempt-depth
#                     ||| / _-=> lock-depth
#                     |||| /
# CPU  TASK/PID       |||||  DURATION                  FUNCTION CALLS
# |     |    |        |||||   |   |                     |   |   |   |
 1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
...

This is what it looks like now:
SNIP
#
#                                       _-----=> irqs-off
#                                      / _----=> need-resched
#                                     | / _---=> hardirq/softirq
#                                     || / _--=> preempt-depth
#                                     ||| /
#     TIME        CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
#      |          |     |    |        ||||   |   |                     |   |   |   |
   19.847735 |   1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
...

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1307113131-10045-2-git-send-email-jolsa@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c |   19 +++----------------
 kernel/trace/trace_irqsoff.c         |    4 +++-
 kernel/trace/trace_sched_wakeup.c    |    4 +++-
 3 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 962cdb2..352652e 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -1207,7 +1207,7 @@ print_graph_comment(struct trace_seq *s, struct trace_entry *ent,
 
 
 enum print_line_t
-__print_graph_function_flags(struct trace_iterator *iter, u32 flags)
+print_graph_function_flags(struct trace_iterator *iter, u32 flags)
 {
 	struct ftrace_graph_ent_entry *field;
 	struct fgraph_data *data = iter->private;
@@ -1270,18 +1270,7 @@ __print_graph_function_flags(struct trace_iterator *iter, u32 flags)
 static enum print_line_t
 print_graph_function(struct trace_iterator *iter)
 {
-	return __print_graph_function_flags(iter, tracer_flags.val);
-}
-
-enum print_line_t print_graph_function_flags(struct trace_iterator *iter,
-					     u32 flags)
-{
-	if (trace_flags & TRACE_ITER_LATENCY_FMT)
-		flags |= TRACE_GRAPH_PRINT_DURATION;
-	else
-		flags |= TRACE_GRAPH_PRINT_ABS_TIME;
-
-	return __print_graph_function_flags(iter, flags);
+	return print_graph_function_flags(iter, tracer_flags.val);
 }
 
 static enum print_line_t
@@ -1364,9 +1353,7 @@ void print_graph_headers_flags(struct seq_file *s, u32 flags)
 			return;
 
 		print_trace_header(s, iter);
-		flags |= TRACE_GRAPH_PRINT_DURATION;
-	} else
-		flags |= TRACE_GRAPH_PRINT_ABS_TIME;
+	}
 
 	__print_graph_headers_flags(s, flags);
 }
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index c77424b..667aa8c 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -226,7 +226,9 @@ static void irqsoff_trace_close(struct trace_iterator *iter)
 }
 
 #define GRAPH_TRACER_FLAGS (TRACE_GRAPH_PRINT_CPU | \
-			    TRACE_GRAPH_PRINT_PROC)
+			    TRACE_GRAPH_PRINT_PROC | \
+			    TRACE_GRAPH_PRINT_ABS_TIME | \
+			    TRACE_GRAPH_PRINT_DURATION)
 
 static enum print_line_t irqsoff_print_line(struct trace_iterator *iter)
 {
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index f029dd4..e4a70c0 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -227,7 +227,9 @@ static void wakeup_trace_close(struct trace_iterator *iter)
 		graph_trace_close(iter);
 }
 
-#define GRAPH_TRACER_FLAGS (TRACE_GRAPH_PRINT_PROC)
+#define GRAPH_TRACER_FLAGS (TRACE_GRAPH_PRINT_PROC | \
+			    TRACE_GRAPH_PRINT_ABS_TIME | \
+			    TRACE_GRAPH_PRINT_DURATION)
 
 static enum print_line_t wakeup_print_line(struct trace_iterator *iter)
 {
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 07/15] tracing, function_graph: Merge overhead and duration display
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (5 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 06/15] tracing, function_graph: Remove dependency of abstime and duration Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 08/15] tracing, function: Fix trace header to follow context-info option Steven Rostedt
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Jiri Olsa

[-- Attachment #1: 0007-tracing-function_graph-Merge-overhead-and-duration-d.patch --]
[-- Type: text/plain, Size: 7375 bytes --]

From: Jiri Olsa <jolsa@redhat.com>

The functions print_graph_overhead() and print_graph_duration() display
data for a single field - DURATION.

I merged them into a single function, print_graph_duration(),
and added a way to display the empty parts of the field.

This way the print_graph_irq() function can use this column to display
the IRQ signs if needed, while the details of the DURATION field stay
inside the print_graph_duration() function.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1307113131-10045-3-git-send-email-jolsa@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c |  148 +++++++++++++++++-----------------
 1 files changed, 74 insertions(+), 74 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 352652e..44b955f 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -74,6 +74,20 @@ static struct tracer_flags tracer_flags = {
 
 static struct trace_array *graph_array;
 
+/*
+ * DURATION column is being also used to display IRQ signs,
+ * following values are used by print_graph_irq and others
+ * to fill in space into DURATION column.
+ */
+enum {
+	DURATION_FILL_FULL  = -1,
+	DURATION_FILL_START = -2,
+	DURATION_FILL_END   = -3,
+};
+
+static enum print_line_t
+print_graph_duration(unsigned long long duration, struct trace_seq *s,
+		     u32 flags);
 
 /* Add a function return address to the trace stack on thread info.*/
 int
@@ -577,32 +591,6 @@ get_return_for_leaf(struct trace_iterator *iter,
 	return next;
 }
 
-/* Signal a overhead of time execution to the output */
-static int
-print_graph_overhead(unsigned long long duration, struct trace_seq *s,
-		     u32 flags)
-{
-	/* If duration disappear, we don't need anything */
-	if (!(flags & TRACE_GRAPH_PRINT_DURATION))
-		return 1;
-
-	/* Non nested entry or return */
-	if (duration == -1)
-		return trace_seq_printf(s, "  ");
-
-	if (flags & TRACE_GRAPH_PRINT_OVERHEAD) {
-		/* Duration exceeded 100 msecs */
-		if (duration > 100000ULL)
-			return trace_seq_printf(s, "! ");
-
-		/* Duration exceeded 10 msecs */
-		if (duration > 10000ULL)
-			return trace_seq_printf(s, "+ ");
-	}
-
-	return trace_seq_printf(s, "  ");
-}
-
 static int print_graph_abs_time(u64 t, struct trace_seq *s)
 {
 	unsigned long usecs_rem;
@@ -650,9 +638,9 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
 	}
 
 	/* No overhead */
-	ret = print_graph_overhead(-1, s, flags);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
+	ret = print_graph_duration(DURATION_FILL_START, s, flags);
+	if (ret != TRACE_TYPE_HANDLED)
+		return ret;
 
 	if (type == TRACE_GRAPH_ENT)
 		ret = trace_seq_printf(s, "==========>");
@@ -662,9 +650,10 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
 	if (!ret)
 		return TRACE_TYPE_PARTIAL_LINE;
 
-	/* Don't close the duration column if haven't one */
-	if (flags & TRACE_GRAPH_PRINT_DURATION)
-		trace_seq_printf(s, " |");
+	ret = print_graph_duration(DURATION_FILL_END, s, flags);
+	if (ret != TRACE_TYPE_HANDLED)
+		return ret;
+
 	ret = trace_seq_printf(s, "\n");
 
 	if (!ret)
@@ -716,9 +705,48 @@ trace_print_graph_duration(unsigned long long duration, struct trace_seq *s)
 }
 
 static enum print_line_t
-print_graph_duration(unsigned long long duration, struct trace_seq *s)
+print_graph_duration(unsigned long long duration, struct trace_seq *s,
+		     u32 flags)
 {
-	int ret;
+	int ret = -1;
+
+	if (!(flags & TRACE_GRAPH_PRINT_DURATION))
+		return TRACE_TYPE_HANDLED;
+
+	/* No real adata, just filling the column with spaces */
+	switch (duration) {
+	case DURATION_FILL_FULL:
+		ret = trace_seq_printf(s, "              |  ");
+		return ret ? TRACE_TYPE_HANDLED : TRACE_TYPE_PARTIAL_LINE;
+	case DURATION_FILL_START:
+		ret = trace_seq_printf(s, "  ");
+		return ret ? TRACE_TYPE_HANDLED : TRACE_TYPE_PARTIAL_LINE;
+	case DURATION_FILL_END:
+		ret = trace_seq_printf(s, " |");
+		return ret ? TRACE_TYPE_HANDLED : TRACE_TYPE_PARTIAL_LINE;
+	}
+
+	/* Signal a overhead of time execution to the output */
+	if (flags & TRACE_GRAPH_PRINT_OVERHEAD) {
+		/* Duration exceeded 100 msecs */
+		if (duration > 100000ULL)
+			ret = trace_seq_printf(s, "! ");
+		/* Duration exceeded 10 msecs */
+		else if (duration > 10000ULL)
+			ret = trace_seq_printf(s, "+ ");
+	}
+
+	/*
+	 * The -1 means we either did not exceed the duration tresholds
+	 * or we dont want to print out the overhead. Either way we need
+	 * to fill out the space.
+	 */
+	if (ret == -1)
+		ret = trace_seq_printf(s, "  ");
+
+	/* Catching here any failure happenned above */
+	if (!ret)
+		return TRACE_TYPE_PARTIAL_LINE;
 
 	ret = trace_print_graph_duration(duration, s);
 	if (ret != TRACE_TYPE_HANDLED)
@@ -767,18 +795,11 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 			cpu_data->enter_funcs[call->depth] = 0;
 	}
 
-	/* Overhead */
-	ret = print_graph_overhead(duration, s, flags);
-	if (!ret)
+	/* Overhead and duration */
+	ret = print_graph_duration(duration, s, flags);
+	if (ret == TRACE_TYPE_PARTIAL_LINE)
 		return TRACE_TYPE_PARTIAL_LINE;
 
-	/* Duration */
-	if (flags & TRACE_GRAPH_PRINT_DURATION) {
-		ret = print_graph_duration(duration, s);
-		if (ret == TRACE_TYPE_PARTIAL_LINE)
-			return TRACE_TYPE_PARTIAL_LINE;
-	}
-
 	/* Function */
 	for (i = 0; i < call->depth * TRACE_GRAPH_INDENT; i++) {
 		ret = trace_seq_printf(s, " ");
@@ -815,17 +836,10 @@ print_graph_entry_nested(struct trace_iterator *iter,
 			cpu_data->enter_funcs[call->depth] = call->func;
 	}
 
-	/* No overhead */
-	ret = print_graph_overhead(-1, s, flags);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
 	/* No time */
-	if (flags & TRACE_GRAPH_PRINT_DURATION) {
-		ret = trace_seq_printf(s, "            |  ");
-		if (!ret)
-			return TRACE_TYPE_PARTIAL_LINE;
-	}
+	ret = print_graph_duration(DURATION_FILL_FULL, s, flags);
+	if (ret != TRACE_TYPE_HANDLED)
+		return ret;
 
 	/* Function */
 	for (i = 0; i < call->depth * TRACE_GRAPH_INDENT; i++) {
@@ -1078,18 +1092,11 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
 	if (print_graph_prologue(iter, s, 0, 0, flags))
 		return TRACE_TYPE_PARTIAL_LINE;
 
-	/* Overhead */
-	ret = print_graph_overhead(duration, s, flags);
-	if (!ret)
+	/* Overhead and duration */
+	ret = print_graph_duration(duration, s, flags);
+	if (ret == TRACE_TYPE_PARTIAL_LINE)
 		return TRACE_TYPE_PARTIAL_LINE;
 
-	/* Duration */
-	if (flags & TRACE_GRAPH_PRINT_DURATION) {
-		ret = print_graph_duration(duration, s);
-		if (ret == TRACE_TYPE_PARTIAL_LINE)
-			return TRACE_TYPE_PARTIAL_LINE;
-	}
-
 	/* Closing brace */
 	for (i = 0; i < trace->depth * TRACE_GRAPH_INDENT; i++) {
 		ret = trace_seq_printf(s, " ");
@@ -1146,17 +1153,10 @@ print_graph_comment(struct trace_seq *s, struct trace_entry *ent,
 	if (print_graph_prologue(iter, s, 0, 0, flags))
 		return TRACE_TYPE_PARTIAL_LINE;
 
-	/* No overhead */
-	ret = print_graph_overhead(-1, s, flags);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
 	/* No time */
-	if (flags & TRACE_GRAPH_PRINT_DURATION) {
-		ret = trace_seq_printf(s, "            |  ");
-		if (!ret)
-			return TRACE_TYPE_PARTIAL_LINE;
-	}
+	ret = print_graph_duration(DURATION_FILL_FULL, s, flags);
+	if (ret != TRACE_TYPE_HANDLED)
+		return ret;
 
 	/* Indentation */
 	if (depth > 0)
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 08/15] tracing, function: Fix trace header to follow context-info option
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (6 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 07/15] tracing, function_graph: Merge overhead and duration display Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 09/15] tracing, function_graph: Remove lock-depth from latency trace Steven Rostedt
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Jiri Olsa

[-- Attachment #1: 0008-tracing-function-Fix-trace-header-to-follow-context-.patch --]
[-- Type: text/plain, Size: 1391 bytes --]

From: Jiri Olsa <jolsa@redhat.com>

The header display of the function tracer does not follow
the context-info option, so the field names are displayed even
if this option is off.

Added a check for the TRACE_ITER_CONTEXT_INFO trace flag.

With the following commands:
	# echo function > ./current_tracer
	# echo 0 > options/context-info
	# cat trace

This is what it looked like before:
# tracer: function
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
add_preempt_count <-schedule
rcu_note_context_switch <-schedule
...

This is what it looks like now:
# tracer: function
#
_raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
...

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1307113131-10045-4-git-send-email-jolsa@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b926578..3d5e17a 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2052,6 +2052,9 @@ void trace_default_header(struct seq_file *m)
 {
 	struct trace_iterator *iter = m->private;
 
+	if (!(trace_flags & TRACE_ITER_CONTEXT_INFO))
+		return;
+
 	if (iter->iter_flags & TRACE_FILE_LAT_FMT) {
 		/* print nothing if the buffers are empty */
 		if (trace_empty(iter))
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 09/15] tracing, function_graph: Remove lock-depth from latency trace
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (7 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 08/15] tracing, function: Fix trace header to follow context-info option Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 10/15] tracing, function_graph: Add context-info support for function_graph Steven Rostedt
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Jiri Olsa

[-- Attachment #1: 0009-tracing-function_graph-Remove-lock-depth-from-latenc.patch --]
[-- Type: text/plain, Size: 3457 bytes --]

From: Jiri Olsa <jolsa@redhat.com>

The lock_depth field was removed in commit
e6e1e25 ("tracing: Remove lock_depth from event entry").

This removes the lock_depth info from the function_graph latency header
as well.

With the following commands:
	# echo function_graph > ./current_tracer
	# echo 1 > options/latency-format
	# cat trace

This is what it looked like before:
# tracer: function_graph
#
# function_graph latency trace v1.1.5 on 3.0.0-rc1-tip+
# --------------------------------------------------------------------
# latency: 0 us, #59756/311298, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
#    -----------------
#    | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
#    -----------------
#
#      _-----=> irqs-off
#     / _----=> need-resched
#    | / _---=> hardirq/softirq
#    || / _--=> preempt-depth
#    ||| / _-=> lock-depth
#    |||| /
# CPU|||||  DURATION                  FUNCTION CALLS
# |  |||||   |   |                     |   |   |   |
 0)  ....  0.068 us    |    } /* __rcu_read_unlock */
...

This is what it looks like now:
# tracer: function_graph
#
# function_graph latency trace v1.1.5 on 3.0.0-rc1-tip+
# --------------------------------------------------------------------
# latency: 0 us, #59747/1744610, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
#    -----------------
#    | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
#    -----------------
#
#      _-----=> irqs-off
#     / _----=> need-resched
#    | / _---=> hardirq/softirq
#    || / _--=> preempt-depth
#    ||| /
# CPU||||  DURATION                  FUNCTION CALLS
# |  ||||   |   |                     |   |   |   |
 0)  ..s.  1.641 us    |  } /* __rcu_process_callbacks */
...

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1307113131-10045-5-git-send-email-jolsa@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c |    7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 44b955f..1da5b97 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -1298,8 +1298,7 @@ static void print_lat_header(struct seq_file *s, u32 flags)
 	seq_printf(s, "#%.*s / _----=> need-resched    \n", size, spaces);
 	seq_printf(s, "#%.*s| / _---=> hardirq/softirq \n", size, spaces);
 	seq_printf(s, "#%.*s|| / _--=> preempt-depth   \n", size, spaces);
-	seq_printf(s, "#%.*s||| / _-=> lock-depth      \n", size, spaces);
-	seq_printf(s, "#%.*s|||| /                     \n", size, spaces);
+	seq_printf(s, "#%.*s||| /                      \n", size, spaces);
 }
 
 static void __print_graph_headers_flags(struct seq_file *s, u32 flags)
@@ -1318,7 +1317,7 @@ static void __print_graph_headers_flags(struct seq_file *s, u32 flags)
 	if (flags & TRACE_GRAPH_PRINT_PROC)
 		seq_printf(s, "  TASK/PID       ");
 	if (lat)
-		seq_printf(s, "|||||");
+		seq_printf(s, "||||");
 	if (flags & TRACE_GRAPH_PRINT_DURATION)
 		seq_printf(s, "  DURATION   ");
 	seq_printf(s, "               FUNCTION CALLS\n");
@@ -1332,7 +1331,7 @@ static void __print_graph_headers_flags(struct seq_file *s, u32 flags)
 	if (flags & TRACE_GRAPH_PRINT_PROC)
 		seq_printf(s, "   |    |        ");
 	if (lat)
-		seq_printf(s, "|||||");
+		seq_printf(s, "||||");
 	if (flags & TRACE_GRAPH_PRINT_DURATION)
 		seq_printf(s, "   |   |      ");
 	seq_printf(s, "               |   |   |   |\n");
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 10/15] tracing, function_graph: Add context-info support for function_graph
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (8 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 09/15] tracing, function_graph: Remove lock-depth from latency trace Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 11/15] tracing: Convert to kstrtoul_from_user Steven Rostedt
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Jiri Olsa

[-- Attachment #1: 0010-tracing-function_graph-Add-context-info-support-for-.patch --]
[-- Type: text/plain, Size: 3736 bytes --]

From: Jiri Olsa <jolsa@redhat.com>

The function_graph tracer does not follow the global context-info option.
Add a check of the TRACE_ITER_CONTEXT_INFO trace flag to honor it.

With the following commands:
	# echo function_graph > ./current_tracer
	# echo 0 > options/context-info
	# cat trace

This is what it looked like before:
# tracer: function_graph
#
#     TIME        CPU  DURATION                  FUNCTION CALLS
#      |          |     |   |                     |   |   |   |
 1)   0.079 us    |          } /* __vma_link_rb */
 1)   0.056 us    |          copy_page_range();
 1)               |          security_vm_enough_memory() {
...

This is what it looks like now:
# tracer: function_graph
#
  } /* update_ts_time_stats */
  timekeeping_max_deferment();
...

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1307113131-10045-6-git-send-email-jolsa@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c |   53 ++++++++++++++++++++--------------
 1 files changed, 31 insertions(+), 22 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 1da5b97..e8d6bb5 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -613,28 +613,30 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
 		addr >= (unsigned long)__irqentry_text_end)
 		return TRACE_TYPE_UNHANDLED;
 
-	/* Absolute time */
-	if (flags & TRACE_GRAPH_PRINT_ABS_TIME) {
-		ret = print_graph_abs_time(iter->ts, s);
-		if (!ret)
-			return TRACE_TYPE_PARTIAL_LINE;
-	}
+	if (trace_flags & TRACE_ITER_CONTEXT_INFO) {
+		/* Absolute time */
+		if (flags & TRACE_GRAPH_PRINT_ABS_TIME) {
+			ret = print_graph_abs_time(iter->ts, s);
+			if (!ret)
+				return TRACE_TYPE_PARTIAL_LINE;
+		}
 
-	/* Cpu */
-	if (flags & TRACE_GRAPH_PRINT_CPU) {
-		ret = print_graph_cpu(s, cpu);
-		if (ret == TRACE_TYPE_PARTIAL_LINE)
-			return TRACE_TYPE_PARTIAL_LINE;
-	}
+		/* Cpu */
+		if (flags & TRACE_GRAPH_PRINT_CPU) {
+			ret = print_graph_cpu(s, cpu);
+			if (ret == TRACE_TYPE_PARTIAL_LINE)
+				return TRACE_TYPE_PARTIAL_LINE;
+		}
 
-	/* Proc */
-	if (flags & TRACE_GRAPH_PRINT_PROC) {
-		ret = print_graph_proc(s, pid);
-		if (ret == TRACE_TYPE_PARTIAL_LINE)
-			return TRACE_TYPE_PARTIAL_LINE;
-		ret = trace_seq_printf(s, " | ");
-		if (!ret)
-			return TRACE_TYPE_PARTIAL_LINE;
+		/* Proc */
+		if (flags & TRACE_GRAPH_PRINT_PROC) {
+			ret = print_graph_proc(s, pid);
+			if (ret == TRACE_TYPE_PARTIAL_LINE)
+				return TRACE_TYPE_PARTIAL_LINE;
+			ret = trace_seq_printf(s, " | ");
+			if (!ret)
+				return TRACE_TYPE_PARTIAL_LINE;
+		}
 	}
 
 	/* No overhead */
@@ -710,8 +712,9 @@ print_graph_duration(unsigned long long duration, struct trace_seq *s,
 {
 	int ret = -1;
 
-	if (!(flags & TRACE_GRAPH_PRINT_DURATION))
-		return TRACE_TYPE_HANDLED;
+	if (!(flags & TRACE_GRAPH_PRINT_DURATION) ||
+	    !(trace_flags & TRACE_ITER_CONTEXT_INFO))
+			return TRACE_TYPE_HANDLED;
 
 	/* No real adata, just filling the column with spaces */
 	switch (duration) {
@@ -879,6 +882,9 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s,
 			return TRACE_TYPE_PARTIAL_LINE;
 	}
 
+	if (!(trace_flags & TRACE_ITER_CONTEXT_INFO))
+		return 0;
+
 	/* Absolute time */
 	if (flags & TRACE_GRAPH_PRINT_ABS_TIME) {
 		ret = print_graph_abs_time(iter->ts, s);
@@ -1346,6 +1352,9 @@ void print_graph_headers_flags(struct seq_file *s, u32 flags)
 {
 	struct trace_iterator *iter = s->private;
 
+	if (!(trace_flags & TRACE_ITER_CONTEXT_INFO))
+		return;
+
 	if (trace_flags & TRACE_ITER_LATENCY_FMT) {
 		/* print nothing if the buffers are empty */
 		if (trace_empty(iter))
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 11/15] tracing: Convert to kstrtoul_from_user
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (9 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 10/15] tracing, function_graph: Add context-info support for function_graph Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 12/15] ring-buffer: Set __GFP_NORETRY flag for ring buffer allocating Steven Rostedt
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, Peter Huewe

[-- Attachment #1: 0011-tracing-Convert-to-kstrtoul_from_user.patch --]
[-- Type: text/plain, Size: 6409 bytes --]

From: Peter Huewe <peterhuewe@gmx.de>

This patch replaces the code for getting an unsigned long from a
userspace buffer with a simple call to kstrtoul_from_user().
This makes it easier to read and less error prone.
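
The repeated open-coded pattern and its replacement, condensed
(variable names as in the hunks below):

	/* Before: bounce through a local buffer and strict_strtoul() */
	char buf[64];
	unsigned long val;
	int ret;

	if (cnt >= sizeof(buf))
		return -EINVAL;
	if (copy_from_user(&buf, ubuf, cnt))
		return -EFAULT;
	buf[cnt] = 0;
	ret = strict_strtoul(buf, 10, &val);
	if (ret < 0)
		return ret;

	/* After: one helper does the copy, termination and conversion */
	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
	if (ret)
		return ret;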

Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
Link: http://lkml.kernel.org/r/1307476707-14762-1-git-send-email-peterhuewe@gmx.de
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c       |   13 +-------
 kernel/trace/ring_buffer.c  |   13 +-------
 kernel/trace/trace.c        |   65 ++++++------------------------------------
 kernel/trace/trace_events.c |   26 ++--------------
 kernel/trace/trace_stack.c  |   13 +-------
 5 files changed, 20 insertions(+), 110 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index e153807..458018a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -803,19 +803,10 @@ ftrace_profile_write(struct file *filp, const char __user *ubuf,
 		     size_t cnt, loff_t *ppos)
 {
 	unsigned long val;
-	char buf[64];		/* big enough to hold a number */
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	val = !!val;
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 2780e60..7c5d1c2 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3980,20 +3980,11 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
 		size_t cnt, loff_t *ppos)
 {
 	unsigned long *p = filp->private_data;
-	char buf[64];
 	unsigned long val;
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	if (val)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 3d5e17a..b05eb28 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2705,20 +2705,11 @@ tracing_ctrl_write(struct file *filp, const char __user *ubuf,
 		   size_t cnt, loff_t *ppos)
 {
 	struct trace_array *tr = filp->private_data;
-	char buf[64];
 	unsigned long val;
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	val = !!val;
@@ -3005,20 +2996,11 @@ tracing_max_lat_write(struct file *filp, const char __user *ubuf,
 		      size_t cnt, loff_t *ppos)
 {
 	unsigned long *ptr = filp->private_data;
-	char buf[64];
 	unsigned long val;
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	*ptr = val * 1000;
@@ -3499,19 +3481,10 @@ tracing_entries_write(struct file *filp, const char __user *ubuf,
 		      size_t cnt, loff_t *ppos)
 {
 	unsigned long val;
-	char buf[64];
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	/* must have at least 1 entry */
@@ -4173,19 +4146,10 @@ trace_options_write(struct file *filp, const char __user *ubuf, size_t cnt,
 {
 	struct trace_option_dentry *topt = filp->private_data;
 	unsigned long val;
-	char buf[64];
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	if (val != 0 && val != 1)
@@ -4233,20 +4197,11 @@ trace_options_core_write(struct file *filp, const char __user *ubuf, size_t cnt,
 			 loff_t *ppos)
 {
 	long index = (long)filp->private_data;
-	char buf[64];
 	unsigned long val;
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	if (val != 0 && val != 1)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 686ec39..4d7e149 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -486,20 +486,11 @@ event_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
 		   loff_t *ppos)
 {
 	struct ftrace_event_call *call = filp->private_data;
-	char buf[64];
 	unsigned long val;
 	int ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	ret = tracing_update_buffers();
@@ -571,19 +562,10 @@ system_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
 {
 	const char *system = filp->private_data;
 	unsigned long val;
-	char buf[64];
 	ssize_t ret;
 
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	buf[cnt] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
 		return ret;
 
 	ret = tracing_update_buffers();
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index b0b53b8..77575b3 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -156,20 +156,11 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
 {
 	long *ptr = filp->private_data;
 	unsigned long val, flags;
-	char buf[64];
 	int ret;
 	int cpu;
 
-	if (count >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, count))
-		return -EFAULT;
-
-	buf[count] = 0;
-
-	ret = strict_strtoul(buf, 10, &val);
-	if (ret < 0)
+	ret = kstrtoul_from_user(ubuf, count, 10, &val);
+	if (ret)
 		return ret;
 
 	local_irq_save(flags);
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 12/15] ring-buffer: Set __GFP_NORETRY flag for ring buffer allocating
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (10 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 11/15] tracing: Convert to kstrtoul_from_user Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-09 17:27 ` [PATCH 13/15] x86: Swap save_stack_trace_regs parameters Steven Rostedt
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker, David Rientjes,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp

[-- Attachment #1: 0012-ring-buffer-Set-__GFP_NORETRY-flag-for-ring-buffer-a.patch --]
[-- Type: text/plain, Size: 3117 bytes --]

From: Vaibhav Nagarnaik <vnagarnaik@google.com>

The tracing ring buffer is allocated from kernel memory. While
allocating a large chunk of memory, an OOM condition can occur and
destabilize the system, causing random processes to be killed during
the allocation.

This patch adds the __GFP_NORETRY flag to the ring buffer allocation
calls so that they fail gracefully when the system cannot satisfy the
allocation request.
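
As a minimal sketch of the allocation pattern (assuming a local "cpu"
variable and a free_pages error path, as at the real call sites in the
diff below):

	struct page *page;

	/*
	 * __GFP_NORETRY: give up on failure instead of retrying hard,
	 * so the allocation never triggers the OOM killer.
	 */
	page = alloc_pages_node(cpu_to_node(cpu),
				GFP_KERNEL | __GFP_NORETRY, 0);
	if (!page)
		goto free_pages;	/* fail gracefully, free partial work */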

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: David Sharp <dhsharp@google.com>
Link: http://lkml.kernel.org/r/1307491302-9236-1-git-send-email-vnagarnaik@google.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ring_buffer.c |   25 ++++++++++++++++++++-----
 1 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 7c5d1c2..911d572 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1004,8 +1004,14 @@ static int rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page;
+		/*
+		 * __GFP_NORETRY flag makes sure that the allocation fails
+		 * gracefully without invoking oom-killer and the system is
+		 * not destabilized.
+		 */
 		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
-				    GFP_KERNEL, cpu_to_node(cpu_buffer->cpu));
+				    GFP_KERNEL | __GFP_NORETRY,
+				    cpu_to_node(cpu_buffer->cpu));
 		if (!bpage)
 			goto free_pages;
 
@@ -1014,7 +1020,7 @@ static int rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 		list_add(&bpage->list, &pages);
 
 		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
-					GFP_KERNEL, 0);
+					GFP_KERNEL | __GFP_NORETRY, 0);
 		if (!page)
 			goto free_pages;
 		bpage->page = page_address(page);
@@ -1376,13 +1382,20 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size)
 	for_each_buffer_cpu(buffer, cpu) {
 		for (i = 0; i < new_pages; i++) {
 			struct page *page;
+			/*
+			 * __GFP_NORETRY flag makes sure that the allocation
+			 * fails gracefully without invoking oom-killer and
+			 * the system is not destabilized.
+			 */
 			bpage = kzalloc_node(ALIGN(sizeof(*bpage),
 						  cache_line_size()),
-					    GFP_KERNEL, cpu_to_node(cpu));
+					    GFP_KERNEL | __GFP_NORETRY,
+					    cpu_to_node(cpu));
 			if (!bpage)
 				goto free_pages;
 			list_add(&bpage->list, &pages);
-			page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL,
+			page = alloc_pages_node(cpu_to_node(cpu),
+						GFP_KERNEL | __GFP_NORETRY,
 						0);
 			if (!page)
 				goto free_pages;
@@ -3737,7 +3750,9 @@ void *ring_buffer_alloc_read_page(struct ring_buffer *buffer, int cpu)
 	struct buffer_data_page *bpage;
 	struct page *page;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, 0);
+	page = alloc_pages_node(cpu_to_node(cpu),
+				GFP_KERNEL | __GFP_NORETRY,
+				0);
 	if (!page)
 		return NULL;
 
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 13/15] x86: Swap save_stack_trace_regs parameters
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (11 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 12/15] ring-buffer: Set __GFP_NORETRY flag for ring buffer allocating Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-13 10:18   ` Ingo Molnar
  2011-06-09 17:27 ` [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs() Steven Rostedt
  2011-06-09 17:27 ` [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace Steven Rostedt
  14 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim

[-- Attachment #1: 0013-x86-Swap-save_stack_trace_regs-parameters.patch --]
[-- Type: text/plain, Size: 2459 bytes --]

From: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

Swap the 1st and 2nd parameters of save_stack_trace_regs()
so that its argument order matches save_stack_trace_tsk().
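
After the swap a call site reads the same way as save_stack_trace_tsk():
the source of the trace comes first, the output structure second. A
minimal sketch (the fixed-size entries array is hypothetical):

	unsigned long entries[16];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
	};

	/* was: save_stack_trace_regs(&trace, regs) */
	save_stack_trace_regs(regs, &trace);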

Cc: Ingo Molnar <mingo@elte.hu>
Cc: yrl.pp-manager.tt@hitachi.com
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20110608070921.17777.31103.stgit@fedora15
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/stacktrace.c  |    2 +-
 arch/x86/mm/kmemcheck/error.c |    2 +-
 include/linux/stacktrace.h    |    4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 55d9bc0..fdd0c64 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -66,7 +66,7 @@ void save_stack_trace(struct stack_trace *trace)
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
-void save_stack_trace_regs(struct stack_trace *trace, struct pt_regs *regs)
+void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 {
 	dump_trace(current, regs, NULL, 0, &save_stack_ops, trace);
 	if (trace->nr_entries < trace->max_entries)
diff --git a/arch/x86/mm/kmemcheck/error.c b/arch/x86/mm/kmemcheck/error.c
index 704a37c..dab4187 100644
--- a/arch/x86/mm/kmemcheck/error.c
+++ b/arch/x86/mm/kmemcheck/error.c
@@ -185,7 +185,7 @@ void kmemcheck_error_save(enum kmemcheck_shadow state,
 	e->trace.entries = e->trace_entries;
 	e->trace.max_entries = ARRAY_SIZE(e->trace_entries);
 	e->trace.skip = 0;
-	save_stack_trace_regs(&e->trace, regs);
+	save_stack_trace_regs(regs, &e->trace);
 
 	/* Round address down to nearest 16 bytes */
 	shadow_copy = kmemcheck_shadow_lookup(address
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index 25310f1..115b570 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -14,8 +14,8 @@ struct stack_trace {
 };
 
 extern void save_stack_trace(struct stack_trace *trace);
-extern void save_stack_trace_regs(struct stack_trace *trace,
-				  struct pt_regs *regs);
+extern void save_stack_trace_regs(struct pt_regs *regs,
+				  struct stack_trace *trace);
 extern void save_stack_trace_tsk(struct task_struct *tsk,
 				struct stack_trace *trace);
 
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs()
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (12 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 13/15] x86: Swap save_stack_trace_regs parameters Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-13 10:19   ` Ingo Molnar
  2011-06-09 17:27 ` [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace Steven Rostedt
  14 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim

[-- Attachment #1: 0014-stack_trace-Add-weak-save_stack_trace_regs.patch --]
[-- Type: text/plain, Size: 1744 bytes --]

From: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

Add a weak symbol for save_stack_trace_regs(), as is already done
for save_stack_trace_tsk(), since it is not yet implemented on
architectures other than x86.
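
The weak default added below is overridden simply by an architecture
supplying a strong definition with the same signature. As a sketch, the
x86 override (following the existing save_stack_trace() pattern; see
patch 13 for the prototype) looks roughly like:

void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
{
	dump_trace(current, regs, NULL, 0, &save_stack_ops, trace);
	if (trace->nr_entries < trace->max_entries)
		trace->entries[trace->nr_entries++] = ULONG_MAX;
}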

Cc: Ingo Molnar <mingo@elte.hu>
Cc: yrl.pp-manager.tt@hitachi.com
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20110608070927.17777.37895.stgit@fedora15
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/stacktrace.c |   13 ++++++++++---
 1 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index eb212f8..6fa38bc 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -26,12 +26,19 @@ void print_stack_trace(struct stack_trace *trace, int spaces)
 EXPORT_SYMBOL_GPL(print_stack_trace);
 
 /*
- * Architectures that do not implement save_stack_trace_tsk get this
- * weak alias and a once-per-bootup warning (whenever this facility
- * is utilized - for example by procfs):
+ * Architectures that do not implement save_stack_trace_tsk or
+ * save_stack_trace_regs get this weak alias and a once-per-bootup warning
+ * (whenever this facility is utilized - for example by procfs):
  */
 __weak void
 save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
 	WARN_ONCE(1, KERN_INFO "save_stack_trace_tsk() not implemented yet.\n");
 }
+
+__weak void
+save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
+{
+	WARN_ONCE(1,
+		  KERN_INFO "save_stack_trace_regs() not implemented yet.\n");
+}
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
                   ` (13 preceding siblings ...)
  2011-06-09 17:27 ` [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs() Steven Rostedt
@ 2011-06-09 17:27 ` Steven Rostedt
  2011-06-13 10:21   ` Ingo Molnar
  14 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-09 17:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim

[-- Attachment #1: 0015-tracing-kprobes-Fix-kprobe-tracer-to-support-stack-t.patch --]
[-- Type: text/plain, Size: 7579 bytes --]

From: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

Fix the kprobe-tracer to record kernel stack traces correctly.
Since the execution path of kprobe-based dynamic events differs from
that of other tracepoint-based events, the normal ftrace_trace_stack()
doesn't work correctly there. To fix that, this introduces
ftrace_trace_stack_regs(), which traces the stack via a pt_regs
snapshot instead of the current stack pointer.

e.g.

 # echo p schedule+4 > /sys/kernel/debug/tracing/kprobe_events
 # echo 1 > /sys/kernel/debug/tracing/options/stacktrace
 # echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
 # head -n 20 /sys/kernel/debug/tracing/trace
# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
            bash-2968  [000] 10297.050245: p_schedule_4: (schedule+0x4/0x4ca)
            bash-2968  [000] 10297.050247: <stack trace>
 => schedule_timeout
 => n_tty_read
 => tty_read
 => vfs_read
 => sys_read
 => system_call_fastpath
     kworker/0:1-2940  [000] 10297.050265: p_schedule_4: (schedule+0x4/0x4ca)
     kworker/0:1-2940  [000] 10297.050266: <stack trace>
 => worker_thread
 => kthread
 => kernel_thread_helper
            sshd-1132  [000] 10297.050365: p_schedule_4: (schedule+0x4/0x4ca)
            sshd-1132  [000] 10297.050365: <stack trace>
 => sysret_careful

Note: Even with this fix, the first entry will be skipped
if the probe is placed in the function entry area before
the frame pointer is set up (usually the first 4 bytes,
push %bp; mov %sp,%bp, on x86), because the stack unwinder
depends on the frame pointer.
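
Reduced to a sketch, the core of the change is that the common
stack-tracing path now takes an optional pt_regs and unwinds from it
when one is supplied (the function name below is hypothetical; the real
one is __ftrace_trace_stack() in the diff):

static void stack_trace_sketch(struct stack_trace *trace, struct pt_regs *regs)
{
	if (regs)
		save_stack_trace_regs(regs, trace);	/* kprobe: regs snapshot */
	else
		save_stack_trace(trace);		/* default: current stack */
}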

Cc: Ingo Molnar <mingo@elte.hu>
Cc: yrl.pp-manager.tt@hitachi.com
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20110608070934.17777.17116.stgit@fedora15
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ftrace_event.h |    4 ++++
 kernel/trace/trace.c         |   35 ++++++++++++++++++++++++++++++-----
 kernel/trace/trace.h         |    9 +++++++++
 kernel/trace/trace_kprobe.c  |    6 ++++--
 4 files changed, 47 insertions(+), 7 deletions(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 59d3ef1..b1e69ee 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -129,6 +129,10 @@ void trace_current_buffer_unlock_commit(struct ring_buffer *buffer,
 void trace_nowake_buffer_unlock_commit(struct ring_buffer *buffer,
 				       struct ring_buffer_event *event,
 					unsigned long flags, int pc);
+void trace_nowake_buffer_unlock_commit_regs(struct ring_buffer *buffer,
+					    struct ring_buffer_event *event,
+					    unsigned long flags, int pc,
+					    struct pt_regs *regs);
 void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
 					 struct ring_buffer_event *event);
 
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b05eb28..db62cda 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1192,6 +1192,18 @@ void trace_nowake_buffer_unlock_commit(struct ring_buffer *buffer,
 }
 EXPORT_SYMBOL_GPL(trace_nowake_buffer_unlock_commit);
 
+void trace_nowake_buffer_unlock_commit_regs(struct ring_buffer *buffer,
+					    struct ring_buffer_event *event,
+					    unsigned long flags, int pc,
+					    struct pt_regs *regs)
+{
+	ring_buffer_unlock_commit(buffer, event);
+
+	ftrace_trace_stack_regs(buffer, flags, 0, pc, regs);
+	ftrace_trace_userstack(buffer, flags, pc);
+}
+EXPORT_SYMBOL_GPL(trace_nowake_buffer_unlock_commit_regs);
+
 void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
 					 struct ring_buffer_event *event)
 {
@@ -1237,7 +1249,7 @@ ftrace(struct trace_array *tr, struct trace_array_cpu *data,
 #ifdef CONFIG_STACKTRACE
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
 				 unsigned long flags,
-				 int skip, int pc)
+				 int skip, int pc, struct pt_regs *regs)
 {
 	struct ftrace_event_call *call = &event_kernel_stack;
 	struct ring_buffer_event *event;
@@ -1256,24 +1268,36 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	trace.skip		= skip;
 	trace.entries		= entry->caller;
 
-	save_stack_trace(&trace);
+	if (regs)
+		save_stack_trace_regs(regs, &trace);
+	else
+		save_stack_trace(&trace);
 	if (!filter_check_discard(call, entry, buffer, event))
 		ring_buffer_unlock_commit(buffer, event);
 }
 
+void ftrace_trace_stack_regs(struct ring_buffer *buffer, unsigned long flags,
+			     int skip, int pc, struct pt_regs *regs)
+{
+	if (!(trace_flags & TRACE_ITER_STACKTRACE))
+		return;
+
+	__ftrace_trace_stack(buffer, flags, skip, pc, regs);
+}
+
 void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
 			int skip, int pc)
 {
 	if (!(trace_flags & TRACE_ITER_STACKTRACE))
 		return;
 
-	__ftrace_trace_stack(buffer, flags, skip, pc);
+	__ftrace_trace_stack(buffer, flags, skip, pc, NULL);
 }
 
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 		   int pc)
 {
-	__ftrace_trace_stack(tr->buffer, flags, skip, pc);
+	__ftrace_trace_stack(tr->buffer, flags, skip, pc, NULL);
 }
 
 /**
@@ -1289,7 +1313,8 @@ void trace_dump_stack(void)
 	local_save_flags(flags);
 
 	/* skipping 3 traces, seems to get us at the caller of this function */
-	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count());
+	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(),
+			     NULL);
 }
 
 static DEFINE_PER_CPU(int, user_stack_count);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 229f859..caee141 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -389,6 +389,9 @@ void update_max_tr_single(struct trace_array *tr,
 void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags,
 			int skip, int pc);
 
+void ftrace_trace_stack_regs(struct ring_buffer *buffer, unsigned long flags,
+			     int skip, int pc, struct pt_regs *regs);
+
 void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
 			    int pc);
 
@@ -400,6 +403,12 @@ static inline void ftrace_trace_stack(struct ring_buffer *buffer,
 {
 }
 
+static inline void ftrace_trace_stack_regs(struct ring_buffer *buffer,
+					   unsigned long flags, int skip,
+					   int pc, struct pt_regs *regs)
+{
+}
+
 static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
 					  unsigned long flags, int pc)
 {
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index f925c45..7053ef3 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1397,7 +1397,8 @@ static __kprobes void kprobe_trace_func(struct kprobe *kp, struct pt_regs *regs)
 	store_trace_args(sizeof(*entry), tp, regs, (u8 *)&entry[1], dsize);
 
 	if (!filter_current_check_discard(buffer, call, entry, event))
-		trace_nowake_buffer_unlock_commit(buffer, event, irq_flags, pc);
+		trace_nowake_buffer_unlock_commit_regs(buffer, event,
+						       irq_flags, pc, regs);
 }
 
 /* Kretprobe handler */
@@ -1429,7 +1430,8 @@ static __kprobes void kretprobe_trace_func(struct kretprobe_instance *ri,
 	store_trace_args(sizeof(*entry), tp, regs, (u8 *)&entry[1], dsize);
 
 	if (!filter_current_check_discard(buffer, call, entry, event))
-		trace_nowake_buffer_unlock_commit(buffer, event, irq_flags, pc);
+		trace_nowake_buffer_unlock_commit_regs(buffer, event,
+						       irq_flags, pc, regs);
 }
 
 /* Event entry printers */
-- 
1.7.4.4



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH 01/15] tracing: Schedule a delayed work to call wakeup()
  2011-06-09 17:27 ` [PATCH 01/15] tracing: Schedule a delayed work to call wakeup() Steven Rostedt
@ 2011-06-13 10:07   ` Ingo Molnar
  2011-06-13 10:27     ` Pekka Enberg
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 10:07 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Linus Torvalds


* Steven Rostedt <rostedt@goodmis.org> wrote:

> +	const unsigned long delay = msecs_to_jiffies(2); //delay 2 ms

We don't do C++ comments in the kernel ...

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages
  2011-06-09 17:27 ` [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages Steven Rostedt
@ 2011-06-13 10:09   ` Ingo Molnar
  2011-06-13 11:28     ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 10:09 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp


* Steven Rostedt <rostedt@goodmis.org> wrote:

>  	for (i = 0; i < nr_pages; i++) {
> +		struct page *page;
>  		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),

missing separation between local variable definitions and the first C 
statement.

Same mistake repeats elsewhere in the patch as well.

> +			page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL,
> +						0);

that linebreak is pointless.

> +		info->spare = ring_buffer_alloc_read_page(info->tr->buffer,
> +								info->cpu);

ditto.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-09 17:27 ` [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb Steven Rostedt
@ 2011-06-13 10:12   ` Ingo Molnar
  2011-06-13 11:39     ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 10:12 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Peter Zijlstra, Thomas Gleixner


* Steven Rostedt <rostedt@goodmis.org> wrote:

> +	switch (cmd) {
> +	case TRACE_RINGBUF_FREE_ON_CLOSE: {
> +		info->free_buffer_on_close = !!arg;
> +		ret = 0;
> +		break;
> +	}
>  	}

that doesn't look very tidy.

> @@ -3635,9 +3700,12 @@ static const struct file_operations tracing_pipe_fops = {
>  };
>  
>  static const struct file_operations tracing_entries_fops = {
> -	.open		= tracing_open_generic,
> +	.open		= tracing_entries_open,
>  	.read		= tracing_entries_read,
>  	.write		= tracing_entries_write,
> +	.unlocked_ioctl	= tracing_entries_ioctl,
> +	.compat_ioctl	= tracing_entries_ioctl,

i don't like it at all that you are adding to the ftrace ABI here. 
The *only* premise of the whole /debug/tracing/ muck was to allow it 
to be human-parseable and scripted - an ioctl is clearly outside that 
scope. Instead of increasing the mess in /debug/tracing/ we want 
clean tracing done via the perf ABI ...

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 13/15] x86: Swap save_stack_trace_regs parameters
  2011-06-09 17:27 ` [PATCH 13/15] x86: Swap save_stack_trace_regs parameters Steven Rostedt
@ 2011-06-13 10:18   ` Ingo Molnar
  2011-06-13 11:43     ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 10:18 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim


* Steven Rostedt <rostedt@goodmis.org> wrote:

> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: yrl.pp-manager.tt@hitachi.com
> Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Namhyung Kim <namhyung@gmail.com>
> Cc: linux-kernel@vger.kernel.org

That looks very messy:

 - there's duplicates
 - the SOB is in the middle
 - when there's a Link entry there's no need for a Cc: linux-kernel entry

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs()
  2011-06-09 17:27 ` [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs() Steven Rostedt
@ 2011-06-13 10:19   ` Ingo Molnar
  2011-06-13 10:52     ` Masami Hiramatsu
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 10:19 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim


* Steven Rostedt <rostedt@goodmis.org> wrote:

> +save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
> +{
> +	WARN_ONCE(1,
> +		  KERN_INFO "save_stack_trace_regs() not implemented yet.\n");

 - pointless linebreak
 - we don't add KERN_ tags to WARN*() invocations

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-09 17:27 ` [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace Steven Rostedt
@ 2011-06-13 10:21   ` Ingo Molnar
  2011-06-13 11:44     ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 10:21 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim


* Steven Rostedt <rostedt@goodmis.org> wrote:

> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: yrl.pp-manager.tt@hitachi.com
> Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Namhyung Kim <namhyung@gmail.com>
> Cc: linux-kernel@vger.kernel.org
> Link: http://lkml.kernel.org/r/20110608070934.17777.17116.stgit@fedora15
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Very sloppy tags section as well, see previous mail.

> +	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(),
> +			     NULL);

pointless linebreak.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 01/15] tracing: Schedule a delayed work to call wakeup()
  2011-06-13 10:07   ` Ingo Molnar
@ 2011-06-13 10:27     ` Pekka Enberg
  2011-06-13 11:45       ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Pekka Enberg @ 2011-06-13 10:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Steven Rostedt, linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Linus Torvalds

On Mon, Jun 13, 2011 at 1:07 PM, Ingo Molnar <mingo@elte.hu> wrote:
>
> * Steven Rostedt <rostedt@goodmis.org> wrote:
>
>> +     const unsigned long delay = msecs_to_jiffies(2); //delay 2 ms
>
> We don't do C++ comments in the kernel ...

Heh, we also don't do pointless comments like the one above! ;-)

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs()
  2011-06-13 10:19   ` Ingo Molnar
@ 2011-06-13 10:52     ` Masami Hiramatsu
  2011-06-13 11:42       ` Ingo Molnar
  0 siblings, 1 reply; 41+ messages in thread
From: Masami Hiramatsu @ 2011-06-13 10:52 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Steven Rostedt, linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Ingo Molnar, Peter Zijlstra, Namhyung Kim

(2011/06/13 19:19), Ingo Molnar wrote:
> 
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> 
>> +save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
>> +{
>> +	WARN_ONCE(1,
>> +		  KERN_INFO "save_stack_trace_regs() not implemented yet.\n");
> 
>  - pointless linebreak
>  - we don't add KERN_ tags to WARN*() invocations

Hm, I just copied save_stack_trace_tsk(). OK, I'll remove it.

Thanks!

> 
> Thanks,
> 
> 	Ingo


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages
  2011-06-13 10:09   ` Ingo Molnar
@ 2011-06-13 11:28     ` Steven Rostedt
  0 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 11:28 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp

On Mon, 2011-06-13 at 12:09 +0200, Ingo Molnar wrote:
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> >  	for (i = 0; i < nr_pages; i++) {
> > +		struct page *page;
> >  		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
> 
> missing separation between local variable definitions and the first C 
> statement.
> 
> Same mistake repeats elsewhere in the patch as well.
> 
> > +			page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL,
> > +						0);
> 
> that linebreak is pointless.

Probably to keep the 80 char limit :-p


> 
> > +		info->spare = ring_buffer_alloc_read_page(info->tr->buffer,
> > +								info->cpu);
> 
> ditto.

I can fix these up.

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-13 10:12   ` Ingo Molnar
@ 2011-06-13 11:39     ` Steven Rostedt
  2011-06-13 11:49       ` Ingo Molnar
  0 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 11:39 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Peter Zijlstra, Thomas Gleixner

On Mon, 2011-06-13 at 12:12 +0200, Ingo Molnar wrote:
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> > +	switch (cmd) {
> > +	case TRACE_RINGBUF_FREE_ON_CLOSE: {
> > +		info->free_buffer_on_close = !!arg;
> > +		ret = 0;
> > +		break;
> > +	}
> >  	}
> 
> that doesn't look very tidy.
> 
> > @@ -3635,9 +3700,12 @@ static const struct file_operations tracing_pipe_fops = {
> >  };
> >  
> >  static const struct file_operations tracing_entries_fops = {
> > -	.open		= tracing_open_generic,
> > +	.open		= tracing_entries_open,
> >  	.read		= tracing_entries_read,
> >  	.write		= tracing_entries_write,
> > +	.unlocked_ioctl	= tracing_entries_ioctl,
> > +	.compat_ioctl	= tracing_entries_ioctl,
> 
> i don't like it at all that you are adding to the ftrace ABI here. 
> The *only* premise of the whole /debug/tracing/ muck was to allow it 
> to be human-parseable and scripted - an ioctl is clearly outside that 
> scope. Instead of increasing the mess in /debug/tracing/ we want 
> clean tracing done via the perf ABI ...

Vaibhav originally suggested adding a "buffer_free" file that you could
write into and cause it to free the buffer. It would do this on the
release so you could also have an app (like Google needs) to open this
file and if the app dies, it will automatically free the buffer closing
it.

https://lkml.org/lkml/2011/3/17/366

I didn't really like adding another file to the debugfs system, and
recommended the ioctl. It seemed like a nice "unix" fit. But if you want
to go back to the Vaibhav's original method, which will stay in the
frame of "human-parseable and scripted". We could do that.

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs()
  2011-06-13 10:52     ` Masami Hiramatsu
@ 2011-06-13 11:42       ` Ingo Molnar
  0 siblings, 0 replies; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 11:42 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Steven Rostedt, linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Ingo Molnar, Peter Zijlstra, Namhyung Kim


* Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:

> (2011/06/13 19:19), Ingo Molnar wrote:
> > 
> > * Steven Rostedt <rostedt@goodmis.org> wrote:
> > 
> >> +save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
> >> +{
> >> +	WARN_ONCE(1,
> >> +		  KERN_INFO "save_stack_trace_regs() not implemented yet.\n");
> > 
> >  - pointless linebreak
> >  - we don't add KERN_ tags to WARN*() invocations
> 
> Hm, I just copied save_stack_trace_tsk(). OK, I'll remove it.

Please remove it from save_stack_trace_tsk() as well.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 13/15] x86: Swap save_stack_trace_regs parameters
  2011-06-13 10:18   ` Ingo Molnar
@ 2011-06-13 11:43     ` Steven Rostedt
  0 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 11:43 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim

On Mon, 2011-06-13 at 12:18 +0200, Ingo Molnar wrote:
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> > Cc: Ingo Molnar <mingo@elte.hu>
> > Cc: yrl.pp-manager.tt@hitachi.com
> > Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Cc: Namhyung Kim <namhyung@gmail.com>
> > Cc: linux-kernel@vger.kernel.org
> 
> That looks very messy:
> 
>  - there's duplicates
>  - the SOB is in the middle
>  - when there's a Link entry there's no need for a Cc: linux-kernel entry

Hmm, that was caused by my "do-git-am" script, which pulls in patches
from email. I added a new feature to add Cc's that were on the email but
not in the "Cc" list. I'll make sure it adds Cc's with other Cc's next
time.

BTW, I have it strip Cc's to me, do you want me to remove the Cc to you
as well? Or just keep the Cc to you in the change log (as I do now)?

As for the Cc to linux-kernel, I thought I fixed that. The script is
supposed to remove that Cc as well. I wonder why it missed it here? I
should have caught that on my final review. Sorry to miss it.

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-13 10:21   ` Ingo Molnar
@ 2011-06-13 11:44     ` Steven Rostedt
  2011-06-13 11:50       ` Ingo Molnar
  0 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 11:44 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim

On Mon, 2011-06-13 at 12:21 +0200, Ingo Molnar wrote:
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> > Cc: Ingo Molnar <mingo@elte.hu>
> > Cc: yrl.pp-manager.tt@hitachi.com
> > Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Cc: Namhyung Kim <namhyung@gmail.com>
> > Cc: linux-kernel@vger.kernel.org
> > Link: http://lkml.kernel.org/r/20110608070934.17777.17116.stgit@fedora15
> > Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> 
> Very sloppy tags section as well, see previous mail.

Same cause as in previous email.

> 
> > +	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(),
> > +			     NULL);
> 
> pointless linebreak.

Again, probably to do with the 80 char limit.

-- Steve

> 
> Thanks,
> 
> 	Ingo



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 01/15] tracing: Schedule a delayed work to call wakeup()
  2011-06-13 10:27     ` Pekka Enberg
@ 2011-06-13 11:45       ` Steven Rostedt
  0 siblings, 0 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 11:45 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Ingo Molnar, linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Linus Torvalds

On Mon, 2011-06-13 at 13:27 +0300, Pekka Enberg wrote:
> On Mon, Jun 13, 2011 at 1:07 PM, Ingo Molnar <mingo@elte.hu> wrote:
> >
> > * Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> >> +     const unsigned long delay = msecs_to_jiffies(2); //delay 2 ms
> >
> > We don't do C++ comments in the kernel ...
> 
> Heh, we also don't do pointless comments like the one above! ;-)

Right! I'll strip it.

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-13 11:39     ` Steven Rostedt
@ 2011-06-13 11:49       ` Ingo Molnar
  2011-06-13 11:54         ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 11:49 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	Vaibhav Nagarnaik, Ingo Molnar, Michael Rubin, David Sharp,
	Peter Zijlstra, Thomas Gleixner


* Steven Rostedt <rostedt@goodmis.org> wrote:

> On Mon, 2011-06-13 at 12:12 +0200, Ingo Molnar wrote:
> > * Steven Rostedt <rostedt@goodmis.org> wrote:
> > 
> > > +	switch (cmd) {
> > > +	case TRACE_RINGBUF_FREE_ON_CLOSE: {
> > > +		info->free_buffer_on_close = !!arg;
> > > +		ret = 0;
> > > +		break;
> > > +	}
> > >  	}
> > 
> > that doesn't look very tidy.
> > 
> > > @@ -3635,9 +3700,12 @@ static const struct file_operations tracing_pipe_fops = {
> > >  };
> > >  
> > >  static const struct file_operations tracing_entries_fops = {
> > > -	.open		= tracing_open_generic,
> > > +	.open		= tracing_entries_open,
> > >  	.read		= tracing_entries_read,
> > >  	.write		= tracing_entries_write,
> > > +	.unlocked_ioctl	= tracing_entries_ioctl,
> > > +	.compat_ioctl	= tracing_entries_ioctl,
> > 
> > i don't like it at all that you are adding to the ftrace ABI here. 
> > The *only* premise of the whole /debug/tracing/ muck was to allow it 
> > to be human-parseable and scripted - an ioctl is clearly outside that 
> > scope. Instead of increasing the mess in /debug/tracing/ we want 
> > clean tracing done via the perf ABI ...
> 
> Vaibhav originally suggested adding a "buffer_free" file that you 
> could write into and cause it to free the buffer. It would do this 
> on the release so you could also have an app (like Google needs) to 
> open this file and if the app dies, it will automatically free the 
> buffer closing it.
> 
> https://lkml.org/lkml/2011/3/17/366
> 
> I didn't really like adding another file to the debugfs system, and 
> recommended the ioctl. It seemed like a nice "unix" fit. But if you 
> want to go back to the Vaibhav's original method, which will stay 
> in the frame of "human-parseable and scripted". We could do that.

Yes, i think so - the *only* point of the /debug/tracing/ muck is 
that it's scriptable and human parseable. If Google wants to use it 
for more than that then they should help us enhance the perf syscall 
ABI for tracing ...

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-13 11:44     ` Steven Rostedt
@ 2011-06-13 11:50       ` Ingo Molnar
  2011-06-13 12:14         ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Ingo Molnar @ 2011-06-13 11:50 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim


* Steven Rostedt <rostedt@goodmis.org> wrote:

> > > +	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(),
> > > +			     NULL);
> > 
> > pointless linebreak.
> 
> Again, probably to do with the 80 char limit.

Well, the point of the 80 char limit warning these days is to force a 
cleanup of the code. Breaking the line mindlessly *uglifies* the code 
so it's pointless to do it.

In terms of terminal readability line length up to 90-100 cols is 
acceptable for code that is otherwise not ugly.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-13 11:49       ` Ingo Molnar
@ 2011-06-13 11:54         ` Steven Rostedt
  2011-06-13 19:12           ` Vaibhav Nagarnaik
  2011-06-13 20:01           ` Vaibhav Nagarnaik
  0 siblings, 2 replies; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 11:54 UTC (permalink / raw)
  To: Vaibhav Nagarnaik
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker, Michael Rubin,
	David Sharp, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

On Mon, 2011-06-13 at 13:49 +0200, Ingo Molnar wrote:

> > https://lkml.org/lkml/2011/3/17/366
> > 
> > I didn't really like adding another file to the debugfs system, and 
> > recommended the ioctl. It seemed like a nice "unix" fit. But if you 
> > want to go back to the Vaibhav's original method, which will stay 
> > in the frame of "human-parseable and scripted". We could do that.
> 
> Yes, i think so - the *only* point of the /debug/tracing/ muck is 
> that it's scriptable and human parseable. If Google wants to use it 
> for more than that then they should help us enhance the perf syscall 
> ABI for tracing ...

Hi Vaibhav,

Sorry for the spin around, but could you resubmit your original patch?

Thanks!

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-13 11:50       ` Ingo Molnar
@ 2011-06-13 12:14         ` Steven Rostedt
  2011-06-14  1:25           ` Masami Hiramatsu
  0 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-13 12:14 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Masami Hiramatsu, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim

On Mon, 2011-06-13 at 13:50 +0200, Ingo Molnar wrote:
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> > > > +	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(),
> > > > +			     NULL);
> > > 
> > > pointless linebreak.
> > 
> > Again, probably to do with the 80 char limit.
> 
> Well, the point of the 80 char limit warning these days is to force a 
> cleanup of the code. Breaking the line mindlessly *uglifies* the code 
> so it's pointless to do it.
> 
> In terms of terminal readability line length up to 90-100 cols is 
> acceptable for code that is otherwise not ugly.

I totally agree. I've just had arguments with others that are more 80
character nazi than I am. ;)

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-13 11:54         ` Steven Rostedt
@ 2011-06-13 19:12           ` Vaibhav Nagarnaik
  2011-06-13 20:01           ` Vaibhav Nagarnaik
  1 sibling, 0 replies; 41+ messages in thread
From: Vaibhav Nagarnaik @ 2011-06-13 19:12 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker, Michael Rubin,
	David Sharp, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

On Mon, Jun 13, 2011 at 4:54 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Mon, 2011-06-13 at 13:49 +0200, Ingo Molnar wrote:
>
>> > https://lkml.org/lkml/2011/3/17/366
>> >
>> > I didn't really like adding another file to the debugfs system, and
>> > recommended the ioctl. It seemed like a nice "unix" fit. But if you
>> > want to go back to the Vaibhav's original method, which will stay
>> > in the frame of "human-parseable and scripted". We could do that.
>>
>> Yes, i think so - the *only* point of the /debug/tracing/ muck is
>> that it's scriptable and human parseable. If Google wants to use it
>> for more than that then they should help us enhance the perf syscall
>> ABI for tracing ...
>
> Hi Vaibhav,
>
> Sorry for the spin around, but could you resubmit your original patch?
>
> Thanks!
>
> -- Steve
>

Sure Steven. I will create and send the new patch.


Vaibhav Nagarnaik

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-13 11:54         ` Steven Rostedt
  2011-06-13 19:12           ` Vaibhav Nagarnaik
@ 2011-06-13 20:01           ` Vaibhav Nagarnaik
  2011-06-14  0:37             ` Steven Rostedt
  1 sibling, 1 reply; 41+ messages in thread
From: Vaibhav Nagarnaik @ 2011-06-13 20:01 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker, Michael Rubin,
	David Sharp, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

On Mon, Jun 13, 2011 at 4:54 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Mon, 2011-06-13 at 13:49 +0200, Ingo Molnar wrote:
>
>> > https://lkml.org/lkml/2011/3/17/366
>> >
>> > I didn't really like adding another file to the debugfs system, and
>> > recommended the ioctl. It seemed like a nice "unix" fit. But if you
>> > want to go back to the Vaibhav's original method, which will stay
>> > in the frame of "human-parseable and scripted". We could do that.
>>
>> Yes, i think so - the *only* point of the /debug/tracing/ muck is
>> that it's scriptable and human parseable. If Google wants to use it
>> for more than that then they should help us enhance the perf syscall
>> ABI for tracing ...
>
> Hi Vaibhav,
>
> Sorry for the spin around, but could you resubmit your original patch?
>
> Thanks!
>
> -- Steve
>

Hi Steven

I have created the patch.

I was wondering what would you consider a good name for the new proc
file? David suggested "free_ring_buffer".


Vaibhav Nagarnaik

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-13 20:01           ` Vaibhav Nagarnaik
@ 2011-06-14  0:37             ` Steven Rostedt
  2011-06-14  0:43               ` Vaibhav Nagarnaik
  0 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-14  0:37 UTC (permalink / raw)
  To: Vaibhav Nagarnaik
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker, Michael Rubin,
	David Sharp, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

On Mon, 2011-06-13 at 13:01 -0700, Vaibhav Nagarnaik wrote:

> I have created the patch.
> 
> I was wondering what would you consider a good name for the new proc
> file? David suggested "free_ring_buffer".

Since we already have a "buffer_size" "free_buffer" should be a good
name.

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb
  2011-06-14  0:37             ` Steven Rostedt
@ 2011-06-14  0:43               ` Vaibhav Nagarnaik
  0 siblings, 0 replies; 41+ messages in thread
From: Vaibhav Nagarnaik @ 2011-06-14  0:43 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Frederic Weisbecker, Michael Rubin,
	David Sharp, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

On Mon, Jun 13, 2011 at 5:37 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Mon, 2011-06-13 at 13:01 -0700, Vaibhav Nagarnaik wrote:
>
>> I have created the patch.
>>
>> I was wondering what would you consider a good name for the new proc
>> file? David suggested "free_ring_buffer".
>
> Since we already have a "buffer_size" "free_buffer" should be a good
> name.
>

Sure. Sending updated patch.



Vaibhav Nagarnaik

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-13 12:14         ` Steven Rostedt
@ 2011-06-14  1:25           ` Masami Hiramatsu
  2011-06-14  2:08             ` Steven Rostedt
  0 siblings, 1 reply; 41+ messages in thread
From: Masami Hiramatsu @ 2011-06-14  1:25 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Ingo Molnar, Peter Zijlstra, Namhyung Kim

(2011/06/13 21:14), Steven Rostedt wrote:
> On Mon, 2011-06-13 at 13:50 +0200, Ingo Molnar wrote:
>> * Steven Rostedt <rostedt@goodmis.org> wrote:
>>
>>>>> +	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count(),
>>>>> +			     NULL);
>>>>
>>>> pointless linebreak.
>>>
>>> Again, probably to do with the 80 char limit.
>>
>> Well, the point of the 80 char limit warning these days is to force a 
>> cleanup of the code. Breaking the line mindlessly *uglifies* the code 
>> so it's pointless to do it.
>>
>> In terms of terminal readability line length up to 90-100 cols is 
>> acceptable for code that is otherwise not ugly.
> 
> I totally agree. I've just had arguments with others that are more 80
> character nazi than I am. ;)

I see, nowadays most laptops have a wide screen (16:9) :P


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-14  1:25           ` Masami Hiramatsu
@ 2011-06-14  2:08             ` Steven Rostedt
  2011-06-14 11:22               ` Peter Zijlstra
  0 siblings, 1 reply; 41+ messages in thread
From: Steven Rostedt @ 2011-06-14  2:08 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, linux-kernel, Andrew Morton, Frederic Weisbecker,
	yrl.pp-manager.tt, Ingo Molnar, Peter Zijlstra, Namhyung Kim

On Tue, 2011-06-14 at 10:25 +0900, Masami Hiramatsu wrote:

> > I see, nowadays most laptops have a wide screen (16:9) :P

<offtopic rant>

I totally hate this trend! I don't use computers to watch wide screen
movies! I use computers to do work. Who can get anything done with one
of these wide screen monitors?

I had to get my old LCD monitor restored, because I couldn't stand the
stupid wide screen I used to replace it. I find vertical space more high
valued real-estate than horizontal space.

The world is designed by idiots!

</offtopic rant>

-- Steve



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace
  2011-06-14  2:08             ` Steven Rostedt
@ 2011-06-14 11:22               ` Peter Zijlstra
  0 siblings, 0 replies; 41+ messages in thread
From: Peter Zijlstra @ 2011-06-14 11:22 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Masami Hiramatsu, Ingo Molnar, linux-kernel, Andrew Morton,
	Frederic Weisbecker, yrl.pp-manager.tt, Ingo Molnar,
	Namhyung Kim

On Mon, 2011-06-13 at 22:08 -0400, Steven Rostedt wrote:
> On Tue, 2011-06-14 at 10:25 +0900, Masami Hiramatsu wrote:
> 
> > > I see, nowadays most laptops have a wide screen (16:9) :P
> 
> <offtopic rant>
> 
> I totally hate this trend! I don't use computers to watch wide screen
> movies! I use computers to do work. Who can get anything done with one
> of these wide screen monitors?

I've got 2 16:10 screens, yielding 8 columns of text ~90 chars wide.

> I had to get my old LCD monitor restored, because I couldn't stand the
> stupid wide screen I used to replace it. I find vertical space more high
> valued real-estate than horizontal space.
> 
> The world is designed by idiots!

*grin*, I fully concur though, this 16:9 craze is totally annoying. The
worst part is that you cannot even vote with your wallet, its impossible
to buy a laptop without 16:9 screen these days.

fwiw, I didn't mind the 1600x1200 -> 1920x1200 step, that was same
height yet more pixels, yay!

But 16:9 is utter crap, the other totally annoying thing is that people
are forced to use these totally insane low-res screens. Most 12-15"
laptops come with a resolution about equal to the latest iPhone, fsck
that. I want a 12-13" with that same 300ppi ;-)

Screen resolution used to go up, but ever since this HD-tv crap
(720p/1080i should be shot in the head) got popular screen res went
down! /me can't wait for this 4K format to become popular.

> </offtopic rant>


^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2011-06-14 11:23 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-06-09 17:27 [PATCH 00/15] [GIT PULL] tracing: various updates Steven Rostedt
2011-06-09 17:27 ` [PATCH 01/15] tracing: Schedule a delayed work to call wakeup() Steven Rostedt
2011-06-13 10:07   ` Ingo Molnar
2011-06-13 10:27     ` Pekka Enberg
2011-06-13 11:45       ` Steven Rostedt
2011-06-09 17:27 ` [PATCH 02/15] tracing: Use NUMA allocation for per-cpu ring buffer pages Steven Rostedt
2011-06-13 10:09   ` Ingo Molnar
2011-06-13 11:28     ` Steven Rostedt
2011-06-09 17:27 ` [PATCH 03/15] tracing: Add a free on close control mechanism for buffer_size_kb Steven Rostedt
2011-06-13 10:12   ` Ingo Molnar
2011-06-13 11:39     ` Steven Rostedt
2011-06-13 11:49       ` Ingo Molnar
2011-06-13 11:54         ` Steven Rostedt
2011-06-13 19:12           ` Vaibhav Nagarnaik
2011-06-13 20:01           ` Vaibhav Nagarnaik
2011-06-14  0:37             ` Steven Rostedt
2011-06-14  0:43               ` Vaibhav Nagarnaik
2011-06-09 17:27 ` [PATCH 04/15] ftrace: Fixed an include coding style issue Steven Rostedt
2011-06-09 17:27 ` [PATCH 05/15] async: " Steven Rostedt
2011-06-09 17:27 ` [PATCH 06/15] tracing, function_graph: Remove dependency of abstime and duration Steven Rostedt
2011-06-09 17:27 ` [PATCH 07/15] tracing, function_graph: Merge overhead and duration display Steven Rostedt
2011-06-09 17:27 ` [PATCH 08/15] tracing, function: Fix trace header to follow context-info option Steven Rostedt
2011-06-09 17:27 ` [PATCH 09/15] tracing, function_graph: Remove lock-depth from latency trace Steven Rostedt
2011-06-09 17:27 ` [PATCH 10/15] tracing, function_graph: Add context-info support for function_graph Steven Rostedt
2011-06-09 17:27 ` [PATCH 11/15] tracing: Convert to kstrtoul_from_user Steven Rostedt
2011-06-09 17:27 ` [PATCH 12/15] ring-buffer: Set __GFP_NORETRY flag for ring buffer allocating Steven Rostedt
2011-06-09 17:27 ` [PATCH 13/15] x86: Swap save_stack_trace_regs parameters Steven Rostedt
2011-06-13 10:18   ` Ingo Molnar
2011-06-13 11:43     ` Steven Rostedt
2011-06-09 17:27 ` [PATCH 14/15] stack_trace: Add weak save_stack_trace_regs() Steven Rostedt
2011-06-13 10:19   ` Ingo Molnar
2011-06-13 10:52     ` Masami Hiramatsu
2011-06-13 11:42       ` Ingo Molnar
2011-06-09 17:27 ` [PATCH 15/15] tracing/kprobes: Fix kprobe-tracer to support stack trace Steven Rostedt
2011-06-13 10:21   ` Ingo Molnar
2011-06-13 11:44     ` Steven Rostedt
2011-06-13 11:50       ` Ingo Molnar
2011-06-13 12:14         ` Steven Rostedt
2011-06-14  1:25           ` Masami Hiramatsu
2011-06-14  2:08             ` Steven Rostedt
2011-06-14 11:22               ` Peter Zijlstra
