* [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window
@ 2014-06-06 16:30 Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 1/6] tracing: Introduce saved_cmdlines_size file Steven Rostedt
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

These should be the last of the patches I need for merging.
There's one new feature: the number of saved cmdlines can now be
increased so that no command names are lost.

The other patches fix memory that was not cleaned up properly on boot
up after a failed allocation, along with other clean ups.

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: 4abbf33c8c68515f812e97ba0d36d3ee480fd78f


Steven Rostedt (Red Hat) (3)
      tracing: Convert stddev into u64 in tracepoint benchmark
      tracing: Only calculate stats of tracepoint benchmarks for 2^32 times
      tracing: Fix leak of ring buffer data when new instances creation fails

Yoshihiro YUNOMAE (3)
      tracing: Introduce saved_cmdlines_size file
      tracing: Return error if ftrace_trace_arrays list is empty
      tracing/kprobes: Avoid self tests if tracing is disabled on boot up

----
 trace.c           |  200 ++++++++++++++++++++++++++++++++++++++++++++++--------
 trace.h           |    3 
 trace_benchmark.c |   39 ++++++++--
 trace_events.c    |   13 +++
 trace_kprobe.c    |    3 
 5 files changed, 224 insertions(+), 34 deletions(-)

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [for-next][PATCH 1/6] tracing: Introduce saved_cmdlines_size file
  2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
@ 2014-06-06 16:30 ` Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 2/6] tracing: Convert stddev into u64 in tracepoint benchmark Steven Rostedt
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

[-- Attachment #1: 0000-tracing-Introduce-saved_cmdlines_size-file.patch --]
[-- Type: text/plain, Size: 9459 bytes --]

From: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>

Introduce a saved_cmdlines_size file for changing the number of saved
pid-comm pairs. saved_cmdlines currently stores 128 command names
(SAVED_CMDLINES), so the names of processes that no longer exist are
often already gone from saved_cmdlines by the time we read the trace
data. With the new saved_cmdlines_size file, we can raise the 128
saved command names to something much larger if needed.

Writing a value to saved_cmdlines_size resizes the pid-comm list to
hold that many entries:

	# echo 1024 > /sys/kernel/debug/tracing/saved_cmdlines_size

Here, 1024 command names can be stored. The default is 128 and the
maximum is PID_MAX_DEFAULT (=32768 if CONFIG_BASE_SMALL is not set). So,
if we want to avoid losing any command names, we need to write 32768 to
saved_cmdlines_size.

We can read the current size of the list:

	# cat /sys/kernel/debug/tracing/saved_cmdlines_size
	128
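
The bookkeeping this patch makes resizable is a fixed pool of comm
slots reused round-robin, with a forward map (pid -> slot) and a
reverse map (slot -> pid) so that reusing a slot first invalidates the
evicted pid's entry. A user-space sketch of that idea (illustrative
names and simplifications, not the kernel code):

```c
#include <stdlib.h>
#include <string.h>

#define NO_MAP (~0u)
#define COMM_LEN 16

/* Hypothetical user-space model of the pid<->comm save list. */
struct cmdline_buf {
	unsigned *pid_to_slot;  /* indexed by pid, NO_MAP if unset */
	unsigned *slot_to_pid;  /* indexed by slot, NO_MAP if unset */
	char *comms;            /* num slots of COMM_LEN bytes each */
	unsigned num;           /* number of slots (cmdline_num) */
	unsigned idx;           /* last slot written (cmdline_idx) */
};

static struct cmdline_buf *cmdline_buf_alloc(unsigned num, unsigned max_pid)
{
	struct cmdline_buf *s = malloc(sizeof(*s));
	if (!s)
		return NULL;
	s->pid_to_slot = malloc((max_pid + 1) * sizeof(unsigned));
	s->slot_to_pid = malloc(num * sizeof(unsigned));
	s->comms = malloc(num * COMM_LEN);
	if (!s->pid_to_slot || !s->slot_to_pid || !s->comms) {
		free(s->pid_to_slot);
		free(s->slot_to_pid);
		free(s->comms);
		free(s);
		return NULL;
	}
	/* 0xff bytes make every unsigned entry NO_MAP */
	memset(s->pid_to_slot, 0xff, (max_pid + 1) * sizeof(unsigned));
	memset(s->slot_to_pid, 0xff, num * sizeof(unsigned));
	s->num = num;
	s->idx = 0;
	return s;
}

static void cmdline_save(struct cmdline_buf *s, unsigned pid, const char *comm)
{
	unsigned slot = s->pid_to_slot[pid];

	if (slot == NO_MAP) {
		slot = (s->idx + 1) % s->num;   /* reuse slots round-robin */
		unsigned old = s->slot_to_pid[slot];
		if (old != NO_MAP)              /* evict the previous owner */
			s->pid_to_slot[old] = NO_MAP;
		s->slot_to_pid[slot] = pid;
		s->pid_to_slot[pid] = slot;
		s->idx = slot;
	}
	strncpy(&s->comms[slot * COMM_LEN], comm, COMM_LEN - 1);
	s->comms[slot * COMM_LEN + COMM_LEN - 1] = '\0';
}

static const char *cmdline_find(struct cmdline_buf *s, unsigned pid)
{
	unsigned slot = s->pid_to_slot[pid];

	return slot == NO_MAP ? "<...>" : &s->comms[slot * COMM_LEN];
}
```

The eviction on wrap-around is why names of exited processes disappear
once more than cmdline_num distinct pids have been traced, and why a
larger saved_cmdlines_size avoids losing them.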

Link: http://lkml.kernel.org/p/20140605012427.22115.16173.stgit@yunodevel

Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 178 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 154 insertions(+), 24 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 135af323608b..e29edee1542a 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1285,22 +1285,71 @@ void tracing_reset_all_online_cpus(void)
 	}
 }
 
-#define SAVED_CMDLINES 128
+#define SAVED_CMDLINES_DEFAULT 128
 #define NO_CMDLINE_MAP UINT_MAX
-static unsigned map_pid_to_cmdline[PID_MAX_DEFAULT+1];
-static unsigned map_cmdline_to_pid[SAVED_CMDLINES];
-static char saved_cmdlines[SAVED_CMDLINES][TASK_COMM_LEN];
-static int cmdline_idx;
 static arch_spinlock_t trace_cmdline_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+struct saved_cmdlines_buffer {
+	unsigned map_pid_to_cmdline[PID_MAX_DEFAULT+1];
+	unsigned *map_cmdline_to_pid;
+	unsigned cmdline_num;
+	int cmdline_idx;
+	char *saved_cmdlines;
+};
+static struct saved_cmdlines_buffer *savedcmd;
 
 /* temporary disable recording */
 static atomic_t trace_record_cmdline_disabled __read_mostly;
 
-static void trace_init_cmdlines(void)
+static inline char *get_saved_cmdlines(int idx)
+{
+	return &savedcmd->saved_cmdlines[idx * TASK_COMM_LEN];
+}
+
+static inline void set_cmdline(int idx, const char *cmdline)
 {
-	memset(&map_pid_to_cmdline, NO_CMDLINE_MAP, sizeof(map_pid_to_cmdline));
-	memset(&map_cmdline_to_pid, NO_CMDLINE_MAP, sizeof(map_cmdline_to_pid));
-	cmdline_idx = 0;
+	memcpy(get_saved_cmdlines(idx), cmdline, TASK_COMM_LEN);
+}
+
+static int allocate_cmdlines_buffer(unsigned int val,
+				    struct saved_cmdlines_buffer *s)
+{
+	s->map_cmdline_to_pid = kmalloc(val * sizeof(*s->map_cmdline_to_pid),
+					GFP_KERNEL);
+	if (!s->map_cmdline_to_pid)
+		return -ENOMEM;
+
+	s->saved_cmdlines = kmalloc(val * TASK_COMM_LEN, GFP_KERNEL);
+	if (!s->saved_cmdlines) {
+		kfree(s->map_cmdline_to_pid);
+		return -ENOMEM;
+	}
+
+	s->cmdline_idx = 0;
+	s->cmdline_num = val;
+	memset(&s->map_pid_to_cmdline, NO_CMDLINE_MAP,
+	       sizeof(s->map_pid_to_cmdline));
+	memset(s->map_cmdline_to_pid, NO_CMDLINE_MAP,
+	       val * sizeof(*s->map_cmdline_to_pid));
+
+	return 0;
+}
+
+static int trace_create_savedcmd(void)
+{
+	int ret;
+
+	savedcmd = kmalloc(sizeof(struct saved_cmdlines_buffer), GFP_KERNEL);
+	if (!savedcmd)
+		return -ENOMEM;
+
+	ret = allocate_cmdlines_buffer(SAVED_CMDLINES_DEFAULT, savedcmd);
+	if (ret < 0) {
+		kfree(savedcmd);
+		savedcmd = NULL;
+		return -ENOMEM;
+	}
+
+	return 0;
 }
 
 int is_tracing_stopped(void)
@@ -1457,9 +1506,9 @@ static int trace_save_cmdline(struct task_struct *tsk)
 	if (!arch_spin_trylock(&trace_cmdline_lock))
 		return 0;
 
-	idx = map_pid_to_cmdline[tsk->pid];
+	idx = savedcmd->map_pid_to_cmdline[tsk->pid];
 	if (idx == NO_CMDLINE_MAP) {
-		idx = (cmdline_idx + 1) % SAVED_CMDLINES;
+		idx = (savedcmd->cmdline_idx + 1) % savedcmd->cmdline_num;
 
 		/*
 		 * Check whether the cmdline buffer at idx has a pid
@@ -1467,17 +1516,17 @@ static int trace_save_cmdline(struct task_struct *tsk)
 		 * need to clear the map_pid_to_cmdline. Otherwise we
 		 * would read the new comm for the old pid.
 		 */
-		pid = map_cmdline_to_pid[idx];
+		pid = savedcmd->map_cmdline_to_pid[idx];
 		if (pid != NO_CMDLINE_MAP)
-			map_pid_to_cmdline[pid] = NO_CMDLINE_MAP;
+			savedcmd->map_pid_to_cmdline[pid] = NO_CMDLINE_MAP;
 
-		map_cmdline_to_pid[idx] = tsk->pid;
-		map_pid_to_cmdline[tsk->pid] = idx;
+		savedcmd->map_cmdline_to_pid[idx] = tsk->pid;
+		savedcmd->map_pid_to_cmdline[tsk->pid] = idx;
 
-		cmdline_idx = idx;
+		savedcmd->cmdline_idx = idx;
 	}
 
-	memcpy(&saved_cmdlines[idx], tsk->comm, TASK_COMM_LEN);
+	set_cmdline(idx, tsk->comm);
 
 	arch_spin_unlock(&trace_cmdline_lock);
 
@@ -1503,9 +1552,9 @@ static void __trace_find_cmdline(int pid, char comm[])
 		return;
 	}
 
-	map = map_pid_to_cmdline[pid];
+	map = savedcmd->map_pid_to_cmdline[pid];
 	if (map != NO_CMDLINE_MAP)
-		strcpy(comm, saved_cmdlines[map]);
+		strcpy(comm, get_saved_cmdlines(map));
 	else
 		strcpy(comm, "<...>");
 }
@@ -3593,6 +3642,7 @@ static const char readme_msg[] =
 	"  trace_options\t\t- Set format or modify how tracing happens\n"
 	"\t\t\t  Disable an option by adding a suffix 'no' to the\n"
 	"\t\t\t  option name\n"
+	"  saved_cmdlines_size\t- echo command number in here to store comm-pid list\n"
 #ifdef CONFIG_DYNAMIC_FTRACE
 	"\n  available_filter_functions - list of functions that can be filtered on\n"
 	"  set_ftrace_filter\t- echo function name in here to only trace these\n"
@@ -3715,7 +3765,8 @@ static void *saved_cmdlines_next(struct seq_file *m, void *v, loff_t *pos)
 
 	(*pos)++;
 
-	for (; ptr < &map_cmdline_to_pid[SAVED_CMDLINES]; ptr++) {
+	for (; ptr < &savedcmd->map_cmdline_to_pid[savedcmd->cmdline_num];
+	     ptr++) {
 		if (*ptr == -1 || *ptr == NO_CMDLINE_MAP)
 			continue;
 
@@ -3733,7 +3784,7 @@ static void *saved_cmdlines_start(struct seq_file *m, loff_t *pos)
 	preempt_disable();
 	arch_spin_lock(&trace_cmdline_lock);
 
-	v = &map_cmdline_to_pid[0];
+	v = &savedcmd->map_cmdline_to_pid[0];
 	while (l <= *pos) {
 		v = saved_cmdlines_next(m, v, &l);
 		if (!v)
@@ -3782,6 +3833,79 @@ static const struct file_operations tracing_saved_cmdlines_fops = {
 };
 
 static ssize_t
+tracing_saved_cmdlines_size_read(struct file *filp, char __user *ubuf,
+				 size_t cnt, loff_t *ppos)
+{
+	char buf[64];
+	int r;
+
+	arch_spin_lock(&trace_cmdline_lock);
+	r = sprintf(buf, "%u\n", savedcmd->cmdline_num);
+	arch_spin_unlock(&trace_cmdline_lock);
+
+	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+}
+
+static void free_saved_cmdlines_buffer(struct saved_cmdlines_buffer *s)
+{
+	kfree(s->saved_cmdlines);
+	kfree(s->map_cmdline_to_pid);
+	kfree(s);
+}
+
+static int tracing_resize_saved_cmdlines(unsigned int val)
+{
+	struct saved_cmdlines_buffer *s, *savedcmd_temp;
+
+	s = kmalloc(sizeof(struct saved_cmdlines_buffer), GFP_KERNEL);
+	if (!s)
+		return -ENOMEM;
+
+	if (allocate_cmdlines_buffer(val, s) < 0) {
+		kfree(s);
+		return -ENOMEM;
+	}
+
+	arch_spin_lock(&trace_cmdline_lock);
+	savedcmd_temp = savedcmd;
+	savedcmd = s;
+	arch_spin_unlock(&trace_cmdline_lock);
+	free_saved_cmdlines_buffer(savedcmd_temp);
+
+	return 0;
+}
+
+static ssize_t
+tracing_saved_cmdlines_size_write(struct file *filp, const char __user *ubuf,
+				  size_t cnt, loff_t *ppos)
+{
+	unsigned long val;
+	int ret;
+
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
+		return ret;
+
+	/* must have at least 1 entry and at most PID_MAX_DEFAULT entries */
+	if (!val || val > PID_MAX_DEFAULT)
+		return -EINVAL;
+
+	ret = tracing_resize_saved_cmdlines((unsigned int)val);
+	if (ret < 0)
+		return ret;
+
+	*ppos += cnt;
+
+	return cnt;
+}
+
+static const struct file_operations tracing_saved_cmdlines_size_fops = {
+	.open		= tracing_open_generic,
+	.read		= tracing_saved_cmdlines_size_read,
+	.write		= tracing_saved_cmdlines_size_write,
+};
+
+static ssize_t
 tracing_set_trace_read(struct file *filp, char __user *ubuf,
 		       size_t cnt, loff_t *ppos)
 {
@@ -6375,6 +6499,9 @@ static __init int tracer_init_debugfs(void)
 	trace_create_file("saved_cmdlines", 0444, d_tracer,
 			NULL, &tracing_saved_cmdlines_fops);
 
+	trace_create_file("saved_cmdlines_size", 0644, d_tracer,
+			  NULL, &tracing_saved_cmdlines_size_fops);
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 	trace_create_file("dyn_ftrace_total_info", 0444, d_tracer,
 			&ftrace_update_tot_cnt, &tracing_dyn_info_fops);
@@ -6611,18 +6738,19 @@ __init static int tracer_alloc_buffers(void)
 	if (!temp_buffer)
 		goto out_free_cpumask;
 
+	if (trace_create_savedcmd() < 0)
+		goto out_free_temp_buffer;
+
 	/* TODO: make the number of buffers hot pluggable with CPUS */
 	if (allocate_trace_buffers(&global_trace, ring_buf_size) < 0) {
 		printk(KERN_ERR "tracer: failed to allocate ring buffer!\n");
 		WARN_ON(1);
-		goto out_free_temp_buffer;
+		goto out_free_savedcmd;
 	}
 
 	if (global_trace.buffer_disabled)
 		tracing_off();
 
-	trace_init_cmdlines();
-
 	if (trace_boot_clock) {
 		ret = tracing_set_clock(&global_trace, trace_boot_clock);
 		if (ret < 0)
@@ -6668,6 +6796,8 @@ __init static int tracer_alloc_buffers(void)
 
 	return 0;
 
+out_free_savedcmd:
+	free_saved_cmdlines_buffer(savedcmd);
 out_free_temp_buffer:
 	ring_buffer_free(temp_buffer);
 out_free_cpumask:
-- 
2.0.0.rc2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [for-next][PATCH 2/6] tracing: Convert stddev into u64 in tracepoint benchmark
  2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 1/6] tracing: Introduce saved_cmdlines_size file Steven Rostedt
@ 2014-06-06 16:30 ` Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times Steven Rostedt
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, David Daney

[-- Attachment #1: 0001-tracing-Convert-stddev-into-u64-in-tracepoint-benchm.patch --]
[-- Type: text/plain, Size: 980 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

I've been told that do_div() expects an unsigned 64 bit number, and
that the result is undefined if a signed value is used. This gave a
warning on the MIPS build. I'm not sure whether a signed 64 bit
dividend is really an issue, but the calculation this is used for is
standard deviation, and that is never going to be negative. We can
just convert it to unsigned and be safe.
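
do_div(n, base) divides the 64-bit value n in place by a 32-bit base
and evaluates to the remainder. A hypothetical user-space stand-in
with the same contract (the kernel's version is a macro, often with an
architecture-specific implementation that assumes an unsigned
dividend):

```c
#include <stdint.h>

/* User-space analogue of the kernel's do_div(): divide the 64-bit
 * value pointed to by n in place by a 32-bit divisor and return the
 * remainder. With a signed dividend the kernel's 32-bit
 * implementations can produce garbage, hence the u64 conversion in
 * this patch. */
static uint32_t my_do_div(uint64_t *n, uint32_t base)
{
	uint32_t rem = (uint32_t)(*n % base);

	*n /= base;
	return rem;
}
```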

Reported-by: David Daney <ddaney.cavm@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_benchmark.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index a10adc7095cd..8bd3365a65b2 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -33,7 +33,7 @@ static void trace_do_benchmark(void)
 	u64 start;
 	u64 stop;
 	u64 delta;
-	s64 stddev;
+	u64 stddev;
 	u64 seed;
 	u64 last_seed;
 	unsigned int avg;
-- 
2.0.0.rc2




* [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times
  2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 1/6] tracing: Introduce saved_cmdlines_size file Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 2/6] tracing: Convert stddev into u64 in tracepoint benchmark Steven Rostedt
@ 2014-06-06 16:30 ` Steven Rostedt
  2014-06-10  5:17   ` Namhyung Kim
  2014-06-06 16:30 ` [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty Steven Rostedt
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0002-tracing-Only-calculate-stats-of-tracepoint-benchmark.patch --]
[-- Type: text/plain, Size: 2706 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

When calculating the average and standard deviation, it is required that
the count be less than UINT_MAX, otherwise the do_div() will get
undefined results. After 2^32 counts of data, the average and standard
deviation should pretty much be set anyway.
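
The statistic maintained by the loop is s^2 = (n * sum(x_i^2) -
(sum x_i)^2) / (n * (n - 1)), computed in integer math. A user-space
sketch of that computation (illustrative, not the kernel code), showing
where the divisors must fit in 32 bits once the count is capped below
UINT_MAX:

```c
#include <stdint.h>

/* Integer sample variance via the sum-of-squares identity:
 *   s^2 = (n * sum(x_i^2) - (sum x_i)^2) / (n * (n - 1))
 * Both divisions use a 32-bit divisor, matching what do_div()
 * requires, which is why the kernel freezes the stats once the
 * count would exceed UINT_MAX. */
static uint64_t sample_variance(const uint64_t *x, uint32_t n)
{
	uint64_t total = 0, totalsq = 0;
	uint32_t i;

	for (i = 0; i < n; i++) {
		total += x[i];
		totalsq += x[i] * x[i];
	}

	if (n <= 1)
		return 0;

	uint64_t var = (uint64_t)n * totalsq - total * total;
	var /= n;       /* in the kernel: do_div(var, (u32)n) */
	var /= n - 1;   /* in the kernel: do_div(var, (u32)n - 1) */
	return var;
}
```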

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_benchmark.c | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index 8bd3365a65b2..40a14cbcf8e0 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -16,7 +16,10 @@ static u64 bm_last;
 static u64 bm_max;
 static u64 bm_min;
 static u64 bm_first;
-static s64 bm_cnt;
+static u64 bm_cnt;
+static u64 bm_stddev;
+static unsigned int bm_avg;
+static unsigned int bm_std;
 
 /*
  * This gets called in a loop recording the time it took to write
@@ -66,22 +69,35 @@ static void trace_do_benchmark(void)
 
 	bm_last = delta;
 
-	bm_total += delta;
-	bm_totalsq += delta * delta;
-
 	if (delta > bm_max)
 		bm_max = delta;
 	if (!bm_min || delta < bm_min)
 		bm_min = delta;
 
+	/*
+	 * When bm_cnt is greater than UINT_MAX, it breaks the statistics
+	 * accounting. Freeze the statistics when that happens.
+	 * We should have enough data for the avg and stddev anyway.
+	 */
+	if (bm_cnt > UINT_MAX) {
+		scnprintf(bm_str, BENCHMARK_EVENT_STRLEN,
+		    "last=%llu first=%llu max=%llu min=%llu ** avg=%u std=%d std^2=%lld",
+			  bm_last, bm_first, bm_max, bm_min, bm_avg, bm_std, bm_stddev);
+		return;
+	}
+
+	bm_total += delta;
+	bm_totalsq += delta * delta;
+
+
 	if (bm_cnt > 1) {
 		/*
 		 * Apply Welford's method to calculate standard deviation:
 		 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
 		 */
 		stddev = (u64)bm_cnt * bm_totalsq - bm_total * bm_total;
-		do_div(stddev, bm_cnt);
-		do_div(stddev, bm_cnt - 1);
+		do_div(stddev, (u32)bm_cnt);
+		do_div(stddev, (u32)bm_cnt - 1);
 	} else
 		stddev = 0;
 
@@ -119,6 +135,10 @@ static void trace_do_benchmark(void)
 	scnprintf(bm_str, BENCHMARK_EVENT_STRLEN,
 		  "last=%llu first=%llu max=%llu min=%llu avg=%u std=%d std^2=%lld",
 		  bm_last, bm_first, bm_max, bm_min, avg, std, stddev);
+
+	bm_std = std;
+	bm_avg = avg;
+	bm_stddev = stddev;
 }
 
 static int benchmark_event_kthread(void *arg)
@@ -170,6 +190,9 @@ void trace_benchmark_unreg(void)
 	bm_max = 0;
 	bm_min = 0;
 	bm_cnt = 0;
-	/* bm_first doesn't need to be reset but reset it anyway */
+	/* These don't need to be reset but reset them anyway */
 	bm_first = 0;
+	bm_std = 0;
+	bm_avg = 0;
+	bm_stddev = 0;
 }
-- 
2.0.0.rc2




* [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty
  2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
                   ` (2 preceding siblings ...)
  2014-06-06 16:30 ` [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times Steven Rostedt
@ 2014-06-06 16:30 ` Steven Rostedt
  2014-06-10  5:20   ` Namhyung Kim
  2014-06-06 16:30 ` [for-next][PATCH 5/6] tracing/kprobes: Avoid self tests if tracing is disabled on boot up Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails Steven Rostedt
  5 siblings, 1 reply; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

[-- Attachment #1: 0003-tracing-Return-error-if-ftrace_trace_arrays-list-is-.patch --]
[-- Type: text/plain, Size: 2726 bytes --]

From: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>

global_trace is linked into the ftrace_trace_arrays list. However,
global_trace is not added to ftrace_trace_arrays if
tracer_alloc_buffers() failed, so ftrace_trace_arrays becomes an empty
list. If ftrace_trace_arrays is an empty list, the current
top_trace_array() returns an invalid pointer. As a result, the kernel
can suffer memory corruption or panic.

The current implementation does not check whether ftrace_trace_arrays
is an empty list. So, with this patch, top_trace_array() returns NULL
when ftrace_trace_arrays is empty. Moreover, this patch makes all
functions calling top_trace_array() handle NULL appropriately.
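
The failure mode comes from the kernel's circular doubly linked lists:
an empty list's head points at itself, so following ->prev without an
emptiness check yields the head itself, which is not embedded in any
trace_array. A minimal user-space model of the fix (illustrative
names, simplified list primitives):

```c
#include <stddef.h>

/* Circular doubly linked list, kernel style: an empty head's
 * next and prev both point back at the head. */
struct list_head {
	struct list_head *next, *prev;
};

struct trace_array {
	struct list_head list;
	int id;
};

static int list_is_empty(const struct list_head *head)
{
	return head->next == head;
}

/* Insert entry right after head. */
static void list_add_head(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Model of the patched top_trace_array(): NULL on an empty list
 * instead of a bogus pointer derived from the head itself. */
static struct trace_array *top_trace_array(struct list_head *arrays)
{
	if (list_is_empty(arrays))
		return NULL;
	return container_of(arrays->prev, struct trace_array, list);
}
```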

Link: http://lkml.kernel.org/p/20140605223517.32311.99233.stgit@yunodevel

Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.h        |  3 +++
 kernel/trace/trace_events.c | 13 +++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 217207ad60b3..9e82551dd566 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -252,6 +252,9 @@ static inline struct trace_array *top_trace_array(void)
 {
 	struct trace_array *tr;
 
+	if (list_empty(ftrace_trace_arrays.prev))
+		return NULL;
+
 	tr = list_entry(ftrace_trace_arrays.prev,
 			typeof(*tr), list);
 	WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 3ddfd8f62c05..f99e0b3bca8c 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -574,6 +574,9 @@ int trace_set_clr_event(const char *system, const char *event, int set)
 {
 	struct trace_array *tr = top_trace_array();
 
+	if (!tr)
+		return -ENODEV;
+
 	return __ftrace_set_clr_event(tr, NULL, system, event, set);
 }
 EXPORT_SYMBOL_GPL(trace_set_clr_event);
@@ -2065,6 +2068,9 @@ event_enable_func(struct ftrace_hash *hash,
 	bool enable;
 	int ret;
 
+	if (!tr)
+		return -ENODEV;
+
 	/* hash funcs only work with set_ftrace_filter */
 	if (!enabled || !param)
 		return -EINVAL;
@@ -2396,6 +2402,9 @@ static __init int event_trace_enable(void)
 	char *token;
 	int ret;
 
+	if (!tr)
+		return -ENODEV;
+
 	for_each_event(iter, __start_ftrace_events, __stop_ftrace_events) {
 
 		call = *iter;
@@ -2442,6 +2451,8 @@ static __init int event_trace_init(void)
 	int ret;
 
 	tr = top_trace_array();
+	if (!tr)
+		return -ENODEV;
 
 	d_tracer = tracing_init_dentry();
 	if (!d_tracer)
@@ -2535,6 +2546,8 @@ static __init void event_trace_self_tests(void)
 	int ret;
 
 	tr = top_trace_array();
+	if (!tr)
+		return;
 
 	pr_info("Running tests on trace events:\n");
 
-- 
2.0.0.rc2




* [for-next][PATCH 5/6] tracing/kprobes: Avoid self tests if tracing is disabled on boot up
  2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
                   ` (3 preceding siblings ...)
  2014-06-06 16:30 ` [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty Steven Rostedt
@ 2014-06-06 16:30 ` Steven Rostedt
  2014-06-06 16:30 ` [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails Steven Rostedt
  5 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Yoshihiro YUNOMAE

[-- Attachment #1: 0004-tracing-kprobes-Avoid-self-tests-if-tracing-is-disab.patch --]
[-- Type: text/plain, Size: 1049 bytes --]

From: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>

If tracing is disabled on boot up, the kernel should not execute the
tracing self tests. Check whether tracing is disabled before running
any of them.

Link: http://lkml.kernel.org/p/20140605223520.32311.56097.stgit@yunodevel

Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_kprobe.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 903ae28962be..ef2fba1f46b5 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1377,6 +1377,9 @@ static __init int kprobe_trace_self_tests_init(void)
 	struct trace_kprobe *tk;
 	struct ftrace_event_file *file;
 
+	if (tracing_is_disabled())
+		return -ENODEV;
+
 	target = kprobe_trace_selftest_target;
 
 	pr_info("Testing kprobe tracing: ");
-- 
2.0.0.rc2




* [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails
  2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
                   ` (4 preceding siblings ...)
  2014-06-06 16:30 ` [for-next][PATCH 5/6] tracing/kprobes: Avoid self tests if tracing is disabled on boot up Steven Rostedt
@ 2014-06-06 16:30 ` Steven Rostedt
  2014-06-10  5:25   ` Namhyung Kim
  5 siblings, 1 reply; 15+ messages in thread
From: Steven Rostedt @ 2014-06-06 16:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

[-- Attachment #1: 0005-tracing-Fix-leak-of-ring-buffer-data-when-new-instan.patch --]
[-- Type: text/plain, Size: 1906 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Yoshihiro Yunomae reported that the ring buffer data for a trace
instance does not get properly cleaned up when its creation fails. He
proposed a patch that manually cleaned the data up and added a bunch of
labels. The labels are not needed because the trace array is allocated
with kzalloc(), which initializes it to zero, and all the kfree()s can
take a NULL pointer and will ignore it.

Add a new helper function free_trace_buffers() that can also take
NULL buffers, to free the buffers that were allocated by
allocate_trace_buffers().
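
The idiom being relied on here is: zero the containing struct at
allocation time so a single teardown helper is safe to call from any
point of a partially failed setup, since freeing NULL is a no-op and
freed pointers are reset to NULL. A user-space sketch under those
assumptions (calloc standing in for kzalloc, illustrative names):

```c
#include <stdlib.h>

/* Two buffers that a setup path allocates in sequence. */
struct buffers {
	void *trace_buf;
	void *max_buf;
};

/* Safe at any stage of setup: free(NULL) is a no-op, and pointers
 * are reset to NULL so a second call is also harmless. */
static void free_buffers(struct buffers *b)
{
	if (!b)
		return;
	free(b->trace_buf);
	b->trace_buf = NULL;
	free(b->max_buf);
	b->max_buf = NULL;
}

static struct buffers *alloc_buffers(int fail_second)
{
	struct buffers *b = calloc(1, sizeof(*b)); /* pointers start NULL */

	if (!b)
		return NULL;
	b->trace_buf = malloc(64);
	b->max_buf = fail_second ? NULL : malloc(64);
	if (!b->trace_buf || !b->max_buf) {
		free_buffers(b);   /* safe even though max_buf may be NULL */
		free(b);
		return NULL;
	}
	return b;
}
```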

Link: http://lkml.kernel.org/r/20140605223522.32311.31664.stgit@yunodevel

Reported-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e29edee1542a..26cfff38e2ab 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6232,6 +6232,25 @@ static int allocate_trace_buffers(struct trace_array *tr, int size)
 	return 0;
 }
 
+static void free_trace_buffers(struct trace_array *tr)
+{
+	if (!tr)
+		return;
+
+	if (tr->trace_buffer.buffer) {
+		ring_buffer_free(tr->trace_buffer.buffer);
+		tr->trace_buffer.buffer = NULL;
+		free_percpu(tr->trace_buffer.data);
+	}
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+	if (tr->max_buffer.buffer) {
+		ring_buffer_free(tr->max_buffer.buffer);
+		tr->max_buffer.buffer = NULL;
+	}
+#endif
+}
+
 static int new_instance_create(const char *name)
 {
 	struct trace_array *tr;
@@ -6290,8 +6309,7 @@ static int new_instance_create(const char *name)
 	return 0;
 
  out_free_tr:
-	if (tr->trace_buffer.buffer)
-		ring_buffer_free(tr->trace_buffer.buffer);
+	free_trace_buffers(tr);
 	free_cpumask_var(tr->tracing_cpumask);
 	kfree(tr->name);
 	kfree(tr);
-- 
2.0.0.rc2




* Re: [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times
  2014-06-06 16:30 ` [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times Steven Rostedt
@ 2014-06-10  5:17   ` Namhyung Kim
  2014-06-10 12:16     ` Steven Rostedt
  2014-06-10 13:42     ` Steven Rostedt
  0 siblings, 2 replies; 15+ messages in thread
From: Namhyung Kim @ 2014-06-10  5:17 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

Hi Steve,

On Fri, 06 Jun 2014 12:30:37 -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
>
> When calculating the average and standard deviation, it is required that
> the count be less than UINT_MAX, otherwise the do_div() will get
> undefined results. After 2^32 counts of data, the average and standard
> deviation should pretty much be set anyway.

[SNIP]
>  	if (bm_cnt > 1) {
>  		/*
>  		 * Apply Welford's method to calculate standard deviation:
>  		 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
>  		 */
>  		stddev = (u64)bm_cnt * bm_totalsq - bm_total * bm_total;
> -		do_div(stddev, bm_cnt);
> -		do_div(stddev, bm_cnt - 1);
> +		do_div(stddev, (u32)bm_cnt);
> +		do_div(stddev, (u32)bm_cnt - 1);
>  	} else
>  		stddev = 0;

I also think that this if-else can go as it checks bm_cnt == 1 above.

Thanks,
Namhyung


* Re: [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty
  2014-06-06 16:30 ` [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty Steven Rostedt
@ 2014-06-10  5:20   ` Namhyung Kim
  2014-06-10 13:42     ` Steven Rostedt
  0 siblings, 1 reply; 15+ messages in thread
From: Namhyung Kim @ 2014-06-10  5:20 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

On Fri, 06 Jun 2014 12:30:38 -0400, Steven Rostedt wrote:
> From: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
>
> ftrace_trace_arrays links global_trace.list. However, global_trace
> is not added to ftrace_trace_arrays if trace_alloc_buffers() failed.
> As the result, ftrace_trace_arrays becomes an empty list. If
> ftrace_trace_arrays is an empty list, current top_trace_array() returns
> an invalid pointer. As the result, the kernel can induce memory corruption
> or panic.
>
> Current implementation does not check whether ftrace_trace_arrays is empty
> list or not. So, in this patch, if ftrace_trace_arrays is empty list,
> top_trace_array() returns NULL. Moreover, this patch makes all functions
> calling top_trace_array() handle it appropriately.

[SNIP]
> @@ -252,6 +252,9 @@ static inline struct trace_array *top_trace_array(void)
>  {
>  	struct trace_array *tr;
>  
> +	if (list_empty(ftrace_trace_arrays.prev))
> +		return NULL;

It looks weird to me.. why not checking "list_empty(&ftrace_trace_arrays)"?

Thanks,
Namhyung

> +
>  	tr = list_entry(ftrace_trace_arrays.prev,
>  			typeof(*tr), list);
>  	WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));


* Re: [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails
  2014-06-06 16:30 ` [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails Steven Rostedt
@ 2014-06-10  5:25   ` Namhyung Kim
  2014-06-10 15:58     ` Steven Rostedt
  0 siblings, 1 reply; 15+ messages in thread
From: Namhyung Kim @ 2014-06-10  5:25 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

On Fri, 06 Jun 2014 12:30:40 -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
>
> Yoshihiro Yunomae reported that the ring buffer data for a trace
> instance does not get properly cleaned up when it fails. He proposed
> a patch that manually cleaned the data up and addad a bunch of labels.
> The labels are not needed because all trace array is allocated with
> a kzalloc which initializes it to 0 and all kfree()s can take a NULL
> pointer and will ignore it.
>
> Adding a new helper function free_trace_buffers() that can also take
> null buffers to free the buffers that were allocated by
> allocate_trace_buffers().
>
> Link: http://lkml.kernel.org/r/20140605223522.32311.31664.stgit@yunodevel
>
> Reported-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/trace/trace.c | 22 ++++++++++++++++++++--
>  1 file changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index e29edee1542a..26cfff38e2ab 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -6232,6 +6232,25 @@ static int allocate_trace_buffers(struct trace_array *tr, int size)
>  	return 0;
>  }
>  
> +static void free_trace_buffers(struct trace_array *tr)
> +{
> +	if (!tr)
> +		return;
> +
> +	if (tr->trace_buffer.buffer) {
> +		ring_buffer_free(tr->trace_buffer.buffer);
> +		tr->trace_buffer.buffer = NULL;
> +		free_percpu(tr->trace_buffer.data);
> +	}
> +
> +#ifdef CONFIG_TRACER_MAX_TRACE
> +	if (tr->max_buffer.buffer) {
> +		ring_buffer_free(tr->max_buffer.buffer);
> +		tr->max_buffer.buffer = NULL;

Hmm.. why doesn't it free tr->max_buffer.data here?  And I think it's
better to reset *_buffer.data to NULL also - maybe by adding another
helper function free_trace_buffer()..

Thanks,
Namhyung


> +	}
> +#endif
> +}
> +
>  static int new_instance_create(const char *name)
>  {
>  	struct trace_array *tr;
> @@ -6290,8 +6309,7 @@ static int new_instance_create(const char *name)
>  	return 0;
>  
>   out_free_tr:
> -	if (tr->trace_buffer.buffer)
> -		ring_buffer_free(tr->trace_buffer.buffer);
> +	free_trace_buffers(tr);
>  	free_cpumask_var(tr->tracing_cpumask);
>  	kfree(tr->name);
>  	kfree(tr);


* Re: [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times
  2014-06-10  5:17   ` Namhyung Kim
@ 2014-06-10 12:16     ` Steven Rostedt
  2014-06-10 13:42     ` Steven Rostedt
  1 sibling, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-10 12:16 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Tue, 10 Jun 2014 14:17:54 +0900
Namhyung Kim <namhyung@gmail.com> wrote:

> Hi Steve,
> 
> On Fri, 06 Jun 2014 12:30:37 -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
> >
> > When calculating the average and standard deviation, it is required that
> > the count be less than UINT_MAX, otherwise the do_div() will get
> > undefined results. After 2^32 counts of data, the average and standard
> > deviation should pretty much be set anyway.
> 
> [SNIP]
> >  	if (bm_cnt > 1) {
> >  		/*
> >  		 * Apply Welford's method to calculate standard deviation:
> >  		 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
> >  		 */
> >  		stddev = (u64)bm_cnt * bm_totalsq - bm_total * bm_total;
> > -		do_div(stddev, bm_cnt);
> > -		do_div(stddev, bm_cnt - 1);
> > +		do_div(stddev, (u32)bm_cnt);
> > +		do_div(stddev, (u32)bm_cnt - 1);
> >  	} else
> >  		stddev = 0;
> 
> I also think that this if-else can go as it checks bm_cnt == 1 above.
> 

I noticed this too, but just to keep gcc from complaining, I left it in.

-- Steve
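For readers who want to sanity-check the quoted formula: the kernel's do_div() divides a u64 by a 32-bit divisor, which is why the patch caps the sample count below 2^32. A userspace sketch of the same computation follows (plain division standing in for do_div(); variable names mirror the benchmark code, but this is not the kernel source):

```c
#include <assert.h>
#include <stdint.h>

/*
 * s^2 = (n * \Sum x_i^2 - (\Sum x_i)^2) / (n * (n - 1))
 *
 * The two divisors are cast to u32 because the kernel's do_div()
 * only accepts a 32-bit divisor -- hence the 2^32 cap on bm_cnt.
 */
static uint64_t sample_variance(uint64_t bm_cnt, uint64_t bm_total,
				uint64_t bm_totalsq)
{
	uint64_t stddev;

	if (bm_cnt <= 1)
		return 0;	/* guard against the divide by zero */

	stddev = bm_cnt * bm_totalsq - bm_total * bm_total;
	stddev /= (uint32_t)bm_cnt;	/* do_div(stddev, (u32)bm_cnt) */
	stddev /= (uint32_t)bm_cnt - 1;	/* do_div(stddev, (u32)bm_cnt - 1) */
	return stddev;
}
```

For the samples {2, 4, 6}: n = 3, Sum x = 12, Sum x^2 = 56, and the formula yields (3*56 - 144) / (3*2) = 4, matching the textbook sample variance.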


* Re: [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times
  2014-06-10  5:17   ` Namhyung Kim
  2014-06-10 12:16     ` Steven Rostedt
@ 2014-06-10 13:42     ` Steven Rostedt
  2014-06-10 16:12       ` Steven Rostedt
  1 sibling, 1 reply; 15+ messages in thread
From: Steven Rostedt @ 2014-06-10 13:42 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Tue, 10 Jun 2014 14:17:54 +0900
Namhyung Kim <namhyung@gmail.com> wrote:

> Hi Steve,
> 
> On Fri, 06 Jun 2014 12:30:37 -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
> >
> > When calculating the average and standard deviation, it is required that
> > the count be less than UINT_MAX, otherwise the do_div() will get
> > undefined results. After 2^32 counts of data, the average and standard
> > deviation should pretty much be set anyway.
> 
> [SNIP]
> >  	if (bm_cnt > 1) {
> >  		/*
> >  		 * Apply Welford's method to calculate standard deviation:
> >  		 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
> >  		 */
> >  		stddev = (u64)bm_cnt * bm_totalsq - bm_total * bm_total;
> > -		do_div(stddev, bm_cnt);
> > -		do_div(stddev, bm_cnt - 1);
> > +		do_div(stddev, (u32)bm_cnt);
> > +		do_div(stddev, (u32)bm_cnt - 1);
> >  	} else
> >  		stddev = 0;
> 
> I also think that this if-else can go as it checks bm_cnt == 1 above.
> 

I noticed this too, but just to keep gcc from complaining, I left it in.

-- Steve


* Re: [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty
  2014-06-10  5:20   ` Namhyung Kim
@ 2014-06-10 13:42     ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-10 13:42 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

On Tue, 10 Jun 2014 14:20:39 +0900
Namhyung Kim <namhyung@gmail.com> wrote:

> On Fri, 06 Jun 2014 12:30:38 -0400, Steven Rostedt wrote:
> > From: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
> >
> > ftrace_trace_arrays links global_trace.list. However, global_trace
> > is not added to ftrace_trace_arrays if trace_alloc_buffers() failed.
> > As a result, ftrace_trace_arrays becomes an empty list, and the
> > current top_trace_array() then returns an invalid pointer, which can
> > lead to memory corruption or a kernel panic.
> >
> > The current implementation does not check whether ftrace_trace_arrays
> > is an empty list. So, in this patch, if ftrace_trace_arrays is an
> > empty list, top_trace_array() returns NULL. Moreover, this patch makes
> > all functions calling top_trace_array() handle it appropriately.
> 
> [SNIP]
> > @@ -252,6 +252,9 @@ static inline struct trace_array *top_trace_array(void)
> >  {
> >  	struct trace_array *tr;
> >  
> > +	if (list_empty(ftrace_trace_arrays.prev))
> > +		return NULL;
> 
> It looks weird to me.. why not checking "list_empty(&ftrace_trace_arrays)"?
> 

Bah, I don't know how I missed that. Thanks.

-- Steve

> Thanks,
> Namhyung
> 
> > +
> >  	tr = list_entry(ftrace_trace_arrays.prev,
> >  			typeof(*tr), list);
> >  	WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));
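For anyone puzzled by the exchange above: list_empty() takes a pointer to the list head itself and tests whether the head points back at itself. A minimal userspace copy of the kernel's circular list (a sketch only; <linux/list.h> is the authoritative version) shows why list_empty(&ftrace_trace_arrays) is the natural spelling, even though passing .prev happens to give the same answer on an empty list, where prev points at the head:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal copy of the kernel's circular doubly linked list. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head;
	head->prev = head;
}

/* An empty list's head points back at itself. */
static bool list_empty(const struct list_head *head)
{
	return head->next == head;
}

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}
```

On an empty list head.prev == &head, so list_empty(head.prev) accidentally agrees with list_empty(&head); once an entry is added, the .prev form tests the last entry rather than the head, which is why the idiomatic spelling is the one to use.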



* Re: [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails
  2014-06-10  5:25   ` Namhyung Kim
@ 2014-06-10 15:58     ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-10 15:58 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton, Yoshihiro YUNOMAE

On Tue, 10 Jun 2014 14:25:21 +0900
Namhyung Kim <namhyung@gmail.com> wrote:

 
> > +static void free_trace_buffers(struct trace_array *tr)
> > +{
> > +	if (!tr)
> > +		return;
> > +
> > +	if (tr->trace_buffer.buffer) {
> > +		ring_buffer_free(tr->trace_buffer.buffer);
> > +		tr->trace_buffer.buffer = NULL;
> > +		free_percpu(tr->trace_buffer.data);
> > +	}
> > +
> > +#ifdef CONFIG_TRACER_MAX_TRACE
> > +	if (tr->max_buffer.buffer) {
> > +		ring_buffer_free(tr->max_buffer.buffer);
> > +		tr->max_buffer.buffer = NULL;
> 
> Hmm.. why doesn't it free tr->max_buffer.data here?  And I think it's
> better to reset *_buffer.data to NULL also - maybe by adding another
> helper function free_trace_buffer()..
> 

Originally, instances didn't need them. But I'm looking now and yes,
they do allocate. I guess this doesn't trigger kmemleak checks, as I
ran creations and deletions with it running and it didn't report any
leaks.

I'll update,

Thanks!

-- Steve


* Re: [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times
  2014-06-10 13:42     ` Steven Rostedt
@ 2014-06-10 16:12       ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2014-06-10 16:12 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Tue, 10 Jun 2014 09:42:06 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Tue, 10 Jun 2014 14:17:54 +0900
> Namhyung Kim <namhyung@gmail.com> wrote:
> 
> > Hi Steve,
> > 
> > On Fri, 06 Jun 2014 12:30:37 -0400, Steven Rostedt wrote:
> > > From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
> > >
> > > When calculating the average and standard deviation, it is required that
> > > the count be less than UINT_MAX, otherwise the do_div() will get
> > > undefined results. After 2^32 counts of data, the average and standard
> > > deviation should pretty much be set anyway.
> > 
> > [SNIP]
> > >  	if (bm_cnt > 1) {
> > >  		/*
> > >  		 * Apply Welford's method to calculate standard deviation:
> > >  		 * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
> > >  		 */
> > >  		stddev = (u64)bm_cnt * bm_totalsq - bm_total * bm_total;
> > > -		do_div(stddev, bm_cnt);
> > > -		do_div(stddev, bm_cnt - 1);
> > > +		do_div(stddev, (u32)bm_cnt);
> > > +		do_div(stddev, (u32)bm_cnt - 1);
> > >  	} else
> > >  		stddev = 0;
> > 
> > I also think that this if-else can go as it checks bm_cnt == 1 above.
> > 
> 
> I noticed this too, but just to keep gcc from complaining, I left it in.
> 

Oh, and you are talking about the entire if () else block. Yeah, I
thought about that as well, but I'm paranoid about doing a divide by
zero ;-)

It's code for testing purposes only. A fringe of the kernel base. Not a
hot path. Slowness isn't an issue here, but correctness is.

Thanks,

-- Steve


Thread overview: 15+ messages
2014-06-06 16:30 [for-next][PATCH 0/6] tracing: Hopefully the last updates for 3.16 merge window Steven Rostedt
2014-06-06 16:30 ` [for-next][PATCH 1/6] tracing: Introduce saved_cmdlines_size file Steven Rostedt
2014-06-06 16:30 ` [for-next][PATCH 2/6] tracing: Convert stddev into u64 in tracepoint benchmark Steven Rostedt
2014-06-06 16:30 ` [for-next][PATCH 3/6] tracing: Only calculate stats of tracepoint benchmarks for 2^32 times Steven Rostedt
2014-06-10  5:17   ` Namhyung Kim
2014-06-10 12:16     ` Steven Rostedt
2014-06-10 13:42     ` Steven Rostedt
2014-06-10 16:12       ` Steven Rostedt
2014-06-06 16:30 ` [for-next][PATCH 4/6] tracing: Return error if ftrace_trace_arrays list is empty Steven Rostedt
2014-06-10  5:20   ` Namhyung Kim
2014-06-10 13:42     ` Steven Rostedt
2014-06-06 16:30 ` [for-next][PATCH 5/6] tracing/kprobes: Avoid self tests if tracing is disabled on boot up Steven Rostedt
2014-06-06 16:30 ` [for-next][PATCH 6/6] tracing: Fix leak of ring buffer data when new instances creation fails Steven Rostedt
2014-06-10  5:25   ` Namhyung Kim
2014-06-10 15:58     ` Steven Rostedt
