linux-kernel.vger.kernel.org archive mirror
* [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function
@ 2017-02-03 13:40 Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 1/8] tracing: Add ftrace_hash_key() helper function Steven Rostedt
                   ` (8 more replies)
  0 siblings, 9 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: 092adb1121aec9e0dfa2d07bc160ae60831f4798


Steven Rostedt (VMware) (8):
      tracing: Add ftrace_hash_key() helper function
      ftrace: Create a slight optimization on searching the ftrace_hash
      ftrace: Replace (void *)1 with a meaningful macro name FTRACE_GRAPH_EMPTY
      ftrace: Reset fgd->hash in ftrace_graph_write()
      ftrace: Have set_graph_functions handle write with RDWR
      tracing: Reset parser->buffer to allow multiple "puts"
      ftrace: Do not hold references of ftrace_graph_{notrace_}hash out of graph_lock
      ftrace: Have set_graph_function handle multiple functions in one write

----
 kernel/trace/ftrace.c | 186 ++++++++++++++++++++++++++++++++++----------------
 kernel/trace/trace.c  |   1 +
 2 files changed, 129 insertions(+), 58 deletions(-)

* [for-next][PATCH 1/8] tracing: Add ftrace_hash_key() helper function
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash Steven Rostedt
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0001-tracing-Add-ftrace_hash_key-helper-function.patch --]
[-- Type: text/plain, Size: 1640 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Replace the couple of use cases that have small logic to produce the ftrace
function key id with a helper function. There is no need for duplicate code.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 2d554a02241d..89240f62061c 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1185,6 +1185,15 @@ struct ftrace_page {
 static struct ftrace_page	*ftrace_pages_start;
 static struct ftrace_page	*ftrace_pages;
 
+static __always_inline unsigned long
+ftrace_hash_key(struct ftrace_hash *hash, unsigned long ip)
+{
+	if (hash->size_bits > 0)
+		return hash_long(ip, hash->size_bits);
+
+	return 0;
+}
+
 struct ftrace_func_entry *
 ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
 {
@@ -1195,11 +1204,7 @@ ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
 	if (ftrace_hash_empty(hash))
 		return NULL;
 
-	if (hash->size_bits > 0)
-		key = hash_long(ip, hash->size_bits);
-	else
-		key = 0;
-
+	key = ftrace_hash_key(hash, ip);
 	hhd = &hash->buckets[key];
 
 	hlist_for_each_entry_rcu_notrace(entry, hhd, hlist) {
@@ -1215,11 +1220,7 @@ static void __add_hash_entry(struct ftrace_hash *hash,
 	struct hlist_head *hhd;
 	unsigned long key;
 
-	if (hash->size_bits)
-		key = hash_long(entry->ip, hash->size_bits);
-	else
-		key = 0;
-
+	key = ftrace_hash_key(hash, entry->ip);
 	hhd = &hash->buckets[key];
 	hlist_add_head(&entry->hlist, hhd);
 	hash->count++;
-- 
2.10.2

* [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 1/8] tracing: Add ftrace_hash_key() helper function Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 14:26   ` Namhyung Kim
  2017-02-03 13:40 ` [for-next][PATCH 3/8] ftrace: Replace (void *)1 with a meaningful macro name FTRACE_GRAPH_EMPTY Steven Rostedt
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0002-ftrace-Create-a-slight-optimization-on-searching-the.patch --]
[-- Type: text/plain, Size: 3302 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

This is a micro-optimization, but since it has to deal with a fast path of the
function tracer, these optimizations can be noticeable.

ftrace_lookup_ip() returns the matching entry if the given ip is found in the
hash, and NULL if it is not found or if the hash itself is NULL. But there are
cases where a NULL hash counts as a match, and in those cases
ftrace_hash_empty() is already tested before ftrace_lookup_ip() is called.
Since ftrace_lookup_ip() performs that test again, it adds a few extra
unneeded instructions in those cases.

A new static __always_inline function is created that does not perform the
hash empty test. It must only be used by callers that do the check first
anyway, as performing a lookup on an empty or NULL hash could cause a crash.

Also add kernel-doc for the main ftrace_lookup_ip() function.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 33 +++++++++++++++++++++++++--------
 1 file changed, 25 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 89240f62061c..1595df0d7d79 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1194,16 +1194,14 @@ ftrace_hash_key(struct ftrace_hash *hash, unsigned long ip)
 	return 0;
 }
 
-struct ftrace_func_entry *
-ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
+/* Only use this function if ftrace_hash_empty() has already been tested */
+static __always_inline struct ftrace_func_entry *
+__ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
 {
 	unsigned long key;
 	struct ftrace_func_entry *entry;
 	struct hlist_head *hhd;
 
-	if (ftrace_hash_empty(hash))
-		return NULL;
-
 	key = ftrace_hash_key(hash, ip);
 	hhd = &hash->buckets[key];
 
@@ -1214,6 +1212,25 @@ ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
 	return NULL;
 }
 
+/**
+ * ftrace_lookup_ip - Test to see if an ip exists in an ftrace_hash
+ * @hash: The hash to look at
+ * @ip: The instruction pointer to test
+ *
+ * Search a given @hash to see if a given instruction pointer (@ip)
+ * exists in it.
+ *
+ * Returns the entry that holds the @ip if found. NULL otherwise.
+ */
+struct ftrace_func_entry *
+ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
+{
+	if (ftrace_hash_empty(hash))
+		return NULL;
+
+	return __ftrace_lookup_ip(hash, ip);
+}
+
 static void __add_hash_entry(struct ftrace_hash *hash,
 			     struct ftrace_func_entry *entry)
 {
@@ -1463,9 +1480,9 @@ static bool hash_contains_ip(unsigned long ip,
 	 * notrace hash is considered not in the notrace hash.
 	 */
 	return (ftrace_hash_empty(hash->filter_hash) ||
-		ftrace_lookup_ip(hash->filter_hash, ip)) &&
+		__ftrace_lookup_ip(hash->filter_hash, ip)) &&
 		(ftrace_hash_empty(hash->notrace_hash) ||
-		 !ftrace_lookup_ip(hash->notrace_hash, ip));
+		 !__ftrace_lookup_ip(hash->notrace_hash, ip));
 }
 
 /*
@@ -2877,7 +2894,7 @@ ops_references_rec(struct ftrace_ops *ops, struct dyn_ftrace *rec)
 
 	/* The function must be in the filter */
 	if (!ftrace_hash_empty(ops->func_hash->filter_hash) &&
-	    !ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
+	    !__ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
 		return 0;
 
 	/* If in notrace hash, we ignore it too */
-- 
2.10.2

* [for-next][PATCH 3/8] ftrace: Replace (void *)1 with a meaningful macro name FTRACE_GRAPH_EMPTY
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 1/8] tracing: Add ftrace_hash_key() helper function Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write() Steven Rostedt
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0003-ftrace-Replace-void-1-with-a-meaningful-macro-name-F.patch --]
[-- Type: text/plain, Size: 1453 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

When set_graph_function or set_graph_notrace contains no records, a banner of
either "#### all functions enabled ####" or "#### all functions disabled ####"
respectively is displayed. To tell the seq operations to do this, (void *)1 is
passed as a return value. Instead of using a hardcoded, meaningless value,
define it as a macro.
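
For reference, with no records set, reading the file shows the banner directly
(the path assumes the usual debugfs mount point used in the reproductions
later in this series):

 # cat /sys/kernel/debug/tracing/set_graph_function
 #### all functions enabled ####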

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 1595df0d7d79..a9cfc8713198 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4561,6 +4561,8 @@ enum graph_filter_type {
 	GRAPH_FILTER_FUNCTION,
 };
 
+#define FTRACE_GRAPH_EMPTY	((void *)1)
+
 struct ftrace_graph_data {
 	struct ftrace_hash *hash;
 	struct ftrace_func_entry *entry;
@@ -4616,7 +4618,7 @@ static void *g_start(struct seq_file *m, loff_t *pos)
 
 	/* Nothing, tell g_show to print all functions are enabled */
 	if (ftrace_hash_empty(fgd->hash) && !*pos)
-		return (void *)1;
+		return FTRACE_GRAPH_EMPTY;
 
 	fgd->idx = 0;
 	fgd->entry = NULL;
@@ -4635,7 +4637,7 @@ static int g_show(struct seq_file *m, void *v)
 	if (!entry)
 		return 0;
 
-	if (entry == (void *)1) {
+	if (entry == FTRACE_GRAPH_EMPTY) {
 		struct ftrace_graph_data *fgd = m->private;
 
 		if (fgd->type == GRAPH_FILTER_FUNCTION)
-- 
2.10.2

* [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write()
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
                   ` (2 preceding siblings ...)
  2017-02-03 13:40 ` [for-next][PATCH 3/8] ftrace: Replace (void *)1 with a meaningful macro name FTRACE_GRAPH_EMPTY Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 14:49   ` Namhyung Kim
  2017-02-03 13:40 ` [for-next][PATCH 5/8] ftrace: Have set_graph_functions handle write with RDWR Steven Rostedt
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0004-ftrace-Reset-fgd-hash-in-ftrace_graph_write.patch --]
[-- Type: text/plain, Size: 2701 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

fgd->hash is saved and then freed, but is never reset to either
ftrace_graph_hash or ftrace_graph_notrace_hash. But if multiple reads are
performed, then the freed hash could be accessed again.

 # cd /sys/kernel/debug/tracing
 # head -1000 available_filter_functions > /tmp/funcs
 # cat /tmp/funcs > set_graph_function

Causes:

 general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC
 Modules linked in:  [...]
 CPU: 2 PID: 1337 Comm: cat Not tainted 4.10.0-rc2-test-00010-g6b052e9 #32
 Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
 task: ffff880113a12200 task.stack: ffffc90001940000
 RIP: 0010:free_ftrace_hash+0x7c/0x160
 RSP: 0018:ffffc90001943db0 EFLAGS: 00010246
 RAX: 6b6b6b6b6b6b6b6b RBX: 6b6b6b6b6b6b6b6b RCX: 6b6b6b6b6b6b6b6b
 RDX: 0000000000000002 RSI: 0000000000000001 RDI: ffff8800ce1e1d40
 RBP: ffff8800ce1e1d50 R08: 0000000000000000 R09: 0000000000006400
 R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
 R13: ffff8800ce1e1d40 R14: 0000000000004000 R15: 0000000000000001
 FS:  00007f9408a07740(0000) GS:ffff88011e500000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000000000aee1f0 CR3: 0000000116bb4000 CR4: 00000000001406e0
 Call Trace:
  ? ftrace_graph_write+0x150/0x190
  ? __vfs_write+0x1f6/0x210
  ? __audit_syscall_entry+0x17f/0x200
  ? rw_verify_area+0xdb/0x210
  ? _cond_resched+0x2b/0x50
  ? __sb_start_write+0xb4/0x130
  ? vfs_write+0x1c8/0x330
  ? SyS_write+0x62/0xf0
  ? do_syscall_64+0xa3/0x1b0
  ? entry_SYSCALL64_slow_path+0x25/0x25
 Code: 01 48 85 db 0f 84 92 00 00 00 b8 01 00 00 00 d3 e0 85 c0 7e 3f 83 e8 01 48 8d 6f 10 45 31 e4 4c 8d 34 c5 08 00 00 00 49 8b 45 08 <4a> 8b 34 20 48 85 f6 74 13 48 8b 1e 48 89 ef e8 20 fa ff ff 48
 RIP: free_ftrace_hash+0x7c/0x160 RSP: ffffc90001943db0
 ---[ end trace 999b48216bf4b393 ]---

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index a9cfc8713198..b7df0dcf8652 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4858,10 +4858,13 @@ ftrace_graph_write(struct file *file, const char __user *ubuf,
 		if (!new_hash)
 			ret = -ENOMEM;
 
-		if (fgd->type == GRAPH_FILTER_FUNCTION)
+		if (fgd->type == GRAPH_FILTER_FUNCTION) {
 			rcu_assign_pointer(ftrace_graph_hash, new_hash);
-		else
+			fgd->hash = ftrace_graph_hash;
+		} else {
 			rcu_assign_pointer(ftrace_graph_notrace_hash, new_hash);
+			fgd->hash = ftrace_graph_notrace_hash;
+		}
 
 		mutex_unlock(&graph_lock);
 
-- 
2.10.2

* [for-next][PATCH 5/8] ftrace: Have set_graph_functions handle write with RDWR
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
                   ` (3 preceding siblings ...)
  2017-02-03 13:40 ` [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write() Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 6/8] tracing: Reset parser->buffer to allow multiple "puts" Steven Rostedt
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0005-ftrace-Have-set_graph_functions-handle-write-with-RD.patch --]
[-- Type: text/plain, Size: 1181 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Reading the set_graph_functions file uses seq functions, which set the
file->private_data pointer to a seq_file descriptor, while write-only opens
place the ftrace_graph_data descriptor directly in file->private_data. But if
the file is opened for RDWR, ftrace_graph_write() will incorrectly use the
file->private_data pointer directly instead of the
((struct seq_file *)file->private_data)->private pointer, and this can crash
the kernel.
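
One way to exercise the read-write open path from a shell is shown below (an
illustrative sketch; do_IRQ stands in for any entry listed in
available_filter_functions):

 # exec 3<>/sys/kernel/debug/tracing/set_graph_function
 # echo do_IRQ >&3
 # exec 3>&-

With the fix, the write path picks up the ftrace_graph_data descriptor from
the seq_file's private pointer, so a read-write open behaves the same as a
write-only one.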

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b7df0dcf8652..0233c8cb45f4 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4842,6 +4842,12 @@ ftrace_graph_write(struct file *file, const char __user *ubuf,
 	if (trace_parser_get_init(&parser, FTRACE_BUFF_MAX))
 		return -ENOMEM;
 
+	/* Read mode uses seq functions */
+	if (file->f_mode & FMODE_READ) {
+		struct seq_file *m = file->private_data;
+		fgd = m->private;
+	}
+
 	read = trace_get_user(&parser, ubuf, cnt, ppos);
 
 	if (read >= 0 && trace_parser_loaded((&parser))) {
-- 
2.10.2

* [for-next][PATCH 6/8] tracing: Reset parser->buffer to allow multiple "puts"
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
                   ` (4 preceding siblings ...)
  2017-02-03 13:40 ` [for-next][PATCH 5/8] ftrace: Have set_graph_functions handle write with RDWR Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 7/8] ftrace: Do not hold references of ftrace_graph_{notrace_}hash out of graph_lock Steven Rostedt
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0006-tracing-Reset-parser-buffer-to-allow-multiple-puts.patch --]
[-- Type: text/plain, Size: 882 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

trace_parser_put() simply frees the allocated parser buffer. But it does not
reset the pointer that was freed. This means that if trace_parser_put() is
called on the same parser more than once, the buffer is freed twice and the
allocator is corrupted. Setting parser->buffer to NULL after the free allows
it to be called more than once without any ill effect, since kfree(NULL) is a
no-op.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d7449783987a..4589b67168fc 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1193,6 +1193,7 @@ int trace_parser_get_init(struct trace_parser *parser, int size)
 void trace_parser_put(struct trace_parser *parser)
 {
 	kfree(parser->buffer);
+	parser->buffer = NULL;
 }
 
 /*
-- 
2.10.2

* [for-next][PATCH 7/8] ftrace: Do not hold references of ftrace_graph_{notrace_}hash out of graph_lock
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
                   ` (5 preceding siblings ...)
  2017-02-03 13:40 ` [for-next][PATCH 6/8] tracing: Reset parser->buffer to allow multiple "puts" Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 13:40 ` [for-next][PATCH 8/8] ftrace: Have set_graph_function handle multiple functions in one write Steven Rostedt
  2017-02-03 15:14 ` [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Namhyung Kim
  8 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0007-ftrace-Do-not-hold-references-of-ftrace_graph_-notra.patch --]
[-- Type: text/plain, Size: 3196 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The hashes ftrace_graph_hash and ftrace_graph_notrace_hash are modified with
the graph_lock held. Holding a pointer to them and passing it along can lead
to a use of a stale pointer (fgd->hash). Move the assignment of the pointer
and its use to within the holding of the lock. Note, the hashes are rcu_sched
protected data, and other places that reference them do so with preemption
disabled. But the file manipulation code must be protected by the lock.

The fgd->hash pointer is set to NULL when the lock is released.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 0233c8cb45f4..b3a4896ef78a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4616,6 +4616,13 @@ static void *g_start(struct seq_file *m, loff_t *pos)
 
 	mutex_lock(&graph_lock);
 
+	if (fgd->type == GRAPH_FILTER_FUNCTION)
+		fgd->hash = rcu_dereference_protected(ftrace_graph_hash,
+					lockdep_is_held(&graph_lock));
+	else
+		fgd->hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
+					lockdep_is_held(&graph_lock));
+
 	/* Nothing, tell g_show to print all functions are enabled */
 	if (ftrace_hash_empty(fgd->hash) && !*pos)
 		return FTRACE_GRAPH_EMPTY;
@@ -4695,6 +4702,14 @@ __ftrace_graph_open(struct inode *inode, struct file *file,
 
 out:
 	fgd->new_hash = new_hash;
+
+	/*
+	 * All uses of fgd->hash must be taken with the graph_lock
+	 * held. The graph_lock is going to be released, so force
+	 * fgd->hash to be reinitialized when it is taken again.
+	 */
+	fgd->hash = NULL;
+
 	return ret;
 }
 
@@ -4713,7 +4728,8 @@ ftrace_graph_open(struct inode *inode, struct file *file)
 
 	mutex_lock(&graph_lock);
 
-	fgd->hash = ftrace_graph_hash;
+	fgd->hash = rcu_dereference_protected(ftrace_graph_hash,
+					lockdep_is_held(&graph_lock));
 	fgd->type = GRAPH_FILTER_FUNCTION;
 	fgd->seq_ops = &ftrace_graph_seq_ops;
 
@@ -4740,7 +4756,8 @@ ftrace_graph_notrace_open(struct inode *inode, struct file *file)
 
 	mutex_lock(&graph_lock);
 
-	fgd->hash = ftrace_graph_notrace_hash;
+	fgd->hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
+					lockdep_is_held(&graph_lock));
 	fgd->type = GRAPH_FILTER_NOTRACE;
 	fgd->seq_ops = &ftrace_graph_seq_ops;
 
@@ -4859,17 +4876,18 @@ ftrace_graph_write(struct file *file, const char __user *ubuf,
 		ret = ftrace_graph_set_hash(fgd->new_hash,
 					    parser.buffer);
 
-		old_hash = fgd->hash;
 		new_hash = __ftrace_hash_move(fgd->new_hash);
 		if (!new_hash)
 			ret = -ENOMEM;
 
 		if (fgd->type == GRAPH_FILTER_FUNCTION) {
+			old_hash = rcu_dereference_protected(ftrace_graph_hash,
+					lockdep_is_held(&graph_lock));
 			rcu_assign_pointer(ftrace_graph_hash, new_hash);
-			fgd->hash = ftrace_graph_hash;
 		} else {
+			old_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
+					lockdep_is_held(&graph_lock));
 			rcu_assign_pointer(ftrace_graph_notrace_hash, new_hash);
-			fgd->hash = ftrace_graph_notrace_hash;
 		}
 
 		mutex_unlock(&graph_lock);
-- 
2.10.2

* [for-next][PATCH 8/8] ftrace: Have set_graph_function handle multiple functions in one write
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
                   ` (6 preceding siblings ...)
  2017-02-03 13:40 ` [for-next][PATCH 7/8] ftrace: Do not hold references of ftrace_graph_{notrace_}hash out of graph_lock Steven Rostedt
@ 2017-02-03 13:40 ` Steven Rostedt
  2017-02-03 15:14 ` [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Namhyung Kim
  8 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 13:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0008-ftrace-Have-set_graph_function-handle-multiple-funct.patch --]
[-- Type: text/plain, Size: 5240 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Currently, only one function at a time can be written to set_graph_function
and set_graph_notrace. Only the last function in the list is saved, even
though the other functions are added and then removed.

Change the behavior to be the same as set_ftrace_function, to allow multiple
functions to be written. If any one of them fails, none of them will be added.
The addition of the functions is done at the end, when the file is closed.
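
With this change, the reproduction used earlier in this series to demonstrate
the fgd->hash crash becomes a supported use case, with every listed function
taking effect once the file is closed:

 # cd /sys/kernel/debug/tracing
 # head -1000 available_filter_functions > /tmp/funcs
 # cat /tmp/funcs > set_graph_function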

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 105 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 64 insertions(+), 41 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b3a4896ef78a..0c0609326391 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4564,12 +4564,13 @@ enum graph_filter_type {
 #define FTRACE_GRAPH_EMPTY	((void *)1)
 
 struct ftrace_graph_data {
-	struct ftrace_hash *hash;
-	struct ftrace_func_entry *entry;
-	int idx;   /* for hash table iteration */
-	enum graph_filter_type type;
-	struct ftrace_hash *new_hash;
-	const struct seq_operations *seq_ops;
+	struct ftrace_hash		*hash;
+	struct ftrace_func_entry	*entry;
+	int				idx;   /* for hash table iteration */
+	enum graph_filter_type		type;
+	struct ftrace_hash		*new_hash;
+	const struct seq_operations	*seq_ops;
+	struct trace_parser		parser;
 };
 
 static void *
@@ -4676,6 +4677,9 @@ __ftrace_graph_open(struct inode *inode, struct file *file,
 	if (file->f_mode & FMODE_WRITE) {
 		const int size_bits = FTRACE_HASH_DEFAULT_BITS;
 
+		if (trace_parser_get_init(&fgd->parser, FTRACE_BUFF_MAX))
+			return -ENOMEM;
+
 		if (file->f_flags & O_TRUNC)
 			new_hash = alloc_ftrace_hash(size_bits);
 		else
@@ -4701,6 +4705,9 @@ __ftrace_graph_open(struct inode *inode, struct file *file,
 		file->private_data = fgd;
 
 out:
+	if (ret < 0 && file->f_mode & FMODE_WRITE)
+		trace_parser_put(&fgd->parser);
+
 	fgd->new_hash = new_hash;
 
 	/*
@@ -4773,6 +4780,9 @@ static int
 ftrace_graph_release(struct inode *inode, struct file *file)
 {
 	struct ftrace_graph_data *fgd;
+	struct ftrace_hash *old_hash, *new_hash;
+	struct trace_parser *parser;
+	int ret = 0;
 
 	if (file->f_mode & FMODE_READ) {
 		struct seq_file *m = file->private_data;
@@ -4783,10 +4793,50 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 		fgd = file->private_data;
 	}
 
+
+	if (file->f_mode & FMODE_WRITE) {
+
+		parser = &fgd->parser;
+
+		if (trace_parser_loaded((parser))) {
+			parser->buffer[parser->idx] = 0;
+			ret = ftrace_graph_set_hash(fgd->new_hash,
+						    parser->buffer);
+		}
+
+		trace_parser_put(parser);
+
+		new_hash = __ftrace_hash_move(fgd->new_hash);
+		if (!new_hash) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		mutex_lock(&graph_lock);
+
+		if (fgd->type == GRAPH_FILTER_FUNCTION) {
+			old_hash = rcu_dereference_protected(ftrace_graph_hash,
+					lockdep_is_held(&graph_lock));
+			rcu_assign_pointer(ftrace_graph_hash, new_hash);
+		} else {
+			old_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
+					lockdep_is_held(&graph_lock));
+			rcu_assign_pointer(ftrace_graph_notrace_hash, new_hash);
+		}
+
+		mutex_unlock(&graph_lock);
+
+		/* Wait till all users are no longer using the old hash */
+		synchronize_sched();
+
+		free_ftrace_hash(old_hash);
+	}
+
+ out:
 	kfree(fgd->new_hash);
 	kfree(fgd);
 
-	return 0;
+	return ret;
 }
 
 static int
@@ -4848,61 +4898,34 @@ static ssize_t
 ftrace_graph_write(struct file *file, const char __user *ubuf,
 		   size_t cnt, loff_t *ppos)
 {
-	struct trace_parser parser;
 	ssize_t read, ret = 0;
 	struct ftrace_graph_data *fgd = file->private_data;
-	struct ftrace_hash *old_hash, *new_hash;
+	struct trace_parser *parser;
 
 	if (!cnt)
 		return 0;
 
-	if (trace_parser_get_init(&parser, FTRACE_BUFF_MAX))
-		return -ENOMEM;
-
 	/* Read mode uses seq functions */
 	if (file->f_mode & FMODE_READ) {
 		struct seq_file *m = file->private_data;
 		fgd = m->private;
 	}
 
-	read = trace_get_user(&parser, ubuf, cnt, ppos);
+	parser = &fgd->parser;
 
-	if (read >= 0 && trace_parser_loaded((&parser))) {
-		parser.buffer[parser.idx] = 0;
+	read = trace_get_user(parser, ubuf, cnt, ppos);
 
-		mutex_lock(&graph_lock);
+	if (read >= 0 && trace_parser_loaded(parser) &&
+	    !trace_parser_cont(parser)) {
 
-		/* we allow only one expression at a time */
 		ret = ftrace_graph_set_hash(fgd->new_hash,
-					    parser.buffer);
-
-		new_hash = __ftrace_hash_move(fgd->new_hash);
-		if (!new_hash)
-			ret = -ENOMEM;
-
-		if (fgd->type == GRAPH_FILTER_FUNCTION) {
-			old_hash = rcu_dereference_protected(ftrace_graph_hash,
-					lockdep_is_held(&graph_lock));
-			rcu_assign_pointer(ftrace_graph_hash, new_hash);
-		} else {
-			old_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
-					lockdep_is_held(&graph_lock));
-			rcu_assign_pointer(ftrace_graph_notrace_hash, new_hash);
-		}
-
-		mutex_unlock(&graph_lock);
-
-		/* Wait till all users are no longer using the old hash */
-		synchronize_sched();
-
-		free_ftrace_hash(old_hash);
+					    parser->buffer);
+		trace_parser_clear(parser);
 	}
 
 	if (!ret)
 		ret = read;
 
-	trace_parser_put(&parser);
-
 	return ret;
 }
 
-- 
2.10.2

* Re: [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash
  2017-02-03 13:40 ` [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash Steven Rostedt
@ 2017-02-03 14:26   ` Namhyung Kim
  2017-02-03 14:57     ` Steven Rostedt
  0 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2017-02-03 14:26 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

Hi Steve,

On Fri, Feb 3, 2017 at 10:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
>
> This is a micro-optimization, but as it has to deal with a fast path of the
> function tracer, these optimizations can be noticed.
>
> The ftrace_lookup_ip() returns true if the given ip is found in the hash. If
> it's not found or the hash is NULL, it returns false. But there's some cases
> that a NULL hash is a true, and the ftrace_hash_empty() is tested before
> calling ftrace_lookup_ip() in those cases. But as ftrace_lookup_ip() tests
> that first, that adds a few extra unneeded instructions in those cases.
>
> A new static "always_inlined" function is created that does not perform the
> hash empty test. This must only be used by callers that do the check first
> anyway, as an empty or NULL hash could cause a crash if a lookup is
> performed on it.
>
> Also add kernel doc for the ftrace_lookup_ip() main function.

It'd be nice if ftrace_graph_addr() was changed also.

Thanks,
Namhyung


>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  kernel/trace/ftrace.c | 33 +++++++++++++++++++++++++--------
>  1 file changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 89240f62061c..1595df0d7d79 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1194,16 +1194,14 @@ ftrace_hash_key(struct ftrace_hash *hash, unsigned long ip)
>         return 0;
>  }
>
> -struct ftrace_func_entry *
> -ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
> +/* Only use this function if ftrace_hash_empty() has already been tested */
> +static __always_inline struct ftrace_func_entry *
> +__ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
>  {
>         unsigned long key;
>         struct ftrace_func_entry *entry;
>         struct hlist_head *hhd;
>
> -       if (ftrace_hash_empty(hash))
> -               return NULL;
> -
>         key = ftrace_hash_key(hash, ip);
>         hhd = &hash->buckets[key];
>
> @@ -1214,6 +1212,25 @@ ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
>         return NULL;
>  }
>
> +/**
> + * ftrace_lookup_ip - Test to see if an ip exists in an ftrace_hash
> + * @hash: The hash to look at
> + * @ip: The instruction pointer to test
> + *
> + * Search a given @hash to see if a given instruction pointer (@ip)
> + * exists in it.
> + *
> + * Returns the entry that holds the @ip if found. NULL otherwise.
> + */
> +struct ftrace_func_entry *
> +ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
> +{
> +       if (ftrace_hash_empty(hash))
> +               return NULL;
> +
> +       return __ftrace_lookup_ip(hash, ip);
> +}
> +
>  static void __add_hash_entry(struct ftrace_hash *hash,
>                              struct ftrace_func_entry *entry)
>  {
> @@ -1463,9 +1480,9 @@ static bool hash_contains_ip(unsigned long ip,
>          * notrace hash is considered not in the notrace hash.
>          */
>         return (ftrace_hash_empty(hash->filter_hash) ||
> -               ftrace_lookup_ip(hash->filter_hash, ip)) &&
> +               __ftrace_lookup_ip(hash->filter_hash, ip)) &&
>                 (ftrace_hash_empty(hash->notrace_hash) ||
> -                !ftrace_lookup_ip(hash->notrace_hash, ip));
> +                !__ftrace_lookup_ip(hash->notrace_hash, ip));
>  }
>
>  /*
> @@ -2877,7 +2894,7 @@ ops_references_rec(struct ftrace_ops *ops, struct dyn_ftrace *rec)
>
>         /* The function must be in the filter */
>         if (!ftrace_hash_empty(ops->func_hash->filter_hash) &&
> -           !ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
> +           !__ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
>                 return 0;
>
>         /* If in notrace hash, we ignore it too */
> --
> 2.10.2
>
>

* Re: [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write()
  2017-02-03 13:40 ` [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write() Steven Rostedt
@ 2017-02-03 14:49   ` Namhyung Kim
  2017-02-03 14:57     ` Steven Rostedt
  0 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2017-02-03 14:49 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Fri, Feb 3, 2017 at 10:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
>
> fgd->hash is saved and then freed, but is never reset to either
> ftrace_graph_hash nor ftrace_graph_notrace_hash. But if multiple reads are
> performed, then the freed hash could be accessed again.

Argh, right.  Btw did you mean multiple "write" not "read", no?

Thanks,
Namhyung


>
>  # cd /sys/kernel/debug/tracing
>  # head -1000 available_filter_functions > /tmp/funcs
>  # cat /tmp/funcs > set_graph_function
>
> Causes:
>
>  general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC
>  Modules linked in:  [...]
>  CPU: 2 PID: 1337 Comm: cat Not tainted 4.10.0-rc2-test-00010-g6b052e9 #32
>  Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
>  task: ffff880113a12200 task.stack: ffffc90001940000
>  RIP: 0010:free_ftrace_hash+0x7c/0x160
>  RSP: 0018:ffffc90001943db0 EFLAGS: 00010246
>  RAX: 6b6b6b6b6b6b6b6b RBX: 6b6b6b6b6b6b6b6b RCX: 6b6b6b6b6b6b6b6b
>  RDX: 0000000000000002 RSI: 0000000000000001 RDI: ffff8800ce1e1d40
>  RBP: ffff8800ce1e1d50 R08: 0000000000000000 R09: 0000000000006400
>  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
>  R13: ffff8800ce1e1d40 R14: 0000000000004000 R15: 0000000000000001
>  FS:  00007f9408a07740(0000) GS:ffff88011e500000(0000) knlGS:0000000000000000
>  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>  CR2: 0000000000aee1f0 CR3: 0000000116bb4000 CR4: 00000000001406e0
>  Call Trace:
>   ? ftrace_graph_write+0x150/0x190
>   ? __vfs_write+0x1f6/0x210
>   ? __audit_syscall_entry+0x17f/0x200
>   ? rw_verify_area+0xdb/0x210
>   ? _cond_resched+0x2b/0x50
>   ? __sb_start_write+0xb4/0x130
>   ? vfs_write+0x1c8/0x330
>   ? SyS_write+0x62/0xf0
>   ? do_syscall_64+0xa3/0x1b0
>   ? entry_SYSCALL64_slow_path+0x25/0x25
>  Code: 01 48 85 db 0f 84 92 00 00 00 b8 01 00 00 00 d3 e0 85 c0 7e 3f 83 e8 01 48 8d 6f 10 45 31 e4 4c 8d 34 c5 08 00 00 00 49 8b 45 08 <4a> 8b 34 20 48 85 f6 74 13 48 8b 1e 48 89 ef e8 20 fa ff ff 48
>  RIP: free_ftrace_hash+0x7c/0x160 RSP: ffffc90001943db0
>  ---[ end trace 999b48216bf4b393 ]---
>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  kernel/trace/ftrace.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index a9cfc8713198..b7df0dcf8652 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -4858,10 +4858,13 @@ ftrace_graph_write(struct file *file, const char __user *ubuf,
>                 if (!new_hash)
>                         ret = -ENOMEM;
>
> -               if (fgd->type == GRAPH_FILTER_FUNCTION)
> +               if (fgd->type == GRAPH_FILTER_FUNCTION) {
>                         rcu_assign_pointer(ftrace_graph_hash, new_hash);
> -               else
> +                       fgd->hash = ftrace_graph_hash;
> +               } else {
>                         rcu_assign_pointer(ftrace_graph_notrace_hash, new_hash);
> +                       fgd->hash = ftrace_graph_notrace_hash;
> +               }
>
>                 mutex_unlock(&graph_lock);
>
> --
> 2.10.2
>
>

* Re: [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash
  2017-02-03 14:26   ` Namhyung Kim
@ 2017-02-03 14:57     ` Steven Rostedt
  0 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 14:57 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Fri, 3 Feb 2017 23:26:46 +0900
Namhyung Kim <namhyung@kernel.org> wrote:

> Hi Steve,
> 
> On Fri, Feb 3, 2017 at 10:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
> >
> > This is a micro-optimization, but as it has to deal with a fast path of the
> > function tracer, these optimizations can be noticed.
> >
> > The ftrace_lookup_ip() returns true if the given ip is found in the hash. If
> > it's not found or the hash is NULL, it returns false. But there's some cases
> > that a NULL hash is a true, and the ftrace_hash_empty() is tested before
> > calling ftrace_lookup_ip() in those cases. But as ftrace_lookup_ip() tests
> > that first, that adds a few extra unneeded instructions in those cases.
> >
> > A new static "always_inlined" function is created that does not perform the
> > hash empty test. This must only be used by callers that do the check first
> > anyway, as an empty or NULL hash could cause a crash if a lookup is
> > performed on it.
> >
> > Also add kernel doc for the ftrace_lookup_ip() main function.  
> 
> It'd be nice if ftrace_graph_addr() was changed also.
> 

Yeah, I was looking at that. But I was nervous about placing this
function in the header file.

-- Steve

* Re: [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write()
  2017-02-03 14:49   ` Namhyung Kim
@ 2017-02-03 14:57     ` Steven Rostedt
  0 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2017-02-03 14:57 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Fri, 3 Feb 2017 23:49:38 +0900
Namhyung Kim <namhyung@kernel.org> wrote:

> On Fri, Feb 3, 2017 at 10:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
> >
> > fgd->hash is saved and then freed, but is never reset to either
> > ftrace_graph_hash nor ftrace_graph_notrace_hash. But if multiple reads are
> > performed, then the freed hash could be accessed again.  
> 
> Argh, right.  Btw did you mean multiple "write" not "read", no?
> 

Argh, right! or "write" I mean.

;-)

-- Steve

* Re: [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function
  2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
                   ` (7 preceding siblings ...)
  2017-02-03 13:40 ` [for-next][PATCH 8/8] ftrace: Have set_graph_function handle multiple functions in one write Steven Rostedt
@ 2017-02-03 15:14 ` Namhyung Kim
  8 siblings, 0 replies; 14+ messages in thread
From: Namhyung Kim @ 2017-02-03 15:14 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Fri, Feb 3, 2017 at 10:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>   git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
> for-next
>
> Head SHA1: 092adb1121aec9e0dfa2d07bc160ae60831f4798
>
>
> Steven Rostedt (VMware) (8):
>       tracing: Add ftrace_hash_key() helper function
>       ftrace: Create a slight optimization on searching the ftrace_hash
>       ftrace: Replace (void *)1 with a meaningful macro name FTRACE_GRAPH_EMPTY
>       ftrace: Reset fgd->hash in ftrace_graph_write()
>       ftrace: Have set_graph_functions handle write with RDWR
>       tracing: Reset parser->buffer to allow multiple "puts"
>       ftrace: Do not hold references of ftrace_graph_{notrace_}hash out of graph_lock
>       ftrace: Have set_graph_function handle multiple functions in one write
>
> ----
>  kernel/trace/ftrace.c | 186 ++++++++++++++++++++++++++++++++++----------------
>  kernel/trace/trace.c  |   1 +
>  2 files changed, 129 insertions(+), 58 deletions(-)

For the whole series,

Acked-by: Namhyung Kim <namhyung@kernel.org>

Thanks for fixing and enhancing this,
Namhyung

Thread overview: 14+ messages

2017-02-03 13:40 [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 1/8] tracing: Add ftrace_hash_key() helper function Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 2/8] ftrace: Create a slight optimization on searching the ftrace_hash Steven Rostedt
2017-02-03 14:26   ` Namhyung Kim
2017-02-03 14:57     ` Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 3/8] ftrace: Replace (void *)1 with a meaningful macro name FTRACE_GRAPH_EMPTY Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 4/8] ftrace: Reset fgd->hash in ftrace_graph_write() Steven Rostedt
2017-02-03 14:49   ` Namhyung Kim
2017-02-03 14:57     ` Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 5/8] ftrace: Have set_graph_functions handle write with RDWR Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 6/8] tracing: Reset parser->buffer to allow multiple "puts" Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 7/8] ftrace: Do not hold references of ftrace_graph_{notrace_}hash out of graph_lock Steven Rostedt
2017-02-03 13:40 ` [for-next][PATCH 8/8] ftrace: Have set_graph_function handle multiple functions in one write Steven Rostedt
2017-02-03 15:14 ` [for-next][PATCH 0/8] tracing: Clean up hash logic for set_graph_function Namhyung Kim
