linux-kernel.vger.kernel.org archive mirror
* [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes
@ 2015-01-28  4:30 Steven Rostedt
  2015-01-28  4:30 ` [RFC][PATCH 1/3] ftrace/jprobes/x86: Allow jprobes to be graph traced if using fentry Steven Rostedt
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Steven Rostedt @ 2015-01-28  4:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Thomas Gleixner,
	H. Peter Anvin


I had these patches sitting in my repo for a while, thinking I already
posted them. I never did, so here I go  (a little late :-/)

Basically, what I had before was fixes for jprobes and function graph
tracing that were stepping on each other. When enabling both jprobes
and graph tracer, it could crash the system. The fix was just to ignore
function graph tracing while handling a jprobe. This was fine for stable
and fixing a bug that would usually crash, but it still messes with
function graph trace. It is still required if fentry is not used, but
when fentry is (which is now the majority of cases - gcc > 4.6 and x86_64),
we can trace jprobes with a little trickery.
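
For anyone who hasn't used jprobes: a minimal user looks roughly like
the kernel's jprobe_example sample module. This is only an illustrative
sketch; the probed symbol (do_fork) and the handler prototype are
assumptions from that era, and the prototype must match whatever
function you actually probe:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* The handler runs in place of the probed function, with its arguments */
static long j_do_fork(unsigned long clone_flags, unsigned long stack_start,
		      unsigned long stack_size, int __user *parent_tidptr,
		      int __user *child_tidptr)
{
	pr_info("jprobe: clone_flags = 0x%lx, stack_start = 0x%lx "
		"stack_size = 0x%lx\n", clone_flags, stack_start, stack_size);

	/* Always end with jprobe_return(); it does not return here */
	jprobe_return();
	return 0;
}

static struct jprobe my_jprobe = {
	.entry = j_do_fork,
	.kp = {
		.symbol_name = "do_fork",
	},
};

static int __init jprobe_init(void)
{
	int ret = register_jprobe(&my_jprobe);

	if (ret < 0) {
		pr_err("register_jprobe failed, returned %d\n", ret);
		return ret;
	}
	pr_info("Planted jprobe at %p, handler addr %p\n",
		my_jprobe.kp.addr, my_jprobe.entry);
	return 0;
}

static void __exit jprobe_exit(void)
{
	unregister_jprobe(&my_jprobe);
}

module_init(jprobe_init);
module_exit(jprobe_exit);
MODULE_LICENSE("GPL");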

The way this solves the issue is that on the return from the jprobe,
we can detect whether function graph tracing happened, because the stack
frame would have changed. When this is detected
(saved_sp == return_to_handler), the ip is changed once again to
go perform a "fixup". The real ip is saved in r10 (a callee-clobbered
register) and will be put back by the fixup trampoline.

The second patch always moves the ip to r10, and will either call the
fixup (if the stack frame was modified) or jump to ftrace_trace_addr.
The ftrace_trace_addr will call the function graph code if the
ip address is set to be traced. Otherwise the function graph code
ignores it (which is why it is called on all jprobe returns).
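
To make that concrete, the dispatch in longjmp_break_handler() after
both patches boils down to roughly this (condensed from the diffs
below; not a standalone, compilable snippet):

	/* fentry-only path; saved_sp is the probed function's return slot */
	if (*(void **)saved_sp == return_to_handler) {
		/* graph tracer already swapped the return: run the fixup */
		kcb->jprobe_saved_regs.ip = (unsigned long)fixup_jprobe;
	} else {
		/* let the graph code decide whether to trace this function */
		kcb->jprobe_saved_regs.ip = (unsigned long)ftrace_trace_addr;
	}
	/* either way, the real probed address rides along in r10 */
	kcb->jprobe_saved_regs.r10 = (unsigned long)p->addr;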

The final patch fixes the name of a variable, as Masami suggested.

Steven Rostedt (Red Hat) (3):
      ftrace/jprobes/x86: Allow jprobes to be graph traced if using fentry
      ftrace/jprobes/x86: Have function being probed be graph traced
      ftrace: Rename variable from old_hash_ops to old_ops_hash

----
 arch/x86/include/asm/ftrace.h    |  4 +++
 arch/x86/include/asm/kprobes.h   |  9 +++++
 arch/x86/kernel/kprobes/core.c   | 72 +++++++++++++++++++++++++++++++++++++++-
 arch/x86/kernel/kprobes/ftrace.c | 14 ++++++++
 arch/x86/kernel/mcount_64.S      | 36 +++++++++++++++++++-
 kernel/trace/ftrace.c            | 24 +++++++-------
 6 files changed, 145 insertions(+), 14 deletions(-)

^ permalink raw reply	[flat|nested] 6+ messages in thread

* [RFC][PATCH 1/3] ftrace/jprobes/x86: Allow jprobes to be graph traced if using fentry
  2015-01-28  4:30 [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Steven Rostedt
@ 2015-01-28  4:30 ` Steven Rostedt
  2015-01-28  4:30 ` [RFC][PATCH 2/3] ftrace/jprobes/x86: Have function being probed be graph traced Steven Rostedt
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2015-01-28  4:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Thomas Gleixner,
	H. Peter Anvin

[-- Attachment #1: 0001-ftrace-jprobes-x86-Allow-jprobes-to-be-graph-traced-.patch --]
[-- Type: text/plain, Size: 9427 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Currently when a jprobe handler is called it disables function graph
tracing until the jprobe handler is complete. This means that jprobe
handlers can not be traced, nor can the functions they probe be traced
if fentry is used (the disabling happens around the time the function
graph code is called on the probed function).

If fentry is enabled, then kprobes is used on top of the ftrace
infrastructure when a probe is attached to the beginning of a function
(like jprobes are). In this special case, ftrace has more control of
what happens with the jprobe, and the jprobe can detect that it was
traced.

If the jprobe detects that it was traced by the function graph tracer
(it sees that the return address on the stack was changed to return to
"return_to_handler"), it can return to a fixup function in ftrace that
allows it to repair the return it skipped by calling
jprobe_return().

The solution is twofold. One part is to simply add notrace to
jprobe_return(), as there is no reason to trace what is only
a hack for jprobes. The other is to have jprobes detect this in
the breakpoint handler itself. If it sees that the return address
was replaced by the function graph tracer's return_to_handler
hook, it will then call an ftrace helper function that updates
the ftrace return stack data, and will set regs->ip to return
to a fixup_jprobe() handler. When it returns, the handler
will perform the final fixes and return to the function call.
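
For reference, the "ftrace helper function" above is
ftrace_return_to_handler(); a simplified sketch of its role (an
assumption-level summary, not the actual trampoline, which lives in
mcount_64.S below):

	/*
	 * ftrace_return_to_handler() pops the top ret_stack entry, runs
	 * the graph return callback, and hands back the return address
	 * that the graph tracer had replaced with return_to_handler.
	 * The argument is a frame pointer; the fixup trampoline passes 0
	 * since it does not check frames.
	 */
	unsigned long orig_ret = ftrace_return_to_handler(0);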

Note, the function attached to the jprobe may still not be traced
if the function graph tracer has a filter set, as the regs->ip
is still modified by the jprobe code before it reaches the function
graph code, and it may not match the graph filtered functions.

Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/include/asm/ftrace.h    |  3 ++
 arch/x86/include/asm/kprobes.h   |  9 ++++++
 arch/x86/kernel/kprobes/core.c   | 65 +++++++++++++++++++++++++++++++++++++++-
 arch/x86/kernel/kprobes/ftrace.c | 12 ++++++++
 arch/x86/kernel/mcount_64.S      | 24 ++++++++++++++-
 5 files changed, 111 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index f45acad3c4b6..bfabbb44797f 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -35,6 +35,9 @@ struct dyn_arch_ftrace {
 
 int ftrace_int3_handler(struct pt_regs *regs);
 
+/* Used to keep jprobes from messing with function graph tracing */
+void fixup_jprobe(void);
+
 #define FTRACE_GRAPH_TRAMP_ADDR FTRACE_GRAPH_ADDR
 
 #endif /*  CONFIG_DYNAMIC_FTRACE */
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 4421b5da409d..d4e3d221b896 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -117,4 +117,13 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
 				    unsigned long val, void *data);
 extern int kprobe_int3_handler(struct pt_regs *regs);
 extern int kprobe_debug_handler(struct pt_regs *regs);
+
+/*
+ * If fentry is being used, then it's OK to function graph trace
+ * jprobes, as it is implemented on top of the ftrace infrastructure.
+ */
+#if defined(CC_USING_FENTRY) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
+# define ALLOW_JPROBE_GRAPH_TRACER
+#endif
+
 #endif /* _ASM_X86_KPROBES_H */
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 98f654d466e5..971c3803f283 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1021,19 +1021,22 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 	trace_hardirqs_off();
 	regs->ip = (unsigned long)(jp->entry);
 
+#ifndef ALLOW_JPROBE_GRAPH_TRACER
 	/*
 	 * jprobes use jprobe_return() which skips the normal return
 	 * path of the function, and this causes the accounting of the
 	 * function graph tracer to get messed up.
 	 *
 	 * Pause function graph tracing while performing the jprobe function.
+	 * Unless we allow for jprobes to be traced.
 	 */
 	pause_graph_tracing();
+#endif
 	return 1;
 }
 NOKPROBE_SYMBOL(setjmp_pre_handler);
 
-void jprobe_return(void)
+void notrace jprobe_return(void)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 
@@ -1072,8 +1075,68 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 			show_regs(regs);
 			BUG();
 		}
+/*
+ * When using fentry (which is not yet used by i386), the jprobe handlers
+ * may use the ftrace infrastructure to implement the calling of the kprobe
+ * (and jprobe) handlers. The jprobe pre handler copies the stack frame and
+ * changes the ip address to call the jprobe callback. The callback calls
+ * jprobe_return() which does an int3 breakpoint to call the jprobe breakpoint
+ * handler which will put back the stack frame and set the regs->ip back to the
+ * originally traced function.
+ *
+ * The problem is, the stack frame can be changed by the function graph tracer
+ * to have it call return_to_handler() that does the tracing of the function
+ * when it returns. The jprobe would swap back the original stack frame
+ * and lose the update. That would, in turn, cause the accounting of the
+ * stack frames in ret_stack (which is used to put back the stack frames
+ * that were replaced) to get messed up. The result is that the stack frame
+ * would get the wrong value and cause the function to return to the wrong
+ * place, which is most definitely followed by an oops.
+ *
+ * To make things worse, the jprobe callback's own return never gets
+ * executed, due to it using jprobe_return() instead of a normal return.
+ */
+#ifdef ALLOW_JPROBE_GRAPH_TRACER
+		/*
+		 * If function graph tracing traced the function that the
+		 * jprobe attached to, then the function graph tracing
+		 * would have changed the stack return address to point to
+		 * "return_to_handler". If this is the case, then we need to
+		 * do a bit more work in order to not mess up the function
+		 * graph tracer.
+		 */
+		if (*(void **)saved_sp == return_to_handler) {
+			/*
+			 * The current->ret_stack still has the jprobe callback
+			 * in its list. It can not be removed until we are
+			 * at the function frame that set it (when going back
+			 * to regs->ip). Set the ip to the fixup_jprobe
+			 * trampoline, and the stack to the address that was
+			 * saved by the function graph trace. The fixup_jprobe
+			 * will be able to pop the ret_stack for the jprobe
+			 * handler.
+			 */
+			kcb->jprobe_saved_regs.ip = (unsigned long)fixup_jprobe;
+			/*
+			 * Since we are not returning back to the function
+			 * that was probed, the fixup_jprobe needs a way
+			 * to know what to jump back to. Store that in the
+			 * r10 register which callee functions are allowed
+			 * to clobber. Since r10 can be clobbered by the callee,
+			 * the caller must save it if necessary. As the callee
+			 * (probed function) has not been executed yet, the
+			 * value for r10 currently is not important.
+			 *
+			 * Note, as this only happens with fentry which is
+			 * not supported (yet) by i386, we can use the r10
+			 * field directly here.
+			 */
+			kcb->jprobe_saved_regs.r10 = (unsigned long)p->addr;
+		}
+#else
 		/* It's OK to start function graph tracing again */
 		unpause_graph_tracing();
+#endif
 		*regs = kcb->jprobe_saved_regs;
 		memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
 		preempt_enable_no_resched();
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 5f8f0b3cc674..0fd23d19ed66 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -29,6 +29,18 @@ static nokprobe_inline
 int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		      struct kprobe_ctlblk *kcb, unsigned long orig_ip)
 {
+#ifdef ALLOW_JPROBE_GRAPH_TRACER
+	/*
+	 * If the function graph tracer traced the jprobe entry then
+	 * the ip address would be set to fixup_jprobe, and we do
+	 * not want to modify that (and we expect orig_ip to be
+	 * zero in this case as well). Make sure that the regs->ip
+	 * is set back to fixup_jprobe on exit.
+	 */
+	if (!orig_ip && regs->ip == (unsigned long)fixup_jprobe)
+		orig_ip = regs->ip;
+#endif
+
 	/*
 	 * Emulate singlestep (and also recover regs->ip)
 	 * as if there is a 5byte nop
diff --git a/arch/x86/kernel/mcount_64.S b/arch/x86/kernel/mcount_64.S
index 94ea120fa21f..fe3ccc99530d 100644
--- a/arch/x86/kernel/mcount_64.S
+++ b/arch/x86/kernel/mcount_64.S
@@ -323,4 +323,26 @@ GLOBAL(return_to_handler)
 	movq (%rsp), %rax
 	addq $24, %rsp
 	jmp *%rdi
-#endif
+
+/*
+ * If a jprobe is traced, or the function it probes is traced, then
+ * it needs a special version of return_to_handler, which
+ * gets set from longjmp_break_handler(). This is because the jprobe
+ * callback is not returning to the function that called it, but instead
+ * it is returning to the function it probed.
+ */
+ENTRY(fixup_jprobe)
+	/* longjmp_break_handler() placed the probed function into r10 */
+	addq $MCOUNT_INSN_SIZE, %r10
+	pushq %r10
+	save_mcount_regs
+	/* No need to check frames here */
+	movq $0, %rdi
+	call ftrace_return_to_handler
+	/* Put the return address back to its proper place */
+	movq %rax, MCOUNT_REG_SIZE+8(%rsp)
+	restore_mcount_regs
+	retq
+END(fixup_jprobe)
+
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [RFC][PATCH 2/3] ftrace/jprobes/x86: Have function being probed be graph traced
  2015-01-28  4:30 [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Steven Rostedt
  2015-01-28  4:30 ` [RFC][PATCH 1/3] ftrace/jprobes/x86: Allow jprobes to be graph traced if using fentry Steven Rostedt
@ 2015-01-28  4:30 ` Steven Rostedt
  2015-01-28  4:30 ` [RFC][PATCH 3/3] ftrace: Rename variable from old_hash_ops to old_ops_hash Steven Rostedt
  2015-01-29  6:04 ` [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Masami Hiramatsu
  3 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2015-01-28  4:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Thomas Gleixner,
	H. Peter Anvin

[-- Attachment #1: 0002-ftrace-jprobes-x86-Have-function-being-probed-be-gra.patch --]
[-- Type: text/plain, Size: 5855 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Because jprobes replaces the stack frame in order to call the jprobe
handler with the function's arguments, when kprobes/jprobes uses the
ftrace (fentry) infrastructure for its implementation it interferes
with the function graph tracer.

The jprobe gets set up from the ftrace fentry trampoline, and it
changes the regs->ip from the called function to the jprobe
handler. The function graph tracing happens after the function
tracing, so the function graph code sees the jprobe ip address
instead of the function that was called. If the functions were
filtered, the jprobe ip address would not match what the filter
expects to see, and the graph tracer would not trace the function.

Add a new trampoline called ftrace_trace_addr that jprobes always
call when the jprobe itself was not traced and kprobes uses the
ftrace (fentry) infrastructure. The ftrace_trace_addr trampoline
resets the ip address to the function that was probed and jumps
to ftrace_graph_caller.

In case the jprobe itself was traced, the fixup_jprobe will call
ftrace_graph_caller.

Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/include/asm/ftrace.h    |  1 +
 arch/x86/kernel/kprobes/core.c   | 33 ++++++++++++++++++++-------------
 arch/x86/kernel/kprobes/ftrace.c |  4 +++-
 arch/x86/kernel/mcount_64.S      | 14 +++++++++++++-
 4 files changed, 37 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index bfabbb44797f..d725c816ea05 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -37,6 +37,7 @@ int ftrace_int3_handler(struct pt_regs *regs);
 
 /* Used to keep jprobes from messing with function graph tracing */
 void fixup_jprobe(void);
+void ftrace_trace_addr(void);
 
 #define FTRACE_GRAPH_TRAMP_ADDR FTRACE_GRAPH_ADDR
 
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 971c3803f283..c09eee6f85bb 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1098,6 +1098,22 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
  */
 #ifdef ALLOW_JPROBE_GRAPH_TRACER
 		/*
+		 * Since we are not returning back to the function
+		 * that was probed, the fixup_jprobe and ftrace_trace_addr
+		 * need a way to know what to jump back to. Store that in the
+		 * r10 register, which callee functions are allowed
+		 * to clobber. Since r10 can be clobbered by the callee,
+		 * the caller must save it if necessary. As the callee
+		 * (probed function) has not been executed yet, the
+		 * value for r10 currently is not important.
+		 *
+		 * Note, as this only happens with fentry which is
+		 * not supported (yet) by i386, we can use the r10
+		 * field directly here.
+		 */
+		kcb->jprobe_saved_regs.r10 = (unsigned long)p->addr;
+
+		/*
 		 * If function graph tracing traced the function that the
 		 * jprobe attached to, then the function graph tracing
 		 * would have changed the stack return address to point to
@@ -1117,21 +1133,12 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 			 * handler.
 			 */
 			kcb->jprobe_saved_regs.ip = (unsigned long)fixup_jprobe;
+		} else {
 			/*
-			 * Since we are not returning back to the function
-			 * that was probed, the fixup_jprobe needs a way
-			 * to know what to jump back to. Store that in the
-			 * r10 register which callee functions are allowed
-			 * to clobber. Since r10 can be clobbered by the callee,
-			 * the caller must save it if necessary. As the callee
-			 * (probed function) has not been executed yet, the
-			 * value for r10 currently is not important.
-			 *
-			 * Note, as this only happens with fentry which is
-			 * not supported (yet) by i386, we can use the r10
-			 * field directly here.
+			 * See if function graph tracing is enabled and
+			 * trace this function if necessary.
 			 */
-			kcb->jprobe_saved_regs.r10 = (unsigned long)p->addr;
+			kcb->jprobe_saved_regs.ip = (unsigned long)ftrace_trace_addr;
 		}
 #else
 		/* It's OK to start function graph tracing again */
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 0fd23d19ed66..317377f1df26 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -37,7 +37,9 @@ int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 	 * zero in this case as well). Make sure that the regs->ip
 	 * is set back to fixup_jprobe on exit.
 	 */
-	if (!orig_ip && regs->ip == (unsigned long)fixup_jprobe)
+	if (!orig_ip &&
+	    (regs->ip == (unsigned long)fixup_jprobe ||
+	     regs->ip == (unsigned long)ftrace_trace_addr))
 		orig_ip = regs->ip;
 #endif
 
diff --git a/arch/x86/kernel/mcount_64.S b/arch/x86/kernel/mcount_64.S
index fe3ccc99530d..b6caf6d1e10e 100644
--- a/arch/x86/kernel/mcount_64.S
+++ b/arch/x86/kernel/mcount_64.S
@@ -333,7 +333,6 @@ GLOBAL(return_to_handler)
  */
 ENTRY(fixup_jprobe)
 	/* longjmp_break_handler() placed the probed function into r10 */
-	addq $MCOUNT_INSN_SIZE, %r10
 	pushq %r10
 	save_mcount_regs
 	/* No need to check frames here */
@@ -341,8 +340,21 @@ ENTRY(fixup_jprobe)
 	call ftrace_return_to_handler
 	/* Put the return address back to its proper place */
 	movq %rax, MCOUNT_REG_SIZE+8(%rsp)
+	movq MCOUNT_REG_SIZE(%rsp), %rdi
+	leaq MCOUNT_REG_SIZE+8(%rsp), %rsi
+	movq $0, %rdx	/* No framepointers needed */
+	call prepare_ftrace_return
 	restore_mcount_regs
+	/* Skip over the fentry call */
+	addq $MCOUNT_INSN_SIZE, 0(%rsp)
 	retq
 END(fixup_jprobe)
 
+ENTRY(ftrace_trace_addr)
+	/* longjmp_break_handler() saved our return address in r10 */
+	pushq %r10
+	addq $MCOUNT_INSN_SIZE, 0(%rsp)
+	jmp ftrace_graph_caller
+END(ftrace_trace_addr)
+
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [RFC][PATCH 3/3] ftrace: Rename variable from old_hash_ops to old_ops_hash
  2015-01-28  4:30 [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Steven Rostedt
  2015-01-28  4:30 ` [RFC][PATCH 1/3] ftrace/jprobes/x86: Allow jprobes to be graph traced if using fentry Steven Rostedt
  2015-01-28  4:30 ` [RFC][PATCH 2/3] ftrace/jprobes/x86: Have function being probed be graph traced Steven Rostedt
@ 2015-01-28  4:30 ` Steven Rostedt
  2015-01-29  6:04 ` [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Masami Hiramatsu
  3 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2015-01-28  4:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Thomas Gleixner,
	H. Peter Anvin

[-- Attachment #1: 0003-ftrace-Rename-variable-from-old_hash_ops-to-old_ops_.patch --]
[-- Type: text/plain, Size: 3783 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

The variable name "old_hash_ops" is confusing because it is a hash
and not an ops. It is of type "struct ftrace_ops_hash" and should
be called old_ops_hash.

Link: http://lkml.kernel.org/r/54B79D98.6070800@hitachi.com

Suggested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 224e768bdc73..769b28918b5d 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3639,7 +3639,7 @@ int
 register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 			      void *data)
 {
-	struct ftrace_ops_hash old_hash_ops;
+	struct ftrace_ops_hash old_ops_hash;
 	struct ftrace_func_probe *entry;
 	struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
 	struct ftrace_hash *old_hash = *orig_hash;
@@ -3661,9 +3661,9 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 
 	mutex_lock(&trace_probe_ops.func_hash->regex_lock);
 
-	old_hash_ops.filter_hash = old_hash;
+	old_ops_hash.filter_hash = old_hash;
 	/* Probes only have filters */
-	old_hash_ops.notrace_hash = NULL;
+	old_ops_hash.notrace_hash = NULL;
 
 	hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, old_hash);
 	if (!hash) {
@@ -3725,7 +3725,7 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 
 	ret = ftrace_hash_move(&trace_probe_ops, 1, orig_hash, hash);
 
-	__enable_ftrace_function_probe(&old_hash_ops);
+	__enable_ftrace_function_probe(&old_ops_hash);
 
 	if (!ret)
 		free_ftrace_hash_rcu(old_hash);
@@ -4048,7 +4048,7 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
 		unsigned long ip, int remove, int reset, int enable)
 {
 	struct ftrace_hash **orig_hash;
-	struct ftrace_ops_hash old_hash_ops;
+	struct ftrace_ops_hash old_ops_hash;
 	struct ftrace_hash *old_hash;
 	struct ftrace_hash *hash;
 	int ret;
@@ -4085,11 +4085,11 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
 
 	mutex_lock(&ftrace_lock);
 	old_hash = *orig_hash;
-	old_hash_ops.filter_hash = ops->func_hash->filter_hash;
-	old_hash_ops.notrace_hash = ops->func_hash->notrace_hash;
+	old_ops_hash.filter_hash = ops->func_hash->filter_hash;
+	old_ops_hash.notrace_hash = ops->func_hash->notrace_hash;
 	ret = ftrace_hash_move(ops, enable, orig_hash, hash);
 	if (!ret) {
-		ftrace_ops_update_code(ops, &old_hash_ops);
+		ftrace_ops_update_code(ops, &old_ops_hash);
 		free_ftrace_hash_rcu(old_hash);
 	}
 	mutex_unlock(&ftrace_lock);
@@ -4301,7 +4301,7 @@ static void __init set_ftrace_early_filters(void)
 int ftrace_regex_release(struct inode *inode, struct file *file)
 {
 	struct seq_file *m = (struct seq_file *)file->private_data;
-	struct ftrace_ops_hash old_hash_ops;
+	struct ftrace_ops_hash old_ops_hash;
 	struct ftrace_iterator *iter;
 	struct ftrace_hash **orig_hash;
 	struct ftrace_hash *old_hash;
@@ -4335,12 +4335,12 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
 
 		mutex_lock(&ftrace_lock);
 		old_hash = *orig_hash;
-		old_hash_ops.filter_hash = iter->ops->func_hash->filter_hash;
-		old_hash_ops.notrace_hash = iter->ops->func_hash->notrace_hash;
+		old_ops_hash.filter_hash = iter->ops->func_hash->filter_hash;
+		old_ops_hash.notrace_hash = iter->ops->func_hash->notrace_hash;
 		ret = ftrace_hash_move(iter->ops, filter_hash,
 				       orig_hash, iter->hash);
 		if (!ret) {
-			ftrace_ops_update_code(iter->ops, &old_hash_ops);
+			ftrace_ops_update_code(iter->ops, &old_ops_hash);
 			free_ftrace_hash_rcu(old_hash);
 		}
 		mutex_unlock(&ftrace_lock);
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes
  2015-01-28  4:30 [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Steven Rostedt
                   ` (2 preceding siblings ...)
  2015-01-28  4:30 ` [RFC][PATCH 3/3] ftrace: Rename variable from old_hash_ops to old_ops_hash Steven Rostedt
@ 2015-01-29  6:04 ` Masami Hiramatsu
  2015-01-29 13:48   ` Steven Rostedt
  3 siblings, 1 reply; 6+ messages in thread
From: Masami Hiramatsu @ 2015-01-29  6:04 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, Thomas Gleixner,
	H. Peter Anvin

(2015/01/28 13:30), Steven Rostedt wrote:
> I had these patches sitting in my repo for a while, thinking I already
> posted them. I never did, so here I go  (a little late :-/)

Oops, I might have missed that.

> Basically, what I had before was fixes for jprobes and function graph
> tracing that were stepping on each other. When enabling both jprobes
> and graph tracer, it could crash the system. The fix was just to ignore
> function graph tracing while handling a jprobe. This was fine for stable
> and fixing a bug that would usually crash, but it still messes with
> function graph trace. It is still required if fentry is not used, but
> when fentry is (which is now the majority of cases - gcc > 4.6 and x86_64),
> we can trace jprobes with a little trickery.
> 
> The way this solves the issue is that on the return from the jprobe,
> we can detect whether function graph tracing happened, because the stack
> frame would have changed. When this is detected
> (saved_sp == return_to_handler), the ip is changed once again to
> go perform a "fixup". The real ip is saved in r10 (a callee-clobbered
> register) and will be put back by the fixup trampoline.

Hmm, could you make this more generic? Maybe we can directly call
ftrace_return_to_handler() from longjmp_break_handler().
Actually, the current implementation seems to skip just one return
address; however, it may be possible to call jprobe_return() in
nested functions, like below:

void test_exit(int flag)
{
	if (!flag)
		jprobe_return();
}
int jdo_fork(...)
{
	...
	test_exit(something_to_test);
	...
	jprobe_return();
	return 0;
}

I've tested similar code on this series and it crashed the kernel.
(I also checked that the above example works safely without the graph tracer.)

[58149.913578] Planted jprobe at ffffffff8109a2a0, handler addr ffffffffa007e030
[58152.642572] jprobe: clone_flags = 0x1200011, stack_start = 0x0 stack_size = 0x0
[58152.642923] 	never_ret: 18874385,           (null)
[58152.664210] jprobe at ffffffff8109a2a0 unregistered
[58152.667840] kernel tried to execute NX-protected page - exploit attempt? (uid: 1000)
[58152.668796] BUG: unable to handle kernel paging request at 00007fffae3c30a0
[58152.668796] IP: [<00007fffae3c30a0>] 0x7fffae3c30a0
[58152.668796] PGD 42cb66067 PUD 42b708067 PMD 42a464067 PTE 8000000413146065
[58152.668796] Oops: 0011 [#1] SMP
[58152.668796] Modules linked in: jprobe_example(E) nls_utf8 isofs cirrus drm_kms_helper ttm drm ppdev crct10dif_pclmul crc32_pclmul parport_pc virtio_net i2c_piix4 parport crc32c_intel virtio_balloon
serio_raw ghash_clmulni_intel virtio_blk virtio_pci virtio_ring ata_generic virtio pata_acpi [last unloaded: jprobe_example]
[58152.668796] CPU: 0 PID: 12992 Comm: bash Tainted: G            E  3.19.0-rc3+ #1
[58152.668796] Hardware name: Fedora Project OpenStack Nova, BIOS 1.7.5-20140709_153950- 04/01/2014
[58152.668796] task: ffff88042c4fb8a0 ti: ffff88042b638000 task.ti: ffff88042b638000
[58152.668796] RIP: 0010:[<00007fffae3c30a0>]  [<00007fffae3c30a0>] 0x7fffae3c30a0
[58152.668796] RSP: 0018:ffff88042b63bf48  EFLAGS: 00010282
[58152.668796] RAX: 0000000000000027 RBX: 0000000000000000 RCX: ffff8800bbae0100
[58152.668796] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa007e089
[58152.668796] RBP: ffff88042b63bf40 R08: 0000000000000cc8 R09: ffff88042bfffcd8
[58152.668796] R10: 000000000000000c R11: 0000000000000cc8 R12: 00007fffae3c3060
[58152.668796] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000001d63470
[58152.668796] FS:  00007fdfbb57a700(0000) GS:ffff88043fc00000(0000) knlGS:0000000000000000
[58152.668796] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[58152.668796] CR2: 00007fffae3c30a0 CR3: 000000042b58b000 CR4: 00000000000406f0
[58152.668796] Stack:
[58152.668796]  ffffffff8176a9c8 ffffffff817682e9 0000000001d63470 0000000000000000
[58152.668796]  0000000000000000 00007fffae3c3060 00007fffae3c30a0 0000000000000000
[58152.668796]  0000000000000246 00007fdfbb57a9d0 0000000000000000 0000000000000000
[58152.668796] Call Trace:
[58152.668796]  [<ffffffff8176a9c8>] ftrace_graph_caller+0xa8/0xa8
[58152.668796]  [<ffffffff8106370d>] __bad_area_nosemaphore+0x8d/0x220
[58152.668796]  [<ffffffff817682e9>] ? system_call_fastpath+0x12/0x17
[58152.668796] Code:  Bad RIP value.
[58152.668796] RIP  [<00007fffae3c30a0>] 0x7fffae3c30a0
[58152.668796]  RSP <ffff88042b63bf48>
[58152.668796] CR2: 00007fffae3c30a0
[58152.668796] ------------[ cut here ]------------

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes
  2015-01-29  6:04 ` [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Masami Hiramatsu
@ 2015-01-29 13:48   ` Steven Rostedt
  0 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2015-01-29 13:48 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, Thomas Gleixner,
	H. Peter Anvin

On Thu, 29 Jan 2015 15:04:44 +0900
Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:
 
> Hmm, could you make this more generic? Maybe we can directly call
> ftrace_return_to_handler() from longjmp_break_handler().

I tried that, but because the longjmp handlers can also be traced (and
I still want them to be :-), it makes things even more complicated to
get the stacks right. It's best to deal with the stack when you can.

> Actually, current implementation seems just skipping one return
> address, however, there may be possible to call jprobe_return() in
> the nested functions, like below;
> 
> void test_exit(int flag)
> {
> 	if (!flag)
> 		jprobe_return();
> }
> int jdo_fork(...)
> {
> 	...
> 	test_exit(something_to_test);
> 	...
> 	jprobe_return();
> 	return 0;
> }
> 
> I've tested similar code on this series and it crashed kernel.
> (I also checked that the above example can work safely without graph tracer)
> 

Ah yes, I missed this. I have a way to solve this but it's not going to
get done before the merge window. Thanks for the test case. I'll use
this and more complicated ones for future changes.

Thanks!

-- Steve


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2015-01-29 13:48 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-28  4:30 [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Steven Rostedt
2015-01-28  4:30 ` [RFC][PATCH 1/3] ftrace/jprobes/x86: Allow jprobes to be graph traced if using fentry Steven Rostedt
2015-01-28  4:30 ` [RFC][PATCH 2/3] ftrace/jprobes/x86: Have function being probed be graph traced Steven Rostedt
2015-01-28  4:30 ` [RFC][PATCH 3/3] ftrace: Rename variable from old_hash_ops to old_ops_hash Steven Rostedt
2015-01-29  6:04 ` [RFC][PATCH 0/3] kprobes/ftrace/x86: Function graph trace jprobes Masami Hiramatsu
2015-01-29 13:48   ` Steven Rostedt
