* [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again
@ 2018-01-23 18:32 Steven Rostedt
  2018-01-23 18:32 ` [PATCH 1/3] x86/ftrace: Fix ORC unwinding from ftrace handlers Steven Rostedt
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Steven Rostedt @ 2018-01-23 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf


With the new ORC unwinder, ftrace stack tracing became dysfunctional.

One was that ORC didn't know how to handle the ftrace callbacks in
general (which Josh fixed). The other was that ORC would just bail
if it hit a dynamically allocated trampoline. I added a check to
the ORC unwinder to see if the trampoline belonged to ftrace, and
if it did, use the orc entry of the static trampoline that was used
to create the dynamic one (it would be identical).
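
In rough C, the lookup added in patch 2 does the following (a simplified
sketch; see that patch below for the real orc_ftrace_find()):

	struct ftrace_ops *ops;
	unsigned long caller;
	struct orc_entry *orc = NULL;

	ops = ftrace_ops_trampoline(ip);	/* which ftrace_ops owns this trampoline? */
	if (ops) {
		/* dynamic trampolines are copies of the static ftrace trampolines */
		if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
			caller = (unsigned long)ftrace_regs_call;
		else
			caller = (unsigned long)ftrace_call;
		/* reuse the ORC entry generated for the static code */
		orc = orc_find(caller);
	}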

Finally, I noticed that the skip values used by the stack tracing were out
of whack. I went through and fixed them.

Anyone have any issues with these patches? I'm starting my tests on
them now and if all goes well, I plan on pushing them to Linus
hopefully tonight.

Thanks!

-- Steve


Josh Poimboeuf (1):
      x86/ftrace: Fix ORC unwinding from ftrace handlers

Steven Rostedt (VMware) (2):
      ftrace, orc, x86: Handle ftrace dynamically allocated trampolines
      tracing: Update stack trace skipping for ORC unwinder

----
 arch/x86/kernel/Makefile            |  5 +++-
 arch/x86/kernel/ftrace_64.S         | 24 +++++++++++-------
 arch/x86/kernel/unwind_orc.c        | 48 +++++++++++++++++++++++++++++++++++-
 include/linux/ftrace.h              |  2 ++
 kernel/trace/ftrace.c               | 26 +++++++++++---------
 kernel/trace/trace.c                | 34 ++++++++++++++-----------
 kernel/trace/trace_events_trigger.c | 13 ++++++++--
 kernel/trace/trace_functions.c      | 49 +++++++++++++++++++++++++++----------
 8 files changed, 150 insertions(+), 51 deletions(-)

* [PATCH 1/3] x86/ftrace: Fix ORC unwinding from ftrace handlers
  2018-01-23 18:32 [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Steven Rostedt
@ 2018-01-23 18:32 ` Steven Rostedt
  2018-01-23 18:32 ` [PATCH 2/3] ftrace, orc, x86: Handle ftrace dynamically allocated trampolines Steven Rostedt
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2018-01-23 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf

From: Josh Poimboeuf <jpoimboe@redhat.com>

Steven Rostedt discovered that the ftrace stack tracer is broken when
it's used with the ORC unwinder.  The problem is that objtool is
instructed by the Makefile to ignore the ftrace_64.S code, so it doesn't
generate any ORC data for it.

Fix it by making the asm code objtool-friendly:

- Objtool doesn't like the fact that save_mcount_regs pushes RBP at the
  beginning, but it's never restored (directly, at least).  So just skip
  the original RBP push, which is only needed for frame pointers anyway.

- Annotate some functions as normal callable functions with
  ENTRY/ENDPROC.

- Add an empty unwind hint to return_to_handler().  The return address
  isn't on the stack, so there's nothing ORC can do there.  It will just
  punt in the unlikely case it tries to unwind from that code.

With all that fixed, remove the OBJECT_FILES_NON_STANDARD Makefile
annotation so objtool can read the file.

Link: http://lkml.kernel.org/r/20180123040746.ih4ep3tk4pbjvg7c@treble

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/x86/kernel/Makefile    |  5 ++++-
 arch/x86/kernel/ftrace_64.S | 24 +++++++++++++++---------
 2 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 81bb565f4497..7e2baf7304ae 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -29,10 +29,13 @@ KASAN_SANITIZE_stacktrace.o				:= n
 KASAN_SANITIZE_paravirt.o				:= n
 
 OBJECT_FILES_NON_STANDARD_relocate_kernel_$(BITS).o	:= y
-OBJECT_FILES_NON_STANDARD_ftrace_$(BITS).o		:= y
 OBJECT_FILES_NON_STANDARD_test_nx.o			:= y
 OBJECT_FILES_NON_STANDARD_paravirt_patch_$(BITS).o	:= y
 
+ifdef CONFIG_FRAME_POINTER
+OBJECT_FILES_NON_STANDARD_ftrace_$(BITS).o		:= y
+endif
+
 # If instrumentation of this dir is enabled, boot hangs during first second.
 # Probably could be more selective here, but note that files related to irqs,
 # boot, dumpstack/stacktrace, etc are either non-interesting or can lead to
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 7cb8ba08beb9..ef61f540cf0a 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -8,6 +8,7 @@
 #include <asm/ftrace.h>
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
+#include <asm/unwind_hints.h>
 
 	.code64
 	.section .entry.text, "ax"
@@ -20,7 +21,6 @@ EXPORT_SYMBOL(__fentry__)
 EXPORT_SYMBOL(mcount)
 #endif
 
-/* All cases save the original rbp (8 bytes) */
 #ifdef CONFIG_FRAME_POINTER
 # ifdef CC_USING_FENTRY
 /* Save parent and function stack frames (rip and rbp) */
@@ -31,7 +31,7 @@ EXPORT_SYMBOL(mcount)
 # endif
 #else
 /* No need to save a stack frame */
-# define MCOUNT_FRAME_SIZE	8
+# define MCOUNT_FRAME_SIZE	0
 #endif /* CONFIG_FRAME_POINTER */
 
 /* Size of stack used to save mcount regs in save_mcount_regs */
@@ -64,10 +64,10 @@ EXPORT_SYMBOL(mcount)
  */
 .macro save_mcount_regs added=0
 
-	/* Always save the original rbp */
+#ifdef CONFIG_FRAME_POINTER
+	/* Save the original rbp */
 	pushq %rbp
 
-#ifdef CONFIG_FRAME_POINTER
 	/*
 	 * Stack traces will stop at the ftrace trampoline if the frame pointer
 	 * is not set up properly. If fentry is used, we need to save a frame
@@ -105,7 +105,11 @@ EXPORT_SYMBOL(mcount)
 	 * Save the original RBP. Even though the mcount ABI does not
 	 * require this, it helps out callers.
 	 */
+#ifdef CONFIG_FRAME_POINTER
 	movq MCOUNT_REG_SIZE-8(%rsp), %rdx
+#else
+	movq %rbp, %rdx
+#endif
 	movq %rdx, RBP(%rsp)
 
 	/* Copy the parent address into %rsi (second parameter) */
@@ -148,7 +152,7 @@ EXPORT_SYMBOL(mcount)
 
 ENTRY(function_hook)
 	retq
-END(function_hook)
+ENDPROC(function_hook)
 
 ENTRY(ftrace_caller)
 	/* save_mcount_regs fills in first two parameters */
@@ -184,7 +188,7 @@ GLOBAL(ftrace_graph_call)
 /* This is weak to keep gas from relaxing the jumps */
 WEAK(ftrace_stub)
 	retq
-END(ftrace_caller)
+ENDPROC(ftrace_caller)
 
 ENTRY(ftrace_regs_caller)
 	/* Save the current flags before any operations that can change them */
@@ -255,7 +259,7 @@ GLOBAL(ftrace_regs_caller_end)
 
 	jmp ftrace_epilogue
 
-END(ftrace_regs_caller)
+ENDPROC(ftrace_regs_caller)
 
 
 #else /* ! CONFIG_DYNAMIC_FTRACE */
@@ -313,9 +317,10 @@ ENTRY(ftrace_graph_caller)
 	restore_mcount_regs
 
 	retq
-END(ftrace_graph_caller)
+ENDPROC(ftrace_graph_caller)
 
-GLOBAL(return_to_handler)
+ENTRY(return_to_handler)
+	UNWIND_HINT_EMPTY
 	subq  $24, %rsp
 
 	/* Save the return values */
@@ -330,4 +335,5 @@ GLOBAL(return_to_handler)
 	movq (%rsp), %rax
 	addq $24, %rsp
 	JMP_NOSPEC %rdi
+END(return_to_handler)
 #endif
-- 
2.15.1

* [PATCH 2/3] ftrace, orc, x86: Handle ftrace dynamically allocated trampolines
  2018-01-23 18:32 [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Steven Rostedt
  2018-01-23 18:32 ` [PATCH 1/3] x86/ftrace: Fix ORC unwinding from ftrace handlers Steven Rostedt
@ 2018-01-23 18:32 ` Steven Rostedt
  2018-01-23 18:32 ` [PATCH 3/3] tracing: Update stack trace skipping for ORC unwinder Steven Rostedt
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2018-01-23 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The function tracer can create a dynamically allocated trampoline that is
called by a function's mcount or fentry hook and is used to call the
registered function callback. The problem is that the ORC unwinder will
bail if it encounters one of these trampolines. This breaks stack traces
taken from function callbacks, which includes the stack tracer and stack
traces requested for individual functions.

Since these dynamic trampolines are basically copies of the static ftrace
trampolines defined in ftrace_*.S, we do not need to create new ORC entries
for them. Finding the return address on the stack works exactly as it does
for the static functions that were copied to create the dynamic
trampolines. When the unwinder encounters an ftrace dynamic trampoline, it
can simply use the ORC entry of the static ftrace function that was copied
for that trampoline.
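
To see why the entries match: the dynamic trampoline is created by copying
the bytes of the static trampoline into newly allocated executable memory,
so the stack layout at every corresponding instruction is the same. Roughly
(a hypothetical sketch only; the real allocation code lives in
arch/x86/kernel/ftrace.c and the start/end symbols here are illustrative):

	size = trampoline_end - trampoline_start;	/* illustrative symbols */
	dyn = module_alloc(size);			/* executable memory for the copy */
	memcpy(dyn, trampoline_start, size);		/* same instructions, same stack usage */
	ops->trampoline = (unsigned long)dyn;
	ops->trampoline_size = size;
	/* The ORC data describing trampoline_start now also describes dyn */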

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/x86/kernel/unwind_orc.c | 48 +++++++++++++++++++++++++++++++++++++++++++-
 include/linux/ftrace.h       |  2 ++
 kernel/trace/ftrace.c        | 26 ++++++++++++++----------
 3 files changed, 64 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index be86a865087a..bcc295c0a12f 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -74,8 +74,50 @@ static struct orc_entry *orc_module_find(unsigned long ip)
 }
 #endif
 
+#ifdef CONFIG_FUNCTION_TRACER
+static struct orc_entry *orc_find(unsigned long ip);
+
+/*
+ * Ftrace dynamic trampolines do not have orc entries of their own.
+ * But they are copies of the ftrace entries that are static and
+ * defined in ftrace_*.S, which do have orc entries.
+ *
+ * If the unwinder comes across a ftrace trampoline, then find the
+ * ftrace function that was used to create it, and use that ftrace
+ * function's orc entry, as the placement of the return code in
+ * the stack will be identical.
+ */
+static struct orc_entry *orc_ftrace_find(unsigned long ip)
+{
+	struct ftrace_ops *ops;
+	unsigned long caller;
+
+	ops = ftrace_ops_trampoline(ip);
+	if (!ops)
+		return NULL;
+
+	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
+		caller = (unsigned long)ftrace_regs_call;
+	else
+		caller = (unsigned long)ftrace_call;
+
+	/* Prevent unlikely recursion */
+	if (ip == caller)
+		return NULL;
+
+	return orc_find(caller);
+}
+#else
+static struct orc_entry *orc_ftrace_find(unsigned long ip)
+{
+	return NULL;
+}
+#endif
+
 static struct orc_entry *orc_find(unsigned long ip)
 {
+	static struct orc_entry *orc;
+
 	if (!orc_init)
 		return NULL;
 
@@ -111,7 +153,11 @@ static struct orc_entry *orc_find(unsigned long ip)
 				  __stop_orc_unwind_ip - __start_orc_unwind_ip, ip);
 
 	/* Module lookup: */
-	return orc_module_find(ip);
+	orc = orc_module_find(ip);
+	if (orc)
+		return orc;
+
+	return orc_ftrace_find(ip);
 }
 
 static void orc_sort_swap(void *_a, void *_b, int size)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 2bab81951ced..3319df9727aa 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -332,6 +332,8 @@ extern int ftrace_text_reserved(const void *start, const void *end);
 
 extern int ftrace_nr_registered_ops(void);
 
+struct ftrace_ops *ftrace_ops_trampoline(unsigned long addr);
+
 bool is_ftrace_trampoline(unsigned long addr);
 
 /*
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index ccdf3664e4a9..bc1d66fe3728 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1119,15 +1119,11 @@ static struct ftrace_ops global_ops = {
 };
 
 /*
- * This is used by __kernel_text_address() to return true if the
- * address is on a dynamically allocated trampoline that would
- * not return true for either core_kernel_text() or
- * is_module_text_address().
+ * Used by the stack unwinder to know about dynamic ftrace trampolines.
  */
-bool is_ftrace_trampoline(unsigned long addr)
+struct ftrace_ops *ftrace_ops_trampoline(unsigned long addr)
 {
-	struct ftrace_ops *op;
-	bool ret = false;
+	struct ftrace_ops *op = NULL;
 
 	/*
 	 * Some of the ops may be dynamically allocated,
@@ -1143,16 +1139,24 @@ bool is_ftrace_trampoline(unsigned long addr)
 		 */
 		if (op->trampoline && op->trampoline_size)
 			if (addr >= op->trampoline &&
-			    addr < op->trampoline + op->trampoline_size) {
-				ret = true;
+			    addr < op->trampoline + op->trampoline_size)
 				goto out;
-			}
 	} while_for_each_ftrace_op(op);
 
  out:
 	preempt_enable_notrace();
+	return op;
+}
 
-	return ret;
+/*
+ * This is used by __kernel_text_address() to return true if the
+ * address is on a dynamically allocated trampoline that would
+ * not return true for either core_kernel_text() or
+ * is_module_text_address().
+ */
+bool is_ftrace_trampoline(unsigned long addr)
+{
+	return ftrace_ops_trampoline(addr) != NULL;
 }
 
 struct ftrace_page {
-- 
2.15.1

* [PATCH 3/3] tracing: Update stack trace skipping for ORC unwinder
  2018-01-23 18:32 [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Steven Rostedt
  2018-01-23 18:32 ` [PATCH 1/3] x86/ftrace: Fix ORC unwinding from ftrace handlers Steven Rostedt
  2018-01-23 18:32 ` [PATCH 2/3] ftrace, orc, x86: Handle ftrace dynamically allocated trampolines Steven Rostedt
@ 2018-01-23 18:32 ` Steven Rostedt
  2018-01-24  7:46 ` [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Ingo Molnar
  2018-01-24  8:59 ` Nikolay Borisov
  4 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2018-01-23 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

With the addition of the ORC unwinder alongside the frame pointer
unwinder, the stack trace skipping requirements have changed.

I went through the tracing stack trace dumps with ORC and with frame
pointers and recalculated the proper values.
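
For reference, the skip value is the number of entries save_stack_trace()
drops from the top of the captured trace, so the saved trace starts at the
event's caller instead of inside the tracing code itself. A minimal sketch
of how it is consumed (using the struct stack_trace API that
__ftrace_trace_stack() is built on):

	unsigned long entries[8];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
		.skip		= STACK_SKIP,	/* drop the tracing helpers themselves */
	};

	save_stack_trace(&trace);
	/* trace.entries[0] is now the caller we actually care about */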

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c                | 34 ++++++++++++++-----------
 kernel/trace/trace_events_trigger.c | 13 ++++++++--
 kernel/trace/trace_functions.c      | 49 +++++++++++++++++++++++++++----------
 3 files changed, 67 insertions(+), 29 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2a8d8a294345..8e3f20a18a06 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2374,6 +2374,15 @@ void trace_event_buffer_commit(struct trace_event_buffer *fbuffer)
 }
 EXPORT_SYMBOL_GPL(trace_event_buffer_commit);
 
+/*
+ * Skip 3:
+ *
+ *   trace_buffer_unlock_commit_regs()
+ *   trace_event_buffer_commit()
+ *   trace_event_raw_event_xxx()
+*/
+# define STACK_SKIP 3
+
 void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 				     struct ring_buffer *buffer,
 				     struct ring_buffer_event *event,
@@ -2383,16 +2392,12 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 	__buffer_unlock_commit(buffer, event);
 
 	/*
-	 * If regs is not set, then skip the following callers:
-	 *   trace_buffer_unlock_commit_regs
-	 *   event_trigger_unlock_commit
-	 *   trace_event_buffer_commit
-	 *   trace_event_raw_event_sched_switch
+	 * If regs is not set, then skip the necessary functions.
 	 * Note, we can still get here via blktrace, wakeup tracer
 	 * and mmiotrace, but that's ok if they lose a function or
-	 * two. They are that meaningful.
+	 * two. They are not that meaningful.
 	 */
-	ftrace_trace_stack(tr, buffer, flags, regs ? 0 : 4, pc, regs);
+	ftrace_trace_stack(tr, buffer, flags, regs ? 0 : STACK_SKIP, pc, regs);
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 
@@ -2579,11 +2584,13 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	trace.skip		= skip;
 
 	/*
-	 * Add two, for this function and the call to save_stack_trace()
+	 * Add one, for this function and the call to save_stack_trace()
 	 * If regs is set, then these functions will not be in the way.
 	 */
+#ifndef CONFIG_UNWINDER_ORC
 	if (!regs)
-		trace.skip += 2;
+		trace.skip++;
+#endif
 
 	/*
 	 * Since events can happen in NMIs there's no safe way to
@@ -2711,11 +2718,10 @@ void trace_dump_stack(int skip)
 
 	local_save_flags(flags);
 
-	/*
-	 * Skip 3 more, seems to get us at the caller of
-	 * this function.
-	 */
-	skip += 3;
+#ifndef CONFIG_UNWINDER_ORC
+	/* Skip 1 to skip this function. */
+	skip++;
+#endif
 	__ftrace_trace_stack(global_trace.trace_buffer.buffer,
 			     flags, skip, preempt_count(), NULL);
 }
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index f2ac9d44f6c4..87411482a46f 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -1123,13 +1123,22 @@ static __init int register_trigger_snapshot_cmd(void) { return 0; }
 #endif /* CONFIG_TRACER_SNAPSHOT */
 
 #ifdef CONFIG_STACKTRACE
+#ifdef CONFIG_UNWINDER_ORC
+/* Skip 2:
+ *   event_triggers_post_call()
+ *   trace_event_raw_event_xxx()
+ */
+# define STACK_SKIP 2
+#else
 /*
- * Skip 3:
+ * Skip 4:
  *   stacktrace_trigger()
  *   event_triggers_post_call()
+ *   trace_event_buffer_commit()
  *   trace_event_raw_event_xxx()
  */
-#define STACK_SKIP 3
+#define STACK_SKIP 4
+#endif
 
 static void
 stacktrace_trigger(struct event_trigger_data *data, void *rec)
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index 27f7ad12c4b1..b611cd36e22d 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -154,6 +154,24 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
 	preempt_enable_notrace();
 }
 
+#ifdef CONFIG_UNWINDER_ORC
+/*
+ * Skip 2:
+ *
+ *   function_stack_trace_call()
+ *   ftrace_call()
+ */
+#define STACK_SKIP 2
+#else
+/*
+ * Skip 3:
+ *   __trace_stack()
+ *   function_stack_trace_call()
+ *   ftrace_call()
+ */
+#define STACK_SKIP 3
+#endif
+
 static void
 function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
 			  struct ftrace_ops *op, struct pt_regs *pt_regs)
@@ -180,15 +198,7 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
 	if (likely(disabled == 1)) {
 		pc = preempt_count();
 		trace_function(tr, ip, parent_ip, flags, pc);
-		/*
-		 * skip over 5 funcs:
-		 *    __ftrace_trace_stack,
-		 *    __trace_stack,
-		 *    function_stack_trace_call
-		 *    ftrace_list_func
-		 *    ftrace_call
-		 */
-		__trace_stack(tr, flags, 5, pc);
+		__trace_stack(tr, flags, STACK_SKIP, pc);
 	}
 
 	atomic_dec(&data->disabled);
@@ -367,14 +377,27 @@ ftrace_traceoff(unsigned long ip, unsigned long parent_ip,
 	tracer_tracing_off(tr);
 }
 
+#ifdef CONFIG_UNWINDER_ORC
 /*
- * Skip 4:
+ * Skip 3:
+ *
+ *   function_trace_probe_call()
+ *   ftrace_ops_assist_func()
+ *   ftrace_call()
+ */
+#define FTRACE_STACK_SKIP 3
+#else
+/*
+ * Skip 5:
+ *
+ *   __trace_stack()
  *   ftrace_stacktrace()
  *   function_trace_probe_call()
- *   ftrace_ops_list_func()
+ *   ftrace_ops_assist_func()
  *   ftrace_call()
  */
-#define STACK_SKIP 4
+#define FTRACE_STACK_SKIP 5
+#endif
 
 static __always_inline void trace_stack(struct trace_array *tr)
 {
@@ -384,7 +407,7 @@ static __always_inline void trace_stack(struct trace_array *tr)
 	local_save_flags(flags);
 	pc = preempt_count();
 
-	__trace_stack(tr, flags, STACK_SKIP, pc);
+	__trace_stack(tr, flags, FTRACE_STACK_SKIP, pc);
 }
 
 static void
-- 
2.15.1

* Re: [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again
  2018-01-23 18:32 [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Steven Rostedt
                   ` (2 preceding siblings ...)
  2018-01-23 18:32 ` [PATCH 3/3] tracing: Update stack trace skipping for ORC unwinder Steven Rostedt
@ 2018-01-24  7:46 ` Ingo Molnar
  2018-01-24  8:49   ` Steven Rostedt
  2018-01-24  8:59 ` Nikolay Borisov
  4 siblings, 1 reply; 9+ messages in thread
From: Ingo Molnar @ 2018-01-24  7:46 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf, Linus Torvalds


* Steven Rostedt <rostedt@goodmis.org> wrote:

> 
> With the new ORC unwinder, ftrace stack tracing became dysfunctional.
> 
> One was that ORC didn't know how to handle the ftrace callbacks in
> general (which Josh fixed). The other was that ORC would just bail
> if it hit a dynamically allocated trampoline. I added a check to
> the ORC unwinder to see if the trampoline belonged to ftrace, and
> if it did, use the orc entry of the static trampoline that was used
> to create the dynamic one (it would be identical).
> 
> Finally, I noticed that the skip values used by the stack tracing were out
> of whack. I went through and fixed them.
> 
> Anyone have any issues with these patches? I'm starting my tests on
> them now and if all goes well, I plan on pushing them to Linus
> hopefully tonight.

No fundamental objections from me, assuming they are well tested.

Thanks,

	Ingo

* Re: [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again
  2018-01-24  7:46 ` [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Ingo Molnar
@ 2018-01-24  8:49   ` Steven Rostedt
  2018-01-24 10:04     ` Steven Rostedt
  0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2018-01-24  8:49 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf, Linus Torvalds

On Wed, 24 Jan 2018 08:46:50 +0100
Ingo Molnar <mingo@kernel.org> wrote:

 
> No fundamental objections from me, assuming they are well tested.
>

Yeah, I ran them through my ftrace test suite, and they did fine until I
hit test 20 of 34, which tests static ftrace (CONFIG_DYNAMIC_FTRACE not
set), and it crashed. I'm hoping to have it fixed and retested today.

Thanks!

-- Steve

* Re: [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again
  2018-01-23 18:32 [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Steven Rostedt
                   ` (3 preceding siblings ...)
  2018-01-24  7:46 ` [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again Ingo Molnar
@ 2018-01-24  8:59 ` Nikolay Borisov
  4 siblings, 0 replies; 9+ messages in thread
From: Nikolay Borisov @ 2018-01-24  8:59 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Josh Poimboeuf



On 23.01.2018 20:32, Steven Rostedt wrote:
> With the new ORC unwinder, ftrace stack tracing became dysfunctional.
> 
> One was that ORC didn't know how to handle the ftrace callbacks in
> general (which Josh fixed). The other was that ORC would just bail
> if it hit a dynamically allocated trampoline. I added a check to
> the ORC unwinder to see if the trampoline belonged to ftrace, and
> if it did, use the orc entry of the static trampoline that was used
> to create the dynamic one (it would be identical).
> 
> Finally, I noticed that the skip values used by the stack tracing were out
> of whack. I went through and fixed them.
> 
> Anyone have any issues with these patches? I'm starting my tests on
> them now and if all goes well, I plan on pushing them to Linus
> hopefully tonight.
> 
> Thanks!

FWIW, with this series ftrace with ORC now produces correct stack traces
both for events and for function tracing. Furthermore, with the frame
pointer unwinder I no longer see chopped-off stack entries.

> 
> -- Steve
> 
> 
> Josh Poimboeuf (1):
>       x86/ftrace: Fix ORC unwinding from ftrace handlers
> 
> Steven Rostedt (VMware) (2):
>       ftrace, orc, x86: Handle ftrace dynamically allocated trampolines
>       tracing: Update stack trace skipping for ORC unwinder
> 
> ----
>  arch/x86/kernel/Makefile            |  5 +++-
>  arch/x86/kernel/ftrace_64.S         | 24 +++++++++++-------
>  arch/x86/kernel/unwind_orc.c        | 48 +++++++++++++++++++++++++++++++++++-
>  include/linux/ftrace.h              |  2 ++
>  kernel/trace/ftrace.c               | 26 +++++++++++---------
>  kernel/trace/trace.c                | 34 ++++++++++++++-----------
>  kernel/trace/trace_events_trigger.c | 13 ++++++++--
>  kernel/trace/trace_functions.c      | 49 +++++++++++++++++++++++++++----------
>  8 files changed, 150 insertions(+), 51 deletions(-)
> 

* Re: [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again
  2018-01-24  8:49   ` Steven Rostedt
@ 2018-01-24 10:04     ` Steven Rostedt
  2018-01-24 12:54       ` Steven Rostedt
  0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2018-01-24 10:04 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf, Linus Torvalds

On Wed, 24 Jan 2018 03:49:04 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Wed, 24 Jan 2018 08:46:50 +0100
> Ingo Molnar <mingo@kernel.org> wrote:
> 
>  
> > No fundamental objections from me, assuming they are well tested.
> >  
> 
> Yeah, I ran them through my ftrace test suite, and they did fine until I
> hit test 20 of 34, which tests static ftrace (CONFIG_DYNAMIC_FTRACE not
> set), and it crashed. I'm hoping to have it fixed and retested today.
>

Looking at the crash output, it was hung tasks and not an actual crash.
This is the first time I ran this test on 4.15-rc9, so I'll make sure it's
not something associated with rc9 before blaming my patches. With
DYNAMIC_FTRACE disabled, the machine runs much slower, and with all the
PTI work, that could have pushed it past a breaking point.

-- Steve

* Re: [PATCH 0/3] ftrace, orc, x86, tracing: Fix stack traces again
  2018-01-24 10:04     ` Steven Rostedt
@ 2018-01-24 12:54       ` Steven Rostedt
  0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2018-01-24 12:54 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Andrew Morton, Thomas Gleixner, H. Peter Anvin,
	Nikolay Borisov, Josh Poimboeuf, Linus Torvalds

On Wed, 24 Jan 2018 05:04:23 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:


> > Yeah, I ran them through my ftrace test suite, and they did fine until I
> > hit test 20 of 34, which tests static ftrace (CONFIG_DYNAMIC_FTRACE not
> > set), and it crashed. I'm hoping to have it fixed and retested today.
> >  
> 
> Looking at the crash output, it was hung tasks and not an actual crash.
> This is the first time I ran this test on 4.15-rc9, so I'll make sure it's
> not something associated with rc9 before blaming my patches. With
> DYNAMIC_FTRACE disabled, the machine runs much slower, and with all the
> PTI work, that could have pushed it past a breaking point.

Looks to be some kind of anomaly that's not related to these patches.
I wasn't able to reproduce it again. I ran the last 16 tests (tests 19
through 34) again, and they all passed. I may investigate that later, as
running without DYNAMIC_FTRACE slows the system down enough that strange
things happen all over.

I'll send out the pull request then.

Thanks!

-- Steve
