* [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
       [not found] <77bd5edeea72d44533c769b1e8c0fea7a9d7eb3a>
@ 2021-03-30 19:09 ` madvenka
  2021-03-30 19:09   ` [RFC PATCH v1 1/4] arm64: Implement infrastructure for " madvenka
                     ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: madvenka @ 2021-03-30 19:09 UTC (permalink / raw)
  To: mark.rutland, broonie, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, madvenka

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

There are a number of places in kernel code where the stack trace is not
reliable. Enhance the unwinder to check for those cases and mark the
stack trace as unreliable. Once all of the checks are in place, the unwinder
can be used for livepatching.

Except for the return address check, all the other checks involve checking
the return PC of every frame against certain kernel functions. To do this,
implement some infrastructure code:

	- Define a special_functions[] array and populate the array with
	  the special functions

	- Using kallsyms_lookup(), look up the symbol table entries for the
	  functions and record their address ranges

	- Define an is_reliable_function(pc) to match a return PC against
	  the special functions.

The unwinder calls is_reliable_function(pc) for every return PC and marks
the stack trace as reliable or unreliable accordingly.
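
For illustration, a consumer of this flag could look roughly like the sketch
below. This is only a sketch under the assumption of a hypothetical
walk_stack_reliable() helper (e.g. in arch/arm64/kernel/stacktrace.c) built
on top of unwind_frame(); no such function is added by this series.

	/*
	 * Hypothetical helper: walk the stack of a blocked task and fail
	 * if the unwinder flags any frame as unreliable.
	 */
	static int walk_stack_reliable(struct task_struct *tsk)
	{
		struct stackframe frame;

		start_backtrace(&frame, thread_saved_fp(tsk),
				thread_saved_pc(tsk));
		do {
			if (!frame.reliable)
				return -EINVAL;
		} while (unwind_frame(tsk, &frame) == 0);

		return 0;
	}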

Return address check
====================

Check the return PC of every stack frame to make sure that it is a valid
kernel text address (and not some generated code, for example).

Detect EL1 exception frame
==========================

EL1 exceptions can happen on any instruction including instructions in
the frame pointer prolog or epilog. Depending on where exactly they happen,
they could render the stack trace unreliable.

Add all of the EL1 exception handlers to special_functions[].

	- el1_sync()
	- el1_irq()
	- el1_error()
	- el1_sync_invalid()
	- el1_irq_invalid()
	- el1_fiq_invalid()
	- el1_error_invalid()

Interrupts are EL1 exceptions. When a task is preempted, the EL1 frame of
the preempting interrupt will show up on the stack and the stack trace is
considered unreliable. This is correct behavior as preemption can
happen anywhere.

Breakpoints are EL1 exceptions and can happen anywhere. Stack traces
taken from within the breakpoint handler are, therefore, unreliable.
This includes KProbe code that gets called from the breakpoint handler.

Mark Rutland wanted me to send the EL1 checks in a separate patch series
because the exception handling code is being reorganized. But the
infrastructure code is common to the EL1 detection and other cases listed
below. I was not entirely sure how to neatly split the patches.

Besides, all this patch does is include the EL1 exception handlers in
special_functions[]. When the names change because of the code reorg,
this array can simply be edited. So, in the interest of getting review
comments on this EL1 related work, I have included it in this patch
series.

Hope this is ok.

Detect ftrace frame
===================

When FTRACE executes at the beginning of a traced function, it creates two
frames and calls the tracer function:

	- One frame for the traced function

	- One frame for the caller of the traced function

That gives a sensible stack trace while executing in the tracer function.
When FTRACE returns to the traced function, the frames are popped and
everything is back to normal.
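
For illustration, assume foo() is traced and is called from bar(). A stack
trace taken from inside the tracer function would then look roughly like
this (tracer_func, foo and bar are hypothetical names):

	tracer_func
	FTRACE trampoline (return address within it)
	foo		<- frame created by FTRACE for the traced function
	bar		<- frame created by FTRACE for the caller of foo()
	...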

However, in cases like live patch, the tracer function redirects execution
to a different function. When FTRACE returns, control will go to that target
function. A stack trace taken in the tracer function will not show the target
function. The target function is the real function that we want to track.
So, the stack trace is unreliable.

To detect FTRACE in a stack trace, add the following to special_functions[]:

	- ftrace_graph_call()
	- ftrace_graph_caller()

Please see the diff for a comment that explains why ftrace_graph_call()
must be checked.

Also, the Function Graph Tracer modifies the return address of a traced
function to a return trampoline (return_to_handler()) to gather tracing
data on function return. Stack traces taken from the traced function and
functions it calls will not show the original caller of the traced function.
The unwinder handles this case by getting the original caller from FTRACE.
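
A simplified sketch of that substitution is shown below. The arm64 unwinder
already does something along these lines in unwind_frame() (see
arch/arm64/kernel/stacktrace.c); the exact code may differ.

	#ifdef CONFIG_FUNCTION_GRAPH_TRACER
		if (tsk->ret_stack &&
		    frame->pc == (unsigned long)return_to_handler) {
			struct ftrace_ret_stack *ret_stack;

			/*
			 * The return address was redirected by the function
			 * graph tracer. Recover the original one from the
			 * ftrace return stack.
			 */
			ret_stack = ftrace_graph_get_ret_stack(tsk, frame->graph++);
			if (ret_stack)
				frame->pc = ret_stack->ret;
		}
	#endif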

However, stack traces taken from the trampoline itself and functions it calls
are unreliable as the original return address may not be available in
that context. This is because the trampoline calls FTRACE to gather trace
data as well as to obtain the actual return address and FTRACE discards the
record of the original return address along the way.

Add return_to_handler() to special_functions[].

Check for kretprobe
===================

For functions with a kretprobe set up, probe code executes on entry
to the function and replaces the return address in the stack frame with a
kretprobe trampoline. Whenever the function returns, control is
transferred to the trampoline. The trampoline eventually returns to the
original return address.

A stack trace taken while executing in the function (or in functions that
get called from the function) will not show the original return address.
Similarly, a stack trace taken while executing in the trampoline itself
(and functions that get called from the trampoline) will not show the
original return address. This means that the caller of the probed function
will not show. This makes the stack trace unreliable.
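
For reference, a kretprobe is typically set up with something like the
sketch below (the handler and the probed symbol are just examples). While
the probe is armed, the probed function's return address on the stack is
replaced with kretprobe_trampoline, which is what the unwinder ends up
seeing.

	#include <linux/kprobes.h>

	/* Example handler: runs when the probed function returns. */
	static int my_ret_handler(struct kretprobe_instance *ri,
				  struct pt_regs *regs)
	{
		return 0;
	}

	static struct kretprobe my_kretprobe = {
		.handler	= my_ret_handler,
		/* Example probed function. */
		.kp.symbol_name	= "kernel_clone",
	};

	/* register_kretprobe(&my_kretprobe) arms the probe. */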

Add the kretprobe trampoline to special_functions[].

FYI, each task contains a task->kretprobe_instances list that can
theoretically be consulted to find the original return address. But I am
not entirely sure how to safely traverse that list for stack traces
not on the current process. So, I have taken the easy way out.

Optprobes
=========

Optprobes may be implemented in the future for arm64. For optprobes,
the relevant trampoline(s) can be added to special_functions[].
---
v1
	- Define a bool field in struct stackframe. This will indicate if
	  a stack trace is reliable.

	- Implement a special_functions[] array that will be populated
	  with special functions in which the stack trace is considered
	  unreliable.
	
	- Using kallsyms_lookup(), get the address ranges for the special
	  functions and record them.

	- Implement an is_reliable_function(pc). This function will check
	  if a given return PC falls in any of the special functions. If
	  it does, the stack trace is unreliable.

	- Implement check_reliability() function that will check if a
	  stack frame is reliable. Call is_reliable_function() from
	  check_reliability().

	- Before a return PC is checked against special_functions[], it
	  must be validated as a proper kernel text address. Call
	  __kernel_text_address() from check_reliability().

	- Finally, call check_reliability() from unwind_frame() for
	  each stack frame.

	- Add EL1 exception handlers to special_functions[].

		el1_sync();
		el1_irq();
		el1_error();
		el1_sync_invalid();
		el1_irq_invalid();
		el1_fiq_invalid();
		el1_error_invalid();

	- The above functions are currently defined as LOCAL symbols.
	  Make them global so that they can be referenced from the
	  unwinder code.

	- Add FTRACE trampolines to special_functions[]:

		ftrace_graph_call()
		ftrace_graph_caller()
		return_to_handler()

	- Add the kretprobe trampoline to special_functions[]:

		kretprobe_trampoline()

Madhavan T. Venkataraman (4):
  arm64: Implement infrastructure for stack trace reliability checks
  arm64: Mark a stack trace unreliable if an EL1 exception frame is
    detected
  arm64: Detect FTRACE cases that make the stack trace unreliable
  arm64: Mark stack trace as unreliable if kretprobed functions are
    present

 arch/arm64/include/asm/exception.h  |   8 ++
 arch/arm64/include/asm/stacktrace.h |   2 +
 arch/arm64/kernel/entry-ftrace.S    |  10 ++
 arch/arm64/kernel/entry.S           |  14 +-
 arch/arm64/kernel/stacktrace.c      | 211 ++++++++++++++++++++++++++++
 5 files changed, 238 insertions(+), 7 deletions(-)


base-commit: 0d02ec6b3136c73c09e7859f0d0e4e2c4c07b49b
-- 
2.25.1


* [RFC PATCH v1 1/4] arm64: Implement infrastructure for stack trace reliability checks
  2021-03-30 19:09 ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks madvenka
@ 2021-03-30 19:09   ` madvenka
  2021-04-01 15:27     ` Mark Brown
  2021-03-30 19:09   ` [RFC PATCH v1 2/4] arm64: Mark a stack trace unreliable if an EL1 exception frame is detected madvenka
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: madvenka @ 2021-03-30 19:09 UTC (permalink / raw)
  To: mark.rutland, broonie, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, madvenka

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

Implement a check_reliability() function that will contain checks for the
presence of various features and conditions that can render the stack trace
unreliable.

Introduce the first reliability check: if a return PC encountered in a
stack trace is not a valid kernel text address, the stack trace is
considered unreliable. It could be some generated code.

Other reliability checks will be added in the future.

These checks will involve checking the return PC to see if it falls inside
any special functions where the stack trace is considered unreliable.
Implement the infrastructure needed for this.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/stacktrace.h |  2 +
 arch/arm64/kernel/stacktrace.c      | 80 +++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index eb29b1fe8255..684f65808394 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -59,6 +59,7 @@ struct stackframe {
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	int graph;
 #endif
+	bool reliable;
 };
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
@@ -169,6 +170,7 @@ static inline void start_backtrace(struct stackframe *frame,
 	bitmap_zero(frame->stacks_done, __NR_STACK_TYPES);
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
+	frame->reliable = true;
 }
 
 #endif	/* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ad20981dfda4..ff35b3953c39 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,6 +18,84 @@
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
+struct function_range {
+	unsigned long	start;
+	unsigned long	end;
+};
+
+/*
+ * Special functions where the stack trace is unreliable.
+ */
+static struct function_range	special_functions[] = {
+	{ 0, 0 }
+};
+
+static bool is_reliable_function(unsigned long pc)
+{
+	static bool inited = false;
+	struct function_range *func;
+
+	if (!inited) {
+		static char sym[KSYM_NAME_LEN];
+		unsigned long size, offset;
+
+		for (func = special_functions; func->start; func++) {
+			if (kallsyms_lookup(func->start, &size, &offset,
+					    NULL, sym)) {
+				func->start -= offset;
+				func->end = func->start + size;
+			} else {
+				/*
+				 * This is just a label, so we only need to
+				 * consider that particular location. The size
+				 * is that of one AArch64 instruction.
+				 */
+				func->end = func->start + 4;
+			}
+		}
+		inited = true;
+	}
+
+	for (func = special_functions; func->start; func++) {
+		if (pc >= func->start && pc < func->end)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Check for the presence of features and conditions that render the stack
+ * trace unreliable.
+ *
+ * Once all such cases have been addressed, this function can aid live
+ * patching (and this comment can be removed).
+ */
+static void check_reliability(struct stackframe *frame)
+{
+	/*
+	 * If the stack trace has already been marked unreliable, just return.
+	 */
+	if (!frame->reliable)
+		return;
+
+	/*
+	 * First, make sure that the return address is a proper kernel text
+	 * address. A NULL or invalid return address probably means there's
+	 * some generated code which __kernel_text_address() doesn't know
+	 * about. Mark the stack trace as not reliable.
+	 */
+	if (!__kernel_text_address(frame->pc)) {
+		frame->reliable = false;
+		return;
+	}
+
+	/*
+	 * Check the reliability of the return PC's function.
+	 */
+	if (!is_reliable_function(frame->pc))
+		frame->reliable = false;
+}
+
 /*
  * AArch64 PCS assigns the frame pointer to x29.
  *
@@ -108,6 +186,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 
 	frame->pc = ptrauth_strip_insn_pac(frame->pc);
 
+	check_reliability(frame);
+
 	return 0;
 }
 NOKPROBE_SYMBOL(unwind_frame);
-- 
2.25.1


* [RFC PATCH v1 2/4] arm64: Mark a stack trace unreliable if an EL1 exception frame is detected
  2021-03-30 19:09 ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks madvenka
  2021-03-30 19:09   ` [RFC PATCH v1 1/4] arm64: Implement infrastructure for " madvenka
@ 2021-03-30 19:09   ` madvenka
  2021-04-01 17:21     ` Mark Brown
  2021-03-30 19:09   ` [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable madvenka
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: madvenka @ 2021-03-30 19:09 UTC (permalink / raw)
  To: mark.rutland, broonie, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, madvenka

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

EL1 exceptions can happen on any instruction including instructions in
the frame pointer prolog or epilog. Depending on where exactly they happen,
they could render the stack trace unreliable.

If an EL1 exception frame is found on the stack, mark the stack trace as
unreliable. Now, the EL1 exception frame is not at any well-known offset
on the stack. It can be anywhere on the stack. In order to properly detect
an EL1 exception frame, the return address must be checked against all of
the possible EL1 exception handlers.

Preemption
==========

Interrupts encountered in kernel code are also EL1 exceptions. At the end
of an interrupt, the interrupt handler checks if the current task must be
preempted for any reason. If so, it calls the preemption code which takes
the task off the CPU. A stack trace taken on the task after the preemption
will show the EL1 frame and will be considered unreliable. This is correct
behavior as preemption can happen practically at any point in code.

Probing
=======

Breakpoints encountered in kernel code are also EL1 exceptions. The probing
infrastructure uses breakpoints for executing probe code. While in the probe
code, the stack trace will show an EL1 frame and will be considered
unreliable. This is also correct behavior.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/exception.h |  8 +++++++
 arch/arm64/kernel/entry.S          | 14 +++++------
 arch/arm64/kernel/stacktrace.c     | 37 ++++++++++++++++++++++++++++++
 3 files changed, 52 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 6546158d2f2d..4ebd2390ef54 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -35,6 +35,14 @@ asmlinkage void el1_sync_handler(struct pt_regs *regs);
 asmlinkage void el0_sync_handler(struct pt_regs *regs);
 asmlinkage void el0_sync_compat_handler(struct pt_regs *regs);
 
+asmlinkage void el1_sync(void);
+asmlinkage void el1_irq(void);
+asmlinkage void el1_error(void);
+asmlinkage void el1_sync_invalid(void);
+asmlinkage void el1_irq_invalid(void);
+asmlinkage void el1_fiq_invalid(void);
+asmlinkage void el1_error_invalid(void);
+
 asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs);
 asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs);
 asmlinkage void enter_from_user_mode(void);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a31a0a713c85..9fe3aaeff019 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -630,19 +630,19 @@ SYM_CODE_START_LOCAL(el0_fiq_invalid_compat)
 SYM_CODE_END(el0_fiq_invalid_compat)
 #endif
 
-SYM_CODE_START_LOCAL(el1_sync_invalid)
+SYM_CODE_START(el1_sync_invalid)
 	inv_entry 1, BAD_SYNC
 SYM_CODE_END(el1_sync_invalid)
 
-SYM_CODE_START_LOCAL(el1_irq_invalid)
+SYM_CODE_START(el1_irq_invalid)
 	inv_entry 1, BAD_IRQ
 SYM_CODE_END(el1_irq_invalid)
 
-SYM_CODE_START_LOCAL(el1_fiq_invalid)
+SYM_CODE_START(el1_fiq_invalid)
 	inv_entry 1, BAD_FIQ
 SYM_CODE_END(el1_fiq_invalid)
 
-SYM_CODE_START_LOCAL(el1_error_invalid)
+SYM_CODE_START(el1_error_invalid)
 	inv_entry 1, BAD_ERROR
 SYM_CODE_END(el1_error_invalid)
 
@@ -650,7 +650,7 @@ SYM_CODE_END(el1_error_invalid)
  * EL1 mode handlers.
  */
 	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
+SYM_CODE_START_NOALIGN(el1_sync)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_sync_handler
@@ -658,7 +658,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
 SYM_CODE_END(el1_sync)
 
 	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
+SYM_CODE_START_NOALIGN(el1_irq)
 	kernel_entry 1
 	gic_prio_irq_setup pmr=x20, tmp=x1
 	enable_da_f
@@ -737,7 +737,7 @@ el0_irq_naked:
 	b	ret_to_user
 SYM_CODE_END(el0_irq)
 
-SYM_CODE_START_LOCAL(el1_error)
+SYM_CODE_START(el1_error)
 	kernel_entry 1
 	mrs	x1, esr_el1
 	gic_prio_kentry_setup tmp=x2
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ff35b3953c39..7662f57d3e88 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -14,6 +14,7 @@
 #include <linux/stacktrace.h>
 
 #include <asm/irq.h>
+#include <asm/exception.h>
 #include <asm/pointer_auth.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
@@ -25,8 +26,44 @@ struct function_range {
 
 /*
  * Special functions where the stack trace is unreliable.
+ *
+ * EL1 exceptions
+ * ==============
+ *
+ * EL1 exceptions can happen on any instruction including instructions in
+ * the frame pointer prolog or epilog. Depending on where exactly they happen,
+ * they could render the stack trace unreliable.
+ *
+ * If an EL1 exception frame is found on the stack, mark the stack trace as
+ * unreliable. Now, the EL1 exception frame is not at any well-known offset
+ * on the stack. It can be anywhere on the stack. In order to properly detect
+ * an EL1 exception frame, the return address must be checked against all of
+ * the possible EL1 exception handlers.
+ *
+ * Interrupts encountered in kernel code are also EL1 exceptions. At the end
+ * of an interrupt, the current task can get preempted. A stack trace taken
+ * on the task after the preemption will show the EL1 frame and will be
+ * considered unreliable. This is correct behavior as preemption can happen
+ * practically at any point in code.
+ *
+ * Breakpoints encountered in kernel code are also EL1 exceptions. Breakpoints
+ * can happen practically on any instruction. Mark the stack trace as
+ * unreliable. Breakpoints are used for executing probe code. Stack traces
+ * taken while in the probe code will show an EL1 frame and will be considered
+ * unreliable. This is correct behavior.
  */
 static struct function_range	special_functions[] = {
+	/*
+	 * EL1 exception handlers.
+	 */
+	{ (unsigned long) el1_sync, 0 },
+	{ (unsigned long) el1_irq, 0 },
+	{ (unsigned long) el1_error, 0 },
+	{ (unsigned long) el1_sync_invalid, 0 },
+	{ (unsigned long) el1_irq_invalid, 0 },
+	{ (unsigned long) el1_fiq_invalid, 0 },
+	{ (unsigned long) el1_error_invalid, 0 },
+
 	{ 0, 0 }
 };
 
-- 
2.25.1


* [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-03-30 19:09 ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks madvenka
  2021-03-30 19:09   ` [RFC PATCH v1 1/4] arm64: Implement infrastructure for " madvenka
  2021-03-30 19:09   ` [RFC PATCH v1 2/4] arm64: Mark a stack trace unreliable if an EL1 exception frame is detected madvenka
@ 2021-03-30 19:09   ` madvenka
  2021-04-01 14:27     ` Mark Brown
  2021-03-30 19:09   ` [RFC PATCH v1 4/4] arm64: Mark stack trace as unreliable if kretprobed functions are present madvenka
  2021-04-03 17:01   ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks Josh Poimboeuf
  4 siblings, 1 reply; 25+ messages in thread
From: madvenka @ 2021-03-30 19:09 UTC (permalink / raw)
  To: mark.rutland, broonie, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, madvenka

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

When CONFIG_DYNAMIC_FTRACE_WITH_REGS is enabled and tracing is activated
for a function, the ftrace infrastructure is called for the function at
the very beginning. Ftrace creates two frames:

	- One for the traced function

	- One for the caller of the traced function

That gives a reliable stack trace while executing in the ftrace code. When
ftrace returns to the traced function, the frames are popped and everything
is back to normal.

However, in cases like live patch, a tracer function may redirect execution
to a different function when it returns. A stack trace taken while still in
the tracer function will not show the target function. The target function
is the real function that we want to track.

So, if an FTRACE frame is detected on the stack, just mark the stack trace
as unreliable. The detection is done by checking the return PC against
FTRACE trampolines.

Also, the Function Graph Tracer modifies the return address of a traced
function to a return trampoline to gather tracing data on function return.
Stack traces taken from that trampoline and functions it calls are
unreliable as the original return address may not be available in
that context. Mark the stack trace unreliable accordingly.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/kernel/entry-ftrace.S | 10 ++++++
 arch/arm64/kernel/stacktrace.c   | 57 ++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index b3e4f9a088b1..5373f88b4c44 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -95,6 +95,16 @@ SYM_CODE_START(ftrace_common)
 SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
 	bl	ftrace_stub
 
+	/*
+	 * The only call in the FTRACE trampoline code is above. The above
+	 * instruction is patched to call a tracer function. Its return
+	 * address is below (ftrace_graph_call). In a stack trace taken from
+	 * a tracer function, ftrace_graph_call() will show. The unwinder
+	 * checks this for reliable stack trace. Please see the comments
+	 * in stacktrace.c. If another call is added in the FTRACE
+	 * trampoline code, the special_functions[] array in stacktrace.c
+	 * must be updated.
+	 */
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
 	nop				// If enabled, this will be replaced
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 7662f57d3e88..8b493a90c9f3 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -51,6 +51,52 @@ struct function_range {
  * unreliable. Breakpoints are used for executing probe code. Stack traces
  * taken while in the probe code will show an EL1 frame and will be considered
  * unreliable. This is correct behavior.
+ *
+ * FTRACE
+ * ======
+ *
+ * When CONFIG_DYNAMIC_FTRACE_WITH_REGS is enabled, the FTRACE trampoline code
+ * is called from a traced function even before the frame pointer prolog.
+ * FTRACE sets up two stack frames (one for the traced function and one for
+ * its caller) so that the unwinder can provide a sensible stack trace for
+ * any tracer function called from the FTRACE trampoline code.
+ *
+ * There are two cases where the stack trace is not reliable.
+ *
+ * (1) The task gets preempted before the two frames are set up. Preemption
+ *     involves an interrupt which is an EL1 exception. The unwinder already
+ *     handles EL1 exceptions.
+ *
+ * (2) The tracer function that gets called by the FTRACE trampoline code
+ *     changes the return PC (e.g., livepatch).
+ *
+ *     Not all tracer functions do that. But to err on the side of safety,
+ *     consider the stack trace as unreliable in all cases.
+ *
+ * When Function Graph Tracer is used, FTRACE modifies the return address of
+ * the traced function in its stack frame to an FTRACE return trampoline
+ * (return_to_handler). When the traced function returns, control goes to
+ * return_to_handler. return_to_handler calls FTRACE to gather tracing data
+ * and to obtain the original return address. Then, return_to_handler returns
+ * to the original return address.
+ *
+ * There are two cases to consider from a stack trace reliability point of
+ * view:
+ *
+ * (1) Stack traces taken within the traced function (and functions that get
+ *     called from there) will show return_to_handler instead of the original
+ *     return address. The original return address can be obtained from FTRACE.
+ *     The unwinder already obtains it and modifies the return PC in its copy
+ *     of the stack frame to the original return address. So, this is handled.
+ *
+ * (2) return_to_handler calls FTRACE as mentioned before. FTRACE discards
+ *     the record of the original return address along the way as it does not
+ *     need to maintain it anymore. This means that the unwinder cannot get
+ *     the original return address beyond that point while the task is still
+ *     executing in return_to_handler. So, consider the stack trace unreliable
+ *     if return_to_handler is detected on the stack.
+ *
+ * NOTE: The unwinder must do (1) before (2).
  */
 static struct function_range	special_functions[] = {
 	/*
@@ -64,6 +110,17 @@ static struct function_range	special_functions[] = {
 	{ (unsigned long) el1_fiq_invalid, 0 },
 	{ (unsigned long) el1_error_invalid, 0 },
 
+	/*
+	 * FTRACE trampolines.
+	 */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	{ (unsigned long) &ftrace_graph_call, 0 },
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	{ (unsigned long) ftrace_graph_caller, 0 },
+	{ (unsigned long) return_to_handler, 0 },
+#endif
+#endif
+
 	{ 0, 0 }
 };
 
-- 
2.25.1


* [RFC PATCH v1 4/4] arm64: Mark stack trace as unreliable if kretprobed functions are present
  2021-03-30 19:09 ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks madvenka
                     ` (2 preceding siblings ...)
  2021-03-30 19:09   ` [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable madvenka
@ 2021-03-30 19:09   ` madvenka
  2021-04-01 17:23     ` Mark Brown
  2021-04-03 17:01   ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks Josh Poimboeuf
  4 siblings, 1 reply; 25+ messages in thread
From: madvenka @ 2021-03-30 19:09 UTC (permalink / raw)
  To: mark.rutland, broonie, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, madvenka

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

When a kretprobe is active for a function, the function's return address
in its stack frame is modified to point to the kretprobe trampoline. When
the function returns, the frame is popped and control is transferred
to the trampoline. The trampoline eventually returns to the original return
address.

If a stack walk is done within the function (or any functions that get
called from there), the stack trace will show only the trampoline and
not the original caller.

Also, if the trampoline itself and the functions it calls do a stack trace,
that stack trace will also have the same problem. Detect this as well.

If the trampoline is detected in the stack trace, mark the stack trace
as unreliable.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/kernel/stacktrace.c | 37 ++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 8b493a90c9f3..bf5abb0dd876 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -97,6 +97,36 @@ struct function_range {
  *     if return_to_handler is detected on the stack.
  *
  * NOTE: The unwinder must do (1) before (2).
+ *
+ * KPROBES
+ * =======
+ *
+ *	There are two types of kprobes:
+ *
+ *	(1) Regular kprobes that are placed anywhere in a probed function.
+ *	    This is implemented by replacing the probed instruction with a
+ *	    breakpoint. When the breakpoint is hit, the kprobe code emulates
+ *	    the original instruction in-situ and returns to the next
+ *	    instruction.
+ *
+ *	    Breakpoints are EL1 exceptions. When the unwinder detects them,
+ *	    the stack trace is marked as unreliable as it does not know where
+ *	    exactly the exception happened. Detection of EL1 exceptions in
+ *	    a stack trace will be done separately.
+ *
+ *	(2) Return kprobes that are placed on the return of a probed function.
+ *	    In this case, Kprobes sets up an initial breakpoint at the
+ *	    beginning of the probed function. When the breakpoint is hit,
+ *	    Kprobes replaces the return address in the stack frame with the
+ *	    kretprobe_trampoline and records the original return address.
+ *	    When the probed function returns, control goes to the trampoline
+ *	    which eventually returns to the original return address.
+ *
+ *	    Stack traces taken while in the probed function or while in the
+ *	    trampoline will show kretprobe_trampoline instead of the original
+ *	    return address. Detect this and mark the stack trace unreliable.
+ *	    The detection is done by checking if the return PC falls anywhere
+ *	    in kretprobe_trampoline.
  */
 static struct function_range	special_functions[] = {
 	/*
@@ -121,6 +151,13 @@ static struct function_range	special_functions[] = {
 #endif
 #endif
 
+	/*
+	 * Kprobe trampolines.
+	 */
+#ifdef CONFIG_KRETPROBES
+	{ (unsigned long) kretprobe_trampoline, 0 },
+#endif
+
 	{ 0, 0 }
 };
 
-- 
2.25.1


* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-03-30 19:09   ` [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable madvenka
@ 2021-04-01 14:27     ` Mark Brown
  2021-04-01 17:43       ` Madhavan T. Venkataraman
  2021-04-01 17:48       ` Madhavan T. Venkataraman
  0 siblings, 2 replies; 25+ messages in thread
From: Mark Brown @ 2021-04-01 14:27 UTC (permalink / raw)
  To: madvenka
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel

On Tue, Mar 30, 2021 at 02:09:54PM -0500, madvenka@linux.microsoft.com wrote:

> +	 * FTRACE trampolines.
> +	 */
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> +	{ (unsigned long) &ftrace_graph_call, 0 },
> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> +	{ (unsigned long) ftrace_graph_caller, 0 },
> +	{ (unsigned long) return_to_handler, 0 },
> +#endif
> +#endif

It's weird that we take the address of ftrace_graph_call but not the
other functions - we should be consistent or explain why.  It'd probably
also look nicer to not nest the ifdefs, the dependencies in Kconfig will
ensure we only get things when we should.

* Re: [RFC PATCH v1 1/4] arm64: Implement infrastructure for stack trace reliability checks
  2021-03-30 19:09   ` [RFC PATCH v1 1/4] arm64: Implement infrastructure for " madvenka
@ 2021-04-01 15:27     ` Mark Brown
  2021-04-01 17:44       ` Madhavan T. Venkataraman
  0 siblings, 1 reply; 25+ messages in thread
From: Mark Brown @ 2021-04-01 15:27 UTC (permalink / raw)
  To: madvenka
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel

On Tue, Mar 30, 2021 at 02:09:52PM -0500, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> 
> Implement a check_reliability() function that will contain checks for the
> presence of various features and conditions that can render the stack trace
> unreliable.

This looks good to me with one minor stylistic thing:

> +/*
> + * Special functions where the stack trace is unreliable.
> + */
> +static struct function_range	special_functions[] = {
> +	{ 0, 0 }
> +};

Might be good to put a comment here saying that this is terminating the
list rather than detecting a NULL function pointer:

	{ /* sentinel */ }

is a common idiom for that.

Given that it's a fixed array we could also...

> +	for (func = special_functions; func->start; func++) {
> +		if (pc >= func->start && pc < func->end)

...do these as

	for (i = 0; i < ARRAY_SIZE(special_functions); i++) 

so you don't need something like that, though that gets awkward when you
have to write out special_functions[i].field a lot.

So many different potential colours for the bikeshed!

* Re: [RFC PATCH v1 2/4] arm64: Mark a stack trace unreliable if an EL1 exception frame is detected
  2021-03-30 19:09   ` [RFC PATCH v1 2/4] arm64: Mark a stack trace unreliable if an EL1 exception frame is detected madvenka
@ 2021-04-01 17:21     ` Mark Brown
  0 siblings, 0 replies; 25+ messages in thread
From: Mark Brown @ 2021-04-01 17:21 UTC (permalink / raw)
  To: madvenka
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel

On Tue, Mar 30, 2021 at 02:09:53PM -0500, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> 
> EL1 exceptions can happen on any instruction including instructions in
> the frame pointer prolog or epilog. Depending on where exactly they happen,
> they could render the stack trace unreliable.

Reviewed-by: Mark Brown <broonie@kernel.org>

* Re: [RFC PATCH v1 4/4] arm64: Mark stack trace as unreliable if kretprobed functions are present
  2021-03-30 19:09   ` [RFC PATCH v1 4/4] arm64: Mark stack trace as unreliable if kretprobed functions are present madvenka
@ 2021-04-01 17:23     ` Mark Brown
  0 siblings, 0 replies; 25+ messages in thread
From: Mark Brown @ 2021-04-01 17:23 UTC (permalink / raw)
  To: madvenka
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel

On Tue, Mar 30, 2021 at 02:09:55PM -0500, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> 
> When a kretprobe is active for a function, the function's return address
> in its stack frame is modified to point to the kretprobe trampoline. When
> the function returns, the frame is popped and control is transferred
> to the trampoline. The trampoline eventually returns to the original return
> address.

Reviewed-by: Mark Brown <broonie@kernel.org>

* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 14:27     ` Mark Brown
@ 2021-04-01 17:43       ` Madhavan T. Venkataraman
  2021-04-01 18:28         ` Mark Brown
  2021-04-01 17:48       ` Madhavan T. Venkataraman
  1 sibling, 1 reply; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-01 17:43 UTC (permalink / raw)
  To: Mark Brown
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel



On 4/1/21 9:27 AM, Mark Brown wrote:
> On Tue, Mar 30, 2021 at 02:09:54PM -0500, madvenka@linux.microsoft.com wrote:
> 
>> +	 * FTRACE trampolines.
>> +	 */
>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>> +	{ (unsigned long) &ftrace_graph_call, 0 },
>> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
>> +	{ (unsigned long) ftrace_graph_caller, 0 },
>> +	{ (unsigned long) return_to_handler, 0 },
>> +#endif
>> +#endif
> 
> It's weird that we take the address of ftrace_graph_call but not the
> other functions - we should be consistent or explain why.  It'd probably
> also look nicer to not nest the ifdefs, the dependencies in Kconfig will
> ensure we only get things when we should.
> 

I have explained it in the comment in the FTRACE trampoline right above
ftrace_graph_call().

        /*
         * The only call in the FTRACE trampoline code is above. The above
         * instruction is patched to call a tracer function. Its return
         * address is below (ftrace_graph_call). In a stack trace taken from
         * a tracer function, ftrace_graph_call() will show. The unwinder
         * checks this for reliable stack trace. Please see the comments
         * in stacktrace.c. If another call is added in the FTRACE
         * trampoline code, the special_functions[] array in stacktrace.c
         * must be updated.
         */

I also noticed that I have to fix something here. The label ftrace_graph_call
is defined like this:


#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
        nop                             // If enabled, this will be replaced
                                        // "b ftrace_graph_caller"
#endif

So, it is only defined if CONFIG_FUNCTION_GRAPH_TRACER is defined. I can address
this as well as your comment by defining another label whose name is more meaningful
to our use:


+SYM_INNER_LABEL(ftrace_trampoline, SYM_L_GLOBAL) // checked by the unwinder
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
        nop                             // If enabled, this will be replaced
                                        // "b ftrace_graph_caller"
#endif

Is this acceptable?

Thanks.

Madhavan

* Re: [RFC PATCH v1 1/4] arm64: Implement infrastructure for stack trace reliability checks
  2021-04-01 15:27     ` Mark Brown
@ 2021-04-01 17:44       ` Madhavan T. Venkataraman
  0 siblings, 0 replies; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-01 17:44 UTC (permalink / raw)
  To: Mark Brown
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel



On 4/1/21 10:27 AM, Mark Brown wrote:
> On Tue, Mar 30, 2021 at 02:09:52PM -0500, madvenka@linux.microsoft.com wrote:
>> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
>>
>> Implement a check_reliability() function that will contain checks for the
>> presence of various features and conditions that can render the stack trace
>> unreliable.
> 
> This looks good to me with one minor stylistic thing:
> 
>> +/*
>> + * Special functions where the stack trace is unreliable.
>> + */
>> +static struct function_range	special_functions[] = {
>> +	{ 0, 0 }
>> +};
> 
> Might be good to put a comment here saying that this is terminating the
> list rather than detecting a NULL function pointer:
> 
> 	{ /* sentinel */ }
> 
> is a common idiom for that.
> 
> Given that it's a fixed array we could also...
> 
>> +	for (func = special_functions; func->start; func++) {
>> +		if (pc >= func->start && pc < func->end)
> 
> ...do these as
> 
> 	for (i = 0; i < ARRAY_SIZE(special_functions); i++) 
> 
> so you don't need something like that, though that gets awkward when you
> have to write out special_functions[i].field a lot.
> 
> So many different potential colours for the bikeshed!
I will make the above changes.

Thanks!

Madhavan

* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 14:27     ` Mark Brown
  2021-04-01 17:43       ` Madhavan T. Venkataraman
@ 2021-04-01 17:48       ` Madhavan T. Venkataraman
  1 sibling, 0 replies; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-01 17:48 UTC (permalink / raw)
  To: Mark Brown
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel



On 4/1/21 9:27 AM, Mark Brown wrote:
> On Tue, Mar 30, 2021 at 02:09:54PM -0500, madvenka@linux.microsoft.com wrote:
> 
>> +	 * FTRACE trampolines.
>> +	 */
>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>> +	{ (unsigned long) &ftrace_graph_call, 0 },
>> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
>> +	{ (unsigned long) ftrace_graph_caller, 0 },
>> +	{ (unsigned long) return_to_handler, 0 },
>> +#endif
>> +#endif
> 
> It's weird that we take the address of ftrace_graph_call but not the
> other functions - we should be consistent or explain why.  It'd probably
> also look nicer to not nest the ifdefs, the dependencies in Kconfig will
> ensure we only get things when we should.
> 

Sorry. I forgot to respond to the nested ifdef comment. I will fix that.

Thanks!

Madhavan

* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 17:43       ` Madhavan T. Venkataraman
@ 2021-04-01 18:28         ` Mark Brown
  2021-04-01 18:40           ` Madhavan T. Venkataraman
  0 siblings, 1 reply; 25+ messages in thread
From: Mark Brown @ 2021-04-01 18:28 UTC (permalink / raw)
  To: Madhavan T. Venkataraman
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel

On Thu, Apr 01, 2021 at 12:43:25PM -0500, Madhavan T. Venkataraman wrote:

> >> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> >> +	{ (unsigned long) &ftrace_graph_call, 0 },
> >> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> >> +	{ (unsigned long) ftrace_graph_caller, 0 },

> > It's weird that we take the address of ftrace_graph_call but not the
> > other functions - we should be consistent or explain why.  It'd probably
> > also look nicer to not nest the ifdefs, the dependencies in Kconfig will
> > ensure we only get things when we should.

> I have explained it in the comment in the FTRACE trampoline right above
> ftrace_graph_call().

Ah, right - it's a result of it being an inner label.  I'd suggest
putting a brief note right at that line of code explaining this (eg,
"Inner label, not a function"), it wasn't confusing due to the use of
that symbol but rather due to it being different from everything else
in the list and that's kind of lost in the main comment.

> So, it is only defined if CONFIG_FUNCTION_GRAPH_TRACER is defined. I can address
> this as well as your comment by defining another label whose name is more meaningful
> to our use:

> +SYM_INNER_LABEL(ftrace_trampoline, SYM_L_GLOBAL) // checked by the unwinder
> #ifdef CONFIG_FUNCTION_GRAPH_TRACER
> SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
>         nop                             // If enabled, this will be replaced
>                                         // "b ftrace_graph_caller"
> #endif

I'm not sure we need to bother with that, you'd still need the & I think.

* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 18:28         ` Mark Brown
@ 2021-04-01 18:40           ` Madhavan T. Venkataraman
  2021-04-01 18:53             ` Madhavan T. Venkataraman
  0 siblings, 1 reply; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-01 18:40 UTC (permalink / raw)
  To: Mark Brown
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel



On 4/1/21 1:28 PM, Mark Brown wrote:
> On Thu, Apr 01, 2021 at 12:43:25PM -0500, Madhavan T. Venkataraman wrote:
> 
>>>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>>>> +	{ (unsigned long) &ftrace_graph_call, 0 },
>>>> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>>> +	{ (unsigned long) ftrace_graph_caller, 0 },
> 
>>> It's weird that we take the address of ftrace_graph_call but not the
>>> other functions - we should be consistent or explain why.  It'd probably
>>> also look nicer to not nest the ifdefs, the dependencies in Kconfig will
>>> ensure we only get things when we should.
> 
>> I have explained it in the comment in the FTRACE trampoline right above
>> ftrace_graph_call().
> 
> Ah, right - it's a result of it being an inner label.  I'd suggest
> putting a brief note right at that line of code explaining this (eg,
> "Inner label, not a function"), it wasn't confusing due to the use of
> that symbol but rather due to it being different from everything else
> in the list and that's kind of lost in the main comment.
> 

OK, So, I will add a note in the main comment above the list. I will add the
comment line you have suggested at the exact line.

>> So, it is only defined if CONFIG_FUNCTION_GRAPH_TRACER is defined. I can address
>> this as well as your comment by defining another label whose name is more meaningful
>> to our use:
> 
>> +SYM_INNER_LABEL(ftrace_trampoline, SYM_L_GLOBAL) // checked by the unwinder
>> #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>> SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
>>         nop                             // If enabled, this will be replaced
>>                                         // "b ftrace_graph_caller"
>> #endif
> 
> I'm not sure we need to bother with that, you'd still need the & I think.

I think we need to bother with that. If CONFIG_FUNCTION_GRAPH_TRACER is not on but
CONFIG_DYNAMIC_FTRACE_WITH_REGS is, then ftrace_graph_call() will not occur in the stack
trace taken from a tracer function. The unwinder still needs to recognize an ftrace frame.
I don't want to assume ftrace_common_return which is the label that currently follows
the above code. So, we need a different label outside the above ifdef.

As for the &, I needed it because ftrace_graph_call is currently defined elsewhere as:

extern unsigned long ftrace_graph_call;

I did not want to change that.

Thanks,

Madhavan


* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 18:40           ` Madhavan T. Venkataraman
@ 2021-04-01 18:53             ` Madhavan T. Venkataraman
  2021-04-01 19:47               ` Madhavan T. Venkataraman
  0 siblings, 1 reply; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-01 18:53 UTC (permalink / raw)
  To: Mark Brown
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel



On 4/1/21 1:40 PM, Madhavan T. Venkataraman wrote:
>>> So, it is only defined if CONFIG_FUNCTION_GRAPH_TRACER is defined. I can address
>>> this as well as your comment by defining another label whose name is more meaningful
>>> to our use:
>>> +SYM_INNER_LABEL(ftrace_trampoline, SYM_L_GLOBAL) // checked by the unwinder
>>> #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>> SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
>>>         nop                             // If enabled, this will be replaced
>>>                                         // "b ftrace_graph_caller"
>>> #endif
>> I'm not sure we need to bother with that, you'd still need the & I think.
> I think we need to bother with that. If CONFIG_FUNCTION_GRAPH_TRACER is not on but
> CONFIG_DYNAMIC_FTRACE_WITH_REGS is, then ftrace_graph_call() will not occur in the stack
> trace taken from a tracer function. The unwinder still needs to recognize an ftrace frame.
> I don't want to assume ftrace_common_return which is the label that currently follows
> the above code. So, we need a different label outside the above ifdef.

Alternatively, I could just move the SYM_INNER_LABEL(ftrace_graph_call..) to outside the ifdef.

Madhavan

* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 18:53             ` Madhavan T. Venkataraman
@ 2021-04-01 19:47               ` Madhavan T. Venkataraman
  2021-04-06 11:02                 ` Mark Brown
  0 siblings, 1 reply; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-01 19:47 UTC (permalink / raw)
  To: Mark Brown
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel



On 4/1/21 1:53 PM, Madhavan T. Venkataraman wrote:
> 
> 
> On 4/1/21 1:40 PM, Madhavan T. Venkataraman wrote:
>>>> So, it is only defined if CONFIG_FUNCTION_GRAPH_TRACER is defined. I can address
>>>> this as well as your comment by defining another label whose name is more meaningful
>>>> to our use:
>>>> +SYM_INNER_LABEL(ftrace_trampoline, SYM_L_GLOBAL) // checked by the unwinder
>>>> #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>>> SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
>>>>         nop                             // If enabled, this will be replaced
>>>>                                         // "b ftrace_graph_caller"
>>>> #endif
>>> I'm not sure we need to bother with that, you'd still need the & I think.
>> I think we need to bother with that. If CONFIG_FUNCTION_GRAPH_TRACER is not on but
>> CONFIG_DYNAMIC_FTRACE_WITH_REGS is, then ftrace_graph_call() will not occur in the stack
>> trace taken from a tracer function. The unwinder still needs to recognize an ftrace frame.
>> I don't want to assume ftrace_common_return which is the label that currently follows
>> the above code. So, we need a different label outside the above ifdef.
> 
> Alternatively, I could just move the SYM_INNER_LABEL(ftrace_graph_call..) to outside the ifdef.
> 
> Madhavan
> 

Or, even better, I could just use ftrace_call+4 because that would be the return
address for the tracer function at ftrace_call:

SYM_CODE_START(ftrace_common)
        sub     x0, x30, #AARCH64_INSN_SIZE     // ip (callsite's BL insn)
        mov     x1, x9                          // parent_ip (callsite's LR)
        ldr_l   x2, function_trace_op           // op
        mov     x3, sp                          // regs

SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
        bl      ftrace_stub

I think that would be cleaner. And, I don't need the complicated comments for ftrace_graph_call.

Is this acceptable?

Madhavan

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-03-30 19:09 ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks madvenka
                     ` (3 preceding siblings ...)
  2021-03-30 19:09   ` [RFC PATCH v1 4/4] arm64: Mark stack trace as unreliable if kretprobed functions are present madvenka
@ 2021-04-03 17:01   ` Josh Poimboeuf
  2021-04-04  3:29     ` Madhavan T. Venkataraman
  4 siblings, 1 reply; 25+ messages in thread
From: Josh Poimboeuf @ 2021-04-03 17:01 UTC (permalink / raw)
  To: madvenka
  Cc: mark.rutland, broonie, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, Masami Hiramatsu

On Tue, Mar 30, 2021 at 02:09:51PM -0500, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> 
> There are a number of places in kernel code where the stack trace is not
> reliable. Enhance the unwinder to check for those cases and mark the
> stack trace as unreliable. Once all of the checks are in place, the unwinder
> can be used for livepatching.

This assumes all such places are known.  That's a big assumption, as

a) hand-written asm code may inadvertently skip frame pointer setup;

b) for inline asm which calls a function, the compiler may blindly
   insert it into a function before the frame pointer setup.

That's where objtool stack validation would come in.
   
> Detect EL1 exception frame
> ==========================
> 
> EL1 exceptions can happen on any instruction including instructions in
> the frame pointer prolog or epilog. Depending on where exactly they happen,
> they could render the stack trace unreliable.
> 
> Add all of the EL1 exception handlers to special_functions[].
> 
> 	- el1_sync()
> 	- el1_irq()
> 	- el1_error()
> 	- el1_sync_invalid()
> 	- el1_irq_invalid()
> 	- el1_fiq_invalid()
> 	- el1_error_invalid()

A possibly more robust alternative would be to somehow mark el1
exception frames so the unwinder can detect them more generally.

For example, as described in my previous email, encode the frame pointer
so the unwinder can detect el1 frames automatically.

> Detect ftrace frame
> ===================
> 
> When FTRACE executes at the beginning of a traced function, it creates two
> frames and calls the tracer function:
> 
> 	- One frame for the traced function
> 
> 	- One frame for the caller of the traced function
> 
> That gives a sensible stack trace while executing in the tracer function.
> When FTRACE returns to the traced function, the frames are popped and
> everything is back to normal.
> 
> However, in cases like live patch, the tracer function redirects execution
> to a different function. When FTRACE returns, control will go to that target
> function. A stack trace taken in the tracer function will not show the target
> function. The target function is the real function that we want to track.
> So, the stack trace is unreliable.

I don't think this is a real problem.  Livepatch only checks the stacks
of blocked tasks (and the task calling into livepatch).  So the
reliability of unwinding from the livepatch tracer function itself
(klp_ftrace_handler) isn't a concern since it doesn't sleep.

> To detect FTRACE in a stack trace, add the following to special_functions[]:
> 
> 	- ftrace_graph_call()
> 	- ftrace_graph_caller()
> 
> Please see the diff for a comment that explains why ftrace_graph_call()
> must be checked.
> 
> Also, the Function Graph Tracer modifies the return address of a traced
> function to a return trampoline (return_to_handler()) to gather tracing
> data on function return. Stack traces taken from the traced function and
> functions it calls will not show the original caller of the traced function.
> The unwinder handles this case by getting the original caller from FTRACE.
> 
> However, stack traces taken from the trampoline itself and functions it calls
> are unreliable as the original return address may not be available in
> that context. This is because the trampoline calls FTRACE to gather trace
> data as well as to obtain the actual return address and FTRACE discards the
> record of the original return address along the way.

Again, this shouldn't be a concern because livepatch won't be unwinding
from a function_graph trampoline unless it got preempted somehow (and
then the el1 frame would get detected anyway).

> Add return_to_handler() to special_functions[].
> 
> Check for kretprobe
> ===================
> 
> For functions with a kretprobe set up, probe code executes on entry
> to the function and replaces the return address in the stack frame with a
> kretprobe trampoline. Whenever the function returns, control is
> transferred to the trampoline. The trampoline eventually returns to the
> original return address.
> 
> A stack trace taken while executing in the function (or in functions that
> get called from the function) will not show the original return address.
> Similarly, a stack trace taken while executing in the trampoline itself
> (and functions that get called from the trampoline) will not show the
> original return address. This means that the caller of the probed function
> will not show. This makes the stack trace unreliable.
> 
> Add the kretprobe trampoline to special_functions[].
> 
> FYI, each task contains a task->kretprobe_instances list that can
> theoretically be consulted to find the original return address. But I am
> not entirely sure how to safely traverse that list for stack traces
> not on the current process. So, I have taken the easy way out.

For kretprobes, unwinding from the trampoline or kretprobe handler
shouldn't be a reliability concern for live patching, for similar
reasons as above.

Otherwise, when unwinding from a blocked task which has
'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
original return address.  Masami has been working on an interface to
make that possible for x86.  I assume something similar could be done
for arm64.

> Optprobes
> =========
> 
> Optprobes may be implemented in the future for arm64. For optprobes,
> the relevant trampoline(s) can be added to special_functions[].

Similar comment here, livepatch doesn't unwind from such trampolines.

-- 
Josh


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-03 17:01   ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks Josh Poimboeuf
@ 2021-04-04  3:29     ` Madhavan T. Venkataraman
  2021-04-05 13:24       ` Masami Hiramatsu
  0 siblings, 1 reply; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-04  3:29 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: mark.rutland, broonie, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel, Masami Hiramatsu



On 4/3/21 12:01 PM, Josh Poimboeuf wrote:
> On Tue, Mar 30, 2021 at 02:09:51PM -0500, madvenka@linux.microsoft.com wrote:
>> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
>>
>> There are a number of places in kernel code where the stack trace is not
>> reliable. Enhance the unwinder to check for those cases and mark the
>> stack trace as unreliable. Once all of the checks are in place, the unwinder
>> can be used for livepatching.
> 
> This assumes all such places are known.  That's a big assumption, as
> 
> a) hand-written asm code may inadvertantly skip frame pointer setup;
> 
> b) for inline asm which calls a function, the compiler may blindly
>    insert it into a function before the frame pointer setup.
> 
> That's where objtool stack validation would come in.
>    

Yes. I meant that reliable stack tracing in the kernel is necessary, not that
it is sufficient. Clearly, it is not sufficient by itself: it relies on the
frame pointer being set up correctly in every function, and that has to be
guaranteed by another entity such as objtool.

I will improve the wording and make this clear in the next version.

>> Detect EL1 exception frame
>> ==========================
>>
>> EL1 exceptions can happen on any instruction including instructions in
>> the frame pointer prolog or epilog. Depending on where exactly they happen,
>> they could render the stack trace unreliable.
>>
>> Add all of the EL1 exception handlers to special_functions[].
>>
>> 	- el1_sync()
>> 	- el1_irq()
>> 	- el1_error()
>> 	- el1_sync_invalid()
>> 	- el1_irq_invalid()
>> 	- el1_fiq_invalid()
>> 	- el1_error_invalid()
> 
> A possibly more robust alternative would be to somehow mark el1
> exception frames so the unwinder can detect them more generally.
> 
> For example, as described in my previous email, encode the frame pointer
> so the unwinder can detect el1 frames automatically.
> 

Encoding the frame pointer by setting the LSB (as on x86) was my first solution.
Mark Rutland NAKed it. His objection was that it would confuse debuggers, which
expect the low 4 bits of the frame pointer to be 0. I agree with that objection.

My other problem with the encoding was that there is no way to tell whether the
LSB was set deliberately by the entry code or by stack corruption.

My second attempt was to encode the frame pointer indirectly. That is, make
pt_regs->stackframe the exception frame and use other fields in the pt_regs
(including a frame type encoding field) for verification.

Mark Rutland NAKed it. His objection (if I am rephrasing it correctly) was that
garbage on the stack may accidentally match the values the unwinder checks in
the pt_regs (however unlikely that match might be).

The consensus was that the return PC must be checked against the special
functions to recognize those special cases, since those functions are only
invoked in those special contexts and nowhere else.

As an aside, Mark Brown suggested (if I recall correctly) that the exception
functions could be placed in a special exception section so the unwinder can
check a return PC against the section bounds instead of individual functions.
I did consider implementing this, but I also needed a way to handle the FTRACE
and KPROBE trampolines, so I did not go that route.
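
For what it is worth, the check itself stays simple either way. Here is a
minimal sketch of the table-based approach; the names and the table size are
illustrative, not the exact code in the patches:

struct function_range {
	unsigned long	start;
	unsigned long	end;
};

/*
 * Populated once at init: kallsyms_lookup_name() gives the start address of
 * each special function and kallsyms_lookup_size_offset() gives its size.
 */
static struct function_range special_functions[16];

static bool pc_in_special_function(unsigned long pc)
{
	struct function_range *func;

	for (func = special_functions; func->start; func++) {
		if (pc >= func->start && pc < func->end)
			return true;
	}
	return false;
}

/*
 * The unwinder would mark the stack trace as unreliable if this returns
 * true for any return PC it encounters.
 */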


>> Detect ftrace frame
>> ===================
>>
>> When FTRACE executes at the beginning of a traced function, it creates two
>> frames and calls the tracer function:
>>
>> 	- One frame for the traced function
>>
>> 	- One frame for the caller of the traced function
>>
>> That gives a sensible stack trace while executing in the tracer function.
>> When FTRACE returns to the traced function, the frames are popped and
>> everything is back to normal.
>>
>> However, in cases like live patch, the tracer function redirects execution
>> to a different function. When FTRACE returns, control will go to that target
>> function. A stack trace taken in the tracer function will not show the target
>> function. The target function is the real function that we want to track.
>> So, the stack trace is unreliable.
> 
> I don't think this is a real problem.  Livepatch only checks the stacks
> of blocked tasks (and the task calling into livepatch).  So the
> reliability of unwinding from the livepatch tracer function itself
> (klp_ftrace_handler) isn't a concern since it doesn't sleep.
> 

My thinking was that arch_stack_walk_reliable() should provide a reliable stack
trace without assuming anything about its consumers. It should not assume that
livepatch is the only consumer, even though it might be today.

Theoretically, there could be a tracer function that calls some kernel function
F() that can sleep. Is that not allowed?

Or, F() could call arch_stack_walk_reliable() on the current task for debugging
or tracing purposes. It should still work correctly in that case.
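
For example, something like the following (purely hypothetical debug code, not
part of any patch) should still produce a correct result even when it is
called from a tracer:

static void debug_dump_reliable_stack(void)
{
	unsigned long entries[32];
	int i, nr;

	nr = stack_trace_save_tsk_reliable(current, entries, ARRAY_SIZE(entries));
	if (nr < 0) {
		pr_info("stack trace is not reliable here\n");
		return;
	}

	for (i = 0; i < nr; i++)
		pr_info("  %pS\n", (void *)entries[i]);
}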

>> To detect FTRACE in a stack trace, add the following to special_functions[]:
>>
>> 	- ftrace_graph_call()
>> 	- ftrace_graph_caller()
>>
>> Please see the diff for a comment that explains why ftrace_graph_call()
>> must be checked.
>>
>> Also, the Function Graph Tracer modifies the return address of a traced
>> function to a return trampoline (return_to_handler()) to gather tracing
>> data on function return. Stack traces taken from the traced function and
>> functions it calls will not show the original caller of the traced function.
>> The unwinder handles this case by getting the original caller from FTRACE.
>>
>> However, stack traces taken from the trampoline itself and functions it calls
>> are unreliable as the original return address may not be available in
>> that context. This is because the trampoline calls FTRACE to gather trace
>> data as well as to obtain the actual return address and FTRACE discards the
>> record of the original return address along the way.
> 
> Again, this shouldn't be a concern because livepatch won't be unwinding
> from a function_graph trampoline unless it got preempted somehow (and
> then the el1 frame would get detected anyway).
> 

Please see previous answer.

>> Add return_to_handler() to special_functions[].
>>
>> Check for kretprobe
>> ===================
>>
>> For functions with a kretprobe set up, probe code executes on entry
>> to the function and replaces the return address in the stack frame with a
>> kretprobe trampoline. Whenever the function returns, control is
>> transferred to the trampoline. The trampoline eventually returns to the
>> original return address.
>>
>> A stack trace taken while executing in the function (or in functions that
>> get called from the function) will not show the original return address.
>> Similarly, a stack trace taken while executing in the trampoline itself
>> (and functions that get called from the trampoline) will not show the
>> original return address. This means that the caller of the probed function
>> will not show. This makes the stack trace unreliable.
>>
>> Add the kretprobe trampoline to special_functions[].
>>
>> FYI, each task contains a task->kretprobe_instances list that can
>> theoretically be consulted to find the original return address. But I am
>> not entirely sure how to safely traverse that list for stack traces
>> not on the current process. So, I have taken the easy way out.
> 
> For kretprobes, unwinding from the trampoline or kretprobe handler
> shouldn't be a reliability concern for live patching, for similar
> reasons as above.
> 

Please see previous answer.

> Otherwise, when unwinding from a blocked task which has
> 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
> original return address.  Masami has been working on an interface to
> make that possible for x86.  I assume something similar could be done
> for arm64.
> 

OK. Until that is available, this case needs to be addressed.

Madhavan

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-04  3:29     ` Madhavan T. Venkataraman
@ 2021-04-05 13:24       ` Masami Hiramatsu
  2021-04-05 13:46         ` Madhavan T. Venkataraman
  2021-04-05 14:56         ` Madhavan T. Venkataraman
  0 siblings, 2 replies; 25+ messages in thread
From: Masami Hiramatsu @ 2021-04-05 13:24 UTC (permalink / raw)
  To: Madhavan T. Venkataraman
  Cc: Josh Poimboeuf, mark.rutland, broonie, jthierry, catalin.marinas,
	will, linux-arm-kernel, live-patching, linux-kernel,
	Masami Hiramatsu

Hi Madhavan,

On Sat, 3 Apr 2021 22:29:12 -0500
"Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:


> >> Check for kretprobe
> >> ===================
> >>
> >> For functions with a kretprobe set up, probe code executes on entry
> >> to the function and replaces the return address in the stack frame with a
> >> kretprobe trampoline. Whenever the function returns, control is
> >> transferred to the trampoline. The trampoline eventually returns to the
> >> original return address.
> >>
> >> A stack trace taken while executing in the function (or in functions that
> >> get called from the function) will not show the original return address.
> >> Similarly, a stack trace taken while executing in the trampoline itself
> >> (and functions that get called from the trampoline) will not show the
> >> original return address. This means that the caller of the probed function
> >> will not show. This makes the stack trace unreliable.
> >>
> >> Add the kretprobe trampoline to special_functions[].
> >>
> >> FYI, each task contains a task->kretprobe_instances list that can
> >> theoretically be consulted to find the original return address. But I am
> >> not entirely sure how to safely traverse that list for stack traces
> >> not on the current process. So, I have taken the easy way out.
> > 
> > For kretprobes, unwinding from the trampoline or kretprobe handler
> > shouldn't be a reliability concern for live patching, for similar
> > reasons as above.
> > 
> 
> Please see previous answer.
> 
> > Otherwise, when unwinding from a blocked task which has
> > 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
> > original return address.  Masami has been working on an interface to
> > make that possible for x86.  I assume something similar could be done
> > for arm64.
> > 
> 
> OK. Until that is available, this case needs to be addressed.

Actually, I've already done that for arm64 :) See the patch below.
(I also have similar code for arm32; what I'm considering now is how to unify
the x86/arm/arm64 kretprobe_find_ret_addr() implementations, since they are
very similar.)

This applies on top of my x86 series v5:

https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/

Thank you,


From 947cf6cf1fd4154edd5533d18c2f8dfedc8d993d Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat@kernel.org>
Date: Sat, 20 Mar 2021 00:14:29 +0900
Subject: [PATCH] arm64: Recover kretprobe modified return address in
 stacktrace

Since kretprobe replaces a function's return address on the stack with
kretprobe_trampoline, the arm64 unwinder shows the trampoline instead
of the correct return address.

Find the correct return address in the per-task kretprobe_instances
list and verify that it lies between the caller fp and the callee fp.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/arm64/include/asm/stacktrace.h |  2 ++
 arch/arm64/kernel/probes/kprobes.c  | 28 ++++++++++++++++++++++++++++
 arch/arm64/kernel/stacktrace.c      |  3 +++
 kernel/kprobes.c                    |  8 ++++----
 4 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index eb29b1fe8255..50ebc9e9dba9 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -9,6 +9,7 @@
 #include <linux/sched.h>
 #include <linux/sched/task_stack.h>
 #include <linux/types.h>
+#include <linux/llist.h>
 
 #include <asm/memory.h>
 #include <asm/ptrace.h>
@@ -59,6 +60,7 @@ struct stackframe {
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	int graph;
 #endif
+	struct llist_node *kr_cur;
 };
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index fce681fdfce6..204e475cbff3 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -410,6 +410,34 @@ int __init arch_populate_kprobe_blacklist(void)
 	return ret;
 }
 
+unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
+					struct llist_node **cur);
+
+unsigned long kretprobe_find_ret_addr(struct task_struct *tsk,
+				void *fp, struct llist_node **cur)
+{
+	struct kretprobe_instance *ri;
+	unsigned long ret;
+
+	do {
+		ret = __kretprobe_find_ret_addr(tsk, cur);
+		if (!ret)
+			return ret;
+		ri = container_of(*cur, struct kretprobe_instance, llist);
+		/*
+		 * Since arm64 stores the stack pointer of the entry of target
+		 * function (callee) to ri->fp, the given real @fp must be
+		 * smaller than ri->fp, but bigger than the previous ri->fp.
+		 *
+		 * callee sp (prev ri->fp)
+		 * fp (and *saved_lr)
+		 * caller sp (ri->fp)
+		 */
+	} while (ri->fp <= fp);
+
+	return ret;
+}
+
 void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
 {
 	return (void *)kretprobe_trampoline_handler(regs, (void *)kernel_stack_pointer(regs));
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ad20981dfda4..956486d4ac10 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -105,6 +105,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 		frame->pc = ret_stack->ret;
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+	if (is_kretprobe_trampoline(frame->pc))
+		frame->pc = kretprobe_find_ret_addr(tsk, (void *)fp, &frame->kr_cur);
 
 	frame->pc = ptrauth_strip_insn_pac(frame->pc);
 
@@ -199,6 +201,7 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
 {
 	struct stackframe frame;
 
+	memset(&frame, 0, sizeof(frame));
 	if (regs)
 		start_backtrace(&frame, regs->regs[29], regs->pc);
 	else if (task == current)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 4ce3e6f5d28d..12677a463561 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1859,8 +1859,8 @@ static struct notifier_block kprobe_exceptions_nb = {
 #ifdef CONFIG_KRETPROBES
 
 /* This assumes the tsk is current or the task which is not running. */
-static unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
-					       struct llist_node **cur)
+unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
+					struct llist_node **cur)
 {
 	struct kretprobe_instance *ri = NULL;
 	struct llist_node *node = *cur;
@@ -1882,8 +1882,8 @@ static unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(__kretprobe_find_ret_addr);
 
-unsigned long kretprobe_find_ret_addr(struct task_struct *tsk, void *fp,
-				      struct llist_node **cur)
+unsigned long __weak kretprobe_find_ret_addr(struct task_struct *tsk,
+				void *fp, struct llist_node **cur)
 {
 	struct kretprobe_instance *ri = NULL;
 	unsigned long ret;
-- 
2.25.1



-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-05 13:24       ` Masami Hiramatsu
@ 2021-04-05 13:46         ` Madhavan T. Venkataraman
  2021-04-05 14:56         ` Madhavan T. Venkataraman
  1 sibling, 0 replies; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-05 13:46 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Josh Poimboeuf, mark.rutland, broonie, jthierry, catalin.marinas,
	will, linux-arm-kernel, live-patching, linux-kernel



On 4/5/21 8:24 AM, Masami Hiramatsu wrote:
> Hi Madhavan,
> 
> On Sat, 3 Apr 2021 22:29:12 -0500
> "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:
> 
> 
>>>> Check for kretprobe
>>>> ===================
>>>>
>>>> For functions with a kretprobe set up, probe code executes on entry
>>>> to the function and replaces the return address in the stack frame with a
>>>> kretprobe trampoline. Whenever the function returns, control is
>>>> transferred to the trampoline. The trampoline eventually returns to the
>>>> original return address.
>>>>
>>>> A stack trace taken while executing in the function (or in functions that
>>>> get called from the function) will not show the original return address.
>>>> Similarly, a stack trace taken while executing in the trampoline itself
>>>> (and functions that get called from the trampoline) will not show the
>>>> original return address. This means that the caller of the probed function
>>>> will not show. This makes the stack trace unreliable.
>>>>
>>>> Add the kretprobe trampoline to special_functions[].
>>>>
>>>> FYI, each task contains a task->kretprobe_instances list that can
>>>> theoretically be consulted to find the original return address. But I am
>>>> not entirely sure how to safely traverse that list for stack traces
>>>> not on the current process. So, I have taken the easy way out.
>>>
>>> For kretprobes, unwinding from the trampoline or kretprobe handler
>>> shouldn't be a reliability concern for live patching, for similar
>>> reasons as above.
>>>
>>
>> Please see previous answer.
>>
>>> Otherwise, when unwinding from a blocked task which has
>>> 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
>>> original return address.  Masami has been working on an interface to
>>> make that possible for x86.  I assume something similar could be done
>>> for arm64.
>>>
>>
>> OK. Until that is available, this case needs to be addressed.
> 
> Actually, I've done that on arm64 :) See below patch.
> (and I also have a similar code for arm32, what I'm considering is how
> to unify x86/arm/arm64 kretprobe_find_ret_addr(), since those are very
> similar.)
> 
> This is applicable on my x86 series v5
> 
> https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/
> 
> Thank you,
> 
> 

OK. I will take a look.

Thanks.

Madhavan

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-05 13:24       ` Masami Hiramatsu
  2021-04-05 13:46         ` Madhavan T. Venkataraman
@ 2021-04-05 14:56         ` Madhavan T. Venkataraman
  2021-04-05 17:12           ` Madhavan T. Venkataraman
  2021-04-05 23:40           ` Masami Hiramatsu
  1 sibling, 2 replies; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-05 14:56 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Josh Poimboeuf, mark.rutland, broonie, jthierry, catalin.marinas,
	will, linux-arm-kernel, live-patching, linux-kernel



On 4/5/21 8:24 AM, Masami Hiramatsu wrote:
> Hi Madhavan,
> 
> On Sat, 3 Apr 2021 22:29:12 -0500
> "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:
> 
> 
>>>> Check for kretprobe
>>>> ===================
>>>>
>>>> For functions with a kretprobe set up, probe code executes on entry
>>>> to the function and replaces the return address in the stack frame with a
>>>> kretprobe trampoline. Whenever the function returns, control is
>>>> transferred to the trampoline. The trampoline eventually returns to the
>>>> original return address.
>>>>
>>>> A stack trace taken while executing in the function (or in functions that
>>>> get called from the function) will not show the original return address.
>>>> Similarly, a stack trace taken while executing in the trampoline itself
>>>> (and functions that get called from the trampoline) will not show the
>>>> original return address. This means that the caller of the probed function
>>>> will not show. This makes the stack trace unreliable.
>>>>
>>>> Add the kretprobe trampoline to special_functions[].
>>>>
>>>> FYI, each task contains a task->kretprobe_instances list that can
>>>> theoretically be consulted to find the original return address. But I am
>>>> not entirely sure how to safely traverse that list for stack traces
>>>> not on the current process. So, I have taken the easy way out.
>>>
>>> For kretprobes, unwinding from the trampoline or kretprobe handler
>>> shouldn't be a reliability concern for live patching, for similar
>>> reasons as above.
>>>
>>
>> Please see previous answer.
>>
>>> Otherwise, when unwinding from a blocked task which has
>>> 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
>>> original return address.  Masami has been working on an interface to
>>> make that possible for x86.  I assume something similar could be done
>>> for arm64.
>>>
>>
>> OK. Until that is available, this case needs to be addressed.
> 
> Actually, I've done that on arm64 :) See below patch.
> (and I also have a similar code for arm32, what I'm considering is how
> to unify x86/arm/arm64 kretprobe_find_ret_addr(), since those are very
> similar.)
> 
> This is applicable on my x86 series v5
> 
> https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/
> 
> Thank you,
> 
> 

I took a brief look at your changes. Looks reasonable.

However, for now, I am going to include the kretprobe_trampoline in the special_functions[]
array until your changes are merged. At that point, it is just a matter of deleting
kretprobe_trampoline from the special_functions[] array. That is all.
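
In other words, with the kind of name table discussed earlier in the thread,
it is one entry now and a one-line deletion later (names illustrative):

static const char *const special_function_names[] = {
	/* ... EL1 handlers, ftrace entry points ... */
	"kretprobe_trampoline",	/* drop this entry once Masami's series lands */
};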

I hope that is fine with everyone.

Madhavan


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-05 14:56         ` Madhavan T. Venkataraman
@ 2021-04-05 17:12           ` Madhavan T. Venkataraman
  2021-04-05 23:39             ` Masami Hiramatsu
  2021-04-05 23:40           ` Masami Hiramatsu
  1 sibling, 1 reply; 25+ messages in thread
From: Madhavan T. Venkataraman @ 2021-04-05 17:12 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Josh Poimboeuf, mark.rutland, broonie, jthierry, catalin.marinas,
	will, linux-arm-kernel, live-patching, linux-kernel



On 4/5/21 9:56 AM, Madhavan T. Venkataraman wrote:
> 
> 
> On 4/5/21 8:24 AM, Masami Hiramatsu wrote:
>> Hi Madhavan,
>>
>> On Sat, 3 Apr 2021 22:29:12 -0500
>> "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:
>>
>>
>>>>> Check for kretprobe
>>>>> ===================
>>>>>
>>>>> For functions with a kretprobe set up, probe code executes on entry
>>>>> to the function and replaces the return address in the stack frame with a
>>>>> kretprobe trampoline. Whenever the function returns, control is
>>>>> transferred to the trampoline. The trampoline eventually returns to the
>>>>> original return address.
>>>>>
>>>>> A stack trace taken while executing in the function (or in functions that
>>>>> get called from the function) will not show the original return address.
>>>>> Similarly, a stack trace taken while executing in the trampoline itself
>>>>> (and functions that get called from the trampoline) will not show the
>>>>> original return address. This means that the caller of the probed function
>>>>> will not show. This makes the stack trace unreliable.
>>>>>
>>>>> Add the kretprobe trampoline to special_functions[].
>>>>>
>>>>> FYI, each task contains a task->kretprobe_instances list that can
>>>>> theoretically be consulted to find the original return address. But I am
>>>>> not entirely sure how to safely traverse that list for stack traces
>>>>> not on the current process. So, I have taken the easy way out.
>>>>
>>>> For kretprobes, unwinding from the trampoline or kretprobe handler
>>>> shouldn't be a reliability concern for live patching, for similar
>>>> reasons as above.
>>>>
>>>
>>> Please see previous answer.
>>>
>>>> Otherwise, when unwinding from a blocked task which has
>>>> 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
>>>> original return address.  Masami has been working on an interface to
>>>> make that possible for x86.  I assume something similar could be done
>>>> for arm64.
>>>>
>>>
>>> OK. Until that is available, this case needs to be addressed.
>>
>> Actually, I've done that on arm64 :) See below patch.
>> (and I also have a similar code for arm32, what I'm considering is how
>> to unify x86/arm/arm64 kretprobe_find_ret_addr(), since those are very
>> similar.)
>>
>> This is applicable on my x86 series v5
>>
>> https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/
>>
>> Thank you,
>>
>>
> 
> I took a brief look at your changes. Looks reasonable.
> 
> However, for now, I am going to include the kretprobe_trampoline in the special_functions[]
> array until your changes are merged. At that point, it is just a matter of deleting
> kretprobe_trampoline from the special_functions[] array. That is all.
> 
> I hope that is fine with everyone.
> 

Actually, there may still be a problem to solve.

If arch_stack_walk_reliable() is ever called from within kretprobe_trampoline() for debugging or
other purposes after the instance is deleted from the task instance list, it would not be able
to retrieve the original return address.

The stack trace would be unreliable in that case, would it not?

Madhavan


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-05 17:12           ` Madhavan T. Venkataraman
@ 2021-04-05 23:39             ` Masami Hiramatsu
  0 siblings, 0 replies; 25+ messages in thread
From: Masami Hiramatsu @ 2021-04-05 23:39 UTC (permalink / raw)
  To: Madhavan T. Venkataraman
  Cc: Josh Poimboeuf, mark.rutland, broonie, jthierry, catalin.marinas,
	will, linux-arm-kernel, live-patching, linux-kernel

On Mon, 5 Apr 2021 12:12:08 -0500
"Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:

> 
> 
> On 4/5/21 9:56 AM, Madhavan T. Venkataraman wrote:
> > 
> > 
> > On 4/5/21 8:24 AM, Masami Hiramatsu wrote:
> >> Hi Madhavan,
> >>
> >> On Sat, 3 Apr 2021 22:29:12 -0500
> >> "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:
> >>
> >>
> >>>>> Check for kretprobe
> >>>>> ===================
> >>>>>
> >>>>> For functions with a kretprobe set up, probe code executes on entry
> >>>>> to the function and replaces the return address in the stack frame with a
> >>>>> kretprobe trampoline. Whenever the function returns, control is
> >>>>> transferred to the trampoline. The trampoline eventually returns to the
> >>>>> original return address.
> >>>>>
> >>>>> A stack trace taken while executing in the function (or in functions that
> >>>>> get called from the function) will not show the original return address.
> >>>>> Similarly, a stack trace taken while executing in the trampoline itself
> >>>>> (and functions that get called from the trampoline) will not show the
> >>>>> original return address. This means that the caller of the probed function
> >>>>> will not show. This makes the stack trace unreliable.
> >>>>>
> >>>>> Add the kretprobe trampoline to special_functions[].
> >>>>>
> >>>>> FYI, each task contains a task->kretprobe_instances list that can
> >>>> theoretically be consulted to find the original return address. But I am
> >>>>> not entirely sure how to safely traverse that list for stack traces
> >>>>> not on the current process. So, I have taken the easy way out.
> >>>>
> >>>> For kretprobes, unwinding from the trampoline or kretprobe handler
> >>>> shouldn't be a reliability concern for live patching, for similar
> >>>> reasons as above.
> >>>>
> >>>
> >>> Please see previous answer.
> >>>
> >>>> Otherwise, when unwinding from a blocked task which has
> >>>> 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
> >>>> original return address.  Masami has been working on an interface to
> >>>> make that possible for x86.  I assume something similar could be done
> >>>> for arm64.
> >>>>
> >>>
> >>> OK. Until that is available, this case needs to be addressed.
> >>
> >> Actually, I've done that on arm64 :) See below patch.
> >> (and I also have a similar code for arm32, what I'm considering is how
> >> to unify x86/arm/arm64 kretprobe_find_ret_addr(), since those are very
> >> similar.)
> >>
> >> This is applicable on my x86 series v5
> >>
> >> https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/
> >>
> >> Thank you,
> >>
> >>
> > 
> > I took a brief look at your changes. Looks reasonable.
> > 
> > However, for now, I am going to include the kretprobe_trampoline in the special_functions[]
> > array until your changes are merged. At that point, it is just a matter of deleting
> > kretprobe_trampoline from the special_functions[] array. That is all.
> > 
> > I hope that is fine with everyone.
> > 
> 
> Actually, there may still be a problem to solve.
> 
> If arch_stack_walk_reliable() is ever called from within kretprobe_trampoline() for debugging or
> other purposes after the instance is deleted from the task instance list, it would not be able
> to retrieve the original return address.
> 
> The stack trace would be unreliable in that case, would it not?

Good catch! I'm preparing a patch to fix that case (currently only for x86,
see below). The arm64 kretprobe may have to modify its stack layout in a
similar way so that the unwinder can find the return address on the stack.

Thank you,

From cdca74a1ebc174062eb99a376072002ae21f7d7e Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat@kernel.org>
Date: Mon, 8 Mar 2021 00:22:51 +0900
Subject: [PATCH] x86/kprobes: Fixup return address in generic trampoline
 handler

On x86, the kretprobe trampoline address in the stack frame is replaced
with the real return address after returning from trampoline_handler.
Until that fixup happens, the real return address can be found in
current->kretprobe_instances.

However, there is a window between updating current->kretprobe_instances
and fixing the address on the stack. If an interrupt arrives in that
window and the interrupt handler takes a stack trace, the unwind may
fail because it can no longer get the correct return address from
current->kretprobe_instances.

Minimize that window by fixing up the return address right before
updating current->kretprobe_instances.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/core.c | 14 ++++++++++++--
 kernel/kprobes.c               |  8 ++++++++
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 00c5944ae8f6..950b8e873937 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1032,6 +1032,7 @@ STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
 #undef UNWIND_HINT_FUNC
 #define UNWIND_HINT_FUNC
 #endif
+
 /*
  * When a retprobed function returns, this code saves registers and
  * calls trampoline_handler() runs, which calls the kretprobe's handler.
@@ -1073,6 +1074,16 @@ asm(
 );
 NOKPROBE_SYMBOL(kretprobe_trampoline);
 
+void arch_kretprobe_fixup_return(struct pt_regs *regs,
+				 unsigned long correct_ret_addr)
+{
+	unsigned long *frame_pointer;
+	frame_pointer = ((unsigned long *)&regs->sp) + 1;
+
+	/* Replace fake return address with real one. */
+	*frame_pointer = correct_ret_addr;
+}
+
 /*
  * Called from kretprobe_trampoline
  */
@@ -1090,8 +1101,7 @@ __used __visible void trampoline_handler(struct pt_regs *regs)
 	regs->sp += sizeof(long);
 	frame_pointer = ((unsigned long *)&regs->sp) + 1;
 
-	/* Replace fake return address with real one. */
-	*frame_pointer = kretprobe_trampoline_handler(regs, frame_pointer);
+	kretprobe_trampoline_handler(regs, frame_pointer);
 	/*
 	 * Move flags to sp so that kretprobe_trapmoline can return
 	 * right after popf.
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 12677a463561..3c72df5b31dd 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1899,6 +1899,12 @@ unsigned long __weak kretprobe_find_ret_addr(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(kretprobe_find_ret_addr);
 
+void __weak arch_kretprobe_fixup_return(struct pt_regs *regs,
+					unsigned long correct_ret_addr)
+{
+	/* Do nothing by default. */
+}
+
 unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
 					     void *frame_pointer)
 {
@@ -1939,6 +1945,8 @@ unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
 		first = first->next;
 	}
 
+	arch_kretprobe_fixup_return(regs, (unsigned long)correct_ret_addr);
+
 	/* Unlink all nodes for this frame. */
 	first = current->kretprobe_instances.first;
 	current->kretprobe_instances.first = node->next;
-- 
2.25.1





-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
  2021-04-05 14:56         ` Madhavan T. Venkataraman
  2021-04-05 17:12           ` Madhavan T. Venkataraman
@ 2021-04-05 23:40           ` Masami Hiramatsu
  1 sibling, 0 replies; 25+ messages in thread
From: Masami Hiramatsu @ 2021-04-05 23:40 UTC (permalink / raw)
  To: Madhavan T. Venkataraman
  Cc: Josh Poimboeuf, mark.rutland, broonie, jthierry, catalin.marinas,
	will, linux-arm-kernel, live-patching, linux-kernel

On Mon, 5 Apr 2021 09:56:48 -0500
"Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:

> 
> 
> On 4/5/21 8:24 AM, Masami Hiramatsu wrote:
> > Hi Madhavan,
> > 
> > On Sat, 3 Apr 2021 22:29:12 -0500
> > "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:
> > 
> > 
> >>>> Check for kretprobe
> >>>> ===================
> >>>>
> >>>> For functions with a kretprobe set up, probe code executes on entry
> >>>> to the function and replaces the return address in the stack frame with a
> >>>> kretprobe trampoline. Whenever the function returns, control is
> >>>> transferred to the trampoline. The trampoline eventually returns to the
> >>>> original return address.
> >>>>
> >>>> A stack trace taken while executing in the function (or in functions that
> >>>> get called from the function) will not show the original return address.
> >>>> Similarly, a stack trace taken while executing in the trampoline itself
> >>>> (and functions that get called from the trampoline) will not show the
> >>>> original return address. This means that the caller of the probed function
> >>>> will not show. This makes the stack trace unreliable.
> >>>>
> >>>> Add the kretprobe trampoline to special_functions[].
> >>>>
> >>>> FYI, each task contains a task->kretprobe_instances list that can
> >>>> theoretically be consulted to find the original return address. But I am
> >>>> not entirely sure how to safely traverse that list for stack traces
> >>>> not on the current process. So, I have taken the easy way out.
> >>>
> >>> For kretprobes, unwinding from the trampoline or kretprobe handler
> >>> shouldn't be a reliability concern for live patching, for similar
> >>> reasons as above.
> >>>
> >>
> >> Please see previous answer.
> >>
> >>> Otherwise, when unwinding from a blocked task which has
> >>> 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
> >>> original return address.  Masami has been working on an interface to
> >>> make that possible for x86.  I assume something similar could be done
> >>> for arm64.
> >>>
> >>
> >> OK. Until that is available, this case needs to be addressed.
> > 
> > Actually, I've done that on arm64 :) See below patch.
> > (and I also have a similar code for arm32, what I'm considering is how
> > to unify x86/arm/arm64 kretprobe_find_ret_addr(), since those are very
> > similar.)
> > 
> > This is applicable on my x86 series v5
> > 
> > https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/
> > 
> > Thank you,
> > 
> > 
> 
> I took a brief look at your changes. Looks reasonable.
> 
> However, for now, I am going to include the kretprobe_trampoline in the special_functions[]
> array until your changes are merged. At that point, it is just a matter of deleting
> kretprobe_trampoline from the special_functions[] array. That is all.
> 
> I hope that is fine with everyone.

Agreed, that is reasonable until my series is merged.

Thank you,


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable
  2021-04-01 19:47               ` Madhavan T. Venkataraman
@ 2021-04-06 11:02                 ` Mark Brown
  0 siblings, 0 replies; 25+ messages in thread
From: Mark Brown @ 2021-04-06 11:02 UTC (permalink / raw)
  To: Madhavan T. Venkataraman
  Cc: mark.rutland, jpoimboe, jthierry, catalin.marinas, will,
	linux-arm-kernel, live-patching, linux-kernel

On Thu, Apr 01, 2021 at 02:47:11PM -0500, Madhavan T. Venkataraman wrote:
> On 4/1/21 1:53 PM, Madhavan T. Venkataraman wrote:

> > Alternatively, I could just move the SYM_INNER_LABEL(ftrace_graph_call..) to outside the ifdef.

> Or, even better, I could just use ftrace_call+4 because that would be the return
> address for the tracer function at ftrace_call:

> I think that would be cleaner. And, I don't need the complicated comments for ftrace_graph_call.

> Is this acceptable?

I think either of those should be fine.
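
FWIW the resulting check would then look something like the below; this is
illustrative only and assumes the ftrace_call label is visible to the unwinder
code and that a 'reliable' flag exists:

extern void ftrace_call(void);	/* assembly label in entry-ftrace.S */

/* Return address of the tracer invoked at ftrace_call: */
if (frame->pc == (unsigned long)ftrace_call + 4 /* AArch64 insn size */)
	frame->reliable = false;	/* hypothetical flag */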

^ permalink raw reply	[flat|nested] 25+ messages in thread

Thread overview: 25+ messages
     [not found] <77bd5edeea72d44533c769b1e8c0fea7a9d7eb3a>
2021-03-30 19:09 ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks madvenka
2021-03-30 19:09   ` [RFC PATCH v1 1/4] arm64: Implement infrastructure for " madvenka
2021-04-01 15:27     ` Mark Brown
2021-04-01 17:44       ` Madhavan T. Venkataraman
2021-03-30 19:09   ` [RFC PATCH v1 2/4] arm64: Mark a stack trace unreliable if an EL1 exception frame is detected madvenka
2021-04-01 17:21     ` Mark Brown
2021-03-30 19:09   ` [RFC PATCH v1 3/4] arm64: Detect FTRACE cases that make the stack trace unreliable madvenka
2021-04-01 14:27     ` Mark Brown
2021-04-01 17:43       ` Madhavan T. Venkataraman
2021-04-01 18:28         ` Mark Brown
2021-04-01 18:40           ` Madhavan T. Venkataraman
2021-04-01 18:53             ` Madhavan T. Venkataraman
2021-04-01 19:47               ` Madhavan T. Venkataraman
2021-04-06 11:02                 ` Mark Brown
2021-04-01 17:48       ` Madhavan T. Venkataraman
2021-03-30 19:09   ` [RFC PATCH v1 4/4] arm64: Mark stack trace as unreliable if kretprobed functions are present madvenka
2021-04-01 17:23     ` Mark Brown
2021-04-03 17:01   ` [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks Josh Poimboeuf
2021-04-04  3:29     ` Madhavan T. Venkataraman
2021-04-05 13:24       ` Masami Hiramatsu
2021-04-05 13:46         ` Madhavan T. Venkataraman
2021-04-05 14:56         ` Madhavan T. Venkataraman
2021-04-05 17:12           ` Madhavan T. Venkataraman
2021-04-05 23:39             ` Masami Hiramatsu
2021-04-05 23:40           ` Masami Hiramatsu
