From: madvenka@linux.microsoft.com
To: broonie@kernel.org, jpoimboe@redhat.com, mark.rutland@arm.com,
	jthierry@redhat.com, catalin.marinas@arm.com, will@kernel.org,
	jmorris@namei.org, pasha.tatashin@soleen.com,
	linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v3 2/4] arm64: Check the return PC against unreliable code sections
Date: Mon, 3 May 2021 12:36:13 -0500
Message-ID: <20210503173615.21576-3-madvenka@linux.microsoft.com> (raw)
In-Reply-To: <20210503173615.21576-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

Create a sym_code_ranges[] array to cover the following text sections
that contain functions defined as SYM_CODE_*(). These functions are
low-level functions (and do not have a proper frame pointer prolog and
epilog). So, they are inherently unreliable from a stack unwinding
perspective.

	.entry.text
	.idmap.text
	.hyp.idmap.text
	.hyp.text
	.hibernate_exit.text
	.entry.tramp.text

If a return PC falls in any of these, mark the stack trace unreliable.

The only exception to this is - if the unwinder has reached the last
frame already, it will not mark the stack trace unreliable since there
is no more unwinding to do. E.g.,

	- ret_from_fork() occurs at the end of the stack trace of
	  kernel tasks.

	- el0_*() functions occur at the end of EL0 exception stack
	  traces. This covers all user task entries into the kernel.

NOTE:
	- EL1 exception handlers are in .entry.text. So, stack traces that
	  contain those functions will be marked not reliable. This covers
	  interrupts, exceptions and breakpoints encountered while
	  executing in the kernel.

	- At the end of an interrupt, the kernel can preempt the current
	  task if required. So, the stack traces of all preempted tasks
	  will show the interrupt frame and will be considered unreliable.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/kernel/stacktrace.c | 54 ++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index c21a1bca28f3..1ff14615a55a 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -15,9 +15,48 @@
 #include <asm/irq.h>
 #include <asm/pointer_auth.h>
+#include <asm/sections.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
+struct code_range {
+	unsigned long	start;
+	unsigned long	end;
+};
+
+struct code_range	sym_code_ranges[] =
+{
+	/* non-unwindable ranges */
+	{ (unsigned long)__entry_text_start,
+	  (unsigned long)__entry_text_end },
+	{ (unsigned long)__idmap_text_start,
+	  (unsigned long)__idmap_text_end },
+	{ (unsigned long)__hyp_idmap_text_start,
+	  (unsigned long)__hyp_idmap_text_end },
+	{ (unsigned long)__hyp_text_start,
+	  (unsigned long)__hyp_text_end },
+#ifdef CONFIG_HIBERNATION
+	{ (unsigned long)__hibernate_exit_text_start,
+	  (unsigned long)__hibernate_exit_text_end },
+#endif
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	{ (unsigned long)__entry_tramp_text_start,
+	  (unsigned long)__entry_tramp_text_end },
+#endif
+	{ /* sentinel */ }
+};
+
+static struct code_range *lookup_range(unsigned long pc)
+{
+	struct code_range *range;
+
+	for (range = sym_code_ranges; range->start; range++) {
+		if (pc >= range->start && pc < range->end)
+			return range;
+	}
+	return range;
+}
+
 /*
  * AArch64 PCS assigns the frame pointer to x29.
  *
@@ -43,6 +82,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
 	struct stack_info info;
+	struct code_range *range;
 
 	frame->reliable = true;
 
@@ -103,6 +143,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 		return 0;
 	}
 
+	range = lookup_range(frame->pc);
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (tsk->ret_stack &&
 		frame->pc == (unsigned long)return_to_handler) {
@@ -118,9 +160,21 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 			return -EINVAL;
 		frame->pc = ret_stack->ret;
 		frame->pc = ptrauth_strip_insn_pac(frame->pc);
+		return 0;
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
+	if (!range->start)
+		return 0;
+
+	/*
+	 * The return PC falls in an unreliable function. If the final frame
+	 * has been reached, no more unwinding is needed. Otherwise, mark the
+	 * stack trace not reliable.
+	 */
+	if (frame->fp)
+		frame->reliable = false;
+
 	return 0;
 }
 NOKPROBE_SYMBOL(unwind_frame);
-- 
2.25.1
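The core of the patch — a sentinel-terminated range table, an exclusive-end lookup, and the "final frame" exception — can be exercised outside the kernel. The following is a minimal userspace sketch of that logic; the address ranges are hypothetical stand-ins for the linker symbols (`__entry_text_start`, `__entry_text_end`, etc.) used by the real sym_code_ranges[], and `frame_reliable()` is an illustrative helper, not a function from the patch.

```c
#include <stddef.h>

/* Same shape as the patch's struct code_range. */
struct code_range {
	unsigned long start;
	unsigned long end;
};

/*
 * Hypothetical section boundaries; a zero-filled entry terminates the
 * array, exactly as the sentinel does in the patch.
 */
static const struct code_range sym_code_ranges[] = {
	{ 0x1000, 0x2000 },	/* stand-in for .entry.text */
	{ 0x3000, 0x3800 },	/* stand-in for .idmap.text */
	{ 0 }			/* sentinel */
};

/*
 * Mirrors lookup_range(): returns the matching range, or the sentinel
 * (range->start == 0) when pc falls in no unreliable section. The end
 * address is exclusive (pc < range->end).
 */
static const struct code_range *lookup_range(unsigned long pc)
{
	const struct code_range *range;

	for (range = sym_code_ranges; range->start; range++) {
		if (pc >= range->start && pc < range->end)
			return range;
	}
	return range;
}

/*
 * Mirrors the decision at the end of unwind_frame(): the trace stays
 * reliable if pc lies outside every SYM_CODE_*() section, or if the
 * final frame (fp == 0) has already been reached.
 */
static int frame_reliable(unsigned long pc, unsigned long fp)
{
	const struct code_range *range = lookup_range(pc);

	if (!range->start)
		return 1;	/* not in an unreliable section */
	return fp == 0;		/* final frame: nothing left to unwind */
}
```

Returning the sentinel rather than NULL lets the caller test `range->start` without a separate NULL check, matching the style of the kernel function.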