From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
	ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
	catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
	linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [PATCH v9 11/11] arm64: Create a list of SYM_CODE functions, check return PC against list
Date: Thu, 14 Oct 2021 21:34:05 -0500
Message-ID: <20211015023413.16614-4-madvenka@linux.microsoft.com>
In-Reply-To: <20211015023413.16614-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

SYM_CODE functions don't follow the usual calling conventions. Check if the
return PC in a stack frame falls within any of these functions. If it does,
consider the stack trace unreliable.

Define a special section for unreliable functions
=================================================

Define a SYM_CODE_END() macro for arm64 that adds the function address
range to a new section called "sym_code_functions".

Linker file
===========

Include the "sym_code_functions" section under read-only data in
vmlinux.lds.S.

Initialization
==============

Define an early_initcall() to create a sym_code_functions[] array from
the linker data.

Unwinder check
==============

Add a reliability check in unwind_is_reliable() that compares a return
PC with sym_code_functions[]. If there is a match, then return failure.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/linkage.h  | 12 +++++++
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/stacktrace.c    | 55 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S   | 10 ++++++
 4 files changed, 78 insertions(+)

diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index 9906541a6861..616bad74e297 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -68,4 +68,16 @@
 	SYM_FUNC_END_ALIAS(x);		\
 	SYM_FUNC_END_ALIAS(__pi_##x)
 
+/*
+ * Record the address range of each SYM_CODE function in a struct code_range
+ * in a special section.
+ */
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)			;\
+99:							;\
+	.pushsection "sym_code_functions", "aw"		;\
+	.quad	name					;\
+	.quad	99b					;\
+	.popsection
+
 #endif
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index e4ad9db53af1..c84c71063d6e 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -21,5 +21,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __sym_code_functions_start[], __sym_code_functions_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 142f08ae515f..40e5af7e5b1d 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,11 +18,40 @@
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
+struct code_range {
+	unsigned long	start;
+	unsigned long	end;
+};
+
+static struct code_range	*sym_code_functions;
+static int			num_sym_code_functions;
+
+int __init init_sym_code_functions(void)
+{
+	size_t size = (unsigned long)__sym_code_functions_end -
+		      (unsigned long)__sym_code_functions_start;
+
+	sym_code_functions = (struct code_range *)__sym_code_functions_start;
+	/*
+	 * Order it so that sym_code_functions is not visible before
+	 * num_sym_code_functions.
+	 */
+	smp_mb();
+	num_sym_code_functions = size / sizeof(struct code_range);
+
+	return 0;
+}
+early_initcall(init_sym_code_functions);
+
 /*
  * Check the stack frame for conditions that make further unwinding unreliable.
  */
 static void notrace unwind_check_reliability(struct stackframe *frame)
 {
+	const struct code_range *range;
+	unsigned long pc;
+	int i;
+
 	/*
 	 * If the PC is not a known kernel text address, then we cannot
 	 * be sure that a subsequent unwind will be reliable, as we
@@ -30,6 +59,32 @@ static void notrace unwind_check_reliability(struct stackframe *frame)
 	 */
 	if (!__kernel_text_address(frame->pc))
 		frame->reliable = false;
+
+	/*
+	 * Check the return PC against sym_code_functions[]. If there is a
+	 * match, then consider the stack frame unreliable.
+	 *
+	 * As SYM_CODE functions don't follow the usual calling conventions,
+	 * we assume by default that any SYM_CODE function cannot be unwound
+	 * reliably.
+	 *
+	 * Note that this includes:
+	 *
+	 * - Exception handlers and entry assembly
+	 * - Trampoline assembly (e.g., ftrace, kprobes)
+	 * - Hypervisor-related assembly
+	 * - Hibernation-related assembly
+	 * - CPU start-stop, suspend-resume assembly
+	 * - Kernel relocation assembly
+	 */
+	pc = frame->pc;
+	for (i = 0; i < num_sym_code_functions; i++) {
+		range = &sym_code_functions[i];
+		if (pc >= range->start && pc < range->end) {
+			frame->reliable = false;
+			return;
+		}
+	}
 }
 NOKPROBE_SYMBOL(unwind_check_reliability);
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 709d2c433c5e..2bf769f45b54 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -111,6 +111,14 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SYM_CODE_FUNCTIONS					\
+	. = ALIGN(16);						\
+	.symcode : AT(ADDR(.symcode) - LOAD_OFFSET) {		\
+		__sym_code_functions_start = .;			\
+		KEEP(*(sym_code_functions))			\
+		__sym_code_functions_end = .;			\
+	}
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -196,6 +204,8 @@ SECTIONS
 	swapper_pg_dir = .;
 	. += PAGE_SIZE;
 
+	SYM_CODE_FUNCTIONS
+
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
 	__inittext_begin = .;
-- 
2.25.1