* [PATCH v2 0/2] arm64 live patching
From: Torsten Duwe @ 2016-06-27 15:15 UTC
To: linux-arm-kernel

So here is a slightly updated FTRACE_WITH_REGS plus live patching.

Reminder: make sure you have a prolog-pad gcc, and this in your
top-level Makefile:

  ifdef CONFIG_LIVEPATCH
  KBUILD_CFLAGS += $(call cc-option,-fno-ipa-ra)
  endif

Tested with v4.7-rc3 + gcc-6.1

Changes since v1:
  * instead of a comment "should be CC_USING_PROLOG_PAD": do it.
    CC_FLAGS_FTRACE holds it now, and the IPA disabler has become
    a separate issue (see above).

Torsten Duwe (2):
  arm64: implement FTRACE_WITH_REGS
  arm64: implement live patching

 arch/arm64/Kconfig                 |   4 ++
 arch/arm64/Makefile                |   4 ++
 arch/arm64/include/asm/ftrace.h    |   8 +++
 arch/arm64/include/asm/livepatch.h |  37 ++++++++++++++
 arch/arm64/kernel/Makefile         |   6 +--
 arch/arm64/kernel/entry-ftrace.S   | 102 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/ftrace.c         |  43 ++++++++++++++--
 include/asm-generic/vmlinux.lds.h  |   2 +-
 include/linux/compiler.h           |   4 ++
 9 files changed, 203 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/include/asm/livepatch.h

-- 
2.6.6
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS
From: Torsten Duwe @ 2016-06-27 15:17 UTC
To: linux-arm-kernel

Once gcc is enhanced to optionally generate NOPs at the beginning
of each function, like the concept proven in
https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html
(sans the "fprintf (... pad_size);", which spoils the data structure
for kernel use), the generated pads can nicely be used to reroute
function calls for tracing/profiling, or live patching.

The pads look like

fffffc00081335f0 <hrtimer_init>:
fffffc00081335f0:       d503201f        nop
fffffc00081335f4:       d503201f        nop
fffffc00081335f8:       a9bd7bfd        stp     x29, x30, [sp,#-48]!
fffffc00081335fc:       910003fd        mov     x29, sp
[...]

This patch gets the pad locations from the compiler-generated
__prolog_pads_loc into the _mcount_loc array, and provides the
code patching functions to turn the pads at runtime into

fffffc00081335f0        mov     x9, x30
fffffc00081335f4        bl      0xfffffc00080a08c0 <ftrace_caller>
fffffc00081335f8        stp     x29, x30, [sp,#-48]!
fffffc00081335fc        mov     x29, sp

as well as an ftrace_caller that can handle these call sites.
ARCH_SUPPORTS_FTRACE_OPS now comes as a benefit, and the graph
caller still works, too.
Signed-off-by: Li Bin <huawei.libin@huawei.com> Signed-off-by: Torsten Duwe <duwe@suse.de> --- arch/arm64/Kconfig | 1 + arch/arm64/Makefile | 4 ++ arch/arm64/include/asm/ftrace.h | 8 ++++ arch/arm64/kernel/Makefile | 6 +-- arch/arm64/kernel/entry-ftrace.S | 89 +++++++++++++++++++++++++++++++++++++++ arch/arm64/kernel/ftrace.c | 43 +++++++++++++++++-- include/asm-generic/vmlinux.lds.h | 2 +- include/linux/compiler.h | 4 ++ 8 files changed, 150 insertions(+), 7 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 5a0a691..36a0e26 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -72,6 +72,7 @@ config ARM64 select HAVE_DMA_API_DEBUG select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE + select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_TRACER diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 648a32c..e5e335c 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -35,6 +35,10 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads) KBUILD_AFLAGS += $(lseinstr) +ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS), y) +CC_FLAGS_FTRACE := -fprolog-pad=2 -DCC_USING_PROLOG_PAD +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian AS += -EB diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index caa955f..a569666 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -16,6 +16,14 @@ #define MCOUNT_ADDR ((unsigned long)_mcount) #define MCOUNT_INSN_SIZE AARCH64_INSN_SIZE +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS +#define ARCH_SUPPORTS_FTRACE_OPS 1 +#define REC_IP_BRANCH_OFFSET 4 +#define FTRACE_REGS_ADDR FTRACE_ADDR +#else +#define REC_IP_BRANCH_OFFSET 0 +#endif + #ifndef __ASSEMBLY__ #include <linux/compat.h> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 2173149..c26f3f8 100644 --- 
a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -6,9 +6,9 @@ CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET) AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET) CFLAGS_armv8_deprecated.o := -I$(src) -CFLAGS_REMOVE_ftrace.o = -pg -CFLAGS_REMOVE_insn.o = -pg -CFLAGS_REMOVE_return_address.o = -pg +CFLAGS_REMOVE_ftrace.o = -pg $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_insn.o = -pg $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_return_address.o = -pg $(CC_FLAGS_FTRACE) # Object file lists. arm64-obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index 0f03a8f..3ebe791 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -12,6 +12,8 @@ #include <linux/linkage.h> #include <asm/ftrace.h> #include <asm/insn.h> +#include <asm/asm-offsets.h> +#include <asm/assembler.h> /* * Gcc with -pg will put the following code in the beginning of each function: @@ -132,6 +134,7 @@ skip_ftrace_call: ENDPROC(_mcount) #else /* CONFIG_DYNAMIC_FTRACE */ +#ifndef CONFIG_DYNAMIC_FTRACE_WITH_REGS /* * _mcount() is used to build the kernel with -pg option, but all the branch * instructions to _mcount() are replaced to NOP initially at kernel start up, @@ -171,6 +174,84 @@ ftrace_graph_call: // ftrace_graph_caller(); mcount_exit ENDPROC(ftrace_caller) +#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ +ENTRY(_mcount) + mov x10, lr + mov lr, x9 + ret x10 +ENDPROC(_mcount) + +ENTRY(ftrace_caller) + stp x29, x9, [sp, #-16]! 
+ sub sp, sp, #S_FRAME_SIZE + + stp x0, x1, [sp] + stp x2, x3, [sp, #16] + stp x4, x5, [sp, #32] + stp x6, x7, [sp, #48] + stp x8, x9, [sp, #64] + stp x10, x11, [sp, #80] + stp x12, x13, [sp, #96] + stp x14, x15, [sp, #112] + stp x16, x17, [sp, #128] + stp x18, x19, [sp, #144] + stp x20, x21, [sp, #160] + stp x22, x23, [sp, #176] + stp x24, x25, [sp, #192] + stp x26, x27, [sp, #208] + stp x28, x29, [sp, #224] + /* The link Register at callee entry */ + str x9, [sp, #S_LR] + /* The program counter just after the ftrace call site */ + str lr, [sp, #S_PC] + /* The stack pointer as it was on ftrace_caller entry... */ + add x29, sp, #S_FRAME_SIZE+16 /* ...is also our new FP */ + str x29, [sp, #S_SP] + + adrp x0, function_trace_op + ldr x2, [x0, #:lo12:function_trace_op] + mov x1, x9 /* saved LR == parent IP */ + sub x0, lr, #8 /* prolog pad start == IP */ + mov x3, sp /* complete pt_regs are @sp */ + + .global ftrace_call +ftrace_call: + + bl ftrace_stub + +#ifdef CONFIG_FUNCTION_GRAPH_TRACER + .global ftrace_graph_call +ftrace_graph_call: // ftrace_graph_caller(); + nop // If enabled, this will be replaced + // "b ftrace_graph_caller" +#endif + +ftrace_regs_return: + ldp x0, x1, [sp] + ldp x2, x3, [sp, #16] + ldp x4, x5, [sp, #32] + ldp x6, x7, [sp, #48] + ldp x8, x9, [sp, #64] + ldp x10, x11, [sp, #80] + ldp x12, x13, [sp, #96] + ldp x14, x15, [sp, #112] + ldp x16, x17, [sp, #128] + ldp x18, x19, [sp, #144] + ldp x20, x21, [sp, #160] + ldp x22, x23, [sp, #176] + ldp x24, x25, [sp, #192] + ldp x26, x27, [sp, #208] + ldp x28, x29, [sp, #224] + + ldr x9, [sp, #S_PC] + ldr lr, [sp, #S_LR] + add sp, sp, #S_FRAME_SIZE+16 + + ret x9 + +ENDPROC(ftrace_caller) + +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ #endif /* CONFIG_DYNAMIC_FTRACE */ ENTRY(ftrace_stub) @@ -206,12 +287,20 @@ ENDPROC(ftrace_stub) * and run return_to_handler() later on its exit. 
*/ ENTRY(ftrace_graph_caller) +#ifndef CONFIG_DYNAMIC_FTRACE_WITH_REGS mcount_get_lr_addr x0 // pointer to function's saved lr mcount_get_pc x1 // function's pc mcount_get_parent_fp x2 // parent's fp bl prepare_ftrace_return // prepare_ftrace_return(&lr, pc, fp) mcount_exit +#else + add x0, sp, #S_LR /* address of (LR pointing into caller) */ + ldr x1, [sp, #S_PC] + ldr x2, [sp, #232] /* caller's frame pointer */ + bl prepare_ftrace_return + b ftrace_regs_return +#endif ENDPROC(ftrace_graph_caller) /* diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index ebecf9a..917065c 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -39,6 +39,12 @@ static int ftrace_modify_code(unsigned long pc, u32 old, u32 new, if (aarch64_insn_read((void *)pc, &replaced)) return -EFAULT; + /* If we already have what we'll finally want, + * report success. This is needed on startup. + */ + if (replaced == new) + return 0; + if (replaced != old) return -EINVAL; } @@ -68,28 +74,59 @@ int ftrace_update_ftrace_func(ftrace_func_t func) */ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) { - unsigned long pc = rec->ip; + unsigned long pc = rec->ip+REC_IP_BRANCH_OFFSET; + int ret; u32 old, new; +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS old = aarch64_insn_gen_nop(); + new = 0xaa1e03e9; /* mov x9,x30 */ + ret = ftrace_modify_code(pc-REC_IP_BRANCH_OFFSET, old, new, true); + if (ret) + return ret; + smp_wmb(); +#endif new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); return ftrace_modify_code(pc, old, new, true); } +int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, + unsigned long addr) +{ + unsigned long pc = rec->ip+REC_IP_BRANCH_OFFSET; + u32 old, new; + + old = aarch64_insn_gen_branch_imm(pc, old_addr, true); + new = aarch64_insn_gen_branch_imm(pc, addr, true); + + return ftrace_modify_code(pc, old, new, true); +} + /* * Turn off the call to ftrace_caller() in instrumented function */ int 
ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) { - unsigned long pc = rec->ip; + unsigned long pc = rec->ip+REC_IP_BRANCH_OFFSET; u32 old, new; + int ret; + old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); new = aarch64_insn_gen_nop(); - return ftrace_modify_code(pc, old, new, true); + ret = ftrace_modify_code(pc, old, new, true); + if (ret) + return ret; +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS + smp_wmb(); + old = 0xaa1e03e9; /* mov x9,x30 */ + new = aarch64_insn_gen_nop(); + ret = ftrace_modify_code(pc-REC_IP_BRANCH_OFFSET, old, new, true); +#endif + return ret; } void arch_ftrace_update_code(int command) diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 6a67ab9..66a72b9 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -89,7 +89,7 @@ #ifdef CONFIG_FTRACE_MCOUNT_RECORD #define MCOUNT_REC() . = ALIGN(8); \ VMLINUX_SYMBOL(__start_mcount_loc) = .; \ - *(__mcount_loc) \ + *(__mcount_loc) *(__prolog_pads_loc) \ VMLINUX_SYMBOL(__stop_mcount_loc) = .; #else #define MCOUNT_REC() diff --git a/include/linux/compiler.h b/include/linux/compiler.h index 793c082..46289c2 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -63,8 +63,12 @@ extern void __chk_io_ptr(const volatile void __iomem *); #if defined(CC_USING_HOTPATCH) && !defined(__CHECKER__) #define notrace __attribute__((hotpatch(0,0))) #else +#ifdef CC_USING_PROLOG_PAD +#define notrace __attribute__((prolog_pad(0))) +#else #define notrace __attribute__((no_instrument_function)) #endif +#endif /* Intel compiler defines __GNUC__. So we will overwrite implementations * coming from above header files here -- 2.6.6 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-06-27 15:17 ` [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS Torsten Duwe @ 2016-07-01 12:53 ` Josh Poimboeuf 2016-07-04 9:18 ` Torsten Duwe 2016-07-03 5:17 ` kbuild test robot 2016-07-08 14:58 ` Petr Mladek 2 siblings, 1 reply; 23+ messages in thread From: Josh Poimboeuf @ 2016-07-01 12:53 UTC (permalink / raw) To: linux-arm-kernel On Mon, Jun 27, 2016 at 05:17:17PM +0200, Torsten Duwe wrote: > Once gcc is enhanced to optionally generate NOPs at the beginning > of each function, like the concept proven in > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html > (sans the "fprintf (... pad_size);", which spoils the data structure > for kernel use), the generated pads can nicely be used to reroute > function calls for tracing/profiling, or live patching. > > The pads look like > fffffc00081335f0 <hrtimer_init>: > fffffc00081335f0: d503201f nop > fffffc00081335f4: d503201f nop > fffffc00081335f8: a9bd7bfd stp x29, x30, [sp,#-48]! > fffffc00081335fc: 910003fd mov x29, sp > [...] > > This patch gets the pad locations from the compiler-generated > __prolog_pads_loc into the _mcount_loc array, and provides the > code patching functions to turn the pads at runtime into > > fffffc00081335f0 mov x9, x30 > fffffc00081335f4 bl 0xfffffc00080a08c0 <ftrace_caller> > fffffc00081335f8 stp x29, x30, [sp,#-48]! > fffffc00081335fc mov x29, sp > > as well as an ftrace_caller that can handle these call sites. > Now ARCH_SUPPORTS_FTRACE_OPS as a benefit, and the graph caller > still works, too. 
> > Signed-off-by: Li Bin <huawei.libin@huawei.com> > Signed-off-by: Torsten Duwe <duwe@suse.de> > --- > arch/arm64/Kconfig | 1 + > arch/arm64/Makefile | 4 ++ > arch/arm64/include/asm/ftrace.h | 8 ++++ > arch/arm64/kernel/Makefile | 6 +-- > arch/arm64/kernel/entry-ftrace.S | 89 +++++++++++++++++++++++++++++++++++++++ > arch/arm64/kernel/ftrace.c | 43 +++++++++++++++++-- > include/asm-generic/vmlinux.lds.h | 2 +- > include/linux/compiler.h | 4 ++ > 8 files changed, 150 insertions(+), 7 deletions(-) > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig > index 5a0a691..36a0e26 100644 > --- a/arch/arm64/Kconfig > +++ b/arch/arm64/Kconfig > @@ -72,6 +72,7 @@ config ARM64 > select HAVE_DMA_API_DEBUG > select HAVE_DMA_CONTIGUOUS > select HAVE_DYNAMIC_FTRACE > + select HAVE_DYNAMIC_FTRACE_WITH_REGS > select HAVE_EFFICIENT_UNALIGNED_ACCESS > select HAVE_FTRACE_MCOUNT_RECORD > select HAVE_FUNCTION_TRACER > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile > index 648a32c..e5e335c 100644 > --- a/arch/arm64/Makefile > +++ b/arch/arm64/Makefile > @@ -35,6 +35,10 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables > KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads) > KBUILD_AFLAGS += $(lseinstr) > > +ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS), y) > +CC_FLAGS_FTRACE := -fprolog-pad=2 -DCC_USING_PROLOG_PAD > +endif > + It would probably be good to print a warning for older gccs which don't support this option, so that when the build fails, there's at least a warning to indicate why. Something like: ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS CC_FLAGS_FTRACE := -fprolog-pad=2 -DCC_USING_PROLOG_PAD ifeq ($(call cc-option,-fprolog-pad=2),) $(warning Cannot use CONFIG_DYNAMIC_FTRACE_WITH_REGS: \ -fprolog-pad not supported by compiler) endif endif -- Josh ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-01 12:53 ` Josh Poimboeuf @ 2016-07-04 9:18 ` Torsten Duwe 0 siblings, 0 replies; 23+ messages in thread From: Torsten Duwe @ 2016-07-04 9:18 UTC (permalink / raw) To: linux-arm-kernel On Fri, Jul 01, 2016 at 07:53:44AM -0500, Josh Poimboeuf wrote: > On Mon, Jun 27, 2016 at 05:17:17PM +0200, Torsten Duwe wrote: > > Once gcc is enhanced to optionally generate NOPs at the beginning > > of each function, like the concept proven in > > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html > > (sans the "fprintf (... pad_size);", which spoils the data structure > > for kernel use), the generated pads can nicely be used to reroute > > function calls for tracing/profiling, or live patching. [...] > > @@ -35,6 +35,10 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables > > KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads) > > KBUILD_AFLAGS += $(lseinstr) > > > > +ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS), y) > > +CC_FLAGS_FTRACE := -fprolog-pad=2 -DCC_USING_PROLOG_PAD > > +endif > > + > > It would probably be good to print a warning for older gccs which don't > support this option, so that when the build fails, there's at least a > warning to indicate why. Something like: > > ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS > CC_FLAGS_FTRACE := -fprolog-pad=2 -DCC_USING_PROLOG_PAD > ifeq ($(call cc-option,-fprolog-pad=2),) > $(warning Cannot use CONFIG_DYNAMIC_FTRACE_WITH_REGS: \ > -fprolog-pad not supported by compiler) > endif > endif Yes. Ideally, compiler support could be checked even before the option is offered, but your explicit warning is better than just failing obscurely. What do you think about prolog-pad in general? If we can convince the gcc people to include it, it could become the default mechanism for all architectures that do not require special treatment (e.g. like ABIv2 dual entry on ppc64le). Torsten ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS
From: kbuild test robot @ 2016-07-03 5:17 UTC
To: linux-arm-kernel

Hi,

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on v4.7-rc5 next-20160701]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Torsten-Duwe/arm64-live-patching/20160627-232728
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
config: arm64-allyesconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 5.3.1-8) 5.3.1 20160205
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm64

All errors (new ones prefixed by >>):

   Makefile:687: Cannot use CONFIG_KCOV: -fsanitize-coverage=trace-pc is not supported by compiler
>> aarch64-linux-gnu-gcc: error: unrecognized command line option '-fprolog-pad=2'
   make[2]: *** [kernel/bounds.s] Error 1
   make[2]: Target '__build' not remade because of errors.
   make[1]: *** [prepare0] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [sub-make] Error 2

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all       Intel Corporation
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-06-27 15:17 ` [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS Torsten Duwe 2016-07-01 12:53 ` Josh Poimboeuf 2016-07-03 5:17 ` kbuild test robot @ 2016-07-08 14:58 ` Petr Mladek 2016-07-08 15:07 ` Torsten Duwe 2 siblings, 1 reply; 23+ messages in thread From: Petr Mladek @ 2016-07-08 14:58 UTC (permalink / raw) To: linux-arm-kernel On Mon 2016-06-27 17:17:17, Torsten Duwe wrote: > Once gcc is enhanced to optionally generate NOPs at the beginning > of each function, like the concept proven in > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html > (sans the "fprintf (... pad_size);", which spoils the data structure > for kernel use), the generated pads can nicely be used to reroute > function calls for tracing/profiling, or live patching. > diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c > index ebecf9a..917065c 100644 > --- a/arch/arm64/kernel/ftrace.c > +++ b/arch/arm64/kernel/ftrace.c > @@ -39,6 +39,12 @@ static int ftrace_modify_code(unsigned long pc, u32 old, u32 new, > if (aarch64_insn_read((void *)pc, &replaced)) > return -EFAULT; > > + /* If we already have what we'll finally want, > + * report success. This is needed on startup. > + */ > + if (replaced == new) > + return 0; This looks strange. I wonder if it actually hides a real bug that we modify the code twice or so. I wanted to try it myself but I haven't succeeded with creating an ARM test system yet. Best Regards, Petr ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS
From: Torsten Duwe @ 2016-07-08 15:07 UTC
To: linux-arm-kernel

On Fri, Jul 08, 2016 at 04:58:00PM +0200, Petr Mladek wrote:
> On Mon 2016-06-27 17:17:17, Torsten Duwe wrote:
> > Once gcc is enhanced to optionally generate NOPs at the beginning
> > of each function, like the concept proven in
> > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html
> > (sans the "fprintf (... pad_size);", which spoils the data structure
> > for kernel use), the generated pads can nicely be used to reroute
> > function calls for tracing/profiling, or live patching.
>
> > diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
> > index ebecf9a..917065c 100644
> > --- a/arch/arm64/kernel/ftrace.c
> > +++ b/arch/arm64/kernel/ftrace.c
> > @@ -39,6 +39,12 @@ static int ftrace_modify_code(unsigned long pc, u32 old, u32 new,
> > 	if (aarch64_insn_read((void *)pc, &replaced))
> > 		return -EFAULT;
> >
> > +	/* If we already have what we'll finally want,
> > +	 * report success. This is needed on startup.
> > +	 */
> > +	if (replaced == new)
> > +		return 0;
>
> This looks strange. I wonder if it actually hides a real bug that we
> modify the code twice or so.

Not at all. All "profilers" we abused so far generate code that needs to
be disabled on boot first. prolog-pad generates nops, initially.

	Torsten
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-08 15:07 ` Torsten Duwe @ 2016-07-08 15:24 ` Petr Mladek 2016-07-08 15:48 ` Josh Poimboeuf 2016-07-08 15:49 ` Steven Rostedt 0 siblings, 2 replies; 23+ messages in thread From: Petr Mladek @ 2016-07-08 15:24 UTC (permalink / raw) To: linux-arm-kernel On Fri 2016-07-08 17:07:09, Torsten Duwe wrote: > On Fri, Jul 08, 2016 at 04:58:00PM +0200, Petr Mladek wrote: > > On Mon 2016-06-27 17:17:17, Torsten Duwe wrote: > > > Once gcc is enhanced to optionally generate NOPs at the beginning > > > of each function, like the concept proven in > > > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html > > > (sans the "fprintf (... pad_size);", which spoils the data structure > > > for kernel use), the generated pads can nicely be used to reroute > > > function calls for tracing/profiling, or live patching. > > > diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c > > > index ebecf9a..917065c 100644 > > > --- a/arch/arm64/kernel/ftrace.c > > > +++ b/arch/arm64/kernel/ftrace.c > > > @@ -39,6 +39,12 @@ static int ftrace_modify_code(unsigned long pc, u32 old, u32 new, > > > if (aarch64_insn_read((void *)pc, &replaced)) > > > return -EFAULT; > > > > > > + /* If we already have what we'll finally want, > > > + * report success. This is needed on startup. > > > + */ > > > + if (replaced == new) > > > + return 0; > > > > This looks strange. I wonder if it actually hides a real bug that we > > modify the code twice or so. > > Not at all. All "profilers" we abused so far generate code that needs to > be disabled on boot first. prolog-pad generates nops, initially. Yeah, but I cannot find this kind of check in other architectures. I checked arch/x86/kernel/ftrace.c, arch/s390/kernel/ftrace.c, and arch/powerpc/kernel/ftrace.c. These all support ftrace with regs and livepatching. Best Regards, Petr ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-08 15:24 ` Petr Mladek @ 2016-07-08 15:48 ` Josh Poimboeuf 2016-07-08 15:57 ` Steven Rostedt 2016-07-08 15:49 ` Steven Rostedt 1 sibling, 1 reply; 23+ messages in thread From: Josh Poimboeuf @ 2016-07-08 15:48 UTC (permalink / raw) To: linux-arm-kernel On Fri, Jul 08, 2016 at 05:24:21PM +0200, Petr Mladek wrote: > On Fri 2016-07-08 17:07:09, Torsten Duwe wrote: > > On Fri, Jul 08, 2016 at 04:58:00PM +0200, Petr Mladek wrote: > > > On Mon 2016-06-27 17:17:17, Torsten Duwe wrote: > > > > Once gcc is enhanced to optionally generate NOPs at the beginning > > > > of each function, like the concept proven in > > > > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html > > > > (sans the "fprintf (... pad_size);", which spoils the data structure > > > > for kernel use), the generated pads can nicely be used to reroute > > > > function calls for tracing/profiling, or live patching. > > > > diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c > > > > index ebecf9a..917065c 100644 > > > > --- a/arch/arm64/kernel/ftrace.c > > > > +++ b/arch/arm64/kernel/ftrace.c > > > > @@ -39,6 +39,12 @@ static int ftrace_modify_code(unsigned long pc, u32 old, u32 new, > > > > if (aarch64_insn_read((void *)pc, &replaced)) > > > > return -EFAULT; > > > > > > > > + /* If we already have what we'll finally want, > > > > + * report success. This is needed on startup. > > > > + */ > > > > + if (replaced == new) > > > > + return 0; > > > > > > This looks strange. I wonder if it actually hides a real bug that we > > > modify the code twice or so. > > > > Not at all. All "profilers" we abused so far generate code that needs to > > be disabled on boot first. prolog-pad generates nops, initially. > > Yeah, but I cannot find this kind of check in other architectures. > I checked arch/x86/kernel/ftrace.c, arch/s390/kernel/ftrace.c, and > arch/powerpc/kernel/ftrace.c. These all support ftrace with > regs and livepatching. 
My understanding is that other arches don't need this check because they
use -mfentry, so they have to modify the "call fentry" instruction to a
nop on startup.

Here, with -fprolog-pad, it's already a nop, so no change is needed.

-- 
Josh
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS
From: Steven Rostedt @ 2016-07-08 15:57 UTC
To: linux-arm-kernel

On Fri, 8 Jul 2016 10:48:24 -0500
Josh Poimboeuf <jpoimboe@redhat.com> wrote:

> My understanding is that other arches don't need this check because they
> use -mfentry, so they have to modify the "call fentry" instruction to a
> nop on startup.
>
> Here, with -fprolog-pad, it's already a nop, so no change is needed.

That's what I was thinking. But as I stated in another email (probably
in the air when you wrote this), the call to ftrace_modify_code() may be
completely circumvented by ftrace_make_nop() if the addr is MCOUNT_ADDR.

-- Steve
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS
From: Torsten Duwe @ 2016-07-08 20:24 UTC
To: linux-arm-kernel

On Fri, Jul 08, 2016 at 11:57:10AM -0400, Steven Rostedt wrote:
> On Fri, 8 Jul 2016 10:48:24 -0500
> Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>
> > Here, with -fprolog-pad, it's already a nop, so no change is needed.

Yes, exactly.

> That's what I was thinking. But as I stated in another email (probably
> in the air when you wrote this), the call to ftrace_modify_code() may be
> completely circumvented by ftrace_make_nop() if the addr is MCOUNT_ADDR.

Only on the _first_ invocation. Later on, tracing can be switched on and
off, and then the instructions need to be changed just like with fentry
(or profile-kernel ;-)

	Torsten
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-08 20:24 ` Torsten Duwe @ 2016-07-08 21:08 ` Steven Rostedt 2016-07-09 9:06 ` Torsten Duwe 0 siblings, 1 reply; 23+ messages in thread From: Steven Rostedt @ 2016-07-08 21:08 UTC (permalink / raw) To: linux-arm-kernel On Fri, 8 Jul 2016 22:24:55 +0200 Torsten Duwe <duwe@lst.de> wrote: > On Fri, Jul 08, 2016 at 11:57:10AM -0400, Steven Rostedt wrote: > > On Fri, 8 Jul 2016 10:48:24 -0500 > > Josh Poimboeuf <jpoimboe@redhat.com> wrote: > > > > > > Here, with -fprolog-pad, it's already a nop, so no change is needed. > > > > > Yes, exactly. > > > That's what I was thinking. But as I stated in another email (probably > > in the air when you wrote this), the call to ftrace_modify_code() may be > > completely circumvented by ftrace_make_nop() if the addr is MCOUNT_ADDR. > > Only on the _first_ invocation. Later on, tracing can be switched on and off, > and then the instructions need to be changed just like with fentry (or > profile-kernel ;-) > Understood, but ftrace_modify_code() will only receive addr == MCOUNT_ADDR on boot up or when a module is loaded. In both cases, with -fprolog-pad it will already be a nop, hence no need to call ftrace_modify_code(), in those cases. In all other cases, addr will point to a ftrace trampoline. -- Steve ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-08 21:08 ` Steven Rostedt @ 2016-07-09 9:06 ` Torsten Duwe 2016-07-15 18:36 ` Steven Rostedt 0 siblings, 1 reply; 23+ messages in thread From: Torsten Duwe @ 2016-07-09 9:06 UTC (permalink / raw) To: linux-arm-kernel On Fri, Jul 08, 2016 at 05:08:08PM -0400, Steven Rostedt wrote: > On Fri, 8 Jul 2016 22:24:55 +0200 > Torsten Duwe <duwe@lst.de> wrote: > > > On Fri, Jul 08, 2016 at 11:57:10AM -0400, Steven Rostedt wrote: > > > On Fri, 8 Jul 2016 10:48:24 -0500 > > > Josh Poimboeuf <jpoimboe@redhat.com> wrote: > > > > > > > > Here, with -fprolog-pad, it's already a nop, so no change is needed. > > > > > > > > Yes, exactly. > > > > > That's what I was thinking. But as I stated in another email (probably > > > in the air when you wrote this), the call to ftrace_modify_code() may be > > > completely circumvented by ftrace_make_nop() if the addr is MCOUNT_ADDR. > > > > Only on the _first_ invocation. Later on, tracing can be switched on and off, > > and then the instructions need to be changed just like with fentry (or > > profile-kernel ;-) > > > > Understood, but ftrace_modify_code() will only receive addr == > MCOUNT_ADDR on boot up or when a module is loaded. In both cases, with > -fprolog-pad it will already be a nop, hence no need to call > ftrace_modify_code(), in those cases. > > In all other cases, addr will point to a ftrace trampoline. Maybe the code in question can be replaced with the change below, now that there is a preprocessor define in V2? (untested) diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 3f743b1..695a646 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -2423,6 +2423,12 @@ ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec) if (unlikely(ftrace_disabled)) return 0; +#ifdef CC_USING_PROLOG_PAD + /* If the compiler already generated NOPs instead of + * calls to mcount, we're done here. 
+ */ + return 1; +#endif ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR); if (ret) { ftrace_bug(ret, rec); ^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-09 9:06 ` Torsten Duwe @ 2016-07-15 18:36 ` Steven Rostedt 0 siblings, 0 replies; 23+ messages in thread From: Steven Rostedt @ 2016-07-15 18:36 UTC (permalink / raw) To: linux-arm-kernel On Sat, 9 Jul 2016 11:06:32 +0200 Torsten Duwe <duwe@lst.de> wrote: > Maybe the code in question can be replaced with the change below, now that > there is a preprocessor define in V2? > (untested) > > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c > index 3f743b1..695a646 100644 > --- a/kernel/trace/ftrace.c > +++ b/kernel/trace/ftrace.c > @@ -2423,6 +2423,12 @@ ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec) > if (unlikely(ftrace_disabled)) > return 0; > > +#ifdef CC_USING_PROLOG_PAD > + /* If the compiler already generated NOPs instead of > + * calls to mcount, we're done here. > + */ > + return 1; > +#endif I really hate adding #ifdef's like this in generic code if the arch can handle it with some other means. -- Steve > ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR); > if (ret) { > ftrace_bug(ret, rec); ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS 2016-07-08 15:24 ` Petr Mladek 2016-07-08 15:48 ` Josh Poimboeuf @ 2016-07-08 15:49 ` Steven Rostedt 1 sibling, 0 replies; 23+ messages in thread From: Steven Rostedt @ 2016-07-08 15:49 UTC (permalink / raw) To: linux-arm-kernel On Fri, 8 Jul 2016 17:24:21 +0200 Petr Mladek <pmladek@suse.com> wrote: > On Fri 2016-07-08 17:07:09, Torsten Duwe wrote: > > On Fri, Jul 08, 2016 at 04:58:00PM +0200, Petr Mladek wrote: > > > On Mon 2016-06-27 17:17:17, Torsten Duwe wrote: > > > > Once gcc is enhanced to optionally generate NOPs at the beginning > > > > of each function, like the concept proven in > > > > https://gcc.gnu.org/ml/gcc-patches/2016-04/msg01671.html > > > > (sans the "fprintf (... pad_size);", which spoils the data structure > > > > for kernel use), the generated pads can nicely be used to reroute > > > > function calls for tracing/profiling, or live patching. > > > > diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c > > > > index ebecf9a..917065c 100644 > > > > --- a/arch/arm64/kernel/ftrace.c > > > > +++ b/arch/arm64/kernel/ftrace.c > > > > @@ -39,6 +39,12 @@ static int ftrace_modify_code(unsigned long pc, u32 old, u32 new, > > > > if (aarch64_insn_read((void *)pc, &replaced)) > > > > return -EFAULT; > > > > > > > > + /* If we already have what we'll finally want, > > > > + * report success. This is needed on startup. > > > > + */ > > > > + if (replaced == new) > > > > + return 0; > > > > > > This looks strange. I wonder if it actually hides a real bug that we > > > modify the code twice or so. > > > > Not at all. All "profilers" we abused so far generate code that needs to > > be disabled on boot first. prolog-pad generates nops, initially. > > Yeah, but I cannot find this kind of check in other architectures. > I checked arch/x86/kernel/ftrace.c, arch/s390/kernel/ftrace.c, and > arch/powerpc/kernel/ftrace.c. These all support ftrace with > regs and livepatching. 
I guess the question is, with this approach, there's no call to mcount or fentry at compile time? Just nops are added? In this case perhaps the if statement should be more defined: /* * On boot, with the prologue code, the code will already * be a nop. */ if (replaced == new && new == NOP) return 0; And perhaps you can even pass in addr and check if it equals the nop address. Maybe even not call this code then? That is, if addr == MCOUNT_ADDR passed in by ftrace_code_disable() have ftrace_make_nop() simply return 0 without doing anything. -- Steve ^ permalink raw reply [flat|nested] 23+ messages in thread
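Steven's "just skip it for MCOUNT_ADDR" suggestion can be sketched as a self-contained toy in plain C (all names here — toy_make_nop, struct call_site, the demo helpers — are invented for illustration; this is not the actual arm64 ftrace code):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-alone toy model of the idea above -- NOT the real kernel code.
 * With -fprolog-pad the compiler already emits NOPs at each patch site,
 * so when addr == MCOUNT_ADDR (boot or module load) the arch can just
 * verify the NOP is there and report success without patching anything. */

#define AARCH64_NOP  0xd503201fu  /* A64 NOP encoding, as in the pads above */
#define MCOUNT_ADDR  (~0ul)       /* stand-in sentinel for "initial state" */

struct call_site { uint32_t insn; };   /* one instruction slot */

static int toy_make_nop(struct call_site *site, unsigned long addr)
{
	if (addr == MCOUNT_ADDR)
		/* prolog-pad case: nothing to patch, only sanity-check */
		return site->insn == AARCH64_NOP ? 0 : -1;

	/* otherwise addr names a trampoline: really patch the call out */
	site->insn = AARCH64_NOP;
	return 0;
}

/* boot/module-load path: pad is already a NOP, succeed without a write */
static int demo_boot_case(void)
{
	struct call_site s = { AARCH64_NOP };
	return toy_make_nop(&s, MCOUNT_ADDR);
}

/* runtime path: a BL to a trampoline gets replaced by a NOP */
static int demo_patch_case(void)
{
	struct call_site s = { 0x94000000u };  /* some BL encoding */
	toy_make_nop(&s, 0x1234);
	return s.insn == AARCH64_NOP;
}
```

The point of the sketch is only the control flow being debated: the initial "disable" request degenerates into a read-and-verify, while later enable/disable cycles still rewrite the instruction.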
* [PATCH v2 2/2] arm64: implement live patching 2016-06-27 15:15 [PATCH v2 0/2] arm64 live patching Torsten Duwe 2016-06-27 15:17 ` [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS Torsten Duwe @ 2016-06-27 15:17 ` Torsten Duwe 2016-07-11 14:03 ` Miroslav Benes 2016-07-15 16:03 ` Paul Gortmaker 1 sibling, 2 replies; 23+ messages in thread From: Torsten Duwe @ 2016-06-27 15:17 UTC (permalink / raw) To: linux-arm-kernel On top of FTRACE_WITH_REGS and the klp changes that go into v4.7 this is straightforward. Signed-off-by: Torsten Duwe <duwe@suse.de> --- arch/arm64/Kconfig | 3 +++ arch/arm64/include/asm/livepatch.h | 37 +++++++++++++++++++++++++++++++++++++ arch/arm64/kernel/entry-ftrace.S | 13 +++++++++++++ 3 files changed, 53 insertions(+) create mode 100644 arch/arm64/include/asm/livepatch.h diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 36a0e26..cb5adf3 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -80,6 +80,7 @@ config ARM64 select HAVE_GENERIC_DMA_COHERENT select HAVE_HW_BREAKPOINT if PERF_EVENTS select HAVE_IRQ_TIME_ACCOUNTING + select HAVE_LIVEPATCH select HAVE_MEMBLOCK select HAVE_MEMBLOCK_NODE_MAP if NUMA select HAVE_PATA_PLATFORM @@ -1042,4 +1043,6 @@ if CRYPTO source "arch/arm64/crypto/Kconfig" endif +source "kernel/livepatch/Kconfig" + source "lib/Kconfig" diff --git a/arch/arm64/include/asm/livepatch.h b/arch/arm64/include/asm/livepatch.h new file mode 100644 index 0000000..6b9a3d1 --- /dev/null +++ b/arch/arm64/include/asm/livepatch.h @@ -0,0 +1,37 @@ +/* + * livepatch.h - arm64-specific Kernel Live Patching Core + * + * Copyright (C) 2016 SUSE + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see <http://www.gnu.org/licenses/>. + */ +#ifndef _ASM_ARM64_LIVEPATCH_H +#define _ASM_ARM64_LIVEPATCH_H + +#include <linux/module.h> +#include <linux/ftrace.h> + +#ifdef CONFIG_LIVEPATCH +static inline int klp_check_compiler_support(void) +{ + return 0; +} + +static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip) +{ + regs->pc = ip; +} +#endif /* CONFIG_LIVEPATCH */ + +#endif /* _ASM_ARM64_LIVEPATCH_H */ diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index 3ebe791..b166cbf 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -204,6 +204,9 @@ ENTRY(ftrace_caller) str x9, [sp, #S_LR] /* The program counter just after the ftrace call site */ str lr, [sp, #S_PC] +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_FUNCTION_GRAPH_TRACER) + mov x19,lr /* remember old return address */ +#endif /* The stack pointer as it was on ftrace_caller entry... */ add x29, sp, #S_FRAME_SIZE+16 /* ...is also our new FP */ str x29, [sp, #S_SP] @@ -219,6 +222,16 @@ ftrace_call: bl ftrace_stub +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_FUNCTION_GRAPH_TRACER) + /* Is the trace function a live patcher and has messed with + * the return address? + */ + ldr x9, [sp, #S_PC] + cmp x9, x19 /* compare with the value we remembered */ + /* to not call graph tracer's "call" mechanism twice! */ + b.eq ftrace_regs_return +#endif + #ifdef CONFIG_FUNCTION_GRAPH_TRACER .global ftrace_graph_call ftrace_graph_call: // ftrace_graph_caller(); -- 2.6.6 ^ permalink raw reply related [flat|nested] 23+ messages in thread
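The interplay between klp_arch_set_pc() and the return-address compare this patch adds to ftrace_caller can be modelled in plain C (a sketch with invented helper names; the authoritative logic is the assembly in the patch):

```c
#include <assert.h>

/* Plain-C model of the check added to ftrace_caller -- a sketch, not the
 * real assembly. The old return address (x19 in the patch) is saved
 * before the tracer runs; if a live patch handler moved regs->pc via
 * klp_arch_set_pc(), the compare detects that and the graph-tracer hook
 * is skipped (the b.eq ftrace_regs_return in the diff). */

struct pt_regs { unsigned long pc; };

static void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
{
	regs->pc = ip;   /* mirrors the arm64 inline helper in the patch */
}

/* 1 = fall through to the graph caller, 0 = return address was rerouted */
static int should_run_graph_caller(const struct pt_regs *regs,
				   unsigned long saved_lr)
{
	return regs->pc == saved_lr;
}

static int demo_untouched(void)
{
	struct pt_regs regs = { 0x1000 };
	return should_run_graph_caller(&regs, 0x1000);
}

static int demo_patched(void)
{
	struct pt_regs regs = { 0x1000 };
	klp_arch_set_pc(&regs, 0x2000);   /* a live patcher rerouted us */
	return should_run_graph_caller(&regs, 0x1000);
}
```

The design choice the model captures: rather than teaching the graph caller about live patching, the entry code simply refuses to run it once the return address no longer matches what was saved on entry.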
* [PATCH v2 2/2] arm64: implement live patching 2016-06-27 15:17 ` [PATCH v2 2/2] arm64: implement live patching Torsten Duwe @ 2016-07-11 14:03 ` Miroslav Benes 2016-07-11 21:58 ` Jessica Yu 2016-08-11 16:46 ` [PATCH v2 2/2] arm64: implement live patching Torsten Duwe 2016-07-15 16:03 ` Paul Gortmaker 1 sibling, 2 replies; 23+ messages in thread From: Miroslav Benes @ 2016-07-11 14:03 UTC (permalink / raw) To: linux-arm-kernel On Mon, 27 Jun 2016, Torsten Duwe wrote: > diff --git a/arch/arm64/include/asm/livepatch.h b/arch/arm64/include/asm/livepatch.h > new file mode 100644 > index 0000000..6b9a3d1 > --- /dev/null > +++ b/arch/arm64/include/asm/livepatch.h > @@ -0,0 +1,37 @@ > +/* > + * livepatch.h - arm64-specific Kernel Live Patching Core > + * > + * Copyright (C) 2016 SUSE > + * > + * This program is free software; you can redistribute it and/or > + * modify it under the terms of the GNU General Public License > + * as published by the Free Software Foundation; either version 2 > + * of the License, or (at your option) any later version. > + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * You should have received a copy of the GNU General Public License > + * along with this program; if not, see <http://www.gnu.org/licenses/>. > + */ > +#ifndef _ASM_ARM64_LIVEPATCH_H > +#define _ASM_ARM64_LIVEPATCH_H > + > +#include <linux/module.h> > +#include <linux/ftrace.h> > + > +#ifdef CONFIG_LIVEPATCH A nit but we removed such guards in the other header files. 
> +static inline int klp_check_compiler_support(void) > +{ > + return 0; > +} > + > +static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip) > +{ > + regs->pc = ip; > +} > +#endif /* CONFIG_LIVEPATCH */ I also checked mod_arch_specific structure because of the way we deal with relocations. It is defined only if CONFIG_ARM64_MODULE_PLTS is enabled and there is a pointer to 'struct elf64_shdr' called plt. It is used indirectly in apply_relocate_add() so we need it to stay. However it points to an existing Elf section and SHF_ALLOC is added to its sh_flags in module_frob_arch_sections() (arch/arm64/kernel/module-plts.c). Therefore we should be ok. Jessica, could you check it as well, please? Thanks, Miroslav ^ permalink raw reply [flat|nested] 23+ messages in thread
* arm64: implement live patching 2016-07-11 14:03 ` Miroslav Benes @ 2016-07-11 21:58 ` Jessica Yu 2016-07-12 9:47 ` Miroslav Benes 2016-08-11 16:46 ` [PATCH v2 2/2] arm64: implement live patching Torsten Duwe 1 sibling, 1 reply; 23+ messages in thread From: Jessica Yu @ 2016-07-11 21:58 UTC (permalink / raw) To: linux-arm-kernel +++ Miroslav Benes [11/07/16 16:03 +0200]: >On Mon, 27 Jun 2016, Torsten Duwe wrote: > >> diff --git a/arch/arm64/include/asm/livepatch.h b/arch/arm64/include/asm/livepatch.h >> new file mode 100644 >> index 0000000..6b9a3d1 >> --- /dev/null >> +++ b/arch/arm64/include/asm/livepatch.h >> @@ -0,0 +1,37 @@ >> +/* >> + * livepatch.h - arm64-specific Kernel Live Patching Core >> + * >> + * Copyright (C) 2016 SUSE >> + * >> + * This program is free software; you can redistribute it and/or >> + * modify it under the terms of the GNU General Public License >> + * as published by the Free Software Foundation; either version 2 >> + * of the License, or (at your option) any later version. >> + * >> + * This program is distributed in the hope that it will be useful, >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the >> + * GNU General Public License for more details. >> + * >> + * You should have received a copy of the GNU General Public License >> + * along with this program; if not, see <http://www.gnu.org/licenses/>. >> + */ >> +#ifndef _ASM_ARM64_LIVEPATCH_H >> +#define _ASM_ARM64_LIVEPATCH_H >> + >> +#include <linux/module.h> >> +#include <linux/ftrace.h> >> + >> +#ifdef CONFIG_LIVEPATCH > >A nit but we removed such guards in the other header files. 
> >> +static inline int klp_check_compiler_support(void) >> +{ >> + return 0; >> +} >> + >> +static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip) >> +{ >> + regs->pc = ip; >> +} >> +#endif /* CONFIG_LIVEPATCH */ > >I also checked mod_arch_specific structure because of the way we deal >with relocations. It is defined only if CONFIG_ARM64_MODULE_PLTS is >enabled and there is a pointer to 'struct elf64_shdr' called plt. It is >used indirectly in apply_relocate_add() so we need it to stay. However it >points to an existing Elf section and SHF_ALLOC is added to its sh_flags >in module_frob_arch_sections() (arch/arm64/kernel/module-plts.c). >Therefore we should be ok. > >Jessica, could you check it as well, please? That sounds right, the plt will remain in module core memory, so we are fine there. However I think the plt->sh_size calculation will be incorrect for livepatch modules. In calculating mod->arch.plt_max_entries (see: module-plts.c), count_plts() is called for every rela section. For livepatch modules, this means count_plts() will also be called for our .klp.rela sections, which is correct behavior. However, count_plts() only considers relas referring to SHN_UNDEF symbols, and since every rela in a klp rela section refers to a SHN_LIVEPATCH symbol, these are all ignored. So count_plts() may return an incorrect value for a klp rela section. Miroslav, can you confirm the issue? I think the fix would be easy though; we can just add an additional check for SHN_LIVEPATCH in count_plts(). Jessica ^ permalink raw reply [flat|nested] 23+ messages in thread
* arm64: implement live patching 2016-07-11 21:58 ` Jessica Yu @ 2016-07-12 9:47 ` Miroslav Benes 2016-07-13 0:11 ` [PATCH] arm64: take SHN_LIVEPATCH syms into account when calculating plt_max_entries Jessica Yu 0 siblings, 1 reply; 23+ messages in thread From: Miroslav Benes @ 2016-07-12 9:47 UTC (permalink / raw) To: linux-arm-kernel On Mon, 11 Jul 2016, Jessica Yu wrote: > +++ Miroslav Benes [11/07/16 16:03 +0200]: > > On Mon, 27 Jun 2016, Torsten Duwe wrote: > > > > > diff --git a/arch/arm64/include/asm/livepatch.h > > > b/arch/arm64/include/asm/livepatch.h > > > new file mode 100644 > > > index 0000000..6b9a3d1 > > > --- /dev/null > > > +++ b/arch/arm64/include/asm/livepatch.h > > > @@ -0,0 +1,37 @@ > > > +/* > > > + * livepatch.h - arm64-specific Kernel Live Patching Core > > > + * > > > + * Copyright (C) 2016 SUSE > > > + * > > > + * This program is free software; you can redistribute it and/or > > > + * modify it under the terms of the GNU General Public License > > > + * as published by the Free Software Foundation; either version 2 > > > + * of the License, or (at your option) any later version. > > > + * > > > + * This program is distributed in the hope that it will be useful, > > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > > > + * GNU General Public License for more details. > > > + * > > > + * You should have received a copy of the GNU General Public License > > > + * along with this program; if not, see <http://www.gnu.org/licenses/>. > > > + */ > > > +#ifndef _ASM_ARM64_LIVEPATCH_H > > > +#define _ASM_ARM64_LIVEPATCH_H > > > + > > > +#include <linux/module.h> > > > +#include <linux/ftrace.h> > > > + > > > +#ifdef CONFIG_LIVEPATCH > > > > A nit but we removed such guards in the other header files. 
> > > > > +static inline int klp_check_compiler_support(void) > > > +{ > > > + return 0; > > > +} > > > + > > > +static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long > > > ip) > > > +{ > > > + regs->pc = ip; > > > +} > > > +#endif /* CONFIG_LIVEPATCH */ > > > > I also checked mod_arch_specific structure because of the way we deal > > with relocations. It is defined only if CONFIG_ARM64_MODULE_PLTS is > > enabled and there is a pointer to 'struct elf64_shdr' called plt. It is > > used indirectly in apply_relocate_add() so we need it to stay. However it > > points to an existing Elf section and SHF_ALLOC is added to its sh_flags > > in module_frob_arch_sections() (arch/arm64/kernel/module-plts.c). > > Therefore we should be ok. > > > > Jessica, could you check it as well, please? > > That sounds right, the plt will remain in module core memory, so we > are fine there. > > However I think the plt->sh_size calculation will be incorrect for > livepatch modules. In calculating mod->arch.plt_max_entries (see: > module-plts.c), count_plts() is called for every rela section. > For livepatch modules, this means count_plts() will also be called for > our .klp.rela sections, which is correct behavior. However, > count_plts() only considers relas referring to SHN_UNDEF symbols, and > since every rela in a klp rela section refers to a SHN_LIVEPATCH > symbol, these are all ignored. So count_plts() may return an incorrect > value for a klp rela section. You're right. During the patch module creation we basically transform all SHN_UNDEF relas to SHN_LIVEPATCH, right? We must take it into account here. > Miroslav, can you confirm the issue? I think the fix would be easy > though; we can just add an additional check for SHN_LIVEPATCH in > count_plts(). Yes, such a check should be sufficient. Thanks for looking into it. Miroslav ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH] arm64: take SHN_LIVEPATCH syms into account when calculating plt_max_entries 2016-07-12 9:47 ` Miroslav Benes @ 2016-07-13 0:11 ` Jessica Yu 2016-08-17 9:38 ` Miroslav Benes 0 siblings, 1 reply; 23+ messages in thread From: Jessica Yu @ 2016-07-13 0:11 UTC (permalink / raw) To: linux-arm-kernel SHN_LIVEPATCH symbols are technically a subset of SHN_UNDEF/undefined symbols, except that their addresses are resolved by livepatch at runtime. Therefore, when calculating the upper-bound for the number of plt entries to allocate, make sure to take livepatch symbols into account as well. Signed-off-by: Jessica Yu <jeyu@redhat.com> --- arch/arm64/kernel/module-plts.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c index 1ce90d8..1e95dc1 100644 --- a/arch/arm64/kernel/module-plts.c +++ b/arch/arm64/kernel/module-plts.c @@ -122,7 +122,8 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num) * as well, so modules can never grow beyond that limit. */ s = syms + ELF64_R_SYM(rela[i].r_info); - if (s->st_shndx != SHN_UNDEF) + if (s->st_shndx != SHN_UNDEF && + s->st_shndx != SHN_LIVEPATCH) break; /* -- 2.5.5 ^ permalink raw reply related [flat|nested] 23+ messages in thread
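The effect of the one-line check can be modelled in user space (hypothetical toy_* names; the real count_plts() walks Elf64_Rela entries in arch/arm64/kernel/module-plts.c and has more conditions than shown here):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the counting rule the fix establishes -- illustrative
 * only. A branch relocation may need a PLT slot when its symbol is
 * undefined at module link time, which after the fix includes
 * SHN_LIVEPATCH symbols (resolved by the livepatch core at patch time).
 * SHN_LIVEPATCH uses the kernel's reserved section-index value. */

#define SHN_UNDEF      0
#define SHN_LIVEPATCH  0xff20

struct toy_sym { unsigned st_shndx; };

static unsigned toy_count_plts(const struct toy_sym *syms, size_t n)
{
	unsigned plts = 0;

	for (size_t i = 0; i < n; i++) {
		/* the one-line fix: SHN_LIVEPATCH counts like SHN_UNDEF */
		if (syms[i].st_shndx == SHN_UNDEF ||
		    syms[i].st_shndx == SHN_LIVEPATCH)
			plts++;
	}
	return plts;
}

/* one undefined, one livepatch, one locally-defined symbol => 2 slots */
static unsigned demo_mixed_count(void)
{
	struct toy_sym syms[] = {
		{ SHN_UNDEF }, { SHN_LIVEPATCH }, { 1 /* defined */ },
	};
	return toy_count_plts(syms, 3);
}
```

Without the second condition, the livepatch symbol in the demo would be dropped from the upper bound and plt->sh_size could come out one entry short, which is exactly the under-allocation Jessica describes.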
* [PATCH] arm64: take SHN_LIVEPATCH syms into account when calculating plt_max_entries 2016-07-13 0:11 ` [PATCH] arm64: take SHN_LIVEPATCH syms into account when calculating plt_max_entries Jessica Yu @ 2016-08-17 9:38 ` Miroslav Benes 0 siblings, 0 replies; 23+ messages in thread From: Miroslav Benes @ 2016-08-17 9:38 UTC (permalink / raw) To: linux-arm-kernel On Tue, 12 Jul 2016, Jessica Yu wrote: > SHN_LIVEPATCH symbols are technically a subset of SHN_UNDEF/undefined > symbols, except that their addresses are resolved by livepatch at runtime. > Therefore, when calculating the upper-bound for the number of plt entries > to allocate, make sure to take livepatch symbols into account as well. > > Signed-off-by: Jessica Yu <jeyu@redhat.com> FWIW, I think the patch does what we want, but it's for arm people to judge. It might be better to include it to Torsten's patch set. Miroslav > --- > arch/arm64/kernel/module-plts.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c > index 1ce90d8..1e95dc1 100644 > --- a/arch/arm64/kernel/module-plts.c > +++ b/arch/arm64/kernel/module-plts.c > @@ -122,7 +122,8 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela > *rela, int num) > * as well, so modules can never grow beyond that > limit. > */ > s = syms + ELF64_R_SYM(rela[i].r_info); > - if (s->st_shndx != SHN_UNDEF) > + if (s->st_shndx != SHN_UNDEF && > + s->st_shndx != SHN_LIVEPATCH) > break; > > /* > -- > 2.5.5 > ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 2/2] arm64: implement live patching 2016-07-11 14:03 ` Miroslav Benes 2016-07-11 21:58 ` Jessica Yu @ 2016-08-11 16:46 ` Torsten Duwe 1 sibling, 0 replies; 23+ messages in thread From: Torsten Duwe @ 2016-08-11 16:46 UTC (permalink / raw) To: linux-arm-kernel On Mon, Jul 11, 2016 at 04:03:08PM +0200, Miroslav Benes wrote: > On Mon, 27 Jun 2016, Torsten Duwe wrote: > > + > > +#ifdef CONFIG_LIVEPATCH > A nit but we removed such guards in the other header files. I just noticed this has fallen between the cracks :-/ Torsten ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v2 2/2] arm64: implement live patching 2016-06-27 15:17 ` [PATCH v2 2/2] arm64: implement live patching Torsten Duwe 2016-07-11 14:03 ` Miroslav Benes @ 2016-07-15 16:03 ` Paul Gortmaker 1 sibling, 0 replies; 23+ messages in thread From: Paul Gortmaker @ 2016-07-15 16:03 UTC (permalink / raw) To: linux-arm-kernel On Mon, Jun 27, 2016 at 11:17 AM, Torsten Duwe <duwe@lst.de> wrote: > On top of FTRACE_WITH_REGS and the klp changes that go into v4.7 > this is straightforward. > > Signed-off-by: Torsten Duwe <duwe@suse.de> > --- > arch/arm64/Kconfig | 3 +++ > arch/arm64/include/asm/livepatch.h | 37 +++++++++++++++++++++++++++++++++++++ > arch/arm64/kernel/entry-ftrace.S | 13 +++++++++++++ > 3 files changed, 53 insertions(+) > create mode 100644 arch/arm64/include/asm/livepatch.h > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig > index 36a0e26..cb5adf3 100644 > --- a/arch/arm64/Kconfig > +++ b/arch/arm64/Kconfig > @@ -80,6 +80,7 @@ config ARM64 > select HAVE_GENERIC_DMA_COHERENT > select HAVE_HW_BREAKPOINT if PERF_EVENTS > select HAVE_IRQ_TIME_ACCOUNTING > + select HAVE_LIVEPATCH > select HAVE_MEMBLOCK > select HAVE_MEMBLOCK_NODE_MAP if NUMA > select HAVE_PATA_PLATFORM > @@ -1042,4 +1043,6 @@ if CRYPTO > source "arch/arm64/crypto/Kconfig" > endif > > +source "kernel/livepatch/Kconfig" > + > source "lib/Kconfig" > diff --git a/arch/arm64/include/asm/livepatch.h b/arch/arm64/include/asm/livepatch.h > new file mode 100644 > index 0000000..6b9a3d1 > --- /dev/null > +++ b/arch/arm64/include/asm/livepatch.h > @@ -0,0 +1,37 @@ > +/* > + * livepatch.h - arm64-specific Kernel Live Patching Core > + * > + * Copyright (C) 2016 SUSE > + * > + * This program is free software; you can redistribute it and/or > + * modify it under the terms of the GNU General Public License > + * as published by the Free Software Foundation; either version 2 > + * of the License, or (at your option) any later version. 
> + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * You should have received a copy of the GNU General Public License > + * along with this program; if not, see <http://www.gnu.org/licenses/>. > + */ > +#ifndef _ASM_ARM64_LIVEPATCH_H > +#define _ASM_ARM64_LIVEPATCH_H > + > +#include <linux/module.h> > +#include <linux/ftrace.h> These includes don't look right. It would seem all you need is the one for struct pt_regs. Paul. -- > + > +#ifdef CONFIG_LIVEPATCH > +static inline int klp_check_compiler_support(void) > +{ > + return 0; > +} > + > +static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip) > +{ > + regs->pc = ip; > +} > +#endif /* CONFIG_LIVEPATCH */ > + > +#endif /* _ASM_ARM64_LIVEPATCH_H */ > diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S > index 3ebe791..b166cbf 100644 > --- a/arch/arm64/kernel/entry-ftrace.S > +++ b/arch/arm64/kernel/entry-ftrace.S > @@ -204,6 +204,9 @@ ENTRY(ftrace_caller) > str x9, [sp, #S_LR] > /* The program counter just after the ftrace call site */ > str lr, [sp, #S_PC] > +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_FUNCTION_GRAPH_TRACER) > + mov x19,lr /* remember old return address */ > +#endif > /* The stack pointer as it was on ftrace_caller entry... */ > add x29, sp, #S_FRAME_SIZE+16 /* ...is also our new FP */ > str x29, [sp, #S_SP] > @@ -219,6 +222,16 @@ ftrace_call: > > bl ftrace_stub > > +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_FUNCTION_GRAPH_TRACER) > + /* Is the trace function a live patcher an has messed with > + * the return address? > + */ > + ldr x9, [sp, #S_PC] > + cmp x9, x19 /* compare with the value we remembered */ > + /* to not call graph tracer's "call" mechanism twice! 
*/ > + b.eq ftrace_regs_return > +#endif > + > #ifdef CONFIG_FUNCTION_GRAPH_TRACER > .global ftrace_graph_call > ftrace_graph_call: // ftrace_graph_caller(); > -- > 2.6.6 > > -- > To unsubscribe from this list: send the line "unsubscribe linux-arch" in > the body of a message to majordomo at vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2016-08-17 9:38 UTC | newest] Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2016-06-27 15:15 [PATCH v2 0/2] arm64 live patching Torsten Duwe 2016-06-27 15:17 ` [PATCH v2 1/2] arm64: implement FTRACE_WITH_REGS Torsten Duwe 2016-07-01 12:53 ` Josh Poimboeuf 2016-07-04 9:18 ` Torsten Duwe 2016-07-03 5:17 ` kbuild test robot 2016-07-08 14:58 ` Petr Mladek 2016-07-08 15:07 ` Torsten Duwe 2016-07-08 15:24 ` Petr Mladek 2016-07-08 15:48 ` Josh Poimboeuf 2016-07-08 15:57 ` Steven Rostedt 2016-07-08 20:24 ` Torsten Duwe 2016-07-08 21:08 ` Steven Rostedt 2016-07-09 9:06 ` Torsten Duwe 2016-07-15 18:36 ` Steven Rostedt 2016-07-08 15:49 ` Steven Rostedt 2016-06-27 15:17 ` [PATCH v2 2/2] arm64: implement live patching Torsten Duwe 2016-07-11 14:03 ` Miroslav Benes 2016-07-11 21:58 ` Jessica Yu 2016-07-12 9:47 ` Miroslav Benes 2016-07-13 0:11 ` [PATCH] arm64: take SHN_LIVEPATCH syms into account when calculating plt_max_entries Jessica Yu 2016-08-17 9:38 ` Miroslav Benes 2016-08-11 16:46 ` [PATCH v2 2/2] arm64: implement live patching Torsten Duwe 2016-07-15 16:03 ` Paul Gortmaker