* [PATCH v5 1/9] ppc64 (le): prepare for -mprofile-kernel
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
@ 2015-12-04 13:36 ` Torsten Duwe
2015-12-04 13:38 ` [PATCH v5 2/9] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
` (8 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:36 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
The gcc switch -mprofile-kernel, available for ppc64 with gcc > 4.8.5,
makes the call to _mcount happen very early in the function, which
low-level ASM code and code-patching functions need to take into account.
In particular, the link register and the parameter registers are still
live and have not yet been saved into a new stack frame.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/kernel/entry_64.S | 44 +++++++++++++++++++++++++++++++++++++++--
arch/powerpc/kernel/ftrace.c | 12 +++++++++--
arch/powerpc/kernel/module_64.c | 13 ++++++++++++
3 files changed, 65 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index a94f155..c3b2e75 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1206,7 +1206,11 @@ _GLOBAL(enter_prom)
#ifdef CONFIG_DYNAMIC_FTRACE
_GLOBAL(mcount)
_GLOBAL(_mcount)
- blr
+ mflr r0
+ mtctr r0
+ ld r0,LRSAVE(r1)
+ mtlr r0
+ bctr
_GLOBAL_TOC(ftrace_caller)
/* Taken from output of objdump from lib64/glibc */
@@ -1262,13 +1266,28 @@ _GLOBAL(ftrace_stub)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
_GLOBAL(ftrace_graph_caller)
+#ifdef CC_USING_MPROFILE_KERNEL
+ /* with -mprofile-kernel, parameter regs are still alive at _mcount */
+ std r10, 104(r1)
+ std r9, 96(r1)
+ std r8, 88(r1)
+ std r7, 80(r1)
+ std r6, 72(r1)
+ std r5, 64(r1)
+ std r4, 56(r1)
+ std r3, 48(r1)
+ mfctr r4 /* ftrace_caller has moved local addr here */
+ std r4, 40(r1)
+ mflr r3 /* ftrace_caller has restored LR from stack */
+#else
/* load r4 with local address */
ld r4, 128(r1)
- subi r4, r4, MCOUNT_INSN_SIZE
/* Grab the LR out of the caller stack frame */
ld r11, 112(r1)
ld r3, 16(r11)
+#endif
+ subi r4, r4, MCOUNT_INSN_SIZE
bl prepare_ftrace_return
nop
@@ -1277,6 +1296,26 @@ _GLOBAL(ftrace_graph_caller)
* prepare_ftrace_return gives us the address we divert to.
* Change the LR in the callers stack frame to this.
*/
+
+#ifdef CC_USING_MPROFILE_KERNEL
+ mtlr r3
+
+ ld r0, 40(r1)
+ mtctr r0
+ ld r10, 104(r1)
+ ld r9, 96(r1)
+ ld r8, 88(r1)
+ ld r7, 80(r1)
+ ld r6, 72(r1)
+ ld r5, 64(r1)
+ ld r4, 56(r1)
+ ld r3, 48(r1)
+
+ addi r1, r1, 112
+ mflr r0
+ std r0, LRSAVE(r1)
+ bctr
+#else
ld r11, 112(r1)
std r3, 16(r11)
@@ -1284,6 +1323,7 @@ _GLOBAL(ftrace_graph_caller)
mtlr r0
addi r1, r1, 112
blr
+#endif
_GLOBAL(return_to_handler)
/* need to save return values */
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 44d4d8e..080c525 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -306,11 +306,19 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
* The load offset is different depending on the ABI. For simplicity
* just mask it out when doing the compare.
*/
+#ifndef CC_USING_MPROFILE_KERNEL
if ((op[0] != 0x48000008) || ((op[1] & 0xffff0000) != 0xe8410000)) {
- pr_err("Unexpected call sequence: %x %x\n", op[0], op[1]);
+ pr_err("Unexpected call sequence at %p: %x %x\n",
+ ip, op[0], op[1]);
return -EINVAL;
}
-
+#else
+ /* look for patched "NOP" on ppc64 with -mprofile-kernel */
+ if (op[0] != 0x60000000) {
+ pr_err("Unexpected call at %p: %x\n", ip, op[0]);
+ return -EINVAL;
+ }
+#endif
/* If we never set up a trampoline to ftrace_caller, then bail */
if (!rec->arch.mod->arch.tramp) {
pr_err("No ftrace trampoline\n");
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 6838451..5312cd4 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -475,6 +475,19 @@ static unsigned long stub_for_addr(Elf64_Shdr *sechdrs,
static int restore_r2(u32 *instruction, struct module *me)
{
if (*instruction != PPC_INST_NOP) {
+#ifdef CC_USING_MPROFILE_KERNEL
+ /* -mprofile-kernel sequence starting with
+ * mflr r0; std r0, LRSAVE(r1)
+ */
+ if (instruction[-3] == 0x7c0802a6 &&
+ instruction[-2] == 0xf8010010) {
+ /* Nothing to be done here, it's an _mcount
+ * call location and r2 will have to be
+ * restored in the _mcount function.
+ */
+ return 1;
+ }
+#endif
pr_err("%s: Expect noop after relocate, got %08x\n",
me->name, *instruction);
return 0;
--
1.8.5.6
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v5 2/9] ppc64le FTRACE_WITH_REGS implementation
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
2015-12-04 13:36 ` [PATCH v5 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
@ 2015-12-04 13:38 ` Torsten Duwe
2015-12-04 13:39 ` [PATCH v5 3/9] ppc use ftrace_modify_all_code default Torsten Duwe
` (7 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:38 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
Implement FTRACE_WITH_REGS for powerpc64, on ELF ABI v2.
Initial work started by Vojtech Pavlik, used with permission.
* arch/powerpc/kernel/entry_64.S:
- Implement an effective ftrace_caller that works from
within the kernel binary as well as from modules.
* arch/powerpc/kernel/ftrace.c:
- be prepared to deal with ppc64 ELF ABI v2, especially
calls to _mcount that result from gcc -mprofile-kernel
- a little more error verbosity
* arch/powerpc/kernel/module_64.c:
- do not save the TOC pointer in the trampoline when the
destination is ftrace_caller. This trampoline jump happens from
a function prologue, before a new stack frame is set up, so the
store would clobber a slot in the caller's still-active frame.
- relax is_module_trampoline() to recognise the modified
trampoline.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/include/asm/ftrace.h | 5 +++
arch/powerpc/kernel/entry_64.S | 77 +++++++++++++++++++++++++++++++++++++++
arch/powerpc/kernel/ftrace.c | 60 +++++++++++++++++++++++++++---
arch/powerpc/kernel/module_64.c | 25 ++++++++++++-
4 files changed, 160 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index ef89b14..50ca758 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -46,6 +46,8 @@
extern void _mcount(void);
#ifdef CONFIG_DYNAMIC_FTRACE
+# define FTRACE_ADDR ((unsigned long)ftrace_caller)
+# define FTRACE_REGS_ADDR FTRACE_ADDR
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/* reloction of mcount call site is the same as the address */
@@ -58,6 +60,9 @@ struct dyn_arch_ftrace {
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* __ASSEMBLY__ */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#define ARCH_SUPPORTS_FTRACE_OPS 1
+#endif
#endif
#if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_PPC64) && !defined(__ASSEMBLY__)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index c3b2e75..294a9ca 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1212,6 +1212,7 @@ _GLOBAL(_mcount)
mtlr r0
bctr
+#ifndef CC_USING_MPROFILE_KERNEL
_GLOBAL_TOC(ftrace_caller)
/* Taken from output of objdump from lib64/glibc */
mflr r3
@@ -1233,6 +1234,82 @@ _GLOBAL(ftrace_graph_stub)
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
+#else
+_GLOBAL(ftrace_caller)
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+ mflr r0
+ bl 2f
+2: mflr r12
+ mtlr r0
+ mr r0,r2 /* save callee's TOC */
+ addis r2,r12,(.TOC.-ftrace_caller-8)@ha
+ addi r2,r2,(.TOC.-ftrace_caller-8)@l
+#else
+ mr r0,r2
+#endif
+ ld r12,LRSAVE(r1) /* get caller's address */
+
+ stdu r1,-SWITCH_FRAME_SIZE(r1)
+
+ std r12, _LINK(r1)
+ SAVE_8GPRS(0,r1)
+ std r0, 24(r1) /* save TOC */
+ SAVE_8GPRS(8,r1)
+ SAVE_8GPRS(16,r1)
+ SAVE_8GPRS(24,r1)
+
+ addis r3,r2,function_trace_op@toc@ha
+ addi r3,r3,function_trace_op@toc@l
+ ld r5,0(r3)
+
+ mflr r3
+ std r3, _NIP(r1)
+ std r3, 16(r1)
+ subi r3, r3, MCOUNT_INSN_SIZE
+ mfmsr r4
+ std r4, _MSR(r1)
+ mfctr r4
+ std r4, _CTR(r1)
+ mfxer r4
+ std r4, _XER(r1)
+ mr r4, r12
+ addi r6, r1, STACK_FRAME_OVERHEAD
+
+.globl ftrace_call
+ftrace_call:
+ bl ftrace_stub
+ nop
+
+ ld r3, _NIP(r1)
+ mtlr r3
+
+ REST_8GPRS(0,r1)
+ REST_8GPRS(8,r1)
+ REST_8GPRS(16,r1)
+ REST_8GPRS(24,r1)
+
+ addi r1, r1, SWITCH_FRAME_SIZE
+
+ ld r12, LRSAVE(r1) /* get caller's address */
+ mtlr r12
+ mr r2,r0 /* restore callee's TOC */
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ stdu r1, -112(r1)
+.globl ftrace_graph_call
+ftrace_graph_call:
+ b ftrace_graph_stub
+_GLOBAL(ftrace_graph_stub)
+ addi r1, r1, 112
+#endif
+
+ mflr r0 /* move this LR to CTR */
+ mtctr r0
+
+ ld r0,LRSAVE(r1) /* restore callee's lr at _mcount site */
+ mtlr r0
+ bctr /* jump after _mcount site */
+#endif /* CC_USING_MPROFILE_KERNEL */
_GLOBAL(ftrace_stub)
blr
#else
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 080c525..310137f 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -61,8 +61,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
return -EFAULT;
/* Make sure it is what we expect it to be */
- if (replaced != old)
+ if (replaced != old) {
+ pr_err("%p: replaced (%#x) != old (%#x)",
+ (void *)ip, replaced, old);
return -EINVAL;
+ }
/* replace the text with the new text */
if (patch_instruction((unsigned int *)ip, new))
@@ -106,14 +109,16 @@ static int
__ftrace_make_nop(struct module *mod,
struct dyn_ftrace *rec, unsigned long addr)
{
- unsigned int op;
+ unsigned int op, op0, op1, pop;
unsigned long entry, ptr;
unsigned long ip = rec->ip;
void *tramp;
/* read where this goes */
- if (probe_kernel_read(&op, (void *)ip, sizeof(int)))
+ if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
+ pr_err("Fetching opcode failed.\n");
return -EFAULT;
+ }
/* Make sure that that this is still a 24bit jump */
if (!is_bl_op(op)) {
@@ -158,10 +163,46 @@ __ftrace_make_nop(struct module *mod,
*
* Use a b +8 to jump over the load.
*/
- op = 0x48000008; /* b +8 */
- if (patch_instruction((unsigned int *)ip, op))
+ pop = 0x48000008; /* b +8 */
+
+ /*
+ * Check what is in the next instruction. We can see ld r2,40(r1), but
+ * on first pass after boot we will see mflr r0.
+ */
+ if (probe_kernel_read(&op, (void *)(ip+4), MCOUNT_INSN_SIZE)) {
+ pr_err("Fetching op failed.\n");
+ return -EFAULT;
+ }
+
+ if (op != 0xe8410028) { /* ld r2,STACK_OFFSET(r1) */
+
+ if (probe_kernel_read(&op0, (void *)(ip-8), MCOUNT_INSN_SIZE)) {
+ pr_err("Fetching op0 failed.\n");
+ return -EFAULT;
+ }
+
+ if (probe_kernel_read(&op1, (void *)(ip-4), MCOUNT_INSN_SIZE)) {
+ pr_err("Fetching op1 failed.\n");
+ return -EFAULT;
+ }
+
+ /* expect mflr r0 followed by std r0,LRSAVE(r1) */
+ if (op0 != 0x7c0802a6 || op1 != 0xf8010010) {
+ pr_err("Unexpected instructions around bl\n"
+ "when enabling dynamic ftrace!\t"
+ "(%08x,%08x,bl,%08x)\n", op0, op1, op);
+ return -EINVAL;
+ }
+
+ /* When using -mprofile-kernel there is no load to jump over */
+ pop = PPC_INST_NOP;
+ }
+
+ if (patch_instruction((unsigned int *)ip, pop)) {
+ pr_err("Patching NOP failed.\n");
return -EPERM;
+ }
return 0;
}
@@ -287,6 +328,13 @@ int ftrace_make_nop(struct module *mod,
#ifdef CONFIG_MODULES
#ifdef CONFIG_PPC64
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ return ftrace_make_call(rec, addr);
+}
+#endif
static int
__ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
@@ -338,7 +386,7 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
return 0;
}
-#else
+#else /* !CONFIG_PPC64: */
static int
__ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 5312cd4..7218b37 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -138,12 +138,25 @@ static u32 ppc64_stub_insns[] = {
0x4e800420 /* bctr */
};
+#ifdef CC_USING_MPROFILE_KERNEL
+/* In case of _mcount calls or dynamic ftracing, do not save the
+ * current callee's TOC (in R2) again into the original caller's stack
+ * frame during this trampoline hop. The stack frame already holds
+ * that of the original caller. _mcount and ftrace_caller will take
+ * care of this TOC value themselves.
+ */
+#define SQUASH_TOC_SAVE_INSN(trampoline_addr) \
+ (((struct ppc64_stub_entry *)(trampoline_addr))->jump[2] = PPC_INST_NOP)
+#else
+#define SQUASH_TOC_SAVE_INSN(trampoline_addr)
+#endif
+
#ifdef CONFIG_DYNAMIC_FTRACE
static u32 ppc64_stub_mask[] = {
0xffff0000,
0xffff0000,
- 0xffffffff,
+ 0x00000000,
0xffffffff,
#if !defined(_CALL_ELF) || _CALL_ELF != 2
0xffffffff,
@@ -170,6 +183,9 @@ bool is_module_trampoline(u32 *p)
if ((insna & mask) != (insnb & mask))
return false;
}
+ if (insns[2] != ppc64_stub_insns[2] &&
+ insns[2] != PPC_INST_NOP)
+ return false;
return true;
}
@@ -618,6 +634,9 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
return -ENOENT;
if (!restore_r2((u32 *)location + 1, me))
return -ENOEXEC;
+ /* Squash the TOC saver for profiler calls */
+ if (!strcmp("_mcount", strtab+sym->st_name))
+ SQUASH_TOC_SAVE_INSN(value);
} else
value += local_entry_offset(sym);
@@ -678,6 +697,10 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
me->arch.tramp = stub_for_addr(sechdrs,
(unsigned long)ftrace_caller,
me);
+ /* ftrace_caller will take care of the TOC;
+ * do not clobber original caller's value.
+ */
+ SQUASH_TOC_SAVE_INSN(me->arch.tramp);
#endif
return 0;
--
1.8.5.6
* [PATCH v5 3/9] ppc use ftrace_modify_all_code default
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
2015-12-04 13:36 ` [PATCH v5 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
2015-12-04 13:38 ` [PATCH v5 2/9] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
@ 2015-12-04 13:39 ` Torsten Duwe
2015-12-04 13:50 ` [PATCH v5 4/9] ppc64 ftrace_with_regs configuration variables Torsten Duwe
` (6 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:39 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
Convert ppc's arch_ftrace_update_code from its own open-coded variant
to the generic default implementation (without stop_machine() --
our instructions are properly aligned and the replacements atomic ;)
With this we gain error checking and the much-needed function_trace_op
handling.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/kernel/ftrace.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 310137f..e419c7b 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -511,20 +511,12 @@ void ftrace_replace_code(int enable)
}
}
+/* Use the default ftrace_modify_all_code, but without
+ * stop_machine().
+ */
void arch_ftrace_update_code(int command)
{
- if (command & FTRACE_UPDATE_CALLS)
- ftrace_replace_code(1);
- else if (command & FTRACE_DISABLE_CALLS)
- ftrace_replace_code(0);
-
- if (command & FTRACE_UPDATE_TRACE_FUNC)
- ftrace_update_ftrace_func(ftrace_trace_function);
-
- if (command & FTRACE_START_FUNC_RET)
- ftrace_enable_ftrace_graph_caller();
- else if (command & FTRACE_STOP_FUNC_RET)
- ftrace_disable_ftrace_graph_caller();
+ ftrace_modify_all_code(command);
}
int __init ftrace_dyn_arch_init(void)
--
1.8.5.6
* [PATCH v5 4/9] ppc64 ftrace_with_regs configuration variables
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (2 preceding siblings ...)
2015-12-04 13:39 ` [PATCH v5 3/9] ppc use ftrace_modify_all_code default Torsten Duwe
@ 2015-12-04 13:50 ` Torsten Duwe
2015-12-04 13:52 ` [PATCH v5 5/9] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
` (5 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:50 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
* Makefile:
- globally use -mprofile-kernel if it is configured
and available.
* arch/powerpc/Kconfig / kernel/trace/Kconfig:
- declare that ppc64le supports HAVE_MPROFILE_KERNEL and
HAVE_DYNAMIC_FTRACE_WITH_REGS, and use them.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/Kconfig | 2 ++
arch/powerpc/Makefile | 10 ++++++++++
kernel/trace/Kconfig | 5 +++++
3 files changed, 17 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index db49e0d..89b1a2a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -97,8 +97,10 @@ config PPC
select OF_RESERVED_MEM
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
+ select HAVE_DYNAMIC_FTRACE_WITH_REGS if PPC64 && CPU_LITTLE_ENDIAN
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_MPROFILE_KERNEL if PPC64 && CPU_LITTLE_ENDIAN
select SYSCTL_EXCEPTION_TRACE
select ARCH_WANT_OPTIONAL_GPIOLIB
select VIRT_TO_BUS if !PPC64
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 96efd82..2f9b527 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -133,6 +133,16 @@ else
CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
endif
+ifeq ($(CONFIG_PPC64),y)
+ifdef CONFIG_HAVE_MPROFILE_KERNEL
+CC_USING_MPROFILE_KERNEL := $(call cc-option,-mprofile-kernel)
+ifdef CC_USING_MPROFILE_KERNEL
+CC_FLAGS_FTRACE := -pg $(CC_USING_MPROFILE_KERNEL)
+KBUILD_CPPFLAGS += -DCC_USING_MPROFILE_KERNEL
+endif
+endif
+endif
+
CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index e45db6b..a138f6d 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -52,6 +52,11 @@ config HAVE_FENTRY
help
Arch supports the gcc options -pg with -mfentry
+config HAVE_MPROFILE_KERNEL
+ bool
+ help
+ Arch supports the gcc options -pg with -mprofile-kernel
+
config HAVE_C_RECORDMCOUNT
bool
help
--
1.8.5.6
* [PATCH v5 5/9] ppc64 ftrace_with_regs: spare early boot and low level
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (3 preceding siblings ...)
2015-12-04 13:50 ` [PATCH v5 4/9] ppc64 ftrace_with_regs configuration variables Torsten Duwe
@ 2015-12-04 13:52 ` Torsten Duwe
2015-12-04 13:55 ` [PATCH v5 6/9] ppc64 ftrace: disable profiling for some functions Torsten Duwe
` (4 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:52 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
Using -mprofile-kernel on early boot code not only confuses the
checker but is also useless, as the ftrace infrastructure is not yet
in place there. Proceed as with -pg (remove it from CFLAGS); do the
same for time.o and for ftrace itself.
* arch/powerpc/kernel/Makefile:
- remove -mprofile-kernel from low level and boot code objects'
CFLAGS for FUNCTION_TRACER configurations.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/kernel/Makefile | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ba33693..0f417d5 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -16,14 +16,14 @@ endif
ifdef CONFIG_FUNCTION_TRACER
# Do not trace early boot code
-CFLAGS_REMOVE_cputable.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_prom_init.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_btext.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_prom.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_cputable.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_prom_init.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_btext.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_prom.o = -pg -mno-sched-epilog -mprofile-kernel
# do not trace tracer code
-CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog -mprofile-kernel
# timers used by tracing
-CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog -mprofile-kernel
endif
obj-y := cputable.o ptrace.o syscalls.o \
--
1.8.5.6
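For context, Kbuild implements these per-object removals by filtering the listed flags out of the final compiler invocation; paraphrased from scripts/Makefile.lib of that era (variable names as in Kbuild, shown here only to illustrate the mechanism):

```make
# CFLAGS_REMOVE_<object>.o entries are filtered out of the
# per-object C flags before the compiler is invoked
_c_flags = $(filter-out $(CFLAGS_REMOVE_$(basetarget).o), $(orig_c_flags))
```

This is why simply appending -mprofile-kernel to each CFLAGS_REMOVE_* line, as the patch does, is enough to keep the new flag off those objects.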
* [PATCH v5 6/9] ppc64 ftrace: disable profiling for some functions
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (4 preceding siblings ...)
2015-12-04 13:52 ` [PATCH v5 5/9] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
@ 2015-12-04 13:55 ` Torsten Duwe
2015-12-04 13:57 ` [PATCH v5 7/9] ppc64 ftrace: disable profiling for some files Torsten Duwe
` (3 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:55 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
At least POWER7/8 have MMUs that don't completely autoload;
a normal, recoverable memory fault can pass through these functions.
If a dynamic tracer function causes such a fault, tracing any of
these functions with -mprofile-kernel may end in endless recursion.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/kernel/process.c | 2 +-
arch/powerpc/mm/fault.c | 2 +-
arch/powerpc/mm/hash_utils_64.c | 18 +++++++++---------
arch/powerpc/mm/hugetlbpage-hash64.c | 2 +-
arch/powerpc/mm/hugetlbpage.c | 4 ++--
arch/powerpc/mm/mem.c | 2 +-
arch/powerpc/mm/pgtable_64.c | 2 +-
arch/powerpc/mm/slb.c | 6 +++---
arch/powerpc/mm/slice.c | 8 ++++----
9 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 646bf4d..5b3c19d 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -733,7 +733,7 @@ static inline void __switch_to_tm(struct task_struct *prev)
* don't know which of the checkpointed state and the transactional
* state to use.
*/
-void restore_tm_state(struct pt_regs *regs)
+notrace void restore_tm_state(struct pt_regs *regs)
{
unsigned long msr_diff;
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index a67c6d7..125be37 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -205,7 +205,7 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
* The return value is 0 if the fault was handled, or the signal
* number if this is a kernel fault that can't be handled here.
*/
-int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
+notrace int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
unsigned long error_code)
{
enum ctx_state prev_state = exception_enter();
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 7f9616f..64f5b40 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -849,7 +849,7 @@ void early_init_mmu_secondary(void)
/*
* Called by asm hashtable.S for doing lazy icache flush
*/
-unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
+notrace unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
{
struct page *page;
@@ -870,7 +870,7 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
}
#ifdef CONFIG_PPC_MM_SLICES
-static unsigned int get_paca_psize(unsigned long addr)
+static notrace unsigned int get_paca_psize(unsigned long addr)
{
u64 lpsizes;
unsigned char *hpsizes;
@@ -899,7 +899,7 @@ unsigned int get_paca_psize(unsigned long addr)
* For now this makes the whole process use 4k pages.
*/
#ifdef CONFIG_PPC_64K_PAGES
-void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
+notrace void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
{
if (get_slice_psize(mm, addr) == MMU_PAGE_4K)
return;
@@ -920,7 +920,7 @@ void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
* Result is 0: full permissions, _PAGE_RW: read-only,
* _PAGE_USER or _PAGE_USER|_PAGE_RW: no access.
*/
-static int subpage_protection(struct mm_struct *mm, unsigned long ea)
+static notrace int subpage_protection(struct mm_struct *mm, unsigned long ea)
{
struct subpage_prot_table *spt = &mm->context.spt;
u32 spp = 0;
@@ -968,7 +968,7 @@ void hash_failure_debug(unsigned long ea, unsigned long access,
trap, vsid, ssize, psize, lpsize, pte);
}
-static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
+static notrace void check_paca_psize(unsigned long ea, struct mm_struct *mm,
int psize, bool user_region)
{
if (user_region) {
@@ -990,7 +990,7 @@ static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
* -1 - critical hash insertion error
* -2 - access not permitted by subpage protection mechanism
*/
-int hash_page_mm(struct mm_struct *mm, unsigned long ea,
+notrace int hash_page_mm(struct mm_struct *mm, unsigned long ea,
unsigned long access, unsigned long trap,
unsigned long flags)
{
@@ -1187,7 +1187,7 @@ bail:
}
EXPORT_SYMBOL_GPL(hash_page_mm);
-int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
+notrace int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
unsigned long dsisr)
{
unsigned long flags = 0;
@@ -1289,7 +1289,7 @@ out_exit:
/* WARNING: This is called from hash_low_64.S, if you change this prototype,
* do not forget to update the assembly call site !
*/
-void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
+notrace void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
unsigned long flags)
{
unsigned long hash, index, shift, hidx, slot;
@@ -1437,7 +1437,7 @@ void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
exception_exit(prev_state);
}
-long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
+notrace long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
unsigned long pa, unsigned long rflags,
unsigned long vflags, int psize, int ssize)
{
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index d94b1af..50b8c6f 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -18,7 +18,7 @@ extern long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
unsigned long pa, unsigned long rlags,
unsigned long vflags, int psize, int ssize);
-int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
+notrace int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
pte_t *ptep, unsigned long trap, unsigned long flags,
int ssize, unsigned int shift, unsigned int mmu_psize)
{
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 9833fee..00c4b03 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -942,7 +942,7 @@ static int __init hugetlbpage_init(void)
#endif
arch_initcall(hugetlbpage_init);
-void flush_dcache_icache_hugepage(struct page *page)
+notrace void flush_dcache_icache_hugepage(struct page *page)
{
int i;
void *start;
@@ -975,7 +975,7 @@ void flush_dcache_icache_hugepage(struct page *page)
* when we have MSR[EE] = 0 but the paca->soft_enabled = 1
*/
-pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+notrace pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
bool *is_thp, unsigned *shift)
{
pgd_t pgd, *pgdp;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3..f690e8a 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -406,7 +406,7 @@ void flush_dcache_page(struct page *page)
}
EXPORT_SYMBOL(flush_dcache_page);
-void flush_dcache_icache_page(struct page *page)
+notrace void flush_dcache_icache_page(struct page *page)
{
#ifdef CONFIG_HUGETLB_PAGE
if (PageCompound(page)) {
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index e92cb21..c74050b 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -442,7 +442,7 @@ static void page_table_free_rcu(void *table)
}
}
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
+notrace void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
{
unsigned long pgf = (unsigned long)table;
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 515730e..3e9be5d 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -96,7 +96,7 @@ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
: "memory" );
}
-static void __slb_flush_and_rebolt(void)
+static notrace void __slb_flush_and_rebolt(void)
{
/* If you change this make sure you change SLB_NUM_BOLTED
* and PR KVM appropriately too. */
@@ -136,7 +136,7 @@ static void __slb_flush_and_rebolt(void)
: "memory");
}
-void slb_flush_and_rebolt(void)
+notrace void slb_flush_and_rebolt(void)
{
WARN_ON(!irqs_disabled());
@@ -151,7 +151,7 @@ void slb_flush_and_rebolt(void)
get_paca()->slb_cache_ptr = 0;
}
-void slb_vmalloc_update(void)
+notrace void slb_vmalloc_update(void)
{
unsigned long vflags;
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 0f432a7..f92f0f0 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -76,8 +76,8 @@ static void slice_print_mask(const char *label, struct slice_mask mask) {}
#endif
-static struct slice_mask slice_range_to_mask(unsigned long start,
- unsigned long len)
+static notrace struct slice_mask slice_range_to_mask(unsigned long start,
+ unsigned long len)
{
unsigned long end = start + len - 1;
struct slice_mask ret = { 0, 0 };
@@ -564,7 +564,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
current->mm->context.user_psize, 1);
}
-unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+notrace unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
{
unsigned char *hpsizes;
int index, mask_index;
@@ -645,7 +645,7 @@ void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
spin_unlock_irqrestore(&slice_convert_lock, flags);
}
-void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+notrace void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize)
{
struct slice_mask mask = slice_range_to_mask(start, len);
--
1.8.5.6
* [PATCH v5 7/9] ppc64 ftrace: disable profiling for some files
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (5 preceding siblings ...)
2015-12-04 13:55 ` [PATCH v5 6/9] ppc64 ftrace: disable profiling for some functions Torsten Duwe
@ 2015-12-04 13:57 ` Torsten Duwe
2015-12-04 14:11 ` [PATCH v5 8/9] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
` (2 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 13:57 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
This patch complements the "notrace" attribute for selected functions.
It adds -mprofile-kernel to the cc flags to be stripped from the command
line for code-patching.o and feature-fixups.o, in addition to "-pg".
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/lib/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index a47e142..98e22b2 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -6,8 +6,8 @@ subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
-CFLAGS_REMOVE_code-patching.o = -pg
-CFLAGS_REMOVE_feature-fixups.o = -pg
+CFLAGS_REMOVE_code-patching.o = -pg -mprofile-kernel
+CFLAGS_REMOVE_feature-fixups.o = -pg -mprofile-kernel
obj-y += string.o alloc.o crtsavres.o ppc_ksyms.o code-patching.o \
feature-fixups.o
--
1.8.5.6
* [PATCH v5 8/9] Implement kernel live patching for ppc64le (ABIv2)
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (6 preceding siblings ...)
2015-12-04 13:57 ` [PATCH v5 7/9] ppc64 ftrace: disable profiling for some files Torsten Duwe
@ 2015-12-04 14:11 ` Torsten Duwe
2015-12-04 14:13 ` [PATCH v5 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
2016-01-06 14:17 ` [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Petr Mladek
9 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 14:11 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
* create the appropriate files+functions
arch/powerpc/include/asm/livepatch.h
klp_check_compiler_support,
klp_arch_set_pc
arch/powerpc/kernel/livepatch.c with a stub for
klp_write_module_reloc
This is architecture-independent work in progress.
* introduce a fixup in arch/powerpc/kernel/entry_64.S
for local calls that are becoming global due to live patching.
And of course do the main KLP thing: return to a maybe different
address, possibly altered by the live patching ftrace op.
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/include/asm/livepatch.h | 45 +++++++++++++++++++++++++++++++
arch/powerpc/kernel/entry_64.S | 51 +++++++++++++++++++++++++++++++++---
arch/powerpc/kernel/livepatch.c | 38 +++++++++++++++++++++++++++
3 files changed, 130 insertions(+), 4 deletions(-)
create mode 100644 arch/powerpc/include/asm/livepatch.h
create mode 100644 arch/powerpc/kernel/livepatch.c
diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
new file mode 100644
index 0000000..3200c11
--- /dev/null
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -0,0 +1,45 @@
+/*
+ * livepatch.h - powerpc-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2015 SUSE
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef _ASM_POWERPC64_LIVEPATCH_H
+#define _ASM_POWERPC64_LIVEPATCH_H
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+
+#ifdef CONFIG_LIVEPATCH
+static inline int klp_check_compiler_support(void)
+{
+#if !defined(_CALL_ELF) || _CALL_ELF != 2 || !defined(CC_USING_MPROFILE_KERNEL)
+ return 1;
+#endif
+ return 0;
+}
+
+extern int klp_write_module_reloc(struct module *mod, unsigned long type,
+ unsigned long loc, unsigned long value);
+
+static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
+{
+ regs->nip = ip;
+}
+#else
+#error Live patching support is disabled; check CONFIG_LIVEPATCH
+#endif
+
+#endif /* _ASM_POWERPC64_LIVEPATCH_H */
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 294a9ca..09af904 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1265,6 +1265,9 @@ _GLOBAL(ftrace_caller)
mflr r3
std r3, _NIP(r1)
std r3, 16(r1)
+#ifdef CONFIG_LIVEPATCH
+ mr r14,r3 /* remember old NIP */
+#endif
subi r3, r3, MCOUNT_INSN_SIZE
mfmsr r4
std r4, _MSR(r1)
@@ -1281,7 +1284,10 @@ ftrace_call:
nop
ld r3, _NIP(r1)
- mtlr r3
+ mtctr r3 /* prepare to jump there */
+#ifdef CONFIG_LIVEPATCH
+ cmpd r14,r3 /* has NIP been altered? */
+#endif
REST_8GPRS(0,r1)
REST_8GPRS(8,r1)
@@ -1294,6 +1300,27 @@ ftrace_call:
mtlr r12
mr r2,r0 /* restore callee's TOC */
+#ifdef CONFIG_LIVEPATCH
+ beq+ 4f /* likely(old_NIP == new_NIP) */
+
+ /* For a local call, restore this TOC after calling the patch function.
+ * For a global call, it does not matter what we restore here,
+ * since the global caller does its own restore right afterwards,
+ * anyway. Just insert a KLP_return_helper frame in any case,
+ * so a patch function can always count on the changed stack offsets.
+ */
+ stdu r1,-32(r1) /* open new mini stack frame */
+ std r0,24(r1) /* save TOC now, unconditionally. */
+ bl 5f
+5: mflr r12
+ addi r12,r12,(KLP_return_helper+4-.)@l
+ std r12,LRSAVE(r1)
+ mtlr r12
+ mfctr r12 /* allow for TOC calculation in newfunc */
+ bctr
+4:
+#endif
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
stdu r1, -112(r1)
.globl ftrace_graph_call
@@ -1303,15 +1330,31 @@ _GLOBAL(ftrace_graph_stub)
addi r1, r1, 112
#endif
- mflr r0 /* move this LR to CTR */
- mtctr r0
-
ld r0,LRSAVE(r1) /* restore callee's lr at _mcount site */
mtlr r0
bctr /* jump after _mcount site */
#endif /* CC_USING_MPROFILE_KERNEL */
_GLOBAL(ftrace_stub)
blr
+
+#ifdef CONFIG_LIVEPATCH
+/* Helper function for local calls that are becoming global
+ due to live patching.
+ We can't simply patch the NOP after the original call,
+ because, depending on the consistency model, some kernel
+ threads may still have called the original, local function
+ *without* saving their TOC in the respective stack frame slot,
+ so the decision is made per-thread during function return by
+ maybe inserting a KLP_return_helper frame or not.
+*/
+KLP_return_helper:
+ ld r2,24(r1) /* restore TOC (saved by ftrace_caller) */
+ addi r1, r1, 32 /* destroy mini stack frame */
+ ld r0,LRSAVE(r1) /* get the real return address */
+ mtlr r0
+ blr
+#endif
+
#else
_GLOBAL_TOC(_mcount)
/* Taken from output of objdump from lib64/glibc */
diff --git a/arch/powerpc/kernel/livepatch.c b/arch/powerpc/kernel/livepatch.c
new file mode 100644
index 0000000..564eafa
--- /dev/null
+++ b/arch/powerpc/kernel/livepatch.c
@@ -0,0 +1,38 @@
+/*
+ * livepatch.c - powerpc-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2015 SUSE
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/module.h>
+#include <asm/livepatch.h>
+
+/**
+ * klp_write_module_reloc() - write a relocation in a module
+ * @mod: module in which the section to be modified is found
+ * @type: ELF relocation type (see asm/elf.h)
+ * @loc: address that the relocation should be written to
+ * @value: relocation value (sym address + addend)
+ *
+ * This function writes a relocation to the specified location for
+ * a particular module.
+ */
+int klp_write_module_reloc(struct module *mod, unsigned long type,
+ unsigned long loc, unsigned long value)
+{
+ /* This requires infrastructure changes; we need the loadinfos. */
+ pr_err("klp_write_module_reloc not yet supported\n");
+ return -ENOSYS;
+}
--
1.8.5.6
* [PATCH v5 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected.
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (7 preceding siblings ...)
2015-12-04 14:11 ` [PATCH v5 8/9] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
@ 2015-12-04 14:13 ` Torsten Duwe
2016-01-06 14:23 ` Petr Mladek
2016-01-06 14:17 ` [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Petr Mladek
9 siblings, 1 reply; 23+ messages in thread
From: Torsten Duwe @ 2015-12-04 14:13 UTC (permalink / raw)
To: Steven Rostedt, Michael Ellerman
Cc: Jiri Kosina, Denis Kirjanov, Petr Mladek, linuxppc-dev,
linux-kernel, live-patching
Signed-off-by: Torsten Duwe <duwe@suse.de>
---
arch/powerpc/Kconfig | 5 +++++
arch/powerpc/kernel/Makefile | 1 +
2 files changed, 6 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 89b1a2a..62a3f54 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -163,6 +163,9 @@ config PPC
select ARCH_HAS_DMA_SET_COHERENT_MASK
select HAVE_ARCH_SECCOMP_FILTER
+config HAVE_LIVEPATCH
+ def_bool PPC64 && CPU_LITTLE_ENDIAN
+
config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
@@ -1095,3 +1098,5 @@ config PPC_LIB_RHEAP
bool
source "arch/powerpc/kvm/Kconfig"
+
+source "kernel/livepatch/Kconfig"
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 0f417d5..f9a2925 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -119,6 +119,7 @@ obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o
obj-$(CONFIG_TRACING) += trace_clock.o
+obj-$(CONFIG_LIVEPATCH) += livepatch.o
ifneq ($(CONFIG_PPC_INDIRECT_PIO),y)
obj-y += iomap.o
--
1.8.5.6
* Re: [PATCH v5 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected.
2015-12-04 14:13 ` [PATCH v5 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
@ 2016-01-06 14:23 ` Petr Mladek
0 siblings, 0 replies; 23+ messages in thread
From: Petr Mladek @ 2016-01-06 14:23 UTC (permalink / raw)
To: Torsten Duwe
Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Fri 2015-12-04 15:13:44, Torsten Duwe wrote:
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> ---
> arch/powerpc/Kconfig | 5 +++++
> arch/powerpc/kernel/Makefile | 1 +
> 2 files changed, 6 insertions(+)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 89b1a2a..62a3f54 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -163,6 +163,9 @@ config PPC
> select ARCH_HAS_DMA_SET_COHERENT_MASK
> select HAVE_ARCH_SECCOMP_FILTER
>
> +config HAVE_LIVEPATCH
> + def_bool PPC64 && CPU_LITTLE_ENDIAN
> +
Just a small nitpick: HAVE_LIVEPATCH is defined in
kernel/livepatch/Kconfig. I would move this to the
config PPC section and use:
select HAVE_LIVEPATCH if PPC64 && CPU_LITTLE_ENDIAN
Or did I miss something?
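Concretely, the suggested change might look like this (a sketch against the
Kconfig hunk in this patch; the surrounding context lines are approximate):

```
 config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
+	select HAVE_LIVEPATCH if PPC64 && CPU_LITTLE_ENDIAN
```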
Best Regards,
Petr
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2015-12-04 14:45 [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
` (8 preceding siblings ...)
2015-12-04 14:13 ` [PATCH v5 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
@ 2016-01-06 14:17 ` Petr Mladek
2016-01-20 6:03 ` Michael Ellerman
9 siblings, 1 reply; 23+ messages in thread
From: Petr Mladek @ 2016-01-06 14:17 UTC (permalink / raw)
To: Torsten Duwe
Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Fri 2015-12-04 15:45:29, Torsten Duwe wrote:
> Changes since v4:
> * change comment style in entry_64.S to C89
> (nobody is using assembler syntax comments there).
> * the bool function restore_r2 shouldn't return 2,
> that's a little confusing.
> * Test whether the compiler supports -mprofile-kernel
> and only then define CC_USING_MPROFILE_KERNEL
> * also make the return value of klp_check_compiler_support
> depend on that.
Note that there is still needed the extra patch from
http://thread.gmane.org/gmane.linux.kernel/2093867/focus=2099603
to get the livepatching working.
Both ftrace with regs and live patching works for me with this patch
set and the extra patch. So. for the whole patchset:
Tested-by: Petr Mladek <pmladek@suse.com>
Best Regards,
Petr
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-06 14:17 ` [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Petr Mladek
@ 2016-01-20 6:03 ` Michael Ellerman
0 siblings, 0 replies; 23+ messages in thread
From: Michael Ellerman @ 2016-01-20 6:03 UTC (permalink / raw)
To: Petr Mladek, Torsten Duwe
Cc: Steven Rostedt, Jiri Kosina, Denis Kirjanov, linuxppc-dev,
linux-kernel, live-patching
On Wed, 2016-01-06 at 15:17 +0100, Petr Mladek wrote:
> On Fri 2015-12-04 15:45:29, Torsten Duwe wrote:
> > Changes since v4:
> > * change comment style in entry_64.S to C89
> > (nobody is using assembler syntax comments there).
> > * the bool function restore_r2 shouldn't return 2,
> > that's a little confusing.
> > * Test whether the compiler supports -mprofile-kernel
> > and only then define CC_USING_MPROFILE_KERNEL
> > * also make the return value of klp_check_compiler_support
> > depend on that.
>
> Note that there is still needed the extra patch from
> http://thread.gmane.org/gmane.linux.kernel/2093867/focus=2099603
> to get the livepatching working.
Sorry which extra patch?
> Both ftrace with regs and live patching works for me with this patch
> set and the extra patch. So. for the whole patchset:
>
> Tested-by: Petr Mladek <pmladek@suse.com>
Can you give me some more info on how you're testing it? What config options,
toolchain etc.?
For me the series doesn't even boot, even with livepatching disabled.
cheers
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-20 6:03 ` Michael Ellerman
(?)
@ 2016-01-20 9:07 ` Torsten Duwe
-1 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2016-01-20 9:07 UTC (permalink / raw)
To: Michael Ellerman
Cc: Petr Mladek, Steven Rostedt, Jiri Kosina, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Wed, Jan 20, 2016 at 05:03:23PM +1100, Michael Ellerman wrote:
> On Wed, 2016-01-06 at 15:17 +0100, Petr Mladek wrote:
> > On Fri 2015-12-04 15:45:29, Torsten Duwe wrote:
> > > Changes since v4:
> > > * change comment style in entry_64.S to C89
> > > (nobody is using assembler syntax comments there).
> > > * the bool function restore_r2 shouldn't return 2,
> > > that's a little confusing.
> > > * Test whether the compiler supports -mprofile-kernel
> > > and only then define CC_USING_MPROFILE_KERNEL
> > > * also make the return value of klp_check_compiler_support
> > > depend on that.
> >
> > Note that there is still needed the extra patch from
> > http://thread.gmane.org/gmane.linux.kernel/2093867/focus=2099603
> > to get the livepatching working.
>
> Sorry which extra patch?
Message-ID: <20151203160004.GE8047@pathway.suse.cz>
By Petr Mladek, "Re: [PATCH v4 0/9] ftrace with regs + live patching..."
2015-12-03. It is further up in the function call hierarchy and basically
tells the arch-independent KLP to call the normal entry point on ppc64le, and
that the _mcount call site is 16 bytes further.
> > Both ftrace with regs and live patching works for me with this patch
> > set and the extra patch. So. for the whole patchset:
> >
> > Tested-by: Petr Mladek <pmladek@suse.com>
>
> Can you give me some more info on how you're testing it? What config options,
> toolchain etc.?
>
> For me the series doesn't even boot, even with livepatching disabled.
May indeed be a toolchain issue. I had to fix gcc-4.8.5 to get "notrace" working
for -mprofile-kernel. That's a gcc bug.
What are you using?
The config in the v5 patch series should be waterproof; especially with KLP
disabled, ftrace with regs must work (all self-tests succeeded). If you send
me your config (via PM, I suggest, to spare the lists) I can verify it with
the toolchain here.
Petr made a suggestion to reshuffle the config options to make this cleaner;
I suggest patching that separately.
Torsten
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-20 6:03 ` Michael Ellerman
(?)
(?)
@ 2016-01-20 9:48 ` Petr Mladek
2016-01-21 11:34 ` Petr Mladek
-1 siblings, 1 reply; 23+ messages in thread
From: Petr Mladek @ 2016-01-20 9:48 UTC (permalink / raw)
To: Michael Ellerman
Cc: Torsten Duwe, Steven Rostedt, Jiri Kosina, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Wed 2016-01-20 17:03:23, Michael Ellerman wrote:
> On Wed, 2016-01-06 at 15:17 +0100, Petr Mladek wrote:
> > On Fri 2015-12-04 15:45:29, Torsten Duwe wrote:
> > > Changes since v4:
> > > * change comment style in entry_64.S to C89
> > > (nobody is using assembler syntax comments there).
> > > * the bool function restore_r2 shouldn't return 2,
> > > that's a little confusing.
> > > * Test whether the compiler supports -mprofile-kernel
> > > and only then define CC_USING_MPROFILE_KERNEL
> > > * also make the return value of klp_check_compiler_support
> > > depend on that.
> >
> > Note that there is still needed the extra patch from
> > http://thread.gmane.org/gmane.linux.kernel/2093867/focus=2099603
> > to get the livepatching working.
>
> Sorry which extra patch?
It was in an older reply and can be found at
http://thread.gmane.org/gmane.linux.kernel/2093867/focus=2099603
> > Both ftrace with regs and live patching works for me with this patch
> > set and the extra patch. So. for the whole patchset:
> >
> > Tested-by: Petr Mladek <pmladek@suse.com>
>
> Can you give me some more info on how you're testing it? What config options,
> toolchain etc.?
You need to fulfill all dependencies for CONFIG_LIVEPATCH, see
kernel/livepatch/Kconfig. Please find attached the config
that I used.
I did the testing on PPC64LE with a kernel based on 4.4.0-rc8
using the attached config. I used the following stuff:
$> gcc --version
gcc (SUSE Linux) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$> rpm -q binutils
binutils-2.25.0-13.1.ppc64le
I tested it the following way:
# booted the compiled kernel and printed the default cmdline
$> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-rc3-11-default+ root=UUID=...
# tried function_graph tracer to check ftrace with regs
echo function_graph >/sys/kernel/debug/tracing/current_tracer ; \
echo 1 >/sys/kernel/debug/tracing/tracing_on ; \
sleep 1 ; \
/usr/bin/ls /proc ; \
echo 0 >/sys/kernel/debug/tracing/tracing_on ; \
less /sys/kernel/debug/tracing/trace
# loaded the patch and printed the patch cmdline
$> modprobe livepatch-sample
$> cat /proc/cmdline
this has been live patched
# tried to disable and enable the patch
$> echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled
$> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-rc3-11-default+ root=UUID=...
$> echo 1 > /sys/kernel/livepatch/livepatch_sample/enabled
$> cat /proc/cmdline
this has been live patched
# also checked messages
$> dmesg | tail -n 4
[ 33.673057] livepatch: tainting kernel with TAINT_LIVEPATCH
[ 33.673068] livepatch: enabling patch 'livepatch_sample'
[ 1997.098257] livepatch: disabling patch 'livepatch_sample'
[ 2079.696277] livepatch: enabling patch 'livepatch_sample'
> For me the series doesn't even boot, even with livepatching disabled.
I wonder if you have enabled CONFIG_FTRACE_STARTUP_TEST, and whether
ftrace with regs fails on your setup.
Best Regards,
Petr
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-20 9:48 ` Petr Mladek
@ 2016-01-21 11:34 ` Petr Mladek
0 siblings, 0 replies; 23+ messages in thread
From: Petr Mladek @ 2016-01-21 11:34 UTC (permalink / raw)
To: Michael Ellerman
Cc: Torsten Duwe, Steven Rostedt, Jiri Kosina, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Wed 2016-01-20 10:48:30, Petr Mladek wrote:
> I did the testing on PPC64LE with a kernel based on 4.4.0-rc8
> using the attached config. I used the following stuff:
Ah, I forgot to attach it. Also, it is rather big. Please
find it at http://pastebin.com/tzJ3mdUd
Best Regards,
Petr
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-20 6:03 ` Michael Ellerman
` (2 preceding siblings ...)
(?)
@ 2016-01-21 9:33 ` Jiri Kosina
2016-01-21 12:54 ` Michael Ellerman
-1 siblings, 1 reply; 23+ messages in thread
From: Jiri Kosina @ 2016-01-21 9:33 UTC (permalink / raw)
To: Michael Ellerman
Cc: Petr Mladek, Torsten Duwe, Steven Rostedt, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Wed, 20 Jan 2016, Michael Ellerman wrote:
> For me the series doesn't even boot, even with livepatching disabled.
Could you please post config and dmesg from that non-booting kernel?
Thanks,
--
Jiri Kosina
SUSE Labs
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-21 9:33 ` Jiri Kosina
@ 2016-01-21 12:54 ` Michael Ellerman
2016-01-21 15:06 ` Torsten Duwe
0 siblings, 1 reply; 23+ messages in thread
From: Michael Ellerman @ 2016-01-21 12:54 UTC (permalink / raw)
To: Jiri Kosina
Cc: Petr Mladek, Torsten Duwe, Steven Rostedt, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Thu, 2016-01-21 at 10:33 +0100, Jiri Kosina wrote:
> On Wed, 20 Jan 2016, Michael Ellerman wrote:
>
> > For me the series doesn't even boot, even with livepatching disabled.
>
> Could you please post config and dmesg from that non-booting kernel?
Sorry been busy.
There is no dmesg :)
It gets stuck in early_setup() before the console is even found.
I'll try with Petr's config and see if that helps.
Also I'm using gcc 6.0 built from mainline just last week, and binutils
similarly. So possibly that is part of the problem.
cheers
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-21 12:54 ` Michael Ellerman
@ 2016-01-21 15:06 ` Torsten Duwe
2016-01-21 15:12 ` Torsten Duwe
0 siblings, 1 reply; 23+ messages in thread
From: Torsten Duwe @ 2016-01-21 15:06 UTC (permalink / raw)
To: Michael Ellerman
Cc: Jiri Kosina, Petr Mladek, Steven Rostedt, Denis Kirjanov,
linuxppc-dev, linux-kernel, live-patching
On Thu, Jan 21, 2016 at 11:54:51PM +1100, Michael Ellerman wrote:
> There is no dmesg :)
>
> It gets stuck in early_setup() before the console is even found.
Confirmed.
| Device tree struct 0x00000000014b0000 -> 0x00000000014c0000
| Quiescing Open Firmware ...
| Booting Linux via __start() ...
and that's it.
gcc-6 --version
gcc-6 (SUSE Linux) 6.0.0 20160108 (experimental) [trunk revision 232162]
> Also I'm using gcc 6.0 built from mainline just last week, and binutils
> similarly. So possibly that is part of the problem.
Confirmed. It _is_ the problem.
mcount call sites look normal on first sight...
Torsten
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-21 15:06 ` Torsten Duwe
@ 2016-01-21 15:12 ` Torsten Duwe
2016-01-21 21:29 ` Jiri Kosina
0 siblings, 1 reply; 23+ messages in thread
From: Torsten Duwe @ 2016-01-21 15:12 UTC (permalink / raw)
To: Michael Ellerman
Cc: Petr Mladek, Denis Kirjanov, Jiri Kosina, linux-kernel,
Steven Rostedt, live-patching, linuxppc-dev
On Thu, Jan 21, 2016 at 04:06:33PM +0100, Torsten Duwe wrote:
> mcount call sites looks normal on first sight...
Not quite.
LR is not saved on the stack before the call.
Argh!
Petr, this looks like 12 bytes offset for gcc-6.
I think I can work around the rest.
Torsten
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-21 15:12 ` Torsten Duwe
@ 2016-01-21 21:29 ` Jiri Kosina
2016-01-21 21:56 ` Torsten Duwe
0 siblings, 1 reply; 23+ messages in thread
From: Jiri Kosina @ 2016-01-21 21:29 UTC (permalink / raw)
To: Torsten Duwe
Cc: Michael Ellerman, Petr Mladek, Denis Kirjanov, linux-kernel,
Steven Rostedt, live-patching, linuxppc-dev
On Thu, 21 Jan 2016, Torsten Duwe wrote:
> > mcount call sites looks normal on first sight...
>
> Not quite.
> LR is not saved on the stack before the call.
> Argh!
>
> Petr, this looks like 12 bytes offset for gcc-6.
> I think I can work around the rest.
Are we sure that gcc is doing the right thing here?
I am far from claiming understanding of ppc64 ABI, but from what Vojtech
told me I understood that saving link register is necessary for (at least)
graph tracer to work properly.
Thanks,
--
Jiri Kosina
SUSE Labs
* Re: [PATCH v5 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
2016-01-21 21:29 ` Jiri Kosina
@ 2016-01-21 21:56 ` Torsten Duwe
0 siblings, 0 replies; 23+ messages in thread
From: Torsten Duwe @ 2016-01-21 21:56 UTC (permalink / raw)
To: Jiri Kosina
Cc: Michael Ellerman, Petr Mladek, Denis Kirjanov, linux-kernel,
Steven Rostedt, live-patching, linuxppc-dev
On Thu, Jan 21, 2016 at 10:29:13PM +0100, Jiri Kosina wrote:
> On Thu, 21 Jan 2016, Torsten Duwe wrote:
>
> > > mcount call sites looks normal on first sight...
> >
> > Not quite.
> > LR is not saved on the stack before the call.
> > Argh!
> >
> > Petr, this looks like 12 bytes offset for gcc-6.
> > I think I can work around the rest.
>
> Are we sure that gcc is doing the right thing here?
>
> I am far from claiming understanding of ppc64 ABI, but from what Vojtech
> told me I understood that saving link register is necessary for (at least)
> graph tracer to work properly.
It is held in r0 only, and saved right after the _mcount call. Thus _mcount
must either not clobber r0, or save it the same way it is saved afterwards
(as gcc-4 does).
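For illustration, the two call-site variants under discussion look roughly
like this (reconstructed from the description above, not disassembler output;
exact stack offsets depend on the compiler):

```
	/* gcc-4 -mprofile-kernel call site: LR saved to the stack first */
	mflr	r0
	std	r0, 16(r1)	/* LRSAVE slot of the caller's frame */
	bl	_mcount

	/* gcc-6 (at the time): LR still live only in r0 at the call */
	mflr	r0
	bl	_mcount
```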
I'll make a v6 that's compiler-agnostic. It's a few lines to change
for the kernel proper, and I'll have to have a look at the trampolines
for modules.
Torsten