linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
@ 2015-11-25 16:23 ` Torsten Duwe
  2015-11-26 10:12   ` Denis Kirjanov
  2015-11-25 16:34 ` [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:23 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

The gcc switch -mprofile-kernel, available for ppc64 with gcc > 4.8.5,
places the call to _mcount very early in the function, which low-level
ASM code and the code patching functions need to take into account.
In particular, the link register and the parameter registers are still
live and have not yet been saved into a new stack frame.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/entry_64.S  | 44 +++++++++++++++++++++++++++++++++++++++--
 arch/powerpc/kernel/ftrace.c    | 12 +++++++++--
 arch/powerpc/kernel/module_64.c | 13 ++++++++++++
 3 files changed, 65 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index a94f155..8d56b16 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1206,7 +1206,11 @@ _GLOBAL(enter_prom)
 #ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL(mcount)
 _GLOBAL(_mcount)
-	blr
+	mflr	r0
+	mtctr	r0
+	ld	r0,LRSAVE(r1)
+	mtlr	r0
+	bctr
 
 _GLOBAL_TOC(ftrace_caller)
 	/* Taken from output of objdump from lib64/glibc */
@@ -1262,13 +1266,28 @@ _GLOBAL(ftrace_stub)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 _GLOBAL(ftrace_graph_caller)
+#ifdef CC_USING_MPROFILE_KERNEL
+	// with -mprofile-kernel, parameter regs are still alive at _mcount
+	std	r10, 104(r1)
+	std	r9, 96(r1)
+	std	r8, 88(r1)
+	std	r7, 80(r1)
+	std	r6, 72(r1)
+	std	r5, 64(r1)
+	std	r4, 56(r1)
+	std	r3, 48(r1)
+	mfctr	r4		// ftrace_caller has moved local addr here
+	std	r4, 40(r1)
+	mflr	r3		// ftrace_caller has restored LR from stack
+#else
 	/* load r4 with local address */
 	ld	r4, 128(r1)
-	subi	r4, r4, MCOUNT_INSN_SIZE
 
 	/* Grab the LR out of the caller stack frame */
 	ld	r11, 112(r1)
 	ld	r3, 16(r11)
+#endif
+	subi	r4, r4, MCOUNT_INSN_SIZE
 
 	bl	prepare_ftrace_return
 	nop
@@ -1277,6 +1296,26 @@ _GLOBAL(ftrace_graph_caller)
 	 * prepare_ftrace_return gives us the address we divert to.
 	 * Change the LR in the callers stack frame to this.
 	 */
+
+#ifdef CC_USING_MPROFILE_KERNEL
+	mtlr	r3
+
+	ld	r0, 40(r1)
+	mtctr	r0
+	ld	r10, 104(r1)
+	ld	r9, 96(r1)
+	ld	r8, 88(r1)
+	ld	r7, 80(r1)
+	ld	r6, 72(r1)
+	ld	r5, 64(r1)
+	ld	r4, 56(r1)
+	ld	r3, 48(r1)
+
+	addi	r1, r1, 112
+	mflr	r0
+	std	r0, LRSAVE(r1)
+	bctr
+#else
 	ld	r11, 112(r1)
 	std	r3, 16(r11)
 
@@ -1284,6 +1323,7 @@ _GLOBAL(ftrace_graph_caller)
 	mtlr	r0
 	addi	r1, r1, 112
 	blr
+#endif
 
 _GLOBAL(return_to_handler)
 	/* need to save return values */
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 44d4d8e..080c525 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -306,11 +306,19 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	 * The load offset is different depending on the ABI. For simplicity
 	 * just mask it out when doing the compare.
 	 */
+#ifndef CC_USING_MPROFILE_KERNEL
 	if ((op[0] != 0x48000008) || ((op[1] & 0xffff0000) != 0xe8410000)) {
-		pr_err("Unexpected call sequence: %x %x\n", op[0], op[1]);
+		pr_err("Unexpected call sequence at %p: %x %x\n",
+		ip, op[0], op[1]);
 		return -EINVAL;
 	}
-
+#else
+	/* look for patched "NOP" on ppc64 with -mprofile-kernel */
+	if (op[0] != 0x60000000) {
+		pr_err("Unexpected call at %p: %x\n", ip, op[0]);
+		return -EINVAL;
+	}
+#endif
 	/* If we never set up a trampoline to ftrace_caller, then bail */
 	if (!rec->arch.mod->arch.tramp) {
 		pr_err("No ftrace trampoline\n");
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 6838451..0819ce7 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -475,6 +475,19 @@ static unsigned long stub_for_addr(Elf64_Shdr *sechdrs,
 static int restore_r2(u32 *instruction, struct module *me)
 {
 	if (*instruction != PPC_INST_NOP) {
+#ifdef CC_USING_MPROFILE_KERNEL
+		/* -mprofile-kernel sequence starting with
+		 * mflr r0; std r0, LRSAVE(r1)
+		 */
+		if (instruction[-3] == 0x7c0802a6 &&
+		    instruction[-2] == 0xf8010010) {
+			/* Nothing to be done here, it's an _mcount
+			 * call location and r2 will have to be
+			 * restored in the _mcount function.
+			 */
+			return 2;
+		}
+#endif
 		pr_err("%s: Expect noop after relocate, got %08x\n",
 		       me->name, *instruction);
 		return 0;
-- 
1.8.5.6



* [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
  2015-11-25 16:23 ` [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
@ 2015-11-25 16:34 ` Torsten Duwe
  2015-11-26 10:04   ` Denis Kirjanov
  2015-11-25 16:35 ` [PATCH v4 3/9] ppc use ftrace_modify_all_code default Torsten Duwe
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:34 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Implement FTRACE_WITH_REGS for powerpc64, on ELF ABI v2.
Initial work started by Vojtech Pavlik, used with permission.

  * arch/powerpc/kernel/entry_64.S:
    - Implement an effective ftrace_caller that works from
      within the kernel binary as well as from modules.
  * arch/powerpc/kernel/ftrace.c:
    - be prepared to deal with ppc64 ELF ABI v2, especially
      calls to _mcount that result from gcc -mprofile-kernel
    - a little more error verbosity
  * arch/powerpc/kernel/module_64.c:
    - do not save the TOC pointer on the trampoline when the
      destination is ftrace_caller. This trampoline jump happens from
      a function prologue before a new stack frame is set up, so bad
      things may happen otherwise...
    - relax is_module_trampoline() to recognise the modified
      trampoline.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/include/asm/ftrace.h |  5 +++
 arch/powerpc/kernel/entry_64.S    | 77 +++++++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/ftrace.c      | 60 +++++++++++++++++++++++++++---
 arch/powerpc/kernel/module_64.c   | 25 ++++++++++++-
 4 files changed, 160 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index ef89b14..50ca758 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -46,6 +46,8 @@
 extern void _mcount(void);
 
 #ifdef CONFIG_DYNAMIC_FTRACE
+# define FTRACE_ADDR ((unsigned long)ftrace_caller)
+# define FTRACE_REGS_ADDR FTRACE_ADDR
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
        /* reloction of mcount call site is the same as the address */
@@ -58,6 +60,9 @@ struct dyn_arch_ftrace {
 #endif /*  CONFIG_DYNAMIC_FTRACE */
 #endif /* __ASSEMBLY__ */
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#define ARCH_SUPPORTS_FTRACE_OPS 1
+#endif
 #endif
 
 #if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_PPC64) && !defined(__ASSEMBLY__)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 8d56b16..3309dd8 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1212,6 +1212,7 @@ _GLOBAL(_mcount)
 	mtlr	r0
 	bctr
 
+#ifndef CC_USING_MPROFILE_KERNEL
 _GLOBAL_TOC(ftrace_caller)
 	/* Taken from output of objdump from lib64/glibc */
 	mflr	r3
@@ -1233,6 +1234,82 @@ _GLOBAL(ftrace_graph_stub)
 	ld	r0, 128(r1)
 	mtlr	r0
 	addi	r1, r1, 112
+#else
+_GLOBAL(ftrace_caller)
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+	mflr	r0
+	bl	2f
+2:	mflr	r12
+	mtlr	r0
+	mr      r0,r2   // save callee's TOC
+	addis	r2,r12,(.TOC.-ftrace_caller-8)@ha
+	addi    r2,r2,(.TOC.-ftrace_caller-8)@l
+#else
+	mr	r0,r2
+#endif
+	ld	r12,LRSAVE(r1)	// get caller's address
+
+	stdu	r1,-SWITCH_FRAME_SIZE(r1)
+
+	std     r12, _LINK(r1)
+	SAVE_8GPRS(0,r1)
+	std	r0, 24(r1)	// save TOC
+	SAVE_8GPRS(8,r1)
+	SAVE_8GPRS(16,r1)
+	SAVE_8GPRS(24,r1)
+
+	addis	r3,r2,function_trace_op@toc@ha
+	addi	r3,r3,function_trace_op@toc@l
+	ld	r5,0(r3)
+
+	mflr    r3
+	std     r3, _NIP(r1)
+	std	r3, 16(r1)
+	subi    r3, r3, MCOUNT_INSN_SIZE
+	mfmsr   r4
+	std     r4, _MSR(r1)
+	mfctr   r4
+	std     r4, _CTR(r1)
+	mfxer   r4
+	std     r4, _XER(r1)
+	mr	r4, r12
+	addi    r6, r1 ,STACK_FRAME_OVERHEAD
+
+.globl ftrace_call
+ftrace_call:
+	bl	ftrace_stub
+	nop
+
+	ld	r3, _NIP(r1)
+	mtlr	r3
+
+	REST_8GPRS(0,r1)
+	REST_8GPRS(8,r1)
+	REST_8GPRS(16,r1)
+	REST_8GPRS(24,r1)
+
+	addi r1, r1, SWITCH_FRAME_SIZE
+
+	ld	r12, LRSAVE(r1)  // get caller's address
+	mtlr	r12
+	mr	r2,r0		// restore callee's TOC
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	stdu	r1, -112(r1)
+.globl ftrace_graph_call
+ftrace_graph_call:
+	b	ftrace_graph_stub
+_GLOBAL(ftrace_graph_stub)
+	addi	r1, r1, 112
+#endif
+
+	mflr	r0		// move this LR to CTR
+	mtctr	r0
+
+	ld	r0,LRSAVE(r1)	// restore callee's lr at _mcount site
+	mtlr	r0
+	bctr			// jump after _mcount site
+#endif /* CC_USING_MPROFILE_KERNEL */
 _GLOBAL(ftrace_stub)
 	blr
 #else
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 080c525..310137f 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -61,8 +61,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 		return -EFAULT;
 
 	/* Make sure it is what we expect it to be */
-	if (replaced != old)
+	if (replaced != old) {
+		pr_err("%p: replaced (%#x) != old (%#x)",
+		(void *)ip, replaced, old);
 		return -EINVAL;
+	}
 
 	/* replace the text with the new text */
 	if (patch_instruction((unsigned int *)ip, new))
@@ -106,14 +109,16 @@ static int
 __ftrace_make_nop(struct module *mod,
 		  struct dyn_ftrace *rec, unsigned long addr)
 {
-	unsigned int op;
+	unsigned int op, op0, op1, pop;
 	unsigned long entry, ptr;
 	unsigned long ip = rec->ip;
 	void *tramp;
 
 	/* read where this goes */
-	if (probe_kernel_read(&op, (void *)ip, sizeof(int)))
+	if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
+		pr_err("Fetching opcode failed.\n");
 		return -EFAULT;
+	}
 
 	/* Make sure that that this is still a 24bit jump */
 	if (!is_bl_op(op)) {
@@ -158,10 +163,46 @@ __ftrace_make_nop(struct module *mod,
 	 *
 	 * Use a b +8 to jump over the load.
 	 */
-	op = 0x48000008;	/* b +8 */
 
-	if (patch_instruction((unsigned int *)ip, op))
+	pop = 0x48000008;	/* b +8 */
+
+	/*
+	 * Check what is in the next instruction. We can see ld r2,40(r1), but
+	 * on first pass after boot we will see mflr r0.
+	 */
+	if (probe_kernel_read(&op, (void *)(ip+4), MCOUNT_INSN_SIZE)) {
+		pr_err("Fetching op failed.\n");
+		return -EFAULT;
+	}
+
+	if (op != 0xe8410028) { /* ld r2,STACK_OFFSET(r1) */
+
+		if (probe_kernel_read(&op0, (void *)(ip-8), MCOUNT_INSN_SIZE)) {
+			pr_err("Fetching op0 failed.\n");
+			return -EFAULT;
+		}
+
+		if (probe_kernel_read(&op1, (void *)(ip-4), MCOUNT_INSN_SIZE)) {
+			pr_err("Fetching op1 failed.\n");
+			return -EFAULT;
+		}
+
+		/* mflr r0 ; std r0,LRSAVE(r1) */
+		if (op0 != 0x7c0802a6 && op1 != 0xf8010010) {
+			pr_err("Unexpected instructions around bl\n"
+				"when enabling dynamic ftrace!\t"
+				"(%08x,%08x,bl,%08x)\n", op0, op1, op);
+			return -EINVAL;
+		}
+
+	/* When using -mprofile-kernel there is no load to jump over */
+		pop = PPC_INST_NOP;
+	}
+
+	if (patch_instruction((unsigned int *)ip, pop)) {
+		pr_err("Patching NOP failed.\n");
 		return -EPERM;
+	}
 
 	return 0;
 }
@@ -287,6 +328,13 @@ int ftrace_make_nop(struct module *mod,
 
 #ifdef CONFIG_MODULES
 #ifdef CONFIG_PPC64
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			unsigned long addr)
+{
+	return ftrace_make_call(rec, addr);
+}
+#endif
 static int
 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
@@ -338,7 +386,7 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 
 	return 0;
 }
-#else
+#else  /* !CONFIG_PPC64: */
 static int
 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 0819ce7..9e6902f 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -138,12 +138,25 @@ static u32 ppc64_stub_insns[] = {
 	0x4e800420			/* bctr */
 };
 
+#ifdef CC_USING_MPROFILE_KERNEL
+/* In case of _mcount calls or dynamic ftracing, do not save the
+ * current callee's TOC (in R2) again into the original caller's stack
+ * frame during this trampoline hop. The stack frame already holds
+ * that of the original caller.  _mcount and ftrace_caller will take
+ * care of this TOC value themselves.
+ */
+#define SQUASH_TOC_SAVE_INSN(trampoline_addr) \
+	(((struct ppc64_stub_entry *)(trampoline_addr))->jump[2] = PPC_INST_NOP)
+#else
+#define SQUASH_TOC_SAVE_INSN(trampoline_addr)
+#endif
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 
 static u32 ppc64_stub_mask[] = {
 	0xffff0000,
 	0xffff0000,
-	0xffffffff,
+	0x00000000,
 	0xffffffff,
 #if !defined(_CALL_ELF) || _CALL_ELF != 2
 	0xffffffff,
@@ -170,6 +183,9 @@ bool is_module_trampoline(u32 *p)
 		if ((insna & mask) != (insnb & mask))
 			return false;
 	}
+	if (insns[2] != ppc64_stub_insns[2] &&
+	    insns[2] != PPC_INST_NOP)
+		return false;
 
 	return true;
 }
@@ -618,6 +634,9 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 					return -ENOENT;
 				if (!restore_r2((u32 *)location + 1, me))
 					return -ENOEXEC;
+				/* Squash the TOC saver for profiler calls */
+				if (!strcmp("_mcount", strtab+sym->st_name))
+					SQUASH_TOC_SAVE_INSN(value);
 			} else
 				value += local_entry_offset(sym);
 
@@ -678,6 +697,10 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 	me->arch.tramp = stub_for_addr(sechdrs,
 				       (unsigned long)ftrace_caller,
 				       me);
+	/* ftrace_caller will take care of the TOC;
+	 * do not clobber original caller's value.
+	 */
+	SQUASH_TOC_SAVE_INSN(me->arch.tramp);
 #endif
 
 	return 0;
-- 
1.8.5.6



* [PATCH v4 3/9] ppc use ftrace_modify_all_code default
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
  2015-11-25 16:23 ` [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
  2015-11-25 16:34 ` [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
@ 2015-11-25 16:35 ` Torsten Duwe
  2015-11-25 16:37 ` [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables Torsten Duwe
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:35 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Convert ppc's arch_ftrace_update_code from its own copy of the logic
to the generic default implementation (without stop_machine --
our instructions are properly aligned and the replacements are atomic ;)

With this we gain error checking and the much-needed function_trace_op
handling.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/ftrace.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 310137f..e419c7b 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -511,20 +511,12 @@ void ftrace_replace_code(int enable)
 	}
 }
 
+/* Use the default ftrace_modify_all_code, but without
+ * stop_machine().
+ */
 void arch_ftrace_update_code(int command)
 {
-	if (command & FTRACE_UPDATE_CALLS)
-		ftrace_replace_code(1);
-	else if (command & FTRACE_DISABLE_CALLS)
-		ftrace_replace_code(0);
-
-	if (command & FTRACE_UPDATE_TRACE_FUNC)
-		ftrace_update_ftrace_func(ftrace_trace_function);
-
-	if (command & FTRACE_START_FUNC_RET)
-		ftrace_enable_ftrace_graph_caller();
-	else if (command & FTRACE_STOP_FUNC_RET)
-		ftrace_disable_ftrace_graph_caller();
+	ftrace_modify_all_code(command);
 }
 
 int __init ftrace_dyn_arch_init(void)
-- 
1.8.5.6



* [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (2 preceding siblings ...)
  2015-11-25 16:35 ` [PATCH v4 3/9] ppc use ftrace_modify_all_code default Torsten Duwe
@ 2015-11-25 16:37 ` Torsten Duwe
  2015-12-03 16:20   ` Petr Mladek
  2015-11-25 16:39 ` [PATCH v4 5/9] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:37 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

  * Makefile:
    - globally use -mprofile-kernel in case it's configured.
  * arch/powerpc/Kconfig / kernel/trace/Kconfig:
    - declare that ppc64le HAVE_MPROFILE_KERNEL and
      HAVE_DYNAMIC_FTRACE_WITH_REGS, and use it.
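The cc-option guard in the Makefile hunk matters because only gcc > 4.8.5 accepts the flag; the pattern can be read as follows (a paraphrase of the Kbuild idiom in the diff below, not new code):

```make
# cc-option expands to the flag only when $(CC) accepts it, so an
# older gcc silently falls back to plain -pg tracing.
CC_FLAGS_FTRACE := -pg $(call cc-option,-mprofile-kernel)

# CC_USING_MPROFILE_KERNEL must be defined globally and consistently:
# it gates the #ifdef blocks in entry_64.S, ftrace.c and module_64.c
# introduced by patches 1 and 2.
KBUILD_CPPFLAGS += -DCC_USING_MPROFILE_KERNEL
```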

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/Kconfig  | 2 ++
 arch/powerpc/Makefile | 7 +++++++
 kernel/trace/Kconfig  | 5 +++++
 3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9a7057e..55fd59e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -97,8 +97,10 @@ config PPC
 	select OF_RESERVED_MEM
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE
+	select HAVE_DYNAMIC_FTRACE_WITH_REGS if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
+	select HAVE_MPROFILE_KERNEL if PPC64 && CPU_LITTLE_ENDIAN
 	select SYSCTL_EXCEPTION_TRACE
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select VIRT_TO_BUS if !PPC64
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index b9b4af2..25d0034 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -133,6 +133,13 @@ else
 CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
 endif
 
+ifeq ($(CONFIG_PPC64),y)
+ifdef CONFIG_HAVE_MPROFILE_KERNEL
+CC_FLAGS_FTRACE	:= -pg $(call cc-option,-mprofile-kernel)
+KBUILD_CPPFLAGS	+= -DCC_USING_MPROFILE_KERNEL
+endif
+endif
+
 CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
 CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
 CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 1153c43..dbcb635 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -52,6 +52,11 @@ config HAVE_FENTRY
 	help
 	  Arch supports the gcc options -pg with -mfentry
 
+config HAVE_MPROFILE_KERNEL
+	bool
+	help
+	  Arch supports the gcc options -pg with -mprofile-kernel
+
 config HAVE_C_RECORDMCOUNT
 	bool
 	help
-- 
1.8.5.6



* [PATCH v4 5/9] ppc64 ftrace_with_regs: spare early boot and low level
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (3 preceding siblings ...)
  2015-11-25 16:37 ` [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables Torsten Duwe
@ 2015-11-25 16:39 ` Torsten Duwe
  2015-11-25 16:41 ` [PATCH v4 6/9] ppc64 ftrace: disable profiling for some functions Torsten Duwe
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:39 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Using -mprofile-kernel on early boot code not only confuses the
checker but is also useless, as the infrastructure is not yet in
place. Handle it like -pg (remove it from CFLAGS), and do the same
for time.o and ftrace itself.

  * arch/powerpc/kernel/Makefile:
    - remove -mprofile-kernel from low level and boot code objects'
      CFLAGS for FUNCTION_TRACER configurations.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/Makefile | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ba33693..0f417d5 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -16,14 +16,14 @@ endif
 
 ifdef CONFIG_FUNCTION_TRACER
 # Do not trace early boot code
-CFLAGS_REMOVE_cputable.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_prom_init.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_btext.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_prom.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_cputable.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_prom_init.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_btext.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_prom.o = -pg -mno-sched-epilog -mprofile-kernel
 # do not trace tracer code
-CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog -mprofile-kernel
 # timers used by tracing
-CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog -mprofile-kernel
 endif
 
 obj-y				:= cputable.o ptrace.o syscalls.o \
-- 
1.8.5.6



* [PATCH v4 6/9] ppc64 ftrace: disable profiling for some functions
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (4 preceding siblings ...)
  2015-11-25 16:39 ` [PATCH v4 5/9] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
@ 2015-11-25 16:41 ` Torsten Duwe
  2015-11-25 16:42 ` [PATCH v4 7/9] ppc64 ftrace: disable profiling for some files Torsten Duwe
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:41 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

At least POWER7/8 have MMUs that don't completely autoload;
a normal, recoverable memory fault might pass through these functions.
If a dynamic tracer hook triggers such a fault, tracing any of these
functions with -mprofile-kernel can lead to endless recursion.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/process.c        |  2 +-
 arch/powerpc/mm/fault.c              |  2 +-
 arch/powerpc/mm/hash_utils_64.c      | 18 +++++++++---------
 arch/powerpc/mm/hugetlbpage-hash64.c |  2 +-
 arch/powerpc/mm/hugetlbpage.c        |  4 ++--
 arch/powerpc/mm/mem.c                |  2 +-
 arch/powerpc/mm/pgtable_64.c         |  2 +-
 arch/powerpc/mm/slb.c                |  6 +++---
 arch/powerpc/mm/slice.c              |  8 ++++----
 9 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 75b6676..c2900b9 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -715,7 +715,7 @@ static inline void __switch_to_tm(struct task_struct *prev)
  * don't know which of the checkpointed state and the transactional
  * state to use.
  */
-void restore_tm_state(struct pt_regs *regs)
+notrace void restore_tm_state(struct pt_regs *regs)
 {
 	unsigned long msr_diff;
 
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index a67c6d7..125be37 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -205,7 +205,7 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
  * The return value is 0 if the fault was handled, or the signal
  * number if this is a kernel fault that can't be handled here.
  */
-int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
+notrace int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
 			    unsigned long error_code)
 {
 	enum ctx_state prev_state = exception_enter();
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index aee7017..90e89e7 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -849,7 +849,7 @@ void early_init_mmu_secondary(void)
 /*
  * Called by asm hashtable.S for doing lazy icache flush
  */
-unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
+notrace unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
 {
 	struct page *page;
 
@@ -870,7 +870,7 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
 }
 
 #ifdef CONFIG_PPC_MM_SLICES
-static unsigned int get_paca_psize(unsigned long addr)
+static notrace unsigned int get_paca_psize(unsigned long addr)
 {
 	u64 lpsizes;
 	unsigned char *hpsizes;
@@ -899,7 +899,7 @@ unsigned int get_paca_psize(unsigned long addr)
  * For now this makes the whole process use 4k pages.
  */
 #ifdef CONFIG_PPC_64K_PAGES
-void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
+notrace void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
 {
 	if (get_slice_psize(mm, addr) == MMU_PAGE_4K)
 		return;
@@ -920,7 +920,7 @@ void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
  * Result is 0: full permissions, _PAGE_RW: read-only,
  * _PAGE_USER or _PAGE_USER|_PAGE_RW: no access.
  */
-static int subpage_protection(struct mm_struct *mm, unsigned long ea)
+static notrace int subpage_protection(struct mm_struct *mm, unsigned long ea)
 {
 	struct subpage_prot_table *spt = &mm->context.spt;
 	u32 spp = 0;
@@ -968,7 +968,7 @@ void hash_failure_debug(unsigned long ea, unsigned long access,
 		trap, vsid, ssize, psize, lpsize, pte);
 }
 
-static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
+static notrace void check_paca_psize(unsigned long ea, struct mm_struct *mm,
 			     int psize, bool user_region)
 {
 	if (user_region) {
@@ -990,7 +990,7 @@ static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
  * -1 - critical hash insertion error
  * -2 - access not permitted by subpage protection mechanism
  */
-int hash_page_mm(struct mm_struct *mm, unsigned long ea,
+notrace int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 		 unsigned long access, unsigned long trap,
 		 unsigned long flags)
 {
@@ -1186,7 +1186,7 @@ bail:
 }
 EXPORT_SYMBOL_GPL(hash_page_mm);
 
-int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
+notrace int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
 	      unsigned long dsisr)
 {
 	unsigned long flags = 0;
@@ -1288,7 +1288,7 @@ out_exit:
 /* WARNING: This is called from hash_low_64.S, if you change this prototype,
  *          do not forget to update the assembly call site !
  */
-void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
+notrace void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
 		     unsigned long flags)
 {
 	unsigned long hash, index, shift, hidx, slot;
@@ -1436,7 +1436,7 @@ void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
 	exception_exit(prev_state);
 }
 
-long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
+notrace long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
 			   unsigned long pa, unsigned long rflags,
 			   unsigned long vflags, int psize, int ssize)
 {
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index d94b1af..50b8c6f 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -18,7 +18,7 @@ extern long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
 				  unsigned long pa, unsigned long rlags,
 				  unsigned long vflags, int psize, int ssize);
 
-int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
+notrace int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		     pte_t *ptep, unsigned long trap, unsigned long flags,
 		     int ssize, unsigned int shift, unsigned int mmu_psize)
 {
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 06c1452..bc2f459 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -922,7 +922,7 @@ static int __init hugetlbpage_init(void)
 #endif
 arch_initcall(hugetlbpage_init);
 
-void flush_dcache_icache_hugepage(struct page *page)
+notrace void flush_dcache_icache_hugepage(struct page *page)
 {
 	int i;
 	void *start;
@@ -955,7 +955,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * when we have MSR[EE] = 0 but the paca->soft_enabled = 1
  */
 
-pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+notrace pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
 				   unsigned *shift)
 {
 	pgd_t pgd, *pgdp;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3..f690e8a 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -406,7 +406,7 @@ void flush_dcache_page(struct page *page)
 }
 EXPORT_SYMBOL(flush_dcache_page);
 
-void flush_dcache_icache_page(struct page *page)
+notrace void flush_dcache_icache_page(struct page *page)
 {
 #ifdef CONFIG_HUGETLB_PAGE
 	if (PageCompound(page)) {
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index e92cb21..c74050b 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -442,7 +442,7 @@ static void page_table_free_rcu(void *table)
 	}
 }
 
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
+notrace void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
 {
 	unsigned long pgf = (unsigned long)table;
 
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 8a32a2b..5b05754 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -91,7 +91,7 @@ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
 		     : "memory" );
 }
 
-static void __slb_flush_and_rebolt(void)
+static notrace void __slb_flush_and_rebolt(void)
 {
 	/* If you change this make sure you change SLB_NUM_BOLTED
 	 * and PR KVM appropriately too. */
@@ -131,7 +131,7 @@ static void __slb_flush_and_rebolt(void)
 		     : "memory");
 }
 
-void slb_flush_and_rebolt(void)
+notrace void slb_flush_and_rebolt(void)
 {
 
 	WARN_ON(!irqs_disabled());
@@ -146,7 +146,7 @@ void slb_flush_and_rebolt(void)
 	get_paca()->slb_cache_ptr = 0;
 }
 
-void slb_vmalloc_update(void)
+notrace void slb_vmalloc_update(void)
 {
 	unsigned long vflags;
 
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 0f432a7..f92f0f0 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -76,8 +76,8 @@ static void slice_print_mask(const char *label, struct slice_mask mask) {}
 
 #endif
 
-static struct slice_mask slice_range_to_mask(unsigned long start,
-					     unsigned long len)
+static notrace struct slice_mask slice_range_to_mask(unsigned long start,
+						     unsigned long len)
 {
 	unsigned long end = start + len - 1;
 	struct slice_mask ret = { 0, 0 };
@@ -564,7 +564,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
 				       current->mm->context.user_psize, 1);
 }
 
-unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+notrace unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
 {
 	unsigned char *hpsizes;
 	int index, mask_index;
@@ -645,7 +645,7 @@ void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
 	spin_unlock_irqrestore(&slice_convert_lock, flags);
 }
 
-void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+notrace void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
 			   unsigned long len, unsigned int psize)
 {
 	struct slice_mask mask = slice_range_to_mask(start, len);
-- 
1.8.5.6


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 7/9] ppc64 ftrace: disable profiling for some files
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (5 preceding siblings ...)
  2015-11-25 16:41 ` [PATCH v4 6/9] ppc64 ftrace: disable profiling for some functions Torsten Duwe
@ 2015-11-25 16:42 ` Torsten Duwe
  2015-11-25 16:48 ` [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:42 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

This patch complements the "notrace" attribute for selected functions:
in addition to "-pg", it adds -mprofile-kernel to the compiler flags
that are stripped from the command line for code-patching.o and
feature-fixups.o.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/lib/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index a47e142..98e22b2 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -6,8 +6,8 @@ subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
-CFLAGS_REMOVE_code-patching.o = -pg
-CFLAGS_REMOVE_feature-fixups.o = -pg
+CFLAGS_REMOVE_code-patching.o = -pg -mprofile-kernel
+CFLAGS_REMOVE_feature-fixups.o = -pg -mprofile-kernel
 
 obj-y += string.o alloc.o crtsavres.o ppc_ksyms.o code-patching.o \
 	 feature-fixups.o
-- 
1.8.5.6


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2)
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (6 preceding siblings ...)
  2015-11-25 16:42 ` [PATCH v4 7/9] ppc64 ftrace: disable profiling for some files Torsten Duwe
@ 2015-11-25 16:48 ` Torsten Duwe
  2015-12-03 16:24   ` Petr Mladek
  2015-11-25 16:49 ` [PATCH v4 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
  2015-12-03 16:00 ` [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Petr Mladek
  9 siblings, 1 reply; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:48 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

  * create the appropriate files+functions
    arch/powerpc/include/asm/livepatch.h
        klp_check_compiler_support,
        klp_arch_set_pc
    arch/powerpc/kernel/livepatch.c with a stub for
        klp_write_module_reloc
    This is architecture-independent work in progress.
  * introduce a fixup in arch/powerpc/kernel/entry_64.S
    for local calls that are becoming global due to live patching.
    And of course do the main KLP thing: return to a possibly
    different address, as altered by the live patching ftrace op.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/include/asm/livepatch.h | 45 +++++++++++++++++++++++++++++++
 arch/powerpc/kernel/entry_64.S       | 51 +++++++++++++++++++++++++++++++++---
 arch/powerpc/kernel/livepatch.c      | 38 +++++++++++++++++++++++++++
 3 files changed, 130 insertions(+), 4 deletions(-)
 create mode 100644 arch/powerpc/include/asm/livepatch.h
 create mode 100644 arch/powerpc/kernel/livepatch.c

diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
new file mode 100644
index 0000000..3200c11
--- /dev/null
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -0,0 +1,45 @@
+/*
+ * livepatch.h - powerpc-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2015 SUSE
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef _ASM_POWERPC64_LIVEPATCH_H
+#define _ASM_POWERPC64_LIVEPATCH_H
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+
+#ifdef CONFIG_LIVEPATCH
+static inline int klp_check_compiler_support(void)
+{
+#if !defined(_CALL_ELF) || _CALL_ELF != 2
+	return 1;
+#endif
+	return 0;
+}
+
+extern int klp_write_module_reloc(struct module *mod, unsigned long type,
+				   unsigned long loc, unsigned long value);
+
+static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
+{
+	regs->nip = ip;
+}
+#else
+#error Live patching support is disabled; check CONFIG_LIVEPATCH
+#endif
+
+#endif /* _ASM_POWERPC64_LIVEPATCH_H */
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 3309dd8..7a5e3e3 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1265,6 +1265,9 @@ _GLOBAL(ftrace_caller)
 	mflr    r3
 	std     r3, _NIP(r1)
 	std	r3, 16(r1)
+#ifdef CONFIG_LIVEPATCH
+	mr	r14,r3		// remember old NIP
+#endif
 	subi    r3, r3, MCOUNT_INSN_SIZE
 	mfmsr   r4
 	std     r4, _MSR(r1)
@@ -1281,7 +1284,10 @@ ftrace_call:
 	nop
 
 	ld	r3, _NIP(r1)
-	mtlr	r3
+	mtctr	r3		// prepare to jump there
+#ifdef CONFIG_LIVEPATCH
+	cmpd	r14,r3		// has NIP been altered?
+#endif
 
 	REST_8GPRS(0,r1)
 	REST_8GPRS(8,r1)
@@ -1294,6 +1300,27 @@ ftrace_call:
 	mtlr	r12
 	mr	r2,r0		// restore callee's TOC
 
+#ifdef CONFIG_LIVEPATCH
+	beq+	4f		// likely(old_NIP == new_NIP)
+
+	// For a local call, restore this TOC after calling the patch function.
+	// For a global call, it does not matter what we restore here,
+	// since the global caller does its own restore right afterwards,
+	// anyway.
+	// Just insert a KLP_return_helper frame in any case,
+	// so a patch function can always count on the changed stack offsets.
+	stdu	r1,-32(r1)	// open new mini stack frame
+	std	r0,24(r1)	// save TOC now, unconditionally.
+	bl	5f
+5:	mflr	r12
+	addi	r12,r12,(KLP_return_helper+4-.)@l
+	std	r12,LRSAVE(r1)
+	mtlr	r12
+	mfctr	r12		// allow for TOC calculation in newfunc
+	bctr
+4:
+#endif
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	stdu	r1, -112(r1)
 .globl ftrace_graph_call
@@ -1303,15 +1330,31 @@ _GLOBAL(ftrace_graph_stub)
 	addi	r1, r1, 112
 #endif
 
-	mflr	r0		// move this LR to CTR
-	mtctr	r0
-
 	ld	r0,LRSAVE(r1)	// restore callee's lr at _mcount site
 	mtlr	r0
 	bctr			// jump after _mcount site
 #endif /* CC_USING_MPROFILE_KERNEL */
 _GLOBAL(ftrace_stub)
 	blr
+
+#ifdef CONFIG_LIVEPATCH
+/* Helper function for local calls that are becoming global
+   due to live patching.
+   We can't simply patch the NOP after the original call,
+   because, depending on the consistency model, some kernel
+   threads may still have called the original, local function
+   *without* saving their TOC in the respective stack frame slot,
+   so the decision is made per-thread during function return by
+   maybe inserting a KLP_return_helper frame or not.
+*/
+KLP_return_helper:
+	ld	r2,24(r1)	// restore TOC (saved by ftrace_caller)
+	addi r1, r1, 32		// destroy mini stack frame
+	ld	r0,LRSAVE(r1)	// get the real return address
+	mtlr	r0
+	blr
+#endif
+
 #else
 _GLOBAL_TOC(_mcount)
 	/* Taken from output of objdump from lib64/glibc */
diff --git a/arch/powerpc/kernel/livepatch.c b/arch/powerpc/kernel/livepatch.c
new file mode 100644
index 0000000..564eafa
--- /dev/null
+++ b/arch/powerpc/kernel/livepatch.c
@@ -0,0 +1,38 @@
+/*
+ * livepatch.c - powerpc-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2015 SUSE
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/module.h>
+#include <asm/livepatch.h>
+
+/**
+ * klp_write_module_reloc() - write a relocation in a module
+ * @mod:       module in which the section to be modified is found
+ * @type:      ELF relocation type (see asm/elf.h)
+ * @loc:       address that the relocation should be written to
+ * @value:     relocation value (sym address + addend)
+ *
+ * This function writes a relocation to the specified location for
+ * a particular module.
+ */
+int klp_write_module_reloc(struct module *mod, unsigned long type,
+			    unsigned long loc, unsigned long value)
+{
+	/* This requires infrastructure changes; we need the loadinfos. */
+	pr_err("klp_write_module_reloc not yet supported\n");
+	return -ENOSYS;
+}
-- 
1.8.5.6


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected.
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (7 preceding siblings ...)
  2015-11-25 16:48 ` [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
@ 2015-11-25 16:49 ` Torsten Duwe
  2015-12-03 16:00 ` [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Petr Mladek
  9 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:49 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/Kconfig         | 5 +++++
 arch/powerpc/kernel/Makefile | 1 +
 2 files changed, 6 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 55fd59e..47f479c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -163,6 +163,9 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
 
+config HAVE_LIVEPATCH
+       def_bool PPC64 && CPU_LITTLE_ENDIAN
+
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
 
@@ -1095,3 +1098,5 @@ config PPC_LIB_RHEAP
 	bool
 
 source "arch/powerpc/kvm/Kconfig"
+
+source "kernel/livepatch/Kconfig"
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 0f417d5..f9a2925 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -119,6 +119,7 @@ obj-$(CONFIG_DYNAMIC_FTRACE)	+= ftrace.o
 obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o
 obj-$(CONFIG_FTRACE_SYSCALLS)	+= ftrace.o
 obj-$(CONFIG_TRACING)		+= trace_clock.o
+obj-$(CONFIG_LIVEPATCH)		+= livepatch.o
 
 ifneq ($(CONFIG_PPC_INDIRECT_PIO),y)
 obj-y				+= iomap.o
-- 
1.8.5.6


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
@ 2015-11-25 16:53 Torsten Duwe
  2015-11-25 16:23 ` [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
                   ` (9 more replies)
  0 siblings, 10 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-25 16:53 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Major changes since v3:
  * the graph tracer works now.
    It turned out the stack frame it tried to manipulate does not
    exist at that point.
  * changes only needed in order to support -mprofile-kernel are now
    in a separate patch, prepended.
  * Kconfig cleanup so this is only selectable on ppc64le.

Torsten Duwe (9):
  ppc64 (le): prepare for -mprofile-kernel
  ppc64le FTRACE_WITH_REGS implementation
  ppc use ftrace_modify_all_code default
  ppc64 ftrace_with_regs configuration variables
  ppc64 ftrace_with_regs: spare early boot and low level
  ppc64 ftrace: disable profiling for some functions
  ppc64 ftrace: disable profiling for some files
  Implement kernel live patching for ppc64le (ABIv2)
  Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it
    is selected.

 arch/powerpc/Kconfig                 |   7 ++
 arch/powerpc/Makefile                |   7 ++
 arch/powerpc/include/asm/ftrace.h    |   5 ++
 arch/powerpc/include/asm/livepatch.h |  45 ++++++++++
 arch/powerpc/kernel/Makefile         |  13 +--
 arch/powerpc/kernel/entry_64.S       | 164 ++++++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/ftrace.c         |  88 ++++++++++++++-----
 arch/powerpc/kernel/livepatch.c      |  38 ++++++++
 arch/powerpc/kernel/module_64.c      |  38 +++++++-
 arch/powerpc/kernel/process.c        |   2 +-
 arch/powerpc/lib/Makefile            |   4 +-
 arch/powerpc/mm/fault.c              |   2 +-
 arch/powerpc/mm/hash_utils_64.c      |  18 ++--
 arch/powerpc/mm/hugetlbpage-hash64.c |   2 +-
 arch/powerpc/mm/hugetlbpage.c        |   4 +-
 arch/powerpc/mm/mem.c                |   2 +-
 arch/powerpc/mm/pgtable_64.c         |   2 +-
 arch/powerpc/mm/slb.c                |   6 +-
 arch/powerpc/mm/slice.c              |   8 +-
 kernel/trace/Kconfig                 |   5 ++
 20 files changed, 406 insertions(+), 54 deletions(-)
 create mode 100644 arch/powerpc/include/asm/livepatch.h
 create mode 100644 arch/powerpc/kernel/livepatch.c

-- 
1.8.5.6


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2015-11-25 16:34 ` [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
@ 2015-11-26 10:04   ` Denis Kirjanov
  2015-11-26 12:59     ` Torsten Duwe
  2015-12-01 17:29     ` Torsten Duwe
  0 siblings, 2 replies; 24+ messages in thread
From: Denis Kirjanov @ 2015-11-26 10:04 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On 11/25/15, Torsten Duwe <duwe@lst.de> wrote:
> Implement FTRACE_WITH_REGS for powerpc64, on ELF ABI v2.
> Initial work started by Vojtech Pavlik, used with permission.
>
>   * arch/powerpc/kernel/entry_64.S:
>     - Implement an effective ftrace_caller that works from
>       within the kernel binary as well as from modules.
>   * arch/powerpc/kernel/ftrace.c:
>     - be prepared to deal with ppc64 ELF ABI v2, especially
>       calls to _mcount that result from gcc -mprofile-kernel
>     - a little more error verbosity
>   * arch/powerpc/kernel/module_64.c:
>     - do not save the TOC pointer on the trampoline when the
>       destination is ftrace_caller. This trampoline jump happens from
>       a function prologue before a new stack frame is set up, so bad
>       things may happen otherwise...
>     - relax is_module_trampoline() to recognise the modified
>       trampoline.
>
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> ---
>  arch/powerpc/include/asm/ftrace.h |  5 +++
>  arch/powerpc/kernel/entry_64.S    | 77
> +++++++++++++++++++++++++++++++++++++++
>  arch/powerpc/kernel/ftrace.c      | 60 +++++++++++++++++++++++++++---
>  arch/powerpc/kernel/module_64.c   | 25 ++++++++++++-
>  4 files changed, 160 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/ftrace.h
> b/arch/powerpc/include/asm/ftrace.h
> index ef89b14..50ca758 100644
> --- a/arch/powerpc/include/asm/ftrace.h
> +++ b/arch/powerpc/include/asm/ftrace.h
> @@ -46,6 +46,8 @@
>  extern void _mcount(void);
>
>  #ifdef CONFIG_DYNAMIC_FTRACE
> +# define FTRACE_ADDR ((unsigned long)ftrace_caller)
> +# define FTRACE_REGS_ADDR FTRACE_ADDR
>  static inline unsigned long ftrace_call_adjust(unsigned long addr)
>  {
>         /* reloction of mcount call site is the same as the address */
> @@ -58,6 +60,9 @@ struct dyn_arch_ftrace {
>  #endif /*  CONFIG_DYNAMIC_FTRACE */
>  #endif /* __ASSEMBLY__ */
>
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> +#define ARCH_SUPPORTS_FTRACE_OPS 1
> +#endif
>  #endif
>
>  #if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_PPC64) &&
> !defined(__ASSEMBLY__)
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 8d56b16..3309dd8 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
Linux kernel style for comments is C89 (/* ... */). Please update entry_64.S.
> @@ -1212,6 +1212,7 @@ _GLOBAL(_mcount)
>  	mtlr	r0
>  	bctr
>
> +#ifndef CC_USING_MPROFILE_KERNEL
>  _GLOBAL_TOC(ftrace_caller)
>  	/* Taken from output of objdump from lib64/glibc */
>  	mflr	r3
> @@ -1233,6 +1234,82 @@ _GLOBAL(ftrace_graph_stub)
>  	ld	r0, 128(r1)
>  	mtlr	r0
>  	addi	r1, r1, 112
> +#else
> +_GLOBAL(ftrace_caller)
> +#if defined(_CALL_ELF) && _CALL_ELF == 2
> +	mflr	r0
> +	bl	2f
> +2:	mflr	r12
> +	mtlr	r0
> +	mr      r0,r2   // save callee's TOC
> +	addis	r2,r12,(.TOC.-ftrace_caller-8)@ha
> +	addi    r2,r2,(.TOC.-ftrace_caller-8)@l
> +#else
> +	mr	r0,r2
> +#endif
> +	ld	r12,LRSAVE(r1)	// get caller's address
> +
> +	stdu	r1,-SWITCH_FRAME_SIZE(r1)
> +
> +	std     r12, _LINK(r1)
> +	SAVE_8GPRS(0,r1)
> +	std	r0, 24(r1)	// save TOC
> +	SAVE_8GPRS(8,r1)
> +	SAVE_8GPRS(16,r1)
> +	SAVE_8GPRS(24,r1)
> +
> +	addis	r3,r2,function_trace_op@toc@ha
> +	addi	r3,r3,function_trace_op@toc@l
> +	ld	r5,0(r3)
> +
> +	mflr    r3
> +	std     r3, _NIP(r1)
> +	std	r3, 16(r1)
> +	subi    r3, r3, MCOUNT_INSN_SIZE
> +	mfmsr   r4
> +	std     r4, _MSR(r1)
> +	mfctr   r4
> +	std     r4, _CTR(r1)
> +	mfxer   r4
> +	std     r4, _XER(r1)
> +	mr	r4, r12
> +	addi    r6, r1 ,STACK_FRAME_OVERHEAD
> +
> +.globl ftrace_call
> +ftrace_call:
> +	bl	ftrace_stub
> +	nop
> +
> +	ld	r3, _NIP(r1)
> +	mtlr	r3
> +
> +	REST_8GPRS(0,r1)
> +	REST_8GPRS(8,r1)
> +	REST_8GPRS(16,r1)
> +	REST_8GPRS(24,r1)
> +
> +	addi r1, r1, SWITCH_FRAME_SIZE
> +
> +	ld	r12, LRSAVE(r1)  // get caller's address
> +	mtlr	r12
> +	mr	r2,r0		// restore callee's TOC
> +
> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> +	stdu	r1, -112(r1)
> +.globl ftrace_graph_call
> +ftrace_graph_call:
> +	b	ftrace_graph_stub
> +_GLOBAL(ftrace_graph_stub)
> +	addi	r1, r1, 112
> +#endif
> +
> +	mflr	r0		// move this LR to CTR
> +	mtctr	r0
> +
> +	ld	r0,LRSAVE(r1)	// restore callee's lr at _mcount site
> +	mtlr	r0
> +	bctr			// jump after _mcount site
> +#endif /* CC_USING_MPROFILE_KERNEL */
>  _GLOBAL(ftrace_stub)
>  	blr
>  #else
> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
> index 080c525..310137f 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -61,8 +61,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old,
> unsigned int new)
>  		return -EFAULT;
>
>  	/* Make sure it is what we expect it to be */
> -	if (replaced != old)
> +	if (replaced != old) {
> +		pr_err("%p: replaced (%#x) != old (%#x)",
> +		(void *)ip, replaced, old);
>  		return -EINVAL;
> +	}
>
>  	/* replace the text with the new text */
>  	if (patch_instruction((unsigned int *)ip, new))
> @@ -106,14 +109,16 @@ static int
>  __ftrace_make_nop(struct module *mod,
>  		  struct dyn_ftrace *rec, unsigned long addr)
>  {
> -	unsigned int op;
> +	unsigned int op, op0, op1, pop;
>  	unsigned long entry, ptr;
>  	unsigned long ip = rec->ip;
>  	void *tramp;
>
>  	/* read where this goes */
> -	if (probe_kernel_read(&op, (void *)ip, sizeof(int)))
> +	if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
> +		pr_err("Fetching opcode failed.\n");
>  		return -EFAULT;
> +	}
>
>  	/* Make sure that that this is still a 24bit jump */
>  	if (!is_bl_op(op)) {
> @@ -158,10 +163,46 @@ __ftrace_make_nop(struct module *mod,
>  	 *
>  	 * Use a b +8 to jump over the load.
>  	 */
> -	op = 0x48000008;	/* b +8 */
>
> -	if (patch_instruction((unsigned int *)ip, op))
> +	pop = 0x48000008;	/* b +8 */
> +
> +	/*
> +	 * Check what is in the next instruction. We can see ld r2,40(r1), but
> +	 * on first pass after boot we will see mflr r0.
> +	 */
> +	if (probe_kernel_read(&op, (void *)(ip+4), MCOUNT_INSN_SIZE)) {
> +		pr_err("Fetching op failed.\n");
> +		return -EFAULT;
> +	}
> +
> +	if (op != 0xe8410028) { /* ld r2,STACK_OFFSET(r1) */
> +
> +		if (probe_kernel_read(&op0, (void *)(ip-8), MCOUNT_INSN_SIZE)) {
> +			pr_err("Fetching op0 failed.\n");
> +			return -EFAULT;
> +		}
> +
> +		if (probe_kernel_read(&op1, (void *)(ip-4), MCOUNT_INSN_SIZE)) {
> +			pr_err("Fetching op1 failed.\n");
> +			return -EFAULT;
> +		}
> +
> +		/* mflr r0 ; std r0,LRSAVE(r1) */
> +		if (op0 != 0x7c0802a6 && op1 != 0xf8010010) {
> +			pr_err("Unexpected instructions around bl\n"
> +				"when enabling dynamic ftrace!\t"
> +				"(%08x,%08x,bl,%08x)\n", op0, op1, op);
> +			return -EINVAL;
> +		}
> +
> > +	/* When using -mprofile-kernel there is no load to jump over */
> +		pop = PPC_INST_NOP;
> +	}
> +
> +	if (patch_instruction((unsigned int *)ip, pop)) {
> +		pr_err("Patching NOP failed.\n");
>  		return -EPERM;
> +	}
>
>  	return 0;
>  }
> @@ -287,6 +328,13 @@ int ftrace_make_nop(struct module *mod,
>
>  #ifdef CONFIG_MODULES
>  #ifdef CONFIG_PPC64
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> +int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
> +			unsigned long addr)
> +{
> +	return ftrace_make_call(rec, addr);
> +}
> +#endif
>  static int
>  __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
>  {
> @@ -338,7 +386,7 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long
> addr)
>
>  	return 0;
>  }
> -#else
> +#else  /* !CONFIG_PPC64: */
>  static int
>  __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
>  {
> diff --git a/arch/powerpc/kernel/module_64.c
> b/arch/powerpc/kernel/module_64.c
> index 0819ce7..9e6902f 100644
> --- a/arch/powerpc/kernel/module_64.c
> +++ b/arch/powerpc/kernel/module_64.c
> @@ -138,12 +138,25 @@ static u32 ppc64_stub_insns[] = {
>  	0x4e800420			/* bctr */
>  };
>
> +#ifdef CC_USING_MPROFILE_KERNEL
> > +/* In case of _mcount calls or dynamic ftracing, do not save the
> + * current callee's TOC (in R2) again into the original caller's stack
> + * frame during this trampoline hop. The stack frame already holds
> + * that of the original caller.  _mcount and ftrace_caller will take
> + * care of this TOC value themselves.
> + */
> +#define SQUASH_TOC_SAVE_INSN(trampoline_addr) \
> +	(((struct ppc64_stub_entry *)(trampoline_addr))->jump[2] = PPC_INST_NOP)
> +#else
> +#define SQUASH_TOC_SAVE_INSN(trampoline_addr)
> +#endif
> +
>  #ifdef CONFIG_DYNAMIC_FTRACE
>
>  static u32 ppc64_stub_mask[] = {
>  	0xffff0000,
>  	0xffff0000,
> -	0xffffffff,
> +	0x00000000,
>  	0xffffffff,
>  #if !defined(_CALL_ELF) || _CALL_ELF != 2
>  	0xffffffff,
> @@ -170,6 +183,9 @@ bool is_module_trampoline(u32 *p)
>  		if ((insna & mask) != (insnb & mask))
>  			return false;
>  	}
> +	if (insns[2] != ppc64_stub_insns[2] &&
> +	    insns[2] != PPC_INST_NOP)
> +		return false;
>
>  	return true;
>  }
> @@ -618,6 +634,9 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
>  					return -ENOENT;
>  				if (!restore_r2((u32 *)location + 1, me))
>  					return -ENOEXEC;
> +				/* Squash the TOC saver for profiler calls */
> +				if (!strcmp("_mcount", strtab+sym->st_name))
> +					SQUASH_TOC_SAVE_INSN(value);
>  			} else
>  				value += local_entry_offset(sym);
>
> @@ -678,6 +697,10 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
>  	me->arch.tramp = stub_for_addr(sechdrs,
>  				       (unsigned long)ftrace_caller,
>  				       me);
> +	/* ftrace_caller will take care of the TOC;
> +	 * do not clobber original caller's value.
> +	 */
> +	SQUASH_TOC_SAVE_INSN(me->arch.tramp);
>  #endif
>
>  	return 0;
> --
> 1.8.5.6
>
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel
  2015-11-25 16:23 ` [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
@ 2015-11-26 10:12   ` Denis Kirjanov
  2015-11-26 12:57     ` Torsten Duwe
  0 siblings, 1 reply; 24+ messages in thread
From: Denis Kirjanov @ 2015-11-26 10:12 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On 11/25/15, Torsten Duwe <duwe@lst.de> wrote:
> The gcc switch -mprofile-kernel, available for ppc64 on gcc > 4.8.5,
> allows to call _mcount very early in the function, which low-level
> ASM code and code patching functions need to consider.
> Especially the link register and the parameter registers are still
> alive and not yet saved into a new stack frame.
>
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> ---
>  arch/powerpc/kernel/entry_64.S  | 44
> +++++++++++++++++++++++++++++++++++++++--
>  arch/powerpc/kernel/ftrace.c    | 12 +++++++++--
>  arch/powerpc/kernel/module_64.c | 13 ++++++++++++
>  3 files changed, 65 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index a94f155..8d56b16 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -1206,7 +1206,11 @@ _GLOBAL(enter_prom)
>  #ifdef CONFIG_DYNAMIC_FTRACE
>  _GLOBAL(mcount)
>  _GLOBAL(_mcount)
> -	blr
> +	mflr	r0
> +	mtctr	r0
> +	ld	r0,LRSAVE(r1)
> +	mtlr	r0
> +	bctr
>
>  _GLOBAL_TOC(ftrace_caller)
>  	/* Taken from output of objdump from lib64/glibc */
> @@ -1262,13 +1266,28 @@ _GLOBAL(ftrace_stub)
>
>  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>  _GLOBAL(ftrace_graph_caller)
> +#ifdef CC_USING_MPROFILE_KERNEL
> +	// with -mprofile-kernel, parameter regs are still alive at _mcount
> +	std	r10, 104(r1)
> +	std	r9, 96(r1)
> +	std	r8, 88(r1)
> +	std	r7, 80(r1)
> +	std	r6, 72(r1)
> +	std	r5, 64(r1)
> +	std	r4, 56(r1)
> +	std	r3, 48(r1)
> +	mfctr	r4		// ftrace_caller has moved local addr here
> +	std	r4, 40(r1)
> +	mflr	r3		// ftrace_caller has restored LR from stack
> +#else
>  	/* load r4 with local address */
>  	ld	r4, 128(r1)
> -	subi	r4, r4, MCOUNT_INSN_SIZE
>
>  	/* Grab the LR out of the caller stack frame */
>  	ld	r11, 112(r1)
>  	ld	r3, 16(r11)
> +#endif
> +	subi	r4, r4, MCOUNT_INSN_SIZE
>
>  	bl	prepare_ftrace_return
>  	nop
> @@ -1277,6 +1296,26 @@ _GLOBAL(ftrace_graph_caller)
>  	 * prepare_ftrace_return gives us the address we divert to.
>  	 * Change the LR in the callers stack frame to this.
>  	 */
> +
> +#ifdef CC_USING_MPROFILE_KERNEL
> +	mtlr	r3
> +
> +	ld	r0, 40(r1)
> +	mtctr	r0
> +	ld	r10, 104(r1)
> +	ld	r9, 96(r1)
> +	ld	r8, 88(r1)
> +	ld	r7, 80(r1)
> +	ld	r6, 72(r1)
> +	ld	r5, 64(r1)
> +	ld	r4, 56(r1)
> +	ld	r3, 48(r1)
> +
> +	addi	r1, r1, 112
> +	mflr	r0
> +	std	r0, LRSAVE(r1)
> +	bctr
> +#else
>  	ld	r11, 112(r1)
>  	std	r3, 16(r11)
>
> @@ -1284,6 +1323,7 @@ _GLOBAL(ftrace_graph_caller)
>  	mtlr	r0
>  	addi	r1, r1, 112
>  	blr
> +#endif
>
>  _GLOBAL(return_to_handler)
>  	/* need to save return values */
> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
> index 44d4d8e..080c525 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -306,11 +306,19 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned
> long addr)
>  	 * The load offset is different depending on the ABI. For simplicity
>  	 * just mask it out when doing the compare.
>  	 */
> +#ifndef CC_USING_MPROFILE_KERNEL
>  	if ((op[0] != 0x48000008) || ((op[1] & 0xffff0000) != 0xe8410000)) {
> -		pr_err("Unexpected call sequence: %x %x\n", op[0], op[1]);
> +		pr_err("Unexpected call sequence at %p: %x %x\n",
> +		ip, op[0], op[1]);
>  		return -EINVAL;
>  	}
> -
> +#else
> +	/* look for patched "NOP" on ppc64 with -mprofile-kernel */
> +	if (op[0] != 0x60000000) {
> +		pr_err("Unexpected call at %p: %x\n", ip, op[0]);
> +		return -EINVAL;
> +	}
> +#endif
>  	/* If we never set up a trampoline to ftrace_caller, then bail */
>  	if (!rec->arch.mod->arch.tramp) {
>  		pr_err("No ftrace trampoline\n");
> diff --git a/arch/powerpc/kernel/module_64.c
> b/arch/powerpc/kernel/module_64.c
> index 6838451..0819ce7 100644
> --- a/arch/powerpc/kernel/module_64.c
> +++ b/arch/powerpc/kernel/module_64.c
> @@ -475,6 +475,19 @@ static unsigned long stub_for_addr(Elf64_Shdr *sechdrs,
>  static int restore_r2(u32 *instruction, struct module *me)
>  {
>  	if (*instruction != PPC_INST_NOP) {
> +#ifdef CC_USING_MPROFILE_KERNEL
> > +		/* -mprofile-kernel sequence starting with
> +		 * mflr r0; std r0, LRSAVE(r1)
> +		 */
> +		if (instruction[-3] == 0x7c0802a6 &&
> +		    instruction[-2] == 0xf8010010) {
> +			/* Nothing to be done here, it's an _mcount
> +			 * call location and r2 will have to be
> +			 * restored in the _mcount function.
> +			 */
> +			return 2;
I didn't find where you check for this return value.
> +		};
> +#endif
>  		pr_err("%s: Expect noop after relocate, got %08x\n",
>  		       me->name, *instruction);
>  		return 0;
> --
> 1.8.5.6
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel
  2015-11-26 10:12   ` Denis Kirjanov
@ 2015-11-26 12:57     ` Torsten Duwe
  0 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-26 12:57 UTC (permalink / raw)
  To: Denis Kirjanov
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Thu, Nov 26, 2015 at 01:12:12PM +0300, Denis Kirjanov wrote:
> On 11/25/15, Torsten Duwe <duwe@lst.de> wrote:
> > +			 */
> > +			return 2;
> I didn't find where you check for this return value.

That's a pure debugging convenience. The return test is for != 0,
so any non-zero value will do. I've encountered situations where
I'd really have liked to know _why_ a routine failed or succeeded,
with the reason visible in the registers in the debugger.

This is no big thing; I have no strong opinion about it.

	Torsten


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2015-11-26 10:04   ` Denis Kirjanov
@ 2015-11-26 12:59     ` Torsten Duwe
  2015-12-01 17:29     ` Torsten Duwe
  1 sibling, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-11-26 12:59 UTC (permalink / raw)
  To: Denis Kirjanov
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Thu, Nov 26, 2015 at 01:04:19PM +0300, Denis Kirjanov wrote:
> On 11/25/15, Torsten Duwe <duwe@lst.de> wrote:
> > --- a/arch/powerpc/kernel/entry_64.S
> > +++ b/arch/powerpc/kernel/entry_64.S
> Linux kernel style for comments is C89 (/* ... */). Please update entry_64.S.

I might as well switch to asm-style comments. Anyway, thanks 
for the reminder.

	Torsten


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2015-11-26 10:04   ` Denis Kirjanov
  2015-11-26 12:59     ` Torsten Duwe
@ 2015-12-01 17:29     ` Torsten Duwe
  2015-12-01 22:18       ` Michael Ellerman
  1 sibling, 1 reply; 24+ messages in thread
From: Torsten Duwe @ 2015-12-01 17:29 UTC (permalink / raw)
  To: Denis Kirjanov
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Thu, Nov 26, 2015 at 01:04:19PM +0300, Denis Kirjanov wrote:
> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> > index 8d56b16..3309dd8 100644
> > --- a/arch/powerpc/kernel/entry_64.S
> > +++ b/arch/powerpc/kernel/entry_64.S
> Linux kernel style for comments is C89 (/* ... */). Please update entry_64.S.

Any other (planned) remarks or comments on the patch set?
If not I'll make a 5th, hopefully final version.

	Torsten



* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2015-12-01 17:29     ` Torsten Duwe
@ 2015-12-01 22:18       ` Michael Ellerman
  2016-01-05 15:58         ` Torsten Duwe
  0 siblings, 1 reply; 24+ messages in thread
From: Michael Ellerman @ 2015-12-01 22:18 UTC (permalink / raw)
  To: Torsten Duwe, Denis Kirjanov
  Cc: Steven Rostedt, Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

On Tue, 2015-12-01 at 18:29 +0100, Torsten Duwe wrote:
> On Thu, Nov 26, 2015 at 01:04:19PM +0300, Denis Kirjanov wrote:
> > > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> > > index 8d56b16..3309dd8 100644
> > > --- a/arch/powerpc/kernel/entry_64.S
> > > +++ b/arch/powerpc/kernel/entry_64.S
> > Linux style for comments is C89. Please update entry_64.S
> 
> Any other (planned) remarks or comments on the patch set?
> If not I'll make a 5th, hopefully final version.

I (still) haven't had a chance to have a good look at it, but I won't this week
anyway. So post v5 and hopefully I can review that and it will be perfect :)

cheers



* Re: [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2)
  2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (8 preceding siblings ...)
  2015-11-25 16:49 ` [PATCH v4 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
@ 2015-12-03 16:00 ` Petr Mladek
  9 siblings, 0 replies; 24+ messages in thread
From: Petr Mladek @ 2015-12-03 16:00 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Wed 2015-11-25 17:53:17, Torsten Duwe wrote:
> Torsten Duwe (9):
>   ppc64 (le): prepare for -mprofile-kernel
>   ppc64le FTRACE_WITH_REGS implementation
>   ppc use ftrace_modify_all_code default
>   ppc64 ftrace_with_regs configuration variables
>   ppc64 ftrace_with_regs: spare early boot and low level
>   ppc64 ftrace: disable profiling for some functions
>   ppc64 ftrace: disable profiling for some files
>   Implement kernel live patching for ppc64le (ABIv2)
>   Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it
>     is selected.

I had to add the patch below to get LivePatching working.

I tested it the following way:

# livepatching sample
CONFIG_SAMPLES=y
CONFIG_SAMPLE_LIVEPATCH=m

# booted the compiled kernel and printed the default cmdline
$> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-rc3-11-default+ root=UUID=...

# loaded the patch and printed the patch cmdline
$> modprobe livepatch-sample
$> cat /proc/cmdline
this has been live patched

# tried to disable and enable the patch
$> echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled
$> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-rc3-11-default+ root=UUID=...
$> echo 1 > /sys/kernel/livepatch/livepatch_sample/enabled
$> cat /proc/cmdline
this has been live patched

# also checked messages
$> dmesg | tail -n 4
[   33.673057] livepatch: tainting kernel with TAINT_LIVEPATCH
[   33.673068] livepatch: enabling patch 'livepatch_sample'
[ 1997.098257] livepatch: disabling patch 'livepatch_sample'
[ 2079.696277] livepatch: enabling patch 'livepatch_sample'


Here is the patch:

From 7eb6f9453c81a996b0f5ffcb8725facadd9ec718 Mon Sep 17 00:00:00 2001
From: Petr Mladek <pmladek@suse.com>
Date: Thu, 3 Dec 2015 13:28:19 +0100
Subject: [PATCH] livepatch: Support ftrace location with an offset

The ftrace_set_filter_ip() function requires the exact ftrace location
as a parameter. On x86 and s390 it is the same as the function address,
but on powerpc it is after the TOC load and LR save.

This patch adds klp_ftrace_location() arch-specific function that
will compute the ftrace location from the function address.

I thought about handling this with a constant (define), but I
am not sure it would always be a constant. I also thought
about a weak function. But this seems to be a good compromise:
we are ready for complications, and the compiler can still
optimize it.

I made it livepatch-specific to keep it simple. Ftrace supports
more handler locations, e.g. mcount on x86, but live patching
does not support most of them. Also, ftrace does not easily know
the offset, because it gets the ftrace locations from the
__mcount_loc object section.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 arch/powerpc/include/asm/livepatch.h | 10 ++++++++++
 arch/s390/include/asm/livepatch.h    |  9 +++++++++
 arch/x86/include/asm/livepatch.h     | 10 ++++++++++
 kernel/livepatch/core.c              | 10 +++++++---
 4 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
index 3200c1152122..aa7e15dae58c 100644
--- a/arch/powerpc/include/asm/livepatch.h
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -34,6 +34,16 @@ static inline int klp_check_compiler_support(void)
 extern int klp_write_module_reloc(struct module *mod, unsigned long type,
 				   unsigned long loc, unsigned long value);
 
+/*
+ * LivePatching works on PPC only when the kernel is compiled with
+ * -mprofile-kernel. The ftrace handler is called after the TOC load
+ * and LR save (16 bytes).
+ */
+static inline unsigned long klp_ftrace_location(unsigned long faddr)
+{
+	return faddr + 16;
+}
+
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->nip = ip;
diff --git a/arch/s390/include/asm/livepatch.h b/arch/s390/include/asm/livepatch.h
index 7aa799134a11..b853a1e48d04 100644
--- a/arch/s390/include/asm/livepatch.h
+++ b/arch/s390/include/asm/livepatch.h
@@ -32,6 +32,15 @@ static inline int klp_write_module_reloc(struct module *mod, unsigned long
 	return -ENOSYS;
 }
 
+/*
+ * LivePatching works only when the ftrace handler is called from the first
+ * instruction of the patched function on s390.
+ */
+static inline unsigned long klp_ftrace_location(unsigned long faddr)
+{
+	return faddr;
+}
+
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->psw.addr = ip;
diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
index 19c099afa861..9a8c84d3fae2 100644
--- a/arch/x86/include/asm/livepatch.h
+++ b/arch/x86/include/asm/livepatch.h
@@ -36,6 +36,16 @@ static inline int klp_check_compiler_support(void)
 int klp_write_module_reloc(struct module *mod, unsigned long type,
 			   unsigned long loc, unsigned long value);
 
+
+/*
+ * LivePatching works only when ftrace uses fentry on x86. Therefore
+ * the ftrace location is the same as the address of the function.
+ */
+static inline unsigned long klp_ftrace_location(unsigned long faddr)
+{
+	return faddr;
+}
+
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->ip = ip;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index db545cbcdb89..f9dc8889e1f1 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -364,8 +364,10 @@ static void klp_disable_func(struct klp_func *func)
 		return;
 
 	if (list_is_singular(&ops->func_stack)) {
+		unsigned long ftrace_loc = klp_ftrace_location(func->old_addr);
+
 		WARN_ON(unregister_ftrace_function(&ops->fops));
-		WARN_ON(ftrace_set_filter_ip(&ops->fops, func->old_addr, 1, 0));
+		WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1, 0));
 
 		list_del_rcu(&func->stack_node);
 		list_del(&ops->node);
@@ -390,6 +392,8 @@ static int klp_enable_func(struct klp_func *func)
 
 	ops = klp_find_ops(func->old_addr);
 	if (!ops) {
+		unsigned long ftrace_loc = klp_ftrace_location(func->old_addr);
+
 		ops = kzalloc(sizeof(*ops), GFP_KERNEL);
 		if (!ops)
 			return -ENOMEM;
@@ -404,7 +408,7 @@ static int klp_enable_func(struct klp_func *func)
 		INIT_LIST_HEAD(&ops->func_stack);
 		list_add_rcu(&func->stack_node, &ops->func_stack);
 
-		ret = ftrace_set_filter_ip(&ops->fops, func->old_addr, 0, 0);
+		ret = ftrace_set_filter_ip(&ops->fops, ftrace_loc, 0, 0);
 		if (ret) {
 			pr_err("failed to set ftrace filter for function '%s' (%d)\n",
 			       func->old_name, ret);
@@ -415,7 +419,7 @@ static int klp_enable_func(struct klp_func *func)
 		if (ret) {
 			pr_err("failed to register ftrace handler for function '%s' (%d)\n",
 			       func->old_name, ret);
-			ftrace_set_filter_ip(&ops->fops, func->old_addr, 1, 0);
+			ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1, 0);
 			goto err;
 		}
 
-- 
1.8.5.6



* Re: [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables
  2015-11-25 16:37 ` [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables Torsten Duwe
@ 2015-12-03 16:20   ` Petr Mladek
  2015-12-04  9:01     ` Torsten Duwe
  0 siblings, 1 reply; 24+ messages in thread
From: Petr Mladek @ 2015-12-03 16:20 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Wed 2015-11-25 17:37:33, Torsten Duwe wrote:
>   * Makefile:
>     - globally use -mprofile-kernel in case it's configured.
>   * arch/powerpc/Kconfig / kernel/trace/Kconfig:
>     - declare that ppc64le HAVE_MPROFILE_KERNEL and
>       HAVE_DYNAMIC_FTRACE_WITH_REGS, and use it.
> 
> --- a/arch/powerpc/Makefile
> +++ b/arch/powerpc/Makefile
> @@ -133,6 +133,13 @@ else
>  CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
>  endif
>  
> +ifeq ($(CONFIG_PPC64),y)
> +ifdef CONFIG_HAVE_MPROFILE_KERNEL
> +CC_FLAGS_FTRACE	:= -pg $(call cc-option,-mprofile-kernel)

Do we want to define -pg even when -mprofile-kernel is not supported
by the gcc in use?

> +KBUILD_CPPFLAGS	+= -DCC_USING_MPROFILE_KERNEL

IMHO, we should not define CC_USING_MPROFILE_KERNEL if it is not
supported by the compiler.

I took inspiration from the CC_USING_FENTRY handling in
linux/Makefile. The following code worked for me:

CC_USING_MPROFILE_KERNEL := $(call cc-option, -pg -mprofile-kernel -DCC_USING_MPROFILE_KERNEL)
CC_FLAGS_FTRACE := $(CC_USING_MPROFILE_KERNEL)
KBUILD_CPPFLAGS += $(CC_USING_MPROFILE_KERNEL)

I just do not understand why we need to add the flags also
to KBUILD_CPPFLAGS. It seems that they are duplicated
when compiling kernel/livepatch/core.o. But livepatching
did not work without it. I wonder if you found the culprit.

Best Regards,
Petr

> +endif
> +endif
> +
>  CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
>  CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
>  CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)


* Re: [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2)
  2015-11-25 16:48 ` [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
@ 2015-12-03 16:24   ` Petr Mladek
  2015-12-04  9:06     ` Torsten Duwe
  0 siblings, 1 reply; 24+ messages in thread
From: Petr Mladek @ 2015-12-03 16:24 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Wed 2015-11-25 17:48:36, Torsten Duwe wrote:
>   * create the appropriate files+functions
>     arch/powerpc/include/asm/livepatch.h
>         klp_check_compiler_support,
>         klp_arch_set_pc
>     arch/powerpc/kernel/livepatch.c with a stub for
>         klp_write_module_reloc
>     This is architecture-independent work in progress.
>   * introduce a fixup in arch/powerpc/kernel/entry_64.S
>     for local calls that are becoming global due to live patching.
>     And of course do the main KLP thing: return to a maybe different
>     address, possibly altered by the live patching ftrace op.
> 
> --- /dev/null
> +++ b/arch/powerpc/include/asm/livepatch.h
> @@ -0,0 +1,45 @@
[...]
> +#include <linux/module.h>
> +#include <linux/ftrace.h>
> +
> +#ifdef CONFIG_LIVEPATCH
> +static inline int klp_check_compiler_support(void)
> +{
> +#if !defined(_CALL_ELF) || _CALL_ELF != 2

I am just curious why we do not check CC_USING_MPROFILE_KERNEL
like in the other similar locations. It would look less cryptic.
But I am not sure if it is precise enough.

Best Regards,
Petr

> +	return 1;
> +#endif
> +	return 0;
> +}
> +


* Re: [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables
  2015-12-03 16:20   ` Petr Mladek
@ 2015-12-04  9:01     ` Torsten Duwe
  0 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-12-04  9:01 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Thu, Dec 03, 2015 at 05:20:08PM +0100, Petr Mladek wrote:
>
> IMHO, we should not define CC_USING_MPROFILE_KERNEL if it is not
> supported by the compiler.

Yes, true.

> I took inspiration from the CC_USING_FENTRY handling in
> linux/Makefile. The following code worked for me:
>
> CC_USING_MPROFILE_KERNEL := $(call cc-option, -pg -mprofile-kernel -DCC_USING_MPROFILE_KERNEL)
> CC_FLAGS_FTRACE := $(CC_USING_MPROFILE_KERNEL)
> KBUILD_CPPFLAGS += $(CC_USING_MPROFILE_KERNEL)

Excellent!

> I just do not understand why we need to add the flags also
> to KBUILD_CPPFLAGS. It seems that they are duplicated
> when compiling kernel/livepatch/core.o. But livepatching
> did not work without it. I wonder if you found the culprit.

Some assembler-with-cpp files also need to be notified?

My plan is first to get this working reliably and then fine-tune.

	Torsten



* Re: [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2)
  2015-12-03 16:24   ` Petr Mladek
@ 2015-12-04  9:06     ` Torsten Duwe
  0 siblings, 0 replies; 24+ messages in thread
From: Torsten Duwe @ 2015-12-04  9:06 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Steven Rostedt, Michael Ellerman, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Thu, Dec 03, 2015 at 05:24:45PM +0100, Petr Mladek wrote:
> > +++ b/arch/powerpc/include/asm/livepatch.h
> > @@ -0,0 +1,45 @@
> [...]
> > +#include <linux/module.h>
> > +#include <linux/ftrace.h>
> > +
> > +#ifdef CONFIG_LIVEPATCH
> > +static inline int klp_check_compiler_support(void)
> > +{
> > +#if !defined(_CALL_ELF) || _CALL_ELF != 2
> 
> I am just curious why we do not check CC_USING_MPROFILE_KERNEL
> like in the other similar locations. It would look less cryptic.
> But I am not sure if it is precise enough.

Because ppc with gcc > 4.8.5 is not sufficient by itself. The code
currently only works for the PPC ELF ABI v2, a.k.a. ppc64le.
But now that you say it, CC_USING_MPROFILE_KERNEL should go
into that condition as well. Thanks!

	Torsten



* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2015-12-01 22:18       ` Michael Ellerman
@ 2016-01-05 15:58         ` Torsten Duwe
  2016-01-18 22:22           ` Jiri Kosina
  0 siblings, 1 reply; 24+ messages in thread
From: Torsten Duwe @ 2016-01-05 15:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Denis Kirjanov, Steven Rostedt, Jiri Kosina, linuxppc-dev,
	linux-kernel, live-patching

On Wed, Dec 02, 2015 at 09:18:05AM +1100, Michael Ellerman wrote:
> 
> I (still) haven't had a chance to have a good look at it, but I won't this week
> anyway. So post v5 and hopefully I can review that and it will be perfect :)

The (hopefully) perfect v5 has been out for four weeks now, minus the
holiday season, and no further problems have arisen. It was independently
verified by Petr Mladek -- don't forget his high-level fix.

	Torsten



* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2016-01-05 15:58         ` Torsten Duwe
@ 2016-01-18 22:22           ` Jiri Kosina
  2016-01-18 23:29             ` Michael Ellerman
  0 siblings, 1 reply; 24+ messages in thread
From: Jiri Kosina @ 2016-01-18 22:22 UTC (permalink / raw)
  To: Torsten Duwe, Michael Ellerman, Steven Rostedt
  Cc: Denis Kirjanov, linuxppc-dev, linux-kernel, live-patching

On Tue, 5 Jan 2016, Torsten Duwe wrote:

> > I (still) haven't had a chance to have a good look at it, but I won't this week
> > anyway. So post v5 and hopefully I can review that and it will be perfect :)
> 
> The (hopefully) perfect v5 has been out for four weeks now, minus the
> holiday season, and no further problems have arisen. It was independently
> verified by Petr Mladek -- don't forget his high-level fix.

Hi everybody,

so what are the current plans with this one, please? Is this going through 
ppc tree, ftrace tree, or does it have issues that need to be worked on 
before merging?

Thanks!

-- 
Jiri Kosina
SUSE Labs


* Re: [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation
  2016-01-18 22:22           ` Jiri Kosina
@ 2016-01-18 23:29             ` Michael Ellerman
  0 siblings, 0 replies; 24+ messages in thread
From: Michael Ellerman @ 2016-01-18 23:29 UTC (permalink / raw)
  To: Jiri Kosina, Torsten Duwe, Steven Rostedt
  Cc: Denis Kirjanov, linuxppc-dev, linux-kernel, live-patching

On Mon, 2016-01-18 at 23:22 +0100, Jiri Kosina wrote:
> On Tue, 5 Jan 2016, Torsten Duwe wrote:
> 
> > > I (still) haven't had a chance to have a good look at it, but I won't this week
> > > anyway. So post v5 and hopefully I can review that and it will be perfect :)
> > 
> > The (hopefully) perfect v5 has been out for four weeks now, minus the
> > holiday season, and no further problems have arisen. It was independently
> > verified by Petr Mladek -- don't forget his high-level fix.
> 
> Hi everybody,
> 
> so what are the current plans with this one, please? Is this going through 
> ppc tree, ftrace tree, or does it have issues that need to be worked on 
> before merging?

Anton and I need to review it and then it will be merged via the powerpc tree.
We just haven't had the spare cycles to do it.

cheers


Thread overview: 24+ messages
2015-11-25 16:53 [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
2015-11-25 16:23 ` [PATCH v4 1/9] ppc64 (le): prepare for -mprofile-kernel Torsten Duwe
2015-11-26 10:12   ` Denis Kirjanov
2015-11-26 12:57     ` Torsten Duwe
2015-11-25 16:34 ` [PATCH v4 2/9] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
2015-11-26 10:04   ` Denis Kirjanov
2015-11-26 12:59     ` Torsten Duwe
2015-12-01 17:29     ` Torsten Duwe
2015-12-01 22:18       ` Michael Ellerman
2016-01-05 15:58         ` Torsten Duwe
2016-01-18 22:22           ` Jiri Kosina
2016-01-18 23:29             ` Michael Ellerman
2015-11-25 16:35 ` [PATCH v4 3/9] ppc use ftrace_modify_all_code default Torsten Duwe
2015-11-25 16:37 ` [PATCH v4 4/9] ppc64 ftrace_with_regs configuration variables Torsten Duwe
2015-12-03 16:20   ` Petr Mladek
2015-12-04  9:01     ` Torsten Duwe
2015-11-25 16:39 ` [PATCH v4 5/9] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
2015-11-25 16:41 ` [PATCH v4 6/9] ppc64 ftrace: disable profiling for some functions Torsten Duwe
2015-11-25 16:42 ` [PATCH v4 7/9] ppc64 ftrace: disable profiling for some files Torsten Duwe
2015-11-25 16:48 ` [PATCH v4 8/9] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
2015-12-03 16:24   ` Petr Mladek
2015-12-04  9:06     ` Torsten Duwe
2015-11-25 16:49 ` [PATCH v4 9/9] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
2015-12-03 16:00 ` [PATCH v4 0/9] ftrace with regs + live patching for ppc64 LE (ABI v2) Petr Mladek
