* Re: [PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling
       [not found]   ` <20170503155819.0cbd04e5@gandalf.local.home>
@ 2017-05-04  4:35     ` Naveen N. Rao
  0 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:35 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Anton Blanchard, linuxppc-dev

[Copying linuxppc-dev list which I missed cc'ing initially]

On 2017/05/03 03:58PM, Steven Rostedt wrote:
> On Wed,  3 May 2017 23:43:41 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > This fixes a crash when function_graph and jprobes are used together.
> > This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> > conflict between jprobes and function graph tracing"), but for powerpc.
> > 
> > Jprobes breaks function_graph tracing since the jprobe hook needs to use
> > jprobe_return(), which never returns to the hook but instead returns to
> > the original jprobe'd function. The solution is to momentarily pause
> > function_graph tracing before invoking the jprobe hook and to re-enable
> > it when returning to the original jprobe'd function.
> 
> I wonder if any of this code could be made arch agnostic?

I don't see a way to do that as the jprobe handlers are all 
arch-specific.

> 
> Anyway, Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Thanks,
- Naveen

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace
@ 2017-05-04  4:36 Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

[Reposting this as I missed copying the distribution list in my
previous posting. For those who were copied previously, sorry for the spam]

This series initially started out as a fix for a single bug I found:
kernel Oops when jprobes is used with the function_graph tracer. Though
that particular bug turned out to be a solved problem on other
architectures (as seen in patch 1), my investigation revealed quite a
few other bugs as well as scope for optimizing our ftrace handlers.

This series includes all the fixes and enhancements I have made so far.
There are a few more bugs and fixes that I will be coding up and sending
across in a day or two.

This series has been run through ftrace selftests.

- Naveen

Naveen N. Rao (8):
  powerpc/kprobes: Pause function_graph tracing during jprobes handling
  powerpc/ftrace: Pass the correct stack pointer for
    DYNAMIC_FTRACE_WITH_REGS
  powerpc/ftrace: Remove redundant saving of LR in ftrace[_graph]_caller
  powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
  powerpc/ftrace: Eliminate duplicate stack setup for
    ftrace_graph_caller()
  powerpc/ftrace: Add support for HAVE_FUNCTION_GRAPH_FP_TEST for
    -mprofile-kernel
  powerpc/livepatch: Clarify location of mcount call site
  powerpc/xmon: Disable function_graph tracing while in xmon

 arch/powerpc/include/asm/asm-prototypes.h      |   3 +-
 arch/powerpc/include/asm/ftrace.h              |   3 +
 arch/powerpc/include/asm/livepatch.h           |   4 +-
 arch/powerpc/kernel/kprobes.c                  |  15 +++-
 arch/powerpc/kernel/trace/ftrace.c             |   4 +-
 arch/powerpc/kernel/trace/ftrace_64.S          |   1 +
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 105 +++++++++++--------------
 arch/powerpc/xmon/xmon.c                       |   3 +
 8 files changed, 70 insertions(+), 68 deletions(-)

-- 
2.12.2


* [PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
       [not found]   ` <20170503155819.0cbd04e5@gandalf.local.home>
  2017-05-04  4:36 ` [PATCH 2/8] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS Naveen N. Rao
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

This fixes a crash when function_graph and jprobes are used together.
This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
conflict between jprobes and function graph tracing"), but for powerpc.

Jprobes breaks function_graph tracing since the jprobe hook needs to use
jprobe_return(), which never returns to the hook but instead returns to
the original jprobe'd function. The solution is to momentarily pause
function_graph tracing before invoking the jprobe hook and to re-enable
it when returning to the original jprobe'd function.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 255d28d31ca1..562d18f456d7 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -609,6 +609,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 	regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
 #endif
 
+	/*
+	 * jprobes use jprobe_return() which skips the normal return
+	 * path of the function, and this messes up the accounting of the
+	 * function graph tracer.
+	 *
+	 * Pause function graph tracing while performing the jprobe function.
+	 */
+	pause_graph_tracing();
+
 	return 1;
 }
 NOKPROBE_SYMBOL(setjmp_pre_handler);
@@ -634,6 +643,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	 * saved regs...
 	 */
 	memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
+	/* It's OK to start function graph tracing again */
+	unpause_graph_tracing();
 	preempt_enable_no_resched();
 	return 1;
 }
-- 
2.12.2
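The pairing used in the patch above -- pause in the jprobe entry path,
unpause in the break handler that runs when jprobe_return() fires -- can
be sketched in user-space C. This is only an illustrative model: the
counter, the `_sketch` function names, and the stubbed pause/unpause
helpers are assumptions, not the kernel's implementation.

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's graph-tracing pause state. */
static int graph_tracing_paused;

static void pause_graph_tracing(void)   { graph_tracing_paused++; }
static void unpause_graph_tracing(void) { graph_tracing_paused--; }

/* Mirrors setjmp_pre_handler(): runs just before the jprobe hook. */
static int setjmp_pre_handler_sketch(void)
{
	pause_graph_tracing();
	return 1;
}

/* Mirrors longjmp_break_handler(): runs when jprobe_return() traps,
 * i.e. when control goes back to the original jprobe'd function. */
static int longjmp_break_handler_sketch(void)
{
	unpause_graph_tracing();
	return 1;
}
```

The point of the sketch is that the two calls always balance: every path
through the jprobe hook ends in jprobe_return(), so the unpause in the
break handler matches the pause in the pre-handler.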


* [PATCH 2/8] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 3/8] powerpc/ftrace: Remove redundant saving of LR in ftrace[_graph]_caller Naveen N. Rao
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

For DYNAMIC_FTRACE_WITH_REGS, we should pass in the original set of
registers in pt_regs, to capture the state _before_ ftrace_caller.
However, we are instead passing the stack pointer *after* allocating a
stack frame in ftrace_caller. Fix this by saving the proper value of r1
in pt_regs. Also, use SAVE_10GPRS() to simplify the code.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 7c933a99f5d5..fa0921410fa4 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -45,10 +45,14 @@ _GLOBAL(ftrace_caller)
 	stdu	r1,-SWITCH_FRAME_SIZE(r1)
 
 	/* Save all gprs to pt_regs */
-	SAVE_8GPRS(0,r1)
-	SAVE_8GPRS(8,r1)
-	SAVE_8GPRS(16,r1)
-	SAVE_8GPRS(24,r1)
+	SAVE_GPR(0, r1)
+	SAVE_10GPRS(2, r1)
+	SAVE_10GPRS(12, r1)
+	SAVE_10GPRS(22, r1)
+
+	/* Save previous stack pointer (r1) */
+	addi	r8, r1, SWITCH_FRAME_SIZE
+	std	r8, GPR1(r1)
 
 	/* Load special regs for save below */
 	mfmsr   r8
@@ -103,10 +107,10 @@ ftrace_call:
 #endif
 
 	/* Restore gprs */
-	REST_8GPRS(0,r1)
-	REST_8GPRS(8,r1)
-	REST_8GPRS(16,r1)
-	REST_8GPRS(24,r1)
+	REST_GPR(0,r1)
+	REST_10GPRS(2,r1)
+	REST_10GPRS(12,r1)
+	REST_10GPRS(22,r1)
 
 	/* Restore possibly modified LR */
 	ld	r0, _LINK(r1)
-- 
2.12.2
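The recovery of the caller's stack pointer above (`addi r8, r1,
SWITCH_FRAME_SIZE` after the `stdu`) is just undoing the frame
allocation. A user-space C sketch, with an illustrative frame size (the
real SWITCH_FRAME_SIZE comes from asm-offsets):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative value only; the kernel generates the real constant. */
#define SWITCH_FRAME_SIZE 0x200

/* After "stdu r1,-SWITCH_FRAME_SIZE(r1)" the original stack pointer is
 * recovered by adding the frame size back, which is what
 * "addi r8, r1, SWITCH_FRAME_SIZE" does before storing r8 to GPR1. */
static uint64_t original_sp(uint64_t r1_after_stdu)
{
	return r1_after_stdu + SWITCH_FRAME_SIZE;
}
```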


* [PATCH 3/8] powerpc/ftrace: Remove redundant saving of LR in ftrace[_graph]_caller
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 2/8] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 4/8] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

For -mprofile-kernel, there is no need to save LR into the ABI save
location on entry into ftrace_caller(). It is sufficient to have the
(possibly updated) return address in LR (and r0?). On return from
ftrace, this value is stored in the ABI save location if necessary.

Furthermore, we can also remove the redundant saving of LR in
ftrace_graph_caller() for similar reasons. It is sufficient to ensure
LR and r0 point to the new return address.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index fa0921410fa4..d8d75f4eb853 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -38,9 +38,6 @@
  * and then arrange for the ftrace function to be called.
  */
 _GLOBAL(ftrace_caller)
-	/* Save the original return address in A's stack frame */
-	std	r0,LRSAVE(r1)
-
 	/* Create our stack frame + pt_regs */
 	stdu	r1,-SWITCH_FRAME_SIZE(r1)
 
@@ -271,6 +268,5 @@ _GLOBAL(ftrace_graph_caller)
 
 	addi	r1, r1, 112
 	mflr	r0
-	std	r0, LRSAVE(r1)
 	bctr
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-- 
2.12.2


* [PATCH 4/8] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
                   ` (2 preceding siblings ...)
  2017-05-04  4:36 ` [PATCH 3/8] powerpc/ftrace: Remove redundant saving of LR in ftrace[_graph]_caller Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 5/8] powerpc/ftrace: Eliminate duplicate stack setup for ftrace_graph_caller() Naveen N. Rao
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

ftrace_caller() depends on a modified regs->nip to detect if a certain
function has been livepatched. However, with KPROBES_ON_FTRACE, it is
possible for regs->nip to have been modified by the jprobes pre_handler.
In this case, we do not want to invoke the livepatch_handler so as not
to consume the livepatch stack.

To distinguish between the two (jprobes and livepatch), we compare the
returned NIP with R12. Jprobes' setjmp_pre_handler() sets up both NIP
and R12 to the global entry point of the jprobes hook, while livepatch
only sets up NIP; R12 is set up later, within livepatch_handler() itself.

So, if NIP == R12, we know we came here due to jprobes and we just
branch to the new IP. Otherwise, we continue with livepatch processing
as usual.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index d8d75f4eb853..b1bad68ea6db 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -153,8 +153,18 @@ _GLOBAL(ftrace_stub)
 	 *
 	 * r0 can't be used as the base register for a DS-form load or store, so
 	 * we temporarily shuffle r1 (stack pointer) into r0 and then put it back.
+	 *
+	 * But, before we can do all that, we first need to confirm that we are
+	 * indeed here due to livepatch and not due to jprobes. The difference
+	 * between the two handlers is that jprobes additionally sets up r12.
+	 * So, we can compare nip with r12 as a test to determine if we are here
+	 * due to livepatch or due to jprobes.
 	 */
 livepatch_handler:
+	mfctr	r0
+	cmpd	r0, r12
+	beqctr
+
 	CURRENT_THREAD_INFO(r12, r1)
 
 	/* Save stack pointer into r0 */
-- 
2.12.2
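The entry test added to livepatch_handler() (`mfctr r0; cmpd r0, r12;
beqctr`) reduces to a single comparison. A hedged C sketch of that
decision -- the function name and types are illustrative, not kernel
code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* ctr holds the (possibly modified) NIP. Jprobes' setjmp_pre_handler()
 * sets r12 to the same global entry point, while livepatch does not
 * touch r12 before this point. Returns true if we should branch
 * straight to the new NIP (jprobes), false if livepatch stack
 * processing must run first. */
static bool is_jprobes_redirect(uint64_t ctr_nip, uint64_t r12)
{
	return ctr_nip == r12;	/* cmpd r0, r12; beqctr */
}
```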


* [PATCH 5/8] powerpc/ftrace: Eliminate duplicate stack setup for ftrace_graph_caller()
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
                   ` (3 preceding siblings ...)
  2017-05-04  4:36 ` [PATCH 4/8] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 6/8] powerpc/ftrace: Add support for HAVE_FUNCTION_GRAPH_FP_TEST for -mprofile-kernel Naveen N. Rao
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

Currently, ftrace_graph_caller() sets up a separate stack frame to save
some state before calling into prepare_ftrace_return(). Eliminate this
by calling into ftrace_graph_caller() right after the ftrace handler.

We make a few changes to accommodate this:
1. Currently, due to where it is present, ftrace_graph_caller() is not
invoked if the NIP is modified by the ftrace handler, indicating either
a livepatched function or a jprobe'd function. To achieve the same
behavior, we now save/compare against original NIP if function graph is
enabled, in addition to CONFIG_LIVEPATCH.

2. We use r14 for saving the original NIP and r15 for storing the
possibly modified NIP. r15 is later used to determine if the function
has been livepatched.

3. To re-use the same stack frame setup/teardown code, we have
ftrace_graph_caller() save the modified LR in pt_regs.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 70 ++++++++------------------
 1 file changed, 21 insertions(+), 49 deletions(-)

diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index b1bad68ea6db..981b99dc3029 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -72,9 +72,10 @@ _GLOBAL(ftrace_caller)
 	addi	r3,r3,function_trace_op@toc@l
 	ld	r5,0(r3)
 
-#ifdef CONFIG_LIVEPATCH
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_LIVEPATCH)
 	mr	r14,r7		/* remember old NIP */
 #endif
+
 	/* Calculate ip from nip-4 into r3 for call below */
 	subi    r3, r7, MCOUNT_INSN_SIZE
 
@@ -97,10 +98,19 @@ ftrace_call:
 	nop
 
 	/* Load ctr with the possibly modified NIP */
-	ld	r3, _NIP(r1)
-	mtctr	r3
+	ld	r15, _NIP(r1)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+.globl ftrace_graph_call
+ftrace_graph_call:
+	b	ftrace_graph_stub
+_GLOBAL(ftrace_graph_stub)
+#endif
+
+	mtctr	r15
+
 #ifdef CONFIG_LIVEPATCH
-	cmpd	r14,r3		/* has NIP been altered? */
+	cmpd	r14, r15		/* has NIP been altered? */
 #endif
 
 	/* Restore gprs */
@@ -124,13 +134,6 @@ ftrace_call:
 	bne-	livepatch_handler
 #endif
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-.globl ftrace_graph_call
-ftrace_graph_call:
-	b	ftrace_graph_stub
-_GLOBAL(ftrace_graph_stub)
-#endif
-
 	bctr			/* jump after _mcount site */
 
 _GLOBAL(ftrace_stub)
@@ -233,50 +236,19 @@ livepatch_handler:
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 _GLOBAL(ftrace_graph_caller)
-	stdu	r1, -112(r1)
-	/* with -mprofile-kernel, parameter regs are still alive at _mcount */
-	std	r10, 104(r1)
-	std	r9, 96(r1)
-	std	r8, 88(r1)
-	std	r7, 80(r1)
-	std	r6, 72(r1)
-	std	r5, 64(r1)
-	std	r4, 56(r1)
-	std	r3, 48(r1)
+	cmpd	r14, r15			/* has NIP been altered? */
+	bne-	ftrace_graph_stub
 
-	/* Save callee's TOC in the ABI compliant location */
-	std	r2, 24(r1)
-	ld	r2, PACATOC(r13)	/* get kernel TOC in r2 */
-
-	mfctr	r4		/* ftrace_caller has moved local addr here */
-	std	r4, 40(r1)
-	mflr	r3		/* ftrace_caller has restored LR from stack */
-	subi	r4, r4, MCOUNT_INSN_SIZE
+	ld	r3, _LINK(r1)
+	subi	r4, r14, MCOUNT_INSN_SIZE	/* load saved original NIP */
 
 	bl	prepare_ftrace_return
 	nop
 
 	/*
 	 * prepare_ftrace_return gives us the address we divert to.
-	 * Change the LR to this.
+	 * We save it in pt_regs to be used when we go back.
 	 */
-	mtlr	r3
-
-	ld	r0, 40(r1)
-	mtctr	r0
-	ld	r10, 104(r1)
-	ld	r9, 96(r1)
-	ld	r8, 88(r1)
-	ld	r7, 80(r1)
-	ld	r6, 72(r1)
-	ld	r5, 64(r1)
-	ld	r4, 56(r1)
-	ld	r3, 48(r1)
-
-	/* Restore callee's TOC */
-	ld	r2, 24(r1)
-
-	addi	r1, r1, 112
-	mflr	r0
-	bctr
+	std	r3, _LINK(r1)
+	b	ftrace_graph_stub
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-- 
2.12.2
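The condition that now gates ftrace_graph_caller() in the patch above
(`cmpd r14, r15; bne- ftrace_graph_stub`) can be sketched in C. This is
a user-space model of the branch logic only; the name is illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* r14 holds the original NIP, r15 the NIP possibly modified by the
 * ftrace handler. ftrace_graph_caller() only runs when the NIP was NOT
 * altered, i.e. neither livepatch nor jprobes redirected the function. */
static bool should_run_graph_caller(uint64_t orig_nip, uint64_t new_nip)
{
	return orig_nip == new_nip;	/* cmpd r14, r15 */
}
```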


* [PATCH 6/8] powerpc/ftrace: Add support for HAVE_FUNCTION_GRAPH_FP_TEST for -mprofile-kernel
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
                   ` (4 preceding siblings ...)
  2017-05-04  4:36 ` [PATCH 5/8] powerpc/ftrace: Eliminate duplicate stack setup for ftrace_graph_caller() Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 7/8] powerpc/livepatch: Clarify location of mcount call site Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 8/8] powerpc/xmon: Disable function_graph tracing while in xmon Naveen N. Rao
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

This is very handy for catching potential crashes due to unexpected
interactions of the function_graph tracer with weird things like
jprobes.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/asm-prototypes.h      | 3 ++-
 arch/powerpc/include/asm/ftrace.h              | 3 +++
 arch/powerpc/kernel/trace/ftrace.c             | 4 ++--
 arch/powerpc/kernel/trace/ftrace_64.S          | 1 +
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 1 +
 5 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 7330150bfe34..14964ab80a53 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -124,6 +124,7 @@ extern int __ucmpdi2(u64, u64);
 
 /* tracing */
 void _mcount(void);
-unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip);
+unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
+					unsigned long fp);
 
 #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 686c5f70eb84..2b89043b0c61 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -6,6 +6,9 @@
 #ifdef CONFIG_FUNCTION_TRACER
 #define MCOUNT_ADDR		((unsigned long)(_mcount))
 #define MCOUNT_INSN_SIZE	4 /* sizeof mcount call */
+#ifdef CONFIG_MPROFILE_KERNEL
+#define HAVE_FUNCTION_GRAPH_FP_TEST
+#endif
 
 #ifdef __ASSEMBLY__
 
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 32509de6ce4c..7e3e099cdfe4 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -572,7 +572,7 @@ int ftrace_disable_ftrace_graph_caller(void)
  * Hook the return address and push it in the stack of return addrs
  * in current thread info. Return the address we want to divert to.
  */
-unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
+unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long fp)
 {
 	struct ftrace_graph_ent trace;
 	unsigned long return_hooker;
@@ -592,7 +592,7 @@ unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
 	if (!ftrace_graph_entry(&trace))
 		goto out;
 
-	if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
+	if (ftrace_push_return_trace(parent, ip, &trace.depth, fp,
 				     NULL) == -EBUSY)
 		goto out;
 
diff --git a/arch/powerpc/kernel/trace/ftrace_64.S b/arch/powerpc/kernel/trace/ftrace_64.S
index e5ccea19821e..57ab4f73a5c4 100644
--- a/arch/powerpc/kernel/trace/ftrace_64.S
+++ b/arch/powerpc/kernel/trace/ftrace_64.S
@@ -68,6 +68,7 @@ _GLOBAL(return_to_handler)
 	 */
 	ld	r2, PACATOC(r13)
 
+	addi	r3, r1, 112
 	bl	ftrace_return_to_handler
 	nop
 
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 981b99dc3029..fb6910e48f22 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -241,6 +241,7 @@ _GLOBAL(ftrace_graph_caller)
 
 	ld	r3, _LINK(r1)
 	subi	r4, r14, MCOUNT_INSN_SIZE	/* load saved original NIP */
+	addi	r5, r1, SWITCH_FRAME_SIZE
 
 	bl	prepare_ftrace_return
 	nop
-- 
2.12.2
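What HAVE_FUNCTION_GRAPH_FP_TEST buys can be modeled simply: the stack
pointer passed as the new `fp` argument is recorded alongside the hooked
return address and checked when the function returns, so a mismatch
(e.g. after a longjmp) is caught instead of silently corrupting the
return stack. A simplified single-slot sketch -- the real kernel keeps a
per-task array, and these names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct ret_stack_entry {
	uint64_t ret;	/* original return address */
	uint64_t fp;	/* stack pointer recorded at function entry */
};

static struct ret_stack_entry slot;

/* Models ftrace_push_return_trace() with the new fp argument. */
static void push_return_trace(uint64_t ret, uint64_t fp)
{
	slot.ret = ret;
	slot.fp = fp;
}

/* Models the check in the return path: true if the frame pointer seen
 * at exit matches the one recorded at entry. */
static bool fp_test_ok(uint64_t fp_at_exit)
{
	return slot.fp == fp_at_exit;
}
```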


* [PATCH 7/8] powerpc/livepatch: Clarify location of mcount call site
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
                   ` (5 preceding siblings ...)
  2017-05-04  4:36 ` [PATCH 6/8] powerpc/ftrace: Add support for HAVE_FUNCTION_GRAPH_FP_TEST for -mprofile-kernel Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  2017-05-04  4:36 ` [PATCH 8/8] powerpc/xmon: Disable function_graph tracing while in xmon Naveen N. Rao
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

The existing comment around the location of mcount() call site in a
function is a bit confusing.

Depending on the gcc version, the mcount() call can be either the fourth
(gcc v6 and later) or the fifth (gcc v5 and earlier) instruction in a
function, so the mcount call actually falls within the first _20_ bytes
of a function.

However, ftrace_location_range() does an inclusive search, so passing
(addr + 16) is still accurate.

Clarify this by updating the comments accordingly.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/livepatch.h | 4 ++--
 arch/powerpc/kernel/kprobes.c        | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
index 47a03b9b528b..62f98d977f59 100644
--- a/arch/powerpc/include/asm/livepatch.h
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -37,8 +37,8 @@ static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
 {
 	/*
-	 * Live patch works only with -mprofile-kernel on PPC. In this case,
-	 * the ftrace location is always within the first 16 bytes.
+	 * Live patch only works with -mprofile-kernel on PPC. In this case, the
+	 * ftrace location always starts within the first 16 bytes (inclusive).
 	 */
 	return ftrace_location_range(faddr, faddr + 16);
 }
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 562d18f456d7..4398ea60b4e0 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -62,8 +62,8 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 #ifdef CONFIG_KPROBES_ON_FTRACE
 		unsigned long faddr;
 		/*
-		 * Per livepatch.h, ftrace location is always within the first
-		 * 16 bytes of a function on powerpc with -mprofile-kernel.
+		 * Per livepatch.h, ftrace location always starts within the first
+		 * 16 bytes (inclusive) of a function with -mprofile-kernel.
 		 */
 		faddr = ftrace_location_range((unsigned long)addr,
 					      (unsigned long)addr + 16);
-- 
2.12.2
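The offset arithmetic behind the comment fix is simple enough to check:
with 4-byte powerpc instructions, the fourth instruction starts at byte
offset 12 and the fifth at byte offset 16, so an inclusive search over
[addr, addr + 16] covers both gcc layouts even though the call sits
within the first 20 bytes. A small sketch (helper name is illustrative):

```c
#include <assert.h>

#define PPC_INSN_SIZE 4	/* all powerpc instructions are 4 bytes */

/* Byte offset of the nth instruction (1-based) from function entry. */
static unsigned long insn_offset(int nth)
{
	return (unsigned long)(nth - 1) * PPC_INSN_SIZE;
}
```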


* [PATCH 8/8] powerpc/xmon: Disable function_graph tracing while in xmon
  2017-05-04  4:36 [PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace Naveen N. Rao
                   ` (6 preceding siblings ...)
  2017-05-04  4:36 ` [PATCH 7/8] powerpc/livepatch: Clarify location of mcount call site Naveen N. Rao
@ 2017-05-04  4:36 ` Naveen N. Rao
  7 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2017-05-04  4:36 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Steven Rostedt,
	Anton Blanchard, linuxppc-dev

HAVE_FUNCTION_GRAPH_FP_TEST reveals another area (apart from jprobes)
that conflicts with the function_graph tracer: xmon. This is due to the
use of longjmp() in various places in xmon.

To address this, pause function_graph tracing while in xmon.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/xmon/xmon.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index da5619847517..22db68347ff6 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -28,6 +28,7 @@
 #include <linux/bug.h>
 #include <linux/nmi.h>
 #include <linux/ctype.h>
+#include <linux/ftrace.h>
 
 #include <asm/debugfs.h>
 #include <asm/ptrace.h>
@@ -459,6 +460,7 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
 
 	local_irq_save(flags);
 	hard_irq_disable();
+	pause_graph_tracing();
 
 	bp = in_breakpoint_table(regs->nip, &offset);
 	if (bp != NULL) {
@@ -655,6 +657,7 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
 
 	touch_nmi_watchdog();
 	local_irq_restore(flags);
+	unpause_graph_tracing();
 
 	return cmd != 'X' && cmd != EOF;
 }
-- 
2.12.2

