* [PATCH v2 0/5] powerpc/ftrace: Add support for ftrace_modify_call() and a few other fixes
@ 2018-03-21 10:43 Naveen N. Rao
From: Naveen N. Rao @ 2018-03-21 10:43 UTC (permalink / raw)
  To: Michael Ellerman, Steven Rostedt, Paul Mackerras
  Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga

The first two patches fix a kernel oops when function tracing is enabled 
while using KVM and are v2 of the patches posted at:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg130548.html

Patch 3 is a new patch that tightens how we detect _mcount() call sites 
for -mprofile-kernel during module loading.

The last two patches implement support for ftrace_caller() to 
conditionally save the register state. The existing ftrace_caller() is 
renamed to ftrace_regs_caller() since we save the entire pt_regs today.  
A new implementation of ftrace_caller() that saves the minimum register 
state is provided. We switch between the two variants through 
ftrace_modify_call(). The necessary support to call into the two 
different variants from modules is also added.
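
For context, which variant a call site ends up branching to is driven by
whether the registered ftrace_ops asks for register state. A minimal
sketch (the ops/callback names here are illustrative; the ftrace API
calls are the standard ones):

	#include <linux/ftrace.h>
	#include <linux/module.h>

	static void notrace my_trace_func(unsigned long ip, unsigned long parent_ip,
					  struct ftrace_ops *op, struct pt_regs *regs)
	{
		/* 'regs' is the full pt_regs only when FTRACE_OPS_FL_SAVE_REGS
		 * is set; such ops are serviced via ftrace_regs_caller(). */
	}

	static struct ftrace_ops my_ops = {
		.func	= my_trace_func,
		.flags	= FTRACE_OPS_FL_SAVE_REGS,	/* drop this to use ftrace_caller() */
	};

	/* register_ftrace_function(&my_ops); the core then patches call
	 * sites, using ftrace_modify_call() when switching variants. */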


- Naveen


Naveen N. Rao (5):
  powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
    paths
  powerpc64/ftrace: Disable ftrace during kvm guest entry/exit
  powerpc64/module: Tighten detection of mcount call sites with
    -mprofile-kernel
  powerpc64/ftrace: Use the generic version of ftrace_replace_code()
  powerpc64/ftrace: Implement support for ftrace_regs_caller()

 arch/powerpc/include/asm/ftrace.h              |   2 -
 arch/powerpc/include/asm/module.h              |   3 +
 arch/powerpc/include/asm/paca.h                |   1 +
 arch/powerpc/kernel/asm-offsets.c              |   1 +
 arch/powerpc/kernel/module_64.c                |  43 +++--
 arch/powerpc/kernel/trace/ftrace.c             | 210 ++++++++++++++++++++-----
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  85 +++++++++-
 arch/powerpc/kernel/trace/ftrace_64_pg.S       |   4 +
 arch/powerpc/kvm/book3s_hv_rmhandlers.S        |   8 +
 9 files changed, 294 insertions(+), 63 deletions(-)

-- 
2.16.2

* [PATCH v2 1/5] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths
From: Naveen N. Rao @ 2018-03-21 10:43 UTC (permalink / raw)
  To: Michael Ellerman, Steven Rostedt, Paul Mackerras
  Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga

We have some C code that we call into from real mode where we cannot
take any exceptions. Though the C functions themselves are mostly safe,
if these functions are traced, there is a possibility that we may take
an exception. For instance, in certain conditions, the ftrace code uses
WARN(), which uses a 'trap' to do its job.

For such scenarios, introduce a new field in paca 'ftrace_disabled',
which is checked on ftrace entry before continuing. This field can then
be set to a non-zero value to disable/pause ftrace, and reset to zero to
resume ftrace.
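
As a minimal sketch, C code around an unsafe region could then
pause/resume tracing like this (the helper names are illustrative, not
part of this patch):

	#include <asm/paca.h>

	static inline void my_ftrace_pause(void)
	{
		get_paca()->ftrace_disabled = 1;
	}

	static inline void my_ftrace_resume(void)
	{
		get_paca()->ftrace_disabled = 0;
	}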

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
Changes since v1:
- Do not guard the paca field check within CONFIG_KVM.
- Move the early return code out of the usual path to optimize for 
  better cache performance in the normal case when ftrace is enabled.

 arch/powerpc/include/asm/paca.h                |  1 +
 arch/powerpc/kernel/asm-offsets.c              |  1 +
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 14 ++++++++++++++
 arch/powerpc/kernel/trace/ftrace_64_pg.S       |  4 ++++
 4 files changed, 20 insertions(+)

diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index d2bf71dddbef..4f47adc2a408 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -211,6 +211,7 @@ struct paca_struct {
 	u16 in_mce;
 	u8 hmi_event_available;		/* HMI event is available */
 	u8 hmi_p9_special_emu;		/* HMI P9 special emulation */
+	u8 ftrace_disabled;
 #endif
 
 	/* Stuff for accurate time accounting */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index ea5eb91b836e..8e4fc96ff6bc 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -240,6 +240,7 @@ int main(void)
 	OFFSET(PACA_RFI_FLUSH_FALLBACK_AREA, paca_struct, rfi_flush_fallback_area);
 	OFFSET(PACA_EXRFI, paca_struct, exrfi);
 	OFFSET(PACA_L1D_FLUSH_SIZE, paca_struct, l1d_flush_size);
+	OFFSET(PACA_FTRACE_DISABLED, paca_struct, ftrace_disabled);
 
 #endif
 	OFFSET(PACAHWCPUID, paca_struct, hw_cpu_id);
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 3f3e81852422..8f2380304ef1 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -47,6 +47,12 @@ _GLOBAL(ftrace_caller)
 	/* Save all gprs to pt_regs */
 	SAVE_GPR(0, r1)
 	SAVE_10GPRS(2, r1)
+
+	/* Ok to continue? */
+	lbz	r3, PACA_FTRACE_DISABLED(r13)
+	cmpdi	r3, 0
+	beq	ftrace_no_trace
+
 	SAVE_10GPRS(12, r1)
 	SAVE_10GPRS(22, r1)
 
@@ -168,6 +174,14 @@ _GLOBAL(ftrace_graph_stub)
 _GLOBAL(ftrace_stub)
 	blr
 
+ftrace_no_trace:
+	mflr	r3
+	mtctr	r3
+	REST_GPR(3, r1)
+	addi	r1, r1, SWITCH_FRAME_SIZE
+	mtlr	r0
+	bctr
+
 #ifdef CONFIG_LIVEPATCH
 	/*
 	 * This function runs in the mcount context, between two functions. As
diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.S b/arch/powerpc/kernel/trace/ftrace_64_pg.S
index f095358da96e..3d8186832a34 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_pg.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_pg.S
@@ -16,6 +16,10 @@
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL_TOC(ftrace_caller)
+	lbz	r3, PACA_FTRACE_DISABLED(r13)
+	cmpdi	r3, 0
+	bnelr
+
 	/* Taken from output of objdump from lib64/glibc */
 	mflr	r3
 	ld	r11, 0(r1)
-- 
2.16.2

* [PATCH v2 2/5] powerpc64/ftrace: Disable ftrace during kvm guest entry/exit
From: Naveen N. Rao @ 2018-03-21 10:43 UTC (permalink / raw)
  To: Michael Ellerman, Steven Rostedt, Paul Mackerras
  Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga

During guest entry/exit, we switch over to/from the guest MMU context.
While doing so, we set our state to KVM_GUEST_MODE_HOST_HV to note that
we cannot take any exceptions in the hypervisor code.

Since ftrace may be enabled, and since it can result in us taking a
trap, disable ftrace by setting paca->ftrace_disabled. Once we exit the
guest and restore the host MMU context, we re-enable ftrace.
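
In C terms, the asm change below roughly amounts to (a sketch only; the
real code runs in assembly with the paca in r13):

	/* before switching to the guest MMU context */
	local_paca->ftrace_disabled = 1;

	/* ... guest entry/exit ... */

	/* after restoring the host MMU context */
	local_paca->ftrace_disabled = 0;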

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index f31f357b8c5a..9292087adb68 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -600,6 +600,10 @@ kvmppc_hv_entry:
 	/* Save R1 in the PACA */
 	std	r1, HSTATE_HOST_R1(r13)
 
+	/* Disable ftrace since we can't take a trap any more */
+	li	r6, 1
+	stb	r6, PACA_FTRACE_DISABLED(r13)
+
 	li	r6, KVM_GUEST_MODE_HOST_HV
 	stb	r6, HSTATE_IN_GUEST(r13)
 
@@ -2048,6 +2052,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
 	/* Unset guest mode */
 	li	r0, KVM_GUEST_MODE_NONE
 	stb	r0, HSTATE_IN_GUEST(r13)
+	li	r0, 0
+	stb	r0, PACA_FTRACE_DISABLED(r13)
 
 	ld	r0, SFS+PPC_LR_STKOFF(r1)
 	addi	r1, r1, SFS
@@ -3379,6 +3385,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
 	ld	r8, KVM_HOST_LPCR(r10)
 	mtspr	SPRN_LPCR, r8
 	isync
+	li	r0, 0
+	stb	r0, PACA_FTRACE_DISABLED(r13)
 	li	r0, KVM_GUEST_MODE_NONE
 	stb	r0, HSTATE_IN_GUEST(r13)
 
-- 
2.16.2

* [PATCH v2 3/5] powerpc64/module: Tighten detection of mcount call sites with -mprofile-kernel
From: Naveen N. Rao @ 2018-03-21 10:43 UTC (permalink / raw)
  To: Michael Ellerman, Steven Rostedt, Paul Mackerras
  Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga

For R_PPC64_REL24 relocations, we suppress emitting the TOC
load/restore instructions in the relocation stub if the relocation is
for a _mcount() call when using the -mprofile-kernel ABI.

To detect this, we check if the preceding instructions match the
standard sequence emitted by gcc: either the two-instruction sequence
'mflr r0; std r0,16(r1)', or the more optimized variant of a single
'mflr r0'. This is not sufficient, since nothing prevents users from
hand-coding a 'mflr r0' followed by a 'bl'.

When removing the TOC save instruction from the stub, we additionally
check if the symbol is "_mcount". Add the same check here as well, as
shown in the sketch below.
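
Condensed, the tightened check looks roughly like this (a sketch, not
the exact kernel code: assume 'bl_insn' points at the 'bl _mcount'
instruction itself; PPC_INST_MFLR and PPC_INST_STD_LR are the opcode
macros from asm/ppc-opcode.h):

	#include <linux/types.h>
	#include <linux/string.h>
	#include <asm/ppc-opcode.h>

	static bool is_mprofile_mcount_callsite(const char *name, u32 *bl_insn)
	{
		/* only calls to _mcount() qualify... */
		if (strcmp("_mcount", name))
			return false;

		/* ...preceded by: mflr r0 ; std r0,16(r1) ; bl _mcount */
		if (bl_insn[-1] == PPC_INST_STD_LR && bl_insn[-2] == PPC_INST_MFLR)
			return true;

		/* ...or the optimized variant: mflr r0 ; bl _mcount */
		if (bl_insn[-1] == PPC_INST_MFLR)
			return true;

		return false;
	}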

Also rename is_early_mcount_callsite() to is_mprofile_mcount_callsite(),
since that is what is being checked. The use of "early" is misleading,
since nothing about this function qualifies as early.

Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/module_64.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index a2636c250b7b..8413be31d6a4 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -463,8 +463,11 @@ static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
 }
 
 #ifdef CC_USING_MPROFILE_KERNEL
-static bool is_early_mcount_callsite(u32 *instruction)
+static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
 {
+	if (strcmp("_mcount", name))
+		return false;
+
 	/*
 	 * Check if this is one of the -mprofile-kernel sequences.
 	 */
@@ -496,8 +499,7 @@ static void squash_toc_save_inst(const char *name, unsigned long addr)
 #else
 static void squash_toc_save_inst(const char *name, unsigned long addr) { }
 
-/* without -mprofile-kernel, mcount calls are never early */
-static bool is_early_mcount_callsite(u32 *instruction)
+static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
 {
 	return false;
 }
@@ -505,11 +507,11 @@ static bool is_early_mcount_callsite(u32 *instruction)
 
 /* We expect a noop next: if it is, replace it with instruction to
    restore r2. */
-static int restore_r2(u32 *instruction, struct module *me)
+static int restore_r2(const char *name, u32 *instruction, struct module *me)
 {
 	u32 *prev_insn = instruction - 1;
 
-	if (is_early_mcount_callsite(prev_insn))
+	if (is_mprofile_mcount_callsite(name, prev_insn))
 		return 1;
 
 	/*
@@ -650,7 +652,8 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 				value = stub_for_addr(sechdrs, value, me);
 				if (!value)
 					return -ENOENT;
-				if (!restore_r2((u32 *)location + 1, me))
+				if (!restore_r2(strtab + sym->st_name,
+							(u32 *)location + 1, me))
 					return -ENOEXEC;
 
 				squash_toc_save_inst(strtab + sym->st_name, value);
-- 
2.16.2

* [PATCH v2 4/5] powerpc64/ftrace: Use the generic version of ftrace_replace_code()
From: Naveen N. Rao @ 2018-03-21 10:43 UTC (permalink / raw)
  To: Michael Ellerman, Steven Rostedt, Paul Mackerras
  Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga

Our implementation matches the generic version, which additionally
handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in
favor of the generic version.
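
For reference, the generic __ftrace_replace_code() is essentially the
code being removed below plus one extra case (a paraphrased sketch, not
verbatim kernel code):

	#include <linux/ftrace.h>

	static int __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
	{
		unsigned long ftrace_addr = ftrace_get_addr_new(rec);
		unsigned long ftrace_old_addr = ftrace_get_addr_curr(rec);

		switch (ftrace_update_record(rec, enable)) {
		case FTRACE_UPDATE_IGNORE:
			return 0;
		case FTRACE_UPDATE_MAKE_CALL:
			return ftrace_make_call(rec, ftrace_addr);
		case FTRACE_UPDATE_MAKE_NOP:
			return ftrace_make_nop(NULL, rec, ftrace_old_addr);
		case FTRACE_UPDATE_MODIFY_CALL:
			/* the case our arch copy did not handle */
			return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
		}
		return -1;
	}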

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/trace/ftrace.c | 36 ------------------------------------
 1 file changed, 36 deletions(-)

diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 4741fe112f05..80667128db3d 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -485,42 +485,6 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	return ret;
 }
 
-static int __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
-{
-	unsigned long ftrace_addr = (unsigned long)FTRACE_ADDR;
-	int ret;
-
-	ret = ftrace_update_record(rec, enable);
-
-	switch (ret) {
-	case FTRACE_UPDATE_IGNORE:
-		return 0;
-	case FTRACE_UPDATE_MAKE_CALL:
-		return ftrace_make_call(rec, ftrace_addr);
-	case FTRACE_UPDATE_MAKE_NOP:
-		return ftrace_make_nop(NULL, rec, ftrace_addr);
-	}
-
-	return 0;
-}
-
-void ftrace_replace_code(int enable)
-{
-	struct ftrace_rec_iter *iter;
-	struct dyn_ftrace *rec;
-	int ret;
-
-	for (iter = ftrace_rec_iter_start(); iter;
-	     iter = ftrace_rec_iter_next(iter)) {
-		rec = ftrace_rec_iter_record(iter);
-		ret = __ftrace_replace_code(rec, enable);
-		if (ret) {
-			ftrace_bug(ret, rec);
-			return;
-		}
-	}
-}
-
 /*
  * Use the default ftrace_modify_all_code, but without
  * stop_machine().
-- 
2.16.2

* [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
From: Naveen N. Rao @ 2018-03-21 10:43 UTC (permalink / raw)
  To: Michael Ellerman, Steven Rostedt, Paul Mackerras
  Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga

With -mprofile-kernel, we always save the full register state in
ftrace_caller(). While this works, it is inefficient if we're not
interested in the register state, such as when using the function
tracer.

Rename the existing ftrace_caller() to ftrace_regs_caller() and provide
a simpler implementation of ftrace_caller() for use when registers need
not be saved.
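
With the arch overrides of FTRACE_ADDR/FTRACE_REGS_ADDR removed (first
hunk below), call sites pick up the generic defaults from
include/linux/ftrace.h, which look roughly like:

	#ifndef FTRACE_ADDR
	#define FTRACE_ADDR ((unsigned long)ftrace_caller)
	#endif

	#ifndef FTRACE_REGS_ADDR
	#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
	# define FTRACE_REGS_ADDR ((unsigned long)ftrace_regs_caller)
	#else
	# define FTRACE_REGS_ADDR FTRACE_ADDR
	#endif
	#endif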

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ftrace.h              |   2 -
 arch/powerpc/include/asm/module.h              |   3 +
 arch/powerpc/kernel/module_64.c                |  28 +++-
 arch/powerpc/kernel/trace/ftrace.c             | 184 +++++++++++++++++++++++--
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  71 +++++++++-
 5 files changed, 262 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 9abddde372ab..f7a23c2dce74 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -49,8 +49,6 @@
 extern void _mcount(void);
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-# define FTRACE_ADDR ((unsigned long)ftrace_caller)
-# define FTRACE_REGS_ADDR FTRACE_ADDR
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
        /* relocation of mcount call site is the same as the address */
diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
index 7e28442827f1..2d16b6d9147d 100644
--- a/arch/powerpc/include/asm/module.h
+++ b/arch/powerpc/include/asm/module.h
@@ -43,6 +43,9 @@ struct mod_arch_specific {
 #ifdef CONFIG_DYNAMIC_FTRACE
 	unsigned long toc;
 	unsigned long tramp;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	unsigned long tramp_regs;
+#endif
 #endif
 
 	/* For module function descriptor dereference */
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 8413be31d6a4..f7667e2ebfcb 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -280,6 +280,10 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
 #ifdef CONFIG_DYNAMIC_FTRACE
 	/* make the trampoline to the ftrace_caller */
 	relocs++;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	/* an additional one for ftrace_regs_caller */
+	relocs++;
+#endif
 #endif
 
 	pr_debug("Looks like a total of %lu stubs, max\n", relocs);
@@ -765,7 +769,8 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
  * via the paca (in r13). The target (ftrace_caller()) is responsible for
  * saving and restoring the toc before returning.
  */
-static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module *me)
+static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs,
+				struct module *me, unsigned long addr)
 {
 	struct ppc64_stub_entry *entry;
 	unsigned int i, num_stubs;
@@ -792,9 +797,10 @@ static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module
 	memcpy(entry->jump, stub_insns, sizeof(stub_insns));
 
 	/* Stub uses address relative to kernel toc (from the paca) */
-	reladdr = (unsigned long)ftrace_caller - kernel_toc_addr();
+	reladdr = addr - kernel_toc_addr();
 	if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) {
-		pr_err("%s: Address of ftrace_caller out of range of kernel_toc.\n", me->name);
+		pr_err("%s: Address of %ps out of range of kernel_toc.\n",
+							me->name, (void *)addr);
 		return 0;
 	}
 
@@ -802,22 +808,30 @@ static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module
 	entry->jump[2] |= PPC_LO(reladdr);
 
 	/* Even though we don't use funcdata in the stub, it's needed elsewhere. */
-	entry->funcdata = func_desc((unsigned long)ftrace_caller);
+	entry->funcdata = func_desc(addr);
 	entry->magic = STUB_MAGIC;
 
 	return (unsigned long)entry;
 }
 #else
-static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module *me)
+static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs,
+				struct module *me, unsigned long addr)
 {
-	return stub_for_addr(sechdrs, (unsigned long)ftrace_caller, me);
+	return stub_for_addr(sechdrs, addr, me);
 }
 #endif
 
 int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
 {
 	mod->arch.toc = my_r2(sechdrs, mod);
-	mod->arch.tramp = create_ftrace_stub(sechdrs, mod);
+	mod->arch.tramp = create_ftrace_stub(sechdrs, mod,
+					(unsigned long)ftrace_caller);
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	mod->arch.tramp_regs = create_ftrace_stub(sechdrs, mod,
+					(unsigned long)ftrace_regs_caller);
+	if (!mod->arch.tramp_regs)
+		return -ENOENT;
+#endif
 
 	if (!mod->arch.tramp)
 		return -ENOENT;
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 80667128db3d..79d2924e75d5 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -357,6 +357,8 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
 	unsigned int op[2];
 	void *ip = (void *)rec->ip;
+	unsigned long entry, ptr, tramp;
+	struct module *mod = rec->arch.mod;
 
 	/* read where this goes */
 	if (probe_kernel_read(op, ip, sizeof(op)))
@@ -368,19 +370,44 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 		return -EINVAL;
 	}
 
-	/* If we never set up a trampoline to ftrace_caller, then bail */
-	if (!rec->arch.mod->arch.tramp) {
+	/* If we never set up ftrace trampoline(s), then bail */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	if (!mod->arch.tramp || !mod->arch.tramp_regs) {
+#else
+	if (!mod->arch.tramp) {
+#endif
 		pr_err("No ftrace trampoline\n");
 		return -EINVAL;
 	}
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	if (rec->flags & FTRACE_FL_REGS)
+		tramp = mod->arch.tramp_regs;
+	else
+#endif
+		tramp = mod->arch.tramp;
+
+	if (module_trampoline_target(mod, tramp, &ptr)) {
+		pr_err("Failed to get trampoline target\n");
+		return -EFAULT;
+	}
+
+	pr_devel("trampoline target %lx", ptr);
+
+	entry = ppc_global_function_entry((void *)addr);
+	/* This should match what was called */
+	if (ptr != entry) {
+		pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+		return -EINVAL;
+	}
+
 	/* Ensure branch is within 24 bits */
-	if (!create_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) {
+	if (!create_branch(ip, tramp, BRANCH_SET_LINK)) {
 		pr_err("Branch out of range\n");
 		return -EINVAL;
 	}
 
-	if (patch_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) {
+	if (patch_branch(ip, tramp, BRANCH_SET_LINK)) {
 		pr_err("REL24 out of range!\n");
 		return -EINVAL;
 	}
@@ -388,14 +415,6 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	return 0;
 }
 
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
-int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
-			unsigned long addr)
-{
-	return ftrace_make_call(rec, addr);
-}
-#endif
-
 #else  /* !CONFIG_PPC64: */
 static int
 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
@@ -472,6 +491,137 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 #endif /* CONFIG_MODULES */
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#ifdef CONFIG_MODULES
+static int
+__ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+					unsigned long addr)
+{
+	unsigned int op;
+	unsigned long ip = rec->ip;
+	unsigned long entry, ptr, tramp;
+	struct module *mod = rec->arch.mod;
+
+	/* If we never set up ftrace trampolines, then bail */
+	if (!mod->arch.tramp || !mod->arch.tramp_regs) {
+		pr_err("No ftrace trampoline\n");
+		return -EINVAL;
+	}
+
+	/* read where this goes */
+	if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
+		pr_err("Fetching opcode failed.\n");
+		return -EFAULT;
+	}
+
+	/* Make sure that this is still a 24-bit jump */
+	if (!is_bl_op(op)) {
+		pr_err("Not expected bl: opcode is %x\n", op);
+		return -EINVAL;
+	}
+
+	/* let's find where the pointer goes */
+	tramp = find_bl_target(ip, op);
+	entry = ppc_global_function_entry((void *)old_addr);
+
+	pr_devel("ip:%lx jumps to %lx", ip, tramp);
+
+	if (tramp != entry) {
+		/* old_addr is not within range, so we must have used a trampoline */
+		if (module_trampoline_target(mod, tramp, &ptr)) {
+			pr_err("Failed to get trampoline target\n");
+			return -EFAULT;
+		}
+
+		pr_devel("trampoline target %lx", ptr);
+
+		/* This should match what was called */
+		if (ptr != entry) {
+			pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+			return -EINVAL;
+		}
+	}
+
+	/* The new target may be within range */
+	if (test_24bit_addr(ip, addr)) {
+		/* within range */
+		if (patch_branch((unsigned int *)ip, addr, BRANCH_SET_LINK)) {
+			pr_err("REL24 out of range!\n");
+			return -EINVAL;
+		}
+
+		return 0;
+	}
+
+	if (rec->flags & FTRACE_FL_REGS)
+		tramp = mod->arch.tramp_regs;
+	else
+		tramp = mod->arch.tramp;
+
+	if (module_trampoline_target(mod, tramp, &ptr)) {
+		pr_err("Failed to get trampoline target\n");
+		return -EFAULT;
+	}
+
+	pr_devel("trampoline target %lx", ptr);
+
+	entry = ppc_global_function_entry((void *)addr);
+	/* This should match what was called */
+	if (ptr != entry) {
+		pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+		return -EINVAL;
+	}
+
+	/* Ensure branch is within 24 bits */
+	if (!create_branch((unsigned int *)ip, tramp, BRANCH_SET_LINK)) {
+		pr_err("Branch out of range\n");
+		return -EINVAL;
+	}
+
+	if (patch_branch((unsigned int *)ip, tramp, BRANCH_SET_LINK)) {
+		pr_err("REL24 out of range!\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+#endif
+
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			unsigned long addr)
+{
+	unsigned long ip = rec->ip;
+	unsigned int old, new;
+
+	/*
+	 * If the calling address is more than 24 bits away,
+	 * then we had to use a trampoline to make the call.
+	 * Otherwise just update the call site.
+	 */
+	if (test_24bit_addr(ip, addr) && test_24bit_addr(ip, old_addr)) {
+		/* within range */
+		old = ftrace_call_replace(ip, old_addr, 1);
+		new = ftrace_call_replace(ip, addr, 1);
+		return ftrace_modify_code(ip, old, new);
+	}
+
+#ifdef CONFIG_MODULES
+	/*
+	 * Out of range jumps are called from modules.
+	 */
+	if (!rec->arch.mod) {
+		pr_err("No module loaded\n");
+		return -EINVAL;
+	}
+
+	return __ftrace_modify_call(rec, old_addr, addr);
+#else
+	/* We should not get here without modules */
+	return -EINVAL;
+#endif /* CONFIG_MODULES */
+}
+#endif
+
 int ftrace_update_ftrace_func(ftrace_func_t func)
 {
 	unsigned long ip = (unsigned long)(&ftrace_call);
@@ -482,6 +632,16 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	new = ftrace_call_replace(ip, (unsigned long)func, 1);
 	ret = ftrace_modify_code(ip, old, new);
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	/* Also update the regs callback function */
+	if (!ret) {
+		ip = (unsigned long)(&ftrace_regs_call);
+		old = *(unsigned int *)&ftrace_regs_call;
+		new = ftrace_call_replace(ip, (unsigned long)func, 1);
+		ret = ftrace_modify_code(ip, old, new);
+	}
+#endif
+
 	return ret;
 }
 
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 8f2380304ef1..7b81db85f76e 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -20,8 +20,8 @@
 #ifdef CONFIG_DYNAMIC_FTRACE
 /*
  *
- * ftrace_caller() is the function that replaces _mcount() when ftrace is
- * active.
+ * ftrace_caller()/ftrace_regs_caller() is the function that replaces _mcount()
+ * when ftrace is active.
  *
  * We arrive here after a function A calls function B, and we are the trace
  * function for B. When we enter r1 points to A's stack frame, B has not yet
@@ -37,7 +37,7 @@
  * Our job is to save the register state into a struct pt_regs (on the stack)
  * and then arrange for the ftrace function to be called.
  */
-_GLOBAL(ftrace_caller)
+_GLOBAL(ftrace_regs_caller)
 	/* Save the original return address in A's stack frame */
 	std	r0,LRSAVE(r1)
 
@@ -100,8 +100,8 @@ _GLOBAL(ftrace_caller)
 	addi    r6, r1 ,STACK_FRAME_OVERHEAD
 
 	/* ftrace_call(r3, r4, r5, r6) */
-.globl ftrace_call
-ftrace_call:
+.globl ftrace_regs_call
+ftrace_regs_call:
 	bl	ftrace_stub
 	nop
 
@@ -162,6 +162,7 @@ ftrace_call:
 	bne-	livepatch_handler
 #endif
 
+ftrace_caller_common:
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 .globl ftrace_graph_call
 ftrace_graph_call:
@@ -182,6 +183,66 @@ ftrace_no_trace:
 	mtlr	r0
 	bctr
 
+_GLOBAL(ftrace_caller)
+	/* Save the original return address in A's stack frame */
+	std	r0, LRSAVE(r1)
+
+	/* Create our stack frame + pt_regs */
+	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+
+	/* Save all gprs to pt_regs */
+	SAVE_8GPRS(3, r1)
+
+	lbz	r3, PACA_FTRACE_DISABLED(r13)
+	cmpdi	r3, 0
+	beq	ftrace_no_trace
+
+	/* Get the _mcount() call site out of LR */
+	mflr	r7
+	std     r7, _NIP(r1)
+
+	/* Save callee's TOC in the ABI compliant location */
+	std	r2, 24(r1)
+	ld	r2, PACATOC(r13)	/* get kernel TOC in r2 */
+
+	addis	r3, r2, function_trace_op@toc@ha
+	addi	r3, r3, function_trace_op@toc@l
+	ld	r5, 0(r3)
+
+	/* Calculate ip from nip-4 into r3 for call below */
+	subi    r3, r7, MCOUNT_INSN_SIZE
+
+	/* Put the original return address in r4 as parent_ip */
+	mr	r4, r0
+
+	/* Set pt_regs to NULL */
+	li	r6, 0
+
+	/* ftrace_call(r3, r4, r5, r6) */
+.globl ftrace_call
+ftrace_call:
+	bl	ftrace_stub
+	nop
+
+	ld	r3, _NIP(r1)
+	mtctr	r3
+
+	/* Restore gprs */
+	REST_8GPRS(3,r1)
+
+	/* Restore callee's TOC */
+	ld	r2, 24(r1)
+
+	/* Pop our stack frame */
+	addi	r1, r1, SWITCH_FRAME_SIZE
+
+	/* Reload original LR */
+	ld	r0, LRSAVE(r1)
+	mtlr	r0
+
+	/* Handle function_graph or go back */
+	b	ftrace_caller_common
+
 #ifdef CONFIG_LIVEPATCH
 	/*
 	 * This function runs in the mcount context, between two functions. As
-- 
2.16.2

* Re: [PATCH v2 1/5] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths
From: Steven Rostedt @ 2018-03-21 13:46 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Paul Mackerras, linuxppc-dev, Anton Blanchard,
	Nicholas Piggin, sathnaga

On Wed, 21 Mar 2018 16:13:18 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> We have some C code that we call into from real mode where we cannot
> take any exceptions. Though the C functions themselves are mostly safe,
> if these functions are traced, there is a possibility that we may take
> an exception. For instance, in certain conditions, the ftrace code uses
> WARN(), which uses a 'trap' to do its job.
> 
> For such scenarios, introduce a new field in paca 'ftrace_disabled',
> which is checked on ftrace entry before continuing. This field can then
> be set to a non-zero value to disable/pause ftrace, and reset to zero to
> resume ftrace.

Looks fine to me:

Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---

* Re: [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
From: Steven Rostedt @ 2018-03-21 13:59 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Paul Mackerras, linuxppc-dev, Anton Blanchard,
	Nicholas Piggin, sathnaga

On Wed, 21 Mar 2018 16:13:22 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

>  int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
>  {
>  	mod->arch.toc = my_r2(sechdrs, mod);
> -	mod->arch.tramp = create_ftrace_stub(sechdrs, mod);
> +	mod->arch.tramp = create_ftrace_stub(sechdrs, mod,
> +					(unsigned long)ftrace_caller);
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> +	mod->arch.tramp_regs = create_ftrace_stub(sechdrs, mod,
> +					(unsigned long)ftrace_regs_caller);

So you only reference ftrace_regs_caller if you have
DYNAMIC_FTRACE_WITH_REGS defined?

> +	if (!mod->arch.tramp_regs)
> +		return -ENOENT;
> +#endif
>  
>  	if (!mod->arch.tramp)
>  		return -ENOENT;


> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index 8f2380304ef1..7b81db85f76e 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -20,8 +20,8 @@
>  #ifdef CONFIG_DYNAMIC_FTRACE
>  /*
>   *
> - * ftrace_caller() is the function that replaces _mcount() when ftrace is
> - * active.
> + * ftrace_caller()/ftrace_regs_caller() is the function that replaces _mcount()
> + * when ftrace is active.
>   *
>   * We arrive here after a function A calls function B, and we are the trace
>   * function for B. When we enter r1 points to A's stack frame, B has not yet
> @@ -37,7 +37,7 @@
>   * Our job is to save the register state into a struct pt_regs (on the stack)
>   * and then arrange for the ftrace function to be called.
>   */

Perhaps you want to add:

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS

here.

> -_GLOBAL(ftrace_caller)
> +_GLOBAL(ftrace_regs_caller)
>  	/* Save the original return address in A's stack frame */
>  	std	r0,LRSAVE(r1)
>  
> @@ -100,8 +100,8 @@ _GLOBAL(ftrace_caller)
>  	addi    r6, r1 ,STACK_FRAME_OVERHEAD
>  
>  	/* ftrace_call(r3, r4, r5, r6) */
> -.globl ftrace_call
> -ftrace_call:
> +.globl ftrace_regs_call
> +ftrace_regs_call:
>  	bl	ftrace_stub
>  	nop
>  
> @@ -162,6 +162,7 @@ ftrace_call:
>  	bne-	livepatch_handler
>  #endif
>  
> +ftrace_caller_common:
>  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>  .globl ftrace_graph_call
>  ftrace_graph_call:
> @@ -182,6 +183,66 @@ ftrace_no_trace:
>  	mtlr	r0
>  	bctr
>  
> +_GLOBAL(ftrace_caller)
> +	/* Save the original return address in A's stack frame */
> +	std	r0, LRSAVE(r1)
> +
> +	/* Create our stack frame + pt_regs */
> +	stdu	r1, -SWITCH_FRAME_SIZE(r1)
> +
> +	/* Save all gprs to pt_regs */
> +	SAVE_8GPRS(3, r1)
> +
> +	lbz	r3, PACA_FTRACE_DISABLED(r13)
> +	cmpdi	r3, 0
> +	beq	ftrace_no_trace

Of course you would need to keep the ftrace_no_trace part out of the
#if block then.

-- Steve

> +
> +	/* Get the _mcount() call site out of LR */
> +	mflr	r7
> +	std     r7, _NIP(r1)
> +
> +	/* Save callee's TOC in the ABI compliant location */
> +	std	r2, 24(r1)
> +	ld	r2, PACATOC(r13)	/* get kernel TOC in r2 */
> +
> +	addis	r3, r2, function_trace_op@toc@ha
> +	addi	r3, r3, function_trace_op@toc@l
> +	ld	r5, 0(r3)
> +
> +	/* Calculate ip from nip-4 into r3 for call below */
> +	subi    r3, r7, MCOUNT_INSN_SIZE
> +
> +	/* Put the original return address in r4 as parent_ip */
> +	mr	r4, r0
> +
> +	/* Set pt_regs to NULL */
> +	li	r6, 0
> +
> +	/* ftrace_call(r3, r4, r5, r6) */
> +.globl ftrace_call
> +ftrace_call:
> +	bl	ftrace_stub
> +	nop
> +
> +	ld	r3, _NIP(r1)
> +	mtctr	r3
> +
> +	/* Restore gprs */
> +	REST_8GPRS(3,r1)
> +
> +	/* Restore callee's TOC */
> +	ld	r2, 24(r1)
> +
> +	/* Pop our stack frame */
> +	addi	r1, r1, SWITCH_FRAME_SIZE
> +
> +	/* Reload original LR */
> +	ld	r0, LRSAVE(r1)
> +	mtlr	r0
> +
> +	/* Handle function_graph or go back */
> +	b	ftrace_caller_common
> +
>  #ifdef CONFIG_LIVEPATCH
>  	/*
>  	 * This function runs in the mcount context, between two functions. As

* Re: [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
From: Naveen N. Rao @ 2018-03-21 14:37 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Anton Blanchard, linuxppc-dev, Michael Ellerman, Nicholas Piggin,
	Paul Mackerras, sathnaga

Steven Rostedt wrote:
> On Wed, 21 Mar 2018 16:13:22 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
>
>>  int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
>>  {
>>  	mod->arch.toc = my_r2(sechdrs, mod);
>> -	mod->arch.tramp = create_ftrace_stub(sechdrs, mod);
>> +	mod->arch.tramp = create_ftrace_stub(sechdrs, mod,
>> +					(unsigned long)ftrace_caller);
>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>> +	mod->arch.tramp_regs = create_ftrace_stub(sechdrs, mod,
>> +					(unsigned long)ftrace_regs_caller);
>
> So you only reference ftrace_regs_caller if you have
> DYNAMIC_FTRACE_WITH_REGS defined?

Yes.

>
>> +	if (!mod->arch.tramp_regs)
>> +		return -ENOENT;
>> +#endif
>>
>>  	if (!mod->arch.tramp)
>>  		return -ENOENT;
>
>
>> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> index 8f2380304ef1..7b81db85f76e 100644
>> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> @@ -20,8 +20,8 @@
>>  #ifdef CONFIG_DYNAMIC_FTRACE
>>  /*
>>   *
>> - * ftrace_caller() is the function that replaces _mcount() when ftrace is
>> - * active.
>> + * ftrace_caller()/ftrace_regs_caller() is the function that replaces _mcount()
>> + * when ftrace is active.
>>   *
>>   * We arrive here after a function A calls function B, and we are the trace
>>   * function for B. When we enter r1 points to A's stack frame, B has not yet
>> @@ -37,7 +37,7 @@
>>   * Our job is to save the register state into a struct pt_regs (on the stack)
>>   * and then arrange for the ftrace function to be called.
>>   */
>
> Perhaps you want to add:
>
> #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>
> here.

I think that will always be set here. ftrace_64_mprofile.S is only built
for -mprofile-kernel and we select HAVE_DYNAMIC_FTRACE_WITH_REGS if
MPROFILE_KERNEL is enabled. It looks like there is no way to unset just
CONFIG_DYNAMIC_FTRACE_WITH_REGS and so, for -mprofile-kernel, we can
assume it is always set?

- Naveen

* Re: [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
  2018-03-21 14:37     ` Naveen N. Rao
@ 2018-03-21 15:22       ` Steven Rostedt
  2018-03-21 15:29         ` Naveen N. Rao
  0 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2018-03-21 15:22 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Anton Blanchard, linuxppc-dev, Michael Ellerman, Nicholas Piggin,
	Paul Mackerras, sathnaga

On Wed, 21 Mar 2018 20:07:32 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> I think that will always be set here. ftrace_64_mprofile.S is only built 
> for -mprofile-kernel and we select HAVE_DYNAMIC_FTRACE_WITH_REGS if 
> MPROFILE_KERNEL is enabled. It looks like there is no way to unset just 
> CONFIG_DYNAMIC_FTRACE_WITH_REGS and so, for -mprofile-kernel, we can 
> assume it is always set?

OK, if that's the case, then I'm fine with it.

-- Steve

* Re: [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
  2018-03-21 15:22       ` Steven Rostedt
@ 2018-03-21 15:29         ` Naveen N. Rao
  2018-03-21 15:31           ` Steven Rostedt
  0 siblings, 1 reply; 14+ messages in thread
From: Naveen N. Rao @ 2018-03-21 15:29 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Anton Blanchard, linuxppc-dev, Michael Ellerman, Nicholas Piggin,
	Paul Mackerras, sathnaga

Steven Rostedt wrote:
> On Wed, 21 Mar 2018 20:07:32 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
>
>> I think that will always be set here. ftrace_64_mprofile.S is only built
>> for -mprofile-kernel and we select HAVE_DYNAMIC_FTRACE_WITH_REGS if
>> MPROFILE_KERNEL is enabled. It looks like there is no way to unset just
>> CONFIG_DYNAMIC_FTRACE_WITH_REGS and so, for -mprofile-kernel, we can
>> assume it is always set?
>
> OK, if that's the case, then I'm fine with it.

Thanks for the review!

- Naveen

* Re: [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
  2018-03-21 15:29         ` Naveen N. Rao
@ 2018-03-21 15:31           ` Steven Rostedt
  2018-03-21 19:10             ` Naveen N. Rao
  0 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2018-03-21 15:31 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Anton Blanchard, linuxppc-dev, Michael Ellerman, Nicholas Piggin,
	Paul Mackerras, sathnaga

On Wed, 21 Mar 2018 20:59:03 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> Thanks for the review!

You're welcome. Note, I did put "Acked-by" and not "Reviewed-by"
because my "Reviewed-by" is usually a bit more thorough than what I did
for your patches. That's because it's been a while since I have worked
on PPC and don't feel comfortable adding "Reviewed-by" for PPC code. :-/

-- Steve

* Re: [PATCH v2 5/5] powerpc64/ftrace: Implement support for ftrace_regs_caller()
From: Naveen N. Rao @ 2018-03-21 19:10 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Anton Blanchard, linuxppc-dev, Michael Ellerman, Nicholas Piggin,
	Paul Mackerras, sathnaga

Steven Rostedt wrote:
> On Wed, 21 Mar 2018 20:59:03 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
>
>> Thanks for the review!
>
> You're welcome. Note, I did put "Acked-by" and not "Reviewed-by"
> because my "Reviewed-by" is usually a bit more thorough than what I did
> for your patches. That's because it's been a while since I have worked
> on PPC and don't feel comfortable adding "Reviewed-by" for PPC code. :-/

Sure, I understand. As long as the rest of the changes look fine, that's=20
good. Michael Ellerman wrote the -mprofile-kernel ftrace_caller()=20
implementation, so I'll look forward to his review of that part.

- Naveen

* Re: [PATCH v2 1/5] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths
From: Naveen N. Rao @ 2018-03-22  7:53 UTC (permalink / raw)
  To: Michael Ellerman, Paul Mackerras, Steven Rostedt
  Cc: Anton Blanchard, linuxppc-dev, Nicholas Piggin, sathnaga

Naveen N. Rao wrote:
> We have some C code that we call into from real mode where we cannot
> take any exceptions. Though the C functions themselves are mostly safe,
> if these functions are traced, there is a possibility that we may take
> an exception. For instance, in certain conditions, the ftrace code uses
> WARN(), which uses a 'trap' to do its job.
>
> For such scenarios, introduce a new field in paca 'ftrace_disabled',
> which is checked on ftrace entry before continuing. This field can then
> be set to a non-zero value to disable/pause ftrace, and reset to zero to
> resume ftrace.
>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
> Changes since v1:
> - Do not guard the paca field check within CONFIG_KVM.
> - Move the early return code out of the usual path to optimize for
>   better cache performance in the normal case when ftrace is enabled.
>
>  arch/powerpc/include/asm/paca.h                |  1 +
>  arch/powerpc/kernel/asm-offsets.c              |  1 +
>  arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 14 ++++++++++++++
>  arch/powerpc/kernel/trace/ftrace_64_pg.S       |  4 ++++
>  4 files changed, 20 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index d2bf71dddbef..4f47adc2a408 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -211,6 +211,7 @@ struct paca_struct {
>  	u16 in_mce;
>  	u8 hmi_event_available;		/* HMI event is available */
>  	u8 hmi_p9_special_emu;		/* HMI P9 special emulation */
> +	u8 ftrace_disabled;
>  #endif
>
>  	/* Stuff for accurate time accounting */
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index ea5eb91b836e..8e4fc96ff6bc 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -240,6 +240,7 @@ int main(void)
>  	OFFSET(PACA_RFI_FLUSH_FALLBACK_AREA, paca_struct, rfi_flush_fallback_area);
>  	OFFSET(PACA_EXRFI, paca_struct, exrfi);
>  	OFFSET(PACA_L1D_FLUSH_SIZE, paca_struct, l1d_flush_size);
> +	OFFSET(PACA_FTRACE_DISABLED, paca_struct, ftrace_disabled);
>
>  #endif
>  	OFFSET(PACAHWCPUID, paca_struct, hw_cpu_id);
> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index 3f3e81852422..8f2380304ef1 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -47,6 +47,12 @@ _GLOBAL(ftrace_caller)
>  	/* Save all gprs to pt_regs */
>  	SAVE_GPR(0, r1)
>  	SAVE_10GPRS(2, r1)
> +
> +	/* Ok to continue? */
> +	lbz	r3, PACA_FTRACE_DISABLED(r13)
> +	cmpdi	r3, 0
> +	beq	ftrace_no_trace
	^^^ bne

:facepalm:
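
i.e. the check should read:

	lbz	r3, PACA_FTRACE_DISABLED(r13)
	cmpdi	r3, 0
	bne	ftrace_no_trace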

- Naveen
