* [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes
@ 2018-03-22 10:25 Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 1/6] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths Naveen N. Rao
` (5 more replies)
0 siblings, 6 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
This is v3 of the patches posted at:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg130652.html
This series has been tested using mambo for p8 (hash) and p9 (radix).
The first two patches fix a kernel oops when function tracing is enabled
while using KVM.
Patch 3 is new and changes how ftrace is disabled before kexec.
Patch 4 tightens how we detect _mcount() call sites for -mprofile-kernel
during module loading.
The last two patches implement support for ftrace_caller() to
conditionally save the register state. This speeds up the function
tracer a bit. The existing ftrace_caller() is renamed to
ftrace_regs_caller() since we save the entire pt_regs today. A new
implementation of ftrace_caller() that saves the minimum register state
is provided. We switch between the two variants through
ftrace_modify_call(). The necessary support to call into the two
different variants from modules is also added.
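For context, whether a given call site is patched to call ftrace_caller() or
ftrace_regs_caller() is driven by the tracer's ftrace_ops flags. A minimal
sketch of how a tracer opts in to the full register state (illustrative only,
not part of this series):

#include <linux/ftrace.h>

/* The callback sees a valid pt_regs only when entered via ftrace_regs_caller() */
static void my_trace_func(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *op, struct pt_regs *regs)
{
}

static struct ftrace_ops my_ops = {
	.func	= my_trace_func,
	/* with SAVE_REGS, call sites use ftrace_regs_caller(); without it,
	 * the lighter ftrace_caller() suffices */
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};

/* register_ftrace_function(&my_ops); */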
- Naveen
Naveen N. Rao (6):
powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
paths
powerpc64/ftrace: Disable ftrace during kvm guest entry/exit
powerpc/kexec: Disable ftrace before switching to the new kernel
powerpc64/module: Tighten detection of mcount call sites with
-mprofile-kernel
powerpc64/ftrace: Use the generic version of ftrace_replace_code()
powerpc64/ftrace: Implement support for ftrace_regs_caller()
arch/powerpc/include/asm/ftrace.h | 2 -
arch/powerpc/include/asm/module.h | 3 +
arch/powerpc/include/asm/paca.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kernel/machine_kexec.c | 6 +-
arch/powerpc/kernel/module_64.c | 43 +++--
arch/powerpc/kernel/trace/ftrace.c | 210 ++++++++++++++++++++-----
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 85 +++++++++-
arch/powerpc/kernel/trace/ftrace_64_pg.S | 4 +
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 +
10 files changed, 296 insertions(+), 67 deletions(-)
--
2.16.2
* [PATCH v3 1/6] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths
2018-03-22 10:25 [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes Naveen N. Rao
@ 2018-03-22 10:25 ` Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 2/6] powerpc64/ftrace: Disable ftrace during kvm guest entry/exit Naveen N. Rao
` (4 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
We have some C code that we call into from real mode, where we cannot
take any exceptions. Though the C functions themselves are mostly safe,
if they are traced, there is a possibility that we may take an
exception. For instance, under certain conditions, the ftrace code uses
WARN(), which relies on a 'trap' to do its job.
For such scenarios, introduce a new paca field, 'ftrace_disabled',
which is checked on ftrace entry before continuing. This field can then
be set to a non-zero value to disable/pause ftrace, and reset to zero to
resume it.
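For illustration, C code entering such an unsafe path could pause and resume
ftrace as below (the helper names are hypothetical; only the paca field comes
from this patch):

#include <asm/paca.h>

/* Hypothetical helpers around the new field; not part of this patch. */
static inline void ppc64_ftrace_pause(void)
{
	get_paca()->ftrace_disabled = 1;	/* ftrace entry code bails out */
}

static inline void ppc64_ftrace_resume(void)
{
	get_paca()->ftrace_disabled = 0;	/* tracing proceeds as usual */
}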
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
Changes since v2:
- Move paca->ftrace_disabled out of CONFIG_BOOK3S_64.
- Disable tracing when asked for, not the other way around.
arch/powerpc/include/asm/paca.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 14 ++++++++++++++
arch/powerpc/kernel/trace/ftrace_64_pg.S | 4 ++++
4 files changed, 20 insertions(+)
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index d2bf71dddbef..5be57c20c8c5 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -164,6 +164,7 @@ struct paca_struct {
u8 io_sync; /* writel() needs spin_unlock sync */
u8 irq_work_pending; /* IRQ_WORK interrupt while soft-disable */
u8 nap_state_lost; /* NV GPR values lost in power7_idle */
+ u8 ftrace_disabled; /* Hard disable ftrace */
u64 sprg_vdso; /* Saved user-visible sprg */
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
u64 tm_scratch; /* TM scratch area for reclaim */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index ea5eb91b836e..775c847f0e33 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -180,6 +180,7 @@ int main(void)
OFFSET(PACAKMSR, paca_struct, kernel_msr);
OFFSET(PACAIRQSOFTMASK, paca_struct, irq_soft_mask);
OFFSET(PACAIRQHAPPENED, paca_struct, irq_happened);
+ OFFSET(PACA_FTRACE_DISABLED, paca_struct, ftrace_disabled);
#ifdef CONFIG_PPC_BOOK3S
OFFSET(PACACONTEXTID, paca_struct, mm_ctx_id);
#ifdef CONFIG_PPC_MM_SLICES
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 3f3e81852422..9d234835e1e6 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -47,6 +47,12 @@ _GLOBAL(ftrace_caller)
/* Save all gprs to pt_regs */
SAVE_GPR(0, r1)
SAVE_10GPRS(2, r1)
+
+ /* Ok to continue? */
+ lbz r3, PACA_FTRACE_DISABLED(r13)
+ cmpdi r3, 0
+ bne ftrace_no_trace
+
SAVE_10GPRS(12, r1)
SAVE_10GPRS(22, r1)
@@ -168,6 +174,14 @@ _GLOBAL(ftrace_graph_stub)
_GLOBAL(ftrace_stub)
blr
+ftrace_no_trace:
+ mflr r3
+ mtctr r3
+ REST_GPR(3, r1)
+ addi r1, r1, SWITCH_FRAME_SIZE
+ mtlr r0
+ bctr
+
#ifdef CONFIG_LIVEPATCH
/*
* This function runs in the mcount context, between two functions. As
diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.S b/arch/powerpc/kernel/trace/ftrace_64_pg.S
index f095358da96e..3d8186832a34 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_pg.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_pg.S
@@ -16,6 +16,10 @@
#ifdef CONFIG_DYNAMIC_FTRACE
_GLOBAL_TOC(ftrace_caller)
+ lbz r3, PACA_FTRACE_DISABLED(r13)
+ cmpdi r3, 0
+ bnelr
+
/* Taken from output of objdump from lib64/glibc */
mflr r3
ld r11, 0(r1)
--
2.16.2
* [PATCH v3 2/6] powerpc64/ftrace: Disable ftrace during kvm guest entry/exit
2018-03-22 10:25 [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 1/6] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths Naveen N. Rao
@ 2018-03-22 10:25 ` Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel Naveen N. Rao
` (3 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
During guest entry/exit, we switch over to/from the guest MMU context.
While doing so, we set our state to KVM_GUEST_MODE_HOST_HV to note that
we cannot take any exceptions in the hypervisor code. Since ftrace may
be enabled, and since tracing can result in us taking a trap, disable
ftrace by setting paca->ftrace_disabled. Once we exit the guest and
restore the host MMU context, re-enable ftrace.
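In C terms, the ordering the assembly change enforces is roughly as follows
(illustrative pseudocode only; the actual change is in
book3s_hv_rmhandlers.S):

#include <asm/paca.h>
#include <asm/kvm_book3s_asm.h>	/* KVM_GUEST_MODE_* */

/* Illustrative pseudocode only: a C rendering of the asm ordering. */
static void hv_guest_ftrace_ordering(void)
{
	/* entry: disable ftrace before we lose the ability to take traps */
	get_paca()->ftrace_disabled = 1;
	get_paca()->kvm_hstate.in_guest = KVM_GUEST_MODE_HOST_HV;

	/* ... switch to the guest MMU context and run the guest ... */

	/* exit: leave guest mode first, then allow tracing again */
	get_paca()->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE;
	get_paca()->ftrace_disabled = 0;
}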
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index f31f357b8c5a..9292087adb68 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -600,6 +600,10 @@ kvmppc_hv_entry:
/* Save R1 in the PACA */
std r1, HSTATE_HOST_R1(r13)
+ /* Disable ftrace since we can't take a trap any more */
+ li r6, 1
+ stb r6, PACA_FTRACE_DISABLED(r13)
+
li r6, KVM_GUEST_MODE_HOST_HV
stb r6, HSTATE_IN_GUEST(r13)
@@ -2048,6 +2052,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
/* Unset guest mode */
li r0, KVM_GUEST_MODE_NONE
stb r0, HSTATE_IN_GUEST(r13)
+ li r0, 0
+ stb r0, PACA_FTRACE_DISABLED(r13)
ld r0, SFS+PPC_LR_STKOFF(r1)
addi r1, r1, SFS
@@ -3379,6 +3385,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
ld r8, KVM_HOST_LPCR(r10)
mtspr SPRN_LPCR, r8
isync
+ li r0, 0
+ stb r0, PACA_FTRACE_DISABLED(r13)
li r0, KVM_GUEST_MODE_NONE
stb r0, HSTATE_IN_GUEST(r13)
--
2.16.2
* [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel
2018-03-22 10:25 [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 1/6] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 2/6] powerpc64/ftrace: Disable ftrace during kvm guest entry/exit Naveen N. Rao
@ 2018-03-22 10:25 ` Naveen N. Rao
2018-03-26 22:30 ` Michael Ellerman
2018-03-22 10:25 ` [PATCH v3 4/6] powerpc64/module: Tighten detection of mcount call sites with -mprofile-kernel Naveen N. Rao
` (2 subsequent siblings)
5 siblings, 1 reply; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
If the function_graph tracer is enabled during kexec, we see the below
exception in the simulator:
root@(none):/# kexec -e
kvm: exiting hardware virtualization
kexec_core: Starting new kernel
[ 19.262020070,5] OPAL: Switch to big-endian OS
kexec: Starting switchover sequence.
Interrupt to 0xC000000000004380 from 0xC000000000004380
** Execution stopped: Continuous Interrupt, Instruction caused exception, **
Now that we have a more effective way to disable ftrace, let's use that
before switching to a new kernel during kexec.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
arch/powerpc/kernel/machine_kexec.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index 2694d078741d..4a1b24a9dd61 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -95,16 +95,14 @@ void arch_crash_save_vmcoreinfo(void)
*/
void machine_kexec(struct kimage *image)
{
- int save_ftrace_enabled;
-
- save_ftrace_enabled = __ftrace_enabled_save();
+ get_paca()->ftrace_disabled = 1;
if (ppc_md.machine_kexec)
ppc_md.machine_kexec(image);
else
default_machine_kexec(image);
- __ftrace_enabled_restore(save_ftrace_enabled);
+ get_paca()->ftrace_disabled = 0;
/* Fall back to normal restart if we're still alive. */
machine_restart(NULL);
--
2.16.2
* [PATCH v3 4/6] powerpc64/module: Tighten detection of mcount call sites with -mprofile-kernel
2018-03-22 10:25 [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes Naveen N. Rao
` (2 preceding siblings ...)
2018-03-22 10:25 ` [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel Naveen N. Rao
@ 2018-03-22 10:25 ` Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 5/6] powerpc64/ftrace: Use the generic version of ftrace_replace_code() Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 6/6] powerpc64/ftrace: Implement support for ftrace_regs_caller() Naveen N. Rao
5 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
For R_PPC64_REL24 relocations, we suppress emitting instructions for the
TOC load/restore in the relocation stub if the relocation is for an
_mcount() call when using the -mprofile-kernel ABI.
To detect this, we check if the preceding instructions match the
standard sequences emitted by gcc: either the two-instruction sequence
of 'mflr r0; std r0,16(r1)', or the more optimized variant of a single
'mflr r0'. This is not sufficient, since nothing prevents users from
hand-coding sequences involving a 'mflr r0' followed by a 'bl'.
When removing the TOC save instruction from the stub, we additionally
check that the symbol is "_mcount". Add the same check here as well.
Also rename is_early_mcount_callsite() to is_mprofile_mcount_callsite(),
since that is what is being checked. The use of "early" is misleading,
since nothing about this function qualifies as early.
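Roughly, the tightened check looks like the sketch below, where 'instruction'
points at the 'bl _mcount' call site (a sketch only; see the diff for the
actual change):

#include <linux/string.h>
#include <linux/types.h>

#define PPC_INST_MFLR	0x7c0802a6	/* mflr r0 */
#define PPC_INST_STD_LR	0xf8010010	/* std  r0,16(r1) */

static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
{
	/* only an actual _mcount call qualifies */
	if (strcmp("_mcount", name))
		return false;

	/* the two-instruction sequence: mflr r0; std r0,16(r1) */
	if (instruction[-1] == PPC_INST_STD_LR &&
	    instruction[-2] == PPC_INST_MFLR)
		return true;

	/* the optimized variant: a single mflr r0 */
	if (instruction[-1] == PPC_INST_MFLR)
		return true;

	return false;
}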
Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
arch/powerpc/kernel/module_64.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index a2636c250b7b..8413be31d6a4 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -463,8 +463,11 @@ static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
}
#ifdef CC_USING_MPROFILE_KERNEL
-static bool is_early_mcount_callsite(u32 *instruction)
+static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
{
+ if (strcmp("_mcount", name))
+ return false;
+
/*
* Check if this is one of the -mprofile-kernel sequences.
*/
@@ -496,8 +499,7 @@ static void squash_toc_save_inst(const char *name, unsigned long addr)
#else
static void squash_toc_save_inst(const char *name, unsigned long addr) { }
-/* without -mprofile-kernel, mcount calls are never early */
-static bool is_early_mcount_callsite(u32 *instruction)
+static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
{
return false;
}
@@ -505,11 +507,11 @@ static bool is_early_mcount_callsite(u32 *instruction)
/* We expect a noop next: if it is, replace it with instruction to
restore r2. */
-static int restore_r2(u32 *instruction, struct module *me)
+static int restore_r2(const char *name, u32 *instruction, struct module *me)
{
u32 *prev_insn = instruction - 1;
- if (is_early_mcount_callsite(prev_insn))
+ if (is_mprofile_mcount_callsite(name, prev_insn))
return 1;
/*
@@ -650,7 +652,8 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
value = stub_for_addr(sechdrs, value, me);
if (!value)
return -ENOENT;
- if (!restore_r2((u32 *)location + 1, me))
+ if (!restore_r2(strtab + sym->st_name,
+ (u32 *)location + 1, me))
return -ENOEXEC;
squash_toc_save_inst(strtab + sym->st_name, value);
--
2.16.2
* [PATCH v3 5/6] powerpc64/ftrace: Use the generic version of ftrace_replace_code()
2018-03-22 10:25 [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes Naveen N. Rao
` (3 preceding siblings ...)
2018-03-22 10:25 ` [PATCH v3 4/6] powerpc64/module: Tighten detection of mcount call sites with -mprofile-kernel Naveen N. Rao
@ 2018-03-22 10:25 ` Naveen N. Rao
2018-03-22 10:25 ` [PATCH v3 6/6] powerpc64/ftrace: Implement support for ftrace_regs_caller() Naveen N. Rao
5 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
Our implementation matches that of the generic version, which also
handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in
favor of the generic version.
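For reference, the generic version (simplified from kernel/trace/ftrace.c of
this vintage) handles the extra case like so:

#include <linux/ftrace.h>

static int __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
{
	unsigned long ftrace_old_addr;
	unsigned long ftrace_addr;
	int ret;

	ftrace_addr = ftrace_get_addr_new(rec);
	ftrace_old_addr = ftrace_get_addr_curr(rec);

	ret = ftrace_update_record(rec, enable);

	switch (ret) {
	case FTRACE_UPDATE_IGNORE:
		return 0;
	case FTRACE_UPDATE_MAKE_CALL:
		return ftrace_make_call(rec, ftrace_addr);
	case FTRACE_UPDATE_MAKE_NOP:
		return ftrace_make_nop(NULL, rec, ftrace_old_addr);
	case FTRACE_UPDATE_MODIFY_CALL:
		/* the case our powerpc copy did not handle */
		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
	}

	return -1;
}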
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace.c | 36 ------------------------------------
1 file changed, 36 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 4741fe112f05..80667128db3d 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -485,42 +485,6 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
return ret;
}
-static int __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
-{
- unsigned long ftrace_addr = (unsigned long)FTRACE_ADDR;
- int ret;
-
- ret = ftrace_update_record(rec, enable);
-
- switch (ret) {
- case FTRACE_UPDATE_IGNORE:
- return 0;
- case FTRACE_UPDATE_MAKE_CALL:
- return ftrace_make_call(rec, ftrace_addr);
- case FTRACE_UPDATE_MAKE_NOP:
- return ftrace_make_nop(NULL, rec, ftrace_addr);
- }
-
- return 0;
-}
-
-void ftrace_replace_code(int enable)
-{
- struct ftrace_rec_iter *iter;
- struct dyn_ftrace *rec;
- int ret;
-
- for (iter = ftrace_rec_iter_start(); iter;
- iter = ftrace_rec_iter_next(iter)) {
- rec = ftrace_rec_iter_record(iter);
- ret = __ftrace_replace_code(rec, enable);
- if (ret) {
- ftrace_bug(ret, rec);
- return;
- }
- }
-}
-
/*
* Use the default ftrace_modify_all_code, but without
* stop_machine().
--
2.16.2
* [PATCH v3 6/6] powerpc64/ftrace: Implement support for ftrace_regs_caller()
2018-03-22 10:25 [PATCH v3 0/6] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes Naveen N. Rao
` (4 preceding siblings ...)
2018-03-22 10:25 ` [PATCH v3 5/6] powerpc64/ftrace: Use the generic version of ftrace_replace_code() Naveen N. Rao
@ 2018-03-22 10:25 ` Naveen N. Rao
5 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-22 10:25 UTC
To: Michael Ellerman, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
With -mprofile-kernel, we always save the full register state in
ftrace_caller(). While this works, it is inefficient if we're not
interested in the register state, such as when we're only using the
function tracer.
Rename the existing ftrace_caller() as ftrace_regs_caller() and provide
a simpler implementation for ftrace_caller() that is used when registers
are not required to be saved.
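Once the arch-specific FTRACE_ADDR override is dropped (see the ftrace.h hunk
below), the generic code resolves the call target per-record. A simplified
sketch of that selection (the real ftrace_get_addr_new() also considers
per-ops trampolines):

#include <linux/ftrace.h>

/*
 * Sketch: FTRACE_ADDR is ftrace_caller(); with
 * CONFIG_DYNAMIC_FTRACE_WITH_REGS, FTRACE_REGS_ADDR is ftrace_regs_caller().
 */
static unsigned long pick_call_target(struct dyn_ftrace *rec)
{
	if (rec->flags & FTRACE_FL_REGS)
		return (unsigned long)FTRACE_REGS_ADDR;

	return (unsigned long)FTRACE_ADDR;
}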
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
Changes since v2:
- Disable ftrace when asked for, in ftrace_caller().
arch/powerpc/include/asm/ftrace.h | 2 -
arch/powerpc/include/asm/module.h | 3 +
arch/powerpc/kernel/module_64.c | 28 +++-
arch/powerpc/kernel/trace/ftrace.c | 184 +++++++++++++++++++++++--
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 71 +++++++++-
5 files changed, 262 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 9abddde372ab..f7a23c2dce74 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -49,8 +49,6 @@
extern void _mcount(void);
#ifdef CONFIG_DYNAMIC_FTRACE
-# define FTRACE_ADDR ((unsigned long)ftrace_caller)
-# define FTRACE_REGS_ADDR FTRACE_ADDR
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/* reloction of mcount call site is the same as the address */
diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
index 7e28442827f1..2d16b6d9147d 100644
--- a/arch/powerpc/include/asm/module.h
+++ b/arch/powerpc/include/asm/module.h
@@ -43,6 +43,9 @@ struct mod_arch_specific {
#ifdef CONFIG_DYNAMIC_FTRACE
unsigned long toc;
unsigned long tramp;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ unsigned long tramp_regs;
+#endif
#endif
/* For module function descriptor dereference */
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 8413be31d6a4..f7667e2ebfcb 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -280,6 +280,10 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
#ifdef CONFIG_DYNAMIC_FTRACE
/* make the trampoline to the ftrace_caller */
relocs++;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ /* an additional one for ftrace_regs_caller */
+ relocs++;
+#endif
#endif
pr_debug("Looks like a total of %lu stubs, max\n", relocs);
@@ -765,7 +769,8 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
* via the paca (in r13). The target (ftrace_caller()) is responsible for
* saving and restoring the toc before returning.
*/
-static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module *me)
+static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs,
+ struct module *me, unsigned long addr)
{
struct ppc64_stub_entry *entry;
unsigned int i, num_stubs;
@@ -792,9 +797,10 @@ static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module
memcpy(entry->jump, stub_insns, sizeof(stub_insns));
/* Stub uses address relative to kernel toc (from the paca) */
- reladdr = (unsigned long)ftrace_caller - kernel_toc_addr();
+ reladdr = addr - kernel_toc_addr();
if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) {
- pr_err("%s: Address of ftrace_caller out of range of kernel_toc.\n", me->name);
+ pr_err("%s: Address of %ps out of range of kernel_toc.\n",
+ me->name, (void *)addr);
return 0;
}
@@ -802,22 +808,30 @@ static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module
entry->jump[2] |= PPC_LO(reladdr);
/* Eventhough we don't use funcdata in the stub, it's needed elsewhere. */
- entry->funcdata = func_desc((unsigned long)ftrace_caller);
+ entry->funcdata = func_desc(addr);
entry->magic = STUB_MAGIC;
return (unsigned long)entry;
}
#else
-static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module *me)
+static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs,
+ struct module *me, unsigned long addr)
{
- return stub_for_addr(sechdrs, (unsigned long)ftrace_caller, me);
+ return stub_for_addr(sechdrs, addr, me);
}
#endif
int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
{
mod->arch.toc = my_r2(sechdrs, mod);
- mod->arch.tramp = create_ftrace_stub(sechdrs, mod);
+ mod->arch.tramp = create_ftrace_stub(sechdrs, mod,
+ (unsigned long)ftrace_caller);
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ mod->arch.tramp_regs = create_ftrace_stub(sechdrs, mod,
+ (unsigned long)ftrace_regs_caller);
+ if (!mod->arch.tramp_regs)
+ return -ENOENT;
+#endif
if (!mod->arch.tramp)
return -ENOENT;
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 80667128db3d..79d2924e75d5 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -357,6 +357,8 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
unsigned int op[2];
void *ip = (void *)rec->ip;
+ unsigned long entry, ptr, tramp;
+ struct module *mod = rec->arch.mod;
/* read where this goes */
if (probe_kernel_read(op, ip, sizeof(op)))
@@ -368,19 +370,44 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
return -EINVAL;
}
- /* If we never set up a trampoline to ftrace_caller, then bail */
- if (!rec->arch.mod->arch.tramp) {
+ /* If we never set up ftrace trampoline(s), then bail */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ if (!mod->arch.tramp || !mod->arch.tramp_regs) {
+#else
+ if (!mod->arch.tramp) {
+#endif
pr_err("No ftrace trampoline\n");
return -EINVAL;
}
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ if (rec->flags & FTRACE_FL_REGS)
+ tramp = mod->arch.tramp_regs;
+ else
+#endif
+ tramp = mod->arch.tramp;
+
+ if (module_trampoline_target(mod, tramp, &ptr)) {
+ pr_err("Failed to get trampoline target\n");
+ return -EFAULT;
+ }
+
+ pr_devel("trampoline target %lx", ptr);
+
+ entry = ppc_global_function_entry((void *)addr);
+ /* This should match what was called */
+ if (ptr != entry) {
+ pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+ return -EINVAL;
+ }
+
/* Ensure branch is within 24 bits */
- if (!create_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) {
+ if (!create_branch(ip, tramp, BRANCH_SET_LINK)) {
pr_err("Branch out of range\n");
return -EINVAL;
}
- if (patch_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) {
+ if (patch_branch(ip, tramp, BRANCH_SET_LINK)) {
pr_err("REL24 out of range!\n");
return -EINVAL;
}
@@ -388,14 +415,6 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
return 0;
}
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
-int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
- unsigned long addr)
-{
- return ftrace_make_call(rec, addr);
-}
-#endif
-
#else /* !CONFIG_PPC64: */
static int
__ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
@@ -472,6 +491,137 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
#endif /* CONFIG_MODULES */
}
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#ifdef CONFIG_MODULES
+static int
+__ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ unsigned int op;
+ unsigned long ip = rec->ip;
+ unsigned long entry, ptr, tramp;
+ struct module *mod = rec->arch.mod;
+
+ /* If we never set up ftrace trampolines, then bail */
+ if (!mod->arch.tramp || !mod->arch.tramp_regs) {
+ pr_err("No ftrace trampoline\n");
+ return -EINVAL;
+ }
+
+ /* read where this goes */
+ if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
+ pr_err("Fetching opcode failed.\n");
+ return -EFAULT;
+ }
+
+ /* Make sure that that this is still a 24bit jump */
+ if (!is_bl_op(op)) {
+ pr_err("Not expected bl: opcode is %x\n", op);
+ return -EINVAL;
+ }
+
+ /* lets find where the pointer goes */
+ tramp = find_bl_target(ip, op);
+ entry = ppc_global_function_entry((void *)old_addr);
+
+ pr_devel("ip:%lx jumps to %lx", ip, tramp);
+
+ if (tramp != entry) {
+ /* old_addr is not within range, so we must have used a trampoline */
+ if (module_trampoline_target(mod, tramp, &ptr)) {
+ pr_err("Failed to get trampoline target\n");
+ return -EFAULT;
+ }
+
+ pr_devel("trampoline target %lx", ptr);
+
+ /* This should match what was called */
+ if (ptr != entry) {
+ pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+ return -EINVAL;
+ }
+ }
+
+ /* The new target may be within range */
+ if (test_24bit_addr(ip, addr)) {
+ /* within range */
+ if (patch_branch((unsigned int *)ip, addr, BRANCH_SET_LINK)) {
+ pr_err("REL24 out of range!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ if (rec->flags & FTRACE_FL_REGS)
+ tramp = mod->arch.tramp_regs;
+ else
+ tramp = mod->arch.tramp;
+
+ if (module_trampoline_target(mod, tramp, &ptr)) {
+ pr_err("Failed to get trampoline target\n");
+ return -EFAULT;
+ }
+
+ pr_devel("trampoline target %lx", ptr);
+
+ entry = ppc_global_function_entry((void *)addr);
+ /* This should match what was called */
+ if (ptr != entry) {
+ pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+ return -EINVAL;
+ }
+
+ /* Ensure branch is within 24 bits */
+ if (!create_branch((unsigned int *)ip, tramp, BRANCH_SET_LINK)) {
+ pr_err("Branch out of range\n");
+ return -EINVAL;
+ }
+
+ if (patch_branch((unsigned int *)ip, tramp, BRANCH_SET_LINK)) {
+ pr_err("REL24 out of range!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+#endif
+
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ unsigned long ip = rec->ip;
+ unsigned int old, new;
+
+ /*
+ * If the calling address is more that 24 bits away,
+ * then we had to use a trampoline to make the call.
+ * Otherwise just update the call site.
+ */
+ if (test_24bit_addr(ip, addr) && test_24bit_addr(ip, old_addr)) {
+ /* within range */
+ old = ftrace_call_replace(ip, old_addr, 1);
+ new = ftrace_call_replace(ip, addr, 1);
+ return ftrace_modify_code(ip, old, new);
+ }
+
+#ifdef CONFIG_MODULES
+ /*
+ * Out of range jumps are called from modules.
+ */
+ if (!rec->arch.mod) {
+ pr_err("No module loaded\n");
+ return -EINVAL;
+ }
+
+ return __ftrace_modify_call(rec, old_addr, addr);
+#else
+ /* We should not get here without modules */
+ return -EINVAL;
+#endif /* CONFIG_MODULES */
+}
+#endif
+
int ftrace_update_ftrace_func(ftrace_func_t func)
{
unsigned long ip = (unsigned long)(&ftrace_call);
@@ -482,6 +632,16 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
new = ftrace_call_replace(ip, (unsigned long)func, 1);
ret = ftrace_modify_code(ip, old, new);
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ /* Also update the regs callback function */
+ if (!ret) {
+ ip = (unsigned long)(&ftrace_regs_call);
+ old = *(unsigned int *)&ftrace_regs_call;
+ new = ftrace_call_replace(ip, (unsigned long)func, 1);
+ ret = ftrace_modify_code(ip, old, new);
+ }
+#endif
+
return ret;
}
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 9d234835e1e6..f57822aa2fe8 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -20,8 +20,8 @@
#ifdef CONFIG_DYNAMIC_FTRACE
/*
*
- * ftrace_caller() is the function that replaces _mcount() when ftrace is
- * active.
+ * ftrace_caller()/ftrace_regs_caller() is the function that replaces _mcount()
+ * when ftrace is active.
*
* We arrive here after a function A calls function B, and we are the trace
* function for B. When we enter r1 points to A's stack frame, B has not yet
@@ -37,7 +37,7 @@
* Our job is to save the register state into a struct pt_regs (on the stack)
* and then arrange for the ftrace function to be called.
*/
-_GLOBAL(ftrace_caller)
+_GLOBAL(ftrace_regs_caller)
/* Save the original return address in A's stack frame */
std r0,LRSAVE(r1)
@@ -100,8 +100,8 @@ _GLOBAL(ftrace_caller)
addi r6, r1 ,STACK_FRAME_OVERHEAD
/* ftrace_call(r3, r4, r5, r6) */
-.globl ftrace_call
-ftrace_call:
+.globl ftrace_regs_call
+ftrace_regs_call:
bl ftrace_stub
nop
@@ -162,6 +162,7 @@ ftrace_call:
bne- livepatch_handler
#endif
+ftrace_caller_common:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl ftrace_graph_call
ftrace_graph_call:
@@ -182,6 +183,66 @@ ftrace_no_trace:
mtlr r0
bctr
+_GLOBAL(ftrace_caller)
+ /* Save the original return address in A's stack frame */
+ std r0, LRSAVE(r1)
+
+ /* Create our stack frame + pt_regs */
+ stdu r1, -SWITCH_FRAME_SIZE(r1)
+
+ /* Save all gprs to pt_regs */
+ SAVE_8GPRS(3, r1)
+
+ lbz r3, PACA_FTRACE_DISABLED(r13)
+ cmpdi r3, 0
+ bne ftrace_no_trace
+
+ /* Get the _mcount() call site out of LR */
+ mflr r7
+ std r7, _NIP(r1)
+
+ /* Save callee's TOC in the ABI compliant location */
+ std r2, 24(r1)
+ ld r2, PACATOC(r13) /* get kernel TOC in r2 */
+
+ addis r3, r2, function_trace_op@toc@ha
+ addi r3, r3, function_trace_op@toc@l
+ ld r5, 0(r3)
+
+ /* Calculate ip from nip-4 into r3 for call below */
+ subi r3, r7, MCOUNT_INSN_SIZE
+
+ /* Put the original return address in r4 as parent_ip */
+ mr r4, r0
+
+ /* Set pt_regs to NULL */
+ li r6, 0
+
+ /* ftrace_call(r3, r4, r5, r6) */
+.globl ftrace_call
+ftrace_call:
+ bl ftrace_stub
+ nop
+
+ ld r3, _NIP(r1)
+ mtctr r3
+
+ /* Restore gprs */
+ REST_8GPRS(3,r1)
+
+ /* Restore callee's TOC */
+ ld r2, 24(r1)
+
+ /* Pop our stack frame */
+ addi r1, r1, SWITCH_FRAME_SIZE
+
+ /* Reload original LR */
+ ld r0, LRSAVE(r1)
+ mtlr r0
+
+ /* Handle function_graph or go back */
+ b ftrace_caller_common
+
#ifdef CONFIG_LIVEPATCH
/*
* This function runs in the mcount context, between two functions. As
--
2.16.2
* Re: [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel
2018-03-22 10:25 ` [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel Naveen N. Rao
@ 2018-03-26 22:30 ` Michael Ellerman
2018-03-26 22:32 ` Michael Ellerman
0 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2018-03-26 22:30 UTC
To: Naveen N. Rao, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
> If function_graph tracer is enabled during kexec, we see the below
> exception in the simulator:
> root@(none):/# kexec -e
> kvm: exiting hardware virtualization
> kexec_core: Starting new kernel
> [ 19.262020070,5] OPAL: Switch to big-endian OS
> kexec: Starting switchover sequence.
> Interrupt to 0xC000000000004380 from 0xC000000000004380
> ** Execution stopped: Continuous Interrupt, Instruction caused exception, **
>
> Now that we have a more effective way to disable ftrace, let's use that
> before switching to a new kernel during kexec.
>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
> arch/powerpc/kernel/machine_kexec.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
> index 2694d078741d..4a1b24a9dd61 100644
> --- a/arch/powerpc/kernel/machine_kexec.c
> +++ b/arch/powerpc/kernel/machine_kexec.c
> @@ -95,16 +95,14 @@ void arch_crash_save_vmcoreinfo(void)
> */
> void machine_kexec(struct kimage *image)
> {
> - int save_ftrace_enabled;
> -
> - save_ftrace_enabled = __ftrace_enabled_save();
> + get_paca()->ftrace_disabled = 1;
This doesn't compile on 32-bit:
Failed: powerpc-next/corenet32_smp_defconfig/powerpc-5.3 (http://kisskb.ozlabs.ibm.com/kisskb/buildresult/13314085/log/)
/kisskb/src/arch/powerpc/kernel/machine_kexec.c:105:12: error: invalid type argument of '->' (have 'int')
/kisskb/src/arch/powerpc/kernel/machine_kexec.c:98:12: error: invalid type argument of '->' (have 'int')
/kisskb/src/arch/powerpc/kernel/machine_kexec.c:98:2: error: implicit declaration of function 'get_paca' [-Werror=implicit-function-declaration]
A wrapper is probably best that does nothing on 32-bit.
cheers
* Re: [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel
2018-03-26 22:30 ` Michael Ellerman
@ 2018-03-26 22:32 ` Michael Ellerman
2018-03-27 6:53 ` Naveen N. Rao
0 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2018-03-26 22:32 UTC
To: Naveen N. Rao, Steven Rostedt, Paul Mackerras
Cc: linuxppc-dev, Anton Blanchard, Nicholas Piggin, sathnaga
Michael Ellerman <mpe@ellerman.id.au> writes:
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>> If function_graph tracer is enabled during kexec, we see the below
>> exception in the simulator:
>> root@(none):/# kexec -e
>> kvm: exiting hardware virtualization
>> kexec_core: Starting new kernel
>> [ 19.262020070,5] OPAL: Switch to big-endian OS
>> kexec: Starting switchover sequence.
>> Interrupt to 0xC000000000004380 from 0xC000000000004380
>> ** Execution stopped: Continuous Interrupt, Instruction caused exception, **
>>
>> Now that we have a more effective way to disable ftrace, let's use that
>> before switching to a new kernel during kexec.
>>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>> arch/powerpc/kernel/machine_kexec.c | 6 ++----
>> 1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
>> index 2694d078741d..4a1b24a9dd61 100644
>> --- a/arch/powerpc/kernel/machine_kexec.c
>> +++ b/arch/powerpc/kernel/machine_kexec.c
>> @@ -95,16 +95,14 @@ void arch_crash_save_vmcoreinfo(void)
>> */
>> void machine_kexec(struct kimage *image)
>> {
>> - int save_ftrace_enabled;
>> -
>> - save_ftrace_enabled = __ftrace_enabled_save();
>> + get_paca()->ftrace_disabled = 1;
>
> This doesn't compile on 32-bit:
>
> Failed: powerpc-next/corenet32_smp_defconfig/powerpc-5.3 (http://kisskb.ozlabs.ibm.com/kisskb/buildresult/13314085/log/)
> /kisskb/src/arch/powerpc/kernel/machine_kexec.c:105:12: error: invalid type argument of '->' (have 'int')
> /kisskb/src/arch/powerpc/kernel/machine_kexec.c:98:12: error: invalid type argument of '->' (have 'int')
> /kisskb/src/arch/powerpc/kernel/machine_kexec.c:98:2: error: implicit declaration of function 'get_paca' [-Werror=implicit-function-declaration]
>
>
> A wrapper is probably best that does nothing on 32-bit.
Actually it should probably do __ftrace_enabled_save() etc. on 32-bit,
like the existing code does.
cheers
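A sketch of such a wrapper, combining both suggestions (names are
hypothetical):

#include <linux/ftrace.h>
#ifdef CONFIG_PPC64
#include <asm/paca.h>
#endif

/* Hypothetical wrapper: hard-disable via the paca on 64-bit, fall back
 * to the ftrace_enabled save/restore on 32-bit. */
#ifdef CONFIG_PPC64
static inline int kexec_ftrace_disable(void)
{
	get_paca()->ftrace_disabled = 1;
	return 0;
}

static inline void kexec_ftrace_restore(int saved)
{
	get_paca()->ftrace_disabled = 0;
}
#else
static inline int kexec_ftrace_disable(void)
{
	return __ftrace_enabled_save();
}

static inline void kexec_ftrace_restore(int saved)
{
	__ftrace_enabled_restore(saved);
}
#endif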
* Re: [PATCH v3 3/6] powerpc/kexec: Disable ftrace before switching to the new kernel
2018-03-26 22:32 ` Michael Ellerman
@ 2018-03-27 6:53 ` Naveen N. Rao
0 siblings, 0 replies; 10+ messages in thread
From: Naveen N. Rao @ 2018-03-27 6:53 UTC
To: Michael Ellerman, Paul Mackerras, Steven Rostedt
Cc: Anton Blanchard, linuxppc-dev, Nicholas Piggin, sathnaga
Michael Ellerman wrote:
> Michael Ellerman <mpe@ellerman.id.au> writes:
>
>> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>>> If function_graph tracer is enabled during kexec, we see the below
>>> exception in the simulator:
>>> root@(none):/# kexec -e
>>> kvm: exiting hardware virtualization
>>> kexec_core: Starting new kernel
>>> [ 19.262020070,5] OPAL: Switch to big-endian OS
>>> kexec: Starting switchover sequence.
>>> Interrupt to 0xC000000000004380 from 0xC000000000004380
>>> ** Execution stopped: Continuous Interrupt, Instruction caused exception, **
>>>
>>> Now that we have a more effective way to disable ftrace, let's use that
>>> before switching to a new kernel during kexec.
>>>
>>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>>> ---
>>> arch/powerpc/kernel/machine_kexec.c | 6 ++----
>>> 1 file changed, 2 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
>>> index 2694d078741d..4a1b24a9dd61 100644
>>> --- a/arch/powerpc/kernel/machine_kexec.c
>>> +++ b/arch/powerpc/kernel/machine_kexec.c
>>> @@ -95,16 +95,14 @@ void arch_crash_save_vmcoreinfo(void)
>>> */
>>> void machine_kexec(struct kimage *image)
>>> {
>>> - int save_ftrace_enabled;
>>> -
>>> - save_ftrace_enabled = __ftrace_enabled_save();
>>> + get_paca()->ftrace_disabled = 1;
>>
>> This doesn't compile on 32-bit:
>>
>> Failed: powerpc-next/corenet32_smp_defconfig/powerpc-5.3 (http://kisskb.ozlabs.ibm.com/kisskb/buildresult/13314085/log/)
>> /kisskb/src/arch/powerpc/kernel/machine_kexec.c:105:12: error: invalid type argument of '->' (have 'int')
>> /kisskb/src/arch/powerpc/kernel/machine_kexec.c:98:12: error: invalid type argument of '->' (have 'int')
>> /kisskb/src/arch/powerpc/kernel/machine_kexec.c:98:2: error: implicit declaration of function 'get_paca' [-Werror=implicit-function-declaration]

Hmm... pmac32_defconfig built fine for me, so I thought we are ok there.

>>
>> A wrapper is probably best that does nothing on 32-bit.
>
> Actually it should probably do __ftrace_enabled_save() etc. on 32-bit,
> like the existing code does.

Agree. In hindsight, I'm not updating ftrace_32.S, so I think I will
gate this with CONFIG_PPC64.

Thanks,
Naveen