* [PATCH v5 00/10] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes
From: Naveen N. Rao @ 2018-04-19  7:03 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

This is v5 of the patches posted at:
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=37250

This series has been tested using mambo for p8 (hash) and p9 (radix),
and also on a Power8 host.

In v5, the patch for KVM has been re-worked and is now [6/10], instead 
of [2/10]. This now works properly on a Power8 machine. More details in 
the patch. All other patches are unchanged from v4.


- Naveen


Naveen N. Rao (10):
  powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
    paths
  powerpc64/ftrace: Rearrange #ifdef sections in ftrace.h
  powerpc64/ftrace: Add helpers to hard disable ftrace
  powerpc64/ftrace: Delay enabling ftrace on secondary cpus
  powerpc64/ftrace: Disable ftrace during hotplug
  powerpc64/ftrace: Disable ftrace during kvm entry/exit
  powerpc64/kexec: Hard disable ftrace before switching to the new
    kernel
  powerpc64/module: Tighten detection of mcount call sites with
    -mprofile-kernel
  powerpc64/ftrace: Use the generic version of ftrace_replace_code()
  powerpc64/ftrace: Implement support for ftrace_regs_caller()

 arch/powerpc/include/asm/ftrace.h             |  27 ++-
 arch/powerpc/include/asm/module.h             |   3 +
 arch/powerpc/include/asm/paca.h               |   1 +
 arch/powerpc/kernel/asm-offsets.c             |   1 +
 arch/powerpc/kernel/machine_kexec.c           |   2 +
 arch/powerpc/kernel/module_64.c               |  43 ++--
 arch/powerpc/kernel/setup_64.c                |   7 +
 arch/powerpc/kernel/smp.c                     |  12 +
 arch/powerpc/kernel/trace/ftrace.c            | 210 ++++++++++++++----
 .../powerpc/kernel/trace/ftrace_64_mprofile.S |  85 ++++++-
 arch/powerpc/kernel/trace/ftrace_64_pg.S      |   4 +
 arch/powerpc/kvm/book3s_hv.c                  |   4 +
 arch/powerpc/kvm/book3s_hv_rmhandlers.S       |   3 +
 13 files changed, 335 insertions(+), 67 deletions(-)

-- 
2.17.0


* [PATCH v5 01/10] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

We have some C code that we call into from real mode where we cannot
take any exceptions. Though the C functions themselves are mostly safe,
if these functions are traced, there is a possibility that we may take
an exception. For instance, in certain conditions, the ftrace code uses
WARN(), which uses a 'trap' to do its job.

For such scenarios, introduce a new field 'ftrace_enabled' in the paca,
which is checked on ftrace entry before continuing. This field can then
be set to zero to disable/pause ftrace, and to a non-zero value to
resume ftrace.
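
As a sketch of the intended usage (real_mode_helper() here is a
hypothetical caller, not a function from this series):

	get_paca()->ftrace_enabled = 0;	/* pause ftrace on this cpu */
	real_mode_helper();		/* C code that must not trap */
	get_paca()->ftrace_enabled = 1;	/* resume ftrace */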

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/paca.h                |  1 +
 arch/powerpc/kernel/asm-offsets.c              |  1 +
 arch/powerpc/kernel/setup_64.c                 |  3 +++
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 14 ++++++++++++++
 arch/powerpc/kernel/trace/ftrace_64_pg.S       |  4 ++++
 5 files changed, 23 insertions(+)

diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 4185f1c96125..163f13f31255 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -223,6 +223,7 @@ struct paca_struct {
 	u8 hmi_event_available;		/* HMI event is available */
 	u8 hmi_p9_special_emu;		/* HMI P9 special emulation */
 #endif
+	u8 ftrace_enabled;		/* Hard disable ftrace */
 
 	/* Stuff for accurate time accounting */
 	struct cpu_accounting_data accounting;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6bee65f3cfd3..262c44a90ea1 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -180,6 +180,7 @@ int main(void)
 	OFFSET(PACAKMSR, paca_struct, kernel_msr);
 	OFFSET(PACAIRQSOFTMASK, paca_struct, irq_soft_mask);
 	OFFSET(PACAIRQHAPPENED, paca_struct, irq_happened);
+	OFFSET(PACA_FTRACE_ENABLED, paca_struct, ftrace_enabled);
 #ifdef CONFIG_PPC_BOOK3S
 	OFFSET(PACACONTEXTID, paca_struct, mm_ctx_id);
 #ifdef CONFIG_PPC_MM_SLICES
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index b78f142a4148..313136006d1c 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -252,6 +252,9 @@ static void cpu_ready_for_interrupts(void)
 
 	/* Set IR and DR in PACA MSR */
 	get_paca()->kernel_msr = MSR_KERNEL;
+
+	/* We are now ok to enable ftrace */
+	get_paca()->ftrace_enabled = 1;
 }
 
 unsigned long spr_default_dscr = 0;
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 3f3e81852422..ae1cbe783ab6 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -47,6 +47,12 @@ _GLOBAL(ftrace_caller)
 	/* Save all gprs to pt_regs */
 	SAVE_GPR(0, r1)
 	SAVE_10GPRS(2, r1)
+
+	/* Ok to continue? */
+	lbz	r3, PACA_FTRACE_ENABLED(r13)
+	cmpdi	r3, 0
+	beq	ftrace_no_trace
+
 	SAVE_10GPRS(12, r1)
 	SAVE_10GPRS(22, r1)
 
@@ -168,6 +174,14 @@ _GLOBAL(ftrace_graph_stub)
 _GLOBAL(ftrace_stub)
 	blr
 
+ftrace_no_trace:
+	mflr	r3
+	mtctr	r3
+	REST_GPR(3, r1)
+	addi	r1, r1, SWITCH_FRAME_SIZE
+	mtlr	r0
+	bctr
+
 #ifdef CONFIG_LIVEPATCH
 	/*
 	 * This function runs in the mcount context, between two functions. As
diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.S b/arch/powerpc/kernel/trace/ftrace_64_pg.S
index f095358da96e..b7ba51a0f3b6 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_pg.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_pg.S
@@ -16,6 +16,10 @@
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL_TOC(ftrace_caller)
+	lbz	r3, PACA_FTRACE_ENABLED(r13)
+	cmpdi	r3, 0
+	beqlr
+
 	/* Taken from output of objdump from lib64/glibc */
 	mflr	r3
 	ld	r11, 0(r1)
-- 
2.17.0


* [PATCH v5 02/10] powerpc64/ftrace: Rearrange #ifdef sections in ftrace.h
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

Re-arrange the last #ifdef section in preparation for a subsequent
change.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ftrace.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 9abddde372ab..ebfc0b846bb5 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -68,8 +68,8 @@ struct dyn_arch_ftrace {
 #endif
 #endif
 
-#if defined(CONFIG_FTRACE_SYSCALLS) && !defined(__ASSEMBLY__)
-#ifdef PPC64_ELF_ABI_v1
+#ifndef __ASSEMBLY__
+#if defined(CONFIG_FTRACE_SYSCALLS) && defined(PPC64_ELF_ABI_v1)
 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
 static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
 {
@@ -81,7 +81,7 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
 	 */
 	return !strcmp(sym + 4, name + 3);
 }
-#endif
-#endif /* CONFIG_FTRACE_SYSCALLS && !__ASSEMBLY__ */
+#endif /* CONFIG_FTRACE_SYSCALLS && PPC64_ELF_ABI_v1 */
+#endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_FTRACE */
-- 
2.17.0


* [PATCH v5 03/10] powerpc64/ftrace: Add helpers to hard disable ftrace
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

Add some helpers to enable/disable ftrace through paca->ftrace_enabled.
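
A minimal usage sketch (the actual call sites are added in subsequent
patches of this series):

	this_cpu_disable_ftrace();
	/* window in which this cpu must not enter the ftrace handlers */
	this_cpu_enable_ftrace();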

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ftrace.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index ebfc0b846bb5..3b5e85a72e10 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -82,6 +82,23 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
 	return !strcmp(sym + 4, name + 3);
 }
 #endif /* CONFIG_FTRACE_SYSCALLS && PPC64_ELF_ABI_v1 */
+
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+
+static inline void this_cpu_disable_ftrace(void)
+{
+	get_paca()->ftrace_enabled = 0;
+}
+
+static inline void this_cpu_enable_ftrace(void)
+{
+	get_paca()->ftrace_enabled = 1;
+}
+#else /* CONFIG_PPC64 */
+static inline void this_cpu_disable_ftrace(void) { }
+static inline void this_cpu_enable_ftrace(void) { }
+#endif /* CONFIG_PPC64 */
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_FTRACE */
-- 
2.17.0


* [PATCH v5 04/10] powerpc64/ftrace: Delay enabling ftrace on secondary cpus
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

On the boot cpu, we enable paca->ftrace_enabled in early_setup() (via
cpu_ready_for_interrupts()), but we don't start tracing until much
later since ftrace is not yet initialized and we only support
DYNAMIC_FTRACE on powerpc. However, ftrace may already be initialized
by the time some of the secondary cpus start up, in which case we will
try to trace some of their early boot code, which can cause problems.

To address this, move the setting of paca->ftrace_enabled from
cpu_ready_for_interrupts() to early_setup() for the boot cpu, and
towards the end of start_secondary() for secondary cpus.
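
As a sketch, the resulting ordering is:

	/* boot cpu, in early_setup() */
	cpu_ready_for_interrupts();
	this_cpu_enable_ftrace();	/* harmless: call sites are only
					   patched once ftrace initializes */

	/* secondary cpus, towards the end of start_secondary() */
	local_irq_enable();
	this_cpu_enable_ftrace();	/* early boot code is done by now */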

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/setup_64.c | 10 +++++++---
 arch/powerpc/kernel/smp.c      |  4 ++++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 313136006d1c..7a7ce8ad455e 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -252,9 +252,6 @@ static void cpu_ready_for_interrupts(void)
 
 	/* Set IR and DR in PACA MSR */
 	get_paca()->kernel_msr = MSR_KERNEL;
-
-	/* We are now ok to enable ftrace */
-	get_paca()->ftrace_enabled = 1;
 }
 
 unsigned long spr_default_dscr = 0;
@@ -349,6 +346,13 @@ void __init early_setup(unsigned long dt_ptr)
 	 */
 	cpu_ready_for_interrupts();
 
+	/*
+	 * We enable ftrace here, but since we only support DYNAMIC_FTRACE, it
+	 * will only actually get enabled on the boot cpu much later once
+	 * ftrace itself has been initialized.
+	 */
+	this_cpu_enable_ftrace();
+
 	DBG(" <- early_setup()\n");
 
 #ifdef CONFIG_PPC_EARLY_DEBUG_BOOTX
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index e16ec7b3b427..c4f5dfb686ca 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -59,6 +59,7 @@
 #include <asm/kexec.h>
 #include <asm/asm-prototypes.h>
 #include <asm/cpu_has_feature.h>
+#include <asm/ftrace.h>
 
 #ifdef DEBUG
 #include <asm/udbg.h>
@@ -1031,6 +1032,9 @@ void start_secondary(void *unused)
 
 	local_irq_enable();
 
+	/* We can enable ftrace for secondary cpus now */
+	this_cpu_enable_ftrace();
+
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
 
 	BUG();
-- 
2.17.0


* [PATCH v5 05/10] powerpc64/ftrace: Disable ftrace during hotplug
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

Disable ftrace when a cpu is about to go offline. When the cpu comes
back online, ftrace is re-enabled in start_secondary().

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/smp.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index c4f5dfb686ca..f615660cb3b8 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1131,6 +1131,8 @@ int __cpu_disable(void)
 	if (!smp_ops->cpu_disable)
 		return -ENOSYS;
 
+	this_cpu_disable_ftrace();
+
 	err = smp_ops->cpu_disable();
 	if (err)
 		return err;
@@ -1149,6 +1151,12 @@ void __cpu_die(unsigned int cpu)
 
 void cpu_die(void)
 {
+	/*
+	 * Disable on the down path. This will be re-enabled by
+	 * start_secondary() via start_secondary_resume() below
+	 */
+	this_cpu_disable_ftrace();
+
 	if (ppc_md.cpu_die)
 		ppc_md.cpu_die();
 
-- 
2.17.0


* [PATCH v5 06/10] powerpc64/ftrace: Disable ftrace during kvm entry/exit
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

During guest entry/exit, we switch over to/from the guest MMU context,
and we cannot take exceptions in the hypervisor code.

Since ftrace may be enabled, and since tracing can result in us taking
a trap, disable ftrace by setting paca->ftrace_enabled to zero. There
are two paths through which we enter/exit a guest:

1. If we are the vcore runner, we enter the guest via
__kvmppc_vcore_entry() and disable ftrace around that call. This is
always the case on Power9, and for the primary thread on Power8.

2. If we are a secondary thread on Power8, we would be in nap due to
SMT being disabled, and are woken up by an IPI to enter the guest. In
this scenario, we enter the guest through kvm_start_guest() and disable
ftrace at that point. Ftrace then only gets re-enabled on the secondary
thread when SMT is re-enabled (via start_secondary()).

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kvm/book3s_hv.c            | 4 ++++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d07fca5121c..f604cbd8fc34 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2911,8 +2911,12 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 
 	srcu_idx = srcu_read_lock(&vc->kvm->srcu);
 
+	this_cpu_disable_ftrace();
+
 	trap = __kvmppc_vcore_entry();
 
+	this_cpu_enable_ftrace();
+
 	srcu_read_unlock(&vc->kvm->srcu, srcu_idx);
 
 	trace_hardirqs_off();
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index bd63fa8a08b5..2c3cbe0067b2 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -342,6 +342,9 @@ kvm_start_guest:
 
 	ld	r2,PACATOC(r13)
 
+	li	r0,0
+	stb	r0,PACA_FTRACE_ENABLED(r13)
+
 	li	r0,KVM_HWTHREAD_IN_KVM
 	stb	r0,HSTATE_HWTHREAD_STATE(r13)
 
-- 
2.17.0


* [PATCH v5 07/10] powerpc64/kexec: Hard disable ftrace before switching to the new kernel
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

If the function_graph tracer is enabled during kexec, we see the
following exception in the simulator:
	root@(none):/# kexec -e
	kvm: exiting hardware virtualization
	kexec_core: Starting new kernel
	[   19.262020070,5] OPAL: Switch to big-endian OS
	kexec: Starting switchover sequence.
	Interrupt to 0xC000000000004380 from 0xC000000000004380
	** Execution stopped: Continuous Interrupt, Instruction caused exception,  **

Now that we have a more effective way to completely disable ftrace on
ppc64, let's also use that before switching to a new kernel during
kexec.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/machine_kexec.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index 2694d078741d..936c7e2d421e 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -98,12 +98,14 @@ void machine_kexec(struct kimage *image)
 	int save_ftrace_enabled;
 
 	save_ftrace_enabled = __ftrace_enabled_save();
+	this_cpu_disable_ftrace();
 
 	if (ppc_md.machine_kexec)
 		ppc_md.machine_kexec(image);
 	else
 		default_machine_kexec(image);
 
+	this_cpu_enable_ftrace();
 	__ftrace_enabled_restore(save_ftrace_enabled);
 
 	/* Fall back to normal restart if we're still alive. */
-- 
2.17.0


* [PATCH v5 08/10] powerpc64/module: Tighten detection of mcount call sites with -mprofile-kernel
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

For R_PPC64_REL24 relocations, we suppress emitting instructions for
the TOC load/restore in the relocation stub if the relocation is for an
_mcount() call when using the -mprofile-kernel ABI.

To detect this, we check if the preceding instructions match the
standard sequence emitted by gcc: either the two-instruction sequence
'mflr r0; std r0,16(r1)', or the more optimized variant of a single
'mflr r0', as sketched below. This is not sufficient, since nothing
prevents users from hand-coding sequences involving a 'mflr r0'
followed by a 'bl'.
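
As a sketch, the existing sequence check amounts to the following (the
PPC_INST_* opcode macros are assumed from asm/ppc-opcode.h; the exact
code lives in module_64.c):

	/* -mprofile-kernel: 'mflr r0 [; std r0,16(r1)]' precedes the bl */
	if (instruction[-1] == PPC_INST_STD_LR &&
	    instruction[-2] == PPC_INST_MFLR)
		return true;
	if (instruction[-1] == PPC_INST_MFLR)
		return true;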

For removing the TOC save instruction from the stub, we additionally
check if the symbol is "_mcount". Add the same check here as well.

Also rename is_early_mcount_callsite() to is_mprofile_mcount_callsite(),
since that is what is being checked. The use of "early" is misleading,
since nothing about this function qualifies as early.

Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/module_64.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index a2636c250b7b..8413be31d6a4 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -463,8 +463,11 @@ static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
 }
 
 #ifdef CC_USING_MPROFILE_KERNEL
-static bool is_early_mcount_callsite(u32 *instruction)
+static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
 {
+	if (strcmp("_mcount", name))
+		return false;
+
 	/*
 	 * Check if this is one of the -mprofile-kernel sequences.
 	 */
@@ -496,8 +499,7 @@ static void squash_toc_save_inst(const char *name, unsigned long addr)
 #else
 static void squash_toc_save_inst(const char *name, unsigned long addr) { }
 
-/* without -mprofile-kernel, mcount calls are never early */
-static bool is_early_mcount_callsite(u32 *instruction)
+static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
 {
 	return false;
 }
@@ -505,11 +507,11 @@ static bool is_early_mcount_callsite(u32 *instruction)
 
 /* We expect a noop next: if it is, replace it with an instruction to
    restore r2. */
-static int restore_r2(u32 *instruction, struct module *me)
+static int restore_r2(const char *name, u32 *instruction, struct module *me)
 {
 	u32 *prev_insn = instruction - 1;
 
-	if (is_early_mcount_callsite(prev_insn))
+	if (is_mprofile_mcount_callsite(name, prev_insn))
 		return 1;
 
 	/*
@@ -650,7 +652,8 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 				value = stub_for_addr(sechdrs, value, me);
 				if (!value)
 					return -ENOENT;
-				if (!restore_r2((u32 *)location + 1, me))
+				if (!restore_r2(strtab + sym->st_name,
+							(u32 *)location + 1, me))
 					return -ENOEXEC;
 
 				squash_toc_save_inst(strtab + sym->st_name, value);
-- 
2.17.0


* [PATCH v5 09/10] powerpc64/ftrace: Use the generic version of ftrace_replace_code()
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

Our implementation matches that of the generic version, which also
handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in
favor of the generic version.
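
For reference, the generic __ftrace_replace_code() in
kernel/trace/ftrace.c handles the additional case roughly as follows (a
sketch, not verbatim):

	switch (ftrace_update_record(rec, enable)) {
	case FTRACE_UPDATE_IGNORE:
		return 0;
	case FTRACE_UPDATE_MAKE_CALL:
		return ftrace_make_call(rec, ftrace_addr);
	case FTRACE_UPDATE_MAKE_NOP:
		return ftrace_make_nop(NULL, rec, ftrace_old_addr);
	case FTRACE_UPDATE_MODIFY_CALL:
		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
	}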

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/trace/ftrace.c | 36 ------------------------------
 1 file changed, 36 deletions(-)

diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 4741fe112f05..80667128db3d 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -485,42 +485,6 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	return ret;
 }
 
-static int __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
-{
-	unsigned long ftrace_addr = (unsigned long)FTRACE_ADDR;
-	int ret;
-
-	ret = ftrace_update_record(rec, enable);
-
-	switch (ret) {
-	case FTRACE_UPDATE_IGNORE:
-		return 0;
-	case FTRACE_UPDATE_MAKE_CALL:
-		return ftrace_make_call(rec, ftrace_addr);
-	case FTRACE_UPDATE_MAKE_NOP:
-		return ftrace_make_nop(NULL, rec, ftrace_addr);
-	}
-
-	return 0;
-}
-
-void ftrace_replace_code(int enable)
-{
-	struct ftrace_rec_iter *iter;
-	struct dyn_ftrace *rec;
-	int ret;
-
-	for (iter = ftrace_rec_iter_start(); iter;
-	     iter = ftrace_rec_iter_next(iter)) {
-		rec = ftrace_rec_iter_record(iter);
-		ret = __ftrace_replace_code(rec, enable);
-		if (ret) {
-			ftrace_bug(ret, rec);
-			return;
-		}
-	}
-}
-
 /*
  * Use the default ftrace_modify_all_code, but without
  * stop_machine().
-- 
2.17.0


* [PATCH v5 10/10] powerpc64/ftrace: Implement support for ftrace_regs_caller()
From: Naveen N. Rao @ 2018-04-19  7:04 UTC
  To: Michael Ellerman
  Cc: linuxppc-dev, Paul Mackerras, Steven Rostedt, Satheesh Rajendran

With -mprofile-kernel, we always save the full register state in
ftrace_caller(). While this works, it is inefficient if we're not
interested in the register state, such as when using the function
tracer.

Rename the existing ftrace_caller() as ftrace_regs_caller() and provide
a simpler implementation for ftrace_caller() that is used when registers
are not required to be saved.
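
The ftrace core routes a call site through ftrace_regs_caller() when a
callback is registered with FTRACE_OPS_FL_SAVE_REGS; for example (a
sketch, where my_callback is a hypothetical handler):

	static struct ftrace_ops ops = {
		.func	= my_callback,
		.flags	= FTRACE_OPS_FL_SAVE_REGS,
	};

	register_ftrace_function(&ops);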

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ftrace.h             |   2 -
 arch/powerpc/include/asm/module.h             |   3 +
 arch/powerpc/kernel/module_64.c               |  28 ++-
 arch/powerpc/kernel/trace/ftrace.c            | 184 ++++++++++++++++--
 .../powerpc/kernel/trace/ftrace_64_mprofile.S |  71 ++++++-
 5 files changed, 262 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 3b5e85a72e10..f0806a2fd451 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -49,8 +49,6 @@
 extern void _mcount(void);
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-# define FTRACE_ADDR ((unsigned long)ftrace_caller)
-# define FTRACE_REGS_ADDR FTRACE_ADDR
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
        /* relocation of mcount call site is the same as the address */
diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
index 4f6573934792..18f7214d68b7 100644
--- a/arch/powerpc/include/asm/module.h
+++ b/arch/powerpc/include/asm/module.h
@@ -53,6 +53,9 @@ struct mod_arch_specific {
 #ifdef CONFIG_DYNAMIC_FTRACE
 	unsigned long toc;
 	unsigned long tramp;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	unsigned long tramp_regs;
+#endif
 #endif
 
 	/* For module function descriptor dereference */
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 8413be31d6a4..f7667e2ebfcb 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -280,6 +280,10 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
 #ifdef CONFIG_DYNAMIC_FTRACE
 	/* make the trampoline to the ftrace_caller */
 	relocs++;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	/* an additional one for ftrace_regs_caller */
+	relocs++;
+#endif
 #endif
 
 	pr_debug("Looks like a total of %lu stubs, max\n", relocs);
@@ -765,7 +769,8 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
  * via the paca (in r13). The target (ftrace_caller()) is responsible for
  * saving and restoring the toc before returning.
  */
-static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module *me)
+static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs,
+				struct module *me, unsigned long addr)
 {
 	struct ppc64_stub_entry *entry;
 	unsigned int i, num_stubs;
@@ -792,9 +797,10 @@ static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module
 	memcpy(entry->jump, stub_insns, sizeof(stub_insns));
 
 	/* Stub uses address relative to kernel toc (from the paca) */
-	reladdr = (unsigned long)ftrace_caller - kernel_toc_addr();
+	reladdr = addr - kernel_toc_addr();
 	if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) {
-		pr_err("%s: Address of ftrace_caller out of range of kernel_toc.\n", me->name);
+		pr_err("%s: Address of %ps out of range of kernel_toc.\n",
+							me->name, (void *)addr);
 		return 0;
 	}
 
@@ -802,22 +808,30 @@ static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module
 	entry->jump[2] |= PPC_LO(reladdr);
 
 	/* Even though we don't use funcdata in the stub, it's needed elsewhere. */
-	entry->funcdata = func_desc((unsigned long)ftrace_caller);
+	entry->funcdata = func_desc(addr);
 	entry->magic = STUB_MAGIC;
 
 	return (unsigned long)entry;
 }
 #else
-static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs, struct module *me)
+static unsigned long create_ftrace_stub(const Elf64_Shdr *sechdrs,
+				struct module *me, unsigned long addr)
 {
-	return stub_for_addr(sechdrs, (unsigned long)ftrace_caller, me);
+	return stub_for_addr(sechdrs, addr, me);
 }
 #endif
 
 int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
 {
 	mod->arch.toc = my_r2(sechdrs, mod);
-	mod->arch.tramp = create_ftrace_stub(sechdrs, mod);
+	mod->arch.tramp = create_ftrace_stub(sechdrs, mod,
+					(unsigned long)ftrace_caller);
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	mod->arch.tramp_regs = create_ftrace_stub(sechdrs, mod,
+					(unsigned long)ftrace_regs_caller);
+	if (!mod->arch.tramp_regs)
+		return -ENOENT;
+#endif
 
 	if (!mod->arch.tramp)
 		return -ENOENT;
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 80667128db3d..79d2924e75d5 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -357,6 +357,8 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
 	unsigned int op[2];
 	void *ip = (void *)rec->ip;
+	unsigned long entry, ptr, tramp;
+	struct module *mod = rec->arch.mod;
 
 	/* read where this goes */
 	if (probe_kernel_read(op, ip, sizeof(op)))
@@ -368,19 +370,44 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 		return -EINVAL;
 	}
 
-	/* If we never set up a trampoline to ftrace_caller, then bail */
-	if (!rec->arch.mod->arch.tramp) {
+	/* If we never set up ftrace trampoline(s), then bail */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	if (!mod->arch.tramp || !mod->arch.tramp_regs) {
+#else
+	if (!mod->arch.tramp) {
+#endif
 		pr_err("No ftrace trampoline\n");
 		return -EINVAL;
 	}
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	if (rec->flags & FTRACE_FL_REGS)
+		tramp = mod->arch.tramp_regs;
+	else
+#endif
+		tramp = mod->arch.tramp;
+
+	if (module_trampoline_target(mod, tramp, &ptr)) {
+		pr_err("Failed to get trampoline target\n");
+		return -EFAULT;
+	}
+
+	pr_devel("trampoline target %lx", ptr);
+
+	entry = ppc_global_function_entry((void *)addr);
+	/* This should match what was called */
+	if (ptr != entry) {
+		pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+		return -EINVAL;
+	}
+
 	/* Ensure branch is within 24 bits */
-	if (!create_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) {
+	if (!create_branch(ip, tramp, BRANCH_SET_LINK)) {
 		pr_err("Branch out of range\n");
 		return -EINVAL;
 	}
 
-	if (patch_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) {
+	if (patch_branch(ip, tramp, BRANCH_SET_LINK)) {
 		pr_err("REL24 out of range!\n");
 		return -EINVAL;
 	}
@@ -388,14 +415,6 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	return 0;
 }
 
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
-int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
-			unsigned long addr)
-{
-	return ftrace_make_call(rec, addr);
-}
-#endif
-
 #else  /* !CONFIG_PPC64: */
 static int
 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
@@ -472,6 +491,137 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 #endif /* CONFIG_MODULES */
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#ifdef CONFIG_MODULES
+static int
+__ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+					unsigned long addr)
+{
+	unsigned int op;
+	unsigned long ip = rec->ip;
+	unsigned long entry, ptr, tramp;
+	struct module *mod = rec->arch.mod;
+
+	/* If we never set up ftrace trampolines, then bail */
+	if (!mod->arch.tramp || !mod->arch.tramp_regs) {
+		pr_err("No ftrace trampoline\n");
+		return -EINVAL;
+	}
+
+	/* read where this goes */
+	if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
+		pr_err("Fetching opcode failed.\n");
+		return -EFAULT;
+	}
+
+	/* Make sure that this is still a 24-bit jump */
+	if (!is_bl_op(op)) {
+		pr_err("Not expected bl: opcode is %x\n", op);
+		return -EINVAL;
+	}
+
+	/* lets find where the pointer goes */
+	tramp = find_bl_target(ip, op);
+	entry = ppc_global_function_entry((void *)old_addr);
+
+	pr_devel("ip:%lx jumps to %lx", ip, tramp);
+
+	if (tramp != entry) {
+		/* old_addr is not within range, so we must have used a trampoline */
+		if (module_trampoline_target(mod, tramp, &ptr)) {
+			pr_err("Failed to get trampoline target\n");
+			return -EFAULT;
+		}
+
+		pr_devel("trampoline target %lx", ptr);
+
+		/* This should match what was called */
+		if (ptr != entry) {
+			pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+			return -EINVAL;
+		}
+	}
+
+	/* The new target may be within range */
+	if (test_24bit_addr(ip, addr)) {
+		/* within range */
+		if (patch_branch((unsigned int *)ip, addr, BRANCH_SET_LINK)) {
+			pr_err("REL24 out of range!\n");
+			return -EINVAL;
+		}
+
+		return 0;
+	}
+
+	if (rec->flags & FTRACE_FL_REGS)
+		tramp = mod->arch.tramp_regs;
+	else
+		tramp = mod->arch.tramp;
+
+	if (module_trampoline_target(mod, tramp, &ptr)) {
+		pr_err("Failed to get trampoline target\n");
+		return -EFAULT;
+	}
+
+	pr_devel("trampoline target %lx", ptr);
+
+	entry = ppc_global_function_entry((void *)addr);
+	/* This should match what was called */
+	if (ptr != entry) {
+		pr_err("addr %lx does not match expected %lx\n", ptr, entry);
+		return -EINVAL;
+	}
+
+	/* Ensure branch is within 24 bits */
+	if (!create_branch((unsigned int *)ip, tramp, BRANCH_SET_LINK)) {
+		pr_err("Branch out of range\n");
+		return -EINVAL;
+	}
+
+	if (patch_branch((unsigned int *)ip, tramp, BRANCH_SET_LINK)) {
+		pr_err("REL24 out of range!\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+#endif
+
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			unsigned long addr)
+{
+	unsigned long ip = rec->ip;
+	unsigned int old, new;
+
+	/*
+	 * If the calling address is more than 24 bits away,
+	 * then we had to use a trampoline to make the call.
+	 * Otherwise just update the call site.
+	 */
+	if (test_24bit_addr(ip, addr) && test_24bit_addr(ip, old_addr)) {
+		/* within range */
+		old = ftrace_call_replace(ip, old_addr, 1);
+		new = ftrace_call_replace(ip, addr, 1);
+		return ftrace_modify_code(ip, old, new);
+	}
+
+#ifdef CONFIG_MODULES
+	/*
+	 * Out of range jumps are called from modules.
+	 */
+	if (!rec->arch.mod) {
+		pr_err("No module loaded\n");
+		return -EINVAL;
+	}
+
+	return __ftrace_modify_call(rec, old_addr, addr);
+#else
+	/* We should not get here without modules */
+	return -EINVAL;
+#endif /* CONFIG_MODULES */
+}
+#endif
+
 int ftrace_update_ftrace_func(ftrace_func_t func)
 {
 	unsigned long ip = (unsigned long)(&ftrace_call);
@@ -482,6 +632,16 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	new = ftrace_call_replace(ip, (unsigned long)func, 1);
 	ret = ftrace_modify_code(ip, old, new);
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	/* Also update the regs callback function */
+	if (!ret) {
+		ip = (unsigned long)(&ftrace_regs_call);
+		old = *(unsigned int *)&ftrace_regs_call;
+		new = ftrace_call_replace(ip, (unsigned long)func, 1);
+		ret = ftrace_modify_code(ip, old, new);
+	}
+#endif
+
 	return ret;
 }
 
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index ae1cbe783ab6..ed9d7a46c3af 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -20,8 +20,8 @@
 #ifdef CONFIG_DYNAMIC_FTRACE
 /*
  *
- * ftrace_caller() is the function that replaces _mcount() when ftrace is
- * active.
+ * ftrace_caller()/ftrace_regs_caller() is the function that replaces _mcount()
+ * when ftrace is active.
  *
  * We arrive here after a function A calls function B, and we are the trace
  * function for B. When we enter r1 points to A's stack frame, B has not yet
@@ -37,7 +37,7 @@
  * Our job is to save the register state into a struct pt_regs (on the stack)
  * and then arrange for the ftrace function to be called.
  */
-_GLOBAL(ftrace_caller)
+_GLOBAL(ftrace_regs_caller)
 	/* Save the original return address in A's stack frame */
 	std	r0,LRSAVE(r1)
 
@@ -100,8 +100,8 @@ _GLOBAL(ftrace_caller)
 	addi    r6, r1 ,STACK_FRAME_OVERHEAD
 
 	/* ftrace_call(r3, r4, r5, r6) */
-.globl ftrace_call
-ftrace_call:
+.globl ftrace_regs_call
+ftrace_regs_call:
 	bl	ftrace_stub
 	nop
 
@@ -162,6 +162,7 @@ ftrace_call:
 	bne-	livepatch_handler
 #endif
 
+ftrace_caller_common:
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 .globl ftrace_graph_call
 ftrace_graph_call:
@@ -182,6 +183,66 @@ ftrace_no_trace:
 	mtlr	r0
 	bctr
 
+_GLOBAL(ftrace_caller)
+	/* Save the original return address in A's stack frame */
+	std	r0, LRSAVE(r1)
+
+	/* Create our stack frame + pt_regs */
+	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+
+	/* Save all gprs to pt_regs */
+	SAVE_8GPRS(3, r1)
+
+	lbz	r3, PACA_FTRACE_ENABLED(r13)
+	cmpdi	r3, 0
+	beq	ftrace_no_trace
+
+	/* Get the _mcount() call site out of LR */
+	mflr	r7
+	std     r7, _NIP(r1)
+
+	/* Save callee's TOC in the ABI compliant location */
+	std	r2, 24(r1)
+	ld	r2, PACATOC(r13)	/* get kernel TOC in r2 */
+
+	addis	r3, r2, function_trace_op@toc@ha
+	addi	r3, r3, function_trace_op@toc@l
+	ld	r5, 0(r3)
+
+	/* Calculate ip from nip-4 into r3 for call below */
+	subi    r3, r7, MCOUNT_INSN_SIZE
+
+	/* Put the original return address in r4 as parent_ip */
+	mr	r4, r0
+
+	/* Set pt_regs to NULL */
+	li	r6, 0
+
+	/* ftrace_call(r3, r4, r5, r6) */
+.globl ftrace_call
+ftrace_call:
+	bl	ftrace_stub
+	nop
+
+	ld	r3, _NIP(r1)
+	mtctr	r3
+
+	/* Restore gprs */
+	REST_8GPRS(3,r1)
+
+	/* Restore callee's TOC */
+	ld	r2, 24(r1)
+
+	/* Pop our stack frame */
+	addi	r1, r1, SWITCH_FRAME_SIZE
+
+	/* Reload original LR */
+	ld	r0, LRSAVE(r1)
+	mtlr	r0
+
+	/* Handle function_graph or go back */
+	b	ftrace_caller_common
+
 #ifdef CONFIG_LIVEPATCH
 	/*
 	 * This function runs in the mcount context, between two functions. As
-- 
2.17.0


* Re: [PATCH v5 06/10] powerpc64/ftrace: Disable ftrace during kvm entry/exit
From: Steven Rostedt @ 2018-04-19 15:22 UTC
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Paul Mackerras, Satheesh Rajendran

On Thu, 19 Apr 2018 12:34:05 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> 2. If we are a secondary thread on Power8, we would be in nap due to
> SMT being disabled, and are woken up by an IPI to enter the guest. In
> this scenario, we enter the guest through kvm_start_guest() and disable
> ftrace at that point. Ftrace then only gets re-enabled on the secondary
> thread when SMT is re-enabled (via start_secondary()).
>
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index bd63fa8a08b5..2c3cbe0067b2 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -342,6 +342,9 @@ kvm_start_guest:
>  
>  	ld	r2,PACATOC(r13)
>  

You may want to add a comment here about where ftrace gets re-enabled.

-- Steve

> +	li	r0,0
> +	stb	r0,PACA_FTRACE_ENABLED(r13)
> +
>  	li	r0,KVM_HWTHREAD_IN_KVM
>  	stb	r0,HSTATE_HWTHREAD_STATE(r13)
>  


* Re: [PATCH v5 00/10] powerpc64/ftrace: Add support for ftrace_modify_call() and a few other fixes
From: Steven Rostedt @ 2018-04-19 15:28 UTC
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Paul Mackerras, Satheesh Rajendran

On Thu, 19 Apr 2018 12:33:59 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> This is v5 of the patches posted at:
> https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=37250
> 
> This series has been tested using mambo for p8 (hash) and p9 (radix), 
> and also on a Power8 host.
> 
> In v5, the patch for KVM has been re-worked and is now [6/10], instead 
> of [2/10]. This now works properly on a Power8 machine. More details in 
> the patch. All other patches are unchanged from v4.
> 

I had a small comment on patch 6, but by doing a quick review of the
code, I don't have any issues with this. I would assume others will do
a more thorough review though.

For the series:

Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve


* Re: [PATCH v5 06/10] powerpc64/ftrace: Disable ftrace during kvm entry/exit
From: Naveen N. Rao @ 2018-04-20  6:31 UTC
  To: Steven Rostedt
  Cc: linuxppc-dev, Michael Ellerman, Paul Mackerras, Satheesh Rajendran

Steven Rostedt wrote:
> On Thu, 19 Apr 2018 12:34:05 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
>
>> 2. If we are a secondary thread on Power8, we would be in nap due to
>> SMT being disabled, and are woken up by an IPI to enter the guest. In
>> this scenario, we enter the guest through kvm_start_guest() and disable
>> ftrace at that point. Ftrace then only gets re-enabled on the secondary
>> thread when SMT is re-enabled (via start_secondary()).
>>
>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> index bd63fa8a08b5..2c3cbe0067b2 100644
>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> @@ -342,6 +342,9 @@ kvm_start_guest:
>>
>>  	ld	r2,PACATOC(r13)
>>
>
> You may want to add a comment here about where ftrace gets re-enabled.

Sure. That would be:

/*
 * If this is the primary thread, ftrace will get re-enabled when we
 * go back to the hypervisor in kvmppc_run_core(). For secondary threads
 * on Power8, ftrace will get enabled when SMT is re-enabled through the
 * start_secondary() cpu bringup path.
 */

- Naveen


* Re: [v5, 01/10] powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths
From: Michael Ellerman @ 2018-05-08 14:52 UTC
  To: Naveen N. Rao; +Cc: Satheesh Rajendran, linuxppc-dev, Steven Rostedt

On Thu, 2018-04-19 at 07:04:00 UTC, "Naveen N. Rao" wrote:
> We have some C code that we call into from real mode where we cannot
> take any exceptions. Though the C functions themselves are mostly safe,
> if these functions are traced, there is a possibility that we may take
> an exception. For instance, in certain conditions, the ftrace code uses
> WARN(), which uses a 'trap' to do its job.
> 
> For such scenarios, introduce a new field 'ftrace_enabled' in the paca,
> which is checked on ftrace entry before continuing. This field can then
> be set to zero to disable/pause ftrace, and to a non-zero value to
> resume ftrace.
> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/ea678ac627e01daf5b4f1da24bf1d0

cheers

