linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] ftrace ported to PPC
@ 2008-05-15  3:49 Steven Rostedt
  2008-05-15  3:49 ` [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc Steven Rostedt
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Steven Rostedt @ 2008-05-15  3:49 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, linuxppc-dev,
	paulus, benh


The following two patches port ftrace to PowerPC. I tested this on
both my PPC64 box and my 32-bit PowerBook G4.

This applies to the latest sched-devel (with some extra hacks to get that
to boot on PPC).

This also depends on the CFLAGS_REMOVE_foo.o patches I sent earlier.

-- Steve


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc
  2008-05-15  3:49 [PATCH 0/2] ftrace ported to PPC Steven Rostedt
@ 2008-05-15  3:49 ` Steven Rostedt
  2008-05-16 12:05   ` Ingo Molnar
  2008-05-15  3:49 ` [PATCH 2/2] ftrace: support for PowerPC Steven Rostedt
  2008-05-15  4:40 ` [PATCH 0/2] ftrace ported to PPC Paul Mackerras
  2 siblings, 1 reply; 17+ messages in thread
From: Steven Rostedt @ 2008-05-15  3:49 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, linuxppc-dev,
	paulus, benh, Steven Rostedt

[-- Attachment #1: powerpc-irqs-disabled-flags.patch --]
[-- Type: text/plain, Size: 1063 bytes --]

PPC doesn't have the irqs_disabled_flags needed by ftrace.
This patch adds it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 include/asm-powerpc/hw_irq.h |   10 ++++++++++
 1 file changed, 10 insertions(+)

Index: linux-sched-devel.git/include/asm-powerpc/hw_irq.h
===================================================================
--- linux-sched-devel.git.orig/include/asm-powerpc/hw_irq.h	2008-05-14 18:12:21.000000000 -0700
+++ linux-sched-devel.git/include/asm-powerpc/hw_irq.h	2008-05-14 19:24:59.000000000 -0700
@@ -59,6 +59,11 @@ extern void iseries_handle_interrupts(vo
 		get_paca()->hard_enabled = 0;	\
 	} while(0)
 
+static inline int irqs_disabled_flags(unsigned long flags)
+{
+	return flags == 0;
+}
+
 #else
 
 #if defined(CONFIG_BOOKE)
@@ -113,6 +118,11 @@ static inline void local_irq_save_ptr(un
 #define hard_irq_enable()	local_irq_enable()
 #define hard_irq_disable()	local_irq_disable()
 
+static inline int irqs_disabled_flags(unsigned long flags)
+{
+	return (flags & MSR_EE) == 0;
+}
+
 #endif /* CONFIG_PPC64 */
 
 /*
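
[Editor's note: the two variants above can be mirrored in stand-alone C,
under the assumption that on PPC64 the saved flags are the paca
soft-enable value and on 32-bit they are the raw MSR.]

```c
#include <assert.h>

/* Stand-alone sketch of the two conventions in the patch above.
 * Assumption: on PPC64, local_irq_save() hands back the soft-enable
 * word (0 = disabled); on 32-bit it hands back the MSR, where a
 * cleared EE bit means external interrupts are off. */
#define MSR_EE 0x8000UL		/* external interrupt enable bit */

static int irqs_disabled_flags_ppc64(unsigned long flags)
{
	return flags == 0;
}

static int irqs_disabled_flags_ppc32(unsigned long flags)
{
	return (flags & MSR_EE) == 0;
}
```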

-- 


* [PATCH 2/2] ftrace: support for PowerPC
  2008-05-15  3:49 [PATCH 0/2] ftrace ported to PPC Steven Rostedt
  2008-05-15  3:49 ` [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc Steven Rostedt
@ 2008-05-15  3:49 ` Steven Rostedt
  2008-05-15  5:28   ` David Miller
                     ` (2 more replies)
  2008-05-15  4:40 ` [PATCH 0/2] ftrace ported to PPC Paul Mackerras
  2 siblings, 3 replies; 17+ messages in thread
From: Steven Rostedt @ 2008-05-15  3:49 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, linuxppc-dev,
	paulus, benh, Steven Rostedt

[-- Attachment #1: ftrace-powerpc-port.patch --]
[-- Type: text/plain, Size: 15631 bytes --]

This patch adds full support for ftrace for PowerPC (both 64 and 32 bit).
This includes dynamic tracing and function filtering.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/powerpc/Kconfig                     |    5 
 arch/powerpc/kernel/Makefile             |   14 ++
 arch/powerpc/kernel/entry_32.S           |  130 ++++++++++++++++++++++++
 arch/powerpc/kernel/entry_64.S           |   62 +++++++++++
 arch/powerpc/kernel/ftrace.c             |  165 +++++++++++++++++++++++++++++++
 arch/powerpc/kernel/io.c                 |    3 
 arch/powerpc/kernel/irq.c                |    6 -
 arch/powerpc/kernel/setup_32.c           |   11 +-
 arch/powerpc/kernel/setup_64.c           |    5 
 arch/powerpc/platforms/powermac/Makefile |    5 
 kernel/trace/trace_selftest.c            |   11 +-
 11 files changed, 406 insertions(+), 11 deletions(-)

Index: linux-sched-devel.git/arch/powerpc/kernel/io.c
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/io.c	2008-05-14 19:30:53.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/io.c	2008-05-14 19:31:48.000000000 -0700
@@ -120,7 +120,8 @@ EXPORT_SYMBOL(_outsl_ns);
 
 #define IO_CHECK_ALIGN(v,a) ((((unsigned long)(v)) & ((a) - 1)) == 0)
 
-void _memset_io(volatile void __iomem *addr, int c, unsigned long n)
+notrace void
+_memset_io(volatile void __iomem *addr, int c, unsigned long n)
 {
 	void *p = (void __force *)addr;
 	u32 lc = c;
Index: linux-sched-devel.git/arch/powerpc/kernel/Makefile
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/Makefile	2008-05-14 19:30:53.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/Makefile	2008-05-14 19:31:56.000000000 -0700
@@ -12,6 +12,18 @@ CFLAGS_prom_init.o      += -fPIC
 CFLAGS_btext.o		+= -fPIC
 endif
 
+ifdef CONFIG_FTRACE
+# Do not trace early boot code
+CFLAGS_REMOVE_cputable.o = -pg
+CFLAGS_REMOVE_prom_init.o = -pg
+
+ifdef CONFIG_DYNAMIC_FTRACE
+# dynamic ftrace setup.
+CFLAGS_REMOVE_ftrace.o = -pg
+endif
+
+endif
+
 obj-y				:= cputable.o ptrace.o syscalls.o \
 				   irq.o align.o signal_32.o pmc.o vdso.o \
 				   init_task.o process.o systbl.o idle.o \
@@ -79,6 +91,8 @@ obj-$(CONFIG_KEXEC)		+= machine_kexec.o 
 obj-$(CONFIG_AUDIT)		+= audit.o
 obj64-$(CONFIG_AUDIT)		+= compat_audit.o
 
+obj-$(CONFIG_DYNAMIC_FTRACE)	+= ftrace.o
+
 obj-$(CONFIG_8XX_MINIMAL_FPEMU) += softemu8xx.o
 
 ifneq ($(CONFIG_PPC_INDIRECT_IO),y)
Index: linux-sched-devel.git/arch/powerpc/platforms/powermac/Makefile
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/platforms/powermac/Makefile	2008-05-14 19:30:53.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/platforms/powermac/Makefile	2008-05-14 19:31:48.000000000 -0700
@@ -1,5 +1,10 @@
 CFLAGS_bootx_init.o  		+= -fPIC
 
+ifdef CONFIG_FTRACE
+# Do not trace early boot code
+CFLAGS_REMOVE_bootx_init.o = -pg
+endif
+
 obj-y				+= pic.o setup.o time.o feature.o pci.o \
 				   sleep.o low_i2c.o cache.o pfunc_core.o \
 				   pfunc_base.o
Index: linux-sched-devel.git/arch/powerpc/Kconfig
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/Kconfig	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/Kconfig	2008-05-14 19:31:56.000000000 -0700
@@ -106,11 +106,12 @@ config PPC
 	bool
 	default y
 	select HAVE_IDE
-	select HAVE_OPROFILE
+	select HAVE_IMMEDIATE
+	select HAVE_FTRACE
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_LMB
-	select HAVE_IMMEDIATE
+	select HAVE_OPROFILE
 
 config EARLY_PRINTK
 	bool
Index: linux-sched-devel.git/arch/powerpc/kernel/entry_32.S
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/entry_32.S	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/entry_32.S	2008-05-14 19:31:56.000000000 -0700
@@ -1035,3 +1035,133 @@ machine_check_in_rtas:
 	/* XXX load up BATs and panic */
 
 #endif /* CONFIG_PPC_RTAS */
+
+#ifdef CONFIG_FTRACE
+#ifdef CONFIG_DYNAMIC_FTRACE
+_GLOBAL(mcount)
+_GLOBAL(_mcount)
+	stwu	r1,-48(r1)
+	stw	r3, 12(r1)
+	stw	r4, 16(r1)
+	stw	r5, 20(r1)
+	stw	r6, 24(r1)
+	mflr	r3
+	stw	r7, 28(r1)
+	mfcr	r5
+	stw	r8, 32(r1)
+	stw	r9, 36(r1)
+	stw	r10,40(r1)
+	stw	r3, 44(r1)
+	stw	r5, 8(r1)
+	.globl mcount_call
+mcount_call:
+	bl	ftrace_stub
+	nop
+	lwz	r6, 8(r1)
+	lwz	r0, 44(r1)
+	lwz	r3, 12(r1)
+	mtctr	r0
+	lwz	r4, 16(r1)
+	mtcr	r6
+	lwz	r5, 20(r1)
+	lwz	r6, 24(r1)
+	lwz	r0, 52(r1)
+	lwz	r7, 28(r1)
+	lwz	r8, 32(r1)
+	mtlr	r0
+	lwz	r9, 36(r1)
+	lwz	r10,40(r1)
+	addi	r1, r1, 48
+	bctr
+
+_GLOBAL(ftrace_caller)
+	/* Based on objdump output from glibc */
+	stwu	r1,-48(r1)
+	stw	r3, 12(r1)
+	stw	r4, 16(r1)
+	stw	r5, 20(r1)
+	stw	r6, 24(r1)
+	mflr	r3
+	lwz	r4, 52(r1)
+	mfcr	r5
+	stw	r7, 28(r1)
+	stw	r8, 32(r1)
+	stw	r9, 36(r1)
+	stw	r10,40(r1)
+	stw	r3, 44(r1)
+	stw	r5, 8(r1)
+.globl ftrace_call
+ftrace_call:
+	bl	ftrace_stub
+	nop
+	lwz	r6, 8(r1)
+	lwz	r0, 44(r1)
+	lwz	r3, 12(r1)
+	mtctr	r0
+	lwz	r4, 16(r1)
+	mtcr	r6
+	lwz	r5, 20(r1)
+	lwz	r6, 24(r1)
+	lwz	r0, 52(r1)
+	lwz	r7, 28(r1)
+	lwz	r8, 32(r1)
+	mtlr	r0
+	lwz	r9, 36(r1)
+	lwz	r10,40(r1)
+	addi	r1, r1, 48
+	bctr
+#else
+_GLOBAL(mcount)
+_GLOBAL(_mcount)
+	stwu	r1,-48(r1)
+	stw	r3, 12(r1)
+	stw	r4, 16(r1)
+	stw	r5, 20(r1)
+	stw	r6, 24(r1)
+	mflr	r3
+	lwz	r4, 52(r1)
+	mfcr	r5
+	stw	r7, 28(r1)
+	stw	r8, 32(r1)
+	stw	r9, 36(r1)
+	stw	r10,40(r1)
+	stw	r3, 44(r1)
+	stw	r5, 8(r1)
+
+	LOAD_REG_ADDR(r5, ftrace_trace_function)
+#if 0
+	mtctr	r3
+	mr	r1, r5
+	bctrl
+#endif
+	lwz	r5,0(r5)
+#if 1
+	mtctr	r5
+	bctrl
+#else
+	bl	ftrace_stub
+#endif
+	nop
+
+	lwz	r6, 8(r1)
+	lwz	r0, 44(r1)
+	lwz	r3, 12(r1)
+	mtctr	r0
+	lwz	r4, 16(r1)
+	mtcr	r6
+	lwz	r5, 20(r1)
+	lwz	r6, 24(r1)
+	lwz	r0, 52(r1)
+	lwz	r7, 28(r1)
+	lwz	r8, 32(r1)
+	mtlr	r0
+	lwz	r9, 36(r1)
+	lwz	r10,40(r1)
+	addi	r1, r1, 48
+	bctr
+#endif
+
+_GLOBAL(ftrace_stub)
+	blr
+
+#endif /* CONFIG_FTRACE */
Index: linux-sched-devel.git/arch/powerpc/kernel/entry_64.S
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/entry_64.S	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/entry_64.S	2008-05-14 19:31:56.000000000 -0700
@@ -870,3 +870,65 @@ _GLOBAL(enter_prom)
 	ld	r0,16(r1)
 	mtlr    r0
         blr
+
+#ifdef CONFIG_FTRACE
+#ifdef CONFIG_DYNAMIC_FTRACE
+_GLOBAL(mcount)
+_GLOBAL(_mcount)
+	/* Taken from output of objdump from lib64/glibc */
+	mflr	r3
+	stdu	r1, -112(r1)
+	std	r3, 128(r1)
+	.globl mcount_call
+mcount_call:
+	bl	ftrace_stub
+	nop
+	ld	r0, 128(r1)
+	mtlr	r0
+	addi	r1, r1, 112
+	blr
+
+_GLOBAL(ftrace_caller)
+	/* Taken from output of objdump from lib64/glibc */
+	mflr	r3
+	ld	r11, 0(r1)
+	stdu	r1, -112(r1)
+	std	r3, 128(r1)
+	ld	r4, 16(r11)
+.globl ftrace_call
+ftrace_call:
+	bl	ftrace_stub
+	nop
+	ld	r0, 128(r1)
+	mtlr	r0
+	addi	r1, r1, 112
+_GLOBAL(ftrace_stub)
+	blr
+#else
+_GLOBAL(mcount)
+	blr
+
+_GLOBAL(_mcount)
+	/* Taken from output of objdump from lib64/glibc */
+	mflr	r3
+	ld	r11, 0(r1)
+	stdu	r1, -112(r1)
+	std	r3, 128(r1)
+	ld	r4, 16(r11)
+
+
+	LOAD_REG_ADDR(r5,ftrace_trace_function)
+	ld	r5,0(r5)
+	ld	r5,0(r5)
+	mtctr	r5
+	bctrl
+
+	nop
+	ld	r0, 128(r1)
+	mtlr	r0
+	addi	r1, r1, 112
+_GLOBAL(ftrace_stub)
+	blr
+
+#endif
+#endif
Index: linux-sched-devel.git/arch/powerpc/kernel/ftrace.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-sched-devel.git/arch/powerpc/kernel/ftrace.c	2008-05-14 19:31:56.000000000 -0700
@@ -0,0 +1,165 @@
+/*
+ * Code for replacing ftrace calls with jumps.
+ *
+ * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
+ *
+ * Thanks goes out to P.A. Semi, Inc for supplying me with a PPC64 box.
+ *
+ */
+
+#include <linux/spinlock.h>
+#include <linux/hardirq.h>
+#include <linux/ftrace.h>
+#include <linux/percpu.h>
+#include <linux/init.h>
+#include <linux/list.h>
+
+#include <asm/cacheflush.h>
+
+#define CALL_BACK		4
+
+static unsigned int ftrace_nop = 0x60000000;
+
+#ifdef CONFIG_PPC32
+# define GET_ADDR(addr) addr
+#else
+/* PowerPC64's functions are data that points to the functions */
+# define GET_ADDR(addr) *(unsigned long *)addr
+#endif
+
+notrace int ftrace_ip_converted(unsigned long ip)
+{
+	unsigned int save;
+
+	ip -= CALL_BACK;
+	save = *(unsigned int *)ip;
+
+	return save == ftrace_nop;
+}
+
+static unsigned int notrace ftrace_calc_offset(long ip, long addr)
+{
+	return (int)((addr + CALL_BACK) - ip);
+}
+
+notrace unsigned char *ftrace_nop_replace(void)
+{
+	return (char *)&ftrace_nop;
+}
+
+notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
+{
+	static unsigned int op;
+
+	addr = GET_ADDR(addr);
+
+	/* Set to "bl addr" */
+	op = 0x48000001 | (ftrace_calc_offset(ip, addr) & 0x03fffffe);
+
+	/*
+	 * No locking needed, this must be called via kstop_machine
+	 * which in essence is like running on a uniprocessor machine.
+	 */
+	return (unsigned char *)&op;
+}
+
+#ifdef CONFIG_PPC64
+# define _ASM_ALIGN	" .align 3 "
+# define _ASM_PTR	" .llong "
+#else
+# define _ASM_ALIGN	" .align 2 "
+# define _ASM_PTR	" .long "
+#endif
+
+notrace int
+ftrace_modify_code(unsigned long ip, unsigned char *old_code,
+		   unsigned char *new_code)
+{
+	unsigned replaced;
+	unsigned old = *(unsigned *)old_code;
+	unsigned new = *(unsigned *)new_code;
+	int faulted = 0;
+
+	/* move the IP back to the start of the call */
+	ip -= CALL_BACK;
+
+	/*
+	 * Note: Due to modules and __init, code can
+	 *  disappear and change, we need to protect against faulting
+	 *  as well as code changing.
+	 *
+	 * No real locking needed, this code is run through
+	 * kstop_machine.
+	 */
+	asm volatile (
+		"1: lwz		%1, 0(%2)\n"
+		"   cmpw	%1, %5\n"
+		"   bne		2f\n"
+		"   stwu	%3, 0(%2)\n"
+		"2:\n"
+		".section .fixup, \"ax\"\n"
+		"3:	li %0, 1\n"
+		"	b 2b\n"
+		".previous\n"
+		".section __ex_table,\"a\"\n"
+		_ASM_ALIGN "\n"
+		_ASM_PTR "1b, 3b\n"
+		".previous"
+		: "=r"(faulted), "=r"(replaced)
+		: "r"(ip), "r"(new),
+		  "0"(faulted), "r"(old)
+		: "memory");
+
+	if (replaced != old && replaced != new)
+		faulted = 2;
+
+	if (!faulted)
+		flush_icache_range(ip, ip + 8);
+
+	return faulted;
+}
+
+notrace int ftrace_update_ftrace_func(ftrace_func_t func)
+{
+	unsigned long ip = (unsigned long)(&ftrace_call);
+	unsigned char old[4], *new;
+	int ret;
+
+	ip += CALL_BACK;
+
+	memcpy(old, &ftrace_call, 4);
+	new = ftrace_call_replace(ip, (unsigned long)func);
+	ret = ftrace_modify_code(ip, old, new);
+
+	return ret;
+}
+
+notrace int ftrace_mcount_set(unsigned long *data)
+{
+	unsigned long ip = (long)(&mcount_call);
+	unsigned long *addr = data;
+	unsigned char old[4], *new;
+
+	/* ip is at the location, but modify code will subtract this */
+	ip += CALL_BACK;
+
+	/*
+	 * Replace the mcount stub with a pointer to the
+	 * ip recorder function.
+	 */
+	memcpy(old, &mcount_call, 4);
+	new = ftrace_call_replace(ip, *addr);
+	*addr = ftrace_modify_code(ip, old, new);
+
+	return 0;
+}
+
+int __init ftrace_dyn_arch_init(void *data)
+{
+	/* This is running in kstop_machine */
+
+	ftrace_mcount_set(data);
+
+	return 0;
+}
+
Index: linux-sched-devel.git/arch/powerpc/kernel/irq.c
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/irq.c	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/irq.c	2008-05-14 19:31:56.000000000 -0700
@@ -98,7 +98,7 @@ EXPORT_SYMBOL(irq_desc);
 
 int distribute_irqs = 1;
 
-static inline unsigned long get_hard_enabled(void)
+static inline notrace unsigned long get_hard_enabled(void)
 {
 	unsigned long enabled;
 
@@ -108,13 +108,13 @@ static inline unsigned long get_hard_ena
 	return enabled;
 }
 
-static inline void set_soft_enabled(unsigned long enable)
+static inline notrace void set_soft_enabled(unsigned long enable)
 {
 	__asm__ __volatile__("stb %0,%1(13)"
 	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
 }
 
-void raw_local_irq_restore(unsigned long en)
+notrace void raw_local_irq_restore(unsigned long en)
 {
 	/*
 	 * get_paca()->soft_enabled = en;
Index: linux-sched-devel.git/arch/powerpc/kernel/setup_32.c
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/setup_32.c	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/setup_32.c	2008-05-14 19:31:56.000000000 -0700
@@ -47,6 +47,11 @@
 #include <asm/kgdb.h>
 #endif
 
+#ifdef CONFIG_FTRACE
+extern void _mcount(void);
+EXPORT_SYMBOL(_mcount);
+#endif
+
 extern void bootx_init(unsigned long r4, unsigned long phys);
 
 int boot_cpuid;
@@ -81,7 +86,7 @@ int ucache_bsize;
  * from the address that it was linked at, so we must use RELOC/PTRRELOC
  * to access static data (including strings).  -- paulus
  */
-unsigned long __init early_init(unsigned long dt_ptr)
+notrace unsigned long __init early_init(unsigned long dt_ptr)
 {
 	unsigned long offset = reloc_offset();
 	struct cpu_spec *spec;
@@ -111,7 +116,7 @@ unsigned long __init early_init(unsigned
  * This is called very early on the boot process, after a minimal
  * MMU environment has been set up but before MMU_init is called.
  */
-void __init machine_init(unsigned long dt_ptr, unsigned long phys)
+notrace void __init machine_init(unsigned long dt_ptr, unsigned long phys)
 {
 	/* Enable early debugging if any specified (see udbg.h) */
 	udbg_early_init();
@@ -133,7 +138,7 @@ void __init machine_init(unsigned long d
 
 #ifdef CONFIG_BOOKE_WDT
 /* Checks wdt=x and wdt_period=xx command-line option */
-int __init early_parse_wdt(char *p)
+notrace int __init early_parse_wdt(char *p)
 {
 	if (p && strncmp(p, "0", 1) != 0)
 	       booke_wdt_enabled = 1;
Index: linux-sched-devel.git/arch/powerpc/kernel/setup_64.c
===================================================================
--- linux-sched-devel.git.orig/arch/powerpc/kernel/setup_64.c	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/arch/powerpc/kernel/setup_64.c	2008-05-14 19:31:56.000000000 -0700
@@ -85,6 +85,11 @@ struct ppc64_caches ppc64_caches = {
 };
 EXPORT_SYMBOL_GPL(ppc64_caches);
 
+#ifdef CONFIG_FTRACE
+extern void _mcount(void);
+EXPORT_SYMBOL(_mcount);
+#endif
+
 /*
  * These are used in binfmt_elf.c to put aux entries on the stack
  * for each elf executable being started.
Index: linux-sched-devel.git/kernel/trace/trace_selftest.c
===================================================================
--- linux-sched-devel.git.orig/kernel/trace/trace_selftest.c	2008-05-14 19:30:50.000000000 -0700
+++ linux-sched-devel.git/kernel/trace/trace_selftest.c	2008-05-14 19:31:56.000000000 -0700
@@ -123,6 +123,7 @@ int trace_selftest_startup_dynamic_traci
 	int ret;
 	int save_ftrace_enabled = ftrace_enabled;
 	int save_tracer_enabled = tracer_enabled;
+	char *func_name;
 
 	/* The ftrace test PASSED */
 	printk(KERN_CONT "PASSED\n");
@@ -142,9 +143,15 @@ int trace_selftest_startup_dynamic_traci
 		return ret;
 	}
 
+	/*
+	 * Some archs *cough*PowerPC*cough* add characters to the
+	 * start of the function names. We simply put a '*' to
+	 * accommodate them.
+	 */
+	func_name = "*" STR(DYN_FTRACE_TEST_NAME);
+
 	/* filter only on our function */
-	ftrace_set_filter(STR(DYN_FTRACE_TEST_NAME),
-			  sizeof(STR(DYN_FTRACE_TEST_NAME)), 1);
+	ftrace_set_filter(func_name, strlen(func_name), 1);
 
 	/* enable tracing */
 	tr->ctrl = 1;

-- 


* Re: [PATCH 0/2] ftrace ported to PPC
  2008-05-15  3:49 [PATCH 0/2] ftrace ported to PPC Steven Rostedt
  2008-05-15  3:49 ` [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc Steven Rostedt
  2008-05-15  3:49 ` [PATCH 2/2] ftrace: support for PowerPC Steven Rostedt
@ 2008-05-15  4:40 ` Paul Mackerras
  2008-05-16 12:05   ` Ingo Molnar
  2 siblings, 1 reply; 17+ messages in thread
From: Paul Mackerras @ 2008-05-15  4:40 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, linux-kernel, pq, proski, sandmann, a.p.zijlstra,
	linuxppc-dev, benh

Steven Rostedt writes:

> The following two patches port ftrace to PowerPC. I tested this on
> both my PPC64 box and my 32-bit PowerBook G4.

Very cool!  Thanks.

> This applies to the latest sched-devel (with some extra hacks to get that
> to boot on PPC).
> 
> This also depends on the CFLAGS_REMOVE_foo.o patches I sent earlier.

Ingo, when do you intend to send the ftrace stuff to Linus?  In the
next merge window?

Paul.


* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-15  3:49 ` [PATCH 2/2] ftrace: support for PowerPC Steven Rostedt
@ 2008-05-15  5:28   ` David Miller
  2008-05-15 13:38     ` Steven Rostedt
  2008-05-15 16:48     ` Scott Wood
  2008-05-16 12:06   ` Ingo Molnar
  2008-05-20 14:04   ` Michael Ellerman
  2 siblings, 2 replies; 17+ messages in thread
From: David Miller @ 2008-05-15  5:28 UTC (permalink / raw)
  To: rostedt
  Cc: mingo, linux-kernel, pq, proski, sandmann, a.p.zijlstra,
	linuxppc-dev, paulus, benh, srostedt

From: Steven Rostedt <rostedt@goodmis.org>
Date: Wed, 14 May 2008 23:49:44 -0400

> +#ifdef CONFIG_FTRACE
> +#ifdef CONFIG_DYNAMIC_FTRACE
> +_GLOBAL(mcount)
> +_GLOBAL(_mcount)
> +	stwu	r1,-48(r1)
> +	stw	r3, 12(r1)
> +	stw	r4, 16(r1)
> +	stw	r5, 20(r1)
> +	stw	r6, 24(r1)
> +	mflr	r3
> +	stw	r7, 28(r1)
> +	mfcr	r5
> +	stw	r8, 32(r1)
> +	stw	r9, 36(r1)
> +	stw	r10,40(r1)
> +	stw	r3, 44(r1)
> +	stw	r5, 8(r1)

Yikes, that's really expensive.

Can't you do a tail call and let the function you end
up calling do all the callee-saved register pops onto
the stack?

That's what I did on sparc.


* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-15  5:28   ` David Miller
@ 2008-05-15 13:38     ` Steven Rostedt
  2008-05-15 16:48     ` Scott Wood
  1 sibling, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2008-05-15 13:38 UTC (permalink / raw)
  To: David Miller
  Cc: mingo, linux-kernel, pq, proski, sandmann, a.p.zijlstra,
	linuxppc-dev, paulus, benh, srostedt


On Wed, 14 May 2008, David Miller wrote:

> From: Steven Rostedt <rostedt@goodmis.org>
> Date: Wed, 14 May 2008 23:49:44 -0400
>
> > +#ifdef CONFIG_FTRACE
> > +#ifdef CONFIG_DYNAMIC_FTRACE
> > +_GLOBAL(mcount)
> > +_GLOBAL(_mcount)
> > +	stwu	r1,-48(r1)
> > +	stw	r3, 12(r1)
> > +	stw	r4, 16(r1)
> > +	stw	r5, 20(r1)
> > +	stw	r6, 24(r1)
> > +	mflr	r3
> > +	stw	r7, 28(r1)
> > +	mfcr	r5
> > +	stw	r8, 32(r1)
> > +	stw	r9, 36(r1)
> > +	stw	r10,40(r1)
> > +	stw	r3, 44(r1)
> > +	stw	r5, 8(r1)
>
> Yikes, that's really expensive.

Well, at least with dynamic ftrace, it's only expensive when tracing is
enabled.

>
> Can't you do a tail call and let the function you end
> up calling do all the callee-saved register pops onto
> the stack?

Not sure PPC has such a thing. I'm only a hobby PPC hacker (did it full
time in another life). If there is such a way, I'll be happy to Ack any
patches.

>
> That's what I did on sparc.
>

So that was your secret! ;-)

-- Steve



* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-15  5:28   ` David Miller
  2008-05-15 13:38     ` Steven Rostedt
@ 2008-05-15 16:48     ` Scott Wood
  1 sibling, 0 replies; 17+ messages in thread
From: Scott Wood @ 2008-05-15 16:48 UTC (permalink / raw)
  To: David Miller
  Cc: rostedt, proski, a.p.zijlstra, pq, linux-kernel, srostedt,
	linuxppc-dev, sandmann, paulus, mingo

On Wed, May 14, 2008 at 10:28:57PM -0700, David Miller wrote:
> From: Steven Rostedt <rostedt@goodmis.org>
> Date: Wed, 14 May 2008 23:49:44 -0400
> 
> > +#ifdef CONFIG_FTRACE
> > +#ifdef CONFIG_DYNAMIC_FTRACE
> > +_GLOBAL(mcount)
> > +_GLOBAL(_mcount)
> > +	stwu	r1,-48(r1)
> > +	stw	r3, 12(r1)
> > +	stw	r4, 16(r1)
> > +	stw	r5, 20(r1)
> > +	stw	r6, 24(r1)
> > +	mflr	r3
> > +	stw	r7, 28(r1)
> > +	mfcr	r5
> > +	stw	r8, 32(r1)
> > +	stw	r9, 36(r1)
> > +	stw	r10,40(r1)
> > +	stw	r3, 44(r1)
> > +	stw	r5, 8(r1)
> 
> Yikes, that's really expensive.
> 
> Can't you do a tail call and let the function you end
> up calling do all the callee-saved register pops onto
> the stack?

The PPC32 ABI seems to (unfortunately) suggest that, with mcount, all
registers are callee-saved (except for the modifiable-during-function-linkage
registers like r0, r11, and r12) -- so mcount has to save the registers that
the callee won't (because they're normally volatile).

-Scott


* Re: [PATCH 0/2] ftrace ported to PPC
  2008-05-15  4:40 ` [PATCH 0/2] ftrace ported to PPC Paul Mackerras
@ 2008-05-16 12:05   ` Ingo Molnar
  0 siblings, 0 replies; 17+ messages in thread
From: Ingo Molnar @ 2008-05-16 12:05 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Steven Rostedt, linux-kernel, pq, proski, sandmann, a.p.zijlstra,
	linuxppc-dev, benh


* Paul Mackerras <paulus@samba.org> wrote:

> Steven Rostedt writes:
> 
> > The following two patches port ftrace to PowerPC. I tested this on
> > both my PPC64 box and my 32-bit PowerBook G4.
> 
> Very cool!  Thanks.

great - could you please send an Acked-by line for those patches?

> > This applies to the latest sched-devel (with some extra hacks to get 
> > that to boot on PPC).
> > 
> > This also depends on the CFLAGS_REMOVE_foo.o patches I sent earlier.
> 
> Ingo, when do you intend to send the ftrace stuff to Linus?  In the 
> next merge window?

yeah, that's the plan, this merge window was too hot already. Right now 
there's an ftrace topic in the -tip tree:

    http://people.redhat.com/mingo/tip.git/README

that contains the latest ftrace patches. Please holler if you see 
something weird in it.

	Ingo


* Re: [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc
  2008-05-15  3:49 ` [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc Steven Rostedt
@ 2008-05-16 12:05   ` Ingo Molnar
  0 siblings, 0 replies; 17+ messages in thread
From: Ingo Molnar @ 2008-05-16 12:05 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, linuxppc-dev,
	paulus, benh, Steven Rostedt


* Steven Rostedt <rostedt@goodmis.org> wrote:

> PPC doesn't have the irqs_disabled_flags needed by ftrace. This patch 
> adds it.

applied, thanks.

	Ingo


* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-15  3:49 ` [PATCH 2/2] ftrace: support for PowerPC Steven Rostedt
  2008-05-15  5:28   ` David Miller
@ 2008-05-16 12:06   ` Ingo Molnar
  2008-05-20 14:04   ` Michael Ellerman
  2 siblings, 0 replies; 17+ messages in thread
From: Ingo Molnar @ 2008-05-16 12:06 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, pq, proski, sandmann, a.p.zijlstra, linuxppc-dev,
	paulus, benh, Steven Rostedt


* Steven Rostedt <rostedt@goodmis.org> wrote:

> This patch adds full support for ftrace for PowerPC (both 64 and 32 
> bit). This includes dynamic tracing and function filtering.

applied, thanks. Nice stuff! :)

	Ingo


* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-15  3:49 ` [PATCH 2/2] ftrace: support for PowerPC Steven Rostedt
  2008-05-15  5:28   ` David Miller
  2008-05-16 12:06   ` Ingo Molnar
@ 2008-05-20 14:04   ` Michael Ellerman
  2008-05-20 14:17     ` Benjamin Herrenschmidt
                       ` (2 more replies)
  2 siblings, 3 replies; 17+ messages in thread
From: Michael Ellerman @ 2008-05-20 14:04 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, proski, a.p.zijlstra, pq, linux-kernel,
	Steven Rostedt, linuxppc-dev, sandmann, paulus

[-- Attachment #1: Type: text/plain, Size: 6768 bytes --]

On Wed, 2008-05-14 at 23:49 -0400, Steven Rostedt wrote:
> plain text document attachment (ftrace-powerpc-port.patch)
> This patch adds full support for ftrace for PowerPC (both 64 and 32 bit).
> This includes dynamic tracing and function filtering.

Hi Steven,

Just a few comments inline ..

> Index: linux-sched-devel.git/arch/powerpc/kernel/Makefile
> ===================================================================
> --- linux-sched-devel.git.orig/arch/powerpc/kernel/Makefile	2008-05-14 19:30:53.000000000 -0700
> +++ linux-sched-devel.git/arch/powerpc/kernel/Makefile	2008-05-14 19:31:56.000000000 -0700
> @@ -12,6 +12,18 @@ CFLAGS_prom_init.o      += -fPIC
>  CFLAGS_btext.o		+= -fPIC
>  endif
>  
> +ifdef CONFIG_FTRACE
> +# Do not trace early boot code
> +CFLAGS_REMOVE_cputable.o = -pg
> +CFLAGS_REMOVE_prom_init.o = -pg

Why do we not want to trace early boot? Just because it's not useful? 

> Index: linux-sched-devel.git/arch/powerpc/kernel/entry_32.S
> ===================================================================
> --- linux-sched-devel.git.orig/arch/powerpc/kernel/entry_32.S	2008-05-14 19:30:50.000000000 -0700
> +++ linux-sched-devel.git/arch/powerpc/kernel/entry_32.S	2008-05-14 19:31:56.000000000 -0700
> @@ -1035,3 +1035,133 @@ machine_check_in_rtas:
>  	/* XXX load up BATs and panic */
>  
.. snip

> +_GLOBAL(mcount)
> +_GLOBAL(_mcount)
> +	stwu	r1,-48(r1)
> +	stw	r3, 12(r1)
> +	stw	r4, 16(r1)
> +	stw	r5, 20(r1)
> +	stw	r6, 24(r1)
> +	mflr	r3
> +	lwz	r4, 52(r1)
> +	mfcr	r5
> +	stw	r7, 28(r1)
> +	stw	r8, 32(r1)
> +	stw	r9, 36(r1)
> +	stw	r10,40(r1)
> +	stw	r3, 44(r1)
> +	stw	r5, 8(r1)
> +
> +	LOAD_REG_ADDR(r5, ftrace_trace_function)
> +#if 0
> +	mtctr	r3
> +	mr	r1, r5
> +	bctrl
> +#endif
> +	lwz	r5,0(r5)
> +#if 1
> +	mtctr	r5
> +	bctrl
> +#else
> +	bl	ftrace_stub
> +#endif

#if 0, #if 1 ?

> Index: linux-sched-devel.git/arch/powerpc/kernel/ftrace.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-sched-devel.git/arch/powerpc/kernel/ftrace.c	2008-05-14 19:31:56.000000000 -0700
> @@ -0,0 +1,165 @@
> +/*
> + * Code for replacing ftrace calls with jumps.
> + *
> + * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
> + *
> + * Thanks goes out to P.A. Semi, Inc for supplying me with a PPC64 box.
> + *
> + */
> +
> +#include <linux/spinlock.h>
> +#include <linux/hardirq.h>
> +#include <linux/ftrace.h>
> +#include <linux/percpu.h>
> +#include <linux/init.h>
> +#include <linux/list.h>
> +
> +#include <asm/cacheflush.h>
> +
> +#define CALL_BACK		4

I don't grok what you're doing with CALL_BACK; you add it in places and
subtract it in others - and it looks like you could do neither, but I haven't
gone over it in detail.
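
[Editor's note: a guess at the arithmetic, reconstructed from the patch
rather than stated by the author - the ip ftrace records is the return
address of the "bl _mcount" call, one instruction past the branch.]

```c
#include <assert.h>

/* Guesswork sketch: ftrace_modify_code() subtracts one 4-byte
 * instruction to get from the recorded return address back to the
 * branch itself, while ftrace_calc_offset() adds it to the target so
 * the displacement ends up relative to the branch, not its caller. */
#define CALL_BACK 4

static unsigned long branch_site(unsigned long recorded_ip)
{
	return recorded_ip - CALL_BACK;	/* address of the bl instruction */
}

static int calc_offset(long ip, long addr)
{
	return (int)((addr + CALL_BACK) - ip);
}
```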

> +static unsigned int ftrace_nop = 0x60000000;

I should really add a #define for that.

> +#ifdef CONFIG_PPC32
> +# define GET_ADDR(addr) addr
> +#else
> +/* PowerPC64's functions are data that points to the functions */
> +# define GET_ADDR(addr) *(unsigned long *)addr
> +#endif

And that.

.. snip

> +notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
> +{
> +	static unsigned int op;
> +
> +	addr = GET_ADDR(addr);
> +
> +	/* Set to "bl addr" */
> +	op = 0x48000001 | (ftrace_calc_offset(ip, addr) & 0x03fffffe);

0x03fffffe should be 0x03fffffc; if you set bit 1 you'll end up with a "bla" instruction,
i.e. branch absolute and link. That shouldn't happen as long as ip and addr are
properly aligned, but still.

In fact I think you should just use create_function_call() or create_branch() from
include/asm-powerpc/system.h
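
[Editor's note: the mask point above can be checked with a few lines of
plain C - a sketch using raw address deltas, ignoring the patch's
CALL_BACK adjustment.]

```c
#include <assert.h>
#include <stdint.h>

/* I-form branch encoding: opcode 18 in the top six bits (0x48000000),
 * LK=1 for "branch and link", and a 24-bit word-aligned signed offset
 * in bits 2..25.  Masking with 0x03fffffc keeps AA (bit 1) clear;
 * 0x03fffffe would let a misaligned delta set AA and produce "bla". */
static uint32_t encode_bl(uint32_t ip, uint32_t addr)
{
	return 0x48000001u | ((addr - ip) & 0x03fffffcu);
}
```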

> +#ifdef CONFIG_PPC64
> +# define _ASM_ALIGN	" .align 3 "
> +# define _ASM_PTR	" .llong "
> +#else
> +# define _ASM_ALIGN	" .align 2 "
> +# define _ASM_PTR	" .long "
> +#endif

We already have a #define for .long; it's called PPC_LONG (asm/asm-compat.h).

Perhaps we should add one for .align, PPC_LONG_ALIGN or something?

> +notrace int
> +ftrace_modify_code(unsigned long ip, unsigned char *old_code,
> +		   unsigned char *new_code)
> +{
> +	unsigned replaced;
> +	unsigned old = *(unsigned *)old_code;
> +	unsigned new = *(unsigned *)new_code;
> +	int faulted = 0;
> +
> +	/* move the IP back to the start of the call */
> +	ip -= CALL_BACK;
> +
> +	/*
> +	 * Note: Due to modules and __init, code can
> +	 *  disappear and change, we need to protect against faulting
> +	 *  as well as code changing.
> +	 *
> +	 * No real locking needed, this code is run through
> +	 * kstop_machine.
> +	 */
> +	asm volatile (
> +		"1: lwz		%1, 0(%2)\n"
> +		"   cmpw	%1, %5\n"
> +		"   bne		2f\n"
> +		"   stwu	%3, 0(%2)\n"
> +		"2:\n"
> +		".section .fixup, \"ax\"\n"
> +		"3:	li %0, 1\n"
> +		"	b 2b\n"
> +		".previous\n"
> +		".section __ex_table,\"a\"\n"
> +		_ASM_ALIGN "\n"
> +		_ASM_PTR "1b, 3b\n"
> +		".previous"

Or perhaps we just need a macro for adding exception table entries.

> +		: "=r"(faulted), "=r"(replaced)
> +		: "r"(ip), "r"(new),
> +		  "0"(faulted), "r"(old)
> +		: "memory");
> +
> +	if (replaced != old && replaced != new)
> +		faulted = 2;
> +
> +	if (!faulted)
> +		flush_icache_range(ip, ip + 8);
> +
> +	return faulted;
> +}

> Index: linux-sched-devel.git/arch/powerpc/kernel/setup_32.c
> ===================================================================
> --- linux-sched-devel.git.orig/arch/powerpc/kernel/setup_32.c	2008-05-14 19:30:50.000000000 -0700
> +++ linux-sched-devel.git/arch/powerpc/kernel/setup_32.c	2008-05-14 19:31:56.000000000 -0700
> @@ -47,6 +47,11 @@
>  #include <asm/kgdb.h>
>  #endif
>  
> +#ifdef CONFIG_FTRACE
> +extern void _mcount(void);
> +EXPORT_SYMBOL(_mcount);
> +#endif

Can you please put the extern in a header, and the EXPORT_SYMBOL in arch/powerpc/kernel/ftrace.c?

> Index: linux-sched-devel.git/arch/powerpc/kernel/setup_64.c
> ===================================================================
> --- linux-sched-devel.git.orig/arch/powerpc/kernel/setup_64.c	2008-05-14 19:30:50.000000000 -0700
> +++ linux-sched-devel.git/arch/powerpc/kernel/setup_64.c	2008-05-14 19:31:56.000000000 -0700
> @@ -85,6 +85,11 @@ struct ppc64_caches ppc64_caches = {
>  };
>  EXPORT_SYMBOL_GPL(ppc64_caches);
>  
> +#ifdef CONFIG_FTRACE
> +extern void _mcount(void);
> +EXPORT_SYMBOL(_mcount);
> +#endif

Ditto.


cheers


-- 
Michael Ellerman
OzLabs, IBM Australia Development Lab

wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)

We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-20 14:04   ` Michael Ellerman
@ 2008-05-20 14:17     ` Benjamin Herrenschmidt
  2008-05-20 14:51       ` Steven Rostedt
  2008-05-20 14:32     ` Steven Rostedt
  2008-05-22 18:31     ` [PATCH] ftrace: powerpc clean ups Steven Rostedt
  2 siblings, 1 reply; 17+ messages in thread
From: Benjamin Herrenschmidt @ 2008-05-20 14:17 UTC (permalink / raw)
  To: michael
  Cc: Steven Rostedt, proski, a.p.zijlstra, sandmann, pq, linux-kernel,
	linuxppc-dev, Steven Rostedt, paulus, Ingo Molnar


> 
> Why do we not want to trace early boot? Just because it's not useful? 

Not running at the linked address... might be causing trouble.

Ben.



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-20 14:04   ` Michael Ellerman
  2008-05-20 14:17     ` Benjamin Herrenschmidt
@ 2008-05-20 14:32     ` Steven Rostedt
  2008-05-22 18:31     ` [PATCH] ftrace: powerpc clean ups Steven Rostedt
  2 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2008-05-20 14:32 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ingo Molnar, proski, a.p.zijlstra, pq, linux-kernel,
	Steven Rostedt, linuxppc-dev, sandmann, paulus


On Wed, 21 May 2008, Michael Ellerman wrote:

> On Wed, 2008-05-14 at 23:49 -0400, Steven Rostedt wrote:
> > plain text document attachment (ftrace-powerpc-port.patch)
> > This patch adds full support for ftrace for PowerPC (both 64 and 32 bit).
> > This includes dynamic tracing and function filtering.
>
> Hi Steven,
>
> Just a few comments inline ..

Hi Michael,

I really appreciate this. It's been a few years since I did any real PPC
programming, so any comments are most definitely welcome.


>
> > Index: linux-sched-devel.git/arch/powerpc/kernel/Makefile
> > ===================================================================
> > --- linux-sched-devel.git.orig/arch/powerpc/kernel/Makefile	2008-05-14 19:30:53.000000000 -0700
> > +++ linux-sched-devel.git/arch/powerpc/kernel/Makefile	2008-05-14 19:31:56.000000000 -0700
> > @@ -12,6 +12,18 @@ CFLAGS_prom_init.o      += -fPIC
> >  CFLAGS_btext.o		+= -fPIC
> >  endif
> >
> > +ifdef CONFIG_FTRACE
> > +# Do not trace early boot code
> > +CFLAGS_REMOVE_cputable.o = -pg
> > +CFLAGS_REMOVE_prom_init.o = -pg
>
> Why do we not want to trace early boot? Just because it's not useful?

The -pg flag makes calls to the mcount code. I didn't look too deeply, but
at least in my first prototypes the early boot-up code would crash when
calling mcount. I found that simply keeping those files from calling mcount
made things OK. Perhaps I'm just hiding the problem, but tracing won't
happen that early anyway. We need to set up memory before tracing starts.

>
> > Index: linux-sched-devel.git/arch/powerpc/kernel/entry_32.S
> > ===================================================================
> > --- linux-sched-devel.git.orig/arch/powerpc/kernel/entry_32.S	2008-05-14 19:30:50.000000000 -0700
> > +++ linux-sched-devel.git/arch/powerpc/kernel/entry_32.S	2008-05-14 19:31:56.000000000 -0700
> > @@ -1035,3 +1035,133 @@ machine_check_in_rtas:
> >  	/* XXX load up BATs and panic */
> >
> ... snip
>
> > +_GLOBAL(mcount)
> > +_GLOBAL(_mcount)
> > +	stwu	r1,-48(r1)
> > +	stw	r3, 12(r1)
> > +	stw	r4, 16(r1)
> > +	stw	r5, 20(r1)
> > +	stw	r6, 24(r1)
> > +	mflr	r3
> > +	lwz	r4, 52(r1)
> > +	mfcr	r5
> > +	stw	r7, 28(r1)
> > +	stw	r8, 32(r1)
> > +	stw	r9, 36(r1)
> > +	stw	r10,40(r1)
> > +	stw	r3, 44(r1)
> > +	stw	r5, 8(r1)
> > +
> > +	LOAD_REG_ADDR(r5, ftrace_trace_function)
> > +#if 0
> > +	mtctr	r3
> > +	mr	r1, r5
> > +	bctrl
> > +#endif
> > +	lwz	r5,0(r5)
> > +#if 1
> > +	mtctr	r5
> > +	bctrl
> > +#else
> > +	bl	ftrace_stub
> > +#endif
>
> #if 0, #if 1 ?

Ouch! Thanks, that's leftover from debugging.

>
> > Index: linux-sched-devel.git/arch/powerpc/kernel/ftrace.c
> > ===================================================================
> > --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> > +++ linux-sched-devel.git/arch/powerpc/kernel/ftrace.c	2008-05-14 19:31:56.000000000 -0700
> > @@ -0,0 +1,165 @@
> > +/*
> > + * Code for replacing ftrace calls with jumps.
> > + *
> > + * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
> > + *
> > + * Thanks goes out to P.A. Semi, Inc for supplying me with a PPC64 box.
> > + *
> > + */
> > +
> > +#include <linux/spinlock.h>
> > +#include <linux/hardirq.h>
> > +#include <linux/ftrace.h>
> > +#include <linux/percpu.h>
> > +#include <linux/init.h>
> > +#include <linux/list.h>
> > +
> > +#include <asm/cacheflush.h>
> > +
> > +#define CALL_BACK		4
>
> I don't grok what you're doing with CALL_BACK, you add it in places and
> subtract in others - and it looks like you could do neither, but I haven't
> gone over it in detail.

I tried hard to make most of the complex logic stay in generic code.

What dynamic ftrace does: at start-up, each call site is simply a nop. Then,
after ftrace initializes, it calls kstop_machine, which calls into
the arch code to convert the nop into a call to a "record_ip" function.
That record_ip function starts recording the return address of the
mcount call (__builtin_return_address(0)).

Later, once a second, the ftraced daemon wakes up and checks whether any
new functions have been recorded. If they have been, it calls
kstop_machine again and, for each recorded function, passes in the
address that was recorded.

The arch is responsible for knowing how to translate
__builtin_return_address(0) into the address of the call instruction
itself, to be able to modify that code.

On boot-up, all functions call "mcount". The ftraced daemon converts
those calls to nops, and when tracing is enabled they are converted to
point directly to the tracing function.

This helps tremendously in making ftrace efficient.

>
> > +static unsigned int ftrace_nop = 0x60000000;
>
> I should really add a #define for that.
>
> > +#ifdef CONFIG_PPC32
> > +# define GET_ADDR(addr) addr
> > +#else
> > +/* PowerPC64's functions are data that points to the functions */
> > +# define GET_ADDR(addr) *(unsigned long *)addr
> > +#endif
>
> And that.
>
> ... snip
>
> > +notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
> > +{
> > +	static unsigned int op;
> > +
> > +	addr = GET_ADDR(addr);
> > +
> > +	/* Set to "bl addr" */
> > +	op = 0x48000001 | (ftrace_calc_offset(ip, addr) & 0x03fffffe);
>
> 0x03fffffe should be 0x03fffffc; if you set bit 1 you'll end up with a "bla" instruction,
> i.e. branch absolute and link. That shouldn't happen as long as ip and addr are
> properly aligned, but still.

Thanks for the update. I guess I misread the documents I have.

>
> In fact I think you should just use create_function_call() or create_branch() from
> include/asm-powerpc/system.h

Also good to know. I'll look into replacing them with these.

>
> > +#ifdef CONFIG_PPC64
> > +# define _ASM_ALIGN	" .align 3 "
> > +# define _ASM_PTR	" .llong "
> > +#else
> > +# define _ASM_ALIGN	" .align 2 "
> > +# define _ASM_PTR	" .long "
> > +#endif
>
> We already have a #define for .long; it's called PPC_LONG (asm/asm-compat.h)
>
> Perhaps we should add one for .align, PPC_LONG_ALIGN or something?

Ah, thanks. I'll wait till I see a PPC_LONG_ALIGN ;-)


>
> > +notrace int
> > +ftrace_modify_code(unsigned long ip, unsigned char *old_code,
> > +		   unsigned char *new_code)
> > +{
> > +	unsigned replaced;
> > +	unsigned old = *(unsigned *)old_code;
> > +	unsigned new = *(unsigned *)new_code;
> > +	int faulted = 0;
> > +
> > +	/* move the IP back to the start of the call */
> > +	ip -= CALL_BACK;
> > +
> > +	/*
> > +	 * Note: Due to modules and __init, code can
> > +	 *  disappear and change, we need to protect against faulting
> > +	 *  as well as code changing.
> > +	 *
> > +	 * No real locking needed, this code is run through
> > +	 * kstop_machine.
> > +	 */
> > +	asm volatile (
> > +		"1: lwz		%1, 0(%2)\n"
> > +		"   cmpw	%1, %5\n"
> > +		"   bne		2f\n"
> > +		"   stwu	%3, 0(%2)\n"
> > +		"2:\n"
> > +		".section .fixup, \"ax\"\n"
> > +		"3:	li %0, 1\n"
> > +		"	b 2b\n"
> > +		".previous\n"
> > +		".section __ex_table,\"a\"\n"
> > +		_ASM_ALIGN "\n"
> > +		_ASM_PTR "1b, 3b\n"
> > +		".previous"
>
> Or perhaps we just need a macro for adding exception table entries.

Yeah, that was taken from what x86 does.

>
> > +		: "=r"(faulted), "=r"(replaced)
> > +		: "r"(ip), "r"(new),
> > +		  "0"(faulted), "r"(old)
> > +		: "memory");
> > +
> > +	if (replaced != old && replaced != new)
> > +		faulted = 2;
> > +
> > +	if (!faulted)
> > +		flush_icache_range(ip, ip + 8);
> > +
> > +	return faulted;
> > +}
>
> > Index: linux-sched-devel.git/arch/powerpc/kernel/setup_32.c
> > ===================================================================
> > --- linux-sched-devel.git.orig/arch/powerpc/kernel/setup_32.c	2008-05-14 19:30:50.000000000 -0700
> > +++ linux-sched-devel.git/arch/powerpc/kernel/setup_32.c	2008-05-14 19:31:56.000000000 -0700
> > @@ -47,6 +47,11 @@
> >  #include <asm/kgdb.h>
> >  #endif
> >
> > +#ifdef CONFIG_FTRACE
> > +extern void _mcount(void);
> > +EXPORT_SYMBOL(_mcount);
> > +#endif
>
> Can you please put the extern in a header, and the EXPORT_SYMBOL in
> arch/powerpc/kernel/ftrace.c?

Actually, I think Ingo added this into the generic code. I'll see what's
in there now.

>
> > Index: linux-sched-devel.git/arch/powerpc/kernel/setup_64.c
> > ===================================================================
> > --- linux-sched-devel.git.orig/arch/powerpc/kernel/setup_64.c	2008-05-14 19:30:50.000000000 -0700
> > +++ linux-sched-devel.git/arch/powerpc/kernel/setup_64.c	2008-05-14 19:31:56.000000000 -0700
> > @@ -85,6 +85,11 @@ struct ppc64_caches ppc64_caches = {
> >  };
> >  EXPORT_SYMBOL_GPL(ppc64_caches);
> >
> > +#ifdef CONFIG_FTRACE
> > +extern void _mcount(void);
> > +EXPORT_SYMBOL(_mcount);
> > +#endif
>
> Ditto.

Ditto too ;-)


Thanks a lot for your feedback!

-- Steve


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] ftrace: support for PowerPC
  2008-05-20 14:17     ` Benjamin Herrenschmidt
@ 2008-05-20 14:51       ` Steven Rostedt
  0 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2008-05-20 14:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: michael, proski, a.p.zijlstra, sandmann, pq, linux-kernel,
	linuxppc-dev, Steven Rostedt, paulus, Ingo Molnar


On Tue, 20 May 2008, Benjamin Herrenschmidt wrote:

> > Why do we not want to trace early boot? Just because it's not useful?
>
> Not running at the linked address... might be causing trouble.

I figured it was something like that, so I didn't look too deeply into it,
and decided that it was best just not to trace it.

-- Steve


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH] ftrace: powerpc clean ups
  2008-05-20 14:04   ` Michael Ellerman
  2008-05-20 14:17     ` Benjamin Herrenschmidt
  2008-05-20 14:32     ` Steven Rostedt
@ 2008-05-22 18:31     ` Steven Rostedt
  2008-05-27 15:36       ` Thomas Gleixner
  2008-06-02  2:15       ` Michael Ellerman
  2 siblings, 2 replies; 17+ messages in thread
From: Steven Rostedt @ 2008-05-22 18:31 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Michael Ellerman, proski, a.p.zijlstra, Pekka Paalanen, LKML,
	Steven Rostedt, linuxppc-dev, Soeren Sandmann Pedersen, paulus


This patch cleans up the ftrace code in PowerPC based on the comments from
Michael Ellerman.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/powerpc/kernel/entry_32.S  |   11 ++---------
 arch/powerpc/kernel/ftrace.c    |    8 +++++++-
 arch/powerpc/kernel/ppc_ksyms.c |    5 +++++
 arch/powerpc/kernel/setup_32.c  |    5 -----
 arch/powerpc/kernel/setup_64.c  |    5 -----
 include/asm-powerpc/ftrace.h    |    6 ++++++
 6 files changed, 20 insertions(+), 20 deletions(-)

Index: linux-tip.git/arch/powerpc/kernel/entry_32.S
===================================================================
--- linux-tip.git.orig/arch/powerpc/kernel/entry_32.S	2008-05-22 09:17:51.000000000 -0700
+++ linux-tip.git/arch/powerpc/kernel/entry_32.S	2008-05-22 09:18:21.000000000 -0700
@@ -1129,18 +1129,11 @@ _GLOBAL(_mcount)
 	stw	r5, 8(r1)

 	LOAD_REG_ADDR(r5, ftrace_trace_function)
-#if 0
-	mtctr	r3
-	mr	r1, r5
-	bctrl
-#endif
 	lwz	r5,0(r5)
-#if 1
+
 	mtctr	r5
 	bctrl
-#else
-	bl	ftrace_stub
-#endif
+
 	nop

 	lwz	r6, 8(r1)
Index: linux-tip.git/arch/powerpc/kernel/ftrace.c
===================================================================
--- linux-tip.git.orig/arch/powerpc/kernel/ftrace.c	2008-05-22 09:19:12.000000000 -0700
+++ linux-tip.git/arch/powerpc/kernel/ftrace.c	2008-05-22 09:29:45.000000000 -0700
@@ -51,10 +51,16 @@ notrace unsigned char *ftrace_call_repla
 {
 	static unsigned int op;

+	/*
+	 * It would be nice to just use create_function_call, but that will
+	 * update the code itself. Here we need to just return the
+	 * instruction that is going to be modified, without modifying the
+	 * code.
+	 */
 	addr = GET_ADDR(addr);

 	/* Set to "bl addr" */
-	op = 0x48000001 | (ftrace_calc_offset(ip, addr) & 0x03fffffe);
+	op = 0x48000001 | (ftrace_calc_offset(ip, addr) & 0x03fffffc);

 	/*
 	 * No locking needed, this must be called via kstop_machine
Index: linux-tip.git/arch/powerpc/kernel/ppc_ksyms.c
===================================================================
--- linux-tip.git.orig/arch/powerpc/kernel/ppc_ksyms.c	2008-05-22 09:37:28.000000000 -0700
+++ linux-tip.git/arch/powerpc/kernel/ppc_ksyms.c	2008-05-22 11:07:36.000000000 -0700
@@ -43,6 +43,7 @@
 #include <asm/div64.h>
 #include <asm/signal.h>
 #include <asm/dcr.h>
+#include <asm/ftrace.h>

 #ifdef CONFIG_PPC32
 extern void transfer_to_handler(void);
@@ -68,6 +69,10 @@ EXPORT_SYMBOL(single_step_exception);
 EXPORT_SYMBOL(sys_sigreturn);
 #endif

+#ifdef CONFIG_FTRACE
+EXPORT_SYMBOL(_mcount);
+#endif
+
 EXPORT_SYMBOL(strcpy);
 EXPORT_SYMBOL(strncpy);
 EXPORT_SYMBOL(strcat);
Index: linux-tip.git/arch/powerpc/kernel/setup_64.c
===================================================================
--- linux-tip.git.orig/arch/powerpc/kernel/setup_64.c	2008-05-22 09:35:30.000000000 -0700
+++ linux-tip.git/arch/powerpc/kernel/setup_64.c	2008-05-22 11:25:30.000000000 -0700
@@ -85,11 +85,6 @@ struct ppc64_caches ppc64_caches = {
 };
 EXPORT_SYMBOL_GPL(ppc64_caches);

-#ifdef CONFIG_FTRACE
-extern void _mcount(void);
-EXPORT_SYMBOL(_mcount);
-#endif
-
 /*
  * These are used in binfmt_elf.c to put aux entries on the stack
  * for each elf executable being started.
Index: linux-tip.git/include/asm-powerpc/ftrace.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-tip.git/include/asm-powerpc/ftrace.h	2008-05-22 09:39:39.000000000 -0700
@@ -0,0 +1,6 @@
+#ifndef _ASM_POWERPC_FTRACE
+#define _ASM_POWERPC_FTRACE
+
+extern void _mcount(void);
+
+#endif
Index: linux-tip.git/arch/powerpc/kernel/setup_32.c
===================================================================
--- linux-tip.git.orig/arch/powerpc/kernel/setup_32.c	2008-05-22 09:35:30.000000000 -0700
+++ linux-tip.git/arch/powerpc/kernel/setup_32.c	2008-05-22 11:25:39.000000000 -0700
@@ -47,11 +47,6 @@
 #include <asm/kgdb.h>
 #endif

-#ifdef CONFIG_FTRACE
-extern void _mcount(void);
-EXPORT_SYMBOL(_mcount);
-#endif
-
 extern void bootx_init(unsigned long r4, unsigned long phys);

 int boot_cpuid;



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] ftrace: powerpc clean ups
  2008-05-22 18:31     ` [PATCH] ftrace: powerpc clean ups Steven Rostedt
@ 2008-05-27 15:36       ` Thomas Gleixner
  2008-06-02  2:15       ` Michael Ellerman
  1 sibling, 0 replies; 17+ messages in thread
From: Thomas Gleixner @ 2008-05-27 15:36 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, Michael Ellerman, proski, a.p.zijlstra,
	Pekka Paalanen, LKML, Steven Rostedt, linuxppc-dev,
	Soeren Sandmann Pedersen, paulus

On Thu, 22 May 2008, Steven Rostedt wrote:
> 
> This patch cleans up the ftrace code in PowerPC based on the comments from
> Michael Ellerman.

Applied, thanks

	 tglx

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] ftrace: powerpc clean ups
  2008-05-22 18:31     ` [PATCH] ftrace: powerpc clean ups Steven Rostedt
  2008-05-27 15:36       ` Thomas Gleixner
@ 2008-06-02  2:15       ` Michael Ellerman
  1 sibling, 0 replies; 17+ messages in thread
From: Michael Ellerman @ 2008-06-02  2:15 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, proski, a.p.zijlstra, Pekka Paalanen, LKML,
	Steven Rostedt, linuxppc-dev, Soeren Sandmann Pedersen, paulus

[-- Attachment #1: Type: text/plain, Size: 846 bytes --]

On Thu, 2008-05-22 at 14:31 -0400, Steven Rostedt wrote:
> This patch cleans up the ftrace code in PowerPC based on the comments from
> Michael Ellerman.

Hi Steven,

Thanks for that.

I posted some patches last week, also in my git tree[1], that should
allow you to use create_branch() in your code (and also facilitate some
other things I wanted to do). I added a #define for NOP as well.

If/when my patches go into powerpc-next I'll do a patch against
linux-next for ftrace to use them.

cheers

[1]: http://git.kernel.org/?p=linux/kernel/git/mpe/linux-2.6.git;a=summary

-- 
Michael Ellerman
OzLabs, IBM Australia Development Lab

wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)

We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2008-06-02  2:15 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-05-15  3:49 [PATCH 0/2] ftrace ported to PPC Steven Rostedt
2008-05-15  3:49 ` [PATCH 1/2] ftrace ppc: add irqs_disabled_flags to ppc Steven Rostedt
2008-05-16 12:05   ` Ingo Molnar
2008-05-15  3:49 ` [PATCH 2/2] ftrace: support for PowerPC Steven Rostedt
2008-05-15  5:28   ` David Miller
2008-05-15 13:38     ` Steven Rostedt
2008-05-15 16:48     ` Scott Wood
2008-05-16 12:06   ` Ingo Molnar
2008-05-20 14:04   ` Michael Ellerman
2008-05-20 14:17     ` Benjamin Herrenschmidt
2008-05-20 14:51       ` Steven Rostedt
2008-05-20 14:32     ` Steven Rostedt
2008-05-22 18:31     ` [PATCH] ftrace: powerpc clean ups Steven Rostedt
2008-05-27 15:36       ` Thomas Gleixner
2008-06-02  2:15       ` Michael Ellerman
2008-05-15  4:40 ` [PATCH 0/2] ftrace ported to PPC Paul Mackerras
2008-05-16 12:05   ` Ingo Molnar
