linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/13 v2] ftrace: clean ups and fixes
@ 2008-10-22 21:27 Steven Rostedt
  2008-10-22 21:27 ` [PATCH 01/13 v2] ftrace: handle generic arch calls Steven Rostedt
                   ` (13 more replies)
  0 siblings, 14 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds

[
  Changes since v1:

   Tested CONFIG_FTRACE & CONFIG_DYNAMIC_FTRACE
          CONFIG_FTRACE & !CONFIG_DYNAMIC_FTRACE
	  !CONFIG_FTRACE
	 on _both_ x86_64 and i386 kernels.

   Fixed typo in recordmcount.pl script s/xi386/i386/

   Used fault return codes for ftrace_modify_code return.

   Added patch to use probe_kernel_read/write.

   Better changelog on ftrace_kill change.

   Use return code on WARN_ON*.

   Removed "notrace" from arch/*/kernel/ftrace.c and instead
   used the CFLAGS_REMOVE_ftrace.o = -pg in the Makefile.
   This change cleans up the look of the code, and prevents
   any further bugs from happening (forgetting a notrace).

   Added sparc64 in the removal of ftrace_mcount_set (forgot the
   quilt add in the first version)
]

This is a series of patches to make ftrace more robust, along with some clean ups.

The first couple of patches fix the recordmcount.pl script and change
it to record only the .text section functions. This means that
the init sections will not be processed.

I still have a patch to add notrace to the init sections, not for
safety reasons, but for performance. Even though the init sections will
no longer be processed, they will still call mcount. Note, mcount is just
a ret, but why have the init code waste CPU cycles calling a stub function?

An FTRACE_WARN_ON is added to change all WARN_ONs to not only print a
warning, but to disable ftrace as well.
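The pattern (implemented for real in patch 7 of this series) can be sketched as a userspace mock. Here WARN_ON is a hypothetical stand-in for the kernel macro, and a plain flag stands in for the real ftrace_disabled handling:

```c
#include <stdio.h>

/* Userspace stand-in for the kernel's ftrace state. */
int ftrace_disabled;

/* Mimic WARN_ON(): print a warning and evaluate to the condition,
 * so the caller can react when it fired. */
#define WARN_ON(cond) \
	({ int __c = !!(cond); \
	   if (__c) \
		fprintf(stderr, "WARNING: %s:%d\n", __FILE__, __LINE__); \
	   __c; })

/* The pattern from the series: warn, then shut ftrace down for good. */
#define FTRACE_WARN_ON(cond)			\
	do {					\
		if (WARN_ON(cond))		\
			ftrace_disabled = 1;	\
	} while (0)

int check(int bad)
{
	FTRACE_WARN_ON(bad);
	return ftrace_disabled;
}
```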

The later patches are a bit more drastic. Since the daemon is error prone,
I stripped it out. In doing so, I had to disable dynamic ftrace on all
archs that use it.  The archs can have dynamic ftrace re-enabled when they
are ported to the recordmcount.pl method. (Arch maintainers, please contact
me if you want help. I can do it with some information about your arch.)

Since the hash was created to work with the daemon, that too was stripped
out, making the remaining code smaller and cleaner. The kprobe hooks
in ftrace may need to be reworked.

-- Steve

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 01/13 v2] ftrace: handle generic arch calls
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-27 15:35   ` Ingo Molnar
  2008-10-22 21:27 ` [PATCH 02/13 v2] ftrace: dynamic ftrace process only text section Steven Rostedt
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-rm-script-bit-interface.patch --]
[-- Type: text/plain, Size: 2280 bytes --]

The recordmcount script requires that the actual arch is passed in.
This works well when ARCH=i386 or ARCH=x86_64, but it does not handle
the case of ARCH=x86.

This patch adds a parameter to the script to pass in the number of
bits of the architecture, so that it can determine whether x86 should
be treated as x86_64 or i386.
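The mapping amounts to a two-way choice on the word size. A hypothetical C restatement of the Perl logic below (resolve_arch is not a real kernel or script function, just an illustration):

```c
#include <string.h>

/* C restatement of the Perl logic added to recordmcount.pl:
 * when the generic "x86" arch is passed in, use the word size
 * to pick the real arch name; all other archs pass through. */
const char *resolve_arch(const char *arch, int bits)
{
	if (strcmp(arch, "x86") == 0)
		return bits == 64 ? "x86_64" : "i386";
	return arch;
}
```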

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 scripts/Makefile.build  |   10 ++++++++--
 scripts/recordmcount.pl |   11 ++++++++++-
 2 files changed, 18 insertions(+), 3 deletions(-)

Index: linux-compile.git/scripts/Makefile.build
===================================================================
--- linux-compile.git.orig/scripts/Makefile.build	2008-10-22 15:09:04.000000000 -0400
+++ linux-compile.git/scripts/Makefile.build	2008-10-22 15:09:07.000000000 -0400
@@ -198,10 +198,16 @@ cmd_modversions =							\
 	fi;
 endif
 
+ifdef CONFIG_64BIT
+arch_bits = 64
+else
+arch_bits = 32
+endif
+
 ifdef CONFIG_FTRACE_MCOUNT_RECORD
 cmd_record_mcount = perl $(srctree)/scripts/recordmcount.pl \
-	"$(ARCH)" "$(OBJDUMP)" "$(OBJCOPY)" "$(CC)" "$(LD)" "$(NM)" "$(RM)" \
-	"$(MV)" "$(@)";
+	"$(ARCH)" "$(arch_bits)" "$(OBJDUMP)" "$(OBJCOPY)" "$(CC)" "$(LD)" \
+	"$(NM)" "$(RM)" "$(MV)" "$(@)";
 endif
 
 define rule_cc_o_c
Index: linux-compile.git/scripts/recordmcount.pl
===================================================================
--- linux-compile.git.orig/scripts/recordmcount.pl	2008-10-22 15:09:04.000000000 -0400
+++ linux-compile.git/scripts/recordmcount.pl	2008-10-22 15:09:45.000000000 -0400
@@ -106,7 +106,8 @@ if ($#ARGV < 6) {
 	exit(1);
 }
 
-my ($arch, $objdump, $objcopy, $cc, $ld, $nm, $rm, $mv, $inputfile) = @ARGV;
+my ($arch, $bits, $objdump, $objcopy, $cc,
+    $ld, $nm, $rm, $mv, $inputfile) = @ARGV;
 
 $objdump = "objdump" if ((length $objdump) == 0);
 $objcopy = "objcopy" if ((length $objcopy) == 0);
@@ -129,6 +130,14 @@ my $function_regex;	# Find the name of a
 			#    (return offset and func name)
 my $mcount_regex;	# Find the call site to mcount (return offset)
 
+if ($arch eq "x86") {
+    if ($bits == 64) {
+	$arch = "x86_64";
+    } else {
+	$arch = "i386";
+    }
+}
+
 if ($arch eq "x86_64") {
     $section_regex = "Disassembly of section";
     $function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";

-- 


* [PATCH 02/13 v2] ftrace: dynamic ftrace process only text section
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
  2008-10-22 21:27 ` [PATCH 01/13 v2] ftrace: handle generic arch calls Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 03/13 v2] ftrace: return error on failed modified text Steven Rostedt
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-record-only-text.patch --]
[-- Type: text/plain, Size: 2198 bytes --]

The text section stays in memory without ever leaving, with the
exception of modules, and modules know how to handle that case. With
the dynamic ftrace tracer, we need to make sure that it does not try
to modify code that no longer exists. The only safe section is .text.

This patch changes the recordmcount script to only record the mcount calls
in the .text sections.
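The allowlist approach can be restated in C. This is a hypothetical sketch of the same idea the Perl %text_sections hash implements, not code from the patch:

```c
#include <string.h>

/* Only sections listed here are safe for dynamic ftrace to patch
 * at runtime; everything else (init, exit, ...) may disappear. */
static const char *text_sections[] = { ".text" };

int is_recordable_section(const char *name)
{
	size_t i;

	for (i = 0; i < sizeof(text_sections) / sizeof(text_sections[0]); i++)
		if (strcmp(name, text_sections[i]) == 0)
			return 1;
	return 0;
}
```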

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 scripts/recordmcount.pl |   17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

Index: linux-compile.git/scripts/recordmcount.pl
===================================================================
--- linux-compile.git.orig/scripts/recordmcount.pl	2008-10-22 11:45:59.000000000 -0400
+++ linux-compile.git/scripts/recordmcount.pl	2008-10-22 11:46:33.000000000 -0400
@@ -109,6 +109,11 @@ if ($#ARGV < 6) {
 my ($arch, $bits, $objdump, $objcopy, $cc,
     $ld, $nm, $rm, $mv, $inputfile) = @ARGV;
 
 +# Acceptable sections to record.
+my %text_sections = (
+     ".text" => 1,
+);
+
 $objdump = "objdump" if ((length $objdump) == 0);
 $objcopy = "objcopy" if ((length $objcopy) == 0);
 $cc = "gcc" if ((length $cc) == 0);
@@ -139,7 +144,7 @@ if ($arch eq "x86") {
 }
 
 if ($arch eq "x86_64") {
-    $section_regex = "Disassembly of section";
+    $section_regex = "Disassembly of section\\s+(\\S+):";
     $function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";
     $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\smcount([+-]0x[0-9a-zA-Z]+)?\$";
     $type = ".quad";
@@ -151,7 +156,7 @@ if ($arch eq "x86_64") {
     $cc .= " -m64";
 
 } elsif ($arch eq "i386") {
-    $section_regex = "Disassembly of section";
+    $section_regex = "Disassembly of section\\s+(\\S+):";
     $function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";
     $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\smcount\$";
     $type = ".long";
@@ -298,7 +303,13 @@ my $text;
 while (<IN>) {
     # is it a section?
     if (/$section_regex/) {
-	$read_function = 1;
+
+	# Only record text sections that we know are safe
+	if (defined($text_sections{$1})) {
+	    $read_function = 1;
+	} else {
+	    $read_function = 0;
+	}
 	# print out any recorded offsets
 	update_funcs() if ($text_found);
 

-- 


* [PATCH 03/13 v2] ftrace: return error on failed modified text.
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
  2008-10-22 21:27 ` [PATCH 01/13 v2] ftrace: handle generic arch calls Steven Rostedt
  2008-10-22 21:27 ` [PATCH 02/13 v2] ftrace: dynamic ftrace process only text section Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 04/13 v2] ftrace: comment arch ftrace code Steven Rostedt
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-return-error-on-mod-code.patch --]
[-- Type: text/plain, Size: 4496 bytes --]

Have ftrace_modify_code return error values:

  -EFAULT on an error reading the address

  -EINVAL if what is read does not match what was expected

  -EPERM  if the write fails to update after a successful match.
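A userspace analogue of this contract, assuming plain memcmp/memcpy in place of the kernel's fault-safe copies (so -EFAULT and -EPERM cannot actually occur in this sketch):

```c
#include <errno.h>
#include <string.h>

#define MCOUNT_INSN_SIZE 5

/* Compare the current bytes at ip against what we expect to be
 * there; only on an exact match do we patch in the new bytes.
 * The real kernel code uses fault-safe copies for both steps. */
int modify_code(unsigned char *ip, unsigned char *old_code,
		unsigned char *new_code)
{
	if (memcmp(ip, old_code, MCOUNT_INSN_SIZE) != 0)
		return -EINVAL;
	memcpy(ip, new_code, MCOUNT_INSN_SIZE);
	return 0;
}
```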

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/x86/kernel/ftrace.c |   12 +++++++-----
 include/linux/ftrace.h   |   24 ++++++++++++++++++++++--
 kernel/trace/ftrace.c    |   21 +++++++++++++++------
 3 files changed, 44 insertions(+), 13 deletions(-)

Index: linux-compile.git/arch/x86/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/x86/kernel/ftrace.c	2008-10-22 16:30:39.000000000 -0400
+++ linux-compile.git/arch/x86/kernel/ftrace.c	2008-10-22 16:30:45.000000000 -0400
@@ -71,14 +71,16 @@ ftrace_modify_code(unsigned long ip, uns
 	 * No real locking needed, this code is run through
 	 * kstop_machine, or before SMP starts.
 	 */
-	if (__copy_from_user_inatomic(replaced, (char __user *)ip, MCOUNT_INSN_SIZE))
-		return 1;
+	if (__copy_from_user_inatomic(replaced, (char __user *)ip,
+				      MCOUNT_INSN_SIZE))
+		return -EFAULT;
 
 	if (memcmp(replaced, old_code, MCOUNT_INSN_SIZE) != 0)
-		return 2;
+		return -EINVAL;
 
-	WARN_ON_ONCE(__copy_to_user_inatomic((char __user *)ip, new_code,
-				    MCOUNT_INSN_SIZE));
+	if (__copy_to_user_inatomic((char __user *)ip, new_code,
+				    MCOUNT_INSN_SIZE))
+		return -EPERM;
 
 	sync_core();
 
Index: linux-compile.git/kernel/trace/ftrace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/ftrace.c	2008-10-22 16:30:39.000000000 -0400
+++ linux-compile.git/kernel/trace/ftrace.c	2008-10-22 16:30:45.000000000 -0400
@@ -591,22 +591,22 @@ ftrace_code_disable(struct dyn_ftrace *r
 {
 	unsigned long ip;
 	unsigned char *nop, *call;
-	int failed;
+	int ret;
 
 	ip = rec->ip;
 
 	nop = ftrace_nop_replace();
 	call = ftrace_call_replace(ip, mcount_addr);
 
-	failed = ftrace_modify_code(ip, call, nop);
-	if (failed) {
-		switch (failed) {
-		case 1:
+	ret = ftrace_modify_code(ip, call, nop);
+	if (ret) {
+		switch (ret) {
+		case -EFAULT:
 			WARN_ON_ONCE(1);
 			pr_info("ftrace faulted on modifying ");
 			print_ip_sym(ip);
 			break;
-		case 2:
+		case -EINVAL:
 			WARN_ON_ONCE(1);
 			pr_info("ftrace failed to modify ");
 			print_ip_sym(ip);
@@ -615,6 +615,15 @@ ftrace_code_disable(struct dyn_ftrace *r
 			print_ip_ins(" replace: ", nop);
 			printk(KERN_CONT "\n");
 			break;
+		case -EPERM:
+			WARN_ON_ONCE(1);
+			pr_info("ftrace faulted on writing ");
+			print_ip_sym(ip);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			pr_info("ftrace faulted on unknown error ");
+			print_ip_sym(ip);
 		}
 
 		rec->flags |= FTRACE_FL_FAILED;
Index: linux-compile.git/include/linux/ftrace.h
===================================================================
--- linux-compile.git.orig/include/linux/ftrace.h	2008-10-22 16:30:39.000000000 -0400
+++ linux-compile.git/include/linux/ftrace.h	2008-10-22 16:31:37.000000000 -0400
@@ -72,13 +72,33 @@ extern unsigned char *ftrace_nop_replace
 extern unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr);
 extern int ftrace_dyn_arch_init(void *data);
 extern int ftrace_mcount_set(unsigned long *data);
-extern int ftrace_modify_code(unsigned long ip, unsigned char *old_code,
-			      unsigned char *new_code);
 extern int ftrace_update_ftrace_func(ftrace_func_t func);
 extern void ftrace_caller(void);
 extern void ftrace_call(void);
 extern void mcount_call(void);
 
+/**
+ * ftrace_modify_code - modify code segment
+ * @ip: the address of the code segment
+ * @old_code: the contents of what is expected to be there
+ * @new_code: the code to patch in
+ *
 + * This is a very sensitive operation and great care needs
+ * to be taken by the arch.  The operation should carefully
+ * read the location, check to see if what is read is indeed
+ * what we expect it to be, and then on success of the compare,
+ * it should write to the location.
+ *
+ * Return must be:
+ *  0 on success
+ *  -EFAULT on error reading the location
+ *  -EINVAL on a failed compare of the contents
+ *  -EPERM  on error writing to the location
+ * Any other value will be considered a failure.
+ */
+extern int ftrace_modify_code(unsigned long ip, unsigned char *old_code,
+			      unsigned char *new_code);
+
 extern int skip_trace(unsigned long ip);
 
 extern void ftrace_release(void *start, unsigned long size);

-- 


* [PATCH 04/13 v2] ftrace: comment arch ftrace code
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (2 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 03/13 v2] ftrace: return error on failed modified text Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 05/13 v2] ftrace: use probe_kernel Steven Rostedt
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-comment-mod-code.patch --]
[-- Type: text/plain, Size: 1343 bytes --]

Add comments to explain what is happening in the x86 arch ftrace code.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/x86/kernel/ftrace.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Index: linux-compile.git/arch/x86/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/x86/kernel/ftrace.c	2008-10-22 15:23:20.000000000 -0400
+++ linux-compile.git/arch/x86/kernel/ftrace.c	2008-10-22 15:24:15.000000000 -0400
@@ -66,18 +66,23 @@ ftrace_modify_code(unsigned long ip, uns
 	/*
 	 * Note: Due to modules and __init, code can
 	 *  disappear and change, we need to protect against faulting
-	 *  as well as code changing.
+	 *  as well as code changing. We do this by using the
+	 *  __copy_*_user functions.
 	 *
 	 * No real locking needed, this code is run through
 	 * kstop_machine, or before SMP starts.
 	 */
+
+	/* read the text we want to modify */
 	if (__copy_from_user_inatomic(replaced, (char __user *)ip,
 				      MCOUNT_INSN_SIZE))
 		return -EFAULT;
 
+	/* Make sure it is what we expect it to be */
 	if (memcmp(replaced, old_code, MCOUNT_INSN_SIZE) != 0)
 		return -EINVAL;
 
+	/* replace the text with the new text */
 	if (__copy_to_user_inatomic((char __user *)ip, new_code,
 				    MCOUNT_INSN_SIZE))
 		return -EPERM;

-- 


* [PATCH 05/13 v2] ftrace: use probe_kernel
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (3 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 04/13 v2] ftrace: comment arch ftrace code Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 06/13 v2] ftrace: only have ftrace_kill atomic Steven Rostedt
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-use-probe-kernel.patch --]
[-- Type: text/plain, Size: 1184 bytes --]

Andrew Morton suggested using the proper API for reading and writing
kernel areas that might fault.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/x86/kernel/ftrace.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Index: linux-compile.git/arch/x86/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/x86/kernel/ftrace.c	2008-10-22 15:24:15.000000000 -0400
+++ linux-compile.git/arch/x86/kernel/ftrace.c	2008-10-22 15:26:07.000000000 -0400
@@ -74,8 +74,7 @@ ftrace_modify_code(unsigned long ip, uns
 	 */
 
 	/* read the text we want to modify */
-	if (__copy_from_user_inatomic(replaced, (char __user *)ip,
-				      MCOUNT_INSN_SIZE))
+	if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
 		return -EFAULT;
 
 	/* Make sure it is what we expect it to be */
@@ -83,8 +82,7 @@ ftrace_modify_code(unsigned long ip, uns
 		return -EINVAL;
 
 	/* replace the text with the new text */
-	if (__copy_to_user_inatomic((char __user *)ip, new_code,
-				    MCOUNT_INSN_SIZE))
+	if (probe_kernel_write((void *)ip, new_code, MCOUNT_INSN_SIZE))
 		return -EPERM;
 
 	sync_core();

-- 


* [PATCH 06/13 v2] ftrace: only have ftrace_kill atomic
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (4 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 05/13 v2] ftrace: use probe_kernel Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 07/13 v2] ftrace: add ftrace warn on to disable ftrace Steven Rostedt
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-only-kill-atomic.patch --]
[-- Type: text/plain, Size: 4146 bytes --]

When an anomaly is detected, we need a way to completely disable
ftrace. Right now we have two functions to do this: ftrace_kill and
ftrace_kill_atomic. ftrace_kill tries to do it in a "nice" way by
converting everything back to a nop.

The "nice" way is dangerous itself, so this patch removes it and only
keeps the "atomic" version, which is all that is needed.
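A rough userspace mock of why the atomic version is safe from any context: it only stores to flags and a function pointer and takes no locks, so it can run from panic or NMI paths. The names mirror the kernel's, but this is not the real implementation:

```c
/* Userspace mock of the ftrace kill path. */
int ftrace_disabled;
int ftrace_enabled = 1;

void ftrace_stub(void) { }
void dummy_tracer(void) { }

/* The currently registered trace callback. */
void (*ftrace_trace_function)(void) = dummy_tracer;

/* Kill ftrace: no locks, just plain stores, hence usable anywhere. */
void ftrace_kill(void)
{
	ftrace_disabled = 1;
	ftrace_enabled = 0;
	ftrace_trace_function = ftrace_stub;	/* stop calling tracers */
}
```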

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 include/linux/ftrace.h |    3 +--
 kernel/trace/ftrace.c  |   42 ++----------------------------------------
 kernel/trace/trace.c   |    2 +-
 3 files changed, 4 insertions(+), 43 deletions(-)

Index: linux-compile.git/include/linux/ftrace.h
===================================================================
--- linux-compile.git.orig/include/linux/ftrace.h	2008-10-22 15:57:31.000000000 -0400
+++ linux-compile.git/include/linux/ftrace.h	2008-10-22 15:58:35.000000000 -0400
@@ -40,7 +40,7 @@ extern void ftrace_stub(unsigned long a0
 # define register_ftrace_function(ops) do { } while (0)
 # define unregister_ftrace_function(ops) do { } while (0)
 # define clear_ftrace_function(ops) do { } while (0)
-static inline void ftrace_kill_atomic(void) { }
+static inline void ftrace_kill(void) { }
 #endif /* CONFIG_FTRACE */
 
 #ifdef CONFIG_DYNAMIC_FTRACE
@@ -117,7 +117,6 @@ static inline void ftrace_release(void *
 
 /* totally disable ftrace - can not re-enable after this */
 void ftrace_kill(void);
-void ftrace_kill_atomic(void);
 
 static inline void tracer_disable(void)
 {
Index: linux-compile.git/kernel/trace/ftrace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/ftrace.c	2008-10-22 15:49:49.000000000 -0400
+++ linux-compile.git/kernel/trace/ftrace.c	2008-10-22 15:58:35.000000000 -0400
@@ -1544,22 +1544,6 @@ int ftrace_force_update(void)
 	return ret;
 }
 
-static void ftrace_force_shutdown(void)
-{
-	struct task_struct *task;
-	int command = FTRACE_DISABLE_CALLS | FTRACE_UPDATE_TRACE_FUNC;
-
-	mutex_lock(&ftraced_lock);
-	task = ftraced_task;
-	ftraced_task = NULL;
-	ftraced_suspend = -1;
-	ftrace_run_update_code(command);
-	mutex_unlock(&ftraced_lock);
-
-	if (task)
-		kthread_stop(task);
-}
-
 static __init int ftrace_init_debugfs(void)
 {
 	struct dentry *d_tracer;
@@ -1752,17 +1736,16 @@ core_initcall(ftrace_dynamic_init);
 # define ftrace_shutdown()		do { } while (0)
 # define ftrace_startup_sysctl()	do { } while (0)
 # define ftrace_shutdown_sysctl()	do { } while (0)
-# define ftrace_force_shutdown()	do { } while (0)
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 /**
- * ftrace_kill_atomic - kill ftrace from critical sections
+ * ftrace_kill - kill ftrace
  *
  * This function should be used by panic code. It stops ftrace
  * but in a not so nice way. If you need to simply kill ftrace
  * from a non-atomic section, use ftrace_kill.
  */
-void ftrace_kill_atomic(void)
+void ftrace_kill(void)
 {
 	ftrace_disabled = 1;
 	ftrace_enabled = 0;
@@ -1773,27 +1756,6 @@ void ftrace_kill_atomic(void)
 }
 
 /**
- * ftrace_kill - totally shutdown ftrace
- *
- * This is a safety measure. If something was detected that seems
- * wrong, calling this function will keep ftrace from doing
- * any more modifications, and updates.
- * used when something went wrong.
- */
-void ftrace_kill(void)
-{
-	mutex_lock(&ftrace_sysctl_lock);
-	ftrace_disabled = 1;
-	ftrace_enabled = 0;
-
-	clear_ftrace_function();
-	mutex_unlock(&ftrace_sysctl_lock);
-
-	/* Try to totally disable ftrace */
-	ftrace_force_shutdown();
-}
-
-/**
  * register_ftrace_function - register a function for profiling
  * @ops - ops structure that holds the function for profiling.
  *
Index: linux-compile.git/kernel/trace/trace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/trace.c	2008-10-22 15:09:04.000000000 -0400
+++ linux-compile.git/kernel/trace/trace.c	2008-10-22 15:58:35.000000000 -0400
@@ -3097,7 +3097,7 @@ void ftrace_dump(void)
 	dump_ran = 1;
 
 	/* No turning back! */
-	ftrace_kill_atomic();
+	ftrace_kill();
 
 	for_each_tracing_cpu(cpu) {
 		atomic_inc(&global_trace.data[cpu]->disabled);

-- 


* [PATCH 07/13 v2] ftrace: add ftrace warn on to disable ftrace
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (5 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 06/13 v2] ftrace: only have ftrace_kill atomic Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 08/13 v2] ftrace: do not trace init sections Steven Rostedt
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-disable-warn-on.patch --]
[-- Type: text/plain, Size: 2521 bytes --]

Add FTRACE_WARN_ON macros to disable ftrace as well as report a warning.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 kernel/trace/ftrace.c |   29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

Index: linux-compile.git/kernel/trace/ftrace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/ftrace.c	2008-10-22 15:58:35.000000000 -0400
+++ linux-compile.git/kernel/trace/ftrace.c	2008-10-22 16:05:30.000000000 -0400
@@ -32,6 +32,18 @@
 
 #include "trace.h"
 
+#define FTRACE_WARN_ON(cond)			\
+	do {					\
+		if (WARN_ON(cond))		\
+			ftrace_kill();		\
+	} while (0)
+
+#define FTRACE_WARN_ON_ONCE(cond)		\
+	do {					\
+		if (WARN_ON_ONCE(cond))		\
+			ftrace_kill();		\
+	} while (0)
+
 /* ftrace_enabled is a method to turn ftrace on or off */
 int ftrace_enabled __read_mostly;
 static int last_ftrace_enabled;
@@ -358,10 +370,8 @@ static struct dyn_ftrace *ftrace_alloc_d
 		rec = ftrace_free_records;
 
 		if (unlikely(!(rec->flags & FTRACE_FL_FREE))) {
-			WARN_ON_ONCE(1);
+			FTRACE_WARN_ON_ONCE(1);
 			ftrace_free_records = NULL;
-			ftrace_disabled = 1;
-			ftrace_enabled = 0;
 			return NULL;
 		}
 
@@ -410,7 +420,7 @@ ftrace_record_ip(unsigned long ip)
 
 	key = hash_long(ip, FTRACE_HASHBITS);
 
-	WARN_ON_ONCE(key >= FTRACE_HASHSIZE);
+	FTRACE_WARN_ON_ONCE(key >= FTRACE_HASHSIZE);
 
 	if (ftrace_ip_in_hash(ip, key))
 		goto out;
@@ -602,12 +612,12 @@ ftrace_code_disable(struct dyn_ftrace *r
 	if (ret) {
 		switch (ret) {
 		case -EFAULT:
-			WARN_ON_ONCE(1);
+			FTRACE_WARN_ON_ONCE(1);
 			pr_info("ftrace faulted on modifying ");
 			print_ip_sym(ip);
 			break;
 		case -EINVAL:
-			WARN_ON_ONCE(1);
+			FTRACE_WARN_ON_ONCE(1);
 			pr_info("ftrace failed to modify ");
 			print_ip_sym(ip);
 			print_ip_ins(" expected: ", call);
@@ -616,12 +626,12 @@ ftrace_code_disable(struct dyn_ftrace *r
 			printk(KERN_CONT "\n");
 			break;
 		case -EPERM:
-			WARN_ON_ONCE(1);
+			FTRACE_WARN_ON_ONCE(1);
 			pr_info("ftrace faulted on writing ");
 			print_ip_sym(ip);
 			break;
 		default:
-			WARN_ON_ONCE(1);
+			FTRACE_WARN_ON_ONCE(1);
 			pr_info("ftrace faulted on unknown error ");
 			print_ip_sym(ip);
 		}
@@ -1679,8 +1689,7 @@ static int ftraced(void *ignore)
 					ftrace_update_cnt != 1 ? "s" : "",
 					ftrace_update_tot_cnt,
 					usecs, usecs != 1 ? "s" : "");
-				ftrace_disabled = 1;
-				WARN_ON_ONCE(1);
+				FTRACE_WARN_ON_ONCE(1);
 			}
 		}
 		mutex_unlock(&ftraced_lock);

-- 


* [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (6 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 07/13 v2] ftrace: add ftrace warn on to disable ftrace Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-23 11:15   ` Jan Kiszka
  2008-10-22 21:27 ` [PATCH 09/13 v2] ftrace: disable dynamic ftrace for all archs that use daemon Steven Rostedt
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-notrace-init-sects.patch --]
[-- Type: text/plain, Size: 2154 bytes --]

The recordmcount script is now robust enough not to process any sections
but the .text section. But the gcc compiler still adds a call to mcount.

Note: The function mcount looks like:

  ENTRY(mcount)
	ret
  END(mcount)

Which means the overhead is just a return.

This patch adds notrace to the init sections so that they do not even
bother calling mcount (which simply returns).
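The notrace annotation boils down to a GCC function attribute that suppresses the profiling (mcount) call the compiler would otherwise emit under -pg. A minimal sketch of roughly what it expands to:

```c
/* Roughly what the kernel's notrace expands to: a GCC attribute
 * that suppresses the -pg mcount call for the marked function. */
#define notrace __attribute__((no_instrument_function))

/* A function that would not be instrumented even when compiled
 * with -pg; it behaves like any other function otherwise. */
static notrace int init_work(int x)
{
	return x * 2;
}
```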


Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 include/linux/init.h |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

Index: linux-compile.git/include/linux/init.h
===================================================================
--- linux-compile.git.orig/include/linux/init.h	2008-10-22 11:49:44.000000000 -0400
+++ linux-compile.git/include/linux/init.h	2008-10-22 12:27:19.000000000 -0400
@@ -75,15 +75,15 @@
 
 
 #ifdef MODULE
-#define __exitused
+#define __exitused notrace
 #else
-#define __exitused  __used
+#define __exitused  __used notrace
 #endif
 
 #define __exit          __section(.exit.text) __exitused __cold
 
 /* Used for HOTPLUG */
-#define __devinit        __section(.devinit.text) __cold
+#define __devinit        __section(.devinit.text) __cold notrace
 #define __devinitdata    __section(.devinit.data)
 #define __devinitconst   __section(.devinit.rodata)
 #define __devexit        __section(.devexit.text) __exitused __cold
@@ -91,7 +91,7 @@
 #define __devexitconst   __section(.devexit.rodata)
 
 /* Used for HOTPLUG_CPU */
-#define __cpuinit        __section(.cpuinit.text) __cold
+#define __cpuinit        __section(.cpuinit.text) __cold notrace
 #define __cpuinitdata    __section(.cpuinit.data)
 #define __cpuinitconst   __section(.cpuinit.rodata)
 #define __cpuexit        __section(.cpuexit.text) __exitused __cold
@@ -99,7 +99,7 @@
 #define __cpuexitconst   __section(.cpuexit.rodata)
 
 /* Used for MEMORY_HOTPLUG */
-#define __meminit        __section(.meminit.text) __cold
+#define __meminit        __section(.meminit.text) __cold notrace
 #define __meminitdata    __section(.meminit.data)
 #define __meminitconst   __section(.meminit.rodata)
 #define __memexit        __section(.memexit.text) __exitused __cold

-- 


* [PATCH 09/13 v2] ftrace: disable dynamic ftrace for all archs that use daemon
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (7 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 08/13 v2] ftrace: do not trace init sections Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 10/13 v2] ftrace: remove daemon Steven Rostedt
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-remove-non-x86-dynamic-ftrace.patch --]
[-- Type: text/plain, Size: 2249 bytes --]

The ftrace daemon is complex and can cause nasty races if something goes
wrong. Since it affects all of the kernel, this patch disables dynamic
ftrace on any arch that depends on the daemon. Until those archs are
ported over to the new MCOUNT_RECORD method, dynamic ftrace stays
disabled on them.

Note: I am leaving the arch/<arch>/kernel/ftrace.c code alone, since it
can be used when the arch is ported to MCOUNT_RECORD. To port an arch
to MCOUNT_RECORD, scripts/recordmcount.pl needs to be updated. I will
make that easier to do for 2.6.29. For 2.6.28, we will keep these archs
disabled.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/arm/Kconfig     |    1 -
 arch/powerpc/Kconfig |    1 -
 arch/sparc64/Kconfig |    1 -
 3 files changed, 3 deletions(-)

Index: linux-compile.git/arch/arm/Kconfig
===================================================================
--- linux-compile.git.orig/arch/arm/Kconfig	2008-10-22 13:10:52.000000000 -0400
+++ linux-compile.git/arch/arm/Kconfig	2008-10-22 13:20:55.000000000 -0400
@@ -17,7 +17,6 @@ config ARM
 	select HAVE_KPROBES if (!XIP_KERNEL)
 	select HAVE_KRETPROBES if (HAVE_KPROBES)
 	select HAVE_FTRACE if (!XIP_KERNEL)
-	select HAVE_DYNAMIC_FTRACE if (HAVE_FTRACE)
 	select HAVE_GENERIC_DMA_COHERENT
 	help
 	  The ARM series is a line of low-power-consumption RISC chip designs
Index: linux-compile.git/arch/powerpc/Kconfig
===================================================================
--- linux-compile.git.orig/arch/powerpc/Kconfig	2008-10-22 13:10:52.000000000 -0400
+++ linux-compile.git/arch/powerpc/Kconfig	2008-10-22 13:20:55.000000000 -0400
@@ -111,7 +111,6 @@ config ARCH_NO_VIRT_TO_BUS
 config PPC
 	bool
 	default y
-	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select HAVE_IDE
Index: linux-compile.git/arch/sparc64/Kconfig
===================================================================
--- linux-compile.git.orig/arch/sparc64/Kconfig	2008-10-22 13:10:52.000000000 -0400
+++ linux-compile.git/arch/sparc64/Kconfig	2008-10-22 13:20:55.000000000 -0400
@@ -11,7 +11,6 @@ config SPARC
 config SPARC64
 	bool
 	default y
-	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE
 	select HAVE_IDE
 	select HAVE_LMB

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 10/13 v2] ftrace: remove daemon
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (8 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 09/13 v2] ftrace: disable dynamic ftrace for all archs that use daemon Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 11/13 v2] ftrace: remove mcount set Steven Rostedt
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-remove-dynamic-daemon.patch --]
[-- Type: text/plain, Size: 13180 bytes --]

The ftrace daemon is complex and error prone.  This patch strips it out
of the code.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 kernel/trace/ftrace.c         |  280 ++++--------------------------------------
 kernel/trace/trace_selftest.c |   14 --
 2 files changed, 28 insertions(+), 266 deletions(-)

Index: linux-compile.git/kernel/trace/ftrace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/ftrace.c	2008-10-22 16:07:29.000000000 -0400
+++ linux-compile.git/kernel/trace/ftrace.c	2008-10-22 16:07:36.000000000 -0400
@@ -165,21 +165,8 @@ static int __unregister_ftrace_function(
 }
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-
 #ifndef CONFIG_FTRACE_MCOUNT_RECORD
-/*
- * The hash lock is only needed when the recording of the mcount
- * callers are dynamic. That is, by the caller themselves and
- * not recorded via the compilation.
- */
-static DEFINE_SPINLOCK(ftrace_hash_lock);
-#define ftrace_hash_lock(flags)	  spin_lock_irqsave(&ftrace_hash_lock, flags)
-#define ftrace_hash_unlock(flags) \
-			spin_unlock_irqrestore(&ftrace_hash_lock, flags)
-#else
-/* This is protected via the ftrace_lock with MCOUNT_RECORD. */
-#define ftrace_hash_lock(flags)   do { (void)(flags); } while (0)
-#define ftrace_hash_unlock(flags) do { } while(0)
+# error Dynamic ftrace depends on MCOUNT_RECORD
 #endif
 
 /*
@@ -190,8 +177,6 @@ static DEFINE_SPINLOCK(ftrace_hash_lock)
  */
 static unsigned long mcount_addr = MCOUNT_ADDR;
 
-static struct task_struct *ftraced_task;
-
 enum {
 	FTRACE_ENABLE_CALLS		= (1 << 0),
 	FTRACE_DISABLE_CALLS		= (1 << 1),
@@ -208,7 +193,6 @@ static struct hlist_head ftrace_hash[FTR
 
 static DEFINE_PER_CPU(int, ftrace_shutdown_disable_cpu);
 
-static DEFINE_MUTEX(ftraced_lock);
 static DEFINE_MUTEX(ftrace_regex_lock);
 
 struct ftrace_page {
@@ -226,10 +210,6 @@ struct ftrace_page {
 static struct ftrace_page	*ftrace_pages_start;
 static struct ftrace_page	*ftrace_pages;
 
-static int ftraced_trigger;
-static int ftraced_suspend;
-static int ftraced_stop;
-
 static int ftrace_record_suspend;
 
 static struct dyn_ftrace *ftrace_free_records;
@@ -393,7 +373,6 @@ static void
 ftrace_record_ip(unsigned long ip)
 {
 	struct dyn_ftrace *node;
-	unsigned long flags;
 	unsigned long key;
 	int resched;
 	int cpu;
@@ -425,24 +404,18 @@ ftrace_record_ip(unsigned long ip)
 	if (ftrace_ip_in_hash(ip, key))
 		goto out;
 
-	ftrace_hash_lock(flags);
-
 	/* This ip may have hit the hash before the lock */
 	if (ftrace_ip_in_hash(ip, key))
-		goto out_unlock;
+		goto out;
 
 	node = ftrace_alloc_dyn_node(ip);
 	if (!node)
-		goto out_unlock;
+		goto out;
 
 	node->ip = ip;
 
 	ftrace_add_hash(node, key);
 
-	ftraced_trigger = 1;
-
- out_unlock:
-	ftrace_hash_unlock(flags);
  out:
 	per_cpu(ftrace_shutdown_disable_cpu, cpu)--;
 
@@ -642,7 +615,7 @@ ftrace_code_disable(struct dyn_ftrace *r
 	return 1;
 }
 
-static int __ftrace_update_code(void *ignore);
+static int ftrace_update_code(void *ignore);
 
 static int __ftrace_modify_code(void *data)
 {
@@ -654,7 +627,7 @@ static int __ftrace_modify_code(void *da
 		 * Update any recorded ips now that we have the
 		 * machine stopped
 		 */
-		__ftrace_update_code(NULL);
+		ftrace_update_code(NULL);
 		ftrace_replace_code(1);
 		tracing_on = 1;
 	} else if (*command & FTRACE_DISABLE_CALLS) {
@@ -681,26 +654,9 @@ static void ftrace_run_update_code(int c
 	stop_machine(__ftrace_modify_code, &command, NULL);
 }
 
-void ftrace_disable_daemon(void)
-{
-	/* Stop the daemon from calling kstop_machine */
-	mutex_lock(&ftraced_lock);
-	ftraced_stop = 1;
-	mutex_unlock(&ftraced_lock);
-
-	ftrace_force_update();
-}
-
-void ftrace_enable_daemon(void)
-{
-	mutex_lock(&ftraced_lock);
-	ftraced_stop = 0;
-	mutex_unlock(&ftraced_lock);
-
-	ftrace_force_update();
-}
-
 static ftrace_func_t saved_ftrace_func;
+static int ftrace_start;
+static DEFINE_MUTEX(ftrace_start_lock);
 
 static void ftrace_startup(void)
 {
@@ -709,9 +665,9 @@ static void ftrace_startup(void)
 	if (unlikely(ftrace_disabled))
 		return;
 
-	mutex_lock(&ftraced_lock);
-	ftraced_suspend++;
-	if (ftraced_suspend == 1)
+	mutex_lock(&ftrace_start_lock);
+	ftrace_start++;
+	if (ftrace_start == 1)
 		command |= FTRACE_ENABLE_CALLS;
 
 	if (saved_ftrace_func != ftrace_trace_function) {
@@ -724,7 +680,7 @@ static void ftrace_startup(void)
 
 	ftrace_run_update_code(command);
  out:
-	mutex_unlock(&ftraced_lock);
+	mutex_unlock(&ftrace_start_lock);
 }
 
 static void ftrace_shutdown(void)
@@ -734,9 +690,9 @@ static void ftrace_shutdown(void)
 	if (unlikely(ftrace_disabled))
 		return;
 
-	mutex_lock(&ftraced_lock);
-	ftraced_suspend--;
-	if (!ftraced_suspend)
+	mutex_lock(&ftrace_start_lock);
+	ftrace_start--;
+	if (!ftrace_start)
 		command |= FTRACE_DISABLE_CALLS;
 
 	if (saved_ftrace_func != ftrace_trace_function) {
@@ -749,7 +705,7 @@ static void ftrace_shutdown(void)
 
 	ftrace_run_update_code(command);
  out:
-	mutex_unlock(&ftraced_lock);
+	mutex_unlock(&ftrace_start_lock);
 }
 
 static void ftrace_startup_sysctl(void)
@@ -759,15 +715,15 @@ static void ftrace_startup_sysctl(void)
 	if (unlikely(ftrace_disabled))
 		return;
 
-	mutex_lock(&ftraced_lock);
+	mutex_lock(&ftrace_start_lock);
 	/* Force update next time */
 	saved_ftrace_func = NULL;
-	/* ftraced_suspend is true if we want ftrace running */
-	if (ftraced_suspend)
+	/* ftrace_start is true if we want ftrace running */
+	if (ftrace_start)
 		command |= FTRACE_ENABLE_CALLS;
 
 	ftrace_run_update_code(command);
-	mutex_unlock(&ftraced_lock);
+	mutex_unlock(&ftrace_start_lock);
 }
 
 static void ftrace_shutdown_sysctl(void)
@@ -777,20 +733,20 @@ static void ftrace_shutdown_sysctl(void)
 	if (unlikely(ftrace_disabled))
 		return;
 
-	mutex_lock(&ftraced_lock);
-	/* ftraced_suspend is true if ftrace is running */
-	if (ftraced_suspend)
+	mutex_lock(&ftrace_start_lock);
+	/* ftrace_start is true if ftrace is running */
+	if (ftrace_start)
 		command |= FTRACE_DISABLE_CALLS;
 
 	ftrace_run_update_code(command);
-	mutex_unlock(&ftraced_lock);
+	mutex_unlock(&ftrace_start_lock);
 }
 
 static cycle_t		ftrace_update_time;
 static unsigned long	ftrace_update_cnt;
 unsigned long		ftrace_update_tot_cnt;
 
-static int __ftrace_update_code(void *ignore)
+static int ftrace_update_code(void *ignore)
 {
 	int i, save_ftrace_enabled;
 	cycle_t start, stop;
@@ -864,7 +820,6 @@ static int __ftrace_update_code(void *ig
 	stop = ftrace_now(raw_smp_processor_id());
 	ftrace_update_time = stop - start;
 	ftrace_update_tot_cnt += ftrace_update_cnt;
-	ftraced_trigger = 0;
 
 	ftrace_enabled = save_ftrace_enabled;
 	ftrace_record_suspend--;
@@ -872,17 +827,6 @@ static int __ftrace_update_code(void *ig
 	return 0;
 }
 
-static int ftrace_update_code(void)
-{
-	if (unlikely(ftrace_disabled) ||
-	    !ftrace_enabled || !ftraced_trigger)
-		return 0;
-
-	stop_machine(__ftrace_update_code, NULL, NULL);
-
-	return 1;
-}
-
 static int __init ftrace_dyn_table_alloc(unsigned long num_to_init)
 {
 	struct ftrace_page *pg;
@@ -1420,10 +1364,10 @@ ftrace_regex_release(struct inode *inode
 	}
 
 	mutex_lock(&ftrace_sysctl_lock);
-	mutex_lock(&ftraced_lock);
-	if (iter->filtered && ftraced_suspend && ftrace_enabled)
+	mutex_lock(&ftrace_start_lock);
+	if (iter->filtered && ftrace_start && ftrace_enabled)
 		ftrace_run_update_code(FTRACE_ENABLE_CALLS);
-	mutex_unlock(&ftraced_lock);
+	mutex_unlock(&ftrace_start_lock);
 	mutex_unlock(&ftrace_sysctl_lock);
 
 	kfree(iter);
@@ -1443,55 +1387,6 @@ ftrace_notrace_release(struct inode *ino
 	return ftrace_regex_release(inode, file, 0);
 }
 
-static ssize_t
-ftraced_read(struct file *filp, char __user *ubuf,
-		     size_t cnt, loff_t *ppos)
-{
-	/* don't worry about races */
-	char *buf = ftraced_stop ? "disabled\n" : "enabled\n";
-	int r = strlen(buf);
-
-	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
-}
-
-static ssize_t
-ftraced_write(struct file *filp, const char __user *ubuf,
-		      size_t cnt, loff_t *ppos)
-{
-	char buf[64];
-	long val;
-	int ret;
-
-	if (cnt >= sizeof(buf))
-		return -EINVAL;
-
-	if (copy_from_user(&buf, ubuf, cnt))
-		return -EFAULT;
-
-	if (strncmp(buf, "enable", 6) == 0)
-		val = 1;
-	else if (strncmp(buf, "disable", 7) == 0)
-		val = 0;
-	else {
-		buf[cnt] = 0;
-
-		ret = strict_strtoul(buf, 10, &val);
-		if (ret < 0)
-			return ret;
-
-		val = !!val;
-	}
-
-	if (val)
-		ftrace_enable_daemon();
-	else
-		ftrace_disable_daemon();
-
-	filp->f_pos += cnt;
-
-	return cnt;
-}
-
 static struct file_operations ftrace_avail_fops = {
 	.open = ftrace_avail_open,
 	.read = seq_read,
@@ -1522,38 +1417,6 @@ static struct file_operations ftrace_not
 	.release = ftrace_notrace_release,
 };
 
-static struct file_operations ftraced_fops = {
-	.open = tracing_open_generic,
-	.read = ftraced_read,
-	.write = ftraced_write,
-};
-
-/**
- * ftrace_force_update - force an update to all recording ftrace functions
- */
-int ftrace_force_update(void)
-{
-	int ret = 0;
-
-	if (unlikely(ftrace_disabled))
-		return -ENODEV;
-
-	mutex_lock(&ftrace_sysctl_lock);
-	mutex_lock(&ftraced_lock);
-
-	/*
-	 * If ftraced_trigger is not set, then there is nothing
-	 * to update.
-	 */
-	if (ftraced_trigger && !ftrace_update_code())
-		ret = -EBUSY;
-
-	mutex_unlock(&ftraced_lock);
-	mutex_unlock(&ftrace_sysctl_lock);
-
-	return ret;
-}
-
 static __init int ftrace_init_debugfs(void)
 {
 	struct dentry *d_tracer;
@@ -1584,17 +1447,11 @@ static __init int ftrace_init_debugfs(vo
 		pr_warning("Could not create debugfs "
 			   "'set_ftrace_notrace' entry\n");
 
-	entry = debugfs_create_file("ftraced_enabled", 0644, d_tracer,
-				    NULL, &ftraced_fops);
-	if (!entry)
-		pr_warning("Could not create debugfs "
-			   "'ftraced_enabled' entry\n");
 	return 0;
 }
 
 fs_initcall(ftrace_init_debugfs);
 
-#ifdef CONFIG_FTRACE_MCOUNT_RECORD
 static int ftrace_convert_nops(unsigned long *start,
 			       unsigned long *end)
 {
@@ -1614,7 +1471,7 @@ static int ftrace_convert_nops(unsigned 
 
 	/* p is ignored */
 	local_irq_save(flags);
-	__ftrace_update_code(p);
+	ftrace_update_code(p);
 	local_irq_restore(flags);
 
 	return 0;
@@ -1661,84 +1518,6 @@ void __init ftrace_init(void)
  failed:
 	ftrace_disabled = 1;
 }
-#else /* CONFIG_FTRACE_MCOUNT_RECORD */
-static int ftraced(void *ignore)
-{
-	unsigned long usecs;
-
-	while (!kthread_should_stop()) {
-
-		set_current_state(TASK_INTERRUPTIBLE);
-
-		/* check once a second */
-		schedule_timeout(HZ);
-
-		if (unlikely(ftrace_disabled))
-			continue;
-
-		mutex_lock(&ftrace_sysctl_lock);
-		mutex_lock(&ftraced_lock);
-		if (!ftraced_suspend && !ftraced_stop &&
-		    ftrace_update_code()) {
-			usecs = nsecs_to_usecs(ftrace_update_time);
-			if (ftrace_update_tot_cnt > 100000) {
-				ftrace_update_tot_cnt = 0;
-				pr_info("hm, dftrace overflow: %lu change%s"
-					" (%lu total) in %lu usec%s\n",
-					ftrace_update_cnt,
-					ftrace_update_cnt != 1 ? "s" : "",
-					ftrace_update_tot_cnt,
-					usecs, usecs != 1 ? "s" : "");
-				FTRACE_WARN_ON_ONCE(1);
-			}
-		}
-		mutex_unlock(&ftraced_lock);
-		mutex_unlock(&ftrace_sysctl_lock);
-
-		ftrace_shutdown_replenish();
-	}
-	__set_current_state(TASK_RUNNING);
-	return 0;
-}
-
-static int __init ftrace_dynamic_init(void)
-{
-	struct task_struct *p;
-	unsigned long addr;
-	int ret;
-
-	addr = (unsigned long)ftrace_record_ip;
-
-	stop_machine(ftrace_dyn_arch_init, &addr, NULL);
-
-	/* ftrace_dyn_arch_init places the return code in addr */
-	if (addr) {
-		ret = (int)addr;
-		goto failed;
-	}
-
-	ret = ftrace_dyn_table_alloc(NR_TO_INIT);
-	if (ret)
-		goto failed;
-
-	p = kthread_run(ftraced, NULL, "ftraced");
-	if (IS_ERR(p)) {
-		ret = -1;
-		goto failed;
-	}
-
-	last_ftrace_enabled = ftrace_enabled = 1;
-	ftraced_task = p;
-
-	return 0;
-
- failed:
-	ftrace_disabled = 1;
-	return ret;
-}
-
-core_initcall(ftrace_dynamic_init);
-#endif /* CONFIG_FTRACE_MCOUNT_RECORD */
 
 #else
 # define ftrace_startup()		do { } while (0)
@@ -1758,9 +1537,6 @@ void ftrace_kill(void)
 {
 	ftrace_disabled = 1;
 	ftrace_enabled = 0;
-#ifdef CONFIG_DYNAMIC_FTRACE
-	ftraced_suspend = -1;
-#endif
 	clear_ftrace_function();
 }
 
Index: linux-compile.git/kernel/trace/trace_selftest.c
===================================================================
--- linux-compile.git.orig/kernel/trace/trace_selftest.c	2008-10-22 15:09:03.000000000 -0400
+++ linux-compile.git/kernel/trace/trace_selftest.c	2008-10-22 16:07:36.000000000 -0400
@@ -99,13 +99,6 @@ int trace_selftest_startup_dynamic_traci
 	/* passed in by parameter to fool gcc from optimizing */
 	func();
 
-	/* update the records */
-	ret = ftrace_force_update();
-	if (ret) {
-		printk(KERN_CONT ".. ftraced failed .. ");
-		return ret;
-	}
-
 	/*
 	 * Some archs *cough*PowerPC*cough* add charachters to the
 	 * start of the function names. We simply put a '*' to
@@ -183,13 +176,6 @@ trace_selftest_startup_function(struct t
 	/* make sure msleep has been recorded */
 	msleep(1);
 
-	/* force the recorded functions to be traced */
-	ret = ftrace_force_update();
-	if (ret) {
-		printk(KERN_CONT ".. ftraced failed .. ");
-		return ret;
-	}
-
 	/* start the tracing */
 	ftrace_enabled = 1;
 	tracer_enabled = 1;

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 11/13 v2] ftrace: remove mcount set
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (9 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 10/13 v2] ftrace: remove daemon Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 12/13 v2] ftrace: remove ftrace hash Steven Rostedt
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-remove-mcount-set.patch --]
[-- Type: text/plain, Size: 5240 bytes --]

The arch-dependent function ftrace_mcount_set() was used only by the
daemon start-up code. This patch removes it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/arm/kernel/ftrace.c     |   13 -------------
 arch/powerpc/kernel/ftrace.c |   17 -----------------
 arch/sparc64/kernel/ftrace.c |   18 ------------------
 arch/x86/kernel/ftrace.c     |    7 -------
 include/linux/ftrace.h       |    1 -
 kernel/trace/ftrace.c        |    9 ---------
 6 files changed, 65 deletions(-)

Index: linux-compile.git/arch/arm/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/arm/kernel/ftrace.c	2008-10-22 16:07:09.000000000 -0400
+++ linux-compile.git/arch/arm/kernel/ftrace.c	2008-10-22 16:07:55.000000000 -0400
@@ -95,19 +95,6 @@ int ftrace_update_ftrace_func(ftrace_fun
 	return ret;
 }
 
-int ftrace_mcount_set(unsigned long *data)
-{
-	unsigned long pc, old;
-	unsigned long *addr = data;
-	unsigned char *new;
-
-	pc = (unsigned long)&mcount_call;
-	memcpy(&old, &mcount_call, MCOUNT_INSN_SIZE);
-	new = ftrace_call_replace(pc, *addr);
-	*addr = ftrace_modify_code(pc, (unsigned char *)&old, new);
-	return 0;
-}
-
 /* run from kstop_machine */
 int __init ftrace_dyn_arch_init(void *data)
 {
Index: linux-compile.git/arch/powerpc/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/powerpc/kernel/ftrace.c	2008-10-22 15:52:28.000000000 -0400
+++ linux-compile.git/arch/powerpc/kernel/ftrace.c	2008-10-22 16:07:55.000000000 -0400
@@ -126,23 +126,6 @@ notrace int ftrace_update_ftrace_func(ft
 	return ret;
 }
 
-notrace int ftrace_mcount_set(unsigned long *data)
-{
-	unsigned long ip = (long)(&mcount_call);
-	unsigned long *addr = data;
-	unsigned char old[MCOUNT_INSN_SIZE], *new;
-
-	/*
-	 * Replace the mcount stub with a pointer to the
-	 * ip recorder function.
-	 */
-	memcpy(old, &mcount_call, MCOUNT_INSN_SIZE);
-	new = ftrace_call_replace(ip, *addr);
-	*addr = ftrace_modify_code(ip, old, new);
-
-	return 0;
-}
-
 int __init ftrace_dyn_arch_init(void *data)
 {
 	/* This is running in kstop_machine */
Index: linux-compile.git/arch/x86/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/x86/kernel/ftrace.c	2008-10-22 16:07:24.000000000 -0400
+++ linux-compile.git/arch/x86/kernel/ftrace.c	2008-10-22 16:07:55.000000000 -0400
@@ -103,13 +103,6 @@ notrace int ftrace_update_ftrace_func(ft
 	return ret;
 }
 
-notrace int ftrace_mcount_set(unsigned long *data)
-{
-	/* mcount is initialized as a nop */
-	*data = 0;
-	return 0;
-}
-
 int __init ftrace_dyn_arch_init(void *data)
 {
 	extern const unsigned char ftrace_test_p6nop[];
Index: linux-compile.git/include/linux/ftrace.h
===================================================================
--- linux-compile.git.orig/include/linux/ftrace.h	2008-10-22 16:07:27.000000000 -0400
+++ linux-compile.git/include/linux/ftrace.h	2008-10-22 16:08:24.000000000 -0400
@@ -71,7 +71,6 @@ extern int ftrace_ip_converted(unsigned 
 extern unsigned char *ftrace_nop_replace(void);
 extern unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr);
 extern int ftrace_dyn_arch_init(void *data);
-extern int ftrace_mcount_set(unsigned long *data);
 extern int ftrace_update_ftrace_func(ftrace_func_t func);
 extern void ftrace_caller(void);
 extern void ftrace_call(void);
Index: linux-compile.git/kernel/trace/ftrace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/ftrace.c	2008-10-22 16:07:36.000000000 -0400
+++ linux-compile.git/kernel/trace/ftrace.c	2008-10-22 16:07:55.000000000 -0400
@@ -619,7 +619,6 @@ static int ftrace_update_code(void *igno
 
 static int __ftrace_modify_code(void *data)
 {
-	unsigned long addr;
 	int *command = data;
 
 	if (*command & FTRACE_ENABLE_CALLS) {
@@ -638,14 +637,6 @@ static int __ftrace_modify_code(void *da
 	if (*command & FTRACE_UPDATE_TRACE_FUNC)
 		ftrace_update_ftrace_func(ftrace_trace_function);
 
-	if (*command & FTRACE_ENABLE_MCOUNT) {
-		addr = (unsigned long)ftrace_record_ip;
-		ftrace_mcount_set(&addr);
-	} else if (*command & FTRACE_DISABLE_MCOUNT) {
-		addr = (unsigned long)ftrace_stub;
-		ftrace_mcount_set(&addr);
-	}
-
 	return 0;
 }
 
Index: linux-compile.git/arch/sparc64/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/sparc64/kernel/ftrace.c	2008-10-22 16:07:09.000000000 -0400
+++ linux-compile.git/arch/sparc64/kernel/ftrace.c	2008-10-22 16:08:45.000000000 -0400
@@ -69,24 +69,6 @@ notrace int ftrace_update_ftrace_func(ft
 	return ftrace_modify_code(ip, old, new);
 }
 
-notrace int ftrace_mcount_set(unsigned long *data)
-{
-	unsigned long ip = (long)(&mcount_call);
-	unsigned long *addr = data;
-	unsigned char old[MCOUNT_INSN_SIZE], *new;
-
-	/*
-	 * Replace the mcount stub with a pointer to the
-	 * ip recorder function.
-	 */
-	memcpy(old, &mcount_call, MCOUNT_INSN_SIZE);
-	new = ftrace_call_replace(ip, *addr);
-	*addr = ftrace_modify_code(ip, old, new);
-
-	return 0;
-}
-
-
 int __init ftrace_dyn_arch_init(void *data)
 {
 	ftrace_mcount_set(data);

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 12/13 v2] ftrace: remove ftrace hash
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (10 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 11/13 v2] ftrace: remove mcount set Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-22 21:27 ` [PATCH 13/13 v2] ftrace: remove notrace from arch ftrace file Steven Rostedt
  2008-10-23  9:29 ` [PATCH 00/13 v2] ftrace: clean ups and fixes Wenji Huang
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftrace-remove-hash.patch --]
[-- Type: text/plain, Size: 11050 bytes --]

The ftrace hash was used by the ftrace daemon code. The ftrace_record_ip()
function would place the calling address (ip) into the hash, and the
daemon would later read the hash and modify the code at those addresses.

The hash only complicates the code now. This patch removes it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 include/linux/ftrace.h |    8 -
 kernel/trace/ftrace.c  |  247 +++++++------------------------------------------
 kernel/trace/trace.c   |    3 
 3 files changed, 40 insertions(+), 218 deletions(-)

Index: linux-compile.git/include/linux/ftrace.h
===================================================================
--- linux-compile.git.orig/include/linux/ftrace.h	2008-10-22 16:08:24.000000000 -0400
+++ linux-compile.git/include/linux/ftrace.h	2008-10-22 16:09:05.000000000 -0400
@@ -44,8 +44,6 @@ static inline void ftrace_kill(void) { }
 #endif /* CONFIG_FTRACE */
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-# define FTRACE_HASHBITS	10
-# define FTRACE_HASHSIZE	(1<<FTRACE_HASHBITS)
 
 enum {
 	FTRACE_FL_FREE		= (1 << 0),
@@ -58,9 +56,9 @@ enum {
 };
 
 struct dyn_ftrace {
-	struct hlist_node node;
-	unsigned long	  ip; /* address of mcount call-site */
-	unsigned long	  flags;
+	struct list_head	list;
+	unsigned long		ip; /* address of mcount call-site */
+	unsigned long		flags;
 };
 
 int ftrace_force_update(void);
Index: linux-compile.git/kernel/trace/ftrace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/ftrace.c	2008-10-22 16:07:55.000000000 -0400
+++ linux-compile.git/kernel/trace/ftrace.c	2008-10-22 16:09:05.000000000 -0400
@@ -25,7 +25,6 @@
 #include <linux/ftrace.h>
 #include <linux/sysctl.h>
 #include <linux/ctype.h>
-#include <linux/hash.h>
 #include <linux/list.h>
 
 #include <asm/ftrace.h>
@@ -189,9 +188,7 @@ static int ftrace_filtered;
 static int tracing_on;
 static int frozen_record_count;
 
-static struct hlist_head ftrace_hash[FTRACE_HASHSIZE];
-
-static DEFINE_PER_CPU(int, ftrace_shutdown_disable_cpu);
+static LIST_HEAD(ftrace_new_addrs);
 
 static DEFINE_MUTEX(ftrace_regex_lock);
 
@@ -210,8 +207,6 @@ struct ftrace_page {
 static struct ftrace_page	*ftrace_pages_start;
 static struct ftrace_page	*ftrace_pages;
 
-static int ftrace_record_suspend;
-
 static struct dyn_ftrace *ftrace_free_records;
 
 
@@ -242,72 +237,6 @@ static inline int record_frozen(struct d
 # define record_frozen(rec)			({ 0; })
 #endif /* CONFIG_KPROBES */
 
-int skip_trace(unsigned long ip)
-{
-	unsigned long fl;
-	struct dyn_ftrace *rec;
-	struct hlist_node *t;
-	struct hlist_head *head;
-
-	if (frozen_record_count == 0)
-		return 0;
-
-	head = &ftrace_hash[hash_long(ip, FTRACE_HASHBITS)];
-	hlist_for_each_entry_rcu(rec, t, head, node) {
-		if (rec->ip == ip) {
-			if (record_frozen(rec)) {
-				if (rec->flags & FTRACE_FL_FAILED)
-					return 1;
-
-				if (!(rec->flags & FTRACE_FL_CONVERTED))
-					return 1;
-
-				if (!tracing_on || !ftrace_enabled)
-					return 1;
-
-				if (ftrace_filtered) {
-					fl = rec->flags & (FTRACE_FL_FILTER |
-							   FTRACE_FL_NOTRACE);
-					if (!fl || (fl & FTRACE_FL_NOTRACE))
-						return 1;
-				}
-			}
-			break;
-		}
-	}
-
-	return 0;
-}
-
-static inline int
-ftrace_ip_in_hash(unsigned long ip, unsigned long key)
-{
-	struct dyn_ftrace *p;
-	struct hlist_node *t;
-	int found = 0;
-
-	hlist_for_each_entry_rcu(p, t, &ftrace_hash[key], node) {
-		if (p->ip == ip) {
-			found = 1;
-			break;
-		}
-	}
-
-	return found;
-}
-
-static inline void
-ftrace_add_hash(struct dyn_ftrace *node, unsigned long key)
-{
-	hlist_add_head_rcu(&node->node, &ftrace_hash[key]);
-}
-
-/* called from kstop_machine */
-static inline void ftrace_del_hash(struct dyn_ftrace *node)
-{
-	hlist_del(&node->node);
-}
-
 static void ftrace_free_rec(struct dyn_ftrace *rec)
 {
 	rec->ip = (unsigned long)ftrace_free_records;
@@ -361,69 +290,36 @@ static struct dyn_ftrace *ftrace_alloc_d
 	}
 
 	if (ftrace_pages->index == ENTRIES_PER_PAGE) {
-		if (!ftrace_pages->next)
-			return NULL;
+		if (!ftrace_pages->next) {
+			/* allocate another page */
+			ftrace_pages->next =
+				(void *)get_zeroed_page(GFP_KERNEL);
+			if (!ftrace_pages->next)
+				return NULL;
+		}
 		ftrace_pages = ftrace_pages->next;
 	}
 
 	return &ftrace_pages->records[ftrace_pages->index++];
 }
 
-static void
+static struct dyn_ftrace *
 ftrace_record_ip(unsigned long ip)
 {
-	struct dyn_ftrace *node;
-	unsigned long key;
-	int resched;
-	int cpu;
+	struct dyn_ftrace *rec;
 
 	if (!ftrace_enabled || ftrace_disabled)
-		return;
-
-	resched = need_resched();
-	preempt_disable_notrace();
-
-	/*
-	 * We simply need to protect against recursion.
-	 * Use the the raw version of smp_processor_id and not
-	 * __get_cpu_var which can call debug hooks that can
-	 * cause a recursive crash here.
-	 */
-	cpu = raw_smp_processor_id();
-	per_cpu(ftrace_shutdown_disable_cpu, cpu)++;
-	if (per_cpu(ftrace_shutdown_disable_cpu, cpu) != 1)
-		goto out;
-
-	if (unlikely(ftrace_record_suspend))
-		goto out;
-
-	key = hash_long(ip, FTRACE_HASHBITS);
-
-	FTRACE_WARN_ON_ONCE(key >= FTRACE_HASHSIZE);
-
-	if (ftrace_ip_in_hash(ip, key))
-		goto out;
+		return NULL;
 
-	/* This ip may have hit the hash before the lock */
-	if (ftrace_ip_in_hash(ip, key))
-		goto out;
-
-	node = ftrace_alloc_dyn_node(ip);
-	if (!node)
-		goto out;
-
-	node->ip = ip;
+	rec = ftrace_alloc_dyn_node(ip);
+	if (!rec)
+		return NULL;
 
-	ftrace_add_hash(node, key);
+	rec->ip = ip;
 
- out:
-	per_cpu(ftrace_shutdown_disable_cpu, cpu)--;
+	list_add(&rec->list, &ftrace_new_addrs);
 
-	/* prevent recursion with scheduler */
-	if (resched)
-		preempt_enable_no_resched_notrace();
-	else
-		preempt_enable_notrace();
+	return rec;
 }
 
 #define FTRACE_ADDR ((long)(ftrace_caller))
@@ -542,7 +438,6 @@ static void ftrace_replace_code(int enab
 				rec->flags |= FTRACE_FL_FAILED;
 				if ((system_state == SYSTEM_BOOTING) ||
 				    !core_kernel_text(rec->ip)) {
-					ftrace_del_hash(rec);
 					ftrace_free_rec(rec);
 				}
 			}
@@ -550,15 +445,6 @@ static void ftrace_replace_code(int enab
 	}
 }
 
-static void ftrace_shutdown_replenish(void)
-{
-	if (ftrace_pages->next)
-		return;
-
-	/* allocate another page */
-	ftrace_pages->next = (void *)get_zeroed_page(GFP_KERNEL);
-}
-
 static void print_ip_ins(const char *fmt, unsigned char *p)
 {
 	int i;
@@ -615,18 +501,11 @@ ftrace_code_disable(struct dyn_ftrace *r
 	return 1;
 }
 
-static int ftrace_update_code(void *ignore);
-
 static int __ftrace_modify_code(void *data)
 {
 	int *command = data;
 
 	if (*command & FTRACE_ENABLE_CALLS) {
-		/*
-		 * Update any recorded ips now that we have the
-		 * machine stopped
-		 */
-		ftrace_update_code(NULL);
 		ftrace_replace_code(1);
 		tracing_on = 1;
 	} else if (*command & FTRACE_DISABLE_CALLS) {
@@ -737,84 +616,34 @@ static cycle_t		ftrace_update_time;
 static unsigned long	ftrace_update_cnt;
 unsigned long		ftrace_update_tot_cnt;
 
-static int ftrace_update_code(void *ignore)
+static int ftrace_update_code(void)
 {
-	int i, save_ftrace_enabled;
+	struct dyn_ftrace *p, *t;
 	cycle_t start, stop;
-	struct dyn_ftrace *p;
-	struct hlist_node *t, *n;
-	struct hlist_head *head, temp_list;
-
-	/* Don't be recording funcs now */
-	ftrace_record_suspend++;
-	save_ftrace_enabled = ftrace_enabled;
-	ftrace_enabled = 0;
 
 	start = ftrace_now(raw_smp_processor_id());
 	ftrace_update_cnt = 0;
 
-	/* No locks needed, the machine is stopped! */
-	for (i = 0; i < FTRACE_HASHSIZE; i++) {
-		INIT_HLIST_HEAD(&temp_list);
-		head = &ftrace_hash[i];
-
-		/* all CPUS are stopped, we are safe to modify code */
-		hlist_for_each_entry_safe(p, t, n, head, node) {
-			/* Skip over failed records which have not been
-			 * freed. */
-			if (p->flags & FTRACE_FL_FAILED)
-				continue;
-
-			/* Unconverted records are always at the head of the
-			 * hash bucket. Once we encounter a converted record,
-			 * simply skip over to the next bucket. Saves ftraced
-			 * some processor cycles (ftrace does its bid for
-			 * global warming :-p ). */
-			if (p->flags & (FTRACE_FL_CONVERTED))
-				break;
-
-			/* Ignore updates to this record's mcount site.
-			 * Reintroduce this record at the head of this
-			 * bucket to attempt to "convert" it again if
-			 * the kprobe on it is unregistered before the
-			 * next run. */
-			if (get_kprobe((void *)p->ip)) {
-				ftrace_del_hash(p);
-				INIT_HLIST_NODE(&p->node);
-				hlist_add_head(&p->node, &temp_list);
-				freeze_record(p);
-				continue;
-			} else {
-				unfreeze_record(p);
-			}
-
-			/* convert record (i.e, patch mcount-call with NOP) */
-			if (ftrace_code_disable(p)) {
-				p->flags |= FTRACE_FL_CONVERTED;
-				ftrace_update_cnt++;
-			} else {
-				if ((system_state == SYSTEM_BOOTING) ||
-				    !core_kernel_text(p->ip)) {
-					ftrace_del_hash(p);
-					ftrace_free_rec(p);
-				}
-			}
-		}
+	list_for_each_entry_safe(p, t, &ftrace_new_addrs, list) {
 
-		hlist_for_each_entry_safe(p, t, n, &temp_list, node) {
-			hlist_del(&p->node);
-			INIT_HLIST_NODE(&p->node);
-			hlist_add_head(&p->node, head);
-		}
+		/* If something went wrong, bail without enabling anything */
+		if (unlikely(ftrace_disabled))
+			return -1;
+
+		list_del_init(&p->list);
+
+		/* convert record (i.e, patch mcount-call with NOP) */
+		if (ftrace_code_disable(p)) {
+			p->flags |= FTRACE_FL_CONVERTED;
+			ftrace_update_cnt++;
+		} else
+			ftrace_free_rec(p);
 	}
 
 	stop = ftrace_now(raw_smp_processor_id());
 	ftrace_update_time = stop - start;
 	ftrace_update_tot_cnt += ftrace_update_cnt;
 
-	ftrace_enabled = save_ftrace_enabled;
-	ftrace_record_suspend--;
-
 	return 0;
 }
 
@@ -846,7 +675,7 @@ static int __init ftrace_dyn_table_alloc
 	pg = ftrace_pages = ftrace_pages_start;
 
 	cnt = num_to_init / ENTRIES_PER_PAGE;
-	pr_info("ftrace: allocating %ld hash entries in %d pages\n",
+	pr_info("ftrace: allocating %ld entries in %d pages\n",
 		num_to_init, cnt);
 
 	for (i = 0; i < cnt; i++) {
@@ -1450,20 +1279,18 @@ static int ftrace_convert_nops(unsigned 
 	unsigned long addr;
 	unsigned long flags;
 
+	mutex_lock(&ftrace_start_lock);
 	p = start;
 	while (p < end) {
 		addr = ftrace_call_adjust(*p++);
-		/* should not be called from interrupt context */
-		spin_lock(&ftrace_lock);
 		ftrace_record_ip(addr);
-		spin_unlock(&ftrace_lock);
-		ftrace_shutdown_replenish();
 	}
 
-	/* p is ignored */
+	/* disable interrupts to prevent kstop machine */
 	local_irq_save(flags);
-	ftrace_update_code(p);
+	ftrace_update_code();
 	local_irq_restore(flags);
+	mutex_unlock(&ftrace_start_lock);
 
 	return 0;
 }
Index: linux-compile.git/kernel/trace/trace.c
===================================================================
--- linux-compile.git.orig/kernel/trace/trace.c	2008-10-22 16:07:27.000000000 -0400
+++ linux-compile.git/kernel/trace/trace.c	2008-10-22 16:09:05.000000000 -0400
@@ -865,9 +865,6 @@ function_trace_call(unsigned long ip, un
 	if (unlikely(!ftrace_function_enabled))
 		return;
 
-	if (skip_trace(ip))
-		return;
-
 	pc = preempt_count();
 	resched = need_resched();
 	preempt_disable_notrace();

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 13/13 v2] ftrace: remove notrace from arch ftrace file
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (11 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 12/13 v2] ftrace: remove ftrace hash Steven Rostedt
@ 2008-10-22 21:27 ` Steven Rostedt
  2008-10-23  9:29 ` [PATCH 00/13 v2] ftrace: clean ups and fixes Wenji Huang
  13 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-22 21:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

[-- Attachment #1: ftace-remove-notrace.patch --]
[-- Type: text/plain, Size: 5301 bytes --]

The entire file of ftrace.c in the arch code needs to be marked
as notrace. It is much cleaner to do this from the Makefile with
CFLAGS_REMOVE_ftrace.o.
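
For context, removing -pg from the flags for just this object means gcc
never emits the mcount call for any function in the file; a sketch of the
guarded form (the x86 hunk below places it inside the existing
ifdef CONFIG_FTRACE block):

```make
# Build ftrace.o without -pg so the tracing code never traces itself
ifdef CONFIG_FTRACE
CFLAGS_REMOVE_ftrace.o = -pg
endif
```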

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
 arch/powerpc/kernel/ftrace.c |   10 +++++-----
 arch/sparc64/kernel/Makefile |    2 ++
 arch/sparc64/kernel/ftrace.c |    8 ++++----
 arch/x86/kernel/Makefile     |    1 +
 arch/x86/kernel/ftrace.c     |   10 +++++-----
 5 files changed, 17 insertions(+), 14 deletions(-)

Index: linux-compile.git/arch/powerpc/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/powerpc/kernel/ftrace.c	2008-10-22 16:07:55.000000000 -0400
+++ linux-compile.git/arch/powerpc/kernel/ftrace.c	2008-10-22 16:10:15.000000000 -0400
@@ -28,17 +28,17 @@ static unsigned int ftrace_nop = 0x60000
 #endif
 
 
-static unsigned int notrace ftrace_calc_offset(long ip, long addr)
+static unsigned int ftrace_calc_offset(long ip, long addr)
 {
 	return (int)(addr - ip);
 }
 
-notrace unsigned char *ftrace_nop_replace(void)
+unsigned char *ftrace_nop_replace(void)
 {
 	return (char *)&ftrace_nop;
 }
 
-notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
+unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
 {
 	static unsigned int op;
 
@@ -68,7 +68,7 @@ notrace unsigned char *ftrace_call_repla
 # define _ASM_PTR	" .long "
 #endif
 
-notrace int
+int
 ftrace_modify_code(unsigned long ip, unsigned char *old_code,
 		   unsigned char *new_code)
 {
@@ -113,7 +113,7 @@ ftrace_modify_code(unsigned long ip, uns
 	return faulted;
 }
 
-notrace int ftrace_update_ftrace_func(ftrace_func_t func)
+int ftrace_update_ftrace_func(ftrace_func_t func)
 {
 	unsigned long ip = (unsigned long)(&ftrace_call);
 	unsigned char old[MCOUNT_INSN_SIZE], *new;
Index: linux-compile.git/arch/sparc64/kernel/Makefile
===================================================================
--- linux-compile.git.orig/arch/sparc64/kernel/Makefile	2008-10-21 17:07:17.000000000 -0400
+++ linux-compile.git/arch/sparc64/kernel/Makefile	2008-10-22 16:12:22.000000000 -0400
@@ -5,6 +5,8 @@
 EXTRA_AFLAGS := -ansi
 EXTRA_CFLAGS := -Werror
 
+CFLAGS_REMOVE_ftrace.o = -pg
+
 extra-y		:= head.o init_task.o vmlinux.lds
 
 obj-y		:= process.o setup.o cpu.o idprom.o reboot.o \
Index: linux-compile.git/arch/sparc64/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/sparc64/kernel/ftrace.c	2008-10-22 16:08:45.000000000 -0400
+++ linux-compile.git/arch/sparc64/kernel/ftrace.c	2008-10-22 16:10:25.000000000 -0400
@@ -9,12 +9,12 @@
 
 static const u32 ftrace_nop = 0x01000000;
 
-notrace unsigned char *ftrace_nop_replace(void)
+unsigned char *ftrace_nop_replace(void)
 {
 	return (char *)&ftrace_nop;
 }
 
-notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
+unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
 {
 	static u32 call;
 	s32 off;
@@ -25,7 +25,7 @@ notrace unsigned char *ftrace_call_repla
 	return (unsigned char *) &call;
 }
 
-notrace int
+int
 ftrace_modify_code(unsigned long ip, unsigned char *old_code,
 		   unsigned char *new_code)
 {
@@ -59,7 +59,7 @@ ftrace_modify_code(unsigned long ip, uns
 	return faulted;
 }
 
-notrace int ftrace_update_ftrace_func(ftrace_func_t func)
+int ftrace_update_ftrace_func(ftrace_func_t func)
 {
 	unsigned long ip = (unsigned long)(&ftrace_call);
 	unsigned char old[MCOUNT_INSN_SIZE], *new;
Index: linux-compile.git/arch/x86/kernel/Makefile
===================================================================
--- linux-compile.git.orig/arch/x86/kernel/Makefile	2008-10-21 17:07:17.000000000 -0400
+++ linux-compile.git/arch/x86/kernel/Makefile	2008-10-22 16:12:01.000000000 -0400
@@ -11,6 +11,7 @@ ifdef CONFIG_FTRACE
 CFLAGS_REMOVE_tsc.o = -pg
 CFLAGS_REMOVE_rtc.o = -pg
 CFLAGS_REMOVE_paravirt-spinlocks.o = -pg
+CFLAGS_REMOVE_ftrace.o = -pg
 endif
 
 #
Index: linux-compile.git/arch/x86/kernel/ftrace.c
===================================================================
--- linux-compile.git.orig/arch/x86/kernel/ftrace.c	2008-10-22 16:07:55.000000000 -0400
+++ linux-compile.git/arch/x86/kernel/ftrace.c	2008-10-22 16:10:33.000000000 -0400
@@ -33,17 +33,17 @@ union ftrace_code_union {
 };
 
 
-static int notrace ftrace_calc_offset(long ip, long addr)
+static int ftrace_calc_offset(long ip, long addr)
 {
 	return (int)(addr - ip);
 }
 
-notrace unsigned char *ftrace_nop_replace(void)
+unsigned char *ftrace_nop_replace(void)
 {
 	return (char *)ftrace_nop;
 }
 
-notrace unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
+unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
 {
 	static union ftrace_code_union calc;
 
@@ -57,7 +57,7 @@ notrace unsigned char *ftrace_call_repla
 	return calc.code;
 }
 
-notrace int
+int
 ftrace_modify_code(unsigned long ip, unsigned char *old_code,
 		   unsigned char *new_code)
 {
@@ -90,7 +90,7 @@ ftrace_modify_code(unsigned long ip, uns
 	return 0;
 }
 
-notrace int ftrace_update_ftrace_func(ftrace_func_t func)
+int ftrace_update_ftrace_func(ftrace_func_t func)
 {
 	unsigned long ip = (unsigned long)(&ftrace_call);
 	unsigned char old[MCOUNT_INSN_SIZE], *new;

-- 


* Re: [PATCH 00/13 v2] ftrace: clean ups and fixes
  2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
                   ` (12 preceding siblings ...)
  2008-10-22 21:27 ` [PATCH 13/13 v2] ftrace: remove notrace from arch ftrace file Steven Rostedt
@ 2008-10-23  9:29 ` Wenji Huang
  2008-10-23 10:49   ` Steven Rostedt
  13 siblings, 1 reply; 23+ messages in thread
From: Wenji Huang @ 2008-10-23  9:29 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds

Hi,
just several trivial things.

[PATCH 02/13 v2] ftrace: dynamic ftrace process only text section
+# Acceptible sections to record.

typo - Acceptible

[PATCH 03/13 v2] ftrace: return error on failed modified text.
+ * This is a very sensitive operatation and great care needs

typo - operatation
Why not put comments to function definition?

[PATCH 04/13 v2] ftrace: comment arch ftrace code
+	 *  as well as code changing. We do this by using the
+	 *  __copy_*_user functions.

According to [PATCH 05/13], change to using the proper API for reading 
and writing, not __copy_*_user.


Regards,
Wenji


* Re: [PATCH 00/13 v2] ftrace: clean ups and fixes
  2008-10-23  9:29 ` [PATCH 00/13 v2] ftrace: clean ups and fixes Wenji Huang
@ 2008-10-23 10:49   ` Steven Rostedt
  0 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-23 10:49 UTC (permalink / raw)
  To: Wenji Huang
  Cc: linux-kernel, Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds


On Thu, 23 Oct 2008, Wenji Huang wrote:

> Hi,
> just several trivial things.

Thanks,

> 
> [PATCH 02/13 v2] ftrace: dynamic ftrace process only text section
> +# Acceptible sections to record.
> 
> typo - Acceptible

Ah, it should be Acceptable.

> 
> [PATCH 03/13 v2] ftrace: return error on failed modified text.
> + * This is a very sensitive operatation and great care needs
> 
> typo - operatation

Sorry, I must have stuttered ;-)

> Why not put comments to function definition?

The function is arch-dependent, and the comment here is also to help
the arch maintainers know what is needed of that function.

> 
> [PATCH 04/13 v2] ftrace: comment arch ftrace code
> +	 *  as well as code changing. We do this by using the
> +	 *  __copy_*_user functions.
> 
> According to [PATCH 05/13], change to using the proper API for reading and
> writing, not __copy_*_user.

Good catch. Thanks,

-- Steve


* Re: [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-22 21:27 ` [PATCH 08/13 v2] ftrace: do not trace init sections Steven Rostedt
@ 2008-10-23 11:15   ` Jan Kiszka
  2008-10-23 11:36     ` Steven Rostedt
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Kiszka @ 2008-10-23 11:15 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

Steven Rostedt wrote:
> The recordmcount script is now robust enough not to process any sections
> but the .text section. But the gcc compiler still adds a call to mcount.
> 
> Note: The function mcount looks like:
> 
>   ENTRY(mcount)
> 	ret
>   END(mcount)
> 
> Which means the overhead is just a return.
> 
> This patch adds notrace to the init sections to not even bother calling
> mcount (which simply returns).

Sorry for a potentially dumb question (didn't follow all recent ftrace
developments), but doesn't this mean that code in such sections is now
invisible for all tracers, even with dynamic tracing disabled (in which
case they should cause no problem)? What if you *do* want to have such
functions in your trace as they may contribute to a problem or give other
useful hints?

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


* Re: [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-23 11:15   ` Jan Kiszka
@ 2008-10-23 11:36     ` Steven Rostedt
  2008-10-23 14:30       ` Jan Kiszka
  0 siblings, 1 reply; 23+ messages in thread
From: Steven Rostedt @ 2008-10-23 11:36 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: linux-kernel, Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt


On Thu, 23 Oct 2008, Jan Kiszka wrote:

> Steven Rostedt wrote:
> > The recordmcount script is now robust enough not to process any sections
> > but the .text section. But the gcc compiler still adds a call to mcount.
> > 
> > Note: The function mcount looks like:
> > 
> >   ENTRY(mcount)
> > 	ret
> >   END(mcount)
> > 
> > Which means the overhead is just a return.
> > 
> > This patch adds notrace to the init sections to not even bother calling
> > mcount (which simply returns).
> 
> Sorry for a potentially dumb question (didn't follow all recent ftrace
> developments), but doesn't this mean that code in such sections is now
> invisible for all tracers, even with dynamic tracing disabled (in which
> case they should cause no problem)? What if you *do* want to have such
> functions in your trace as they may contribute to a problem or give other
> useful hints?

Not a dumb question, I've thought about this too.

Well, you can still add tracing into those functions, just the mcount call 
will not be there. I've thought about other ways to handle this. Perhaps 
add it only when DYNAMIC_FTRACE is configured. But that to me is a new 
feature. The patches I'm submitting are to help with performance, bug 
fixes, and sanity checks. So I left out any optional notraces.

e.g. in init.h I could do something like.

#ifdef CONFIG_DYNAMIC_FTRACE
# define init_notrace notrace
#else
# define init_notrace
#endif

And then use the init_notrace throughout the file. If this is considered 
something that is acceptable for adding into 28, I would be happy to 
supply a patch.

-- Steve



* Re: [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-23 11:36     ` Steven Rostedt
@ 2008-10-23 14:30       ` Jan Kiszka
  2008-10-23 16:05         ` Steven Rostedt
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Kiszka @ 2008-10-23 14:30 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt

Steven Rostedt wrote:
> On Thu, 23 Oct 2008, Jan Kiszka wrote:
> 
>> Steven Rostedt wrote:
>>> The recordmcount script is now robust enough not to process any sections
>>> but the .text section. But the gcc compiler still adds a call to mcount.
>>>
>>> Note: The function mcount looks like:
>>>
>>>   ENTRY(mcount)
>>> 	ret
>>>   END(mcount)
>>>
>>> Which means the overhead is just a return.
>>>
>>> This patch adds notrace to the init sections to not even bother calling
>>> mcount (which simply returns).
>> Sorry for a potentially dumb question (didn't follow all recent ftrace
>> developments), but doesn't this mean that code in such sections is now
>> invisible for all tracers, even with dynamic tracing disabled (in which
>> case they should cause no problem)? What if you *do* want to have such
>> functions in your trace as they may contribute to a problem or give other
>> useful hints?
> 
> Not a dumb question, I've thought about this too.
> 
> Well, you can still add tracing into those functions, just the mcount call 
> will not be there. I've thought about other ways to handle this. Perhaps 
> add it only when DYNAMIC_FTRACE is configured. But that to me is a new 
> feature. The patches I'm submitting are to help with performance, bug 
> fixes, and sanity checks. So I left out any optional notraces.

Well, but mcount is only broken for .init sections if dynamic tracing is
on, no? Then I would focus the fix on that case as far as possible.

> 
> e.g. in init.h I could do something like.
> 
> #ifdef CONFIG_DYNAMIC_FTRACE
> # define init_notrace notrace
> #else
> # define init_notrace
> #endif
> 
> And then use the init_notrace throughout the file. If this is considered 
> something that is acceptable for adding into 28, I would be happy to 
> supply a patch.

From my POV (I've been using mcount tracing for a few years now), the thing
about this is gaining a complete overview of what happens at function
level, which code paths were taken (at that level). Each bit of
information you (have to) drop can hurt here, even more when it comes in
larger chunks, as in this case.

I don't know if this feature is already considered for / part of
mainline, but oops backtraces based on mcount instrumentation used to be
quite helpful for me to understand the wider context of several faults.
The more you have to mask out of this picture, the harder it gets to
match the trace to your model of the kernel activities. At least you
have to know more in advance about the limitation of the tracer...

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


* Re: [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-23 14:30       ` Jan Kiszka
@ 2008-10-23 16:05         ` Steven Rostedt
  2008-10-23 16:40           ` Ingo Molnar
  0 siblings, 1 reply; 23+ messages in thread
From: Steven Rostedt @ 2008-10-23 16:05 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: linux-kernel, Ingo Molnar, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt


On Thu, 23 Oct 2008, Jan Kiszka wrote:
> 
> From my POV (I've been using mcount tracing for a few years now), the thing
> about this is gaining a complete overview of what happens at function
> level, which code paths were taken (at that level). Each bit of
> information you (have to) drop can hurt here, even more when it comes in
> larger chunks, as in this case.

Then you should be happy :-)

I removed the notrace from those sections, although there is already a
notrace on __init; we probably should remove it. The recordmcount.pl
script does not add tracing to these functions, so if you want them
traced, you will need to use the static ftrace tracer (!DYNAMIC_FTRACE).

> 
> I don't know if this feature is already considered for / part of
> mainline, but oops backtraces based on mcount instrumentation used to be
> quite helpful for me to understand the wider context of several faults.
> The more you have to mask out of this picture, the harder it gets to
> match the trace to your model of the kernel activities. At least you
> have to know more in advance about the limitation of the tracer...

The ftrace buffer can be dumped out on oops. I'm not sure if it is always 
on. If it is, I need to make it default off and add a command line and 
sysctl to turn it on. Otherwise, we will have people complaining about the 
extra output to the console on oops.

-- Steve



* Re: [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-23 16:05         ` Steven Rostedt
@ 2008-10-23 16:40           ` Ingo Molnar
  2008-10-23 16:51             ` Steven Rostedt
  0 siblings, 1 reply; 23+ messages in thread
From: Ingo Molnar @ 2008-10-23 16:40 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jan Kiszka, linux-kernel, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt


* Steven Rostedt <rostedt@goodmis.org> wrote:

> > I don't know if this feature is already considered for / part of 
> > mainline, but oops backtraces based on mcount instrumentation used 
> > to be quite helpful for me to understand the wider context of 
> > several faults. The more you have to mask out of this picture, the 
> > harder it gets to match the trace to your model of the kernel 
> > activities. At least you have to know more in advance about the 
> > limitation of the tracer...
> 
> The ftrace buffer can be dumped out on oops. I'm not sure if it is 
> always on. If it is, I need to make it default off and add a command 
> line and sysctl to turn it on. Otherwise, we will have people 
> complaining about the extra output to the console on oops.

it dumps currently if a tracer is enabled. I saw it a couple of times 
already during crashes.

	Ingo


* Re: [PATCH 08/13 v2] ftrace: do not trace init sections
  2008-10-23 16:40           ` Ingo Molnar
@ 2008-10-23 16:51             ` Steven Rostedt
  0 siblings, 0 replies; 23+ messages in thread
From: Steven Rostedt @ 2008-10-23 16:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Jan Kiszka, linux-kernel, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt


On Thu, 23 Oct 2008, Ingo Molnar wrote:
> * Steven Rostedt <rostedt@goodmis.org> wrote:
> > 
> > The ftrace buffer can be dumped out on oops. I'm not sure if it is 
> > always on. If it is, I need to make it default off and add a command 
> > line and sysctl to turn it on. Otherwise, we will have people 
> > complaining about the extra output to the console on oops.
> 
> it dumps currently if a tracer is enabled. I saw it a couple of times 
> already during crashes.

That's right. I owe you a patch. I'll write one up.

-- Steve



* Re: [PATCH 01/13 v2] ftrace: handle generic arch calls
  2008-10-22 21:27 ` [PATCH 01/13 v2] ftrace: handle generic arch calls Steven Rostedt
@ 2008-10-27 15:35   ` Ingo Molnar
  0 siblings, 0 replies; 23+ messages in thread
From: Ingo Molnar @ 2008-10-27 15:35 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Frederic Weisbecker, Abhishek Sagar,
	David S. Miller, Thomas Gleixner, Peter Zijlstra, Andrew Morton,
	Linus Torvalds, Steven Rostedt


* Steven Rostedt <rostedt@goodmis.org> wrote:

> The recordmcount script requires that the actual arch is passed in.
> This works well when ARCH=i386 or ARCH=x86_64 but does not handle the
> case of ARCH=x86.
> 
> This patch adds a parameter to the function to pass in the number of
> bits of the architecture. So that it can determine if x86 should be
> run for x86_64 or i386 archs.
> 
> Signed-off-by: Steven Rostedt <srostedt@redhat.com>

small sidenote: please preserve feedback/review activities in the 
changelog, even if there is no delta patch. I have added this to the 
changelog:

 v2: fix i386 bug, noticed by Andrew Morton

	Ingo


end of thread, other threads:[~2008-10-27 15:36 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-10-22 21:27 [PATCH 00/13 v2] ftrace: clean ups and fixes Steven Rostedt
2008-10-22 21:27 ` [PATCH 01/13 v2] ftrace: handle generic arch calls Steven Rostedt
2008-10-27 15:35   ` Ingo Molnar
2008-10-22 21:27 ` [PATCH 02/13 v2] ftrace: dynamic ftrace process only text section Steven Rostedt
2008-10-22 21:27 ` [PATCH 03/13 v2] ftrace: return error on failed modified text Steven Rostedt
2008-10-22 21:27 ` [PATCH 04/13 v2] ftrace: comment arch ftrace code Steven Rostedt
2008-10-22 21:27 ` [PATCH 05/13 v2] ftrace: use probe_kernel Steven Rostedt
2008-10-22 21:27 ` [PATCH 06/13 v2] ftrace: only have ftrace_kill atomic Steven Rostedt
2008-10-22 21:27 ` [PATCH 07/13 v2] ftrace: add ftrace warn on to disable ftrace Steven Rostedt
2008-10-22 21:27 ` [PATCH 08/13 v2] ftrace: do not trace init sections Steven Rostedt
2008-10-23 11:15   ` Jan Kiszka
2008-10-23 11:36     ` Steven Rostedt
2008-10-23 14:30       ` Jan Kiszka
2008-10-23 16:05         ` Steven Rostedt
2008-10-23 16:40           ` Ingo Molnar
2008-10-23 16:51             ` Steven Rostedt
2008-10-22 21:27 ` [PATCH 09/13 v2] ftrace: disable dynamic ftrace for all archs that use daemon Steven Rostedt
2008-10-22 21:27 ` [PATCH 10/13 v2] ftrace: remove daemon Steven Rostedt
2008-10-22 21:27 ` [PATCH 11/13 v2] ftrace: remove mcount set Steven Rostedt
2008-10-22 21:27 ` [PATCH 12/13 v2] ftrace: remove ftrace hash Steven Rostedt
2008-10-22 21:27 ` [PATCH 13/13 v2] ftrace: remove notrace from arch ftrace file Steven Rostedt
2008-10-23  9:29 ` [PATCH 00/13 v2] ftrace: clean ups and fixes Wenji Huang
2008-10-23 10:49   ` Steven Rostedt
