* [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up
@ 2017-03-07 21:28 Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 1/4] tracing: Split tracing initialization into two for early initialization Steven Rostedt
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-07 21:28 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Todd Brandt

I've had people ask about moving tracing up further in the boot process.
This patch series looks at function tracing only. It allows for tracing
(and function filtering) to be moved right after memory is initialized.
To have it happen before memory initialization would require a bit more
work with allocating the ring buffer. But this is a start.

I placed a hook into free_reserved_area(), which is used by all archs
to free the init memory. Having it pass the range being freed to ftrace
lets ftrace remove from its records any registered functions in that
range, so that it doesn't try to modify code that no longer exists.
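
A concrete illustration of what this enables: the kind of kernel command line
one could boot with once the series is applied. ftrace= and ftrace_filter= are
existing boot options; the filter choice below is only an example.

  ftrace=function ftrace_filter=do_one_initcall

With these patches, init_function_trace() is called from tracer_alloc_buffers()
right after mm_init() rather than from a core_initcall, so the tracer selected
on the command line (and, per the note above, the function filter) can take
effect right after memory is initialized.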


Steven Rostedt (VMware) (4):
      tracing: Split tracing initialization into two for early initialization
      ftrace: Move ftrace_init() to right after memory initialization
      ftrace: Have function tracing start in early boot up
      ftrace: Allow for function tracing to record init functions on boot up

----
 include/linux/ftrace.h         |  5 +++++
 include/linux/init.h           |  4 +++-
 init/main.c                    |  9 ++++++---
 kernel/trace/ftrace.c          | 44 ++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/trace.c           |  9 ++++++++-
 kernel/trace/trace.h           |  2 ++
 kernel/trace/trace_functions.c |  3 +--
 mm/page_alloc.c                |  4 ++++
 scripts/recordmcount.c         |  1 +
 scripts/recordmcount.pl        |  1 +
 10 files changed, 75 insertions(+), 7 deletions(-)


* [RFC][PATCH 1/4] tracing: Split tracing initialization into two for early initialization
  2017-03-07 21:28 [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up Steven Rostedt
@ 2017-03-07 21:28 ` Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 2/4] ftrace: Move ftrace_init() to right after memory initialization Steven Rostedt
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-07 21:28 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Todd Brandt

[-- Attachment #1: 0001-tracing-Split-tracing-initialization-into-two-for-ea.patch --]
[-- Type: text/plain, Size: 2131 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Create an early_trace_init() function that will initialize the buffers and
allow for earlier use of trace_printk(). This will also allow future work
to have function tracing start earlier at boot up.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/ftrace.h | 2 ++
 init/main.c            | 5 ++++-
 kernel/trace/trace.c   | 6 +++++-
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 3633e8beff39..569db5589851 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -42,8 +42,10 @@
 /* Main tracing buffer and events set up */
 #ifdef CONFIG_TRACING
 void trace_init(void);
+void early_trace_init(void);
 #else
 static inline void trace_init(void) { }
+static inline void early_trace_init(void) { }
 #endif
 
 struct module;
diff --git a/init/main.c b/init/main.c
index b0c9d6facef9..0d6cc6661f2b 100644
--- a/init/main.c
+++ b/init/main.c
@@ -539,6 +539,9 @@ asmlinkage __visible void __init start_kernel(void)
 	trap_init();
 	mm_init();
 
+	/* trace_printk can be enabled here */
+	early_trace_init();
+
 	/*
 	 * Set up the scheduler prior starting any interrupts (such as the
 	 * timer interrupt). Full topology setup happens at smp_init()
@@ -564,7 +567,7 @@ asmlinkage __visible void __init start_kernel(void)
 
 	rcu_init();
 
-	/* trace_printk() and trace points may be used after this */
+	/* Trace events are available after this */
 	trace_init();
 
 	context_tracking_init();
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 54e3b8711aca..c4c21de61145 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7999,7 +7999,7 @@ __init static int tracer_alloc_buffers(void)
 	return ret;
 }
 
-void __init trace_init(void)
+void __init early_trace_init(void)
 {
 	if (tracepoint_printk) {
 		tracepoint_print_iter =
@@ -8010,6 +8010,10 @@ void __init trace_init(void)
 			static_key_enable(&tracepoint_printk_key.key);
 	}
 	tracer_alloc_buffers();
+}
+
+void __init trace_init(void)
+{
 	trace_event_init();
 }
 
-- 
2.10.2


* [RFC][PATCH 2/4] ftrace: Move ftrace_init() to right after memory initialization
  2017-03-07 21:28 [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 1/4] tracing: Split tracing initialization into two for early initialization Steven Rostedt
@ 2017-03-07 21:28 ` Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 3/4] ftrace: Have function tracing start in early boot up Steven Rostedt
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-07 21:28 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Todd Brandt

[-- Attachment #1: 0002-ftrace-Move-ftrace_init-to-right-after-memory-initia.patch --]
[-- Type: text/plain, Size: 927 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Initialize the ftrace records immediately after memory initialization, as
that is all that is required for the records to be created. This will allow
for future work to get function tracing started earlier in the boot process.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 init/main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/init/main.c b/init/main.c
index 0d6cc6661f2b..552a2c9ec6d9 100644
--- a/init/main.c
+++ b/init/main.c
@@ -539,6 +539,8 @@ asmlinkage __visible void __init start_kernel(void)
 	trap_init();
 	mm_init();
 
+	ftrace_init();
+
 	/* trace_printk can be enabled here */
 	early_trace_init();
 
@@ -670,8 +672,6 @@ asmlinkage __visible void __init start_kernel(void)
 		efi_free_boot_services();
 	}
 
-	ftrace_init();
-
 	/* Do the rest non-__init'ed, we're now alive */
 	rest_init();
 }
-- 
2.10.2


* [RFC][PATCH 3/4] ftrace: Have function tracing start in early boot up
  2017-03-07 21:28 [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 1/4] tracing: Split tracing initialization into two for early initialization Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 2/4] ftrace: Move ftrace_init() to right after memory initialization Steven Rostedt
@ 2017-03-07 21:28 ` Steven Rostedt
  2017-03-07 21:28   ` Steven Rostedt
  2017-03-08 19:15 ` [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in " Todd Brandt
  4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-07 21:28 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Todd Brandt

[-- Attachment #1: 0003-ftrace-Have-function-tracing-start-in-early-boot-up.patch --]
[-- Type: text/plain, Size: 2423 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Register the function tracer right after the tracing buffers are initialized
in early boot up. This will allow function tracing to begin early if it is
enabled via the kernel command line.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c           | 3 +++
 kernel/trace/trace.h           | 2 ++
 kernel/trace/trace_functions.c | 3 +--
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index c4c21de61145..b8f7f5cc11b0 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7965,6 +7965,9 @@ __init static int tracer_alloc_buffers(void)
 
 	register_tracer(&nop_trace);
 
+	/* Function tracing may start here (via kernel command line) */
+	init_function_trace();
+
 	/* All seems OK, enable tracing */
 	tracing_disabled = 0;
 
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index ae1cce91fead..d8cbad40d3e2 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -896,6 +896,7 @@ int using_ftrace_ops_list_func(void);
 void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d_tracer);
 void ftrace_init_tracefs_toplevel(struct trace_array *tr,
 				  struct dentry *d_tracer);
+int init_function_trace(void);
 #else
 static inline int ftrace_trace_task(struct trace_array *tr)
 {
@@ -914,6 +915,7 @@ ftrace_init_global_array_ops(struct trace_array *tr) { }
 static inline void ftrace_reset_array_ops(struct trace_array *tr) { }
 static inline void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d) { }
 static inline void ftrace_init_tracefs_toplevel(struct trace_array *tr, struct dentry *d) { }
+static inline int init_function_trace(void) { return 0; }
 /* ftace_func_t type is not defined, use macro instead of static inline */
 #define ftrace_init_array_ops(tr, func) do { } while (0)
 #endif /* CONFIG_FUNCTION_TRACER */
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index 0efa00d80623..4199ca61b0e5 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -687,9 +687,8 @@ static inline int init_func_cmd_traceon(void)
 }
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
-static __init int init_function_trace(void)
+__init int init_function_trace(void)
 {
 	init_func_cmd_traceon();
 	return register_tracer(&function_trace);
 }
-core_initcall(init_function_trace);
-- 
2.10.2


* [RFC][PATCH 4/4] ftrace: Allow for function tracing to record init functions on boot up
  2017-03-07 21:28 [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up Steven Rostedt
@ 2017-03-07 21:28   ` Steven Rostedt
  2017-03-07 21:28 ` [RFC][PATCH 2/4] ftrace: Move ftrace_init() to right after memory initialization Steven Rostedt
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-07 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Todd Brandt, linux-mm,
	Vlastimil Babka, Mel Gorman, Peter Zijlstra

[-- Attachment #1: 0004-ftrace-Allow-for-function-tracing-to-record-init-fun.patch --]
[-- Type: text/plain, Size: 5869 bytes --]

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Adding a hook into free_reserved_area() that informs ftrace that boot-up init
text is being freed lets ftrace safely remove those init functions from its
records, which keeps ftrace from trying to modify text that no longer
exists.

Note, this still does not allow for tracing the .init text of modules, as
modules require different work for freeing their init code.

Link: http://lkml.kernel.org/r/1488502497.7212.24.camel@linux.intel.com

Cc: linux-mm@kvack.org
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Requested-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/ftrace.h  |  3 +++
 include/linux/init.h    |  4 +++-
 kernel/trace/ftrace.c   | 44 ++++++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c         |  4 ++++
 scripts/recordmcount.c  |  1 +
 scripts/recordmcount.pl |  1 +
 6 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 569db5589851..25407b5553c3 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -249,6 +249,8 @@ static inline int ftrace_function_local_disabled(struct ftrace_ops *ops)
 extern void ftrace_stub(unsigned long a0, unsigned long a1,
 			struct ftrace_ops *op, struct pt_regs *regs);
 
+void ftrace_free_mem(void *start, void *end);
+
 #else /* !CONFIG_FUNCTION_TRACER */
 /*
  * (un)register_ftrace_function must be a macro since the ops parameter
@@ -262,6 +264,7 @@ static inline int ftrace_nr_registered_ops(void)
 }
 static inline void clear_ftrace_function(void) { }
 static inline void ftrace_kill(void) { }
+static inline void ftrace_free_mem(void *start, void *end) { }
 #endif /* CONFIG_FUNCTION_TRACER */
 
 #ifdef CONFIG_STACK_TRACER
diff --git a/include/linux/init.h b/include/linux/init.h
index 885c3e6d0f9d..c119e76f6d6e 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -39,7 +39,7 @@
 
 /* These are for everybody (although not all archs will actually
    discard it in modules) */
-#define __init		__section(.init.text) __cold notrace __latent_entropy
+#define __init		__section(.init.text) __cold __inittrace __latent_entropy
 #define __initdata	__section(.init.data)
 #define __initconst	__section(.init.rodata)
 #define __exitdata	__section(.exit.data)
@@ -68,8 +68,10 @@
 
 #ifdef MODULE
 #define __exitused
+#define __inittrace notrace
 #else
 #define __exitused  __used
+#define __inittrace
 #endif
 
 #define __exit          __section(.exit.text) __exitused __cold notrace
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index d129ae51329a..4c2d751eb886 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5251,6 +5251,50 @@ void ftrace_module_enable(struct module *mod)
 	mutex_unlock(&ftrace_lock);
 }
 
+void ftrace_free_mem(void *start_ptr, void *end_ptr)
+{
+	unsigned long start = (unsigned long)start_ptr;
+	unsigned long end = (unsigned long)end_ptr;
+	struct ftrace_page **last_pg = &ftrace_pages_start;
+	struct ftrace_page *pg;
+	struct dyn_ftrace *rec;
+	struct dyn_ftrace key;
+	int order;
+
+	key.ip = start;
+	key.flags = end;	/* overload flags, as it is unsigned long */
+
+	mutex_lock(&ftrace_lock);
+
+	for (pg = ftrace_pages_start; pg; last_pg = &pg->next, pg = *last_pg) {
+		if (end < pg->records[0].ip ||
+		    start >= (pg->records[pg->index - 1].ip + MCOUNT_INSN_SIZE))
+			continue;
+ again:
+		rec = bsearch(&key, pg->records, pg->index,
+			      sizeof(struct dyn_ftrace),
+			      ftrace_cmp_recs);
+		if (!rec)
+			continue;
+		pg->index--;
+		if (!pg->index) {
+			*last_pg = pg->next;
+			order = get_count_order(pg->size / ENTRIES_PER_PAGE);
+			free_pages((unsigned long)pg->records, order);
+			kfree(pg);
+			pg = container_of(last_pg, struct ftrace_page, next);
+			if (!(*last_pg))
+				ftrace_pages = pg;
+			continue;
+		}
+		memmove(rec, rec + 1,
+			(pg->index - (rec - pg->records)) * sizeof(*rec));
+		/* More than one function may be in this block */
+		goto again;
+	}
+	mutex_unlock(&ftrace_lock);
+}
+
 void ftrace_module_init(struct module *mod)
 {
 	if (ftrace_disabled || !mod->num_ftrace_callsites)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2c6d5f64feca..95ac03de4cda 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -64,6 +64,7 @@
 #include <linux/page_owner.h>
 #include <linux/kthread.h>
 #include <linux/memcontrol.h>
+#include <linux/ftrace.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -6441,6 +6442,9 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
 	void *pos;
 	unsigned long pages = 0;
 
+	/* This may be .init text, inform ftrace to remove it */
+	ftrace_free_mem(start, end);
+
 	start = (void *)PAGE_ALIGN((unsigned long)start);
 	end = (void *)((unsigned long)end & PAGE_MASK);
 	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
index aeb34223167c..16e086dcc567 100644
--- a/scripts/recordmcount.c
+++ b/scripts/recordmcount.c
@@ -412,6 +412,7 @@ static int
 is_mcounted_section_name(char const *const txtname)
 {
 	return strcmp(".text",           txtname) == 0 ||
+		strcmp(".init.text",     txtname) == 0 ||
 		strcmp(".ref.text",      txtname) == 0 ||
 		strcmp(".sched.text",    txtname) == 0 ||
 		strcmp(".spinlock.text", txtname) == 0 ||
diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
index faac4b10d8ea..328590d58eee 100755
--- a/scripts/recordmcount.pl
+++ b/scripts/recordmcount.pl
@@ -130,6 +130,7 @@ if ($inputfile =~ m,kernel/trace/ftrace\.o$,) {
 # Acceptable sections to record.
 my %text_sections = (
      ".text" => 1,
+     ".init.text" => 1,
      ".ref.text" => 1,
      ".sched.text" => 1,
      ".spinlock.text" => 1,
-- 
2.10.2


* Re: [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up
  2017-03-07 21:28 [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up Steven Rostedt
                   ` (3 preceding siblings ...)
  2017-03-07 21:28   ` Steven Rostedt
@ 2017-03-08 19:15 ` Todd Brandt
  2017-03-08 19:32   ` Todd Brandt
  4 siblings, 1 reply; 11+ messages in thread
From: Todd Brandt @ 2017-03-08 19:15 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Tue, 2017-03-07 at 16:28 -0500, Steven Rostedt wrote:
> I've had people ask about moving tracing up further in the boot process.
> This patch series looks at function tracing only. It allows for tracing
> (and function filtering) to be moved right after memory is initialized.
> To have it happen before memory initialization would require a bit more
> work with allocating the ring buffer. But this is a start.

I just tested out the patch and it does move function trace up to about
100ms from boot which is nice. What I'd really like is for graph trace
to start sooner as well.

P.S. I've noticed that the dmesg log and ftrace log times don't seem to
match up anymore since the v4.10 release. Is the default still the local
clock? On v4.10-rc8 I was able to match initcall_debug output with
do_one_initcall function_graph trace perfectly. But the latest is off by
anywhere from several microseconds to several milliseconds. Did I just
get lucky with v4.10-rc8 or should these still align?

v4.10-rc8 (ftrace time = dmesg time)

FTRACE:
0.519902 |    0)   systemd-1    |               |  do_one_initcall() {
0.519921 |    0)   systemd-1    |               |  do_one_initcall() {
0.519929 |    0)   systemd-1    |               |  do_one_initcall() {
0.519938 |    0)   systemd-1    |               |  do_one_initcall() {
0.519946 |    0)   systemd-1    |               |  do_one_initcall() {
DMESG:
[    0.519909] calling  init_per_zone_wmark_min+0x0/0x73 @ 1
[    0.519925] calling  init_zero_pfn+0x0/0x3d @ 1
[    0.519932] calling  memory_failure_init+0x0/0xa4 @ 1
[    0.519941] calling  cma_init_reserved_areas+0x0/0x1cd @ 1
[    0.519949] calling  fsnotify_init+0x0/0x26 @ 1

v4.11-rc1 (ftrace = dmesg + 5.089 ms)

FTRACE:
0.445317 |    2)   systemd-1    |               |  do_one_initcall() {
0.445338 |    2)   systemd-1    |               |  do_one_initcall() {
0.445346 |    2)   systemd-1    |               |  do_one_initcall() {
0.445355 |    2)   systemd-1    |               |  do_one_initcall() {
0.445363 |    2)   systemd-1    |               |  do_one_initcall() {
DMESG:
[    0.440232] calling  init_per_zone_wmark_min+0x0/0x73 @ 1
[    0.440249] calling  init_zero_pfn+0x0/0x3d @ 1
[    0.440257] calling  memory_failure_init+0x0/0xa4 @ 1
[    0.440266] calling  cma_init_reserved_areas+0x0/0x1cd @ 1
[    0.440275] calling  fsnotify_init+0x0/0x26 @ 1
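
(A sketch of the boot options behind this kind of side-by-side comparison;
these are existing kernel parameters, and the graph filter is only an example:)

  initcall_debug ftrace=function_graph ftrace_graph_filter=do_one_initcall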


> 
> I placed a hook into free_reserved_area() which is used by all archs
> to free the init memory. Having it pass the range being freed to ftrace
> lets ftrace clean up any function that is registered such that it doesn't
> try to modify code that no longer exists. 
> 
> 
> Steven Rostedt (VMware) (4):
>       tracing: Split tracing initialization into two for early initialization
>       ftrace: Move ftrace_init() to right after memory initialization
>       ftrace: Have function tracing start in early boot up
>       ftrace: Allow for function tracing to record init functions on boot up
> 
> ----
>  include/linux/ftrace.h         |  5 +++++
>  include/linux/init.h           |  4 +++-
>  init/main.c                    |  9 ++++++---
>  kernel/trace/ftrace.c          | 44 ++++++++++++++++++++++++++++++++++++++++++
>  kernel/trace/trace.c           |  9 ++++++++-
>  kernel/trace/trace.h           |  2 ++
>  kernel/trace/trace_functions.c |  3 +--
>  mm/page_alloc.c                |  4 ++++
>  scripts/recordmcount.c         |  1 +
>  scripts/recordmcount.pl        |  1 +
>  10 files changed, 75 insertions(+), 7 deletions(-)


* Re: [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up
  2017-03-08 19:15 ` [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in " Todd Brandt
@ 2017-03-08 19:32   ` Todd Brandt
  2017-03-08 19:36     ` Steven Rostedt
  0 siblings, 1 reply; 11+ messages in thread
From: Todd Brandt @ 2017-03-08 19:32 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Wed, 2017-03-08 at 11:15 -0800, Todd Brandt wrote:
> On Tue, 2017-03-07 at 16:28 -0500, Steven Rostedt wrote:
> > I've had people ask about moving tracing up further in the boot process.
> > This patch series looks at function tracing only. It allows for tracing
> > (and function filtering) to be moved right after memory is initialized.
> > To have it happen before memory initialization would require a bit more
> > work with allocating the ring buffer. But this is a start.
> 
> I just tested out the patch and it does move function trace up to about
> 100ms from boot which is nice. What I'd really like is for graph trace
> to start sooner as well.
> 
> P.S. I've noticed that the dmesg log and ftrace log times don't seem to
> match up anymore since the v4.10 release. Is the default still the local
> clock? On v4.10-rc8 I was able to match initcall_debug output with
> do_one_initcall function_graph trace perfectly. But the latest is off by
> anywhere from several microseconds to several milliseconds. Did I just
> get lucky with v4.10-rc8 or should these still align?
> 
> v4.10-rc8 (ftrace time = dmesg time)
> 
> FTRACE:
> 0.519902 |    0)   systemd-1    |               |  do_one_initcall() {
> 0.519921 |    0)   systemd-1    |               |  do_one_initcall() {
> 0.519929 |    0)   systemd-1    |               |  do_one_initcall() {
> 0.519938 |    0)   systemd-1    |               |  do_one_initcall() {
> 0.519946 |    0)   systemd-1    |               |  do_one_initcall() {
> DMESG:
> [    0.519909] calling  init_per_zone_wmark_min+0x0/0x73 @ 1
> [    0.519925] calling  init_zero_pfn+0x0/0x3d @ 1
> [    0.519932] calling  memory_failure_init+0x0/0xa4 @ 1
> [    0.519941] calling  cma_init_reserved_areas+0x0/0x1cd @ 1
> [    0.519949] calling  fsnotify_init+0x0/0x26 @ 1
> 
> v4.11-rc1 (ftrace = dmesg + 5.089 ms)
> 
> FTRACE:
> 0.445317 |    2)   systemd-1    |               |  do_one_initcall() {
> 0.445338 |    2)   systemd-1    |               |  do_one_initcall() {
> 0.445346 |    2)   systemd-1    |               |  do_one_initcall() {
> 0.445355 |    2)   systemd-1    |               |  do_one_initcall() {
> 0.445363 |    2)   systemd-1    |               |  do_one_initcall() {
> DMESG:
> [    0.440232] calling  init_per_zone_wmark_min+0x0/0x73 @ 1
> [    0.440249] calling  init_zero_pfn+0x0/0x3d @ 1
> [    0.440257] calling  memory_failure_init+0x0/0xa4 @ 1
> [    0.440266] calling  cma_init_reserved_areas+0x0/0x1cd @ 1
> [    0.440275] calling  fsnotify_init+0x0/0x26 @ 1

Oops, no more than 5 minutes after I sent this I figured out that
there's a trace_clock kernel parameter (not in kernel-parameters.txt).
Once I set it to global, all is well. Never mind :)
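
(For reference, a minimal sketch of the two ways to select the global clock,
assuming the trace_clock= boot parameter mentioned above and the usual
tracefs file of the same name:)

  trace_clock=global                                      (kernel command line)
  echo global > /sys/kernel/debug/tracing/trace_clock     (at run time)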

> 
> 
> > 
> > I placed a hook into free_reserved_area() which is used by all archs
> > to free the init memory. Having it pass the range being freed to ftrace
> > lets ftrace clean up any function that is registered such that it doesn't
> > try to modify code that no longer exists. 
> > 
> > 
> > Steven Rostedt (VMware) (4):
> >       tracing: Split tracing initialization into two for early initialization
> >       ftrace: Move ftrace_init() to right after memory initialization
> >       ftrace: Have function tracing start in early boot up
> >       ftrace: Allow for function tracing to record init functions on boot up
> > 
> > ----
> >  include/linux/ftrace.h         |  5 +++++
> >  include/linux/init.h           |  4 +++-
> >  init/main.c                    |  9 ++++++---
> >  kernel/trace/ftrace.c          | 44 ++++++++++++++++++++++++++++++++++++++++++
> >  kernel/trace/trace.c           |  9 ++++++++-
> >  kernel/trace/trace.h           |  2 ++
> >  kernel/trace/trace_functions.c |  3 +--
> >  mm/page_alloc.c                |  4 ++++
> >  scripts/recordmcount.c         |  1 +
> >  scripts/recordmcount.pl        |  1 +
> >  10 files changed, 75 insertions(+), 7 deletions(-)
> 


* Re: [RFC][PATCH 0/4] tracing: Allow function tracing to start earlier in boot up
  2017-03-08 19:32   ` Todd Brandt
@ 2017-03-08 19:36     ` Steven Rostedt
  0 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-08 19:36 UTC (permalink / raw)
  To: Todd Brandt; +Cc: linux-kernel, Ingo Molnar, Andrew Morton

On Wed, 08 Mar 2017 11:32:25 -0800
Todd Brandt <todd.e.brandt@linux.intel.com> wrote:

> On Wed, 2017-03-08 at 11:15 -0800, Todd Brandt wrote:
> > On Tue, 2017-03-07 at 16:28 -0500, Steven Rostedt wrote:  
> > > I've had people ask about moving tracing up further in the boot process.
> > > This patch series looks at function tracing only. It allows for tracing
> > > (and function filtering) to be moved right after memory is initialized.
> > > To have it happen before memory initialization would require a bit more
> > > work with allocating the ring buffer. But this is a start.  
> > 
> > I just tested out the patch and it does move function trace up to about
> > 100ms from boot which is nice. What I'd really like is for graph trace
> > to start sooner as well.

I can add this. I just wanted this out first.

The one part I want people to notice is the last patch, where I add a
hook into free_reserved_area() as init memory is freed. I'll ping the mm
folks to make sure they are OK with that.



> > [    0.440232] calling  init_per_zone_wmark_min+0x0/0x73 @ 1
> > [    0.440249] calling  init_zero_pfn+0x0/0x3d @ 1
> > [    0.440257] calling  memory_failure_init+0x0/0xa4 @ 1
> > [    0.440266] calling  cma_init_reserved_areas+0x0/0x1cd @ 1
> > [    0.440275] calling  fsnotify_init+0x0/0x26 @ 1  
> 
> Oops, no sooner than 5 minutes after I sent this did I figure out
> there's a trace_clock kernel parameter (not in kernel-parameters.txt).
> Once I set it to global all is well. Never mind :)

I was just about to ask ;-)

-- Steve


* Re: [RFC][PATCH 4/4] ftrace: Allow for function tracing to record init functions on boot up
  2017-03-07 21:28   ` Steven Rostedt
@ 2017-03-08 20:07     ` Steven Rostedt
  -1 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Todd Brandt, linux-mm,
	Vlastimil Babka, Mel Gorman, Peter Zijlstra


Dear mm folks,

Are you OK with this change? I need a hook into the point where the init
sections are freed, along with the addresses being freed. As each arch
frees its own init sections, I need a single location to place my hook.
The archs all call free_reserved_area(). As this isn't a critical
section (i.e. one that needs to be really fast), calling into ftrace
with the freed addresses should not be an issue. The ftrace code uses a
binary search within the blocks of locations, so it is rather fast
itself.

Thoughts? Acks? :-)

-- Steve


> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2c6d5f64feca..95ac03de4cda 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -64,6 +64,7 @@
>  #include <linux/page_owner.h>
>  #include <linux/kthread.h>
>  #include <linux/memcontrol.h>
> +#include <linux/ftrace.h>
>  
>  #include <asm/sections.h>
>  #include <asm/tlbflush.h>
> @@ -6441,6 +6442,9 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
>  	void *pos;
>  	unsigned long pages = 0;
>  
> +	/* This may be .init text, inform ftrace to remove it */
> +	ftrace_free_mem(start, end);
> +
>  	start = (void *)PAGE_ALIGN((unsigned long)start);
>  	end = (void *)((unsigned long)end & PAGE_MASK);
>  	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {

