* [for-next][PATCH 0/8] tracing: Last minute updates for 4.10
@ 2016-12-09 14:26 Steven Rostedt
  2016-12-09 14:26 ` [for-next][PATCH 1/8] tracing: Have the reg function allow to fail Steven Rostedt
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:26 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

There are two patches that are marked for stable. But since Linus will
hopefully open the merge window very soon, I figured I'd let those changes
sit in linux-next for a week before pushing them to Linus. They are not that
critical, as they are old bugs and require root privilege to abuse.

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: 1a41442864e35bff859582fe9c5d051d0b1040ba


Steven Rostedt (Red Hat) (8):
      tracing: Have the reg function allow to fail
      tracing: Do not start benchmark on boot up
      tracing: Have system enable return error if one of the events fail
      tracing: Allow benchmark to be enabled at early_initcall()
      ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it
      tracing: Replace kmap with copy_from_user() in trace_marker writing
      fgraph: Handle a case where a tracer ignores set_graph_notrace
      tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too

----
 arch/powerpc/include/asm/trace.h                  |   4 +-
 arch/powerpc/platforms/powernv/opal-tracepoints.c |   6 +-
 arch/powerpc/platforms/pseries/lpar.c             |   6 +-
 arch/x86/entry/entry_32.S                         |   4 +-
 arch/x86/include/asm/trace/exceptions.h           |   2 +-
 arch/x86/include/asm/trace/irq_vectors.h          |   2 +-
 arch/x86/kernel/tracepoint.c                      |   3 +-
 drivers/i2c/i2c-core.c                            |   3 +-
 include/linux/tracepoint-defs.h                   |   2 +-
 include/linux/tracepoint.h                        |   2 +-
 include/trace/events/i2c.h                        |   2 +-
 kernel/trace/trace.c                              | 139 ++++++----------------
 kernel/trace/trace.h                              |  11 ++
 kernel/trace/trace_benchmark.c                    |  26 +++-
 kernel/trace/trace_benchmark.h                    |   2 +-
 kernel/trace/trace_events.c                       |  13 +-
 kernel/trace/trace_functions_graph.c              |  31 +++--
 kernel/trace/trace_irqsoff.c                      |  12 ++
 kernel/trace/trace_sched_wakeup.c                 |  12 ++
 kernel/tracepoint.c                               |  12 +-
 samples/trace_events/trace-events-sample.c        |   3 +-
 samples/trace_events/trace-events-sample.h        |   2 +-
 22 files changed, 162 insertions(+), 137 deletions(-)


* [for-next][PATCH 1/8] tracing: Have the reg function allow to fail
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
@ 2016-12-09 14:26 ` Steven Rostedt
  2016-12-09 14:26 ` [for-next][PATCH 2/8] tracing: Do not start benchmark on boot up Steven Rostedt
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, David Howells, Seiji Aguchi,
	Anton Blanchard, Mathieu Desnoyers

[-- Attachment #1: 0001-tracing-Have-the-reg-function-allow-to-fail.patch --]
[-- Type: text/plain, Size: 10488 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Some tracepoints have a registration function that gets called when the
tracepoint is enabled. There may be cases where the registration function
must fail (for example, if it cannot allocate enough memory). In this case,
the tracepoint should also fail to register, otherwise the user would not
know why the tracepoint is not working.
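
A minimal sketch of the shape this enables (the names and the allocation
are hypothetical, not from the patch): a reg function can now report
failure, and that failure propagates so the tracepoint registration
itself fails with the same error.

   /* hypothetical example: a reg function that can fail */
   static char *my_buf;

   static int my_tracepoint_regfunc(void)
   {
           my_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
           if (!my_buf)
                   return -ENOMEM; /* tracepoint_probe_register() now fails too */
           return 0;
   }

   static void my_tracepoint_unregfunc(void)
   {
           kfree(my_buf);
           my_buf = NULL;
   }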

Cc: David Howells <dhowells@redhat.com>
Cc: Seiji Aguchi <seiji.aguchi@hds.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/powerpc/include/asm/trace.h                  |  4 ++--
 arch/powerpc/platforms/powernv/opal-tracepoints.c |  6 ++++--
 arch/powerpc/platforms/pseries/lpar.c             |  6 ++++--
 arch/x86/include/asm/trace/exceptions.h           |  2 +-
 arch/x86/include/asm/trace/irq_vectors.h          |  2 +-
 arch/x86/kernel/tracepoint.c                      |  3 ++-
 drivers/i2c/i2c-core.c                            |  3 ++-
 include/linux/tracepoint-defs.h                   |  2 +-
 include/linux/tracepoint.h                        |  2 +-
 include/trace/events/i2c.h                        |  2 +-
 kernel/trace/trace_benchmark.c                    |  3 ++-
 kernel/trace/trace_benchmark.h                    |  2 +-
 kernel/tracepoint.c                               | 12 +++++++++---
 samples/trace_events/trace-events-sample.c        |  3 ++-
 samples/trace_events/trace-events-sample.h        |  2 +-
 15 files changed, 34 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/trace.h b/arch/powerpc/include/asm/trace.h
index 32e36b16773f..c05cef6ee06c 100644
--- a/arch/powerpc/include/asm/trace.h
+++ b/arch/powerpc/include/asm/trace.h
@@ -54,7 +54,7 @@ DEFINE_EVENT(ppc64_interrupt_class, timer_interrupt_exit,
 );
 
 #ifdef CONFIG_PPC_PSERIES
-extern void hcall_tracepoint_regfunc(void);
+extern int hcall_tracepoint_regfunc(void);
 extern void hcall_tracepoint_unregfunc(void);
 
 TRACE_EVENT_FN_COND(hcall_entry,
@@ -104,7 +104,7 @@ TRACE_EVENT_FN_COND(hcall_exit,
 #endif
 
 #ifdef CONFIG_PPC_POWERNV
-extern void opal_tracepoint_regfunc(void);
+extern int opal_tracepoint_regfunc(void);
 extern void opal_tracepoint_unregfunc(void);
 
 TRACE_EVENT_FN(opal_entry,
diff --git a/arch/powerpc/platforms/powernv/opal-tracepoints.c b/arch/powerpc/platforms/powernv/opal-tracepoints.c
index 1e496b780efd..3c447002edff 100644
--- a/arch/powerpc/platforms/powernv/opal-tracepoints.c
+++ b/arch/powerpc/platforms/powernv/opal-tracepoints.c
@@ -6,9 +6,10 @@
 #ifdef HAVE_JUMP_LABEL
 struct static_key opal_tracepoint_key = STATIC_KEY_INIT;
 
-void opal_tracepoint_regfunc(void)
+int opal_tracepoint_regfunc(void)
 {
 	static_key_slow_inc(&opal_tracepoint_key);
+	return 0;
 }
 
 void opal_tracepoint_unregfunc(void)
@@ -25,9 +26,10 @@ void opal_tracepoint_unregfunc(void)
 /* NB: reg/unreg are called while guarded with the tracepoints_mutex */
 extern long opal_tracepoint_refcount;
 
-void opal_tracepoint_regfunc(void)
+int opal_tracepoint_regfunc(void)
 {
 	opal_tracepoint_refcount++;
+	return 0;
 }
 
 void opal_tracepoint_unregfunc(void)
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index aa35245d8d6d..c0423ce3955c 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -661,9 +661,10 @@ EXPORT_SYMBOL(arch_free_page);
 #ifdef HAVE_JUMP_LABEL
 struct static_key hcall_tracepoint_key = STATIC_KEY_INIT;
 
-void hcall_tracepoint_regfunc(void)
+int hcall_tracepoint_regfunc(void)
 {
 	static_key_slow_inc(&hcall_tracepoint_key);
+	return 0;
 }
 
 void hcall_tracepoint_unregfunc(void)
@@ -680,9 +681,10 @@ void hcall_tracepoint_unregfunc(void)
 /* NB: reg/unreg are called while guarded with the tracepoints_mutex */
 extern long hcall_tracepoint_refcount;
 
-void hcall_tracepoint_regfunc(void)
+int hcall_tracepoint_regfunc(void)
 {
 	hcall_tracepoint_refcount++;
+	return 0;
 }
 
 void hcall_tracepoint_unregfunc(void)
diff --git a/arch/x86/include/asm/trace/exceptions.h b/arch/x86/include/asm/trace/exceptions.h
index 2fbc66c7885b..2422b14c50a7 100644
--- a/arch/x86/include/asm/trace/exceptions.h
+++ b/arch/x86/include/asm/trace/exceptions.h
@@ -6,7 +6,7 @@
 
 #include <linux/tracepoint.h>
 
-extern void trace_irq_vector_regfunc(void);
+extern int trace_irq_vector_regfunc(void);
 extern void trace_irq_vector_unregfunc(void);
 
 DECLARE_EVENT_CLASS(x86_exceptions,
diff --git a/arch/x86/include/asm/trace/irq_vectors.h b/arch/x86/include/asm/trace/irq_vectors.h
index 38a09a13a9bc..32dd6a9e343c 100644
--- a/arch/x86/include/asm/trace/irq_vectors.h
+++ b/arch/x86/include/asm/trace/irq_vectors.h
@@ -6,7 +6,7 @@
 
 #include <linux/tracepoint.h>
 
-extern void trace_irq_vector_regfunc(void);
+extern int trace_irq_vector_regfunc(void);
 extern void trace_irq_vector_unregfunc(void);
 
 DECLARE_EVENT_CLASS(x86_irq_vector,
diff --git a/arch/x86/kernel/tracepoint.c b/arch/x86/kernel/tracepoint.c
index 1c113db9ed57..15515132bf0d 100644
--- a/arch/x86/kernel/tracepoint.c
+++ b/arch/x86/kernel/tracepoint.c
@@ -34,7 +34,7 @@ static void switch_idt(void *arg)
 	local_irq_restore(flags);
 }
 
-void trace_irq_vector_regfunc(void)
+int trace_irq_vector_regfunc(void)
 {
 	mutex_lock(&irq_vector_mutex);
 	if (!trace_irq_vector_refcount) {
@@ -44,6 +44,7 @@ void trace_irq_vector_regfunc(void)
 	}
 	trace_irq_vector_refcount++;
 	mutex_unlock(&irq_vector_mutex);
+	return 0;
 }
 
 void trace_irq_vector_unregfunc(void)
diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
index b432b64e307a..6a2b995d7fc4 100644
--- a/drivers/i2c/i2c-core.c
+++ b/drivers/i2c/i2c-core.c
@@ -77,9 +77,10 @@ static int i2c_detect(struct i2c_adapter *adapter, struct i2c_driver *driver);
 static struct static_key i2c_trace_msg = STATIC_KEY_INIT_FALSE;
 static bool is_registered;
 
-void i2c_transfer_trace_reg(void)
+int i2c_transfer_trace_reg(void)
 {
 	static_key_slow_inc(&i2c_trace_msg);
+	return 0;
 }
 
 void i2c_transfer_trace_unreg(void)
diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
index 4ac89acb6136..a03192052066 100644
--- a/include/linux/tracepoint-defs.h
+++ b/include/linux/tracepoint-defs.h
@@ -29,7 +29,7 @@ struct tracepoint_func {
 struct tracepoint {
 	const char *name;		/* Tracepoint name */
 	struct static_key key;
-	void (*regfunc)(void);
+	int (*regfunc)(void);
 	void (*unregfunc)(void);
 	struct tracepoint_func __rcu *funcs;
 };
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index be586c632a0c..f72fcfe0e66a 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -81,7 +81,7 @@ static inline void tracepoint_synchronize_unregister(void)
 }
 
 #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
-extern void syscall_regfunc(void);
+extern int syscall_regfunc(void);
 extern void syscall_unregfunc(void);
 #endif /* CONFIG_HAVE_SYSCALL_TRACEPOINTS */
 
diff --git a/include/trace/events/i2c.h b/include/trace/events/i2c.h
index fe17187df65d..4abb8eab34d3 100644
--- a/include/trace/events/i2c.h
+++ b/include/trace/events/i2c.h
@@ -20,7 +20,7 @@
 /*
  * drivers/i2c/i2c-core.c
  */
-extern void i2c_transfer_trace_reg(void);
+extern int i2c_transfer_trace_reg(void);
 extern void i2c_transfer_trace_unreg(void);
 
 /*
diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index 0f109c4130d3..f76d0416dd83 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -164,11 +164,12 @@ static int benchmark_event_kthread(void *arg)
  * When the benchmark tracepoint is enabled, it calls this
  * function and the thread that calls the tracepoint is created.
  */
-void trace_benchmark_reg(void)
+int trace_benchmark_reg(void)
 {
 	bm_event_thread = kthread_run(benchmark_event_kthread,
 				      NULL, "event_benchmark");
 	WARN_ON(!bm_event_thread);
+	return 0;
 }
 
 /*
diff --git a/kernel/trace/trace_benchmark.h b/kernel/trace/trace_benchmark.h
index 3c1df1df4e29..ebdbfc2f2a64 100644
--- a/kernel/trace/trace_benchmark.h
+++ b/kernel/trace/trace_benchmark.h
@@ -6,7 +6,7 @@
 
 #include <linux/tracepoint.h>
 
-extern void trace_benchmark_reg(void);
+extern int trace_benchmark_reg(void);
 extern void trace_benchmark_unreg(void);
 
 #define BENCHMARK_EVENT_STRLEN		128
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index d0639d917899..1f9a31f934a4 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -194,9 +194,13 @@ static int tracepoint_add_func(struct tracepoint *tp,
 			       struct tracepoint_func *func, int prio)
 {
 	struct tracepoint_func *old, *tp_funcs;
+	int ret;
 
-	if (tp->regfunc && !static_key_enabled(&tp->key))
-		tp->regfunc();
+	if (tp->regfunc && !static_key_enabled(&tp->key)) {
+		ret = tp->regfunc();
+		if (ret < 0)
+			return ret;
+	}
 
 	tp_funcs = rcu_dereference_protected(tp->funcs,
 			lockdep_is_held(&tracepoints_mutex));
@@ -529,7 +533,7 @@ EXPORT_SYMBOL_GPL(for_each_kernel_tracepoint);
 /* NB: reg/unreg are called while guarded with the tracepoints_mutex */
 static int sys_tracepoint_refcount;
 
-void syscall_regfunc(void)
+int syscall_regfunc(void)
 {
 	struct task_struct *p, *t;
 
@@ -541,6 +545,8 @@ void syscall_regfunc(void)
 		read_unlock(&tasklist_lock);
 	}
 	sys_tracepoint_refcount++;
+
+	return 0;
 }
 
 void syscall_unregfunc(void)
diff --git a/samples/trace_events/trace-events-sample.c b/samples/trace_events/trace-events-sample.c
index 880a7d1d27d2..30e282d33d4d 100644
--- a/samples/trace_events/trace-events-sample.c
+++ b/samples/trace_events/trace-events-sample.c
@@ -79,7 +79,7 @@ static int simple_thread_fn(void *arg)
 
 static DEFINE_MUTEX(thread_mutex);
 
-void foo_bar_reg(void)
+int foo_bar_reg(void)
 {
 	pr_info("Starting thread for foo_bar_fn\n");
 	/*
@@ -90,6 +90,7 @@ void foo_bar_reg(void)
 	mutex_lock(&thread_mutex);
 	simple_tsk_fn = kthread_run(simple_thread_fn, NULL, "event-sample-fn");
 	mutex_unlock(&thread_mutex);
+	return 0;
 }
 
 void foo_bar_unreg(void)
diff --git a/samples/trace_events/trace-events-sample.h b/samples/trace_events/trace-events-sample.h
index d6b75bb495b3..76a75ab7a608 100644
--- a/samples/trace_events/trace-events-sample.h
+++ b/samples/trace_events/trace-events-sample.h
@@ -354,7 +354,7 @@ TRACE_EVENT_CONDITION(foo_bar_with_cond,
 	TP_printk("foo %s %d", __get_str(foo), __entry->bar)
 );
 
-void foo_bar_reg(void);
+int foo_bar_reg(void);
 void foo_bar_unreg(void);
 
 /*
-- 
2.10.2


* [for-next][PATCH 2/8] tracing: Do not start benchmark on boot up
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
  2016-12-09 14:26 ` [for-next][PATCH 1/8] tracing: Have the reg function allow to fail Steven Rostedt
@ 2016-12-09 14:26 ` Steven Rostedt
  2016-12-09 14:26 ` [for-next][PATCH 3/8] tracing: Have system enable return error if one of the events fail Steven Rostedt
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:26 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0002-tracing-Do-not-start-benchmark-on-boot-up.patch --]
[-- Type: text/plain, Size: 1631 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Trace events can be enabled very early on boot up via the trace_event=
boot command line parameter. The benchmark event creates a new thread to
perform the trace event benchmarking. But at that point in boot,
scheduling is not yet set up, and creating a new thread before the init
thread exists crashes the kernel.

Have the benchmark fail to register when started via the kernel command
line.

Also, since the registering of a tracepoint can now handle failure cases,
return -ENOMEM instead of just warning if the thread cannot be created.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_benchmark.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index f76d0416dd83..2bc7dc3e8ff8 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -166,9 +166,18 @@ static int benchmark_event_kthread(void *arg)
  */
 int trace_benchmark_reg(void)
 {
+	if (system_state != SYSTEM_RUNNING) {
+		pr_warning("trace benchmark cannot be started via kernel command line\n");
+		return -EBUSY;
+	}
+
 	bm_event_thread = kthread_run(benchmark_event_kthread,
 				      NULL, "event_benchmark");
-	WARN_ON(!bm_event_thread);
+	if (!bm_event_thread) {
+		pr_warning("trace benchmark failed to create kernel thread\n");
+		return -ENOMEM;
+	}
+
 	return 0;
 }
 
@@ -183,6 +192,7 @@ void trace_benchmark_unreg(void)
 		return;
 
 	kthread_stop(bm_event_thread);
+	bm_event_thread = NULL;
 
 	strcpy(bm_str, "START");
 	bm_total = 0;
-- 
2.10.2


* [for-next][PATCH 3/8] tracing: Have system enable return error if one of the events fail
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
  2016-12-09 14:26 ` [for-next][PATCH 1/8] tracing: Have the reg function allow to fail Steven Rostedt
  2016-12-09 14:26 ` [for-next][PATCH 2/8] tracing: Do not start benchmark on boot up Steven Rostedt
@ 2016-12-09 14:26 ` Steven Rostedt
  2016-12-09 14:27 ` [for-next][PATCH 4/8] tracing: Allow benchmark to be enabled at early_initcall() Steven Rostedt
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:26 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0003-tracing-Have-system-enable-return-error-if-one-of-th.patch --]
[-- Type: text/plain, Size: 1352 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

If one of the events within a system fails to enable when "1" is written
to the system "enable" file, it should return an error. Note, some events
may still be enabled, but the user should know that something did go wrong.
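
Condensed, the pattern in the diff below is (this sketch leaves out the
-EINVAL "no event matched" handling that the real code keeps):

   int err, eret = 0;

   list_for_each_entry(file, &tr->events, list) {
           /* ... skip files whose system/event name does not match ... */
           err = ftrace_event_enable_disable(file, set);
           /* keep going, but remember the first error */
           if (err && !eret)
                   eret = err;
   }
   return eret;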

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_events.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index d35fc2b0d304..93116549a284 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -702,6 +702,7 @@ __ftrace_set_clr_event_nolock(struct trace_array *tr, const char *match,
 	struct trace_event_call *call;
 	const char *name;
 	int ret = -EINVAL;
+	int eret = 0;
 
 	list_for_each_entry(file, &tr->events, list) {
 
@@ -725,9 +726,17 @@ __ftrace_set_clr_event_nolock(struct trace_array *tr, const char *match,
 		if (event && strcmp(event, name) != 0)
 			continue;
 
-		ftrace_event_enable_disable(file, set);
+		ret = ftrace_event_enable_disable(file, set);
 
-		ret = 0;
+		/*
+		 * Save the first error and return that. Some events
+		 * may still have been enabled, but let the user
+		 * know that something went wrong.
+		 */
+		if (ret && !eret)
+			eret = ret;
+
+		ret = eret;
 	}
 
 	return ret;
-- 
2.10.2


* [for-next][PATCH 4/8] tracing: Allow benchmark to be enabled at early_initcall()
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
                   ` (2 preceding siblings ...)
  2016-12-09 14:26 ` [for-next][PATCH 3/8] tracing: Have system enable return error if one of the events fail Steven Rostedt
@ 2016-12-09 14:27 ` Steven Rostedt
  2016-12-09 14:27 ` [for-next][PATCH 5/8] ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it Steven Rostedt
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

[-- Attachment #1: 0004-tracing-Allow-benchmark-to-be-enabled-at-early_initc.patch --]
[-- Type: text/plain, Size: 1378 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

The trace event startup selftests fail when the trace benchmark is
enabled, because the benchmark is now disabled during boot. It really only
needs to be disabled before scheduling is set up, as that is when it
creates its thread.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_benchmark.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index 2bc7dc3e8ff8..e3b488825ae3 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -21,6 +21,8 @@ static u64 bm_stddev;
 static unsigned int bm_avg;
 static unsigned int bm_std;
 
+static bool ok_to_run;
+
 /*
  * This gets called in a loop recording the time it took to write
  * the tracepoint. What it writes is the time statistics of the last
@@ -166,7 +168,7 @@ static int benchmark_event_kthread(void *arg)
  */
 int trace_benchmark_reg(void)
 {
-	if (system_state != SYSTEM_RUNNING) {
+	if (!ok_to_run) {
 		pr_warning("trace benchmark cannot be started via kernel command line\n");
 		return -EBUSY;
 	}
@@ -207,3 +209,12 @@ void trace_benchmark_unreg(void)
 	bm_avg = 0;
 	bm_stddev = 0;
 }
+
+static __init int ok_to_run_trace_benchmark(void)
+{
+	ok_to_run = true;
+
+	return 0;
+}
+
+early_initcall(ok_to_run_trace_benchmark);
-- 
2.10.2


* [for-next][PATCH 5/8] ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
                   ` (3 preceding siblings ...)
  2016-12-09 14:27 ` [for-next][PATCH 4/8] tracing: Allow benchmark to be enabled at early_initcall() Steven Rostedt
@ 2016-12-09 14:27 ` Steven Rostedt
  2016-12-09 14:27 ` [for-next][PATCH 6/8] tracing: Replace kmap with copy_from_user() in trace_marker writing Steven Rostedt
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, stable, Colin Ian King

[-- Attachment #1: 0005-ftrace-x86_32-Set-ftrace_stub-to-weak-to-prevent-gcc.patch --]
[-- Type: text/plain, Size: 1377 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

With new binutils, gcc's assembler (gas) may get smart with its
optimization and relax a jmp from a 5-byte jump to a 2-byte one, even
though it is jumping to a global function. But that global function
happened to be within short-jump range, so gas was able to relax it.
Unfortunately, that jump is also modified when function graph tracing
begins. Since ftrace expected that jump to be 5 bytes, but it was only
two, it overwrote code after the jump, causing a crash.

This was fixed for x86_64 by commit 8329e818f149, with the same subject as
this commit, but nothing was done for x86_32.
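
For reference, the two encodings involved (x86 details, not part of the
patch itself):

   e9 xx xx xx xx     jmp rel32   (5-byte near jump, what ftrace expects to patch)
   eb xx              jmp rel8    (2-byte short jump, what gas relaxed it to)

Patching 5 bytes on top of the 2-byte form clobbers the 3 bytes that
follow it. Making ftrace_stub weak means gas can no longer assume it knows
the final target, so it keeps the 5-byte form.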

Cc: stable@vger.kernel.org
Fixes: d61f82d06672 ("ftrace: use dynamic patching for updating mcount calls")
Reported-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/entry/entry_32.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 21b352a11b49..edba8606b99a 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -889,8 +889,8 @@ ftrace_graph_call:
 	jmp	ftrace_stub
 #endif
 
-.globl ftrace_stub
-ftrace_stub:
+/* This is weak to keep gas from relaxing the jumps */
+WEAK(ftrace_stub)
 	ret
 END(ftrace_caller)
 
-- 
2.10.2


* [for-next][PATCH 6/8] tracing: Replace kmap with copy_from_user() in trace_marker writing
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
                   ` (4 preceding siblings ...)
  2016-12-09 14:27 ` [for-next][PATCH 5/8] ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it Steven Rostedt
@ 2016-12-09 14:27 ` Steven Rostedt
  2016-12-09 14:27 ` [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace Steven Rostedt
  2016-12-09 14:27 ` [for-next][PATCH 8/8] tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too Steven Rostedt
  7 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Henrik Austad, Peter Zijlstra,
	Thomas Gleixner

[-- Attachment #1: 0006-tracing-Replace-kmap-with-copy_from_user-in-trace_ma.patch --]
[-- Type: text/plain, Size: 7333 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Instead of using get_user_pages_fast() and kmap_atomic() when writing
to the trace_marker file, just allocate enough space on the ring buffer
directly, and write into it via copy_from_user().

Writing into the trace_marker file used to allocate a temporary buffer
to perform the copy_from_user(), as we did not want to write into the
ring buffer if the copy failed. But as a trace_marker write is supposed
to be extremely fast, and allocating memory causes other tracepoints to
trigger, Peter Zijlstra suggested using get_user_pages_fast() and
kmap_atomic() to keep the user space pages in memory and read them
directly. But Henrik Austad had issues with this because it required taking
the mm->mmap_sem and caused long delays with the write.

Instead, just allocate the space in the ring buffer and use
copy_from_user() directly. If it faults, return -EFAULT and write
"<faulted>" into the ring buffer.

Link: http://lkml.kernel.org/r/20161208124018.72dd0f86@gandalf.local.home

Cc: Ingo Molnar <mingo@kernel.org>
Cc: Henrik Austad <henrik@austad.us>
Cc: Peter Zijlstra <peterz@infradead.org>
Updates: d696b58ca2c3ca "tracing: Do not allocate buffer for trace_marker"
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 139 ++++++++++++++-------------------------------------
 1 file changed, 37 insertions(+), 102 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 60416bf7c591..6f420d7b703b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5738,61 +5738,6 @@ tracing_free_buffer_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static inline int lock_user_pages(const char __user *ubuf, size_t cnt,
-				  struct page **pages, void **map_page,
-				  int *offset)
-{
-	unsigned long addr = (unsigned long)ubuf;
-	int nr_pages = 1;
-	int ret;
-	int i;
-
-	/*
-	 * Userspace is injecting traces into the kernel trace buffer.
-	 * We want to be as non intrusive as possible.
-	 * To do so, we do not want to allocate any special buffers
-	 * or take any locks, but instead write the userspace data
-	 * straight into the ring buffer.
-	 *
-	 * First we need to pin the userspace buffer into memory,
-	 * which, most likely it is, because it just referenced it.
-	 * But there's no guarantee that it is. By using get_user_pages_fast()
-	 * and kmap_atomic/kunmap_atomic() we can get access to the
-	 * pages directly. We then write the data directly into the
-	 * ring buffer.
-	 */
-
-	/* check if we cross pages */
-	if ((addr & PAGE_MASK) != ((addr + cnt) & PAGE_MASK))
-		nr_pages = 2;
-
-	*offset = addr & (PAGE_SIZE - 1);
-	addr &= PAGE_MASK;
-
-	ret = get_user_pages_fast(addr, nr_pages, 0, pages);
-	if (ret < nr_pages) {
-		while (--ret >= 0)
-			put_page(pages[ret]);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < nr_pages; i++)
-		map_page[i] = kmap_atomic(pages[i]);
-
-	return nr_pages;
-}
-
-static inline void unlock_user_pages(struct page **pages,
-				     void **map_page, int nr_pages)
-{
-	int i;
-
-	for (i = nr_pages - 1; i >= 0; i--) {
-		kunmap_atomic(map_page[i]);
-		put_page(pages[i]);
-	}
-}
-
 static ssize_t
 tracing_mark_write(struct file *filp, const char __user *ubuf,
 					size_t cnt, loff_t *fpos)
@@ -5802,14 +5747,14 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 	struct ring_buffer *buffer;
 	struct print_entry *entry;
 	unsigned long irq_flags;
-	struct page *pages[2];
-	void *map_page[2];
-	int nr_pages = 1;
+	const char faulted[] = "<faulted>";
 	ssize_t written;
-	int offset;
 	int size;
 	int len;
 
+/* Used in tracing_mark_raw_write() as well */
+#define FAULTED_SIZE (sizeof(faulted) - 1) /* '\0' is already accounted for */
+
 	if (tracing_disabled)
 		return -EINVAL;
 
@@ -5821,30 +5766,31 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 
 	BUILD_BUG_ON(TRACE_BUF_SIZE >= PAGE_SIZE);
 
-	nr_pages = lock_user_pages(ubuf, cnt, pages, map_page, &offset);
-	if (nr_pages < 0)
-		return nr_pages;
-
 	local_save_flags(irq_flags);
-	size = sizeof(*entry) + cnt + 2; /* possible \n added */
+	size = sizeof(*entry) + cnt + 2; /* add '\0' and possible '\n' */
+
+	/* If less than "<faulted>", then make sure we can still add that */
+	if (cnt < FAULTED_SIZE)
+		size += FAULTED_SIZE - cnt;
+
 	buffer = tr->trace_buffer.buffer;
 	event = __trace_buffer_lock_reserve(buffer, TRACE_PRINT, size,
 					    irq_flags, preempt_count());
-	if (!event) {
+	if (unlikely(!event))
 		/* Ring buffer disabled, return as if not open for write */
-		written = -EBADF;
-		goto out_unlock;
-	}
+		return -EBADF;
 
 	entry = ring_buffer_event_data(event);
 	entry->ip = _THIS_IP_;
 
-	if (nr_pages == 2) {
-		len = PAGE_SIZE - offset;
-		memcpy(&entry->buf, map_page[0] + offset, len);
-		memcpy(&entry->buf[len], map_page[1], cnt - len);
+	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
+	if (len) {
+		memcpy(&entry->buf, faulted, FAULTED_SIZE);
+		cnt = FAULTED_SIZE;
+		written = -EFAULT;
 	} else
-		memcpy(&entry->buf, map_page[0] + offset, cnt);
+		written = cnt;
+	len = cnt;
 
 	if (entry->buf[cnt - 1] != '\n') {
 		entry->buf[cnt] = '\n';
@@ -5854,12 +5800,8 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 
 	__buffer_unlock_commit(buffer, event);
 
-	written = cnt;
-
-	*fpos += written;
-
- out_unlock:
-	unlock_user_pages(pages, map_page, nr_pages);
+	if (written > 0)
+		*fpos += written;
 
 	return written;
 }
@@ -5875,15 +5817,14 @@ tracing_mark_raw_write(struct file *filp, const char __user *ubuf,
 	struct ring_buffer_event *event;
 	struct ring_buffer *buffer;
 	struct raw_data_entry *entry;
+	const char faulted[] = "<faulted>";
 	unsigned long irq_flags;
-	struct page *pages[2];
-	void *map_page[2];
-	int nr_pages = 1;
 	ssize_t written;
-	int offset;
 	int size;
 	int len;
 
+#define FAULT_SIZE_ID (FAULTED_SIZE + sizeof(int))
+
 	if (tracing_disabled)
 		return -EINVAL;
 
@@ -5899,38 +5840,32 @@ tracing_mark_raw_write(struct file *filp, const char __user *ubuf,
 
 	BUILD_BUG_ON(TRACE_BUF_SIZE >= PAGE_SIZE);
 
-	nr_pages = lock_user_pages(ubuf, cnt, pages, map_page, &offset);
-	if (nr_pages < 0)
-		return nr_pages;
-
 	local_save_flags(irq_flags);
 	size = sizeof(*entry) + cnt;
+	if (cnt < FAULT_SIZE_ID)
+		size += FAULT_SIZE_ID - cnt;
+
 	buffer = tr->trace_buffer.buffer;
 	event = __trace_buffer_lock_reserve(buffer, TRACE_RAW_DATA, size,
 					    irq_flags, preempt_count());
-	if (!event) {
+	if (!event)
 		/* Ring buffer disabled, return as if not open for write */
-		written = -EBADF;
-		goto out_unlock;
-	}
+		return -EBADF;
 
 	entry = ring_buffer_event_data(event);
 
-	if (nr_pages == 2) {
-		len = PAGE_SIZE - offset;
-		memcpy(&entry->id, map_page[0] + offset, len);
-		memcpy(((char *)&entry->id) + len, map_page[1], cnt - len);
+	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
+	if (len) {
+		entry->id = -1;
+		memcpy(&entry->buf, faulted, FAULTED_SIZE);
+		written = -EFAULT;
 	} else
-		memcpy(&entry->id, map_page[0] + offset, cnt);
+		written = cnt;
 
 	__buffer_unlock_commit(buffer, event);
 
-	written = cnt;
-
-	*fpos += written;
-
- out_unlock:
-	unlock_user_pages(pages, map_page, nr_pages);
+	if (written > 0)
+		*fpos += written;
 
 	return written;
 }
-- 
2.10.2


* [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
                   ` (5 preceding siblings ...)
  2016-12-09 14:27 ` [for-next][PATCH 6/8] tracing: Replace kmap with copy_from_user() in trace_marker writing Steven Rostedt
@ 2016-12-09 14:27 ` Steven Rostedt
       [not found]   ` <CADWwUUbhD0ZQbg6zN-A7A+f+jToadTx63UMQESoM04B75S+hvg@mail.gmail.com>
  2016-12-09 14:27 ` [for-next][PATCH 8/8] tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too Steven Rostedt
  7 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, stable, Namhyung Kim

[-- Attachment #1: 0007-fgraph-Handle-a-case-where-a-tracer-ignores-set_grap.patch --]
[-- Type: text/plain, Size: 3709 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Both the wakeup and irqsoff tracers can use the function graph tracer when
the display-graph option is set. The problem is that they ignore the notrace
file, and record the entry of functions that would be ignored by the
function_graph tracer. This causes the trace->depth to be recorded into the
ring buffer. The set_graph_notrace uses a trick by adding a large negative
number to the trace->depth when a graph function is to be ignored.

On trace output, the graph function uses the depth to record a stack of
functions. But since the depth is negative, it accesses the array with a
negative number and causes an out of bounds access that can cause a kernel
oops or corrupt data.

Have the print functions handle cases where a tracer still records functions
even when they are in set_graph_notrace.

Also add warnings if the depth is below zero before accessing the array.

Note, the function graph logic will still prevent the return of these
functions from being recorded, which means that they will be left hanging
without a return. For example:

   # echo '*spin*' > set_graph_notrace
   # echo 1 > options/display-graph
   # echo wakeup > current_tracer
   # cat trace
   [...]
      _raw_spin_lock() {
        preempt_count_add() {
        do_raw_spin_lock() {
      update_rq_clock();

Where it should look like:

      _raw_spin_lock() {
        preempt_count_add();
        do_raw_spin_lock();
      }
      update_rq_clock();
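
In other words (a rough sketch of the encoding; the entry-side line lives
in the fgraph core, the print-side lines are what this patch adds):

   /* entry side (roughly): mark a function ignored by set_graph_notrace */
   current->curr_ret_stack -= FTRACE_NOTRACE_DEPTH; /* depth goes very negative */

   /* print side: recover the real depth, and never index with a negative one */
   if (call->depth < -1)
           call->depth += FTRACE_NOTRACE_DEPTH;
   if (call->depth < FTRACE_RETFUNC_DEPTH && !WARN_ON_ONCE(call->depth < 0))
           cpu_data->enter_funcs[call->depth] = call->func;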

Cc: stable@vger.kernel.org
Cc: Namhyung Kim <namhyung.kim@lge.com>
Fixes: 29ad23b00474 ("ftrace: Add set_graph_notrace filter")
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 8e1a115439fa..566f7327c3aa 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -842,6 +842,10 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 
 		cpu_data = per_cpu_ptr(data->cpu_data, cpu);
 
+		/* If a graph tracer ignored set_graph_notrace */
+		if (call->depth < -1)
+			call->depth += FTRACE_NOTRACE_DEPTH;
+
 		/*
 		 * Comments display at + 1 to depth. Since
 		 * this is a leaf function, keep the comments
@@ -850,7 +854,8 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 		cpu_data->depth = call->depth - 1;
 
 		/* No need to keep this function around for this depth */
-		if (call->depth < FTRACE_RETFUNC_DEPTH)
+		if (call->depth < FTRACE_RETFUNC_DEPTH &&
+		    !WARN_ON_ONCE(call->depth < 0))
 			cpu_data->enter_funcs[call->depth] = 0;
 	}
 
@@ -880,11 +885,16 @@ print_graph_entry_nested(struct trace_iterator *iter,
 		struct fgraph_cpu_data *cpu_data;
 		int cpu = iter->cpu;
 
+		/* If a graph tracer ignored set_graph_notrace */
+		if (call->depth < -1)
+			call->depth += FTRACE_NOTRACE_DEPTH;
+
 		cpu_data = per_cpu_ptr(data->cpu_data, cpu);
 		cpu_data->depth = call->depth;
 
 		/* Save this function pointer to see if the exit matches */
-		if (call->depth < FTRACE_RETFUNC_DEPTH)
+		if (call->depth < FTRACE_RETFUNC_DEPTH &&
+		    !WARN_ON_ONCE(call->depth < 0))
 			cpu_data->enter_funcs[call->depth] = call->func;
 	}
 
@@ -1114,7 +1124,8 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
 		 */
 		cpu_data->depth = trace->depth - 1;
 
-		if (trace->depth < FTRACE_RETFUNC_DEPTH) {
+		if (trace->depth < FTRACE_RETFUNC_DEPTH &&
+		    !WARN_ON_ONCE(trace->depth < 0)) {
 			if (cpu_data->enter_funcs[trace->depth] != trace->func)
 				func_match = 0;
 			cpu_data->enter_funcs[trace->depth] = 0;
-- 
2.10.2


* [for-next][PATCH 8/8] tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too
  2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
                   ` (6 preceding siblings ...)
  2016-12-09 14:27 ` [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace Steven Rostedt
@ 2016-12-09 14:27 ` Steven Rostedt
  2016-12-12 16:40   ` Namhyung Kim
  7 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2016-12-09 14:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Namhyung Kim

[-- Attachment #1: 0008-tracing-fgraph-Have-wakeup-and-irqsoff-tracers-ignor.patch --]
[-- Type: text/plain, Size: 5060 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Currently both the wakeup and irqsoff tracers do not handle set_graph_notrace
well. The ftrace infrastructure will ignore the return paths of all
functions, leaving them hanging without an end:

  # echo '*spin*' > set_graph_notrace
  # cat trace
  [...]
          _raw_spin_lock() {
            preempt_count_add() {
            do_raw_spin_lock() {
          update_rq_clock();

Where the '*spin*' functions should have looked like this:

          _raw_spin_lock() {
            preempt_count_add();
            do_raw_spin_lock();
          }
          update_rq_clock();

Instead, have the wakeup and irqsoff tracers ignore the functions that are
set by the set_graph_notrace like the function_graph tracer does. Move
the logic in the function_graph tracer into a header to allow wakeup and
irqsoff tracers to use it as well.

Cc: Namhyung Kim <namhyung.kim@lge.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.h                 | 11 +++++++++++
 kernel/trace/trace_functions_graph.c | 14 +++++++-------
 kernel/trace/trace_irqsoff.c         | 12 ++++++++++++
 kernel/trace/trace_sched_wakeup.c    | 12 ++++++++++++
 4 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 37602e722336..c2234494f40c 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -846,6 +846,17 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 	return 0;
 }
 #endif /* CONFIG_DYNAMIC_FTRACE */
+
+extern unsigned int fgraph_max_depth;
+
+static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
+{
+	/* trace it when it is-nested-in or is a function enabled. */
+	return !(trace->depth || ftrace_graph_addr(trace->func)) ||
+		(trace->depth < 0) ||
+		(fgraph_max_depth && trace->depth >= fgraph_max_depth);
+}
+
 #else /* CONFIG_FUNCTION_GRAPH_TRACER */
 static inline enum print_line_t
 print_graph_function_flags(struct trace_iterator *iter, u32 flags)
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 566f7327c3aa..d56123cdcc89 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -65,7 +65,7 @@ struct fgraph_data {
 
 #define TRACE_GRAPH_INDENT	2
 
-static unsigned int max_depth;
+unsigned int fgraph_max_depth;
 
 static struct tracer_opt trace_opts[] = {
 	/* Display overruns? (for self-debug purpose) */
@@ -384,10 +384,10 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
 	if (!ftrace_trace_task(tr))
 		return 0;
 
-	/* trace it when it is-nested-in or is a function enabled. */
-	if ((!(trace->depth || ftrace_graph_addr(trace->func)) ||
-	     ftrace_graph_ignore_irqs()) || (trace->depth < 0) ||
-	    (max_depth && trace->depth >= max_depth))
+	if (ftrace_graph_ignore_func(trace))
+		return 0;
+
+	if (ftrace_graph_ignore_irqs())
 		return 0;
 
 	/*
@@ -1500,7 +1500,7 @@ graph_depth_write(struct file *filp, const char __user *ubuf, size_t cnt,
 	if (ret)
 		return ret;
 
-	max_depth = val;
+	fgraph_max_depth = val;
 
 	*ppos += cnt;
 
@@ -1514,7 +1514,7 @@ graph_depth_read(struct file *filp, char __user *ubuf, size_t cnt,
 	char buf[15]; /* More than enough to hold UINT_MAX + "\n"*/
 	int n;
 
-	n = sprintf(buf, "%d\n", max_depth);
+	n = sprintf(buf, "%d\n", fgraph_max_depth);
 
 	return simple_read_from_buffer(ubuf, cnt, ppos, buf, n);
 }
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 03cdff84d026..86654d7e1afe 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -175,6 +175,18 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
 	int ret;
 	int pc;
 
+	if (ftrace_graph_ignore_func(trace))
+		return 0;
+	/*
+	 * Do not trace a function if it's filtered by set_graph_notrace.
+	 * Make the index of ret stack negative to indicate that it should
+	 * ignore further functions.  But it needs its own ret stack entry
+	 * to recover the original index in order to continue tracing after
+	 * returning from the function.
+	 */
+	if (ftrace_graph_notrace_addr(trace->func))
+		return 1;
+
 	if (!func_prolog_dec(tr, &data, &flags))
 		return 0;
 
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 1bf2324dc682..5d0bb025bb21 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -239,6 +239,18 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
 	unsigned long flags;
 	int pc, ret = 0;
 
+	if (ftrace_graph_ignore_func(trace))
+		return 0;
+	/*
+	 * Do not trace a function if it's filtered by set_graph_notrace.
+	 * Make the index of ret stack negative to indicate that it should
+	 * ignore further functions.  But it needs its own ret stack entry
+	 * to recover the original index in order to continue tracing after
+	 * returning from the function.
+	 */
+	if (ftrace_graph_notrace_addr(trace->func))
+		return 1;
+
 	if (!func_prolog_preempt_disable(tr, &data, &pc))
 		return 0;
 
-- 
2.10.2


* Fwd: [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace
       [not found]     ` <CADWwUUarxm+ACf1WqDo1B+RFCDMtGXzFYK45kbTbRm6s+QGyUA@mail.gmail.com>
@ 2016-12-12 16:30       ` Namhyung Kim
  2016-12-12 16:49         ` Steven Rostedt
  0 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2016-12-12 16:30 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Andrew Morton, stable, Namhyung Kim

Hi Steve,

On Fri, Dec 9, 2016 at 11:27 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
>
> Both the wakeup and irqsoff tracers can use the function graph tracer when
> the display-graph option is set. The problem is that they ignore the notrace
> file, and record the entry of functions that would be ignored by the
> function_graph tracer. This causes the trace->depth to be recorded into the
> ring buffer. The set_graph_notrace uses a trick by adding a large negative
> number to the trace->depth when a graph function is to be ignored.
>
> On trace output, the graph function uses the depth to record a stack of
> functions. But since the depth is negative, it accesses the array with a
> negative number and causes an out of bounds access that can cause a kernel
> oops or corrupt data.

Sorry to miss updating those tracers.  I guess it's no more necessary once
the patch 8 is applied so that functions in the notrace filter will not be
recorded.

Or maybe we need to change the prepare_ftrace_return() so that the
graph_entry callback should be called after ftrace_push_return_trace() as
some archs do.

>
> Have the print functions handle cases where a tracer still records functions
> even when they are in set_graph_notrace.

I think it'd be better (or consistent, at least) not printing negative index
records rather than showing entry only.

>
> Also add warnings if the depth is below zero before accessing the array.
>
> Note, the function graph logic will still prevent the return of these
> functions from being recorded, which means that they will be left hanging
> without a return. For example:
>
>    # echo '*spin*' > set_graph_notrace
>    # echo 1 > options/display-graph
>    # echo wakeup > current_tracer
>    # cat trace
>    [...]
>       _raw_spin_lock() {
>         preempt_count_add() {
>         do_raw_spin_lock() {
>       update_rq_clock();
>
> Where it should look like:
>
>       _raw_spin_lock() {
>         preempt_count_add();
>         do_raw_spin_lock();
>       }
>       update_rq_clock();

If set_graph_notrace works correctly, it should be just:

         update_rq_clock();

Thanks,
Namhyung


>
> Cc: stable@vger.kernel.org
> Cc: Namhyung Kim <namhyung.kim@lge.com>
> Fixes: 29ad23b00474 ("ftrace: Add set_graph_notrace filter")
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/trace/trace_functions_graph.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
> index 8e1a115439fa..566f7327c3aa 100644
> --- a/kernel/trace/trace_functions_graph.c
> +++ b/kernel/trace/trace_functions_graph.c
> @@ -842,6 +842,10 @@ print_graph_entry_leaf(struct trace_iterator *iter,
>
>                 cpu_data = per_cpu_ptr(data->cpu_data, cpu);
>
> +               /* If a graph tracer ignored set_graph_notrace */
> +               if (call->depth < -1)
> +                       call->depth += FTRACE_NOTRACE_DEPTH;
> +
>                 /*
>                  * Comments display at + 1 to depth. Since
>                  * this is a leaf function, keep the comments
> @@ -850,7 +854,8 @@ print_graph_entry_leaf(struct trace_iterator *iter,
>                 cpu_data->depth = call->depth - 1;
>
>                 /* No need to keep this function around for this depth */
> -               if (call->depth < FTRACE_RETFUNC_DEPTH)
> +               if (call->depth < FTRACE_RETFUNC_DEPTH &&
> +                   !WARN_ON_ONCE(call->depth < 0))
>                         cpu_data->enter_funcs[call->depth] = 0;
>         }
>
> @@ -880,11 +885,16 @@ print_graph_entry_nested(struct trace_iterator *iter,
>                 struct fgraph_cpu_data *cpu_data;
>                 int cpu = iter->cpu;
>
> +               /* If a graph tracer ignored set_graph_notrace */
> +               if (call->depth < -1)
> +                       call->depth += FTRACE_NOTRACE_DEPTH;
> +
>                 cpu_data = per_cpu_ptr(data->cpu_data, cpu);
>                 cpu_data->depth = call->depth;
>
>                 /* Save this function pointer to see if the exit matches */
> -               if (call->depth < FTRACE_RETFUNC_DEPTH)
> +               if (call->depth < FTRACE_RETFUNC_DEPTH &&
> +                   !WARN_ON_ONCE(call->depth < 0))
>                         cpu_data->enter_funcs[call->depth] = call->func;
>         }
>
> @@ -1114,7 +1124,8 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
>                  */
>                 cpu_data->depth = trace->depth - 1;
>
> -               if (trace->depth < FTRACE_RETFUNC_DEPTH) {
> +               if (trace->depth < FTRACE_RETFUNC_DEPTH &&
> +                   !WARN_ON_ONCE(trace->depth < 0)) {
>                         if (cpu_data->enter_funcs[trace->depth] != trace->func)
>                                 func_match = 0;
>                         cpu_data->enter_funcs[trace->depth] = 0;
> --
> 2.10.2
>
>


* Re: [for-next][PATCH 8/8] tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too
  2016-12-09 14:27 ` [for-next][PATCH 8/8] tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too Steven Rostedt
@ 2016-12-12 16:40   ` Namhyung Kim
  0 siblings, 0 replies; 14+ messages in thread
From: Namhyung Kim @ 2016-12-12 16:40 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Andrew Morton, stable, Namhyung Kim

On Tue, Dec 13, 2016 at 01:33:42AM +0900, Namhyung Kim wrote:
> From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
> 
> Currently both the wakeup and irqsoff tracers do not handle set_graph_notrace
> well. The ftrace infrastructure will ignore the return paths of all
> functions, leaving them hanging without an end:
> 
>   # echo '*spin*' > set_graph_notrace
>   # cat trace
>   [...]
>           _raw_spin_lock() {
>             preempt_count_add() {
>             do_raw_spin_lock() {
>           update_rq_clock();
> 
> Where the '*spin*' functions should have looked like this:
> 
>           _raw_spin_lock() {
>             preempt_count_add();
>             do_raw_spin_lock();
>           }
>           update_rq_clock();
> 
> Instead, have the wakeup and irqsoff tracers ignore the functions that are
> set by the set_graph_notrace like the function_graph tracer does. Move
> the logic in the function_graph tracer into a header to allow wakeup and
> irqsoff tracers to use it as well.
> 
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Acked-by: Namhyung Kim <namhyung@kernel.org>

Thanks,
Namhyung


> ---
>  kernel/trace/trace.h                 | 11 +++++++++++
>  kernel/trace/trace_functions_graph.c | 14 +++++++-------
>  kernel/trace/trace_irqsoff.c         | 12 ++++++++++++
>  kernel/trace/trace_sched_wakeup.c    | 12 ++++++++++++
>  4 files changed, 42 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 37602e722336..c2234494f40c 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -846,6 +846,17 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
>         return 0;
>  }
>  #endif /* CONFIG_DYNAMIC_FTRACE */
> +
> +extern unsigned int fgraph_max_depth;
> +
> +static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
> +{
> +       /* trace it when it is-nested-in or is a function enabled. */
> +       return !(trace->depth || ftrace_graph_addr(trace->func)) ||
> +               (trace->depth < 0) ||
> +               (fgraph_max_depth && trace->depth >= fgraph_max_depth);
> +}
> +
>  #else /* CONFIG_FUNCTION_GRAPH_TRACER */
>  static inline enum print_line_t
>  print_graph_function_flags(struct trace_iterator *iter, u32 flags)
> diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
> index 566f7327c3aa..d56123cdcc89 100644
> --- a/kernel/trace/trace_functions_graph.c
> +++ b/kernel/trace/trace_functions_graph.c
> @@ -65,7 +65,7 @@ struct fgraph_data {
> 
>  #define TRACE_GRAPH_INDENT     2
> 
> -static unsigned int max_depth;
> +unsigned int fgraph_max_depth;
> 
>  static struct tracer_opt trace_opts[] = {
>         /* Display overruns? (for self-debug purpose) */
> @@ -384,10 +384,10 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
>         if (!ftrace_trace_task(tr))
>                 return 0;
> 
> -       /* trace it when it is-nested-in or is a function enabled. */
> -       if ((!(trace->depth || ftrace_graph_addr(trace->func)) ||
> -            ftrace_graph_ignore_irqs()) || (trace->depth < 0) ||
> -           (max_depth && trace->depth >= max_depth))
> +       if (ftrace_graph_ignore_func(trace))
> +               return 0;
> +
> +       if (ftrace_graph_ignore_irqs())
>                 return 0;
> 
>         /*
> @@ -1500,7 +1500,7 @@ graph_depth_write(struct file *filp, const char __user *ubuf, size_t cnt,
>         if (ret)
>                 return ret;
> 
> -       max_depth = val;
> +       fgraph_max_depth = val;
> 
>         *ppos += cnt;
> 
> @@ -1514,7 +1514,7 @@ graph_depth_read(struct file *filp, char __user *ubuf, size_t cnt,
>         char buf[15]; /* More than enough to hold UINT_MAX + "\n"*/
>         int n;
> 
> -       n = sprintf(buf, "%d\n", max_depth);
> +       n = sprintf(buf, "%d\n", fgraph_max_depth);
> 
>         return simple_read_from_buffer(ubuf, cnt, ppos, buf, n);
>  }
> diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> index 03cdff84d026..86654d7e1afe 100644
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -175,6 +175,18 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
>         int ret;
>         int pc;
> 
> +       if (ftrace_graph_ignore_func(trace))
> +               return 0;
> +       /*
> +        * Do not trace a function if it's filtered by set_graph_notrace.
> +        * Make the index of ret stack negative to indicate that it should
> +        * ignore further functions.  But it needs its own ret stack entry
> +        * to recover the original index in order to continue tracing after
> +        * returning from the function.
> +        */
> +       if (ftrace_graph_notrace_addr(trace->func))
> +               return 1;
> +
>         if (!func_prolog_dec(tr, &data, &flags))
>                 return 0;
> 
> diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
> index 1bf2324dc682..5d0bb025bb21 100644
> --- a/kernel/trace/trace_sched_wakeup.c
> +++ b/kernel/trace/trace_sched_wakeup.c
> @@ -239,6 +239,18 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
>         unsigned long flags;
>         int pc, ret = 0;
> 
> +       if (ftrace_graph_ignore_func(trace))
> +               return 0;
> +       /*
> +        * Do not trace a function if it's filtered by set_graph_notrace.
> +        * Make the index of ret stack negative to indicate that it should
> +        * ignore further functions.  But it needs its own ret stack entry
> +        * to recover the original index in order to continue tracing after
> +        * returning from the function.
> +        */
> +       if (ftrace_graph_notrace_addr(trace->func))
> +               return 1;
> +
>         if (!func_prolog_preempt_disable(tr, &data, &pc))
>                 return 0;
> 
> --
> 2.10.2


* Re: [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace
  2016-12-12 16:30       ` Fwd: " Namhyung Kim
@ 2016-12-12 16:49         ` Steven Rostedt
  2016-12-12 17:09           ` Namhyung Kim
  0 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2016-12-12 16:49 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: LKML, Ingo Molnar, Andrew Morton, stable, Namhyung Kim

On Tue, 13 Dec 2016 01:30:01 +0900
Namhyung Kim <namhyung@kernel.org> wrote:


> Sorry to miss updating those tracers.  I guess it's no more necessary once
> the patch 8 is applied so that functions in the notrace filter will not be
> recorded.
> 
> Or maybe we need to change the prepare_ftrace_return() so that the
> graph_entry callback should be called after ftrace_push_return_trace() as
> some archs do.

I plan on updating fgraph in general so this should all be handled then.

> 
> >
> > Have the print functions handle cases where a tracer still records functions
> > even when they are in set_graph_notrace.  
> 
> I think it'd be better (or consistent, at least) not printing negative index
> records rather than showing entry only.

I thought about this too, but I'm more concerned about it not crashing
the kernel than to show a proper trace. The fix will just make sure it
doesn't crash.

> 
> >
> > Also add warnings if the depth is below zero before accessing the array.
> >
> > Note, the function graph logic will still prevent the return of these
> > functions from being recorded, which means that they will be left hanging
> > without a return. For example:
> >
> >    # echo '*spin*' > set_graph_notrace
> >    # echo 1 > options/display-graph
> >    # echo wakeup > current_tracer
> >    # cat trace
> >    [...]
> >       _raw_spin_lock() {
> >         preempt_count_add() {
> >         do_raw_spin_lock() {
> >       update_rq_clock();
> >
> > Where it should look like:
> >
> >       _raw_spin_lock() {
> >         preempt_count_add();
> >         do_raw_spin_lock();
> >       }
> >       update_rq_clock();  
> 
> If set_graph_notrace works correctly, it should be just:
> 
>          update_rq_clock();

Which is what it should look like after patch 8. But I didn't mark 8 for
stable, as that's more of a feature, since wakeup and irqsoff don't use
notrace yet. Yeah, notrace may break it a bit, but since this is the
first time someone noticed it, I don't think it's used much.

I wanted the simplest fix for stable.

-- Steve

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace
  2016-12-12 16:49         ` Steven Rostedt
@ 2016-12-12 17:09           ` Namhyung Kim
  2016-12-12 18:07             ` Steven Rostedt
  0 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2016-12-12 17:09 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Andrew Morton, stable

On Mon, Dec 12, 2016 at 11:49:20AM -0500, Steven Rostedt wrote:
> On Tue, 13 Dec 2016 01:30:01 +0900
> Namhyung Kim <namhyung@kernel.org> wrote:
> 
> 
> > Sorry for missing the update to those tracers.  I guess it's no longer
> > necessary once patch 8 is applied, since functions in the notrace filter
> > will then not be recorded.
> > 
> > Or maybe we need to change prepare_ftrace_return() so that the
> > graph_entry callback is called after ftrace_push_return_trace(), as
> > some archs already do.
> 
> I plan on updating fgraph in general so this should all be handled then.

ok

> 
> > 
> > >
> > > Have the print functions handle cases where a tracer still records functions
> > > even when they are in set_graph_notrace.  
> > 
> > I think it'd be better (or at least more consistent) not to print
> > negative-index records at all rather than showing only the entry.
> 
> I thought about this too, but I'm more concerned about it not crashing
> the kernel than about showing a proper trace. The fix will just make
> sure it doesn't crash.

ok

> 
> > 
> > >
> > > Also add warnings if the depth is below zero before accessing the array.
> > >
> > > Note, the function graph logic will still prevent the return of these
> > > functions from being recorded, which means that they will be left hanging
> > > without a return. For example:
> > >
> > >    # echo '*spin*' > set_graph_notrace
> > >    # echo 1 > options/display-graph
> > >    # echo wakeup > current_tracer
> > >    # cat trace
> > >    [...]
> > >       _raw_spin_lock() {
> > >         preempt_count_add() {
> > >         do_raw_spin_lock() {
> > >       update_rq_clock();
> > >
> > > Where it should look like:
> > >
> > >       _raw_spin_lock() {
> > >         preempt_count_add();
> > >         do_raw_spin_lock();
> > >       }
> > >       update_rq_clock();  
> > 
> > If set_graph_notrace works correctly, it should be just:
> > 
> >          update_rq_clock();
> 
> Which is what it should look like after patch 8. But I didn't mark 8
> for stable, as that's more of a feature, since wakeup and irqsoff don't
> use notrace yet. Yeah, notrace may break the output a bit, but since
> this is the first time someone has noticed it, I don't think it's used
> much.
> 
> I wanted the simplest fix for stable.

I think a simpler fix is just to return when the print code sees a negative record.


diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 52fb1e21b86b..2fb73c2e35b5 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -844,7 +844,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
 
                /* If a graph tracer ignored set_graph_notrace */
                if (call->depth < -1)
-                       call->depth += FTRACE_NOTRACE_DEPTH;
+                       return TRACE_TYPE_HANDLED;
 
                /*
                 * Comments display at + 1 to depth. Since
@@ -887,7 +887,7 @@ print_graph_entry_nested(struct trace_iterator *iter,
 
                /* If a graph tracer ignored set_graph_notrace */
                if (call->depth < -1)
-                       call->depth += FTRACE_NOTRACE_DEPTH;
+                       return TRACE_TYPE_HANDLED;
 
                cpu_data = per_cpu_ptr(data->cpu_data, cpu);
                cpu_data->depth = call->depth;


Thanks,
Namhyung

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace
  2016-12-12 17:09           ` Namhyung Kim
@ 2016-12-12 18:07             ` Steven Rostedt
  0 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2016-12-12 18:07 UTC (permalink / raw)
  To: Namhyung Kim; +Cc: LKML, Ingo Molnar, Andrew Morton, stable

On Tue, 13 Dec 2016 02:09:04 +0900
Namhyung Kim <namhyung@kernel.org> wrote:


> > I wanted the simplest fix for stable.  
> 
> I think a simpler fix is just to return when the print code sees a negative record.

You're right, but I guess I was trying to get it somewhat closer to the
final change too.

-- Steve

> 
> 
> diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
> index 52fb1e21b86b..2fb73c2e35b5 100644
> --- a/kernel/trace/trace_functions_graph.c
> +++ b/kernel/trace/trace_functions_graph.c
> @@ -844,7 +844,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
>  
>                 /* If a graph tracer ignored set_graph_notrace */
>                 if (call->depth < -1)
> -                       call->depth += FTRACE_NOTRACE_DEPTH;
> +                       return TRACE_TYPE_HANDLED;
>  
>                 /*
>                  * Comments display at + 1 to depth. Since
> @@ -887,7 +887,7 @@ print_graph_entry_nested(struct trace_iterator *iter,
>  
>                 /* If a graph tracer ignored set_graph_notrace */
>                 if (call->depth < -1)
> -                       call->depth += FTRACE_NOTRACE_DEPTH;
> +                       return TRACE_TYPE_HANDLED;
>  
>                 cpu_data = per_cpu_ptr(data->cpu_data, cpu);
>                 cpu_data->depth = call->depth;
> 
> 
> Thanks,
> Namhyung

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2016-12-12 18:07 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-09 14:26 [for-next][PATCH 0/8] tracing: Last minute updates for 4.10 Steven Rostedt
2016-12-09 14:26 ` [for-next][PATCH 1/8] tracing: Have the reg function allow to fail Steven Rostedt
2016-12-09 14:26 ` [for-next][PATCH 2/8] tracing: Do not start benchmark on boot up Steven Rostedt
2016-12-09 14:26 ` [for-next][PATCH 3/8] tracing: Have system enable return error if one of the events fail Steven Rostedt
2016-12-09 14:27 ` [for-next][PATCH 4/8] tracing: Allow benchmark to be enabled at early_initcall() Steven Rostedt
2016-12-09 14:27 ` [for-next][PATCH 5/8] ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it Steven Rostedt
2016-12-09 14:27 ` [for-next][PATCH 6/8] tracing: Replace kmap with copy_from_user() in trace_marker writing Steven Rostedt
2016-12-09 14:27 ` [for-next][PATCH 7/8] fgraph: Handle a case where a tracer ignores set_graph_notrace Steven Rostedt
     [not found]   ` <CADWwUUbhD0ZQbg6zN-A7A+f+jToadTx63UMQESoM04B75S+hvg@mail.gmail.com>
     [not found]     ` <CADWwUUarxm+ACf1WqDo1B+RFCDMtGXzFYK45kbTbRm6s+QGyUA@mail.gmail.com>
2016-12-12 16:30       ` Fwd: " Namhyung Kim
2016-12-12 16:49         ` Steven Rostedt
2016-12-12 17:09           ` Namhyung Kim
2016-12-12 18:07             ` Steven Rostedt
2016-12-09 14:27 ` [for-next][PATCH 8/8] tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too Steven Rostedt
2016-12-12 16:40   ` Namhyung Kim
