* [for-next][PATCH 00/13] tracing: More updates for 3.16
@ 2014-05-12 15:11 Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 01/13] ftrace: Have function graph tracer use global_ops for filtering Steven Rostedt
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: 1486eb859d525a8be763530c3318293b9b81abc0


Christoph Lameter (1):
      tracing: Replace __get_cpu_var uses with this_cpu_ptr

Frederic Weisbecker (1):
      tracing: Remove myself as a tracing maintainer

Steven Rostedt (1):
      ftrace/x86: Move the mcount/fentry code out of entry_64.S

Steven Rostedt (Red Hat) (10):
      ftrace: Have function graph tracer use global_ops for filtering
      tracing: Add trace_<tracepoint>_enabled() function
      tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks
      ftrace: Remove boolean of hash_enable and hash_disable
      ftrace: Write in missing comment from a very old commit
      ftrace: Always inline ftrace_hash_empty() helper function
      ftrace/x86: Get the current mcount addr for add_breakpoint()
      ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global
      ftrace: Use the ftrace_addr helper functions to find the ftrace_addr
      ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag

----
 Documentation/trace/tracepoints.txt |  24 ++++
 MAINTAINERS                         |   1 -
 arch/x86/kernel/Makefile            |   1 +
 arch/x86/kernel/entry_64.S          | 204 ---------------------------------
 arch/x86/kernel/ftrace.c            |  43 +------
 arch/x86/kernel/mcount_64.S         | 217 ++++++++++++++++++++++++++++++++++++
 include/linux/ftrace.h              |   4 +-
 include/linux/ftrace_event.h        |   3 +
 include/linux/kprobes.h             |   2 +-
 include/linux/trace_seq.h           |  10 ++
 include/linux/tracepoint.h          |  10 ++
 include/trace/ftrace.h              |  57 +++++++++-
 kernel/trace/ftrace.c               | 100 +++++++++++------
 kernel/trace/trace.c                |   2 +-
 kernel/trace/trace_output.c         |  41 +++++++
 15 files changed, 436 insertions(+), 283 deletions(-)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [for-next][PATCH 01/13] ftrace: Have function graph tracer use global_ops for filtering
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 02/13] tracing: Remove myself as a tracing maintainer Steven Rostedt
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0001-ftrace-Have-function-graph-tracer-use-global_ops-for.patch --]
[-- Type: text/plain, Size: 2066 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Commit 4104d326b670 "ftrace: Remove global function list and call
function directly" cleaned up the global_ops filtering and made
the code simpler. But it left out the function graph filtering, which
also depended on that code. The function graph tracer still needs to
use global_ops as its filter; otherwise it won't filter at all.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 34b098bf..9eb1aa0 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5008,12 +5008,6 @@ ftrace_suspend_notifier_call(struct notifier_block *bl, unsigned long state,
 	return NOTIFY_DONE;
 }
 
-/* Just a place holder for function graph */
-static struct ftrace_ops fgraph_ops __read_mostly = {
-	.func		= ftrace_stub,
-	.flags		= FTRACE_OPS_FL_STUB | FTRACE_OPS_FL_RECURSION_SAFE,
-};
-
 static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace)
 {
 	if (!ftrace_ops_test(&global_ops, trace->func, NULL))
@@ -5076,7 +5070,10 @@ int register_ftrace_graph(trace_func_graph_ret_t retfunc,
 	ftrace_graph_entry = ftrace_graph_entry_test;
 	update_function_graph_func();
 
-	ret = ftrace_startup(&fgraph_ops, FTRACE_START_FUNC_RET);
+	/* Function graph doesn't use the .func field of global_ops */
+	global_ops.flags |= FTRACE_OPS_FL_STUB;
+
+	ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);
 
 out:
 	mutex_unlock(&ftrace_lock);
@@ -5094,7 +5091,8 @@ void unregister_ftrace_graph(void)
 	ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
 	ftrace_graph_entry = ftrace_graph_entry_stub;
 	__ftrace_graph_entry = ftrace_graph_entry_stub;
-	ftrace_shutdown(&fgraph_ops, FTRACE_STOP_FUNC_RET);
+	ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);
+	global_ops.flags &= ~FTRACE_OPS_FL_STUB;
 	unregister_pm_notifier(&ftrace_suspend_notifier);
 	unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
 
-- 
1.8.5.3




* [for-next][PATCH 02/13] tracing: Remove myself as a tracing maintainer
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 01/13] ftrace: Have function graph tracer use global_ops for filtering Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 03/13] tracing: Replace __get_cpu_var uses with this_cpu_ptr Steven Rostedt
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Masami Hiramatsu,
	Namhyung Kim, Paul E. McKenney, Peter Zijlstra, Thomas Gleixner

[-- Attachment #1: 0002-tracing-Remove-myself-as-a-tracing-maintainer.patch --]
[-- Type: text/plain, Size: 1789 bytes --]

From: Frederic Weisbecker <fweisbec@gmail.com>

It has been a while since I last sent a tracing patch. I always keep an
eye on tracing evolutions and contributions in general, but given how
busy I am with nohz, isolation and, more generally, core cleanup work,
I seldom have time left to provide deep reviews of tracing patches, or
even reviews at all.

I've been very lucky to start kernel development on a very young
subsystem with tons of low-hanging fruit back in 2008. Given that it
deals with a lot of tricky things all around (sched, timers, irq,
preemption, NMIs, SMP, RCU, ...), I basically learned everything there.

Steve has been doing most of the incredible work these last years.
Thanks a lot!

Of course, consider me always available to help on tracing if any hard
days happen.

Link: http://lkml.kernel.org/r/1399131991-13216-1-git-send-email-fweisbec@gmail.com

Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 MAINTAINERS | 1 -
 1 file changed, 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index e67ea24..120af71 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9041,7 +9041,6 @@ F:	drivers/char/tpm/
 
 TRACING
 M:	Steven Rostedt <rostedt@goodmis.org>
-M:	Frederic Weisbecker <fweisbec@gmail.com>
 M:	Ingo Molnar <mingo@redhat.com>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
 S:	Maintained
-- 
1.8.5.3




* [for-next][PATCH 03/13] tracing: Replace __get_cpu_var uses with this_cpu_ptr
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 01/13] ftrace: Have function graph tracer use global_ops for filtering Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 02/13] tracing: Remove myself as a tracing maintainer Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 04/13] tracing: Add trace_<tracepoint>_enabled() function Steven Rostedt
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton,
	Masami Hiramatsu, Christoph Lameter

[-- Attachment #1: 0003-tracing-Replace-__get_cpu_var-uses-with-this_cpu_ptr.patch --]
[-- Type: text/plain, Size: 2079 bytes --]

From: Christoph Lameter <cl@linux.com>

Replace uses of &__get_cpu_var for address calculation with this_cpu_ptr.

Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1404291415560.18364@gentwo.org

Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/kprobes.h | 2 +-
 kernel/trace/ftrace.c   | 4 ++--
 kernel/trace/trace.c    | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 925eaf2..7bd2ad0 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -355,7 +355,7 @@ static inline void reset_current_kprobe(void)
 
 static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
 {
-	return (&__get_cpu_var(kprobe_ctlblk));
+	return this_cpu_ptr(&kprobe_ctlblk);
 }
 
 int register_kprobe(struct kprobe *p);
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 9eb1aa0..38e5cf7 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -822,7 +822,7 @@ function_profile_call(unsigned long ip, unsigned long parent_ip,
 
 	local_irq_save(flags);
 
-	stat = &__get_cpu_var(ftrace_profile_stats);
+	stat = this_cpu_ptr(&ftrace_profile_stats);
 	if (!stat->hash || !ftrace_profile_enabled)
 		goto out;
 
@@ -853,7 +853,7 @@ static void profile_graph_return(struct ftrace_graph_ret *trace)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	stat = &__get_cpu_var(ftrace_profile_stats);
+	stat = this_cpu_ptr(&ftrace_profile_stats);
 	if (!stat->hash || !ftrace_profile_enabled)
 		goto out;
 
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4c392c8..0543169 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1726,7 +1726,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	 */
 	barrier();
 	if (use_stack == 1) {
-		trace.entries		= &__get_cpu_var(ftrace_stack).calls[0];
+		trace.entries		= this_cpu_ptr(ftrace_stack.calls);
 		trace.max_entries	= FTRACE_STACK_MAX_ENTRIES;
 
 		if (regs)
-- 
1.8.5.3




* [for-next][PATCH 04/13] tracing: Add trace_<tracepoint>_enabled() function
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (2 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 03/13] tracing: Replace __get_cpu_var uses with this_cpu_ptr Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 05/13] tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks Steven Rostedt
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton, Mathieu Desnoyers

[-- Attachment #1: 0004-tracing-Add-trace_-tracepoint-_enabled-function.patch --]
[-- Type: text/plain, Size: 3814 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

There are some code paths in the kernel that need to do some
preparations before they call a tracepoint. As that code is worthless
overhead when the tracepoint is not enabled, it would be prudent to
have that code run only when the tracepoint is active. To accomplish
this, all tracepoints now get a static inline function called
"trace_<tracepoint-name>_enabled()" which returns true when the
tracepoint is enabled and false otherwise.

As an added bonus, that function uses the static_key of the tracepoint,
such that no conditional branch is needed.

  if (trace_mytracepoint_enabled()) {
	arg = process_tp_arg();
	trace_mytracepoint(arg);
  }

This keeps "process_tp_arg()" (which may be expensive to run) from
being executed when the tracepoint isn't enabled.

It's best to encapsulate the tracepoint call itself within the if
statement to avoid races. For example, if you had:

  if (trace_mytracepoint_enabled())
	arg = process_tp_arg();
  trace_mytracepoint(arg);

There's a chance that the tracepoint could be enabled just after the
if statement, and arg would be undefined when the tracepoint is called.

Link: http://lkml.kernel.org/r/20140506094407.507b6435@gandalf.local.home

Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 Documentation/trace/tracepoints.txt | 24 ++++++++++++++++++++++++
 include/linux/tracepoint.h          | 10 ++++++++++
 2 files changed, 34 insertions(+)

diff --git a/Documentation/trace/tracepoints.txt b/Documentation/trace/tracepoints.txt
index 6b018b5..a3efac6 100644
--- a/Documentation/trace/tracepoints.txt
+++ b/Documentation/trace/tracepoints.txt
@@ -115,6 +115,30 @@ If the tracepoint has to be used in kernel modules, an
 EXPORT_TRACEPOINT_SYMBOL_GPL() or EXPORT_TRACEPOINT_SYMBOL() can be
 used to export the defined tracepoints.
 
+If you need to do a bit of work for a tracepoint parameter, and
+that work is only used for the tracepoint, that work can be encapsulated
+within an if statement with the following:
+
+	if (trace_foo_bar_enabled()) {
+		int i;
+		int tot = 0;
+
+		for (i = 0; i < count; i++)
+			tot += calculate_nuggets();
+
+		trace_foo_bar(tot);
+	}
+
+All trace_<tracepoint>() calls have a matching trace_<tracepoint>_enabled()
+function defined that returns true if the tracepoint is enabled and
+false otherwise. The trace_<tracepoint>() should always be within the
+block of the if (trace_<tracepoint>_enabled()) to prevent races between
+the tracepoint being enabled and the check being seen.
+
+The advantage of using the trace_<tracepoint>_enabled() is that it uses
+the static_key of the tracepoint to allow the if statement to be implemented
+with jump labels and avoid conditional branches.
+
 Note: The convenience macro TRACE_EVENT provides an alternative way to
       define tracepoints. Check http://lwn.net/Articles/379903,
       http://lwn.net/Articles/381064 and http://lwn.net/Articles/383362
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 9d30ee4..2e2a5f7 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -185,6 +185,11 @@ extern void syscall_unregfunc(void);
 	static inline void						\
 	check_trace_callback_type_##name(void (*cb)(data_proto))	\
 	{								\
+	}								\
+	static inline bool						\
+	trace_##name##_enabled(void)					\
+	{								\
+		return static_key_false(&__tracepoint_##name.key);	\
 	}
 
 /*
@@ -230,6 +235,11 @@ extern void syscall_unregfunc(void);
 	}								\
 	static inline void check_trace_callback_type_##name(void (*cb)(data_proto)) \
 	{								\
+	}								\
+	static inline bool						\
+	trace_##name##_enabled(void)					\
+	{								\
+		return false;						\
 	}
 
 #define DEFINE_TRACE_FN(name, reg, unreg)
-- 
1.8.5.3




* [for-next][PATCH 05/13] tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (3 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 04/13] tracing: Add trace_<tracepoint>_enabled() function Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 06/13] ftrace: Remove boolean of hash_enable and hash_disable Steven Rostedt
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton, Javi Merino

[-- Attachment #1: 0005-tracing-Add-__bitmask-macro-to-trace-events-to-cpuma.patch --]
[-- Type: text/plain, Size: 9615 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Being able to show a cpumask in events can be useful, as some events
may affect only certain CPUs. There is no standard way to record the
cpumask, and converting it to a string is rather expensive to do during
the trace, as traces happen in hot paths. It is better to record the
raw event mask and be able to parse it at print time.

The following macros were added for use with the TRACE_EVENT() macro:

  __bitmask()
  __assign_bitmask()
  __get_bitmask()

To test this, I added the new macros to the sched_migrate_task event,
which then looked like this:

TRACE_EVENT(sched_migrate_task,

	TP_PROTO(struct task_struct *p, int dest_cpu, const struct cpumask *cpus),

	TP_ARGS(p, dest_cpu, cpus),

	TP_STRUCT__entry(
		__array(	char,	comm,	TASK_COMM_LEN	)
		__field(	pid_t,	pid			)
		__field(	int,	prio			)
		__field(	int,	orig_cpu		)
		__field(	int,	dest_cpu		)
		__bitmask(	cpumask, cpus			)
	),

	TP_fast_assign(
		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
		__entry->pid		= p->pid;
		__entry->prio		= p->prio;
		__entry->orig_cpu	= task_cpu(p);
		__entry->dest_cpu	= dest_cpu;
		__assign_bitmask(cpumask, cpumask_bits(cpus));
	),

	TP_printk("comm=%s pid=%d prio=%d orig_cpu=%d dest_cpu=%d cpumask=%s",
		  __entry->comm, __entry->pid, __entry->prio,
		  __entry->orig_cpu, __entry->dest_cpu,
		  __get_bitmask(cpumask))
);

With the output of:

        ksmtuned-3613  [003] d..2   485.220508: sched_migrate_task: comm=ksmtuned pid=3615 prio=120 orig_cpu=3 dest_cpu=2 cpumask=00000000,0000000f
     migration/1-13    [001] d..5   485.221202: sched_migrate_task: comm=ksmtuned pid=3614 prio=120 orig_cpu=1 dest_cpu=0 cpumask=00000000,0000000f
             awk-3615  [002] d.H5   485.221747: sched_migrate_task: comm=rcu_preempt pid=7 prio=120 orig_cpu=0 dest_cpu=1 cpumask=00000000,000000ff
     migration/2-18    [002] d..5   485.222062: sched_migrate_task: comm=ksmtuned pid=3615 prio=120 orig_cpu=2 dest_cpu=3 cpumask=00000000,0000000f

Link: http://lkml.kernel.org/r/1399377998-14870-6-git-send-email-javi.merino@arm.com

Suggested-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ftrace_event.h |  3 +++
 include/linux/trace_seq.h    | 10 ++++++++
 include/trace/ftrace.h       | 57 +++++++++++++++++++++++++++++++++++++++++++-
 kernel/trace/trace_output.c  | 41 +++++++++++++++++++++++++++++++
 4 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index d16da3e..cff3106 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -38,6 +38,9 @@ const char *ftrace_print_symbols_seq_u64(struct trace_seq *p,
 								 *symbol_array);
 #endif
 
+const char *ftrace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+				     unsigned int bitmask_size);
+
 const char *ftrace_print_hex_seq(struct trace_seq *p,
 				 const unsigned char *buf, int len);
 
diff --git a/include/linux/trace_seq.h b/include/linux/trace_seq.h
index a32d86e..1361169 100644
--- a/include/linux/trace_seq.h
+++ b/include/linux/trace_seq.h
@@ -46,6 +46,9 @@ extern int trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
 extern void *trace_seq_reserve(struct trace_seq *s, size_t len);
 extern int trace_seq_path(struct trace_seq *s, const struct path *path);
 
+extern int trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
+			     int nmaskbits);
+
 #else /* CONFIG_TRACING */
 static inline int trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
 {
@@ -57,6 +60,13 @@ trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary)
 	return 0;
 }
 
+static inline int
+trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
+		  int nmaskbits)
+{
+	return 0;
+}
+
 static inline int trace_print_seq(struct seq_file *m, struct trace_seq *s)
 {
 	return 0;
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 0a1a4f7..d9c44af 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -53,6 +53,9 @@
 #undef __string
 #define __string(item, src) __dynamic_array(char, item, -1)
 
+#undef __bitmask
+#define __bitmask(item, src) __dynamic_array(char, item, -1)
+
 #undef TP_STRUCT__entry
 #define TP_STRUCT__entry(args...) args
 
@@ -128,6 +131,9 @@
 #undef __string
 #define __string(item, src) __dynamic_array(char, item, -1)
 
+#undef __bitmask
+#define __bitmask(item, src) __dynamic_array(unsigned long, item, -1)
+
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
 	struct ftrace_data_offsets_##call {				\
@@ -200,6 +206,15 @@
 #undef __get_str
 #define __get_str(field) (char *)__get_dynamic_array(field)
 
+#undef __get_bitmask
+#define __get_bitmask(field)						\
+	({								\
+		void *__bitmask = __get_dynamic_array(field);		\
+		unsigned int __bitmask_size;				\
+		__bitmask_size = (__entry->__data_loc_##field >> 16) & 0xffff; \
+		ftrace_print_bitmask_seq(p, __bitmask, __bitmask_size);	\
+	})
+
 #undef __print_flags
 #define __print_flags(flag, delim, flag_array...)			\
 	({								\
@@ -322,6 +337,9 @@ static struct trace_event_functions ftrace_event_type_funcs_##call = {	\
 #undef __string
 #define __string(item, src) __dynamic_array(char, item, -1)
 
+#undef __bitmask
+#define __bitmask(item, src) __dynamic_array(unsigned long, item, -1)
+
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, func, print)	\
 static int notrace __init						\
@@ -372,6 +390,29 @@ ftrace_define_fields_##call(struct ftrace_event_call *event_call)	\
 #define __string(item, src) __dynamic_array(char, item,			\
 		    strlen((src) ? (const char *)(src) : "(null)") + 1)
 
+/*
+ * __bitmask_size_in_bytes_raw is the number of bytes needed to hold
+ * num_possible_cpus().
+ */
+#define __bitmask_size_in_bytes_raw			\
+	((num_possible_cpus() + 7) / 8)
+
+#define __bitmask_size_in_longs						\
+	((__bitmask_size_in_bytes_raw + ((BITS_PER_LONG / 8) - 1))	\
+	 / (BITS_PER_LONG / 8))
+
+/*
+ * __bitmask_size_in_bytes is the number of bytes needed to hold
+ * num_possible_cpus() padded out to the nearest long. This is what
+ * is saved in the buffer, just to be consistent.
+ */
+#define __bitmask_size_in_bytes				\
+	(__bitmask_size_in_longs * (BITS_PER_LONG / 8))
+
+#undef __bitmask
+#define __bitmask(item, src) __dynamic_array(unsigned long, item,	\
+					     __bitmask_size_in_longs)
+
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
 static inline notrace int ftrace_get_offsets_##call(			\
@@ -513,12 +554,22 @@ static inline notrace int ftrace_get_offsets_##call(			\
 	__entry->__data_loc_##item = __data_offsets.item;
 
 #undef __string
-#define __string(item, src) __dynamic_array(char, item, -1)       	\
+#define __string(item, src) __dynamic_array(char, item, -1)
 
 #undef __assign_str
 #define __assign_str(dst, src)						\
 	strcpy(__get_str(dst), (src) ? (const char *)(src) : "(null)");
 
+#undef __bitmask
+#define __bitmask(item, src) __dynamic_array(unsigned long, item, -1)
+
+#undef __get_bitmask
+#define __get_bitmask(field) (char *)__get_dynamic_array(field)
+
+#undef __assign_bitmask
+#define __assign_bitmask(dst, src)					\
+	memcpy(__get_bitmask(dst), (src), __bitmask_size_in_bytes)
+
 #undef TP_fast_assign
 #define TP_fast_assign(args...) args
 
@@ -586,6 +637,7 @@ static inline void ftrace_test_probe_##call(void)			\
 #undef __print_hex
 #undef __get_dynamic_array
 #undef __get_str
+#undef __get_bitmask
 
 #undef TP_printk
 #define TP_printk(fmt, args...) "\"" fmt "\", "  __stringify(args)
@@ -651,6 +703,9 @@ __attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
 #undef __get_str
 #define __get_str(field) (char *)__get_dynamic_array(field)
 
+#undef __get_bitmask
+#define __get_bitmask(field) (char *)__get_dynamic_array(field)
+
 #undef __perf_addr
 #define __perf_addr(a)	(__addr = (a))
 
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index a436de1..f3dad80 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -126,6 +126,34 @@ trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
 EXPORT_SYMBOL_GPL(trace_seq_printf);
 
 /**
+ * trace_seq_bitmask - put a list of longs as a bitmask print output
+ * @s:		trace sequence descriptor
+ * @maskp:	points to an array of unsigned longs that represent a bitmask
+ * @nmaskbits:	The number of bits that are valid in @maskp
+ *
+ * It returns 0 if the trace oversizes the buffer's free
+ * space, 1 otherwise.
+ *
+ * Writes a ASCII representation of a bitmask string into @s.
+ */
+int
+trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
+		  int nmaskbits)
+{
+	int len = (PAGE_SIZE - 1) - s->len;
+	int ret;
+
+	if (s->full || !len)
+		return 0;
+
+	ret = bitmap_scnprintf(s->buffer, len, maskp, nmaskbits);
+	s->len += ret;
+
+	return 1;
+}
+EXPORT_SYMBOL_GPL(trace_seq_bitmask);
+
+/**
  * trace_seq_vprintf - sequence printing of trace information
  * @s: trace sequence descriptor
  * @fmt: printf format string
@@ -399,6 +427,19 @@ EXPORT_SYMBOL(ftrace_print_symbols_seq_u64);
 #endif
 
 const char *
+ftrace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+			 unsigned int bitmask_size)
+{
+	const char *ret = p->buffer + p->len;
+
+	trace_seq_bitmask(p, bitmask_ptr, bitmask_size * 8);
+	trace_seq_putc(p, 0);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(ftrace_print_bitmask_seq);
+
+const char *
 ftrace_print_hex_seq(struct trace_seq *p, const unsigned char *buf, int buf_len)
 {
 	int i;
-- 
1.8.5.3




* [for-next][PATCH 06/13] ftrace: Remove boolean of hash_enable and hash_disable
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (4 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 05/13] tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 07/13] ftrace: Write in missing comment from a very old commit Steven Rostedt
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0006-ftrace-Remove-boolean-of-hash_enable-and-hash_disabl.patch --]
[-- Type: text/plain, Size: 1895 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Commit 4104d326b670 "ftrace: Remove global function list and call
function directly" cleaned up the global_ops filtering and made
the code simpler, but it left behind a variable "hash_enable" that was
used to decide whether the hash functions should be updated or not.
They were updated unless the global_ops overrode them. As the
global_ops are now no different from any other ftrace_ops, the hash
always gets updated and there's no reason to keep the hash_enable
boolean.

The same goes for hash_disable used in ftrace_shutdown().

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 38e5cf7..2c99d1f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2042,7 +2042,6 @@ static void ftrace_startup_enable(int command)
 
 static int ftrace_startup(struct ftrace_ops *ops, int command)
 {
-	bool hash_enable = true;
 	int ret;
 
 	if (unlikely(ftrace_disabled))
@@ -2056,8 +2055,8 @@ static int ftrace_startup(struct ftrace_ops *ops, int command)
 	command |= FTRACE_UPDATE_CALLS;
 
 	ops->flags |= FTRACE_OPS_FL_ENABLED;
-	if (hash_enable)
-		ftrace_hash_rec_enable(ops, 1);
+
+	ftrace_hash_rec_enable(ops, 1);
 
 	ftrace_startup_enable(command);
 
@@ -2066,7 +2065,6 @@ static int ftrace_startup(struct ftrace_ops *ops, int command)
 
 static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 {
-	bool hash_disable = true;
 	int ret;
 
 	if (unlikely(ftrace_disabled))
@@ -2084,8 +2082,7 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	 */
 	WARN_ON_ONCE(ftrace_start_up < 0);
 
-	if (hash_disable)
-		ftrace_hash_rec_disable(ops, 1);
+	ftrace_hash_rec_disable(ops, 1);
 
 	if (!global_start_up)
 		ops->flags &= ~FTRACE_OPS_FL_ENABLED;
-- 
1.8.5.3




* [for-next][PATCH 07/13] ftrace: Write in missing comment from a very old commit
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (5 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 06/13] ftrace: Remove boolean of hash_enable and hash_disable Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 08/13] ftrace: Always inline ftrace_hash_empty() helper function Steven Rostedt
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0007-ftrace-Write-in-missing-comment-from-a-very-old-comm.patch --]
[-- Type: text/plain, Size: 1424 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Back in 2011, commit ed926f9b35cda "ftrace: Use counters to enable
functions to trace" changed the way ftrace accounts for enabled
and disabled traced functions. There was a comment started as:

	/*
	 *
	 */

but it was never finished. Well, that's rather useless. I probably
forgot to save the file before committing it. And it has passed review
all this time.

Anyway, better late than never. I updated the comment to express what
is happening in that somewhat complex code.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 2c99d1f..61f39f8 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1552,7 +1552,14 @@ static void __ftrace_hash_rec_update(struct ftrace_ops *ops,
 			in_other_hash = !!ftrace_lookup_ip(other_hash, rec->ip);
 
 			/*
+			 * If filter_hash is set, we want to match all functions
+			 * that are in the hash but not in the other hash.
 			 *
+			 * If filter_hash is not set, then we are decrementing.
+			 * That means we match anything that is in the hash
+			 * and also in the other_hash. That is, we need to turn
+			 * off functions in the other hash because they are disabled
+			 * by this hash.
 			 */
 			if (filter_hash && in_hash && !in_other_hash)
 				match = 1;
-- 
1.8.5.3




* [for-next][PATCH 08/13] ftrace: Always inline ftrace_hash_empty() helper function
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (6 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 07/13] ftrace: Write in missing comment from a very old commit Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 09/13] ftrace/x86: Get the current mcount addr for add_breakpoint() Steven Rostedt
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0008-ftrace-Always-inline-ftrace_hash_empty-helper-functi.patch --]
[-- Type: text/plain, Size: 1033 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

The ftrace_hash_empty() function is a simple test:

	return !hash || !hash->count;

But gcc seems to want to make it a call. As this is in an extremely
hot path of the function tracer, there's no reason it needs to be
a call. I only wrote it as a helper function anyway; otherwise
it would have been inlined manually.

Force gcc to inline it, as it could have also been a macro.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 61f39f8..98fa931 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1105,7 +1105,7 @@ struct ftrace_page {
 static struct ftrace_page	*ftrace_pages_start;
 static struct ftrace_page	*ftrace_pages;
 
-static bool ftrace_hash_empty(struct ftrace_hash *hash)
+static bool __always_inline ftrace_hash_empty(struct ftrace_hash *hash)
 {
 	return !hash || !hash->count;
 }
-- 
1.8.5.3




* [for-next][PATCH 09/13] ftrace/x86: Get the current mcount addr for add_breakpoint()
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (7 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 08/13] ftrace: Always inline ftrace_hash_empty() helper function Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 10/13] ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global Steven Rostedt
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0009-ftrace-x86-Get-the-current-mcount-addr-for-add_break.patch --]
[-- Type: text/plain, Size: 1424 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

The add_breakpoints() code in the ftrace update gets the address
of what the call will become, but if the mcount address is changing
from the regs to the non-regs ftrace_caller (or vice versa), it needs
the address of what the record currently is.

This is rather silly, as the code should always use what is currently
there, regardless of whether it's changing the regs function or just
converting to a nop.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/ftrace.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 3d837b0..4b3c195 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -380,9 +380,9 @@ static int add_breakpoints(struct dyn_ftrace *rec, int enable)
 	unsigned long ftrace_addr;
 	int ret;
 
-	ret = ftrace_test_record(rec, enable);
+	ftrace_addr = get_ftrace_old_addr(rec);
 
-	ftrace_addr = get_ftrace_addr(rec);
+	ret = ftrace_test_record(rec, enable);
 
 	switch (ret) {
 	case FTRACE_UPDATE_IGNORE:
@@ -394,8 +394,6 @@ static int add_breakpoints(struct dyn_ftrace *rec, int enable)
 
 	case FTRACE_UPDATE_MODIFY_CALL_REGS:
 	case FTRACE_UPDATE_MODIFY_CALL:
-		ftrace_addr = get_ftrace_old_addr(rec);
-		/* fall through */
 	case FTRACE_UPDATE_MAKE_NOP:
 		/* converting a call to a nop */
 		return add_brk_on_call(rec, ftrace_addr);
-- 
1.8.5.3




* [for-next][PATCH 10/13] ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (8 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 09/13] ftrace/x86: Get the current mcount addr for add_breakpoint() Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 11/13] ftrace: Use the ftrace_addr helper functions to find the ftrace_addr Steven Rostedt
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0010-ftrace-Make-get_ftrace_addr-and-get_ftrace_addr_old-.patch --]
[-- Type: text/plain, Size: 5459 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

Move and rename get_ftrace_addr() and get_ftrace_addr_old() to
ftrace_get_addr_new() and ftrace_get_addr_curr() respectively.

This moves these two helper functions out of the arch-specific code
and into the generic code, and renames them to have better generic
names. This will allow other archs to use them, and makes it a bit
easier to work on getting separate trampolines for different
functions.

ftrace_get_addr_new() returns the trampoline address that the mcount
call address will be converted to.

ftrace_get_addr_curr() returns the trampoline address of what the
mcount call address currently jumps to.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/ftrace.c | 36 +++++-------------------------------
 include/linux/ftrace.h   |  2 ++
 kernel/trace/ftrace.c    | 36 ++++++++++++++++++++++++++++++++++++
 3 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 4b3c195..5ef43ce 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -349,38 +349,12 @@ static int add_brk_on_nop(struct dyn_ftrace *rec)
 	return add_break(rec->ip, old);
 }
 
-/*
- * If the record has the FTRACE_FL_REGS set, that means that it
- * wants to convert to a callback that saves all regs. If FTRACE_FL_REGS
- * is not not set, then it wants to convert to the normal callback.
- */
-static unsigned long get_ftrace_addr(struct dyn_ftrace *rec)
-{
-	if (rec->flags & FTRACE_FL_REGS)
-		return (unsigned long)FTRACE_REGS_ADDR;
-	else
-		return (unsigned long)FTRACE_ADDR;
-}
-
-/*
- * The FTRACE_FL_REGS_EN is set when the record already points to
- * a function that saves all the regs. Basically the '_EN' version
- * represents the current state of the function.
- */
-static unsigned long get_ftrace_old_addr(struct dyn_ftrace *rec)
-{
-	if (rec->flags & FTRACE_FL_REGS_EN)
-		return (unsigned long)FTRACE_REGS_ADDR;
-	else
-		return (unsigned long)FTRACE_ADDR;
-}
-
 static int add_breakpoints(struct dyn_ftrace *rec, int enable)
 {
 	unsigned long ftrace_addr;
 	int ret;
 
-	ftrace_addr = get_ftrace_old_addr(rec);
+	ftrace_addr = ftrace_get_addr_curr(rec);
 
 	ret = ftrace_test_record(rec, enable);
 
@@ -438,14 +412,14 @@ static int remove_breakpoint(struct dyn_ftrace *rec)
 		 * If not, don't touch the breakpoint, we make just create
 		 * a disaster.
 		 */
-		ftrace_addr = get_ftrace_addr(rec);
+		ftrace_addr = ftrace_get_addr_new(rec);
 		nop = ftrace_call_replace(ip, ftrace_addr);
 
 		if (memcmp(&ins[1], &nop[1], MCOUNT_INSN_SIZE - 1) == 0)
 			goto update;
 
 		/* Check both ftrace_addr and ftrace_old_addr */
-		ftrace_addr = get_ftrace_old_addr(rec);
+		ftrace_addr = ftrace_get_addr_curr(rec);
 		nop = ftrace_call_replace(ip, ftrace_addr);
 
 		if (memcmp(&ins[1], &nop[1], MCOUNT_INSN_SIZE - 1) != 0)
@@ -489,7 +463,7 @@ static int add_update(struct dyn_ftrace *rec, int enable)
 
 	ret = ftrace_test_record(rec, enable);
 
-	ftrace_addr  = get_ftrace_addr(rec);
+	ftrace_addr  = ftrace_get_addr_new(rec);
 
 	switch (ret) {
 	case FTRACE_UPDATE_IGNORE:
@@ -536,7 +510,7 @@ static int finish_update(struct dyn_ftrace *rec, int enable)
 
 	ret = ftrace_update_record(rec, enable);
 
-	ftrace_addr = get_ftrace_addr(rec);
+	ftrace_addr = ftrace_get_addr_new(rec);
 
 	switch (ret) {
 	case FTRACE_UPDATE_IGNORE:
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index f0ff2c2..2f8cbff 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -400,6 +400,8 @@ int ftrace_update_record(struct dyn_ftrace *rec, int enable);
 int ftrace_test_record(struct dyn_ftrace *rec, int enable);
 void ftrace_run_stop_machine(int command);
 unsigned long ftrace_location(unsigned long ip);
+unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec);
+unsigned long ftrace_get_addr_curr(struct dyn_ftrace *rec);
 
 extern ftrace_func_t ftrace_trace_function;
 
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 98fa931..e825fde 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1755,6 +1755,42 @@ int ftrace_test_record(struct dyn_ftrace *rec, int enable)
 	return ftrace_check_record(rec, enable, 0);
 }
 
+/**
+ * ftrace_get_addr_new - Get the call address to set to
+ * @rec:  The ftrace record descriptor
+ *
+ * If the record has the FTRACE_FL_REGS set, that means that it
+ * wants to convert to a callback that saves all regs. If FTRACE_FL_REGS
+ * is not set, then it wants to convert to the normal callback.
+ *
+ * Returns the address of the trampoline to set to
+ */
+unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec)
+{
+	if (rec->flags & FTRACE_FL_REGS)
+		return (unsigned long)FTRACE_REGS_ADDR;
+	else
+		return (unsigned long)FTRACE_ADDR;
+}
+
+/**
+ * ftrace_get_addr_curr - Get the call address that is already there
+ * @rec:  The ftrace record descriptor
+ *
+ * The FTRACE_FL_REGS_EN is set when the record already points to
+ * a function that saves all the regs. Basically the '_EN' version
+ * represents the current state of the function.
+ *
+ * Returns the address of the trampoline that is currently being called
+ */
+unsigned long ftrace_get_addr_curr(struct dyn_ftrace *rec)
+{
+	if (rec->flags & FTRACE_FL_REGS_EN)
+		return (unsigned long)FTRACE_REGS_ADDR;
+	else
+		return (unsigned long)FTRACE_ADDR;
+}
+
 static int
 __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 {
-- 
1.8.5.3




* [for-next][PATCH 11/13] ftrace: Use the ftrace_addr helper functions to find the ftrace_addr
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (9 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 10/13] ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 12/13] ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 13/13] ftrace/x86: Move the mcount/fentry code out of entry_64.S Steven Rostedt
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0011-ftrace-Use-the-ftrace_addr-helper-functions-to-find-.patch --]
[-- Type: text/plain, Size: 1518 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

With the functions that determine what the mcount call site should be
replaced with moved into the generic code, there are a few places
in the generic code that can use them instead of hard coding the
logic.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index e825fde..52c2b53 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1798,12 +1798,12 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 	unsigned long ftrace_addr;
 	int ret;
 
-	ret = ftrace_update_record(rec, enable);
+	ftrace_addr = ftrace_get_addr_new(rec);
 
-	if (rec->flags & FTRACE_FL_REGS)
-		ftrace_addr = (unsigned long)FTRACE_REGS_ADDR;
-	else
-		ftrace_addr = (unsigned long)FTRACE_ADDR;
+	/* This needs to be done before we call ftrace_update_record */
+	ftrace_old_addr = ftrace_get_addr_curr(rec);
+
+	ret = ftrace_update_record(rec, enable);
 
 	switch (ret) {
 	case FTRACE_UPDATE_IGNORE:
@@ -1817,11 +1817,6 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 
 	case FTRACE_UPDATE_MODIFY_CALL_REGS:
 	case FTRACE_UPDATE_MODIFY_CALL:
-		if (rec->flags & FTRACE_FL_REGS)
-			ftrace_old_addr = (unsigned long)FTRACE_ADDR;
-		else
-			ftrace_old_addr = (unsigned long)FTRACE_REGS_ADDR;
-
 		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
 	}
 
-- 
1.8.5.3




* [for-next][PATCH 12/13] ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (10 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 11/13] ftrace: Use the ftrace_addr helper functions to find the ftrace_addr Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  2014-05-12 15:11 ` [for-next][PATCH 13/13] ftrace/x86: Move the mcount/fentry code out of entry_64.S Steven Rostedt
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton

[-- Attachment #1: 0012-ftrace-Remove-FTRACE_UPDATE_MODIFY_CALL_REGS-flag.patch --]
[-- Type: text/plain, Size: 3644 bytes --]

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

As the decision about what needs to be done (converting a call to
ftrace_caller into one to ftrace_regs_caller, or vice versa) can easily
be determined from the FTRACE_FL_REGS and FTRACE_FL_REGS_EN bits of
rec->flags, there's no need to have ftrace_check_record() return either
an UPDATE_MODIFY_CALL_REGS or an UPDATE_MODIFY_CALL. Just the latter is
enough. The extra enum value causes more complexity than is required.
Remove it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/ftrace.c |  3 ---
 include/linux/ftrace.h   |  2 --
 kernel/trace/ftrace.c    | 13 ++++---------
 3 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 5ef43ce..89de3ea 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -366,7 +366,6 @@ static int add_breakpoints(struct dyn_ftrace *rec, int enable)
 		/* converting nop to call */
 		return add_brk_on_nop(rec);
 
-	case FTRACE_UPDATE_MODIFY_CALL_REGS:
 	case FTRACE_UPDATE_MODIFY_CALL:
 	case FTRACE_UPDATE_MAKE_NOP:
 		/* converting a call to a nop */
@@ -469,7 +468,6 @@ static int add_update(struct dyn_ftrace *rec, int enable)
 	case FTRACE_UPDATE_IGNORE:
 		return 0;
 
-	case FTRACE_UPDATE_MODIFY_CALL_REGS:
 	case FTRACE_UPDATE_MODIFY_CALL:
 	case FTRACE_UPDATE_MAKE_CALL:
 		/* converting nop to call */
@@ -516,7 +514,6 @@ static int finish_update(struct dyn_ftrace *rec, int enable)
 	case FTRACE_UPDATE_IGNORE:
 		return 0;
 
-	case FTRACE_UPDATE_MODIFY_CALL_REGS:
 	case FTRACE_UPDATE_MODIFY_CALL:
 	case FTRACE_UPDATE_MAKE_CALL:
 		/* converting nop to call */
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 2f8cbff..3e6dfb3 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -362,14 +362,12 @@ enum {
  *  IGNORE           - The function is already what we want it to be
  *  MAKE_CALL        - Start tracing the function
  *  MODIFY_CALL      - Stop saving regs for the function
- *  MODIFY_CALL_REGS - Start saving regs for the function
  *  MAKE_NOP         - Stop tracing the function
  */
 enum {
 	FTRACE_UPDATE_IGNORE,
 	FTRACE_UPDATE_MAKE_CALL,
 	FTRACE_UPDATE_MODIFY_CALL,
-	FTRACE_UPDATE_MODIFY_CALL_REGS,
 	FTRACE_UPDATE_MAKE_NOP,
 };
 
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 52c2b53..cc07b7f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1701,19 +1701,15 @@ static int ftrace_check_record(struct dyn_ftrace *rec, int enable, int update)
 		/*
 		 * If this record is being updated from a nop, then
 		 *   return UPDATE_MAKE_CALL.
-		 * Otherwise, if the EN flag is set, then return
-		 *   UPDATE_MODIFY_CALL_REGS to tell the caller to convert
-		 *   from the non-save regs, to a save regs function.
 		 * Otherwise,
 		 *   return UPDATE_MODIFY_CALL to tell the caller to convert
-		 *   from the save regs, to a non-save regs function.
+		 *   from the save regs, to a non-save regs function or
+		 *   vice versa.
 		 */
 		if (flag & FTRACE_FL_ENABLED)
 			return FTRACE_UPDATE_MAKE_CALL;
-		else if (rec->flags & FTRACE_FL_REGS_EN)
-			return FTRACE_UPDATE_MODIFY_CALL_REGS;
-		else
-			return FTRACE_UPDATE_MODIFY_CALL;
+
+		return FTRACE_UPDATE_MODIFY_CALL;
 	}
 
 	if (update) {
@@ -1815,7 +1811,6 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 	case FTRACE_UPDATE_MAKE_NOP:
 		return ftrace_make_nop(NULL, rec, ftrace_addr);
 
-	case FTRACE_UPDATE_MODIFY_CALL_REGS:
 	case FTRACE_UPDATE_MODIFY_CALL:
 		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
 	}
-- 
1.8.5.3




* [for-next][PATCH 13/13] ftrace/x86: Move the mcount/fentry code out of entry_64.S
  2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
                   ` (11 preceding siblings ...)
  2014-05-12 15:11 ` [for-next][PATCH 12/13] ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag Steven Rostedt
@ 2014-05-12 15:11 ` Steven Rostedt
  12 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2014-05-12 15:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Frederic Weisbecker, Andrew Morton, Thomas Gleixner,
	Jiri Kosina

[-- Attachment #1: 0013-ftrace-x86-Move-the-mcount-fentry-code-out-of-entry_.patch --]
[-- Type: text/plain, Size: 10074 bytes --]

From: Steven Rostedt <rostedt@goodmis.org>

As the mcount code gets more complex, it really does not belong
in the entry_64.S file. Moving it into its own file, mcount_64.S,
keeps things a bit cleaner.

Link: http://lkml.kernel.org/p/20140508152152.2130e8cf@gandalf.local.home

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/Makefile    |   1 +
 arch/x86/kernel/entry_64.S  | 204 -----------------------------------------
 arch/x86/kernel/mcount_64.S | 217 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 218 insertions(+), 204 deletions(-)
 create mode 100644 arch/x86/kernel/mcount_64.S

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index f4d9600..db7f41d 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_IRQ_WORK)  += irq_work.o
 obj-y			+= probe_roms.o
 obj-$(CONFIG_X86_32)	+= i386_ksyms_32.o
 obj-$(CONFIG_X86_64)	+= sys_x86_64.o x8664_ksyms_64.o
+obj-$(CONFIG_X86_64)	+= mcount_64.o
 obj-y			+= syscall_$(BITS).o vsyscall_gtod.o
 obj-$(CONFIG_X86_64)	+= vsyscall_64.o
 obj-$(CONFIG_X86_64)	+= vsyscall_emu_64.o
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..3db806d 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -53,7 +53,6 @@
 #include <asm/page_types.h>
 #include <asm/irqflags.h>
 #include <asm/paravirt.h>
-#include <asm/ftrace.h>
 #include <asm/percpu.h>
 #include <asm/asm.h>
 #include <asm/context_tracking.h>
@@ -69,209 +68,6 @@
 	.code64
 	.section .entry.text, "ax"
 
-#ifdef CONFIG_FUNCTION_TRACER
-
-#ifdef CC_USING_FENTRY
-# define function_hook	__fentry__
-#else
-# define function_hook	mcount
-#endif
-
-#ifdef CONFIG_DYNAMIC_FTRACE
-
-ENTRY(function_hook)
-	retq
-END(function_hook)
-
-/* skip is set if stack has been adjusted */
-.macro ftrace_caller_setup skip=0
-	MCOUNT_SAVE_FRAME \skip
-
-	/* Load the ftrace_ops into the 3rd parameter */
-	movq function_trace_op(%rip), %rdx
-
-	/* Load ip into the first parameter */
-	movq RIP(%rsp), %rdi
-	subq $MCOUNT_INSN_SIZE, %rdi
-	/* Load the parent_ip into the second parameter */
-#ifdef CC_USING_FENTRY
-	movq SS+16(%rsp), %rsi
-#else
-	movq 8(%rbp), %rsi
-#endif
-.endm
-
-ENTRY(ftrace_caller)
-	/* Check if tracing was disabled (quick check) */
-	cmpl $0, function_trace_stop
-	jne  ftrace_stub
-
-	ftrace_caller_setup
-	/* regs go into 4th parameter (but make it NULL) */
-	movq $0, %rcx
-
-GLOBAL(ftrace_call)
-	call ftrace_stub
-
-	MCOUNT_RESTORE_FRAME
-ftrace_return:
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-GLOBAL(ftrace_graph_call)
-	jmp ftrace_stub
-#endif
-
-GLOBAL(ftrace_stub)
-	retq
-END(ftrace_caller)
-
-ENTRY(ftrace_regs_caller)
-	/* Save the current flags before compare (in SS location)*/
-	pushfq
-
-	/* Check if tracing was disabled (quick check) */
-	cmpl $0, function_trace_stop
-	jne  ftrace_restore_flags
-
-	/* skip=8 to skip flags saved in SS */
-	ftrace_caller_setup 8
-
-	/* Save the rest of pt_regs */
-	movq %r15, R15(%rsp)
-	movq %r14, R14(%rsp)
-	movq %r13, R13(%rsp)
-	movq %r12, R12(%rsp)
-	movq %r11, R11(%rsp)
-	movq %r10, R10(%rsp)
-	movq %rbp, RBP(%rsp)
-	movq %rbx, RBX(%rsp)
-	/* Copy saved flags */
-	movq SS(%rsp), %rcx
-	movq %rcx, EFLAGS(%rsp)
-	/* Kernel segments */
-	movq $__KERNEL_DS, %rcx
-	movq %rcx, SS(%rsp)
-	movq $__KERNEL_CS, %rcx
-	movq %rcx, CS(%rsp)
-	/* Stack - skipping return address */
-	leaq SS+16(%rsp), %rcx
-	movq %rcx, RSP(%rsp)
-
-	/* regs go into 4th parameter */
-	leaq (%rsp), %rcx
-
-GLOBAL(ftrace_regs_call)
-	call ftrace_stub
-
-	/* Copy flags back to SS, to restore them */
-	movq EFLAGS(%rsp), %rax
-	movq %rax, SS(%rsp)
-
-	/* Handlers can change the RIP */
-	movq RIP(%rsp), %rax
-	movq %rax, SS+8(%rsp)
-
-	/* restore the rest of pt_regs */
-	movq R15(%rsp), %r15
-	movq R14(%rsp), %r14
-	movq R13(%rsp), %r13
-	movq R12(%rsp), %r12
-	movq R10(%rsp), %r10
-	movq RBP(%rsp), %rbp
-	movq RBX(%rsp), %rbx
-
-	/* skip=8 to skip flags saved in SS */
-	MCOUNT_RESTORE_FRAME 8
-
-	/* Restore flags */
-	popfq
-
-	jmp ftrace_return
-ftrace_restore_flags:
-	popfq
-	jmp  ftrace_stub
-
-END(ftrace_regs_caller)
-
-
-#else /* ! CONFIG_DYNAMIC_FTRACE */
-
-ENTRY(function_hook)
-	cmpl $0, function_trace_stop
-	jne  ftrace_stub
-
-	cmpq $ftrace_stub, ftrace_trace_function
-	jnz trace
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	cmpq $ftrace_stub, ftrace_graph_return
-	jnz ftrace_graph_caller
-
-	cmpq $ftrace_graph_entry_stub, ftrace_graph_entry
-	jnz ftrace_graph_caller
-#endif
-
-GLOBAL(ftrace_stub)
-	retq
-
-trace:
-	MCOUNT_SAVE_FRAME
-
-	movq RIP(%rsp), %rdi
-#ifdef CC_USING_FENTRY
-	movq SS+16(%rsp), %rsi
-#else
-	movq 8(%rbp), %rsi
-#endif
-	subq $MCOUNT_INSN_SIZE, %rdi
-
-	call   *ftrace_trace_function
-
-	MCOUNT_RESTORE_FRAME
-
-	jmp ftrace_stub
-END(function_hook)
-#endif /* CONFIG_DYNAMIC_FTRACE */
-#endif /* CONFIG_FUNCTION_TRACER */
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-ENTRY(ftrace_graph_caller)
-	MCOUNT_SAVE_FRAME
-
-#ifdef CC_USING_FENTRY
-	leaq SS+16(%rsp), %rdi
-	movq $0, %rdx	/* No framepointers needed */
-#else
-	leaq 8(%rbp), %rdi
-	movq (%rbp), %rdx
-#endif
-	movq RIP(%rsp), %rsi
-	subq $MCOUNT_INSN_SIZE, %rsi
-
-	call	prepare_ftrace_return
-
-	MCOUNT_RESTORE_FRAME
-
-	retq
-END(ftrace_graph_caller)
-
-GLOBAL(return_to_handler)
-	subq  $24, %rsp
-
-	/* Save the return values */
-	movq %rax, (%rsp)
-	movq %rdx, 8(%rsp)
-	movq %rbp, %rdi
-
-	call ftrace_return_to_handler
-
-	movq %rax, %rdi
-	movq 8(%rsp), %rdx
-	movq (%rsp), %rax
-	addq $24, %rsp
-	jmp *%rdi
-#endif
-
 
 #ifndef CONFIG_PREEMPT
 #define retint_kernel retint_restore_args
diff --git a/arch/x86/kernel/mcount_64.S b/arch/x86/kernel/mcount_64.S
new file mode 100644
index 0000000..c050a01
--- /dev/null
+++ b/arch/x86/kernel/mcount_64.S
@@ -0,0 +1,217 @@
+/*
+ *  linux/arch/x86_64/mcount_64.S
+ *
+ *  Copyright (C) 2014  Steven Rostedt, Red Hat Inc
+ */
+
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+#include <asm/ftrace.h>
+
+
+	.code64
+	.section .entry.text, "ax"
+
+
+#ifdef CONFIG_FUNCTION_TRACER
+
+#ifdef CC_USING_FENTRY
+# define function_hook	__fentry__
+#else
+# define function_hook	mcount
+#endif
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+
+ENTRY(function_hook)
+	retq
+END(function_hook)
+
+/* skip is set if stack has been adjusted */
+.macro ftrace_caller_setup skip=0
+	MCOUNT_SAVE_FRAME \skip
+
+	/* Load the ftrace_ops into the 3rd parameter */
+	movq function_trace_op(%rip), %rdx
+
+	/* Load ip into the first parameter */
+	movq RIP(%rsp), %rdi
+	subq $MCOUNT_INSN_SIZE, %rdi
+	/* Load the parent_ip into the second parameter */
+#ifdef CC_USING_FENTRY
+	movq SS+16(%rsp), %rsi
+#else
+	movq 8(%rbp), %rsi
+#endif
+.endm
+
+ENTRY(ftrace_caller)
+	/* Check if tracing was disabled (quick check) */
+	cmpl $0, function_trace_stop
+	jne  ftrace_stub
+
+	ftrace_caller_setup
+	/* regs go into 4th parameter (but make it NULL) */
+	movq $0, %rcx
+
+GLOBAL(ftrace_call)
+	call ftrace_stub
+
+	MCOUNT_RESTORE_FRAME
+ftrace_return:
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+GLOBAL(ftrace_graph_call)
+	jmp ftrace_stub
+#endif
+
+GLOBAL(ftrace_stub)
+	retq
+END(ftrace_caller)
+
+ENTRY(ftrace_regs_caller)
+	/* Save the current flags before compare (in SS location)*/
+	pushfq
+
+	/* Check if tracing was disabled (quick check) */
+	cmpl $0, function_trace_stop
+	jne  ftrace_restore_flags
+
+	/* skip=8 to skip flags saved in SS */
+	ftrace_caller_setup 8
+
+	/* Save the rest of pt_regs */
+	movq %r15, R15(%rsp)
+	movq %r14, R14(%rsp)
+	movq %r13, R13(%rsp)
+	movq %r12, R12(%rsp)
+	movq %r11, R11(%rsp)
+	movq %r10, R10(%rsp)
+	movq %rbp, RBP(%rsp)
+	movq %rbx, RBX(%rsp)
+	/* Copy saved flags */
+	movq SS(%rsp), %rcx
+	movq %rcx, EFLAGS(%rsp)
+	/* Kernel segments */
+	movq $__KERNEL_DS, %rcx
+	movq %rcx, SS(%rsp)
+	movq $__KERNEL_CS, %rcx
+	movq %rcx, CS(%rsp)
+	/* Stack - skipping return address */
+	leaq SS+16(%rsp), %rcx
+	movq %rcx, RSP(%rsp)
+
+	/* regs go into 4th parameter */
+	leaq (%rsp), %rcx
+
+GLOBAL(ftrace_regs_call)
+	call ftrace_stub
+
+	/* Copy flags back to SS, to restore them */
+	movq EFLAGS(%rsp), %rax
+	movq %rax, SS(%rsp)
+
+	/* Handlers can change the RIP */
+	movq RIP(%rsp), %rax
+	movq %rax, SS+8(%rsp)
+
+	/* restore the rest of pt_regs */
+	movq R15(%rsp), %r15
+	movq R14(%rsp), %r14
+	movq R13(%rsp), %r13
+	movq R12(%rsp), %r12
+	movq R10(%rsp), %r10
+	movq RBP(%rsp), %rbp
+	movq RBX(%rsp), %rbx
+
+	/* skip=8 to skip flags saved in SS */
+	MCOUNT_RESTORE_FRAME 8
+
+	/* Restore flags */
+	popfq
+
+	jmp ftrace_return
+ftrace_restore_flags:
+	popfq
+	jmp  ftrace_stub
+
+END(ftrace_regs_caller)
+
+
+#else /* ! CONFIG_DYNAMIC_FTRACE */
+
+ENTRY(function_hook)
+	cmpl $0, function_trace_stop
+	jne  ftrace_stub
+
+	cmpq $ftrace_stub, ftrace_trace_function
+	jnz trace
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	cmpq $ftrace_stub, ftrace_graph_return
+	jnz ftrace_graph_caller
+
+	cmpq $ftrace_graph_entry_stub, ftrace_graph_entry
+	jnz ftrace_graph_caller
+#endif
+
+GLOBAL(ftrace_stub)
+	retq
+
+trace:
+	MCOUNT_SAVE_FRAME
+
+	movq RIP(%rsp), %rdi
+#ifdef CC_USING_FENTRY
+	movq SS+16(%rsp), %rsi
+#else
+	movq 8(%rbp), %rsi
+#endif
+	subq $MCOUNT_INSN_SIZE, %rdi
+
+	call   *ftrace_trace_function
+
+	MCOUNT_RESTORE_FRAME
+
+	jmp ftrace_stub
+END(function_hook)
+#endif /* CONFIG_DYNAMIC_FTRACE */
+#endif /* CONFIG_FUNCTION_TRACER */
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ENTRY(ftrace_graph_caller)
+	MCOUNT_SAVE_FRAME
+
+#ifdef CC_USING_FENTRY
+	leaq SS+16(%rsp), %rdi
+	movq $0, %rdx	/* No framepointers needed */
+#else
+	leaq 8(%rbp), %rdi
+	movq (%rbp), %rdx
+#endif
+	movq RIP(%rsp), %rsi
+	subq $MCOUNT_INSN_SIZE, %rsi
+
+	call	prepare_ftrace_return
+
+	MCOUNT_RESTORE_FRAME
+
+	retq
+END(ftrace_graph_caller)
+
+GLOBAL(return_to_handler)
+	subq  $24, %rsp
+
+	/* Save the return values */
+	movq %rax, (%rsp)
+	movq %rdx, 8(%rsp)
+	movq %rbp, %rdi
+
+	call ftrace_return_to_handler
+
+	movq %rax, %rdi
+	movq 8(%rsp), %rdx
+	movq (%rsp), %rax
+	addq $24, %rsp
+	jmp *%rdi
+#endif
-- 
1.8.5.3




end of thread, other threads:[~2014-05-12 15:16 UTC | newest]

Thread overview: 14+ messages
2014-05-12 15:11 [for-next][PATCH 00/13] tracing: More updates for 3.16 Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 01/13] ftrace: Have function graph tracer use global_ops for filtering Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 02/13] tracing: Remove myself as a tracing maintainer Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 03/13] tracing: Replace __get_cpu_var uses with this_cpu_ptr Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 04/13] tracing: Add trace_<tracepoint>_enabled() function Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 05/13] tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 06/13] ftrace: Remove boolean of hash_enable and hash_disable Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 07/13] ftrace: Write in missing comment from a very old commit Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 08/13] ftrace: Always inline ftrace_hash_empty() helper function Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 09/13] ftrace/x86: Get the current mcount addr for add_breakpoint() Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 10/13] ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 11/13] ftrace: Use the ftrace_addr helper functions to find the ftrace_addr Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 12/13] ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag Steven Rostedt
2014-05-12 15:11 ` [for-next][PATCH 13/13] ftrace/x86: Move the mcount/fentry code out of entry_64.S Steven Rostedt
