* [PATCH] ftrace: add simple oneshot function tracer
@ 2019-05-29  9:31 Thomas Preisner
  2019-05-29 14:45 ` Steven Rostedt
  0 siblings, 1 reply; 9+ messages in thread
From: Thomas Preisner @ 2019-05-29  9:31 UTC (permalink / raw)
  Cc: linux, Steven Rostedt, Ingo Molnar, Jonathan Corbet, linux-doc,
	linux-kernel

The "oneshot" tracer records every address (ip, parent_ip) exactly once.
As a result, "oneshot" can be used to efficiently create kernel function
coverage/usage reports such as in undertaker-tailor[0].

In order to provide this functionality, "oneshot" uses a
configurable hashset for blacklisting already recorded addresses. This
way, no user space application is required to parse the function
tracer's output and to deactivate functions after they have been
recorded once. Additionally, the tracer's output is reduced to a bare
minimum so that it can be passed directly to undertaker-tailor.

Further information regarding this oneshot function tracer can also be
found at [1].

[0]: https://undertaker.cs.fau.de
[1]: https://tpreisner.de/pub/ba-thesis.pdf
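
A rough illustration of the userspace post-processing this patch makes
unnecessary: without in-kernel deduplication, the normal function tracer
emits one line per call, so a tool has to filter repeats itself. A minimal
sketch (file name and addresses are made up for the example):

```shell
# Simulated function-tracer output: one line per call, with repeats
printf 'c0123400 foo\nc0123400 foo\nc0567800 bar\nc0123400 foo\n' > /tmp/oneshot_trace.txt
# Keep only the first occurrence of each line -- what "oneshot" does in-kernel
awk '!seen[$0]++' /tmp/oneshot_trace.txt
# → c0123400 foo
#   c0567800 bar
```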

Signed-off-by: Thomas Preisner <linux@tpreisner.de>
---
 Documentation/trace/ftrace.rst |   7 ++
 kernel/trace/Kconfig           |  68 ++++++++++
 kernel/trace/Makefile          |   1 +
 kernel/trace/trace.h           |   4 +
 kernel/trace/trace_entries.h   |  13 ++
 kernel/trace/trace_oneshot.c   | 220 +++++++++++++++++++++++++++++++++
 kernel/trace/trace_selftest.c  |  38 ++++++
 7 files changed, 351 insertions(+)
 create mode 100644 kernel/trace/trace_oneshot.c

diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index f60079259669..ee56d9f9b246 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -759,6 +759,13 @@ Here is the list of current tracers that may be configured.
 	unlikely branch is hit and if it was correct in its prediction
 	of being correct.
 
+  "oneshot"
+
+	Traces every kernel function and originating address exactly
+	once. For kernel modules the offset together with the module
+	name is printed. As a result, this tracer can be used to
+	efficiently create kernel function coverage/usage reports.
+
   "nop"
 
 	This is the "trace nothing" tracer. To remove all
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 5d965cef6c77..3b5c2650763a 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -279,6 +279,74 @@ config HWLAT_TRACER
 	 file. Every time a latency is greater than tracing_thresh, it will
 	 be recorded into the ring buffer.
 
+menuconfig ONESHOT_TRACER
+	bool "Oneshot Function Tracer"
+	default n
+	depends on HAVE_FUNCTION_TRACER
+	select GENERIC_TRACER
+	help
+	  This tracer records every function call (and its caller) exactly once
+	  per cpu. It uses a separate hashtable for each cpu core to keep track
+	  of already recorded functions.
+
+	  Very useful for efficiently creating kernel function coverage/usage
+	  reports. Can also be used for mostly automated kernel-tailoring in
+	  conjunction with the undertaker toolchain as this tracer produces
+	  significantly less output in comparison to the normal function
+	  tracer.
+
+	  If unsure, say N.
+
+if ONESHOT_TRACER
+
+config ONESHOT_HASHTABLE_DYNAMIC_ALLOC
+	bool "Dynamic Hashtable Allocation"
+	default y
+	help
+	  When this is enabled (default) the oneshot tracer will try to allocate
+	  memory for one hashtable per cpu. This method should always work but
+	  might not be the most efficient way as vmalloc only allocates a
+	  contiguous memory region in the virtual address space instead of the
+	  physical one.
+
+	  When this is disabled the oneshot tracer will use static allocation to
+	  allocate memory for NR_CPUS hashtables. Keep in mind that this will
+	  drastically increase the size of the compiled kernel and may even exceed
+	  the kernel size restrictions, thus failing the build. If that happens you
+	  may decrease NR_CPUS to a more fitting value as it is not possible to
+	  detect the exact number of CPU cores beforehand.
+
+	  If unsure, say Y.
+
+config ONESHOT_HASHTABLE_BUCKET_COUNT
+	int "Hashtable bucket count"
+	default 24
+	help
+	  Sets the number of hash bits to be reserved for every cpu core.
+
+	  Be aware that this value is an exponent: the bucket count is 2 raised
+	  to this value, so increasing it results in much higher memory usage.
+
+	  If unsure, keep the default.
+
+config ONESHOT_HASHTABLE_ELEMENT_COUNT
+	int "Hashtable element count"
+	default 500000
+	help
+	  Sets the hashtable element count to be reserved for every cpu core.
+
+	  Depending on how many kernel features you have selected it might be
+	  useful to increase this number to be able to memorize more already
+	  visited functions and thus decrease the generated output.
+
+	  Be aware that this number causes a large amount of memory to be
+	  reserved for the hashtables, so increasing it will result in a
+	  higher memory usage.
+
+	  If unsure, keep the default.
+
+endif # ONESHOT_TRACER
+
 config ENABLE_DEFAULT_TRACERS
 	bool "Trace process context switches and events"
 	depends on !GENERIC_TRACER
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index c2b2148bb1d2..25b66b759bd8 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -51,6 +51,7 @@ obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
 obj-$(CONFIG_PREEMPT_TRACER) += trace_irqsoff.o
 obj-$(CONFIG_SCHED_TRACER) += trace_sched_wakeup.o
 obj-$(CONFIG_HWLAT_TRACER) += trace_hwlat.o
+obj-$(CONFIG_ONESHOT_TRACER) += trace_oneshot.o
 obj-$(CONFIG_NOP_TRACER) += trace_nop.o
 obj-$(CONFIG_STACK_TRACER) += trace_stack.o
 obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 005f08629b8b..e1e1d28a2914 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -40,6 +40,7 @@ enum trace_type {
 	TRACE_BLK,
 	TRACE_BPUTS,
 	TRACE_HWLAT,
+	TRACE_ONESHOT,
 	TRACE_RAW_DATA,
 
 	__TRACE_LAST_TYPE,
@@ -398,6 +399,7 @@ extern void __ftrace_bad_type(void);
 		IF_ASSIGN(var, ent, struct bprint_entry, TRACE_BPRINT);	\
 		IF_ASSIGN(var, ent, struct bputs_entry, TRACE_BPUTS);	\
 		IF_ASSIGN(var, ent, struct hwlat_entry, TRACE_HWLAT);	\
+		IF_ASSIGN(var, ent, struct oneshot_entry, TRACE_ONESHOT);\
 		IF_ASSIGN(var, ent, struct raw_data_entry, TRACE_RAW_DATA);\
 		IF_ASSIGN(var, ent, struct trace_mmiotrace_rw,		\
 			  TRACE_MMIO_RW);				\
@@ -828,6 +830,8 @@ extern int trace_selftest_startup_preemptirqsoff(struct tracer *trace,
 						 struct trace_array *tr);
 extern int trace_selftest_startup_wakeup(struct tracer *trace,
 					 struct trace_array *tr);
+extern int trace_selftest_startup_oneshot(struct tracer *trace,
+					  struct trace_array *tr);
 extern int trace_selftest_startup_nop(struct tracer *trace,
 					 struct trace_array *tr);
 extern int trace_selftest_startup_branch(struct tracer *trace,
diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
index fc8e97328e54..fbf3c813721f 100644
--- a/kernel/trace/trace_entries.h
+++ b/kernel/trace/trace_entries.h
@@ -366,3 +366,16 @@ FTRACE_ENTRY(hwlat, hwlat_entry,
 
 	FILTER_OTHER
 );
+
+FTRACE_ENTRY(oneshot, oneshot_entry,
+
+	TRACE_ONESHOT,
+
+	F_STRUCT(
+		__field(	unsigned long,	ip	)
+	),
+
+	F_printk("%lx\n", __entry->ip),
+
+	FILTER_OTHER
+);
diff --git a/kernel/trace/trace_oneshot.c b/kernel/trace/trace_oneshot.c
new file mode 100644
index 000000000000..931925aff20b
--- /dev/null
+++ b/kernel/trace/trace_oneshot.c
@@ -0,0 +1,220 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * oneshot tracer
+ *
+ * Copyright (C) 2019 Thomas Preisner <linux@tpreisner.de>
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+#include <linux/hashtable.h>
+#include <linux/percpu.h>
+
+#include "trace.h"
+#include "trace_output.h"
+
+#ifdef CONFIG_ONESHOT_TRACER
+
+static struct trace_array	*oneshot_trace;
+
+struct ip_entry {
+	unsigned long address;
+	struct hlist_node next;
+};
+
+struct oneshot_hashtable {
+	DECLARE_HASHTABLE(functions, CONFIG_ONESHOT_HASHTABLE_BUCKET_COUNT);
+	int size;
+	struct ip_entry elements[CONFIG_ONESHOT_HASHTABLE_ELEMENT_COUNT];
+};
+
+static DEFINE_PER_CPU(struct oneshot_hashtable *, visited);
+#ifndef CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC
+static struct oneshot_hashtable visited_functions[NR_CPUS];
+#endif /* CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC */
+
+
+/*
+ * returns true if value has been inserted or if hashtable is full
+ */
+static inline bool
+oneshot_lookup_and_insert(struct oneshot_hashtable *curr_visited,
+			  unsigned long address)
+{
+	struct ip_entry *entry;
+
+	hash_for_each_possible(curr_visited->functions, entry, next, address) {
+		if (entry->address == address)
+			return false;
+	}
+
+	if (curr_visited->size >= CONFIG_ONESHOT_HASHTABLE_ELEMENT_COUNT)
+		return true;
+
+	entry = &curr_visited->elements[curr_visited->size++];
+	entry->address = address;
+
+	hash_add(curr_visited->functions, &entry->next, address);
+
+	return true;
+}
+
+static void trace_oneshot(struct trace_array *tr, unsigned long ip)
+{
+	struct trace_event_call *call = &event_oneshot;
+	struct ring_buffer *buffer = tr->trace_buffer.buffer;
+	struct ring_buffer_event *event;
+	struct oneshot_entry *entry;
+
+	event = trace_buffer_lock_reserve(buffer, TRACE_ONESHOT, sizeof(*entry),
+					  0, 0);
+	if (!event)
+		return;
+
+	entry = ring_buffer_event_data(event);
+	entry->ip = ip;
+
+	if (!call_filter_check_discard(call, entry, buffer, event))
+		trace_buffer_unlock_commit_nostack(buffer, event);
+}
+
+static void
+oneshot_tracer_call(unsigned long ip, unsigned long parent_ip,
+		    struct ftrace_ops *op, struct pt_regs *pt_regs)
+{
+	struct trace_array *tr = op->private;
+	struct oneshot_hashtable *curr_visited;
+
+	if (unlikely(!tr->function_enabled))
+		return;
+
+	preempt_disable_notrace();
+	curr_visited = this_cpu_read(visited);
+
+	if (oneshot_lookup_and_insert(curr_visited, ip))
+		trace_oneshot(oneshot_trace, ip);
+
+	if (oneshot_lookup_and_insert(curr_visited, parent_ip))
+		trace_oneshot(oneshot_trace, parent_ip);
+
+	preempt_enable_notrace();
+}
+
+static int start_oneshot_tracer(struct trace_array *tr)
+{
+	int ret;
+
+	if (unlikely(tr->function_enabled))
+		return 0;
+
+	ret = register_ftrace_function(tr->ops);
+	if (!ret)
+		tr->function_enabled = 1;
+
+	return ret;
+}
+
+static void stop_oneshot_tracer(struct trace_array *tr)
+{
+	if (unlikely(!tr->function_enabled))
+		return;
+
+	unregister_ftrace_function(tr->ops);
+	tr->function_enabled = 0;
+}
+
+static int oneshot_trace_init(struct trace_array *tr)
+{
+	int cpu;
+
+	oneshot_trace = tr;
+
+	for_each_possible_cpu(cpu) {
+#ifdef CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC
+		struct oneshot_hashtable *tmp;
+
+		tmp = vmalloc(sizeof(struct oneshot_hashtable));
+		if (!tmp)
+			return -ENOMEM;
+
+		per_cpu(visited, cpu) = tmp;
+#else
+		per_cpu(visited, cpu) = &visited_functions[cpu];
+#endif /* CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC */
+
+		per_cpu(visited, cpu)->size = 0;
+		hash_init(per_cpu(visited, cpu)->functions);
+	}
+
+	ftrace_init_array_ops(tr, oneshot_tracer_call);
+
+	start_oneshot_tracer(tr);
+	return 0;
+}
+
+static void oneshot_trace_reset(struct trace_array *tr)
+{
+	int cpu;
+
+	stop_oneshot_tracer(tr);
+	ftrace_reset_array_ops(tr);
+
+	for_each_possible_cpu(cpu) {
+		vfree(per_cpu(visited, cpu));
+	}
+}
+
+static void oneshot_print_header(struct seq_file *s)
+{
+	/* do not print anything */
+}
+
+static enum print_line_t oneshot_print_line(struct trace_iterator *iter)
+{
+	struct trace_seq *s = &iter->seq;
+	struct trace_entry *entry = iter->ent;
+	struct oneshot_entry *field;
+	struct module *mod;
+
+	trace_assign_type(field, entry);
+
+	mod = __module_address(field->ip);
+	if (mod) {
+		unsigned long addr;
+
+		addr = field->ip - (unsigned long) mod->core_layout.base;
+		trace_seq_printf(s, "%lx %s\n", addr, mod->name);
+	} else {
+		trace_seq_printf(s, "%lx\n", field->ip);
+	}
+
+	return trace_handle_return(s);
+}
+
+struct tracer oneshot_tracer __read_mostly = {
+	.name		= "oneshot",
+	.init		= oneshot_trace_init,
+	.reset		= oneshot_trace_reset,
+	.print_header	= oneshot_print_header,
+	.print_line	= oneshot_print_line,
+#ifdef CONFIG_FTRACE_SELFTEST
+	.selftest	= trace_selftest_startup_oneshot,
+#endif
+	.allow_instances = true,
+};
+
+
+static int __init init_oneshot_tracer(void)
+{
+	int ret;
+
+	ret = register_tracer(&oneshot_tracer);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+core_initcall(init_oneshot_tracer);
+#endif /* CONFIG_ONESHOT_TRACER */
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index 69ee8ef12cee..95449ecfaca7 100644
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -1028,6 +1028,44 @@ trace_selftest_startup_preemptirqsoff(struct tracer *trace, struct trace_array *
 }
 #endif /* CONFIG_IRQSOFF_TRACER && CONFIG_PREEMPT_TRACER */
 
+#ifdef CONFIG_ONESHOT_TRACER
+__init int
+trace_selftest_startup_oneshot(struct tracer *trace, struct trace_array *tr)
+{
+	unsigned long count;
+	int ret;
+
+	/* make sure msleep has been recorded */
+	msleep(1);
+
+	/* start the tracing */
+	ret = tracer_init(trace, tr);
+	if (ret) {
+		warn_failed_init_tracer(trace, ret);
+		return ret;
+	}
+
+	/* Sleep for a 1/10 of a second */
+	msleep(100);
+
+	/* stop the tracing. */
+	tracing_stop();
+
+	/* check the trace buffer */
+	ret = trace_test_buffer(&tr->trace_buffer, &count);
+
+	trace->reset(tr);
+	tracing_start();
+
+	if (!ret && !count) {
+		printk(KERN_CONT ".. no entries found ..");
+		ret = -1;
+	}
+
+	return ret;
+}
+#endif /* CONFIG_ONESHOT_TRACER */
+
 #ifdef CONFIG_NOP_TRACER
 int
 trace_selftest_startup_nop(struct tracer *trace, struct trace_array *tr)
-- 
2.19.1
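
To put the Kconfig defaults above in perspective (24 hash bits, 500000
elements per cpu), here is a rough shell-arithmetic estimate of the per-cpu
footprint of struct oneshot_hashtable. The 8-byte hlist_head and 24-byte
ip_entry sizes are assumptions for a typical 64-bit build:

```shell
# Rough per-cpu memory estimate for the patch's hashtable defaults
buckets=$((1 << 24))               # DECLARE_HASHTABLE takes bits: 2^24 buckets
bucket_bytes=$((buckets * 8))      # one hlist_head pointer per bucket (assumed 8 B)
elem_bytes=$((500000 * 24))        # ip_entry: unsigned long (8 B) + hlist_node (16 B)
echo "approx. per-cpu bytes: $((bucket_bytes + elem_bytes))"
# → approx. per-cpu bytes: 146217728
```

That is on the order of 140 MiB per cpu, which motivates both the vmalloc
default and the warning that static allocation can blow up the kernel image.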


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] ftrace: add simple oneshot function tracer
  2019-05-29  9:31 [PATCH] ftrace: add simple oneshot function tracer Thomas Preisner
@ 2019-05-29 14:45 ` Steven Rostedt
  2019-06-11 20:33   ` [PATCH v2] ftrace: add simple oneshot function profiler Thomas Preisner
                     ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Steven Rostedt @ 2019-05-29 14:45 UTC (permalink / raw)
  To: Thomas Preisner; +Cc: Ingo Molnar, Jonathan Corbet, linux-doc, linux-kernel

On Wed, 29 May 2019 11:31:23 +0200
Thomas Preisner <linux@tpreisner.de> wrote:

> The "oneshot" tracer records every address (ip, parent_ip) exactly once.
> As a result, "oneshot" can be used to efficiently create kernel function
> coverage/usage reports such as in undertaker-tailor[0].
> 
> In order to provide this functionality, "oneshot" uses a
> configurable hashset for blacklisting already recorded addresses. This
> way, no user space application is required to parse the function
> tracer's output and to deactivate functions after they have been
> recorded once. Additionally, the tracer's output is reduced to a bare
> minimum so that it can be passed directly to undertaker-tailor.
> 
> Further information regarding this oneshot function tracer can also be
> found at [1].
> 
> [0]: https://undertaker.cs.fau.de
> [1]: https://tpreisner.de/pub/ba-thesis.pdf
> 
> Signed-off-by: Thomas Preisner <linux@tpreisner.de>
>

Hi,

If you are only interested in seeing what functions are called (and
don't care about the order), why not just make another function
profiler (see register_ftrace_profiler and friends)? Then you could
just list the hash table entries instead of having to record into the
ftrace ring buffer.

-- Steve


* [PATCH v2] ftrace: add simple oneshot function profiler
  2019-05-29 14:45 ` Steven Rostedt
@ 2019-06-11 20:33   ` Thomas Preisner
       [not found]   ` <20190611203312.13653-1-linux@tpreisner.de>
  2019-06-12 21:29   ` Thomas Preisner
  2 siblings, 0 replies; 9+ messages in thread
From: Thomas Preisner @ 2019-06-11 20:33 UTC (permalink / raw)
  Cc: linux, Steven Rostedt, Ingo Molnar, linux-kernel

The "oneshot" profiler records every address (ip, parent_ip) exactly once.
As a result, "oneshot" can be used to efficiently create kernel function
coverage/usage reports such as in undertaker-tailor[0].

In order to provide this functionality, "oneshot" uses a configurable
hashset for blacklisting already recorded addresses. This way, no user
space application is required to parse the function profiler's output
and to deactivate functions after they have been recorded once.
Additionally, the profiler's output is reduced to a bare minimum so
that it can be passed directly to undertaker-tailor.

Further information regarding this oneshot function profiler can also
be found at [1].

[0]: https://i4gerrit.informatik.uni-erlangen.de/undertaker.git
[1]: https://tpreisner.de/pub/ba-thesis.pdf

Signed-off-by: Thomas Preisner <linux@tpreisner.de>
---
 kernel/trace/Kconfig         |  66 ++++++++++++++
 kernel/trace/Makefile        |   1 +
 kernel/trace/trace_oneshot.c | 165 +++++++++++++++++++++++++++++++++++
 3 files changed, 232 insertions(+)
 create mode 100644 kernel/trace/trace_oneshot.c

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 5d965cef6c77..91c03881c4c7 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -279,6 +279,72 @@ config HWLAT_TRACER
 	 file. Every time a latency is greater than tracing_thresh, it will
 	 be recorded into the ring buffer.
 
+menuconfig ONESHOT_PROFILER
+	bool "Oneshot Function Profiler"
+	default n
+	depends on HAVE_FUNCTION_TRACER
+	select GENERIC_TRACER
+	help
+	  This profiler records every function call (and its caller) exactly once
+	  per cpu core by using a separate hashtable for each cpu core to keep
+	  track of already recorded functions.
+
+	  Very useful for mostly automated kernel-tailoring in combination with the
+	  undertaker toolchain as this tracer produces significantly less output in
+	  comparison to the normal function tracer.
+
+	  If unsure, say N.
+
+if ONESHOT_PROFILER
+
+config ONESHOT_HASHTABLE_DYNAMIC_ALLOC
+	bool "Dynamic Hashtable Allocation"
+	default y
+	help
+	  When this is enabled (default) the oneshot profiler will try to allocate
+	  memory for one hashtable per cpu. This method should always work but
+	  might not be the most efficient way as vmalloc only allocates a
+	  contiguous memory region in the virtual address space instead of the
+	  physical one.
+
+	  When this is disabled the oneshot profiler will use static allocation to
+	  allocate memory for NR_CPUS hashtables. Keep in mind that this will
+	  drastically increase the size of the compiled kernel and may even exceed
+	  the kernel size restrictions, thus failing the build. If that happens you
+	  may decrease NR_CPUS to a more fitting value as it is not possible to
+	  detect the exact number of CPU cores beforehand.
+
+	  If unsure, say Y.
+
+config ONESHOT_HASHTABLE_BUCKET_COUNT
+	int "Hashtable bucket count"
+	default 24
+	help
+	  Sets the number of hash bits to be reserved for every cpu core.
+
+	  Be aware that this value is an exponent: the bucket count is 2 raised
+	  to this value, so increasing it results in much higher memory usage.
+
+	  If unsure, keep the default.
+
+config ONESHOT_HASHTABLE_ELEMENT_COUNT
+	int "Hashtable element count"
+	default 500000
+	help
+	  Sets the hashtable element count to be reserved for every cpu core.
+
+	  Depending on how many kernel features you have selected it might be
+	  useful to increase this number to be able to memorize more already
+	  visited functions and thus decrease the generated output.
+
+	  Be aware that this number causes a large amount of memory to be
+	  reserved for the hashtables, so increasing it will result in a
+	  higher memory usage.
+
+	  If unsure, keep the default.
+
+endif # ONESHOT_PROFILER
+
 config ENABLE_DEFAULT_TRACERS
 	bool "Trace process context switches and events"
 	depends on !GENERIC_TRACER
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index c2b2148bb1d2..c3ef309bcbe1 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -56,6 +56,7 @@ obj-$(CONFIG_STACK_TRACER) += trace_stack.o
 obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o
 obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += trace_functions_graph.o
 obj-$(CONFIG_TRACE_BRANCH_PROFILING) += trace_branch.o
+obj-$(CONFIG_ONESHOT_PROFILER) += trace_oneshot.o
 obj-$(CONFIG_BLK_DEV_IO_TRACE) += blktrace.o
 obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += fgraph.o
 ifeq ($(CONFIG_BLOCK),y)
diff --git a/kernel/trace/trace_oneshot.c b/kernel/trace/trace_oneshot.c
new file mode 100644
index 000000000000..b533c1b05632
--- /dev/null
+++ b/kernel/trace/trace_oneshot.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * oneshot profiler
+ *
+ * Copyright (C) 2019 Thomas Preisner <linux@tpreisner.de>
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+#include <linux/hashtable.h>
+#include <linux/percpu.h>
+
+#include "trace.h"
+#include "trace_stat.h"
+
+#ifdef CONFIG_ONESHOT_PROFILER
+
+struct ip_entry {
+	unsigned long address;
+	struct hlist_node next;
+};
+
+struct oneshot_hashtable {
+	DECLARE_HASHTABLE(functions, CONFIG_ONESHOT_HASHTABLE_BUCKET_COUNT);
+	int size;
+	struct ip_entry elements[CONFIG_ONESHOT_HASHTABLE_ELEMENT_COUNT];
+};
+
+static DEFINE_PER_CPU(struct oneshot_hashtable *, visited);
+
+#ifndef CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC
+static struct oneshot_hashtable visited_functions[NR_CPUS];
+#endif /* CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC */
+
+static struct oneshot_hashtable output_hashset;
+
+
+static inline void
+oneshot_insert(struct oneshot_hashtable *curr_visited,
+			  unsigned long address)
+{
+	struct ip_entry *entry;
+
+	if (curr_visited->size >= CONFIG_ONESHOT_HASHTABLE_ELEMENT_COUNT)
+		return;
+
+	hash_for_each_possible(curr_visited->functions, entry, next, address) {
+		if (entry->address == address)
+			return;
+	}
+
+	entry = &curr_visited->elements[curr_visited->size++];
+	entry->address = address;
+
+	hash_add(curr_visited->functions, &entry->next, address);
+}
+
+static void
+oneshot_profile_call(unsigned long ip, unsigned long parent_ip,
+		    struct ftrace_ops *op, struct pt_regs *pt_regs)
+{
+	struct oneshot_hashtable *curr_visited;
+
+	preempt_disable_notrace();
+	curr_visited = this_cpu_read(visited);
+
+	oneshot_insert(curr_visited, ip);
+	oneshot_insert(curr_visited, parent_ip);
+
+	preempt_enable_notrace();
+}
+
+static void *oneshot_stat_start(struct tracer_stat *trace)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct oneshot_hashtable *tmp;
+		int size;
+		int i;
+
+		tmp = per_cpu(visited, cpu);
+		size = tmp->size;
+
+		for (i = 0; i < size; i++) {
+			oneshot_insert(&output_hashset,
+				       tmp->elements[i].address);
+		}
+	}
+
+	if (output_hashset.size <= 0)
+		return NULL;
+
+	return &output_hashset.elements[0].address;
+}
+
+static void *oneshot_stat_next(void *prev, int idx)
+{
+	if (output_hashset.size <= idx)
+		return NULL;
+
+	return &output_hashset.elements[idx].address;
+}
+
+static int oneshot_stat_show(struct seq_file *s, void *entry)
+{
+	unsigned long ip = *(unsigned long *)entry;
+	struct module *mod;
+
+	mod = __module_address(ip);
+	if (mod) {
+		unsigned long addr;
+
+		addr = ip - (unsigned long) mod->core_layout.base;
+		seq_printf(s, "%lx %s\n", addr, mod->name);
+	} else {
+		seq_printf(s, "%lx\n", ip);
+	}
+	return 0;
+}
+
+static struct tracer_stat oneshot_stats = {
+	.name = "oneshot",
+	.stat_start = oneshot_stat_start,
+	.stat_next = oneshot_stat_next,
+	.stat_show = oneshot_stat_show
+};
+
+static struct ftrace_ops oneshot_profile_ops __read_mostly = {
+	.func		= oneshot_profile_call,
+	.flags		= FTRACE_OPS_FL_RECURSION_SAFE,
+};
+
+static int init_oneshot_profile(void)
+{
+	int cpu;
+	int ret;
+
+	for_each_possible_cpu(cpu) {
+#ifdef CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC
+		struct oneshot_hashtable *tmp;
+
+		tmp = vmalloc(sizeof(struct oneshot_hashtable));
+		if (!tmp)
+			return -ENOMEM;
+
+		per_cpu(visited, cpu) = tmp;
+#else
+		per_cpu(visited, cpu) = &visited_functions[cpu];
+#endif /* CONFIG_ONESHOT_HASHTABLE_DYNAMIC_ALLOC */
+
+		per_cpu(visited, cpu)->size = 0;
+		hash_init(per_cpu(visited, cpu)->functions);
+	}
+
+	ret = register_ftrace_function(&oneshot_profile_ops);
+	if (ret)
+		return ret;
+
+	return register_stat_tracer(&oneshot_stats);
+}
+
+core_initcall(init_oneshot_profile);
+#endif /* CONFIG_ONESHOT_PROFILER */
-- 
2.19.1
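
With the v2 design, results would be read through the stat tracer interface
rather than the ftrace ring buffer. A sketch, assuming tracefs is mounted at
the usual location; the file name "oneshot" comes from oneshot_stats.name in
the patch above:

```shell
# Read the profiler's deduplicated address list via the stat interface
cat /sys/kernel/tracing/trace_stat/oneshot
# each line is "<hex offset> <module name>" for module code,
# or a bare "<hex address>" for built-in kernel code
```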



* Re: [PATCH] ftrace: add simple oneshot function tracer
       [not found]   ` <20190611203312.13653-1-linux@tpreisner.de>
@ 2019-06-11 21:52     ` Steven Rostedt
  0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2019-06-11 21:52 UTC (permalink / raw)
  To: Thomas Preisner; +Cc: Ingo Molnar, linux-kernel

On Tue, 11 Jun 2019 22:33:11 +0200
Thomas Preisner <linux@tpreisner.de> wrote:

> However, due to there not being any mechanism (that I am aware of) to
> activate such stat tracers via kernel commandline this oneshot profiler
> is now always active when selected. Therefore, it is no longer possible
> to disable this tracer during runtime and thus, allocated memory is no
> longer freed.

What do you mean? The function profile has its own file to enable it:

 echo 1 > /sys/kernel/tracing/function_profile_enabled

And disable it:

 echo 0 > /sys/kernel/tracing/function_profile_enabled

-- Steve



* Re: Re: [PATCH] ftrace: add simple oneshot function tracer
  2019-05-29 14:45 ` Steven Rostedt
  2019-06-11 20:33   ` [PATCH v2] ftrace: add simple oneshot function profiler Thomas Preisner
       [not found]   ` <20190611203312.13653-1-linux@tpreisner.de>
@ 2019-06-12 21:29   ` Thomas Preisner
  2019-06-18  0:16     ` Steven Rostedt
  2 siblings, 1 reply; 9+ messages in thread
From: Thomas Preisner @ 2019-06-12 21:29 UTC (permalink / raw)
  To: asdf; +Cc: linux, Steven Rostedt, Ingo Molnar, linux-kernel

On Tue, 11 Jun 2019 17:52:37 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> What do you mean? The function profile has its own file to enable it:
> 
>  echo 1 > /sys/kernel/tracing/function_profile_enabled
>  
>  And disable it:
>  
>   echo 0 > /sys/kernel/tracing/function_profile_enabled
>   
>   -- Steve

Yes, I am aware of the function profiler providing a file operation for
enabling and disabling itself. However, my oneshot profiler as of [PATCH
v2] is a separate tracer/profiler without this file operation.

As this oneshot profiler is intended to be used for coverage/usage
reports I want it to be able to record functions as soon as possible
during bootup. Therefore, I just permanently activated the oneshot
profiler since as of now there is no means to activate it or the
function profiler via kernel commandline just like the normal tracers.
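
For reference, the existing command-line mechanism referred to here: regular
tracers can already be started at boot via the ftrace= kernel parameter; a
"oneshot" value is hypothetical and would only exist if the v1 tracer
variant from this thread were merged:

```shell
# Existing: start the function tracer from the kernel command line
#   linux /vmlinuz ... ftrace=function
# Hypothetical equivalent for the v1 tracer posted in this thread:
#   linux /vmlinuz ... ftrace=oneshot
```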

Still, if you want I can add a file operation for
enabling/disabling this new profiler, together with a new kernel
command-line argument for it.

Or what would be your preferred way?

Greetings,
Thomas Preisner



* Re: [PATCH] ftrace: add simple oneshot function tracer
  2019-06-12 21:29   ` Thomas Preisner
@ 2019-06-18  0:16     ` Steven Rostedt
  2019-06-23 12:05       ` Thomas Preisner
  0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2019-06-18  0:16 UTC (permalink / raw)
  To: Thomas Preisner; +Cc: asdf, Ingo Molnar, linux-kernel

On Wed, 12 Jun 2019 23:29:35 +0200
Thomas Preisner <linux@tpreisner.de> wrote:

Hi Thomas,

BTW, what email client do you use, because your replies seem to confuse
my email client (claws-mail) and it doesn't thread them at all.
Although they do look fine on mutt (when I view my LKML folder). Looks
like it doesn't create a "References:" header.

> On Tue, 11 Jun 2019 17:52:37 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> > What do you mean? The function profile has its own file to enable it:
> > 
> >  echo 1 > /sys/kernel/tracing/function_profile_enabled
> >  
> >  And disable it:
> >  
> >   echo 0 > /sys/kernel/tracing/function_profile_enabled
> >   
> >   -- Steve  
> 
> Yes, I am aware of the function profiler providing a file operation for
> enabling and disabling itself. However, my oneshot profiler as of [PATCH
> v2] is a separate tracer/profiler without this file operation.
> 
> As this oneshot profiler is intended to be used for coverage/usage
> reports I want it to be able to record functions as soon as possible
> during bootup. Therefore, I just permanently activated the oneshot
> profiler since as of now there is no means to activate it or the
> function profiler via kernel commandline just like the normal tracers.
> 
> Still, if you want to I can add the file operation for
> enabling/disabling this new profiler together with a new kernel
> commandline argument for this profiler?
> 
> Or what would be your preferred way?
> 

Hmm, I guess I still need to think about exactly what this is for.
Perhaps we could add a "oneshot" option to the function tracer, and
when set it will only trace a function once? Is there a strong reason
to add a new event type "oneshot_entry"? It may be useful to record the
parent of the function that triggered the first instance as well.

I'm still trying to get a grip around exactly what use cases this would
be good for. Especially when adding new functionality like this.

-- Steve
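
For concreteness, the tracefs pattern the suggestion above would build on:
function tracer behaviour is toggled through the options directory, so a
hypothetical "oneshot" option (not implemented anywhere at this point) would
look like the existing ones:

```shell
# Existing pattern: enable the function tracer and toggle one of its options
#   echo function > /sys/kernel/tracing/current_tracer
#   echo 1 > /sys/kernel/tracing/options/func_stack_trace
# Hypothetical, if the suggested "oneshot" option existed:
#   echo 1 > /sys/kernel/tracing/options/oneshot
```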



* Re: [PATCH] ftrace: add simple oneshot function tracer
  2019-06-18  0:16     ` Steven Rostedt
@ 2019-06-23 12:05       ` Thomas Preisner
  2019-06-26 16:04         ` Steven Rostedt
  0 siblings, 1 reply; 9+ messages in thread
From: Thomas Preisner @ 2019-06-23 12:05 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Thomas Preisner, Ingo Molnar, linux-kernel

On Mon, Jun 17, 2019 at 08:16:27PM -0400, Steven Rostedt wrote:
> On Wed, 12 Jun 2019 23:29:35 +0200
> Thomas Preisner <linux@tpreisner.de> wrote:
> 
> Hi Thomas,
> 
> BTW, what email client do you use, because your replies seem to confuse
> my email client (claws-mail) and it doesn't thread them at all.
> Although they do look fine on mutt (when I view my LKML folder). Looks
> like it doesn't create a "References:" header.
> 
> > On Tue, 11 Jun 2019 17:52:37 -0400
> > Steven Rostedt <rostedt@goodmis.org> wrote:
> > 
> > > What do you mean? The function profile has its own file to enable it:
> > > 
> > >  echo 1 > /sys/kernel/tracing/function_profile_enabled
> > >  
> > >  And disable it:
> > >  
> > >   echo 0 > /sys/kernel/tracing/function_profile_enabled
> > >   
> > >   -- Steve  
> > 
> > Yes, I am aware of the function profiler providing a file operation for
> > enabling and disabling itself. However, my oneshot profiler as of [PATCH
> > v2] is a separate tracer/profiler without this file operation.
> > 
> > As this oneshot profiler is intended to be used for coverage/usage
> > reports I want it to be able to record functions as soon as possible
> > during bootup. Therefore, I just permanently activated the oneshot
> > profiler since as of now there is no means to activate it or the
> > function profiler via kernel commandline just like the normal tracers.
> > 
> > Still, if you want to I can add the file operation for
> > enabling/disabling this new profiler together with a new kernel
> > commandline argument for this profiler?
> > 
> > Or what would be your preferred way?
> > 
> 
> Hmm, I guess I still need to think about exactly what this is for.
> Perhaps we could add a "oneshot" option to the function tracer, and
> when set it will only trace a function once? Is there a strong reason
> to add a new event type "oneshot_entry"? It may be useful to record the
> parent of the function that triggered the first instance as well.
> 
> I'm still trying to get a grip around exactly what use cases this would
> be good for. Especially when adding new functionality like this.
> 
> -- Steve
> 

I've created this tracer with kernel tailoring in mind since the
tailoring process of e.g. undertaker heavily benefits from a more
precise set of input data.

A "oneshot" option for the function tracer would be a viable
possibility. However, it might add considerably more overhead
(performance-wise) than my current approach. After all, the use case of
my tracer is kernel activity monitoring during "normal usage" in order
to get a grasp of (hopefully) all required kernel functions.

Also, there is no strong reason to add a new event type; it was just a
means of reducing the collected data (and may as well be omitted since
there is no real benefit).

My "oneshot tracer" actually collects and outputs every parent in order
to get a more thorough view of used kernel code. Therefore, I would
suggest keeping this functionality and maybe making it configurable
instead.

Yours sincerely,
Thomas

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] ftrace: add simple oneshot function tracer
  2019-06-23 12:05       ` Thomas Preisner
@ 2019-06-26 16:04         ` Steven Rostedt
  2019-07-09 13:53           ` Thomas Preisner
  0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2019-06-26 16:04 UTC (permalink / raw)
  To: Thomas Preisner; +Cc: Ingo Molnar, linux-kernel

On Sun, 23 Jun 2019 14:05:55 +0200
Thomas Preisner <linux@tpreisner.de> wrote:


> I've created this tracer with kernel tailoring in mind since the
> tailoring process of e.g. undertaker heavily benefits from a more
> precise set of input data.
> 
> A "oneshot" option for the function tracer would be a viable
> possibility. However, it might add considerably more overhead
> (performance-wise) than my current approach. After all, the use case of
> my tracer is kernel activity monitoring during "normal usage" in order
> to get a grasp of (hopefully) all required kernel functions.

Coming back from vacation and not having this threaded in my inbox,
I have to ask (to help cache this back into my head), what was the
"current approach" compared to the "oneshot" option, and why would it
have better performance?

> 
> Also, there is no strong reason to add a new event type; it was just a
> means of reducing the collected data (and may as well be omitted since
> there is no real benefit).

+1

> 
> My "oneshot tracer" actually collects and outputs every parent in order
> to get a more thorough view of used kernel code. Therefore, I would
> suggest keeping this functionality and maybe making it configurable
> instead.

Configure which? (again, coming back from vacation, I need a refresher
on this ;-)

-- Steve


* Re: [PATCH] ftrace: add simple oneshot function tracer
  2019-06-26 16:04         ` Steven Rostedt
@ 2019-07-09 13:53           ` Thomas Preisner
  0 siblings, 0 replies; 9+ messages in thread
From: Thomas Preisner @ 2019-07-09 13:53 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Thomas Preisner, Ingo Molnar, linux-kernel

On Wed, Jun 26, 2019 at 12:04:12PM -0400, Steven Rostedt wrote:
> On Sun, 23 Jun 2019 14:05:55 +0200
> Thomas Preisner <linux@tpreisner.de> wrote:
> > I've created this tracer with kernel tailoring in mind since the
> > tailoring process of e.g. undertaker heavily benefits from a more
> > precise set of input data.
> > 
> > A "oneshot" option for the function tracer would be a viable
> > possibility. However, it might add considerably more overhead
> > (performance-wise) than my current approach. After all, the use case of
> > my tracer is kernel activity monitoring during "normal usage" in order
> > to get a grasp of (hopefully) all required kernel functions.
> 
> Coming back from vacation and not having this threaded in my inbox,
> I have to ask (to help cache this back into my head), what was the
> "current approach" compared to the "oneshot" option, and why would it
> have better performance?

The current approach makes use of ftrace's profiling capabilities in
conjunction with a hashtable and preallocated memory for its entries.
When active, the oneshot profiler only performs lookups for ip and
parent_ip and inserts them when necessary. Compared to a "oneshot"
option on the function tracer, this allows omitting values that are not
required for kernel profiling, such as interrupt flags. However, I am
not sure how big the performance impact of this is.

Nonetheless, the profiling variant allows removing duplicate entries
(there is one hashset per CPU core) before outputting the gathered
data. Additionally, it is independent of the ring buffer, which may
overflow when other tracers are active (however, I am not sure whether
or how different tracers are isolated from each other when using the
ring buffer, so this may well be wrong).

> > 
> > Also, there is no strong reason to add a new event type; it was just a
> > means of reducing the collected data (and may as well be omitted since
> > there is no real benefit).
> 
> +1
> 
> > 
> > My "oneshot tracer" actually collects and outputs every parent in order
> > to get a more thorough view of used kernel code. Therefore, I would
> > suggest keeping this functionality and maybe making it configurable
> > instead.
> 
> Configure which? (again, coming back from vacation, I need a refresher
> on this ;-)

If you want to incorporate this oneshot functionality directly into the
function tracer as a new option, it may be useful to make the tracer's
oneshot behavior configurable. This way, one could either disable
tracing of a function after its first occurrence, or keep tracing
enabled in order to get a better overview of where it was called from
(recording the parent_ip is also interesting here).

Yours sincerely,
Thomas


end of thread, other threads:[~2019-07-09 13:53 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-29  9:31 [PATCH] ftrace: add simple oneshot function tracer Thomas Preisner
2019-05-29 14:45 ` Steven Rostedt
2019-06-11 20:33   ` [PATCH v2] ftrace: add simple oneshot function profiler Thomas Preisner
     [not found]   ` <20190611203312.13653-1-linux@tpreisner.de>
2019-06-11 21:52     ` [PATCH] ftrace: add simple oneshot function tracer Steven Rostedt
2019-06-12 21:29   ` Thomas Preisner
2019-06-18  0:16     ` Steven Rostedt
2019-06-23 12:05       ` Thomas Preisner
2019-06-26 16:04         ` Steven Rostedt
2019-07-09 13:53           ` Thomas Preisner
